rguldener's comments

Nango | Staff backend engineer | Remote | Full time | $120-200k + equity

Nango (YC W23) is open-source developer infrastructure for product integrations.

We power the integrations of Replit, Semgrep, Motion, Vapi, Exa, and hundreds of other AI companies.

Small but highly experienced team, hard technical challenges (scale + infra devtool running untrusted user code), tons of ownership, direct customer contact, growing very fast.

We are looking for a Staff engineer who can touch every part of the stack and is passionate about dev infrastructure.

Website: https://nango.dev Jobs: https://nango.dev/careers


Nango is an open-source alternative: https://nango.dev

Especially if you use Pipedream for integrations in your agent or product.

(I’m one of the founders)


Nango | Staff backend engineer | Remote | Full time | $120-200k + equity

Nango (YC W23) is an open-source product: developer infrastructure for product integrations.

We power the integrations of Semgrep, Motion, Vapi, Exa, and hundreds of other B2B software companies.

Small but highly experienced team, hard technical challenges (scale + infra devtool running untrusted user code), tons of ownership, direct customer contact, growing very fast.

We are looking for a Staff engineer who can touch every part of the stack and is passionate about dev infrastructure.

Website: https://nango.dev Jobs: https://nango.dev/careers


Founder of https://www.nango.dev here.

A lot of teams use us for their Gmail & Google Calendar integrations.

If you want to run complex queries across large parts of the data, syncing + indexing on your side will be necessary. Limits on filters, pagination & rate limits make it infeasible to search across most of a user's inbox without tens of seconds to minutes of latency.

But before you sync all the data, I would test if your users actually need to run such queries.

Both Gmail & Google Calendar have a query endpoint that searches across many fields. I would start with a simple tool for your agent to run queries on that, and expand from there if necessary.
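For illustration, here's a minimal sketch of such a tool in TypeScript, calling the Gmail REST endpoint directly; the function name and the token plumbing are mine, not part of any product:

    // Minimal agent tool: search a user's Gmail via the API's `q` parameter,
    // which accepts the same syntax as the Gmail search box.
    // Assumes you already hold a valid OAuth access token for the user.
    async function searchGmail(accessToken: string, query: string): Promise<string[]> {
      const url = new URL('https://gmail.googleapis.com/gmail/v1/users/me/messages');
      url.searchParams.set('q', query);
      url.searchParams.set('maxResults', '25');
      const res = await fetch(url, { headers: { Authorization: `Bearer ${accessToken}` } });
      if (!res.ok) throw new Error(`Gmail search failed: ${res.status}`);
      const body = await res.json();
      return (body.messages ?? []).map((m: { id: string }) => m.id);
    }

Google Calendar's events list endpoint accepts a similar free-text q parameter, so the same pattern carries over.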

Both Nango and Composio could do this for you.

With Nango, you would also get syncs on the same platform, if it turns out you need them.

Hope this helps!


Thank you, that is really helpful. Will check Nango out.

When teams integrate Gmail or other tools with Nango, what usually triggers them to start syncing data instead of just using the query endpoints? Is there a specific type of query or user behavior that makes them realize they need to index and sync data? Just curious


It varies a lot, which is why we always recommend starting with the feature requirements/user problem and working backwards from there.

Examples:

- Low latency when showing the last X emails a person exchanged with a specific email address

- Enriching data from the emails/calendar with other data from your product (e.g. mapping email recipients to contacts)

- Knowing when a calendar event has changed (sometimes also possible with webhooks)

- Detecting deletes (maybe also possible with webhooks, not sure for Gmail/Calendar); see the sketch below
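
On the last point: when webhooks don't cover deletes, the usual fallback is to periodically pull the full list of IDs and diff it against what you synced. A rough sketch (names illustrative):

    // Illustrative delete detection: anything we synced before that the
    // provider no longer returns was deleted upstream.
    function detectDeletes(syncedIds: Set<string>, remoteIds: Set<string>): string[] {
      const deleted: string[] = [];
      for (const id of syncedIds) {
        if (!remoteIds.has(id)) deleted.push(id);
      }
      return deleted;
    }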


Nango | Staff backend engineer | Remote | Full time | $120-200k + equity

Nango (YC W23) is an open-source product: developer infrastructure for product integrations. We power the integrations of Semgrep, Motion, Vapi, Exa, and hundreds of other B2B software companies.

Small but highly experienced team, hard technical challenges (scale + infra devtool running untrusted user code), tons of ownership, direct customer contact, growing very fast.

We are looking for a Staff engineer who can touch every part of the stack and is passionate about dev infrastructure.

Website: https://nango.dev Jobs: https://nango.dev/careers


Some candidates don't use LinkedIn, which you require in order to submit applications. Just some feedback.


The hover state on the "Open Positions" button is broken: https://www.nango.dev/careers


Thanks for the flag, should be fixed!


Open source tools for engineers to build integrations in their products: https://nango.dev


Agree, one MCP server per API doesn’t scale.

With something like https://nango.dev you can get a single server that covers 400+ APIs.

It also handles auth and observability, and offers other interfaces for direct tool calling.
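
To make the single-server point concrete, here's a sketch of an agent connecting once and discovering tools, assuming the official TypeScript MCP SDK; the URL is a placeholder, not an actual endpoint:

    import { Client } from '@modelcontextprotocol/sdk/client/index.js';
    import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

    // One gateway server in front of many APIs: connect once, then list
    // whatever tools it exposes instead of wiring up a server per API.
    const client = new Client({ name: 'my-agent', version: '1.0.0' });
    await client.connect(new StreamableHTTPClientTransport(new URL('https://example.com/mcp')));

    const { tools } = await client.listTools();
    console.log(tools.map((t) => t.name));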

(Full disclosure, I’m the founder)


Why do you even need to connect to 400 APIs?

In the end, MCP is just like REST APIs. There was no need for a paid service for me to connect to 400 REST APIs before, so why do I need a service to connect to 400 MCPs?

All I need for my users is to be able to connect to one or two really useful MCPs, which I can do myself. I don't need to pay for some multi-REST-API server or multi-MCP server.


Agentic automation is almost always about operating multiple tools and doing something with them. So you invariably need to integrate with a bunch of APIs. Sure, you can write your own MCP and implement everything in it. Or you can save yourself the trouble and use the official one provided by the integrations you need.


My point, though, is that you don't need some third-party service to integrate hundreds of MCPs; it doesn't make sense at all.

An "agent" with access to 400 MCPs would perform terribly in real world situations, have you ever tried it? Agents would best with a tuned well written system prompt and access to a few, well tuned tools for that particular agents job.

There's a huge difference between a fun demo of an agent calling 20 different tools and an actual valuable use case which works RELIABLY. In fact, agents with tons of MCP tools are currently insanely UNRELIABLE; it's much better to just have a model plus one or two tools combined with a strict input and output schema, and even then it's pretty unreliable for actual business use cases.

I think most folks right now have never actually tried making a reliable, valuable business feature using MCPs, so they think somehow having "400 MCPs" is a good thing. But they haven't spent a few minutes thinking, "wait, why does our business need an agent which can connect to YouTube and Spotify?"


People want to not think, and throw the kitchen sink at problems instead of working out what they actually need.


Nango is cool, but pricing is quite high at scale.


There are quite a few competitors in this space; I'm trying to figure out the best way to go about this. I've recently been playing with the Jentic MCP server [0], which seems to do it quite cleanly and appears to be entirely free for regular usage.

[0] https://jentic.com/


We offer volume discounts on all metrics.

Email me at robin @ <domain> and I'm happy to find a solution for your use case.


Looks pretty cool, thanks for sharing!


You could use Nango for the OAuth flow and then pass the user’s token to the MCP server: https://nango.dev/auth

Free for OAuth with 400+ APIs & can be self-hosted
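
A rough sketch of the server-side half with our Node SDK; the provider key and connection ID are made-up example values:

    import { Nango } from '@nangohq/node';

    // Fetch the user's current OAuth credentials server-side, then hand the
    // token to whatever MCP server needs it. Tokens are refreshed for you.
    const nango = new Nango({ secretKey: process.env.NANGO_SECRET_KEY! });
    const connection = await nango.getConnection('google-calendar', 'user-123');
    const accessToken = connection.credentials.access_token; // OAuth2 connections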

(I am one of the founders)


Nango | Remote | Hiring fullstack engineers & Entrepreneur in Residence

Nango is an open-source platform for product integrations with 300+ APIs.

B2B SaaS companies use us to connect their product with all the other SaaS tools their customers use. Learn more here: https://www.nango.dev

We are a late-seed stage, YC-backed startup and fully remote (US & EU).

Hiring for fullstack engineering roles (EST timezone) and an entrepreneur in residence (also EST based).

More details: https://www.nango.dev/jobs


A year ago we implemented OAuth for 100 popular APIs.

Our experience was exactly like OP describes: https://www.nango.dev/blog/why-is-oauth-still-hard


I worked on a system that implemented OAuth against maybe a half-dozen APIs. None of them were the same. It's a series of rough guidelines, a pattern, not a spec.


The extra annoying part is that learning each auth is basically a single-use exercise. Sure, you get better from 0-5 but from 5-100 it's mostly just grumbling and then trying to forget whatever esoteric "standard" was implemented.

Source: I've done over 300 system connections. Save for the straight API keys, they're all special and unique snowflakes.


Almost the exact same sentiment as yours, but from an earlier conversation, and it really stuck with me.

“… one of the principle issues is that it's less a protocol and more a skeleton of a protocol.”

https://news.ycombinator.com/item?id=35720336


See also: EDI / EDIFACT / ANSI X12. It was supposed to standardize the way businesses exchange data, but every company implements it differently, so integrations take forever, and there's an entire industry of middlemen offering solutions to make it faster.


The RFC reads very much like a spec and not like a rough guideline. What are you talking about when you say guideline?


I read the RFCs several years back and they did not feel clearly written, not like a protocol spec. Maybe it was just me. The reality is that each OAuth implementation is unique; almost no two are the same.
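
To make "unique" concrete: RFC 6749 says the token exchange is a form-encoded POST, but real clients sprout per-provider branches. A hypothetical illustration ('acme' is made up):

    // Hypothetical: 'acme' wants a JSON body and returns the token under a
    // non-standard key; everyone else gets the spec's form-encoded POST.
    async function exchangeCode(provider: string, tokenUrl: string, code: string,
                                clientId: string, clientSecret: string, redirectUri: string): Promise<string> {
      if (provider === 'acme') {
        const res = await fetch(tokenUrl, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ grant_type: 'authorization_code', code, client_id: clientId }),
        });
        return (await res.json()).token; // not `access_token`
      }
      // The RFC 6749 path: application/x-www-form-urlencoded.
      const res = await fetch(tokenUrl, {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: new URLSearchParams({
          grant_type: 'authorization_code',
          code,
          redirect_uri: redirectUri,
          client_id: clientId,
          client_secret: clientSecret,
        }),
      });
      return (await res.json()).access_token;
    }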


All the problems mentioned in the blog post are due to the providers not following what the spec clearly said.

If you have an example of where that's not the case, I would love to hear it, as I work in this area (perhaps you're thinking of how OAuth does not specify at all how authentication happens? But that was a good call: OAuth 1 did, and it was too limiting... also, OpenID Connect is pretty widely adopted now, and it fills that gap well).


"Clearly" is relative. If all these providers are having problems with the spec... what does that tell you?


What that tells me is that people who cannot read and understand a specification (or who willingly ignore what the spec says) are implementing it anyway. I claim the spec is completely clear on all the points raised in the blog post. You can't just handwave that away without specifically saying which point was unclear.


Just imagine if we had these problems with TCP!


It's how people follow something that determines what it is.


The RFC is not what people actually implement.

