Nango | Staff backend engineer | Remote | Full time | $120-200k + equity
Nango (YC W23) is an open-source developer infrastructure for product integrations.
We power the integrations of Replit, Semgrep, Motion, Vapi, Exa, and hundreds of other AI companies.
Small but highly experienced team, hard technical challenges (scale + infra devtool running untrusted user code), tons of ownership, direct customer contact, growing very fast.
We are looking for a Staff engineer who can touch every part of the stack and is passionate about dev infrastructure.
A lot of teams use us for their Gmail & Google calendar integrations.
If you want to run complex queries across large parts of the data, syncing + indexing on your side will be necessary. Limits on filters, pagination & rate limits make it infeasible to search across most of a user's inbox without tens of seconds to minutes of latency.
But before you sync all the data, I would test if your users actually need to run such queries.
Both Gmail & Google Calendar have a query endpoint that searches across many fields. I would start with a simple tool for your agent to run queries on that, and expand from there if necessary.
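The "simple tool on the query endpoint" idea can be sketched in a few lines. The sketch below is illustrative, not a production integration: the tool name and schema are made up, but the endpoint URL and the `q`/`maxResults` parameters are Gmail's real `users.messages.list` API, where `q` accepts the same search operators as the Gmail web UI.

```python
# Sketch: expose Gmail's search endpoint as a single agent tool
# instead of syncing the whole mailbox. Names here are illustrative.

GMAIL_SEARCH_URL = "https://gmail.googleapis.com/gmail/v1/users/me/messages"

def build_search_request(query: str, max_results: int = 10) -> dict:
    """Build the REST request for Gmail's `q` search parameter.

    `q` supports operators like:
    "from:alice@example.com newer_than:7d has:attachment"
    """
    return {
        "url": GMAIL_SEARCH_URL,
        "params": {"q": query, "maxResults": max_results},
    }

# A minimal tool description an agent framework could consume
# (JSON-schema style), so the model only has to emit a query string.
SEARCH_TOOL = {
    "name": "search_gmail",
    "description": "Search the user's mailbox with a Gmail query string.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "Gmail search operators, e.g. from:, newer_than:",
            },
        },
        "required": ["query"],
    },
}
```

The request would be sent with the user's OAuth access token; only if queries like this prove insufficient would syncing and indexing the mailbox be worth the added complexity.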
Both Nango and Composio could do this for you.
With Nango, you would also get syncs on the same platform, if it turns out you need them.
Thank you that is really helpful. Will check Nango out.
When teams integrate Gmail or other tools with Nango, what usually triggers them to start syncing data instead of just using the query endpoints?
Is there a specific type of query or user behavior that makes them realize they need to index and sync data? Just curious
In the end, MCP is just like REST APIs: there's no need for a paid service for me to connect to 400 REST APIs today, so why do I need a service to connect to 400 MCPs?
All I need for my users is to be able to connect to one or two really useful MCPs, which I can do myself. I don't need to pay for some multi REST API server or multi MCP server.
Agentic automation is almost always about operating multiple tools and doing something with them. So you invariably need to integrate with a bunch of APIs. Sure, you can write your own MCP and implement everything in it. Or you can save yourself the trouble and use the official one provided by the integrations you need.
My point is though, that you don't need some 3rd party service to integrate hundreds of MCPs, it doesn't make any sense at all.
An "agent" with access to 400 MCPs would perform terribly in real-world situations; have you ever tried it? Agents work best with a well-written, tuned system prompt and access to a few well-tuned tools for that particular agent's job.
There's a huge difference between a fun demo of an agent calling 20 different tools and an actual valuable use-case which works RELIABLY. In fact, agents with tons of MCP tools are currently insanely UNRELIABLE; it's much better to just have a model plus one or two tools combined with a strict input and output schema. Even then, it's pretty unreliable for actual business use-cases.
I think most folks right now have never actually tried making a reliable, valuable business feature using MCPs so they think somehow having "400 MCPs" is a good thing. But they haven't spent a few minutes thinking, "wait, why does our business need an agent which can connect to Youtube and Spotify?"
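The "strict input and output schema" approach the commenter describes can be made concrete with a small sketch. Everything here is hypothetical (the tool, field names, and return values are invented for illustration): the point is that the model's tool-call arguments are validated before execution, and the tool's result is validated before being handed back, instead of trusting free-form output.

```python
# Illustrative sketch of "one tool, strict schema": reject malformed
# tool calls up front rather than letting the agent improvise.

def validate(payload: dict, required: dict) -> dict:
    """Reject a payload unless every required field exists with the right type."""
    for field, ftype in required.items():
        if field not in payload:
            raise ValueError(f"missing field: {field}")
        if not isinstance(payload[field], ftype):
            raise ValueError(f"bad type for {field}: expected {ftype.__name__}")
    return payload

INPUT_SCHEMA = {"customer_id": str, "amount_cents": int}
OUTPUT_SCHEMA = {"invoice_id": str, "status": str}

def create_invoice(args: dict) -> dict:
    """Stand-in for the single, well-tuned tool this agent is allowed to call."""
    args = validate(args, INPUT_SCHEMA)
    # ... real business logic would go here; hardcoded result for the sketch.
    result = {"invoice_id": "inv_123", "status": "created"}
    return validate(result, OUTPUT_SCHEMA)
```

A malformed call (say, a missing `amount_cents`) fails loudly at the boundary, which is exactly the failure mode you want in a business workflow, rather than an agent silently doing something with partial data.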
There are quite a few competitors in this space trying to figure out the best way to go about this. I've recently been playing with the Jentic MCP server[0], which seems to do it quite cleanly and appears to be entirely free for regular usage.
I worked on a system that implemented OAuth against maybe a half-dozen APIs. None of them were the same. It's a series of rough guidelines, a pattern, not a spec.
The extra annoying part is that learning each auth is basically a single-use exercise. Sure, you get better from 0-5 but from 5-100 it's mostly just grumbling and then trying to forget whatever esoteric "standard" was implemented.
Source: I've done over 300 system connections. Save for the straight API keys, they're all special and unique snowflakes.
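A concrete taste of why OAuth "support" still means per-provider code: both sketches below implement the standard authorization-code token exchange, yet they differ in well-known ways. The GitHub endpoint and its Accept-header quirk are real; the second provider is a placeholder standing in for the many APIs that insist on HTTP Basic auth for client credentials and require `redirect_uri` to be repeated.

```python
# Two providers, both nominally "OAuth 2.0", two different token exchanges.
import base64

def github_token_request(code: str, client_id: str, client_secret: str) -> dict:
    # GitHub takes client credentials in the POST body, and returns
    # form-encoded text unless you send Accept: application/json.
    return {
        "url": "https://github.com/login/oauth/access_token",
        "headers": {"Accept": "application/json"},
        "data": {
            "client_id": client_id,
            "client_secret": client_secret,
            "code": code,
        },
    }

def basic_auth_token_request(
    code: str, client_id: str, client_secret: str, redirect_uri: str
) -> dict:
    # Many other providers only accept client credentials via HTTP Basic
    # auth, and reject the exchange if redirect_uri isn't repeated here.
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return {
        "url": "https://example.com/oauth/token",  # placeholder endpoint
        "headers": {"Authorization": f"Basic {creds}"},
        "data": {
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": redirect_uri,
        },
    }
```

Multiply those small deltas by token refresh, scope formats, and error shapes, and each new provider becomes its own single-use learning exercise.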
See also: EDI / EDIFACT / ANSI X12.
It was supposed to standardize the way businesses exchange data, but every company implements it differently, so integrations take forever, and there's an entire industry of middlemen offering solutions to make it faster.
I read the RFCs several years back and they did not feel clearly written, not like a protocol spec should be. Maybe it was just me. The reality is that each OAuth implementation is unique; almost no two are the same.
All the problems mentioned in the blog post are due to the providers not following what the spec clearly said.
If you have an example of where that's not the case, I would love to hear it, as I work in this area (perhaps you're thinking of how OAuth does not specify at all how authentication happens? But that was a deliberate choice: OAuth 1 did specify it, and it was too limiting. Also, OpenID Connect is pretty widely adopted now, and it fills that gap well).
What that tells me is that people who cannot read and understand a specification (or willingly ignore what the spec says) are implementing it anyway. I claim the spec is completely clear on all the points raised in the blog post. You can't just handwave that away without saying specifically which point was unclear.
Website: https://nango.dev Jobs: https://nango.dev/careers