Hacker News

Maybe I'm misunderstanding something, but that's about 2700 a second. Or about 3Mbps.
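As a back-of-envelope check (the message size is an assumption, since the comment doesn't state it; ~140 bytes is a typical SMS-sized payload):

```python
# Back-of-envelope check of "2700/s is about 3Mbps".
# bytes_per_message is an assumption -- ~140 bytes, a typical SMS-sized payload.
messages_per_second = 2700
bytes_per_message = 140  # assumed

bits_per_second = messages_per_second * bytes_per_message * 8
print(f"{bits_per_second / 1e6:.1f} Mbps")  # -> 3.0 Mbps
```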

Even a very unoptimized application running on a dev laptop can serve 1Gbps nowadays without issues.

So what are the constraints that demand a complex architecture?



I'm not the OP but a few things:

* Reading/fetching the data - usernames, phone number, message, etc.

* Generating the content for each message - it might be custom per person

* This uses a 3rd-party API that might take anywhere from 100 ms to 2 s to respond, and you need to hold a connection open for the duration.

* Retries on errors, rescheduling, backoffs

* At-least-once or at-most-once sends? Each has tradeoffs

* Stopping/starting that many messages at any time

* Rate limits on other services you use alongside your own (network gateway, database, etc.)

* Recordkeeping - did the message send? When?
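Several of the points above (slow third-party calls, retries with backoff, at-least-once semantics, recordkeeping) can be sketched together. This is a minimal sketch, not anyone's actual system: `send_message` is a hypothetical stand-in for the provider's API, and the worker count and retry parameters are illustrative assumptions.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

def send_message(recipient, body):
    """Stand-in for the third-party API call (hypothetical; the real
    provider, its latency, and its error types aren't specified)."""
    time.sleep(random.uniform(0.1, 2.0))  # 100 ms to 2 s, per the comment
    # Recordkeeping: did the message send? When?
    return {"recipient": recipient, "status": "sent", "at": time.time()}

def send_with_retry(recipient, body, max_attempts=5, base_delay=0.5):
    """At-least-once delivery: retry with exponential backoff and jitter.
    Duplicates are possible if the provider accepted a send but the
    response was lost -- that's the at-least-once tradeoff."""
    for attempt in range(max_attempts):
        try:
            return send_message(recipient, body)
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with jitter to avoid retry stampedes.
            time.sleep(base_delay * 2 ** attempt * random.uniform(0.5, 1.5))

def send_batch(messages, workers=100):
    """Bounded concurrency: if responses can take up to 2 s, sustaining
    2,700 sends/s means on the order of 2,700 * 2 = 5,400 connections
    open at once, so a worker pool rather than one-at-a-time sends."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(send_with_retry, r, b) for r, b in messages]
        return [f.result() for f in futures]
```

The pool size also acts as a crude client-side rate limit against the provider, which ties into the rate-limit and stop/start points above.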


I literally spent the last week speccing out a system just like this and you are completely correct. You’ve touched on almost every single thing we ran into.


Oh, I absolutely agree that the complexity is in these topics. I'm just sceptical that they're enough to turn a task that could run on a laptop into one that requires an entire cluster of machines.

The third party API is the part that has the potential to turn this straightforward task into a byzantine mess, though, so I suspect that's the missing piece of information.

I'm comparing this to my own experience with IRC, where handling the same or larger streams of messages is common. And that's while receiving the messages in real time, storing them, matching and potentially reacting to them, and doing all that on a Raspberry Pi.



