I do find it hilarious that Apple expects companies to fork over 30% of the price of renting a movie in an app and considers that "fair", and then on platforms where THEY would have to fork over 30% (e.g., the Apple TV app on Google TV) they don't let you rent at all. So everyone gets a worse experience everywhere on the off-chance they can extract rent.
Edit: Our requirement was to process this queue as fast as possible, and that means more workers. With process-based concurrency that is very costly, as you explained.
Yeah, everyone wants to process their queue as fast as possible, but "as fast as possible" in practice means a cap on the maximum allowed delay. Otherwise, why stop at 30 workers? Go for 300. 3000?
Also, if the workers shared all the code, you could have used Unicorn to fork the processes after the code loading was complete. The 400MB per process would then instantly come down to something like ~10MB per process, at which point the rewrite would have been delayed for another year or so.
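For reference, the preload-then-fork setup I mean looks roughly like this; a sketch, not your actual config, and the worker count and ActiveRecord hook are just illustrative:

    # config/unicorn.rb -- minimal preload-then-fork sketch
    worker_processes 30     # illustrative; size to your queue and your box
    preload_app true        # load the whole app once in the master process

    after_fork do |server, worker|
      # Anything that can't survive a fork (DB sockets, etc.) must be
      # re-established per worker; ActiveRecord shown here as an assumption.
      ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
    end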
As fast as Twilio can accept and process without throttling; beyond that it's not much use.
Unicorn's forking benefit is overrated; we used it and didn't see much benefit for long-running processes.
Sidekiq is a good alternative, but that means some rewriting (for our app anyway). Secondly, Sidekiq looks mature today; I started working on some of these changes 2 years ago.
Can you explain why using Sidekiq involves a rewrite? AFAIK, with Sidekiq you just have to make sure that your jobs are threadsafe, not the whole app, which is not very hard.
2 years ago, Ruby was not COW-friendly. So yeah, there was not much benefit to forking if you were using 1.9.3. Not sure how well Ruby 2.x fares in that respect.
You have to make all code that executes from a job threadsafe, or you have to decouple the job code from the app code (which would probably be required anyway, because I'm not sure Sidekiq supports old Rubies).
So in any case you have to rewrite about as much code as I rewrote in Go and decoupled from the main app (it's not a lot of code, as I mentioned in the talk).
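For anyone following along, the shape this takes with Sidekiq is roughly the following; all class, queue, and helper names are made up, not the actual app:

    require "sidekiq"

    # Sidekiq runs many jobs concurrently as threads in one process, so only
    # the code reachable from perform has to be thread-safe.
    class SmsDeliveryWorker
      include Sidekiq::Worker
      sidekiq_options queue: "sms", retry: 3

      def perform(message_id)
        message = Message.find(message_id)  # assumes an ActiveRecord-style model
        TwilioClient.deliver(message)       # hypothetical wrapper around Twilio's API
      end
    end

    # Enqueueing from the app:
    # SmsDeliveryWorker.perform_async(42)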
How is Unicorn's forking relevant in this context? Since they had memory usage problems with workers, I assumed they were using Resque (which uses forking) or Delayed Job.
wkhtmltopdf and phantomjs both worked similarly; currently I'm using phantomjs.
And I'm not splitting the PDF but splitting the HTML generation workload, then creating individual PDFs from those HTML chunks. They are then joined together (using pdfunite). I found this much faster than joining the HTML and generating one large PDF.
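Roughly, the split-render-join pipeline looks something like this; a sketch with illustrative file names, not the actual Go code, and assuming wkhtmltopdf (or a phantomjs rendering script) plus poppler's pdfunite are on the PATH:

    # Render each HTML chunk to its own PDF, then concatenate with pdfunite.
    chunk_files = Dir["chunks/*.html"].sort

    part_pdfs = chunk_files.map do |html|
      pdf = html.sub(/\.html\z/, ".pdf")
      system("wkhtmltopdf", html, pdf) or raise "render failed for #{html}"
      pdf
    end

    # pdfunite takes the input PDFs in order, followed by the output file.
    system("pdfunite", *part_pdfs, "report.pdf") or raise "pdfunite failed"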
OK. Are you using phantomjs 1 or 2? Any reason for choosing phantomjs over wkhtmltopdf? We are using wkhtmltopdf because it creates a table of contents for PDFs and also clickable links.
Our Ruby application had memory problems and that's why it needed an overhaul, but instead of changing the architecture and rewriting the same thing in Ruby, I chose Go. Sure, rewriting made it better; doing it in Go made it a lot better.
One of the things I said in the presentation was: "These performance numbers look impressive, but ignore them; rewriting in Ruby could have improved them too, maybe not by a huge margin, but they would still have been better."
I chose Go not because the language was better, and not only for performance.
Simple deployment was the key point, and deployment is not just deploy-and-forget; there are entire companies founded around deploying and maintaining Ruby apps for you, because it's not a simple thing for a tiny startup.
Exactly right, any language could have worked, and we didn't replace Ruby; just a few parts of our big application were rewritten.
I did it in Go not because the language is better; the ecosystem and culture are better, but not the language itself. I would suggest you do a small real-life project in Go.
The problem is I don't know Java. But I know enough Java to know that it would take a lot longer for me. And I might not get the simple deployment benefits; that's an important feature of Go.
You captured my sentiment exactly; when I started doing this I didn't realize all the benefits. Because of Go's simplicity in deploying and maintaining, many small apps don't add much overhead. Now you can scale individual components.
The only risk is that you break things into more components than you should, so balance is required.
As I said below, if you step off the full-stack track, you get a lot of the Go-like things in Ruby as well.
My point being: I usually rate architectural changes as more important than changes in development details (and the programming language might be a big one there, but it still is one, IMHO).
I'll tell you why: a lot of the sentiments people bring up when they now switch from Ruby to Go, I've heard before, when people started with Ruby. I had already been doing Ruby for ~2 years before Rails even came out, and it got me into the position of saying: "just you wait until you see the bad parts". They attributed a lot of things to Ruby when they were really changing their development model.
Exactly, the Ruby app was a monolith, so it was consuming much more than this single feature would require. Now there is a double benefit: Go already uses a lot less memory, and it's a microservice. Plus, with Ruby we have to run multiple processes, so total memory is whatever a single process consumes times the # of workers.
I could have rebuilt it in Ruby, and that was my first thought, but deploying and maintaining Ruby apps is a lot harder than you can imagine. I didn't like the idea of maintaining lots of small Ruby apps. Once you deploy a single Go application, you might not want to deploy another Ruby app.
Not going to argue with your decision; the process of splitting up is also a chance to change the platform.
We do a lot of non-Rails Ruby work and have implemented microservice architectures in it and found it rather unproblematic. The trick is not to start with a full stack and build down, but to start with a small kernel of an app, deploy it, add something, deploy it, iterate, iterate, iterate.
"Plus, with Ruby we have to run multiple processes, so total memory is whatever a single process consumes times the # of workers."
I don't use the Ruby stack, but if forking happens after loading libraries, this is not true, since most UNIXes (such as Linux) use COW memory pages when forking. So, it may appear that you use N times the memory of a single process, but most of their memory pages are shared.
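The pattern being described is roughly the following; a sketch where the require target, worker count, and per-worker loop are placeholders:

    # "Load first, fork later": the parent requires everything heavy once,
    # and each forked child shares those pages copy-on-write until it
    # writes to them.
    require_relative "app_environment"   # placeholder: loads the whole app

    WORKER_COUNT = 30                    # illustrative

    pids = WORKER_COUNT.times.map do
      fork do
        Worker.new.process_queue         # hypothetical per-worker loop
      end
    end

    pids.each { |pid| Process.wait(pid) }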
Yes, but garbage-collected languages that store mark bits inside the objects themselves (which Ruby did for a long time) are not at all COW-friendly (because GC runs touch the pages).
Only in recent Ruby versions is that not a problem (it was changed for exactly that reason).
Apple's attitude towards developers complaining about the App Store was largely "deal with it", but it seems like they themselves can't deal with it.