Cloud providers (particularly the hyperscalers) are ultimately bundles of multiple services. Given that the hyperscalers do almost everything, you could extend this point in a variety of ways: why do companies already on AWS bother to use MongoDB, Snowflake, or even GitHub when DynamoDB, Redshift, and CodeCommit exist?
The answer tends to boil down to a combination of developer experience, performance, and pricing. FWIW, the platform offerings on GCP are also more intuitive than the equivalent services on AWS and Azure, which is where most businesses/startups are hosting services.
Edit: cloud vendor lock-in is also a very real phenomenon, however much it "looks like" all cloud providers should be easily interchangeable. Needless to say, when you make money selling compute, the incentive is to keep people on your stuff.
Our archived repo functioned as more of a Kubernetes-centered dashboard; Porter Cloud is intended to offer a more complete PaaS experience, including spinning up non-application resources like databases.
Thanks! Followup question: after you "eject" the app and start paying AWS directly but continue using Porter, is the experience more like the archived repo? Or is it still Porter Cloud, just with different billing underneath?
The experience after ejecting to your own AWS account is the same as Porter Cloud. If you're using something like Postgres on Porter Cloud, that also gets switched to wrap RDS under the hood in the AWS case.
At the moment we only support the major hyperscalers since they're the most common ejection destinations, but we're considering adding more options. No concrete plans for this at the moment though
I'd like to reframe this a bit (Porter co-founder btw). The way we see it, the core value of many PaaS solutions is the reduction of DevOps overhead by allowing teams to focus engineering resources on product and not generic infra maintenance tasks.
Most of our existing users are companies that are already using Porter in their own AWS/GCP/Azure accounts because they want to reduce time spent on cloud management as they continue to grow. Companies like Heroku provide this service exclusively in a hosted cloud environment where they also resell the underlying infrastructure to you (similar to Porter Cloud), but we want to be flexible in delivering the same value on any cloud provider.
If we're doing our job, we will continue to automate enough generic DevOps work that Porter keeps delivering value even as you scale in your own cloud. We have a good number of late-stage startups (and even some public companies) with DevOps teams in place that use us precisely this way to handle core parts of their infra and application lifecycle management.
Porter Cloud is intended as a way to "get off the ground," but our staying value lies in continuing to reduce the same DevOps overhead even once you're running in your own cloud account
Founder here - the "up to 3x cheaper than Heroku" depends on the exact compute profile, but as a point of reference, Heroku pricing starts at $250/mo for a single 2.5 GB RAM instance on their Performance tier (https://www.heroku.com/dynos). Generously assuming that you get 2 dedicated vCPU cores, the equivalent Porter cost is ~3-4x cheaper
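To make the arithmetic behind the "~3-4x" figure concrete: the Heroku number is from their pricing page, but Porter's per-instance cost below is an illustrative assumption for this sketch, not a published price.

```python
# Rough cost comparison for one 2.5 GB RAM / 2 vCPU instance.
# Heroku Performance-tier figure is from heroku.com/dynos; the Porter
# figure is a hypothetical comparable-instance cost, not a quote.
heroku_monthly = 250.0            # Heroku Performance tier, per month
assumed_porter_monthly = 70.0     # illustrative assumption only

savings_multiple = heroku_monthly / assumed_porter_monthly
print(f"~{savings_multiple:.1f}x cheaper")  # lands in the claimed ~3-4x range
```

Under that assumed price the multiple works out to roughly 3.6x; the actual ratio depends entirely on the compute profile you compare against.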
Edit: Porter Cloud also supports Postgres and our in-your-own-cloud offering just uses RDS under the hood for AWS
We're building a PaaS that runs in a user's own cloud (basically Heroku on k8s). We've converted some of Heroku's largest enterprise users as well as a large base of high-growth startups despite starting just a little over a year and a half ago.
We're still a team of six but we believe in 10x engineers and are looking to grow our in-person team in NYC.
OP and Porter founder here. The article was meant to outline the most common technical limitations we see companies on Heroku bump up against as they outgrow Heroku. For individuals and teams running smaller workloads on Heroku where saving $ is a chief concern, Heroku is probably still a good option even though they’re declining in market share (this unprecedented recent outage aside). Porter is designed for companies that are maturing off Heroku for the technical reasons we mention or for those already looking to get the automation of Heroku in their own AWS/GCP cloud.
It caught my eye too, but for a different reason: this bit doesn't seem right:
> which is to say, virtually all deployments
My understanding is that if you deployed with `git push heroku main`, that application's GitHub repository was not viewable by the attackers (but apps deployed through 'Heroku GitHub Deploys' were). Please tell me if my understanding is incorrect.
I think most Heroku users would deploy with `git push heroku main`, although that's purely a hunch.
Unrelated, but I'd add one more thing to the article: Heroku docs aren't easy to give feedback on. I'd love for the docs to be on GitHub so shortcomings or inaccuracies can quickly be addressed. Currently, to point out a correction to the docs, you have to write a support ticket, and there's a 100% chance that ticket isn't going beyond the person who received it, so nothing will get actioned.