skar5151's comments | Hacker News

Great article. Looking forward to the workshop!


Great use case! The first-tier compute would, however, need to move to the edge to ensure realtime actions; longer-term analytics can still stay on the cloud. I would really like to see containers move to the edge for localized APIs which sync offline with APIs on the cloud.
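The offline-sync pattern gestured at here can be sketched as a local queue that the edge tier drains to the cloud whenever connectivity allows. This is a minimal illustrative sketch, not any particular product's API; the `EdgeSyncQueue` and `CloudSender` names are made up:

```typescript
// Hypothetical edge-side queue: events are acted on locally right away,
// while delivery to the cloud API is deferred until sync() succeeds.
type CloudSender = (event: object) => Promise<boolean>; // true = accepted by cloud

class EdgeSyncQueue {
  private pending: object[] = [];

  // Record an event locally for later delivery to the cloud.
  enqueue(event: object): void {
    this.pending.push(event);
  }

  // Drain the queue to the cloud; stop at the first failure so ordering
  // is preserved and the remainder is retried on the next sync attempt.
  async sync(send: CloudSender): Promise<number> {
    let delivered = 0;
    while (this.pending.length > 0) {
      const ok = await send(this.pending[0]);
      if (!ok) break;
      this.pending.shift();
      delivered++;
    }
    return delivered;
  }

  get backlog(): number {
    return this.pending.length;
  }
}
```

In practice the queue would be backed by durable local storage rather than memory, but the queue-then-drain shape is the core of offline-first edge sync.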


Is a Dapp the equivalent of a microservice with its own contract?


Not exactly in the traditional sense. A Dapp is a decentralized app that runs on Ethereum. To bridge the gap beyond blockchain execution, the closest traditional-application embodiment of a Dapp, taken across all of its smart contracts, is ideally a microservice (and API) :)


Is this the right math? 1 microservice = 1 REST API endpoint = 1 container = 1 node process?


There are no hard rules about how to divide up your microservices or how atomic each one should be. This is an example at the lowest level, as projected by Nginx.

We recommend deploying microservices that are encapsulated by domain because (1) it preserves locality of operations, and (2) shared dependencies are going to require you to rev them as a unit. For us, that model could be running in its own container or alongside several other related models.
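As a concrete illustration of the lowest-level extreme in the question above, a single Node process exposing a single REST endpoint might look like the following. This is a minimal sketch; the `/health` path and port handling are illustrative, not from the thread:

```typescript
// One microservice = one endpoint = one Node process, at its most atomic.
// In practice you would group related endpoints for a domain into one service.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.method === "GET" && req.url === "/health") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// In a container the port would come from the environment; 0 picks a free port.
server.listen(Number(process.env.PORT ?? 0));
```

Packaged in its own container, this is the "1 endpoint = 1 container = 1 node process" extreme; domain-encapsulated services simply add more routes to the same process.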


Great question!


After reviewing hundreds of applications from talented tech innovators, we're pleased to announce the 2nd annual Node.js and Docker Innovator Program finalists. Follow along during the coming year as we highlight these tech innovators, learn why they selected Node.js and Docker, and share their production best practices and user stories. One of these finalists will also be honoured with the Innovator of the Year award. Congratulations to each of this year's finalists!


I would be very surprised if there is a CIO/CTO without a public/private/hybrid cloud adoption roadmap to show... well, maybe if your main business is operating legacy data centers. Even the latter gang is wondering how to turn their decade-long infrastructure investment into a cash cow. But be careful: cloud does not mean AWS, or lock-in to a popular public cloud. You are right that API-driven architectures are the way to go in any combination of clouds you build. Be it provisioning, scheduling, orchestration, or scale-out... APIs are the new delivery bus.


We're talking to many new-gen cloud companies that are already in production and have a burgeoning business to support. They're all "sensitive" to cloud lock-in, and portability is definitely a concern. What's also interesting is that many have moved from cloud to cloud, like AWS to Azure. And many run a combination of clouds as well - a lot more than I had anticipated!


The writing is on the wall. Traditional services just won't cut it for mobile and IoT. Who is willing to wait 2-3 seconds for each service call to respond and be parsed per mobile event? I think this is also driven by SPA-style programming, where micro data segments need to be served almost in realtime versus loading the entire service payload. Microservices are the clear winner.

The question is: what is the migration path for enterprises from SOA to microservices? And how much heavy lifting, in terms of rewrite, is needed?


Everyone who's been in the trenches knows that scorched-earth, big-bang replatforming rarely succeeds and never gets completed. Use scripting to wrap around your existing services, carve out microservices with clean interfaces and intense domain focus, and go distributed the way you had hoped. We help you do this 0-60 flat in a visual canvas, with full source code transparency, saving you tons of time repeatedly.
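The "wrap, then carve out" migration path can be sketched as a thin routing layer in front of the legacy SOA backend: already-carved-out paths go to new microservices, and everything else falls through to the old system. All service URLs and path names below are hypothetical:

```typescript
// Hypothetical strangler-style routing table: migrate one domain at a time
// by moving its path prefix from the legacy backend to a new microservice.
const MICROSERVICE_ROUTES: Record<string, string> = {
  "/orders": "http://orders-svc:8080",       // already carved out
  "/inventory": "http://inventory-svc:8080", // already carved out
};

const LEGACY_BACKEND = "http://soa-esb.internal:9000"; // existing SOA/ESB

// Decide where a request goes; longest-prefix match keeps interfaces clean
// and avoids accidental overlap between old and new routes.
function routeFor(path: string): string {
  const prefix = Object.keys(MICROSERVICE_ROUTES)
    .filter((p) => path === p || path.startsWith(p + "/"))
    .sort((a, b) => b.length - a.length)[0];
  return prefix ? MICROSERVICE_ROUTES[prefix] : LEGACY_BACKEND;
}
```

As each domain is rewritten, its prefix moves into the table; the legacy system shrinks without a big-bang cutover.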


I like Kubernetes, but it is not well supported commercially and still has stability issues at large scale. And I absolutely hate running on AWS or even on my own VMs, as it kills performance and server density versus running schedulers on bare metal. Are you planning on supporting other schedulers like Marathon? Kind of a BYOS - bring your own scheduler?


We've standardized on Docker containers, so any scheduler should be able to run your APIs and microservices. For now we've used Kubernetes for deploying and managing those containers. If you have your own scheduler, it would be possible to swap it in with an on-premise deployment. What Kubernetes lacks in dedicated professional support it makes up for with the most active and helpful community.


I like the thought leadership. However, predicting costs in serverless environments can be a challenge. Infrastructure costs, even if elastic, are more predictable. Thoughts on how to project spend or determine compute resources per function?


Very, very true. The cost of compute is published per cloud provider. For example, on AWS, the first 1M requests are currently free; after that it's $0.20 per million. At some volume, depending on your use case, there's going to be an inflection point where just running your own container or VM is cheaper. The bigger challenge is tracking and understanding your usage as it pertains to your application, beyond the compute resources on a per-call basis. We intend to build governance and metering for infrastructure usage into our orchestration solution to help shed light on this area.
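The request-pricing arithmetic above (first 1M free, then $0.20 per million) can be turned into a rough projection. This sketch deliberately ignores duration charges (GB-seconds), which real serverless bills also include, so treat it as a lower bound:

```typescript
// Back-of-envelope request-pricing model from the figures quoted above.
const FREE_REQUESTS = 1_000_000;
const PRICE_PER_MILLION = 0.2; // USD, request pricing only

// Monthly request cost: only calls beyond the free tier are billed.
function monthlyRequestCost(requests: number): number {
  const billable = Math.max(0, requests - FREE_REQUESTS);
  return (billable / 1_000_000) * PRICE_PER_MILLION;
}

// Rough inflection point: the request volume at which serverless request
// charges alone match a fixed monthly container/VM cost.
function breakEvenRequests(containerMonthlyCost: number): number {
  return Math.round(FREE_REQUESTS + (containerMonthlyCost / PRICE_PER_MILLION) * 1_000_000);
}
```

For example, against a $20/month container, request charges alone don't catch up until roughly 101M calls per month; duration charges pull that inflection point much lower in practice.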


Good article. Would love it if you shared thoughts on tracking container density during scheduling and orchestration. Containers on VMs is like going backwards in design. VM tools have made life easy with auto-scaling and all on public clouds; however, the future is containers on bare metal. It gets you much higher performance and server density. #vmless


That's true to a certain degree. VMs running containers give you OS isolation, but at the cost of an abstraction layer that is not necessarily needed if your interest is only at the container level. That's pretty much why CoreOS, RancherOS, and other technologies are out there: to be as slim an OS as possible, running just containers and the bare minimum of system processes. Even system processes are run as containers in RancherOS! APM and other system monitoring tools need to be tweaked for the huge multiplier of processes that are now your distributed app, versus the one process that might've been your JVM, for example. There's a concerted ops and devops effort required to hook into your orchestration layer to get that kind of insight. Roman's next post is going to cover the reasons why we chose the OSS orchestrator that we did.

When it comes to APIs and microservices, you're almost inevitably faced with container sprawl - especially with the ability to deploy quickly and deploy often. Having good orchestration, as described in this post, gives you the ability to mind those containers across multiple machines - bare metal or VM, it doesn't matter.

