Hacker Newsnew | past | comments | ask | show | jobs | submit | more meekins's commentslogin

Serverless runtimes don't shut down an instance immediately after an invocation completes but keep it up for an undetermined time depending on many factors (usually ~15 minutes) so a possible new event can be processed without the start-up overhead (a cold start). This means that in a system with constant traffic some instances may have a surprisingly long uptime. The instance does get destroyed when it throws an error, though, so a function with GC disabled would effectively reset its state when it runs out of memory. Slower processing by memory-starved instances would probably eliminate any minimal performance gain from disabling the GC. (EDIT: I just realized that with no GC desperately spinning in place the starvation would not have much performance impact, so this would actually be a very interesting approach if you can tolerate some invocations failing with out-of-memory errors, sorry!)
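A rough sketch of why warm instances matter here (handler and names hypothetical, in the style of a Python Lambda):

```python
# Module-level state lives as long as the execution environment does.
# On a cold start this module is re-imported and the counter resets;
# on a warm instance it keeps accumulating across invocations -- which
# is exactly where disabled-GC garbage would pile up too.
_invocation_count = 0

def handler(event, context=None):
    global _invocation_count
    _invocation_count += 1
    # 1 on a cold start, growing on every reuse of the warm instance.
    return {"invocation": _invocation_count}
```

Calling the handler twice in the same process mimics two invocations landing on the same warm instance.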

Regarding performance optimization of Lambdas, start-up time is the most important factor, as cold starts also happen under concurrent requests when no warm instances are available. This makes Go, Node and Python pretty sweet for serverless. Thanks to the recently released Lambda SnapStart feature, which launches new instances from snapshots taken after application initialization, Java (even with DI frameworks) is a plausible option if you're inclined that way. Previously you had to use a custom runtime and AOT compilation (which rules out any runtime DI) to achieve tolerable performance with Java on Lambda.
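The pattern SnapStart snapshots (and ordinary warm starts) reward is the same on every runtime: do expensive initialization once at module load, outside the handler. A sketch with hypothetical names:

```python
import json
import time

# Expensive initialization runs once per execution environment, at import
# time: loading config, building SDK clients, warming caches. With Java
# SnapStart the snapshot is taken after this phase; on other runtimes it
# simply runs once per cold start instead of once per request.
_start = time.monotonic()
CONFIG = json.loads('{"table": "example"}')  # stand-in for real config/client setup
INIT_SECONDS = time.monotonic() - _start

def handler(event, context=None):
    # Per-invocation work only; reuses CONFIG initialized above.
    return {"table": CONFIG["table"]}
```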


I stopped using the Serverless framework when CDK came out. For simple cases it's still fine I guess, but a lot of the time I found myself falling back to plain CloudFormation (ugh) or relying on plugins with questionable maintenance status. I would not recommend it for new projects; even AWS SAM makes applying best practices like the principle of least privilege easier.
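For example, SAM's policy templates make narrowly scoped permissions a one-liner instead of a hand-written IAM policy; a sketch (resource names hypothetical):

```yaml
# Hypothetical SAM resource: DynamoDBReadPolicy grants read-only access
# to this one table, rather than a broad wildcard policy.
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: python3.12
    Policies:
      - DynamoDBReadPolicy:
          TableName: !Ref OrdersTable
```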


CDK is pretty good; the one main pain point is that some resources are not supported, so again you end up reverting to writing CloudFormation. I really wish one of AWS' criteria for “done” when a new resource is added were that it must be added to the CDK.


In contrast to provider-native serverless solutions, cloud-agnostic solutions carry a high price tag across the entire application life cycle in the form of operations - those people petting your k8s. A common misconception regarding serverless applications is focusing on compute alone, and in that context an agnostic solution may seem lucrative after traffic reaches a certain threshold, justifying running a cluster 24/7. However, many scalable application architectures benefit from asynchronous processing and event-driven models, which require reliable messaging infrastructure with considerable operational overhead. This is where serverless applications built on managed services shine, making it possible for small teams to deliver very impressive things by outsourcing the undifferentiated ops work to AWS. On the other hand, if the compute layer is the only lock-in-inducing component in your architecture, a properly architected application is relatively easy to migrate to a serverful model. As a crude simplification: just replace the API Gateway with Express.
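That "replace API Gateway" swap really is mostly event translation. A crude stdlib-only Python sketch (handler and event shape are hypothetical stand-ins for an API-Gateway-style proxy integration):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A hypothetical Lambda-style handler we want to serve without API Gateway.
def lambda_handler(event, context=None):
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"hello": name})}

def http_request_to_event(path: str) -> dict:
    """Translate a raw HTTP path into an API-Gateway-like proxy event."""
    raw_path, _, query = path.partition("?")
    params = dict(p.split("=", 1) for p in query.split("&") if "=" in p)
    return {"path": raw_path, "queryStringParameters": params}

class Adapter(BaseHTTPRequestHandler):
    """Thin HTTP front that plays the role API Gateway used to play."""
    def do_GET(self):
        result = lambda_handler(http_request_to_event(self.path))
        self.send_response(result["statusCode"])
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(result["body"].encode())

# To run serverful: HTTPServer(("", 8080), Adapter).serve_forever()
```

The business logic in `lambda_handler` is untouched; only the event plumbing changes.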


Or module-level state (Go, Python), which is in many ways even worse.


- Work laptop running Linux

- Sofa laptop running MacOS

- Raspberry Pi running Linux (RetroPie)

- SFF gaming desktop running Windows 10 Pro

- reMarkable 2

- Two Android phones (work & personal)


> we'd only be able to offer real "edge compute" to a small number of big enterprises with deep pockets, rather than at prices affordable to everyone

What a nice way to formulate the trade-off between cost and security. When Workers came out with huge headlines about performance and cost I was very disappointed to find out there was no special technical wizardry behind it, just conscious trade-offs made at the expense of customer data security. Workers simply skips all the sandboxing steps other providers take to implement a multi-tenant application runtime securely. So far the mitigations in place seem to make the possibility of Cloudflare customers getting bitten by a V8 vulnerability unlikely rather than impossible.

Sure, the platform has interesting ideas and I'm looking forward to trying it out as a full-stack serverless platform. I just cannot foresee running anything serious on it before they come up with a more convincing security story.


Other providers run attacker-provided native code directly on hardware, deeply relying on bug-free silicon for their whole security model to work. I honestly think that's far more precarious than what Workers is doing.


Security, the ability to outsource pretty much everything ops-related (besides cloud resource provisioning and deployments) to the cloud provider, and letting developers focus on business logic.

I don't know a better dev environment than a (possibly scaled-down) personal replica of the production environment in the cloud. With proper tooling (e.g. Serverless Stack or SAM) you can achieve very fast code updates, so the old argument about slow feedback from having to deploy changes to the cloud on each iteration is getting less and less true as well.

With more traditional models, just keeping your OS, container images, web server and any other middleware secure and up to date is pretty expensive if you want to do it properly.

Going all-in on serverless might not make much sense for a large software product company but when building bespoke business software it allows small teams to do wonderful things very cost-effectively.


> consistent and pervasive security model with Azure AD

Wait, this is the first time I've heard this about Azure. Could you elaborate? It is possible that things have improved significantly since I last worked with Azure, but the lack of a consistent security model (like IAM on AWS) to control human and service (Azure Functions, App Service apps etc.) access to specific resources (Cosmos databases, Event Hubs etc.) was especially painful.


Some of it is wonky, such as the login model for PostgreSQL on Azure (you create login-capable postgres groups that exactly mirror the name of an Azure AD group, and then the "password" you pass in is actually a JWT proving YOU are in fact a member of that AD group -- so you have to hit a funky endpoint to get a time-limited "password").
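A sketch of that dance, assuming the Azure CLI pattern for Azure Database for PostgreSQL (the host, database and group names below are hypothetical; the CLI call and AAD resource URL follow the az tooling):

```python
import json
import subprocess

# AAD audience for Azure Database for PostgreSQL/MySQL token requests.
AAD_RESOURCE = "https://ossrdbms-aad.database.windows.net"

def get_aad_password() -> str:
    """Fetch a time-limited access token via the Azure CLI (requires a
    prior `az login`); this JWT is what gets passed as the 'password'."""
    out = subprocess.run(
        ["az", "account", "get-access-token", "--resource", AAD_RESOURCE],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)["accessToken"]

def conn_params(host: str, db: str, aad_group: str, token: str) -> dict:
    # The user is the postgres role mirroring the AAD group name; the
    # token stands in for the password and expires, so it must be refreshed.
    return {"host": host, "dbname": db, "user": aad_group, "password": token}
```

These params would then be handed to a driver like psycopg2; the funky part is that the "password" goes stale roughly hourly.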


To me a DNS lookup spinning up a container on Fargate looks both very cool and scary at the same time.


I’d never heard of the approach before and assumed it wasn’t possible, so that’s a nice TIL. But yeah, relying on obscurity to contain costs seems like a recipe for a surprise bill.


Better be sure not to share the DNS name with anyone.


Sounds like optimizing data partition and blob sizes in Athena and/or Glue


