To be clear, Colima isn't a fork of Docker. It's just the Lima VM plus the Docker engine and CLI, which are FOSS and always have been. Docker Desktop is the pile of garbage you can kinda sorta replace with it, but Podman and Podman Desktop are closer to a clone of Docker than Colima is. Colima _is_ Docker
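For anyone curious, the macOS setup is roughly this (package names per Homebrew; resource flags are just illustrative):

```shell
# Colima runs the Lima Linux VM with a container runtime inside it;
# the FOSS docker CLI on the host talks to the daemon in the VM.
# No Docker Desktop involved anywhere.
brew install colima docker        # "docker" here is just the CLI
colima start --cpu 4 --memory 8   # boots the Linux VM
docker run --rm hello-world       # plain docker CLI, business as usual
```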
We all have personal AWS environments and use them as the need arises at my org. That doesn't change the fact that CloudFormation deployments take inordinate amounts of time for seemingly no reason. Basic shit like pushing a new ECS task takes 10+ minutes alone. Need to push an IAM policy change by itself? 5 minutes. Maybe it's the CDK, but we've only been on that a couple years; before that we used Ansible and CloudFormation templates directly, and it wasn't any better. This compounds with each dev and each change across multiple stacks. On top of that, CloudFormation easily gets "stuck" in unrecoverable states when a rollback fails, and you have to manually clean up the drift, which can easily eat your entire day. I'll note that our stacks have good separation of concerns; doesn't matter. A full deployment of a single ECS service easily takes 30 minutes. This is so wasteful it's absurd. I'd love to NOT have to use a shim like LocalStack, but the alternative is what?
It’s never taken 30 minutes to pass in a new parameter value for the Docker container.
Also, as far as rollbacks go, just use --disable-rollback.
The only time I’ve had CFT get stuck is using custom resources when I didn’t have proper error handling and I didn’t send the failure signal back to CFT.
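For reference, a sketch of that flag on a plain CLI deploy (stack and template names are placeholders), plus the CDK equivalent:

```shell
# Leave the stack in UPDATE_FAILED for inspection instead of
# auto-rolling back on failure (names are placeholders).
aws cloudformation deploy \
  --stack-name my-service \
  --template-file template.yml \
  --disable-rollback

# Roughly the same behavior when deploying via the CDK:
cdk deploy my-service --no-rollback
```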
Failed deployments without rollbacks still leave you in an unusable state, and manual rollbacks of a failed service deployment can take as long to clean up as the failed rollback you just disabled, especially when dealing with persistent resources. That linked Fargate stack is fairly bare-bones compared to what we run in ECS, and we maintain our own AMIs, built nightly for security updates, plus ECR resources from Docker build pipelines, which all need to go together in a real AWS environment to have any hope of actually working. A failure in one has cascading effects on the others, and cleanup is a pain. Passing a new parameter isn't a real exercise, and we need a new Docker build with every code change. Glad you have a minimalist setup and can get by with what, 10-minute deployments end to end? Sadly that's not the world I live in...
Why are you running your own AMIs for ECS instead of just using Fargate?
The build pipeline I used in CodeBuild was to build the Docker container and a sidecar Nginx container.
The parameter you pass in is the new Docker container you just built.
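Assuming a pipeline along those lines, the shape of it is roughly this (the account, repo, and parameter names are made up):

```shell
# Build and push the image, then hand the new image URI to the
# existing stack as a parameter -- only the service should update.
TAG="$(git rev-parse --short HEAD)"
REPO="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app"

docker build -t "$REPO:$TAG" .
docker push "$REPO:$TAG"

aws cloudformation deploy \
  --stack-name my-service \
  --template-file template.yml \
  --parameter-overrides ImageUri="$REPO:$TAG"
```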
But how would LocalStack help?
You also don’t have massive CDK apps. The Docker images are going to change much more frequently than your persistent layer. You’re not going to be bringing up and down your VPCs, database clusters etc.
Own AMIs? Simply cost. No other reason, although we're evaluating it again, so we'll see.
We actually have several "massive" CDK projects now, depending on what metric you use for determining size. Our largest CDK app has more than 60 stacks, but with a cellular architecture that's artificially inflating the numbers a bit (n unique stacks against k AWS accounts, where k > n, with n > 20 but < 100). Maybe the speed at which we change persistent layers (99% additive) will slow down someday, but when you maintain a large number of services (>14) with constantly changing external contracts, it probably won't; it hasn't in the last 6 years, it's only gotten faster.
Which services weren't supported in your use case? Currently with our enterprise contract we use all the usual suspects:
AppConfig, DynamoDB, ElastiCache, Kinesis streams, RDS/Aurora with the InnoDB engine, S3, SecretsManager, SNS, and SQS. I'm probably forgetting a few, but we haven't hit anything unsupported (yet).
I also haven't touched any pod stuff and have no plans to. Probably just luck of the draw we didn't hit any holes or issues, but we tend not to use any esoteric features in AWS land.
The point of those pushing AI at the top is precisely to leave all human devs "behind", as it were. Anyone who thinks otherwise is not paying attention. Whether or not they succeed in their endeavors, time will tell. Either way, even if their towers of money fail to deliver on the promise (like the last 3 AI winters I've lived through), that doesn't mean we won't have a bunch of new useful tools at our disposal in the end.
I've seen Rust codebases that would make you cry, along with perfectly well-architected applications written in both Perl and PHP. You're just playing into common language silo stereotypes. A competent developer can author code in their language of choice, whatever that may be. I'm not sure "reaching for AI" implies anything besides that some folk prefer that tool for their work. I personally don't have a tendency to reach for AI, but that doesn't somehow imply they or I are "lesser" because of it.
> You're just playing into common language silo stereotypes.
Yes, the stereotype is what I brought up on purpose.
> A competent developer can author code in their language of choice whatever that may be. I'm not sure "reaching for AI" implies anything besides that some folk prefer that tool for their work.
More relevantly, a competent developer can use AI just like one can use PHP. It buys enormous value in the short term.
> I personally don't have a tendency to reach for AI, but that doesn't somehow imply they or I are "lesser" because of it.
Yes, just like people who use PHP can make excellent programs. Nobody in this conversation implied anyone was lesser than another.
So you're saying reputations of "atrociousness" in both cases (AI users and implied poor quality producing software devs) aren't warranted? That wasn't clear in your post (at least to me.) Simply pointing out a correlation of negative stereotypes without refuting evidence just helps reinforce them.
The implication being that execs want folks who "reach for AI" to meet some arbitrary contract targets? Sounds like optimizing for the wrong things but I've seen crazier schemes.
In my opinion the end goal of those execs pushing AI is the age old goal of seizing the means of production (of software in this case) by reducing the worker to a machine. It'll likely play out in their favor honestly, as it has many times in the past.
C# is cross-platform; I'd bet money that most .NET services run on Linux these days (Azure runs more Linux VMs than Windows VMs, after all). This just fills the client-side gap so you can unify the full stack under one language, a la Node etc.
This is already happening; many days I am that grumpy "code janitor" yelling at the damn kids to improve their slop after shit blows up in prod. I can tell you it's not "fun", but hopefully we'll converge on a scalable review system eventually that doesn't rely on a few "olds" to clean up. GenAI systems produce a lot of "mostly ok" code that has subtle issues you only catch with some experience.
Maybe I should just retire a few years early and go back to fixing cars...
Yeah I imagine it has to be utterly thankless being the code janitor right now when all the hype around AI is peaking. You're basically just the grumpy troll slowing things down. And God forbid you introduce a regression bug trying to clean up some AI slop code.
Maybe in the future us olds will get more credit when apps fall over and the higher ups realize they actually need a high-powered cleaner/fixer, like the Wolf in Pulp Fiction.
I’ve got a “I haven’t written a line of code in one year” buddy whose startup is gaining traction and contracts. He’s rewritten the whole stack twice already after hitting performance issues and is now hiring cheap juniors to clean up the things he generates. It is all relatively well defined CRUD that he’s just slapped a bunch of JS libs on top of that works well enough to sell, but I’m curious to see the long term effects of these decisions.
Meanwhile I’m moving at about half the speed with a more hands-on approach (still using the bots, obviously), but my code quality and output are miles ahead of where I was last year, without sacrificing maintainability and performance for dev speed.
I've had to slowly and painfully learn the lesson that early on in a company's lifecycle it doesn't really matter how terrible the code is as long as it mostly works. There are of course exceptions like critical medical applications and rocket/missile guidance systems, but as a general rule code quality is only a problem when it inevitably bites you much farther down the line, usually when customers start jumping ship when it's obvious you can't scale or reach uptime contract targets. By then you'll hopefully have enough money saved from your initial lax approach to put some actual effort into shoring up the losses before they become critical. Sometimes you just get by with "good enough" for decades and no one cares. For someone who cares about the quality of their work it can be a sad state of affairs, but I've seen this play out more times than I'd care to count.
> There are of course exceptions like critical medical applications and rocket/missile guidance systems, but as a general rule code quality is only a problem when it inevitably bites you much farther down the line, usually when customers start jumping ship when it's obvious you can't scale or reach uptime contract targets.
My experience is it hits both new-feature velocity and stability (or the balance between those two) really early, but lots of managers don't realize that this feature that's taking literal months could have been an afternoon with better choices earlier on (because they're not in a position to recognize those kinds of things). For that matter, a lot of (greener) developers probably don't recognize when the thing that's a whole-ass project for them could have been toggling a feature flag and setting a couple config entries in the correct daemon, with better architecture, because... they don't even know what sort of existing bulletproof daemon ought to be handling this thing that somehow, horrifically, ended up in their application layer.
So the blame never gets placed where it belongs, and the true cost of half-assed initial versions is never accounted for, nor is it generally appreciated just how soon the bill comes due (it's practically instantly, in many cases).
There are phases in a company's lifecycle which carry different weights of code quality, depending on factors like the domain, how many customers you have, what your risk aversion is, etc. I'm just saying don't build a cathedral when a molehill will do. If the product doesn't work, that's another story; it still needs to stand up without falling over when you look at it sideways, and having only juniors would be a good way to get the latter. Use basic design principles and proven architectures, but don't sweat things like code coverage or reinventing wheels because you think you can do better than something you can just grab off the shelf rn. It'll inevitably be a bit of a hodgepodge in the beginning, but that's ok.
Consider early code "throwaway"; don't spend your limited time rewriting anything already working to be "better" unless you actually have the leisure to do so (few actually do, and even fewer realize they don't)
I enjoy writing and designing software systems, and have since my first Apple II use in 2nd grade writing Logo programs (the turtle-drawing programming language)
I write software in my spare time, for fun, as it scratches a particular itch in my brain, but I also enjoy a lot of other hobbies as well: woodworking, car repair, boating, beekeeping...
Having a 9-to-5 desk job in any field is its own type of soul-crushing, even more so as of late for me personally. However, if I need to perform the song and dance to support my family, I'll at least do it to the tune of something I enjoy. With software engineering I can at least "get lost in" the work, so the drudgery can be temporarily forgotten until I can get home to my family and side projects.
Isn't a Docker image basically a universal binary at this point? It's a way to ship a reproducible environment with little more config than setting an ENV var or two. All my local stuff runs under a docker compose stack, so I have a container for the db, a container for Redis, LocalStack, etc.
I'm not saying it's ideal, just saying that's what we've shifted to for repeatable programs. Your Linux "universal" binary certainly won't work on your Mac directly either...
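As a sketch, the kind of compose stack I mean looks something like this (images, ports, and the password are illustrative, not our actual setup):

```shell
# Minimal local dev stack: one container each for Postgres, Redis,
# and LocalStack, written out as a compose file.
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev   # local-only throwaway credential
    ports:
      - "5432:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"
EOF

# docker compose up -d   # then point the app at localhost via env vars
```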