Although I love localstack and am grateful for what they have done, I always thought that an open, community-driven solution would be much more suitable and would open a lot of doors for AWS engineers to contribute back. I'm certain that it's in their best interest to do so (especially since many of their popular products have local versions)
It's a no-brainer to me: as AI adoption continues to increase, local-first integration testing is a must, and teams that are equipped to do it will be ahead of everyone else
How is it in AWS' best interest to provide support or changes for a FOSS clone (albeit one with ephemeral storage)? I believe it would make more sense for them to provide locked-down local-first containers themselves for various services (which they already do for services like ddb). I'm sure no one at AWS would take a bug report seriously that says some random FOSS thing doesn't work with their official client SDKs...
I think it's too early to say whether Floci will become something that people actually use, so time will tell. But AWS already had some "informal" support for localstack[1], and localstack has always been a commercial product (so you can imagine how controversial it must be internally to support a third-party clone). I'm only saying that a FOSS version is somewhat less controversial for them to support, and although I would love for AWS to have something of their own, they clearly have other priorities.
100% this. Especially with agentic workflows actually mutating state now, local testing is the only safe way to see what happens when a model hallucinates a table drop, without burning an actual staging database.
People often overlook how all the NSA-related activities and government overreach came with a nice memo from officials stating how "lawful" the questionable actions they were taking were.
Do you know who isn't a dummy? Sam. The crucial part of that statement is that the DoD will use OpenAI systems "lawfully and responsibly," which I don't doubt is written somewhere in their contract. However, those terms are so open-ended that they're impossible for OpenAI to enforce. Sam could have clarified in his tweet that they explicitly prohibited the use of their technology for mass surveillance and autonomous killing, but he deliberately chose not to, instead simply saying, "We told them not to do bad things," which smells like bullshit.
No contract can require the government to “reflect” something in law, aside from the fact that the DoD is not a legislative body. So whatever Sam is talking about can only be lip service.
So does this mean that OpenAI will give whatever the DoD asks for and they will pinky swear that it won’t be used for mass surveillance and autonomous killing machines?
And we know we can trust OpenAI because they were founded on "open" and "safe" AI (up until they realized how much money there was to be made, at which point their only value changed to "make money")