Hacker Newsnew | past | comments | ask | show | jobs | submit | tl2do's commentslogin

I'm interested in this too. I've been using STM32 NUCLEO boards, which are cheap and capable, but even the smallest ones are noticeably larger than this. I'd love to see an STM32 version of this project.

https://tomu.im/somu.html

This is an stm32l432kc in the form of a yubikey nano.


That's just nuts. That's more processing power than the first 10 computer I owned combined.

This reminds me of Gresham's Law: "bad money drives out good." But here, the result is inverted—efficient replicators drive out the less efficient.

Well, (somewhat tongue-in-cheek,) one of the defining characteristics of "good money" is that it is very inefficient to replicate!

(Several places online list "durability; portability; divisibility; scarcity; fungibility; and acceptability" as the key characteristics of "good money." Difficulty-of-replication pertains to the "scarcity" characteristic: if it's easy to duplicate, that makes it "bad" money, not "good" money.)


Bad money only drives out good money under fiat. Absent legal tender laws, the opposite is true.

AWS forces an explicit default choice—Allow or Block. Azure defaults to passive "Detection," requiring a manual switch to "Prevention." An AWS engineer, used to making this conscious decision, might miss that Azure requires a separate, critical step to actually turn protection on.

Intriguing, but...

Around last summer (July–August 2025), I desperately needed a sandbox like this. I had multiple disasters with Claude Code and other early AI models. The worst was when Claude Code did a hard git revert to restore a single file, which wiped out ~1000 lines of development work across multiple files.

But now, as of March 2026, at least in my experience, agents have become more reliable. With proper guardrails in claude.md and built-in safety measures, I haven't had a major incident in about 3 months.

That said, layering multiple safeguards is always recommended—your software assets are your assets. I'd still recommend using something like this. But things are changing, bit by bit.


No doubt they are getting better, but even a 0.1% chance of “rm -rf” makes it a question of “when” not “if”. And we sure spin that roulette a lot these days. Safehouse makes that 0%, which is categorically different.

Also, I don’t want it to be even theoretically possible for some file in node_modules to inject instructions to send my dotfiles to China.


Prompt injection attacks are very much a thing. It doesn't matter how good the agent is, its vulnerable, and you don't know what you don't know.

Where are we at with SOTA or reliable prompt injection detection mechanisms?

Look into git reflog. If the changes were committed, it was almost certainly possible to still restore them, even if the commit is no longer in your branch.

There are probably other tools like this that keep version history based on filesystem events, independent from the project's git repository

https://www.jetbrains.com/help/idea/local-history.html


I always add Node.js even for non-JS dev work (many tools need it):

bash echo "==> Installing Node.js (LTS)" curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash - sudo apt-get install -y nodejs


Good call, worth adding!

Training students to write a single theme in multiple styles—including intentionally "bad" writing—is "originally" a great educational method. It teaches real composition by helping students understand what works and what doesn't. It builds good criteria in students.

But, the article's focus on writing "worse" for AI detectors misses what is important. Trying to distinguish humans from machines does not develop student capability. In fact, it's a fleeting technique because AI writing styles will vary and improve over time.


Quick telephony question: how can calls from payphones to (888) 683-6697 be toll-free for the caller? I’m Japanese, so I may be missing something, but I don’t understand the mechanism that makes this free (or low-cost enough) to run as a free service.

Most countries have some numbers with receiver pays or reverse billing as a feature. Sometimes called 'free phone'.

My research says Japan has several prefixes, 0120, 0800, 0088, 0531, although not all phone numbers in these prefixes are available to all callers.

In the US, holders of a toll free number pay their carrier a per minute rate (sometimes with billing increments of 1 second), as well as a fee per call from payphones.

The per minute pricing is pretty reasonable typically one or two cents per minute; I think the per call cost from a payphone are more significant, although I don't see this listed by most providers; I seem to recall it being pretty hefty (and some toll free numbers would not accept calls from payphones as a result), but maybe changes in the network and intercarrier billing have resulted in a smaller fee or it's just not relevant to most people because payphones are hard to find.


Thanks. Your explanation can solve my question. In Japan, the receiver will pay for toll free call around 30 yen per call. Here it is difficult to run free service like Payphone Go considering the cost.

the owner of the (888) 683-6697 line pays for incoming calls

I couldn’t find it at first, but it is there on SourceForge: https://sourceforge.net/p/opencamera/code/ci/master/tree/app...

From my experience as a software engineer, doubling my productivity hasn’t reduced my workload. My output per hour has gone up, but expectations and requirements have gone up just as fast. Software development is effectively endless work, and AI has mostly compressed timelines rather than reduced total demand.

There's a famous quote by a cyclist, "It never gets easier, you just go faster"

It is not going to reduce your workload. It is going to remove one of your co-workers.

This seems unlikely. My company is in competition with a number of other startups. If AI removes one of my co-workers, our competitors will keep the co-worker and out-compete us.

> If AI removes one of my co-workers, our competitors will keep the co-worker and out-compete us.

This assumes that the companies' business growth is a function of the amount of code written, but that would not make much sense for a software company.

Many companies (including mine) are building our product with an engineering team 1/4 the size of what would have been required a few years ago. The whole idea is that we can build the machine to scale our business with far fewer workers.


How many companies have you worked at in the past where the backlog dried up and the engineering team sat around doing nothing?

Even in companies that are no longer growing I've always seen the roadmap only ever get larger (at that point you get desperate to try to catch back up, or expand into new markets, while also laying people off to cut costs).

Will we finally out-write the backlog of ideas to try and of feature requests? Or will the market get more fragmented as more smaller competitors can carve out different niches in different markets, each with more-complex offerings than they could've offered 5 years ago?


> This seems unlikely

This is already happening. Fewer people are getting hired. Companies are quietly (sometimes not, like Block) letting people go. At a personal level all the leaders in my company are sounding the “catch up or you’ll be left behind” alarm. People are going to be let go at an accelerated pace in the future (1-3 years).


I don’t think that addresses my point. I understand a lot of companies are firing under the guise of AI, but it’s unclear to me whether AI is actually driving this - especially when the article we are both responding to says:

> We find no systematic increase in unemployment for highly exposed workers since late 2022


It depends on the "shape" of the company. Larger companies have a lot more of what I call "Conway Overhead", basically a mix of legit coordination overhead and bureaucracy. Startups by necessity have a lot less of that, and so are better "shaped" to fully harness AI.

> This seems unlikely.

It is absolutely likely. The hiring market for juniors is fucked atm.


That's not necessarily a result of AI, you also have to consider the broader economic environment. I mean, it was also difficult to get a job as a graduate in 2008, whereas it's typically been easier to get a job when credit is cheap.

It sure was, but as far as I'm aware, 2026 isn't in the middle of a generation-scale economic collapse.

(And if it is, what is the cause?)


Isn't it, for something like 70-80% of families? Just in slow-motion?

How long have we been hearing about crushing affordability problems for property? And how long ago did that start moving into essentials? The COVID-era bullwhip-effect inflation waves triggered a lot of price ratcheting that has slowed but never really reversed. Asset prices are doing great, as people with money continue to need somewhere to put it, and have been very effective at capturing greater and greater shares of productivity increases. But how's the average waiter, cleaning-business sole-proprietor, uber driver, schoolteacher, or pet supply shopowner doing? How's their debt load trending? How's their savings trending?


There’s a difference between a collapse and a slowdown. We don’t need a collapse for hiring to slow down [1,2]. I think we’re finally just seeing the maturation of software development. Software is increasingly a commodity, so maybe the era of crazy growth and hiring is over. I don’t think that we need AI to explain this either, although possibly AI will simply commodify more kinds of software.

[1] https://www.npr.org/2026/02/12/nx-s1-5711455/revised-labor-d...

[2] https://www.marketplace.org/story/2025/12/18/expect-more-of-...


FAANG realizing that they can't make infinite money by expanding into every possible market while paying FAANG salaries for low-scale-CRUD-prototyping roles has a lot to do with this, and that started a bit earlier than the AI wave.

Lots going on right now in the market, but IMO that retreat is the biggest one still.

Many companies were basically on a path of infinite hiring between ~2011 and ~2022 until the rapid COVID-era whiplash really drove home "maybe we've been overhiring" and caused the reaction and slowdown that many had been predicting annually since, oh, 2015.


You can't be a manager without anyone to manage.

There's a lot of perverse interests and incentives at play.


Manager gigs at FAANG are pretty rough right now in my network, you can't be a manager when the higher-ups notice your group isn't a big revenue generator and so doesn't justify new hires and bigger org charts, and cutting the middlemen is the easiest way to juice the ROI numbers. If the ICs that now have 1/3 the managerial structure and have to wear more hats don't turn things around, oh well, it's not a critical area anyway, just nuke it.

You can be an exec with 10-20% fewer random products/departments in your company, and maybe 40% fewer middle managers in the rest of them. You might even get a nice bonus for cutting all that cost! Bonuses for growth, bonuses for "efficiency" when the macro vibe shifts. Trim sails and carry on.


Because of overhiring during the post-COVID free money glitch, not because of AI.

Aren't we both responding to an article which says:

> We find no systematic increase in unemployment for highly exposed workers since late 2022


It was fucked before AI became "mainstream" too. Companies overhired during and after covid.

Erm its been fucked for many years across many professions, it was just less so for software engineering in particular. Now entry into the S-E profession is taking a hit.

Also dont forget theres only so many viable revenue-generating and cost-saving projects to take. And said above - overhiring in COVID.


There's definitely tone deaf statements from managers/leaders like "AI will allow us to do more with less headcount!" As if the end worker is supposed to be excited about that, knuckleheads, lol.

Yeah I’ve been scratching my head about this too. Like, if my boss said this, I would basically start looking for a new job right then and there. Seems like a good way to drive off your own talent.

In a bear market in a bloated company, maybe. We’re still actively hiring at my startup, even with going all-in on AI across the company. My PM is currently shipping major features (with my review) faster and with higher-quality code than any engineer did last year.

>My PM is currently shipping major features (with my review) faster and with higher-quality code than any engineer did last year

That's... not a good look for your engineers?


It’s hard to compare, honestly. Last year, my PM didn’t have the AI tools to do any of this, and engineers were spread thin. Now, the PM (with a specialized Claude Code environment) has the enthusiasm of a new software engineer and the product instincts of a senior PM.

This is how it will go at least in the near term. Engineers will be phased out slowly by product/project management that will prompt the tool instead of the tech lead for the changes they want.

And in the longer term those people will also get deprecated.


> In a bear market in a bloated company, maybe

Then any company that was staffed at levels needed prior to the arrival of current-level LLM coding assistants is bloated.

If the company was person-hour starved before, a significant amount of that demand is being satisfied by LLMs now.

It all depends on where the company is in the arc of its technology and business development, and where it was when powerful coding agents became viable.


Another way to look at it: the gains that AI provides do not go to the worker, they go to the shareholder.

Or just make time for more Very Important Meetings.

This - I can't think of any place I've ever worked where development ever outpaced backlog and tech debt.

When you work long enough you'll find it. Places where changing software is risky you can end up waiting for approvals. Places where another company purchased yours or you are getting shutdown soon and there is no new work. Sometimes you end up on a system that they want to replace but they never get around to it.

Being overworked is sometimes better than being underworked. Sometimes the reserve is better. They both have challenges.


Outside of purchased-and-being-shutdown, these are still frequently "we want to do things but we're scared of breaking things" situations, not "we don't want to do anything." Even if the things they want to do are just "we want to move off this 90s codebase before everyone who knows how it works is dead."

In that sort of high-fear, change-adverse environment "get rid of all the devs and let the AI do it" may not be the most compelling sales pitch to leadership. ("Use it to port the code faster so we can spend more time on the migration plan and manual testing" might have better luck.)


None of these are development conquering all goals.


Worst time to be an employee, as you are expected to work faster and faster. (The approach is very much quantity over quality.)

Best time to be a solo founder in underserved markets :)


That’s the economy in general. Labor saving innovations increase productivity but do not usually reduce work very much, though they can shift it around pretty dramatically. There are game theoretic reasons for this, as well as phenomena like the hedonic treadmill.

Ideal state for every company is to have minimum input costs with maximum output costs. Labor always gets cut out of the loop because it’s one of the most expensive input costs.

The goal has always and will always be to complete as much as possible in the time allotted.

At the risk of being the person who says, "it's capitalism," (I know I know).... When making profit is the dominant intent of a company, a worker doing something faster doesn't lead to the worker doing less. It leads to the worker producing more in the same time. If doing more yields too much of the thing produced for the market to handle, the company either A. creates more need for the more produced (fabricate necessity), or B. creates a new need for a new thing, and a new thing for you to produce. There's no getting off the wheel for the worker in capitalism.

In my day-to-day coding work, the top 3 coding agents are already good enough for me. On SWE-bench Verified, mini-SWE-agent + GPT-5.2 Codex is 72.8. I don’t see a comparable GPT-5.3 Codex number there, so I’m using 5.2 as the baseline. On OpenAI’s GPT-5.4 page (SWE-Bench Pro, Public), the score improves from 55.6 (GPT-5.2) to 57.7 (GPT-5.4), which is about +2.1 points. It’s a different benchmark, so this is only a rough signal, but I’d expect a similar setup on SWE-bench Verified to improve by a few points, not by a huge jump. I’m interested in how GPT-5.4 in Codex changes real-world results.

Recent SWE-bench Verified scores I’m watching:

Claude 4.5 Opus (high reasoning): 76.8

Gemini 3 Flash (high reasoning): 75.8

MiniMax M2.5 (high reasoning): 75.8

Claude Opus 4.6: 75.6

GPT-5.2 Codex: 72.8

Source: https://www.swebench.com/index.html

By the way, in my experience the agent part of Codex CLI has improved a lot and has become comparable to Claude Code. That is good news for OpenAI.


I would recommend https://swe-rebench.com for comparison. It is always based on new problems.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: