Hacker News | new | past | comments | ask | show | jobs | submit | quinndupont's comments | login

Has some similar conclusions to my Job Quality-Adjusted Displacement Index https://github.com/quinndupont/JQADI

I built my own MCP server to do some of this but I like the “enrichment” feature and NEO4J relationality.

I’m waiting for the agentic models trained on virus and worm datasets to join the red team!

Summary: good scientific theories have “reach,” which is not defined in any precise way. Reach has complexity and this can be handled with large parameter neural networks. Assumptions: mechanistic and deterministic worldview; epistemological perfection is the goal (perfect knowledge of facts).

HN optimal posting time & advertising value model.

Update to Anthropic's "Labor market impacts of AI" measurements with a focus on quality of work.

AI is coming for jobs—but the real risk isn’t where most people are looking.

The leading AI exposure indices (Anthropic, Eloundou et al.) focus on which jobs get automated. They treat low exposure as “safe.”

But the least exposed workers—cooks, roofers, dishwashers, construction laborers—are often in the worst jobs: low pay, high physical toll, short career spans, and little upward mobility. Safe from AI, but not from burnout or injury.

I built JQADI (Job Quality-Adjusted Displacement Index) to combine AI exposure with job quality. It surfaces three kinds of risk:

- High AI exposure → classic displacement risk
- Low AI, low quality → “trapped” workers in grinding, unsustainable jobs
- Moderate AI, low quality → partial automation strips cognitive work and leaves physical drudgery (the “task residual” effect)
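The three risk buckets can be sketched as a toy classifier. The function name, thresholds, and score ranges below are illustrative assumptions, not JQADI's actual methodology:

```python
# Hypothetical sketch of JQADI-style risk bucketing.
# Thresholds (0.66, 0.33, 0.5) are invented for illustration only.

def risk_category(ai_exposure: float, job_quality: float) -> str:
    """Bucket an occupation by AI exposure and job quality (both scaled 0-1)."""
    if ai_exposure >= 0.66:
        return "displacement"   # classic automation risk
    if ai_exposure < 0.33 and job_quality < 0.5:
        return "trapped"        # low-AI but grinding, unsustainable work
    if job_quality < 0.5:
        return "task residual"  # partial automation leaves the drudgery
    return "low risk"           # e.g. low-exposure, high-quality roles

print(risk_category(0.8, 0.7))  # displacement
print(risk_category(0.1, 0.3))  # trapped
print(risk_category(0.5, 0.4))  # task residual
```

The real index presumably derives both axes from the ONET/BLS/Anthropic data mentioned below rather than hand-set cutoffs.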

Findings: 83.5M workers are in low-AI, low-quality jobs. Customer service reps, data entry keyers, and medical records specialists sit at the intersection of high exposure and poor quality. Meanwhile, chief executives and lawyers are both low-exposure and high-quality.

The index uses ONET, BLS, and Anthropic exposure data. Code and methodology are open source: https://github.com/quinndupont/JQADI


I mean, if that's true, then the governments around the world better damn well be prepared.

There are a ton of millennials (myself included) turning 40 who have been in this field since 2005 or earlier. It's all we know, and at this point we're getting too old to just "go do physical labor for minimum wage so AI can write code instead." I'm certainly too old to go back to school and try to pass the bar exam to become a lawyer at 50+, and I have zero interest in any kind of people management whatsoever.

IMO Anthropic, OpenAI, Google, etc. should all be helping governments work toward a plan and lobbying for regulation on it instead of just charging full steam ahead "damn the consequences, those are someone else's problem."

It's going to obliterate what little is left of the middle class and leave a massive number of unemployed middle-aged tech workers with nowhere to go. What then? We either get ahead of the problem now (Outlook not so good), or we collapse into massive civil unrest and chaos.


Agreed. My reason for spinning up this alternative metric is that the policy implications of the original suggest people should go out and get tough, dirty, dangerous jobs if they don't want to be displaced. But there's a reason you don't see many 60-year-olds in the trades.

What’s your reasoning for labeling lawyers as low-exposure?

My partner is a lawyer (prosecutor for a large city). The reason she is at low risk is simply the slow rate of adoption of AI tooling (or ANY tooling, for that matter). IT in the public sector (particularly city government) is so much worse than I ever could have imagined before meeting my partner.

Our city just spent >$15MM on "case management software" that took 5 years to build by a fly-by-night outfit in California that won the contract, haphazardly bolted together MSFT Azure components, then vanished with zero support.

These teams can't in good faith freely adopt AI tooling into their workflow because they don't have the bandwidth to do it well, so they don't do it at all.


That's largely based on the original analysis and their methodology. "Responsibility" (only attributable to humans) is one reason, another is that judges probably don't want to speak with robots in court.

This isn’t ChatGPT-style slop. There’s some secret sauce and it takes your process from 30-60 min per application to just minutes. It’s been through many recruiters for testing.

Interesting to see the mathematical solution space get optimized away. On account of “there’s no accounting for taste” this actually makes me hopeful that creative workers have durable skills that can’t be optimized, which I can’t say about mathematics and computer science.

SO MANY ADVERTISEMENTS. 'Tis a shame everything has to be fluffed up and sold.


We really need to do more adversarial interoperability. There should be a browser or at least a .onion site that blocks ads and bypasses paywalls.

Yes, it would be illegal - just like a great many good things in the past, some of which led to the law being changed.

The Decline of Deviance essay posted here a few days ago says people used to take legal risks a lot more often than they do today.

