Hacker News | xyzzy123's comments

Run Claude Code on the node and ask it to debug the issue...

Actually I tried using Claude to debug it... didn't work either.

Thanks

I'm not very familiar with the field on a practical basis.

What parts of the job require judgement that is resistant to automation? What percentage of customers need that?

If the hours an accountant spends on a customer go from 4 per month to 1, do you reckon they can sustainably charge the same?


Why would better efficiency mean they have to charge less?

Because your competitor will double their number of customers and halve their prices, forcing you to do the same.

So then everyone would continue earning the same as before.

I guess the problem is that it seems like an opaque financial product. ETFs or crypto are not useful for "saving" but the risk profile is at least a known quantity.

The return you quote is a numeric range but there's no corresponding number or signal you provide for risk and we have to trust you. So, how can we trust you? Independent audits and deposit guarantees.
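To make the ask concrete, the kind of risk number a quoted return could ship alongside is something like annualised volatility. A sketch with made-up monthly figures, not data from any real product:

```python
# Hypothetical monthly returns; the point is only that risk can be
# quoted as a number (volatility) right next to the return.
import statistics

monthly_returns = [0.010, -0.020, 0.015, 0.005, -0.010, 0.020]

mean_annual = statistics.mean(monthly_returns) * 12
vol_annual = statistics.stdev(monthly_returns) * (12 ** 0.5)  # annualised

print(f"return ~{mean_annual:.1%}, volatility ~{vol_annual:.1%}")
```

A single volatility figure is still self-reported, of course, which is why the audits and deposit guarantees matter.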


An important security measure for who, though? The servers at the bank should "never trust the client" in case the attestation is bypassed or compromised, which is always a risk at scale.

If it's an important safety measure _for me_, shouldn't I get to decide whether I need it based on context?

I think it's fair for banks to apply different risk scores based on the signals they have available (including attestation state), but I also don't want the financial system, government & big tech platforms to have a hard veto on what devices I compute with.


It's an anti-brute-force mechanism. It's not for you, it's for all the other accounts that an unattested phone (or a bot posing as an unattested phone that just stole somebody's credentials via some 0-day data exfiltration exploit) may be trying to access.

Sure, banks could probably build a mechanism that lets some users opt out of this, just as they could add a Klingon localization to their apps. There just isn't enough demand.


If you work on mobile apps you will notice that full attestation is too slow to put in the login path. [This might be better than it used to be, now in 2026].

I don't think a good security engineer would rely on atty as "front line" anti brute force control since bypasses are not that rare. But yeah you might incorporate it into the flow. Just like captchas, rate limiting, fingerprints etc and all the other controls you need for web, anyway.
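For a sense of what one of those layered controls looks like, here is a minimal per-account token-bucket rate limiter. A sketch only; a production system would back this with shared state like Redis rather than in-process memory:

```python
# Minimal token bucket: allows short bursts, then throttles.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=5)
results = [bucket.allow() for _ in range(10)]  # a burst of 10 login attempts
print(results)  # roughly: first 5 pass, the rest are throttled
```

Attestation state would then be just another input into how aggressive the limiter (and captchas, fingerprinting, etc.) gets for a given client.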

I know I'm quibbling. My concern is that a future where banks can "trust the client" is a future of total big tech capture of computing platforms, and I know banks and government don't really care, but I do.


> If you work on mobile apps you will notice that full attestation is too slow to put in the login path

Hm, Play Integrity isn't that slow on Android, from my experience.

> I don't think a good security engineer would rely on atty as "front line" anti brute force control since bypasses are not that rare

I'm not privy to device-wide bypasses of Play Integrity on devices that ship with a Trusted Execution Environment (which is pretty much all ARM-based Androids), Secure Element, and/or Hardware Root of Trust, but I'd appreciate it if you have some significant exploit writeups (on Pixels, preferably) for me to look at?

> My concern is that future where banks can "trust the client" is a future of total big tech capture of computing platforms

A valid concern. In the case of smart & personal devices like Androids though, the security is warranted due to the nature of the workloads they tend to support (think Pacemaker / Insulin monitoring apps; government-issued IDs; financial instruments like credit cards; etc.) and the ubiquity & proliferation of the OS itself (more than half of all humanity).


> Insulin monitoring apps

A monitoring app doesn't even interact with systems you don't own. Just put a liability disclaimer for running modified versions.

> warranted

Decided by whom? And why is Google trusted, not me? At minimum, I shouldn't face undue hardship with the government due to refusing to deal with a third party, unless we first remove most of Google's rights to set the terms.


> A monitoring app doesn't even interact with systems you don't own. Just put a liability disclaimer for running modified versions.

This is unserious when an insulin overdose can be fatal.

> And why is Google trusted, not me?

(Hardware-assisted) Attestation on Android doesn't require apps to "trust Google".


> Insulin monitoring apps

Funny that you say that, but the best artificial pancreas so far, which is completely free and open source, will soon be much harder to install on any Android phone without every user getting a valid key from Google.

In Germany, doctors even recommend these tools if they work, because they make patients who know what they are doing healthier and safer.

Naturally me and hundreds of other diabetics have already contacted our EU representative due to the changes Google is planning to make in their platform.

https://androidaps.readthedocs.io/en/latest/

This tool has literally saved my life.


> I'm not privy to device-wide bypasses of Play Integrity that ship with Trusted Execution Environment (which is pretty much all ARM based Androids), Secure Element, and/or Hardware Root of Trust, but I'd appreciate if you have some significant exploit writeups (on Pixels, preferably) for me to look at?

Hi, you don't have to break the control on the strongest device. You only have to break it on the weakest device that's not blacklisted.

The situation is getting better as you note, but in the past the problem was that a lot of customers have potatoes, and you get a lot of support calls when you lock them out.

> think Pacemaker / Insulin monitoring apps; government-issued IDs; financial instruments like credit cards; etc

I agree with you on the need for trustworthy computing. I mainly disagree on who should ultimately control the trust roots.


We can only hope they continue to be found, so there's at least a small cost for this kind of indignity.

> total big tech capture of computing platforms

Correct. And the end of ownership, privacy, and truth too. If something can betray you on someone else's orders, it's not yours in the first place. You'll own nothing and if you aren't happy, good luck living in the woods.


Org processes have not changed. Lots of the devs I know are enjoying the speedup on mundane work, consuming it as a temporary lifestyle surplus until everything else catches up.

You can't saw faster than the wood arrives. Also the layout of the whole job site is now wrong and the council approvals were the actual bottleneck to how many houses could be built in the first place... :/


Basically this. My last several tickets were HITL coding with AI for several hours and then waiting 1-2 days while the code worked its way through PR and CI/CD process.

Coding speed was never really a bottleneck anywhere I have worked - it’s all the processes around it that take the most time and AI doesn’t help that much there.


True story: I wanted to make a tiny update to our CI/CD to upload copies of some artifacts to S3. It took one minute for the LLM to remind me of the correct aws cli syntax for the upload and the syntax to plug it into our GitHub Actions. It then took me the next 3 hours to figure out which IAM policies needed to be updated to allow the upload, before it was revealed that actually uploading to S3 requires company IT to adjust bucket policies, which means filing a ticket with IT, waiting 1-5 business days for a response, then potentially having a call with them to discuss the change and justify why we need it. So now it's four days later and I still can't push to S3.

AI reduced this from a 5-day process to a 4.9-day process
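For reference, the one-minute part of the story above is roughly this command; the bucket name and key prefix here are made up, and as described, it fails with AccessDenied until IAM and the bucket policy allow the write:

```python
# Builds the aws cli invocation for the upload step.
# 'example-ci-artifacts' is a hypothetical bucket name.
# Syntax-wise this is the easy bit; IAM and bucket policy are where the days go.
def s3_upload_cmd(artifact: str, bucket: str = "example-ci-artifacts") -> str:
    return f"aws s3 cp {artifact} s3://{bucket}/builds/{artifact}"

print(s3_upload_cmd("app.tar.gz"))
# aws s3 cp app.tar.gz s3://example-ci-artifacts/builds/app.tar.gz
```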


I’m seeing it slightly differently. So much of our new slowdown is rework because we’ve seen a bunch more API and contract churn. The project I’m on has had more rework than I care to contemplate and most of it stems from everyone’s coding agents failing to stay synced up with each other on the details and their human handlers not noticing the discrepancies until we’re well into systems integration work.

If I may hijack your analogy, it would be like if all the construction crews got really fast at their work, so much so that the city decided to go for an “iterative construction” strategy because, in isolation, the cost of one team trying different designs on-site until they hit on one they liked became very small compared to the cost of getting city planners and civil engineers involved up-front. But what wasn’t considered was the rework multiplier effect that comes into play when the people building the water, sewage, electricity, telephones, roads, etc. are all repeatedly tweaking designs with minimal coordination amongst each other. So then those tweaks keep inducing additional design tweaks and rework on adjacent contractors because none of these design changes happen in a vacuum. Next thing you know all the houses are built but now need to be rewired because the electricity panel is designed for a different mains voltage from the drop and also it’s in the wrong part of the house because of a late change from overhead lines in the alleys to underground lines below the street.

Many have observed that coding agents lack object permanence, so keeping them on a coherent plan requires giving them a thoroughly documented plan up front. It actually has me wondering if optimal coding-agent usage at scale resembles something of a return to waterfall (probably more in the Royce sense than the bogeyman agile evangelists derived from the original idea), where the humans on the team mostly spend their time banging out systems specifications and testing protocols, and iteration on the spec becomes somewhat more removed from implementing it than it is in typical practice nowadays.


It seems like the best way the humans could compete is to be a bit more flexible on interpretations than a fully compliant AI would be allowed to be. This applies all the way from small to very large businesses.

AI for this stuff is sort of a double edged sword; eventually it'll be forced to be legible in ways that you can't force a human mind to be. There are lots of domains where in theory AI would be great, but in practice everyone actually wants a certain amount of ambiguity and confusion to exist.


Many places have "dev", "test", "prod"... but IMHO you need "sandpit" as well.

From an ops point of view as orgs get big enough, dev wraps around to being prod-like... in the sense that it has the property that there's going to be a lot of annoyed people whose time you're wasting if you break things.

You can take the approach of having more guard rails and controls to stop people breaking things but personally I prefer the "sandpit" approach, where you have accounts / environments where anything goes. Like, if anyone is allowed to complain it's broken, it's not sandpit anymore. That makes them an ok place to let agents loose for "whole system" work.

I see tools like this as a sort of alternative / workaround.


Sandpit should be a personal (often local, if possible) dev environment. The reason people get mad about dev being broken for long periods of time is that they cannot use dev to test their changes if your code (that they depend on) is broken in dev for long periods of time.

Agreed on all points. Local loops are faster and safer wherever possible.

But particularly for devops / systems focused work, you lose too much "test fidelity" if you're not integrating against real services / cloud.


There’s no sandpit, only prod and dev, and you’re not allowed to break prod. Your developers work in partitions of prod. Dev is used for DR and other infra testing.

Well that’s just - dumb

Wanna elaborate?

Account vending machines, where every dev can spin up their own account, are a thing and still under the control of some type of guardrails.

A bottle of water in the desert and a bottle of water in your fridge don't have the same value.


But politics isn't involved.

The point of tender is to represent value.

Your water in the desert costing 100 times what it costs where it rains is meant to represent its scarcity here vs over there.

Take Cuban dollars vs normal dollars. There, the two tenders aren't a proxy for value but for political control, so that the wealthy visitor pays 10 times as much for the same bottle of water on the same shelf.


You're describing gouging and gold rush pricing, which is thankfully not a feature any major economic system relies on for everyday operation.

It's entirely reasonable to expect logistical costs to inflate the price of a good, so the price should reflect the market equilibrium value of the service of bringing water into the middle of the desert, not the good.


That's only because, for some reason, people have conflated the words "value," "price," and "cost" until the terms are indistinguishable.


We are hoarding money to buy many bottles. The Chinese are making and filling them while drilling wells that don't seem financially viable. They blew their entire bottle-buying budget on that? What dumb communist central planners.


It looks a LOT like a CIA front.

EDIT: Sorry... that is too strong... "state aligned influence media". Note that the headline might be true, or it might not, but that source is quite glowy.


The top down push for timelines is because:

In Australia, an SDE + overhead costs say $1500 / work day, so 4 engineers for a month is about $100k. The money has to be allocated from budgets and planned for etc. Dev effort affects the financial viability and competitiveness of projects.
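The arithmetic behind that figure, using the numbers above (the 21-working-day month is my assumption):

```python
# Back-of-envelope burn rate. AU$1500/day fully loaded and 21 working
# days per month are rough assumptions, per the figures quoted above.
daily_cost = 1500
engineers = 4
work_days_per_month = 21

monthly_burn = daily_cost * engineers * work_days_per_month
print(monthly_burn)  # 126000: on the order of $100k a month
```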

I feel like many employees have a kind of blind spot around this? Like for most other situations, money is a thing to be thought about and carefully accounted for, BUT in the specific case where it's their own days of effort, those don't feel like money.

Also, the software itself presumably has some impact or outcome and quite often dates can matter for that. Especially if there are external commitments.


The only approach that genuinely works for software development is to treat it as a "bet". There are never any guarantees.

1. Think about what product/system you want built.

2. Think about how much you're willing to invest to get it (time and money).

3. Cap your time and money spend based on (2).

4. Let the team start building and demo progress regularly to get a sense of whether they'll actually be able to deliver a good enough version of (1) within time/budget.

If it's not going well, kill the project (there needs to be some provision in the contract/agreement/etc. for this). If it's going well, keep it going.


How would you decide between doing project (a) this quarter, or project (b)?

If you cannot (or refuse to) estimate cost or probability of success in a timebox you have no way to figure out ROI.

To rationally allocate money to something, someone has to do the estimate.
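A sketch of the estimate being described: even a vibes-level guess at cost, payoff, and probability of success is enough to rank projects. All numbers below are illustrative, not from the thread:

```python
# Expected ROI under rough guesses. Whoever allocates the money is
# implicitly doing this sum whether or not the devs provide an estimate.
def expected_roi(payoff: float, cost: float, p_success: float) -> float:
    return (p_success * payoff - cost) / cost

project_a = expected_roi(payoff=300_000, cost=100_000, p_success=0.6)  # 0.8
project_b = expected_roi(payoff=500_000, cost=100_000, p_success=0.3)  # 0.5

print(project_a > project_b)  # True: fund A first under these guesses
```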


The exact same way you'd treat any other investment decision.

In the real world, if you've got $100k, you could choose to invest all of it into project A, or all into project B, or perhaps start both and kill whichever one isn't looking promising.

You'd need to weigh that against the potential returns you'd get from investing all or part of that money into equities, bonds, or just keeping it in cash.


You mean… by making forward-looking estimates of cost, time-to-value, and return? (Even if it's implicit, undocumented, vibes-based?)

When devs refuse to estimate, it just pushes the estimating up the org chart. Execs still have to commit resources and do sequencing. They’ll just do it with less information.


What you're asking is the equivalent of going to a company whose equity you've bought and asking them: what's the price going to be in 6 months' time?


Doesn't this ignore the glaring difference between a plumbing task and a software task? That is, level of uncertainty and specification. I'm sure there are some, but I can't think of any ambiguous plumbing requirements on the level of what is typical from the median software shop.


Sorry, I edited the plumbing reference out of my comment because I saw a sibling post that made a similar point.

I agree there is less uncertainty in plumbing - but not none. My brother runs a plumbing company and they do lose money on jobs sometimes, even with considerable margin. Also when I've needed to get n quotes, the variation was usually considerable.

I think one big situational difference is that my brother is to some extent "on the hook" for quotes (variations / exclusions / assumptions aside) and the consequences are fairly direct.

Whereas as an employee giving an estimate to another department, hey, you do your best, but there are realistically zero consequences for being wrong. Like maybe there is some reputational cost? But either me or that manager is likely to be gone in a few years, and anyway, it's all the company's money...


How much plumbing knowledge do you have?


I bet if SWEs were seeing anywhere near that 1.5k per day they’d be more inclined to pay attention.

But when you get paid less than half that it doesn’t feel like a problem to worry about. At 300/day of take-home pay, one more day here or there really isn’t going to make a difference.

