Hacker News | tbrownaw's comments

> is that it’s always the people who claim to be most afraid of ai who are the quickest to absolve humans of responsibility and assign it to AI.

But that seems entirely consistent? A tool isn't nearly as scary as an alien lifeform.


That sounds like an excellent match for containers.

Completely agree. We need to get the DB into a container, which is somewhat easier said than done for a long-lived project that didn't do it initially.
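As a minimal sketch of what that containerized DB might look like (the image tag, service name, ports, and credentials here are illustrative placeholders, not anything from the thread):

```yaml
# docker-compose.yml -- minimal sketch; image tag, ports, and
# credentials are placeholder assumptions for illustration.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: change-me
      POSTGRES_DB: app
    ports:
      - "5432:5432"
    volumes:
      # Named volume so data survives container restarts
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

The "easier said than done" part for a long-lived project is usually migrating the existing data into the volume (e.g. a dump-and-restore) and repointing every connection string, not the compose file itself.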

Sometimes I tell the AI to change something, sometimes I just do it myself. Sometimes I start to do it and then the magic tab-complete guesses well enough that I can just tab through the rest of it.

Sometimes the magic tab-complete insists on something silly and repeatedly gets in the way.

Sometimes I tell the AI to do something, and then have to back out the whole thing and do it right myself. Sometimes it's only a little wrong, and I can accept the result and then tweak it a bit. Sometimes it's a little wrong in a way that's easy to tell it to fix.


> Here's the exact list of what's restricted if you don't verify:

> >Content Filters:

Sounds like something people might not want tied to real-world identities.

> >Age-gated Spaces:

So, #politics in my local instance.


Age verification is an excuse for identity checking.

I remember when people who used this site were rational experts. The emotional outbursts here are a bit disappointing.

> How have we not blown ourselves up yet?

It's not that we haven't, it's just that we can only observe from those few realities where we didn't.


Or it has an annoying learning curve.

How exactly are we supposed to hear about something that failed in the early stages?

There are a number of ways. Obviously Dropbox would be one case of "early and didn't fail" that could have been "early and failed", and we would have heard about it.

By listening to your friends and circle.

> The problem is that agent-written code has a provenance gap. When a human writes code, the reasoning lives in their head, in Slack threads, in PR descriptions. When an agent writes code, the reasoning evaporates the moment the context window closes.

The described situation for human-written code isn't much better. What actually works is putting a ticket (or project) number in the commit message, and making sure everything relevant gets written up and saved to that centralized repository.

And once you have that, the level of detail you'd get from saving agent chats won't add much. Unless, maybe, you're deliberately studying how to prompt more effectively (but even then, the next iteration of models is only a couple of months away)?


> if you can't see the value in this, I don't know what to tell you.

"I can't articulate why this is valuable."


Please don't use quotation marks to make it look like you're quoting someone when you aren't. That's an internet snark trope and we're trying to avoid those on HN.

https://news.ycombinator.com/newsguidelines.html


Look, it’s obvious at this point to anyone who is actually using the tools.

We can articulate it, but why should we bother when it’s so obvious?

We are at an inflection point where discussion about this, even on HN, is useless until the people in the conversation are on a similar level again. Until then we have a very large gap in a bimodal distribution, and it’s fruitless to talk to the other population.


Not really, because those details aren't actually relevant to code archaeology.

You could have someone collect and analyze a bunch of them, to look for patterns and try to improve your shared .md files, but that's about it.

