Hacker News | diarrhea's comments

Just this month Google shipped what I understand as hard limits in AI Studio/Gemini/whatever it's called this week. I had existing billing alerts (best you could do before IIUC), but set these new hard limits up immediately. Feels good!


I noticed this, too. valeriozen, can you explain what happened here?

Context: two nearly identical comments from different users.

hackerman70000 at 16:09 https://news.ycombinator.com/item?id=47677483 :

> Cloudflare pushing PQ by default is probably the single most impactful thing that can happen for adotpion. Most developers will never voluntarily migrate their TLS config. Making it the default at the CDN layer means millions of sites get upgraded without anyone making a decision

valeriozen at 16:17 https://news.ycombinator.com/item?id=47677615 :

> cloudflare making pq the default is the only way we get real adoption. most devs are never going to mess with their tls settings unless they absolutely have to. having it happen at the cdn level is the perfect silent upgrade for millions of sites without the owners needing to do anything


They're using the same AI model?

Interesting, though I disagree on basically all points...

> No Silver Bullet

As an industry, we do not know how to measure productivity. Nor does AI coding increase reliability, the way things are going. Same with simplicity: it's the opposite; we're adding obscene complexity in the name of shipping features (and shipping features is not the same as productivity).

In some areas I can see how AI doubles "productivity" (whatever that means!), but I do not see a 10x on the horizon.

> Kernighan's Law

Still holds! AI is amazing at debugging, but the vast majority of existing code is still human-written, so it has an easy time: AI can indeed be "twice as smart" as those human authors (in reality it's more like "twice as persistent/patient/knowledgeable/good at tool use/...").

Debugging fully AI-generated code with the same AI will fall into the same trap, subject to this law.

(As an aside, I do wonder how things will go once we're out of "use AI to understand human-generated content", to "use AI to understand AI-generated content"; it will probably work worse)

> just ask AI to rewrite the code

This is a terrible idea, unless perhaps there is an existing, exhaustive test harness. I'm sure people will go for this option, but I am convinced it will usually be the wrong approach (as it is today).

> Dijkstra on the foolishness of programming in natural language

So why are we not seeing repos of just natural language? Just raw prompt Markdown files? To generate computer code on the fly, perhaps even in any programming language we desire? And for the sake of argument, assume LLMs could regenerate everything instantly at will.

For two reasons. First, the prompts would need to rise to a level of precision indistinguishable from a formal specification. Second, complexity does become "exponentially harder": the inaccuracies inherent to natural language would compound. We still need to persist results in formal languages; they remain the ultimate arbiter. We're now just (much) better at generating large amounts of them.

> Lehman’s Law

This reminds me of a recent article [0]. Let AI run loose without genuine effort to curtail complexity and (with current tools and models) the project will need to be thrown out before long. It is a self-defeating strategy.

I think of this as the Peter principle applied to AI: it will happily keep generating more and more output, until it's "promoted" past its competence. At which point an LLM + tooling can no longer make sense of its own prior outputs. Advancements such as longer context windows just inflate the numbers (more understanding, but also more generating, ...).

The question is, will the market care? If software today goes wrong in 3% of cases, and with widespread AI use it'll be, say, 7%, will people care? Or will we just keep chugging along, happy with all the new, more featureful, but more faulty software? After all, we know about the Peter principle, but it's unavoidable and we're happy to carry on regardless.

> Jevons Paradox

My understanding is the exact opposite. We might well see a further proliferation of information technologies into the remaining sectors where they have not yet been (economically) viable.

0: https://lalitm.com/post/building-syntaqlite-ai/


> The question is, will the market care? If software today goes wrong in 3% of cases, and with widespread AI use it'll be, say, 7%, will people care? Or will we just keep chugging along, happy with all the new, more featureful, but more faulty software?

This is THE question. I honestly think the majority will gladly take an imperfect app over waiting for a perfect app, or perhaps having no app at all. Some devs might be able to stand out with a polished app built the traditional way, but it takes a lot longer to achieve that, and by then the market may have moved on, which is a risk.



Perfect


This take was accurate about 2 years ago, up until perhaps one year ago. Current capabilities far exceed what you are outlining, for example using Claude Opus models in a harness such as Claude Code or OpenCode.


I use unbound (recursive resolver), and AdGuard Home as well (just forwards to unbound). Unbound could do ad-blocking itself as well, but it's more cumbersome than in AGH. So I use two tools for the time being.

The upside is there's no single entity receiving all your queries. The downside is there's no encryption (IIRC the root servers do not support it), so your ISP can see your queries on the wire (even though it isn't the one resolving them).
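For anyone curious what that two-tool chain looks like: a minimal sketch, where the interface, port, and the idea of putting unbound on 5335 are assumptions for illustration, not my actual config. AdGuard Home handles blocking and forwards everything to unbound, which recurses from the root servers itself.

    # unbound.conf (sketch): plain recursive resolver, no upstream forwarder
    server:
        interface: 127.0.0.1
        port: 5335

Then in AdGuard Home's settings, set the upstream DNS server to 127.0.0.1:5335 and let AGH listen on port 53.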


I have the same question; I am confused by the premise of this article. Now you're recording everything twice?


Agree. I am not sure I understand the premise of the article. You're now recording encountered errors twice, which can look like:

    cancel(fmt.Errorf(
        "order %s: payment failed: %w", orderID, err,
    ))
    return fmt.Errorf("order %s: payment failed: %w", orderID, err)
Not only that, isn't this a "lie"? You're cancelling the context explicitly, but that's not necessary, is it? At the moment the above call fails, the called-into functions might not have cancelled the context yet. There might be cleanup scheduled to run later on, which would then refuse to run on this eagerly cancelled context. There is no need to cancel this eagerly.

Perhaps I'm not seeing the problem being solved, but bog-standard `return err` with "lazy" context cancellation (in a top-level `defer cancel()`), or eager (in a leaf I/O goroutine) seems to carry similar functionality. Stacking both with ~identical information seems redundant.


The Feldera folks speak from lived experience when they say 100+ column tables are common in their customer base. They speak from lived experience when they say there's no correlation in their customer base.

Feldera provides a service. They did not design these schemas. Their customers did, and probably over such long time periods that those schemas cannot be referred to as designed anymore -- they just happened.

IIUC Feldera works in OLAP primarily, where I have no trouble believing these schemas are common. At my $JOB they are, because it works well for the type of data we process. Some OLAP DBs might not even support JOINs.

Feldera folks are simply reporting on their experience, and people are saying they're... wrong?


Haha, looks like it.

I remember the first time I encountered this thing called TPC-H back when I was a student. I thought "wow surely SQL can't get more complicated than that".

Turns out I was very wrong about that. So it's all about perspective.

We wrote another blog post about this topic a while ago; I find it much more impressive because this is about the actual queries some people are running: https://www.feldera.com/blog/can-your-incremental-compute-en...


I have been using nixos-rebuild with --target-host and it has been totally fine.

The only thing I have not solved is password-protected sudo on the target host. I deploy using a dedicated user, which has passwordless sudo set up. Seems like a necessary evil.


I do this to remote deploy and it works fine, even from my Mac:

    nix run nixpkgs#nixos-rebuild -- switch --flake .#my-flake-target --target-host nixos@192.168.x.x --sudo --ask-sudo-password --no-reexec


> I deploy using a dedicated user, which has passwordless sudo set up to work.

IMO there is no point in doing that over just using root, unless maybe you have multiple administrators and want it for audit purposes.

Anyway, what you can do is have a dedicated deployment key that is only allowed to execute a subset of commands (via the command= option in authorized_keys). I've used it to only allow starting nixos-upgrade.service (plus some other, not strictly required, things), which then pulls updates from a predefined location.
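For illustration, such an entry can look like the following; the service name and the `restrict` option are how I remember the pattern, and this assumes the key lands in root's authorized_keys so systemctl needs no sudo:

    # ~root/.ssh/authorized_keys (sketch): key may only trigger the upgrade
    command="systemctl start nixos-upgrade.service",restrict ssh-ed25519 AAAA... deploy-key

`restrict` (OpenSSH 7.2+) disables pty allocation, port forwarding, and agent forwarding in one go, so the key can do nothing but kick off the service.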

