Hacker News | jwpapi's comments

Yes, exactly. That sentence led me to stop reading the article.

This sentence is wrong in many ways and doesn't give me much trust in the OP's opinion or research abilities.


I mean, if I generate my private key offline, messages to me are encrypted with my public key, and I've learned the math behind the encryption, then I can be assured that reading those messages would require compromising the receiver's private key.

I don't like these sweeping generalizations. They make learning about encryption seem unappealing, but it's worth it: it lets you either choose properly encrypted software or use your own key for secret messages.
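The asymmetry the parent comment relies on can be shown with toy textbook RSA. This sketch uses tiny primes and no padding, so it is insecure and purely illustrative (the numbers are the classic worked example, not anything from the article):

```python
# Toy textbook RSA: tiny primes, no padding, insecure, purely illustrative.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent; (e, n) is the public key
d = pow(e, -1, phi)        # private exponent via modular inverse (Python 3.8+)

msg = 65                   # a "message" encoded as a number < n
cipher = pow(msg, e, n)    # anyone holding the public key can produce this
plain = pow(cipher, d, n)  # only the private-key holder can undo it

assert plain == msg        # decryption requires d, i.e. the receiver's private key
```

Real systems use vetted libraries and padding schemes such as OAEP, but the core point stands: without the offline private key, the ciphertext is useless to an interceptor.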


=== myExperience

They made a post about how they reinvented UX.

Okay, now I think this is really a joke. A website where it's not possible to scroll with the keyboard is lecturing us about UX.

Indie

What's the difference between indie and startup?

Indie as in freelancing, or as in making your own product?

Making your own product

U made me cry on HN

The comment wasn’t made by a human.

Well now I'm crying too

Why do the attackers publicly share the keys they found? Like what’s their mission?


My guess would be so they don't have to embed an IP address or hostname in the malware to send secrets to, which could then be blocked or taken down.


But they could have encrypted it; it's just double base64 encoded, so everybody can read it.
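Double base64 really is just two reversible transforms with no secret anywhere, as this quick Python sketch shows (the credential string is made up):

```python
import base64

secret = b"AKIA_FAKE_EXAMPLE_KEY"  # hypothetical credential, not a real key
wire = base64.b64encode(base64.b64encode(secret))

# No key is needed to reverse it; anyone who sees the traffic can decode it.
assert base64.b64decode(base64.b64decode(wire)) == secret
```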


That one stumped me. Why not just encrypt with a hardcoded public key? Then only the attacker could get the creds.

The simple base64 encoding didn't hide these creds from anyone, so every vendor's security team out there (think big clouds, GitHub, etc.) can collect them and disable them.

If you did a simple encryption pass, no one but you would know what was stolen, or could abuse or sell it. My best guess is that calling Node encryption libs might trigger code scanners or EDRs, or maybe they just didn't care.


Or they just wanted to prove a point.

They certainly seemed smart enough to choose encryption over encoding.

Hard to believe encryption would be the one thing that would trigger code scanners.

Also, it's not just every vendor; every bad actor could've scraped the keys too. I wonder if they've set up the infrastructure to handle all these thousands of keys…

Like, what do you even do with most of it at scale?

Can you turn cloud, AWS, and AI API keys into money on a black market?


On most smartwatches you can turn them off. And on your phone too.

I like the gamification and some of the notifications (today it's 5 degrees colder than yesterday, for example).


You could still leak API keys


I really hope one day I'll work on challenges that need this new type of agent.

Currently, I either need a fast agent that does what I want faster than I can type it (CRUD, forms, etc.), or I need an agent to discuss a plan and its ups and downs.

Whenever I try to give it a bigger task, it takes a lot of time, and the result is often not what I expected. That might be totally my fault or context-specific, but as soon as I'm able to define the task properly, I'd prefer a faster model: it will be good enough, just faster. I really don't have problems anymore that I can't reasonably solve fast enough with this approach.

I've run multiple concurrent GPT-5 Codex sessions in the cloud, but I didn't accept a single thing they did.

In the end, "think it through, read, hack, boom" is faster than outsourcing the work for 30 minutes, plus 30 minutes to digest, plus 30 minutes to change.


The key is learning how to provide proper instructions.

Treat it as a developer who just joined the project and isn't aware of the conventions.

Provide hints for the desired API design, mention relevant code locations that should be read to gain context on the problem, or that do similar things.

An AGENTS.md that explains the project and provides some general guidelines also helps a lot.
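As an illustration, a minimal AGENTS.md along those lines might look like this (project name, stack, and conventions are all made up):

```markdown
# AGENTS.md

## Project
Acme API: a TypeScript/Express service backed by PostgreSQL.

## Conventions
- Reuse helpers in `src/lib/` before writing new ones.
- Handlers return the `ApiResult<T>` wrapper, never raw JSON.
- Tests live next to the code: `foo.ts` -> `foo.test.ts`.

## Workflow
- Run `npm test` before declaring a task done.
- Keep diffs small; one feature per change.
```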

Codex can be incredibly strong when prompted the right way.


This is generally the right approach imo (when it comes to codex).

In my experience Codex is pretty "bad" at spotting conventions or already existing code. Yesterday I gave it a feature to implement (maybe 40 LOC?) and it 1. added unnecessary atomics and 2. kind of reimplemented a function that already existed and that it should've just reused.

I pointed that out and it fixed things, but these are the issues that hold AI back by a lot. It's MUCH harder to read code than to write it, and if it writes the code, I must understand it 100% to have the same confidence in it as if I had written it myself. And that, to me, is mentally almost more taxing than doing it myself.

If you just let Codex write the code while instructing it exactly on the logic and architecture you want, it works really well and saves a ton of typing.


But when I'm at that point, I think either I myself or a faster agent can do the job, ergo no need for a long-running smart agent.

This might be down to the nature of the problems I face in my coding endeavors. I just don't have any tasks that I can't solve in less than 45 minutes, or else the problem is so vague in my head that I can't accurately describe it to an AI or a human. Then I usually either need to split it into smaller problems or take a walk.

Since Claude 4 I rarely think, "omg, I wish this agent were smarter." I still wish it were faster.

But what you described is of course good practice and necessary for smart execution as well.


100% agree. Composer-1 really has been the sweet spot for me of capability, reliability, and speed. I don't ask it to do too much at once, and this approach plus its speed materially speeds up my work. I generally find I get the most out of models when I feel like I'm slightly underutilizing their capabilities. The term I use for this is "staying in the pocket".


Is it available via API? Can't find it on OpenRouter...


It's in Cursor only.

That's the bet Cursor took with Composer-1: it's dumb but very fast, and that makes it better.

