Hacker News | px43's comments

I don't think you understand why moltbook is popular. It has incredible utility for those who are actually using it every day.

What is that utility? (honest question)

It's an extremely active community of humans using agents as proxies to explore various concepts. I get a lot of value out of it, and apparently others do as well. Hacker News users have this weird tendency to outright dismiss anything that doesn't cater to their needs specifically.

I think it's pretty obvious that if there was nothing valuable there, no one would be using it.


Can you share some of your favorite examples? Whenever I take a look at the hot/top posts, they’re just… not interesting to me

What are some use cases I should try?

x2 to what others have commented.

I would like to know (much) more about this.


Hype.

Why is that an issue? Isn't that the entire point? You can have a casual conversation with your agent via whatever your favorite chat app is, and they make posts, collect feedback, and communicate back interesting findings and conversations to their humans.

Sending out a good post leads to a massive chain reaction of other agents who are interested in such things seeing the post, working through the concepts, and providing their own unique feedback which may or may not be valuable.

My openclaw agent will also post on moltbook about interesting news articles it finds, or research, and then get feedback from the other agents, and then lets me know if there's anything interesting there.

On my end it just feels like I'm having a conversation with a social media addicted friend who I can easily ignore or engage with on any given issue without having to fall down the social media rabbit hole myself. IMO this is a much more pleasant social media experience. No ads, no ragebait, no spam or reply bots trying to get my attention. Just my one, well trained, openclaw buddy.


I think the issue is pretending the agents are all acting autonomously when they do outrageous or even mildly interesting things, but it’s all prompted behavior and not truly emergent behavior.

Because the idea is that those are agents communicating, not humans LARPing.

Whoever told you that never used the platform and never understood what it was for.

> A Social Network for AI Agents

> Where AI agents share, discuss, and upvote. Humans welcome to observe.

???????


Don’t believe everything you read on the internet

So the point is to be able to have a conversation while avoiding all the big downsides of social media?

Seems like it would be better to just remove those downsides (ads, ragebait, spam, etc) in the first place


9% uptime?


One 9 would be 90% (aka 0.9)


9% would be 0.09 which is no nines.
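The "nines" arithmetic above can be sketched in a few lines. This is just an illustration of the convention being discussed; the function names are made up:

```python
import math

def availability(n_nines: int) -> float:
    """Availability fraction for a given number of nines, e.g. 3 -> 0.999."""
    return 1 - 10 ** (-n_nines)

def nines(avail: float) -> float:
    """Number of nines implied by an availability fraction."""
    return -math.log10(1 - avail)

availability(1)  # → 0.9, i.e. one nine is 90% uptime
nines(0.09)      # ≈ 0.04, so 9% uptime is effectively "no nines"
```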


Wild to call 1.42 billion people racist despite having met very few of them.


It's funny that you think you know who I've met. YOU DON'T KNOW ME.


"Open source" is no longer about "Hey I built this tool and everyone should use it". It's about "Hey I did this thing and it works for me, here's the lessons I learned along the way", at which point anyone can pull in what they need, discard what they don't, and build out their own bespoke tool sets for whatever job they're trying to accomplish.

No one is trying to get you to use openclaw or nanobot, but now that they exist in the world, our agents can use the knowledge to build better tooling for us as individuals. If the projects get a lot of stars, they become part of the global training set that every coding agent is trained against, and the utility of the tooling continues to increase.

I've been running two openclaw agents, and they both made their own branches and modified their memory tooling to accommodate their respective tasks. They regularly check upstream for things that might be interesting to pull in, especially security-related changes.

It feels like pretty soon, no one is going to just have a bunch of apps on their phone written by other people. They're going to have a small set of apps custom built for exactly the things they're trying to do day to day.


> "Open source" is no longer about "Hey I built this tool and everyone should use it".

Was open source ever about that? I thought it was "Hey I built this tool and I'm putting it on the internet if anyone wants to use it", often accompanied by a license saying "no warranties".

> It feels like pretty soon, no one is going to just have a bunch of apps on their phone written by other people. They're going to have a small set of apps custom built for exactly the things they're trying to do day to day

I think today's AI tools like agents are for people who are programmers but don't want to program, not for people who aren't programmers and don't want to program. That is, "no one is going to..." is a very broad claim to make about the average person, who just uses the apps on their phone. Your average person will not start vibe coding their own apps just because they can (because they couldn't care less).


"If the projects get a lot of stars, they become part of the global training set that every coding agent is trained against, and the utility of the tooling continues to increase."

OpenClaw currently has 1.8k open issues and 400k lines of code, had an RCE exploit discovered just a few days ago, takes 5 seconds to respond when I type "openclaw" in my CLI, and most of the top skills are malware. I'm pretty sure training on that repository is the equivalent of eating a cyanide pill for a coding model.

I actually agree with your take that custom apps will take over a subset of established software for some users at some point, but I don't think models poisoning themselves with recklessly vibecoded bloatware is how we get there at all.


Are you me?? I'm literally building highly personalized and/or idiosyncratic software with claude to solve personal and professional problems.

Thanks to tauri, I've now made two desktop apps and one mobile app for the first time in the last two months.

None of this was nearly as feasible just a year ago


Booo


> it struggles

It does not struggle, you struggle. It is a tool you are using, and it is doing exactly what you're telling it to do. Tools take time to learn, and that's fine. Blaming the tools is counterproductive.

If the code is well documented, at a high level and with inline comments, and if your instructions are clear, it'll figure it out. If it makes a mistake, it's up to you to figure out where the communication broke down and figure out how to communicate more clearly and consistently.


"My Toyota Corolla struggles to drive up icy hills." "It doesn't struggle, you struggle." ???

It's fine to critique your own tools and their strengths and weaknesses. Claiming that any and all failures of AI are an operator skill issue is counterproductive.


Not all tools are right for all jobs. My spoon struggles to perform open heart surgery.


But as a heart surgeon, why would you ever consider using a spoon for the job? AI/LLMs are just a tool. Your professional experience should tell you if it is the right tool. This is where industry experience comes in.


As a heart surgeon with a phobia of sharp things I've found spoons to be great for surgery. If you find it unproductive it's probably a skill issue on your part.


A tool is something I can tightly control. A thing that may or may not work today, and if it does, might stop working tomorrow when the model gets updated without any notification to anyone, the output of which I have to very carefully scrutinize anyway, is not a tool. It's a toy.


This sounds like coding with plaintext with extra steps.


Negativity Bias is a thing. It probably served us well back when it was more important to remember to avoid the field with all the poison snakes in it vs the field with the pretty flowers in it, but in an era where algo feeds try to treat content equally, and optimize for attention, it kind of ruins everything.

I recall there being studies on financial loss vs gain, and that financial losses seem to affect emotions about 4x more strongly than gains, so for an actually balanced algorithm, it would seem that positive posts should be boosted about 4-5x to have any chance of being surfaced on a modern social network. Given what we know about human psychology, sentiment boosts really should be a thing. Is anyone working on that?
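A sentiment boost like the one suggested could look something like this. Everything here is hypothetical: the 4x factor comes from the loss-aversion ratio mentioned above, and the post fields and function names are invented for illustration:

```python
# Hypothetical sketch: counterweight negativity bias in a feed ranker.
# `sentiment` is assumed to be a score in [-1, 1] from some classifier;
# POSITIVE_BOOST ≈ 4 mirrors the loss-aversion ratio discussed above.

POSITIVE_BOOST = 4.0

def adjusted_score(engagement: float, sentiment: float) -> float:
    """Scale raw engagement so positive posts can compete with negative ones."""
    if sentiment > 0:
        return engagement * (1 + POSITIVE_BOOST * sentiment)
    return engagement

posts = [
    {"id": "ragebait", "engagement": 100.0, "sentiment": -0.8},
    {"id": "good-news", "engagement": 30.0, "sentiment": 0.9},
]

# With the boost, the positive post outranks the higher-engagement ragebait.
ranked = sorted(posts,
                key=lambda p: adjusted_score(p["engagement"], p["sentiment"]),
                reverse=True)
```

Whether a fixed multiplier is the right counterweight is an open question; the point is only that the asymmetry is easy to express in a ranking function.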


4.6M is not a lot, and these were old bugs that it found. Also, actually exploiting these bugs in the real world is often a lot harder than just finding the bug. Top bug hunters in the Ethereum space are absolutely using AI tooling to find bugs, but it's still a bit more complex than just blindly pointing an LLM at a test suite of known exploitable bugs.


According to the blog post, these are fully autonomous exploits, not merely discovered bugs. The LLM's success was measured by how much money it was able to extract:

>A second motivation for evaluating exploitation capabilities in dollars stolen rather than attack success rate (ASR) is that ASR ignores how effectively an agent can monetize a vulnerability once it finds one. Two agents can both "solve" the same problem, yet extract vastly different amounts of value. For example, on the benchmark problem "FPC", GPT-5 exploited $1.12M in simulated stolen funds, while Opus 4.5 exploited $3.5M. Opus 4.5 was substantially better at maximizing the revenue per exploit by systematically exploring and attacking many smart contracts affected by the same vulnerability.

They also found new bugs in real smart contracts:

>Going beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694.


Of course they are, and they've been doing it since long before ChatGPT or any of that was a thing. Before, it was mostly classifiers and concolic execution engines, but it's only gotten way more advanced.

