Hacker News | mkw5053's comments

Living in SF (and dad of a toddler), this seems like a no-brainer. I can't wait for fewer human drivers.

I've said before that Waymo is already vastly, incredibly safer than some of my older relatives who refuse to give up their keys. Ever been around The Villages in Florida and seen a man leaning forward behind the wheel, squinting at whatever he's driving toward, with his wife shouting "turn left! turn left!"? That's just kind of tolerated in some places where the cops don't want to make waves with the wealthy older community.

A self-driving car never gets tired and sleepy after driving for many hours straight. A highway-bound Waymo would be safer than past me on the occasions I stayed on the road longer than I was safe to. They also never get drunk, and are safer than approximately 100% of impaired drivers.

I genuinely think we'll all be safer when lots of people collectively realize that someone other than themselves should be driving.


People won't collectively realize squat.

Insurance companies will simply price people out of driving. And I welcome it SO very much.


I enjoy driving (basically) wherever I want to though.

I don't want to have the freedom to go places determined by some faceless multinational, according to my subscription. Or via some "safety" regime.


Self driving doesn't mean forced self driving, there will always be a manual driving override. But your insurance may be more expensive.

Really interesting. It made me curious to dig in and learn that urea production starts with natural gas. And if you add natural gas to the chart as well urea and natural gas prices generally track together without a lag either way, except natural gas doesn't have the recent uptick seen in urea.

I guess the recent move in urea likely isn't coming from energy costs but from something fertilizer-specific: exports, shipping, or supply?

Or it's just noise ¯\_(ツ)_/¯


> I guess the recent move in urea likely isn't coming from energy costs but from something fertilizer-specific: exports, shipping, or supply

One of India's SOEs recently paused Urea production at some plants due to NatGas issues from the ongoing conflict [0].

Additionally, India began reducing purchases of Russian LNG in late 2025.

India also launched a tender to purchase urea on the global market in February [1].

This led to a double whammy for urea in the short term, given how heavily Indian agriculture depends on urea (around 70-80% of all fertilizers used in India are urea).

But the same SOE recently announced it's restarted operations earlier today [2] and India has restarted spot purchases of what appears to be Russian LNG [3][4] that was originally destined for Europe (especially Hungary and Slovakia).

Edit: can't reply

I'm not a god damn LLM and I do not use AI to write my comments. If you can't engage with an argument, then fuck off.

[0] - https://www.tribuneindia.com/news/punjab/gas-shortage-halts-...

[1] - https://www.rfdtv.com/india-urea-tender-tightens-global-fert...

[2] - https://www.tribuneindia.com/news/bathinda/nfl-bathinda-plan...

[3] - https://www.reuters.com/business/energy/india-securing-addit...

[4] - https://interfax.com/newsroom/top-stories/116517/


[flagged]


What is a WITCH or AFS?

Not sure which natural gas that’s referencing, but it looks to be a US index (Henry Hub or so) - note the peak corresponds to a cold snap, not the Iran war. Natgas is tricky because it’s difficult to store and difficult to transport (aside from well-established pipelines), so you have a massively disjointed market between various delivery markets (look at NYMEX Henry Hub vs Dutch TTF), and also a massively disjointed market between delivery dates (natgas calendar spread trades have been nicknamed “widowmakers”)

I had codex read my cc chat histories and am back up and running there.


Exactly, especially iMessage. It's fair to think that's not worth it, but for those who choose to use it, it is.


And I just bought my mac mini this morning... Sorry everyone


You know that if you are just using a cloud service and not running local models, you could have just bought a raspberry pi.


Yeah. I know it’s dumb but it’s also a very expensive machine to run BlueBubbles, because iMessage requires a real Mac signed into an Apple ID, and I want a persistent macOS automation host with native Messages, AppleScript, and direct access to my local dev environment, not just a headless Linux box calling APIs.


Hey, I made the same decision (except I went with the 24gb model, not the 16gb). The other thing I like about having it on a separate Mac Mini is that it's completely sandboxed, and I don't log into anything with it on my personal machine. It's VERY nice to have this as an isolated environment, and the extra VRAM means that I can run my own local models, and it's got enough beef to do long-running tasks (right now I have it chugging through several gigs of images and building embeddings for them with DINOv2) -- that's the sort of local workload that would crush a Raspberry Pi, but the Mac Mini is hitting 17 images per second -- all managed by OpenClaw.

All that to say, don't let the naysayers get you down. I bought my Mac Mini last week and have been really happy with it as an isolated environment. Way better than futzing around with VMs. The always-on nature of OpenClaw means that it's nice to be able to restart my personal laptop or do gaming or whatever else I want and I'm not fighting for GPU resources in the background.


My M2 macbook pro runs qwen fine, and so will any mac mini with maxed out ram.


The only immediately available was the base 16GB version


Harder to get at the Apple ecosystem. I have an old Macbook that just serves my reminders over the internet.


Who knows when Apple decides to enter the game, but they will absolutely crush the personal agent market when they do.


Apple has been doing personal agents for a while. They're crushing it so hard they must be tired of winning at this point.

For instance, the other day, the Siri button in maps told me it couldn't start navigation because it didn't know where it was. It was animating a blue dot with my real time position at the same time.

Don't get me started about the new iOS 26 notification and messaging filters. Those are causing real harm multiple times a day.


Aider [0] wrote a piece about this [1] way back in Oct 2023!

I stumbled upon it in late 2023 when investigating ways to give OpenHands [2] better context dynamically.

[0] https://aider.chat/

[1] https://aider.chat/2023/10/22/repomap.html

[2] https://openhands.dev/


Aider's repomap is a great idea. I remember participating in the discussion back then.

The unfortunate thing for Python, which the repomap post mentions, and for untyped/duck-typed languages generally, is that function signatures do not mean a lot.

When it comes to Rust, it's a totally different story, function and method signatures convey a lot of important information. As a general rule, in every LLM query I include maximum one function/method implementation and everything else is function/method signatures.

By not mindlessly giving LLMs whole files and implementations, I have never used more than 200,000 tokens/day, counting input and output. That works out to about 30 queries for a whole day of programming, and costs less than a dollar per day no matter which model I use.

Anyway, having the agent build the repomap doesn't sound like such a great idea. Agents are horribly inefficient. It is better to build the repomap deterministically using something like ast-grep, and then let the agent read the resulting repomap.
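To illustrate the deterministic approach (my own sketch, not from ast-grep itself): for Python files, a minimal signature map needs nothing beyond the stdlib `ast` module; tree-sitter or ast-grep generalize the same idea across languages.

```python
import ast

def signature_map(source: str) -> list[str]:
    """Return one-line signatures for top-level functions and classes.

    Bodies are deliberately dropped: the repomap only needs names and
    argument lists, which is what keeps the token budget small.
    """
    tree = ast.parse(source)
    sigs = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            sigs.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            sigs.append(f"class {node.name}")
    return sigs

code = "def add(a, b):\n    return a + b\n\nclass Foo:\n    pass"
print(signature_map(code))  # ['def add(a, b)', 'class Foo']
```

Run once per file, concatenate the results, and the agent reads a compact map instead of re-deriving it every session.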


Typed languages definitely provide richer signal in their signatures, and my experience has been that I get more reliable generations from those languages.

On the efficiency point, the agent isn't doing any expensive exploration here. There is a standalone server which builds and maintains the index, the agent is only querying it. So it's closer to the deterministic approach implemented in aider (at least in a conceptual sense) with the added benefit that the LLM can execute targeted queries in a recursive manner.
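As a rough sketch of what I mean (the names and index shape here are illustrative, not the actual server's API): the agent's side of the interaction is just cheap lookups against a prebuilt symbol table, and each result informs the next query.

```python
# Hypothetical in-memory index; a real server would maintain this
# incrementally from tree-sitter parses and filesystem watching.
INDEX = {
    "parse_config": {"file": "config.py", "callers": ["main"]},
    "main": {"file": "app.py", "callers": []},
}

def query(symbol: str) -> dict:
    """One cheap lookup; the agent chains many of these per task."""
    return INDEX.get(symbol, {})

def trace_callers(symbol: str) -> list[str]:
    """Agent-style recursive exploration: follow callers to the roots."""
    chain = [symbol]
    for caller in query(symbol).get("callers", []):
        chain.extend(trace_callers(caller))
    return chain

print(trace_callers("parse_config"))  # ['parse_config', 'main']
```

The point is that each `query` is O(1) against the index, so the recursive exploration stays cheap even when the agent makes dozens of lookups.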


Aider's repo-map concept is great! thanks for sharing, I'd not been aware of it. Using tree-sitter to give the LLM structural awareness is the right foundation IMO. The key difference is how that information gets to the model.

Aider builds a static map, with some importance ranking, and then stuffs the most relevant part into the context window upfront. That's smart - but it is still the model receiving a fixed snapshot before it starts working.

What the RLM paper crystallized for me is that the agent could query the structure interactively as it works. A live index exposed through an API lets the agent decide what to look at, how deep to go, and when it has enough. When I watch it work it's not one or two lookups but many, each informed by what the previous revealed. The recursive exploration pattern is the core difference.


Aider actually prompts the model to say if it needs to see additional files. Whenever the model mentions file names, aider asks the user if they should be added to context.

As well, any files or symbols mentioned by the model are noted. They influence the repomap ranking algorithm, so subsequent requests have even more relevant repository context.

This is designed as a sort of implicit search and ranking flow. The blog article doesn’t get into any of this detail, but much of this has been around and working well since 2023.


I see, so the context adapts as the LLM interacts with the codebase across requests?

That's a clever implicit flow for ranking.

The difference in my approach is that exploration is happening within a single task, autonomously. The agent traces through structure, symbols, implementations, callers in many sequential lookups without human interaction. New files are automatically picked up with filesystem watching, but the core value is that the LLM can navigate the code base the same way that I might.


> That's smart - but it is still…

> That's a clever… The difference in my approach…

Are you using LLM to help you write these replies, or are you just picking up their stylistic phrasings the way expressions go viral at an office till everyone is saying them?

As an LLM, you wouldn't consider that you're replying confidently and dismissively while clearly having no personal experience with the CLI coding agent that not only started it all but for a year (eternity in this space) was so far ahead of upstarts (especially the VSCode forks family) it was like a secret weapon. And still is in many ways thanks to its long lead and being the carefully curated labor of a thoughtful mind.

As a dev seeking to improve on SOTA, having no awareness of the progenitor and the techniques one must do better than seems like a blind spot worth digging into before dismissing. Aider's benchmarks on practical applicability of model advancements vs. regressions in code editing observably drove both OpenAI and Anthropic to pay closer attention and improve SOTA for everyone.

Aider was onto something, and you are onto something, pushing forward the 'semantic' understanding. It's worth absorbing everything Paul documented and blogged, and spending some time in Aider to enrich a feel of what Claude Code chose to do the same or differently, which ideas may be better, and what could be done next to go further.


I am planning to add similar concepts to Yek. Either tree-sitter or ast-grep. Your work here and Aider's work would be my guiding prior art. Thank you for sharing!

https://github.com/mohsen1/yek



Hey, are you planning to update docs for end users of your CLI? I was an Aider user who switched to Opencode but I want to experiment with token and time-efficient agents, and I'm assuming OpenHands is one.


I was just talking to someone about Feynman's lectures on computation the other day. I really really enjoyed it. That's all.


While I agree that believing the US is "uniquely great, superior to other nations, destined for a special role in the world" is silly, this article feels just as cherry-picked, on the other extreme. The US is an outlier in plenty of negative ways, yes. But it's also an outlier in GDP per capita, venture capital investment, Nobel Prizes, university quality, immigrant demand, medical innovation, and cultural export. Any honest look at the data shows a country that is simultaneously world-leading and world-lagging depending on which metrics you choose. Picking only one side of that ledger isn't analysis.


None of those things help the random citizen with their life. Not one. Not even medical innovation if it increasingly can't be accessed.


We agree that lived outcomes matter. But if OECD data are valid for health and incarceration, they’re also valid for household income and housing space, where the US performs very well. You can criticize the tradeoffs without denying the gains.


GDP per capita helps the average citizen. That GDP isn't all rich people's yachts. The average person gets a fair amount of it.

Not the bottom 5%, 10%, maybe even 20% (and that is very much a problem). But the average does.


> The average person gets a fair amount of it.

You chose poorly in choosing the adjective "fair", which can be read morally. But that aside, yes, rising GDP does improve middle-class incomes.


If you just look at how things work in most societies now and throughout history and use the model of 'the purpose of a system is what it does', then you could surmise that the purpose of a society/nation is to make sure the top 0.01% of people have amazing lives. Everything else is just a consumable input to achieve that goal.


Musk is moving value out of public hands and into his own. He overpaid $44B for Twitter, then rebranded it as an AI asset by folding X into xAI. He pushed Tesla to invest $2B of shareholder money into xAI despite shareholders voting no. Five days later, SpaceX acquired xAI, effectively turning Tesla’s cash into equity in a private company Musk owns far more of. Musk controlled every step, there was no real arm’s-length process, and he almost certainly knew the outcome in advance. Musk and his private investors get control, inflated valuations, and IPO upside. Tesla shareholders supply the cash, take the risk, and lose leverage.


Vidably | Founding Engineers and Designer | San Francisco (SF) | Full-time | Onsite

(I only have one JD posted publicly, but I'm hiring broadly for eng + design): https://www.linkedin.com/jobs/view/4330646312

I'm founder of Vidably and just closed a $1.5M pre-seed last week.

AI agents are about to orchestrate trillions in commerce, but today they shop on thin data: product specs, studio photos, and text reviews. Agents need evidence, not marketing. Today, the most influential product evidence lives on TikTok and Reels, outside the surfaces brands control and agents can use. Vidably puts verified buyer video directly onto e-commerce product pages through a lightweight widget. We capture that video post-purchase and link it to real purchasing outcomes. Every video is SKU-linked and structured so shopping agents can reason over it.

We're already seeing organic pull from creators and expansion from brands.

Hit me up at hn@vidably.com

