Hacker News | CTDOCodebases's comments

Because when a service is running at a loss the investors want their money to be spent efficiently.

Judging by the news, isn't the pivot to creating autonomous driving systems for other manufacturers' cars?

If I’m understanding correctly the pivot is to sell just the autonomous driving systems. This way it can be trained on more data. It’s a hard sell to do this while competing against the car makers whose business they are trying to court.

Selling actual cars was like Uber when they started with a black car service. Get into the luxury market, then leverage that to get into the mass market.

Perhaps this is why Elon has been so adamant about not using LiDAR


What does any of this have to do with Optimus? Driving a car by sticking a humanoid robot in the driver seat would be amusing but is a terrible idea.

I got confused between Optimus and Dojo and assumed that Tesla had a separate internal AI division called Optimus.

In light of this I think it makes sense though. Tesla lost the government subsidies so it can't compete. Possibly the only way it can is to build an autonomous workforce and then leverage that into selling picks and shovels (Optimus humanoid robots) to other automotive manufacturers.


My drives have an AFR of 0.41

It looks like I picked a good vintage which is good because the same drives are approaching 2x the price today.
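
For reference, a minimal sketch of how AFR is typically computed (Backblaze-style); the drive and failure counts here are made-up placeholders, not real fleet data:

    # Annualized Failure Rate, Backblaze-style:
    #   AFR (%) = failures / (drive_days / 365) * 100
    # Placeholder numbers only, for illustration.
    drive_count = 40
    years_in_service = 5
    drive_days = drive_count * years_in_service * 365
    failures = 1
    afr = failures / (drive_days / 365) * 100
    print(f"AFR: {afr:.2f}%")  # -> AFR: 0.50%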


I'm still running ~40 WUH721414ALE6L4 purchased ~2020 for $110 ea. They're $320 ea now, used.

I never thought I would own commodity hardware that would increase in value over time. When this AI bubble pops like dotcom 1.0, the definancialization is going to be painful.


The margins on software are incredibly high and perhaps this is just the cost of having maintainable output.

Also I think you have to consider development time.

If someone creates a SaaS product then it can be trivially cloned in a small timeframe. So the moat that normally exists becomes non-existent. Therefore, to stay ahead or to catch up it's going to cost money.

In a way it's similar to the way FAANG was buying up all the good engineers. It starves smaller, lower-capitalised but more nimble competitors of the resources they need to compete.


I wonder over the long term how programmers are going to maintain the proficiency to read and edit the code that the LLM produces.


There were always many mediocre engineers around, some of them even with fancy titles like "Senior," "Principal", and CTO.

We have always survived it, so we can probably also survive mediocre coders not reading the code the LLM generates for them; they were never able to see those problems in their handwritten code anyway.


Honestly it's not that hard. I already coded less and less as part of my job as I got more senior and just didn't have time, but it was still easy to do code reviews, fix bugs, and sit down and whip out a thousand lines in a power session. Once you learn it, it doesn't take much practice to maintain it. A lot of traditional coding is very inefficient. With AI it's like we're moving from combustion cars to EVs: the energy efficiency is night and day for doing the same thing.

That said, the next generation may struggle, but they’ll find their way.


Personally, I plan to set aside weekly challenges to stay sharp.


It’s going to be extremely difficult if PR and code reviews do not prune unnecessary functions. From what I’m experiencing now, there’s a lot of additional code that gets generated.


"That's the neat part—you don't." Eventually the workflow will be to use the LLM to interpret the LLM-generated codebase.


I don’t read or edit the code my claude code agent produces. That’s its job now. My job is to organize the process and get things done.


In this case, why can't other agents just automate your job completely? They are capable of that. What do you bring to the process by still doing manual organization?


I still have to tell it what to do, and often how to do it. I manage its external memory and guidelines, and review implementation plans. I’m still heavily involved in software design and test coverage.

AI is not capable yet of automating my job completely – I anticipate this will happen within two years, maybe even this year (I’m an ML researcher).


Do you mean, from your perspective, that within 2 years humans won't be able to bring anything of value to the equation in management and control?


No, I mean that my job in its current form – as an ML researcher with a PhD and 15 years of experience – will be completely automated within two years.


Is the progress of LLMs moving up abstraction layers inevitable as they gather more data from each layer? First, we fed LLMs raw text and code, and now they are gathering our interactions with the LLM regarding generated code. It seems like you could then use those interactions to make an LLM that is good at prompting and fixing another LLM's generated code. Then it's on to the next abstraction layer.


What you described makes sense, and it's just one of the things to try. There are lots of other research directions: online learning, more efficient learning, better loss/reward functions, better world models from training on Youtube/VR simulations/robots acting in real world, better imitation learning, curriculum learning, etc. There will undoubtedly be architectural improvements, hardware improvements, longer context windows, insights from neuroscience, etc. There is still so much to research. And there are more AI researchers now than ever. Plus current AI models already make us (AI researchers) so much more productive. But even if absolutely no further progress is made in AI research, and foundational model development stops today, there's so much improvement to be made in the tooling around the models: agentic frameworks, external memory management, better online search, better user interactions, etc. The whole LLM field is barely 5 years old.


If you want a machine (or in fact another human) to do something for you, there are two tasks you cannot delegate to them:

a) Specify what you want them to do.

b) Check if the result meets your expectations.

Does your current job include neither a nor b?


A/B happen at different abstraction levels. My abstraction level will be automated. My manager's level will probably last another year or so.


So your assumption is that it will ultimately be the users of software themselves who will throw some every day language at an AI and it will reliably generate something that meets those users' intuitive expectations?


Yes, it will be at least as reliable as an average software engineer at an average company (probably more reliable than that), or at least as reliable as a self-driving car where a user says get me to this address, and the car does it better (statistically) than an average human driver.


I think this could work for some tasks but not for others.

We didn't invent formal languages to give commands to computers. We invented them as a tool for thinking and communicating things that are hard to express in natural language.

I doubt that we will stop thinking and I doubt that it will ever be efficient to specify tasks purely in terms of natural language.

One of my first jobs as a software engineer was for a bank (~30 years ago). This bank manager wasn't a man of many words. He just handed us an Excel sheet as a specification for what he wanted us to implement.


My job right now is to translate natural English statements from my bosses/colleagues into natural English instructions for Claude. Yes, it takes skill and experience to do this effectively. But I don't see any reasons Gemini 4, Opus 5 or GPT-6 won't be able to do this just as well as I do.


What are you going to do for work in 2 years?


I have enough savings for a few years, so I might just move to a lower COL area, and wait it out. Hopefully after the initial chaos period things will improve.


For someone in your position, with your experience, it's quite depressing that your job is going to be automated. I feel quite anxious when I see younger generations in my country who say of themselves that they are lazy about learning new things. The next generation will be useless to capitalist societies, in the sense that they won't be able to bring value through administrative or white-collar work. I hope some areas of the industry will move slowly toward AI.

simonw alert!!!


The security theatre is there to make people feel safe.

It's about emotion not logic.


...or be very anxious and resent air travel. I don't feel any safer because of body searches, coupled with belt/coat removal, taking off my glasses and what not.

Personally, I don't know a single person who feels more secure due to the checks.


And to make some people richer.


Get a lazy boy, fit a split keyboard to each arm and develop AGI then. I’m sick of these RAM prices.


> This has been going on for years; the right simply refuses to countenance the possibility of legitimate organic opposition, while also being chronically unable to provide any evidence for their claims.

Even their own. Jan 6 for example. It was a guided tour given by FBI agitators apparently.


It all makes sense if you think of it in terms of emotions, not rationality.

Unless there is some concrete penalty (and even then!), why wouldn't they believe the thing that makes them feel justified and righteous?


They could allow submitters to double down on submissions, escalating the bug to more skilled and experienced code reviewers who get a cut of the doubled submission fee for reviewing.


It's all fun and games until someone decides to bet $100K that someone they don't like won't die via assassination this year.

