IME you don't even have to do multiple things at the same time to reach that cognitive fatigue. The pace alone, which is now much higher, could be enough to saturate your cognitive capabilities.
For me one unexpected factor is how much it strains my executive function to try and maintain attention on the task at hand while I’m letting the agent spin away for 5-10 minutes at a stretch. It’s even worse than the bad old days of long compile times because at least then I could work on tests or something like that while I wait. But with coding agents I feel like I need to be completely hands off because they might decide to touch literally any file in the repository.
It reminds me a bit of how a while back people were finding that operating a level 3 autonomous vehicle is actually more fatiguing than driving a vehicle that doesn’t even have cruise control.
For me it's the volume of things that I am now capable of doing in a much shorter amount of time - this leaves almost no space for rest but puts much more strain on my cognitive limits.
If I’m writing tests and implementation for the same problem, there isn’t much of a context switch. Same business domain, same domain model, same API, same contract and invariants. I’m just switching between taking the measurements and making the cuts. Which is a smart thing to do anyway, because you can accumulate a lot of rework very quickly if you make a bunch of cuts in a row without stopping to confirm you’re making them correctly.
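To make the measure-then-cut loop concrete, here's a minimal Python sketch; apply_discount and the tests are hypothetical names, just to show the test and the implementation sharing one domain model and one contract:

    import unittest

    # Hypothetical domain function: the "cut".
    def apply_discount(total_cents: int, percent: int) -> int:
        """Return the discounted total, rounded down to whole cents."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return total_cents * (100 - percent) // 100

    # The "measurement": tests stating the contract and invariants,
    # written against the same domain model, so switching between
    # the two is cheap.
    class ApplyDiscountTest(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(1000, 10), 900)

        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(1000, 150)

    if __name__ == "__main__":
        unittest.main()

The point is the alternation: one small cut, one measurement, never a long run of unverified cuts.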
Hesai has driven the cost into the $200 to $400 range now. That said, I don't know what the ones needed for driving cost. Either way, we've gone from thousands or tens of thousands of dollars into the hundreds.
Looking at prices, I think you are wrong and automotive Lidar is still in the 4 to 5 figure range. HESAI might ship Lidar units that cheap, but automotive grade still seems quite expensive: https://www.cratustech.com/shop/lidar/
Those are single-unit prices. The AT128, for instance, which is listed at $6,250 there and is widely used by several Chinese car companies, was around $900 per unit in high volume, and over time they lowered that to around $400.
The next generation of that, the ATX, is the one they have said would be half that cost. According to regulatory filings in China, BYD will be using it on entry-level $10k cars.
Hesai got the price down for their new generation through several optimizations. They use their own designs for lasers, receivers, and driver chips, which reduced component counts and material costs, and they have stepped up production to 1.5 million units a year, giving them mass-production efficiencies.
That model only has a 120-degree field of view, so you'd need 3-4 of them per car (plus others for blind spots; they sell units for that too). That puts the total system cost in the low thousands, not the $200 to $400 stated by GP. I'm not saying it hasn't gotten cheaper or won't keep getting cheaper; it just doesn't seem that cheap yet.
That was 2 generations of hardware ago (4th gen Chrysler Pacificas). They are about to introduce 6th gen hardware. It's a safe bet that it's much cheaper now, given how mass produced LiDARs cost ~$200.
Otto and Uber and the CEO of https://pronto.ai do though (tongue-in-cheek)
> Then, in December 2016, Waymo received evidence suggesting that Otto and Uber were actually using Waymo’s trade secrets and patented LiDAR designs. On December 13, Waymo received an email from one of its LiDAR-component vendors. The email, which a Waymo employee was copied on, was titled OTTO FILES and its recipients included an email alias indicating that the thread was a discussion among members of the vendor’s “Uber” team. Attached to the email was a machine drawing of what purported to be an Otto circuit board (the “Replicated Board”) that bore a striking resemblance to – and shared several unique characteristics with – Waymo’s highly confidential current-generation LiDAR circuit board, the design of which had been downloaded by Mr. Levandowski before his resignation.
The presiding judge, Alsup, said, "this is the biggest trade secret crime I have ever seen. This was not small. This was massive in scale."
(Pronto connection: Levandowski got pardoned by Trump and is CEO of Pronto autonomous vehicles.)
You are underestimating the complexity of the task, and so are other people in the thread. It's not trivial to implement a working C compiler, much less one that proves its worth by successfully compiling one of the largest open-source code repositories ever, which, btw, is not even written in a plain ISO C dialect.
You thought your course mates would be able to write a C compiler that builds the Linux kernel?
Huh. Interesting. Like the other guy pointed out, compiler classes often get students to write toy C compilers. I think a lot of students don't understand the meaning of the word "toy". I think this thread is FULL of people like that.
I took a compilers course 30 years ago. I have near zero confidence anyone (including myself) could do it. The final project was some sort of toy language for programming robots with an API we were given. Lots of yacc, bison, etc.
If it helps, I did a PhD in computer science and went to plenty of seminars on languages, fuzz-testing compilers, reviewed for conferences like PLDI. I’m not an expert, but I think I know enough to say: this is conceptually within reach, if a PITA.
Hey! I built a Lego technic car once 20 years ago. I am fully confident that I can build an actual road worthy electric vehicle. It's just a couple of edge cases and a bit bigger right? /s
That's really helpful, actually, as you may be able to give me some other ideas for projects.
So, things you don't think I or my coursemates could do include writing a C compiler that builds a Linux kernel.
What else do you think we couldn't do? I ask because there are various projects I'll probably get to at some point.
Things on that list include (a) writing an OS microkernel and some of the other components of an OS. Don't know how far I'll take it, but certainly a working microkernel for one machine; if I have time I'll build most of the stack up to a window manager. (b) implementing an LLM training and inference stack. I don't know how close to the metal I'd go; I did some low-level CUDA a long time ago when it was very new, so it depends on time. I'll probably start the LLM stuff pretty soon as I'm keen to learn.
Are these also impossible? What other things would you add to the impossible list?
Building a microkernel based OS feels feasible because it’s actually quite open ended. An “OS” could be anything from single-user DOS to a full-blown Unix implementation, with plenty in between.
Amiga OS is basically a microkernel and that was built 40 years ago. There are also many other examples, like Minix. Do I think most people could build a full microkernel based mini Unix? No. But they could get “something” working that would qualify as an OS.
On the other hand, there are not many C compilers that build Linux. There are many implementations of C compilers, however. The goal of “build Linux” is much more specific.
"The union" should be "a union", of which, companies are rarely a part of ie zero.
Their workforce may be members of a union, and some equals in grade may belong to different unions.
Public grants are nice, but they have a couple of shortcomings, which is why they can only get you so far. They are normally low in capital, execution is really slow (a couple of months to a year), and larger grants also involve politics. The process is too formal (inflexible and time-consuming) and quite discriminatory toward individuals or small groups who have big ideas but are not already running a business (I mean, how could they be?). Proposal evaluation has its own shortcomings too: there's very little incentive for actual experts to join the evaluation process (it pays pennies), and generally speaking this leads to another chicken-and-egg problem: you're presenting something novel to a pool of people who may not have the capacity to understand the idea, neither the vision nor the execution.
That said, I am not attracted to VC culture, but their process delivers the value that creates successful companies.
NLNET is always coming up at FOSDEM. Since they have a decent track record of issuing grants, the EU delegates them some money to use in their own less bureaucratic granting process. They call this "cascade funding". NLNET has funded a lot of random individual projects you can find on their website. Nominally, your proposal must have something to do with their goals.
This year there is more emphasis on bringing complete solutions to market. Previously they were funding much more experimentation.
> This year there is more emphasis on bringing complete solutions to market. Previously they were funding much more experimentation.
That's a step in the right direction; however, there's what I believe is a major issue with the NLNET scheme: there is no fast-track option for really great ideas with very potent market impact. You have to spend (lose) ~a year proving your idea is worthy by applying to Zero Commons or a similar grant, instead of just getting the 200-500k to really get the project off the ground.
One year is an exceptionally long period in tech, and if the idea is right, you need all the resources to execute it; working solely on the project for the whole year for 50,000 EUR is simply not a strategy that can work out in a highly competitive (world) space.
How should they know your project is worth investing 500k? I heard they've got 3x8M, per year I presume, so 500k is a huge chunk of that. Everyone thinks their project is worth 500k, what makes yours different from the rest?
> How should they know your project is worth investing 500k? I heard they've got 3x8M, per year I presume, so 500k is a huge chunk of that. Everyone thinks their project is worth 500k, what makes yours different from the rest?
Well, that's the job of VCs; that's what they're experts at.
There's also another model where established industrial communities set up research centers to fund projects that might help their common problems.
Yes, many might believe that their project is worth more than it really is, but in my proposal the authors of the idea are not the ones who get to decide that; the people from NLNET or whatever grant body are. What I am saying is that currently there is no such process at all, and this is a foundational problem with the way these grants work.
I guess not. Can you specify it in more concrete terms? They're not just buying your project for an arbitrary price, or VC-investing; they're paying your living costs and hosting costs while you create a donation to the public good. That's how grants work.
They give you 500k if you have a really good reason why you need that... and why the result is worth it... and why your project is more worth it than all the several other projects, combined, that they could spend the same 500k on. Most of them are one or two people's living costs for 6 months to a year or so.
I genuinely hope you're a bot. If you're not, then please consider being respectful in your conversations and address the question being asked rather than moving the goalposts - it is extremely annoying. If you're out of arguments, learn to say "I don't know".
And I also do hope, if you're a human, that you're not sitting anywhere close to a decision-making committee, be it at NLNET or any other grant program, because if you are, it fits into the (terrible) narrative of the software market in the EU.
Ad hominem does not apply to bots or trolls, and you're one of those two. And I'm not sure what was ad hominem about my response. You're the one being ignorant here.
I built a moderately complex and very good-looking website in ~2 hours with the coding agent. The next step would be to write a backend + storage, and given how well the agent performs in these types of tasks, I assume I will be able to do that in a matter of hours too. I have never touched any of the technology involved in web development, so in my case I can say that I no longer need a full-stack dev, which in normal circumstances I definitely would. And the cost is ridiculous: a few hours invested + a $20 subscription.
I agree, however, that having no prior software engineering skills would make this much more difficult.
Yeah, I don't doubt you; it's really effective at knocking out "simple" projects. I've had success vibe-coding for days, but eventually, unless you keep some reins on the architecture/design, it falls down over its own slop. It's very noticeable as the agent spends more and more time trying to work in the changes but is unable to.
So the first day or two, each change takes 20-30 minutes. The next day it takes 30-40 minutes per change, the next day up to an hour, and so on, as the requirements start to interact with each other and with the ball of spaghetti the agent has composed and is now trying to change without breaking other parts.
Contrast that with when you really own the code and design: then you can keep going for weeks, with all changes taking 20-30 minutes, as on day one. But that also means I'm paying attention to what's going on, so it's not vibe-coding but pair programming with LLMs, and it requires you to understand the domain, what you're actually aiming for, and the basics of design/architecture.
The point was not simplicity but whether AI is replacing some people's jobs. I say that it certainly is, as shown by the example, but I also acknowledge that the technology is still not at the point where human engineers are no longer required in the loop.
I built other things too that would not be considered trivial or "simple" (or, as you say, architecturally complex), and they involve very domain-specific knowledge about programming languages, compilers, ASTs, databases, high-performance optimizations, etc. And not for a long time, or shall I say never, have I felt this productive, tbh. If I were to set up a company around this, which I believe I could, in the pre-LLM era I'd quite literally have had to hire 3-5 experienced engineers with sufficient domain expertise to build this together with me; and I mean not for some hypothetical potential, but for the concrete work I've done in ~2 weeks.
> The point was not simplicity but whether AI is replacing some people's jobs. I say that it certainly is, as shown by the example, but I also acknowledge that the technology is still not at the point where human engineers are no longer required in the loop.
I feel like you have missed emsh's point, which is that AI agents get significantly muddled up as your project becomes complex.
I feel the same way personally. If I don't know how the pieces of AI-written code interact with each other, I feel frustrated for as long as the project continues, precisely because of what they mention: changes at first take less time, then take longer and longer, with errors it missed, etc.
I personally vibe-code projects too, but I will admit that this failure mode is real.
I have this feeling that anything really complex will fall apart if complexity grows a lot or you don't unclog the slop.
This is also why we are seeing "AI slop janitors": humans whose task is to unsloppify the slop.
Personally, I have this intuition that AI will create really good small products, there is no denying that, but those were already un-monetizable, or if they were monetizable, they were really easy to replicate even in the past; this has probably just lowered the friction.
Now, if your project is something commercial and large, I don't know how much AI slop people can trust. At some point, if people depend on your project and it has these issues (and people can tell whether a project is AI-generated or not), then that would have its issues too.
And I am speaking from experience, after building something like WHMCS in Golang with AI. At first, I was surprised and felt as if it was good enough for my own personal use case (gvisor) and maybe some really small providers. But then I wanted it to, say, hook into Proxmox, have the tmate server connected via an API to allow easier re-opening, support live migration from one box to another, create drivers for the custom firecrackers-ssh idea that I had implemented (once again using AI), etc.
One quickly realizes how complexity adds up in projects and how, as emsh points out, it becomes exponentially harder to use AI.
Purely server rendered HTML can be a website. Static HTML pages with a server doing no more than S3 does can be a website. Websites existed long before SPAs were a twinkle in anyone’s eye.
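To illustrate, a server doing no more than S3 does can be a few lines of Python's standard library (the port and directory here are arbitrary choices):

    # A complete "website": plain HTML files on disk, served verbatim.
    # No SPA, no client-side framework, no build step.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Serves ./index.html and any other static files in the current
    # directory at http://localhost:8000
    HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()

Point a browser at it and you have a website, in the original sense of the word.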