
Like another comment mentioned, sigmoid curves [1] are ubiquitous in neural network systems. Neural networks can be intoxicating because it's so "easy" (relatively speaking) to go from nothing to 80% in an extremely short period of time, so it seems completely obvious that hitting 100% is imminent. Yet it turns out that each subsequent percent comes exponentially more slowly, and we tend to bump into seemingly impassable asymptotes far from where we'd like to be.
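
(As a concrete illustration of the shape, here's a quick sketch of the standard logistic function; the numbers are purely illustrative:)

    import math

    def sigmoid(x: float) -> float:
        """Standard logistic function: 1 / (1 + e^-x)."""
        return 1.0 / (1.0 + math.exp(-x))

    # Each equal step of effort buys exponentially less progress:
    for x in range(0, 9, 2):
        print(f"x={x}: {sigmoid(x):.4f}")
    # x=0: 0.5000
    # x=2: 0.8808   <- most of the distance, almost immediately
    # x=4: 0.9820
    # x=6: 0.9975
    # x=8: 0.9997   <- the last fraction never quite arrives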

~8 years ago, when self-driving technology was all the rage and every major company was getting on board with ever more impressive technological demos, it seemed entirely reasonable to expect that we'd all be in a world of complete self-driving imminently. I remember mocking somebody online around that time who was pursuing a class C/commercial trucking license. Yet now, nearly a decade later, there are more truckers than ever and the tech itself seems further away than ever before. And that's because most have now accepted that progress has basically stalled out, in spite of absolutely monumental efforts at moving forward.

So long as LLMs regularly hallucinate, they're not going to be useful for much other than tasks that can accept relatively high rates of failure. And many of those generally creative domains are the ones LLMs are paradoxically the weakest in, like writing. Reading a book written by an LLM would be cruel and unusual punishment given the current state of the art. One domain I do see them completely taking over is search. They work excellently as natural language search engines, and "failure" there is very poorly defined.

[1] - https://en.wikipedia.org/wiki/Sigmoid_function



I'm not really sure your self-driving analogy is apt here. Waymo has cars on the road right now that are totally autonomous, and just expanded its footprint. It has been longer and more difficult than we all thought, and those early tech demos were a glimmer of what was to come; then we had to grind to get there, with a lot of engineering.

I think what maybe isn't obvious amidst the hype is that there is a hell of a lot of engineering left to do. The fact that you can squash the weights of a neural net down to 3 bits per param and it still works is evidence that we have quite a way to go in maturing this technology. Multimodality, improvements to the UX, the human-computer interface part of it: those are fundamental tech things, but they are foremost engineering problems. Getting latency down. Getting efficiency up. Designing the experience, then building it out.
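
(For a sense of what "3 bits per param" means, here's a toy sketch of naive uniform quantization. Real low-bit schemes like GPTQ or AWQ use per-group scales and calibration data, so treat this as illustration only:)

    import numpy as np

    def quantize_3bit(w: np.ndarray) -> np.ndarray:
        # Snap each weight to one of 2^3 = 8 levels, then map back to float.
        scale = np.abs(w).max() / 4           # int3 range is roughly [-4, 3]
        q = np.clip(np.round(w / scale), -4, 3)
        return q * scale

    w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
    print(np.mean(np.abs(w - quantize_3bit(w))))  # mean rounding error per weight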

25 years ago, early tech demos on the internet were promising that everyone would do their shopping, entertainment, socializing, etc... online. Breathless hype. 5 years after that, the whole thing crashed, but it never went away. People just needed time to figure out how to use it and what it was useful for, and discover its limitations. 10 years after that, engineering efforts were systematized and applied against the difficult problems that still remained. And now: look at where we are. It just took time.


I don't think he's saying that AGI is impossible — almost no one (nowadays) would suggest that it's anything but an engineering challenge. The argument is simply one of scale, i.e. how long that engineering challenge will take to solve. Some people are suggesting on the order of years; I think he's suggesting it'll be closer to decades, if that.


AGI being "just an engineering challenge" implies that it is conceptually solved, and we need only figure out how to build it economically.

It most definitely is not.


Waymo cars are highly geofenced in areas with good weather and good quality roads. They only just (in January) gained the capability to drive on freeways.

Let me know when you can get a Waymo to drive you from New York to Montreal in winter.


> Waymo cars are highly geofenced in areas with good weather and good quality roads. They only just (in January) gained the capability to drive on freeways

They are an existence proof that the original claim, that we seem further away than ever before, is just wrong.


There are six levels of self-driving, numbered starting at 0. The final level, level 5, is the one we've obviously been aiming at and the one most were expecting: fully automated self-driving in all conditions and scenarios. Get in your car anywhere, and go anywhere, with capability comparable to a human's. Level 4, by contrast, is full self-driving under certain circumstances and generally in geofenced areas; basically trolleys without rails. Get in a car, so long as conditions are favorable, and go to a limited set of premapped locations.

And level 4 is where Waymo is, and is staying. Their strategy is to use tiny geofenced areas with a massive amount of preprocessing, mapping out every single part of an area, not just the roads but every single meta indicator: signs, signals, crosswalks, lanes, and so on. That creates a highly competent but also highly rigid system. If road conditions change in any meaningful way, the most likely outcome under this strategy is simply that the network gets turned off until the preprocessing can be redone and re-uploaded. That's completely viable in small geofenced areas, but it doesn't generalize at all.

So the presence of Waymo doesn't say much of anything about the prospects of level 5 autonomy. If anything it suggests Waymo believes that level 5 autonomy is simply out of reach, because the overwhelming majority of the tech they're researching and developing would have no role whatsoever in level 5 automation. Tesla is still pushing for L5, but if they don't achieve it, they'll probably just end up left behind by companies that doubled down on L4. And that does indeed seem to be the most likely scenario for the foreseeable future.


This sounds suspiciously like that old chestnut, the god of the gaps. You're splitting finer and finer hairs to maintain your position that, "no, really, they're not really doing what I'm saying they can't do", all the while self-driving cars are spreading and becoming more capable every year.

I don't think we have nearly as much visibility on what Waymo seems to believe about this tech as you seem to imply, nor do I think that their beliefs are necessarily authoritative. You seem disheartened that we haven't been able to solve self-driving in a couple of decades, and I'm of the opinion that geez, we basically have self-driving now and we started trying only a couple of decades ago.

How long after the invention of the transistor did we get personal computers? Maybe you just have unrealistic expectations of technological progress.


Level 5 was the goal and the expectation everybody was aiming for. Waymo's views are easy to infer from their actions. Level 4, especially as they are doing it, is in no way whatsoever a stepping stone to level 5. Yet they're spending tremendous resources on things that would have absolutely no place in level 5 autonomy. It seems logically inescapable that not only do they think they'll be unable to hit level 5 in the foreseeable future, but also that nobody else will be able to either. If you can offer an alternative explanation or argument, please share!

Another piece of evidence comes from last year, when Google scaled back Waymo with layoffs and "paused" its efforts at developing self-driving truck technology. [1] That technology would require something closer to L5 autonomy, because, again, massive preprocessing is brittle and doesn't scale well at all. Other companies that were heavily investing in self-driving tech have done similarly; Uber, for instance, sold off its entire self-driving division in 2021. I'm certainly happy to hear any sort of counter-argument, but you need some logic, instead of ironically being the one trying to mind-read me or Waymo!

[1] - https://www.theverge.com/2023/7/26/23809237/waymo-via-autono...


Not necessarily. If self-driving cars "aren't ready" and then you redefine what ready is, you've absolutely got your thumb on the scale of measuring progress.


Other way around: Self driving cars "are ready" but then people in this thread seemed to redefine what ready means.


Why do some people gloat about moving goalposts around?

15 years ago self driving of any sort was pure fantasy, yet here we are.

They'll release a version that can drive in poor weather and you'll complain that it can't drive in a tornado.


> "15 years ago self driving of any sort was pure fantasy, yet here we are."

This was 38 years ago: https://www.youtube.com/watch?v=ntIczNQKfjQ - "NavLab 1 (1986) : Carnegie Mellon : Robotics Institute History of Self-Driving Cars; NavLab or Navigation Laboratory was the first self-driving car with people riding on board. It was very slow, but for 1986 computing power, it was revolutionary. NavLab continued to lay the groundwork for Carnegie Mellon University's expertise in the field of autonomous vehicles."

This was 30+ years ago: https://www.youtube.com/watch?v=_HbVWm7wdmE - "Short video about Ernst Dickmanns VaMoR and VaMP projects - fully autonomous vehicles, which travelled thousands of miles autonomously on public roads in 1980s."

This was 29 years ago: https://www.youtube.com/watch?v=PAMVogK2TTk - "A South Korean professor [... Han Min-hong's] vehicle drove itself 300km (186 miles) all the way from Seoul to the southern port of Busan in 1995."

This was 19 years ago: https://www.youtube.com/watch?v=7a6GrKqOxeU - "DARPA Grand Challenge - 2005 Driverless Car Competition"


Stretching the timeline to 30 years doesn't make the achievement any less impressive.


It's okay! We'll just hook up 4o to the Waymo and get quippy messages like those in 4o's demo videos: "Oh, there's a tornado in front of you! Wow! Isn't nature exciting? Haha!"

As long as the Waymo can be fed with the details, we'll be good. ;)

Joking aside, I think there are some cases where moving the goalposts is the right approach: once the previous goalposts are hit, we should be pushing towards the new goalposts. Goalposts as advancement, not derision.

I suppose the intent of a message matters, but as people complain about "well it only does X now, it can't do Y" - probably true, but hey, let's get it to Y, then Z, then... who knows what. Challenge accepted, as the worn-out saying goes.


It's been 8 years and I still don't have my autonomous car.

Meanwhile I've been using ChatGPT at work for _more than a year_ and it's been tremendously helpful to me.

This is not hype, this is not about how AI will change our lives in the future. It's there right here, right now.


Of course. It's quite a handy tool. I love using it for searching documentation for some function that I know the behavior of, but not the name. And similarly, people have been using auto-steer, auto-park, and all these other little 'self driving adjacent' features for years as well. Those are also extremely handy. But the question is, what comes next?

The person I originally responded to stated, "We’re moving toward a world where every job will be modeled, and you’ll either be an AI owner, a model architect, an agent/hardware engineer, a technician, or just.. training data." And that's far less likely than us achieving L5 self-driving (if only because driving is quite simple relative to many of the jobs he envisions AI taking over), yet L5 self-driving seems as distant as ever as well.


> So long as LLMs regularly hallucinate, they're not going to be useful for much other than tasks that can accept relatively high rates of failure.

Yep. So basically they're useful for a vast, immense range of tasks today.

Some things they're not suited for. For example, I've been working on a system to extract certain financial "facts" across SEC filings. ChatGPT has not been helpful at all, either with design or implementation (except to give some broad, obvious hints about things like regular expressions), nor would it be useful for the actual automation.
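
(For context, the hints were roughly at this level. The pattern and the example sentence below are made up for illustration; real filings bury the numbers in tables and XBRL with wildly inconsistent phrasing, which is exactly where this approach falls apart:)

    import re

    filing_snippet = "Net revenue was $4,215 million for the year ended December 31, 2023."

    # Hypothetical pattern; real filings rarely phrase things this cleanly.
    pattern = re.compile(
        r"(?:net|total)\s+revenue\s+(?:was|of)\s+\$([\d,.]+)\s*(million|billion)?",
        re.IGNORECASE,
    )

    m = pattern.search(filing_snippet)
    if m:
        print(m.group(1), m.group(2) or "")  # -> 4,215 million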

But for many, many other tasks -- like design, architecture, brainstorming, marketing, sales, summarisation, step by step thinking through all sorts of processes, it's extremely valuable today. My list of ChatGPT sessions is so long already and I can't imagine life without it now. Going back to Google and random Quora/StackOverflow answers laced with adtech everywhere...


> I've been working on a system to extract certain financial "facts" across SEC filings. ChatGPT has not been helpful at all

The other day, I saw a demo from a startup (don't remember their name) that uses generative AI to perform financial analysis. The demo showed their AI-powered app basically performing a Google search for some companies, loosely interpreting those Google Stock Market Widgets that are presented in such searches, and then fetching recent news and summarizing them with AI, trying to extract some macro trends.

People were all hyped up about it, saying it will replace financial analysts in no time. From my point of view, that demo is orders of magnitude below the capacity of a single intern who receives the same task.

In short, I have the same perception as you. People are throwing generative AI into everything they can with high expectations, without doing any kind of basic homework to understand its strengths and weaknesses.


> So long as LLMs regularly hallucinate, they're not going to be useful for much other than tasks that can accept relatively high rates of failure.

But is this not what humans do, universally? We are certainly good at hiding it – and we are all good at coping with it – but my general sense when interacting with society is that there is a large amount of nonsense generated by humans that our systems must and do already have enormous flexibility for.

My sense is that's not an aspect of LLMs we should have any trouble with incorporating smoothly, just by adhering to the safety nets that we built in response to our own deficiencies.


The sigmoid is true of humans too. You can get 80% of the way to being sort of good at a thing in a couple of weeks, but then you hit the plateau. In a lot of fields, confidently knowing and applying this has made people local jack-of-all-trades experts... the person who often knows how to solve the problem. But Jack is no longer needed so much; ChatJack's got your back. Better to be the person who knows one thing in excruciating detail and depth, and never ever let anyone watch you work or train on your output.


I think it's more like an exponential curve where it looks flat moments before it shoots up.

Mapping the genome was that way. On a 20-year schedule, there was barely any progress for 15 years, and then, poof, done ahead of schedule.
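
(Back-of-the-envelope for why it looks that way: under steady doubling, nearly all of the output lands in the final few doublings. Illustrative numbers, not the project's actual figures:)

    # If capacity doubles every year, a 20-year project is only ~3%
    # done at year 15, and then finishes anyway.
    total = 2 ** 20
    for year in (5, 10, 15, 18, 20):
        print(f"year {year}: {2 ** year / total:.1%} complete")
    # year 15: 3.1% ... year 18: 25.0% ... year 20: 100.0%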



