Hacker News | feisty0630's comments

Interesting that it reads a bit like it came from a Markov chain rather than an LLM. Perhaps limited training data?


Early LLMs used to have this often. I think that's where the "repetition penalty" parameter comes from. I suspect output quality can be improved with better sampling parameters.
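For context, a minimal sketch of what a CTRL-style repetition penalty does to the logits before sampling (the function name and numbers here are illustrative, not from any particular library):

```python
import numpy as np

def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    # CTRL-style penalty: for tokens already generated, divide positive
    # logits by the penalty and multiply negative ones, so repeated
    # tokens become less likely either way.
    logits = logits.copy()
    for tok in set(generated_ids):
        if logits[tok] > 0:
            logits[tok] /= penalty
        else:
            logits[tok] *= penalty
    return logits

logits = np.array([2.0, -1.0, 0.5])
penalized = apply_repetition_penalty(logits, generated_ids=[0, 1], penalty=2.0)
# token 0: 2.0 -> 1.0, token 1: -1.0 -> -2.0, token 2 untouched
```

With penalty = 1.0 the sampling distribution is unchanged; larger values push the model away from loops of the kind Markov chains are prone to.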


It is lacking all recorded text from the past 200 years. ;)

It would be interesting to know how much text was generated per century!


I fail to see how the two concepts equate.

LLMs have neither intelligence nor problem-solving ability (and I won't be relaxing the definition of either so that some AI bro can pretend a glorified chatbot is sentient)

You would, at best, be demonstrating that the sharing of knowledge across multiple disciplines and nations (which is a relatively new concept - at least at the scale of something like the internet) leads to novel ideas.


I've seen many futurists claim that human innovation is dead and all future discoveries will be the results of AI. If this is true, we should be able to see AI trained on the past figure its way to various things we have today. If it can't do this, I'd like said futurists to quiet down, as they are discouraging an entire generation of kids who may go on to discover some great things.


> I've seen many futurists claim that human innovation is dead and all future discoveries will be the results of AI.

I think there's a big difference between discoveries through AI-human synergy and discoveries through AI working in isolation.

It probably will be true soon (if it isn't already) that most innovation features some degree of AI input, but still with a human to steer the AI in the right direction.

I think an AI being able to discover something genuinely new all by itself, without any human steering, is a lot further off.

If AIs start producing significant quantities of genuine and useful innovation with minimal human input, maybe the singularitarians are about to be proven right.


I'm struggling to get a handle on this idea. Is the idea that today's data will be the data of the past, in the future?

So if it can work with what's now past, it will be able to work with the past in the future?


Essentially, yes.

The prediction is that AI will be able to invent the future. So if we give it data from our past without knowledge of the present... what type of future will it invent, and what progress will it make, if any at all? And not just having the idea, but implementing it in a way that actually works with the technology of the day, and building on those things over time.

For example, would AI with 1850 data have figured out the idea of lift to make an airplane and taught us how to make working flying machines and progress them to the jets we have today, or something better? It wouldn't even be starting from 0, so this would be a generous example, as da Vinci was playing with these ideas in the 15th century.

If it can't do it, or what it produces is worse than what humans have done, we shouldn't leave it to AI alone to invent our actual future. Which would mean reevaluating the role these "thought leaders" say it will play, and how we're educating and communicating about AI to the younger generations.


> I would have a chance to get a rudimentary insight on what the world was like at that time

Congratulations, you've reinvented the history book (just with more energy consumption and less guarantee of accuracy)


History books, especially those from classical antiquity, are notoriously not guaranteed to be accurate either.


Do you expect something exclusively trained on them to be any better?


To a large extent, yes. A model trained on many different accounts of an event is likely going to give a more faithful picture of that event than any one author.

This isn't super relevant to us because very few histories from this era survived, but presumably there was sufficient material in the Library of Alexandria to cover events from multiple angles and "zero out" the different personal/political/religious biases coloring the individual accounts.
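The "zeroing out" intuition can be put as a toy simulation: assuming each account's bias is independent and not systematically skewed in one direction (a big assumption for real chroniclers), the pooled estimate lands far closer to the truth than a typical single account:

```python
import random

random.seed(0)
true_value = 10.0
# Each chronicler reports the true event plus an idiosyncratic bias,
# here drawn uniformly from [-3, 3].
accounts = [true_value + random.uniform(-3, 3) for _ in range(10_000)]
consensus = sum(accounts) / len(accounts)
print(abs(consensus - true_value))  # tiny compared to a single account's error
```

If the biases all lean the same way (e.g. every surviving source shares the victor's politics), averaging does not help, which is the real caveat with ancient histories.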


You express a desire for more FreeBSD posts and then immediately wade into all the typical flame-warring that surrounds most BSD/ZFS posts (systemd, ECC RAM), and it's been that way for over a decade at this point.


Are you from the middle ages, or are you so out of touch with blue-collar work that you're under the impression the average sewer worker has to manually handle waste?


It's not "dumb"; you're just presenting a strawman that directly contradicts what the person you're replying to wrote.

You might indeed be shocked to find that not everyone consumes fast food.


What strikes me about this exchange is no one is talking about the money. In the past, you could do either and no one had to care except you. Now a lot of jobs that people could find fulfilling aren't because the economy is so distorted, so how are we supposed to honestly look at this? I guess let's walk these people off the plank and get this over with...


Did the people who worked on the farm to grow and harvest your food enjoy it?


Are you seriously and earnestly arguing that harm-minimisation is useless and we should all just open the human-suffering throttle, or did you just not think that far ahead?

I am hoping the latter. Being foolish is far more temporary a condition than being cruel.


Increasing productivity is how we minimize harm. Many people hate their job but are happy to have it because it allows them to consume things. More production = less suffering


How are you “minimizing harm” by pearl clutching about not eating fast food? The front line people you are interacting with at the fast food restaurant or the grocery store have it easiest in the chain of events that it takes food to get to you. Do you think that fast food workers have it harder than the people at the grocery store?


0.8*harm < 0.81*harm - hope this helps!
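Spelled out, the inequality holds for any positive amount of harm (the 0.8 and 0.81 figures are just the illustrative numbers from above):

```python
harm = 1.0  # any positive quantity works
assert 0.8 * harm < 0.81 * harm  # less harm is still less harm
```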

Also, the core point is about people being able to find meaning in their work. That you've decided to laser in on this specific point to go on a tangent of whataboutism is largely irrelevant.

Have a nice day.


The fact is that most of the 3-4 billion+ people on earth don’t “find meaning in their work” and they only work because they haven’t overcome their addiction to food and shelter. If the point was irrelevant to your argument, why make it?


I didn't actually make the point initially. I was challenging the reply's point that:

a) just because some people are miserable at work, doesn't mean we shouldn't care that other people might become miserable at work

b) Someone saying they prefer their food to be made without suffering is clearly a hypocrite in all cases because... there are miserable people in fast food jobs?

I mean... really. Come on now.


People who work in fast food may not be “passionate” about their job. But they aren’t “suffering”. You aren’t relieving anyone’s “suffering” by not eating fast food or even if there was no fast food. They aren’t “suffering” anymore than people working at the grocery store.

Cry me a river for software developers (been delivering code professionally for 30 years and before that as a hobbyist) because now we have something that makes us more efficient.


I don't know if you're intentionally being obtuse or you just failed third grade reading comprehension, but can you please go argue with the people actually making these points (rather than me, a random person who has replied to them)?


So exactly what point are you trying to make? That software developers - at least the employed ones - “are suffering” because of AI? That you don’t eat fast food because you believe the employees are being exploited? What exactly is your point?


As soon as Intel killed Itanium, the clock was ticking for HP-UX.


It's almost as if different equipment can serve different purposes...


There are F500 companies shipping Ubuntu Core on devices that will only permit signed firmware, so I'm not sure your assessment is correct.

https://buildings.honeywell.com/au/en/products/by-category/b...


Depending on the product, this might be OK! If you've ever had cause to closely read the GPLv3, the anti-tivoisation clause for some reason is only really aimed at "User products" (defined as "(1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling"). This one looks like it's a potential grey area, since it's not obvious if it's intended for buildings that anyone would live in.


I worked on an embedded product that was leased to customers (not sold). The system included GPLv3 portions (e.g. bash 5.x), but the company concluded that we did not need to offer source code to its customers.

The reasoning was that the users didn’t own the device. While I personally believe this is not consistent with recent interpretations of the license by the courts, I think they concluded that it was worth the risk of a customer suing to get the source code, as the company could then pull the hardware and leave that customer high and dry. It is unlikely any of their users would risk that outcome.


Take a look at their customer testimonials [0] and ask yourself if they have recently made anticompetitive or user-hostile moves. Now, ask yourself: do you think they like being beholden to a license that makes it harder for them to keep their monopolies?

[0]: https://ubuntu.com/pro/

Edited to add: it would be cool if, instead of the top-most wealth-concentrators F[500:], there was an index of the top-most wealth-spreaders F[:500]. What would that look like? A list of cooperatives?


As long as nobody sues them everything is fine


And if drivers followed the Safe Driving Protocol (SDP), we wouldn't need airbags. Real life happens regardless of the imaginary frameworks infosec people dream up.

