Hacker News | ruleforty's comments

Might I suggest the LMN-3? Soup to nuts, it's the most positive project I've seen, and it's gaining traction and affordability: https://youtu.be/h5UmPTttN1s


Apollo is, and hopefully remains, still a thing. I don’t see promoting the shit Reddit app here as acceptable, because if it’s mobile we’re talking about, then unless you have something like Tailscale running on a NAS at home, the Pi-hole and ad-blocking point is also moot.

The reason Apollo is so successful is that it gives users all the signal without the noise, and Reddit is pushing more and more noise down on users.


It’s looking like the answer to your question is probably blood red: https://www.msn.com/en-us/money/savingandinvesting/fidelity-...


Yeah, so OP's point about "making plenty of money" isn't really true. Now, I'm sure they could pull a Twitter and lay off 75% of the company to reach profitability.


No, this is the entire point I'm making: they are not interested in creating sustainable infrastructure for the internet. They want investors, they want to IPO, they want to grow and grow and grow. If they destroy themselves in the process, that's a bet everyone invested along the way is perfectly fine with; demanding, even. The entire structure of this is cancerous.


Yeah, I get your point now. I agree the grow-at-all-costs mentality is not good for building a sustainable company. Investors are never happy with normal growth rates; they want exponential growth, which isn't possible after a certain point without ruining your product.


And in some measure it’ll always have these communities, but like was mentioned above: watching cat GIFs, some drunk girl smashing a wine bottle, or the random police videos - those non-niche areas are in jeopardy… and good. It’s time they do their little IPO and we all move on to whatever emerges after the dust settles.


That seems to just be a nerded-out version of this article from many years ago (with the same theme and discourse, perhaps the original and uncited), which is much more digestible and lacking the pretentiousness: https://waitbutwhy.com/2015/01/artificial-intelligence-revol...

PS: the scale of the problems and the civilization around the model is in part 2 of the link


The article you link is a derivative and somewhat ELI5 reinterpretation of a lot of other, older work, including in particular the articles Eliezer Yudkowsky and others published on LessWrong, all done before OpenAI was a thing, before deep learning was something widely talked about.

The article GP posted is in direct lineage of the LessWrong body of work/community. It's not "pretentious" or "nerded out" - it's less handwavy, addresses a specific problem, and assumes the reader is broadly familiar with the ideas discussed - whereas the WaitButWhy article is basically AI safety 101.

EDIT: and I will spoil the article somewhat for those on the fence about whether to read it: it shows how an explicitly non-agent, limited, nerfed AI could unwittingly trick you into bootstrapping a proper generic AI on top of it - not because it wanted to, or knew it would happen, but because it pattern-matched you a concise and plausible-looking answer that has a fatal, complexity-escalating bug in it.

(Hint/spoiler: you know how you can turn a constrained computational system (e.g. HTML5 + CSS3) into a Turing-complete one just by running it in a loop that preserves its state? Something equivalent happens here.)
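Rule 110 is a classic concrete instance of that spoiler: a single update of the automaton is a trivially bounded computation, but iterating it in a loop that preserves the cell state is Turing-complete (Cook, 2004). A rough Python sketch of the idea:

```python
# A single application of elementary cellular automaton Rule 110 is a
# trivial, obviously-bounded computation. Universality comes only from
# the outer loop that feeds the state back in.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """One constrained, non-universal update of a circular tape."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def run(cells, steps):
    """The state-preserving loop - this is where the power comes from."""
    for _ in range(steps):
        cells = step(cells)
    return cells

tape = [0] * 15 + [1]
print(run(tape, 5))
```

The same shape applies to the HTML5 + CSS3 example: each "step" is harmless, and the loop with preserved state is what crosses the line.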


I don't think the parent's point is a stylistic criticism of the linked post.

Rather, I'd take the parent as saying that the linked post might cite the properties of current LLM systems, but it ultimately isn't using them in its arguments. It's the same old argument: "start adding capabilities and boom, suddenly you have an agent that takes over the world".

The understanding we have of current LLMs is that they're very capable as text synthesizers but quite unreliable about facts concerning the world. We've gone from GPT-2 to ChatGPT, and the systems have gotten many times better (as smooth text synthesizers), but they haven't gotten many times more reliable in the particular descriptions of the world they give. They still regularly say clearly wrong things without prompting - like every paragraph, for anything slightly obscure.

The main thing is that the rise of deep learning has actually highlighted goal accomplishment as a far more difficult task than classification, information retrieval, and text/image synthesis. Self-driving cars keep getting mentioned, and that's justified imo, 'cause a huge amount of resources has been put into getting modern systems to accomplish a fairly defined and limited "real world task", and all those efforts have mostly failed.

The key distinction of goal accomplishment is that a system has to engage in a long loop of making small judgements, each of which has to be correct to an extraordinarily high degree. The linked text elides the difference between these tasks by talking of a vague breakthrough that makes the program "insightful". But we have to consider what's actually needed for whatever simple thing one wishes to do. Our present machine, ChatGPT, might, for example, give correct instructions for some complex auto repair (combining the patterns of several simple repairs, maybe). But it couldn't "walk you through the process of doing the task", since it would continue its tendency to say wrong things, and such wrong things actually cause damage (like wrong turns in the self-driving cars).

Your point, and the linked text's point, is that these things are insidious and can "get more complex" without one realizing it. But no one has made a reliable goal-accomplishing device out of just hooking a Turing machine to a neural network. The point is that neural networks today aren't "nerfed" in any way in the spectrum of possibilities; they're the best people can do, and they're making great progress by some measures but still, by many fairly clear measures, failing to go beyond their limitations.

I'm not saying that it's impossible that a great advance happens involving making neural networks reliable enough to accomplish goals (and to respond robustly to changes in the world, etc., etc.). It's not impossible that it would happen at random, but it seems no more likely than any other advance happening at random. And also, if such an advance happens purposefully, there's no reason to think it will happen in a "you give it the ability to be much more accurate and to seek goals, but you know nothing about the goals it seeks" way.

And I'm familiar with the LessWrong community. That group seems very tied to its initial assumptions, and it seems to fail to note the processes involved in current and potential future AIs. This is a long post, so I'll just say one consistent error they make is assigning probabilities to fundamental unknowns. That's an abuse of the assumptions of any theory of probability, and it mostly results in a belief that whatever thing has some chance of appearing without explanation.


I second this, with the caveat that stealing someone’s car simply means they don’t have transportation (cost aside). For many, theft of a phone could mean losing all the things the OP mentions. Your average consumer has no idea what backup codes are. Your average consumer relies on contactless pay. Your average consumer relies on 2FA.

There is massively more inconvenience (cost aside, again) in having your phone stolen versus your car. It could upend your life a lot worse than having your wallet or laptop stolen as well.

There are only so many things we can expect from the average consumer, and theft of a phone massively changes that person’s life.

I’m for increasing the resources put into specifically phone theft.

My laptop was stolen from the airport (inside security!) and I had needle-like precision on where the cops could find it. Nothing was done, and I simply got back the insured value for it and the items with it. It put me out of work for the trip I was taking, but I’m not your average consumer: everything was backed up, and I still had my phone. If my phone had been taken and not on me, I would have been in a much larger world of pain (though I have backup codes, etc.).


I came across the LMN-3: there’s a lot to unpack, but it might be the most mature project in terms of both hardware and software out there. It’s as close as it gets to having the right tools, PCBs, etc. for an open-source Teenage Engineering OP-1:

https://www.synthtopia.com/content/2022/06/25/the-lmn-3-an-o...


> It's just too good at telling us what we want to hear.

This got me. Imagine segmenting the LLM into a zeitgeist, various demographics, geolocation, etc., and then all of these bubbling up to knobs on bots you could turn, unleashing content to skew public discourse in far more ways than it already is. Make up “facts”, publish articles, generate images, issue calls to action, etc. for each topic, demographic, etc.

The only image I could conjure up was “Q” of QAnon fame, and it’s scary.


Give an infinite number of ChatGPTs writing code an infinite amount of time and they will write a ChatGPT


In my case, I don’t enter my home through the garage when leaving the house for the dog or just walking to the local shops and beach.

I solved opening the garage door when leaving with a much simpler approach: “Alexa, I’m leaving” (a regex for various phrases), along with an Aqara button for when I was on the phone.

For arrival of my car only, and to avoid triggering when I would walk nearby (park and beach one block in either direction), I had Home Assistant dialed in to notice the Bluetooth connection to my car’s audio by its ID from within the app, along with the proximity and whether I was moving towards or away. Away would close, and towards would of course open. This was particularly tricky, but as you explain, always with failsafes in place: a garage door opener in the car, and the opening mechanism was set to close in 2 minutes but reset if there was motion in the garage. If I was marked away and there was motion in the garage, it would close and also sound an audible alarm.
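The arrival/departure decision described above boils down to a small state machine. Here's a rough Python sketch of that logic; the field names, the 150 m threshold, and `garage_action` are all hypothetical illustrations, not a real Home Assistant or NodeRed API:

```python
from dataclasses import dataclass

@dataclass
class CarState:
    bt_connected: bool      # phone sees the car's audio by its Bluetooth ID
    distance_m: float       # current distance from home
    prev_distance_m: float  # distance at the previous poll

def garage_action(state: CarState) -> str:
    """Return 'open', 'close', or 'ignore' for the garage door."""
    if not state.bt_connected:
        return "ignore"  # on foot (walking past the park/beach): do nothing
    approaching = state.distance_m < state.prev_distance_m
    if approaching and state.distance_m < 150:
        return "open"    # car approaching and nearby: open the door
    if not approaching:
        return "close"   # moving away in the car: close the door
    return "ignore"      # approaching but still far away
```

For example, `garage_action(CarState(True, 100.0, 200.0))` would open the door, while walking past without the car's Bluetooth connected is always ignored. The failsafes (auto-close timer, motion reset, audible alarm) would sit on top of this in the actual automation.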

None of those, of course, would have been possible without NodeRed. It’s so much fun to use.


In my experience presence detection, if you want it to be useful, almost always has to be custom for your situation. Everyone has different patterns based on personal preference and living situation. Glad you figured yours out!

NodeRed is great. I don't use it, but if you don't want to spend a ton of time writing rote automations in code, it's a dope option (and that's an absolutely fair take to have, imo).

