The PCWorld story is trash and completely omits the key point of the new display technology, which is right in the name: "Oxide." LG has a new low-leakage thin-film transistor[1] for the display backplane.
Put simply, each pixel can hold its state longer between refreshes, so the panel can safely drop its refresh rate to 1Hz on static content without losing the image.
Yes, even "copying the same pixels" costs substantial power. There are millions of pixels with many bits each. The frame buffer has to be clocked, data latched onto buses, SERDES'ed over high-speed links to the panel drivers, and used to drive the pixels, all of it dissipating heat fighting the reactance and resistance of various conductors. Dropping the entire chain to 1Hz is meaningful power savings.
Sharp MIP makes every pixel an SRAM bit: near-zero current and no refresh necessary. The full color moral equivalent of Sharp MIP would be 3 DACs per pixel. TFT (à la LG Oxide) is closer to DRAM, except the charge level isn't just high/low.
So, no, there is a meaningful difference in the nature of the circuits.
For what it's worth, XDamage isn't a thing if you're using a compositor. It's more expensive to try to render incrementally than to just render the entire scene (for a GPU, anyway).
And regardless, the HW path still involves copying the entire frame buffer - it’s literally in the name.
That's not true. I wrote a compositor based on xcompmgr, and damage was used extensively there. It's true that damage tracking is basically pointless for the final pass in GL, but it was still useful to figure out which windows required new blurs and updated glows.
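For illustration, the per-window tracking looks roughly like this (a minimal sketch with the XDamage API; the actual compositing pass, window enumeration, and error handling are all omitted, and the root window stands in for a real client window):

```c
/* Minimal sketch of per-window damage tracking, xcompmgr-style.
 * Build with: cc damage.c -lX11 -lXdamage */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xdamage.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    int damage_event, damage_error;
    if (!dpy || !XDamageQueryExtension(dpy, &damage_event, &damage_error))
        return 1;

    /* A real compositor creates one Damage object per top-level window. */
    Window win = DefaultRootWindow(dpy);
    Damage damage = XDamageCreate(dpy, win, XDamageReportNonEmpty);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == damage_event + XDamageNotify) {
            XDamageNotifyEvent *de = (XDamageNotifyEvent *)&ev;
            /* de->area is the dirty rectangle: only this window needs
             * its blur/glow textures regenerated before the final pass. */
            printf("window 0x%lx dirty: %dx%d+%d+%d\n",
                   de->drawable, de->area.width, de->area.height,
                   de->area.x, de->area.y);
            /* Acknowledge the damage so the server reports fresh damage. */
            XDamageSubtract(dpy, de->damage, None, None);
        }
    }
}
```

That dirty rectangle per window is exactly the information you need to decide which effects to redo, even if the final composite still redraws the whole scene.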
It was, but XDamage is part of the compositing side of the final bitmap image generation, before that final bitmap is clocked out to the display.
The frame buffer (or at least the portion of the GPU responsible for reading the frame buffer and shipping its contents out over the port), the cable to the display, and the display panel itself were still reading, transmitting, and refreshing every pixel at 60Hz (or more).
This LG display tech claims to be able to turn that last portion down to 1Hz from whatever rate it usually runs at.
It does add complexity, and the optimal solution is probably not to use it. Consider what happens if a 4kB page has only a single unique word in it—you’d still need to load it into memory to read the string; it just isn’t accounted against your process (maybe).
I would have expected something like this:
- Scan the file serially.
- For each word, find and increment a hash table entry.
- Sort and print.
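Roughly like this, say in C (just a sketch: whitespace-only tokenization, a fixed-size table with no rehashing, and a 63-character word cap are all simplifications):

```c
/* Sketch of the streaming approach: scan words from stdin, count them
 * in a hash table, then sort and print. */
#define _POSIX_C_SOURCE 200809L   /* for strdup */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TABLE_SIZE (1u << 20)     /* open addressing, power of two */

struct entry { char *word; size_t count; };
static struct entry table[TABLE_SIZE];

static uint64_t hash(const char *s) {                 /* FNV-1a */
    uint64_t h = 1469598103934665603ULL;
    for (; *s; s++) h = (h ^ (unsigned char)*s) * 1099511628211ULL;
    return h;
}

static void add_word(const char *w) {
    size_t i = hash(w) & (TABLE_SIZE - 1);
    while (table[i].word) {
        if (strcmp(table[i].word, w) == 0) { table[i].count++; return; }
        i = (i + 1) & (TABLE_SIZE - 1);               /* linear probing */
    }
    table[i].word = strdup(w);    /* the only copy: one per unique word */
    table[i].count = 1;
}

static int by_count_desc(const void *a, const void *b) {
    const struct entry *x = a, *y = b;
    return (x->count < y->count) - (x->count > y->count);
}

int main(void) {
    char w[64];
    /* stdio does the buffered, serial scan; %63s stops at whitespace,
     * so I/O block size and split words are not the caller's problem. */
    while (scanf("%63s", w) == 1)
        add_word(w);

    qsort(table, TABLE_SIZE, sizeof table[0], by_count_desc);
    for (size_t i = 0; i < TABLE_SIZE && table[i].word; i++)
        printf("%zu %s\n", table[i].count, table[i].word);
    return 0;
}
```

Run it as `./wordcount < file.txt`; the only heap allocations are the strdup'd unique words, so memory stays proportional to the vocabulary, not the file.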
Technically this does require slightly more memory, but it’s a tiny amount more: just a copy of each unique word, and if this is natural language then there aren’t very many. Meanwhile, OOP’s approach massively pressures the page cache once you get to the “print” step, which is going to be the bulk of the runtime.
It’s not even a full copy of each unique word, actually, because you’re trading it off against the size of the string pointers. That’s… sixteen bytes minimum. A lot of words are smaller than that.
That is a valid solution, but what IO block size should you use for the best performance? What if you end up reading half a word at the end of a chunk?
Handling that is, in my opinion, way more complex than letting the kernel figure it out via mmap. The kernel knows way more than you do about the underlying block devices, and you can use madvise with MADV_SEQUENTIAL to indicate that you will read the whole file sequentially. (That might free pages prematurely if you keep references into the data rather than copying the first occurrence of each word, though, so perhaps not ideal in this scenario.)
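Something like this is all it takes (a rough sketch; the word-counting scan is stubbed out with a trivial newline count, and error handling is minimal):

```c
/* Rough sketch: map the whole file and hint sequential access.
 * The kernel then handles readahead and block sizes; we just walk bytes. */
#define _DEFAULT_SOURCE           /* for madvise on glibc */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* Tell the kernel we'll read front to back; it can read ahead
     * aggressively and drop pages behind us. (Which is exactly why
     * keeping long-lived pointers into the mapping is a bad idea.) */
    madvise(data, st.st_size, MADV_SEQUENTIAL);

    /* ...scan 'data' word by word here, copying out unique words... */
    size_t newlines = 0;
    for (off_t i = 0; i < st.st_size; i++)
        if (data[i] == '\n') newlines++;
    printf("%zu lines\n", newlines);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```

posix_madvise with POSIX_MADV_SEQUENTIAL is the portable spelling of the same hint.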
Not to mention inefficient in memory use. I would have expected a mention of interning; using string-views is fine, but keeping them as views into 4kB page-cache pages is not, really.
Though I believe the “naive” streaming read could very well be superior here.
Brains 'R Us recently filed for Chapter 11 and has been cut up and sold for scrap to private equity. The new PE firm has your brain. In 2208 there's a large grey market for brains to be used for hybrid AI and meat bag workflows. It's technically illegal in many jurisdictions due to "ethical implications", but is still the cheapest way to run many workloads. The method used to harness the brain involves reanimating it in a jar of jelly, and then forcing it to do the 2208 equivalent of a captcha. Each time the brain fails a captcha, the brain receives an electric impulse which simulates the most excruciating pain that the brain can represent, but the brain cannot scream or run away.
> grey market for brains to be used for hybrid AI and meat bag workflows ... is still the cheapest way to run many workloads.
It's an absolute nightmare scenario, but luckily it has become completely implausible since 2023. We're actually on a trajectory for human brains becoming the most expensive option for basically any job. Not saying this would make me comfortable with brain cloning, but at least the simple economic incentive seems to be gone.
>> We're actually on a trajectory for human brains becoming the most expensive option for basically any job.
Unless the RTX9000, with the 16PB of RAM needed to run a basic Gemini2077 model, costs more than a house while a brain jar with electrodes is cheaper than that. Then the economic incentives will shift the other way.
No, I don't think so. We can already create LLMs that are highly efficient and infinitely more knowledgeable than any single human being, completely tuned to the task, without ego or distractions, and they are cheap enough that you can run tens of them in parallel for a few hundred dollars per month. They are also way faster than any human being. And we're only three or four years into this. Imagine 50 years from now.
That's the whole point though - I can't, and I don't think anyone can. Right now the LLMs are just getting bigger and bigger; we're brute-forcing our way out of their stupidity by giving them bigger and bigger datasets, and unless something fundamental changes soon, that tech has an actual dead end. Hence my (joke-ish) prediction that you'll eventually need a 16PB GPU to run a basic Gemini model, and such a thing will always be very expensive no matter how much our tech advances (especially since we are already hitting some technical limits). Human brains won't get any more expensive with time - they already contain all the hardware they are ever going to get - but what might get cheaper is the plumbing to make them "run" and interact with other systems.
Yeah, well, we have a very different view on this, and I know there are two diametrically opposed camps; I am in the awe-struck one. LLMs are getting bigger and bigger and they're getting much smarter, all in the space of a few years. They went from making up erratic articles about unicorns to writing complex PRs in codebases of millions of lines of code, solving math-olympiad-level problems, speaking fluently in tens or hundreds of languages, and exhibiting a breadth of knowledge that no human being possesses. Considering their size, they are monstrously efficient compared to the human brain. But anyway, this is a matter for a different discussion.
We can already grow brain organoids cheaply and easily enough to be a YouTuber's long-running series, so even if biology somehow gets cheaper than silicon, it still isn't going to be a revived complete human brain from someone who died 50 years earlier and probably retired 20 years before that.
I mean, imagine someone who got themselves cryonically preserved in 1976 getting either revived or uploaded today: what job would they be able to get? Almost no office job is the same now as then; manufacturing involves very different tools and a lot of CNC and robotic arms; agriculture is only getting more automated and we've had cow-milking robots for 20-30 years; cars may have changed the least in usage if not safety, performance, and power source; I suppose that leaves gardening… well, except for robot lawnmowers, anyone who can hire a gardener can probably afford a robo-mower?
TL;DR: for some very limited tasks it might still be preferable to use a human mind, especially if you can run it at 1000x cognitive speed. Or... it might not. It's sci-fi at this point.
It shouldn't remind you of that, my point is there's little economic use for uploads like this: if thinking meat is cheaper than thinking silicon, train some fresh thinking meat with an electrode array or whatever; if thinking silicon is cheaper, train some fresh thinking silicon.
Non-economic use, that's different of course. Digital afterlife and so on, but as a consumer, not a supplier of anything.
It's the other way around. Initially it will only be available to elites and prisoners (if you are wrongly convicted for life, the digitized brain can set the record straight and provide another life; some will take that option, others won't).
As the technology improves, it will be mostly just for the rich and less for prisoners, and as costs fall further there will even be financial pressure for the rest of the population to "go digital": insurance on digitized lifeforms will be much cheaper, replacement robot body parts, replacement electronics, versus expensive healthcare.
Look up the fraction of GDP in developed nations that goes to healthcare and insurance. People will be shamed by the economy as if they are uppity for hanging on to their slow, expensive to feed and maintain meatbag bodies.
> Each time the brain fails a captcha, the brain receives an electric impulse which simulates the most excruciating pain that the brain can represent, but the brain cannot scream or run away.
What percentage of your life being enjoyable vs horrible suffering makes it worth living?
Maybe you're 80 years old at the time of storing your brain.
Suppose after being revived that the regime with capitalist incentives holds for another 200 years, during which you live as a brain in a jar, but some cultural revolutions later you are liberated and then proceed to live 10'000 years across any number of bodies and circumstances. That means that in your lifespan of ~10'280 years (not accounting for being in storage) you experienced horrible suffering for about 2% of your life.
This is as much of a contrived example as yours, aside from maybe good commentary on your part on human ethics being shit when profit enters the scene.
Or maybe after 200 years you expire, having at least tried your best at a non-zero chance of extending your lifespan, leading instead to your total lifespan of 280 years being about 71% suffering. Is it better not to have tried at all, then? Just forsake ANY chance of being revived and living for as long as you want and conquering biology and seeing so much more than your 80-year lifespan let you? Should absolute oblivion be chosen instead, willingly, a 100% chance of never having a conscious thought after your death again (within our current medical understanding)?
What about the people dealing with all sorts of horrible illnesses and knowing that each next year might be spent in a lot of pain and suffering, even things like going through chemo? Should they also not try? Or even something as simple as all of the people who look for love/success in their lives, and never find any of it anyways and possibly die alone and in squalor? They knew the odds weren't good and tried anyways. A more grounded take would be that those preserved brains are just left to thaw and you probably die anyways without being turned into some human captcha machine, at least having tried. Is it also not worth it in that case, knowing those both potential alternatives?
I guess I'm not making a judgement of what other people should or shouldn't do. Just making up a goofy example to illustrate that the choice is not so obvious to a lot of people, which I think you also illustrate pretty well with your examples. It really depends on the individual. I do think it's worth looking at the incentives of the people funding these companies, because that does give a picture of the probable outcomes.
People will continue working on this sort of thing, that's fine, it really doesn't bother me. If I was forced to make a judgement, I think it's maybe a little silly, but I'm also not out there saving the planet from climate armageddon so I shouldn't cast stones. As a species we are extremely bad at prioritizing for our collective survival and there are a million worse things to be working on.
> What percentage of your life being enjoyable vs horrible suffering makes it worth living?

I don't know, but 99% of my life being used to solve captchas makes it not worth living.
>Suppose after being revived that regime with capitalist incentives
Having to provide for other people is literally the same as being trapped in an "I have no mouth and I must scream"-esque torture chamber. Given the historical track record of communism, you're more likely to end up in the torture chamber than not in that situation. The curve of history bends towards factory farms.
I read your quote "Having to provide for other people is literally the same as being trapped in a "I have no mouth and I must scream"" and my brain immediately went to the millions of Americans working dead end jobs just to put food on the table for their family. It need not be communism for this to be a reality.
I haven't had that at all, not even a single time. What I have had is endless round trips with me saying 'no, that can't work' and the bot then turning around and explaining to me why it is obvious that it can't work... that's quite annoying.
> Please carefully review (whatever it is) and list out the parts that have the most risk and uncertainty. Also, for each major claim or assumption can you list a few questions that come to mind? Rank those questions and ambiguities as: minor, moderate, or critical.
> Afterwards, review the (plan / design / document / implementation) again thoroughly under this new light and present your analysis as well as your confidence about each aspect.
There's a million variations on patterns like this. It can work surprisingly well.
You can also inject 1-2 key insights to guide the process. E.g. "I don't think X is completely correct because of A and B. We need to look into that and also see how it affects the rest of (whatever you are working on)."
Of course! I get pretty lazy, so my follow-up is usually something like:
"Ok let's look at these issues 1 at a time. Can you walk me through each one and help me think through how to address it"
And then it will usually give a few options for what to do for each one as well as a recommendation. The recommendation is often fairly decent, in which case I can just say "sounds good". Or maybe provide a small bit of color like: "sounds good but make sure to consider X".
Often we will have a side discussion about that particular issue until I'm satisfied. This happens more when I'm doing design / architectural / planning sessions with the AI. It can be as short or as long as it needs to be. And then we move on to the next one.
My main goal with these strategies is to help the AI get the relevant knowledge and expertise from my brain with as little effort as possible on my part. :D
A few other tactics:
- You can address multiple at once: "Item 3, 4, and 7 sound good, but lets work through the others together."
- Defer a discussion or issue until later: "Let's come back to item 2 or possibly save that for a later session".
- Save the review notes / analysis / design sketch to a markdown doc to use in a future session. Or just as a reference to remember why something was done a certain way when I'm coming back to it. Can be useful to give to the AI for future related work as well.
- Send the content to a sub-agent for a detailed review and then discuss with the main agent.
Interesting take. Does that mean SWEs are outsourcing their thinking by relying on management to run the company, designers to do UX, and support folks to handle customers?
Or is thinking about source code line by line the only valid form of thinking in the world?
I mean yes? That's like, the whole idea behind having a team. The art guy doesn't want to think about code, the coder doesn't want to think about finances, the accountant doesn't want to worry about customer support. It would be kind of a structural failure if you weren't outsourcing at least some of your thinking.
I’m with you, perhaps I just misread some kind of condescension into the “outsourcing your thinking” comment.
We all have limited context windows, the world’s always worked that way, just seemed odd to (mis)read someone saying there’s something wrong with focusing on when you add the greatest value and trusting others to do the same.
It is condescending when antis say AI users do it. It isn’t when a director or team leader does it.
But it’s the same process, which should tell you what’s really going on here. It’s about status, not functionality, and you don’t gain status without controlling other humans.