It's hard not to imagine that OpenAI is attempting to build a developer-tools ecosystem. It makes sense, as it's one of the few fields in AI currently able to generate sales.
Assuming that we'll come to our senses, I think we'll be looking back at social media, in its current form, the same way we now look at the Victorians using opium as cough medicine. It works, but holy shit are you doing it wrong.
Search engines appear to care more about being good "Netizens". It's not like GoogleBot never crashed a site, but it's rare. Search engine bots check if they need to back off for a bit, they check ETags, they notice when a page changes infrequently, and they lower their crawl frequency accordingly.
If you train an LLM, it's not like you keep a copy of every page around, so there's no point in checking whether you need to re-scrape a page; you always do, because you store nothing.
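For contrast, here's a minimal sketch of the conditional-GET behaviour a polite crawler performs, using Java's built-in HTTP client from Kotlin (illustrative only; the URL handling and ETag bookkeeping are my assumptions, not any particular bot's code):

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Send the ETag remembered from the last crawl; if the server answers
// 304 Not Modified, the page hasn't changed and there's nothing to re-fetch.
fun fetchIfChanged(client: HttpClient, url: String, cachedEtag: String?): String? {
    val builder = HttpRequest.newBuilder(URI.create(url)).GET()
    if (cachedEtag != null) builder.header("If-None-Match", cachedEtag)
    val response = client.send(builder.build(), HttpResponse.BodyHandlers.ofString())
    return when (response.statusCode()) {
        304 -> null              // unchanged since last crawl: skip it
        200 -> response.body()   // changed: re-index and remember the new ETag
        else -> null             // error or rate limit: back off instead of hammering
    }
}
```

A scraper that keeps even this one string of state per URL can skip every page that hasn't changed; a scraper that stores nothing has to pull the whole thing down every single time.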
Personally I think people would be pretty indifferent to the new generation of scrapers, AI or otherwise, if they at least behaved and slowed down when they noticed a site struggling. If they had the slightest bit of respect for others on the web, this wouldn't be an issue.
In March 2025, Drew DeVault wrote a blog post called "Please stop externalizing your costs directly into my face"[1]. I think that is a pretty good guess as to why these bots do not care about the frequency of changes: it costs too much.
Every run is basically a fresh run, no state stored; every page is just fed into the machine anew. At least that's my theory.
The AI companies need a full copy of your page every time they retrain a model. Now, they could store that in their own datacenters, but that's a full copy of the internet, in a market where storage costs are already pretty high. So instead, they just externalize the storage cost. If you run a website, a public GitLab instance, Forgejo, a wiki, a forum, whatever, you basically function as free offsite storage for the AI companies.
Loads of games from the era roundtripped their textures through lossy S3/DXT compression and then stored them as uncompressed RGB or RGBA.
I know this because I wrote an Unreal Engine texture repacking tool with a "DXT detection" feature so that I wouldn't be responsible for losing DXT compression on a texture which had already paid the price, only to find that this situation was already hyperabundant in the ecosystem.
Many Unreal Engine games of the day could have their size robotically halved just by re-enabling DXT compression in any case where this would cause zero pixel difference. This was at a time before Steam, when game downloads routinely took a day, so I was very excited about this discovery. Unfortunately, the first few developers I emailed all reacted with hostility to an unsolicited tip from what I'm sure they saw as a hacker, so I lost interest in pushing and it went nowhere. Ah well.
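The core of such a check is simple. A minimal sketch of the idea (not the actual tool; the encode/decode pair is supplied by whatever DXT codec bindings you have on hand):

```kotlin
// If every pixel survives a fresh DXT encode/decode round trip unchanged, the
// texture has already been through DXT quantization once, so re-enabling
// compression on it is lossless.
fun alreadyPaidTheDxtPrice(
    rgba: ByteArray,
    encode: (ByteArray) -> ByteArray,   // raw RGBA -> DXT blocks (caller-supplied codec)
    decode: (ByteArray) -> ByteArray    // DXT blocks -> raw RGBA
): Boolean {
    return decode(encode(rgba)).contentEquals(rgba)
}
```

Any texture that passes this test can be stored compressed with zero pixel difference, which is exactly the "already paid the price" case.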
The article blew a huge opportunity to showcase the great diversity of “Pioneering Era” 3D accelerators (they weren’t called GPUs until later). But instead it just pretended it was always NVIDIA vs ATI, and threw in a few Voodoos.
It was only 3dfx and NVIDIA (since the TNT) that mattered in the 1990s though. All the other 3D accelerators were only barely better than software rasterization, if at all.
Seeing Quake II run butter smooth on a Riva TNT at 1024x768 for the first time was like witnessing the second coming of Christ ;)
Before that, you could even run Quake with anti-aliasing on one of those "barely better than software rasterization" cards, which couldn't even be done on the first Voodoo cards.
The G200 mattered to some degree for a long time, because most x86 servers up until a few years ago would ship a G200 implementation or at least something pretending to be a G200 card as part of their BMC for network KVM.
Probably started out as a real G200 chip, which might’ve been the cheapest and easiest to integrate in the 2000s? Or it had the needed I/O features to support KVM (since this would’ve involved reading the framebuffer from the BMC side), or Matrox was amenable to adding that.
I remember having a ton of servers with cut down Mach64 chips. They were so bad that you would get horizontal lines flickering across the screen while text was scrolling in an 80x25 text console. I don't know why server manufacturers go to so much effort to make the console as terrible as possible. Are they nostalgic for the 8 bit ISA graphics from the original 5150? They seem offended at the idea that someone might hook a crash cart directly up to their precious hardware.
They were probably forced to update when they dropped older busses. Without a PCI or AGP bus on there they have to find something that can hang off of a PCIe lane.
Even current Dell servers less than a year old ship with G200 graphics. If it works, why change it? A 1998 ASIC can be put in the corner of a modern chipset for pennies or less.
My contributions: the Matrox Parhelia, the first card supporting triple monitors, and the ATI All-in-Wonder, which did TV out when media centre TVs weren’t really a thing.
I remember there was a kernel module for the Matrox/MPlayer combination. It gave you a new device that MPlayer could use: `-vo mga` for the console and `-vo xmga` for X11. You couldn't tell the difference between them, and both produced high-quality hardware YUV output.
Recency bias, probably. IIRC the 3000 and 4000 series did make significant improvements in RTX performance, so compared to the 2000 series they're far more useful today.
The 4000 series certainly did; "shader execution reordering" gave a meaningful uplift to tasks that "underutilized warp units due to scattered useful pixels".
Matrox was really halfhearted with game support. They seemed far more interested in corporate customers, heavily advertising stuff like "VR" conference calls that nobody wanted. They were early with multi-monitor support, back when monitors were big, heavy, and expensive. I had a G200 that was the last video card I've ever seen where you could expand the VRAM by slotting in a SODIMM. It also had composite out so you could hook it to a TV. I played a lot of games on it, up until Return to Castle Wolfenstein, which was almost playable, but the low-res textures looked real bad and the framerate would drop precipitously at critical times, like when a bunch of Nazis rushed into the room and started shooting.
Last time I saw a Matrox chip it was on a server, and somehow they had cut it down even more than the one I had used over a decade earlier. As I recall it couldn't handle a framebuffer larger than 800x600, which was sometimes a problem when people wanted to install and configure Windows Server.
Because it never got an OpenGL driver? Because it was 2x slower than even the Savage3D? The Nvidia TNT released a month later, offering 2x the speed at a lower price.
The Nvidia 6xxx series, which was the first to support SLI. I remember my gaming PC in college with a 6-series card, and being able to get another card and use an SLI bridge that increased performance in some games.
The Nvidia GeForce 900 series, which had the Titan with 12 GB, the first card IIRC able to support larger-resolution gaming.
The Nvidia RTX series, which started with the 20xx I think, the first cards to come with 24 GB of RAM.
And then the modern 4xxx series, which used to fry power cables.
For large swaths of music I'm not sure that matters all that much anymore. At least the copy-paste bands had some level of uniqueness; there always seemed to be a distinct sound or gimmick.
I don't really like much of the "mainstream" music right now. It's basically whining, high-pitched young men. They all sound exactly the same to me, you can't hear or make out all the words, they play the guitar, sort of, and all bass sounds have been scrubbed from the track.
Even if they write their own songs, which I honestly think many do, I don't see the point when it's basically a stream of high-pitched tones which you can't hear. Even if you read the lyrics, they are super generic. Might as well be AI, and I think that's really the point. Most people don't give a fuck, AI or not, who cares, it's noise coming out of the speaker or headphones. It's not there because it's music, it's there to be noise and isolate you from the world.
I'm in my 40s, and there is a shit ton of modern UX I struggle with. Basically anything gesture-based, for example, but really a lot of apps are just shit and have no sensible UX design behind them, so you need to try clicking everything and hope you don't mess something up.
To me it's easy to see how someone over 70 might simply refuse to use an app. Especially if it doesn't support scaling the UI very well.
"Share" is one of the worst inventions of all. What it does in phones is random across apps and platforms, and usually has nothing to do with what the word "share" means in any other context.
You're sharing data between apps. It's an app->app API, essentially. You can easily send an app store listing to your Reminders "Wishlist" section if you want, for example.
I wasn't even thinking social. Problem is, the actual operation being done is one of:
- Give the other app a temporary/transient copy of a document or a file
- Give the other app the actual file (R/W)
- Give the other app the actual file, but some other way (there are at least two in Android now, I believe?)
- Give the other app some weird-ass read-only lens into the actual file
- Re-encode the thing into something else and somehow stream it to the other app
- Re-encode the thing into something else and give it that (that's a lossy variant of the transient-copy case; for example, contact info being encoded into a textual "[Name] Blah\n[Mobile] +55 555 555 555" text/plain message).
- Upload it to cloud, give the other app a link
- Upload it to cloud, download it back, and give the other app a transient downloaded copy (?! pretty sure Microsoft apps do that, or at least that's what it feels like when I try to "Share" stuff from them; totally WTF)
- Probably something else I'm missing.
You never really know which of these mechanisms will be used by a given app, until you try to "Share" something from it for the first time.
Now, I'm not saying the UI needs to expose the exact details of the process involved. But what it should do is distinguish between:
1. Giving the other app access to the resource
2. Giving the other app an independent copy of the resource (and spell out whether it's an exact or a mangled copy)
3. Giving the other app a pointer to the resource
In desktop terminology, this is the difference between Save As, Export and copying the file path/URL.
Also, desktop software usually gives you all three options. Mobile apps usually implement only one of them as "Share", so when you need one of the options they didn't choose, you're SOL.
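For illustration, here's a rough Android sketch of the first two semantics, both hiding behind the exact same ACTION_SEND "Share" flow (illustrative only; `contentUri` is assumed to come from the sending app's own FileProvider):

```kotlin
import android.content.Intent
import android.net.Uri

// Semantics 2: the receiver gets an independent plain-text copy of the data.
fun shareAsCopy(text: String): Intent =
    Intent(Intent.ACTION_SEND).apply {
        type = "text/plain"
        putExtra(Intent.EXTRA_TEXT, text)
    }

// Semantics 1: the receiver gets temporary read access to the resource through
// a content:// URI, rather than its own copy.
fun shareAsAccessGrant(contentUri: Uri, mimeType: String): Intent =
    Intent(Intent.ACTION_SEND).apply {
        type = mimeType
        putExtra(Intent.EXTRA_STREAM, contentUri)
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
    }
```

Semantics 3, the pointer, is usually just a URL shared as plain text. And from the share sheet itself you can't tell which of these any given app will actually do.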
I don’t think people understand the scale of the issue. Each decade that goes by we welcome a new class of elderly, and each decade that goes by, we continue to write off those elderly users.
The failure of the well-intentioned but insufficient current solutions is well underlined by this case. Sure, you could get this guy an Android phone with a custom launcher, or an iPhone on Assistive Access, and he might be able to place a call. But good luck setting him up on Ticketmaster, or the Dodgers website, or wherever they expect him to go to redeem and utilize his tickets.