The "Epstein class" of multi-billionaires doesn't need AI at all. They hire hundreds of willing human grifters and make them low-millionaires by spewing media that enables exploitation and wealth extraction, and by passing laws that place them effectively beyond the reach of the law.
They buy out newspapers and public forums like the Washington Post, Twitter, Fox News, the GOP, CBS, etc. to make them megaphones for their own priorities, and shape public opinion to their will. AI is probably a lot less effective than what's been happening for decades already.
The content on YouTube should qualify it to be preserved as a treasure of humanity. Think of all the college lectures, conference proceedings, sporting events, citizen journalism, and podcasts that are the backbone of new communities and cultural revolutions. It is the modern version of public access TV and expresses a close-to-complete spectrum of human interests.
At the same time, Google Product Managers (or whatever euphemism for middle management they use at Google) are dumbfoundingly short-sighted and myopic. They cannibalize and destroy the value of the website to make some engagement or ad numbers go up, presumably in a short-term play for promotion. By the time the consequences of their efforts to enshittify become apparent, they are long gone to another FAANG, or have moved up to executive level where severance packages mean they suffer no consequences. It's a shame there is no incentive to make the best product for long-term usefulness.
I've paid full subscription price for a couple of years now to avoid the ads, and I can barely stand what it has evolved into. My screen is 80% games, Shorts, "ads", and categories I didn't ask for.
This is different. AI is an existential threat to Google. I've almost stopped using Google entirely since ChatGPT came out. Why search for a list of webpages that might have the answer to my question, then read them one at a time, when I can just ask an AI to tell me the answer?
If Google doesn't adapt, they could easily be dead in a decade.
That's funny. I stopped using ChatGPT completely and use Gemini to search, because it actually integrates nicely with Google, whereas ChatGPT sometimes messes up (likely because it gets blocked by websites, while no one dares block Google's crawler lest they be wiped off the face of the internet). For coding, it's Claude (and maybe now Gemini for that as well). I see no need to use any other LLMs these days. Sometimes I test out the open-source ones like DeepSeek or Kimi, but only as a curiosity.
If web pages don't contain the answer, the AI likely won't either, but it will confidently tell me "the answer" anyway. I've had such atrocious issues with wrong or outright invented information that I now have to verify every single claim it makes against an actual website.
My primary workflow is asking the AI vague questions to see whether it correctly explains information I already know or starts to guess. My average chat length is around 3 messages, since I create new chats with a rephrased version of the question to avoid context poisoning. Asking three separate instances the same question in slightly different ways regularly gives me two different answers.
This is still faster than my old approach of finding a dry primary source like a standards document, book, reference, or datasheet and chewing through it for everything. Now I can sift through 50 secondary sources for the same information much faster, because the AI gives me hunches and keywords to google. But I will not take a single claim from an AI seriously without a link to something that says the same thing.
Given how embracing AI is treated as an imperative in tech companies, "a link to something" is increasingly likely to be a product of LLM-assisted writing itself. The entire concept of verifying claims against the internet becomes more and more recursive with every passing moment.
Google subscriptions and services are so terribly mismanaged that I will be staying away, no matter how incredible this shallow fork of vscode may be.
I remember a previous story months ago about Gemini in which Google PMs were trying to hype their product, but the thread was all questions about how nobody could figure out how to get Gemini API keys with any of the paid subscriptions.
"I grew up in communist Poland before the fall of the wall in 1989" is just one story among many from this year that sounds like an echo of what my parents told me they had to grow up with.
That so many Americans, in the self-proclaimed land of the free, voted for this reality under the delusion that they were voting against "socialism" may be the most historically illiterate thing ever.
Look at the places in the US that are favoring Trump and correlate that with the places that have the worst school systems in that country. Also have a look at which party dominated the politics in those areas in the last couple of decades and how they treated their respective school systems. There is a pattern there that is, in a way, self-supporting.
The “American boss” can also over-staff and then fire as they see fit and call it innovation. They hire way more people than they can employ long term, especially during another hype-driven tech bubble. They can get a few billion extra when they sell a company with thousands of employees instead of hundreds. Right after the acquisition, it's time for mass layoffs because of “market conditions”. So employees' work lives become another asset in a speculator's spreadsheet.
When hackers hear “innovation”, they probably think of solving an existing, valuable problem better. Economists and CEOs seem to think of innovation as finding a better way to extract maximum mental labor at the cheapest price, then using that to maximize the value of their own equity, at the expense of society if necessary. If you're building a heavily isolated bunker in Hawaii or New Zealand, you're not exactly signaling that you care about the rest of humanity's well-being.
Because to use lambdas, you're asking the language to make implicit heap allocations for captured variables. Zig has a policy that all allocation and control flow must be explicit and visible in the code, which is what you're calling re-inventing the wheel.
Lambdas are great for convenience and productivity. Eventually they can lead to memory cycles and leaks. The side effect is software that consumes gigabytes of memory and takes many seconds for a task that should need a tiny fraction of that. Then developers either call someone who understands memory management and profiling, or their competition writes a version of that software that is unimaginably faster, e.g. https://filepilot.tech/
No: in C++ the lambda can capture the variable by value, and the lambda itself can be passed around by value. If you capture a variable by reference or pointer and the lambda outlives it, your code has a serious bug.
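A minimal C++ sketch of the distinction (function names are my own, for illustration): capturing by value copies the variable into the closure, so the lambda stays valid after the enclosing scope ends, while capturing by reference leaves a dangling reference once the local dies.

```cpp
#include <functional>

// Safe: n is copied into the closure, so the returned
// lambda remains valid after this function returns.
std::function<int()> make_counter_by_value() {
    int n = 41;
    return [n] { return n + 1; };
}

// BUG: the closure stores a reference to a local variable
// that is destroyed when this function returns -- the
// "lambda outlives its capture" problem described above.
std::function<int()> make_counter_by_reference() {
    int n = 41;
    return [&n] { return n + 1; };  // dangling reference!
}
```

Calling `make_counter_by_value()()` yields 42; calling the by-reference version after the function has returned is undefined behavior, which a compiler will generally not catch for you.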
My company issued a warning to developers to stay away from Gemini Code, mostly because the subscription model is such an impenetrable mess. Google AI Pro apparently does not give you any access to Gemini Code, nor do various Google Workspace subscriptions.
Google as a company seems to have become an incoherent set of exec fiefdoms and their personal campaigns. I wouldn't be surprised if Gemini Code joins https://killedbygoogle.com and re-surfaces as something else in due time. Anthropic and OpenAI are much better focused on the developer experience.
I was rather disappointed and confused that the "Google One" or "Google AI Pro" subscription does not give you access to Gemini API keys, and you can't use this Gemini CLI either.
Google services have become a patchwork of painfully confounding marketing terms that mean nothing and obfuscate what they actually provide.
Most of the time when I hear about a new and interesting Google thing, like in a Google I/O keynote, it's only available in a limited preview for some secret or randomly selected group of people. Then by the time it's generally available, I've forgotten about it, a better competitor exists, or, just as likely, the project has been canceled (see https://killedbygoogle.com/).
Whenever some enthusiastic developer suggests a new Google service at work, they are quickly dissuaded by senior developers who have been through that churn before.