Organize in your country: advocate for data deletion jubilees, champion new taxes on US digital services, and push for homegrown solutions over US tech.
If you aren't actively organizing, you aren't going to accomplish anything.
Remember that people power trumps monetary power, but you have to commit for people power to work.
Why? Every country on Earth is capable of creating and maintaining software. There is nothing unique about America or Silicon Valley (outside of the massive amounts of corporate welfare). Devs can be found anywhere, and who better to write software for local citizens than the local citizens themselves?
We know how useful open source software is; there's no reason this can't be replicated across the planet.
Not because they cannot do it, but because of why they're doing it, which in turn shapes what they're doing. America is being perceived as isolationist, so countries respond by becoming isolationist about the software they use. Whether it's open source or not is mostly irrelevant, though in several cases the software will primarily be focused on the country's own language.
The better alternative in my eyes is to contribute to existing open source, and only if the US becomes hostile toward this, fork said code and move on.
1. Request your data. Email idv-privacy@withpersona.com or privacy@withpersona.com. Under GDPR, they have 30 days to respond.
2. Request deletion. The verification is done. LinkedIn already has the result. There is no reason for Persona to keep your passport scan and facial geometry on their servers. Ask them to delete it.
3. Contact their DPO. dpo@withpersona.com — that’s their Data Protection Officer. If you want to object to them using your documents as AI training data under “legitimate interests,” this is where you do it.
4. Think twice before verifying. That blue badge might not be worth what you’re trading for it. A checkmark is cosmetic. Biometric data is forever.
I'm very confused here. The monthly plans are meant to be used inside of Google's walled garden, but people are somehow able to capture (?) and re-use the OAuth token?
Regardless, I thought it was pretty obvious that things like OpenClaw require an API account, and not a subsidized monthly plan.
Exactly. OpenClaw (or, I think, possibly an addon/extension or unofficial method) is using Google's Antigravity authentication to connect the app. This allows 'unlimited' calls through Antigravity models with a subscription, instead of the proper Gemini/Google AI Studio API key method (charged per million tokens).
API usage can get very high for automated operations, especially with apps like Kilo/Roo/Cline, and now with OpenCode/OpenClaw. I often blast through $10-20 in a single day of just regular OpenCode usage through OpenRouter.
If I could pay a subscription and get near-unlimited use (with rate limits), of course I'd do that, but not like this. I'm pretty sure Antigravity has Terms of Use somewhere indicating it's only allowed for use in Antigravity and nowhere else, since I've seen other threads on this happening: https://github.com/jenslys/opencode-gemini-auth/issues/50
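For a sense of scale on metered pricing, here's a back-of-the-envelope sketch. The per-million-token rates below are placeholders I made up, not actual Gemini or OpenRouter pricing:

```python
# Back-of-the-envelope metered API cost. The rates are hypothetical
# placeholders, NOT actual Gemini or OpenRouter pricing.
INPUT_RATE = 1.25    # $ per million input tokens (assumed)
OUTPUT_RATE = 10.00  # $ per million output tokens (assumed)

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one agentic coding session."""
    return input_tokens / 1e6 * INPUT_RATE + output_tokens / 1e6 * OUTPUT_RATE

# Agentic tools resend large contexts on every step, so input tokens dominate.
daily = session_cost(input_tokens=8_000_000, output_tokens=500_000)
print(f"${daily:.2f} per day")  # $15.00 per day, squarely in the $10-20 range
```

A few million tokens per day is easy to hit when an agent re-reads a large codebase on every iteration, which is why a flat subscription is so tempting to abuse.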
Like fast fashion, but for software development. One piece of software, one-time use: run, have fun, delete. No maintenance, no support, and no regret.
That'd be true if there were more TUI applications being developed, but I'm not sure that's necessarily the case, since there have always been a lot of them out there. It seems like people are talking about them more often, though.
Exactly. If you take a look, most of these repos have Claude as a contributor, so I suppose it's Claude that recommends using TUIs and not the developers themselves.
In the same period of 2024, there were ~700 repos on GitHub.
Maybe you're right, but my hunch is AI has increased the number of all types of projects, and I'd have to see some evidence that AI was disproportionately building TUIs. The trend of new TUIs was well known before coding agents were a thing, and my evidence is the half dozen or so tools I use every single day that came out less than ten years ago but before coding agents.
I'm concerned that this fits in "using today's innovation to solve outdated paradigms".
Google has A2A, an agent-to-agent protocol. SaaS is plummeting in value.
Arbitrary semantics made sense when communications were human-dominated.
If agents dominate these fields, why wouldn't they simply set their own protocols and methods to communicate text, binary, and agreed data structures?
There's an assumption that email is somehow the best channel, when you've found yourself that the most popular, functional interfaces don't align with your expectations.
Then, ultimately, I have a single agent that can sit in numerous communication platforms, such as email.
Fair concern, and I agree on the end state. Agents will eventually use native agent-to-agent protocols.
The question is the transition, because email is undoubtedly the most ubiquitous channel of communication today. I would only give my agent an A2A integration if your agent has an A2A integration, but because you don't, we are at a stalemate. I'd rather just give my agent an inbox, where I know it can communicate with the billions of people who already have an email address.
Email isn’t the final protocol for agents. It’s the bridge that lets them participate in today’s internet while native agent protocols/networks emerge.
And you probably don't want to dump the emails first thing into the agent's context. You should insert a cup of coffee and the morning newspaper into the context, and only then the emails.
> Prism is free to use, and anyone with a ChatGPT account can start writing immediately.
Maybe it's cynical, but how does the old saying go? If the service is free, you are the product.
Perhaps the goal is to hoover up research before it goes public. Then they use it for training data. With enough training data they'll be able to rapidly identify breakthroughs and use that to pick stocks, or send their agents to wrap up the IP, or something.
It feels like federated networks with open-sourced feed algorithms are the best path forward.
If AI removes any technical limitations, and automates content management, what's stopping a content creator from owning what they create and distributing it themselves?
The magic lies in the two-sided coin of promotion vs. spam filtering.
The web started off as a pretty peer-to-peer system, but almost immediately people built directories and link farms as a means to find things. You can make a system as distributed as you want, but that only works for content people know to look for. Which is great for piracy, as e.g. movies and TV shows are advertised everywhere else and can be found by title.
For social media, the recommendation engine is a critical part of the appeal to users.
Why do so many tech people push this "federation is a panacea" idea despite all evidence to the contrary? I don't get it.
First, the obvious: if federation was clearly superior, it would've won. No medium since email has been federated and even that's dominated by a handful of players. Running your own email server is... nontrivial.
Second, users don't care about this. Like, at all.
Third, supposedly tech-savvy people don't seem willing or able to merely scratch the surface of what that looks like and how it would work.
Fourth, there's a lot of infrastructure you need such as moderation and safety that would need to be replicated for each federated provider.
Lastly, zero consideration is given to the problems this actually creates. Look at POTS. We have spam and providers that are bad actors and effectively launder spam calls and texts. You need some way to manage that.
Federated networks are theoretically and structurally superior to centralized ones; that's why people push them.
Humanity and social media aren't about technological superiority. Current platforms have inertia. Why would people fragment when all they care about is basic actions, and their network is already built?
Federated networks have been burdened by an onboarding tax, but this, along with moderation, can all be abstracted away by AI.
Let's look at the current reality: social media platforms are currently American-dominated. That's a serious geopolitical problem, especially considering the amount of time younger generations spend on them.
There is more and more reason for governments to get involved and force the fragmentation of these platforms.
The utility of federated networks increases a lot when bad actors cause harm to people. What had minimal value and failed to get attention yesterday, when the need was low, may be drastically different today when that need is high.
>if federation was clearly superior, it would've won.
No, because we don't live in the best of all possible worlds. It starts to win pretty rapidly when centralized abuses of power become apparent. Bitchat (a p2p mesh-network messaging app) has been becoming quite popular in Uganda and Iran.
Decentralization is the basic guarantor for most of the freedoms we take for granted in democratic systems. Just because the average user doesn't exercise them, just like people who only start going on the treadmill when their chest starts to hurt at age 50, doesn't mean it isn't the answer.
Well, for one, we've seen how great and powerful federation can be: email is completely federated, and the design of email has enabled hundreds of multibillion-dollar companies.
Why wouldn't this also apply to social media? Why is it better for 5 players to exist rather than 1000s?
For almost all of human history, information has been centralized among a small number of actors; for some time period we had a large independent press, but those days are gone.
Everyone has a stake in getting accurate information, and therefore they have an interest in owning part of that system.
Sure is! The issue is that people's attention isn't -- most people on the web stick to a few web pages: their social media of choice (Facebook, TikTok, etc.) and their news provider of choice (CNN, Fox, NBC).
Putting up a website is easy; pulling traffic away from bigger sites is much more difficult.
Beyond federated systems, P2P systems seem to have a strong advantage here in identifying bad actors.
Ranking posts/comments by the exponential of inverse IPAddress-post-frequency would solve bad actors posting behind VPNs/proxies like evil bot farms / state actors and marketers.
Real users have their own IP address, and IP addresses are expensive, like $20-50 a month, which would make mocking traffic an extremely expensive proposition.
Mocking 1% of Reddit's 120M daily active users would cost ~$58M, and you wouldn't want to share/sell these addresses with other actors, since it would ruin your credibility.
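Taken literally, that scoring rule could be sketched like this. The decay function, field names, and example data are my own assumptions, not a worked-out design:

```python
import math
from collections import Counter

def rank(posts: list[dict]) -> list[dict]:
    """Rank posts by the 'exponential of inverse IP-post-frequency' idea:
    fewer posts from an IP -> larger 1/freq -> higher score."""
    freq = Counter(p["ip"] for p in posts)  # posts observed per IP address
    for p in posts:
        p["score"] = math.exp(1 / freq[p["ip"]])
    return sorted(posts, key=lambda p: p["score"], reverse=True)

# Hypothetical example: a bot farm blasting 100 posts through one proxy IP
# versus a regular user posting once from their own address.
posts = (
    [{"ip": "203.0.113.7", "text": f"burst {i}"} for i in range(100)]
    + [{"ip": "198.51.100.2", "text": "real comment"}]
)
top = rank(posts)[0]
print(top["text"])  # "real comment" -- the low-frequency IP ranks first
```

Note the weakness the reply below points at: the score depends only on per-IP frequency, so an actor with many IPs posting once each scores the same as many real users, while users stuck behind one shared CGNAT address all get penalized together.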
I think it would do the opposite. The regular user posts 5 times per day, but the spammer has bought access to 65536 IP addresses and posts once from each, boosting his posts 5x. And the town in South America with one CGNAT IP address to go around gets censored.
You're not wrong that it's easy to get relatively obscure IP addresses cheaply; however, you'll be sharing them with lots of folks, potentially damaging their reputation.
At scale, say P2P-book becomes the largest social networking site, all bad actors will be focused on using it, and they will likely be sharing IPs, commingling their reputation.
Sharing account IDs across IPs would also be penalized.
People who post consistently from the same IP/MAC would be boosted; those are real people.
Of course, before one is the biggest game in town you will simply not be on the radar, so using a captcha as well will be useful to prevent bots.
> Ranking posts/comments by the exponential of inverse IPAddress-post-frequency
Doesn't this just incentivize posting a bunch of comments from your residential proxy IP addresses to launder them? This smells like a poor strategy that's likely to lead to more spam than not. Also, everyone has to start somewhere so your legit IP addresses are also going to seem spammy at first.
Hmm, consider an established social network which sees lots of bad-actor activity; these folks will likely be sharing IPs on these residential networks, severely damaging their reputation.
You should only see one user ID per MAC/IP. If you see multiple, it's a sign of a bad actor.
Before you're established, using something like a captcha prevents most spam, except for state actors, and they won't be focused on the site until it's larger.
Not sure which is scarier