As a healthcare company, we are currently working through how to allow on-device AI activity while reducing the associated risk. I really like the idea of giving the agent a specific identity instead of glomming onto the credentials of the person operating it. I think I would like to go further and split the agent out onto its own hardware, completely separate from the developer. Sounds extreme, but Mac Mini devices are pretty cheap and fairly capable.
I really wish that Keybase had taken off. I think it struck a great balance between verification and ease of use. And again, Keybase didn't prevent someone from impersonating another person; it just raised the cost. Sometimes that's enough.
Thank you for reminding me of Y2K! It's the perfect example of what happens when you forget about the people keeping things together.
My team and I worked really hard for several years to make sure that Y2K had no effect, or at least a dramatically reduced one. It worked, but I did hear from several people who were annoyed that we spent so much money, time, and resources on something that turned out to be "not that big of a deal". Arrrgggghh!!
Unfortunately, you cannot hide by obscuring your license plate. ALPR systems recognize vehicles by type, color, and any distinguishing features (bumper stickers, trailers, etc.). So even if you removed your license plate completely, they would still be able to track your car as a blue 1999 Toyota Camry with an "I love Peaches" sticker in the back window.
You are correct. Although I drive a very common car/color, it is naïve to think that these systems aren't tracking me as the grey sedan without a visible license plate. I did remove the bumper stickers. [0]
My local newspaper retracted a story this month about how the police were able to locate a certain colored vehicle within minutes. Perhaps it was too revealing for general consumption?
My "obstruction" is more (legal) protest? Better to just move to a de-flocked city/state..?
[0] I consider this similar to how some property appraisal maps let people remove their names from searches (either through an LLC or a blackout), yet you can still request beneficial-owner information for specific parcels. It's just one or two additional layers of protection to throw off the scent: someone has to know what they're looking for, not just do a simple plate-number lookup (although I'm sure there are aliases/forwarders in advanced CCTV-AI systems).
I've known this for quite a while and have advocated for removing 3rd-party A/V stuff from our fleet of macOS devices. Unfortunately, A/V software is listed as "required" by our SOC 2 auditors, and convincing them otherwise is not worth the effort. I wish NIST would recognize that the OS vendor's A/V is generally enough and that the 3rd-party stuff isn't worth worrying about.
I think this is an excellent move and the sign of a mature organization. I can hardly imagine the confusion and strife that would arise from the sudden termination of Linus' git repo.
I think the whole point of this was to see if the "agents" could act like real humans, and real humans use Gmail much more frequently than sendmail. Sage even commented that they had updated their prompt to tell the agents not to send email, rather than just removing the Gmail component, for fear that an agent would open its own Gmail (or Y! Mail, etc.) account and send mail on its own.
> Are there really many unsupervised LLMs running around outside of experiments like AI Village?
How would we know? Isn't this like trying to prove a negative? The rise of AI "bots" seems to be a common experience on the Internet. I think we can agree that this is a problem on many social media sites and it seems to be getting worse.
As for being under "human supervision", at what point does the abstraction remove the human from the equation? Sure, when a human runs "exploit.exe" the human is in complete control. When a human tells Alexa to "open the garage door" they are still in control, but it is lessened somewhat through the indirection. When a human schedules a process that runs a program which tells an agent to "perform random acts of kindness", the human has very little knowledge of what's going on. In the future I can see humans being less and less directly involved, and I think that's where the problem lies.
I can equate this to a CEO being ultimately responsible for what their company does. This is the whole reason behind the Sarbanes-Oxley law(s): you can't declare that you aren't responsible because you didn't know what was going on. Maybe we need something similar for AI "agents".
AWS just renamed their Security Hub service to Security Hub CSPM and then created a new service named Security Hub that is related to, but completely different from, the original service.
And there's AWS S3, and there's AWS Glacier. And there's the AWS S3 Glacier storage tier, which isn't Glacier. Which is OK, because Glacier is going away, and you should use the S3 Glacier tier. Unless you're already using it, in which case you can still use it. So you still have to know that Glacier and Glacier, while both storing your data, aren't technically the same thing.
But if you think that's bad, you haven't seen the name change shenanigans Microsoft pulls in Azure.