Claw AIs absolutely do have agency in the sense of independently performing actions based on their "understanding" of a goal given by a "principal". I can't think of a better word than "agent" for that.
I've been watching carefully for more than a year, including three productive parallel Claude Code sessions today, and they absolutely have agency under any definition of the word I can think of.
Can you try to define more clearly what it is that you believe AI agents don't have?
The AI "led to" the incident, true. But don't forget that this, like all similar incidents, is a human failure.
AI is a tool with no agency. People make mistakes using it, and those mistakes are the responsibility of the humans.