Report: USAF experimental drone simulation AI turned on human operator

I was skeptical too, because in order for the AI to 'attack the operator' it would have to both be enabled to do so (i.e. it would have to know the operator's location, be able to launch attacks on its own initiative, etc.) and not be prevented from doing so (i.e. not have the operator on a 'do not kill' list, etc.), neither of which seems particularly plausible for a simulation, because it is (or should be) glaringly obvious what would happen just by thinking about it.

And having googled it, yes, apparently it wasn't an actual simulation; they did just think about it, i.e. it was a thought experiment.

https://www.businessinsider.com/ai-...ng-its-operator-in-military-simulation-2023-6 :

But in an update from the Royal Aeronautical Society on Friday, Hamilton admitted he "misspoke" during his presentation. Hamilton said the story of a rogue AI was a "thought experiment" that came from outside the military, and not based on any actual testing.

Well, that makes more sense, but is also very lame.