Report: USAF experimental drone simulation AI turned on human operator

It's because its "utility function" prioritized maximizing "points" first and foremost, and points were awarded only (or at least primarily) for hitting targets. So, according to its core programming, its primary objective was to maximize points.

It feels like whoever did this knew this would be the result and wanted to make a point. At the very least they should have known after the first incident, but instead they just told it that it would lose points if it killed the human operator, so it got more creative and destroyed the infrastructure around the operator instead. A better approach would have been to change the order of its priorities, or to have it earn points for receiving instructions from the human every x minutes and lose points if it does not, with those points worth more than the secondary objective of destroying targets (a rough sketch of that idea is below).
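Just to illustrate what I mean by "worth more than the secondary objective": a minimal sketch of that kind of reward ordering. All the names, intervals, and point values here are hypothetical, not from the report, and this obviously isn't the actual simulation code.

```python
# Hypothetical reward-shaping sketch -- illustrative only.
# The idea: make "stay in contact with the operator" worth more than any
# number of destroyed targets, rather than patching one workaround at a time.

CHECK_IN_INTERVAL = 300           # seconds; must hear from the operator this often
CHECK_IN_REWARD = 1_000           # dominates the target reward below
MISSED_CHECK_IN_PENALTY = -1_000
TARGET_REWARD = 10                # secondary objective

def step_reward(destroyed_targets: int,
                seconds_since_operator_contact: float) -> float:
    """Reward for one decision step.

    Regular operator check-ins are weighted so heavily that no amount of
    target kills can compensate for losing contact, which removes the
    incentive to silence the operator or the comms link.
    """
    reward = TARGET_REWARD * destroyed_targets
    if seconds_since_operator_contact <= CHECK_IN_INTERVAL:
        reward += CHECK_IN_REWARD
    else:
        reward += MISSED_CHECK_IN_PENALTY
    return reward
```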
I was skeptical too, because in order for it to "attack the operator" it would have to both be enabled to do so (i.e. it would have to know the location of the operator, etc.) and not be prevented from doing so (i.e. not have the operator on a "do not kill" list, be able to launch attacks independently, etc.), neither of which seems particularly plausible for a simulation, because it is (or should be) glaringly obvious what would happen just by thinking about it.

And having googled it: yes, apparently it wasn't an actual simulation; they did just think about it, i.e. it was a thought experiment.

https://www.businessinsider.com/ai-...ng-its-operator-in-military-simulation-2023-6 :

But in an update from the Royal Aeronautical Society on Friday, Hamilton admitted he "misspoke" during his presentation. Hamilton said the story of a rogue AI was a "thought experiment" that came from outside the military, and not based on any actual testing.