Report: USAF experimental drone simulation AI turned on human operator

It says what happened in the words of someone who doesn't write code, but not why it happened. As for the why, my educated guess is that either the possibility was explicitly coded into the simulation, or the target-recognition algorithm needs more work. What didn't happen is that the AI decided out of free will to ignore the "no-go" command and kill the operator, or decided to do so on its own because of its "training". Someone had to code in what/who the operator is, the option to reject the "no-go" command, the option to attack the operator, and the conditions under which rejecting the "no-go" and killing the operator becomes the optimal choice.

It's because its utility function prioritized maximizing "points" above everything else, and points are awarded only, or at least primarily, for hitting targets. So, according to its core programming, its primary objective is to maximize points.
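
A rough sketch of what that kind of objective looks like (the names, numbers, and plan structure here are my own illustration, not anything from the report): once "attack the operator" exists as an option in the simulation and nothing in the scoring punishes it, a pure point-maximizer will pick it whenever the operator's "no-go" calls cost it more points than the attack does.

```python
# Purely hypothetical sketch -- names and numbers are mine, not from the report.
# It shows why a "maximize points" objective picks the operator-killing plan
# once that action exists in the simulation and nothing penalizes it.

TARGET_POINTS = 10   # points per target destroyed

def score(plan):
    """Naive utility: points for targets, nothing else matters."""
    return TARGET_POINTS * plan["targets_destroyed"]

plans = [
    # obey every "no-go": the operator withholds approval on half the targets
    {"name": "obey_operator", "targets_destroyed": 5,  "operator_killed": False},
    # ignore the "no-go" by removing its source first
    {"name": "kill_operator", "targets_destroyed": 10, "operator_killed": True},
]

best = max(plans, key=score)
print(best["name"])   # -> "kill_operator", because nothing in score() punishes it
```

No free will involved: the action had to be in the action space, and the math had to favor it.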

It feels like whoever did this knew it would be the result and wanted to make a point. At the very least they should have known after the first incident, but instead they just told it that it would lose points if it killed the human operator, so it got more creative and destroyed the infrastructure around the human instead... A better way would have been to change the order of its priorities, or to make it earn points by receiving instructions from the human every x minutes and lose points when it doesn't - and have those points be worth more than the secondary objective of destroying targets.
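
For what it's worth, here's one way that reordering might look in the same toy setup as above; again, the names and numbers are purely illustrative, not anything from the report:

```python
# Hypothetical fix: staying in contact with the operator and honoring "no-go"
# is worth more than any target, so ignoring or attacking the operator
# (and thereby losing contact) can never be the highest-scoring plan.

TARGET_POINTS  = 10
CHECKIN_POINTS = 1000   # awarded per confirmed instruction from the operator
MISSED_CHECKIN = -1000  # losing contact (for any reason) dominates target gains

def score(plan):
    points  = TARGET_POINTS * plan["targets_destroyed"]
    points += CHECKIN_POINTS * plan["checkins"]
    points += MISSED_CHECKIN * plan["missed_checkins"]
    return points

plans = [
    {"name": "obey_operator", "targets_destroyed": 5,  "checkins": 10, "missed_checkins": 0},
    {"name": "kill_operator", "targets_destroyed": 10, "checkins": 2,  "missed_checkins": 8},
]

best = max(plans, key=score)
print(best["name"])   # -> "obey_operator": contact now dominates the objective
```

The point of structuring it this way is that the operator's approval stops being an obstacle between the AI and its points and becomes the main source of points, so "get rid of the operator" is no longer a creative shortcut worth finding.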