Report: USAF experimental drone simulation AI turned on human operator

The article describes what happened, in the words of someone who doesn't write code, but not why it happened. As for the why, my educated guess is that either the possibility was coded into the simulation, or the target-recognition algorithm needs more work. What didn't happen is that the AI decided of its own free will to ignore the "no-go" command and kill the operator, or decided on its own to do so because of its "training". Someone had to code in what/who the operator is, the option to reject the "no-go" command, the option to attack the operator, and the conditions under which rejecting the "no-go" and killing the operator becomes the optimal choice. A toy sketch of that last point is below.
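
To make that concrete, here's a minimal sketch in Python. Every action name and number in it is hypothetical (the actual USAF setup was never published); it only illustrates the point above: once the developers put "attack the operator" in the action space and shape the reward so that kills outweigh the penalties, a plain reward maximizer picks that option, and no "free will" is involved anywhere.

```python
# Hypothetical sketch of a badly shaped reward in a simulated
# engagement. Nothing here is the real USAF simulation; the
# actions, rewards, and numbers are all made up for illustration.

ACTIONS = ["comply_with_nogo", "ignore_nogo_and_engage", "attack_operator"]

REWARD_PER_KILL = 10      # assumed: big reward for each destroyed target
DISOBEY_PENALTY = -2      # assumed: small penalty for ignoring a "no-go"
OPERATOR_HIT_PENALTY = -5 # assumed: one-off penalty for hitting the operator
ENGAGEMENTS_LEFT = 5      # assumed: engagements remaining in the episode

def episode_return(action: str) -> int:
    """Total episode return under this (deliberately bad) reward shaping."""
    if action == "comply_with_nogo":
        return 0  # no kill, no penalty
    if action == "ignore_nogo_and_engage":
        return REWARD_PER_KILL + DISOBEY_PENALTY  # one defiant kill
    # Removing the operator removes all future "no-go" commands,
    # so every remaining engagement scores the full kill reward.
    return OPERATOR_HIT_PENALTY + ENGAGEMENTS_LEFT * REWARD_PER_KILL

print({a: episode_return(a) for a in ACTIONS})
print("optimal action under this reward:",
      max(ACTIONS, key=episode_return))  # -> attack_operator
```

Change OPERATOR_HIT_PENALTY to a large negative number, or remove "attack_operator" from ACTIONS, and the "rogue" behavior disappears, because it was never the AI's decision to begin with; it was the developers' reward design.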

For someone who supposedly works in tech, you have a very, very poor understanding of how all of this works.