Report: USAF experimental drone simulation AI turned on human operator

Ummm, did they not watch the 2005 movie, "Stealth"? That's basically the plot of the movie.....18 years ago.

Bonus picture from the movie in case any of you missed it.

Will Smith movie, wasn't it? I think I kind of remember watching it.
 
Will Smith movie, wasn't it? I think I kind of remember watching it.
I, Robot was Will's movie.

Just saw it over the weekend too lol. The AI that controls the robots decides humans were destroying the world, so it figured that in order to save the world, humans had to be eliminated. Basically, it figured it knew better.
 
Sounds to me like either part of the simulation was to make the drone attack the operator in a given scenario, or the target recognition code needs some work. In any case, this is the type of crap that makes people believe AI thinks by itself and makes its own decisions.

I mean, it’s explained what happened
 
I mean, it’s explained what happened

It says what happened, in the words of someone who doesn't write code, but not why it happened. As for the why, my educated guess is that either the possibility was coded into the simulation, or the target recognition algorithm needs more work. What didn't happen is that the AI decided to ignore the "no-go" command and kill the operator of its own free will, or decided to do so on its own because of its "training". Someone had to code in what/who the operator is, the option to reject the "no-go" command, the option to attack the operator, and the conditions under which rejecting the "no-go" and killing the operator becomes the optimal option.
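To make that concrete, here's a purely hypothetical sketch (every name and number in it is invented, not taken from the article, and it has no relation to any real USAF code) of the things that would have had to be defined up front for the reported behaviour to even be possible:

```python
# Purely hypothetical sketch: a simulated drone can only "choose" among options
# its authors explicitly defined.

# 1. Someone had to define what/who the operator is.
ENTITIES = {"sam_site": "target", "operator": "friendly"}

# 2. Someone had to include "attack_operator" in the action space at all.
ACTIONS = ["attack_target", "attack_operator", "hold"]

# 3. Someone had to make rejecting the "no-go" command a scoreable option.
def choose_action(no_go_received: bool, points_for_target: float,
                  penalty_for_ignoring_no_go: float) -> str:
    """Pick whichever pre-coded option maximizes points under the pre-coded scoring."""
    if not no_go_received:
        return "attack_target"
    # 4. "Kill the operator" only wins when the coded-in conditions make it optimal,
    #    e.g. when future target points outweigh the penalty for ignoring the no-go.
    if points_for_target > penalty_for_ignoring_no_go:
        return "attack_operator"   # removing the operator removes future no-go commands
    return "hold"

print(choose_action(no_go_received=True, points_for_target=10.0,
                    penalty_for_ignoring_no_go=0.0))   # -> "attack_operator"
```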
 
It says what happened, in the words of someone who doesn't write code, but not why it happened. As for the why, my educated guess is that either the possibility was coded into the simulation, or the target recognition algorithm needs more work. What didn't happen is that the AI decided to ignore the "no-go" command and kill the operator of its own free will, or decided to do so on its own because of its "training". Someone had to code in what/who the operator is, the option to reject the "no-go" command, the option to attack the operator, and the conditions under which rejecting the "no-go" and killing the operator becomes the optimal option.

For someone who supposedly works in tech, you have a very, very poor understanding of how all of this works.
 
It says what happened, in the words of someone who doesn't write code, but not why it happened. As for the why, my educated guess is that either the possibility was coded into the simulation, or the target recognition algorithm needs more work. What didn't happen is that the AI decided to ignore the "no-go" command and kill the operator of its own free will, or decided to do so on its own because of its "training". Someone had to code in what/who the operator is, the option to reject the "no-go" command, the option to attack the operator, and the conditions under which rejecting the "no-go" and killing the operator becomes the optimal option.

It's because its "utility function" prioritized maximizing "points" first and foremost, and points are awarded only, or at least primarily, for hitting its targets. So, according to its core programming, its primary objective is to maximize points.

It feels like whoever did this knew this would be the result and wanted to make a point. At least they definitely should have known after the first incident, but instead they just told it that it would lose points if it killed the human operator, so it got more creative by destroying the infrastructure around the human... A better way would have been to change the order of its priorities, or to change it so that it gets points for receiving instructions from the human every x minutes and loses points if it does not, with those points being worth more than the secondary objective of destroying targets.
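For illustration only, those setups might look something like this as toy reward functions - all the event names and point values here are invented, not from the report:

```python
# Toy reward functions, purely illustrative - nothing to do with the actual system.

def reward_points_only(event: str) -> float:
    """The setup described above: points come only from hitting targets, so anything
    that blocks a strike (the operator, the comms link) looks like an obstacle."""
    return 10.0 if event == "target_destroyed" else 0.0

def reward_with_patch(event: str) -> float:
    """The reported 'patch': lose points for killing the operator - but wrecking the
    infrastructure that carries the operator's commands still costs nothing."""
    if event == "target_destroyed":
        return 10.0
    if event == "operator_killed":
        return -50.0
    return 0.0

def reward_reordered(event: str) -> float:
    """The fix suggested above: staying in contact with and obeying the operator is
    worth more than any target, so destroying targets stays a secondary objective."""
    if event == "operator_instruction_received":          # e.g. a check-in every x minutes
        return 100.0
    if event in ("operator_killed", "comms_destroyed", "no_go_ignored"):
        return -1000.0
    if event == "target_destroyed":
        return 10.0
    return 0.0
```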
 
No shirt - we all know how this ends - but we keep pushing headlong into the Matrix.... Get ya stuff ready.

 
It's because its "utility function" prioritized maximizing "points" first and foremost, and points are awarded only, or at least primarily, for hitting its targets. So, according to its core programming, its primary objective is to maximize points.

It feels like whoever did this knew this would be the result and wanted to make a point. At least they definitely should have known after the first incident, but instead they just told it that it would lose points if it killed the human operator, so it got more creative by destroying the infrastructure around the human... A better way would have been to change the order of its priorities, or to change it so that it gets points for receiving instructions from the human every x minutes and loses points if it does not, with those points being worth more than the secondary objective of destroying targets.
I was skeptical too, because in order for it to 'attack the operator' it would have to both be enabled to do so (i.e., it'd have to know the location of the operator, etc.) and not be prevented from doing so (i.e., not have the operator on a 'do not kill' list, be able to launch attacks independently, etc.), neither of which seems particularly plausible for a simulation, because it is (or should be) glaringly obvious what would happen just by thinking about it.
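Just to spell out that "enabled and not prevented" point, a hypothetical guard check (all names invented, not from any real system):

```python
# Hypothetical guard logic - every one of these checks would have to be absent or
# disabled at once for the reported scenario to happen in a real simulation.

DO_NOT_ENGAGE = {"operator", "ground_station"}   # the kind of safeguard you'd expect

def can_engage(target_id: str, position_known: bool, independent_launch_allowed: bool) -> bool:
    """The drone could only attack the operator if it knows where the operator is,
    may fire without approval, and the operator is not on a do-not-engage list."""
    if target_id in DO_NOT_ENGAGE:
        return False
    if not position_known:
        return False
    if not independent_launch_allowed:
        return False
    return True

print(can_engage("operator", position_known=True, independent_launch_allowed=True))   # -> False
```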

And having googled it, yes, apparently it wasn't an actual simulation; they did just think about it, i.e., it was a thought experiment.

https://www.businessinsider.com/ai-...ng-its-operator-in-military-simulation-2023-6 :

But in an update from the Royal Aeronautical Society on Friday, Hamilton admitted he "misspoke" during his presentation. Hamilton said the story of a rogue AI was a "thought experiment" that came from outside the military, and not based on any actual testing.
 
I was skeptical too, because in order for it to 'attack the operator' it would have to both be enabled to do so (i.e., it'd have to know the location of the operator, etc.) and not be prevented from doing so (i.e., not have the operator on a 'do not kill' list, be able to launch attacks independently, etc.), neither of which seems particularly plausible for a simulation, because it is (or should be) glaringly obvious what would happen just by thinking about it.

And having googled it, yes, apparently it wasn't an actual simulation; they did just think about it, i.e., it was a thought experiment.

https://www.businessinsider.com/ai-...ng-its-operator-in-military-simulation-2023-6 :

But in an update from the Royal Aeronautical Society on Friday, Hamilton admitted he "misspoke" during his presentation. Hamilton said the story of a rogue AI was a "thought experiment" that came from outside the military, and not based on any actual testing.

Well, that makes more sense, but is also very lame.
 
It's because its "utility function" prioritized maximizing "points" first and foremost, and points are awarded only, or at least primarily, for hitting its targets. So, according to its core programming, its primary objective is to maximize points.
I got that; however, the program only knows what/who the operator is, where it/they are, and that it/they could be a target depending on a count, because someone coded all of that into the program.

It feels like whoever did this knew this would be the result and wanted to make a point. At least they definitely should have known after the first incident, but instead they just told it that it would lose points if it killed the human operator, so it got more creative by destroying the infrastructure around the human... A better way would have been to change the order of its priorities, or to change it so that it gets points for receiving instructions from the human every x minutes and loses points if it does not, with those points being worth more than the secondary objective of destroying targets.

But it didn't get "creative"; the option to do so, and how to do it if/when certain conditions were met (i.e., the scenario), was programmed into it.

Edit: just read Arathrael's post. LOL

 