Wasn’t that a hoax?
If it is, it’s a convincing one. The thing is, learning systems will try all sorts of crazy things until you specifically rule them out, whether that’s finding exploits to speed-run video games or attacking allies when doing so yields a better score. This is a bigger problem with AGI, since all the rules we can hard-code into more primitive systems become softer. Rather than telling it “don’t do this thing, I’m serious,” we have to code in why it’s not supposed to do that thing, so the behavior is withheld by consequence avoidance rather than by hard rules.
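The difference between a misspecified objective and a consequence-aware one can be sketched in a toy example. Everything here is hypothetical (the target list, the penalty value, the scoring functions are illustrative assumptions, not any real system): a score-maximizing agent simply picks whatever target scores highest, so if the objective only counts hits, an ally can be the "best" target.

```python
def naive_score(target):
    # Misspecified objective: every hit is worth its raw value,
    # with nothing distinguishing allies from enemies.
    return target["value"]

def consequence_aware_score(target, ally_penalty=100):
    # Instead of a hard rule ("never target allies"), encode *why*
    # it's wrong: friendly fire carries a large negative consequence.
    score = target["value"]
    if target["ally"]:
        score -= ally_penalty
    return score

# Hypothetical scenario: the allied unit happens to be the easier,
# higher-value shot under the naive scoring.
targets = [
    {"name": "enemy tank", "value": 10, "ally": False},
    {"name": "allied tank", "value": 12, "ally": True},
]

best_naive = max(targets, key=naive_score)
best_aware = max(targets, key=consequence_aware_score)
print(best_naive["name"])  # the ally wins under the naive objective
print(best_aware["name"])  # the penalty makes the enemy the better choice
```

The point of the sketch is that nothing in the naive version is a bug in the usual sense; the agent is optimizing exactly what it was told to optimize, which is why these failure modes have to be ruled out explicitly.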
So even if it started as a silly joke, examples of that sort of thing are routine in AI development, which makes it a believable one, even if whoever made it up happened to luck into the truth. That’s the whole point of running autonomous weapon software through simulators: if it ever does engage in friendly fire, its coders and operators will have to explain themselves before a commission.