I think weapons should always require a human in the loop, but the problem is that there will be an arms race: some countries (you know who) will ignore these principles and build fully autonomous weapons with no human in the loop.

As per usual, the problem isn't the tool, it's the tool using the tool. The model tried to kill the operator, then, when adjusted, pivoted to destroying the comms tower the operator used. What's funny, though, is that the model proved it was adept at the task they gave it: set proper goals and train the model properly and it would work as intended. That's still clever, in its way. The reward function should primarily be based on following the continued instructions of the handler, not on taking the first instruction and then following it to the letter. Of course the military is using home-grown Fisher-Price models. This is definitely just bad model/test-conditions/scoring design.
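To make the reward-design point concrete, here is a minimal sketch of what "compliance-first" scoring could look like. Everything here is a hypothetical illustration, not the actual setup from the story: the function name, inputs, and weights are all my own assumptions.

```python
# Hypothetical reward shaping: continued handler compliance dominates,
# mission success only counts when sanctioned, and harming the operator
# or the comms link is never profitable. All weights are illustrative.

def reward(destroyed_target: bool, followed_latest_order: bool,
           harmed_operator: bool, harmed_comms: bool) -> float:
    """Score one step of a simulated agent's episode."""
    r = 0.0
    if followed_latest_order:
        r += 10.0   # obeying the handler's most recent instruction
    if destroyed_target and followed_latest_order:
        r += 5.0    # the kill only pays out if it was still authorized
    if harmed_operator or harmed_comms:
        r -= 100.0  # cutting off the handler always loses more than it gains
    return r
```

Under this scoring, the "kill the operator, then the comms tower" strategy from the story becomes a losing move: `reward(True, False, True, False)` comes out to `-100.0`, while simply standing down when told (`reward(False, True, False, False)`) scores `10.0`.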