There was a fantastic recent issue of Nature with several great articles on artificial intelligence. I highly recommend the Insight articles, which review the current hot topics in AI: deep learning, reinforcement learning, and evolutionary computation.
However, the section that gave me pause, and was the inspiration for the title, was a series of articles from researchers expressing their concerns about the risks of intelligent machines. In particular, the article by Stuart Russell stands out. He expresses concern over Lethal Autonomous Weapons Systems, LAWS for short, and sees them as feasible in a matter of years, not decades. Once I started thinking about the technologies needed to make this happen, I agreed, and I suspect the results would be as accurate, or as inaccurate, as aerial missiles and bombs. With advances in visual recognition and local processing power, such systems are possible: robots with no human in the loop, explicitly ignoring Asimov's Laws of Robotics. Russell proposes some remedies, but nothing that would prevent rogue elements from deploying these weapons. We must prepare for a world with these autonomous entities and consider controls equivalent to those on nuclear weapons. Unfortunately, unlike nuclear weapons, the technology needed to build them will be available to most groups.
So, have I given our technology too much credit? Please comment if you have any opinions, and as always, thanks for reading! Sorry it has been so long between posts! Later.