by Beth Barnes
AI technology has the potential to bring huge benefits to society. It is also possible that advanced Artificial General Intelligence – AI capable of performing at or above human level on a wide range of tasks – could be highly destructive, as an increasing number of experts in the field have argued. There are many misconceptions about the field of AI and its potential dangers. The problem is not that an AI system will suddenly develop human-like emotions of anger or resentment and ‘rebel’. Rather, the issues are more subtle. How can we reliably predict the behaviour of an AI system? How can we specify the goals of a system so as to avoid unanticipated side-effects? How do we ensure that those developing advanced AI pay sufficient attention to safety and avoid arms-race dynamics? Current technology is still a long way from human-level general intelligence, but with such high stakes we cannot afford to proceed via trial and error and must begin to engage with these issues now. Actions that can be taken immediately include: increasing research in relevant areas of policy and computer science; setting up structures, such as regular conferences, to improve the flow of information between policymakers, academia and industry on this topic; mapping possible future scenarios and planning appropriate responses; and investing in technologies, such as prediction markets, that improve our ability to forecast future events.