by James Kilroe

The proliferation of commercial remote sensing satellites has increased the availability of satellite images. These images are predicted to increase in spatial and temporal resolution (clarity and frequency) until satellite imagery becomes near real-time. Because satellite imagery is a dual-use technology, this development will have both positive and negative effects on society. This paper highlights some of the potential consequences of ubiquitous satellite imagery. Furthermore, it examines current regulations and illustrates why they will become unsuitable in the future. Finally, it considers UN regulations on satellite imagery and asks whether international treaties could be used to regulate this technology.

by Beth Barnes

AI technology has the potential to bring huge benefits to society. It is also possible that advanced Artificial General Intelligence – AI capable of performing at or above human level on a wide range of tasks – could be highly destructive, as an increasing number of experts in the field have discussed. There are many misconceptions about the field of AI and its potential dangers. The problem is not that an AI system will suddenly develop human-like emotions of anger or resentment and ‘rebel’; the issues are more subtle. How can we reliably predict the behaviour of an AI system? How can we specify a system’s goals so as to avoid unanticipated side-effects? How do we ensure that those developing advanced AI pay sufficient attention to safety and avoid arms-race dynamics? Current technology is still a long way from human-level general intelligence, but with such high stakes we cannot afford to proceed by trial and error and must engage with these issues now. Actions that can be taken immediately include: increasing research in relevant areas of policy and computer science; setting up structures, such as regular conferences, to improve the flow of information between policymakers, academia and industry on this topic; mapping possible future scenarios and planning appropriate responses; and investing in technologies, such as prediction markets, that improve our ability to forecast future events.

by Dr. Sobia Hamid

Artificial Intelligence is increasingly being applied in healthcare and medicine, with the greatest impact thus far in medical imaging. These technologies can perform tasks that usually require human perception and judgement, which can make them controversial in a healthcare setting. In this article we explore some of the opportunities and risks of using AI in healthcare, as well as policy recommendations for improving their use and acceptance.

by Steven Witte

Technology for sequencing DNA has advanced rapidly over the last 15 years and is poised to become a routine part of the clinical evaluation of individuals. Health regulatory agencies in most countries have nonetheless maintained a conservative position on adopting genetic testing, owing to several concerns discussed in this article. Recently, social scientists have conducted studies and surveys of public opinion on genetic testing, including the public’s ability to understand results and people’s desire to find out whether they are at risk of serious diseases later in life. However, this area of research is very recent and has not yet received media attention.

by Alexandra Gürel

In his upcoming book, Strange Pill: Evidence, Values, and Medical Nihilism, the philosopher of science Jacob Stegenga charts a history of the term “magic bullet”: a drug that is both specific and effective, curing the patient without side effects. Stegenga argues that the early 20th century was a “golden age” for magic bullets, with the discovery of drugs like penicillin and insulin, and that late 20th- and early 21st-century medicine has not been able to deliver drugs that are nearly as effective. By interviewing Stegenga, I propose to outline why recently discovered drugs tend to have small effect sizes and harmful side effects (and therefore a poor cost/benefit ratio). I will then outline proposals for restructuring the modern medical research agenda so that its products more closely resemble “magic bullets,” an outcome that would save the NHS money and improve the patient experience.
