AI Safety

Artificial intelligence (AI) safety is an emerging interdisciplinary field concerned with making artificially intelligent systems safe and secure. Engineers, computer scientists, philosophers, economists, cognitive scientists, and experts in many other fields are starting to realize that, despite differences in terminology and approach, they are all working on a common set of problems associated with control over independent intelligent agents. Historically, such research was conducted under diverse labels, including AI ethics, friendly AI, singularity research, superintelligence control, software verification, and machine ethics. This fragmentation has led to suboptimal information sharing among practitioners, a confusing lack of common terminology, and, as a result, limited recognition, publishing, and funding opportunities.

The Internet and social media are a home for many intelligent agents (aka bots), ...
