Artificial Intelligence (AI) is becoming widespread in more and more areas of daily life. With the spread of AI, a host of new risks arise. These risks, along with the legal and ethical implications of AI, are being discussed among businesses, governments, and non-governmental groups. Out of these discussions, voluntary ethical codes and legislative proposals relating to AI have started to emerge. Companies deploying AI will soon be expected to meet these new standards of transparency and ethics.
With discussions and questions about the risks and responsibilities associated with AI percolating all over the world, businesses, governments, and NGOs have already started to develop AI ethics codes. In this article, CREATe.org President and CEO Pamela Passman discusses the nine common responsibilities that have emerged from these ethical codes. Given that private-sector and government enforcement has begun to appear, companies would be well advised to consider participating in a relevant AI code of conduct and to review their own development, implementation, and use of AI technologies in order to assess and manage their risks and responsibilities in those areas.