Artificial Intelligence Seeks An Ethical Conscience

Leading artificial-intelligence researchers gathered this week for the prestigious Neural Information Processing Systems conference have a new topic on their agenda. Alongside the usual cutting-edge research, panel discussions, and socializing: concern about AI’s power.

The issue was crystallized in a keynote from Microsoft researcher Kate Crawford Tuesday. The conference, which drew nearly 8,000 researchers to Long Beach, California, is deeply technical, swirling in dense clouds of math and algorithms. Crawford’s good-humored talk featured nary an equation and took the form of an ethical wake-up call. She urged attendees to start considering, and finding ways to mitigate, accidental or intentional harms caused by their creations. “Amongst the very real excitement about what we can do there are also some really concerning problems arising,” Crawford said.

One such problem occurred in 2015, when Google’s photo service labeled some black people as gorillas. More recently, researchers found that image-processing algorithms both learned and amplified gender stereotypes. Crawford told the audience that more troubling errors are surely brewing behind closed doors, as companies and governments adopt machine learning in areas such as criminal justice and finance. “The common examples I’m sharing today are just the tip of the iceberg,” she said. In addition to her Microsoft role, Crawford is also a cofounder of the AI Now Institute at NYU, which studies the social implications of artificial intelligence.

Concern about the potential downsides of more powerful AI is apparent elsewhere at the conference. A tutorial session hosted by Cornell and Berkeley professors in the cavernous main hall Monday focused on building fairness into machine-learning systems, a particular issue as governments increasingly tap AI software. It included a reminder for researchers of legal barriers, such as the Civil Rights and Genetic Information Nondiscrimination acts. One concern is that even when machine-learning systems are programmed to be blind to race or gender, they may use other signals in their data, such as the location of a person’s home, as proxies for those attributes.
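To make the proxy problem concrete, here is a minimal, hypothetical sketch (not drawn from the conference tutorial): a classifier that is never shown a protected attribute can still reproduce biased historical outcomes through a correlated feature such as a home-location code. The data, variable names, and correlation strengths below are invented for illustration.

```python
# Toy illustration of proxy discrimination: the model never sees the
# protected attribute, yet a correlated "neighborhood" feature carries
# the same signal. All data and names here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical protected attribute (never given to the model).
protected = rng.integers(0, 2, size=n)

# A home-location feature strongly correlated with the protected attribute:
# it matches the protected group about 90% of the time.
neighborhood = (protected + (rng.random(n) < 0.1)) % 2

# Historical outcomes that were themselves shaped by the protected attribute.
outcome = ((0.7 * protected + 0.3 * rng.random(n)) > 0.5).astype(int)

# Train only on the proxy feature: the protected attribute is "blinded".
model = LogisticRegression().fit(neighborhood.reshape(-1, 1), outcome)
predictions = model.predict(neighborhood.reshape(-1, 1))

# The model's decisions still differ sharply across protected groups.
for g in (0, 1):
    rate = predictions[protected == g].mean()
    print(f"positive-prediction rate for group {g}: {rate:.2f}")
```

In this toy setup the two groups receive very different prediction rates even though the protected attribute was dropped from the training data, which is the kind of leakage the fairness tutorial warned about.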
