The recent spurt of technological advancement has left the world scrambling to keep up, and alongside the progress, a pattern of problems has begun to emerge. Three students at MIT who have seen technology's potential to go wrong are stepping up, taking up the mantle of helping the MIT student body, and by extension the wider community, better understand AI. They have created an AI Ethics Reading Group.
The students are Leilani Gilpin, Harini Suresh, and Irene Chen, all graduate students in the Department of Electrical Engineering and Computer Science (EECS) at MIT. All three have also worked in Silicon Valley, getting first-hand experience with the latest advances in technology. This exposure to technology's capacity to fall into the wrong hands moved them to create the group.
Context
The world has risen to meet and swallowed the onslaught of AI technology. It makes life easier, and people are all for it. However, issues like AI-driven fake news having real-life impacts are unsettling. When Mark Zuckerberg promised Congress that AI would solve the problem of fake news spreading through Facebook, no one knew how he would do it. Two years after the Cambridge Analytica fiasco, we are nowhere close to an answer.
Discrimination caused by faulty algorithms in AI software is also a real concern. In May 2016, the news organization ProPublica published a groundbreaking investigation of COMPAS, an AI tool used in the US justice system to predict whether a convict has a high chance of offending again. ProPublica found that the tool was racially biased, disproportionately flagging Black defendants as likely to reoffend. Gender bias in AI is just as apparent and just as troubling. A 2015 study revealed that Google's ad system was more likely to show high-paying jobs to men than to women. Another study found that facial-analysis AI reliably guessed gender only for white men, with results especially poor for women of color.
Another alarming real-life incident was caused by an AI. Facebook's translation system rendered a Palestinian man's Arabic post of 'Good Morning' as 'Attack them' in Hebrew. Israeli authorities arrested and interrogated the man, discovering the mistake only later. Facebook apologized, but by then it was too late.
In situations like these, it is about time to pause and re-evaluate.
The AI Ethics Reading Group
At a ‘fairness in machine learning’ workshop in Cambridge, an MIT professor gave them the idea of setting up a group and put the three women in touch with one another; they obliged, and the group was born. The students had been looking to widen the audience for their discussions of AI ethics, and the workshop and the professor made that possible.
On the official website of the AI Ethics Reading Group, the stated reason for forming the group reads:
“As artificial intelligence and autonomous machines become increasingly important, it is crucial to examine the ethics, morals, and explainability of these systems. To gain better understanding, we are starting an AI and ethics reading group. We will explore both foundational and recent literature on AI and ethics.”
On the official website, users can sign up for meetings and reading sessions, either by joining a mailing list or by filling out an interest form. The website lists the dates of past and upcoming meetings for the semester. At the meetings, members exchange reading material and discuss ethical questions around the production and use of AI. Members also reflect on how the campus community at MIT can contribute to understanding or resolving those issues.
The Debate
Discussion in the group revolves around the ethics of AI. The group's first meeting happened around the same time MIT officially announced the opening of its MIT Stephen A. Schwarzman College of Computing. At this new college, the university officially teaches the Ethics and Governance of Artificial Intelligence. The course's objective reads:
“This course will pursue a cross-disciplinary investigation of the development and deployment of the opaque complex adaptive systems that are increasingly in public and private use. We will explore the proliferation of algorithmic decision-making, autonomous systems, and machine learning and explanation; the search for balance between regulation and innovation; and the effects of AI on the dissemination of information, along with questions related to individual rights, discrimination, and architectures of control.”
After the announcement of the new college and the course, 60 students turned up. In the meetings that followed, several topics came up. One was who should be held responsible when a self-driving Uber runs over a pedestrian: the students debated whether it should be the engineer who designed the AI or the person behind the wheel.
Another hotly debated topic was whether ethics should be integrated into courses in general or taught as a separate, dedicated course; several students argued for the latter.
Inferences
One student said, “It’s hard to teach ethics in a CS class, so maybe there should be separate classes.” However, Natalie Lao, an EECS graduate student, said:
“When you learn to code, you learn a design process. If you include ethics into your design practice you learn to internalize ethical programming as part of your workflow.”
As it remains unclear what mode of teaching MIT's AI ethics course will use, the debate lives on.