Toward the close of the three-day celebration of the MIT Stephen A. Schwarzman College of Computing, there was one inescapable takeaway: "We are at an inflection point. With the progressing technologies of artificial intelligence, we are on the verge of incredible things," said IBM Executive Vice President John E. Kelly.
Less clear to many participants and audience members after a whirlwind of TED-like talks, demonstrations, and discussion was whether advanced computation can truly work primarily for the benefit of humanity.
"We are undergoing a massive shift that can make the world a better place," noted David Siegel, chairman of Two Sigma Investments. "But I fear we could move in a direction that is far from an algorithmic utopia."
Meeting the challenges of artificial intelligence
Many speakers at the three-day celebration, which was held on Feb. 26-28, called for an approach to education, research, and tool-making that combines collective knowledge from the technology, humanities, arts, and social science fields, a call that threw the double-edged promise of the new machine age into stark relief.
As Melissa Nobles, the Kenan Sahin Dean of MIT’s School of Humanities, Arts, and Social Sciences, introduced the final panel of the celebration, she reinforced the need for such an approach, noting that the humanities, social sciences, and arts are grappling “with the ways in which computation is changing the world,” and that “technologists themselves must much more deeply understand what they are doing, how they are deeply changing human life."
The final panel was “Computing for the People: Ethics and AI,” moderated by New York Times columnist Thomas Friedman. In a conversation afterward, Nobles also emphasized that the goal of the new college is to advance computation and to give all students a greater “awareness of the larger political, social context in which we’re all living.” That is the MIT vision for developing “bilinguals” — engineers, scholars, professionals, civic leaders, and policymakers who have both superb technical expertise and an understanding of complex societal issues that is gained from study in the humanities, arts, and social sciences.
The perils of speed and limited perspective
The five panelists on “Computing for the People” — representing industry, academia, government, and philanthropy — contributed particulars to the vision of a society infused with those bilinguals, and attested to the perils posed by an overly swift integration of advanced computing into all domains of modern existence.
"I think of AI as jetpacks and blindfolds that will send us careening in whatever direction we're already headed," said Joi Ito, director of the MIT Media Lab. "It's going to make us more powerful but not necessarily more wise.”
The key problem, according to Ito, is that machine learning and AI have to date been exclusively the province of engineers, who tend to talk only with each other. This means they can deny accountability when their work proves socially, politically, or economically destructive. "Asked to explain their code, technological people say: ‘We're just technical people, we don't deal with racial or political problems,’" Ito said.
Can AI advance justice and strengthen democracy?
Darren Walker, president of the Ford Foundation, zeroed in on the value void at the center of this new technology.
"If we go deep [into AI tool-making] without a view as to whether AI can advance justice, whether it can strengthen our democracy, if we engage this enterprise without those questions driving our discourse, we are doomed," he said.
As a case in point, he cited the predictive analytics of AI that more frequently deny parole to black men than to white men with comparable records. "So AI is in fact reifying and amplifying rather than correcting the historic biases we see every day in America," Walker said. "Will AI be a lever for good, or simply compound disadvantages built into our systems?"
Walker also noted that during the recent congressional hearings featuring the testimony of Facebook CEO Mark Zuckerberg, politicians demonstrated ignorance about the workings of social media platforms and of cellphone technology.
"At any other hearing of importance in our society, there would be some smart person sitting behind a congressperson to say, [of the person testifying] ‘Challenge him, he's wrong,’" said Walker. But, he continued, "there are very few people on the Hill working in the public interest on this larger issue of the fourth industrial revolution."
Collaborations to make a better world
Panelists also emphasized that the speed of the current technological transformation threatens to undermine efforts to control it. "By the time we realize there's something we must do to right the ship, the ship will be in the middle of the ocean," said Ursula Burns, executive chairman and CEO of VEON, Ltd.
But Burns and her fellow panelists believe the new MIT Schwarzman College of Computing, by bringing together computer scientists with scholars from the social sciences and humanities, could help reverse the potentially destructive course of AI.
"It's not just about getting a whole bunch of computer scientists writing new programs, it is about making the world a better place," Burns said. "It's active engagement, broad knowledge, and responsibility to other people."
Jennifer Chayes, a technical fellow and managing director of Microsoft Research New England, described an initiative in her labs to promote "fairness, accountability, transparency, and ethics," or FATE, in software platforms and information systems.
"It's a nascent field that brings together legal scholars, ethicists, social scientists, and people in AI to ask how we can make some decisions together in a more equitable fashion," she said
Chayes also highlighted a method she called “algorithmic greenlining,” which aims to purge inherent bias from the decision-making code that determines who in a particular population gets into a school or receives a loan. "We have a fairness component that takes an objective function and optimizes the data in a way that amplifies equities, rather than inequities," she said.
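Chayes described the approach only at that level of detail, but the general idea of adding a “fairness component” to an objective function can be sketched. The following is a minimal illustration under stated assumptions, not the Microsoft Research implementation: it penalizes an ordinary logistic-regression loss by the gap in average predicted approval rates between two groups, with the penalty weight lam, the function names, and the choice of fairness criterion all invented here for illustration.

```python
# Illustrative only: the article does not specify the method, so this is a generic
# fairness-penalized logistic regression, not the Microsoft Research "greenlining" code.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fairness_penalized_loss(w, X, y, group, lam=1.0):
    """Cross-entropy loss plus a penalty on the gap in mean predicted
    approval rate between group 0 and group 1 (a demographic-parity proxy)."""
    p = sigmoid(X @ w)
    eps = 1e-12
    log_loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    # Assumes both groups are present in the data.
    gap = abs(p[group == 0].mean() - p[group == 1].mean())
    return log_loss + lam * gap

def fit(X, y, group, lam=1.0, lr=0.5, steps=500):
    """Minimize the penalized loss with simple finite-difference gradient descent."""
    w = np.zeros(X.shape[1])
    h = 1e-5
    for _ in range(steps):
        grad = np.empty_like(w)
        for j in range(w.size):
            e = np.zeros_like(w)
            e[j] = h
            grad[j] = (fairness_penalized_loss(w + e, X, y, group, lam)
                       - fairness_penalized_loss(w - e, X, y, group, lam)) / (2 * h)
        w -= lr * grad
    return w
```

Raising lam trades a little predictive accuracy for a smaller approval-rate gap between groups; production systems would use more careful fairness criteria and optimizers than this finite-difference sketch.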
Accountability and human-centered AI
Ash Carter, now the director of Harvard University's Belfer Center for Science and International Affairs, said that as U.S. Secretary of Defense he learned that "accountability as an algorithmic matter isn't automatic. It needs to be a criterion for people designing AI."
Machines easily amplify "crummy data," Carter said, so unless system designers establish "data standards and transparency, you're just massaging yesterday into a perfected version of then rather than creating 'tomorrow.'"
Throughout his career, which involved deploying new technologies in the most perilous of circumstances, Carter said he always felt the imperative to act and think with broad ethical considerations in mind. In 2012, he recalled, he issued a directive at the Department of Defense dealing with the use of autonomous weapons.
"It said that with any decisions to use lethal force on behalf of our people, there must be a human involved in the decision — a directive that is still in force to this day," he said.
Since machines now weigh in on matters of life and death, justice and freedom, there is urgency in creating an ethical, socially informed culture in the fields of AI and data science. Panelists expressed the hope that the new MIT Schwarzman College of Computing would serve as an incubator for more and much stronger interdisciplinary approaches to research and education.
The future for bilinguals
"With this new college, we could not just diversify tech, but technify everything else and really work on the hardest problems together in a collaborative way," said Megan Smith ’86, SM ’88, former U.S. chief technology officer and founder and CEO of shift7. "Feeding 22 million children in a free and reduced lunch program is a big data problem, more important than self-driving cars, and it's the kind of computing I think we should do on inequality and poverty."
Panelists also voiced confidence that the new college will serve as a model to other higher education institutions seeking to engage the engineering and liberal arts fields to solve important societal problems collaboratively. They discussed the importance of faculty and students representing not just a range of disciplines, but a range of human beings, people whose lived experiences are relevant to discerning the ethical and societal implications of AI tools.
The panelists also welcomed the opportunity to help nurture the MIT bilinguals — students with expertise in both technical and liberal arts fields — who could swiftly assume positions as policy advisors and leaders in government and industry.
"MIT is going to be the anchor of what we will know in this society as public interest technology," predicted Darren Walker. "What MIT is doing will set the pace for every other university that wants to be relevant in the future."
Story prepared by MIT SHASS Communications
Editorial team: Leda Zimmerman and Emily Hiestand