Natalie Lao was set on becoming an electrical engineer, like her parents, until she stumbled on course 6.S192 (Making Mobile Apps), taught by Professor Hal Abelson. Here was a blueprint for turning a smartphone into a tool for finding clean drinking water, or sorting pictures of faces, or doing just about anything. “I thought, I wish people knew building tech could be like this,” she said on a recent afternoon, taking a break from writing her dissertation.
After shifting her focus as an MIT undergraduate to computer science, Lao joined Abelson’s lab, which was busy spreading its App Inventor platform and do-it-yourself philosophy to high school students around the world. App Inventor set Lao on her path to making it easy for anyone, from farmers to factory workers, to understand AI and use it to improve their lives. Now in the third and final year of her PhD at MIT, Lao is also the co-founder of an AI startup that fights fake news and the co-producer of a series of machine learning tutorials. It’s all part of her mission to help people find the creator and free thinker within.
“She just radiates optimism and enthusiasm,” says Abelson, the Class of 1922 Professor in the Department of Electrical Engineering and Computer Science (EECS). “She’s a natural leader who knows how to get people excited and organized.”
Lao was immersed in App Inventor, building modules that teach students to create face recognition models and store data in the cloud. Then, in 2016, the surprise election of Donald Trump as U.S. president forced her to think more critically about technology. She was less upset by Trump the politician than by revelations that social media-fueled propaganda and misinformation had tilted the race in his favor.
When a friend, Elan Pavlov, then an EECS postdoc, approached Lao about an idea he had for a platform to combat fake news, she was ready to dive in. Having grown up in rural, urban, and suburban parts of Tennessee and Ohio, Lao was used to hearing a range of political views. But now, social platforms were filtering those voices and amplifying polarizing, often inaccurate, content. Pavlov’s idea stood out for its focus on identifying the people (and bots) spreading misinformation and disinformation, rather than the content itself.
Lao recruited two friends, Andrew Tsai and Keertan Kini, to help build out the platform. They would later name it HINTS, or Human Interaction News Trustworthiness System, after an early page-ranking algorithm called HITS.
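The article doesn’t describe how HINTS works internally, but the HITS algorithm it is named for is a well-documented classic: it scores nodes in a link graph as “hubs” and “authorities” through a simple iterative update. Below is a minimal Python sketch of that classic algorithm only, with a toy adjacency matrix standing in for a sharing network; it is an illustration of HITS, not of HINTS itself.

```python
import numpy as np

def hits(adjacency, iterations=50):
    """Classic HITS: iteratively refine hub and authority scores.

    adjacency[i][j] = 1 if node i links to (here, hypothetically, amplifies) node j.
    """
    A = np.asarray(adjacency, dtype=float)
    n = A.shape[0]
    hubs = np.ones(n)
    authorities = np.ones(n)
    for _ in range(iterations):
        # A node is authoritative if many good hubs point to it...
        authorities = A.T @ hubs
        # ...and a good hub points to many authoritative nodes.
        hubs = A @ authorities
        # Normalize so the scores stay bounded.
        authorities /= np.linalg.norm(authorities)
        hubs /= np.linalg.norm(hubs)
    return hubs, authorities

# Toy example: accounts 0 and 1 both amplify account 2.
hub_scores, auth_scores = hits([[0, 0, 1],
                                [0, 0, 1],
                                [0, 0, 0]])
print(hub_scores, auth_scores)
```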
In a demo last fall, Lao and Tsai highlighted a network of Twitter accounts that had shared conspiracy theories tied to the murder of Saudi journalist Jamal Khashoggi under the hashtag #khashoggi. When they looked at what else those accounts had shared, they found streams of other false and misleading news. Topping the list was the incorrect claim that then-U.S. Congressman Beto O’Rourke had funded a caravan of migrants headed for the U.S. border.
The HINTS team hopes that by flagging the networks that spread fake news, social platforms will move faster to remove fake accounts and contain the propagation of misinformation.
“Fake news doesn’t have any impact in a vacuum — real people have to read it and share it,” says Lao. “No matter what your political views, we’re concerned about facts and democracy. There’s fake news being pushed on both sides and it’s making the political divide even worse.”
The HINTS team is now working with its first client, a media analytics firm based in Virginia. As CEO, Lao has called on her experience as a project manager from internships at GE, Google, and Apple, where, most recently, she led the rollout of the iPhone XR display screen. “I’ve never met anyone as good at managing people and tech,” says Tsai, an EECS master’s student who met Lao as a lab assistant for Abelson’s course 6.S198 (Deep Learning Practicum), and is now CTO of HINTS.
As HINTS was getting off the ground, Lao co-founded a second startup, ML Tidbits, with EECS graduate student Harini Suresh. While learning to build AI models, both women grew frustrated by the tutorials they found on YouTube. “They were full of formulas, with very few pictures,” says Lao. “Even if the material isn’t that hard, it looks hard!”
Convinced they could do better, Lao and Suresh reimagined a menu of intimidating topics like unsupervised learning and model-fitting as a set of inviting side dishes. Sitting cross-legged on a table, as if by a cozy fire, Lao and Suresh put viewers at ease with real-world anecdotes, playful drawings, and an engaging tone. Six more videos, funded by MIT Sandbox and the MIT-IBM Watson AI Lab, are planned for release this spring.
If her audience learns one thing from ML Tidbits, Lao says, she hopes it’s that anyone can learn the basic underpinnings of AI. “I want them to think, ‘Oh, this technology isn't just something that professional computer scientists or mathematicians can touch. I can learn it too. I can form educated opinions and join discussions about how it should be used and regulated.’ ”
From MIT News: https://ift.tt/36FsEwC