Thursday, January 31, 2019

Putting neural networks under the microscope

Researchers from MIT and the Qatar Computing Research Institute (QCRI) are putting the machine-learning systems known as neural networks under the microscope.

In a study that sheds light on how these systems manage to translate text from one language to another, the researchers developed a method that pinpoints individual nodes, or “neurons,” in the networks that capture specific linguistic features.

Neural networks learn to perform computational tasks by processing huge sets of training data. In machine translation, a network crunches language data annotated by humans, and presumably “learns” linguistic features, such as word morphology, sentence structure, and word meaning. Given new text, these networks match these learned features from one language to another, and produce a translation.

But, in training, these networks basically adjust internal settings and values in ways the creators can’t interpret. For machine translation, that means the creators don’t necessarily know which linguistic features the network captures.

In a paper being presented at this week’s Association for the Advancement of Artificial Intelligence conference, the researchers describe a method that identifies which neurons are most active when classifying specific linguistic features. They also designed a toolkit for users to analyze and manipulate how their networks translate text for various purposes, such as making up for any classification biases in the training data.

In their paper, the researchers pinpoint neurons that are used to classify, for instance, gendered words, past and present tenses, numbers at the beginning or middle of sentences, and plural and singular words. They also show how some of these tasks require many neurons, while others require only one or two.

“Our research aims to look inside neural networks for language and see what information they learn,” says co-author Yonatan Belinkov, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “This work is about gaining a more fine-grained understanding of neural networks and having better control of how these models behave.”

Co-authors on the paper are: senior research scientist James Glass and undergraduate student Anthony Bau, of CSAIL; and Hassan Sajjad, Nadir Durrani, and Fahim Dalvi, of QCRI.  

Putting a microscope on neurons

Neural networks are structured in layers, where each layer consists of many processing nodes, each connected to nodes in layers above and below. Data are first processed in the lowest layer, which passes an output to the layer above, and so on. Each output carries a different “weight” that determines how much it figures into the next layer’s computation. During training, these weights are constantly readjusted.
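
To make that layered computation concrete, here is a toy sketch (not the researchers’ model) of one layer’s weighted output feeding the layer above, using made-up numbers:

```python
# Toy illustration of layered processing: each layer's output is a
# weighted combination of the previous layer's outputs, and training
# adjusts these weights. All values here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # input to the lowest layer
W1 = rng.normal(size=(8, 4))      # weights into the first hidden layer
W2 = rng.normal(size=(2, 8))      # weights into the layer above

hidden = np.tanh(W1 @ x)          # the lowest layer passes its output upward
output = W2 @ hidden              # the next layer weights that output
print(output)
```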

Neural networks used for machine translation train on annotated language data. In training, each layer learns different “word embeddings” for one word. Word embeddings are essentially tables of several hundred numbers combined in a way that corresponds to one word and that word’s function in a sentence. Each number in the embedding is calculated by a single neuron.

In their past work, the researchers trained a model to analyze the weighted outputs of each layer to determine how the layers classified any given embedding. They found that lower layers classified relatively simple linguistic features — such as the structure of a particular word — and higher layers helped classify more complex features, such as how the words combine to form meaning.

In their new work, the researchers use this approach to determine how learned word embeddings make a linguistic classification. But they also implemented a new technique, called “linguistic correlation analysis,” that trains a model to home in on the individual neurons in each word embedding that are most important in the classification.

The new technique combines all the embeddings captured from different layers — which each contain information about the word’s final classification — into a single embedding. As the network classifies a given word, the model learns weights for every neuron that was activated during each classification process. This provides a weight to each neuron in each word embedding that fired for a specific part of the classification.

“The idea is, if this neuron is important, there should be a high weight that’s learned,” Belinkov says. “The neurons with high weights are the ones more important to predicting the certain linguistic property. You can think of the neurons as a lot of knobs you need to turn to get the correct combination of numbers in the embedding. Some knobs are more important than others, so the technique is a way to assign importance to those knobs.”
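
As a rough illustration of this idea (not the authors’ actual code), one could train a simple linear probe on neuron activations and rank neurons by the magnitude of their learned weights; the activations and property labels below are random placeholders:

```python
# Minimal sketch of ranking neurons by importance for a linguistic property.
# `activations` stands in for hidden states concatenated across layers
# (one row per word); `labels` stands in for a property such as tense.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 500))   # placeholder neuron activations
labels = rng.integers(0, 2, size=1000)       # placeholder property labels

# Train a linear probe; each learned weight corresponds to one neuron.
probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
probe.fit(activations, labels)

# Neurons with the largest absolute weights are the most important "knobs"
# for predicting this property.
importance = np.abs(probe.coef_).ravel()
print("Top neurons:", np.argsort(importance)[::-1][:10])
```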

Neuron ablation, model manipulation

Because each neuron is weighted, it can be ranked in order of importance. To that end, the researchers designed a toolkit, called NeuroX, that automatically ranks all neurons of a neural network according to their importance and visualizes them in a web interface.

Users upload a network they’ve already trained, as well as new text. The app displays the text and, next to it, a list of specific neurons, each with an identification number. When a user clicks on a neuron, the text will be highlighted depending on which words and phrases the neuron activates for. From there, users can completely knock out — or “ablate” — the neurons, or modify the extent of their activation, to control how the network translates.

Ablation also served as a check that the researchers’ method accurately pinpointed the correct high-ranking neurons. In their paper, the researchers used the tool to show that, by ablating high-ranking neurons in a network, its performance in classifying correlated linguistic features dipped significantly. By contrast, when they ablated lower-ranking neurons, performance suffered, but not as dramatically.

“After you get all these rankings, you want to see what happens when you kill these neurons and see how badly it affects performance,” Belinkov says. “That’s an important result proving that the neurons we find are, in fact, important to the classification process.”
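
A minimal sketch of that ablation check, again with synthetic activations and a made-up property rather than the researchers’ models, might look like this:

```python
# Sketch of neuron ablation: zero out the highest-ranked neurons and
# measure how much a property classifier's accuracy drops, compared with
# ablating low-ranked neurons. Data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 500))
labels = (activations[:, :5].sum(axis=1) > 0).astype(int)  # toy "property"

X_tr, X_te, y_tr, y_te = train_test_split(activations, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
ranking = np.argsort(np.abs(probe.coef_).ravel())[::-1]  # most important first

def ablate(X, neurons):
    """Return a copy of X with the given neurons zeroed out."""
    X = X.copy()
    X[:, neurons] = 0.0
    return X

print("no ablation:      ", probe.score(X_te, y_te))
print("top-50 ablated:   ", probe.score(ablate(X_te, ranking[:50]), y_te))
print("bottom-50 ablated:", probe.score(ablate(X_te, ranking[-50:]), y_te))
```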

One interesting application for the toolkit is helping limit biases in language data. Machine-translation models, such as Google Translate, may train on data with gender bias, which can be problematic for languages with gendered words. Certain professions, for instance, may be more often referred to as male, and others as female. When a network translates new text, it may only produce the learned gender for those words. In many online English-to-Spanish translations, for instance, “doctor” often translates into its masculine version, while “nurse” translates into its feminine version.

“But we find we can trace individual neurons in charge of linguistic properties like gender,” Belinkov says. “If you’re able to trace them, maybe you can intervene somehow and influence the translation to translate these words more to the opposite gender … to remove or mitigate the bias.”

In preliminary experiments, the researchers modified neurons in a network to change translated text from past to present tense with 67 percent accuracy. They ablated neurons to switch the gender of translated words with 21 percent accuracy. “It’s still a work in progress,” Belinkov says. A next step, he adds, is fine-tuning the web application to achieve more accurate ablation and manipulation.



from MIT News http://bit.ly/2RYZzKn

Biologist Adam Martin studies the mechanics of tissue folding

Embryonic development is tightly regulated by genes that control how body parts form. One of the key responsibilities of these genes is to make sure that tissues fold into the correct shapes, forming structures that will become the spine, brain, and other body parts.

During the 1970s and ’80s, the field of embryonic development focused mainly on identifying the genes that control this process. More recently, many biologists have shifted toward investigating the physics behind the tissue movements that occur during development, and how those movements affect the shape of tissues, says Adam Martin, an MIT associate professor of biology.

Martin, who recently earned tenure, has made key discoveries in how tissue folding is controlled by the movement of cells’ internal scaffolding, known as the cytoskeleton. Such discoveries not only shed light on how tissues form, including how birth defects such as spina bifida occur, but may also help guide scientists who are working on engineering artificial human tissues.

“We’d like to understand the molecular mechanisms that tune how forces are generated by cells in a tissue, such that the tissue then gets into a proper shape,” Martin says. “It’s important that we understand fundamental mechanisms that are in play when tissues are getting sculpted in development, so that we can then harness that knowledge to engineer tissues outside of the body.”

Cellular forces

Martin grew up in Rochester, New York, where both of his parents were teachers. As a biology major at nearby Cornell University, he became interested in genetics and development. He went on to graduate school at the University of California at Berkeley, thinking he would study the genes that control embryonic development.

However, while in his PhD program, Martin became interested in a different phenomenon — the role of the cytoskeleton in a process called endocytosis. Cells use endocytosis to absorb many different kinds of molecules, such as hormones or growth factors.

“I was interested in what generates the force to promote this internalization,” Martin says.

He discovered that the force is generated by the assembly of arrays of actin filaments. These filaments tug on a section of the cell membrane, pulling it inward so that the membrane encloses the molecule being absorbed. He also found that myosin, a protein that can act as a motor and controls muscle contractions, helps to control the assembly of actin filaments.

After finishing his PhD, Martin hoped to find a way to combine his study of cytoskeleton mechanics with his interest in developmental biology. As a postdoc at Princeton University, he started to study the phenomenon of tissue folding in fruit fly embryonic development, which is now one of the main research areas of his lab at MIT. Tissue folding is a ubiquitous shape change that converts a planar sheet of cells into a three-dimensional structure, such as a tube.

In developing fruit fly embryos, tissue folding invaginates cells that will form internal structures in the fly. This folding process is similar to tissue folding events in vertebrates, such as neural tube formation. The neural tube, which is the precursor to the vertebrate spinal cord and brain, begins as a sheet of cells that must fold over and “zip” itself up along a seam to form a tube. Problems with this process can lead to spina bifida, a birth defect that results from an incomplete closing of the backbone.

When Martin began working in this area, scientists had already discovered many of the transcription factors (proteins that turn on networks of specific genes) that control the folding of the neural tube. However, little was known about the mechanics of this folding.

“We didn’t know what types of forces those transcription factors generate, or what the mechanisms were that generated the force,” he says.

He discovered that the accumulation of myosin helps cells lined up in a row to become bottle-shaped, causing the top layer of the tissue to pucker inward and create a fold in the tissue. More recently, he found that myosin is turned on and off in these cells in a dynamic way, by a protein called RhoA.

“What we found is there’s essentially an oscillator running in the cells, and you get a cycle of this signaling protein, RhoA, that’s being switched on and off in a cyclical manner,” Martin says. “When you don’t have the dynamics, the tissue still tries to contract, but it falls apart.”

He also found that the dynamics of this myosin activity can be disrupted by depleting genes that have been linked to spina bifida.

Breaking free

Another important cellular process that relies on tissue folding is the epithelial-mesenchymal transition (EMT). This occurs during embryonic development when cells gain the ability to break free and move to a new location. It is also believed to occur when cancer cells metastasize from tumors to seed new tumors in other parts of the body.

During embryonic development, cells lined up in a row need to orient themselves so that when they divide, both daughter cells remain in the row. Martin has shown that when the mechanism that enables the cells to align correctly is disrupted, one of the daughter cells will be squeezed out of the tissue.

“This has been proposed as one way you can get an epithelial-to-mesenchymal transition, where you have cells dissociate from native tissue,” Martin says.  He now plans to further study what happens to the cells that get squeezed out during the EMT.

In addition to these projects, he is also collaborating with Jörn Dunkel, an MIT associate professor of mathematics, to map the network connections between the myosin proteins that control tissue folding during development. “That project really highlights the benefits of getting people from diverse backgrounds to analyze a problem,” Martin says.



from MIT News http://bit.ly/2SecYxp

Technique could boost resolution of tissue imaging as much as tenfold

Imaging deep inside biological tissue has long been a significant challenge. That is because light tends to be scattered by complex media such as biological tissue, bouncing around inside until it comes out again at a variety of different angles. This distorts the focus of optical microscopes, reducing both their resolution and imaging depth. Using light of a longer wavelength can help to avoid this scattering, but it also reduces imaging resolution.

Now, instead of attempting to avoid scattering, researchers at MIT have developed a technique to use the effect to their advantage. The new technique, which they describe in a paper published in the journal Science, allows them to use light scattering to improve imaging resolution by up to 10 times that of existing systems.

Indeed, while conventional microscopes are limited by what is known as the diffraction barrier, which prevents them from focusing beyond a given resolution, the new technique allows imaging at “optical super-resolution,” or beyond this diffraction limit.

The technique could be used to improve biomedical imaging, for example, by allowing more precise targeting of cancer cells within tissue. It could also be combined with optogenetic techniques, to excite particular brain cells. It could even be used in quantum computing, according to Donggyu Kim, a graduate student in mechanical engineering at MIT and first author of the paper.

In 2007, researchers first proposed that by shaping a wave of light before sending it into the tissue, it is possible to reverse the scattering process, focusing the light at a single point. However, taking advantage of this effect has long been hampered by the difficulty of gaining sufficient information about how light is scattered within complex media such as biological tissue.

To obtain this information, researchers have developed numerous techniques for creating “guide stars,” or feedback signals from points within the tissue that allow the light to be focused correctly. But these approaches have so far resulted in imaging resolution well above the diffraction limit, Kim says.

In order to improve the resolution, Kim and co-author Dirk Englund, an associate professor in MIT’s Department of Electrical Engineering and Computer Science and the Research Laboratory of Electronics, developed something they call quantum reference beacons (QRBs).

These QRBs are made using nitrogen-vacancy (N-V) centers within diamonds. These tiny molecular defects within the crystal lattice of diamonds are naturally fluorescent, meaning they will emit light when excited by a laser beam.

What’s more, when a magnetic field is applied to the QRBs, they each resonate at their own specific frequency. By targeting the tissue sample with a microwave signal of the same resonant frequency as a particular QRB, the researchers can selectively alter its fluorescence.

“Imagine a navigator trying to get their vessel to its destination at night,” Kim says. “If they see three beacons, all of which are emitting light, they will be confused. But, if one of the beacons deliberately twinkles to generate a signal, they will know where their destination is,” he says.

In this way the N-V centers act as beacons, each emitting fluorescent light. By modulating a particular beacon’s fluorescence to create an on/off signal, the researchers can determine the beacon’s location within the tissue.

“We can read out where this light is coming from, and from that we can also understand how the light scatters inside the complex media,” Kim says.

The researchers then combine this information from each of the QRBs to create a precise profile of the scattering pattern within the tissue.

By displaying this pattern with a spatial light modulator — a device used to produce holograms by manipulating light — the laser beam can be shaped in advance to compensate for the scattering that will take place inside the tissue. The laser is then able to focus with super resolution on a point inside the tissue.

In biological applications, the researchers envision that a suspension of nanodiamonds could be injected into the tissue, much as a contrast agent is already used in some existing imaging systems. Alternatively, molecular tags attached to the diamond nanoparticles could guide them to specific types of cells.

The QRBs could also be used as qubits for quantum sensing and quantum information processing, Kim says. “The QRBs can be used as quantum bits to store quantum information, and with this we can do quantum computing,” he says.

Super-resolution imaging within complex scattering media has been hampered by the deficiency of guide stars that report their positions with subdiffraction precision, according to Wonshik Choi, a professor of physics at Korea University, who was not involved in the research.

“The researchers have developed an elegant method of exploiting quantum reference beacons made of the nitrogen vacancy center in nanodiamonds as such guide stars,” he says. “This work opens up new venues for deep-tissue super-resolution imaging and quantum information processing within subwavelength nanodevices.”

The researchers now hope to explore the use of quantum entanglement and other types of semiconductors for use as QRBs, Kim says.



from MIT News http://bit.ly/2MHgGKA

Letter regarding the departure of the VP for communications

The following email was sent Jan. 17 to MIT faculty and staff by Vice President Kirk Kolenbrander.

Dear MIT faculty and staff,

I write to let you know that, after a decade and a half of service to the MIT community, Vice President for Communications Nate Nickerson will leave in a few weeks to become Vice President for Communications at Yale University, where he will serve on the president’s cabinet.

After five years as deputy editor of Technology Review, Nate joined the MIT News Office in 2009 as editorial director and led a major office reorganization. In 2010, he was named director of communications, and he became associate vice president for communications in 2012. He has served in his current role since 2015.

Among his accomplishments as vice president, he launched the Office of Communications and created MIT News, one of the most-read university web sites in the world. He built a remarkable team that shares the MIT story with the world every day through first-class reporting, media relations, web design, social media and video. From The Engine to the Quest for Intelligence, Nate also led impressive communications efforts to support major public launches, and helped set an inspiring standard for communications across the Institute. He has been a wonderful colleague and friend. We wish him well in his new position.

Nate’s last day at MIT will be February 15. With his departure, the Office of Communications, including the News Office, will continue to report up to me. As we shape a long-term plan for managing communications going forward, please feel free to share your insights and suggestions with me here.

Sincerely,

Kirk Kolenbrander



from MIT News http://bit.ly/2sVtjbW

Bacteria promote lung tumor development, study suggests

MIT cancer biologists have discovered a new mechanism that lung tumors exploit to promote their own survival: These tumors alter bacterial populations within the lung, provoking the immune system to create an inflammatory environment that in turn helps the tumor cells to thrive.

In mice that were genetically programmed to develop lung cancer, those raised in a bacteria-free environment developed much smaller tumors than mice raised under normal conditions, the researchers found. Furthermore, the researchers were able to greatly reduce the number and size of the lung tumors by treating the mice with antibiotics or blocking the immune cells stimulated by the bacteria.

The findings suggest several possible strategies for developing new lung cancer treatments, the researchers say.

“This research directly links bacterial burden in the lung to lung cancer development and opens up multiple potential avenues toward lung cancer interception and treatment,” says Tyler Jacks, director of MIT’s Koch Institute for Integrative Cancer Research and the senior author of the paper.

Chengcheng Jin, a Koch Institute postdoc, is the lead author of the study, which appears in the Jan. 31 online edition of Cell.

Linking bacteria and cancer

Lung cancer, the leading cause of cancer-related deaths, kills more than 1 million people worldwide per year. Up to 70 percent of lung cancer patients also suffer complications from bacterial infections of the lung. In this study, the MIT team wanted to see whether there was any link between the bacterial populations found in the lungs and the development of lung tumors.

To explore this potential link, the researchers studied genetically engineered mice that express the oncogene Kras and lack the tumor suppressor gene p53. These mice usually develop a type of lung cancer called adenocarcinoma within several weeks.

Mice (and humans) typically have many harmless bacteria growing in their lungs. However, the MIT team found that in the mice engineered to develop lung tumors, the bacterial populations in their lungs changed dramatically. The overall population grew significantly, but the number of different bacterial species went down. The researchers are not sure exactly how the lung cancers bring about these changes, but they suspect one possibility is that tumors may obstruct the airway and prevent bacteria from being cleared from the lungs.

This bacterial population expansion induced immune cells called gamma delta T cells to proliferate and begin secreting inflammatory molecules called cytokines. These molecules, especially IL-17 and IL-22, create a progrowth, prosurvival environment for the tumor cells. They also stimulate activation of neutrophils, another kind of immune cell that releases proinflammatory chemicals, further enhancing the favorable environment for the tumors.

“You can think of it as a feed-forward loop that forms a vicious cycle to further promote tumor growth,” Jin says. “The developing tumors hijack existing immune cells in the lungs, using them to their own advantage through a mechanism that’s dependent on local bacteria.”

However, in mice that were born and raised in a germ-free environment, this immune reaction did not occur and the tumors the mice developed were much smaller.

Blocking tumor growth

The researchers found that when they treated the mice with antibiotics either two or seven weeks after the tumors began to grow, the tumors shrank by about 50 percent. The tumors also shrank if the researchers gave the mice drugs that block gamma delta T cells or that block IL-17.

The researchers believe that such drugs may be worth testing in humans, because when they analyzed human lung tumors, they found altered bacterial signals similar to those seen in the mice that developed cancer. The human lung tumor samples also had unusually high numbers of gamma delta T cells.

“If we can come up with ways to selectively block the bacteria that are causing all of these effects, or if we can block the cytokines that activate the gamma delta T cells or neutralize their downstream pathogenic factors, these could all be potential new ways to treat lung cancer,” Jin says.

Many such drugs already exist, and the researchers are testing some of them in their mouse model in hopes of eventually testing them in humans. The researchers are also working on determining which strains of bacteria are elevated in lung tumors, so they can try to find antibiotics that would selectively kill those bacteria.

The research was funded, in part, by a Lung Cancer Concept Award from the Department of Defense, a Cancer Center Support (core) grant from the National Cancer Institute, the Howard Hughes Medical Institute, and a Margaret A. Cunningham Immune Mechanisms in Cancer Research Fellowship Award.



from MIT News http://bit.ly/2DKLBD4

SuperUROP: Showcasing students' research work in progress

If one overarching message emerged from the 2018 SuperUROP Showcase, it was this: MIT undergraduates can do just about anything.

The lively poster session, which marked the halfway point in the annual Advanced Undergraduate Research Opportunities Program (SuperUROP), featured more than 130 poster presentations by students on topics ranging from DNA-based memory storage to adaptive flight control and from image recognition to the automated correction of grammatical errors in Japanese.

Capping the event was the SuperUROP Community Dinner, which featured a keynote address by Tom Leighton PhD ’81, the CEO and co-founder of Akamai, a $2.5 billion technology company that was born at MIT. Leighton’s talk, “The Akamai Story: From Theory to Practice,” was designed to inspire the undergraduates in attendance. It centered, as Leighton put it, on “taking a UROP project and forming a company and having some success with it.”

SuperUROP builds on the success of MIT’s flagship UROP program. While traditional UROP experiences last just one term, SuperUROP involves research projects spanning the full academic year and includes a two-term class on conducting and presenting research, in which students write journal-style papers as their final assignments.

Typically, the impact of the SuperUROP experience extends well beyond the course, says Anantha Chandrakasan, dean of the School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science.

“We expect to see the results of many of these projects presented at major conferences and published in top journals,” says Chandrakasan, who founded SuperUROP when he was head of the Department of Electrical Engineering and Computer Science (EECS). “We are also thrilled to see our former SuperUROP scholars move on to top PhD programs and make an impact in industry.”

“The fact that it’s year-long is crucial,” says EECS senior Faraaz Nadeem, who is trying to automate the transcription of music featuring multiple instruments, a task he finds quite time-consuming. “The extra time and the way the class is structured, with deadlines, is pretty helpful.”

Launched in 2012 within EECS, SuperUROP later expanded to the full School of Engineering, and in 2017 began supporting research involving the School of Humanities, Arts, and Social Sciences (SHASS). Nadeem is among this year’s nine CS+HASS Undergraduate Research and Innovation Scholars, who work on projects combining computer science with the humanities, arts, and social sciences.

“SHASS is so excited to have students involved in SuperUROP,” says Agustín Rayo, associate dean of SHASS, who attended the December 2018 poster session. “I think our undergraduates are really at the vanguard.”

This year, SuperUROP also included eight scholars funded by the School of Engineering and the MIT Quest for Intelligence, a campus-wide initiative launched in February 2018 to advance human understanding of intelligence.

“The research goes beyond EECS. We have a really broad spectrum,” says Piotr Indyk, the Thomas D. and Virginia W. Cabot Professor of EECS and one of three faculty members who teach the SuperUROP class 6.UAR (Seminar in Advanced Undergraduate Research) with the support of eight teaching assistants.

EECS faculty member Thomas Heldt, who has served as an advisor for several SuperUROP students in the past few years, pointed out that the year-long program enables undergraduates to really dig into their topics.

“Usually it’s a more meaningful experience than a regular UROP because we’re working with students for nine months and there’s a formal program of classwork,” noted Heldt, who is the W.M. Keck Career Development Professor in Biomedical Engineering and an associate professor of electrical and biomedical engineering. “The experience is fantastic.”

Students agree.

“This is usually something graduate students would do,” says Patrick Tornes, a senior in mechanical engineering and School of Engineering/Quest scholar who is creating adaptive controls for drones so that the devices can better navigate the variable conditions of the real world. “It’s really awesome to be able to work on this as an undergraduate. In the spring, I’m looking forward to implementing the controller on a hexacopter and seeing how it actually performs.”

EECS senior Sky Shin, also a School of Engineering/Quest scholar, says SuperUROP is helping her decide what path to take in her future.

“I think [SuperUROP is] testing how I’ll fit in grad school,” says Shin, who is working in the Computational Cognitive Science Group to enable computers to classify images based on just a few examples. “This is very extensive research.”

The poster session gave students the chance to practice presenting technical material to a technical audience — one of the key skills taught in the SuperUROP program, says Dina Katabi, the Andrew & Erna Viterbi Professor of EECS and another 6.UAR instructor. “This is a very different class from anything other universities do. It’s a class that believes that research and presentation should go hand in hand,” she says.

Austin Garrett, a senior double-majoring in EECS and physics, says the SuperUROP class assignments — from developing a research topic to creating a poster and giving a presentation — have been useful in helping him plan his research.

“I’ve realized how difficult it is to develop a project,” says Garrett, a School of Engineering/Quest scholar whose research goal is to embed an intuitive understanding of physics into artificial intelligence. “It’s easy to get lost in the sea of possibilities.”

What many students say they like best about SuperUROP, however, is the chance to pursue independent research in an area that really interests them. “I’ve been given a lot of freedom in how I approach the problem. It’s really self-driven,” says Alex Kimn, a senior double-majoring in EECS and physics and another School of Engineering/Quest scholar. Kimn is using neural modeling to address grammatical errors to aid students of Japanese — work motivated by his interest in education.

Ronit Langer, a junior in EECS, meanwhile, has pursued her interest in “how we can take biological knowledge and, using computer science, develop things that can be deployed to help people.” Specifically, she’s trying to develop a protein sensor that will alert first responders to the presence of fentanyl, a powerful synthetic opioid, in possible drug-overdose cases. “What I’ve been able to accomplish in one semester is inspiring,” says Langer, a CS+HASS scholar.

The December showcase gave just a taste of things to come; students will next present the results of their research at the April 2019 SuperUROP Showcase poster session. However, it was clear that MIT undergraduates have the potential to produce great work, as Leighton underscored in his keynote dinner presentation.  

As Leighton recounted the story of Akamai’s founding at MIT, its meteoric rise during the dot-com era, and its near-total collapse in 2001, he attributed much of the company’s success to the work of MIT students. Teams of students worked to get the company launched and later helped it rebound from disaster, he said.

“We got through it, led by people just like you: MIT undergraduates.”



from MIT News http://bit.ly/2CXviRL

Wednesday, January 30, 2019

Mining a trove of text

Few students boast as precocious a start in their field as Andrew Halterman. At age seven, Halterman accompanied his mom, a political scientist, on a research trip to Bosnia. It was just a few months after the ceasefire in that region's civil war.

"I learned all about the conflict and ethnic cleansing," he says. 

With dinner conversations revolving around this kind of field work and the academic world at the University of Oklahoma, where both of his parents served on the faculty, Halterman was hooked. "I became focused on a life in political science at a pretty young age, and by high school, it seemed like a natural path," he says.

Today, the fourth-year doctoral student in political science is pursuing an ambitious two-part research agenda, deploying an original computational strategy to pursue questions about the casualties of war. Halterman's methodology involves a new way of analyzing large collections of written communications — whether newspaper stories or social media postings. It offers him, and potentially other researchers, a way of ferreting out connections between people and places that might otherwise remain concealed.

"I take innovative tools from computer science — different ways of representing sentences and analyzing word order — and assemble them like Lego pieces to link events and locations," says Halterman. "This approach is something new for political scientists, and will make possible new techniques."

In testing out his approach, Halterman chose a pressing contemporary problem.

"I am interested in political violence, and I wanted to understand the vast number of civilian casualties in the Syrian conflict," he says. "There are a lot of theories why armed groups kill civilians, and I wondered if these theories applied to a war like Syria."

To answer this question, Halterman collected data from 2011 to 2016 comprising news texts from international wire services, local newspapers, and Syrian information posted online concerning civilian deaths. "I don't think there had ever been a data set like this, which gave us a pretty good sense of where and how most civilians were actually killed," he says.

Halterman then trained a natural language program he had designed to parse the nearly 10,000 sentences, seeking in particular to link verbs such as "attack" and "advance" with location words. To ensure the accuracy of his model, he included non-Syrian text from Wikipedia and The New York Times.
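
The article does not name the tools Halterman built on; as a hedged illustration of the general idea, an off-the-shelf dependency parser such as spaCy can pair conflict verbs with place-name entities:

```python
# Hypothetical sketch (not Halterman's actual pipeline): link event verbs
# to location entities that appear in the verb's dependency subtree.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
EVENT_VERBS = {"attack", "advance", "shell", "bomb"}

def extract_event_locations(text):
    """Return (verb, place) pairs found in the text."""
    doc = nlp(text)
    pairs = []
    for token in doc:
        if token.pos_ == "VERB" and token.lemma_ in EVENT_VERBS:
            for ent in doc.ents:
                if ent.label_ in {"GPE", "LOC"} and ent.root in token.subtree:
                    pairs.append((token.lemma_, ent.text))
    return pairs

print(extract_event_locations("Government forces attacked rebel positions near Aleppo."))
# e.g. [('attack', 'Aleppo')]
```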

One conventional theory about civilian loss in conflicts suggests "that civilian casualties are like a bell curve: If civilians are right next to the front line, casualties are low, then as you move away, they go up, and then as you go further still, the numbers go down again," Halterman says. "But I didn’t find evidence for this idea, showing that in Syria, the closer you are to the front line, the more killing of civilians there is." 

Halterman's research links civilian deaths to one particular cause:

"What explains most violence in Syria is the government's deliberate targeting of civilians," he says. "Some deaths plausibly occur as collateral damage — artillery and barrel bomb strikes against rebel fighters — but those areas where the regime saw the greatest threat saw the highest rates of violence."

Halterman has long been concerned with conflict and its consequences. His thesis at Amherst College examined the efficacy of American aid in the Balkans during and after the war. On a research trip, he visited Belgrade, Sarajevo, and Pristina, observing the flow of American and UN personnel. "I realized the power of asking people about what they think is going on, and how official statistics can be inaccurate and really misleading," he says.

On a Fulbright Fellowship to Kosovo after Amherst, Halterman was able to flesh out some of the themes of his undergraduate research, looking at the mechanisms of local and international NGO partnerships. This research launched him into a series of opportunities in Washington, at government-related think tanks and consultancies that proved formative.

"I was introduced to working in offices on a team — understanding how that world works," he recalls. The government agencies he assisted looked at such problems as developing effective counterinsurgency plans for Afghanistan. "These jobs also introduced me to quantitative approaches to problems, as we developed new software to help analysts find meaning in large datasets," he says.

Halterman took online courses in Python and R and taught himself programming, so he could help academics and government officials visualize questions. "I'd ask, 'What do you need to do your job, and does this kind of software make your job easier?'"

After several years, "I was ready to go back to grad school," says Halterman. "I'd learned a ton of technical skills but had gotten away from substantive problems."  He chose MIT's graduate program in part because of its rigorous methods sequence. "I had no formal training formulating research questions, or a good process for answering them."

Today, Halterman continues to develop his data tools in the service of political science. "I'm motivated to make things better for others in my field," he says.  He has published his event extraction software on open source websites. "My method can be useful for a range of purposes — looking at protests, attacks, meetings, speeches, finding relationships between people and places," he notes.

As he closes in on his degree, Halterman is polishing his Syria paper before turning his attention to text analysis of conflicts in other countries. He draws considerable support from MIT's political science graduate student community. 

"We get together for dinner, invite a faculty member over, and through the years I've developed really close friendships," says Halterman. "I'm surrounded by people who read drafts and give advice, which helps make me a better scholar."



from MIT News http://bit.ly/2MFAhuE

Eruption spurs creation of real-time air pollution network

As red molten lava oozed out of Kilauea on the Island of Hawaii (“the Big Island”) in May 2018, destroying houses and property in its path, clouds of ash particles and toxic gases from the volcano — known as vog — filled the air and drifted across the island with the wind.

Even before this most recent phase of the Kilauea eruption, air quality was a major concern for citizens across the island. Researchers from MIT’s Department of Civil and Environmental Engineering (CEE) have worked closely with citizens on Hawaii Island for several years to monitor air quality from the volcano using low-cost sensors. The researchers were even planning to launch a large-scale air quality project funded by the U.S. Environmental Protection Agency (EPA), but the emergency conditions created by Kilauea starting in the spring of last year, and the urgent demands for air pollution data from community groups and state government officials, prompted the MIT researchers to jump into action months before schedule.

“We realized that because we’d been building these instruments for measuring gases and particles relatively quickly and inexpensively, we had the tools to help people in Hawaii understand the quality of the air they were breathing,” says Jesse Kroll, associate professor of civil and environmental engineering and chemical engineering, who leads the air quality research projects across the island with Colette Heald, a CEE professor. “In a period of just about two weeks, we organized this effort in which we built a number of sensor boxes and came over here to Hawaii to try to put them up all over the island.”

Since the researchers had a few sensors on hand, and because time was of the essence, they immediately sent the instruments they had to the Hawaii Department of Health (DoH) before getting to work building the new ones. These sensors were the first to be deployed in the affected zone, as the DoH awaited other air quality monitors from government agencies. The emergency-response initiative was supported entirely by CEE, which provided funds for Kroll and Heald, along with postdoc Ben Crawford and graduate student David Hagan, to purchase supplies to build the air quality sensors and travel to Hawaii to deploy the sensors around the island in May. 

“We had been working with MIT for almost two years on developing a project and it was, on our part, to help MIT place monitors and sensors so that they could construct and test a group of sensors that would provide air quality information both back to the university and be set up as a way to inform the public in general,” says Betsy Cole, the director of strategic projects at The Kohala Center, a nonprofit organization that helped put the MIT researchers in contact with citizens, schools, and organizations across the island. Cole notes that an increase in the number of requests for information prompted her to contact Kroll to see if there was anything MIT could do to accelerate the process of providing sensors and measurements for citizens to understand the impact of the eruption — and its lasting impact — on their air.

The MIT sensors can detect and measure sulfur dioxide, which is an irritant and can be toxic in large quantities, as well as particulate matter, including sulfuric acid. “With this eruption, there was some concern about ash coming from the volcano as well. So we can measure that with particulate sensors, too,” Crawford says. The sensors provide real-time air quality data, and the information is published on a website created by the researchers. Currently, the website reflects data from 16 sensors across the Island, and more sensors will be added as the project progresses.  

There are many benefits to deploying the MIT sensors in place of larger, more expensive instruments typically used by government agencies. Hagan, the developer of the website and one of the original creators of these sensors, explains, “[our sensors] have a much smaller footprint, so you can put them in more places; they are solar powered, so you can really put them in remote areas, and they communicate wirelessly over a 3G network, so we get all this data remotely in real-time at very high spatial and temporal resolution.”

The design of the sensors makes it feasible for the researchers to install them in many areas across the island, but this required buy-in from local citizens. “When deploying a sensor network like this, where you want to get measurements made throughout a region, it’s really important to interact directly with members of the communities,” Kroll explains. In turn, The Kohala Center established connections with schools and health centers in preparation for the EPA-funded research project, and the researchers were able to leverage these connections early as part of their emergency response project. The locations were strategically selected for their positions as community congregation spaces, and for the educational opportunities afforded by the sensor’s data, as education and outreach are a central facet of the long-term research project.

Crawford explains that, as part of the EPA project, “we’re working with the teachers so they can use the sensors in different ways in their STEM curriculum, to engage with the students about data analysis, environmental science, [and] some programming skills.” He moved to Hawaii in September both to maintain the network and to provide professional development opportunities for teachers.

As Crawford and Hagan installed sensors at different locations shortly after the major eruption in May, teachers and administrators told the researchers about the impact of the eruption on their students, often reporting an increase in absences and, in a few cases, the loss of students’ homes. Steve Hirakami, the principal and founder of the Hawaii Academy of Arts and Science (HAAS), estimated that almost 40 percent of the school's staff and students had been impacted by the evacuation. “This has a major impact on [our] school,” he said in May when Kilauea was still active. Hirakami used the MIT sensors to determine school closures and expressed gratitude to the researchers for providing him with the resource.

In the immediate wake of the fissures, Wendy Baker, a history teacher at HAAS, worked with Crawford to install a sensor that the researchers had sent through the mail on the school’s property, even before the researchers arrived on the island. She, too, highlighted the value of the sensors for the peace of mind for the community during the eruption, and also as a teaching tool. “The day that we came back, I pulled it up [on the projector], and we’ve been looking at it every morning, looking at the data and checking the air quality,” she recalled. Baker also explained that the sensor was helpful for connecting the science behind the air quality with what students were experiencing in their everyday lives.  

Ted Brattstrom, a high school teacher at Ka‘u High School, was similarly enthusiastic about having a sensor installed at his school.

“The sensors are going to give us two benefits. The first and foremost benefit is, by having this data in one-minute intervals, we’re going to know when we actually have an SO2 event occurring,”  he said.

“That lets us keep the kids inside, and in as air conditioned an area and as filtered an area as we can, and then say when it’s safe to go outside,” he explained in May as the sensor at his school was initially installed. “As a science geek for myself and my class, we now get to see how the atmosphere is running, how not only the caldera itself — the volcano itself — is operating and putting out gases, but also how that’s coming downwind, working with the topography of the Island, and getting the [vog] here.” 

The sensors themselves are rooted in education. They were initially developed as part of the CEE subject 1.091 (Traveling Research Environmental Experiences, or TREX), an annual undergraduate fieldwork project which takes students to Hawaii Island to conduct research over Independent Activities Period. Over the years, the students discovered and worked through the glitches and issues with the sensors, leading to the development of the current iteration. It was thus natural for Kroll and Heald to engage with the EPA on a new project to use the sensors for real-time data but to also have a similar educational component with the schools and health centers.

“The ultimate goal is for each school to have one of these air quality monitors, and by doing that the students get information on the air that they’re breathing, really connecting these abstract concepts of chemistry and of measurements to something they actually know: the vog in the air they’re breathing,” Kroll says of the long-term project. “On top of that, it puts a data set in their hands. We make the data freely available so we can see all these numbers corresponding to concentrations of SO2 and particulate matter, and they can learn how to plot the data, how to analyze it, how to think about it in the larger context of environmental science.”
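
As a hedged example of the kind of analysis students could run on the published measurements (the file name and column names below are hypothetical, not the project’s real schema), a few lines of Python are enough to plot an SO2 time series:

```python
# Hypothetical example: plot SO2 concentrations from a downloaded CSV.
# "so2_readings.csv", "timestamp", and "so2_ppb" are placeholder names.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("so2_readings.csv", parse_dates=["timestamp"])
df = df.set_index("timestamp").sort_index()

df["so2_ppb"].plot(title="SO2 concentration over time")
plt.ylabel("SO2 (ppb)")
plt.tight_layout()
plt.show()
```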

In early August, as abruptly as it started, the eruption suddenly ended. Kilauea is currently the quietest it has been in decades. While the immediate threat has dissipated, and the air quality in Hawaii is better than it has been since the beginning of the eruption in 1983, the network continues to collect and publish valuable data on background pollution levels.

Since installing the sensors, the researchers have collected a unique dataset on the air quality across the island. They are currently analyzing their measurements from the eruption to better understand the atmospheric transport and transformation of vog components. The researchers are also hoping to learn how sensor data relates to — and complements — air quality measurements from other platforms such as monitoring stations and satellites.

“Even though the eruption seems to be over, the network is still running. Right now, we’re measuring very low levels of pollutants, as expected. This is good not only for the local air quality but also for the science: When researching pollution, it’s not often you get to measure what the underlying background levels are,” Kroll says of the ongoing research. “More importantly, we now have the sensor network up, so we'll be ready to measure air quality across the whole island the next time Kilauea erupts.”



from MIT News http://bit.ly/2MLiZMU

MIT robot combines vision and touch to learn the game of Jenga

In the basement of MIT’s Building 3, a robot is carefully contemplating its next move. It gently pokes at a tower of blocks, looking for the best block to extract without toppling the tower, in a solitary, slow-moving, yet surprisingly agile game of Jenga.

The robot, developed by MIT engineers, is equipped with a soft-pronged gripper, a force-sensing wrist cuff, and an external camera, all of which it uses to see and feel the tower and its individual blocks.

As the robot carefully pushes against a block, a computer takes in visual and tactile feedback from its camera and cuff, and compares these measurements to moves that the robot previously made. It also considers the outcomes of those moves — specifically, whether a block, in a certain configuration and pushed with a certain amount of force, was successfully extracted or not. In real time, the robot then “learns” whether to keep pushing or move to a new block, in order to keep the tower from falling.

Details of the Jenga-playing robot are published today in the journal Science Robotics. Alberto Rodriguez, the Walter Henry Gale Career Development Assistant Professor in the Department of Mechanical Engineering at MIT, says the robot demonstrates something that’s been tricky to attain in previous systems: the ability to quickly learn the best way to carry out a task, not just from visual cues, as it is commonly studied today, but also from tactile, physical interactions.

“Unlike in more purely cognitive tasks or games such as chess or Go, playing the game of Jenga also requires mastery of physical skills such as probing, pushing, pulling, placing, and aligning pieces. It requires interactive perception and manipulation, where you have to go and touch the tower to learn how and when to move blocks,” Rodriguez says. “This is very difficult to simulate, so the robot has to learn in the real world, by interacting with the real Jenga tower. The key challenge is to learn from a relatively small number of experiments by exploiting common sense about objects and physics.”

He says the tactile learning system the researchers have developed can be used in applications beyond Jenga, especially in tasks that need careful physical interaction, including separating recyclable objects from landfill trash and assembling consumer products.

“In a cellphone assembly line, in almost every single step, the feeling of a snap-fit, or a threaded screw, is coming from force and touch rather than vision,” Rodriguez says. “Learning models for those actions is prime real-estate for this kind of technology.”

The paper’s lead author is MIT graduate student Nima Fazeli. The team also includes Miquel Oller, Jiajun Wu, Zheng Wu, and Joshua Tenenbaum, professor of brain and cognitive sciences at MIT.

Push and pull

In the game of Jenga — Swahili for “build” — 54 rectangular blocks are stacked in 18 layers of three blocks each, with the blocks in each layer oriented perpendicular to the blocks below. The aim of the game is to carefully extract a block and place it at the top of the tower, thus building a new level, without toppling the entire structure.

To program a robot to play Jenga, traditional machine-learning schemes might require capturing everything that could possibly happen between a block, the robot, and the tower — an expensive computational task requiring data from thousands if not tens of thousands of block-extraction attempts.

Instead, Rodriguez and his colleagues looked for a more data-efficient way for a robot to learn to play Jenga, inspired by human cognition and the way we ourselves might approach the game.

The team customized an industry-standard ABB IRB 120 robotic arm, then set up a Jenga tower within the robot’s reach, and began a training period in which the robot first chose a random block and a location on the block against which to push. It then exerted a small amount of force in an attempt to push the block out of the tower.

For each block attempt, a computer recorded the associated visual and force measurements, and labeled whether each attempt was a success.

Rather than carry out tens of thousands of such attempts (which would involve reconstructing the tower almost as many times), the robot trained on just about 300, with attempts of similar measurements and outcomes grouped in clusters representing certain block behaviors. For instance, one cluster of data might represent attempts on a block that was hard to move, versus one that was easier to move, or that toppled the tower when moved. For each data cluster, the robot developed a simple model to predict a block’s behavior given its current visual and tactile measurements.

Fazeli says this clustering technique dramatically increases the efficiency with which the robot can learn to play the game, and is inspired by the natural way in which humans cluster similar behavior: “The robot builds clusters and then learns models for each of these clusters, instead of learning a model that captures absolutely everything that could happen.”
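
As a loose sketch of that clustering idea (not the paper’s implementation, and with entirely synthetic measurements), one could group push attempts by their visual and force features and fit a simple per-cluster predictor of extraction success:

```python
# Rough sketch: cluster push attempts, then learn one simple model per
# cluster to predict whether an extraction succeeds. All data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = rng.normal(size=(300, 6))                          # toy visual + force features
success = (features[:, 0] + features[:, 3] > 0).astype(int)   # toy outcome labels

kmeans = KMeans(n_clusters=4, n_init=10, random_state=1).fit(features)
models = {}
for c in range(4):
    mask = kmeans.labels_ == c
    if len(np.unique(success[mask])) > 1:    # need both outcomes to fit a model
        models[c] = LogisticRegression().fit(features[mask], success[mask])

def predict_success(x):
    """Route a new attempt to its cluster's model and return P(success)."""
    c = int(kmeans.predict(x.reshape(1, -1))[0])
    model = models.get(c)
    return None if model is None else float(model.predict_proba(x.reshape(1, -1))[0, 1])

print(predict_success(rng.normal(size=6)))
```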

Stacking up

The researchers tested their approach against other state-of-the-art machine learning algorithms, in a computer simulation of the game using the simulator MuJoCo. The lessons learned in the simulator informed the researchers of the way the robot would learn in the real world.

“We provide to these algorithms the same information our system gets, to see how they learn to play Jenga at a similar level,” Oller says. “Compared with our approach, these algorithms need to explore orders of magnitude more towers to learn the game.”

Curious as to how their machine-learning approach stacks up against actual human players, the team carried out a few informal trials with several volunteers.

“We saw how many blocks a human was able to extract before the tower fell, and the difference was not that much,” Oller says.

But there is still a way to go if the researchers want to competitively pit their robot against a human player. In addition to physical interactions, Jenga requires strategy, such as extracting just the right block that will make it difficult for an opponent to pull out the next block without toppling the tower.

For now, the team is less interested in developing a robotic Jenga champion, and more focused on applying the robot’s new skills to other application domains.

“There are many tasks that we do with our hands where the feeling of doing it ‘the right way’ comes in the language of forces and tactile cues,” Rodriguez says. “For tasks like these, a similar approach to ours could figure it out.”

This research was supported, in part, by the National Science Foundation through the National Robotics Initiative.



from MIT News http://bit.ly/2RV6MLj

Ingestible, expanding pill monitors the stomach for up to a month

MIT engineers have designed an ingestible, Jell-O-like pill that, upon reaching the stomach, quickly swells to the size of a soft, squishy ping-pong ball big enough to stay in the stomach for an extended period of time.

The inflatable pill is embedded with a sensor that continuously tracks the stomach’s temperature for up to 30 days. If the pill needs to be removed from the stomach, a patient can drink a solution of calcium that triggers the pill to quickly shrink to its original size and pass safely out of the body.

The new pill is made from two types of hydrogels — mixtures of polymers and water that resemble the consistency of Jell-O. The combination enables the pill to quickly swell in the stomach while remaining impervious to the stomach’s churning acidic environment.

The hydrogel-based design is softer, more biocompatible, and longer-lasting than current ingestible sensors, which either can only remain in the stomach for a few days, or are made from hard plastics or metals that are orders of magnitude stiffer than the gastrointestinal tract.

“The dream is to have a Jell-O-like smart pill, that once swallowed stays in the stomach and monitors the patient’s health for a long time such as a month,” says Xuanhe Zhao, associate professor of mechanical engineering at MIT.

Zhao and senior collaborator Giovanni Traverso, a visiting scientist who will join the MIT faculty in 2019, along with lead authors Xinyue Liu, Christoph Steiger, and Shaoting Lin, have published their results today in Nature Communications.

Pills, ping-pongs, and pufferfish

The design for the new inflatable pill is inspired by the defense mechanisms of the pufferfish, or blowfish. Normally a slow-moving species, the pufferfish will quickly inflate when threatened, like a spiky balloon. It does so by sucking in a large amount of water, fast.

The puffer’s tough, fast-inflating body was exactly what Zhao was looking to replicate in hydrogel form. The team had been looking for ways to design a hydrogel-based pill to carry sensors into the stomach and stay there to monitor, for example, vital signs or disease states for a relatively long period of time.

They realized that if a pill were small enough to be swallowed and passed down the esophagus, it would also be small enough to pass out of the stomach, through an opening known as the pylorus. To keep it from exiting the stomach, the group would have to design the pill to quickly swell to the size of a ping-pong ball.

“Currently, when people try to design these highly swellable gels, they usually use diffusion, letting water gradually diffuse into the hydrogel network,” Liu says. “But to swell to the size of a ping-pong ball takes hours, or even days. It’s longer than the emptying time of the stomach.”

The researchers instead looked for ways to design a hydrogel pill that could inflate much more quickly, at a rate comparable to that of a startled pufferfish.

A new hydrogel device swells to more than twice its size in just a few minutes in water.

An ingestible tracker

The design they ultimately landed on resembles a small, Jell-O-like capsule, made from two hydrogel materials. The inner material contains sodium polyacrylate — superabsorbent particles that are used in commercial products such as diapers for their ability to rapidly soak up liquid and inflate.

The researchers realized, however, that if the pill were made only from these particles, it would immediately break apart and pass out of the stomach as individual beads. So they designed a second, protective hydrogel layer to encapsulate the fast-swelling particles. This outer membrane is made from a multitude of nanoscopic, crystalline chains, each folded over another, in a nearly impenetrable, gridlock pattern — an “anti-fatigue” feature that the researchers reported in an earlier paper.

“You would have to crack through many crystalline domains to break this membrane,” Lin says. “That’s what makes this hydrogel extremely robust, and at the same time, soft.”

In the lab, the researchers dunked the pill in various solutions of water and fluid resembling gastric juices, and found the pill inflated to 100 times its original size in about 15 minutes — much faster than existing swellable hydrogels. Zhao says that, once inflated, the pill is about as soft as tofu or Jell-O, yet surprisingly strong.

To test the pill’s toughness, the researchers mechanically squeezed it thousands of times, at forces even greater than what the pill would experience from regular contractions in the stomach.

“The stomach applies thousands to millions of cycles of load to grind food down,” Lin explains. “And we found that even when we make a small cut in the membrane, and then stretch and squeeze it thousands of times, the cut does not grow larger. Our design is very robust.”

The researchers further determined that a solution of calcium ions, at a concentration higher than what’s in milk, can shrink the swollen particles. This triggers the pill to deflate and pass out of the stomach.

Finally, Steiger and Traverso embedded small, commercial temperature sensors into several pills, and fed the pills to pigs, whose stomachs and gastrointestinal tracts are very similar to those of humans. The team later retrieved the temperature sensors from the pigs’ stool and plotted the sensors’ temperature measurements over time. They found that the sensor was able to accurately track the animals’ daily activity patterns for up to 30 days.

“Ingestible electronics is an emerging area to monitor important physiological conditions and biomarkers,” says Hanqing Jiang, a professor of mechanical and aerospace engineering at Arizona State University, who was not involved in the work. “Conventional ingestible electronics are made of non-bio-friendly materials. Professor Zhao’s group is making a big leap on the development of biocompatible and soft but tough gel-based ingestible devices, which significantly extends the horizon of ingestible electronics. It also represents a new application of tough hydrogels that the group has been devoted to for years.”

Down the road, the researchers envision the pill may safely deliver a number of different sensors to the stomach to monitor, for instance, pH levels, or signs of certain bacteria or viruses. Tiny cameras may also be embedded into the pills to image the progress of tumors or ulcers, over the course of several weeks. Zhao says the pill might also be used as a safer, more comfortable alternative to the gastric balloon diet, a form of diet control in which a balloon is threaded through a patient’s esophagus and into the stomach, using an endoscope.

“With our design, you wouldn’t need to go through a painful process to implant a rigid balloon,” Zhao says. “Maybe you can take a few of these pills instead, to help fill out your stomach, and lose weight. We see many possibilities for this hydrogel device.”

This research was supported, in part, by the National Science Foundation, National Institutes of Health, and the Bill and Melinda Gates Foundation.



from MIT News http://bit.ly/2RWfQzv

Tuesday, January 29, 2019

Optimizing solar farms with smart drones

As the solar industry has grown, so have some of its inefficiencies. Smart entrepreneurs see those inefficiencies as business opportunities and try to create solutions around them. Such is the nature of a maturing industry.

One of the biggest complications emerging from the industry’s breakneck growth is the maintenance of solar farms. Historically, technicians have run electrical tests on random sections of solar cells in order to identify problems. In recent years, the use of drones equipped with thermal cameras has improved the speed of data collection, but now technicians are being asked to interpret a never-ending flow of unstructured data.

That’s where Raptor Maps comes in. The company’s software analyzes imagery from drones and diagnoses problems down to the level of individual cells. The system can also estimate the costs associated with each problem it finds, allowing technicians to prioritize their work and owners to decide what’s worth fixing.

“We can enable technicians to cover 10 times the territory and pinpoint the most optimal use of their skill set on any given day,” Raptor Maps co-founder and CEO Nikhil Vadhavkar says. “We came in and said, ‘If solar is going to become the number one source of energy in the world, this process needs to be standardized and scalable.’ That’s what it takes, and our customers appreciate that approach.”

Raptor Maps processed the data of 1 percent of the world’s solar energy in 2018, amounting to the energy generated by millions of panels around the world. And as the industry continues its upward trajectory, with solar farms expanding in size and complexity, the company’s business proposition only becomes more attractive to the people driving that growth.

Picking a path

Raptor Maps was founded by Eddie Obropta ’13 SM ’15, Forrest Meyen SM ’13 PhD ’17, and Vadhavkar, who was a PhD candidate at MIT between 2011 and 2016. The former classmates had worked together in the Human Systems Laboratory of the Department of Aeronautics and Astronautics when Vadhavkar came to them with the idea of starting a drone company in 2015.

The founders were initially focused on the agriculture industry. The plan was to build drones equipped with high-definition thermal cameras to gather data, then create a machine-learning system to gain insights on crops as they grew. While the founders began the arduous process of collecting training data, they received guidance from MIT’s Venture Mentoring Service and the Martin Trust Center. In the spring of 2015, Raptor Maps won the MIT $100K Launch competition.

But even as the company began working with the owners of two large farms, Obropta and Vadhavkar were unsure of their path to scaling the company. (Meyen left the company in 2016.) Then, in 2017, they made their software publicly available and something interesting happened.

They found that most of the people who used the system were applying it to thermal images of solar farms instead of real farms. It was a message the founders took to heart.

“Solar is similar to farming: It’s out in the open, 2-D, and it’s spread over a large area,” Obropta says. “And when you see [an anomaly] in thermal images on solar, it usually means an electrical issue or a mechanical issue — you don’t have to guess as much as in agriculture. So we decided the best use case was solar. And with a big push for clean energy and renewables, that aligned really well with what we wanted to do as a team.”

Obropta and Vadhavkar also found themselves on the right side of several long-term trends as a result of the pivot. The International Energy Agency has proposed that solar power could be the world’s largest source of electricity by 2050. But as demand grows, investors, owners, and operators of solar farms are dealing with an increasingly acute shortage of technicians to keep the panels running near peak efficiency.

Since deciding to focus on solar exclusively around the beginning of 2018, Raptor Maps has found success in the industry by releasing its standards for data collection and letting customers — or the many drone operators the company partners with — use off-the-shelf hardware to gather the data themselves. After the data is submitted to the company, the system creates a detailed map of each solar farm and pinpoints any problems it finds.

“We run analytics so we can tell you, ‘This is how many solar panels have this type of issue; this is how much the power is being affected,’” Vadhavkar says. “And we can put an estimate on how many dollars each issue costs.”
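As a rough, hypothetical illustration of that kind of analytics, the sketch below counts detections by issue type and converts them into estimated power and dollar impacts. The issue types, per-panel loss fractions, panel rating, and energy price are assumed values, not Raptor Maps figures.

```python
# Hypothetical sketch of issue-level analytics for a solar-farm inspection.
# Issue types, loss fractions, panel rating, and energy price are assumptions.
from collections import Counter

PANEL_RATING_W = 330          # assumed nameplate rating per panel, watts
PRICE_PER_KWH = 0.06          # assumed energy price, USD per kWh
SUN_HOURS_PER_YEAR = 1600     # assumed full-sun-equivalent hours per year

# Fraction of a panel's output lost for each (assumed) issue type.
LOSS_FRACTION = {"hot_spot": 0.15, "diode_failure": 0.33, "string_outage": 1.0}

# Detections as they might come out of an image-analysis step: (panel_id, issue).
detections = [("A-014", "hot_spot"), ("A-022", "diode_failure"),
              ("B-101", "string_outage"), ("B-102", "string_outage")]

counts = Counter(issue for _, issue in detections)
for issue, n in counts.most_common():
    lost_kw = n * PANEL_RATING_W * LOSS_FRACTION[issue] / 1000.0
    dollars_per_year = lost_kw * SUN_HOURS_PER_YEAR * PRICE_PER_KWH
    print(f"{issue:15s} panels={n:3d} power_lost={lost_kw:5.2f} kW "
          f"cost=${dollars_per_year:,.0f}/year")
```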

The model allows Raptor Maps to stay lean while its software does the heavy lifting. In fact, the company’s current operations involve more servers than people.

The tiny operation belies a company that’s carved out a formidable space for itself in the solar industry. Last year, Raptor Maps processed four gigawatts’ worth of data from customers on six different continents. That’s enough energy to power nearly 3 million homes.

Vadhavkar says the company’s goal is to grow at least fivefold in 2019 as several large customers move to make the software a core part of their operations. The team is also working on getting its software to generate insights in real time using graphical processing units on the drone itself as part of a project with the multinational energy company Enel Green Power.

Ultimately, the data Raptor Maps collects are taking the uncertainty out of the solar industry, making it a more attractive space for investors, operators, and everyone in between.

“The growth of the industry is what drives us,” Vadhavkar says. “We’re directly seeing that what we’re doing is impacting the ability of the industry to grow faster. That’s huge. Growing the industry — but also, from the entrepreneurial side, building a profitable business while doing it — that’s always been a huge dream.”



from MIT News http://bit.ly/2TiaAmh

Engineers program marine robots to take calculated risks

We know far less about the Earth’s oceans than we do about the surface of the moon or Mars. The sea floor is carved with expansive canyons, towering seamounts, deep trenches, and sheer cliffs, most of which are considered too dangerous or inaccessible for autonomous underwater vehicles (AUVs) to navigate.

But what if the reward for traversing such places was worth the risk?

MIT engineers have now developed an algorithm that lets AUVs weigh the risks and potential rewards of exploring an unknown region. For instance, if a vehicle tasked with identifying underwater oil seeps approached a steep, rocky trench, the algorithm could assess the reward level (the probability that an oil seep exists near this trench), and the risk level (the probability of colliding with an obstacle), if it were to take a path through the trench.  

“If we were very conservative with our expensive vehicle, saying its survivability was paramount above all, then we wouldn’t find anything of interest,” Ayton says. “But if we understand there’s a tradeoff between the reward of what you gather, and the risk or threat of going toward these dangerous geographies, we can take certain risks when it’s worthwhile.”

Ayton says the new algorithm can compute tradeoffs of risk versus reward in real time, as a vehicle decides where to explore next. He and his colleagues in the lab of Brian Williams, professor of aeronautics and astronautics, are implementing this algorithm and others on AUVs, with the vision of deploying fleets of bold, intelligent robotic explorers for a number of missions, including looking for offshore oil deposits, investigating the impact of climate change on coral reefs, and exploring extreme environments analogous to Europa, an ice-covered moon of Jupiter that the team hopes vehicles will one day traverse.

“If we went to Europa and had a very strong reason to believe that there might be a billion-dollar observation in a cave or crevasse, which would justify sending a spacecraft to Europa, then we would absolutely want to risk going in that cave,” Ayton says. “But algorithms that don’t consider risk are never going to find that potentially history-changing observation.”

Ayton and Williams, along with Richard Camilli of the Woods Hole Oceanographic Institution, will present their new algorithm at the Association for the Advancement of Artificial Intelligence conference this week in Honolulu.

A bold path

The team’s new algorithm is the first to enable “risk-bounded adaptive sampling.” An adaptive sampling mission is designed, for instance, to automatically adapt an AUV’s path, based on new measurements that the vehicle takes as it explores a given region. Most adaptive sampling missions that consider risk typically do so by finding paths with a concrete, acceptable level of risk. For instance, AUVs may be programmed to only chart paths with a chance of collision that doesn’t exceed 5 percent.

But the researchers found that accounting for risk alone could severely limit a mission’s potential rewards. 

“Before we go into a mission, we want to specify the risk we’re willing to take for a certain level of reward,” Ayton says. “For instance, if a path were to take us to more hydrothermal vents, we would be willing to take this amount of risk, but if we’re not going to see anything, we would be willing to take less risk.”

The team’s algorithm takes in bathymetric data, or information about the ocean topography, including any surrounding obstacles, along with the vehicle’s dynamics and inertial measurements, to compute the level of risk for a certain proposed path. The algorithm also takes in all previous measurements that the AUV has taken, to compute the probability that such high-reward measurements may exist along the proposed path.

If the risk-to-reward ratio meets a certain value, determined by scientists beforehand, then the AUV goes ahead with the proposed path, taking more measurements that feed back into the algorithm to help it evaluate the risk and reward of other paths as the vehicle moves forward.
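In miniature, that decision rule might look like the sketch below: each candidate path carries an estimated probability of a high-reward measurement and an estimated collision risk, and the planner picks the most rewarding path whose risk fits a scientist-specified budget. The numbers and the simple selection rule are illustrative assumptions, not the team's algorithm.

```python
# Toy sketch of risk-bounded path selection; probabilities are made up.
from dataclasses import dataclass

@dataclass
class CandidatePath:
    name: str
    p_reward: float      # probability the path yields a high-value measurement
    p_collision: float   # probability of hitting an obstacle along the path

def choose_path(paths, risk_budget):
    """Pick the highest-reward path whose collision risk fits the budget."""
    feasible = [p for p in paths if p.p_collision <= risk_budget]
    if not feasible:
        return None      # stay put: no candidate is safe enough
    return max(feasible, key=lambda p: p.p_reward)

paths = [
    CandidatePath("open-water detour", p_reward=0.05, p_collision=0.01),
    CandidatePath("narrow chasm",      p_reward=0.60, p_collision=0.08),
    CandidatePath("rocky trench",      p_reward=0.75, p_collision=0.25),
]

for budget in (0.02, 0.10, 0.30):    # three acceptable-risk scenarios
    best = choose_path(paths, budget)
    print(f"risk budget {budget:.2f} -> {best.name if best else 'hold position'}")
```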

The researchers tested their algorithm in a simulation of an AUV mission east of Boston Harbor. They used bathymetric data collected from the region during a previous NOAA survey, and simulated an AUV exploring at a depth of 15 meters through regions at relatively high temperatures. They looked at how the algorithm planned out the vehicle’s route under three different scenarios of acceptable risk.

In the scenario with the lowest acceptable risk, meaning that the vehicle should avoid any regions that would have a very high chance of collision, the algorithm mapped out a conservative path, keeping the vehicle in a safe region that also did not have any high rewards — in this case, high temperatures. For scenarios of higher acceptable risk, the algorithm charted bolder paths that took a vehicle through a narrow chasm, and ultimately to a high-reward region.

The team also ran the algorithm through 10,000 numerical simulations, generating random environments in each simulation through which to plan a path, and found that the algorithm “trades off risk against reward intuitively, taking dangerous actions only when justified by the reward.”

A risky slope

Last December, Ayton, Williams, and others spent two weeks on a cruise off the coast of Costa Rica, deploying underwater gliders, on which they tested several algorithms, including this newest one. For the most part, the paths planned by the algorithm agreed with those proposed by several onboard geologists who were looking for the best routes to find oil seeps.

Ayton says there was a particular moment when the risk-bounded algorithm proved especially handy. An AUV was making its way up a precarious slump, or landslide, where the vehicle couldn’t take too many risks.

“The algorithm found a method to get us up the slump quickly, while being the most worthwhile,” Ayton says. “It took us up a path that, while it didn’t help us discover oil seeps, it did help us refine our understanding of the environment.”

“What was really interesting was to watch how the machine algorithms began to ‘learn’ after the findings of several dives, and began to choose sites that we geologists might not have chosen initially,” says Lori Summa, a geologist and guest investigator at the Woods Hole Oceanographic Institution, who took part in the cruise.  “This part of the process is still evolving, but it was exciting to watch the algorithms begin to identify the new patterns from large amounts of data, and couple that information to an efficient, ‘safe’ search strategy.” 

In their long-term vision, the researchers hope to use such algorithms to help autonomous vehicles explore environments beyond Earth.

“If we went to Europa and weren’t willing to take any risks in order to preserve a probe, then the probability of finding life would be very, very low,” Ayton says. “You have to risk a little to get more reward, which is generally true in life as well.”

This research was supported, in part, by Exxon Mobil, as part of the MIT Energy Initiative, and by NASA.



from MIT News http://bit.ly/2BbfNFI

Hayden Library to undergo renovation in 2020

The MIT Libraries has announced plans to partially renovate Hayden Library next year, with a goal of beginning construction in January 2020 and reopening a renovated Hayden in the fall term of 2020. MIT Campus Planning has engaged Kennedy Violich Architects (KVA) to work with the Institute to plan for the next version of Hayden, and the project is now entering the design phase. The library will close at the end of the 2019 fall term to prepare for construction and is planning to reopen in fall 2020.

The project developed from recommendations of the MIT Task Force on the Future of Libraries, released in 2016, which stressed the importance of “welcoming and inclusive spaces for discovery and scholarship,” as well as the need to address building renewal needs and code updates.

“The Task Force challenged us to create spaces that reflect the library of the future: participatory, creative, dynamic, with research at the center,” says Chris Bourg, director of the MIT Libraries. “We envision the new Hayden Library as a destination on campus, a place that is open, welcoming, and that invites community members to make connections between ideas, collections, and each other.”

This renovation will include the first and second floors of Hayden Library and their mezzanine levels. Design goals include creating both vibrant, interactive spaces and quiet zones, with specific improvements including:

  • significant expansion of 24/7 study space;
  • a café;
  • greater variety of study spaces — for both individual and group work, with both quiet and conversation zones and varied seating styles; and
  • flexible teaching and event space.

“We’ve gathered input from the MIT community over the last several years about what they want from library spaces,” says Tracy Gabridge, deputy director of MIT Libraries. “This project aims to meet those diverse needs — from a place to grab coffee and run into friends to a spot to work together with others, all while having space for quiet study and reflection — as well as pursue the bold vision from the task force.”

The MIT Libraries staff and the Office of Campus Planning have recently completed pre-design activities with KVA in anticipation of project approval. This work has drawn on library user survey data, input from the Committee on the Library System, and the work of the MIT Library Space Planning Group convened in 2017.

The Libraries and KVA are planning additional opportunities to seek input from across the MIT community to inform the design and construction phases, and a project launch event is planned for February. Please visit the project webpage for more information and to sign up for email updates.



from MIT News http://bit.ly/2Rpi4lO

MIT’s REXIS and Bennu’s watery surface

After flying in space for more than two years, NASA’s spacecraft OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer) recently entered into orbit around its target, the asteroid Bennu. Asteroids like Bennu are considered to be leftover debris from the formation of our solar system. So, in the first mission of its kind flown by NASA, OSIRIS-REx is looking to retrieve a sample and bring it to Earth.

Among the several instruments onboard the spacecraft is an MIT student-built one called the REgolith X-Ray Imaging Spectrometer (REXIS), which will provide data to help select the sampling site, as well as support other mission objectives, including characterizing the asteroid and its behaviors, and comparing those to ground-based observations. REXIS is a joint project between the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS), the MIT Department of Aeronautics and Astronautics (AeroAstro), the Harvard College Observatory, the MIT Kavli Institute for Astrophysics and Space Research, and MIT Lincoln Laboratory.

Shortly after arriving at Bennu, OSIRIS-REx researchers announced that they had identified water on the asteroid, possibly impacting selection of the sampling site. EAPS spoke with Richard Binzel — an expert on asteroids at MIT and co-investigator on this mission, leading the development of REXIS — about the instrument’s role and what this finding means for the future use of similar devices. Binzel is also professor of planetary sciences in EAPS with a joint appointment in AeroAstro, and a Margaret MacVicar Faculty Fellow.

Q: What is the purpose of REXIS, as part of the OSIRIS-REx mission?

A: The goal of the OSIRIS-REx mission is to obtain a pristine sample from the surface of the asteroid, Bennu, that has some of the most original, surviving chemistry from the very beginning of our solar system. The asteroid is like a time capsule, which is going to tell us what the condition of our solar system was like when it formed 4.56 billion years ago.

The goal of REXIS is to map the composition of Bennu in support of the mission, choosing the location for that sample. The objective is to go to the asteroid and spend up to a year studying it in detail to determine what location can give us the highest scientific return. It is a matter of progressive evaluation and characterization of the asteroid: We will undergo orbits that gradually go lower to the point where we see the surface in extremely good detail — like the characteristics of craters and boulders. In this way, we know where we're going to touch the surface, grab a sample, and bring it safely onboard the spacecraft.

To do this, onboard OSIRIS-REx, there's a suite of instruments: visible cameras and spectrometers mostly in the visible and near infrared wavelengths that are mapping the asteroid’s surface, in addition to MIT’s REXIS, the REgolith X-ray Imaging Spectrometer. REXIS complements all the other instruments and contributes to the rest of the data by seeing in X-ray light. No other instrument on OSIRIS-REx will see the surface in X-ray light. So, this is quite unique in planetary exploration, and the fact that it was built by students is even more amazing.

One of our objectives is to corroborate the mineral mapping that's done by the other instruments. The visible and near infrared spectrometers are sensitive to the mineral composition of the surface, and REXIS measures the individual atomic elements that are present. One of the things that we want to accomplish is to see whether the atomic elements that we measure are consistent with the minerals that the other instruments are measuring and vice versa.

Q: How does REXIS work?

A: REXIS works by taking advantage of the sun’s X-ray emissions. Some of those X-rays hit the asteroid and interact with the atoms on the surface: They get absorbed and change the electron energy level in the atoms. When the atoms return to their ground state, they emit an X-ray photon, which means the X-rays from the sun caused the asteroid to glow or fluoresce.

REXIS measures the energy and the locations of the X-rays that are fluorescing away from the asteroid surface, and the energies tell us which atoms are present. The energy of an X-ray photon that gets emitted by an atom corresponds exactly to the energy between two electron orbitals. Every atom has its own unique signature of energy states, so we can deduce the elemental composition of the surface of the asteroid.

We're going to be looking for things like iron, silicon, oxygen, and sulfur — some very basic building blocks of planetary bodies. We'll be able to measure those abundances and determine the composition of this asteroid.
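The element identification step can be sketched simply: compare each measured fluorescence line against tabulated characteristic energies. The Kα values below are approximate reference numbers, and the tolerance and "measured" lines are invented for illustration; this is not the actual REXIS data pipeline.

```python
# Sketch: identify elements from X-ray fluorescence line energies.
# K-alpha energies (keV) are approximate reference values; the tolerance and
# the sample "measured" energies are illustrative, not REXIS data.
K_ALPHA_KEV = {"O": 0.525, "Si": 1.740, "S": 2.307, "Fe": 6.404}

def identify(line_energy_kev, tolerance=0.05):
    """Return the element whose K-alpha line is closest, within a tolerance."""
    element, ref = min(K_ALPHA_KEV.items(),
                       key=lambda item: abs(item[1] - line_energy_kev))
    return element if abs(ref - line_energy_kev) <= tolerance else None

measured_lines = [0.53, 1.74, 2.31, 6.41, 4.00]   # keV, made-up detections
for e in measured_lines:
    print(f"{e:5.2f} keV -> {identify(e) or 'unidentified'}")
```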

Now, we are performing all sorts of calibration measurements, and we're learning about the characteristics of the instrument in space: ways that it's working as expected and differences. It's part of the instrument design to monitor the sun's output and calibrate the asteroid observations, taking into account any variation from the sun. REXIS has two parts to it: One part is the main spectrometer that is measuring the X-rays emitted from the asteroid surface; the second is a small solar X-ray monitor or SXM, and it is constantly looking at the output of the sun, which varies over timescales of minutes, hours, and days. This way, if we are looking at one location on the asteroid and we see this enormous X-ray fluorescence, we'll know whether it's the asteroid that's special in that location, or whether it was just a solar flare, which happened to be occurring at the same time. We're also looking at the cosmic X-ray background or CXB and calibrating our instrument's sensitivity by looking at a steady, strong X-ray source in the sky called the Crab Nebula.
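The role of the solar X-ray monitor can likewise be sketched as a simple normalization: divide the asteroid-facing count rate by the simultaneous solar flux, so that a flare, which raises both, is not mistaken for a compositionally special spot on the surface. All numbers below are invented for illustration.

```python
# Toy sketch of SXM-style normalization; all numbers are made up.
observations = [
    # (surface location, asteroid counts per second, solar flux, arbitrary units)
    ("site 1", 120.0, 1.0),
    ("site 2", 360.0, 3.1),   # counts jump, but the sun flared at the same time
    ("site 3", 260.0, 1.0),   # similar solar flux, higher counts: interesting
]

baseline = observations[0][1] / observations[0][2]
for site, counts, solar_flux in observations:
    normalized = counts / solar_flux
    flag = "candidate anomaly" if normalized > 1.5 * baseline else "consistent"
    print(f"{site}: normalized rate {normalized:6.1f} -> {flag}")
```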

We also calibrate REXIS measurements against laboratory measurements of meteorites, and we're going to be able to pinpoint which meteorite type Bennu is most like. If we see any variation across the surface, we'll be able to say which regions have the most similarity to known meteorites, and this can guide us as to where we get our sample. 

Q: NASA announced that they found evidence of water on Bennu. What does this mean for REXIS and where the sample is taken from?

A: The OSIRIS-REx mission found evidence for the presence of hydrated minerals on the surface of the asteroid Bennu. These minerals form when water molecules react with rocky material and become part of the crystal structure. Meteorite studies suggest that this process occurred very early in solar system history. This discovery tells us that Bennu’s surface has not been heated to temperatures high enough to break down these minerals and release the water. Bennu appears to contain this primordial water, providing clues to how such material was delivered to Earth, leading to a habitable world.

This is enticing news for REXIS because one of the atomic elements we are going to be searching for is oxygen, which of course is a major constituent of water, and REXIS has the potential to confirm the finding of these water molecules in the minerals of Bennu.

A lot of factors go into the decision of where to sample. First of all, we have to determine which parts of the surface are safe to go to, that we know the spacecraft can navigate, get a sample, and come back safely. Then out of all the safe regions, which ones are the most scientifically interesting — based on what we call the science value map. The objective is to have a complete understanding of the composition of the asteroid’s surface and any variability. Then, we want to find a place to sample that we think has the most original organic chemistry from the beginning of the solar system, and so places on Bennu that may have a signature of water would be very interesting to sample.

Currently, we're still pretty far from the asteroid and slowly advancing to lower orbital distances. We will reach the orbital distance for REXIS to begin its science operations this coming June. Then, REXIS will fingerprint the composition of the asteroid in terms of its atomic elements. When we get the sample back, we'll be able to check whether REXIS got it right. If we did, it means that we can send a REXIS-like instrument anywhere in the solar system and get a reliable fingerprint of the detailed composition of what these objects are made of.

If REXIS is successful, it shows that with a small instrument you can get big science. Our nickname for REXIS is "the little spectrometer that could."



from MIT News http://bit.ly/2FSF1fS

Learning to teach to speed up learning

The first artificial intelligence programs to defeat the world’s best players at chess and the game Go received at least some instruction from humans, and ultimately would prove no match for a new generation of AI programs that learn wholly on their own, through trial and error.

A combination of deep learning and reinforcement learning algorithms is responsible for computers achieving dominance at challenging board games like chess and Go, a growing number of video games, including Ms. Pac-Man, and some card games, including poker. But for all the progress, computers still get stuck the more closely a game resembles real life, with hidden information, multiple players, continuous play, and a mix of short- and long-term rewards that make computing the optimal move hopelessly complex.

To get past these hurdles, AI researchers are exploring complementary techniques to help robot agents learn, modeled after the way humans pick up new information not only on their own, but from the people around them, and from newspapers, books, and other media. A collective-learning strategy developed by the MIT-IBM Watson AI Lab offers a promising new direction. Researchers show that a pair of robot agents can cut the time it takes to learn a simple navigation task by 50 percent or more when the agents learn to leverage each other’s growing body of knowledge.

The algorithm teaches the agents when to ask for help, and how to tailor their advice to what has been learned up until that point. The algorithm is unique in that neither agent is an expert; each is free to act as both student and teacher, requesting and offering information as needed. The researchers are presenting their work this week at the AAAI Conference on Artificial Intelligence in Hawaii.

Co-authors on the paper, which received an honorable mention for best student paper at AAAI, are Jonathan How, a professor in MIT’s Department of Aeronautics and Astronautics; Shayegan Omidshafiei, a former MIT graduate student now at Google DeepMind; Dong-ki Kim of MIT; Miao Liu, Gerald Tesauro, Matthew Riemer, and Murray Campbell of IBM; and Christopher Amato of Northeastern University.

“This idea of providing actions to most improve the student's learning, rather than just telling it what to do, is potentially quite powerful,” says Matthew E. Taylor, a research director at Borealis AI, the research arm of the Royal Bank of Canada, who was not involved in the research. “While the paper focuses on relatively simple scenarios, I believe the student/teacher framework could be scaled up and useful in multi-player video games like StarCraft or Dota 2, robot soccer, or disaster-recovery scenarios.”

For now, the pros still have the edge in StarCraft, Dota 2, and other virtual games that favor teamwork and quick, strategic thinking. But as machines get better at maneuvering dynamic environments, they may soon be ready for real-world tasks like managing traffic in a big city or coordinating search-and-rescue teams on the ground and in the air.

“Machines lack the common-sense knowledge we develop as children,” says Liu, a former MIT postdoc now at the MIT-IBM lab. “That’s why they need to watch millions of video frames, and spend a lot of computation time, learning to play a game well. Even then, they lack efficient ways to transfer their knowledge to the team, or generalize their skills to a new game. If we can train robots to learn from others, and generalize their learning to other tasks, we can start to better coordinate their interactions with each other, and with humans.” 

The MIT-IBM team’s key insight was that a team that divides and conquers to learn a new task — in this case, maneuvering to opposite ends of a room and touching the wall at the same time — will learn faster. 

Their teaching algorithm alternates between two phases. In the first, at each step both student and teacher decide whether to ask for, or give, advice based on their confidence that the next move, or the advice they are about to give, will bring them closer to their goal. Thus, the student only asks for advice, and the teacher only gives it, when the added information is likely to improve their performance. With each step, the agents update their respective task policies and the process continues until they reach their goal or run out of time.

With each iteration, the algorithm records the student’s decisions, the teacher’s advice, and their learning progress as measured by the game’s final score. In the second phase, a deep reinforcement learning technique uses the previously recorded teaching data to update both advising policies. “With each update the teacher gets better at giving the right advice at the right time,” says Kim, a graduate student at MIT.
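A toy sketch of that first, advice-exchange phase is below. Confidence is stood in for by the gap between an agent's best and second-best action values, and the Q-values and thresholds are made up; in the actual work, the advising policies themselves are learned in the second phase.

```python
# Toy sketch of confidence-based advice exchange between two learning agents.
# Q-values, thresholds, and the confidence heuristic are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
ACTIONS = ["up", "down", "left", "right"]

def confidence(q_values):
    """Gap between best and second-best action values: a crude certainty proxy."""
    top_two = np.sort(q_values)[-2:]
    return float(top_two[1] - top_two[0])

def advised_step(student_q, teacher_q, ask_threshold=0.1, give_threshold=0.3):
    """Student asks only when unsure; teacher advises only when confident."""
    if confidence(student_q) < ask_threshold and confidence(teacher_q) > give_threshold:
        return ACTIONS[int(np.argmax(teacher_q))], "advised"
    return ACTIONS[int(np.argmax(student_q))], "own choice"

for _ in range(5):
    student_q = rng.normal(size=len(ACTIONS))       # student's value estimates
    teacher_q = rng.normal(size=len(ACTIONS)) * 2   # teacher assumed more decisive
    action, source = advised_step(student_q, teacher_q)
    print(f"{action:5s} ({source})")
```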

In a follow-up paper to be discussed in a workshop at AAAI, the researchers improve on the algorithm’s ability to track how well the agents are learning the underlying task — in this case, a box-pushing task — to improve the agents’ ability to give and receive advice. It’s another step that takes the team closer to its longer-term goal of entering the RoboCup, an annual robotics competition started by academic AI researchers.

“We would need to scale to 11 agents before we can play a game of soccer,” says Tesauro, an IBM researcher who developed the first AI program to master the game of backgammon. “It’s going to take some more work but we’re hopeful.”



from MIT News http://bit.ly/2RWYAtR

Monday, January 28, 2019

Want to squelch fake news? Let the readers take charge

Would you like to rid the internet of false political news stories and misinformation? Then consider using — yes — crowdsourcing.

That’s right. A new study co-authored by an MIT professor shows that crowdsourced judgments about the quality of news sources may effectively marginalize false news stories and other kinds of online misinformation.

“What we found is that, while there are real disagreements among Democrats and Republicans concerning mainstream news outlets, basically everybody — Democrats, Republicans, and professional fact-checkers — agree that the fake and hyperpartisan sites are not to be trusted,” says David Rand, an MIT scholar and co-author of a new paper detailing the study’s results.

Indeed, using a pair of public-opinion surveys to evaluate 60 news sources, the researchers found that Democrats trusted mainstream media outlets more than Republicans do — with the exception of Fox News, which Republicans trusted far more than Democrats did. But when it comes to lesser-known sites peddling false information, as well as “hyperpartisan” political websites (the researchers include Breitbart and Daily Kos in this category), both Democrats and Republicans show a similar disregard for such sources.

Trust levels for these alternative sites were low overall. For instance, in one survey, when respondents were asked to give a trust rating from 1 to 5 for news outlets, the result was that hyperpartisan websites received a trust rating of only 1.8 from both Republicans and Democrats; fake news sites received a trust rating of only 1.7 from Republicans and 1.9 from Democrats. 

By contrast, mainstream media outlets received a trust rating of 2.9 from Democrats but only 2.3 from Republicans; Fox News, however, received a trust rating of 3.2 from Republicans, compared to 2.4 from Democrats.

The study adds a twist to a high-profile issue. False news stories have proliferated online in recent years, and social media sites such as Facebook have received sharp criticism for giving them visibility. Facebook also faced pushback for a January 2018 plan to let readers rate the quality of online news sources. But the current study suggests such a crowdsourcing approach could work well, if implemented correctly.

“If the goal is to remove really bad content, this actually seems quite promising,” Rand says. 

The paper, “Fighting misinformation on social media using crowdsourced judgments of news source quality,” is being published in Proceedings of the National Academy of Sciences this week. The authors are Gordon Pennycook of the University of Regina, and Rand, an associate professor in the MIT Sloan School of Management.

To promote, or to squelch?

To perform the study, the researchers conducted two online surveys that had roughly 1,000 participants each, one on Amazon’s Mechanical Turk platform, and one via the survey tool Lucid. In each case, respondents were asked to rate their trust in 60 news outlets, about a third of which were high-profile, mainstream sources.

The second survey’s participants had demographic characteristics resembling those of the country as a whole — including partisan affiliation. (The researchers weighted Republicans and Democrats equally in the survey to avoid any perception of bias.) That survey also measured the general audience’s evaluations against a set of judgments by professional fact-checkers, to see whether the larger audience’s judgments were similar to the opinions of experienced researchers.

But while Democrats and Republicans regarded prominent news outlets differently, that party-based mismatch largely vanished when it came to the other kinds of news sites, where, as Rand says, “By and large we did not find that people were really blinded by their partisanship.”

In this vein, Republicans trusted MSNBC more than Breitbart, even though many of them regarded MSNBC as a left-leaning news channel. Meanwhile, Democrats, although they trusted Fox News less than any other mainstream news source, trusted it more than left-leaning hyperpartisan outlets (such as Daily Kos).

Moreover, because the respondents generally distrusted the more marginal websites, there was significant agreement among the general audience and the professional fact-checkers. (As the authors point out, this also challenges claims about fact-checkers having strong political biases themselves.)

That means the crowdsourcing approach could work especially well in marginalizing false news stories — for instance by building audience judgments into an algorithm ranking stories by quality. Crowdsourcing would probably be less effective, however, if a social media site were trying to build a consensus about the very best news sources and stories.
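One hypothetical way such judgments could enter a ranking algorithm is sketched below: average the trust ratings with equal weight per party, mirroring the survey's balancing, and use the result to down-weight stories from low-trust sources. The rating numbers echo those reported above, but the weighting formula is illustrative, not the paper's method or any platform's.

```python
# Hypothetical sketch: fold politically balanced, crowdsourced source-trust
# ratings into a feed-ranking score. The weighting rule is illustrative only.
from statistics import mean

# Trust ratings on a 1-5 scale, kept separate by party and combined with
# equal weight so neither side dominates.
ratings = {
    "mainstream_outlet":  {"dem": 2.9, "rep": 2.3},
    "hyperpartisan_site": {"dem": 1.8, "rep": 1.8},
    "fake_news_site":     {"dem": 1.9, "rep": 1.7},
}

def source_weight(source):
    balanced = mean(ratings[source].values())   # equal-weight party average
    return (balanced - 1.0) / 4.0               # rescale the 1-5 scale onto 0-1

stories = [                                     # (id, source, engagement score)
    ("story A", "mainstream_outlet", 0.80),
    ("story B", "fake_news_site", 0.95),
    ("story C", "hyperpartisan_site", 0.70),
]

ranked = sorted(stories, key=lambda s: s[2] * source_weight(s[1]), reverse=True)
for story_id, source, engagement in ranked:
    print(story_id, source, round(engagement * source_weight(source), 3))
```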

Where Facebook failed: Familiarity?

If the new study by Rand and Pennycook rehabilitates the idea of crowdsourcing news source judgments, their approach differs from Facebook’s stated 2018 plan in one crucial respect. Facebook was only going to let readers who were familiar with a given news source give trust ratings.

But Rand and Pennycook conclude that this method would indeed build bias into the system, because people are more skeptical of news sources they have less familiarity with — and there is likely good reason why most people are not acquainted with many sites that run fake or hyperpartisan news.

 “The people who are familiar with fake news outlets are, by and large, the people who like fake news,” Rand says. “Those are not the people that you want to be asking whether they trust it.”

Thus, for crowdsourced judgments to be part of an online ranking algorithm, there might have to be a mechanism for using the judgments of audience members who are unfamiliar with a given source. Or, better yet, Pennycook and Rand suggest, users could be shown sample content from each news outlet before being asked to produce trust ratings.

For his part, Rand acknowledges one limit to the overall generalizability of the study: The dynamics could be different in countries that have more limited traditions of freedom of the press.

“Our results pertain to the U.S., and we don’t have any sense of how this will generalize to other countries, where the fake news problem is more serious than it is here,” Rand says.

All told, Rand says, he also hopes the study will help people look at America’s fake news problem with something less than total despair.

“When people talk about fake news and misinformation, they almost always have very grim conversations about how everything is terrible,” Rand says. “But a lot of the work Gord [Pennycook] and I have been doing has turned out to produce a much more optimistic take on things.”

Support for the study came from the Ethics and Governance of Artificial Intelligence Initiative of the Miami Foundation, the Social Sciences and Humanities Research Council of Canada, and the Templeton World Charity Foundation.



from MIT News http://bit.ly/2FUE7Q2