Monday, January 31, 2022

Nancy Kanwisher wins National Academy of Sciences Award in the Neurosciences

The National Academy of Sciences (NAS) has announced that Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience in MIT’s Department of Brain and Cognitive Sciences, has received the 2022 NAS Award in the Neurosciences for her “pioneering research into the functional organization of the human brain.” The $25,000 prize, established by the Fidia Research Foundation, is presented every three years to recognize “extraordinary contributions to the neuroscience fields.”

“I am deeply honored to receive this award from the NAS,” says Kanwisher, who is also an investigator in MIT’s McGovern Institute and a member of the Center for Brains, Minds and Machines. “It has been a profound privilege, and a total blast, to watch the human brain in action as these data began to reveal an initial picture of the organization of the human mind. But the biggest joy has been the opportunity to work with the incredible group of talented young scientists who actually did the work that this award recognizes.”

A window into the mind

Kanwisher is best known for her landmark insights into how humans recognize and process faces. Psychology had long suggested that recognizing a face might be distinct from general object recognition. But Kanwisher galvanized the field in 1997 with her influential discovery that the human brain contains a small region specialized to respond only to faces. The region, which Kanwisher termed the fusiform face area (FFA), became activated when subjects viewed images of faces in an MRI scanner, but not when they looked at scrambled faces or control stimuli.

Since her discovery (now the most highly cited manuscript in its area), Kanwisher and her students have applied similar methods to find brain specializations for the recognition of scenes, the mental states of others, language, and music. Taken together, her research provides a compelling glimpse into the architecture of the brain, and, ultimately, what makes us human.

“Nancy’s work over the past two decades has argued that many aspects of human cognition are supported by specialized neural circuitry, a conclusion that stands in contrast to our subjective sense of a singular mental experience,” says McGovern Institute Director Robert Desimone. “She has made profound contributions to the psychological and cognitive sciences and I am delighted that the National Academy of Sciences has recognized her outstanding achievements.” 

One-in-a-million mentor

Beyond the lab, Kanwisher has a reputation as a tireless communicator and mentor who is actively engaged in the policy implications of brain research. The statistics speak for themselves: Her 2014 TED talk, “A neural portrait of the human mind,” has been viewed over a million times online, and her introductory MIT OpenCourseWare course on the human brain has generated more than 9 million views on YouTube.

Kanwisher also has an exceptional track record in training women in science who have gone on to successful independent research careers, in many cases becoming prominent figures in their own right. 

“Nancy is the one-in-a-million mentor, who is always skeptical of your ideas and your arguments, but immensely confident of your worth,” says Rebecca Saxe, the John W. Jarve (1978) Professor of Brain and Cognitive Sciences, investigator at the McGovern Institute, and associate dean of MIT’s School of Science. Saxe was a graduate student in Kanwisher’s lab, where she earned her PhD in cognitive neuroscience in 2003. “She has such authentic curiosity,” Saxe adds. “It’s infectious and sustaining. Working with Nancy was a constant reminder of why I wanted to be a scientist.”

The NAS will present Kanwisher with the award during its annual meeting on May 1 in Washington. The event will be webcast live. Kanwisher plans to direct her prize funds to the nonprofit organization Malengo, which was established by a former student and which provides quality undergraduate education to individuals who would otherwise not be able to afford it.



from MIT News https://ift.tt/EOpSWKqYU

Preparing global online learners for the clean energy transition

After a career devoted to making the electric power system more efficient and resilient, Marija Ilic came to MIT in 2018 eager not just to extend her research in new directions, but to prepare a new generation for the challenges of the clean-energy transition.

To that end, Ilic, a senior research scientist in MIT’s Laboratory for Information and Decision Systems (LIDS) and a senior staff member at Lincoln Laboratory in the Energy Systems Group, designed an edX course that captures her methods and vision: Principles of Modeling, Simulation, and Control for Electric Energy Systems.

EdX is a provider of massive open online courses produced in partnership with MIT, Harvard University, and other leading universities. Ilic’s class made its online debut in June 2021, running for 12 weeks, and it is one of an expanding set of online courses funded by the MIT Energy Initiative (MITEI) to provide global learners with a view of the shifting energy landscape.

Ilic first taught a version of the class while a professor at Carnegie Mellon University, rolled out a second iteration at MIT just as the pandemic struck, and then revamped the class for its current online presentation. But no matter the course location, Ilic focuses on a central theme: “With the need for decarbonization, which will mean accommodating new energy sources such as solar and wind, we must rethink how we operate power systems,” she says. “This class is about how to pose and solve the kinds of problems we will face during this transformation.”

Hot global topic

The edX class has been designed to welcome a broad mix of students. In summer 2021, more than 2,000 signed up from 109 countries, ranging from high school students to retirees. In surveys, some said they were drawn to the class by the opportunity to advance their knowledge of modeling. Many others hoped to learn about the move to decarbonize energy systems.

“The energy transition is a hot topic everywhere in the world, not just in the U.S.,” says teaching assistant Miroslav Kosanic. “In the class, there were veterans of the oil industry and others working in investment and finance jobs related to energy who wanted to understand the potential impacts of changes in energy systems, as well as students from different fields and professors seeking to update their curricula — all gathered into a community.”

Kosanic, who is currently a PhD student at MIT in electrical engineering and computer science, had taken this class remotely in the spring semester of 2021, while he was still in college in Serbia. “I knew I was interested in power systems, but this course was eye-opening for me, showing how to apply control theory and to model different components of these systems,” he says. “I finished the course and thought, this is just the beginning, and I’d like to learn a lot more.” Kosanic performed so well online that Ilic recruited him to MIT as a LIDS researcher and edX course teaching assistant; he now grades homework assignments and moderates a lively learner community forum.

A platform for problem-solving

The course starts with fundamental concepts in electric power systems operations and management, and it steadily adds layers of complexity, posing real-world problems along the way. Ilic explains how voltage travels from point to point across transmission lines and how grid managers modulate systems to ensure that enough, but not too much, electricity flows. “To deliver power from one location to the next one, operators must constantly make adjustments to ensure that the receiving end can handle the voltage transmitted, optimizing voltage to avoid overheating the wires,” she says.
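A rough back-of-the-envelope calculation (our own illustration with assumed numbers, not course material) shows why operators optimize transmission voltage: for a fixed power delivery, line current falls as voltage rises, and resistive heating of the wires falls with the square of the current.

```python
# Illustrative only: resistive line losses for delivering the same power
# at two different transmission voltages. All numbers are assumed for scale.

def line_loss_watts(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """I^2 * R loss for a line carrying power_w at voltage_v (ideal, unity power factor)."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

P = 10e6   # 10 MW delivered
R = 5.0    # ohms of line resistance (assumed)

low = line_loss_watts(P, 69e3, R)    # 69 kV subtransmission
high = line_loss_watts(P, 345e3, R)  # 345 kV bulk transmission

print(f"loss at 69 kV:  {low / 1e3:.0f} kW")
print(f"loss at 345 kV: {high / 1e3:.1f} kW")
# Raising the voltage 5x cuts resistive loss 25x.
```

The same logic, run in reverse, is why delivering too much voltage to a receiving end that cannot handle it risks overheating equipment, so operators constantly adjust rather than simply maximize.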

In her early lectures, Ilic notes the fundamental constraints of current grid operations, organized around a hierarchy of regional managers dealing with a handful of very large oil, gas, coal, and nuclear power plants, and occupied primarily with the steady delivery of megawatt-hours to far-flung customers. Historically, this top-down structure has not done a good job of preventing energy losses due to suboptimal transmission conditions or outages related to extreme weather events.

These issues promise to grow for grid operators as distributed resources such as solar and wind enter the picture, Ilic tells students. In the United States, under new rules dictated by the Federal Energy Regulatory Commission, utilities must begin to integrate the distributed, intermittent electricity produced by wind farms, solar complexes, and even by homes and cars, which flows at voltages much lower than electricity produced by large power plants.

Finding ways to optimize existing energy systems and to accommodate low- and zero-carbon energy sources requires powerful new modes of analysis and problem-solving. This is where Ilic’s toolbox comes in: a mathematical modeling strategy and companion software that simplifies the input and output of electrical systems, no matter how large or how small. “In the last part of the course, we take up modeling different solutions to electric service in a way that is technology-agnostic, where it only matters how much a black-box energy source produces, and the rates of production and consumption,” says Ilic.

This black-box modeling approach, which Ilic pioneered in her research, enables students to see, for instance, “what is happening with their own household consumption, and how it affects the larger system,” says Rupamathi Jaddivada PhD ’20, a co-instructor of the edX class and a postdoc in electrical engineering and computer science. “Without getting lost in details of current or voltage, or how different components work, we think about electric energy systems as dynamical components interacting with each other, at different spatial scales.” This means that with just a basic knowledge of physical laws, high school and undergraduate students can take advantage of the course “and get excited about cleaner and more reliable energy,” adds Ilic.
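The technology-agnostic idea can be sketched in a few lines of Python. This is a toy illustration of the concept only; the names and structure are ours, not Ilic’s actual software. Each component exposes nothing but its rate of production or consumption, and the system reasons about how those rates interact.

```python
# Toy "black box" energy model: every source or load is characterized only
# by its production/consumption rate, regardless of underlying technology.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    rate_kw: float  # positive = producing, negative = consuming

def net_imbalance(components) -> float:
    """Total production minus consumption; zero means supply meets demand."""
    return sum(c.rate_kw for c in components)

grid = [
    Component("rooftop solar", 4.0),
    Component("wind farm share", 7.5),
    Component("household load", -3.0),
    Component("EV charging", -6.0),
]

surplus = net_imbalance(grid)
print(f"net surplus: {surplus:.1f} kW")  # a dispatcher would adjust rates toward zero
```

Because nothing here depends on whether a component is a turbine, a panel, or a battery, the same bookkeeping works from a single household up to a regional grid, which is the “zoom in, zoom out” flexibility described below.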

What Jaddivada and Ilic describe as “zoom in, zoom out” systems thinking leverages the ubiquity of digital communications and the so-called “internet of things.” Energy devices of all scales can link directly to other devices in a network instead of just to a central operations hub, allowing for real-time adjustments in voltage, for instance, vastly improving the potential for optimizing energy flows.

“In the course, we discuss how information exchange will be key to integrating new end-to-end energy resources and, because of this interactivity, how we can model better ways of controlling entire energy networks,” says Ilic. “It’s a big lesson of the course to show the value of information and software in enabling us to decarbonize the system and build resilience, rather than just building hardware.”

By the end of the course, students are invited to pursue independent research projects. Some might model the impact of a new energy source on a local grid or investigate different options for reducing energy loss in transmission lines.

“It would be nice if they see that we don’t have to rely on hardware or large-scale solutions to bring about improved electric service and a clean and resilient grid, but instead on information technologies such as smart components exchanging data in real time, or microgrids in neighborhoods that sustain themselves even when they lose power,” says Ilic. “I hope students walk away convinced that it does make sense to rethink how we operate our basic power systems and that with systematic, physics-based modeling and IT methods we can enable better, more flexible operation in the future.”

This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative



from MIT News https://ift.tt/Qwr9sdLtU

Why are comet heads green — but not their tails?

In a global collaboration, a team of researchers recently proved a 90-year-old theory on why comets’ heads, but never the tails, are green.

The scientific explanation, published in PNAS on Dec. 21, has to do with the way the molecule dicarbon (C2) gets blown apart by sunlight. The other part of the story lies in an accidental discovery and a love of spectroscopic perturbations, passed from a recently retired professor to another generation of scientists.

When molecules misbehave

As a graduate student at MIT in the lab of Robert W. Field, Jun Jiang PhD ’17 was studying the molecule acetylene by exciting it with a high-power frequency-tunable UV laser. As the acetylene blew apart, one of the resulting molecules, C2, emitted light from several highly excited states.

One of these high-energy states, the C¹Πg state of C2, showed an irregular vibrational energy level structure and was strongly perturbed by another, mysterious electronic state. In other words, Jiang noticed that the carbon-carbon bond in the dicarbon C state vibrates in a highly unusual manner not readily explained, in some ways like a child throwing a tantrum for no apparent reason.

Introductory classes in quantum mechanics teach a model system of how molecules are supposed to act or react in various situations. “Perturbations are deviations that are so large, spectroscopists often give up and label the observed spectra of the molecule as ‘strongly perturbed,’” says Jiang, now a researcher at Lawrence Livermore National Laboratory and a co-author of the paper.

According to Field, even physicist Gerhard Herzberg, who all but created the study of small-molecule spectroscopy and first proposed why comets’ tails are never green, would usually set perturbations aside “for future study” in his research.

“I started my career dealing with Herzberg’s garbage,” says Field, professor of chemistry post-tenure at MIT who also co-authored the paper. Field’s interest in the “bad behavior” of molecules began over 40 years ago with deviations in carbon monoxide. “When molecules misbehave, it can lead to great insight.”

The valence-hole concept

The perturbations in the C state of C2 led researchers beyond what was previously known about the molecule’s electronic structure, a concept invented by quantum chemists to describe the complex, many-body interactions among the electrons and nuclei in the molecule.

“At MIT, we discovered that the source of these systematic perturbations in C2 is a new phenomenon that we call ‘valence-hole electron configurations,’” says Field.

Despite the simplicity of its chemical composition, dicarbon possesses a surprisingly intricate electronic structure that manifests striking anomalies in its energy level patterns. These signs of “spectroscopic perturbations” are far more numerous and complex than those found in other simple, textbook diatomic molecules, such as CO, N2, and O2.

“The perturbations caused by these special, unexpectedly stable valence-hole configurations profoundly affect the photodissociation and predissociation properties of C2, which, as we show in our PNAS paper, determine how long C2 molecules survive on a comet before being destroyed by ultraviolet radiation in sunlight,” says Field. “Perturbations, predissociation, and photodissociation are three spectroscopic arcanae that explain the mystery of the color difference between the head and tail of a strikingly visible comet.”
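The survival argument can be caricatured with a toy exponential-decay model. This is our own illustration with assumed round numbers, not the PNAS calculation: C2 produced near the nucleus is photodissociated by sunlight long before it could drift out to tail-length distances, so the green glow stays in the head.

```python
# Toy survival model: fraction of C2 molecules still intact after traveling
# a given distance, assuming a constant photolysis lifetime in sunlight.
# Lifetime, outflow speed, and tail length below are assumed, not measured.
import math

def surviving_fraction(travel_time_s: float, photolysis_lifetime_s: float) -> float:
    return math.exp(-travel_time_s / photolysis_lifetime_s)

lifetime = 2 * 24 * 3600   # assume a ~2-day photolysis lifetime near the sun
speed = 1000.0             # assumed outflow speed, m/s
tail_distance = 1.6e9      # 1.6 million km, a rough tail-length scale

frac = surviving_fraction(tail_distance / speed, lifetime)
print(f"fraction of C2 reaching the tail: {frac:.1e}")  # ~1e-4: essentially none
```

Under these assumptions the molecule is destroyed within a comparatively small region around the nucleus, which is exactly the head-versus-tail asymmetry the paper explains in detail.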

These insights were crucial to the solution of an almost-century-old puzzle that Professor Timothy W. Schmidt of the University of New South Wales, the lead author of the paper, was investigating on the other side of the world. Arriving at similar conclusions about the excited C state of C2, Schmidt reached out to Field, leading to the first observation of the diagnostic details of this chemical interaction, theorized by Herzberg in the 1930s.

Putting Humpty together again

After seven years in the Field research group, Jiang has learned to embrace a curiosity-guided approach to research. “Bob always challenged us to look beyond the conventional expectations about how a molecule should behave. There can be beautiful stories to learn,” says Jiang.

The stories from this discovery reach even further than C2. Studies have shown the importance of the valence-hole state in dinitrogen, but the high energy of this state in N2 makes a more complete spectroscopic investigation difficult. Because Jiang’s accidental discovery showed that spectra for the valence-hole states of dicarbon are more easily obtained than those of related molecules, C2 can serve as a model for understanding the disruptive impact of valence-hole states in general.

“Perturbations break the regular Herzbergian pattern, and theory based on the valence-hole concept puts the broken pieces back together,” says Jiang, whose current work compares the idea to achieving what was impossible in the Humpty Dumpty nursery rhyme.

Perhaps children’s tales have more in common with chemical breakthroughs than we may think. If unexpected deviations lead to deeper understanding of a subject’s nature, we might say that misbehavior is simply misunderstood behavior.

Molecules, like children, “act out” for reasons not readily obvious. But once we identify the cause, the pieces fit together to tell a more complete story.

As Field says, “Nature leaves a breadcrumb trail of insights through perturbations.” We can reap those insights if we follow where curiosity leads.



from MIT News https://ift.tt/y1N9lhxUK

Making RNA vaccines easier to swallow

Like most vaccines, RNA vaccines have to be injected, which can be an obstacle for people who fear needles. Now, a team of MIT researchers has developed a way to deliver RNA in a capsule that can be swallowed, which they hope could make RNA vaccines easier for people to accept.

In addition to making vaccines easier to tolerate, this approach could also be used to deliver other kinds of therapeutic RNA or DNA directly to the digestive tract, which could make it easier to treat gastrointestinal disorders such as ulcers.

“Nucleic acids, in particular RNA, can be extremely sensitive to degradation particularly in the digestive tract. Overcoming this challenge opens up multiple approaches to therapy, including potential vaccination through the oral route,” says Giovanni Traverso, the Karl van Tassel Career Development Assistant Professor of Mechanical Engineering at MIT and a gastroenterologist at Brigham and Women’s Hospital.

In a new study, Traverso and his colleagues showed that they could use the capsule they developed to deliver up to 150 micrograms of RNA — more than the amount used in mRNA Covid vaccines — in the stomach of pigs.

Traverso and Robert Langer, the David H. Koch Institute Professor at MIT and a member of MIT’s Koch Institute for Integrative Cancer Research, are the senior authors of the study. Alex Abramson PhD ’19 and MIT postdocs Ameya Kirtane and Yunhua Shi are the lead authors of the study, which appears today in the journal Matter.

Oral drug delivery

For several years, Langer’s and Traverso’s labs have been developing novel ways to deliver drugs to the gastrointestinal tract. In 2019, the researchers designed a capsule that, after being swallowed, can place solid drugs, such as insulin, into the lining of the stomach.

The pill, about the size of a blueberry, has a high, steep dome inspired by the leopard tortoise. Just as the tortoise is able to right itself if it rolls onto its back, the capsule is able to orient itself so that its contents can be injected into the lining of the stomach.

In 2021, the researchers showed that they could use the capsule to deliver large molecules such as monoclonal antibodies in liquid form. Next, the researchers decided to try to use the capsule to deliver nucleic acids, which are also large molecules.

Nucleic acids are susceptible to degradation when they enter the body, so they need to be carried by protective particles. For this study, the MIT team used a new type of polymeric nanoparticle that Langer’s and Traverso’s labs had recently developed.

These particles, which can deliver RNA with high efficiency, are made from a type of polymer called poly(beta-amino esters). The MIT team’s previous work showed that branched versions of these polymers are more effective than linear polymers at protecting nucleic acids and getting them into cells. They also showed that using two of these polymers together is more effective than just one.

“We made a library of branched, hybrid poly(beta-amino esters), and we found that the lead polymers within them would do better than the lead polymers within the linear library,” Kirtane says. “What that allows us to do now is to reduce the total amount of nanoparticles that we are administering.”

To test the particles, the researchers first injected them into the stomachs of mice, without using the delivery capsule. The RNA that they delivered codes for a reporter protein that can be detected in tissue if cells successfully take up the RNA. The researchers found the reporter protein in the stomachs of the mice and also in the liver, suggesting that RNA had been taken up in other organs of the body and then carried to the liver, which filters the blood.

Next, the researchers freeze-dried the RNA-nanoparticle complexes and packaged them into their drug delivery capsules. Working with scientists at Novo Nordisk, they were able to load about 50 micrograms of mRNA per capsule, and delivered three capsules into the stomachs of pigs, for a total of 150 micrograms of mRNA. This is more than the amount of mRNA in the Covid vaccines now in use, which contain 30 to 100 micrograms.

In the pig studies, the researchers found that the reporter protein was successfully produced by cells of the stomach, but they did not see it elsewhere in the body. In future work, they hope to increase RNA uptake in other organs by changing the composition of the nanoparticles or giving larger doses. However, it may also be possible to generate a strong immune response with delivery only to the stomach, Abramson says.

“There are many immune cells in the gastrointestinal tract, and stimulating the immune system of the gastrointestinal tract is a known way of creating an immune response,” he says.

Immune activation

The researchers now plan to investigate whether they can create a systemic immune response, including activation of B and T cells, by delivering mRNA vaccines using their capsule. This approach could also be used to create targeted treatments for gastrointestinal diseases, which can be difficult to treat using traditional injection under the skin.

“When you have systemic delivery through intravenous injection or subcutaneous injection, it’s not very easy to target the stomach,” Abramson says. “We see this as a potential way to treat different diseases that are present in the gastrointestinal tract.”

Novo Nordisk, which partially funded the research, has licensed the drug-delivery capsule technology and hopes to test it in clinical trials. The research was also funded by the National Institutes of Health, the National Science Foundation Graduate Research Fellowships Program, a PhRMA Foundation postdoctoral fellowship, the Division of Gastroenterology at Brigham and Women’s Hospital, and MIT’s Department of Mechanical Engineering.

Other authors of the paper are Grace Zhong, Joy Collins, Siddartha Tamang, Keiko Ishida, Alison Hayward, Jacob Wainer, Netra Unni Rajesh, Xiaoya Lu, Yuan Gao, Paramesh Karandikar, Chaoyang Tang, Aaron Lopes, Aniket Wahane, Daniel Reker, Morten Revsgaard Frederiksen, and Brian Jensen.



from MIT News https://ift.tt/eMJ2znyuS

Friday, January 28, 2022

Professors Elchanan Mossel and Rosalind Picard named 2021 ACM Fellows

The Association for Computing Machinery (ACM) has named MIT professors Elchanan Mossel and Rosalind Picard as fellows for outstanding accomplishments in computing and information technology.

The ACM Fellows program recognizes wide-ranging and fundamental contributions in areas including algorithms, computer science education, cryptography, data security and privacy, medical informatics, and mobile and networked systems, among many other areas. The accomplishments of the 2021 ACM Fellows underpin important innovations that shape the technologies we use every day.

Elchanan Mossel

Mossel is a professor of mathematics and a member of the Statistics and Data Science Center of the MIT Institute for Data, Systems, and Society. His research in discrete functional inequalities, isoperimetry, and hypercontractivity led to the proof that Majority is Stablest and confirmed the optimality of the Goemans-Williamson MAX-CUT algorithm under the unique games conjecture from computational complexity. His work on the reconstruction problem on trees provides optimal algorithms and bounds for phylogenetic reconstruction in molecular biology and has led to sharp results in the analysis of Gibbs samplers from statistical physics and inference problems on graphs. His research has resolved open problems in computational biology, machine learning, social choice theory, and economics.

Mossel received a BS from the Open University in Israel in 1992, and MS (1997) and PhD (2000) degrees in mathematics from the Hebrew University of Jerusalem. He was a postdoc at the Microsoft Research Theory Group and a Miller Fellow at the University of California at Berkeley. He joined the UC Berkeley faculty in 2003 as a professor of statistics and computer science, and spent leaves as a professor at the Weizmann Institute and at the Wharton School before joining MIT in 2016 as a full professor.

In 2020, he received the Vannevar Bush Faculty Fellowship of the U.S. Department of Defense. Other distinctions include being named a Simons Investigator in Mathematics in 2019, being selected as a fellow of the AMS, and receiving a Sloan Research Fellowship, NSF CAREER Award, and the Bergmann Memorial Award from the U.S.-Israel Binational Science Foundation.

“I am honored by this award,” says Mossel. “It makes me realize how fortunate I've been, working with creative and generous colleagues, and mentoring brilliant young minds.”

Rosalind Picard

Picard is a scientist, engineer, author, and professor of media arts and sciences at the MIT Media Lab. She is recognized as the founder of the field of affective computing, and has carried this research forward as head of the Media Lab's Affective Computing research group. She is also a founding faculty chair of MIT's MindHandHeart Initiative, and a faculty member of the MIT Center for Neurobiological Engineering. Picard is an IEEE fellow, and a member of the National Academy of Engineering. 

Picard's inventions are in use by thousands of research teams worldwide as well as in numerous products and services. She has co-founded two companies: Affectiva (now part of Smart Eye), providing emotion AI technologies now used by more than 25 percent of the Global Fortune 500, and Empatica, providing wearable sensors and analytics to improve health. Starting from inventions by Picard and her team, Empatica created the first AI-based smart watch cleared by the FDA (in neurology for monitoring seizures), which is now helping to bring potentially lifesaving help for people with epilepsy. 

"This award makes me think of how blessed I am to work with so many amazing people here at MIT, especially at the Media Lab," Picard notes. "Whenever any one of us has our contributions recognized, it is also a recognition of how special a place this is."



from MIT News https://ift.tt/3rYfEiy

School of Science announces 2022 Infinite Expansion Awards

The MIT School of Science has announced eight postdocs and research scientists as recipients of the 2022 Infinite Expansion Award.

The award, formerly known as the Infinite Kilometer Award, was created in 2012 to highlight extraordinary members of the MIT science community. The awardees are nominated not only for their research, but for going above and beyond in mentoring junior colleagues, participating in educational programs, and contributing to their departments, labs, and research centers, the school, and the Institute.

The 2022 School of Science Infinite Expansion winners are:

  • Héctor de Jesús-Cortés, a postdoc in the Picower Institute for Learning and Memory, nominated by professor and Department of Brain and Cognitive Sciences (BCS) head Michale Fee, professor and McGovern Institute for Brain Research Director Robert Desimone, professor and Picower Institute Director Li-Huei Tsai, professor and associate BCS head Laura Schulz, associate professor and associate BCS head Joshua McDermott, and professor and BCS Postdoc Officer Mark Bear for his “awe-inspiring commitment of time and energy to research, outreach, education, mentorship, and community;”
     
  • Harold Erbin, a postdoc in the Laboratory for Nuclear Science’s Institute for Artificial Intelligence and Fundamental Interactions (IAIFI), nominated by professor and IAIFI Director Jesse Thaler, associate professor and IAIFI Deputy Director Mike Williams, and associate professor and IAIFI Early Career and Equity Committee Chair Tracy Slatyer for “provid[ing] exemplary service on the IAIFI Early Career and Equity Committee” and being “actively involved in many other IAIFI community building efforts;”
     
  • Megan Hill, a postdoc in the Department of Chemistry, nominated by Professor Jeremiah Johnson for being an “outstanding scientist” who has “also made exceptional contributions to our community through her mentorship activities and participation in Women in Chemistry;”
     
  • Kevin Kuns, a postdoc in the Kavli Institute for Astrophysics and Space Research, nominated by Associate Professor Matthew Evans for “consistently go[ing] beyond expectations;”
     
  • Xingcheng Lin, a postdoc in the Department of Chemistry, nominated by Associate Professor Bin Zhang for being “very talented, extremely hardworking, and genuinely enthusiastic about science;”
     
  • Alexandra Pike, a postdoc in the Department of Biology, nominated by Professor Stephen Bell for “not only excel[ing] in the laboratory” but also being “an exemplary citizen in the biology department, contributing to teaching, community, and to improving diversity, equity, and inclusion in the department;”
     
  • Nora Shipp, a postdoc with the Kavli Institute for Astrophysics and Space Research, nominated by Assistant Professor Lina Necib for being “independent, efficient, with great leadership qualities” with “impeccable” research; and
     
  • Jakob Voigts, a research scientist in the McGovern Institute for Brain Research, nominated by Associate Professor Mark Harnett and his laboratory for “contribut[ing] to the growth and development of the lab and its members in numerous and irreplaceable ways.”

Winners are honored with a monetary award and will be celebrated with family, friends, and nominators at a later date, along with recipients of the Infinite Mile Award.



from MIT News https://ift.tt/3r84FUi

Tiny materials lead to a big advance in quantum computing

Like the transistors in a classical computer, superconducting qubits are the building blocks of a quantum computer. While engineers have been able to shrink transistors to nanometer scales, however, superconducting qubits are still measured in millimeters. This is one reason a practical quantum computing device couldn’t be miniaturized to the size of a smartphone, for instance.

MIT researchers have now used ultrathin materials to build superconducting qubits that are at least one-hundredth the size of conventional designs and suffer from less interference between neighboring qubits. This advance could improve the performance of quantum computers and enable the development of smaller quantum devices.

The researchers have demonstrated that hexagonal boron nitride, a material consisting of only a few monolayers of atoms, can be stacked to form the insulator in the capacitors on a superconducting qubit. This defect-free material enables capacitors that are much smaller than those typically used in a qubit, which shrinks its footprint without significantly sacrificing performance.

In addition, the researchers show that the structure of these smaller capacitors should greatly reduce cross-talk, which occurs when one qubit unintentionally affects surrounding qubits.

“Right now, we can have maybe 50 or 100 qubits in a device, but for practical use in the future, we will need thousands or millions of qubits in a device. So, it will be very important to miniaturize the size of each individual qubit and at the same time avoid the unwanted cross-talk between these hundreds of thousands of qubits. This is one of the very few materials we found that can be used in this kind of construction,” says co-lead author Joel Wang, a research scientist in the Engineering Quantum Systems group of MIT’s Research Laboratory of Electronics.

Wang’s co-lead author is Megan Yamoah ’20, a former student in the Engineering Quantum Systems group who is currently studying at Oxford University on a Rhodes Scholarship. Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics, is a corresponding author, and the senior author is William D. Oliver, a professor of electrical engineering and computer science and of physics, an MIT Lincoln Laboratory Fellow, director of the Center for Quantum Engineering, and associate director of the Research Laboratory of Electronics. The research is published today in Nature Materials.

Qubit quandaries

Superconducting qubits, a particular kind of quantum computing platform that uses superconducting circuits, contain inductors and capacitors. Just like in a radio or other electronic device, these capacitors store the electric field energy. A capacitor is often built like a sandwich, with metal plates on either side of an insulating, or dielectric, material.
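The size tradeoff the researchers confronted follows from the textbook parallel-plate formula, C = ε₀εᵣA/d: a thinner gap or a higher dielectric constant lets the same capacitance fit on a much smaller plate. The sketch below uses assumed round numbers (a hypothetical 70-femtofarad target, illustrative gap sizes, and a rough dielectric constant for hBN), not the dimensions of the MIT device.

```python
# Illustrative only: parallel-plate capacitance C = eps0 * eps_r * A / d.
# All numerical values below are assumed round numbers for the sketch,
# not parameters from the MIT device.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_area(capacitance_f, eps_r, gap_m):
    """Plate area (m^2) needed to reach a target capacitance."""
    return capacitance_f * gap_m / (EPS0 * eps_r)

# Same hypothetical target capacitance, two designs:
target = 70e-15  # 70 fF, a plausible qubit shunt-capacitance scale
vacuum_area = plate_area(target, eps_r=1.0, gap_m=20e-6)  # open-faced, vacuum gap
hbn_area = plate_area(target, eps_r=3.5, gap_m=30e-9)     # thin hBN dielectric

print(f"vacuum-gap plate area: {vacuum_area * 1e12:.0f} um^2")
print(f"hBN plate area:        {hbn_area * 1e12:.1f} um^2")
```

With these assumed numbers the vacuum-gap design needs plates thousands of times larger in area, which is the intuition behind the footprint reduction described above.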

But unlike a radio, superconducting quantum computers operate at super-cold temperatures — less than 0.02 degrees above absolute zero (-273.15 degrees Celsius) — and use very high-frequency electric fields, similar to today’s cellphones. Most insulating materials that work in this regime have defects. These defects are not detrimental to most classical applications, but quantum-coherent information passing through the dielectric layer may be lost or absorbed in some random way.

“Most common dielectrics used for integrated circuits, such as silicon oxides or silicon nitrides, have many defects, resulting in quality factors around 500 to 1,000. This is simply too lossy for quantum computing applications,” Oliver says.

To get around this, conventional qubit capacitors are more like open-faced sandwiches, with no top plate and a vacuum sitting above the bottom plate to act as the insulating layer.

“The price one pays is that the plates are much bigger because you dilute the electric field and use a much larger layer for the vacuum,” Wang says. “The size of each individual qubit will be much larger than if you can contain everything in a small device. And the other problem is, when you have two qubits next to each other, and each qubit has its own electric field open to the free space, there might be some unwanted talk between them, which can make it difficult to control just one qubit. One would love to go back to the very original idea of a capacitor, which is just two electric plates with a very clean insulator sandwiched in between.”

So, that’s what these researchers did.

They thought hexagonal boron nitride, which is from a family known as van der Waals materials (also called 2D materials), would be a good candidate to build a capacitor. This unique material can be thinned down to one layer of atoms that is crystalline in structure and does not contain defects. Researchers can then stack those thin layers in desired configurations.

To test hexagonal boron nitride, they ran experiments to characterize how clean the material is when interacting with a high-frequency electric field at ultracold temperatures, and found that very little energy is lost when the field passes through the material.

“Much of the previous work characterizing hBN (hexagonal boron nitride) was performed at or near zero frequency using DC transport measurements. However, qubits operate in the gigahertz regime. It’s great to see that hBN capacitors have quality factors exceeding 100,000 at these frequencies, amongst the highest Qs I have seen for lithographically defined, integrated parallel-plate capacitors,” Oliver says.
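To see why a quality factor of 100,000 matters, recall the standard relationship that a resonator loses a fraction of roughly 2π/Q of its stored energy per oscillation cycle, so energy decays as exp(−2πn/Q) over n cycles. The back-of-envelope comparison below contrasts the ~1,000 quality factor Oliver cites for common oxides with the hBN-class value; the cycle count is an arbitrary illustration.

```python
import math

# Rough rule of thumb: a resonator with quality factor Q loses a fraction
# ~2*pi/Q of its stored energy per cycle, so after n cycles the remaining
# energy is approximately exp(-2*pi*n/Q). Cycle count is illustrative.
def energy_remaining(n_cycles, q_factor):
    """Approximate fraction of stored energy left after n_cycles."""
    return math.exp(-2 * math.pi * n_cycles / q_factor)

for q in (1_000, 100_000):  # lossy common oxide vs. hBN-class dielectric
    print(f"Q = {q:>7}: {energy_remaining(1_000, q):.3f} of energy remains after 1000 cycles")
```

Under this approximation, the lossy dielectric retains well under 1 percent of its energy after a thousand cycles, while the high-Q material keeps over 90 percent.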

Capacitor construction

They used hexagonal boron nitride to build a parallel-plate capacitor for a qubit. To fabricate the capacitor, they sandwiched hexagonal boron nitride between very thin layers of another van der Waals material, niobium diselenide.

The intricate fabrication process involved preparing one-atom-thick layers of the materials under a microscope and then using a sticky polymer to grab each layer and stack it on top of the other. They placed the sticky polymer, with the stack of 2D materials, onto the qubit circuit, then melted the polymer and washed it away.

Then they connected the capacitor to the existing structure and cooled the qubit to 20 millikelvins (-273.13 C).  

“One of the biggest challenges of the fabrication process is working with niobium diselenide, which will oxidize in seconds if it is exposed to the air. To avoid that, the whole assembly of this structure has to be done in what we call the glove box, which is a big box filled with argon, which is an inert gas that contains a very low level of oxygen. We have to do everything inside this box,” Wang says.

The resulting qubit is about 100 times smaller than what they made with traditional techniques on the same chip. The coherence time, or lifetime, of the qubit is only a few microseconds shorter with their new design. And capacitors built with hexagonal boron nitride contain more than 90 percent of the electric field between the upper and lower plates, which suggests they will significantly suppress cross-talk among neighboring qubits, Wang says. This work is complementary to recent research by a team at Columbia University and Raytheon.

In the future, the researchers want to use this method to build many qubits on a chip to verify that their technique reduces cross-talk. They also want to improve the performance of the qubit by fine-tuning the fabrication process, or even building the entire qubit out of 2D materials.

“Now we have cleared a path to show that you can safely use as much hexagonal boron nitride as you want without worrying too much about defects. This opens up a lot of opportunity where you can make all kinds of different heterostructures and combine it with a microwave circuit, and there is a lot more room that you can explore. In a way, we are giving people the green light — you can use this material in any way you want without worrying too much about the loss that is associated with the dielectric,” Wang says.

This research was funded, in part, by the U.S. Army Research Office, the National Science Foundation, and the Assistant Secretary of Defense for Research and Engineering via MIT Lincoln Laboratory.



from MIT News https://ift.tt/33U6nQC

Invisible machine-readable labels that identify and track objects

If you download music online, you can get accompanying information embedded into the digital file that might tell you the name of the song, its genre, the featured artists on a given track, the composer, and the producer. Similarly, if you download a digital photo, you can obtain information that may include the time, date, and location at which the picture was taken. That led Mustafa Doga Dogan to wonder whether engineers could do something similar for physical objects. “That way,” he mused, “we could inform ourselves faster and more reliably while walking around in a store or museum or library.”

The idea, at first, was a bit abstract for Dogan, a fourth-year PhD student in the MIT Department of Electrical Engineering and Computer Science. But his thinking solidified in the latter part of 2020 when he heard about a new smartphone model with a camera that utilizes the infrared (IR) range of the electromagnetic spectrum that the naked eye can’t perceive. IR light, moreover, has a unique ability to see through certain materials that are opaque to visible light. It occurred to Dogan that this feature, in particular, could be useful.

The concept he has since come up with — while working with colleagues at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and a research scientist at Facebook — is called InfraredTags. In place of the standard barcodes affixed to products, which may be removed or detached or become otherwise unreadable over time, these tags are unobtrusive (because they are invisible) and far more durable, given that they’re embedded within the interior of objects fabricated on standard 3D printers.

Last year, Dogan spent a couple of months trying to find a suitable variety of plastic that IR light can pass through. It would have to come in the form of a filament spool specifically designed for 3D printers. After an extensive search, he came across customized plastic filaments made by a small German company that seemed promising. He then used a spectrophotometer at an MIT materials science lab to analyze a sample, where he discovered that it was opaque to visible light but transparent or translucent to IR light — just the properties he was seeking.

The next step was to experiment with techniques for making tags on a printer. One option was to produce the code by carving out tiny air gaps — proxies for zeroes and ones — in a layer of plastic. Another option, assuming an available printer could handle it, would be to use two kinds of plastic, one that transmits IR light and the other — upon which the code is inscribed — that is opaque. The dual material approach is preferable, when possible, because it can provide a clearer contrast and thus could be more easily read with an IR camera.
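The air-gap scheme amounts to mapping a bit string onto a grid of cells, where a 1 becomes a tiny void and a 0 stays solid plastic. The sketch below is a hypothetical illustration of that mapping only; the function name, cell layout, and grid width are made up and do not reflect the paper's actual tag geometry or encoding.

```python
# Hypothetical sketch of the air-gap encoding idea: a bit string becomes a
# grid of printed cells, where True means "leave an air gap" (a binary 1)
# and False means "solid IR-transparent plastic" (a binary 0). The grid
# width and layout are made-up parameters, not the paper's geometry.
def bits_to_gap_grid(bits, width):
    """Pack a bit string into rows of `width` cells, zero-padded at the end."""
    padded = bits + "0" * (-len(bits) % width)
    return [
        [cell == "1" for cell in padded[row:row + width]]
        for row in range(0, len(padded), width)
    ]

grid = bits_to_gap_grid("1011001110001101", width=4)
for row in grid:
    print("".join("#" if gap else "." for gap in row))  # '#' marks an air gap
```

An IR camera looking through the transparent shell would see this contrast pattern and decode it back into bits.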

The tags themselves could consist of familiar barcodes, which present information in a linear, one-dimensional format. Two-dimensional options — such as square QR codes (commonly used, for instance, on return labels) and so-called ArUco (fiducial) markers — can potentially pack more information into the same area. The MIT team has developed a software “user interface” that specifies exactly what the tag should look like and where it should appear within a particular object. Multiple tags could be placed throughout the same object, in fact, making it easy to access information in the event that views from certain angles are obstructed.

“InfraredTags is a really clever, useful, and accessible approach to embedding information into objects,” comments Fraser Anderson, a senior principal research scientist at the Autodesk Technology Center in Toronto, Ontario. “I can easily imagine a future where you can point a standard camera at any object and it would give you information about that object — where it was manufactured, the materials used, or repair instructions — and you wouldn't even have to search for a barcode.”

Dogan and his collaborators have created several prototypes along these lines, including mugs with barcodes engraved inside the container walls, beneath a 1-millimeter plastic shell, which can be read by IR cameras. They’ve also fabricated a Wi-Fi router prototype with invisible tags that reveal the network name or password, depending on the perspective it’s viewed from. They’ve made a cheap video game controller, shaped like a wheel, that is completely passive, with no electronic components at all. It just has a barcode (ArUco marker) inside. A player simply turns the wheel, clockwise or counterclockwise, and an inexpensive ($20) IR camera can then determine its orientation in space.

In the future, if tags like these become widespread, people could use their cellphones to turn lights on and off, control the volume of a speaker, or regulate the temperature on a thermostat. Dogan and his colleagues are looking into the possibility of adding IR cameras to augmented reality headsets. He imagines walking around a supermarket, someday, wearing such headsets and instantly getting information about the products around him — how many calories are in an individual serving, and what are some recipes for preparing it?

Kaan Akşit, an associate professor of computer science at University College London, sees great potential for this technology. “The labeling and tagging industry is a vast part of our day-to-day lives,” Akşit says. “Everything we buy from grocery stores to pieces to be replaced in our devices (e.g., batteries, circuits, computers, car parts) must be identified and tracked correctly. Doga’s work addresses these issues by providing an invisible tagging system that is mostly protected against the sands of time.” And as futuristic notions like the metaverse become part of our reality, Akşit adds, “Doga’s tagging and labeling mechanism can help us bring a digital copy of items with us as we explore three-dimensional virtual environments.”

The paper, “InfraredTags: Embedding Invisible AR Markers and Barcodes into Objects Using Low-Cost Infrared-Based 3D Printing and Imaging Tools,” is being presented at the ACM CHI Conference on Human Factors in Computing Systems, in New Orleans this spring, and will be published in the conference proceedings.

Dogan’s coauthors on this paper are Ahmad Taka, Michael Lu, Yunyi Zhu, Akshat Kumar, and Stefanie Mueller of MIT CSAIL; and Aakar Gupta of Facebook Reality Labs in Redmond, Washington.

This work was supported by an Alfred P. Sloan Foundation Research Fellowship. The Dynamsoft Corp. provided a free software license that facilitated this research.



from MIT News https://ift.tt/3KS1W9j

3 Questions: Jinhua Zhao on a “third place” between home and office

During the Covid-19 pandemic, many office workers have developed flexible working arrangements, to avoid too much time spent in crowded offices. But an MIT-supported survey project reveals a twist on this now-familiar scenario: Many workers with location flexibility are not necessarily working from home. Instead, they are taking their work to a “third place,” including cafés, libraries, and co-working spaces. About one-third of nonoffice work hours are spent in such places, the data show, even if those locations put people in closer proximity to others than working at home might.

The results come from the November and December iterations of the Survey of Working Arrangements and Attitudes, a joint monthly project in which MIT has joined forces with the University of Chicago, Stanford University, and the Instituto Tecnológico Autónomo de México. To learn more about this trend and its implications, MIT News spoke with Jinhua Zhao, associate professor of transportation and city planning in MIT’s Department of Urban Studies and Planning, and director of the MIT Mobility Initiative, who is working with his students Nick Caros and Xiaotong Guo on this project.

Q: It’s become fairly common for workers to have flexible arrangements during the Covid-19 pandemic. Certainly many employees in the service industries, in health care, and other essential occupations do not have that opportunity. But many who can work remotely have been doing so. However, the survey indicates a substantial number of people working remotely are not staying home, but going to other locations. Can you explain the survey’s main findings for us?

A: We find that besides the home and the office, there is a whole spectrum of places people are using as their location of choice for work. The nonhome, nonoffice workspace is what we call the “third place,” and we recently distributed a survey to quantify this trend: The “third place” constitutes more than a third of the total remote working hours [as of November and December 2021].

The first category of third places can be described broadly as “public spaces.” That includes places like a café, a library, a community center. The second category is co-working spaces, a collaborative working environment where people or companies can rent desks on a short-term basis. Most of these spaces are currently located in downtowns, but now they’re starting to penetrate into suburban areas. Why commute a long way if I can walk to a co-working space? The third example is a friend or associate’s home. Suppose you have three or four good friends or work colleagues, and you say, “Today I’ll go to your place, but tomorrow you can come to my backyard.”

Why are people leaving their homes to go to a “third place?” There are multiple reasons. One may be that you don’t have good internet, or your neighbor may be doing leaf-blowing all the time. The “third place” may bring benefits like a quiet room for conference calls. But there are also social reasons. It gives workers the opportunity to meet people, which helps for creativity and productivity, or just for mental health. It’s good to say “hello” to people.

Q: You have uncovered a split in these survey results between men and women. What did you find there?

A: Men and women behave the same in terms of their total remote working hours, but there is a distinct gender difference in the use of the “third place.” On average, men spend about 40 percent of total remote hours in “third places,” and women only 30 percent. So, what could be the source of this difference? One hypothesis is that women still take on a higher burden of household maintenance tasks and child care. Working at home allows you to care for children and family, as well as complete small chores during breaks in the work day. That’s our main hypothesis. There are other potential reasons: Do women have a different perspective about interacting with other people in a “third place,” for instance? Our survey only quantified the facts, and a follow-up study could clarify the motivations.

Q: What are the main implications of this trend for neighborhoods, urban planning, and transportation, among other things?

A: We anticipate there would be quite a significant impact. Let me mention three potential consequences. The first involves urban space. If co-working spaces move to suburban centers, they may be smaller and more localized. If that is the case, you’ll probably do some grocery shopping, make retail purchases, or have your hair cut near your “third place.” That would boost demand for neighborhood-level retail, as opposed to businesses located downtown or near the office park off the interstate. In the urban planning realm, we talk about the “15-minute city,” where most daily activities can be accomplished within a short walk. The “third place” working trend is one component of this concept.

The second area of impact is transportation. I’m a transportation scholar, and the thought is this: If people in Newton [a suburban city west of Boston] can go to Newton Center instead of Boston, it’s a shorter trip and they’re more likely to get there by walking, cycling, or a short bus ride. That can reduce traffic and carbon dioxide emissions. As far as climate change goes, transportation is the biggest carbon dioxide-emitting sector in the economy. So, if “third place” working can reduce travel, that would contribute to decarbonization.

The third one is social: If I work locally, I know my community better and have more chances to meet my neighbors. Would that allow a better understanding between people? There’s potential for that.

For a long time, a job imposed a specific time, space, and organizational arrangement on people. A job is an anchor. But many people are rethinking these arrangements.



from MIT News https://ift.tt/3AI2SbY

A creative desire, and the grit to get it done

When Laura Rosado was headed to MIT four years ago, she was undecided about choosing a major. Asked what she wanted to study, she would answer with a bit of a dodge, saying, “Well, right now my favorite class is math.”

Then, in the spring semester of her first year, she signed up for Class 2.00a (Fundamentals of Engineering Design). That semester, the focus of the class — taught by Professor Daniel Frey — was aircraft, and the requirements of the final project were wide open: Design something that flies and is radio controlled. Rosado and her lab partner decided on a project that was “ridiculously” ambitious for first-year students: a Chitty Chitty Bang Bang-style flying car.

“The professor really just let us go crazy,” she says, adding that toward the end of the class, Frey made extra time for the students, letting them out of clean-up labs and telling them to “keep building and enjoy yourselves.”

“The creative freedom that I was given in that class was eye-opening,” says Rosado, who will graduate in May. “Being allowed to explore and learn without hard constraints completely sold me. I thought this just makes me very excited to be a mechanical engineer.”

There have, of course, been other important formative experiences in the aspiring mechanical engineer’s life, which reinforced each other and brought Rosado to where she is today, including Thomas the Tank Engine.

Around age 2, Rosado embarked on a lifelong romance with trains, and she says a current research interest in transportation infrastructure can be connected to those early days of carefully assembling her collection of tracks.

“Oh, 100 percent,” Rosado says. “I just like the idea of railroads. It’s a very nostalgic, romantic idea, especially in the U.S. It hearkens back to going out West, the adventure.”

Over the years, as the young Rosado developed a concern for society and the health of the planet, her fondness for railroad transportation only grew.

“The transportation sector is responsible for the largest chunk of total carbon emissions in the United States, and the majority of it is from personal cars, not even tractor-trailers,” Rosado says. “Well, this just reinforced my love of trains. I wish they were more extensive in the United States. Improving transportation infrastructure is a great public good. I've always thought that if I got a chance to work in that sector, I’d be pretty happy.”

Other influences on Rosado’s direction in life worked their magic along the way. Her father shared his limitless curiosity, and her mother encouraged her to think critically and love reading — she is pursuing a double major, in mechanical engineering and creative writing. The swim coach she had from about age 11 to 16 taught her to push herself really hard, to “be honest with myself and always aim higher.”

Rosado tells the story of when she clocked her best time yet at a swim meet.

“I was over the moon. You look at the board and you see a time that you’ve never gotten before,” Rosado says. “I remember going up to my coach in the stands, and he was like, ‘Well, you could have been stronger in the second half of that race,’ and I said, ‘Yeah, I’ll work on it next time.’

“There’s always something to improve on.”

Around the same time that Rosado was avoiding the question of what she would major in at college, the New Haven native had already earned herself the title of 2017 Southern Connecticut Conference Swimmer of the Year. Wanting to continue swimming competitively, she sought a Division III school, where she could achieve a balance with academics. She jokingly says she chose MIT because it had the “best pool in the Northeast,” and speaks very highly of her experience of swimming all four years.

“I’d say swimming is the reason I stayed sane,” she says. “It’s like having a space to push myself that’s not academic. It’s kind of meditative to be in the water during practice.”

She has applied to graduate school at MIT and has worked on projects through MIT’s Human Systems Lab and Little Devices Lab. Her 2019 Undergraduate Research Opportunities Program (UROP) project analyzed and modeled the driving strategies of freight train engineers in order to develop automation that will aid them to drive safely in all kinds of scenarios.

“I got to drive a simulated locomotive,” Rosado says, “and talk to people who actually drive for a living.”

Rosado’s mechanical engineering concentration is in robotics. A summer UROP project she worked on last year involved improving the mechanical and sensor systems of a five-robot swarm system used in health care, which employs the robots to work collaboratively, both to help with diagnostic processing associated with Covid-19 and to provide automated systems for struggling labs around the world.

Her interest in robotics enters into yet another area of Rosado’s busy life — her writing. As her senior thesis for her creative writing major, she is working on a novel. The novel is a coming-of-age story without any science fiction elements, but Rosado also likes to write speculative fiction, such as stories that examine the ramifications of a worst-case dystopian scenario of robots taking over the world, which of course invites ethical examination of how technology progresses.

As a person with a fascination with transportation, a desire to be creative, and the grit to get done whatever she plans, 21-year-old Rosado is proceeding smoothly on the course toward her future, while never forgetting her social concern.

“How can we think about this,” Rosado says about the power-of-technology issues raised in her speculative fiction, “and then take that ethical pondering to be more thoughtful engineers?”



from MIT News https://ift.tt/3ADh7yB

Thursday, January 27, 2022

Where did that sound come from?

The human brain is finely tuned not only to recognize particular sounds, but also to determine which direction they came from. By comparing differences in sounds that reach the right and left ear, the brain can estimate the location of a barking dog, wailing fire engine, or approaching car.

MIT neuroscientists have now developed a computer model that can also perform that complex task. The model, which consists of several convolutional neural networks, not only performs the task as well as humans do, but also struggles in the same ways that humans do.

“We now have a model that can actually localize sounds in the real world,” says Josh McDermott, an associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “And when we treated the model like a human experimental participant and simulated this large set of experiments that people had tested humans on in the past, what we found over and over again is that the model recapitulates the results that you see in humans.”

Findings from the new study also suggest that humans’ ability to perceive location is adapted to the specific challenges of our environment, says McDermott, who is also a member of MIT’s Center for Brains, Minds, and Machines.

McDermott is the senior author of the paper, which appears today in Nature Human Behaviour. The paper’s lead author is MIT graduate student Andrew Francl.

Modeling localization

When we hear a sound such as a train whistle, the sound waves reach our right and left ears at slightly different times and intensities, depending on what direction the sound is coming from. Parts of the midbrain are specialized to compare these slight differences to help estimate what direction the sound came from, a task also known as localization.
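The timing comparison described above can be illustrated with a toy interaural-time-difference (ITD) estimate: slide one ear's signal against the other and keep the lag where they align best. This is only a stand-in for the midbrain computation and for what such models learn; the tone, sample rate, and delay below are all synthetic.

```python
import math

# Toy interaural-time-difference (ITD) estimate: find the lag (in samples)
# that best aligns the left- and right-ear signals via cross-correlation.
# This is a stand-in for the midbrain's comparison, with synthetic signals.
def best_lag(left, right, max_lag):
    """Lag of `right` relative to `left` that maximizes correlation."""
    def corr(lag):
        return sum(left[i] * right[i + lag]
                   for i in range(len(left))
                   if 0 <= i + lag < len(right))
    return max(range(-max_lag, max_lag + 1), key=corr)

fs = 44_100                                    # samples per second (assumed)
tone = [math.sin(2 * math.pi * 500 * t / fs) for t in range(512)]
delay = 20                                     # right ear hears the tone 20 samples later
left, right = tone, [0.0] * delay + tone
print(best_lag(left, right, max_lag=40))       # recovers the 20-sample delay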

This task becomes markedly more difficult under real-world conditions — where the environment produces echoes and many sounds are heard at once.

Scientists have long sought to build computer models that can perform the same kind of calculations that the brain uses to localize sounds. These models sometimes work well in idealized settings with no background noise, but never in real-world environments, with their noises and echoes.

To develop a more sophisticated model of localization, the MIT team turned to convolutional neural networks. This kind of computer modeling has been used extensively to model the human visual system, and more recently, McDermott and other scientists have begun applying it to audition as well.

Convolutional neural networks can be designed with many different architectures, so to help them find the ones that would work best for localization, the MIT team used a supercomputer that allowed them to train and test about 1,500 different models. That search identified 10 that seemed the best-suited for localization, which the researchers further trained and used for all of their subsequent studies.

To train the models, the researchers created a virtual world in which they can control the size of the room and the reflection properties of the walls of the room. All of the sounds fed to the models originated from somewhere in one of these virtual rooms. The set of more than 400 training sounds included human voices, animal sounds, machine sounds such as car engines, and natural sounds such as thunder.

The researchers also ensured the model started with the same information provided by human ears. The outer ear, or pinna, has many folds that reflect sound, altering the frequencies that enter the ear, and these reflections vary depending on where the sound comes from. The researchers simulated this effect by running each sound through a specialized mathematical function before it went into the computer model.

“This allows us to give the model the same kind of information that a person would have,” Francl says.

After training the models, the researchers tested them in a real-world environment. They placed a mannequin with microphones in its ears in an actual room and played sounds from different directions, then fed those recordings into the models. The models performed very similarly to humans when asked to localize these sounds.

“Although the model was trained in a virtual world, when we evaluated it, it could localize sounds in the real world,” Francl says.

Similar patterns

The researchers then subjected the models to a series of tests that scientists have used in the past to study humans’ localization abilities.

In addition to analyzing the difference in arrival time at the right and left ears, the human brain also bases its location judgments on differences in the intensity of sound that reaches each ear. Previous studies have shown that the success of both of these strategies varies depending on the frequency of the incoming sound. In the new study, the MIT team found that the models showed this same pattern of sensitivity to frequency.
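A back-of-envelope calculation shows one reason the two strategies are frequency-dependent: timing differences become ambiguous once a sound's period is shorter than twice the ear-to-ear delay, which is why level differences carry more weight at high frequencies. The head dimensions below are rough textbook numbers, not values from this study.

```python
# Back-of-envelope "duplex theory" sketch: interaural *timing* cues become
# ambiguous when the sound's period is shorter than twice the maximum
# ear-to-ear delay. Head dimensions are rough textbook numbers.
SPEED_OF_SOUND = 343.0  # m/s, in air at room temperature
EAR_SPACING = 0.22      # m, approximate acoustic path around the head

max_itd = EAR_SPACING / SPEED_OF_SOUND      # largest possible delay, seconds
ambiguous_above = 1.0 / (2.0 * max_itd)     # Hz

print(f"max ITD: {max_itd * 1e3:.2f} ms")
print(f"timing cue becomes ambiguous above roughly {ambiguous_above:.0f} Hz")
```

With these assumed numbers the crossover lands in the high hundreds of hertz, consistent with the textbook picture that timing cues dominate at low frequencies and level cues at high frequencies.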

“The model seems to use timing and level differences between the two ears in the same way that people do, in a way that's frequency-dependent,” McDermott says.

The researchers also showed that when they made localization tasks more difficult, by adding multiple sound sources played at the same time, the computer models’ performance declined in a way that closely mimicked human failure patterns under the same circumstances.

“As you add more and more sources, you get a specific pattern of decline in humans’ ability to accurately judge the number of sources present, and their ability to localize those sources,” Francl says. “Humans seem to be limited to localizing about three sources at once, and when we ran the same test on the model, we saw a really similar pattern of behavior.”

Because the researchers used a virtual world to train their models, they were also able to explore what happens when their model learned to localize in different types of unnatural conditions. The researchers trained one set of models in a virtual world with no echoes, and another in a world where there was never more than one sound heard at a time. In a third, the models were only exposed to sounds with narrow frequency ranges, instead of naturally occurring sounds.

When the models trained in these unnatural worlds were evaluated on the same battery of behavioral tests, the models deviated from human behavior, and the ways in which they failed varied depending on the type of environment they had been trained in. These results support the idea that the localization abilities of the human brain are adapted to the environments in which humans evolved, the researchers say.

The researchers are now applying this type of modeling to other aspects of audition, such as pitch perception and speech recognition, and believe it could also be used to understand other cognitive phenomena, such as the limits on what a person can pay attention to or remember, McDermott says.

The research was funded by the National Science Foundation and the National Institute on Deafness and Other Communication Disorders.



from MIT News https://ift.tt/3g2DjZv

Demystifying machine-learning systems

Neural networks are sometimes called black boxes because even the researchers who design them often don’t understand how or why they work so well, despite the fact that they can outperform humans on certain tasks. But if a neural network is used outside the lab, perhaps to classify medical images that could help diagnose heart conditions, knowing how the model works helps researchers predict how it will behave in practice.

MIT researchers have now developed a method that sheds some light on the inner workings of black box neural networks. Modeled loosely on the human brain, neural networks are arranged into layers of interconnected nodes, or “neurons,” that process data. The new system can automatically produce descriptions of those individual neurons, generated in English or another natural language.

For instance, in a neural network trained to recognize animals in images, their method might describe a certain neuron as detecting ears of foxes. Their scalable technique is able to generate more accurate and specific descriptions for individual neurons than other methods.

In a new paper, the team shows that this method can be used to audit a neural network to determine what it has learned, or even edit a network by identifying and then switching off unhelpful or incorrect neurons.

“We wanted to create a method where a machine-learning practitioner can give this system their model and it will tell them everything it knows about that model, from the perspective of the model’s neurons, in language. This helps you answer the basic question, ‘Is there something my model knows about that I would not have expected it to know?’” says Evan Hernandez, a graduate student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper.

Co-authors include Sarah Schwettmann, a postdoc in CSAIL; David Bau, a recent CSAIL graduate who is an incoming assistant professor of computer science at Northeastern University; Teona Bagashvili, a former visiting student in CSAIL; Antonio Torralba, the Delta Electronics Professor of Electrical Engineering and Computer Science and a member of CSAIL; and senior author Jacob Andreas, the X Consortium Assistant Professor in CSAIL. The research will be presented at the International Conference on Learning Representations.

Automatically generated descriptions

Most existing techniques that help machine-learning practitioners understand how a model works either describe the entire neural network or require researchers to identify concepts they think individual neurons could be focusing on.

The system Hernandez and his collaborators developed, dubbed MILAN (mutual-information guided linguistic annotation of neurons), improves upon these methods because it does not require a list of concepts in advance and can automatically generate natural language descriptions of all the neurons in a network. This is especially important because one neural network can contain hundreds of thousands of individual neurons.

MILAN produces descriptions of neurons in neural networks trained for computer vision tasks like object recognition and image synthesis. To describe a given neuron, the system first inspects that neuron’s behavior on thousands of images to find the set of image regions in which the neuron is most active. Next, it selects a natural language description for each neuron to maximize a quantity called pointwise mutual information between the image regions and descriptions. This encourages descriptions that capture each neuron’s distinctive role within the larger network.

“In a neural network that is trained to classify images, there are going to be tons of different neurons that detect dogs. But there are lots of different types of dogs and lots of different parts of dogs. So even though ‘dog’ might be an accurate description of a lot of these neurons, it is not very informative. We want descriptions that are very specific to what that neuron is doing. This isn’t just dogs; this is the left side of ears on German shepherds,” says Hernandez.
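The selection rule behind that preference for specificity can be illustrated with a toy calculation. The candidate descriptions and probabilities below are invented for illustration; they are not MILAN's actual data or implementation.

```python
import math

def pmi(p_joint, p_regions, p_desc):
    """Pointwise mutual information between a neuron's exemplar image
    regions E and a candidate description d: log p(E, d) / (p(E) p(d))."""
    return math.log(p_joint / (p_regions * p_desc))

p_regions = 0.05  # marginal probability of this neuron's exemplar regions
candidates = {
    # description: (p(E, d), p(d)) -- made-up probabilities
    "dog": (0.020, 0.100),
    "dog ear": (0.012, 0.020),
    "left ear of German shepherd": (0.004, 0.004),
}

best = max(candidates,
           key=lambda d: pmi(candidates[d][0], p_regions, candidates[d][1]))
# "dog" co-occurs with the regions most often, but it also describes many
# other neurons; the PMI objective rewards the most distinctive description.
```

Here the generic label "dog" has the highest joint probability with the neuron's regions, yet the most specific description wins because its probability elsewhere in the network is tiny.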

The team compared MILAN to other interpretability methods and found that it generated richer and more accurate descriptions, but the researchers were more interested in seeing how it could assist in answering specific questions about computer vision models.

Analyzing, auditing, and editing neural networks

First, they used MILAN to analyze which neurons are most important in a neural network. They generated descriptions for every neuron and sorted them based on the words in the descriptions. They slowly removed neurons from the network to see how its accuracy changed, and found that neurons that had two very different words in their descriptions (vases and fossils, for instance) were less important to the network.

They also used MILAN to audit models to see if they learned something unexpected. The researchers took image classification models that were trained on datasets in which human faces were blurred out, ran MILAN, and counted how many neurons were nonetheless sensitive to human faces.

“Blurring the faces in this way does reduce the number of neurons that are sensitive to faces, but far from eliminates them. As a matter of fact, we hypothesize that some of these face neurons are very sensitive to specific demographic groups, which is quite surprising. These models have never seen a human face before, and yet all kinds of facial processing happens inside them,” Hernandez says.

In a third experiment, the team used MILAN to edit a neural network by finding and removing neurons that were detecting spurious correlations in the data, which led to a 5 percent increase in the network’s accuracy on inputs exhibiting the problematic correlation.
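Mechanically, this kind of edit amounts to masking the flagged neurons at inference time. Here is a minimal sketch on a random toy network; the weights, sizes, and the idea of zeroing hidden activations are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))    # hidden-layer weights (8 neurons, 4 inputs)
W2 = rng.normal(size=(3, 8))    # output-layer weights

def forward(x, switched_off=()):
    h = np.maximum(W1 @ x, 0.0)           # ReLU hidden activations
    h[list(switched_off)] = 0.0           # "edit": silence the flagged neurons
    return W2 @ h

x = rng.normal(size=4)
full = forward(x)                         # unedited prediction
edited = forward(x, switched_off=[2, 5])  # with two neurons removed
```

The appeal of pairing this with MILAN is that the mask indices are chosen by reading the generated descriptions, rather than by trial-and-error ablation.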

While the researchers were impressed by how well MILAN performed in these three applications, the model sometimes gives descriptions that are still too vague, or it will make an incorrect guess when it doesn’t know the concept it is supposed to identify.

They are planning to address these limitations in future work. They also want to continue enhancing the richness of the descriptions MILAN is able to generate. They hope to apply MILAN to other types of neural networks and use it to describe what groups of neurons do, since neurons work together to produce an output.

“This is an approach to interpretability that starts from the bottom up. The goal is to generate open-ended, compositional descriptions of function with natural language. We want to tap into the expressive power of human language to generate descriptions that are a lot more natural and rich for what neurons do. Being able to generalize this approach to different types of models is what I am most excited about,” says Schwettmann.

“The ultimate test of any technique for explainable AI is whether it can help researchers and users make better decisions about when and how to deploy AI systems,” says Andreas. “We’re still a long way off from being able to do that in a general way. But I’m optimistic that MILAN — and the use of language as an explanatory tool more broadly — will be a useful part of the toolbox.”

This work was funded, in part, by the MIT-IBM Watson AI Lab and the SystemsThatLearn@CSAIL initiative.



from MIT News https://ift.tt/3o3y14a

Wednesday, January 26, 2022

Immersive video game explores the history of women at MIT

A new video game, "A Lab of One’s Own," creates an immersive environment in which players discover archival materials that tell the stories of women from MIT’s history. Created by multimedia artists Mariana Roa Oliva and Maya Bjornson with collections from MIT Libraries’ Women@MIT archival initiative, the project aims to create a multi-sensory, choose-your-own-adventure-style experience that challenges the idea that the past is behind us. 

“Our goal was to bring these materials into conversation through an engaging virtual space,” says Bjornson. “We felt that by using new digital technologies we could make the archives accessible to a wider audience, and make research feel like play.” 

Multimedia and installation artists Roa Oliva and Bjornson were named the spring 2021 Women@MIT Fellows in the MIT Libraries’ Distinctive Collections department. Engaging in archival research using MIT’s rich collections, fellows create projects that contribute to greater understanding of the history of women at the Institute and in the history of science, technology, engineering, and mathematics (STEM).

"A Lab of One’s Own" is a fantastical virtual world in which players encounter quotes from memoirs and oral histories, newspaper clippings, audio clips, and ephemera that all speak to women’s experiences at MIT and in the STEM fields. Perspectives from a variety of individuals and time periods are juxtaposed in a kind of collage that offers new interpretations of these histories. Created using the public game engine Unity, "A Lab of One’s Own" can be downloaded from the project’s website.

In the game, players navigate through different settings — including an island, a cabin in the woods, the interior of a microscope, a lecture hall, and outer space — following a series of floating diamonds that activate quotes and excerpts of text from the MIT archives. Players can also explore their virtual surroundings: examining formulas on a chalkboard, walking through a landscape of floating photographs, or reading pages from scientists’ notebooks. Throughout the game’s world, one can find newspaper stands that offer clippings from publications such as The Tech and the Chronicle of Higher Education on issues of gender, sexuality, and race. 

The six chapters of "A Lab of One’s Own" examine different aspects of a variety of women’s lives and work. The cabin contains objects and texts from trailblazers like Ellen Swallow Richards and Emily Wick, who studied the domestic sphere through the lens of science. Chapter Three makes the idea of the “rat race” literal, while texts describe the challenges of balancing career, motherhood, marriage, or a spouse’s career, and an audio track from ChoKyun Rha, the first woman of Asian descent to receive tenure at MIT, talks about her work developing synthetic milk. In the auditorium, players can explore the intersection of gender and race, as articulated in a keynote speech from Angela Davis at the 1994 Black Women in the Academy conference at MIT, and other quotes from archival sources.

“The materials from the Women@MIT archival initiative tell stories of women who’ve been first in graduating from academic institutions, publishing ground-breaking papers, and getting together at first-of-its-kind conferences,” write the fellows in their introduction to the game. “They also offer glimpses into the history that happened just as much in community meetings, quiet lab hours, and the home.”

Accompanying the game is an exhibit in Hayden Library’s loft, located on the 1M level, which illustrates how Roa Oliva and Bjornson utilized Distinctive Collections to create the immersive experience of "A Lab of One’s Own." Archival materials — including audio recordings of Margaret Hutchinson Compton, wife of MIT President Karl Compton, and MIT Sloan School of Management faculty member Lotte Bailyn; a transcription of a 1976 Women’s Luncheon; and minutes from a Women’s Independent Living Group meeting in the 1970s — are on display, paired with reflections from the fellows on their exploration and interpretation of the collections.

"The goal of the exhibit is to showcase the Distinctive Collections’ archival items Mari and Maya used alongside their reflections to illustrate the interpretive process that comes with working with archival materials,” says Alex McGee, interim head of public services for Distinctive Collections. “The many different types of items on display also demonstrates the diversity of our collections. Our hope is that the exhibit illuminates the possibilities for archival research beyond your standard paper or article, instead highlighting the limitless potential for these collections in one’s work."

The MIT Libraries’ Women@MIT archival initiative seeks to add the records of women faculty, staff, students, and alumnae to the historic record by collecting, preserving, and sharing their life and work with MIT and global audiences. These efforts are made possible thanks to the generous support of Barbara Ostrom ’78 and Shirley Sontheimer with the hope that this project will encourage more women and underrepresented people to become engaged in science, technology, and engineering. Extending from this initiative, Distinctive Collections also is committed to acquiring, preserving, and making accessible the papers of gender non-binary and non-conforming individuals at MIT to help share their stories and contributions.



from MIT News https://ift.tt/3g2ylfc

Deploying machine learning to improve mental health

A machine-learning expert and a psychology researcher/clinician may seem an unlikely duo. But MIT’s Rosalind Picard and Massachusetts General Hospital’s Paola Pedrelli are united by the belief that artificial intelligence may be able to help make mental health care more accessible to patients.

In her 15 years as a clinician and researcher in psychology, Pedrelli says “it's been very, very clear that there are a number of barriers for patients with mental health disorders to accessing and receiving adequate care.” Those barriers may include figuring out when and where to seek help, finding a nearby provider who is taking patients, and obtaining financial resources and transportation to attend appointments. 

Pedrelli is an assistant professor in psychology at the Harvard Medical School and the associate director of the Depression Clinical and Research Program at Massachusetts General Hospital (MGH). For more than five years, she has been collaborating with Picard, an MIT professor of media arts and sciences and a principal investigator at MIT’s Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) on a project to develop machine-learning algorithms to help diagnose and monitor symptom changes among patients with major depressive disorder.

Machine learning is a type of AI in which a system, given lots of data and examples of desired behavior (i.e., what output to produce for a particular input), can become quite good at autonomously performing a task. It can also help identify meaningful patterns that humans may not have been able to find as quickly without the machine's help. Using wearable devices and smartphones of study participants, Picard and Pedrelli can gather detailed data on participants’ skin conductance and temperature, heart rate, activity levels, socialization, personal assessment of depression, sleep patterns, and more. Their goal is to develop machine-learning algorithms that can intake this tremendous amount of data and make it meaningful — identifying when an individual may be struggling and what might be helpful to them. They hope that their algorithms will eventually equip physicians and patients with useful information about individual disease trajectory and effective treatment.

“We're trying to build sophisticated models that have the ability to not only learn what's common across people, but to learn categories of what's changing in an individual’s life,” Picard says. “We want to provide those individuals who want it with the opportunity to have access to information that is evidence-based and personalized, and makes a difference for their health.”

Machine learning and mental health

Picard joined the MIT Media Lab in 1991. In 1997, she published a book, “Affective Computing,” which spurred the development of a field with that name. Affective computing is now a robust area of research concerned with developing technologies that can measure, sense, and model data related to people’s emotions.

While early research focused on determining if machine learning could use data to identify a participant’s current emotion, Picard and Pedrelli’s current work at MIT’s Jameel Clinic goes several steps further. They want to know if machine learning can estimate disorder trajectory, identify changes in an individual’s behavior, and provide data that informs personalized medical care. 

Picard and Szymon Fedor, a research scientist in Picard’s affective computing lab, began collaborating with Pedrelli in 2016. After running a small pilot study, they are now in the fourth year of their National Institutes of Health-funded, five-year study. 

To conduct the study, the researchers recruited MGH participants with major depressive disorder who have recently changed their treatment. So far, 48 participants have enrolled in the study. For 22 hours per day, every day for 12 weeks, participants wear Empatica E4 wristbands. These wearable wristbands, designed by one of the companies Picard founded, can pick up information on biometric data, like electrodermal (skin) activity. Participants also download apps on their phones that collect data on texts and phone calls, location, and app usage, and also prompt them to complete a biweekly depression survey.

Every week, patients check in with a clinician who evaluates their depressive symptoms. 

“We put all of that data we collected from the wearable and smartphone into our machine-learning algorithm, and we try to see how well the machine learning predicts the labels given by the doctors,” Picard says. “Right now, we are quite good at predicting those labels.” 
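As a rough illustration of that prediction step, one can fit a simple classifier to map summary features onto a clinician's binary label. The synthetic data and the logistic-regression model below are stand-ins for the study's actual features and algorithms, which the article does not specify.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 3 features per participant-week (say, sleep hours,
# skin conductance, activity) and a clinician label (1 = elevated symptoms).
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])                  # invented ground truth
y = (X @ true_w + rng.normal(scale=0.3, size=200) > 0).astype(float)

# Logistic regression fit by plain gradient descent on the log-loss.
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))                   # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)                # gradient step

accuracy = np.mean(((X @ w) > 0) == y)               # training accuracy
```

In the real study the labels come from weekly clinician evaluations and the features from wearables and phones, but the shape of the problem — supervised prediction of clinical labels from passively sensed data — is the same.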

Empowering users

While developing effective machine-learning algorithms is one challenge researchers face, designing a tool that will empower and uplift its users is another. Picard says, “The question we’re really focusing on now is, once you have the machine-learning algorithms, how is that going to help people?” 

Picard and her team are thinking critically about how the machine-learning algorithms may present their findings to users: through a new device, a smartphone app, or even a method of notifying a predetermined doctor or family member of how best to support the user. 

For example, imagine a technology that records that a person has recently been sleeping less, staying inside their home more, and has a faster-than-usual heart rate. These changes may be so subtle that the individual and their loved ones have not yet noticed them. Machine-learning algorithms may be able to make sense of these data, mapping them onto the individual’s past experiences and the experiences of other users. The technology may then be able to encourage the individual to engage in certain behaviors that have improved their well-being in the past, or to reach out to their physician. 

If implemented incorrectly, it’s possible that this type of technology could have adverse effects. If an app alerts someone that they’re headed toward a deep depression, that could be discouraging information that leads to further negative emotions. Pedrelli and Picard are involving real users in the design process to create a tool that’s helpful, not harmful.

“What could be effective is a tool that could tell an individual ‘The reason you’re feeling down might be the data related to your sleep has changed, and the data relate to your social activity, and you haven't had any time with your friends, your physical activity has been cut down. The recommendation is that you find a way to increase those things,’” Picard says. The team is also prioritizing data privacy and informed consent.

Artificial intelligence and machine-learning algorithms can make connections and identify patterns in large datasets that humans aren’t as good at noticing, Picard says. “I think there's a real compelling case to be made for technology helping people be smarter about people.”



from MIT News https://ift.tt/3IDAHxt

Vibrating atoms make robust qubits, physicists find

MIT physicists have discovered a new quantum bit, or “qubit,” in the form of vibrating pairs of atoms known as fermions. They found that when pairs of fermions are chilled and trapped in an optical lattice, the particles can exist simultaneously in two states — a weird quantum phenomenon known as superposition. In this case, the atoms held a superposition of two vibrational states, in which the pair wobbled against each other while also swinging in sync, at the same time.

The team was able to maintain this state of superposition among hundreds of vibrating pairs of fermions. In so doing, they achieved a new “quantum register,” or system of qubits, that appears to be robust over relatively long periods of time. The discovery, published today in the journal Nature, demonstrates that such wobbly qubits could be a promising foundation for future quantum computers.

A qubit represents a basic unit of quantum computing. Whereas a classical bit in today’s computers carries out logical operations starting from one of two states, 0 or 1, a qubit can exist in a superposition of both states. While in this delicate in-between state, a qubit should be able to simultaneously communicate with many other qubits and process multiple streams of information at a time, quickly solving problems that would take classical computers years to process.
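In standard notation, the superposition the article describes is written as follows, where for these atoms the two basis states would be the pair's two vibrational motions:

```latex
% Qubit state as a superposition of two basis states
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1,
\]
% Here |0> = pair swinging in sync and |1> = pair vibrating against each
% other; a measurement finds the pair in state 0 with probability |alpha|^2
% and in state 1 with probability |beta|^2.
```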

There are many types of qubits, some of which are engineered and others that exist naturally. Most qubits are notoriously fickle, either unable to maintain their superposition or unwilling to communicate with other qubits.

By comparison, the MIT team’s new qubit appears to be extremely robust, able to maintain a superposition between two vibrational states, even in the midst of environmental noise, for up to 10 seconds. The team believes the new vibrating qubits could be made to briefly interact, and potentially carry out tens of thousands of operations in the blink of an eye.

“We estimate it should take only a millisecond for these qubits to interact, so we can hope for 10,000 operations during that coherence time, which could be competitive with other platforms,” says Martin Zwierlein, the Thomas A. Frank Professor of Physics at MIT. “So, there is concrete hope toward making these qubits compute.”

Zwierlein is a co-author on the paper, along with lead author Thomas Hartke, Botond Oreg, and Ningyuan Jia, who are all members of MIT’s Research Laboratory of Electronics.


Happy accidents

The team’s discovery initially happened by chance. Zwierlein’s group studies the behavior of atoms at ultracold temperatures and extremely low densities. When atoms are chilled to temperatures a millionth that of interstellar space, and isolated at densities a millionth that of air, quantum phenomena and novel states of matter can emerge.

Under these extreme conditions, Zwierlein and his colleagues were studying the behavior of fermions. A fermion is technically defined as any particle that has an odd half-integer spin, like neutrons, protons, and electrons. In practical terms, this means that fermions are prickly by nature. No two identical fermions can occupy the same quantum state — a property known as the Pauli exclusion principle. For instance, if one fermion spins up, the other must spin down.

Electrons are classic examples of fermions, and their mutual Pauli exclusion is responsible for the structure of atoms and the diversity of the periodic table of elements, along with the stability of all the matter in the universe. Any atom with an odd total number of protons, neutrons, and electrons is also a fermion, as identical such atoms likewise exclude one another.

Zwierlein’s team happened to be studying fermionic atoms of potassium-40. They cooled a cloud of fermions down to 100 nanokelvins and used a system of lasers to generate an optical lattice in which to trap the atoms. They tuned the conditions so that each well in the lattice trapped a pair of fermions. Initially, they observed that under certain conditions, each pair of fermions appeared to move in sync, like a single molecule.

To probe this vibrational state further, they gave each fermion pair a kick, then took fluorescence images of the atoms in the lattice, and saw that every so often, most squares in the lattice went dark, reflecting pairs bound in a molecule. But as they continued imaging the system, the atoms seemed to reappear, in periodic fashion, indicating that the pairs were oscillating between two quantum vibrational states.

“It’s often in experimental physics that you have some bright signal, and the next moment it goes to hell, to never come back,” Zwierlein says. “Here, it went dark, but then bright again, and repeating. That oscillation shows there is a coherent superposition evolving over time. That was a happy moment.”

A low hum

After further imaging and calculations, the physicists confirmed that the fermion pairs were holding a superposition of two vibrational states, simultaneously moving together, like two pendula swinging in sync, and also relative to, or against each other.

“They oscillate between these two states at about 144 hertz,” Hartke notes. “That’s a frequency you could hear, like a low hum.”

By applying and varying a magnetic field, through an effect known as a Feshbach resonance, the team was able to tune this frequency over three orders of magnitude and thereby control the vibrational states of the fermion pairs.

“It’s like starting with two noninteracting pendula, and by applying a magnetic field, we create a spring between them, and can vary the strength of that spring, slowly pushing the pendula apart,” Zwierlein says.
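Zwierlein's pendulum analogy can be made quantitative with a textbook result (this is the classical picture, not the paper's derivation): two identical pendula of length $l$ and mass $m$ joined by a spring of stiffness $k$ have two normal modes, and the spring shifts only the antisymmetric one, which is why strengthening the coupling tunes the splitting between the two states.

```latex
% Normal-mode frequencies of two spring-coupled pendula
\[
  \omega_{\text{sync}} = \sqrt{\frac{g}{l}}, \qquad
  \omega_{\text{anti}} = \sqrt{\frac{g}{l} + \frac{2k}{m}},
\]
% The system beats between the two modes at the difference frequency
% \omega_anti - \omega_sync, which grows as the spring stiffens -- the
% classical analogue of tuning the 144 Hz oscillation with the magnetic field.
```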

In this way, they were able to simultaneously manipulate about 400 fermion pairs. They observed that as a group, the qubits maintained a state of superposition for up to 10 seconds, before individual pairs collapsed into one or the other vibrational state.

“We show we have full control over the states of these qubits,” Zwierlein says.

To make a functional quantum computer using vibrating qubits, the team will have to find ways to also control individual fermion pairs — a problem the physicists are already close to solving. The bigger challenge will be finding a way for individual qubits to communicate with each other. For this, Zwierlein has some ideas.

“This is a system where we know we can make two qubits interact,” he says. “There are ways to lower the barrier between pairs, so that they come together, interact, then split again, for about one millisecond. So, there is a clear path toward a two-qubit gate, which is what you would need to make a quantum computer.”

This research was supported, in part, by the National Science Foundation, the Gordon and Betty Moore Foundation, the Vannevar Bush Faculty Fellowship, and the Alexander von Humboldt Foundation.



from MIT News https://ift.tt/3g0NLAQ

Cynthia Breazeal named dean for digital learning at MIT

In a letter to the MIT community today, Vice President for Open Learning Sanjay Sarma announced the appointment of Professor Cynthia Breazeal as dean for digital learning, effective Feb. 1. As dean, she will supervise numerous business units and research initiatives centered on developing and deploying digital technologies for learning. These include MIT xPRO, Bootcamps, Horizon, the Center for Advanced Virtuality, MIT Integrated Learning Initiative, RAISE, and other strategic initiatives. Breazeal has served as senior associate dean for open learning since the fall.

As dean, Breazeal will lead corporate education efforts, helping to grow the existing portfolio of online professional courses, content libraries, and boot camps, while looking more holistically at the needs of companies and professionals to identify areas of convergence and innovation. She will also lead research efforts at MIT Open Learning into teaching, learning, and how new technologies can enhance both, with a special focus on virtual and augmented reality, artificial intelligence, and learning science. Breazeal will help infuse these new technologies and pedagogies into all of the teams’ learning offerings.

“Cynthia brings to the deanship a remarkable combination of experience and expertise. She consistently displays an outstanding facility for leadership and collaboration, bringing together people, ideas, and technologies in creative and fruitful ways,” Sarma wrote in his letter to the community. “Cynthia is an ambassador for women in STEM and a trailblazer in interdisciplinary research and community engagement.”

The director of MIT RAISE — a cross-MIT research effort on advancing AI education for K-12 and adult learners — and head of the Personal Robots research group at the MIT Media Lab, Breazeal is a professor of media arts and sciences and a pioneer in human-robot interaction and social robotics. Her research focus includes technical innovation in AI and user experience design combined with understanding the psychology of engagement to design personified AI technologies that promote human flourishing and personal growth. Over the past decade, her work has expanded to include outreach, engagement, and education in the design and use of AI, as well as AI literacy. She has placed particular emphasis on diversity and inclusion for all ages, backgrounds, and comfort levels with technology.

“The work that Open Learning is doing to extend the best of MIT’s teaching, knowledge, and technology to the world is so thrilling to me,” says Breazeal. “I’m excited to work with these teams to grow and expand their respective programs and to develop new, more integrated, potentially thematic solutions for corporations and professionals.”

TC Haldi, senior director of MIT xPRO, says, "There's an increasing sophistication in the needs of the professional workforce, as technologies and systems grow more complex in every sector. Cynthia has a deep understanding of the intersection between research and industry, and her insights into learning and technology are invaluable."

Breazeal will also continue to head the Personal Robots research group, whose recent work focuses on the theme of "living with AI" and understanding the long-term impact of social robots that can build relationships and provide personalized support as helpful companions in daily life. Under her continued direction, the RAISE initiative, a joint collaboration between the Media Lab, Open Learning, and the MIT Schwarzman College of Computing, is bringing AI resources and education opportunities to teachers and students across the United States and the world through workshops and professional training, hands-on activities, research, and curricula.



from MIT News https://ift.tt/3rQY8N4