Monday, August 31, 2020

Making health care more personal

The health care system today largely focuses on helping people after they have problems. When they do receive treatment, it’s based on what has worked best on average across a huge, diverse group of patients.

Now the company Health at Scale is making health care more proactive and personalized — and, true to its name, it’s doing so for millions of people.

Health at Scale takes a new approach to care recommendations, based on classes of machine-learning models that work even when only small amounts of data on individual patients, providers, and treatments are available.

The company is already working with health plans, insurers, and employers to match patients with doctors. It’s also helping to identify people at rising risk of visiting the emergency department or being hospitalized in the future, and to predict the progression of chronic diseases. Recently, Health at Scale showed its models can identify people at risk of severe respiratory infections like influenza or pneumonia, or, potentially, Covid-19.

“From the beginning, we decided all of our predictions would be related to achieving better outcomes for patients,” says John Guttag, chief technology officer of Health at Scale and the Dugald C. Jackson Professor of Computer Science and Electrical Engineering at MIT. “We’re trying to predict what treatment or physician or intervention would lead to better outcomes for people.”

A new approach to improving health

Health at Scale co-founder and CEO Zeeshan Syed met Guttag while studying electrical engineering and computer science at MIT. Guttag served as Syed’s advisor for his bachelor’s and master’s degrees. When Syed decided to pursue his PhD, he applied to only one school, and his choice of advisor was easy.

Syed did his PhD through the Harvard-MIT Program in Health Sciences and Technology (HST). During that time, he looked at how patients who’d had heart attacks could be better managed. The work was personal for Syed: His father had recently suffered a serious heart attack.

Through the work, Syed met Mohammed Saeed SM ’97, PhD ’07, who was also in the HST program. Syed, Guttag, and Saeed founded Health at Scale in 2015 along with David Guttag ’05, focusing on using core advances in machine learning to solve some of health care’s hardest problems.

“It started with the burning itch to address real challenges in health care about personalization and prediction,” Syed says.

From the beginning, the founders knew their solutions needed to work with widely available data like health care claims, which include information on diagnoses, tests, prescriptions, and more. They also sought to build tools for cleaning up and processing raw data sets, so that their models would be part of what Guttag refers to as a “full machine-learning stack for health care.”

Finally, to deliver effective, personalized solutions, the founders knew their models needed to work with small numbers of encounters for individual physicians, clinics, and patients, which posed severe challenges for conventional AI and machine learning.

“The large companies getting into [the health care AI] space had it wrong in that they viewed it as a big data problem,” Guttag says. “They thought, ‘We’re the experts. No one’s better at crunching large amounts of data than us.’ We thought if you want to make the right decision for individuals, the problem was a small data problem: Each patient is different, and we didn’t want to recommend to patients what was best on average. We wanted what was best for each individual.”

The company’s first models helped recommend skilled nursing facilities for post-acute care patients. Many such patients experience further health problems and return to the hospital. Health at Scale’s models showed that some facilities were better at helping specific kinds of people with specific health problems. For example, a 64-year-old man with a history of cardiovascular disease may fare better at one facility compared to another.
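The difference between "best on average" and "best for this patient" can be made concrete with a toy example. The data and the simple group-conditional comparison below are invented for illustration only — they are not Health at Scale's actual models or numbers:

```python
# Toy illustration of why the facility with the best average outcome
# is not always the best choice for a specific patient group.
# All figures are made up: {facility: {group: (readmissions, patients)}}
outcomes = {
    "A": {"cardio": (30, 50), "other": (10, 150)},
    "B": {"cardio": (20, 60), "other": (12, 60)},
}

def readmit_rate(facility, group=None):
    """Readmission rate overall, or conditioned on one patient group."""
    counts = outcomes[facility]
    groups = [group] if group else list(counts)
    readmits = sum(counts[g][0] for g in groups)
    total = sum(counts[g][1] for g in groups)
    return readmits / total

# Facility A wins on average (0.20 vs 0.27)...
best_overall = min(outcomes, key=readmit_rate)
# ...but facility B is better for a cardiovascular patient (0.33 vs 0.60).
best_for_cardio = min(outcomes, key=lambda f: readmit_rate(f, "cardio"))
print(best_overall, best_for_cardio)  # A B
```

A population-average model would send every patient to facility A; conditioning on the patient's own characteristics flips the recommendation for the cardiovascular group.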

Today the company’s recommendations help guide patients to the primary care physicians, surgeons, and specialists who are best suited for them. Guttag even used the service when he got his hip replaced last year.

Health at Scale also helps organizations identify people at rising risk of specific adverse health events, like heart attacks, in the future.

“We’ve gone beyond the notion of identifying people who have frequently visited emergency departments or hospitals in the past, to get to the much more actionable problem of finding those people at an inflection point, where they are likely to experience worse outcomes and higher costs,” Syed says.

The company’s other solutions help determine the best treatment options for patients and help reduce health care fraud, waste, and abuse. Each use case is designed to improve patient health outcomes by giving health care organizations decision support they can act on.

“Broadly speaking, we are interested in building models that can be used to help avoid problems, rather than simply predict them,” says Guttag. “For example, identifying those individuals at highest risk for serious complications of a respiratory infection [enables care providers] to target them for interventions that reduce their chance of developing such an infection.”

Impact at scale

Earlier this year, as the scope of the Covid-19 pandemic was becoming clear, Health at Scale began considering ways its models could help.

“The lack of data in the beginning of the pandemic motivated us to look at the experiences we have gained from combatting other respiratory infections like influenza and pneumonia,” says Saeed, who serves as Health at Scale’s chief medical officer.

The idea led to a peer-reviewed paper in which researchers affiliated with the company, the University of Michigan, and MIT showed that Health at Scale’s models could accurately predict hospitalizations and emergency department visits related to respiratory infections.

“We did the work on the paper using the tech we’d already built,” Guttag says. “We had interception products deployed for predicting patients at risk of emergent hospitalizations for a variety of causes, and we saw that we could extend that approach. We had customers that we gave the solution to for free.”

The paper proved out another use case for a technology that is already being used by some of the largest health plans in the U.S. That’s an impressive customer base for a five-year-old company of only 20 people — about half of whom have MIT affiliations.

“The culture MIT creates to solve problems that are worth solving, to go after impact, I think that’s been reflected in the way the company got together and has operated,” Syed says. “I’m deeply proud that we’ve maintained that MIT spirit.”

And, Syed believes, there’s much more to come.

“We set out with the goal of driving impact,” Syed says. “We currently run some of the largest production deployments of machine learning at scale, affecting millions, if not tens of millions, of patients, and we are only just getting started.”



from MIT News https://ift.tt/31LWlNC

Six strategic areas identified for shared faculty hiring in computing

Nearly every aspect of the modern world is being transformed by computing. As computing technology continues to revolutionize the way people live, work, learn, and interact, computing research and education are increasingly playing a role in a broad range of academic disciplines, and are in turn being shaped by this expanding breadth.

To connect computing and other disciplines in addressing critical challenges and opportunities facing the world today, the MIT Stephen A. Schwarzman College of Computing is planning to create 25 new faculty positions that will be shared between the college and an MIT department or school. Hiring for these new positions will be focused on six strategic areas of inquiry, to build capacity at MIT in key computing domains that cut across departments and schools. The shared faculty members are expected to engage in research and teaching that contributes to their home department, that is of mutual value to that department and the college, and that helps form and strengthen cross-departmental ties.

“These new shared faculty positions present an unprecedented opportunity to develop crucial areas at MIT which connect computing with other disciplines,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing. “By coordinated hiring between the college and departments and schools, we expect to have significant impact with multiple touch points across MIT.”

The six strategic areas and the schools expected to be involved in hiring for each are as follows:

Social, Economic, and Ethical Implications of Computing and Networks. Associated schools: School of Humanities, Arts, and Social Sciences and MIT Sloan School of Management.

There have been tremendous advances in new digital platforms and algorithms, which have already transformed our economic, social, and even political lives. But the future societal implications of these technologies and the consequences of the use and misuse of massive social data are poorly understood. There are exciting opportunities for building on the growing intellectual connections between computer science, data science, the social sciences, and the humanities in order to develop a better conceptual framework for understanding the social and economic implications, ethical dimensions, and regulation of these technologies.

Focusing on the interplay between computing systems and our understanding of individuals and societal institutions, this strategic hiring area will include faculty whose work focuses on the broader consequences of the changing digital and information environment, market design, digital commerce and competition, and economic and social networks. Issues of interest include how computing and AI technologies have shaped and are shaping the work of the future; how social media tools have reshaped political campaigns, changed the nature and organization of mass protests, and spurred governments to either reduce or dramatically enhance censorship and social control; increasing challenges in adjudicating what information is reliable, what is slanted, and what is entirely fake; conceptions of privacy, fairness, and transparency of algorithms; and the effects of new technologies on democratic governance.

Computing and Natural Intelligence: Cognition, Perception, and Language. Associated schools: School of Science; School of Humanities, Arts, and Social Sciences; and School of Architecture and Planning.

Intelligence — what it is, how the brain produces it, and how it can be engineered — is simultaneously one of the greatest open questions in the natural sciences and one of the most important engineering challenges of our time. Significant advances in computing and machine learning have enabled a better understanding of the brain and the mind. Concurrently, neuroscience and cognitive science have started to give meaningful engineering guidance to AI and related computing efforts. Yet huge gaps remain in connecting the science and engineering of intelligence.

Integrating science, computing, and social sciences and humanities, this strategic hiring area aims to address the gap between science and engineering of intelligence, in order to make transformative advances in AI and deepen our understanding of natural intelligence. Hiring in this area is expected to advance a holistic approach to understanding human perception and cognition through work such as the study of computational properties of language by bridging linguistic theory, cognitive science, and computer science; improving the art of listening by re-engineering music through music classification and machine learning, music cognition, and new interfaces for musical expression; discovering how artificial systems might help explain natural intelligence and vice versa; and seeking ways in which computing can aid in human expression, communications, health, and meaning.

Computing in Health and Life Sciences. Associated schools: School of Engineering; School of Science; and MIT Sloan School of Management.

Computing is increasingly becoming an indispensable tool in the health and life sciences. A key area is facilitating new approaches to identifying molecular and biomolecular agents with desired functions and to discovering new medications and new means of diagnosis. For instance, machine learning provides a unique opportunity in molecular and biomolecular discovery to parameterize and augment physics-based models, or possibly even replace them, and enable a revolution in molecular science and engineering. Another major area is health-care delivery, where novel algorithms, high-performance computing, and machine learning offer new possibilities to transform health monitoring and treatment planning, facilitate better patient care, and enable more effective ways to help prevent disease. In diagnosis, machine learning methods hold the promise of improved detection of diseases, increasing both the specificity and sensitivity of imaging and testing.

This strategic area aims to hire faculty who help create transformative new computational methods in health and life sciences, while complementing the considerable existing work at MIT by forging additional connections. The broad scope ranges from computational approaches to fundamental problems in molecular design and synthesis for human health; to reshaping health-care delivery and personalized medicine; to understanding radiation effects and optimizing dose delivery on target cells; to improving tracing, imaging, and diagnosis techniques.

Computing for Health of the Planet. Associated schools: School of Engineering; School of Science; and School of Architecture and Planning.

The health of the planet is one of the most important challenges facing humankind today. Rapid industrialization has led to a number of serious threats to human and ecosystem health, including climate change, unsafe levels of air and water pollution, coastal and agricultural land erosion, and many others. Ensuring the health and safety of our planet necessitates an interdisciplinary approach that connects scientific understanding, engineering solutions, and social, economic, and political considerations with new computational methods, providing data-driven models and solutions for clean air, usable water, resilient food supplies, efficient transportation systems, and sustainable sources of energy.

This strategic hiring area will help facilitate such collaborations by bringing together expertise that will enable us to advance physical understanding of low-carbon energy solutions, earth-climate modeling, and urban planning through high-performance computing, transformational numerical methods, and/or machine learning techniques.

Computing and Human Experience. Associated schools: School of Humanities, Arts, and Social Sciences and School of Architecture and Planning.

Computing and digital technologies are challenging the very ways in which people understand reality and our role in it. These technologies are embedded in the everyday lives of people around the world, and while frequently highly useful, they can reflect cultural assumptions and technological heritage, even though they are often viewed as neutral prescriptions for structuring the world. Indeed, as is becoming increasingly apparent, these technologies can alter individual and societal perceptions and actions, or affect societal institutions, in ways that are not broadly understood or intended. Moreover, although these technologies are conventionally developed for improved efficacy or efficiency, they can also provide opportunities for less utilitarian purposes, such as supporting introspection and personal reflection.

This strategic hiring area focuses on growing the set of scholars in the social sciences, humanities, and computing who examine technology designs, systems, policies, and practices that can address the twin challenges of understanding these technologies and their implications, including the design of systems that may help ameliorate rather than exacerbate inequalities. It further aims to develop techniques and systems that help people interpret and gain understanding from societal and historical data, including in humanities disciplines such as comparative literature, history, and art and architectural history.

Quantum Computing. Associated schools: School of Engineering and School of Science.

One of the most promising directions for continuing improvements in computing power comes from quantum mechanics. In the coming years, new hardware, algorithms, and discoveries offer the potential to dramatically increase the power of quantum computers far beyond current machines. Achieving these advances poses challenges that span multiple scientific and engineering fields, from quantum hardware to quantum computing algorithms. Potential quantum computing applications span a broad range of fields, including chemistry, biology, materials science, atmospheric modeling, urban system simulation, nuclear engineering, finance, optimization, and others, requiring a deep understanding of both quantum computing algorithms and the problem space.

This strategic hiring area aims to build on MIT’s rich set of activities in the space to catalyze research and education in quantum computing and quantum information across the Institute, including the study of quantum materials; developing robust controllable quantum devices and networks that can faithfully transmit quantum information; and new algorithms for machine learning, AI, optimization, and data processing to fully leverage the promise of quantum computing.

A coordinated approach

Over the past few months, the MIT Schwarzman College of Computing has undertaken a strategic planning exercise to identify key areas for hiring the new shared faculty. The process has been led by Huttenlocher, together with MIT Provost Martin Schmidt and the deans of the five schools — Anantha Chandrakasan, dean of the School of Engineering; Melissa Nobles, Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences; Hashim Sarkis, dean of the School of Architecture and Planning; David Schmittlein, John C. Head III Dean of MIT Sloan; and Michael Sipser, dean of the School of Science — beginning with input from departments across the Institute.

This input was in the form of proposals for interdisciplinary computing areas that were solicited from department heads. A total of 29 proposals were received. Over a six-week period, the committee worked with proposing departments to identify strategic hiring themes. The process yielded the six areas that cover several critically important directions. 

“These areas not only bring together computing with numerous departments and schools, but also involve multiple modes of academic inquiry, offering opportunities for new collaborations in research and teaching across a broad range of fields,” says Schmidt. “I’m excited to see us launch this critical part of the college’s mission.”

The college will also coordinate with each of the five schools to ensure that diversity, equity, and inclusion are at the forefront of all of the hiring areas.

Hiring for the 2020-21 academic year

While the number of searches and involved schools will vary from year to year, the plan for the coming academic year is to have five searches, one with each school. These searches will be in three of the six strategic hiring areas as follows:

Social, Economic, and Ethical Implications of Computing and Networks will focus on two searches, one with the Department of Philosophy in the School of Humanities, Arts, and Social Sciences, and one with the MIT Sloan School of Management.

Computing and Natural Intelligence: Cognition, Perception, and Language will focus on one search with the Department of Brain and Cognitive Sciences in the School of Science.

Computing for Health of the Planet will focus on two searches, one with the Department of Urban Studies and Planning in the School of Architecture and Planning, and one with a department to be identified in the School of Engineering.



from MIT News https://ift.tt/3lwDPR0

MIT partners with national labs on two new National Quantum Information Science Research Centers

Early this year, the U.S. Department of Energy sent out a call for proposals as it announced it would award up to $625 million in funding over the next five years to establish multidisciplinary National Quantum Information Science (QIS) Research Centers. These awards would support the National Quantum Initiative Act, passed in 2018 to accelerate the development of quantum science and information technology applications.

Now, MIT is a partner institute on two QIS Research Centers that the Department of Energy has selected for funding.

One of the centers, the Co-design Center for Quantum Advantage (C2QA), will be led by Brookhaven National Laboratory. MIT participation in this center will be coordinated by Professor Isaac Chuang through the Center for Theoretical Physics.

The other center, the Quantum Systems Accelerator (QSA), will be led by Lawrence Berkeley National Laboratory. The Research Laboratory of Electronics (RLE) and MIT Lincoln Laboratory are partners on this center. Professor William Oliver, a Lincoln Laboratory fellow and director of the Center for Quantum Engineering, and Eric Dauler, who leads the Quantum Information and Integrated Nanosystems Group at Lincoln Laboratory, will coordinate MIT research activities with this center.

“Quantum information science and engineering research is a core strength at MIT, ranging broadly from algorithms and molecular chemistry to atomic and superconducting qubits, as well as quantum gravity and the foundations of computer science. This new funding from the Department of Energy will connect ongoing vibrant MIT research in quantum information with teams seeking to harness and discover quantum technologies,” says Chuang.

Devices based on the mysterious phenomena of quantum physics have begun to reshape the technology landscape. In recent years, researchers have been pursuing advanced quantum systems, like those that could lead to tamper-proof communications systems and computers that could tackle problems today's machines would need billions of years to solve.

The foundational expertise, infrastructure, and resources that MIT will bring to both QIS research centers are expected to help accelerate the development of such quantum technologies.

“Much of the theoretical and algorithmic foundation for quantum information science, as well as early experimental implementations, were developed at MIT. The QIS research centers build on this experience and the broader landscape. It is fantastic that MIT is participating with two centers, and this reflects our strength and breadth,” says Oliver.  

Each QIS research center incorporates a collaborative research team spanning multiple scientific and engineering disciplines and multiple institutions. Both centers are focused on pushing quantum computers “beyond-NISQ,” the acronym referring to today's generation of noisy intermediate-scale quantum systems. The long-term goal is to develop a “universal” quantum computer, the kind that can perform computational tasks that would be practically impossible for traditional supercomputers to solve. To get there, researchers face enormous challenges in creating and controlling the perfect conditions for large numbers of quantum bits (qubits) to interact and store information long enough to perform calculations. 

“Unlike most previous efforts, contributors from the algorithm, quantum computing, and quantum engineering areas will all need to work together to achieve the community's acceleration toward this ambitious goal,” says John Chiaverini, a principal investigator in the Quantum Information and Integrated Nanosystems Group.

In their partnership with the QSA, RLE and Lincoln Laboratory researchers will focus their efforts on co-designing fundamental engineering approaches, with the goal of enabling larger programmable quantum systems built from neutral atoms, trapped ions, and superconducting qubits. “Advancing all three hardware approaches to quantum computation within a coordinated, center-scale effort will enable uniquely collaborative development efforts and a deeper understanding of the fundamental quantum engineering constraints,” says Dauler. As larger systems are realized, they will be used by researchers throughout the center to feed quantum science research.  

“We look forward to further strengthening our research collaboration with Lawrence Berkeley National Laboratory, Sandia National Laboratories, and the partner universities to create many advances in quantum information science through the Quantum Systems Accelerator,” says Lincoln Laboratory Director Eric Evans.

At the C2QA, experts in QIS, materials science, computer science, and theory will focus on the superconducting qubit modality and work together to resolve performance issues with quantum computers by simultaneously co-designing software and hardware. Through these parallel efforts, the team will understand and control material properties to extend “coherence” time, or how long the qubits can function; design devices to generate more robust qubits; optimize algorithms to target specific scientific applications; and develop error-correction solutions.

MIT's cutting-edge facilities will bolster these collaborations. Lincoln Laboratory has the Microelectronics Laboratory, an ISO-9001-certified facility for fabricating advanced circuits for superconducting and trapped-ion quantum bit applications, and MIT.nano offers more than 20,000 square feet of clean-room space for making and testing quantum devices.

“I'm excited by the opportunity the research centers offer to collaborate, and to better advance the state of knowledge and technology in the quantum area. Specifically, the collaboration offers a new avenue for the U.S. quantum information science community to access the unique design, fabrication, and testing capabilities at MIT and Lincoln Laboratory, including the Microelectronics Laboratory and numerous laboratories specializing in advanced packaging and testing,” says Robert Atkins, who leads the Advanced Technology Division overseeing quantum computing research at Lincoln Laboratory.

Participation in both centers will complement other major programs that MIT has initiated in recent years, including the MIT-IBM Watson AI Lab, which aims to advance artificial intelligence hardware, software, and algorithms; the MIT Stephen A. Schwarzman College of Computing, which spans all five of MIT's schools; and the most recently established Center for Quantum Engineering out of RLE and Lincoln Laboratory.

In addition to selecting these two MIT-affiliated centers, the Department of Energy announced funding for three additional QIS research centers. These investments, according to the department, represent a long-term, large-scale commitment of U.S. scientific and technological resources to a highly competitive and promising new area of investigation, with enormous potential to transform science and technology. 

“The QIS research centers will assure that advances in fundamental research in quantum science will progress to practical applications to benefit national security and many other segments of society,” says MIT Vice President for Research Maria Zuber. “The pace of discovery in this field is rapid, and the combined strengths of campus and Lincoln Laboratory are very well-aligned to lead in this area.”



from MIT News https://ift.tt/2ENlD4S

Toward a machine learning model that can reason about everyday actions

The ability to reason abstractly about events as they unfold is a defining feature of human intelligence. We know instinctively that crying and writing are means of communicating, and that a panda falling from a tree and a plane landing are variations on descending. 

Organizing the world into abstract categories does not come easily to computers, but in recent years researchers have inched closer by training machine learning models on words and images infused with structural information about the world, and how objects, animals, and actions relate. In a new study at the European Conference on Computer Vision this month, researchers unveiled a hybrid language-vision model that can compare and contrast a set of dynamic events captured on video to tease out the high-level concepts connecting them. 

Their model did as well as or better than humans at two types of visual reasoning tasks — picking the video that conceptually best completes the set, and picking the video that doesn’t fit. Shown videos of a dog barking and a man howling beside his dog, for example, the model completed the set by picking the crying baby from a set of five videos. Researchers replicated their results on two datasets for training AI systems in action recognition: MIT’s Multi-Moments in Time and DeepMind’s Kinetics.

“We show that you can build abstraction into an AI system to perform ordinary visual reasoning tasks close to a human level,” says the study’s senior author Aude Oliva, a senior research scientist at MIT, co-director of the MIT Quest for Intelligence, and MIT director of the MIT-IBM Watson AI Lab. “A model that can recognize abstract events will give more accurate, logical predictions and be more useful for decision-making.”

As deep neural networks become expert at recognizing objects and actions in photos and video, researchers have set their sights on the next milestone: abstraction, and training models to reason about what they see. In one approach, researchers have merged the pattern-matching power of deep nets with the logic of symbolic programs to teach a model to interpret complex object relationships in a scene. Here, in another approach, researchers capitalize on the relationships embedded in the meanings of words to give their model visual reasoning power.

“Language representations allow us to integrate contextual information learned from text databases into our visual models,” says study co-author Mathew Monfort, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “Words like ‘running,’ ‘lifting,’ and ‘boxing’ share some common characteristics that make them more closely related to the concept ‘exercising,’ for example, than ‘driving.’ ”

Using WordNet, a database of word meanings, the researchers mapped the relation of each action-class label in Moments and Kinetics to the other labels in both datasets. Words like “sculpting,” “carving,” and “cutting,” for example, were connected to higher-level concepts like “crafting,” “making art,” and “cooking.” Now when the model recognizes an activity like sculpting, it can pick out conceptually similar activities in the dataset. 
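The label-to-concept mapping can be sketched with a toy stand-in for WordNet's hypernym lookup. The study uses WordNet itself; the hand-coded parent links below are purely illustrative, not the paper's actual graph:

```python
# Toy hypernym graph: each action label points to its more abstract
# parent concepts (a tiny, invented stand-in for WordNet).
hypernyms = {
    "sculpting": ["carving"],
    "carving": ["crafting", "cutting"],
    "cutting": ["crafting"],
    "crafting": ["making art"],
    "barking": ["communicating"],
    "howling": ["communicating"],
}

def ancestors(label):
    """All higher-level concepts reachable from a label."""
    seen = set()
    stack = [label]
    while stack:
        for parent in hypernyms.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# Labels whose ancestor sets overlap are conceptually related, which
# is how "sculpting" ends up near "cutting" but not near "barking".
shared = ancestors("sculpting") & ancestors("cutting")
print(shared)  # {'crafting', 'making art'}
```

With the real WordNet graph, the same traversal connects each Moments and Kinetics action label to higher-level concepts, letting the model treat "sculpting" and "carving" as neighbors.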

This relational graph of abstract classes is used to train the model to perform two basic tasks. Given a set of videos, the model creates a numerical representation for each video that aligns with the word representations of the actions shown in the video. An abstraction module then combines the representations generated for each video in the set to create a new set representation that is used to identify the abstraction shared by all the videos in the set.
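The two tasks can be sketched in a few lines, with invented 4-d "video embeddings" standing in for the model's learned representations, and simple mean-pooling standing in for the abstraction module (the actual module is learned, not a fixed average):

```python
import numpy as np

# Made-up embeddings: the first coordinates loosely encode
# "communicating", the later ones "descending" (illustrative only).
videos = {
    "dog_barking":   np.array([0.9, 0.1, 0.0, 0.0]),
    "man_howling":   np.array([0.8, 0.2, 0.0, 0.1]),
    "baby_crying":   np.array([0.7, 0.3, 0.1, 0.0]),
    "plane_landing": np.array([0.0, 0.1, 0.9, 0.4]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Stand-in "abstraction module": pool the member embeddings into one
# set representation.
given = ["dog_barking", "man_howling"]
set_repr = np.mean([videos[v] for v in given], axis=0)

# Task 1, set completion: pick the candidate closest to the set.
candidates = [v for v in videos if v not in given]
best = max(candidates, key=lambda v: cosine(videos[v], set_repr))

# Task 2, odd one out: the video least similar to the set built
# from all the others.
def fit(v):
    rest = np.mean([videos[u] for u in videos if u != v], axis=0)
    return cosine(videos[v], rest)

odd = min(videos, key=fit)
print(best, odd)  # baby_crying plane_landing
```

In the real system both the embeddings and the pooling are trained so that videos sharing an abstract class (here, "communicating") land near each other, but the selection logic at the end is essentially this nearest-to-the-set comparison.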

To see how the model would do compared to humans, the researchers asked human subjects to perform the same set of visual reasoning tasks online. To their surprise, the model performed as well as humans in many scenarios, sometimes with unexpected results. In a variation on the set completion task, after watching a video of someone wrapping a gift and covering an item in tape, the model suggested a video of someone at the beach burying someone else in the sand. 

“It’s effectively ‘covering,’ but very different from the visual features of the other clips,” says Camilo Fosco, a PhD student at MIT who is co-first author of the study with PhD student Alex Andonian. “Conceptually it fits, but I had to think about it.”

Limitations of the model include a tendency to overemphasize some features. In one case, it suggested completing a set of sports videos with a video of a baby and a ball, apparently associating balls with exercise and competition.

A deep learning model that can be trained to “think” more abstractly may be able to learn with less data, the researchers say. Abstraction also paves the way toward higher-level, more human-like reasoning.

“One hallmark of human cognition is our ability to describe something in relation to something else — to compare and to contrast,” says Aude Oliva. “It’s a rich and efficient way to learn that could eventually lead to machine learning models that can understand analogies and are that much closer to communicating intelligently with us.”

Other authors of the study are Allen Lee from MIT, Rogerio Feris from IBM, and Carl Vondrick from Columbia University.



de MIT News https://ift.tt/2GfLByz

domingo, 30 de agosto de 2020

Robot takes contact-free measurements of patients’ vital signs

The research described in this article has been published on a preprint server but has not yet been peer-reviewed by scientific or medical experts.

During the current coronavirus pandemic, one of the riskiest parts of a health care worker’s job is assessing people who have symptoms of Covid-19. Researchers from MIT and Brigham and Women’s Hospital hope to reduce that risk by using robots to remotely measure patients’ vital signs.

The robots, which are controlled by a handheld device, can also carry a tablet that allows doctors to ask patients about their symptoms without being in the same room.

“In robotics, one of our goals is to use automation and robotic technology to remove people from dangerous jobs,” says Hen-Wei Huang, an MIT postdoc. “We thought it should be possible for us to use a robot to remove the health care worker from the risk of directly exposing themselves to the patient.”

Using four cameras mounted on a dog-like robot developed by Boston Dynamics, the researchers have shown that they can measure skin temperature, breathing rate, pulse rate, and blood oxygen saturation in healthy patients, from a distance of 2 meters. They are now making plans to test it in patients with Covid-19 symptoms.

“We are thrilled to have forged this industry-academia partnership in which scientists with engineering and robotics expertise worked with clinical teams at the hospital to bring sophisticated technologies to the bedside,” says Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

The researchers have posted a paper on their system on the preprint server techRxiv, and have submitted it to a peer-reviewed journal. Huang is one of the lead authors of the study, along with Peter Chai, an assistant professor of emergency medicine at Brigham and Women’s Hospital, and Claas Ehmke, a visiting scholar from ETH Zurich.

Measuring vital signs

When Covid-19 cases began surging in Boston in March, many hospitals, including Brigham and Women’s, set up triage tents outside their emergency departments to evaluate people with Covid-19 symptoms. One major component of this initial evaluation is measuring vital signs, including body temperature.

The MIT and BWH researchers came up with the idea to use robotics to enable contactless monitoring of vital signs, to allow health care workers to minimize their exposure to potentially infectious patients. They decided to use existing computer vision technologies that can measure temperature, breathing rate, pulse, and blood oxygen saturation, and worked to make them mobile.

To achieve that, they used a robot known as Spot, which can walk on four legs, similarly to a dog. Health care workers can maneuver the robot to wherever patients are sitting, using a handheld controller. The researchers mounted four different cameras onto the robot — an infrared camera plus three monochrome cameras that filter different wavelengths of light.

The researchers developed algorithms that allow them to use the infrared camera to measure both elevated skin temperature and breathing rate. For body temperature, the camera measures skin temperature on the face, and the algorithm correlates that temperature with core body temperature. The algorithm also takes into account the ambient temperature and the distance between the camera and the patient, so that measurements can be taken from different distances, under different weather conditions, and still be accurate.

Measurements from the infrared camera can also be used to calculate the patient’s breathing rate. As the patient breathes in and out, wearing a mask, their breath changes the temperature of the mask. Measuring this temperature change allows the researchers to calculate how rapidly the patient is breathing.
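A minimal sketch of that idea, assuming the infrared camera's per-frame mask temperature is available as a time series. The frame rate, amplitudes, and noise level below are all illustrative assumptions, not the paper's values.

```python
import numpy as np

# Simulated mask-temperature trace as seen by the infrared camera:
# a 0.25 Hz oscillation (15 breaths per minute) plus sensor noise.
fs = 30.0                        # assumed camera frame rate, Hz
t = np.arange(0, 32, 1 / fs)     # 32 seconds of frames
rng = np.random.default_rng(0)
temp = (34.0 + 0.3 * np.sin(2 * np.pi * 0.25 * t)
        + 0.05 * rng.standard_normal(t.size))

# The breathing rate is the dominant frequency of the oscillation
spectrum = np.abs(np.fft.rfft(temp - temp.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
breath_hz = freqs[np.argmax(spectrum)]
print(f"{breath_hz * 60:.0f} breaths per minute")  # ~15
```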

The three monochrome cameras each filter a different wavelength of light — 670, 810, and 880 nanometers. These wavelengths allow the researchers to measure the slight color changes that result when hemoglobin in blood cells binds to oxygen and flows through blood vessels. The researchers’ algorithm uses these measurements to calculate both pulse rate and blood oxygen saturation.
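A toy version of that computation, using the classic “ratio of ratios” from pulse oximetry. The intensity traces, frame rate, and linear calibration below are textbook illustrations, not the study's actual signals or calibration.

```python
import numpy as np

# Toy intensity traces at two of the filtered wavelengths. The pulsatile
# (AC) component comes from blood volume changes with each heartbeat;
# the steady (DC) component comes from tissue. Values are illustrative.
fs = 30.0                                              # assumed frame rate, Hz
t = np.arange(0, 10, 1 / fs)
red = 1.00 + 0.02 * np.sin(2 * np.pi * 1.2 * t)        # 1.2 Hz pulse
infrared = 1.00 + 0.04 * np.sin(2 * np.pi * 1.2 * t)

def ac_dc(signal):
    """Pulsatile amplitude relative to the baseline intensity."""
    return (signal.max() - signal.min()) / signal.mean()

# Pulse rate: dominant frequency of the pulsatile signal
spectrum = np.abs(np.fft.rfft(red - red.mean()))
pulse_bpm = np.fft.rfftfreq(t.size, d=1 / fs)[np.argmax(spectrum)] * 60

# Blood oxygen: "ratio of ratios" with a textbook linear calibration
R = ac_dc(red) / ac_dc(infrared)
spo2 = 110 - 25 * R
print(f"pulse ~ {pulse_bpm:.0f} bpm, SpO2 ~ {spo2:.1f}%")
```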

“We didn’t really develop new technology to do the measurements,” Huang says. “What we did is integrate them together very specifically for the Covid application, to analyze different vital signs at the same time.”

Continuous monitoring

In this study, the researchers performed the measurements on healthy volunteers, and they are now making plans to test their robotic approach in people who are showing symptoms of Covid-19, in a hospital emergency department.

In the near term, the researchers plan to focus on triage applications; in the longer term, they envision that the robots could be deployed in patients’ hospital rooms. This would allow the robots to continuously monitor patients and also allow doctors to check on them, via tablet, without having to enter the room. Both applications would require approval from the U.S. Food and Drug Administration.

The research was funded by the MIT Department of Mechanical Engineering and the Karl van Tassel (1925) Career Development Professorship.



de MIT News https://ift.tt/3ly3YPl

viernes, 28 de agosto de 2020

Bringing MIT magic to first-years everywhere

“What I want to reiterate about this year is, it’s going to be strange. It is going to be a transition for the entire MIT community. But as 2024s, you are absolutely being prioritized in a number of ways, and there are so many people out there who are really, really excited to have you here and are ready to help you.”

That’s how MIT senior Danielle Grey-Stewart sought to reassure incoming students and their families at the July virtual town hall for first-years. She knows of what she speaks: as a member of the Undergraduate Association (UA) Committee on Covid-19, she and others have steadfastly represented undergraduates’ interests in conversations with the administration as it has navigated how to best continue MIT’s academic and research enterprises amidst a pandemic.

Plans to create an only-at-MIT experience for first-year students — who, for the fall semester at least, will be learning remotely — have been in the works for months. At the heart of many of these efforts, from academics to social life, is an emphasis on building personal connections among the Class of 2024 members — as well as with other students, faculty, staff, and alumni.

The Office of the First Year (OFY) kicked into high gear in the spring to build a foundation for the incoming class — well before OFY’s signature orientation week in late August. The OFY staff and 93 student orientation leaders (OLs) have engaged with first-years all summer through Slack channels, Zoom gatherings, friendly competitions among orientation teams, and social media. “We’ve been interacting with them 24/7 since the second week of May,” says Elizabeth Cogliano Young, associate dean and director of first-year advising.

Slack has been a particularly effective way to connect with the students, says Chelsea Truesdell, assistant dean of advising and new student programs. In addition to class-wide channels for general questions about advising, classes, and campus life, there are private channels for smaller orientation groups to get to know each other. “They can pretty much get instantaneous answers to their questions,” says Truesdell. “And that right-on-time response in Slack has introduced us to the students earlier. I feel like they know us better because we respond to them all the time, and that is something that has not happened before.”

OFY also launched a private virtual yearbook for 2024s, using the same platform that the Alumni Association uses. Over 800 first-years — nearly 75 percent of the class — have created a page for themselves. Using the yearbook, students can find others who live near them, and — in conjunction with the OFY Slack channels for specific courses, like 18.01 (Single Variable Calculus) — can form pset (problem set) or study groups for their geographic region.

Now in his third year as an OL, senior Richard Colwell is quite familiar with the inherent awkwardness incoming students feel when they first meet. In fact, he says, it may be slightly amplified this year, given that it’s virtual. But he’s noticed in his group that, after spending 10 weeks getting to know each other, first-year attendance during this year’s orientation has been higher than usual. “Maybe some of that is students realizing that even though this isn’t ideal, even though this is a bit awkward and it isn’t the semester we want, the students are each other’s best resource for making it through this unconventional fall,” he muses.

MIT’s first-year learning communities (FLCs) have also been busy this summer finding creative ways to build community among their first-year cohorts. In Terrascope, associate advisors (upper-level undergraduate students) have hosted a series of fun events, from online games and pet show-and-tell to baking and creating playlists. A book club has become a weekly staple, and the community Discord server has been very popular.

The Experimental Study Group (ESG) welcomed each new student with a care package containing an ESG bandana, pen, and 50th anniversary booklet. Like Terrascope, ESG has offered a steady drumbeat of online activities, from meet-and-greet sessions to a six-week, self-study class led by physics lecturers Paulo Rebusco and Analia Barrantes, using the Media Lab’s Learning Creative Learning course.

In terms of academics, faculty and staff have given special consideration to adapting first-year classes for a virtual fall semester. Science core general Institute requirement (GIR) instructors are working to replicate traditional in-person elements that are central to the MIT experience, such as pset groups.

The Introductory Physics GIR (8.01) is a case in point. “Our big goal is to develop a community of students,” says Peter Dourmashkin, a physics senior lecturer who participated in the first-year town hall. “Psets are one of the great cultural things about MIT. We’re trying to recreate that peer-to-peer learning online through platforms that MIT students have been developing.” 8.01 will incorporate iPads and Apple Pencils provided by Information Systems and Technology — just two of the “flotilla of tools, learning platforms, trainings and support systems” that have been implemented to enhance remote instruction for all students, according to Krishna Rajagopal, dean for digital learning.

First-years can also find academic support in departments, programs, and offices throughout MIT. For example, the Office of Minority Education (OME) will offer small GIR study groups, facilitated by experienced upper-level students and graduate students. In addition, OME will host a daily virtual study lounge, the Talented Scholars Resource Room, where students can drop in and study with peers, or just hang out.

In a nod to the Mystery Hunt, an annual MIT tradition, the Alumni Association and the Office of the Vice Chancellor are sponsoring a semester-long puzzle hunt called “Where in the Galaxy is Tim the Beaver?” Kate Weishaar, project coordinator in the first-year experience program, calls it a “one-of-a-kind quest to find new friends, inspiring professors, undiscovered interests, and, of course, Tim the Beaver.”

The idea came from history Professor Anne McCants, director of the Concourse First-Year Learning Community, with two primary goals: building community among first-years and allowing them to explore the humanities at MIT. Mystery Hunt student aficionados will help devise the puzzles, which will change every two weeks. And with each puzzle round, the first-years’ teams will be reshuffled, to help students meet new peers throughout the term.

The extended MIT family is doing its part to help first-years get to know each other and build their networks. In partnership with the Office of the Vice Chancellor, the Alumni Association is planning a series of virtual dinners for the students in October and November hosted by alumni around the world. Some will be based on the students’ geographic region, while others will center on career-related themes, such as choosing a major.

Upper-level students are taking steps to make the Class of 2024’s virtual experience the best it can be. It’s been top-of-mind for Danielle Geathers, president of the UA, since she was elected in May. “One thing we really feared was that, how are you going to welcome first-years into this environment, into our virtual community?” she says.

To address those concerns, the UA plans to roll out a first-year program in late September to increase engagement with incoming students. Efforts will center on offering frequent UA events with officer meet-and-greets, creating stronger bonds between the senior class council and the first-year class council, and mentoring first-years. To increase visibility and generate buzz, they’ve created two videos, one released in July, and one to kick off Orientation. So far, nearly 70 first-years have reached out to express interest in getting involved.

Geathers believes that first-year voices have historically been underrepresented, and that now is an opportune time to address that. “It’s true that they have different interests, and they have things that affect them uniquely as a class, specifically this year. And although we can try to anticipate their interests, we don’t want to make any assumptions when we are representing their student voice to administrators. Now, more than ever, we have to depend on that engagement, because nobody knows what it’s like to do a fall virtual semester as a first-year.”



de MIT News https://ift.tt/3jt5q3x

A new platform for controlled delivery of key nanoscale drugs and more

In work that could have a major impact on several industries — from pharmaceuticals to cosmetics and even food — MIT engineers have developed a novel platform for the controlled delivery of certain important drugs, nutrients, and other substances to human cells.

The researchers believe that their simple approach, which creates small capsules containing thousands of nanosized droplets loaded with a drug or other active ingredient, will be easy to transition from the lab to industry.

The active ingredients in many consumer products intended for use in or on the human body do not easily dissolve in water. As a result, they are hard for the body to absorb, and it is difficult to control their delivery to cells.

In the pharmaceutical industry alone, “40 percent of currently marketed drugs and 90 percent of drugs in development are hydrophobic wherein [their] low water solubility greatly limits their bioavailability and absorption efficiency,” the MIT team writes in a paper on the work in the August 28 issue of the journal Advanced Science.

Nanoemulsions to the rescue

Those drugs and other hydrophobic active ingredients do, however, dissolve in oil. Hence the growing interest in nanoemulsions, the nanoscale equivalent of an oil-and-vinegar salad dressing that consists of minuscule droplets of oil dispersed in water. Dissolved in each oil droplet is the active ingredient of interest.

Among other advantages, the ingredient-loaded droplets can easily pass through cell membranes. Each droplet is so small that between 1,000 and 5,000 could fit across the width of a human hair. (Their macroscale counterparts are too big to get through.) Once the droplets are inside the cell, their payload can exert an effect. The droplets are also exceptionally stable, resulting in a long shelf life, and can carry a large amount of active ingredient for their size.

But there’s a problem: How do you encapsulate a nanoemulsion into a dosage form like a pill? The technologies for doing so are still nascent.

In one of the most promising approaches, the nanoemulsion is encapsulated in a 3D network of a polymer gel to form small beads. Currently, however, when ingested those beads release their payload — the ingredient-loaded oil droplets — all at once. There is no control over the process.

The MIT team solved this by adding a shell, or capsule, around large individual droplets of nanoemulsion, each containing thousands of nano oil droplets. That shell not only protects the nano droplets inside from harmful physiological conditions in the body, but also could be used to mask the often unpalatable taste of the active ingredients they contain.

The result is a “pill” about 5 millimeters in diameter with a biodegradable shell that can be “tuned,” by changing its thickness, to release its contents at specific times. To date, the researchers have successfully tested the system with both ibuprofen and vitamin E.

“Our new delivery platform can be applied to a broad range of nanoemulsions, which themselves contain active ingredients ranging from drugs to nutraceuticals and sunscreens. Having this new control over how you deliver them opens up many new avenues in terms of future applications,” says Patrick Doyle, the Robert T. Haslam Professor of Chemical Engineering and senior author of the paper.

His colleagues on the work are Liang-Hsun Chen, a graduate student in chemical engineering and first author of the paper, and Li-Chiun Cheng SM ’18, PhD ’20, who received his PhD in chemical engineering earlier this year and is now at LiquiGlide.

Many advantages

The MIT platform has a number of advantages in addition to its simplicity and scalability to industry. For example, the shell itself “is derived from the cell walls of brown algae, so it’s very natural and biocompatible with human bodies,” says Chen.

Further, the process for making the nanoemulsion containing its payload is economical because the simple stirring involved requires little energy. The process is also “really gentle, which protects the [active] molecule of interest, like a drug,” says Doyle. “Harsher techniques can damage them.”

The team also demonstrated the ability to turn the liquid nanoemulsion inside each shell into a solid core, which could allow a variety of other applications. They did so by adding a material that when activated by ultraviolet light cross-links the nano oil droplets together.

For Chen, the most exciting part of the work was preparing the capsules and then “watching them burst to release their contents at the target times I engineered them for.”

Doyle notes that from a pedagogical point of view, the work “combined all of the core elements of chemical engineering, from fluid dynamics to reaction engineering and mass transfer. And to me it’s pretty cool to have them all in one project.”

This work was supported by the Singapore National Research Foundation, the U.S. National Science Foundation, and the Think Global Education Trust (Taiwan).



de MIT News https://ift.tt/2ErhHab

How to help urban street commerce thrive

How many retail, food, and service establishments are there on the streets of New York City? How about Evanston, Illinois? Or Sacramento, California? It turns out the amount of urban street commerce is strikingly related to population size. The biggest metro areas in the U.S. have one retail, food, or service establishment for roughly every 120 people, while the smallest metro areas have roughly one for every 100 people.

Store clusters also tend to occur in predictable spatial patterns — with typically a few large clusters of stores, notably more medium-size clusters, and a lot of smaller neighborhood clusters, located at fairly predictable distances from each other. And store types are strongly affected by how often we visit them: There is one restaurant for every 445 people in U.S. cities and towns, but it takes around 13,000 people to support a bookstore in a typical U.S. metro area.

Of course, there are limits to these regularities. Myrtle Beach, South Carolina, has twice as many businesses as most places its size (thanks to tourists), while Brownsville, Texas, has many fewer. And plenty of neighborhoods within big cities either surprisingly lack shopping amenities or lose them over time.

In short, while urban commerce has clear patterns, it is still unpredictable and depends heavily on local conditions — including policies and planning. Enter MIT Associate Professor Andres Sevtsuk, whose new book, “Street Commerce: Creating Vibrant Urban Sidewalks,” published by the University of Pennsylvania Press, digs into the science of commerce and examines salient cases about sustaining main streets, from the U.S. and beyond, including London, Singapore, and Tallinn, Estonia. 

While the book’s research was completed before the Covid-19 pandemic hit, the empirical principles detailed in the book should apply to the rebuilding of street commerce after the pandemic as well.

“We now have the spatial science for understanding where retail clusters work and where they will do well,” says Sevtsuk, the Charles and Ann Spaulding Career Development Associate Professor of Urban Science and Planning in MIT’s Department of Urban Studies and Planning. “However, this knowledge is not really absorbed into practice very much. City governments often still zone for commerce not really based on evidence about where it would work best.”

Where people encounter each other

In the book, Sevtsuk also makes the case that lively shopping areas do more than provide access to goods and services: They are civic and social spaces where people mingle and gain access to opportunities.

“Commercial clusters are one of the few remaining places in contemporary cities where diverse sets of people can encounter each other,” Sevtsuk says. “So they’re also important venues for community building and for democracy. … We encounter people of different incomes, classes, races, and interests along main streets, which helps us establish new social connections and better understand the society around us.”

And when retail clusters include locally owned businesses, Sevtsuk notes, “they really feed the local economy more than big box stores.” Spending at a local business, rather than a chain store, keeps more money in the local economy, because local establishments source more of their own supplies and services locally.

Convenient commercial streets, whether accessible on foot or by public transport, also help cities tackle climate change and emissions. Because over two-thirds of all trips people make are for commercial, social, recreational, and family purposes, walkable commerce lowers our carbon footprint, Sevtsuk notes: “The more errands and social activities we can complete without having to drive, the more sustainable and energy-efficient our cities will be.”

And yet only about 15 percent of Americans live within a 15-minute walk of a cluster of amenities. (Another 56 percent are within 3 miles of one.) In order to capture the social, economic, and environmental benefits that vibrant street commerce has to offer, cities should consciously plan and support street commerce, and not just in exurban malls, Sevtsuk argues. Just like transit-oriented development has become a widely accepted model, amenity-oriented development should too. In the book, Sevtsuk suggests a number of strategies and tactics to support and grow main streets, emphasizing novel policy tools like affordable commercial space requirements for new developments, which apply the lessons learned from affordable housing requirements.

For one thing, successful store clusters are more likely to crop up around spots of unusually good accessibility, such as street corners and intersections for small neighborhood clusters. Clusters of establishments are also more likely to develop at places that welcome them architecturally, featuring walkable streets that are easy to cross and ground floors of buildings that can be easily converted to retail spaces. And neighborhood clusters often benefit from larger anchor businesses like supermarkets, which produce a lot of foot traffic — something MIT’s Kendall Square Initiative has encouraged.

“It is often to everyone’s benefit to have a good supermarket or otherwise frequently attended establishment on the corner,” Sevtsuk says. “Anchors produce a positive ripple effect on nearby stores.”

Anchors are what Sevtsuk calls “complementary” to most neighboring stores, but having multiple groceries in close proximity rarely works, because they compete with each other. However, some thriving clusters do feature businesses that compete with each other, where proximity gives customers more choice and thereby attracts a larger clientele. Restaurants, clothing stores, bookstores, and antique stores are often found shoulder to shoulder in competitive clusters that make all stores better off.

Public transit access, surrounding building density, and mixed land uses also help store clusters thrive, according to Sevtsuk.

“Density is a real friend for commerce, though mostly people think about density in a negative way — having to share space and amenities with more people,” Sevtsuk says. “But density sustains amenities. And for retail clusters to work, we do not need a uniform swath of density everywhere, but rather local density, immediately along commercial streets and transit corridors.”

Planning for a rebound

“Street Commerce” has been praised by urban studies scholars; Ed Glaeser of Harvard University says it “provides an invaluable guide to the present and future of urban retail” and “reminds us that modern cities are built around gains from trade.” And while Sevtsuk’s work precedes the Covid-19 pandemic, he acknowledges the hardship the pandemic has imposed on businesses.

“I really feel for these business owners at the moment and I think inevitably we are going to have huge turnover, shops going out of business, new stores coming in, with a different retail market and different main streets emerging,” Sevtsuk says. Still, he notes, retailers have defied predictions of doom before, such as those involving e-commerce. Instead of vanishing, stores offering a richer customer experience have tended to thrive — and the balance of street business has tilted a bit more toward services and food establishments.

Ultimately, Sevtsuk thinks there are several major lessons about helping street commerce flourish. There is no one-size-fits-all template for cities and towns, since each has a unique population, urban form, and history. Some are denser, some have better public transit, and some have long-established patterns of commerce that remain influential. But the same broad economic, organizational, and spatial principles influence store patterns everywhere. Street commerce flourishes when city governments, civil society organizations, and developers alike support and cherish it.

But while a corporation such as Starbucks has its own dedicated resources to identify good store locations and suitable building types, and to detect contemporary clustering dynamics, towns and cities typically do not. For that reason, Sevtsuk hopes policymakers and town officials can absorb the lessons of research and apply them to their own locales.

“There are certainly many things that a public official or planner can do to nudge the course of street commerce and support it,” Sevtsuk says. “If there is one good lesson from cities where street commerce has been successfully introduced, bolstered, or reinvented, or where the forces of gentrification have been equitably balanced, it is that successful street commerce almost never emerges or survives as a result of pure market forces alone. Good street commerce usually also represents the fruits of conscious planning choices.”



de MIT News https://ift.tt/2YItKGM

miércoles, 26 de agosto de 2020

Synthetic coating for the GI tract could deliver drugs or aid in digestion

By making use of enzymes found in the digestive tract, MIT engineers have devised a way to apply a temporary synthetic coating to the lining of the small intestine. This coating could be adapted to deliver drugs, aid in digestion, or prevent nutrients such as glucose from being absorbed.

In a study conducted in pigs, the researchers demonstrated that they could use this approach to simplify the delivery of medications that normally have to be taken multiple times per day. They also modified the coatings to deliver the enzyme lactase, which helps people digest the milk sugar lactose, and to block glucose absorption, which could offer a new strategy to treat diabetes or obesity.

“These three applications are fairly distinct, but they offer a sense of the breadth of things that can be done with this approach,” says Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

The coating consists of a polymer made from dopamine molecules, which can be consumed as a liquid. Once the solution reaches the small intestine, the molecules assemble into a polymer, in a reaction catalyzed by an enzyme found there.

Junwei Li, a postdoc at MIT’s Koch Institute for Integrative Cancer Research, is the lead author of the study, which appears today in Science Translational Medicine.

Sticky polymers

The MIT team began working on this project with the goal of trying to develop liquid drug formulations that could offer an easier-to-swallow alternative to capsules, especially for children. Their idea was to create a polymer coating for the intestinal lining, which would form after being swallowed as a solution of monomers (the building blocks of polymers).

“Children often aren’t able to take solid dosage forms like capsules and tablets,” Traverso says. “We started to think about whether we could develop liquid formulations that could form a synthetic epithelial lining that could then be used for drug delivery, making it easier for the patient to receive the medication.”

They took their inspiration from nature and began to experiment with a polymer called polydopamine (PDA), which is a component of the sticky substance that mussels secrete to help them cling to rocks. PDA is made from monomers of dopamine — the same chemical that acts as a neurotransmitter in the brain.

The researchers discovered that an enzyme called catalase could help assemble molecules of dopamine into the PDA polymer. Catalase is found throughout the digestive tract, with especially high levels in the upper region of the small intestine.

In a study conducted in pigs, the researchers showed that if they deliver dopamine in a liquid solution, along with a tiny amount of hydrogen peroxide (at levels generally recognized as safe), catalase in the small intestine breaks the hydrogen peroxide down into water and oxygen. That oxygen helps the dopamine molecules to join together into the PDA polymer. Within a few minutes, a thin film of PDA forms, coating the lining of the small intestine.

“These polymers have muco-adhesion properties, which means that after polymerization, the polymer can attach to the intestinal wall very strongly,” Li says. “In this way, we can generate synthetic, epithelial-like coatings on the original intestinal surface.”

Once the researchers developed the coating, they began experimenting with ways to modify it for a variety of applications. They showed that they could attach an enzyme called beta-galactosidase (lactase) to the film, and that this film could then help with lactose digestion. In pigs, this coating improved the efficiency of lactose digestion around 20-fold.

For another application, the researchers incorporated a drug called praziquantel, which is used to treat schistosomiasis, a tropical disease caused by parasitic worms. Usually this drug has to be given three times a day, but using this formulation, it could be given just once a day and gradually released throughout the day. This approach could also be useful for antibiotics that have to be given more than once a day, the researchers say.

Lastly, the researchers showed that they could embed the polymer with tiny crosslinkers that make the coating impenetrable to glucose (and potentially other molecules). This could help in the management of diabetes, obesity, or other metabolic disorders, the researchers say.

Temporary coating

In this study, the researchers showed that the coating lasts for about 24 hours, after which it is shed along with the cells that make up the intestinal lining, which is continually replaced. For their studies in pigs, the researchers delivered the solution by endoscopy, but they envision developing a drinkable formulation for human use. The researchers are also developing other alternative formulations, including capsules and pills.

The researchers performed some preliminary safety studies in rats and found that the dopamine solution had no harmful effects. Their studies also suggested that most or all of the dopamine molecules become part of the synthetic coating and do not make it into the tissue or the bloodstream, but the team plans to do additional safety studies to explore any possible effects the dopamine may have.

Moreover, the researchers investigated the nutrient absorption capacity of the intestine after 24 hours and found no difference between animals that had received the gastrointestinal synthetic epithelial lining (GSEL) and those that hadn’t.

Additionally, the team found that the coating was able to stick well to human GI tissue.

The research was funded by the Bill & Melinda Gates Foundation, the National Institutes of Health, and MIT’s Department of Mechanical Engineering.



from MIT News https://ift.tt/34GRGOR

Cosmic rays may soon stymie quantum computing

The practicality of quantum computing hangs on the integrity of the quantum bit, or qubit.

Qubits, the logic elements of quantum computers, are coherent two-level systems that represent quantum information. Each qubit has the strange ability to be in a quantum superposition, carrying aspects of both states simultaneously, enabling a quantum version of parallel computation. Quantum computers, if they can be scaled to accommodate many qubits on one processor, could be dizzyingly faster, and able to handle far more complex problems, than today’s conventional computers.

But that all depends on a qubit’s integrity, or how long it can operate before its superposition and the quantum information are lost — a process called decoherence, which ultimately limits the computer run-time. Superconducting qubits — a leading qubit modality today — have achieved exponential improvement in this key metric, from less than one nanosecond in 1999 to around 200 microseconds today for the best-performing devices.

But researchers at MIT, MIT Lincoln Laboratory, and Pacific Northwest National Laboratory (PNNL) have found that a qubit’s performance will soon hit a wall. In a paper published today in Nature, the team reports that the low-level, otherwise harmless background radiation emitted by trace elements in concrete walls, together with incoming cosmic rays, is enough to cause decoherence in qubits. They found that this effect, if left unmitigated, will limit the performance of qubits to just a few milliseconds.

Given the rate at which scientists have been improving qubits, they may hit this radiation-induced wall in just a few years. To overcome this barrier, scientists will have to find ways to shield qubits — and any practical quantum computers — from low-level radiation, perhaps by building the computers underground or designing qubits that are tolerant to radiation’s effects.

“These decoherence mechanisms are like an onion, and we’ve been peeling back the layers for the past 20 years, but there’s another layer that, left unabated, is going to limit us in a couple of years, which is environmental radiation,” says William Oliver, associate professor of electrical engineering and computer science and Lincoln Laboratory Fellow at MIT. “This is an exciting result, because it motivates us to think of other ways to design qubits to get around this problem.”

The paper’s lead author is Antti Vepsäläinen, a postdoc in MIT’s Research Laboratory of Electronics.

“It is fascinating how sensitive superconducting qubits are to the weak radiation. Understanding these effects in our devices can also be helpful in other applications such as superconducting sensors used in astronomy,” Vepsäläinen says.

Co-authors at MIT include Amir Karamlou, Akshunna Dogra, Francisca Vasconcelos, Simon Gustavsson, and physics professor Joseph Formaggio, along with David Kim, Alexander Melville, Bethany Niedzielski, and Jonilyn Yoder at Lincoln Laboratory, and John Orrell, Ben Loer, and Brent VanDevender of PNNL.

A cosmic effect

Superconducting qubits are electrical circuits made from superconducting materials. They comprise multitudes of paired electrons, known as Cooper pairs, that flow through the circuit without resistance and work together to maintain the qubit’s tenuous superposition state. If the circuit is heated or otherwise disrupted, electron pairs can split up into “quasiparticles,” causing decoherence in the qubit that limits its operation.

There are many sources of decoherence that could destabilize a qubit, such as fluctuating magnetic and electric fields, thermal energy, and even interference between qubits.

Scientists have long suspected that very low levels of radiation may have a similar destabilizing effect in qubits.

“In the last five years, the quality of superconducting qubits has become much better, and now we’re within a factor of 10 of where the effects of radiation are going to matter,” adds Kim, a technical staff member at MIT Lincoln Laboratory.

So Oliver and Formaggio teamed up to see how they might nail down the effect of low-level environmental radiation on qubits. As a neutrino physicist, Formaggio has expertise in designing experiments that shield against the smallest sources of radiation, to be able to see neutrinos and other hard-to-detect particles.

“Calibration is key”

The team, working with collaborators at Lincoln Laboratory and PNNL, first had to design an experiment to calibrate the impact of known levels of radiation on superconducting qubit performance. To do this, they needed a known radioactive source — one which became less radioactive slowly enough to assess the impact at essentially constant radiation levels, yet quickly enough to assess a range of radiation levels within a few weeks, down to the level of background radiation.

The group chose to irradiate a foil of high purity copper. When exposed to a high flux of neutrons, copper produces copious amounts of copper-64, an unstable isotope with exactly the desired properties.

“Copper just absorbs neutrons like a sponge,” says Formaggio, who worked with operators at MIT’s Nuclear Reactor Laboratory to irradiate two small disks of copper for several minutes. They then placed one of the disks next to the superconducting qubits in a dilution refrigerator in Oliver’s lab on campus. At temperatures about 200 times colder than outer space, they measured the impact of the copper’s radioactivity on qubits’ coherence while the radioactivity decreased — down toward environmental background levels.

The radioactivity of the second disk was measured at room temperature as a gauge for the levels hitting the qubit. Through these measurements and related simulations, the team understood the relation between radiation levels and qubit performance, one that could be used to infer the effect of naturally occurring environmental radiation. Based on these measurements, the qubit coherence time would be limited to about 4 milliseconds.

“Not game over”

The team then removed the radioactive source and proceeded to demonstrate that shielding the qubits from the environmental radiation improves the coherence time. To do this, the researchers built a 2-ton wall of lead bricks that could be raised and lowered on a scissor lift, to either shield or expose the refrigerator to surrounding radiation.

“We built a little castle around this fridge,” Oliver says.

Every 10 minutes, and over several weeks, students in Oliver’s lab alternated pushing a button to either lift or lower the wall, as a detector measured the qubits’ integrity, or “relaxation rate,” a measure of how the environmental radiation impacts the qubit, with and without the shield. By comparing the two results, they effectively extracted the impact attributed to environmental radiation, confirming the 4 millisecond prediction and demonstrating that shielding improved qubit performance.
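The arithmetic behind that comparison is simple rate subtraction: relaxation rates from independent decoherence channels add, so the excess rate measured with the shield lowered estimates the contribution of environmental radiation alone. A minimal sketch of that logic, using illustrative placeholder numbers rather than the paper’s measured values:

```python
# Hedged sketch of the rate-subtraction logic behind the shielding
# experiment. All numbers are illustrative placeholders.

def radiation_limited_t1(t1_exposed, t1_shielded):
    """Relaxation rates (1/T1) from independent channels add, so the
    excess rate without the lead shield estimates the rate due to
    environmental radiation; its inverse is the coherence time the
    qubit would have if radiation were the only decoherence channel."""
    gamma_rad = 1 / t1_exposed - 1 / t1_shielded
    return 1 / gamma_rad

t1_exposed = 200e-6      # seconds, shield lowered (illustrative)
t1_shielded = 1 / 4750   # seconds, shield raised (~210.5 us, illustrative)
print(radiation_limited_t1(t1_exposed, t1_shielded))  # about 0.004 s, i.e. ~4 ms
```

With these stand-in values, the radiation channel alone would cap coherence at roughly 4 milliseconds, matching the scale of the limit reported in the study.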

“Cosmic ray radiation is hard to get rid of,” Formaggio says. “It’s very penetrating, and goes right through everything like a jet stream. If you go underground, that gets less and less. It’s probably not necessary to build quantum computers deep underground, like neutrino experiments, but deep basement facilities could probably get qubits operating at improved levels.”

Going underground isn’t the only option, and Oliver has ideas for how to design quantum computing devices that still work in the face of background radiation.

“If we want to build an industry, we’d likely prefer to mitigate the effects of radiation above ground,” Oliver says. “We can think about designing qubits in a way that makes them ‘rad-hard,’ and less sensitive to quasiparticles, or design traps for quasiparticles so that even if they’re constantly being generated by radiation, they can flow away from the qubit. So it’s definitely not game-over, it’s just the next layer of the onion we need to address.”

This research was funded, in part, by the U.S. Department of Energy Office of Nuclear Physics, the U.S. Army Research Office, the U.S. Department of Defense, and the U.S. National Science Foundation.



from MIT News https://ift.tt/3gwmJz9

National Science Foundation announces MIT-led Institute for Artificial Intelligence and Fundamental Interactions

The U.S. National Science Foundation (NSF) announced today an investment of more than $100 million to establish five artificial intelligence (AI) institutes, each receiving roughly $20 million over five years. One of these, the NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI), will be led by MIT’s Laboratory for Nuclear Science (LNS) and become the intellectual home of more than 25 physics and AI senior researchers at MIT and Harvard, Northeastern, and Tufts universities. 

By merging research in physics and AI, the IAIFI seeks to tackle some of the most challenging problems in physics, including precision calculations of the structure of matter, gravitational-wave detection of merging black holes, and the extraction of new physical laws from noisy data.

“The goal of the IAIFI is to develop the next generation of AI technologies, based on the transformative idea that artificial intelligence can directly incorporate physics intelligence,” says Jesse Thaler, an associate professor of physics at MIT, LNS researcher, and IAIFI director.  “By fusing the ‘deep learning’ revolution with the time-tested strategies of ‘deep thinking’ in physics, we aim to gain a deeper understanding of our universe and of the principles underlying intelligence.”

IAIFI researchers say their approach will enable groundbreaking physics discoveries and advance AI more generally, through the development of novel AI approaches that incorporate first principles from fundamental physics.

“Invoking the simple principle of translational symmetry — which in nature gives rise to conservation of momentum — led to dramatic improvements in image recognition,” says Mike Williams, an associate professor of physics at MIT, LNS researcher, and IAIFI deputy director. “We believe incorporating more complex physics principles will revolutionize how AI is used to study fundamental interactions, while simultaneously advancing the foundations of AI.”

In addition, a core element of the IAIFI mission is to transfer their technologies to the broader AI community.

“Recognizing the critical role of AI, NSF is investing in collaborative research and education hubs, such as the NSF IAIFI anchored at MIT, which will bring together academia, industry, and government to unearth profound discoveries and develop new capabilities,” says NSF Director Sethuraman Panchanathan. “Just as prior NSF investments enabled the breakthroughs that have given rise to today’s AI revolution, the awards being announced today will drive discovery and innovation that will sustain American leadership and competitiveness in AI for decades to come.”

Research in AI and fundamental interactions

Fundamental interactions are described by two pillars of modern physics: at short distances by the Standard Model of particle physics, and at long distances by the Lambda Cold Dark Matter model of Big Bang cosmology. Both models are based on physical first principles such as causality and space-time symmetries.  An abundance of experimental evidence supports these theories, but also exposes where they are incomplete, most pressingly that the Standard Model does not explain the nature of dark matter, which plays an essential role in cosmology.

AI has the potential to help answer these questions and others in physics.

For many physics problems, the governing equations that encode the fundamental physical laws are known. However, undertaking key calculations within these frameworks, as is essential to test our understanding of the universe and guide physics discovery, can be computationally demanding or even intractable. IAIFI researchers are developing AI for such first-principles theory studies, which naturally require AI approaches that rigorously encode physics knowledge. 

“My group is developing new provably exact algorithms for theoretical nuclear physics,” says Phiala Shanahan, an assistant professor of physics and LNS researcher at MIT. “Our first-principles approach turns out to have applications in other areas of science and even in robotics, leading to exciting collaborations with industry partners.”

Incorporating physics principles into AI could also have a major impact on many experimental applications, such as designing AI methods that are more easily verifiable. IAIFI researchers are working to enhance the scientific potential of various facilities, including the Large Hadron Collider (LHC) and the Laser Interferometer Gravitational-Wave Observatory (LIGO).

“Gravitational-wave detectors are among the most sensitive instruments on Earth, but the computational systems used to operate them are mostly based on technology from the previous century,” says Principal Research Scientist Lisa Barsotti of the MIT Kavli Institute for Astrophysics and Space Research. “We have only begun to scratch the surface of what can be done with AI; just enough to see that the IAIFI will be a game-changer.”

The unique features of these physics applications also offer compelling research opportunities in AI more broadly. For example, physics-informed architectures and hardware development could lead to advances in the speed of AI algorithms, and work in statistical physics is providing a theoretical foundation for understanding AI dynamics. 

“Physics has inspired many time-tested ideas in machine learning: maximizing entropy, Boltzmann machines, and variational inference, to name a few,” says Pulkit Agrawal, an assistant professor of electrical engineering and computer science at MIT, and researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We believe that close interaction between physics and AI researchers will be the catalyst that leads to the next generation of machine learning algorithms.” 

Cultivating early-career talent

AI technologies are advancing rapidly, making it both important and challenging to train junior researchers at the intersection of physics and AI. The IAIFI aims to recruit and train a talented and diverse group of early-career researchers, including at the postdoc level through its IAIFI Fellows Program.  

“By offering our fellows their choice of research problems, and the chance to focus on cutting-edge challenges in physics and AI, we will prepare many talented young scientists to become future leaders in both academia and industry,” says MIT professor of physics Marin Soljacic of the Research Laboratory of Electronics (RLE). 

IAIFI researchers hope these fellows will spark interdisciplinary and multi-investigator collaborations, generate new ideas and approaches, translate physics challenges beyond their native domains, and help develop a common language across disciplines. Applications for the inaugural IAIFI fellows are due in mid-October. 

Another related effort spearheaded by Thaler, Williams, and Alexander Rakhlin, an associate professor of brain and cognitive sciences at MIT and researcher in the Institute for Data, Systems, and Society (IDSS), is the development of a new interdisciplinary PhD program in physics, statistics, and data science, a collaborative effort between the Department of Physics and the Statistics and Data Science Center.

“Statistics and data science are among the foundational pillars of AI. Physics joining the interdisciplinary doctoral program will bring forth new ideas and areas of exploration, while fostering a new generation of leaders at the intersection of physics, statistics, and AI,” says Rakhlin.

Education, outreach, and partnerships 

The IAIFI aims to cultivate “human intelligence” by promoting education and outreach. For example, IAIFI members will contribute to establishing a MicroMasters degree program at MIT for students from non-traditional backgrounds.    

“We will increase the number of students in both physics and AI from underrepresented groups by providing fellowships for the MicroMasters program,” says Isaac Chuang, professor of physics and electrical engineering, senior associate dean for digital learning, and RLE researcher at MIT. “We also plan on working with undergraduate MIT Summer Research Program students, to introduce them to the tools of physics and AI research that they might not have access to at their home institutions.”

The IAIFI plans to expand its impact via numerous outreach efforts, including a K-12 program in which students are given data from the LHC and LIGO and tasked with rediscovering the Higgs boson and gravitational waves. 

“After confirming these recent Nobel Prizes, we can ask the students to find tiny artificial signals embedded in the data using AI and fundamental physics principles,” says assistant professor of physics Phil Harris, an LNS researcher at MIT. “With projects like this, we hope to disseminate knowledge about — and enthusiasm for — physics, AI, and their intersection.”

In addition, the IAIFI will collaborate with industry and government to advance the frontiers of both AI and physics, as well as societal sectors that stand to benefit from AI innovation. IAIFI members already have many active collaborations with industry partners, including DeepMind, Microsoft Research, and Amazon. 

“We will tackle two of the greatest mysteries of science: how our universe works and how intelligence works,” says MIT professor of physics Max Tegmark, an MIT Kavli Institute researcher. “Our key strategy is to link them, using physics to improve AI and AI to improve physics. We're delighted that the NSF is investing the vital seed funding needed to launch this exciting effort.”

Building new connections at MIT and beyond

Leveraging MIT’s culture of collaboration, the IAIFI aims to generate new connections and to strengthen existing ones across MIT and beyond.

Of the 27 current IAIFI senior investigators, 16 are at MIT and members of the LNS, RLE, MIT Kavli Institute, CSAIL, and IDSS. In addition, IAIFI investigators are members of related NSF-supported efforts at MIT, such as the Center for Brains, Minds, and Machines within the McGovern Institute for Brain Research and the MIT-Harvard Center for Ultracold Atoms.  

“We expect a lot of creative synergies as we bring physics and computer science together to study AI,” says Bill Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and researcher in CSAIL. “I’m excited to work with my physics colleagues on topics that bridge these fields.”

More broadly, the IAIFI aims to make Cambridge, Massachusetts, and the surrounding Boston area a hub for collaborative efforts to advance both physics and AI. 

“As we teach in 8.01 and 8.02, part of what makes physics so powerful is that it provides a universal language that can be applied to a wide range of scientific problems,” says Thaler. “Through the IAIFI, we will create a common language that transcends the intellectual borders between physics and AI to facilitate groundbreaking discoveries.”



from MIT News https://ift.tt/3lmjwFK

Tuesday, August 25, 2020

Face-specific brain area responds to faces even in people born blind

More than 20 years ago, neuroscientist Nancy Kanwisher and others discovered that a small section of the brain located near the base of the skull responds much more strongly to faces than to other objects we see. This area, known as the fusiform face area (FFA), is believed to be specialized for identifying faces.

Now, in a surprising new finding, Kanwisher and her colleagues have shown that this same region also becomes active in people who have been blind since birth, when they touch a three-dimensional model of a face with their hands. The finding suggests that this area does not require visual experience to develop a preference for faces.

“That doesn’t mean that visual input doesn’t play a role in sighted subjects — it probably does,” she says. “What we showed here is that visual input is not necessary to develop this particular patch, in the same location, with the same selectivity for faces. That was pretty astonishing.”

Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study. N. Apurva Ratan Murty, an MIT postdoc, is the lead author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Other authors of the paper include Santani Teng, a former MIT postdoc; Aude Oliva, a senior research scientist, co-director of the MIT Quest for Intelligence, and MIT director of the MIT-IBM Watson AI Lab; and David Beeler and Anna Mynick, both former lab technicians.

Selective for faces

Studying people who were born blind allowed the researchers to tackle longstanding questions regarding how specialization arises in the brain. In this case, they were specifically investigating face perception, but the same unanswered questions apply to many other aspects of human cognition, Kanwisher says.

“This is part of a broader question that scientists and philosophers have been asking themselves for hundreds of years, about where the structure of the mind and brain comes from,” she says. “To what extent are we products of experience, and to what extent do we have built-in structure? This is a version of that question asking about the particular role of visual experience in constructing the face area.”

The new work builds on a 2017 study from researchers in Belgium. In that study, congenitally blind subjects were scanned with functional magnetic resonance imaging (fMRI) as they listened to a variety of sounds, some related to faces (such as laughing or chewing), and others not. That study found higher responses in the vicinity of the FFA to face-related sounds than to sounds such as a ball bouncing or hands clapping.

In the new study, the MIT team wanted to use tactile experience to measure more directly how the brains of blind people respond to faces. They created a ring of 3D-printed objects that included faces, hands, chairs, and mazes, and rotated them so that the subject could handle each one while in the fMRI scanner.

They began with normally sighted subjects and found that when they handled the 3D objects, a small area that corresponded to the location of the FFA was preferentially active when the subjects touched the faces, compared to when they touched other objects. This activity, which was weaker than the signal produced when sighted subjects looked at faces, was not surprising to see, Kanwisher says.

“We know that people engage in visual imagery, and we know from prior studies that visual imagery can activate the FFA. So the fact that you see the response with touch in a sighted person is not shocking because they’re visually imagining what they’re feeling,” she says.

The researchers then performed the same experiments, using tactile input only, with 15 subjects who reported being blind since birth. To their surprise, they found that the brain showed face-specific activity in the same area as the sighted subjects, at levels similar to when sighted people handled the 3D-printed faces.

“When we saw it in the first few subjects, it was really shocking, because no one had seen individual face-specific activations in the fusiform gyrus in blind subjects previously,” Murty says.

Patterns of connection

The researchers also explored several hypotheses that have been put forward to explain why face-selectivity always seems to develop in the same region of the brain. One prominent hypothesis suggests that the FFA develops face-selectivity because it receives visual input from the fovea (the center of the retina), and we tend to focus on faces at the center of our visual field. However, since this region developed in blind people with no foveal input, the new findings do not support this idea.

Another hypothesis is that the FFA has a natural preference for curved shapes. To test that idea, the researchers performed another set of experiments in which they asked the blind subjects to handle a variety of 3D-printed shapes, including cubes, spheres, and eggs. They found that the FFA did not show any preference for the curved objects over the cube-shaped objects.

The researchers did find evidence for a third hypothesis, which is that face selectivity arises in the FFA because of its connections to other parts of the brain. They were able to measure the FFA’s “connectivity fingerprint” — a measure of the correlation between activity in the FFA and activity in other parts of the brain — in both blind and sighted subjects.

They then used the data from each group to train a computer model to predict the exact location of the brain’s selective response to faces based on the FFA connectivity fingerprint. They found that when the model was trained on data from sighted subjects, it could accurately predict the results in blind subjects, and vice versa. They also found evidence that connections to the frontal and parietal lobes of the brain, which are involved in high-level processing of sensory information, may be the most important in determining the role of the FFA.
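That cross-group prediction analysis can be sketched in miniature. The code below is a toy stand-in, not the study’s actual pipeline: it fabricates synthetic “connectivity fingerprints” and selectivity scores, fits a plain ridge regression on one group, and tests the fit on the other. All names, dimensions, and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_regions = 500, 20

# Hypothetical shared mapping from connectivity fingerprint to face
# selectivity (a stand-in for whatever the brain actually implements).
w_true = rng.normal(size=n_regions)

def synthetic_group(noise=0.5):
    """One group's data: each voxel's connectivity fingerprint
    (correlations with n_regions other areas) and its face selectivity."""
    X = rng.normal(size=(n_voxels, n_regions))
    y = X @ w_true + noise * rng.normal(size=n_voxels)
    return X, y

X_sighted, y_sighted = synthetic_group()
X_blind, y_blind = synthetic_group()

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Train on sighted subjects, predict selectivity in blind subjects.
w = ridge_fit(X_sighted, y_sighted)
r = np.corrcoef(X_blind @ w, y_blind)[0, 1]
print(f"cross-group prediction r = {r:.2f}")
```

If the fingerprint-to-selectivity mapping is shared across groups, as the study’s result suggests, the model trained on one group transfers to the other, and the cross-group correlation is high.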

“It’s suggestive of this very interesting story that the brain wires itself up in development not just by taking perceptual information and doing statistics on the input and allocating patches of brain, according to some kind of broadly agnostic statistical procedure,” Kanwisher says. “Rather, there are endogenous constraints in the brain present at birth, in this case, in the form of connections to higher-level brain regions, and these connections are perhaps playing a causal role in its development.”

The research was funded by the National Institutes of Health Shared Instrumentation Grant to the Athinoula Martinos Center at MIT, a National Eye Institute Training Grant, the Smith-Kettlewell Eye Research Institute’s Rehabilitation Engineering Research Center, an Office of Naval Research Vannevar Bush Faculty Fellowship, an NIH Pioneer Award, and a National Science Foundation Science and Technology Center Grant.



from MIT News https://ift.tt/34BHEye

Monday, August 24, 2020

Uncertainty, belief, and economic outcomes

In late 1994 Mexico suffered a severe currency crisis, with attacks on the peso by international traders that led to inflation, bailouts, and macroeconomic woes. Some experts had thought Mexico was ripe for a currency crisis a couple of years before it happened. So if the peso was already vulnerable to attack, why didn’t that occur earlier?

Stephen Morris has some ideas about that. Influential ideas. The MIT economist is the co-author of “Unique Equilibrium in a Model of Self-Fulfilling Currency Attacks,” a widely cited 1998 paper written with economist Hyun Song Shin, who is now at the Bank for International Settlements. The paper changed the way many people in economics and finance think about market dynamics.

Before Morris and Shin published their paper, a common line of thought was that there were multiple points of equilibrium at which currencies (among other tradeable things) would rest. Investors might short-sell a currency, leading to its collapse — which would establish one equilibrium. Alternately, the currency might avoid attacks and remain robust — representing another equilibrium. Either way, large numbers of people would be acting similarly based on the same information.

But Morris and Shin posited a new, more realistic view about how events like this happen. They stipulated that there is often uncertainty about some of the fundamentals concerning a country’s currency, and also uncertainty among investors about what other investors will do.

“If you say there are multiple equilibria, and everybody attacks or doesn’t attack, that’s on the assumption that [there] is common knowledge among the agents,” Morris says. “And that’s surely not going to be true in reality. In reality, there is going to be uncertainty about what other people think about the situation and what they think other people think.”

For that very reason, Morris says, “An attack would tend to occur when it made sense for you to attack, when you were very uncertain about what other people were doing.”

His paper with Shin codified this point, modeling how the behavior of investors hinges greatly on their beliefs about what other investors will do — and laying out how, in this situation, a single equilibrium for a given currency will result. The model made waves: The finance world used the paper by applying the model to their own decisions, while scholars used it to rethink, in general terms, existing assumptions about the way markets work. The model helped persuade people that markets did not operate with utmost efficiency and that “higher-order” beliefs among investors — what you think I will do, or what I think you think — matter hugely.

“I think people liked it because under some circumstances it delivered a unique prediction,” Morris says. “So it got widely used — people could use this and plug it in to different economic problems. I was happy for people to do that, but what got lost a little bit was the idea about people’s higher-order beliefs and the rich modeling of the information structure lying behind this.”
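The flavor of the model can be sketched with a standard textbook rendering of a global game (a simplified version in the spirit of the 1998 paper, not its exact specification): fundamentals lie in [0, 1], each trader observes a noisy private signal of them, attacks below a signal cutoff, and the attack succeeds when the attacking mass exceeds the fundamental. Solving the two equilibrium conditions jointly pins down a unique threshold. The parametrization below is illustrative.

```python
from statistics import NormalDist

# Hedged sketch of a global-game currency attack (illustrative
# parametrization): each trader sees x = theta + noise with
# noise ~ Normal(0, sigma), attacks iff x < x*, pays cost t, and earns 1
# if the attack succeeds (attacking mass exceeds theta). Under a diffuse
# prior, the posterior is theta | x ~ Normal(x, sigma).

def solve_thresholds(t, sigma, iters=200):
    n = NormalDist()
    x_star = 0.5  # initial guess for the signal cutoff
    theta_star = 0.5
    for _ in range(iters):
        # Critical fundamental theta*: the attacking mass Phi((x*-theta)/sigma)
        # just equals theta. Found by bisection (LHS minus theta is decreasing).
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if n.cdf((x_star - mid) / sigma) > mid:
                lo = mid
            else:
                hi = mid
        theta_star = (lo + hi) / 2
        # Indifference at the cutoff signal: P(theta <= theta* | x*) = t.
        x_star = theta_star - sigma * n.inv_cdf(t)
    return x_star, theta_star

x_star, theta_star = solve_thresholds(t=0.3, sigma=0.1)
print(f"signal cutoff x* = {x_star:.3f}, critical fundamental theta* = {theta_star:.3f}")
```

In this parametrization the iteration converges to a single pair (x*, theta*), with theta* equal to 1 - t; the qualitative point is that once traders are uncertain about what others observe and believe, the model delivers one equilibrium rather than many.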

That kind of work is the through-line in Morris’ career: He takes thorny problems about information and beliefs, and finds sophisticated yet useful ways of modeling them, in areas applicable to finance, central banking, firm decisions, and even nonfinancial markets such as school-choice plans.

“I’ve always been very interested in information,” Morris says. “And in trying to take a richer perspective on information and how that affects economic outcomes.”

With a broad and deep portfolio of research that he is still building upon, Morris was hired with tenure at MIT, joining the Institute’s Department of Economics in 2019. He was recently named the inaugural Peter A. Diamond Professor in Economics.

Morris did not always think economics was something he would pursue. As an undergraduate at Cambridge University, he studied math and, for the first time, economics.

“I think I have an origin story which a reasonable number of economists have,” Morris says. “You’re interested in math and analytical reasoning, and then you discover you’re interested in the world and social science as well, and then you discover economics is a subject addressing big, real-world problems where these analytical tools are being used in a significant way.”

Still, Morris did not instantly jump into graduate work in economics. First he attended Yale University as part of an exchange program, then spent two years in Uganda, working as what he calls a “practicing development economist,” before entering Yale’s PhD program in economics.

“At the end of the day I missed academia, came back, and did a very different type of economics,” Morris says. “I do theoretical microeconomics.”

Morris obtained his PhD from Yale, then joined the faculty at the University of Pennsylvania straight out of graduate school. He subsequently taught at Yale and at Princeton University, before joining MIT.

In his years as a practicing theoretical microeconomist, Morris’ work has ranged across a number of problems and bridged the gap between pure theory and more applied theoretical endeavors. A 2002 paper he wrote with Shin, “The Social Value of Private Information,” looked at the ways different market participants may coordinate, crowd out useful public information, and limit the spread of useful knowledge in markets — a work that has also been widely cited, and which generated considerable follow-up research among economists.

On a different note, a 2005 paper Morris wrote with economist Dirk Bergemann, “Robust Mechanism Design,” was influential in the field of mechanism design — the development of nonfinancial markets that apply to things like school choice or medical matches. In it, Morris and Bergemann asked whether such markets can reach optimal outcomes for everyone in them. One key point of the paper was to question how well we can know, and model, the beliefs of — say — parents choosing schools for children. The paper did not lead to a single outcome in the way the currency-attack model did, but it also generated a large follow-up literature in the field about assumptions inherent in mechanism-design work.

“To me it’s all unified,” Morris says of the different branches of his work. “What people may remember from the currency attack paper was that this was a useful trick to get the unique equilibrium. Whereas the robust mechanism design paper was saying there are lots of different things that can happen. So in that sense they may seem to be going in different directions, but in my mind, it was all about taking a richer perspective on information structures and what their consequences are.”

At MIT, he is returning to the question of when an economy switches between equilibria, a line of inquiry begun in his 1998 “Unique Equilibrium” paper, sometimes in tandem with MIT economist Muhamet Yildiz. Morris is also interested in the crossover between his work and that of computer scientists, and views MIT as a place with significant potential for collaborative, interdisciplinary research. He also finds the Department of Economics to be a highly productive place for him to work.

“It’s collegial, but in particular that means there are more intellectual interactions as well,” Morris observes.

He notes that he came to the Institute partly for the teaching opportunities, as well. In his first semester at MIT, Morris taught MIT second-year PhD students in a course about writing effective papers; he anticipates extensive advising of graduate students, as well as good in-class experiences.

“The main thing that drew me to be here was the PhD program,” Morris says. “I’d heard great things about it over the years.”

Between research and teaching, Morris will no doubt find his own unique equilibrium at MIT, too.

from MIT News https://ift.tt/34vTM3K