Thursday, July 28, 2022

MIT engineers develop stickers that can see inside the body

Ultrasound imaging is a safe and noninvasive window into the body’s workings, providing clinicians with live images of a patient’s internal organs. To capture these images, trained technicians manipulate ultrasound wands and probes to direct sound waves into the body. These waves reflect back out to produce high-resolution images of a patient’s heart, lungs, and other deep organs.

Currently, ultrasound imaging requires bulky and specialized equipment available only in hospitals and doctor’s offices. But a new design by MIT engineers might make the technology as wearable and accessible as buying Band-Aids at the pharmacy.

In a paper appearing today in Science, the engineers present the design for a new ultrasound sticker — a stamp-sized device that sticks to skin and can provide continuous ultrasound imaging of internal organs for 48 hours.

The researchers applied the stickers to volunteers and showed the devices produced live, high-resolution images of major blood vessels and deeper organs such as the heart, lungs, and stomach. The stickers maintained a strong adhesion and captured changes in underlying organs as volunteers performed various activities, including sitting, standing, jogging, and biking.

The current design requires connecting the stickers to instruments that translate the reflected sound waves into images. The researchers point out that even in their current form, the stickers could have immediate applications: For instance, the devices could be applied to patients in the hospital, similar to heart-monitoring EKG stickers, and could continuously image internal organs without requiring a technician to hold a probe in place for long periods of time.

If the devices can be made to operate wirelessly — a goal the team is currently working toward — the ultrasound stickers could be made into wearable imaging products that patients could take home from a doctor’s office or even buy at a pharmacy.

“We envision a few patches adhered to different locations on the body, and the patches would communicate with your cellphone, where AI algorithms would analyze the images on demand,” says the study’s senior author, Xuanhe Zhao, professor of mechanical engineering and civil and environmental engineering at MIT. “We believe we’ve opened a new era of wearable imaging: With a few patches on your body, you could see your internal organs.”

The study also includes lead authors Chonghe Wang and Xiaoyu Chen, and co-authors Liu Wang, Mitsutoshi Makihata, and Tao Zhao at MIT, along with Hsiao-Chuan Liu of the Mayo Clinic in Rochester, Minnesota.

A sticky issue

To image with ultrasound, a technician first applies a liquid gel to a patient’s skin, which acts to transmit ultrasound waves. A probe, or transducer, is then pressed against the gel, sending sound waves into the body that echo off internal structures and back to the probe, where the echoed signals are translated into visual images.
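
As a back-of-the-envelope illustration of that pulse-echo principle (a minimal sketch of our own, not anything from the Science paper), the depth of a reflecting structure follows directly from the echo’s round-trip time and an assumed speed of sound in tissue:

```python
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, a typical average for soft tissue

def echo_depth_m(round_trip_time_s):
    """Depth of a reflecting structure: the pulse travels down and back,
    so the one-way distance is half the round trip."""
    return SPEED_OF_SOUND_TISSUE * round_trip_time_s / 2.0

# An echo arriving 130 microseconds after the pulse leaves the transducer
# places the reflector roughly 10 cm deep.
print(f"{echo_depth_m(130e-6) * 100:.1f} cm")  # 10.0 cm
```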

For patients who require long periods of imaging, some hospitals offer probes affixed to robotic arms that can hold a transducer in place without tiring, but the liquid ultrasound gel flows away and dries out over time, interrupting long-term imaging.

In recent years, researchers have explored designs for stretchable ultrasound probes that would provide portable, low-profile imaging of internal organs. These designs featured a flexible array of tiny ultrasound transducers, the idea being that such a device would stretch and conform to a patient’s body.

But these experimental designs have produced low-resolution images, in part due to their stretch: In moving with the body, transducers shift location relative to each other, distorting the resulting image.

“A wearable ultrasound imaging tool would have huge potential in the future of clinical diagnosis. However, the resolution and imaging duration of existing ultrasound patches are relatively low, and they cannot image deep organs,” says Chonghe Wang, an MIT graduate student.

An inside look

The MIT team’s new ultrasound sticker produces higher resolution images over a longer duration by pairing a stretchy adhesive layer with a rigid array of transducers. “This combination enables the device to conform to the skin while maintaining the relative location of transducers to generate clearer and more precise images,” Wang says.
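
To see why fixed transducer positions matter, consider delay-and-sum focusing, the standard way ultrasound echoes are combined. The sketch below is our own simplification, not the team’s code, and the element pitch, array size, and focal depth are made-up values; it shows that focusing delays are computed from assumed element coordinates, so an array that stretches invalidates them:

```python
import numpy as np

C_TISSUE = 1540.0  # assumed speed of sound in tissue, m/s

def focusing_delays(element_x, focus_x, focus_z):
    """Per-element firing delays that make every element's wavefront
    arrive at the focal point simultaneously."""
    distances = np.sqrt((element_x - focus_x) ** 2 + focus_z ** 2)
    return (distances.max() - distances) / C_TISSUE  # farthest element fires first

pitch = 0.3e-3                    # assumed 0.3 mm element spacing
elements = np.arange(64) * pitch  # element positions in a rigid array
delays = focusing_delays(elements, elements.mean(), 0.05)  # focus 5 cm deep

# If the array stretches by 5 percent, the true positions shift, but a
# beamformer still applying the rigid-geometry delays is now mismatched:
stretched = elements * 1.05
ideal = focusing_delays(stretched, stretched.mean(), 0.05)
print(f"worst-case delay error: {np.abs(delays - ideal).max() * 1e9:.0f} ns")
```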

The device’s adhesive layer is made from two thin layers of elastomer that encapsulate a middle layer of solid hydrogel, a mostly water-based material that easily transmits sound waves. Unlike traditional ultrasound gels, the MIT team’s hydrogel is elastic and stretchy.

“The elastomer prevents dehydration of hydrogel,” says Chen, an MIT postdoc. “Only when hydrogel is highly hydrated can acoustic waves penetrate effectively and give high-resolution imaging of internal organs.”

The bottom elastomer layer is designed to stick to skin, while the top layer adheres to a rigid array of transducers that the team also designed and fabricated. The entire ultrasound sticker covers about 2 square centimeters and is 3 millimeters thick — about the area of a postage stamp.

The researchers ran the ultrasound sticker through a battery of tests with healthy volunteers, who wore the stickers on various parts of their bodies, including the neck, chest, abdomen, and arms. The stickers stayed attached to their skin, and produced clear images of underlying structures for up to 48 hours. During this time, volunteers performed a variety of activities in the lab, from sitting and standing, to jogging, biking, and lifting weights.

From the stickers’ images, the team was able to observe the changing diameter of major blood vessels when seated versus standing. The stickers also captured details of deeper organs, such as how the heart changes shape as it exerts during exercise. The researchers were also able to watch the stomach distend, then shrink back, as volunteers drank juice and later passed it out of their system. And as some volunteers lifted weights, the team could detect bright patterns in underlying muscles, signaling temporary microdamage.

“With imaging, we might be able to capture the moment in a workout before overuse, and stop before muscles become sore,” says Chen. “We do not know when that moment might be yet, but now we can provide imaging data that experts can interpret.”

The team is working to make the stickers function wirelessly. They are also developing software algorithms based on artificial intelligence that can better interpret and diagnose the stickers’ images. Zhao then envisions that ultrasound stickers could be packaged and purchased by patients and consumers, and used to monitor not only various internal organs, but also the progression of tumors and the development of fetuses in the womb.

“We imagine we could have a box of stickers, each designed to image a different location of the body,” Zhao says. “We believe this represents a breakthrough in wearable devices and medical imaging.”

This research was funded, in part, by MIT, the Defense Advanced Research Projects Agency, the National Science Foundation, the National Institutes of Health, and the U.S. Army Research Office through the Institute for Soldier Nanotechnologies at MIT.



from MIT News https://ift.tt/EOCznjg

New hardware offers faster computation for artificial intelligence, with much less energy

As scientists push the boundaries of machine learning, the amount of time, energy, and money required to train increasingly complex neural network models is skyrocketing. A new area of artificial intelligence called analog deep learning promises faster computation with a fraction of the energy usage.

Programmable resistors are the key building blocks in analog deep learning, just like transistors are the core elements for digital processors. By repeating arrays of programmable resistors in complex layers, researchers can create a network of analog artificial “neurons” and “synapses” that execute computations just like a digital neural network. This network can then be trained to achieve complex AI tasks like image recognition and natural language processing.

A multidisciplinary team of MIT researchers set out to push the speed limits of a type of human-made analog synapse that they had previously developed. They utilized a practical inorganic material in the fabrication process that enables their devices to run 1 million times faster than previous versions, which is also about 1 million times faster than the synapses in the human brain.

Moreover, this inorganic material also makes the resistor extremely energy-efficient. Unlike materials used in the earlier version of their device, the new material is compatible with silicon fabrication techniques. This change has enabled fabricating devices at the nanometer scale and could pave the way for integration into commercial computing hardware for deep-learning applications.

“With that key insight, and the very powerful nanofabrication techniques we have at MIT.nano, we have been able to put these pieces together and demonstrate that these devices are intrinsically very fast and operate with reasonable voltages,” says senior author Jesús A. del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science (EECS). “This work has really put these devices at a point where they now look really promising for future applications.”

“The working mechanism of the device is electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. Because we are working with very thin devices, we could accelerate the motion of this ion by using a strong electric field, and push these ionic devices to the nanosecond operation regime,” explains senior author Bilge Yildiz, the Breene M. Kerr Professor in the departments of Nuclear Science and Engineering and Materials Science and Engineering.

“The action potential in biological cells rises and falls with a timescale of milliseconds, since the voltage difference of about 0.1 volt is constrained by the stability of water,” says senior author Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering, “Here we apply up to 10 volts across a special solid glass film of nanoscale thickness that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices.”

These programmable resistors vastly increase the speed at which a neural network is trained, while drastically reducing the cost and energy to perform that training. This could help scientists develop deep learning models much more quickly, which could then be applied in uses like self-driving cars, fraud detection, or medical image analysis.

“Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft,” adds lead author and MIT postdoc Murat Onen.

Co-authors include Frances M. Ross, the Ellen Swallow Richards Professor in the Department of Materials Science and Engineering; postdocs Nicolas Emond and Baoming Wang; and Difei Zhang, an EECS graduate student. The research is published today in Science.

Accelerating deep learning

Analog deep learning is faster and more energy-efficient than its digital counterpart for two main reasons. First, computation is performed in memory, so enormous loads of data are not transferred back and forth from memory to a processor. Second, analog processors conduct operations in parallel. If the matrix size expands, an analog processor doesn’t need more time to complete new operations because all computation occurs simultaneously.
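
A rough software analogy (our illustration, not the authors’ hardware model) shows what “computation in memory, in parallel” means for a crossbar of programmable resistors: the weight matrix lives in the conductances, and the physics performs the whole matrix-vector product at once:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))  # conductances: 4 input rows x 3 output columns
V = rng.uniform(-1.0, 1.0, size=4)      # input voltages applied to the row wires

# Ohm's law gives each cross-point a current G[i, j] * V[i]; Kirchhoff's
# current law sums those currents down each column wire. The entire
# matrix-vector product is one physical step, however large the matrix.
I = G.T @ V
print(I)  # three column currents, computed "in memory" and in parallel
```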

The key element of MIT’s new analog processor technology is known as a protonic programmable resistor. These resistors, which are measured in nanometers (one nanometer is one billionth of a meter), are arranged in an array, like a chess board.

In the human brain, learning happens due to the strengthening and weakening of connections between neurons, called synapses. Deep neural networks have long adopted this strategy, where the network weights are programmed through training algorithms. In the case of this new processor, increasing and decreasing the electrical conductance of protonic resistors enables analog machine learning.

The conductance is controlled by the movement of protons. To increase the conductance, more protons are pushed into a channel in the resistor, while to decrease conductance protons are taken out. This is accomplished using an electrolyte (similar to that of a battery) that conducts protons but blocks electrons.
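
A toy model of that programming step may help; it is an assumption for illustration, not the measured device physics, and the conductance step per pulse is arbitrary:

```python
class ProtonicResistor:
    """Toy programmable resistor: conductance g moves in fixed steps
    between physical limits as pulses inject or extract protons."""

    def __init__(self, g=0.5, step=0.01, g_min=0.0, g_max=1.0):
        self.g, self.step = g, step
        self.g_min, self.g_max = g_min, g_max

    def pulse(self, n):
        """n > 0 pushes protons into the channel (raises conductance);
        n < 0 pulls them back out through the electrolyte (lowers it)."""
        self.g = min(self.g_max, max(self.g_min, self.g + n * self.step))
        return self.g

r = ProtonicResistor()
print(round(r.pulse(+10), 3))  # potentiate: 0.6
print(round(r.pulse(-25), 3))  # depress: 0.35
```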

To develop a super-fast and highly energy efficient programmable protonic resistor, the researchers looked to different materials for the electrolyte. While other devices used organic compounds, Onen focused on inorganic phosphosilicate glass (PSG).

PSG is basically silicon dioxide, the powdery desiccant material found in the tiny bags that come in the box with new furniture to remove moisture. It is studied as a proton conductor under humidified conditions for fuel cells, and it is the most well-known oxide used in silicon processing. To make PSG, a tiny bit of phosphorus is added to the silicon dioxide to give it special characteristics for proton conduction.

Onen hypothesized that an optimized PSG could have a high proton conductivity at room temperature without the need for water, which would make it an ideal solid electrolyte for this application. He was right.

Surprising speed

PSG enables ultrafast proton movement because it contains a multitude of nanometer-sized pores whose surfaces provide paths for proton diffusion. It can also withstand very strong, pulsed electric fields. This is critical, Onen explains, because applying more voltage to the device enables protons to move at blinding speeds.

“The speed certainly was surprising. Normally, we would not apply such extreme fields across devices, in order to not turn them into ash. But instead, protons ended up shuttling at immense speeds across the device stack, specifically a million times faster compared to what we had before. And this movement doesn’t damage anything, thanks to the small size and low mass of protons. It is almost like teleporting,” he says.

“The nanosecond timescale means we are close to the ballistic or even quantum tunneling regime for the proton, under such an extreme field,” adds Li.

Because the protons don’t damage the material, the resistor can run for millions of cycles without breaking down. This new electrolyte enabled a programmable protonic resistor that is a million times faster than their previous device and can operate effectively at room temperature, which is important for incorporating it into computing hardware.

Thanks to the insulating properties of PSG, almost no electric current passes through the material as protons move. This makes the device extremely energy efficient, Onen adds.

Now that they have demonstrated the effectiveness of these programmable resistors, the researchers plan to reengineer them for high-volume manufacturing, says del Alamo. Then they can study the properties of resistor arrays and scale them up so they can be embedded into systems.

At the same time, they plan to study the materials to remove bottlenecks that limit the voltage that is required to efficiently transfer the protons to, through, and from the electrolyte.

“Another exciting direction that these ionic devices can enable is energy-efficient hardware to emulate the neural circuits and synaptic plasticity rules that are deduced in neuroscience, beyond analog deep neural networks. We have already started such a collaboration with neuroscience, supported by the MIT Quest for Intelligence,” adds Yildiz.

“The collaboration that we have is going to be essential to innovate in the future. The path forward is still going to be very challenging, but at the same time it is very exciting,” del Alamo says.

“Intercalation reactions such as those found in lithium-ion batteries have been explored extensively for memory devices. This work demonstrates that proton-based memory devices deliver impressive and surprising switching speed and endurance,” says William Chueh, associate professor of materials science and engineering at Stanford University, who was not involved with this research. “It lays the foundation for a new class of memory devices for powering deep learning algorithms.”

“This work demonstrates a significant breakthrough in biologically inspired resistive-memory devices. These all-solid-state protonic devices are based on exquisite atomic-scale control of protons, similar to biological synapses but at orders of magnitude faster rates,” says Elizabeth Dickey, the Teddy & Wilton Hawkins Distinguished Professor and head of the Department of Materials Science and Engineering at Carnegie Mellon University, who was not involved with this work. “I commend the interdisciplinary MIT team for this exciting development, which will enable future-generation computational devices.”

This research is funded, in part, by the MIT-IBM Watson AI Lab.



from MIT News https://ift.tt/ubwSj84

3 Questions: John Durant on the new MIT Museum at Kendall Square

To the outside world, much of what goes on at MIT can seem mysterious. But the MIT Museum, whose new location is in the heart of Kendall Square, wants to change that. With a specially designed space by architects Höweler + Yoon, new exhibitions, and new public programs, this fall marks a reset for the 50-year-old institution. 

The museum hopes to inspire future generations of scientists, engineers, and innovators. And with its new free Cambridge Residents Membership, the museum is sending a clear message to its neighbors that all are welcome.

John Durant, the Mark R. Epstein (Class of 1963) Director of the MIT Museum and an affiliate of MIT’s Program in Science, Technology, and Society, speaks here about the museum’s transformation and what’s to come when it opens its doors to the public on Oct. 2.

Q: What role will the new museum play in making MIT more accessible and better understood?

A: The MIT Museum is consciously standing at the interface between a world-famous research institute and the wider world. Our task here is to “turn MIT inside out,” by making what MIT does visible and accessible to the wider world. We are focused on the question: What does all this intensive creativity, research, innovation, teaching, and learning at MIT mean? What does it all mean for the wider community of which we’re part?

Our job as a museum is to make what MIT does, both the processes and the products, accessible. We do this for two reasons. First, MIT's mission statement is a public service mission statement — it intends to help make the world a better place. The second reason is that MIT is involved with potentially world-changing ideas and innovations. If we're about ideas, discoveries, inventions, and applications that can literally change the world, then we have a responsibility to the world. We have a responsibility to make these things available to the people who will end up being affected by them, so that we can have the kinds of informed conversations that are necessary in a democratic society. 

“Essential MIT,” the first gallery in the museum, highlights the people behind the research and innovation at MIT. Although it's tempting to focus on the products of research, in the end everything we do is about the people who do it. We want to humanize research and innovation, and the best way to do that is to put the people — whether they are senior faculty, junior faculty, students, or even visitors — at the center of the story. In fact, there will be a big digital wall display of all the people that we comprise, a visualization of the MIT community, and the visitor will be able to join this community on a temporary basis if they want to, by putting themselves in the display. 

MIT can sometimes seem like a rather austere place. It may be seen as the kind of a place where only those super-smart people go to do super-smart things. We don't want to send that message. We're an open campus, and we want to send a message to people that whoever they are, from whatever background, whatever part of the community, whatever language they speak, wherever they live, they have a warm welcome with us. 

Q: How will the museum be showcasing innovation and research? 

A: The new museum is structured in a series of eight galleries that spiral up the building and travel from the local to the global and back again. “Essential MIT” is quite explicitly an introduction to the Institute itself. In that gallery, we feature a few examples of current big projects that illustrate the kind of work that MIT does. In the last gallery, the museum becomes local again through the museum’s collections. On the top floor, for the first time in the museum’s history, we will be able to show visitors that we’re a collecting museum, and that we hold all manner of objects and artifacts, which make up a permanent record — an archive, if you will — of the research and innovation that has gone on in this place.

But, of course, MIT doesn’t concern itself only with things of local significance. It’s involved in some of the biggest research questions that are being tackled worldwide: climate change, fundamental physics, genetics, artificial intelligence, the nature of cancer, and many more. Between the two bookends of these rather locally focused galleries, therefore, we have put galleries dealing with global questions in research and innovation. We’re trying to point out that current research and innovation raises big questions that go beyond the purely scientific or purely technical. We don’t want to shy away from the ethical, social, or even political questions posed by this new research, and some of these larger questions will be treated “head-on” in these galleries.

For example, we've never before tried to explain to people what AI is, and what it isn't — as well as some of its larger implications for society. In “AI: Mind the Gap,” we're going to explain what AI is good at doing, and by the same token, what it is not good at doing. For example, we will have an interactive exhibit that allows visitors to see a neural network learning in real time — in this case, how to recognize faces and facial expressions. Such learning machines are fundamental to what AI can do, and there are many positive applications of that in the real world. We will also give people the chance to use AI to create poetry. But we'll also be looking at some of the larger concerns that some of these technologies raise — issues like algorithmic bias, or the area called deepfake technology, which is increasingly widely used. In order to explain this technology to people, we are going to display an artwork based on the Apollo moon landings that uses deepfakes.

Almost nothing in the new museum will be familiar to returning visitors; the single exception is by careful design. We’re bringing with us some of the kinetic or moving sculptures by the artist Arthur Ganson. We value the connections his work raises at the interface between science, technology, and the arts. In trying to get people to think in different ways about what’s happening in the worlds of research and innovation, artists often bring fresh perspectives.

Q: What kinds of educational opportunities will the museum now be able to present?

A: The new museum has about 30 percent more space for galleries and exhibitions than the old museum, but it has about 300 percent more space for face-to-face activities. We’re going to have two fully equipped teaching labs in the new museum, where we can teach a very wide variety of subjects, including wet lab work. We shall also have the Maker Hub, a fully equipped maker space for the public. MIT’s motto is “mens et manus,” mind and hand, and we want to be true to that. We want to give people a chance not only just to look at stuff, but also to make stuff, to do it themselves.

At the heart of the new museum is a space called The Exchange, which is designed for face-to-face meetings, short talks, demonstrations, panel discussions, debates, films, anything you like. I think of The Exchange as the living room of the new museum, a place with double-height ceilings, bleacher seating, and a very big LED screen so that we can show almost anything we need to show. It's a place where visitors can gather, learn, discuss, and debate; where they can have the conversations about what to do about deepfakes, or how to apply gene editing most wisely, or whatever the issue of the day happens to be. We’re unapologetically putting these conversations center stage. 

Finally, the first month of the opening events includes an MIT Community Day, a Cambridge Residents Day, and the museum’s public opening on Oct. 2. The first week after the opening will feature the Cambridge Science Festival, the festival founded and presented by the MIT Museum, which has been re-imagined this year. The festival will feature large-scale projects, many taking place in MIT’s Open Space, an area we think of as the new museum’s “front lawn.”



from MIT News https://ift.tt/c1dNKkz

Sprint then stop? The brain is wired for the math to make it happen

Your new apartment is just a couple of blocks down the street from the bus stop, but today you are late and you see the bus roll past you. You break into a full sprint. Your goal is to get to the bus as fast as possible and then to stop exactly in front of the doors (which are never in exactly the same place along the curb) to enter before they close. To stop quickly and precisely enough, a new MIT study in mice finds, the mammalian brain is niftily wired to implement principles of calculus.

One might think that coming to a screeching halt at a target after a flat out run would be as simple as a reflex, but catching a bus or running right up to a visually indicated landmark to earn a reward is a learned, visually guided, goal-directed feat. In such tasks, which are a major interest in the lab of Mriganka Sur, the Newton Professor of Neuroscience in The Picower Institute for Learning and Memory at MIT, the crucial decision to switch from one behavior (running) to another (stopping) comes from the brain’s cortex, where the brain integrates the learned rules of life with sensory information to guide plans and actions.

“The goal is where the cortex comes in,” says Sur, a faculty member in MIT’s Department of Brain and Cognitive Sciences. “Where am I supposed to stop to achieve this goal of getting on the bus?”

And that’s also where it gets complicated. Mathematical models of the behavior that MIT postdoc Elie Adam developed predicted that a “stop” signal going directly from the M2 region of the cortex to regions in the brainstem of mice, which actually control the legs, would be processed too slowly.

“You have M2 that is sending a stop signal, but when you model it and go through the mathematics, you find that this signal, by itself, would not be fast enough to make the animal stop in time,” says Adam, first author of a new paper on this research, which appears in the journal Cell Reports.

So how does the brain speed up the process? What Adam, Sur, and co-author Taylor Johns found was that M2 sends the signal to an intermediary region called the subthalamic nucleus (STN), which then sends out two signals down two separate paths that re-converge in the brainstem. Why? Because the difference made by those two signals, one inhibitory and one excitatory, arriving one right after the other turns the problem from one of integration, which is a relatively slow adding up of inputs, to differentiation, which is a direct recognition of change. The shift in calculus implements the stop signal much more quickly.
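
A small numeric sketch (ours, not the paper’s model) makes the point: two copies of the same ramping stop command with opposite signs, arriving a few milliseconds apart, subtract to a finite difference that spikes the moment the command changes, while an integrator of the same command accumulates much more slowly. The signal shape and the 5-millisecond lag are invented for illustration:

```python
import numpy as np

dt = 0.001                                           # 1 ms resolution
t = np.arange(0.0, 0.5, dt)
stop_cmd = 1.0 / (1.0 + np.exp(-(t - 0.25) / 0.01))  # stop command ramping up at 0.25 s

excitatory = stop_cmd                  # one copy of the signal
inhibitory = np.roll(stop_cmd, 5)      # opposite-sign copy, delayed 5 ms
inhibitory[:5] = inhibitory[5]         # discard the roll's wraparound

difference = excitatory - inhibitory   # finite difference ~ derivative: spikes at the ramp
integral = np.cumsum(stop_cmd) * dt    # integration: slow accumulation, by contrast

print("difference peaks at t =", round(t[np.argmax(difference)], 3), "s")
print("integral reaches 0.1 at t =", round(t[np.searchsorted(integral, 0.1)], 3), "s")
```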

Adam’s model, employing systems and control theory from engineering, accurately predicted the speed needed for a proper stop and that differentiation would be necessary to achieve it, but it took a series of anatomical investigations and experimental manipulations to confirm the model’s predictions.

First, Adam confirmed that indeed M2 was producing a surge in neural activity only when the mice needed to achieve their trained goal of stopping at the landmark. He also showed it was sending the resulting signals to the STN. Other stops for other reasons did not employ that pathway. Moreover, artificially activating the M2-STN pathway compelled the mice to stop, and artificially inhibiting it caused mice to overrun the landmark somewhat more often.

The STN certainly then needed to signal the brainstem — specifically the pedunculopontine nucleus (PPN) in the mesencephalic locomotor region. But when the scientists looked at neural activity starting in the M2 and then quickly resulting in the PPN, they saw that different types of cells in the PPN responded with different timing. In particular, before the stop, excitatory cells were active and their activity reflected the speed of the animal during stops. Then, looking at the STN, they saw two kinds of surges of activity around stops — one slightly slower than the other — that were conveyed either directly to the PPN through excitation or indirectly via the substantia nigra pars reticulata through inhibition. The net result of the interplay of these signals in the PPN was an inhibition sharpened by excitation. That sudden change could be quickly detected by differentiation to implement stopping.

“An inhibitory surge followed by excitation can create a sharp [change of] signal,” Sur says.

The study dovetails with other recent papers. Working with Picower Institute investigator Emery N. Brown, Adam recently produced a new model of how deep brain stimulation in the STN quickly corrects motor problems that result from Parkinson’s disease. And last year, members of Sur’s lab, including Adam, published a study showing how the cortex overrides the brain’s more deeply ingrained reflexes in visually guided motor tasks. Together, such studies contribute to understanding how the cortex can consciously control instinctually wired motor behaviors, but also how important deeper regions, such as the STN, are to quickly implementing goal-directed behavior. A recent review from the lab expounds on this.

Adam speculates that the “hyperdirect pathway” of cortex-to-STN communications may have a role broader than quickly stopping action, potentially expanding beyond motor control to other brain functions such as interruptions and switches in thinking or mood.

The JPB Foundation, the National Institutes of Health, and the Simons Foundation Autism Research Initiative funded the study.



from MIT News https://ift.tt/TOpw4WN

Friendly skies? Study charts Covid-19 odds for plane flights

What are the chances you will contract Covid-19 on a plane flight? A study led by MIT scholars offers a calculation of that for the period from June 2020 through February 2021. While the conditions that applied at that stage of the Covid-19 pandemic differ from those of today, the study offers a method that could be adapted as the pandemic evolves.

The study estimates that from mid-2020 through early 2021, the probability of getting Covid-19 on an airplane surpassed 1 in 1,000 on a totally full flight lasting two hours at the height of the early pandemic, roughly December 2020 and January 2021. It dropped to about 1 in 6,000 on a half-full two-hour flight when the pandemic was at its least severe, in the summer of 2020. The overall risk of transmission from June 2020 through February 2021 was about 1 in 2,000, with a mean of 1 in 1,400 and a median of 1 in 2,250.

To be clear, current conditions differ from the study’s setting. Masks are no longer required for U.S. domestic passengers; in the study’s time period, airlines were commonly leaving middle seats open, which they are no longer doing; and newer Covid-19 variants are more contagious than the virus was during the study period. While those factors may increase the current risk, most people have received Covid-19 vaccinations since February 2021, which could serve to lower today’s risk — though the precise impact of those vaccines against new variants is uncertain.

Still, the study does provide a general estimate about air travel safety with regard to Covid-19 transmission, and a methodology that can be applied to future studies. Some U.S. carriers at the time stated that onboard transmission was “virtually nonexistent” and “nearly nonexistent,” but as the research shows, there was a discernible risk. On the other hand, passengers were not exactly facing coin-flip odds of catching the virus in flight, either.

“The aim is to set out the facts,” says Arnold Barnett, a management professor at MIT and aviation risk expert, who is co-author of a recent paper detailing the study’s results. “Some people might say, ‘Oh, that doesn’t sound like very much.’ But if we at least tell people what the risk is, they can make judgments.”

As Barnett also observes, a round-trip flight with a change of planes and two two-hour segments in each direction counts as four flights in this accounting, so a 1 in 1,000 probability, per flight, would lead to approximately a 1 in 250 chance for such a trip as a whole.
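
That trip-level figure is a standard independence calculation, sketched below under the simplifying assumption of an identical, independent 1-in-1,000 risk on each of the four segments:

```python
p_flight = 1 / 1000                 # assumed per-segment infection risk
p_trip = 1 - (1 - p_flight) ** 4    # infected on at least one of four segments
print(f"{p_trip:.5f} = about 1 in {round(1 / p_trip)}")  # 0.00399 = about 1 in 250
```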

All told, given about 204 million U.S. domestic airline passengers from June 2020 through February 2021, the researchers estimate that about 100,000 cases of Covid-19 were transmitted on flights during that time.

The paper, “Covid-19 infection risk on U.S. domestic airlines,” appears in advance online form this month in the journal Health Care Management Science. The authors are Barnett, who is the George Eastman Professor of Management Science in the MIT Sloan School of Management; and Keith Fleming, a student from MIT Sloan’s master’s program in business analytics.

Barnett is a longtime expert in airline safety who has analyzed the long-term reduction in aviation crashes in recent decades, among other topics. The current study about transmission of the Covid-19 virus was spurred by an airline policy change from early in the pandemic — Delta Air Lines started leaving open the middle seats on domestic flights, in order to de-densify its planes, a practice that some other airlines followed for a while. (Delta and all other airlines are no longer using this policy.)

To conduct the study, Barnett and Fleming amalgamated public health statistics about Covid-19 prevalence, data from peer-reviewed studies about Covid-19 contagion mechanisms, data about the spread of viruses on airlines generally and the spread of Covid-19 on international airlines, and some available industry data about seat-occupancy rates on U.S. domestic jet flights. They then estimated transmission risks on U.S. domestic airlines through extensive modeling.

The researchers used a two-hour flight for their estimates because that is about the average duration of a domestic flight in the U.S. As their airplane settings, the scholars used a Boeing 737 and Airbus A320, workhorse planes in the U.S. with a single aisle, three seats on either side, and typical capacities of about 175 passengers. Most such planes do have high-functioning HEPA air-purification systems, which help reduce the transmission risk of airborne illnesses.

Using the prevalence of Covid-19 in the U.S. as a starting point, and integrating airborne transmission data, Barnett and Fleming modeled what would likely happen on flights filled with a wide variety of passenger loads. The modeling includes a series of adjustments to make the passenger profile as realistic as possible. For instance, airline passengers are a bit more affluent than the U.S. population as a whole, and Covid-19 has affected more affluent populations slightly less than other social groups, so those things are quantified in the study, among other factors.

Ultimately Barnett and Fleming did find a notable dropoff in transmission risk when planes have fewer people on them — whether having fewer passengers is due to lack of demand, or because airlines were leaving middle seats open. While it is true that leaving middle seats open does not eliminate all proximity with all other passengers, it does reduce the extent of close proximity with others, and thus appears to lower the overall transmission risk. 

“The [medical] literature suggests the proximity matters,” Barnett says.

As Barnett readily notes, pandemic circumstances and airline policies keep evolving, meaning that their estimates for the 2020-2021 period in the study may not translate precisely to the summer of 2022. Even with vaccines widely available, he believes the reduced amount of masking, the more-crowded flights, and the easy transmissibility of current variants all mean that risks could have increased.

“If we were to do an estimate of the chances of infection now, it could be considerably higher,” Barnett says.

Still, he adds, the approach used in this paper could readily be adapted to updated studies about in-flight transmission risks, for Covid-19 or other viruses.

“Modeling like that presented here could help in assessing the changed situation, much as the general approach might help in connection with a future pandemic,” Barnett and Fleming write in the paper.

Open access funding making the paper free for readers was provided by MIT Libraries.



from MIT News https://ift.tt/KQL8SJv

Wednesday, July 27, 2022

Emma Gibson: Optimizing health care logistics in Africa

Growing up in South Africa at the turn of the century, Emma Gibson saw the rise of the HIV/AIDS epidemic and its devastating impact on her home country, where many people lacked life-saving health care. At the time, Gibson was too young to understand what a sexually transmitted infection was, but she knew that HIV was infecting millions of South Africans and AIDS was taking hundreds of thousands of lives. “As a child, I was terrified by this monster that was HIV and felt so powerless to do anything about it,” she says.

Now, as an adult, her childhood fear of the HIV epidemic has evolved into a desire to fight it. Gibson seeks to improve health care for HIV and other diseases in regions with limited resources, including South Africa. She wants to help health care facilities in these areas to use their resources more effectively so that patients can more easily obtain care.

To help reach her goal, Gibson sought mathematics and logistics training through higher education in South Africa. She first earned her bachelor’s degree in mathematical sciences at the University of the Witwatersrand, and then her master’s degree in operations research at Stellenbosch University. There, she learned to tackle complex decision-making problems using math, statistics, and computer simulations.

During her master’s, Gibson studied the operational challenges faced in rural South African health care facilities by working with staff at Zithulele Hospital in the Eastern Cape, one of the country’s poorest provinces. Her research focused on ways to reduce hours-long wait times for patients seeking same-day care. In the end, she developed a software tool to model patient congestion throughout the day and optimize staff schedules accordingly, enabling the hospital to care for its patients more efficiently.

After completing her master’s, Gibson wanted to further her education outside of South Africa and left to pursue a PhD in operations research at MIT. Upon arrival, she branched out in her research and worked on a project to improve breast cancer treatment in U.S. health care, a very different environment from what she was used to.

Two years later, Gibson had the opportunity to return to researching health care in resource-limited settings and began working with Jónas Jónasson, an associate professor at the MIT Sloan School of Management, on a new project to improve diagnostic services in sub-Saharan Africa. For the past four years, she has been working diligently on this project in collaboration with researchers at the Indian School of Business and Northwestern University. “My love language is time,” she says. “If I’m investing a lot of time in something, I really value it.”

Scheduling sample transport

Diagnostic testing is an essential tool that allows medical professionals to identify new diagnoses in patients and monitor patients’ conditions as they undergo treatment. For example, people living with HIV require regular blood tests to ensure that their prescribed treatments are working effectively and provide an early warning of potential treatment failures.

For Gibson’s current project, she’s trying to improve diagnostic services in Malawi, a landlocked country in southeast Africa. “We have the tools” to diagnose and treat diseases like HIV, she says. “But in resource-limited settings, we often lack the money, the staff, and the infrastructure to reach every patient that needs them.”

When diagnostic testing is needed, clinicians collect samples from patients and send the samples to be tested at a laboratory, which then returns the results to the facility where the patient is treated. To move these items between facilities and laboratories, Malawi has developed a national sample transportation network. The transportation system plays an important role in linking remote, rural facilities to laboratory services and ensuring that patients in these areas can access diagnostic testing through community clinics. Samples collected at these clinics are first transported to nearby district hubs, and then forwarded to laboratories located in urban areas. Since most facilities do not have computers or communications infrastructure, laboratories print copies of test results and send them back to facilities through the same transportation process.

The sample transportation cycle is onerous, but it’s a practical solution to a difficult problem. “During the Covid pandemic, we saw how hard it was to scale up diagnostic infrastructure,” Gibson says. Diagnostic services in sub-Saharan Africa face “similar challenges, but in a much poorer setting.”

In Malawi, sample transportation is managed by a nongovernmental organization called Riders 4 Health. The organization has around 80 couriers on motorcycles who transport samples and test results between facilities. “When we started working with [Riders], the couriers operated on fixed weekly schedules, visiting each site once or twice a week,” Gibson says. But that led to “a lot of unnecessary trips and delays.”

To make sample transportation more efficient, Gibson developed a dynamic scheduling system that adapts to the current demand for diagnostic testing. The system consists of two main parts: an information sharing platform that aggregates sample transportation data, and an algorithm that uses the data to generate optimized routes and schedules for sample transport couriers.
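
As a hedged sketch of the scheduling idea (our simplification with invented facility names and coordinates, not Gibson’s actual optimizer), a demand-responsive route visits only the facilities that reported waiting samples or results, here chosen with a simple nearest-neighbor heuristic standing in for the real routing algorithm:

```python
import math

def nearest_neighbor_route(positions, start, pending):
    """Greedy courier route: from the hub, always ride to the closest
    facility that still has samples or results waiting, then return."""
    route, current = [start], start
    remaining = set(pending)
    while remaining:
        nearest = min(remaining,
                      key=lambda f: math.dist(positions[current], positions[f]))
        route.append(nearest)
        current = nearest
        remaining.remove(nearest)
    route.append(start)
    return route

# Hypothetical coordinates; only the three sites reporting demand are visited.
positions = {"hub": (0.0, 0.0), "clinic_A": (5.0, 1.0),
             "clinic_C": (2.0, 4.0), "clinic_F": (6.0, 5.0)}
print(nearest_neighbor_route(positions, "hub", ["clinic_A", "clinic_C", "clinic_F"]))
# ['hub', 'clinic_C', 'clinic_F', 'clinic_A', 'hub']
```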

In 2019, Gibson ran a four-month-long pilot test for this system in three out of the 27 districts in Malawi. During the pilot study, six couriers transported over 20,000 samples and results across 51 health care facilities, and 150 health care workers participated in data sharing.

The pilot was a success. Gibson’s dynamic scheduling system eliminated about half the unnecessary trips and reduced transportation delays by 25 percent — a delay that used to be four days was reduced to three. Now, Riders 4 Health is developing their own version of Gibson’s system to operate nationally in Malawi. Throughout this project, “we focused on making sure this was something that could grow with the organization,” she says. “It’s gratifying to see that actually happening.”

Leveraging patient data

Gibson is completing her MIT degree this September but will continue working to improve health care in Africa. After graduation, she will join the technology and analytics health care practice of an established company in South Africa. Her initial focus will be on public health care institutions, including Chris Hani Baragwanath Academic Hospital in Johannesburg, the third-largest hospital in the world.

In this role, Gibson will work to fill in gaps in African patient data for medical operational research and develop ways to use this data more effectively to improve health care in resource-limited areas. For example, better data systems can help to monitor the prevalence and impact of different diseases, guiding where health care workers and researchers put their efforts to help the most people. “You can’t make good decisions if you don’t have all the information,” Gibson says.

To best leverage patient data for improving health care, Gibson plans to reevaluate how data systems are structured and used in the hospital. For ideas on upgrading the current system, she’ll look to existing data systems in other countries to see what works and what doesn’t, while also drawing upon her past research experience in U.S. health care. Ultimately, she’ll tailor the new hospital data system to South African needs to accurately inform future directions in health care.

Gibson’s new job — her “dream job” — will be based in the United Kingdom, but she anticipates spending a significant amount of time in Johannesburg. “I have so many opportunities in the wider world, but the ones that appeal to me are always back in the place I came from,” she says.



from MIT News https://ift.tt/3sDnPCp

Study finds Wikipedia influences judicial behavior

Mixed appraisals of one of the internet’s major resources, Wikipedia, are reflected in the slightly dystopian article “List of Wikipedia Scandals.” Yet billions of users routinely flock to the online, anonymously editable, encyclopedic knowledge bank for just about everything. How this unauthoritative source influences our discourse and decisions is hard to reliably trace. But a new study attempts to measure how knowledge gleaned from Wikipedia may play out in one specific realm: the courts.

A team of researchers led by Neil Thompson, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), recently came up with a friendly experiment: creating new legal Wikipedia articles to examine how they affect the legal decisions of judges. They began by developing more than 150 new Wikipedia articles on Irish Supreme Court decisions, written by law students. Half of these were randomly chosen to be uploaded online, where they could be used by judges, clerks, lawyers, and so on — the “treatment” group. The other half were kept offline, and this second group of cases provided the counterfactual basis for what would happen to a case absent a Wikipedia article about it (the “control”). They then looked at two measures: whether the cases were more likely to be cited as precedents by subsequent judicial decisions, and whether the argumentation in court judgments echoed the linguistic content of the new Wikipedia pages.

It turned out the published articles tipped the scales: Getting a public Wikipedia article increased a case’s citations by more than 20 percent. The increase was statistically significant, and the effect was particularly strong for cases that supported the argument the citing judge was making in their decision (but not the converse). Unsurprisingly, the increase was bigger for citations by lower courts — the High Court — and mostly absent for citations by appellate courts — the Supreme Court and Court of Appeal. The researchers suspect this is showing that Wikipedia is used more by judges or clerks who have a heavier workload, for whom the convenience of Wikipedia offers a greater attraction. 

“To our knowledge, this is the first randomized field experiment that investigates the influence of legal sources on judicial behavior. And because randomized experiments are the gold standard for this type of research, we know the effect we are seeing is causation, not just correlation,” says Thompson, the lead author of the study. “The fact that we wrote up all these cases, but the only ones that ended up on Wikipedia were those that won the proverbial 'coin flip,' allows us to show that Wikipedia is influencing both what judges cite and how they write up their decisions.”

“Our results also highlight an important public policy issue,” Thompson adds. “With a source that is as widely used as Wikipedia, we want to make sure we are building institutions to ensure that the information is of the highest quality. The finding that judges or their staffs are using Wikipedia is a much bigger worry if the information they find there isn’t reliable.” 

A paper describing the study is being published in “The Cambridge Handbook of Experimental Jurisprudence” (Cambridge University Press, 2022). Joining Thompson on the paper are Brian Flanagan and Edana Richardson of the National University of Ireland at Maynooth in Ireland, Brian McKenzie of Maynooth University in Ireland, and Xueyun Luo of Cornell University.

The researchers' statistical model essentially compared how much citation behavior changed for the treatment group (first difference: before versus after) and how that compared with the change that happened for the control group (second difference: treatment versus control).
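
In code, that two-difference comparison is compact. The numbers below are hypothetical, chosen only to show the shape of the estimator, not the study’s data:

```python
# Hypothetical numbers purely to illustrate the difference-in-differences logic:
treated_before, treated_after = 1.00, 1.50   # mean citations per treated case
control_before, control_after = 1.00, 1.20   # mean citations per control case

first_difference = treated_after - treated_before    # change for cases with articles
second_difference = control_after - control_before   # background change without them
effect = first_difference - second_difference        # difference-in-differences estimate
print(round(effect, 2))  # 0.3 extra citations attributable to the article, in this toy example
```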

In 2018, Thompson first explored the idea of proving the causal role that Wikipedia plays in shaping knowledge and behavior by looking at how it shapes academic science. It turned out that adding scientific articles, in this case about chemistry, changed how the topic was discussed in scientific literature, and science articles added as references to Wikipedia received more academic citations as well.

That led Brian McKenzie, an associate professor at Maynooth University, to make a call. “I was working with students to add articles to Wikipedia at the time I read Neil’s research on the influence of Wikipedia on scientific research,” explains McKenzie. “There were only a handful of Irish Supreme Court cases on Wikipedia, so I reached out to Neil to ask if he wanted to design another iteration of his experiment using court cases.”

The Irish legal system proved the perfect test bed, as it shares a key similarity with other national legal systems, such as those of the United Kingdom and the United States — it operates within a hierarchical court structure where decisions of higher courts subsequently bind lower courts. Also, there are relatively few Wikipedia articles on Irish Supreme Court decisions compared to those of the U.S. Supreme Court — over the course of their project, the researchers increased the number of such articles tenfold.

In addition to looking at the case citations made in the decisions, the team also analyzed the language used in the written decision using natural language processing. What they found were the linguistic fingerprints of the Wikipedia articles that they’d created.

So what might this influence look like? Suppose A sues B in federal district court. A argues that B is liable for breach of contract; B acknowledges A’s account of the facts but maintains that they gave rise to no contract between them. The assigned judge, conscious of the heavy work already delegated to her clerks, decides to conduct her own research. On reviewing the parties’ submissions, the judge forms the preliminary view that a contract has not truly been formed and that she should give judgment for the defendant. To write her official opinion, the judge googles some previous decisions cited in B’s brief that seem similar to the case between A and B. On confirming their similarity by reading the relevant case summaries on Wikipedia, the judge paraphrases some of the text of the Wikipedia entries in her draft opinion to complete her analysis. The judge then enters her judgment and publishes her opinion.

“The text of a court’s judgment itself will guide the law as it becomes a source of precedent for subsequent judicial decision-making. Future lawyers and judges will look back at that written judgment, and use it to decide what its implications are so that they can treat ‘like’ cases alike,” says co-author Brian Flanagan. “If the text itself is influenced, as this experiment shows, by anonymously sourced internet content, that’s a problem. Given the many potential cracks that have opened up in the ‘information superhighway’ that is the internet, you can imagine that this vulnerability could potentially lead to adversarial actors manipulating information. If easily accessible analysis of legal questions is already being relied on, it behooves the legal community to accelerate efforts to ensure that such analysis is both comprehensive and expert.”



from MIT News https://ift.tt/sGLSQz3

A global resource for better transportation systems

Launched in 2020, the MIT Mobility Initiative (MMI) is a unique cross-Institute initiative aimed at convening key stakeholders to drive innovation, while providing unbiased strategic direction to guide a deeper collective understanding of mobility challenges, and shape a mobility system that is sustainable, safe, clean, and accessible.

“The mobility system is undergoing profound transformation with new technologies — autonomy, electrification, and AI — colliding with new and evolving priorities and objectives including decarbonization, public health, and social justice,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. “The time frame for these changes, decarbonization in particular, is short in a system with massive amounts of fixed, long-life assets and entrenched behaviors and cultures.”

The MMI, a collaboration between the School of Engineering and the School of Architecture and Planning, is designed as a platform to connect all mobility and transportation activities at MIT, building an integrated approach for the Institute’s efforts on research, education, entrepreneurship, and civic engagement related to transportation systems.

“As cities grapple with the challenges of congestion, pollution, and vehicle-related fatalities, new mobility systems offer the possibility for dramatic urban transformation,” says Hashim Sarkis, dean of the MIT School of Architecture and Planning. “Our aim is to provide a platform for many stakeholders to jointly build a better mobility future in the world’s cities.”

The five inaugural industry affiliate members of the MMI represent the many diverse sectors engaging with and innovating within mobility:

  • Ferrovial, a sustainable infrastructure and mobility company;
     
  • Hyundai Motor Group, a global corporation offering smart and sustainable mobility solutions;
     
  • Intel, a technology leader driving innovation, enabling global progress, and enriching lives;
     
  • Liberty Mutual Insurance, a global property and casualty insurer providing protection for the unexpected in an increasingly mobile world; and
     
  • Toyota, a multinational company committed to advancing sustainable, next-generation mobility.

“Our collaboration with the MIT Mobility Initiative aligns with our strategic focus on open innovation and learning outside the enterprise,” says Youngcho Chi, president and chief innovation officer for Hyundai.

Dimitris Bountolos, CIO for Ferrovial, adds, “The MMI is the perfect meeting point to share Ferrovial knowledge, learning together with mobility experts, technologists, and OEMs [original equipment manufacturers] on how to build a sustainable future of mobility where infrastructures will be key.”

Ann Stanberry, chief strategy officer for Liberty Mutual, says, “The way people move is constantly evolving, and our engagement with the MIT Mobility Initiative and its industry members ensures that we are on the forefront of this change — working together to help people get where they need to be safely and with peace of mind.”

The Mobility Initiative’s research agenda is centered around answering the most difficult, cross-disciplinary questions inherent in today’s mobility challenges. The initial research is focused on autonomous and connected mobility and electric vehicle charging infrastructure. MIT researchers are exploring how to quantify and value risk for autonomous vehicles; how to ensure the cybersecurity of the infrastructure supporting their movement; and how to use innovative data methodologies to identify the gaps in today’s electric vehicle charging infrastructure.

“Strengthening MIT’s engagement in the global mobility community is a priority for the initiative. It is critical to build a program that would ultimately have a real-world impact,” says Jinhua Zhao, the Edward and Joyce Linde Associate Professor of City and Transportation Planning, and founder and faculty director of the MIT Mobility Initiative. Zhao is host of the MIT Mobility Forum, which features the innovative research taking place across disciplines at MIT. Founded in 2020, the weekly forum now reaches nearly 10,000 individuals across the globe.

“The future of mobility is created within a complex and highly dynamic ecosystem of established transportation companies, Big Tech firms, and an explosion of startups. Governments are also actively rewriting the legal and regulatory frameworks for mobility,” says John Moavenzadeh, executive director of the MIT Mobility Initiative. Moavenzadeh led the design of the MMI Mobility Vision Day last November, which convened more than 130 leaders, including over 40 C-suite business executives, to address multiple dimensions of the mobility system.

“The MMI recognizes the importance of engaging with the business and government leaders who are ‘on the front lines’ of the mobility revolution,” says Moavenzadeh.



from MIT News https://ift.tt/9ygomGA

Tuesday, July 26, 2022

Researchers 3D print sensors for satellites

MIT scientists have created the first completely digitally manufactured plasma sensors for orbiting spacecraft. These plasma sensors, also known as retarding potential analyzers (RPAs), are used by satellites to determine the chemical composition and ion energy distribution of the atmosphere.

The 3D-printed and laser-cut hardware performed as well as state-of-the-art semiconductor plasma sensors that are manufactured in a cleanroom, an expensive process that requires weeks of intricate fabrication. By contrast, the 3D-printed sensors can be produced for tens of dollars in a matter of days.

Due to their low cost and speedy production, the sensors are ideal for CubeSats. These inexpensive, low-power, and lightweight satellites are often used for communication and environmental monitoring in Earth’s upper atmosphere.

The researchers developed RPAs using a glass-ceramic material that is more durable than traditional sensor materials like silicon and thin-film coatings. By using the glass-ceramic in a fabrication process that was developed for 3D printing with plastics, they were able to create sensors with complex shapes that can withstand the wide temperature swings a spacecraft would encounter in low Earth orbit.

“Additive manufacturing can make a big difference in the future of space hardware. Some people think that when you 3D-print something, you have to concede less performance. But we’ve shown that is not always the case. Sometimes there is nothing to trade off,” says Luis Fernando Velásquez-García, a principal scientist in MIT’s Microsystems Technology Laboratories (MTL) and senior author of a paper presenting the plasma sensors.

Joining Velásquez-García on the paper are lead author and MTL postdoc Javier Izquierdo-Reyes; graduate student Zoey Bigelow; and postdoc Nicholas K. Lubinsky. The research is published in Additive Manufacturing.

Versatile sensors

An RPA was first used in a space mission in 1959. The sensors detect the energy in ions, or charged particles, that are floating in plasma, which is a superheated mix of molecules present in the Earth’s upper atmosphere. Aboard an orbiting spacecraft like a CubeSat, the versatile instruments measure energy and conduct chemical analyses that can help scientists predict the weather or monitor climate change. 

The sensors contain a series of electrically charged meshes dotted with tiny holes. As plasma passes through the holes, electrons and other particles are stripped away until only ions remain. These ions create an electric current that the sensor measures and analyzes.
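
In general RPA practice, the collected current is recorded as the retarding voltage on the meshes is swept, and the ion energy distribution is recovered from the derivative of that curve. The following is a minimal Python sketch of that analysis on synthetic data; the beam parameters and variable names are illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy.special import erfc

    # Synthetic retarding-potential sweep: an ion beam with an assumed
    # 5 eV drift energy and 1 eV spread (illustrative numbers only).
    voltages = np.linspace(0.0, 15.0, 151)       # retarding grid bias, V
    drift_ev, spread_ev, i_max = 5.0, 1.0, 1e-6  # assumed beam parameters

    # Ions with energy below the grid bias are repelled, so the collected
    # current falls off as a complementary error function of the bias.
    current = 0.5 * i_max * erfc((voltages - drift_ev) / (np.sqrt(2) * spread_ev))

    # The ion energy distribution is proportional to -dI/dV.
    iedf = -np.gradient(current, voltages)
    print(f"Estimated drift energy: {voltages[np.argmax(iedf)]:.1f} eV")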

Key to the success of an RPA is the housing structure that aligns the meshes. It must be electrically insulating while also able to withstand sudden, drastic swings in temperature. The researchers used a printable glass-ceramic material known as Vitrolite, which displays both properties.

Pioneered in the early 20th century, Vitrolite was often used in colorful tiles that became a common sight in art deco buildings.

The durable material can also withstand temperatures as high as 800 degrees Celsius without breaking down, whereas polymers used in semiconductor RPAs start to melt at 400 degrees Celsius.

“When you make this sensor in the cleanroom, you don’t have the same degree of freedom to define materials and structures and how they interact together. What made this possible is the latest developments in additive manufacturing,” Velásquez-García says.

Rethinking fabrication

The 3D printing process for ceramics typically involves ceramic powder that is hit with a laser to fuse it into shapes, but this process often leaves the material coarse and creates weak points due to the high heat from the lasers.

Instead, the MIT researchers used vat polymerization, a process introduced decades ago for additive manufacturing with polymers or resins. With vat polymerization, a 3D structure is built one layer at a time by repeatedly submerging a platform into a vat of liquid material, in this case Vitrolite. Ultraviolet light cures the material after each layer is added, and then the platform is submerged in the vat again. Each layer is only 100 microns thick (roughly the diameter of a human hair), enabling the creation of smooth, pore-free, complex ceramic shapes.
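
As a back-of-the-envelope illustration of what 100-micron layers imply for a printed part (a sketch only; the part height and per-layer cure time here are hypothetical, not from the paper):

    LAYER_THICKNESS_UM = 100  # each UV-cured layer is ~100 microns thick

    def layers_needed(part_height_mm: float) -> int:
        """Number of submerge-and-cure cycles for a part of the given height."""
        return round(part_height_mm * 1000 / LAYER_THICKNESS_UM)

    # A hypothetical 20 mm tall RPA housing: 200 layers; at an assumed
    # 10 s of UV exposure per layer, roughly 33 minutes of curing.
    n = layers_needed(20.0)
    print(n, "layers,", round(n * 10 / 60), "minutes of curing")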

In digital manufacturing, objects described in a design file can be very intricate. This precision allowed the researchers to create laser-cut meshes with unique shapes so the holes lined up perfectly when they were set inside the RPA housing. This enables more ions to pass through, which leads to higher-resolution measurements.

Because the sensors were cheap to produce and could be fabricated so quickly, the team prototyped four unique designs.

While one design was especially effective at capturing and measuring a wide range of plasmas, like those a satellite would encounter in orbit, another was well-suited for sensing extremely dense and cold plasmas, which are typically only measurable using ultraprecise semiconductor devices.    

This high precision could enable 3D-printed sensors for applications in fusion energy research or supersonic flight. The rapid prototyping process could even spur more innovation in satellite and spacecraft design, Velásquez-García adds.

“If you want to innovate, you need to be able to fail and afford the risk. Additive manufacturing is a very different way to make space hardware. I can make space hardware and if it fails, it doesn’t matter because I can make a new version very quickly and inexpensively, and really iterate on the design. It is an ideal sandbox for researchers,” he says.

While Velásquez-García is pleased with these sensors, in the future he wants to enhance the fabrication process. Reducing the thickness of layers or pixel size in glass-ceramic vat polymerization could create complex hardware that is even more precise. Moreover, fully additively manufacturing the sensors would make them compatible with in-space manufacturing. He also wants to explore the use of artificial intelligence to optimize sensor design for specific use cases, such as greatly reducing their mass while ensuring they remain structurally sound.

This work was funded, in part, by MIT, the MIT-Tecnológico de Monterrey Nanotechnology Program, the MIT Portugal Program, and the Portuguese Foundation for Science and Technology.



from MIT News https://ift.tt/iJcGnAE

Q&A: Warehouse robots that feel by sight

More than a decade ago, Ted Adelson set out to create tactile sensors for robots that would give them a sense of touch. The result? A handheld imaging system powerful enough to visualize the raised print on a dollar bill. The technology was spun out into GelSight to answer an industry need for low-cost, high-resolution imaging.

An expert in both human and machine vision, Adelson was pleased to have created something useful. But he never lost sight of his original dream: to endow robots with a sense of touch. In a new Science Hub project with Amazon, he’s back on the case. He plans to build out the GelSight system with added capabilities to sense temperature and vibrations. A professor in MIT’s Department of Brain and Cognitive Sciences, Adelson recently sat down to talk about his work.

Q: What makes the human hand so hard to recreate in a robot?

A: A human finger has soft, sensitive skin, which deforms as it touches things. The question is how to get precise sensing when the sensing surface itself is constantly moving and changing during manipulation.

Q: You’re an expert on human and computer vision. How did touch grab your interest?

A: When my daughters were babies, I was amazed by how skillfully they used their fingers and hands to explore the world. I wanted to understand the way they were gathering information through their sense of touch. Being a vision researcher, I naturally looked for a way to do it with cameras.

Q: How does the GelSight robot finger work? What are its limitations?

A: A camera captures an image of the skin from inside, and a computer vision system calculates the skin’s 3D deformation. GelSight fingers offer excellent tactile acuity, far exceeding that of human fingers. However, the need for an inner optical system limits the sizes and shapes we can achieve today.

Q: How did you come up with the idea of giving a robot finger a sense of touch by, in effect, giving it sight?

A: A camera can tell you about the geometry of the surface it is viewing. By putting a tiny camera inside the finger, we can measure how the skin geometry is changing from point to point. This tells us about tactile properties like force, shape, and texture.
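
Camera-based tactile sensors of this kind are often described in terms of photometric stereo: recovering surface normals from images of the same patch under different, known illumination directions. The following is a minimal Python sketch of that idea, not the GelSight implementation itself; the light directions and pixel intensities are made up.

    import numpy as np

    # Three known unit lighting directions (assumed values).
    L = np.array([[0.0, 0.0, 1.0],
                  [0.8, 0.0, 0.6],
                  [0.0, 0.8, 0.6]])

    def surface_normal(intensities: np.ndarray) -> np.ndarray:
        """Recover a unit surface normal at one pixel from three images.

        For a matte (Lambertian) surface, intensity = albedo * (light . normal),
        so stacking three lights gives a linear system for the normal.
        """
        g = np.linalg.solve(L, intensities)  # g = albedo * normal
        return g / np.linalg.norm(g)

    # A patch that is brightest under the overhead light faces the camera.
    print(surface_normal(np.array([1.0, 0.6, 0.6])))  # -> [0. 0. 1.]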

Q: How did your prior work on cameras figure in?

A: My prior research on the appearance of reflective materials helped me engineer the optical properties of the skin. We create a very thin matte membrane and light it with grazing illumination so all the details can be seen.

Q: Did you know there was a market for measuring 3D surfaces?

A: No. My postdoc Kimo Johnson posted a YouTube video showing GelSight’s capabilities about a decade ago. The video went viral, and we got a flood of email with interesting suggested applications. People have since used the technology for measuring the microtexture of shark skin, packed snow, and sanded surfaces. The FBI uses it in forensics to compare spent cartridge casings.

Q: What’s GelSight’s main application?  

A: Industrial inspection. For example, an inspector can press a GelSight sensor against a scratch or bump on an airplane fuselage to measure its exact size and shape in 3D. This application may seem quite different from the original inspiration of baby fingers, but it shows that tactile sensing can have many uses. As for robotics, tactile sensing is mainly a research topic right now, but we expect it to increasingly be useful in industrial robots.

Q: You’re now building in a way to measure temperature and vibrations. How do you do that with a camera? How else will you try to emulate human touch?

A: You can convert temperature to a visual signal that a camera can read by using liquid crystals, the molecules that make mood rings and forehead thermometers change color. For vibrations we will use microphones. We also want to extend the range of shapes a finger can have. Finally, we need to understand how to use the information coming from the finger to improve robotics.

Q: Why are we sensitive to temperature and vibrations, and why is that useful for robotics?

A: Identifying material properties is an important aspect of touch. Sensing temperature helps you tell whether something is metal or wood, and whether it is wet or dry. Vibrations can help you distinguish a slightly textured surface, like unvarnished wood, from a perfectly smooth surface, like wood with a glossy finish.

Q: What’s next?

A: Making a tactile sensor is the first step. Integrating it into a useful finger and hand comes next. Then you have to get the robot to use the hand to perform real-world tasks.

Q: Evolution gave us five fingers and two hands. Will robots have the same?

A: Different robots will have different kinds of hands, optimized for different situations. Big hands, small hands, hands with three fingers or six fingers, and hands we can’t even imagine today. Our goal is to provide the sensing capability, so that the robot can skillfully interact with the world.



from MIT News https://ift.tt/2DYUSJX

Monday, July 25, 2022

The hub of the local robotics industry

The MIT spinout Ori attracted a lot of attention when it unveiled its shapeshifting furniture prototypes in 2014. But after the founders left MIT, they faced a number of daunting challenges. Where would they find the space to build and demo their apartment-scale products? How would they get access to the machines and equipment necessary for prototyping? How would they decide on the control systems and software to run with their new furniture? Did anyone care about its innovations?

Ori, which signed a global agreement with Ikea in 2019, got help with all of those challenges when it found a home in MassRobotics, a nonprofit that incubates startups and runs many other networking, education, and industry-building initiatives.

Ori is one of over 100 young companies MassRobotics has supported since its founding in 2014. With more than 40,000 square feet of office and lab space, MassRobotics’ headquarters in Boston’s Seaport District holds over 30 testing robots, prototyping machines, 3D printers, and more.

Today MassRobotics works with hundreds of companies of all sizes, from startups to large corporate partners like Amazon, Google, and Mitsubishi Electric, fostering collaboration and advancing the robotics industry by publishing standards, hosting events, and organizing educational workshops to inspire the next generation of roboticists.

“MassRobotics is growing the robotics ecosystem in Massachusetts and beyond,” says Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, who has served on the board of MassRobotics since its inception. “It’s doing so much more than helping startups. They work with the academic community on grants, they act as matchmaker between companies and research groups, they have educational programs for high school students and facilitate internships, and it’s also working on diversity and inclusion.”

Just as MIT’s mission emphasizes translating knowledge into impact, MassRobotics’ mission is to help roboticists and entrepreneurs make a positive impact by furthering a field that most agree will play an increasingly large role in our work and personal lives.

“We have a job to envision a future that is better, more equitable and sustainable than the past, and then make it happen,” says Daniel Theobald ’95 SM ’98, who co-founded MassRobotics with Fady Saad SM ’13, Tye Brady ’99, Steve Paschall SM ’04, and Joyce Sidopoulos.

Bringing an industry together

Theobald first got the idea to start a robotics organization when he was giving a tour of his company Vecna Robotics to former CSAIL director Rodney Brooks. Around 2014, he began brainstorming ways to start a robotics organization with former Vecna director of strategy Fady Saad.

Joyce Sidopoulos, who was working at the Massachusetts Technology Leadership Council (Mass TLC) at the time, connected the pair with Brady and Paschall, who were working on a similar idea while at Draper in Cambridge.

“Before MassRobotics, robotics startups were creating amazing technologies, but they couldn’t easily break through to a commercialized product, because even if you have a working prototype, you can’t ship anything, and investors want to see validation,” Saad says. “Our motivation for founding MassRobotics was helping more of these companies become successful.”

Early on, the founders worked with MIT’s Industrial Liaison Program to get input from robotics companies and received help from people including Liz Reynolds, a principal research scientist at MIT and executive director of the MIT Industrial Performance Center. The first check was written by Gururaj Deshpande, founder of the MIT Deshpande Center for Technological Innovation. Today, dozens of corporate partners, as well as the state of Massachusetts, provide funding.

None of the founders think it’s a coincidence that so many of them hail from MIT.

“At commencement, [President L. Rafael Reif] gave a message that I’ll never forget: He said, ‘Go hack the world,’” says Saad, who also recently launched an investment firm for early-stage robotics companies called Cybernetix Ventures. “I think Reif’s message captures the DNA of MIT alumni. We’re all hackers. We make things happen. We see a problem or a need and we fix it.”

Of course, MIT has also played a huge role in bolstering the local robotics ecosystem that MassRobotics seeks to foster.

“A lot of talent, tech and ideas are at MIT, but also a number of startups have come directly out of MIT and we house a number of them,” MassRobotics executive director Tom Ryden says. “That’s huge because it’s one thing to create technology, but creating companies is huge for the ecosystem and I think MIT does that exceptionally well.”

One of MassRobotics’ lead educational programs is geared toward female high school students from diverse backgrounds. The program includes six months of education during weekends or summer vacation and a guaranteed internship at a local robotics company.

MassRobotics also recently announced a new “Robotics Medal” that will be awarded each year to a female researcher who has made significant discoveries or advances in robotics. The medal comes with a $50,000 prize and a fellowship that will give the recipient access to MassRobotics facilities.

“This is the first time in our field we have such a visible prize for a female roboticist,” says Rus, who is also the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT. “I hope this sends a positive message to all the young aspiring roboticists. Robotics is an exciting field of work with the important mission to develop intelligent tools that help people with physical and cognitive work.”

Meanwhile, MIT’s connections with MassRobotics have come full circle: After years of collaboration, one of the first graduates of MassRobotics’ educational courses just finished her first year as an undergraduate in mechanical engineering at MIT.

Advancing an industry

The impact of MassRobotics’ educational programs hit home for Theobald a few years ago when he got a letter from a young woman who told him the programs had changed her life.

“The problem with robotics education is it’s very easy for young people to say, ‘Oh that’s hard’ and move on,” Theobald says. “Getting them to sit down and actually build something and realize what they can do is so powerful.”

A few weeks ago, Theobald was at MassRobotics to meet a group of German business leaders when he got off the elevator on the wrong floor and stumbled into a STEM education session with a group of middle schoolers. He could have just as easily walked into a networking session between startups and business leaders or, as Rus did recently, run into Bloomberg journalists hosting a television segment on the robotics industry.

The breadth of activities hosted by MassRobotics is a testament to the organization’s commitment to advancing every aspect of the industry.

“Robotics is the most challenging engineering endeavor humanity has ever taken on because it involves electrical engineering, mechanical engineering, software — plus you’re trying to emulate human behavior and intelligence — so it requires the best of artificial intelligence,” Theobald says. “It all has to come together for successful robotics. That’s what we help do.”



from MIT News https://ift.tt/l3qF2EH

School of Engineering second quarter 2022 awards

Members of the MIT engineering faculty receive many awards in recognition of their scholarship, service, and overall excellence. The School of Engineering periodically recognizes their achievements by highlighting the honors, prizes, and medals won by faculty working in our academic departments, labs, and centers.



from MIT News https://ift.tt/G9kdIgl

Friday, July 22, 2022

Scientists capture first-ever view of a hidden quantum phase in a 2D crystal

The development of high-speed strobe-flash photography in the 1960s by the late MIT professor Harold “Doc” Edgerton allowed us to visualize events too fast for the eye — a bullet piercing an apple, or a droplet hitting a pool of milk.

Now, by using a suite of advanced spectroscopic tools, scientists at MIT and the University of Texas at Austin have for the first time captured snapshots of a light-induced metastable phase hidden from the equilibrium universe. By using single-shot spectroscopy techniques on a 2D crystal with nanoscale modulations of electron density, they were able to view this transition in real time.

“With this work, we are showing the birth and evolution of a hidden quantum phase induced by an ultrashort laser pulse in an electronically modulated crystal,” says Frank Gao PhD ’22, co-lead author on a paper about the work who is currently a postdoc at UT Austin.

“Usually, shining lasers on materials is the same as heating them, but not in this case,” adds Zhuquan Zhang, co-lead author and current MIT graduate student in chemistry. “Here, irradiation of the crystal rearranges the electronic order, creating an entirely new phase different from the high-temperature one.”

A paper on this research was published today in Science Advances. The project was jointly coordinated by Keith A. Nelson, the Haslam and Dewey Professor of Chemistry at MIT, and by Edoardo Baldini, an assistant professor of physics at UT Austin.

Laser shows

“Understanding the origin of such metastable quantum phases is important to address long-standing fundamental questions in nonequilibrium thermodynamics,” says Nelson.

“The key to this result was the development of a state-of-the-art laser method that can ‘make movies’ of irreversible processes in quantum materials with a time resolution of 100 femtoseconds,” adds Baldini.

The material, tantalum disulfide, consists of covalently bound layers of tantalum and sulfur atoms stacked loosely on top of one another. Below a critical temperature, the atoms and electrons of the material pattern into nanoscale “Star of David” structures — an unconventional distribution of electrons known as a “charge density wave.”

The formation of this new phase makes the material an insulator, but a single, intense light pulse pushes the material into a metastable hidden metallic state. “It is a transient quantum state frozen in time,” says Baldini. “People have observed this light-induced hidden phase before, but the ultrafast quantum processes behind its genesis were still unknown.”

Adds Nelson, “One of the key challenges is that observing an ultrafast transformation from one electronic order to one that may persist indefinitely is not practical with conventional time-resolved techniques.”

Pulses of insight

The researchers developed a unique method that involved splitting a single probe laser pulse into several hundred distinct probe pulses that all arrived at the sample at different times before and after switching was initiated by a separate, ultrafast excitation pulse. By measuring changes in each of these probe pulses after they were reflected from or transmitted through the sample and then stringing the measurement results together like individual frames, they could construct a movie that provides microscopic insights into the mechanisms through which transformations occur.
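
In spirit, the reconstruction is a sorting problem: each probe pulse returns one sample at a known delay, and ordering the samples by delay yields the movie. A toy Python version follows; the delays and the relaxation curve are invented for illustration and are not from the study.

    import numpy as np

    rng = np.random.default_rng(0)

    # Each probe pulse arrives at a known delay (fs) and returns one
    # reflectivity sample; the pulses are not measured in time order.
    delays_fs = rng.permutation(np.linspace(-500, 2000, 400))
    reflectivity = np.where(delays_fs > 0,
                            0.7 + 0.3 * np.exp(-delays_fs / 800.0),  # after pump
                            1.0)                                     # before pump

    # "Stringing the frames together": sort the samples by delay to
    # recover the time evolution from a single excitation event.
    order = np.argsort(delays_fs)
    movie = np.column_stack((delays_fs[order], reflectivity[order]))
    print(movie[:3])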

By capturing the dynamics of this complex phase transformation in a single-shot measurement, the authors demonstrated that the melting and the reordering of the charge density wave leads to the formation of the hidden state. Theoretical calculations by Zhiyuan Sun, a Harvard Quantum Institute postdoc, confirmed this interpretation.

While this study was carried out with one specific material, the researchers say the same methodology can now be used to study other exotic phenomena in quantum materials. This discovery may also help with the development of optoelectronic devices with on-demand photoresponses.

Other authors on the paper are chemistry graduate student Jack Liu; Joseph G. Checkelsky, the Mitsui Career Development Associate Professor of Physics; Linda Ye PhD ’20, now a postdoc at Stanford University; and Yu-Hsiang Cheng PhD ’19, now an assistant professor at National Taiwan University.

Support for this work was provided by the U.S. Department of Energy, Office of Basic Energy Sciences; the Gordon and Betty Moore Foundation EPiQS Initiative; and the Robert A. Welch Foundation.



from MIT News https://ift.tt/HDrosPY

Thursday, July 21, 2022

Explained: How to tell if artificial intelligence is working the way we want it to

About a decade ago, deep-learning models started achieving superhuman results on all sorts of tasks, from beating world-champion board game players to outperforming doctors at diagnosing breast cancer.

These powerful deep-learning models are usually based on artificial neural networks, which were first proposed in the 1940s and have become a popular type of machine learning. A computer learns to process data using layers of interconnected nodes, or neurons, that mimic the human brain. 

As the field of machine learning has grown, artificial neural networks have grown along with it.

Deep-learning models are now often composed of millions or billions of interconnected nodes in many layers that are trained to perform detection or classification tasks using vast amounts of data. But because the models are so enormously complex, even the researchers who design them don’t fully understand how they work. This makes it hard to know whether they are working correctly.

For instance, maybe a model designed to help physicians diagnose patients correctly predicted that a skin lesion was cancerous, but it did so by focusing on an unrelated mark that happens to frequently occur when there is cancerous tissue in a photo, rather than on the cancerous tissue itself. This is known as a spurious correlation. The model gets the prediction right, but it does so for the wrong reason. In a real clinical setting where the mark does not appear on cancer-positive images, it could result in missed diagnoses.

With so much uncertainty swirling around these so-called “black-box” models, how can one unravel what’s going on inside the box?

This puzzle has led to a new and rapidly growing area of study in which researchers develop and test explanation methods (also called interpretability methods) that seek to shed some light on how black-box machine-learning models make predictions.

What are explanation methods?

At their most basic level, explanation methods are either global or local. A local explanation method focuses on explaining how the model made one specific prediction, while global explanations seek to describe the overall behavior of an entire model. This is often done by developing a separate, simpler (and hopefully understandable) model that mimics the larger, black-box model.
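
A common way to build such a surrogate, sketched below in Python, is to fit a small interpretable model to the black-box model’s own predictions rather than to the true labels. The dataset and the two models here are stand-ins chosen for illustration.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)

    # The "black box": an ensemble whose internals are hard to read.
    black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

    # Global surrogate: a depth-2 tree trained to imitate the black box's
    # predictions, so its few rules approximate the model's overall behavior.
    surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
    surrogate.fit(X, black_box.predict(X))
    print(export_text(surrogate, feature_names=list(X.columns)))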

But because deep learning models work in fundamentally complex and nonlinear ways, developing an effective global explanation model is particularly challenging. This has led researchers to turn much of their recent focus onto local explanation methods instead, explains Yilun Zhou, a graduate student in the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) who studies models, algorithms, and evaluations in interpretable machine learning.

The most popular types of local explanation methods fall into three broad categories.

The first and most widely used type of explanation method is known as feature attribution. Feature attribution methods show which features were most important when the model made a specific decision.

Features are the input variables that are fed to a machine-learning model and used in its prediction. When the data are tabular, features are drawn from the columns in a dataset (they are transformed using a variety of techniques so the model can process the raw data). For image-processing tasks, on the other hand, every pixel in an image is a feature. If a model predicts that an X-ray image shows cancer, for instance, the feature attribution method would highlight the pixels in that specific X-ray that were most important for the model’s prediction.

Essentially, feature attribution methods show what the model pays the most attention to when it makes a prediction.

“Using this feature attribution explanation, you can check to see whether a spurious correlation is a concern. For instance, it will show if the pixels in a watermark are highlighted or if the pixels in an actual tumor are highlighted,” says Zhou.
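
One simple, model-agnostic way to compute such an attribution for a single prediction is occlusion: replace each feature with a neutral value and watch how the prediction moves. The sketch below uses a stand-in dataset and model for illustration; it is one common attribution technique, not the specific method discussed here.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

    sample = X_te.iloc[[0]]                    # explain this one prediction
    base_p = model.predict_proba(sample)[0, 1]

    # Occlusion-style attribution: swap each feature for its training mean
    # and record how much the predicted probability changes.
    scores = {}
    for col in X.columns:
        perturbed = sample.copy()
        perturbed[col] = X_tr[col].mean()
        scores[col] = base_p - model.predict_proba(perturbed)[0, 1]

    for col, s in sorted(scores.items(), key=lambda kv: -abs(kv[1]))[:5]:
        print(f"{col}: {s:+.3f}")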

A second type of explanation method is known as a counterfactual explanation. Given an input and a model’s prediction, these methods show how to change that input so it falls into another class. For instance, if a machine-learning model predicts that a borrower would be denied a loan, the counterfactual explanation shows what factors need to change so her loan application is accepted. Perhaps her credit score or income, both features used in the model’s prediction, need to be higher for her to be approved.

“The good thing about this explanation method is it tells you exactly how you need to change the input to flip the decision, which could have practical usage. For someone who is applying for a mortgage and didn’t get it, this explanation would tell them what they need to do to achieve their desired outcome,” he says.
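
A minimal counterfactual search over a toy loan model might look like the following sketch; the two features, the synthetic approval rule, and the step size are all hypothetical.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy loan model on two features: credit score and income in $k.
    rng = np.random.default_rng(0)
    X = rng.normal([650.0, 50.0], [50.0, 15.0], size=(500, 2))
    y = ((X[:, 0] > 660) & (X[:, 1] > 45)).astype(int)  # synthetic approvals
    model = LogisticRegression(max_iter=5000).fit(X, y)

    def counterfactual(x, step=1.0, max_steps=1000):
        """Nudge a denied application across the decision boundary."""
        x = x.astype(float).copy()
        w = model.coef_[0]
        direction = w / np.linalg.norm(w)  # shortest path across a linear boundary
        for _ in range(max_steps):
            if model.predict(x.reshape(1, -1))[0] == 1:
                return x
            x += step * direction
        return x

    applicant = np.array([620.0, 40.0])  # currently denied
    print(counterfactual(applicant))     # e.g., a higher score and income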

The third category of explanation methods is known as sample importance explanations. Unlike the others, this method requires access to the data that were used to train the model.

A sample importance explanation will show which training sample a model relied on most when it made a specific prediction; ideally, this is the most similar sample to the input data. This type of explanation is particularly useful if one observes a seemingly irrational prediction. There may have been a data entry error that affected a particular sample that was used to train the model. With this knowledge, one could fix that sample and retrain the model to improve its accuracy.
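
Computing exact training-sample influence is expensive, but a cheap proxy, shown in the Python sketch below, is to retrieve the training example nearest to the input in feature space. This is an illustration only, not the specific method referenced above.

    from sklearn.datasets import load_digits
    from sklearn.neighbors import NearestNeighbors

    X, y = load_digits(return_X_y=True)
    X_train, y_train, X_test = X[:1500], y[:1500], X[1500:]

    # Which training example most resembles the input being judged?
    nn = NearestNeighbors(n_neighbors=1).fit(X_train)
    dist, idx = nn.kneighbors(X_test[:1])
    i = idx[0][0]
    print(f"Most similar training sample: #{i} (label {y_train[i]}), "
          f"distance {dist[0][0]:.1f}")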

How are explanation methods used?

One motivation for developing these explanations is to perform quality assurance and debug the model. With more understanding of how features impact a model’s decision, for instance, one could identify that a model is working incorrectly and intervene to fix the problem, or toss the model out and start over.

Another, more recent, area of research is exploring the use of machine-learning models to discover scientific patterns that humans haven’t uncovered before. For instance, a cancer-diagnosing model that outperforms clinicians could be faulty, or it could actually be picking up on hidden patterns in an X-ray image that represent an early pathological pathway for cancer and that were either unknown to human doctors or thought to be irrelevant, Zhou says.

It's still very early days for that area of research, however.

Words of warning

While explanation methods can sometimes be useful for machine-learning practitioners when they are trying to catch bugs in their models or understand the inner workings of a system, end-users should proceed with caution when trying to use them in practice, says Marzyeh Ghassemi, an assistant professor and head of the Healthy ML Group in CSAIL.

As machine learning has been adopted in more disciplines, from health care to education, explanation methods are being used to help decision makers better understand a model’s predictions so they know when to trust the model and use its guidance in practice. But Ghassemi warns against using these methods in that way.

“We have found that explanations make people, both experts and nonexperts, overconfident in the ability or the advice of a specific recommendation system. I think it is very important for humans not to turn off that internal circuitry asking, ‘let me question the advice that I am given,’” she says.

Other recent work also shows that explanations make people overconfident, she adds, citing studies by Microsoft researchers.

Far from a silver bullet, explanation methods have their share of problems. For one, Ghassemi’s recent research has shown that explanation methods can perpetuate biases and lead to worse outcomes for people from disadvantaged groups.

Another pitfall of explanation methods is that it is often impossible to tell if the explanation method is correct in the first place. One would need to compare the explanations to the actual model, but since the user doesn’t know how the model works, this is circular logic, Zhou says.

He and other researchers are working on improving explanation methods so they are more faithful to the actual model’s predictions, but Zhou cautions that even the best explanation should be taken with a grain of salt.

“In addition, people generally perceive these models to be human-like decision makers, and we are prone to overgeneralization. We need to calm people down and hold them back to really make sure that the generalized model understanding they build from these local explanations are balanced,” he adds.

Zhou’s most recent research seeks to do just that.

What’s next for machine-learning explanation methods?

Rather than focusing on providing explanations, Ghassemi argues that the research community needs to put more effort into studying how information is presented to decision makers so they understand it, and that more regulation needs to be put in place to ensure machine-learning models are used responsibly in practice. Better explanation methods alone aren’t the answer.

“I have been excited to see that there is a lot more recognition, even in industry, that we can’t just take this information and make a pretty dashboard and assume people will perform better with that. You need to have measurable improvements in action, and I’m hoping that leads to real guidelines about improving the way we display information in these deeply technical fields, like medicine,” she says.

And in addition to new work focused on improving explanations, Zhou expects to see more research related to explanation methods for specific use cases, such as model debugging, scientific discovery, fairness auditing, and safety assurance. By identifying fine-grained characteristics of explanation methods and the requirements of different use cases, researchers could establish a theory that would match explanations with specific scenarios, which could help overcome some of the pitfalls that come from using them in real-world scenarios.



from MIT News https://ift.tt/3VOjbdg