Saturday, March 21, 2026

Bridging medical realities in the study of technology and health

A few weeks ago, Amy Moran-Thomas and 20 students in her class 21A.311 (The Social Lives of Medical Objects) were gathered around a glucose meter, a jar of test strips, and various spare medical parts in the MIT Museum seminar room, talking about how to make them work better.

The class had just heard a presentation from the president of the Belize Diabetes Association in Dangriga, Norma Flores, a nurse whose hospital had recently received a huge shipment of insulin that, although durable in theory, seemed to have spoiled in a heat wave. Flores and the students discussed whether scientists could develop temperature-stable insulin and design repairable glucose meters and other technologies for hospitals worldwide.

“Whenever people keep saying they are concerned about an issue, but the medical literature doesn’t describe it yet, there is a key question about what’s happening,” says Moran-Thomas. “Ethnography can help us learn about it.”

For Moran-Thomas, an MIT anthropologist, that class session was a way of connecting people and ideas that are too often overlooked. Flores was a central figure in Moran-Thomas’ 2019 book, “Traveling with Sugar: Chronicles of a Global Epidemic,” about diabetes in Belize and the failures of medical technology designed to treat it. (At the end of class, Flores surprised Moran-Thomas with a framed commendation from the Belize Diabetes Association for their nearly 20 years of work together.)

That approach informs all of Moran-Thomas’ work. Currently she is co-leading a group working on a project called the “Sugar Atlas,” mapping the social and economic dimensions of diabetes in the Caribbean, in tandem with scholars Nicole Charles of the University of Toronto and Tonya Haynes of the University of the West Indies. Moran-Thomas has also spent more than a decade following the case of notorious medical experiments that took place in Guatemala in the 1940s, the subject of a recent paper she published with Susan Reverby of Wellesley College.

Closer to home, Moran-Thomas is working on a book about how energy extraction affects chronic conditions and mental health in her native Pennsylvania, at a time of increasing hospital closures. As part of this research, she has been working with MIT seismologist William Frank to develop low-cost sensors that people can use to measure the impact of industrial activity on their home neighborhoods. The research team was recently awarded a grant by the MIT Human Insight Collaborative (MITHIC) for the work. And with another MITHIC grant, Moran-Thomas and several colleagues are working to create a new “Health and Society” educational program at MIT.

“A through line in my work is the question about how to put people at the center of health and medicine,” says Moran-Thomas, an associate professor in MIT’s anthropology program. “Because that’s not how it feels to most people in the world. Care technologies that work for everybody, and health technologies in relation to chronic disease, connect all these different projects.”

The work Moran-Thomas may be best known for occurred in 2020, during the Covid-19 pandemic, when her research recovered an array of neglected clinical studies showing that pulse oximeters functioned differently depending on patients’ skin color. After she published a piece about the issue in the Boston Review, physicians who read the essay conducted further hospital studies that confirmed a pattern of disproportionately inaccurate readings, leading to subsequent efforts to improve the technology — all stemming from her careful, patient-centric approach.

“What anthropology has to offer the world, and other knowledge systems, is the insights of people that might be missing from many accounts, and highlighting these stories that are getting left out,” Moran-Thomas says. “Those are not footnotes, but the central thing to follow. And those histories are also alive in the material world around us.”

Thinking across medical and climate technologies

After growing up in Pennsylvania, Moran-Thomas majored in literature while earning her BA from American University. She decided to pursue ethnographic research as a graduate student, and entered Princeton University’s program in anthropology, earning an MA in 2008 and her PhD in 2012. After postdoc stints at Princeton and Brown University, Moran-Thomas joined the MIT faculty in 2015.

At Princeton, Moran-Thomas’ dissertation research examined the diabetes epidemic in Belize, forming the basis of her first book, “Traveling with Sugar,” whose title is an expression in Belize for living with diabetes. As she chronicles in the book, plantation-era changes that undermined indigenous agriculture, among other things, contributed to a local economy that made diets sugar-heavy, while medical technologies are often unreliable or ill-suited to local conditions. The book also traces breakdowns in care technologies, such as prosthetic limbs (often sought after diabetes-linked amputations), glucose meters, hyperbaric chambers, insulin supply chains, dialysis machines, and pain management technologies.

“Traveling with Sugar” also develops a critique that has become a theme of Moran-Thomas’ work: that society often shifts the blame for illness onto patients while minimizing the larger-scale factors affecting everyday health.

“There can be this focus on exclusively prevention without care, the implicit assumption that patients need to act differently,” Moran-Thomas says. “Blame falls on individuals and families instead of a focus on other questions. Why are these technologies always breaking down? How are they designed, and by whom, for whom? What role is history playing in the present? And how are people trying to remake those structures?”

Those issues are highlighted in Moran-Thomas’ ongoing project, “Sugar Atlas: Counter-Mapping Diabetes from the Caribbean,” which is backed by a two-year Digital Justice Seed Grant from the American Council of Learned Societies. Whereas international organizations tend to lump North America and the Caribbean together when tracking diabetes, this project zooms in on specific aspects of the disease and its historical and structural contributors in the Caribbean, such as the distance people must travel to buy vegetables, their proximity to insulin supplies, and the ways climate change is affecting sea life and fishing practices.

“We’re trying to create a community platform offering a different vision of these conditions,” Moran-Thomas says of the effort to map otherwise unrecorded aspects of the global diabetes epidemic, while tracing mutual aid networks and people’s “arts of care” in the present.

Better design for common devices

Following her research in Belize, where glucose meters were prone to breaking, Moran-Thomas began focusing more actively on the design of medical technology. At MIT, she began co-teaching a course with tech innovator Jose Gomez-Marquez, 21A.311 (The Social Lives of Medical Objects). The idea was to get students to think about device design that could lead to more durable, fixable, and equitable products.

In turn, Moran-Thomas’ interest in devices led her to question the pulse oximeter readings she started seeing firsthand during the Covid-19 pandemic. Pulse oximeters measure patients’ blood-oxygen saturation and are part of even routine appointment check-ins. They work optically, casting beams of light through tissue to gauge the color of hemoglobin, which varies depending on how much oxygen it carries.
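The essential computation behind such a device can be sketched in a few lines. This is a minimal, illustrative model, not any real device's firmware: the linear calibration coefficients below are placeholders, and commercial oximeters use empirically fitted curves whose underlying calibration data is exactly what the studies described here called into question.

```python
def ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """Perfusion-normalized absorbance ratio R used by pulse oximeters.

    AC is the pulsatile component of the light signal and DC the
    steady component, at the red and infrared wavelengths.
    """
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def estimate_spo2(r, a=110.0, b=25.0):
    """Map the ratio R to an SpO2 estimate via a linear calibration.

    The coefficients a and b are illustrative placeholders; real
    devices use curves fitted to empirical calibration datasets.
    """
    return a - b * r

# Example: a ratio of 0.5 maps to an estimate of 97.5 under these
# placeholder coefficients.
r = ratio_of_ratios(ac_red=1.0, dc_red=10.0, ac_ir=2.0, dc_ir=10.0)
spo2 = estimate_spo2(r)
```

The fact that the final step is a fitted calibration, rather than a first-principles formula, is why the composition of the calibration cohort matters so much.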

After firsthand encounters with the sensors led to more research, Moran-Thomas learned that some medical professionals had lingering, unanswered questions about pulse oximeters and the way they were calibrated. After she published her essay in the Boston Review, arguing for more data collection, medical researchers examined the issue more closely, finding that patients with darker skin were about three times more likely to have erroneous blood-oxygen readings than patients with lighter skin. Ultimately, an FDA panel recommended changes to the devices.

“A lot of my work has been learning about health and medicine technologies from the perspectives of patients, families, and nurses, rather than beginning with engineers and doctors,” Moran-Thomas says. “Those two projects, about blood sugar and blood oxygen, were about the shortcomings of those devices and how they could be improved. Those are perspectives I can highlight in hopes others will pick up on them and make other kinds of designs and policies possible.”

Moran-Thomas’ interest in device design has continued with her current book project, about the chronic health effects of energy production in Pennsylvania. She has worked with MIT seismologist William Frank, of the Department of Earth, Atmospheric and Planetary Sciences, to construct an inexpensive meter people can use to measure shaking in their homes caused by industrial activities. (Moran-Thomas first got the idea to contact Frank after colleagues in western Pennsylvania reached out with seismic concerns and she happened to read about his work in MIT News.)
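At its simplest, a home monitor of this kind reduces a stream of motion samples to a few summary numbers a resident can act on. The sketch below is a hypothetical illustration, assuming ground-velocity samples in m/s and an arbitrary reporting threshold, not the actual design of the team's sensor.

```python
import math

def shaking_summary(samples, threshold=0.001):
    """Summarize one window of ground-velocity samples (m/s) from a
    low-cost home sensor: peak value, RMS level, and whether the peak
    exceeds a (purely illustrative) reporting threshold."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return {"peak": peak, "rms": rms, "exceeds": peak > threshold}

# Example window: a brief shake registers above the threshold.
report = shaking_summary([0.0, 0.002, -0.002, 0.001, 0.0])
```

A record of such windows, timestamped, is the kind of evidence a resident could bring to a doctor or a regulator when they believe their walls really are shaking.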

The effort is also inspired by guidance from community leaders based at the Center for Coalfield Justice in western Pennsylvania. The research team has received a MITHIC Connectivity grant for their project, “Seismic Collaboratory: Rural Health, Missing Science, and Communicating the Chronic Impacts of Extraction.”

“I’ve met people who have been told by their doctors they must have vertigo, while they thought the walls of their house were really shaking,” Moran-Thomas says. “In a case like that, the device you need is not in the clinic, it’s a monitor at home.”

The book, overall, will examine the effects of energy production on chronic disease and mental health issues in Pennsylvania, pressures exacerbated by the growing number of hospital closures in the state.

Moran-Thomas is simultaneously working with several co-investigators to create the “Health and Society” educational program at MIT, including Katharina Ribbeck, Erica James, Aleshia Carlsen-Bryan, and Dina Asfaha. Their work was recently awarded an Education Innovation Seed Grant from MITHIC.

From small devices to large-scale changes in health care systems, from the U.S. to other regions, Moran-Thomas remains focused on a core set of issues about how to improve and broaden health care — and lessen the need for it in the first place.

“Thinking across scales is something that’s really useful about anthropology,” Moran-Thomas says. “Even one medical device is a tiny piece of a bigger infrastructure. In order to study that technology or device or sensor, you have to understand the bigger infrastructure it’s attached to, and that there are people involved in all parts of it.” 



de MIT News https://ift.tt/5cqswja

Friday, March 20, 2026

Weekends@MIT offers connection through varied activities

Weekends at MIT are often a time for students to catch up on sleep or finish p-sets, lab work, and other school assignments. But for more than two decades, through a student-driven initiative supported by the Division of Student Life (DSL), students have been able to find welcoming activities designed to build community on Friday and Saturday nights through Weekends@MIT. All events are open to both graduate and undergraduate students.

At the heart of Weekends@MIT is a leadership team within the Wellbeing Ambassadors program. Ten leadership team members plan and host a variety of events from 9 to 11 p.m. in the MIT Wellbeing Lab, transforming the space into a hub for connection and creativity. While DSL staff provide advising, logistical support, and funding, event ideas come from students. Club members are committed to facilitating student social activities, all while increasing health awareness.

Student-led activities

Student ownership is intentional, says Robyn Priest, an assistant dean in the Division of Student Life. “All the ideas for activities come from the students. Leaders brainstorm themes, vote on their favorite concepts, and spearhead events in small teams. The only criterion is that it be substance-free. The students involved are dedicated, and the time commitment can be significant, so they are paid. But our students consistently step up, motivated by the opportunity to create experiences for their peers.”

Past events have included craft nights with boba tea, yoga, trivia competitions, bracelet-making workshops, waffle nights with customizable toppings, and even Spooky Skate, a Halloween costume ice-skating event hosted by the club in the Z Center.

Priest notes that just this past fall semester, more than 2,000 students attended the Friday night events, with many programs designed as drop-in experiences so students can participate around their busy schedules.

“I joined Weekends@MIT because I really liked the idea of helping organize activities on campus that promoted well-being for students and provided them with chill events that they can attend to build community and feel good on Friday nights,” says junior Emily Crespin Guerra.

Senior Ruting Hung adds, “I wanted to become more involved in promoting wellness on campus. Since then, I've found that it has also served as a way for me to recharge after a long week.”

Expanding Saturday events

Saturdays bring additional variety through collaborations with student clubs and groups. Organizations can apply for funding — typically several hundred dollars — to host events between 9 and 11 p.m. that are open to all students.

Undergraduate and graduate organizations, cultural groups, and hobby-based clubs have all contributed to programming. The partnerships also introduce new audiences to the Wellbeing Lab, helping the space become a familiar and welcoming destination across campus communities.

Connecting the campus through communication

Another key component of Weekends@MIT is a weekly newsletter distributed to thousands of students. The newsletter highlights upcoming programs in the Wellbeing Lab, along with other campus events that align with the initiative’s goals of connection and community without alcohol.

First-year student Vivian Dinh notes, “I love how the events provide a fun escape from the stress of classes and problem sets. The Wellbeing Lab is such a nice facility on campus for students to relax and enjoy themselves.”

A long tradition, evolving for the future

The current initiative builds on a long history of student-led weekend programming that began more than 20 years ago. Over time, the effort has evolved — from early safety campaigns to today’s comprehensive model focused on well-being, belonging, and social connection — but the core idea remains the same: students creating healthy spaces for other students.

Looking ahead, Weekends@MIT aims to continue expanding collaborations and exploring new ways to bring communities together on weekends. Additional events for this semester include pupusas; a blitz chess tournament with the Chess Club; a craft night; movies and waffles; mocktails and latte art; a Bob Ross paint night; and much more.



de MIT News https://ift.tt/lFtCXUy

MIT and Hasso Plattner Institute establish collaborative hub for AI and creativity

The following is a joint announcement from the MIT School of Architecture and Planning, MIT Schwarzman College of Computing, Hasso Plattner Institute, and Hasso Plattner Foundation.

The MIT Morningside Academy for Design (MAD), MIT Schwarzman College of Computing, Hasso Plattner Institute (HPI), and Hasso Plattner Foundation celebrated the launch of the MIT and HPI AI and Creativity Hub (MHACH) at a signing ceremony this week. This 10-year initiative aims to deepen ties between computing and design as advances in artificial intelligence are reshaping how ideas are conceived and shared.

Funded by the Hasso Plattner Foundation, MIT and HPI will work together to foster collaborative interdisciplinary research and support a portfolio of educational programs, fellowships, and faculty engagement focused on AI and creativity, expanding scholarly inquiry into AI applications across disciplines, industries, and societal challenges. The collaboration begins with an inaugural two-day workshop March 19-20 at MIT, bringing together faculty, students, and researchers to set early priorities.

“As we hear from our faculty, as the Information Age gives way to an era of imagination, we expect a new emphasis on human creativity,” reflects MIT President Sally Kornbluth. “Through this collaboration, MIT and HPI are creating a shared space where students and faculty will come together across disciplines to explore new ideas, experiment with emerging tools, and invent new frontiers at the intersection of human creativity and AI.”

“The best minds need the right environment to do their most creative work,” says Rouven Westphal, from the Hasso Plattner Foundation. “When HPI and MIT come together across disciplines and borders, they create exactly that. The Hasso Plattner Foundation is committed to supporting this collaboration for the long term, building on Hasso Plattner’s vision of uniting technological excellence with human-centered design and creativity.”

Deepening collaboration at the intersection of technology, creativity, and societal impact

Building on the success of the Hasso Plattner Institute-MIT Research Program on Designing for Sustainability, established in 2022 between MIT MAD and HPI, the new MHACH hub represents a commitment to deepen collaboration at the intersection of technology, creativity, and societal impact.

“MIT and HPI share a common commitment to turning scientific excellence into real-world impact. Through this collaboration, we will create an environment where students and researchers from both sides of the Atlantic can work together, experiment across disciplines, and learn from one another — at a time when artificial intelligence is set to profoundly shape our lives. We are convinced that this collaboration will generate ideas with impact far beyond both institutions and inspire international cooperation and innovation,” says Professor Tobias Friedrich, dean and managing director of the Hasso Plattner Institute.

“HPI and MIT exist at the nexus of technology and creativity. Expanding this dynamic relationship will generate new paths for the infusion of AI, design, and creativity, enabling students, faculty, and researchers to dream and discover novel solutions, moving more quickly than ever from idea to implementation. MAD was established to connect thinkers across and beyond the Institute, and this new era of collaboration with HPI advances that mission on a global scale,” comments Hashim Sarkis, dean of the MIT School of Architecture and Planning and the Elizabeth and James Killian (1926) Professor.

Academic leadership from MIT and HPI will jointly shape the hub’s research and teaching agenda. Based in Potsdam, Germany, HPI is a center of excellence for digital engineering advancing research, education, and societal transfer in IT systems engineering, data engineering, cybersecurity, entrepreneurship, and digital health. Through its globally recognized HPI d-school and pioneering work in design thinking methodology, HPI brings a distinctive perspective on human-centered innovation to the collaboration, alongside a strong record in AI and data science research and technology transfer.

Expanding research and education on AI and creativity

The efforts of this multifaceted initiative are intended to foster a dynamic academic community spanning MIT and HPI, anchored by Hasso Plattner–named professorships and graduate fellowships whose recipients will be actively engaged in the hub. The long-term framework is designed to provide continuity for faculty appointments, doctoral training, and cross-campus research.

The agreement also includes the development of classes and educational programs in areas of shared AI focus, along with expanded experiential opportunities through AI-focused workshops, hackathons, and summer exchanges. A steering committee composed of representatives from the MIT School of Architecture and Planning, MIT Schwarzman College of Computing, and Hasso Plattner Institute will facilitate the shared governance of MHACH.

“Creativity has always been about extending human capability. At its core, this collaboration asks what it truly means to create something new. The question isn’t whether AI diminishes creativity, but how new forms of intelligence can deepen and enrich that process. Our goal is to explore that intersection with rigor and build a cross-disciplinary scholarly and research community that shapes how AI supports the creation of new ideas and knowledge,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science.

This collaboration is made possible by the Hasso Plattner Foundation’s long-term philanthropic commitment to institutions that connect technological innovation with design thinking and education. The Hasso Plattner Foundation has played a central role in establishing and supporting institutions such as the Hasso Plattner Institute and international design thinking programs that bridge disciplines and geographies.



de MIT News https://ift.tt/eNiAO5w

Thursday, March 19, 2026

Preserving Keres

Growing up in the village of Kewa — located between Santa Fe and Albuquerque in New Mexico — William Pacheco, a member of the Santo Domingo Pueblo, learned the value of his language, its history, and the traditions it carries.

“We speak Keres, a language isolate found in seven villages and communities in central New Mexico,” he says. “It’s an endangered language with fewer than 10,000 speakers.” The Pueblos’ conception of ‘language,’ according to Pacheco, evokes the idea that speaking “comes from deep within.”

Pacheco is a graduate student in the MIT Indigenous Languages Initiative, a special master’s program in linguistics for members of communities whose languages are threatened. The two-year program provides its graduates with the linguistic knowledge to help them keep their communities’ languages alive. The initiative also offers expanded opportunities for students and faculty to engage with Indigenous and endangered languages, working both with native-speaker linguists in the master’s program and with outside groups, features that appealed to him.

“There’s some complexity to our language that defies traditional instruction,” says Pacheco, who will complete his studies this spring. “I want to develop the linguistic tools I need to improve my understanding of its construction and how best to teach and preserve it.” Pacheco is keenly aware of cultural differences in how language transmission occurs. Language, he believes, evolves over time and is best learned experientially; the Western model of language learning prioritizes immediacy and test-taking.

A variety of factors complicate efforts to preserve and potentially teach Keres. Each of the villages where it’s spoken has its own distinct dialect. These dialects are mutually intelligible to various degrees based on where they’re being spoken. Additionally, the last three decades have seen a significant increase in English usage by young Pueblos, which further endangers Keres’ existence.

Furthermore, Keres isn’t a written language. For centuries, the Pueblo have relied on daily use within their homes and communities to maintain its vitality. “The community doesn’t want it written,” Pacheco says. 

Contact with the wider world has previously imperiled Indigenous ideas, an outcome Pacheco wants to avoid. “We believe [Keres] is a form of intellectual property, a tradition and artifact that is best served by empowering our people to preserve it,” he says.

From the Southwest to MIT

While he’s now passionate about linguistics, languages weren’t Pacheco’s first choice when considering an educational path. “I always admired [MIT alumnus and Nobel laureate] Richard Feynman,” he recalls. “I wanted to study physics.”

After earning an undergraduate degree from the University of New Mexico, Pacheco, who’d been working as a K-12 educator, began efforts to preserve Keres, increasing the language’s vitality and preserving its usefulness for, and value to, future generations. He sought permission and certification from the tribe to teach the language at the Santa Fe Indian School, an off-reservation boarding school. He soon discovered that a traditional Western approach to language learning wouldn’t suffice.

“Students weren’t taking the course to be scholars of the language; they wanted to learn it to build community and create opportunities to connect with elders,” Pacheco says. It was students’ advocacy, he notes, that led to the Keres learning initiative. While designing the course, however, he found gaps in his knowledge that led him to consider graduate study. 

“There are fascinating idiosyncrasies in Keres, including, for example, verb morphology — the ways in which verbs and verb sounds change,” he notes. “I wasn’t sure about how to teach them.” He sought to improve his understanding and ability by earning a master’s degree in learning design, innovation, and technology from Harvard University. While completing his studies there, he had another burst of inspiration.

“I thought a background in linguistics would prove useful,” he says. “An advisor told me about the Indigenous Languages Initiative at MIT and recommended I apply.” Pacheco knew of Professor Emeritus Noam Chomsky’s pioneering work in generative linguistics at the Institute and sought to learn more about the field’s potential to help him become a better, more effective educator and linguist. 

Upon arriving at MIT in 2024, Pacheco found himself embraced by faculty and students alike. “[MIT linguists] Adam Albright and Norvin Richards have been wonderfully supportive mentors, offering enthusiasm and expertise,” he says. “I’ve benefited from MIT’s approach to linguistics and its use of scientific inquiry as a tool to explore language.” Engaging with other students working to preserve languages at risk of extinction continues to drive his work.

“MIT continually encourages us to use its resources, to collaborate, and to help one another find solutions to our unique challenges,” he says. “Networking, gathering good ideas, and having access to professors and students from a variety of disciplines is incredibly valuable.” 

MIT’s scholars, Pacheco says, are experienced with Indigenous language learning, education, and pedagogy.

Developing an organized approach to Keres research and instruction

While gratified that his work created opportunities for him to preserve and teach Keres, Pacheco marvels at his path to the Institute and its impact on his life. “It was my language, not my interest in physics, that led me to Harvard and MIT,” he says. “How did I end up at these places?”

An advantage of language and linguistics education at MIT is the rigor with which it explores language acquisition modeling and allows for alternatives to established systems. Pacheco is after new ideas for Keres language learning and education, working to develop an effective course based on generative linguistics that both preserves the Pueblos’ approach to community and offers an educational model students are likely to embrace. He’s already had opportunities to test novel theories and practices as an educator back home. 

“I was teaching students to use Keres as a programming tool,” he says. “We modeled a robot as a member of the community navigating a maze, and students would have to teach it to accept commands in Keres.” 
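A classroom exercise like this can be sketched as a tiny command interpreter. The command tokens below are deliberately English placeholders, not Keres words, since the community keeps the language unwritten; the sketch shows only the structure of the exercise, not its vocabulary.

```python
# Placeholder tokens stand in for spoken commands; the actual Keres
# vocabulary is intentionally not written down here.
COMMANDS = {
    "forward": (0, 1),
    "back": (0, -1),
    "left": (-1, 0),
    "right": (1, 0),
}

def run_commands(tokens, start=(0, 0)):
    """Move a simulated robot through a grid, one command at a time,
    and return its final (x, y) position."""
    x, y = start
    for tok in tokens:
        dx, dy = COMMANDS[tok]
        x, y = x + dx, y + dy
    return (x, y)

# Example: two steps forward, then one step right.
finish = run_commands(["forward", "forward", "right"])
```

In the classroom version, students would substitute their own spoken commands for the placeholder keys, so fluency in the language becomes the way to steer the robot through the maze.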

Pacheco also wants to explore community-centered language issues. He wants to standardize the development and education of community linguists, creating a cohort of scholars, deeply invested in Keres’ preservation and instruction, who are trained to use the tools he designs.

“We want to drive inquiries into Keres and how it’s taught,” he says, “while also centering Indigenous knowledge systems and expanding access to linguistics study for Indigenous scholars.”

Pacheco believes there’s value in exposing scholars and communities to the cultural and ideological exchanges he’s enjoyed between the sciences, humanities, Indigenous ideas, and experiences. “Indigenous scholars exist at MIT,” he says. “We’re here, and the Institute’s support helps preserve languages like Keres as important communal and cultural artifacts.” 

Pacheco is grateful for the opportunities his research at MIT has afforded him. While his education as a linguist and scholar continues, Pacheco’s community, culture, and support for Keres language learning remain top priorities.

“I want to amplify the impact in tribal language policy and Indigenous-centered education,” he says. “Language, its study, and its transmission is both science and art.”



de MIT News https://ift.tt/XiMCS3y

Improving cartilage repair through cell therapy

Researchers have developed a new method for monitoring iron flux — the movement and rate at which cells take in, store, use, and release iron — in stem cells known as mesenchymal stromal cells (MSCs). The system can provide insights within a minute about a cell’s ability to grow cartilage tissue for cartilage repair. 

The breakthrough offers a promising pathway toward more consistent and efficient manufacturing of high‑quality MSCs for regenerative therapies to treat joint diseases such as osteoarthritis, chronic joint degeneration conditions, and cartilage injuries.

The work was led by researchers from the Critical Analytics for Manufacturing Personalized-Medicine (CAMP) group within the Singapore-MIT Alliance for Research and Technology (SMART), and was supported by the SMART Antimicrobial Resistance (AMR) research group, in collaboration with MIT and the National University of Singapore (NUS).

A paper describing the work, “Cellular iron flux measurement by micromagnetic resonance relaxometry as a critical quality attribute of mesenchymal stromal cells,” was published in February in the journal Stem Cells Translational Medicine.

Regenerative therapies hold significant promise for patients, with the potential to repair damaged tissues rather than simply manage symptoms. However, one of the biggest challenges in bringing these therapies to patients lies in the unpredictable quality of the MSCs’ chondrogenic potential — a cell’s ability to develop and form cartilage tissue — during the in vitro manufacturing process.

Even when grown under controlled laboratory conditions, MSCs are prone to losing some of their potential and ability to form cartilage tissue, leading to inconsistent cartilage repair outcomes due to the varying quality of MSC batches. Existing tests that evaluate the quality of MSCs’ cartilage‑forming potential are destructive in nature, which causes irreversible damage to the cells being tested and renders them unusable for further therapeutic or manufacturing purposes.

In addition, the tests require a prolonged — up to 21-day — period for cells to grow. This slows decision‑making, extends production timelines, and can hinder the timely translation of MSC-based therapies into clinical use and delay treatment for patients. As MSCs can lose chondrogenic potential during this process, early assessment is essential for manufacturers to determine whether a batch should be continued or discontinued. Hence, there is a need for a reliable and rapid method to predict MSCs’ chondrogenic potential during the cell manufacturing process.

The new development represents a rapid, non-destructive method to monitor iron flux in MSCs by measuring iron changes in spent media — residual components in the culture medium after cell growth. Using an inexpensive benchtop micromagnetic resonance relaxometry (µMRR) device, the approach enables real‑time monitoring of cellular iron changes without damaging the cells. The device can be easily integrated into existing laboratories and manufacturing workflows, enabling routine, real‑time quality monitoring without significant infrastructure or cost barriers.

Iron homeostasis is the critical process that keeps cellular iron at normal levels, balancing the supply of iron needed for essential processes against the risk of toxic accumulation. The study found that iron homeostasis is highly correlated with an MSC’s chondrogenic potential: significant iron uptake and accumulation reduces a cell’s ability to form cartilage. The researchers also found that supplementing the cell growth process with ascorbic acid (AA) helps regulate iron homeostasis by limiting iron flux, thereby improving the MSC’s chondrogenic potential.

Using this novel method, spent media are collected as samples and treated with AA. The µMRR device is then used to track and provide real-time insights into small iron concentration changes within the spent media. These iron concentration changes reflect how MSCs take up and release iron and can provide an early indicator of whether a batch is likely to succeed in forming good cartilage.
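As a hypothetical sketch of how such spent-media readings might feed a go/no-go batch decision, consider the following. The units, threshold, and function names are invented for illustration; this is not the study's published protocol.

```python
# Illustrative only: flag MSC batches whose net iron uptake, inferred from
# spent-media measurements, exceeds a hypothetical cutoff. In the study,
# large iron uptake correlated with poor cartilage-forming potential.

def iron_flux(media_iron_uM):
    """Day-to-day change in spent-media iron concentration (µM).

    A drop in the spent media implies the cells took iron up;
    a rise implies they released it.
    """
    return [b - a for a, b in zip(media_iron_uM, media_iron_uM[1:])]

def batch_looks_chondrogenic(media_iron_uM, max_uptake_uM=5.0):
    """Heuristic: reject batches whose net iron uptake exceeds the
    (hypothetical) threshold, since heavy uptake predicts poor cartilage."""
    net_uptake = -sum(iron_flux(media_iron_uM))  # positive = net uptake
    return net_uptake <= max_uptake_uM

# Fresh medium starts at 10 µM; this batch draws iron down over three days.
readings = [10.0, 8.2, 6.9, 6.1]
print(batch_looks_chondrogenic(readings))  # net uptake ~3.9 µM -> True
```

In practice the real critical quality attribute would be calibrated against measured chondrogenic outcomes rather than a fixed cutoff like this.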

These findings allow manufacturers not only to monitor MSC quality for cartilage repair in real time, but also to assess when, and to what extent, interventions such as AA supplementation are likely to be beneficial, supporting efficient manufacturing of more effective and consistent MSC‑based therapies.

“One of the key challenges in cartilage regeneration is the inability to reliably predict whether MSCs will retain their chondrogenic potential during manufacturing. Our study addresses this by introducing a rapid, non-destructive method to monitor iron flux dynamics as a novel critical quality attribute (CQA) of MSCs' chondrogenic capacity. This approach enables early identification of suboptimal cell batches during culture, enhancing quality control efficiency, reducing manufacturing costs, and accelerating clinical translation,” says Yanmeng Yang, CAMP postdoc and first author of the paper.

“Our research sheds light on a fundamental biological process that, until now, has been extremely difficult to measure. By monitoring iron flux in real-time without destroying the cells, we can gain actionable insights into a cell batch’s chondrogenic potential, which allows for early decision-making during the manufacturing process. The findings support µMRR‑based iron monitoring as an effective quality control strategy for MSC-based therapy manufacturing, paving the way for more consistent and clinically viable regenerative medicine for cartilage regeneration,” says MIT Professor Jongyoon Han, co-head CAMP PI, AMP PI, and corresponding author of the paper.

This method represents a promising step toward improving the manufacturing consistency and functional characterization of MSC-based cellular products. Beyond advancing cell therapy manufacturing, it contributes to the scientific study of iron biology by providing real-time iron flux measurements that were previously unavailable. The research also advances the clinical translation of high-quality cell therapies for cartilage regeneration, bringing them closer to patients with joint degeneration conditions and cartilage injuries.

Building on these findings, the researchers plan to carry out future preclinical and clinical studies to expand this approach beyond quality control in manufacturing, with the aim of establishing µMRR as a validated method for the clinical translation of MSC-based therapies in patients for cartilage repair.

The research, conducted at SMART, was supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) program.



from MIT News https://ift.tt/DTQgUrA

Wednesday, March 18, 2026

Generative AI improves a wireless vision system that sees through obstructions

MIT researchers have spent more than a decade studying techniques that enable robots to find and manipulate hidden objects by “seeing” through obstacles. Their methods utilize surface-penetrating wireless signals that reflect off concealed items.

Now, the researchers are leveraging generative artificial intelligence models to overcome a longstanding bottleneck that limited the precision of prior approaches. The result is a new method that produces more accurate shape reconstructions, which could improve a robot’s ability to reliably grasp and manipulate objects that are blocked from view.

This new technique builds a partial reconstruction of a hidden object from reflected wireless signals and fills in the missing parts of its shape using a specially trained generative AI model.

The researchers also introduced an expanded system that uses generative AI to accurately reconstruct an entire room, including all the furniture. The system utilizes wireless signals sent from one stationary radar, which reflect off humans moving in the space.  

This overcomes one key challenge of many existing methods, which require a wireless sensor to be mounted on a mobile robot to scan the environment. And unlike some popular camera-based techniques, their method preserves the privacy of people in the environment.

These innovations could enable warehouse robots to verify packed items before shipping, eliminating waste from product returns. They could also allow smart home robots to understand someone’s location in a room, improving the safety and efficiency of human-robot interaction.

“What we’ve done now is develop generative AI models that help us understand wireless reflections. This opens up a lot of interesting new applications, but technically it is also a qualitative leap in capabilities, from being able to fill in gaps we were not able to see before to being able to interpret reflections and reconstruct entire scenes,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science, director of the Signal Kinetics group in the MIT Media Lab, and senior author of two papers on these techniques. “We are using AI to finally unlock wireless vision.”

Adib is joined on the first paper by lead author and research assistant Laura Dodds; as well as research assistants Maisy Lam, Waleed Akbar, and Yibo Cheng; and on the second paper by lead author and former postdoc Kaichen Zhou; Dodds; and research assistant Sayed Saad Afzal. Both papers will be presented at the IEEE Conference on Computer Vision and Pattern Recognition.

Surmounting specularity

The Adib Group previously demonstrated the use of millimeter wave (mmWave) signals to create accurate reconstructions of 3D objects that are hidden from view, like a lost wallet buried under a pile.

These waves, which are similar to the signals used in Wi-Fi, can pass through common obstructions like drywall, plastic, and cardboard, and reflect off hidden objects.

But mmWaves usually reflect in a specular manner, which means a wave reflects in a single direction after striking a surface. So large portions of the surface will reflect signals away from the mmWave sensor, making those areas effectively invisible.

“When we want to reconstruct an object, we are only able to see the top surface and we can’t see any of the bottom or sides,” Dodds explains.

The researchers previously used principles from physics to interpret reflected signals, but this limits the accuracy of the reconstructed 3D shape.

In the new papers, they overcame that limitation by using a generative AI model to fill in parts that are missing from a partial reconstruction.

“But the challenge then becomes: How do you train these models to fill in these gaps?” Adib says.

Usually, researchers use extremely large datasets to train a generative AI model, which is one reason models like Claude and Llama exhibit such impressive performance. But no mmWave datasets are large enough for training.

Instead, the researchers adapted the images in large computer vision datasets to mimic the properties of mmWave reflections.

“We were simulating the property of specularity and the noise we get from these reflections so we can apply existing datasets to our domain. It would have taken years for us to collect enough new data to do this,” Lam says.

The researchers embed the physics of mmWave reflections directly into these adapted data, creating a synthetic dataset they use to teach a generative AI model to perform plausible shape reconstructions.
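As a rough illustration of that adaptation step, the sketch below masks out the parts of a shape whose surfaces would reflect the signal away from the sensor (the specularity effect described above) and jitters the points that remain. The cone angle, noise model, and function names are our own illustrative choices, not the paper's.

```python
import math
import random

def _unit(v):
    """Normalize a 3D vector to unit length."""
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def simulate_specular_view(points, normals, sensor,
                           max_angle_deg=30.0, noise_std=0.0):
    """Keep points whose surface normal points back toward the sensor within
    a cone; all other surfaces reflect the signal away and become invisible."""
    cos_limit = math.cos(math.radians(max_angle_deg))
    visible = []
    for p, n in zip(points, normals):
        to_sensor = _unit(tuple(s - c for s, c in zip(sensor, p)))
        if sum(a * b for a, b in zip(_unit(n), to_sensor)) >= cos_limit:
            # Jitter surviving points to mimic measurement noise.
            visible.append(tuple(c + random.gauss(0.0, noise_std) for c in p))
    return visible

# A cube's top face (normals +z) is seen by a sensor overhead,
# while its bottom face (normals -z) reflects the signal away.
sensor = (0.0, 0.0, 5.0)
top = [(x / 4, y / 4, 1.0) for x in range(5) for y in range(5)]
bottom = [(x / 4, y / 4, 0.0) for x in range(5) for y in range(5)]
points = top + bottom
normals = [(0, 0, 1)] * len(top) + [(0, 0, -1)] * len(bottom)
partial = simulate_specular_view(points, normals, sensor)
print(len(partial), "of", len(points), "points visible")  # 25 of 50
```

Applying a transform like this to ordinary depth data yields partial shapes resembling what a mmWave sensor would capture, which is what makes existing vision datasets usable for training.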

The complete system, called Wave-Former, proposes a set of potential object surfaces based on mmWave reflections, feeds them to the generative AI model to complete the shape, and then refines the surfaces until it achieves a full reconstruction.

Wave-Former was able to generate faithful reconstructions of about 70 everyday objects, such as cans, boxes, utensils, and fruit, boosting accuracy by nearly 20 percent over state-of-the-art baselines. The objects were hidden behind or under cardboard, wood, drywall, plastic, and fabric.

Seeing “ghosts”

The team used this same approach to build an expanded system that fully reconstructs entire indoor scenes by leveraging mmWave reflections off humans moving in a room.

Human motion generates multipath reflections. Some mmWaves reflect off the human, then reflect again off a wall or object, and then arrive back at the sensor, Dodds explains.

These secondary reflections create so-called “ghost signals,” which are reflected copies of the original signal that change location as a human moves. These ghost signals are usually discarded as noise, but they also hold information about the layout of the room.
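A toy 2D example of the underlying geometry: a wall acts like a mirror, so the ghost appears at the person's mirror-image position, and comparing the real and ghost tracks reveals where the wall is. This is a simplified illustration of the idea, not the actual system.

```python
# Toy geometry behind "ghost signals": a wall at x = wall_x acts like a
# mirror, so a multipath reflection makes a person appear at the mirror-image
# position. Comparing real and ghost tracks recovers the wall location.

def ghost_of(person, wall_x):
    """Mirror-image of a 2D point across the vertical wall plane x = wall_x."""
    px, py = person
    return (2.0 * wall_x - px, py)

def infer_wall_x(track, ghost_track):
    """The wall lies halfway between each real point and its ghost,
    so average the midpoints over the whole trajectory."""
    mids = [(p[0] + g[0]) / 2.0 for p, g in zip(track, ghost_track)]
    return sum(mids) / len(mids)

wall_x = 4.0
track = [(1.0, 0.0), (1.5, 0.5), (2.0, 1.0)]     # person walking
ghosts = [ghost_of(p, wall_x) for p in track]    # what the radar "sees"
print(infer_wall_x(track, ghosts))  # 4.0
```

Real reflections are far noisier and three-dimensional, which is why the coarse estimate still needs a generative model to refine it.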

“By analyzing how these reflections change over time, we can start to get a coarse understanding of the environment around us. But trying to directly interpret these signals is going to be limited in accuracy and resolution,” Dodds says.

They used a similar training method to teach a generative AI model to interpret those coarse scene reconstructions and understand the behavior of multipath mmWave reflections. This model fills in the gaps, refining the initial reconstruction until it completes the scene.

They tested their scene reconstruction system, called RISE, using more than 100 human trajectories captured by a single mmWave radar. On average, RISE generated reconstructions that were about twice as precise as those of existing techniques.

In the future, the researchers want to improve the granularity and detail of their reconstructions. They also want to build large foundation models for wireless signals, analogous to models like GPT, Claude, and Gemini for language and vision, which could open up new applications.

This work is supported, in part, by the National Science Foundation (NSF), the MIT Media Lab, and Amazon.



from MIT News https://ift.tt/CgSNeYO

A better method for identifying overconfident large language models

Large language models (LLMs) can generate credible but inaccurate responses, so researchers have developed uncertainty quantification methods to check the reliability of predictions. One popular method involves submitting the same prompt multiple times to see if the model generates the same answer.

But this method measures self-confidence, and even the most impressive LLM might be confidently wrong. Overconfidence can mislead users about the accuracy of a prediction, which might result in devastating consequences in high-stakes settings like health care or finance.   

To address this shortcoming, MIT researchers introduced a new method for measuring a different type of uncertainty that more reliably identifies confident but incorrect LLM responses.

Their method involves comparing a target model’s response to responses from a group of similar LLMs. They found that measuring cross-model disagreement more accurately captures this type of uncertainty than traditional approaches.

They combined their approach with a measure of LLM self-consistency to create a total uncertainty metric, and evaluated it on 10 realistic tasks, such as question-answering and math reasoning. This total uncertainty metric consistently outperformed other measures and was better at identifying unreliable predictions.

“Self-consistency is being used in a lot of different approaches for uncertainty quantification, but if your estimate of uncertainty only relies on a single model’s outcome, it is not necessarily trustable. We went back to the beginning to understand the limitations of current approaches and used those as a starting point to design a complementary method that can empirically improve the results,” says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on this technique.

She is joined on the paper by Veronika Thost, a research scientist at the MIT-IBM Watson AI Lab; Walter Gerych, a former MIT postdoc who is now an assistant professor at Worcester Polytechnic Institute; Mikhail Yurochkin, a staff research scientist at the MIT-IBM Watson AI Lab; and senior author Marzyeh Ghassemi, an associate professor in EECS and a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems.

Understanding overconfidence

Many popular methods for uncertainty quantification involve asking a model for a confidence score or testing the consistency of its responses to the same prompt. These methods estimate aleatoric uncertainty, or how internally confident a model is in its own prediction.

However, LLMs can be confident when they are completely wrong. Research has shown that epistemic uncertainty, or uncertainty about whether one is using the right model, can be a better way to assess true uncertainty when a model is overconfident.

The MIT researchers estimate epistemic uncertainty by measuring disagreement across a similar group of LLMs.    

“If I ask ChatGPT the same question multiple times and it gives me the same answer over and over again, that doesn’t mean the answer is necessarily correct. If I switch to Claude or Gemini and ask them the same question, and I get a different answer, that is going to give me a sense of the epistemic uncertainty,” Hamidieh explains.

Epistemic uncertainty attempts to capture how far a target model diverges from the ideal model for that task. But since it is impossible to build an ideal model, researchers use surrogates or approximations that often rely on faulty assumptions.

To improve uncertainty quantification, the MIT researchers needed a more accurate way to estimate epistemic uncertainty.

An ensemble approach

The method they developed involves measuring the divergence between the target model and a small ensemble of models with similar size and architecture. They found that comparing semantic similarity, or how closely the meanings of the responses match, could provide a better estimate of epistemic uncertainty.

To achieve the most accurate estimate, the researchers needed a set of LLMs that covered diverse responses, weren’t too similar to the target model, and were weighted based on credibility.

“We found that the easiest way to satisfy all these properties is to take models that are trained by different companies. We tried many different approaches that were more complex, but this very simple approach ended up working best,” Hamidieh says.

Once they had developed this method for estimating epistemic uncertainty, they combined it with a standard approach that measures aleatoric uncertainty. This total uncertainty metric (TU) offered the most accurate reflection of whether a model’s confidence level is trustworthy.

“Uncertainty depends on the uncertainty of the given prompt as well as how close our model is to the optimal model. This is why summing up these two uncertainty metrics is going to give us the best estimate,” Hamidieh says.
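That combination can be sketched as follows, with exact string matching standing in for the paper's semantic-similarity comparison; the specific formulas here (answer entropy for self-consistency, disagreement rate across the ensemble) are illustrative choices, not the published estimator.

```python
import math
from collections import Counter

def aleatoric(samples):
    """Entropy of the target model's own answers to repeated prompts:
    0 when it always gives the same answer (perfect self-consistency)."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def epistemic(target_answer, other_model_answers):
    """Fraction of independently trained models that disagree with the
    target model's answer."""
    disagree = sum(a != target_answer for a in other_model_answers)
    return disagree / len(other_model_answers)

def total_uncertainty(samples, other_model_answers):
    """Sum the two components, as described in the article."""
    majority = Counter(samples).most_common(1)[0][0]
    return aleatoric(samples) + epistemic(majority, other_model_answers)

# Confidently wrong: the target is perfectly self-consistent (aleatoric = 0),
# but every other model disagrees, so total uncertainty is still high.
samples = ["Paris"] * 5
others = ["Lyon", "Lyon", "Marseille"]
print(total_uncertainty(samples, others))  # 1.0
```

The example shows why cross-model disagreement matters: self-consistency alone would rate this prediction as maximally reliable.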

TU could more effectively identify situations where an LLM is hallucinating, since epistemic uncertainty can flag confidently wrong outputs that aleatoric uncertainty might miss. It could also enable researchers to reinforce an LLM’s confidently correct answers during training, which may improve performance.

They tested TU using multiple LLMs on 10 common tasks, such as question-answering, summarization, translation, and math reasoning. Their method more effectively identified unreliable predictions than either measure on its own.

Measuring total uncertainty often required fewer queries than calculating aleatoric uncertainty, which could reduce computational costs and save energy.

Their experiments also revealed that epistemic uncertainty is most effective on tasks with a unique correct answer, like factual question-answering, but may underperform on more open-ended tasks.

In the future, the researchers could adapt their technique to improve its performance on open-ended queries. They may also build on this work by exploring other forms of aleatoric uncertainty.

This work is funded, in part, by the MIT-IBM Watson AI Lab.



from MIT News https://ift.tt/By3LMv1

Turning extreme heat into large-scale energy storage

Thermal batteries can efficiently store energy as heat. But building them requires a carefully designed system with materials that can withstand cycles of extremely high temperatures, without succumbing to problems like corrosion, thermal expansion, and structural fatigue.

Many thermal battery systems move high-temperature gas or molten salt around through metal pipes. Fourth Power, founded by MIT Professor Asegun Henry SM ’06, PhD ’09, is turning these materials inside out, using molten metal to transport the heat, which is stored in carbon bricks. Henry’s approach earned him a Guinness World Record for the hottest liquid pump back in 2017 — important because when you double the absolute temperature of a material, to the point where it glows white-hot, the amount of light it emits doesn’t just double, it increases 16 times (or to the fourth power).
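The scaling behind the company's name is the Stefan-Boltzmann law: the power a hot surface radiates grows with the fourth power of its absolute temperature, so doubling the temperature multiplies the emitted light by 2^4 = 16. A quick check:

```python
# Stefan-Boltzmann law: an ideal black body radiates sigma * T**4 watts per
# square meter. Doubling the absolute temperature scales output by 2**4 = 16.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiant_exitance(temp_k):
    """Power radiated per unit area by an ideal black body at temp_k kelvin."""
    return SIGMA * temp_k ** 4

t = 1200.0  # kelvin
print(radiant_exitance(2 * t) / radiant_exitance(t))  # 16.0
```

This steep scaling is why pushing to extreme temperatures shrinks the system: a hotter emitter delivers far more power per unit area to the thermophotovoltaic cells.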

The company is harvesting all that light with thermophotovoltaic cells, which work like solar cells to convert light into electricity. Henry and his collaborators broke another record when they demonstrated a lab version of a thermophotovoltaic cell that could convert light to electricity with an efficiency above 40 percent.

Fourth Power is working to use those record-breaking innovations to provide energy for power grids, power producers, and technology companies building power-hungry infrastructure like data centers. Henry says the batteries can provide anywhere from 10 to over 100 hours of electricity at a storage cost that is significantly cheaper than lithium-ion batteries at grid scale. The company is currently cycling each section of its system through relevant operating temperatures — which are nearly half as hot as the sun — and plans to have a fully integrated demonstration unit operating later this year.

“Explaining why our system is such a huge improvement over everything else centers around power density,” explains Henry, who serves as Fourth Power’s chief technologist. “We realized if you push the temperature higher, you will transfer heat at a higher rate and shrink the system. Then everything gets cheaper. That’s why we pursue such high temperatures at Fourth Power. We operate our thermal battery between 1,900 and 2,400 degrees Celsius, which allows us to save a tremendous amount on the balance of system costs.”

A career in heat

Henry earned his master’s and PhD degrees from MIT before working in faculty positions at Georgia Tech and MIT. As a professor at both schools, his research has focused on thermal transport, storage, renewable energy, and other technologies that could lead to improvements in sustainability and decarbonization. Today, he is the George N. Hatsopoulos Professor in Thermodynamics in MIT’s Department of Mechanical Engineering.

Heat transfer systems are usually made out of metals like iron and nickel. Generally, the higher the temperature you want to reach, the more expensive the metal. Henry noticed ceramics can get much hotter than metals, but they’re not used nearly as often. He started asking why.

“The answer is often pretty straightforward: You can’t weld ceramics,” Henry says. “Ceramics aren’t ductile. They generally fail in a catastrophically brittle way, and that’s not how we like large systems to behave. But I couldn’t find many problems beyond that.”

After receiving funding from the Department of Energy, Henry spent years developing a pump made from ceramics and graphite (which is similar to a ceramic). In 2017, his pump set the record for the highest recorded operating temperature for a liquid pump, at 1,200 degrees Celsius. The pump moved white-hot liquid tin as its heat-transfer fluid. He chose tin because it doesn’t react with carbon, eliminating corrosion. It also has a relatively low melting point and a high boiling point, which keeps it liquid across a large temperature range.

“The idea was, instead of making the system from metal, let’s move liquid metals,” Henry says.

The challenge then became designing the system.

“Typically, a mechanical engineer would come up with a design and say, ‘Give me the best materials to do this,’” Henry says. “We flipped the problem, so we were saying, ‘We know what materials will work, now we need to figure out how to make a system out of it.’”

In 2023, Henry met Arvin Ganesan, who had previously led global energy work at Apple. At first, Ganesan wasn’t interested in joining a startup — he had two young kids and wanted to prioritize his family — but he was intrigued by the potential of the technology. At their first meeting, the two connected over shared values and fatherhood, as Henry surprised Ganesan by bringing his own young children.

“I had a sense this technology had the promise to tackle the twin crises of affordability and climate change at the same time,” says Ganesan, who is now Fourth Power’s CEO. “As energy demand becomes more pronounced, we either need to deploy harder and deeper tech, which is also important, or improve existing tech. Fourth Power is trying to simplify the physics and thermodynamic principles to deliver an approach that has been very well-studied for a very long time.”

The system Fourth Power designed takes in excess electricity from sources like the grid and uses it to heat a series of 6-foot-long, 20-inch thick graphite bricks until they reach about 2,400 Celsius. At that point the system is considered fully charged.

When the customer wants the electricity back, the bricks are used to heat up liquid tin, which flows through a series of graphite pipes, pumps, and flow meters to thermophotovoltaic cells, which turn the light from the glowing hot infrastructure back into electricity.

“You can basically dip the cells into the light and get power, or you can pull them back out and shut it off,” Henry explains. “The liquid metal starts at 2,400 Celsius and then cools as it’s going through the system because it’s giving a bunch of its energy to the photovoltaic, and then it circulates back through the graphite blocks, which act as a furnace, to retrieve more heat.”

From concept to company

Later this year, Fourth Power plans to turn on a 1-megawatt-hour system in its new headquarters in Bedford, Massachusetts. A full-scale system would offer 25 megawatts of power and 250 megawatt hours of storage and take up about half a football field.

“Most technologies you’ll see in storage are around 10 megawatts an acre or less,” Henry explains. “Fourth Power is more like 100 megawatts per acre. It’s very power-dense.”

The power and storage units of Fourth Power’s system are modular, which will allow customers to start with a smaller system and add storage units to extend storage length later. The company expects to lose about 1 percent of total heat stored per day.

“Customers can buy one storage and one power module, and that’s a 10-hour battery,” Henry explains. “But if they want one power module and two storage modules, that’s a 20-hour battery. Customers can mix and match, which is really advantageous for utilities as renewables scale and storage needs change.”
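The mix-and-match arithmetic can be sketched with figures quoted in the article (a 25-megawatt power module, 250 megawatt-hours per storage module, and roughly 1 percent of stored heat lost per day); the function names are ours.

```python
# Back-of-envelope sizing for the modular system described above, using only
# the numbers quoted in the article.

POWER_MODULE_MW = 25.0       # full-scale power block
STORAGE_MODULE_MWH = 250.0   # full-scale storage block
DAILY_LOSS = 0.01            # ~1 percent of stored heat lost per day

def discharge_hours(power_modules, storage_modules):
    """Hours of discharge at full power for a given module mix."""
    return (storage_modules * STORAGE_MODULE_MWH) / (power_modules * POWER_MODULE_MW)

def stored_energy_after(mwh, days):
    """Energy remaining after idle days, given the quoted daily heat loss."""
    return mwh * (1.0 - DAILY_LOSS) ** days

print(discharge_hours(1, 1))  # 10.0 hours
print(discharge_hours(1, 2))  # 20.0 hours
print(round(stored_energy_after(250.0, 7), 1))  # 233.0 MWh after a week idle
```

The numbers line up with the quotes above: one power module plus one storage module yields a 10-hour battery, and adding storage modules extends the duration without adding power hardware.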

Down the line, the system could also be run as a power plant, converting fuel into electricity or using fuel to charge its batteries during stretches with little wind or sun. It could also be used to provide industrial heat.

But for now, Fourth Power is focused on the battery application.

“Utilities need something cheap and they need something reliable,” Henry says. “The only technology that has managed to reach at least one of those requirements is lithium ion. But the world is waiting for something that’s much cheaper than lithium ion and just as reliable, if not better. That’s what we’re focused on demonstrating to the world.”



from MIT News https://ift.tt/QsmN17E

Tuesday, March 17, 2026

Sustaining diplomacy amid competition in US-China relations

The United States and China “are the two largest emitters of carbon in the world,” said Nicholas Burns, former U.S. ambassador to the People’s Republic of China, at a recent MIT seminar. “We need to work with each other for the good of both of our countries.” 

During the MITEI Presents: Advancing the Energy Transition presentation, Burns gave insight into the evolving state of U.S.-China relations, its implications for the global order, and its impact on global efforts to advance the energy transition and address climate change.

“We are the two largest global economies,” said Burns, who is now the Goodman Professor of the Practice of Diplomacy and International Relations at Harvard University’s Kennedy School of Government. “These are the only two countries that affect everybody else in the international system because of our weight.”

The relationship between the United States and China can be summarized in three words, according to Burns: competitive, tough, and adversarial — a description that rings true on both sides. He listed four primary areas for this competition: military, technology, trade and economics, and values.

Burns described the especially complicated area of trade and economics. “We both want to be number one. Neither of us — to be honest — is willing to be number two,” said Burns. Outside of North America, China is the United States’ largest trade partner. Outright trade wars — like those in April and October 2025 — create friction. “At one point, you’ll remember, 145 percent tariffs by the United States, and 125 percent by China on the United States. That just grinds a relationship. That level of tariffs, had it been sustained, would have meant zero trade between the two countries.”

The energy field can be significantly impacted by this area of competition, Burns added. China is dominant in the production and processing of rare earth elements, many of which are critical to products like lithium batteries, solar panels, and electric vehicles. In 2024 and 2025, the United States was not the only country to place tariffs on these products; India, Turkey, South Africa, Mexico, Canada, the EU, and others followed suit. “I think the Trump administration is right, as President Biden was, to try to diversify sources on rare earths,” Burns said.

Burns also noted with interest the dichotomy in the Chinese energy sector between the country’s lead in clean energy technology and its continued reliance on coal, an inconsistency in China’s efforts. Burns believes that climate change could be a key area of cooperation between China and the United States, and he emphasized the importance of the United States’ participation, both technologically and diplomatically.

Burns also described the significant technological competition between the United States and China — an area of central importance. Throughout his presentation, Burns was quick to praise the emphasis that China puts on education and academic achievement, particularly in STEM fields. Pulling from a recent article in The Economist, he compared the 36 percent of Chinese first-year university students majoring in STEM fields to the 5 percent of American first-year students in STEM. “Think about the volume of graduates and the disparity between our country and China,” he said. “Then think about the percentage of those graduates who go into science and technology.”

Currently, areas like artificial intelligence, quantum computing, and biotechnology are taking center stage in technological innovation. “The Chinese are very skilled in terms of industrial processes and doctrine of adapting quickly,” said Burns. He explained that holding a competitive edge lies not only in who is first on the market, but who adopts the technology first, and who is able to unite that technological progress with policy.

“This is the most important relationship that we have in the world,” said Burns. He believes that the true test is whether the United States and China can manage competition so that interests are protected, while avoiding the use of the massive destructive power both countries possess. “We’ve got to normalize the communication and engagement to prevent the worst from happening,” said Burns.

“We’re at a stage of human history where we’re all linked together, and the fate of everybody in this room and all of our countries is linked together by these huge transnational challenges,” said Burns. “We’ve got to learn to compete and yet live in peace with each other in the process.”

This speaker series highlights energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. Visit MITEI’s Events page for more information on this and additional events.



from MIT News https://ift.tt/2UwBrNk

MIT-IBM Watson AI Lab seed to signal: Amplifying early-career faculty impact

The early years of a faculty member’s career are a formative and exciting time, one for establishing the firm footing that helps determine the trajectory of their research. This includes building a research team, which demands innovative ideas and direction, creative collaborators, and reliable resources.

For a group of MIT faculty working with and on artificial intelligence, early engagement with the MIT-IBM Watson AI Lab through its projects has played an important role in promoting ambitious lines of inquiry and shaping prolific research groups.

Building momentum

“The MIT-IBM Watson AI Lab has been hugely important for my success, especially when I was starting out,” says Jacob Andreas — associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and a researcher with the MIT-IBM Watson AI Lab — who studies natural language processing (NLP). Shortly after joining MIT, Andreas jump-started his first major project through the MIT-IBM Watson AI Lab, working on language representation and structured data augmentation methods for low-resource languages. “It really was the thing that let me launch my lab and start recruiting students.” 

Andreas notes that this occurred during a “pivotal moment” when the field of NLP was undergoing significant shifts to understand language models — a task that required significantly more compute, which was available through the MIT-IBM Watson AI Lab. “I feel like the kind of work that we did under that [first] project, and in collaboration with all of our people on the IBM side, was pretty helpful in figuring out just how to navigate that transition.” Further, the Andreas group was able to pursue multi-year projects on pre-training, reinforcement learning, and calibration for trustworthy responses, thanks to the computing resources and expertise within the MIT-IBM community.

For several other faculty members, timely participation with the MIT-IBM Watson AI Lab proved to be highly advantageous as well. “Having both intellectual support and also being able to leverage some of the computational resources that are within MIT-IBM, that’s been completely transformative and incredibly important for my research program,” says Yoon Kim — associate professor in EECS, CSAIL, and a researcher with the MIT-IBM Watson AI Lab — who has also seen his research field alter trajectory. Before joining MIT, Kim met his future collaborators during an MIT-IBM postdoctoral position, where he pursued neuro-symbolic model development; now, Kim’s team develops methods to improve large language model (LLM) capabilities and efficiency. 

One factor he points to that led to his group’s success is a seamless research process with intellectual partners. This has allowed his MIT-IBM team to apply for a project, experiment at scale, identify bottlenecks, validate techniques, and adapt as necessary to develop cutting-edge methods for potential inclusion in real-world applications. “This is an impetus for new ideas, and that’s, I think, what’s unique about this relationship,” says Kim.

Merging expertise

The nature of the MIT-IBM Watson AI Lab is that it not only brings together researchers in the AI realm to accelerate research, but also blends work across disciplines. Lab researcher and MIT associate professor in EECS and CSAIL Justin Solomon describes his research group as growing up with the lab, and the collaboration as being “crucial … from its beginning until now.” Solomon’s research team focuses on theoretically oriented, geometric problems as they pertain to computer graphics, vision, and machine learning. 

Solomon credits the MIT-IBM collaboration with expanding his skill set as well as applications of his group’s work — a sentiment that’s also shared by lab researchers Chuchu Fan, an associate professor of aeronautics and astronautics and a member of the Laboratory for Information and Decision Systems, and Faez Ahmed, associate professor of mechanical engineering. “They [IBM] are able to translate some of these really messy problems from engineering into the sort of mathematical assets that our team can work on, and close the loop,” says Solomon. This, for Solomon, includes fusing distinct AI models that were trained on different datasets for separate tasks. “I think these are all really exciting spaces,” he says.

“I think these early-career projects [with the MIT-IBM Watson AI Lab] largely shaped my own research agenda,” says Fan, whose research intersects robotics, control theory, and safety-critical systems. Like Kim, Solomon, and Andreas, Fan and Ahmed began projects through the collaboration the first year they were able to at MIT. Constraints and optimization govern the problems that Fan and Ahmed address, and so require deep domain knowledge outside of AI. 

Working with the MIT-IBM Watson AI Lab enabled Fan’s group to combine formal methods with natural language processing, which, she says, allowed the team to go from developing autoregressive task and motion planning for robots to creating LLM-based agents for travel planning, decision-making, and verification. “That work was the first exploration of using an LLM to translate any free-form natural language into some specification that a robot can understand and execute. That’s something that I’m very proud of, and [it was] very difficult at the time,” says Fan. Further, through joint investigation, her team has been able to improve LLM reasoning — work that “would be impossible without the IBM support,” she says.

Through the lab, Faez Ahmed’s collaboration facilitated the development of machine-learning methods to accelerate discovery and design within complex mechanical systems. Their Linkages work, for instance, employs “generative optimization” to solve engineering problems in a way that is both data-driven and precise; more recently, the group has been applying multi-modal data and LLMs to computer-aided design. Ahmed notes that AI is frequently applied to problems that are already solvable but could benefit from increased speed or efficiency; now, however, challenges — like mechanical linkages that were deemed “almost unsolvable” — are within reach. “I do think that is definitely the hallmark [of our MIT-IBM team],” says Ahmed, praising the achievements of his MIT-IBM group, which is co-led by Akash Srivastava and Dan Gutfreund of IBM.

What began as an initial collaboration for each MIT faculty member has evolved into a lasting intellectual relationship, where both parties are “excited about the science” and the work is “student-driven,” Ahmed adds. Taken together, the experiences of Jacob Andreas, Yoon Kim, Justin Solomon, Chuchu Fan, and Faez Ahmed speak to the impact that a durable, hands-on academia-industry relationship can have on establishing research groups and enabling ambitious scientific exploration.



from MIT News https://ift.tt/4yHXgDM

Three anesthesia drugs all have the same effect in the brain, MIT researchers find

When patients undergo general anesthesia, doctors can choose among several drugs. Although each of these drugs acts on neurons in different ways, they all lead to the same result: a disruption of the brain’s balance between stability and excitability, according to a new MIT study.

This disruption causes neural activity to become increasingly unstable, until the brain loses consciousness, the researchers found. The discovery of this common mechanism could make it easier to develop new technologies for monitoring patients while they are undergoing anesthesia.

“What’s exciting about that is the possibility of a universal anesthesia-delivery system that can measure this one signal and tell how unconscious you are, regardless of which drugs they’re using in the operating room,” says Earl Miller, the Picower Professor of Neuroscience and a member of MIT’s Picower Institute for Learning and Memory.

Miller, Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience Emery Brown, and their colleagues are now working on an automated control system for delivery of anesthesia drugs, which would measure the brain’s stability using EEG and then automatically adjust the drug dose. This could help doctors ensure that patients stay unconscious throughout surgery without becoming too deeply unconscious, which can have negative side effects following the procedure.

Miller and Ila Fiete, a professor of brain and cognitive sciences, the director of the K. Lisa Yang Integrative Computational Neuroscience Center (ICoN), and a member of MIT’s McGovern Institute for Brain Research, are the senior authors of the new study, which appears today in Cell Reports. MIT graduate student Adam Eisen is the paper’s lead author.

Destabilizing the brain

Exactly how anesthesia drugs cause the brain to lose consciousness has been a longstanding question in neuroscience. In 2024, a study from Miller’s and Fiete’s labs suggested that propofol, at least, works by disrupting the balance between stability and excitability in the brain.

When someone is awake, their brain is able to maintain this delicate balance, responding to sensory information or other input and then returning to a stable baseline.

“The nervous system has to operate on a knife’s edge in this narrow range of excitability,” Miller says. “It has to be excitable enough so different parts can influence one another, but if it gets too excited it goes off into chaotic activity.”

In that 2024 study, the researchers found that propofol knocks the brain out of this state, known as “dynamic stability.” As doses of the drug increased, the brain took longer and longer to return to its baseline state after responding to new input. This effect became increasingly pronounced until consciousness was lost.

For that study, the researchers devised a computational model that analyzes neural activity recorded from the brain. This technique allowed them to determine how the brain responds to perturbations such as an auditory tone or other sensory input, and how long it takes to return to its baseline stability.
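This kind of stability measurement can be illustrated with a toy calculation: fit an exponential decay to a post-perturbation signal and read off its time constant, where a longer time constant means a slower return to baseline, i.e., less dynamic stability. The sketch below is a minimal Python toy with invented signals and time constants; it is not the authors' computational model.

```python
import numpy as np

def recovery_time_constant(response, dt=0.001):
    """Fit an exponential decay to a post-perturbation response and
    return its time constant in seconds. A longer time constant means
    a slower return to baseline, i.e., reduced dynamic stability."""
    t = np.arange(len(response)) * dt
    # Least-squares fit of log|response| = log(A) - t/tau
    slope, _ = np.polyfit(t, np.log(np.abs(response) + 1e-12), 1)
    return -1.0 / slope

# Invented responses: activity relaxing back to baseline after a tone
t = np.arange(0, 1, 0.001)
awake = np.exp(-t / 0.05)         # fast recovery (stable)
anesthetized = np.exp(-t / 0.40)  # slow recovery (destabilized)

print(recovery_time_constant(awake))         # ~0.05 s
print(recovery_time_constant(anesthetized))  # ~0.40 s
```

In this picture, increasing the anesthetic dose corresponds to the fitted time constant growing until the response no longer settles back to baseline.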

In their new study, the researchers used the same technique to measure how the brain responds not only to propofol but also to two additional anesthesia drugs — ketamine and dexmedetomidine. Animals were given one of the three drugs while their brain activity was analyzed, including their response to auditory tones.

This study showed that the same destabilization induced by propofol also appears during administration of the other two drugs. This “universal signature” appears even though the three drugs have different molecular mechanisms: propofol binds to GABA receptors, inhibiting neurons that have those receptors; dexmedetomidine blocks the release of norepinephrine; and ketamine blocks NMDA receptors, suppressing neurons with those receptors.

Each of these pathways, the researchers hypothesize, affects the brain’s balance of stability and excitability in a different way, yet each leads to an overall destabilization of this balance.

“All three of these drugs appear to do the exact same thing,” Miller says. “In fact, you could look at the destabilization measure we use and you can’t tell which drug is being applied.”

The researchers now plan to further investigate how each of these drugs may give rise to the same patterns of brain destabilization.

“The molecular mechanisms of ketamine and dexmedetomidine are a bit more involved than propofol mechanisms,” Eisen says. “A future direction is to do a meaningful model of what the biophysical effects of those are and see how that could lead to destabilization.”

Monitoring anesthesia

Now that the researchers have shown that three different anesthesia drugs produce similar destabilization patterns in the brain, they believe that measuring those patterns could offer a valuable way to monitor patients during anesthesia. While anesthesia is overall a very safe procedure, it does carry some risks, especially for very young children and for people over 65.

For adults suffering from dementia, anesthesia can make the condition worse, and it can also exacerbate neuropsychiatric disorders such as depression. These risks are higher if patients go into a deeper state of unconsciousness known as burst suppression.

To help reduce those risks, Miller and Brown, who is also an anesthesiologist at MGH, are developing a prototype device that can measure patients’ EEG readings while under anesthesia and adjust their dose accordingly. Currently, doctors monitor patients’ heart rate, blood pressure, and other vital signs during surgery, but these don’t give as accurate a reading of how deeply the patient is unconscious.

“If you can limit people’s exposure to anesthesia, if you give just enough and no more, you can reduce risks across the board,” Miller says.

Working with researchers at Brown University, the MIT team is now planning to run a small clinical trial of their monitoring device with patients undergoing surgery.

The research was funded by the U.S. Office of Naval Research, the National Institute of Mental Health, the Simons Center for the Social Brain, the Freedom Together Foundation, the Picower Institute, the National Science Foundation Computer and Information Science and Engineering Directorate, the Simons Collaboration on the Global Brain, the McGovern Institute, and the National Institutes of Health.



from MIT News https://ift.tt/Qsztlug

Monday, March 16, 2026

“We the People” depicts inventors, dreamers, and innovators in all 50 states

Zora Neale Hurston remains one of America’s best-known authors. Charles Henry Turner developed landmark studies about the behavior of bees and spiders. Brian Wilson founded the Beach Boys. George Nissen invented the trampoline. What do they all have in common?

Well, for one thing, they were all innovative Americans — creators and discoverers, producing work no one anticipated. For another, they are all now celebrated as such, in verse, by Joshua Bennett.

That’s right. Bennett — an MIT professor, lauded poet, and literary scholar — is marking the 250th anniversary of the founding of the U.S. with a book-length work of poetry about the country and some of its distinctive figures. In fact, 50 of them: Bennett has written a substantial work featuring remarkable people or inventions from each of the 50 states, meditating on their place in the cultural fabric of the U.S.

“There’s so much to be said for a country where you and I are possible, and the things we do are possible,” Bennett says.

The book, “We (The People of the United States),” is published today by Penguin Books. Bennett is a professor and the Distinguished Chair of the Humanities at MIT.

Bennett’s new work has some prominent Americans in it, but is no gauzy listing of familiar icons. Many of the 50 people in his book overcame hardship, poverty, rejection, or discrimination; some have already been rescued from obscurity, but others have not received proper acclaim. Few of them had a straightforward, simple connection with their times.

“It’s about feeling that you have a life in this country which is undeniably complex, but also has this remarkable beauty to it,” Bennett says of the work. “A beauty you helped to create, and that no one can take away from you.”

The figures that Bennett writes about are sources of fascination, and inspiration, demonstrating the kinds of lives it is possible to invent in the U.S.

“We’re in a moment that calls for compelling, historically grounded stories about what America is, what it has been, and what it can be,” Bennett adds. “Can we build a life-affirming vision for the future and those who will inherit it? I’m trying to. I work on it every day.”

Taking flight

“We (The People of the United States)” is inspired, in part, by Virgil’s “Georgics,” pastoral poems by the great Roman poet. Bennett encountered them while a PhD student in literature at Princeton University.

“The poet Susan Stewart, my professor at Princeton, introduced me to Virgil’s Georgics,” Bennett says. “I eventually started to think: What would it look like for me to cover Virgil?” Adding to his interest in the concept, one of his favorite poets, Gwendolyn Brooks, had spent time recasting Virgil’s ancient epic, “The Aeneid,” for her Pulitzer Prize-winning work, “Annie Allen.” She also translated the original work from Latin as a teenager. Moreover, Bennett’s writing has long engaged with the subject of people working the land in America.

“I decided to start writing all these poems about agriculture,” Bennett says. “But then I thought, this would be interesting as an epic poem about America.” As he launched the project, its focus shifted some more: “I started to think about the book as an ode to invention.”

Soon Bennett had worked out the structure. An opening section of the work is about his own family background, becoming a father, and the process of building a life here in Massachusetts.

“Where does my influence, my aspiration, end and the child begin?” Bennett writes in one poem. That section prefigures further themes in the collection about the domestic environments many of its figures emerged from. For the rest of the work, with one innovator or innovation for each of the 50 states, Bennett adopted a regular writing schedule, producing at least one new poem per week until he was finished. 

Hurston, one of several famous authors and artists featured in the book, represents Florida. From Ohio, entomologist Charles Henry Turner was the first Black person to receive a PhD from the University of Chicago, in 1907, before conducting a wide range of studies about the cognition and behavior of spiders and bees, among other things.

George Nissen, meanwhile, was a University of Iowa gymnast who built the first trampoline in the 1930s in his home state — something Bennett calls a “magical device” that brings to life “the scene in your mind of the leap/and of the leap itself, where you are airborne, illuminated/quickly immortal.” Whether these innovations arose through rigorous academic exploration or became mass-market goods that produce flights of fancy, Bennett has a keen eye for people who break new ground and fire our own feelings of wonder.

“We actually are all bound up in it together,” Bennett says. “These different figures, from various fields, eras, and lifelong pursuits are in here together precisely because they helped weave the story of this country together. It’s a story that is still unfolding.”

Bennett is straightforward about the struggles many of his subjects faced. His choice to represent North Carolina is the poet George Moses Horton, an enslaved man who not only learned to read and write in the early 1800s — the state later made that illegal for enslaved persons, in 1830 — but made money selling poems to University of North Carolina students. Indeed, Horton’s work was published in the 1820s. Bennett writes that Horton’s public performance of his poetry was “an ancient art revived in the flesh of a prodigy in chains.”

Bennett’s unblinking regard for historical reality is a motif throughout the work. “To me it’s not only about exploring a history that a reader might feel connected to or want to learn more about,” he says. “It’s about honoring those who lived that history, who helped make some of the most beautiful parts of the present possible, through an engagement with the substance of their lives.”

Just my imagination

Many figures in “We (The People of the United States)” are artists, but of many forms. From watching VH1 as a child, Bennett got into the Beach Boys, and he devotes the California entry in the poem to them. Or as Bennett puts it, he was “newly initiated into a sound/I do not understand until I am old enough to be nostalgic/for windswept locales, and singular moments in time/I never lived through.”

Bennett was learning about the Beach Boys while growing up in Yonkers, New York, far from any California beaches. But then, Brian Wilson wasn’t a surfer either — he grew up in an industrial suburb of Los Angeles. Imagination was the coin of the realm for Wilson, something Bennett understood when Beach Boys songs would veer off in unexpected directions.

“I’ve always been drawn to moments of great surprise, or revelation, in the works of art I love,” Bennett says. “Which is part of why I’ve dedicated my life to poetry. You think one thing is happening in a poem, and suddenly that shock comes, that unexpected turn, or volta. Brian Wilson always had a great understanding of that. It works in pop music. Surprise, sometimes, is a shift in register that takes you higher.”

Various poems in the collection have down-to-earth origins. Bennett remembers his father often fixing things in the family home, from toys to the boiler, saying, “Pass me the Phillips-head,” when he needed a screwdriver. Thus Oregon appears in the book: Portland is where the Phillips-head screwdriver was invented.

In conversation, Bennett notes the hopeful disposition of his father, who after living through Jim Crow and serving in the Vietnam War, worked 10-hour shifts at the U.S. Postal Service to support his family. Even with all the difficulty he experienced in his life, Bennett’s father always encouraged his son to pursue his dreams.

“I’m grateful that I inherited a profound sense of belonging, and dignity, from my parents,” Bennett says. “There was always this feeling that we were part of a much larger story, and that we had a responsibility to tell the truth about the world as we knew it.”

And that’s really what Bennett’s new book is about.

“We can reckon with our history in its fullness and work, tirelessly, toward a world that’s worthy of the most vulnerable among us,” Bennett says. “Like Toni Morrison, we can ‘dream the world as it ought to be.’ And then make it real. That’s my vision.”



from MIT News https://ift.tt/PlmVfyC

Ocean bacteria team up to break down biodegradable plastic

Biodegradable plastics could help alleviate the plastic waste crisis that is polluting the environment and harming our health. But how long these plastics take to degrade, and how environmental bacteria work together to break them down, remain largely unknown.

Understanding how plastics are broken down by microbes could help scientists create more sustainable materials and even new microbial recycling systems that convert plastic waste into useful materials.

Now MIT researchers have taken an important first step toward understanding how bacteria work together to break down plastic. In a new paper, the researchers uncovered the role of individual ocean bacteria in the breakdown of a widely used biodegradable plastic. They also showed the complementary processes microbes use to fully consume the plastic, with one microbe cleaving the plastic into its component chemicals and others consuming each chemical.

The researchers say it’s one of the first studies to illuminate the roles of specific bacterial species in the breakdown of plastic, and that it indicates the speed of plastic degradation can vary widely depending on a few key factors.

“There is a lot of ambiguity about how long these materials actually exist in the environment,” says lead author Marc Foster, a PhD student in the MIT-WHOI Joint Program. “This shows plastic biodegradation is highly dependent on the microbial community where the plastic ends up. It’s also dependent on the plastics — the chemistry of the polymer and how they’re made as a product. It’s important to understand these processes because we’re trying to constrain the environmental lifetime of these materials.”

Joining Foster on the paper are MIT PhD candidate Philip Wasson; former MIT postdoc Andreas Sichert; MIT undergraduate Deborah Madden; Woods Hole Oceanographic Institution researchers Matthew Hayden and Adam Subhas; Chong Becker and Sebastian Gross of the international chemical and plastics company BASF; Otto Cordero, an MIT associate professor of civil and environmental engineering; Darcy McRose, MIT’s Thomas D. and Virginia W. Cabot Career Development Professor; and Desirée Plata, MIT’s School of Engineering Distinguished Climate and Energy Professor. The paper appears in the journal Environmental Science & Technology.

Uncovering collaboration

Scientists hope biodegradable plastic can be used to address the mountains of plastic waste piling up in our oceans and landfills.

“More than half of produced plastic is either sent to landfills or directly released into the environment,” Foster says. “But without knowing the specifics of different degradation processes, we won’t be able to accurately predict the lifetime of these materials and better control that degradation.”

To date, many studies into the biodegradation of plastics have focused on single microbial organisms, but Foster says that’s not representative of how most plastics are broken down in the environment.

“It’s really rare for a single bacterium to carry out the full degradation process because it requires a significant metabolic burden to carry all of the enzymatic functions to depolymerize the polymer and then use those chemical subunits as a carbon and energy source,” Foster says.

Other studies have sought to capture the molecular footprints of groups of bacteria as they degrade plastic, which gives a snapshot of the species involved without uncovering the mechanisms of action.

For this study, the researchers wanted to uncover the roles of specific bacterial species as they fully degraded plastic. They started with a type of biodegradable plastic known as an aromatic aliphatic co-polyester. Such plastic is used in shopping bags and food packaging. It’s also often laid across the soil of farms to prevent weeds and retain moisture.

To begin the study, researchers at BASF, which produces that type of plastic, first placed samples of the product into different depths of the Mediterranean Sea to let bacteria grow as a thin biofilm around the plastic. The company then shipped the samples to researchers at MIT, who isolated as many species of bacteria as possible from the samples. The researchers mixed those isolates and identified 30 bacterial species that continued to grow in abundance on the plastic.

Using carbon dioxide as a measure of plastic degradation, the researchers isolated each bacterium and found one, Pseudomonas pachastrellae, that could depolymerize the plastic, cleaving it into its three chemical components: terephthalic acid, sebacic acid, and butanediol.

But that bacterium couldn’t consume all three components on its own. One by one, the researchers exposed each bacterium to each chemical, finding no bacteria that could consume all three, although they did find some species that could consume one or two chemicals on their own.

Finally, the researchers selected five bacterial species based on their complementary breakdown abilities and showed the small group exhibited the same ability to fully degrade the plastic as the 30-member bacteria community.
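The logic of assembling a minimal complementary community can be illustrated as a small set-cover problem. In this sketch the isolate names and consumption profiles (apart from the depolymerizing role of P. pachastrellae) are invented, and greedy set cover is just one simple way to pick complementary members, not necessarily how the researchers selected theirs.

```python
# Hypothetical consumption profiles: which of the three released
# components each isolate can use as a carbon source. These isolates
# and profiles are invented for illustration, not the study's data.
COMPONENTS = {"terephthalic acid", "sebacic acid", "butanediol"}

profiles = {
    "isolate_A": {"terephthalic acid"},
    "isolate_B": {"sebacic acid", "butanediol"},
    "isolate_C": {"butanediol"},
    "isolate_D": {"sebacic acid"},
}

def minimal_consumers(profiles, components):
    """Greedy set cover: repeatedly pick the isolate that consumes the
    most still-uncovered components until every component is covered."""
    remaining, chosen = set(components), []
    while remaining:
        best = max(profiles, key=lambda b: len(profiles[b] & remaining))
        if not profiles[best] & remaining:
            raise ValueError("no combination consumes every component")
        chosen.append(best)
        remaining -= profiles[best]
    return chosen

# P. pachastrellae cleaves the polymer; the chosen isolates then
# consume the released components between them.
community = ["P. pachastrellae"] + minimal_consumers(profiles, COMPONENTS)
print(community)
```

The key property mirrored from the study is that no single consumer covers every component, so removing any chosen member leaves part of the plastic unconsumed.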

“I was able to minimize the degradation process to this simplistic set of specific metabolic functions,” Foster says. “And then when I took out one bacterium, the mineralization dropped, which indicated the organism was controlling the degradation of the polymer. Then when I had each one of the bacteria alone in a culture, none of them could reach the same degradation as all five together, indicating there was this complementary function required. It worked much better than I thought it would.”

The researchers also found the five-member bacteria community couldn’t mineralize a different plastic, showing groups of bacteria may only be able to mineralize specific plastics.

“It highlights that the microbes living where this plastic ends up are going to dictate the plastic’s lifetime,” Foster says.

Faster plastic degradation

Foster notes the bacteria in his study are likely specific to the Mediterranean Sea. The study also only involved bacteria that could survive in his lab environment. Still, Foster says it’s one of the first papers that identifies the roles of bacteria in consuming plastic.

“Most studies wouldn’t be able to identify the specific bacteria that’s controlling each complementary mineralization process,” Foster says. “Here we can say this bacteria controls degradation, these bacteria handle mineralization, and then we show the function of each bacteria and show that together, they can remove the entire polymer.”

Foster says the work is an important first step toward creating microbial systems that are better at breaking down plastic or converting it into something useful. In follow-up work for his PhD, he is exploring what makes successful bacterial pairs for faster plastic consumption and how enzymes dock on plastic particles to initiate and continue degradation.

The work was supported by the MIT Climate and Sustainability Consortium and BASF SE. Partial support was provided by the U.S. National Science Foundation Graduate Research Fellowship Program.



from MIT News https://ift.tt/XhiN3LZ

Sunday, March 15, 2026

New sensor sniffs out pneumonia on a patient’s breath

Diagnosing some diseases could be as easy as breathing into a tube. MIT engineers have developed a test to detect disease-related compounds in a patient’s breath. The new test could provide a faster way to diagnose pneumonia and other lung conditions. Rather than sit for a chest X-ray or wait hours for a lab result, a patient may one day take a breath test and get a diagnosis within minutes.

The new breath test is a portable, chip-scale sensor that traps and detects synthetic compounds, or “biomarkers,” of disease, which are initially attached to inhalable nanoparticles. The biomarkers serve as tiny tags that can only be unlocked and detached from the nanoparticle by a very particular key, such as a disease-related enzyme.

The idea is that a person would first breathe in the nanoparticles, similar to inhaling asthma medicine. If the person is healthy, the nanoparticles would eventually circulate out of the body intact. If a disease such as pneumonia is present, however, enzymes produced as a result of the infection would snip off the nanoparticles’ biomarkers. These untethered biomarkers would be exhaled and measured, confirming the presence of the disease.

Until now, detecting such exhaled biomarkers required laboratory-grade instruments that are not available in most doctor’s offices. The MIT team has now shown they can detect exhaled biomarkers of pneumonia at extremely low concentrations using the new portable, chip-scale breath test, which they’ve dubbed “PlasmoSniff.”

They plan to incorporate the new sensor into a handheld instrument that could be used in clinical or at-home settings to quickly diagnose pneumonia and other diseases.

“In practice, we envision that a patient would inhale nanoparticles and, within about 10 minutes, exhale a synthetic biomarker that reports on lung status,” says Aditya Garg, a postdoc in MIT’s Department of Mechanical Engineering. “Our new PlasmoSniff technology would enable detection of these exhaled biomarkers within minutes at the point of care.”

Garg is the first author of a study that details the team’s new sensor design. The study appears online in the journal Nano Letters. MIT co-authors include Marissa Morales, Aashini Shah, Daniel Kim, Ming Lei, Jia Dong, Seleem Badawy, Sahil Patel, Sangeeta Bhatia, and Loza Tadesse.

Tailored tags

PlasmoSniff is a project led by Loza Tadesse, an assistant professor of mechanical engineering at MIT. Tadesse’s group builds diagnostic devices that can be used directly in doctors’ offices and other point-of-care settings. Her work specializes in spectroscopy, using light to identify key fingerprints of a chemical or molecule.

Several years ago, Tadesse teamed up with Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT. Bhatia’s group focuses in part on developing nanoparticle sensors — tiny particles that can be tagged with a synthetic biomarker. Bhatia can tailor these biomarkers to cleave from their nanoparticle only in the presence of specific “protease” enzymes that are produced by certain diseases.

In work that was reported in 2020, Bhatia’s group demonstrated they could detect cleaved biomarkers of pneumonia from the breath of infected mice. The biomarkers were exhaled at extremely low concentrations, of about 10 parts per billion. Nevertheless, the researchers were able to detect the compounds using mass spectrometry — a technology that is highly sensitive but requires bulky and expensive instrumentation that is not widely available in clinical settings.

“We thought, ‘How can we achieve that same sensitivity, in a way that’s accessible, at the point of need, and in a chip format that can be scalable in terms of cost?’” Tadesse says. 

A fingerprint trap

For their new study, Tadesse’s group looked to design a sensitive, portable breath test to quickly detect Bhatia’s biomarkers. Their new design centers on “plasmonics” — the study and manipulation of light and how it interacts with matter at the nanoscale.

The researchers noted that molecules exhibit characteristic vibrational modes, corresponding to the motions of atoms within their chemical bonds. These vibrations can be detected using Raman spectroscopy, an optical technique in which molecules are illuminated with light. A small fraction of the scattered light shifts in energy due to interactions with a molecule’s vibrations. By measuring these energy shifts, researchers can identify molecules based on their distinctive vibrational fingerprints.

To detect Bhatia’s biomarkers, however, they would need to isolate the comparatively few biomarker molecules from the dense cloud of other exhaled molecules. They would also need to boost the biomarkers’ vibrational signal, as the Raman-scattered light from an individual molecule is inherently extremely weak.

“This is a needle-in-a-haystack problem,” Tadesse says. “Our method detects that needle that would otherwise be embedded in the noise.”

The team’s new sensor is designed to trap target biomarkers and boost their vibrational signal. The core of the sensor is made from a thin gold film, above which the researchers suspended a layer of gold nanoparticles. The gold nanoparticles are coated with a porous silica shell, generating a 5-nanometer-wide gap between the gold nanoparticles and the gold film. The silica is modified to strongly bond with molecules of water. The hydrogen in water can in turn bond with the target biomarkers. If any biomarkers pass through the sensor’s gap, they stick to the water molecules like Velcro.

The sensor’s gap is engineered to strongly amplify light due to plasmonic resonance, where electrons in the nearby gold structures collectively oscillate in response to incoming light, concentrating the electromagnetic field into the gap. Biomarkers trapped in these gaps experience a greatly enhanced electromagnetic field, which amplifies their Raman scattering signal. The researchers can then measure the Raman scattered light, and compare the pattern to the biomarker’s known “fingerprint,” to confirm its presence.
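A small sketch can make the two ideas in this paragraph concrete: the commonly cited rule of thumb that the surface-enhanced Raman signal scales roughly as the fourth power of the local field enhancement, and a toy "fingerprint" comparison using cosine similarity between a measured spectrum and a stored reference. The field-enhancement value, the spectra, and the match threshold below are all invented for illustration, not drawn from the study.

```python
import math

# Rule of thumb: SERS signal enhancement ~ |E/E0|^4 for a local field
# enhancement |E/E0| in the plasmonic gap. The factor of 100 below is
# an illustrative assumption, not a measured value.
def sers_enhancement(field_enhancement: float) -> float:
    return field_enhancement ** 4

print(sers_enhancement(100.0))  # a 100x field boost -> ~10^8 signal boost

# Toy fingerprint check: cosine similarity between a measured Raman
# spectrum and a known reference fingerprint (both hypothetical,
# binned onto the same Raman-shift axis).
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

reference = [0.0, 0.1, 0.9, 1.0, 0.4, 0.05]       # stored biomarker fingerprint
measured = [0.02, 0.12, 0.85, 0.95, 0.42, 0.07]   # noisy measurement

score = cosine_similarity(measured, reference)
if score > 0.95:  # threshold chosen for illustration
    print(f"Fingerprint match (similarity {score:.3f})")
```

In practice, spectral matching uses more sophisticated baseline correction and peak fitting, but the principle of comparing a measured pattern against a known fingerprint is the same.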

The team worked with Daniel Kim, a graduate student in Bhatia’s lab, and tested the sensor’s performance on samples of lung fluid that they obtained from healthy mice. They spiked these samples with biomarkers of pneumonia that Bhatia’s group previously designed. They then placed the spiked fluid in a vial and heated it to evaporate the fluid, to simulate exhaled breath. They placed the new sensor on the underside of the vial’s cap and used a Raman spectrometer to measure the scattered light as the fluid vapor passed through the sensor.

Through these experiments, the team showed that the sensor quickly detected pneumonia biomarkers at extremely low, clinically relevant concentrations.

“Our next goal is to have a breath collection system, like a mask you can breathe into,” Garg says. “A patient would first use something like an asthma inhaler to inhale the nanoparticles. They could then breathe through the mask sensor for five minutes. We could then integrate a handheld Raman spectrometer to detect whatever biomarker is breathed out, within minutes.”

Breath tests for disease, sometimes referred to as disease breathalyzers, are an emerging technology. Most designs are still in the experimental stage, and take different approaches to detect various conditions such as certain cancers, intestinal infections, and viruses such as Covid-19. The MIT team notes that its design can be used to detect diseases beyond pneumonia, as well as biomarkers that are not related to disease, as long as the biomarker of interest has a known vibrational “fingerprint.”

“It’s not just limited to these biomarkers or even diagnostic applications,” Tadesse says. “It can sniff out industrial chemicals or airborne pollutants as well. If a molecule can form hydrogen bonds with water, we can use its vibrational fingerprint to detect it. It’s a pretty universal platform.”

This work was supported, in part, by funding from Open Philanthropy (now Coefficient Giving). Several characterization and fabrication steps were conducted at MIT.nano.



from MIT News https://ift.tt/ufPhc4U