Tuesday, March 10, 2026

3 Questions: Building predictive models to characterize tumor progression

Just as Darwin’s finches evolved in response to natural selection in order to endure, the cells that make up a cancerous tumor similarly counter selective pressures in order to survive, evolve, and spread. Tumors are, in fact, complex sets of cells with their own unique structure and ability to change. 

Today, artificial intelligence and machine learning tools offer an unparalleled opportunity to illuminate the generalizable rules governing tumor progression on the genetic, epigenetic, metabolic, and microenvironmental levels. 

Matthew G. Jones, an assistant professor in the MIT Department of Biology, the Koch Institute for Integrative Cancer Research, and the Institute for Medical Engineering and Science, hopes to use computational approaches to build predictive models — to play a game of chess with cancer, making sense of a tumor’s ability to evolve and resist treatment with the ultimate goal of improving patient outcomes. In this interview, he describes his current work.

Q: What aspect of tumor progression are you working to explore and characterize? 

A: A very common story with cancer is that patients will respond to a therapy at first, and then eventually that treatment will stop working. The reason this largely happens is that tumors have an incredible, and very challenging, ability to evolve: the ability to change their genetic makeup, protein signaling composition, and cellular dynamics. The tumor as a system also evolves at a structural level. Oftentimes, the reason why a patient succumbs to a tumor is because either the tumor has evolved to a state we can no longer control, or it evolves in an unpredictable manner. 

In many ways, cancers can be thought of as, on the one hand, incredibly dysregulated and disorganized, and on the other hand, as having their own internal logic, which is constantly changing. The central thesis of my lab is that tumors follow stereotypical patterns in space and time, and we’re hoping to use computation and experimental technology to decode the molecular processes underlying these transformations.  

We’re focused on one specific way tumors evolve: through a form of DNA amplification called extrachromosomal DNA (ecDNA). Excised from the chromosome, these ecDNAs are circularized and exist as their own separate pool of DNA particles in the nucleus. 

Initially discovered in the 1960s, ecDNA was thought to be a rare event in cancer. However, as researchers began applying next-generation sequencing to large patient cohorts in the 2010s, it became clear that these ecDNA amplifications not only let tumors adapt to stresses and therapies faster, but were also far more prevalent than initially thought.

We now know these ecDNA amplifications are apparent in about 25 percent of cancers, including some of the most aggressive: brain, lung, and ovarian cancers. We have found that, for a variety of reasons, ecDNA amplifications are able to change the rule book by which tumors evolve, allowing them to accelerate toward more aggressive disease in very surprising ways. 

Q: How are you using machine learning and artificial intelligence to study ecDNA amplifications and tumor evolution? 

A: There’s a mandate to translate what I’m doing in the lab to improve patients’ lives. I want to start with patient data to discover how various evolutionary pressures are driving disease and the mutations we observe. 

One of the tools we use to study tumor evolution is single-cell lineage tracing technologies. Broadly, they allow us to study the lineages of individual cells. When we sample a particular cell, not only do we know what that cell looks like, but we can (ideally) pinpoint exactly when aggressive mutations appeared in the tumor’s history. That evolutionary history gives us a way of studying these dynamic processes that we otherwise wouldn’t be able to observe in real time, and helps us make sense of how we might be able to intercept that evolution. 
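The kind of question lineage tracing answers (when did a mutation first arise in the tumor's history?) can be sketched with a toy example. The tree, mutation names, and function below are all hypothetical illustrations, not the lab's actual tooling:

```python
# Toy sketch of reasoning over a single-cell lineage tree (illustrative only;
# not the lab's actual tooling). Each node is a cell; edges carry the
# mutations acquired between parent and child. Given a reconstructed tree,
# we can locate the earliest division at which a mutation of interest arose.

TREE = {
    # parent: list of (child, mutations acquired on that branch)
    "root": [("A", {"m1"}), ("B", set())],
    "A": [("A1", {"m2"}), ("A2", set())],
    "B": [("B1", {"m3"})],
}

def first_appearance(tree, mutation, node="root", depth=0):
    """Return the generation at which `mutation` first arises, or None."""
    best = None
    for child, muts in tree.get(node, []):
        if mutation in muts:
            cand = depth + 1                      # acquired on this branch
        else:
            cand = first_appearance(tree, mutation, child, depth + 1)
        if cand is not None and (best is None or cand < best):
            best = cand
    return best

if __name__ == "__main__":
    print(first_appearance(TREE, "m1"))  # arises at the first division
    print(first_appearance(TREE, "m3"))  # arises one generation later
```

The same traversal idea generalizes to real lineage data, where recorded edit patterns stand in for the mutation sets on each branch.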

I hope we’re going to get better at stratifying patients who will respond to certain drugs, to anticipate and overcome drug resistance, and to identify new therapeutic targets.

Q: What excited you about joining the MIT community?

A: One of the things that I was really attracted to was the integration of excellence in both engineering and biological sciences. At the Koch Institute, every floor is structured to promote this interface between engineers and basic scientists, and beyond campus, we can connect with all the biomedical research enterprises in the greater Boston area. 

Another thing that drew me to MIT was the fact that it places such a strong emphasis on education, training, and investing in student success. I’m a personal believer that what distinguishes academic research from industry research is that academic research is fundamentally a service job, in that we are training the next generation of scientists. 

It was always a mission of mine to bring excellence to both computational and experimental technology disciplines. The types of trainees I’m hoping to recruit are those who are eager to collaborate and solve big problems that require both disciplines. The KI [Koch Institute] is uniquely set up for this type of hybrid lab: my dry lab is right next to my wet lab, and it’s a source of collaboration and connection, and that reflects the KI’s general vision. 



from MIT News https://ift.tt/uSDn7lC

MIT School of Engineering faculty receive awards in fall 2025

Each year, faculty and researchers across the MIT School of Engineering are recognized with prestigious awards for their contributions to research, technology, society, and education. To celebrate these achievements, the school periodically highlights select honors received by members of its departments, institutes, labs, and centers. The following individuals were recognized in fall 2025:

Hal Abelson, the Class of 1922 Professor in the Department of Electrical Engineering and Computer Science, received the 2025 Lifetime Achievement Award for Excellence from Open Education Global. The award honors his foundational impact on open education, Creative Commons, and open knowledge movements.

Faez Ahmed, the Henry L. Doherty Career Development Professor in Ocean Utilization in the Department of Mechanical Engineering, received an Amazon Research Award for his project “AutoDA‑Sim: A Multi‑Agent Framework for Safe, Aesthetic, and Aerodynamic Vehicle Design.” Amazon Research Awards provide unrestricted funds and AWS Promotional Credits to academic researchers investigating various research topics in multiple disciplines.

Pulkit Agrawal, an associate professor in the Department of Electrical Engineering and Computer Science, received the 2025 IROS Toshio Fukuda Young Professional Award for contributions to robot learning, policy learning, agile locomotion, and dexterous manipulation. The award recognizes outstanding contributions of an individual of the IROS community who has pioneered activities in robotics and intelligent systems.

Ahmad Bahai, a professor of the practice in the Department of Electrical Engineering and Computer Science, was elected to the 2025 class of Fellows of the National Academy of Inventors for contributions to innovation in new semiconductor devices with extensive applications in clinical-grade personal sensors for a variety of biomarkers. The honor recognizes inventors whose patented work has made a meaningful global impact.

Yufeng (Kevin) Chen, an associate professor in the Department of Electrical Engineering and Computer Science, received the 2025 IROS Toshio Fukuda Young Professional Award for contributions to insect‑scale multimodal robots and soft‑actuated aerial systems. The award recognizes outstanding contributions of an individual of the IROS community who has pioneered activities in robotics and intelligent systems.

Angela Koehler, the Charles W. and Jennifer C. Johnson Professor in the Department of Biological Engineering, received the 2025 Sato Memorial International Award from the Pharmaceutical Society of Japan, recognizing advancements in pharmaceutical sciences and U.S.–Japan scientific collaboration.

Dina Katabi, the Thuan (1990) and Nicole Pham Professor in the Department of Electrical Engineering and Computer Science, was elected to the National Academy of Medicine for pioneering digital health technology that enables noninvasive, off-body remote health monitoring via AI and wireless signals, and for developing digital biomarkers for Parkinson’s progression and detection. Election to the academy is considered one of the highest honors in the fields of health and medicine, and recognizes individuals who have demonstrated outstanding professional achievement and commitment to service.

Darcy McRose, the Thomas D. and Virginia W. Cabot Career Development Professor in the Department of Civil and Environmental Engineering, was selected as a 2025 Packard Fellow for Science and Engineering. The Packard Foundation established the Packard Fellowships for Science and Engineering to allow the nation’s most promising early-career scientists and engineers flexible funding to take risks and explore new frontiers in their fields of study.

Muriel Médard, the NEC Professor of Software Science and Engineering in the Department of Electrical Engineering and Computer Science, received the 2026 IEEE Richard W. Hamming Medal for contributions to coding for reliable communications and networking. Recognized for breakthroughs in network coding and information theory, Médard’s innovations improve the reliability of data transmission in applications such as streaming video, wireless networks, and satellite communications. The award is given for exceptional contributions to information sciences, systems and technology.

Tess Smidt, an associate professor in the Department of Electrical Engineering and Computer Science, was selected as a 2025 AI2050 Fellow by Schmidt Sciences for her project, “Hierarchical Representations of Complex Physical Systems with Euclidean Neural Networks.” The program supports research that aims to help AI benefit humanity by mid‑century.



from MIT News https://ift.tt/WDLTQNO

Monday, March 9, 2026

MIT undergraduates help US high schoolers tackle calculus

This year in a rural school district in southeastern Montana, one high school student is taking calculus. For many people, calculus is daunting enough, even when teachers are used to offering it and peers are around to help. Studying it solo can be even harder. Yet this lone student has an unusual source of support: weekly tutoring directly from an MIT undergraduate, by Zoom, a long-distance but helpful way to stay on track.

It’s part of a new program called the MIT4America Calculus Project, launched from the Institute last summer, in which MIT undergraduates and alumni work with school districts across the U.S., from Montana to Texas to New York, to tutor high school students. The logic is compelling: MIT students are highly proficient at calculus, which is all but a requirement for admission and success at the Institute. The new civic-minded outreach program lets those undergraduates and alumni share their knowledge and skills, getting high schoolers ready for further studies and even jobs, especially in STEM fields. 

“Calculus is a gateway for many students into STEM higher education and careers,” says MIT Professor Eric Klopfer, a co-director of the MIT4America Calculus Project. “We can help more students, in more places, fulfill requirements and get into great universities across the country, whether MIT or others, and then into STEM careers. We want to make sure they have the skills to do that.”

At this point, the project is working closely with 14 school districts across the U.S., deploying 30 current MIT undergraduates and seven alumni as tutors. The weekly sessions are carefully coordinated with school administrators and teachers, and the MIT tutors have all received training. The program started with an in-person summer calculus camp in 2025; by next summer, the goal is to be collaborating with about 20 school districts.

“We want it to have a lasting impact,” says Claudia Urrea, an education scholar and co-director of the MIT4America Calculus Project. “It’s not just about students passing an exam, but having tutors who look like what the students want to be in the future, who are mentors, have conversations, and make sure the high school students are learning.” 

Klopfer and Urrea bring substantial experience to the project. Klopfer is a professor and director of the Scheller Teacher Education Program and the Education Arcade at MIT; Urrea is executive director for the PreK-12 Initiative at MIT Open Learning.

The MIT4America Calculus Project is supported through a gift from the Siegel Family Endowment and was developed as a project in consultation with David Siegel SM ’86, PhD ’91, a computer scientist and entrepreneur who is chairman of the firm Two Sigma.

“David Siegel came to us with two powerful questions: How can we spread the educational impact of MIT beyond our walls? And how can we open doors to STEM careers for U.S. high school students who don’t have access to calculus?” says MIT President Sally Kornbluth.

She adds: “The MIT4America Calculus Project answers those questions in a perfectly MIT way: Reflecting the Institute’s longstanding commitment to national service, the MIT4America Calculus Project supplies an innovative answer to a hard practical problem, and it taps the uncommon skill of the people of MIT to create opportunity for others. We’re enormously grateful to David for his inspiration and guidance, and to the Siegel Family Endowment for the financial support that brought this idea to life.”

The U.S. has more than 13,000 school districts, and about half of them offer calculus classes. The MIT effort aims to work with districts that already offer calculus but are striving to add educational support for it, often while facing funding constraints or other limitations.

In contrast to the one-student calculus situation in Montana, the project is also working with a 5,000-student district in Texas, south of Dallas, where about 60 high school students take calculus; currently five Institute undergraduates are tutoring 15 students from the district’s schools.

“Other organizations are involved in efforts like this, but I think MIT brings some unique things to it,” Klopfer says. “I think involving our undergraduates in this is an awesome contribution. Our students really do come from all over the place, and are sometimes connecting back to their home states and communities, and that makes a difference on both sides.”

He adds: “I see benefits for our students, too. They develop good ways of communicating, working with other people and building skills. They can gain a lot of great experience.”

In addition to the in-person summer calculus camp, which is expected to continue, and the weekly video tutoring, the MIT4America Calculus Project is working on developing online tools that help guide high school students as well. Still, Urrea emphasizes, the project is built around “the importance of people. A community of support is very important, to have connections that build over time.  The human aspect of the program is irreplaceable.”

The MIT tutors must pass rigorous training sessions that cover pedagogy and other aspects of working with high school students, and they know they are making a substantial commitment of time and effort.

It has been worth it, as teachers say their high school students have been responding very well to the MIT tutors.

“For students to be able to see themselves in their tutors is a really cool thing,” says Shilpa Agrawal ’15, director of computer science and an AP calculus AB teacher at Comp Sci High in the Bronx, New York, where 15 students are participating in the project.

“It’s led to a lot of success for my students,” adds Agrawal, who majored in computer science at MIT. She is part of the national network of MIT-connected teachers who have been helping the program grow organically, having reached out to Jenny Gardony, manager of the MIT4America Calculus Project.

Gardony, who is also the math project manager in MIT’s Scheller Teacher Education program, has been receiving enthusiastic emails from teachers in other participating districts since the project started.

“I have to start by saying thank you,” one teacher wrote to Gardony, adding that one student “was so excited in class today. The session she had with you made her so confident. She’s always nervous, but today she was smiling and helping others, and that was 100 percent because of you.”

Gardony adds: “The fact that a busy teacher takes the time to send that email, I’m touched they would do that.” 



from MIT News https://ift.tt/JAp5gXd

Understanding how “marine snow” acts as a carbon sink

In some parts of the deep ocean, it can look like it’s snowing. This “marine snow” is the dust and detritus that organisms slough off as they die and decompose. Marine snow can fall several kilometers to the deepest parts of the ocean, where the particles are buried in the seafloor for millennia.

Now, researchers at MIT and their collaborators have found that as marine snow falls, tiny hitchhikers may limit how deep the particles can sink before dissolving away. The team shows that when bacteria hitch a ride on marine snow particles, the microbes can eat away at calcium carbonate, which is an essential ballast that helps particles sink.

The findings, which appear this week in the Proceedings of the National Academy of Sciences, could explain how calcium carbonate dissolves in shallow layers of the ocean, where scientists had assumed it should remain intact. The results could also change scientists’ understanding of how quickly the ocean can sequester carbon from the atmosphere.

Marine snow is a main vehicle by which the ocean stores carbon. At the ocean’s surface, phytoplankton absorb carbon dioxide from the atmosphere and convert the gas into other forms of carbon, including calcium carbonate — the same stuff that’s found in shells and corals. When they die, bits of phytoplankton drift down through the ocean as marine snow, carrying the carbon with them. If the particles make it to the deep ocean, the carbon they carry can be buried and locked away for hundreds to thousands of years.

But the new study suggests bacteria may be working against the ocean’s ability to sequester carbon. By eroding the particles’ calcium carbonate, bacteria can significantly slow the sinking of marine snow. The more they linger, the more likely the particles are to be respired quickly, releasing carbon dioxide into the shallow ocean, and possibly back into the atmosphere.

“What we’ve shown is that carbon may not sink as deep or as fast as one may expect,” says study co-author Andrew Babbin, an associate professor in the Department of Earth, Atmospheric and Planetary Sciences and a mission director at the Climate Project at MIT. “As humanity tries to design our way out of the problem of having so much CO2 in the atmosphere, we have to take into account these natural microbial mechanisms and feedbacks.”

The study’s primary author is Benedict Borer, a former MIT postdoc who is now an assistant professor of marine and coastal sciences at the Rutgers School of Environmental and Biological Sciences; co-authors include Adam Subhas and Matthew Hayden at the Woods Hole Oceanographic Institution and Ryan Woosley, a principal research scientist at MIT’s Center for Sustainability Science and Strategy.

Losing weight

Marine snow acts as the ocean’s main “biological pump,” the process by which the ocean pulls carbon from the surface down into the deep ocean. Scientists estimate that marine snow is responsible for drawing down billions of tons of carbon each year. Marine snow’s ability to sink comes mainly from minerals such as calcium carbonate embedded within the particles. The mineral is a dense ballast that weighs down the particle. The more calcium carbonate a particle has, the faster it sinks.

Scientists had assumed, based on thermodynamics, that calcium carbonate should not dissolve within the ocean’s upper layers, given the general temperature and pH conditions in the surface ocean. Any calcium carbonate bound up in marine snow should then safely sink to depths greater than 1,000 meters without dissolving along the way.

But oceanographers have long observed signs of dissolved calcium carbonate in the upper layers of the ocean, suggesting that something other than the ocean’s macroscale conditions was dissolving the mineral and slowing down the ocean’s biological pump.

And indeed, the MIT team has found that what is dissolving calcium carbonate in shallow waters is a microscale process that occurs within the immediate environment of an individual particle.

“Most oceanographers think about the macroscale, and in this instance what’s happening in microscopic particles is what is actually controlling bulk seawater chemistry,” Borer says. “Consequences abound for the ocean’s carbon dioxide sequestration capacity.”

A sinking sweet spot

In their new study, the researchers set up an experiment to simulate a sinking particle of marine snow and its interactions at the microscale. The team synthesized marine-snow-like particles from varying concentrations of calcium carbonate and bacteria — organisms that are often found feasting on the particles in the ocean.

“The ocean is a fairly dilute medium with respect to organic matter,” Babbin says. “So organisms like bacteria have to search for food. And particles of marine snow are like cheeseburgers for bacteria.”

The team designed a small microfluidic chip to contain the particles, and flowed seawater through the chip at various rates to simulate different sinking speeds in the ocean. Their experiments revealed that whenever particles hosted any bacteria, they also rapidly lost some calcium carbonate, which dissolved into the surrounding seawater. As bacteria feed on the particles’ organic material, the microbes excrete acidic waste products that act to dissolve the particles’ inorganic, ballasting calcium carbonate.

The researchers also found that the amount of calcium carbonate that dissolves depends on how fast the particles sink. They flowed seawater around the particles at slow, intermediate, and fast speeds and found that both slow and fast sinking limit the amount of calcium carbonate that’s dissolved. With slow sinking, particles don’t receive as much oxygen from their surroundings, which essentially suffocates any hitchhiking bacteria. When particles sink quickly, bacteria may be sufficiently oxygenated, but any waste products that they produce can be easily flushed away before they can dissolve the particles’ calcium carbonate.

At intermediate speeds, there is a sweet spot: Bacteria are sufficiently oxygenated and can also build up enough waste, enabling the microbes to efficiently dissolve calcium carbonate.

Overall, the work shows that bacteria can have a significant effect on marine snow’s ability to sink and sequester carbon in the deep ocean. Bacteria can be found everywhere, and particularly in the shallower ocean regions. Even if macroscale conditions in these upper layers should not dissolve calcium carbonate, the study finds bacteria working at the microscale most likely do.

The findings could explain oceanographers’ observations of dissolved calcium carbonate in shallow ocean regions. They also illustrate that bacteria and other microbes may be working against the ocean’s natural ability to sequester carbon, by dissolving marine snow’s ballast and slowing its descent into the deep ocean. As humans consider climate solutions that involve enhancing the ocean’s biological pump, the researchers emphasize that bacteria’s role must be taken into account.

“Insights from this work are vital to predict how ecosystems will respond to marine carbon dioxide removal attempts, and overall how the oceans will change in response to future climate scenarios,” says Borer, who carried out the study’s experiments as a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences.

This work was supported, in part, by the Simons Foundation, the National Science Foundation, and the Climate Project at MIT.



from MIT News https://ift.tt/Z7gXaue

Neurons receive precisely tailored teaching signals as we learn

When we learn a new skill, the brain has to decide — cell by cell — what to change. New research from MIT suggests it can do that with surprising precision, sending targeted feedback to individual neurons so each one can adjust its activity in the right direction.

The finding echoes a key idea from modern artificial intelligence. Many AI systems learn by comparing their output to a target, computing an “error” signal, and using it to fine-tune connections within the network. A long-standing question has been whether the brain also uses that kind of individualized feedback. In an open-access study published in the Feb. 25 issue of the journal Nature, MIT researchers report evidence that it does.

A research team led by Mark Harnett, a McGovern Institute for Brain Research investigator and associate professor in the Department of Brain and Cognitive Sciences at MIT, discovered these instructive signals in mice by training animals to control the activity of specific neurons using a brain-computer interface (BCI). Their approach, the researchers say, can be used to further study the relationships between artificial neural networks and real brains, in ways that are expected to both improve understanding of biological learning and enable better brain-inspired artificial intelligence.

The changing brain

Our brains are constantly changing as we interact with the world, modifying their circuitry as we learn and adapt. “We know a lot from 50 years of studies that there are many ways to change the strength of connections between neurons,” Harnett says. “What the field really lacks is a way of understanding how those changes are orchestrated to actually produce efficient learning.”

Some actions — and the neural connections that enable them — are reinforced with the release of neuromodulators like dopamine or norepinephrine in the brain. But those signals are broadcast to large groups of neurons, without discriminating between cells’ individual contributions to a failure or a success. “Reinforcement learning via neuromodulators works, but it’s inefficient, because all the neurons and all the synapses basically get only one signal,” Harnett says.

Machine learning uses an alternative, and extremely powerful, way to learn from mistakes. Using a method called backpropagation, artificial neural networks compute an error signal and use it to adjust their individual connections. They do this over and over, learning from experience how to fine-tune their networks for success. “It works really well and it’s computationally very effective,” Harnett says.
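The core idea can be illustrated with a minimal sketch (a hypothetical toy model, not the study's network): for a single linear neuron trained by gradient descent, the loss gradient supplies each parameter with its own tailored error signal, in contrast to one broadcast reward shared by all connections.

```python
# Minimal backpropagation sketch: one linear neuron, squared-error loss.
# Each parameter receives its own error-derived update -- the "vectorized"
# feedback the article contrasts with broadcast neuromodulator signals.
# (Toy illustration only; not the study's model.)

def train(data, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in data:
            y = w * x + b        # forward pass
            err = y - target     # error signal
            grad_w = err * x     # backward pass: gradient of 0.5 * err**2
            grad_b = err         # each parameter gets its own gradient
            w -= lr * grad_w     # individualized updates
            b -= lr * grad_b
    return w, b

if __name__ == "__main__":
    # Learn y = 2x + 1 from a few consistent samples
    data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (-1.0, -1.0)]
    w, b = train(data)
    print(round(w, 2), round(b, 2))  # converges near w=2, b=1
```

Note that `grad_w` depends on the input `x` while `grad_b` does not: even in this tiny model, the two parameters receive different instructive signals derived from the same error.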

It seemed likely that brains might use similar error signals for learning. But neuroscientists were skeptical that brains would have the precision to send tailored signals to individual neurons, due to the constraints imposed by using living cells and circuits instead of software and equations. A major problem for testing this idea was how to find the signals that provide personalized instructions to neurons, which are called vectorized instructive signals. The challenge, explains Valerio Francioni, first author of the Nature paper and a former postdoc in Harnett’s lab, is that scientists don’t know how individual neurons contribute to specific behaviors.

“If I was recording your brain activity while you were learning to play piano,” Francioni explains, “I would learn that there is a correlation between the changes happening in your brain and you learning piano. But if you asked me to make you a better piano player by manipulating your brain activity, I would not be able to do that, because we don’t know how the activity of individual neurons maps to that ultimate performance.”

Without knowing which neurons need to become more active and which ones should be reined in, it is impossible to look for signals directing those changes.

Understanding neuron function

To get around this problem, Harnett’s team developed a brain-computer interface task to directly link neural activity and reward outcome — akin to linking the keys of the piano directly to the activity of single neurons. To succeed at the task, certain neurons needed to increase their activity, whereas others were required to decrease their activity.

They set up a BCI to directly link activity in those neurons — just eight to 10 of the millions of neurons in a mouse’s brain — to a visual readout, providing sensory feedback to the mice about their performance. Success was accompanied by delivery of a sugary reward.

“Now if you ask me, ‘How does the mouse get more rewards? Which neuron do you have to activate and which neuron do you have to inhibit?’ I know exactly what the answer to that question is,” says Francioni, whose work was supported by a Y. Eva Tan Fellowship from the Yang Tan Collective at MIT.

The scientists didn’t know the exact function of the particular neurons they linked to the BCI, but the cells were active enough that mice received occasional rewards whenever the signals happened to be right. Within a week, mice learned to switch on the right neurons while leaving the other set of neurons inactive, earning themselves more rewards.

Francioni monitored the target neurons daily during this learning process using a powerful microscope to visualize fluorescent indicators of neural activity. He zeroed in on the neurons’ branching dendrites, where the appropriate feedback signals have long been suspected to arrive. At the same time, he tracked activity in the parent cell bodies of those neurons. The team used these data to examine the relationship between signals received at a neuron’s dendrites and its activity, as well as how these changed when mice were rewarded for activating the right neurons or when they failed at their task.

Vectorized neural signals

They concluded that the two groups of neurons whose activity controlled the BCI in opposite ways also received opposing error signals at their dendrites as the mice learned. Some were told to ramp up their activity during the task, while others were instructed to dial it down. What’s more, when the team manipulated the dendrites to inhibit these instructive signals, mice failed to learn the task. “This is the first biological evidence that vectorized [neuron-specific] signal-based instructive learning is taking place in the cortex,” Harnett says.

The discovery of vectorized signals in the brain — and the team’s ability to find them — should promote more back-and-forth between neuroscientists and machine learning researchers, says postdoc Vincent Tang. “It provides further incentive for the machine learning community to keep developing models and proposing new hypotheses along this direction,” he says. “Then we can come back and test them.”

The researchers say they are just as excited about applying their approach to future experiments as they are about their current discovery.

“Machine learning offers a robust, mathematically tractable way to really study learning. The fact that we can now translate at least some of this directly into the brain is very powerful,” Francioni says.

Harnett says the approach opens new opportunities to investigate possible parallels between the brain and machine learning. “Now we can go after figuring out, how does cortex learn? How do other brain regions learn? How similar or how different is it to this particular algorithm? Can we figure out how to build better, more brain-inspired models from what we learn from the biology?” he says. “This feels like a really big new beginning.” 



from MIT News https://ift.tt/5yDZxbM

Sunday, March 8, 2026

Improving AI models’ ability to explain their predictions

In high-stakes settings like medical diagnostics, users often want to know what led a computer vision model to make a certain prediction, so they can determine whether to trust its output.

Concept bottleneck models are one approach that enables artificial intelligence systems to explain their decision-making process. These models force a deep-learning system to use a set of human-understandable concepts to make a prediction. In new research, MIT computer scientists developed a method that coaxes the model to achieve better accuracy and clearer, more concise explanations.

The concepts the model uses are usually defined in advance by human experts. For instance, a clinician could suggest the use of concepts like “clustered brown dots” and “variegated pigmentation” to predict that a medical image shows melanoma.

But previously defined concepts could be irrelevant or lack sufficient detail for a specific task, reducing the model’s accuracy. The new method extracts concepts the model has already learned while it was trained to perform that particular task, and forces the model to use those, producing better explanations than standard concept bottleneck models.

The approach utilizes a pair of specialized machine-learning models that automatically extract knowledge from a target model and translate it into plain-language concepts. In the end, their technique can convert any pretrained computer vision model into one that can use concepts to explain its reasoning.

“In a sense, we want to be able to read the minds of these computer vision models. A concept bottleneck model is one way for users to tell what the model is thinking and why it made a certain prediction. Because our method uses better concepts, it can lead to higher accuracy and ultimately improve the accountability of black-box AI models,” says lead author Antonio De Santis, a graduate student at Polytechnic University of Milan who completed this research while a visiting graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.

He is joined on a paper about the work by Schrasing Tong SM ’20, PhD ’26; Marco Brambilla, professor of computer science and engineering at Polytechnic University of Milan; and senior author Lalana Kagal, a principal research scientist in CSAIL. The research will be presented at the International Conference on Learning Representations.

Building a better bottleneck

Concept bottleneck models (CBMs) are a popular approach for improving AI explainability. These techniques add an intermediate step by forcing a computer vision model to first predict the concepts present in an image and then use those concepts to make a final prediction.

This intermediate step, or “bottleneck,” helps users understand the model’s reasoning.

For example, a model that identifies bird species could select concepts like “yellow legs” and “blue wings” before predicting a barn swallow.
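The two-stage structure described above can be sketched in a few lines. This is a toy illustration of the general CBM idea, not the researchers' code: the dimensions, concept names, and random weights are all hypothetical, and a real model would learn both weight matrices from data. The key property it demonstrates is that the final classifier sees only the concept scores, never the raw image features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 8 image features, 3 human-readable concepts, 2 classes.
CONCEPTS = ["yellow legs", "blue wings", "forked tail"]
n_features, n_concepts, n_classes = 8, len(CONCEPTS), 2

# Stage 1: map image features to concept scores (the "bottleneck").
W_concept = rng.normal(size=(n_features, n_concepts))
# Stage 2: map concepts to class logits. The classifier never touches the
# raw image features, which is what makes the prediction inspectable.
W_label = rng.normal(size=(n_concepts, n_classes))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(image_features):
    concept_scores = sigmoid(image_features @ W_concept)  # each in (0, 1)
    class_logits = concept_scores @ W_label
    return concept_scores, int(np.argmax(class_logits))

x = rng.normal(size=n_features)           # stand-in for extracted image features
concepts, label = predict(x)
for name, score in zip(CONCEPTS, concepts):
    print(f"{name}: {score:.2f}")
print("predicted class:", label)
```

Because the prediction flows entirely through the concept scores, a user can read off which concepts drove the decision.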

But because these concepts are often generated in advance by humans or large language models (LLMs), they might not fit the specific task. In addition, even if given a set of pre-defined concepts, the model sometimes utilizes undesirable learned information anyway, which is a problem known as information leakage.

“These models are trained to maximize performance, so the model might secretly use concepts we are unaware of,” De Santis explains.

The MIT researchers had a different idea: Since the model has been trained on a vast amount of data, it may have learned the concepts needed to generate accurate predictions for the particular task at hand. They sought to build a CBM by extracting this existing knowledge and converting it into text a human can understand.

In the first step of their method, a specialized deep-learning model called a sparse autoencoder selectively takes the most relevant features the model learned and reconstructs them into a handful of concepts. Then, a multimodal LLM describes each concept in plain language.
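The general mechanism of a sparse autoencoder can be sketched as follows. This is a minimal illustration of the idea, not the paper's architecture: the weights here are random stand-ins (a trained SAE learns them by minimizing reconstruction error under a sparsity constraint), and the dimensions are arbitrary. The point is that each internal activation is re-expressed using only a handful of dictionary units, each of which can later be described in plain language.

```python
import numpy as np

rng = np.random.default_rng(1)

d_model, d_dict, k = 16, 64, 4  # activation width, dictionary size, active units

# Random weights stand in for trained ones.
W_enc = rng.normal(scale=0.1, size=(d_model, d_dict))
W_dec = rng.normal(scale=0.1, size=(d_dict, d_model))

def encode_sparse(h):
    """Keep only the k strongest dictionary activations (top-k sparsity)."""
    a = np.maximum(h @ W_enc, 0.0)   # ReLU activations over the dictionary
    drop = np.argsort(a)[:-k]        # indices of everything except the top k
    a[drop] = 0.0
    return a

h = rng.normal(size=d_model)         # a stand-in internal model activation
a = encode_sparse(h)                 # sparse code: at most k nonzero entries
recon = a @ W_dec                    # reconstruction of the original activation
print("nonzero dictionary units:", int((a > 0).sum()))
```

In the researchers' pipeline, each surviving dictionary unit would then be handed to the multimodal LLM for a plain-language description.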

This multimodal LLM also annotates images in the dataset by identifying which concepts are present and absent in each image. The researchers use this annotated dataset to train a concept bottleneck module to recognize the concepts.

They incorporate this module into the target model, forcing it to make predictions using only the set of learned concepts the researchers extracted.

Controlling the concepts

They overcame many challenges as they developed this method, from ensuring the LLM annotated concepts correctly to determining whether the sparse autoencoder had identified human-understandable concepts.

To prevent the model from using unknown or unwanted concepts, they restrict it to use only five concepts for each prediction. This also forces the model to choose the most relevant concepts and makes the explanations more understandable.
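The five-concept cap amounts to a top-k selection over the concept scores. The sketch below is a hypothetical illustration of that restriction, not the authors' implementation: the scores are made up, and a real system would apply this inside the prediction pipeline.

```python
# Keep the five highest concept scores and zero out the rest, so every
# prediction (and its explanation) can cite at most five concepts.

def restrict_to_top_k(scores, k=5):
    """Zero out all but the k highest concept scores."""
    keep = set(sorted(range(len(scores)), key=lambda i: scores[i])[-k:])
    return [s if i in keep else 0.0 for i, s in enumerate(scores)]

concept_scores = [0.9, 0.1, 0.8, 0.05, 0.7, 0.6, 0.4, 0.3]
restricted = restrict_to_top_k(concept_scores)
print(restricted)
```

Forcing the classifier to work from the restricted scores is what pushes it to rely on only the most relevant concepts.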

When they compared their approach to state-of-the-art CBMs on tasks like predicting bird species and identifying skin lesions in medical images, their method achieved the highest accuracy while providing more precise explanations.

Their approach also generated concepts that were more applicable to the images in the dataset. 

“We’ve shown that extracting concepts from the original model can outperform other CBMs, but there is still a tradeoff between interpretability and accuracy that needs to be addressed. Black-box models that are not interpretable still outperform ours,” De Santis says.

In the future, the researchers want to study potential solutions to the information leakage problem, perhaps by adding additional concept bottleneck modules so unwanted concepts can’t leak through. They also plan to scale up their method by using a larger multimodal LLM to annotate a bigger training dataset, which could boost performance.

“I’m excited by this work because it pushes interpretable AI in a very promising direction and creates a natural bridge to symbolic AI and knowledge graphs,” says Andreas Hotho, professor and head of the Data Science Chair at the University of Würzburg, who was not involved with this work. “By deriving concept bottlenecks from the model’s own internal mechanisms rather than only from human-defined concepts, it offers a path toward explanations that are more faithful to the model and opens many opportunities for follow-up work with structured knowledge.”

This research was supported by the Progetto Rocca Doctoral Fellowship, the Italian Ministry of University and Research under the National Recovery and Resilience Plan, Thales Alenia Space, and the European Union under the NextGenerationEU project.



from MIT News https://ift.tt/PkCJM6T

Friday, March 6, 2026

Personal tech, social media, and the “decline of humanity”

In the latest of MIT’s Compton Lectures on Wednesday, social psychologist Jonathan Haidt presented a forceful analysis of the damage smartphones and social media are doing to our cognition, our civic fabric, and our children’s wellbeing, and called for renewed action to ward off their effects.

“Around the world, people are getting diminished,” Haidt said. “Less intelligent, less happy, less competent. And it’s happening very fast … My argument is that if we continue with current trends as AI is coming in, it’s going to accelerate. The decline of humanity is going to accelerate.”

Haidt is the Thomas Cooley Professor of Ethical Leadership at New York University’s Stern School of Business and the author of the recent bestseller “The Anxious Generation,” which suggests that the widespread adoption of social media in the 2010s has been especially damaging to young women, making them prone to anxiety and depression.

But as Haidt has continued to examine the effects of social media on society, he has started focusing on additional issues. Our inability to put our phones away, our compulsion to check social media, and the way we spend hours a day watching short-form videos may be causing problems that go far beyond any rise in anxiety and depression.

“It turns out, it’s not the biggest thing,” Haidt said. “There’s something bigger. It is the destruction of the human capacity to pay attention. Because this is affecting most people, including most adults. And if you imagine humanity with 10 to 50 percent of its attentional ability sucked out of it, there’s not much left. We’re not very capable of doing things if we can’t focus or stay on a task for more than 30 seconds.”

Whatever solution may emerge to these problems, Haidt declared, is going to have to come from “human agency. People see a problem, they figure out a way around it. That’s what I’m hoping to promote here [to] this very important audience. So please consider what I’m saying, these trends, and then work to change them.”

Haidt’s lecture, titled “Life After Babel: Democracy and Human Development in the Fractured, Lonely World That Technology Gave Us,” was delivered before a capacity audience of over 400 people in MIT’s Huntington Hall (Room 10-250).

The lecture spanned a variety of related topics, with Haidt presenting chart after chart showing the onset of declines in cognition, educational achievement, and happiness, all of which seemed to occur soon after the widespread adoption of smartphones in the 2010s. The individual adoption of smartphones, he noted, has been compounded by the way schools brought internet-connected computing devices into classrooms around the same time.

“The biggest, the most costly mistake we’ve ever made in the history of American education [was] to put computers and high tech on people’s desks,” Haidt said.

Distractible students with shorter attention spans are reading fewer books, Haidt noted; some cinema students cannot sit through films. The top quartile of students is continuing to do well, he added, but for most students, proficiency levels have dipped notably since the 2010s.

“Fifty years of progress in education, 50 years of progress, up in smoke, gone,” Haidt said. “We’re back to where we were 50 years ago. That’s pretty big, that’s pretty serious.”

As Haidt mentioned multiple times in his remarks, he is not an opponent of all forms of technology, or even personal communication technology, but rather is seeking to mitigate its harmful effects.

“I love tech, I love modernity, we’re all dependent on it, I love my iPhone,” Haidt said. Just as he finished that sentence, an audience member’s cellphone started ringing loudly — drawing a huge laugh from the audience.

“I did not plant that, that was a truly spontaneous demonstration of what I’m talking about,” Haidt said.

Haidt was introduced by MIT President Sally A. Kornbluth, who called him “a leading voice for reforming society’s relationship with technology.” She praised Haidt’s work, noting that he wants to “encourage us to imagine a more positive role for technology in humanity’s future.”

The Karl Taylor Compton Lecture Series was introduced in 1957. It is named for MIT’s ninth president, who led the Institute from 1930 to 1948 and also served as chair of the MIT Corporation from 1948 to 1954.

Compton, as Kornbluth observed, helped MIT evolve from being more strictly an engineering school into “a great global university” with “a new focus on fundamental scientific research.” During World War II, she added, Compton “helped invent the longstanding partnership between the federal government and America’s research universities.”

Haidt received his undergraduate degree from Yale University and his PhD from the University of Pennsylvania. He taught on the faculty at the University of Virginia for 16 years before joining New York University. He has written several widely discussed books about contemporary civic life.

Haidt observed that the problems stemming from device distraction and compulsion appear to have hit so-called Gen Z — those born from roughly the mid-1990s to the early 2010s — especially hard, though he emphasized that people in that cohort are essentially victims of circumstance.

“I am not blaming Gen Z,” Haidt said. “I am saying we raised our kids in a way — we allowed the technology companies to take over childhood. We allowed a few giant companies to own our children’s attention, to show them millions of short videos, to destroy their ability to pay attention, to stop them from reading books, and this is the result.”

For a portion of his remarks, Haidt also examined the consequences of social media for politics, showing data that chart the global diminishment of democracy since the 2010s, while the world has become soaked in misinformation and conflictual online interactions.

“That, I think, is what digital technology has done to us,” Haidt said. “It was supposed to connect us, but instead it has broken things, divided us, and made it very, very hard to ever have common facts, common truths, common stories again.”

Towards the end of his remarks, Haidt also speculated that the effects of using AI will be corrosive as well, intellectually and psychologically.

“AI is not exactly going to make us better at interacting with human beings,” Haidt said.

With all this in mind, what is to be done to limit the intellectual and social damage from tech devices and social media? For one thing, Haidt suggested, we should be less impressed by high-tech innovations and social media.

“We need to disenthrall ourselves from technology,” Haidt said, paraphrasing a line written by President Abraham Lincoln. He added: “I suggest that we have a generally negative view … of social media and of AI.” This kind of “more emotionally negative or ambivalent view” will make it easier for us to reverse the way technology seems to control us.

As a practical matter, Haidt suggested, that means taking steps to limit our exposure to technology. His own public-advocacy group, The Anxious Generation Movement, suggests a set of four reforms: no smartphones for kids before they are high-school age; no social media before age 16; making schools phone-free, from bell to bell; and giving kids more independence, free play, and responsibility in the world.

Certainly there is movement toward some of these concepts. Some school districts in the U.S. are banning or limiting phone usage; Australia has also instituted a ban on social media for anyone under 16, while a handful of other countries have announced similar plans.

“There’s a gigantic techlash happening right now,” Haidt suggested. For all the sudden changes technology has introduced within the last 15 years, it is still possible, for now, for people to find a way out of our tech-induced predicament.

“The good news is, there is human agency,” Haidt said.



from MIT News https://ift.tt/dyQtriB