Wednesday, July 9, 2025

MIT Open Learning bootcamp supports effort to bring invention for long-term fentanyl recovery to market

Evan Kharasch, professor of anesthesiology and vice chair for innovation at Duke University, has developed two approaches that may aid in fentanyl addiction recovery. After attending MIT’s Substance Use Disorders (SUD) Ventures Bootcamp, he’s committed to bringing them to market.

Illicit fentanyl addiction is still a national emergency in the United States, fueled by years of opioid misuse. As opioid prescriptions fell by 50 percent over 15 years, many turned to street drugs. Among those drugs, fentanyl stands out for its potency — just 2 milligrams can be fatal — and its low production cost. Often mixed with other drugs, it contributed to a large portion of over 80,000 overdose deaths in 2024. It has been particularly challenging to treat with currently available medications for opioid use disorder.  

As an anesthesiologist, Kharasch is highly experienced with opioids, including methadone, one of only three drugs approved in the United States for treating opioid use disorder. Methadone is a key option for managing fentanyl use. It’s employed to transition patients off fentanyl and to support ongoing maintenance, but access is limited, with only 20 percent of eligible patients receiving it. Initiating and adjusting methadone treatment can take weeks due to its clinical characteristics, often causing withdrawal and requiring longer hospital stays. Maintenance demands daily visits to one of just over 2,000 clinics, disrupting work or study and leading most patients to drop out after a few months.

To tackle these challenges, Kharasch developed two novel methadone formulations: one for faster absorption to cut initiation time from weeks to days — or even hours — and one to slow elimination, thereby potentially requiring only weekly, rather than daily, dosing. As a clinician, scientist, and entrepreneur, he sees the science as demanding, but bringing these treatments to patients presents an even greater challenge. Kharasch learned about the SUD Ventures Bootcamp, part of MIT Open Learning, as a recipient of research funding from the National Institute on Drug Abuse (NIDA). He decided to apply to bridge the gap in his expertise and was selected to attend as a fellow.

Each year, the SUD Ventures Bootcamp unites innovators — including scientists, entrepreneurs, and medical professionals — to develop bold, cross-disciplinary solutions to substance use disorders. Through online learning and an intensive one-week in-person bootcamp, teams tackle challenges in different “high priority” areas. Guided by experts in science, entrepreneurship, and policy, they build and pitch ventures aimed at real-world impact. Beyond the multidisciplinary curriculum, the program connects people deeply committed to this space and equipped to drive progress.

Throughout the program, Kharasch’s concepts were validated by the invited industry experts, who highlighted the potential impact of a longer-acting methadone formulation, particularly in correctional settings. Encouragement from MIT professors, coaches, and peers energized Kharasch to fully pursue commercialization. He has already begun securing intellectual property rights, validating the regulatory pathway through the U.S. Food and Drug Administration, and gathering market and patient feedback.

The SUD Ventures Bootcamp, he says, both activated and validated his passion for bringing these innovations to patients. “After many years of basic, translational, and clinical research on methadone — all supported by NIDA — I experienced that aha moment of recognizing a potential opportunity to apply the findings to benefit patients at scale,” Kharasch says. “The NIDA-sponsored participation in the MIT SUD Ventures Bootcamp was the critical catalyst which ignited the inspiration and commitment to pursue commercializing our research findings into better treatments for opioid use disorder.”

As next steps, Kharasch is seeking an experienced co-founder and finalizing IP protections. He remains engaged with the SUD Ventures network as mentors, industry experts, and peers offer help with advancing this needed solution to market. For example, one of the program's mentors, Nat Sims, the Newbower/Eitan Endowed Chair in Biomedical Technology Innovation at Massachusetts General Hospital (MGH) and a fellow anesthesiologist, has helped Kharasch arrange technology validation conversations within the MGH ecosystem and the drug development community.

“Evan’s collaboration with the MGH ecosystem can help define an optimum process for commercializing these innovations — identifying who would benefit, how they would benefit, and who is willing to pilot the product once it’s available,” says Sims.

Kharasch has also presented his project in the program’s webinar series. Looking ahead, Kharasch hopes to involve MIT Sloan School of Management students in advancing his project through health care entrepreneurship classes, continuing the momentum that began with the SUD Ventures Bootcamp.

The program and its research are supported by the NIDA of the National Institutes of Health. Cynthia Breazeal, a professor of media arts and sciences at the MIT Media Lab and dean for digital learning at MIT Open Learning, serves as the principal investigator on the grant.



from MIT News https://ift.tt/5ysU4VL

Implantable device could save diabetes patients from dangerously low blood sugar

For people with Type 1 diabetes, developing hypoglycemia, or low blood sugar, is an ever-present threat. When glucose levels become extremely low, it creates a life-threatening situation for which the standard of care is injecting a hormone called glucagon.

As an emergency backup, for cases where patients may not realize that their blood sugar is dropping to dangerous levels, MIT engineers have designed an implantable reservoir that can remain under the skin and be triggered to release glucagon when blood sugar levels get too low.

This approach could also help in cases where hypoglycemia occurs during sleep, or for diabetic children who are unable to administer injections on their own.

“This is a small, emergency-event device that can be placed under the skin, where it is ready to act if the patient’s blood sugar drops too low,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES), and the senior author of the study. “Our goal was to build a device that is always ready to protect patients from low blood sugar. We think this can also help relieve the fear of hypoglycemia that many patients, and their parents, suffer from.”

The researchers showed that this device could also be used to deliver emergency doses of epinephrine, a drug that is used to treat heart attacks and can also prevent severe allergic reactions, including anaphylactic shock.

Siddharth Krishnan, a former MIT research scientist who is now an assistant professor of electrical engineering at Stanford University, is the lead author of the study, which appears today in Nature Biomedical Engineering.

Emergency response

Most patients with Type 1 diabetes use daily insulin injections to help their body absorb sugar and prevent their blood sugar levels from getting too high. However, if their blood sugar levels get too low, they develop hypoglycemia, which can lead to confusion and seizures, and may be fatal if it goes untreated.

To combat hypoglycemia, some patients carry preloaded syringes of glucagon, a hormone that stimulates the liver to release glucose into the bloodstream. However, it isn’t always easy for people, especially children, to know when they are becoming hypoglycemic.

“Some patients can sense when they’re getting low blood sugar, and go eat something or give themselves glucagon,” Anderson says. “But some are unaware that they’re hypoglycemic, and they can just slip into confusion and coma. This is also a problem when patients sleep, as they are reliant on glucose sensor alarms to wake them when sugar drops dangerously low.”

To make it easier to counteract hypoglycemia, the MIT team set out to design an emergency device that could be triggered either by the person using it, or automatically by a sensor.

The device, which is about the size of a quarter, contains a small drug reservoir made of a 3D-printed polymer. The reservoir is sealed with a special material known as a shape-memory alloy, which can be programmed to change its shape when heated. In this case, the researchers used a nickel-titanium alloy that is programmed to curl from a flat slab into a U-shape when heated to 40 degrees Celsius.

Like many other protein or peptide drugs, glucagon tends to break down quickly, so the liquid form can’t be stored long-term in the body. Instead, the MIT team created a powdered version of the drug, which remains stable for much longer and stays in the reservoir until released.

Each device can carry either one or four doses of glucagon, and it also includes an antenna tuned to respond to a specific frequency in the radiofrequency range. That allows it to be remotely triggered to turn on a small electrical current, which is used to heat the shape-memory alloy. When the temperature reaches the 40-degree threshold, the slab bends into a U shape, releasing the contents of the reservoir.

Because the device can receive wireless signals, it could also be designed so that drug release is triggered by a glucose monitor when the wearer’s blood sugar drops below a certain level.

“One of the key features of this type of digital drug delivery system is that you can have it talk to sensors,” Krishnan says. “In this case, the continuous glucose-monitoring technology that a lot of patients use is something that would be easy for these types of devices to interface with.”
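The sensor-to-device interface described above amounts to a simple closed-loop rule: fire the release once when glucose crosses a low threshold. The sketch below is purely illustrative; the function names and the 54 mg/dL cutoff are assumptions made for the example, not details of the actual device or its firmware.

```python
# Hypothetical sketch of a sensor-driven trigger for an implant like the one
# described: a continuous glucose monitor (CGM) reading is checked against a
# hypoglycemia threshold, and a single trigger releases the glucagon dose.
# All names and values here are illustrative assumptions, not the real system.

HYPOGLYCEMIA_THRESHOLD_MG_DL = 54  # assumed severe-low cutoff for the example

def should_trigger_release(glucose_mg_dl: float, already_released: bool) -> bool:
    """Fire the release exactly once when glucose falls below the threshold."""
    return glucose_mg_dl < HYPOGLYCEMIA_THRESHOLD_MG_DL and not already_released

def run_monitor(readings):
    """Walk a stream of CGM readings; return the index at which release fires."""
    released_at = None
    for i, reading in enumerate(readings):
        if should_trigger_release(reading, released_at is not None):
            released_at = i  # in the real device: send the wireless signal here
    return released_at
```

For a falling stream of readings, the loop fires once at the first reading below the threshold and ignores later lows, mirroring the one-shot nature of each reservoir dose.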

Reversing hypoglycemia

After implanting the device in diabetic mice, the researchers used it to trigger glucagon release as the animals’ blood sugar levels were dropping. Within less than 10 minutes of activating the drug release, blood sugar levels began to level off, allowing them to remain within the normal range and avert hypoglycemia.

The researchers also tested the device with a powdered version of epinephrine. They found that within 10 minutes of drug release, epinephrine levels in the bloodstream became elevated and heart rate increased.

In this study, the researchers kept the devices implanted for up to four weeks, but they now plan to see if they can extend that time up to at least a year.

“The idea is you would have enough doses that can provide this therapeutic rescue event over a significant period of time. We don’t know exactly what that is — maybe a year, maybe a few years, and we’re currently working on establishing what the optimal lifetime is. But then after that, it would need to be replaced,” Krishnan says.

Typically, when a medical device is implanted in the body, scar tissue develops around the device, which can interfere with its function. However, in this study, the researchers showed that even after fibrotic tissue formed around the implant, they were able to successfully trigger the drug release.

The researchers are now planning for additional animal studies and hope to begin testing the device in clinical trials within the next three years.

“It’s really exciting to see our team accomplish this, which I hope will someday help diabetic patients and could more broadly provide a new paradigm for delivering any emergency medicine,” says Robert Langer, the David H. Koch Institute Professor at MIT and an author of the paper.

Other authors of the paper include Laura O’Keeffe, Arnab Rudra, Derin Gumustop, Nima Khatib, Claudia Liu, Jiawei Yang, Athena Wang, Matthew Bochenek, Yen-Chun Lu, Suman Bose, and Kaelan Reed.

The research was funded by the Leona M. and Harry B. Helmsley Charitable Trust, the National Institutes of Health, a JDRF postdoctoral fellowship, and the National Institute of Biomedical Imaging and Bioengineering.



from MIT News https://ift.tt/JlXaBz3

Tuesday, July 8, 2025

Processing our technological angst through humor

The first time Steve Jobs held a public demo of the Apple Macintosh, in early 1984, scripted jokes were part of the rollout. First, Jobs pulled the machine out of a bag. Then, using its built-in speech software, the Macintosh made a quip about rival IBM’s mainframes: “Never trust a computer you can’t lift.”

There’s a reason Jobs was doing that. For the first few decades after computing became part of cultural life, starting in the 1950s, computers seemed unfriendly, grim, and liable to work against human interests. Take the 1968 film “2001: A Space Odyssey,” in which the onboard computer, HAL, turns against the expedition’s astronauts. It’s a famous cultural touchstone. Jobs, in selling the idea of a personal computer, was using humor to ease concerns about the machines.

“Against the sense of computing as cold and numbers-driven, the fact that this computer was using voice technology to deliver jokes made it seem less forbidding, less evil,” says MIT scholar Benjamin Mangrum.

In fact, this dynamic turns up throughout modern culture, in movies, television, fiction, and the theater. We often deal with our doubts and fears about computing through humor, whether reconciling ourselves to machines or critiquing them. Now, Mangrum analyzes this phenomenon in a new book, “The Comedy of Computation: Or, How I Learned to Stop Worrying and Love Obsolescence,” published this month by Stanford University Press.

“Comedy has been a form for making this technology seem ordinary,” says Mangrum, an associate professor in MIT’s literature program. “Where in other circumstances computing might seem inhuman or impersonal, comedy allows us to incorporate it into our lives in a way that makes it make sense.”

Reversals of fortune

Mangrum’s interest in the subject was sparked partly by William Marchant’s 1955 play, “The Desk Set” — a romantic comedy later turned into a film starring Katharine Hepburn and Spencer Tracy — which queries, among other things, how office workers will co-exist alongside computers.

Perhaps against expectations, romantic comedies have turned out to be one of the most prominent contemporary forms of culture that grapple with technology and its effects on us. Mangrum, in the book, explains why: Their plot structure often involves reversals, which sometimes are extended to technology, too. Computing might seem forbidding, but it might also pull people together.

“One of the common tropes about romantic comedies is that there are characters or factors in the drama that obstruct the happy union of two people,” Mangrum observes. “And often across the arc of the drama, the obstruction or obstructive character is transformed into a partner, or collaborator, and assimilated within the happy couple’s union. That provides a template for how some cultural producers want to present the experience of computing. It begins as an obstruction and ends as a partner.”

That plot structure, Mangrum notes, dates to antiquity and was common in Shakespeare’s day. Still, as he writes in the book, there is “no timeless reality called Comedy,” as the vehicles and forms of it change over time. Beyond that, specific jokes about computing can quickly become outmoded. Steve Jobs made fun of mainframes, and the 1998 Nora Ephron comedy “You’ve Got Mail” got laughs out of dial-up modems, but those jokes might leave most people puzzled today.

“Comedy is not a fixed resource,” Mangrum says. “It’s an ever-changing toolbox.”

Continuing this evolution into the 21st century, Mangrum observes that a lot of computational comedy centers on an entire category of commentary he calls “the Great Tech-Industrial Joke.” This focuses on the gap between noble-sounding declared aspirations of technology and the sometimes-dismal outcomes it creates.

Social media, for instance, promised new worlds of connectivity and social exploration, and has benefits people enjoy — but it has also generated polarization, misinformation, and toxicity. Technology’s social effects are complex. Whole television shows, such as “Silicon Valley,” have dug into this terrain.

“The tech industry announces that some of its products have revolutionary or utopian aims, but the achievements of many of them fall far short of that,” Mangrum says. “It’s a funny setup for a joke. People have been claiming we’re saving the world, when actually we’re just processing emails faster. But it’s a mode of criticism aimed at big tech, since its products are more complicated.”

A complicated, messy picture

“The Comedy of Computation” digs into several other facets of modern culture and technology. The notion of personal authenticity, as Mangrum observes, is a fairly recent and modern construct in society — and it’s another sphere of life that collides with computing, since social media is full of charges of inauthenticity.

“That ethics of authenticity connects to comedy, as we make jokes about people not being authentic,” Mangrum says.

“The Comedy of Computation” has received praise from other scholars. Mark Goble, a professor of English at the University of California at Berkeley, has called it “essential for understanding the technological world in its complexity, absurdity, and vibrancy.”

For his part, Mangrum emphasizes that his book is an exploration of the full complexity of technology, culture, and society.

“There’s this really complicated, messy picture,” Mangrum says. “And comedy sometimes finds a way of experiencing and finding pleasure in that messiness, and other times it neatly wraps it up in a lesson that can make things neater than they actually are.”

Mangrum adds that the book focuses on “the combination of the threat and pleasure that’s involved across the history of the computer, in the ways it’s been assimilated and shaped society, with real advances and benefits, along with real threats, for instance to employment. I’m interested in the duality, the simultaneous and seemingly conflicting features of that experience.”



from MIT News https://ift.tt/46EC7qS

From MIT, an instruction manual for turning research into startups

Since MIT opened the first-of-its-kind venture studio within a university in 2019, it has demonstrated how a systematic process can help turn research into impactful ventures.

Now, MIT Proto Ventures is launching the “R&D Venture Studio Playbook,” a resource to help universities, national labs, and corporate R&D offices establish their own in-house venture studios. The online publication offers a comprehensive framework for building ventures from the ground up within research environments.

“There is a huge opportunity cost to letting great research sit idle,” says Fiona Murray, associate dean for innovation at the MIT Sloan School of Management and a faculty director for Proto Ventures. “The venture studio model makes research systematic, rather than messy and happenstance.” 

Bigger than MIT

The new playbook arrives amid growing national interest in revitalizing the United States’ innovation pipeline — a challenge underscored by the fact that just a fraction of academic patents ever reach commercialization.

“Venture-building across R&D organizations, and especially within academia, has been based on serendipity,” says MIT Professor Dennis Whyte, a faculty director for Proto Ventures who helped develop the playbook. “The goal of R&D venture studios is to take away the aspect of chance — to turn venture-building into a systematic process. And this is something not just MIT needs; all research universities and institutions need it.”

Indeed, MIT Proto Ventures is actively sharing the playbook with peer institutions, federal agencies, and corporate R&D leaders seeking to increase the translational return on their research investments.

“We’ve been following MIT’s Proto Ventures model with the vision of delivering new ventures that possess both strong tech push and strong market pull,” says Mark Arnold, associate vice president of Discovery to Impact and managing director of Texas startups at The University of Texas at Austin. “By focusing on market problems first and creating ventures with a supportive ecosystem around them, universities can accelerate the transition of ideas from the lab into real-world solutions.” 

What’s in the playbook

The playbook outlines the venture studio model process followed by MIT Proto Ventures. MIT’s venture studio embeds full-time entrepreneurial scientists — called venture builders — inside research labs. These builders work shoulder-to-shoulder with faculty and graduate students to scout promising technologies, validate market opportunities, and co-create new ventures.

“We see this as an open-source framework for impact,” says MIT Proto Ventures Managing Director Gene Keselman. “Our goal is not just to build startups out of MIT — it’s to inspire innovation wherever breakthrough science is happening.”

The playbook was developed by the MIT Proto Ventures team — including Keselman, venture builders David Cohen-Tanugi and Andrew Inglis, and faculty leaders Murray, Whyte, Andrew Lo, Michael Cima, and Michael Short. 

“This problem is universal, so we knew if it worked there’d be an opportunity to write the book on how to build a translational engine,” Keselman said. “We’ve had enough success now to be able to say, ‘Yes, this works, and here are the key components.’” 

In addition to detailing core processes, the playbook includes case studies, sample templates, and guidance for institutions seeking to tailor the model to fit their unique advantages. It emphasizes that building successful ventures from R&D requires more than mentorship and IP licensing — it demands deliberate, sustained focus, and a new kind of translational infrastructure. 

How it works

A key part of MIT’s venture studio is structuring efforts into distinct tracks or problem areas — MIT Proto Ventures calls these channels. Venture builders work in a single track that aligns with their expertise and interest. For example, Cohen-Tanugi is embedded in the MIT Plasma Science and Fusion Center, working in the Fusion and Clean Energy channel. His first two successes have been a venture using superconducting magnets for in-space propulsion and a deep-tech startup improving power efficiency in data centers.

“This playbook is both a call to action and a blueprint,” says Cohen-Tanugi, lead author of the playbook. “We’ve learned that world-changing inventions often remain on the lab bench not because they lack potential, but because no one is explicitly responsible for turning them into businesses. The R&D venture studio model fixes that.”



from MIT News https://ift.tt/Cq2GAsH

Monday, July 7, 2025

Study could lead to LLMs that are better at complex reasoning

For all their impressive capabilities, large language models (LLMs) often fall short when given challenging new tasks that require complex reasoning skills.

While an accounting firm’s LLM might excel at summarizing financial reports, that same model could fail unexpectedly if tasked with predicting market trends or identifying fraudulent transactions.

To make LLMs more adaptable, MIT researchers investigated how a certain training technique can be strategically deployed to boost a model’s performance on unfamiliar, difficult problems.

They show that test-time training, a method that involves temporarily updating some of a model’s inner workings during deployment, can lead to a sixfold improvement in accuracy. The researchers developed a framework for implementing a test-time training strategy that uses examples of the new task to maximize these gains.

Their work could improve a model’s flexibility, enabling an off-the-shelf LLM to adapt to complex tasks that require planning or abstraction. This could lead to LLMs that would be more accurate in many applications that require logical deduction, from medical diagnostics to supply chain management.

“Genuine learning — what we did here with test-time training — is something these models can’t do on their own after they are shipped. They can’t gain new skills or get better at a task. But we have shown that if you push the model a little bit to do actual learning, you see that huge improvements in performance can happen,” says Ekin Akyürek PhD ’25, lead author of the study.

Akyürek is joined on the paper by graduate students Mehul Damani, Linlu Qiu, Han Guo, and Jyothish Pari; undergraduate Adam Zweiger; and senior authors Yoon Kim, an assistant professor of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Jacob Andreas, an associate professor in EECS and a member of CSAIL. The research will be presented at the International Conference on Machine Learning.

Tackling hard domains

LLM users often try to improve the performance of their model on a new task using a technique called in-context learning. They feed the model a few examples of the new task as text prompts, which guide the model’s outputs.

But in-context learning doesn’t always work for problems that require logic and reasoning.

The MIT researchers investigated how test-time training can be used in conjunction with in-context learning to boost performance on these challenging tasks. Test-time training involves updating some model parameters — the internal variables it uses to make predictions — using a small amount of new data specific to the task at hand.

The researchers explored how test-time training interacts with in-context learning. They studied design choices that maximize the performance improvements one can coax out of a general-purpose LLM.

“We find that test-time training is a much stronger form of learning. While simply providing examples can modestly boost accuracy, actually updating the model with those examples can lead to significantly better performance, particularly in challenging domains,” Damani says.

In-context learning requires a small set of task examples, including problems and their solutions. The researchers use these examples to create a task-specific dataset needed for test-time training.

To expand the size of this dataset, they create new inputs by slightly changing the problems and solutions in the examples, such as by horizontally flipping some input data. They find that training the model on this expanded dataset leads to the best performance.
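The dataset-expansion step can be sketched in a few lines: for grid-style tasks, a horizontal flip applied consistently to both the problem and its solution yields a new, valid training pair. This is a minimal illustration of the idea, not the researchers' actual augmentation pipeline.

```python
# Minimal sketch of dataset expansion for test-time training: each in-context
# example (an input grid and its solution grid) is augmented with a horizontal
# flip, doubling the task-specific dataset. Illustrative only; the actual
# augmentations used in the study may differ.

def hflip(grid):
    """Flip a 2D grid (list of rows) left-to-right."""
    return [list(reversed(row)) for row in grid]

def augment(examples):
    """Expand (input, output) pairs with horizontally flipped copies."""
    expanded = list(examples)
    for inp, out in examples:
        # Apply the same transform to input and output so the pair stays valid.
        expanded.append((hflip(inp), hflip(out)))
    return expanded
```

The key constraint is that the transform must preserve the task's input-output relationship, which is why simple symmetries like flips are natural choices for grid puzzles.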

In addition, the researchers update only a small number of model parameters using a technique called low-rank adaptation, which improves the efficiency of the test-time training process.
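Low-rank adaptation leaves the pretrained weight matrix frozen and trains only two small factors whose product is added to it. The NumPy sketch below uses illustrative dimensions chosen for the example, not the ones from the study:

```python
import numpy as np

# Sketch of low-rank adaptation (LoRA): instead of updating a frozen
# d_out x d_in weight matrix W, train only two small factors B (d_out x r)
# and A (r x d_in), so the effective weight is W + B @ A. With r much smaller
# than d_out and d_in, the number of trainable parameters shrinks dramatically.

d_in, d_out, r = 512, 512, 8  # illustrative dimensions

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
B = np.zeros((d_out, r))                   # trainable, zero-initialized
A = rng.standard_normal((r, d_in)) * 0.01  # trainable

def forward(x):
    """Adapted layer: the low-rank update B @ A is added to the frozen W."""
    return (W + B @ A) @ x

full_params = W.size           # what full fine-tuning would touch
lora_params = A.size + B.size  # what test-time training actually updates
```

Here the adapter exposes 8,192 trainable parameters against 262,144 in the frozen matrix, roughly 3 percent, and discarding A and B after a prediction restores the original weights, which matches the temporary, per-query nature of test-time updates.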

“This is important because our method needs to be efficient if it is going to be deployed in the real world. We find that you can get huge improvements in accuracy with a very small amount of parameter training,” Akyürek says.

Developing new skills

Streamlining the process is key, since test-time training is employed on a per-instance basis, meaning a user would need to do this for each individual task. The updates to the model are only temporary, and the model reverts to its original form after making a prediction.

A model that usually takes less than a minute to answer a query might take five or 10 minutes to provide an answer with test-time training, Akyürek adds.

“We wouldn’t want to do this for all user queries, but it is useful if you have a very hard task that you want the model to solve well. There also might be tasks that are too challenging for an LLM to solve without this method,” he says.

The researchers tested their approach on two benchmark datasets of extremely complex problems, such as IQ puzzles. It boosted accuracy as much as sixfold over techniques that use only in-context learning.

Tasks that involved structured patterns or completely unfamiliar types of data showed the largest performance improvements.

“For simpler tasks, in-context learning might be OK. But updating the parameters themselves might develop a new skill in the model,” Damani says.

In the future, the researchers want to use these insights toward the development of models that continually learn.

The long-term goal is an LLM that, given a query, can automatically determine if it needs to use test-time training to update parameters or if it can solve the task using in-context learning, and then implement the best test-time training strategy without the need for human intervention.

This work is supported, in part, by the MIT-IBM Watson AI Lab and the National Science Foundation.



from MIT News https://ift.tt/EdMwkgr

Exploring data and its influence on political behavior

Data and politics are becoming increasingly intertwined. Today’s political campaigns and voter mobilization efforts are thoroughly data-driven. Voters, pollsters, and elected officials rely on data to make choices that have local, regional, and national impacts.

A Department of Political Science course offers students tools to help make sense of these choices and their outcomes.

In class 17.831 (Data and Politics), students are introduced to principles and practices necessary to understand electoral and other types of political behavior. Taught by Daniel Hidalgo, an associate professor of political science, the class has students use real-world datasets to explore topics like election polling and prediction, voter turnout, voter targeting, and shifts in public opinion over time.

The course prepares students to describe why and how the use of data and statistical methods has changed electoral politics, understand the basic principles of social science statistics, and analyze data using modern statistical computing tools. The capstone is an original project involving the collection, analysis, and interpretation of survey data of the kind used in modern campaigns.

“I wanted to create an applied, practice-based course that would appeal to undergraduates and provide a foundation for parsing, understanding, and reporting on large datasets in politics,” says Hidalgo, who redesigned the course for the spring 2025 semester.

Hidalgo, who also works in the Political Methodology Lab at MIT, investigates the political economy of elections, campaigns, and representation in developing democracies, especially in Latin America, as well as quantitative methods in the social sciences.

Politics and modernity

The influence of, and access to, artificial intelligence and large language models makes a course like Data and Politics even more important, Hidalgo says. “You have to understand the people at the other end of the data,” he argues.

The course also centers the human element in politics, exploring the structures and impacts of conflict and bias while also working to improve information literacy and coherent storytelling.

“Data analysis and collection will never be perfect,” Hidalgo says. “But analyzing and understanding who holds which ideas, and why, and using the information to tell a coherent story is valuable in politics and elsewhere.”

The “always on” nature of news and related content, coupled with the variety of communications channels available to voters, has increased the complexity of the data collection process in polling and campaigns. “In the past, people would answer the phone when you called their homes,” Hidalgo notes, describing analog methods previously used to collect voter data. Now, political scientists, data analysts, and others must contend with the availability of streaming content, mobile devices, and other channels comprising a vast, fractured media ecosystem.

The course opens a window into what happens behind the scenes of local and national political campaigns, which appealed to second-year political science major Jackson Hamilton. “I took this class hoping to expand my ability to use coding for political science applications, and in order to better understand how political models and predictions work,” he says.

“We tailor-made our own sets of questions and experimental designs that we thought would be interesting,” Hamilton adds. “I found that political issues that get a lot of media coverage are not necessarily the same issues which divide lawmakers, at least locally.”

Transparency and accountability in politics and other areas

Teaching students to use tools like polling and data analysis effectively can improve their ability to identify and combat disinformation and misinformation. “As a political scientist, I’m substantively engaged,” Hidalgo says, “and I’d like to help others be engaged, too.”

“There’s lots of data available, and this course provides a foundation and the resources necessary to understand and visualize it,” Hidalgo continues. “The ability to design, implement, and understand surveys has value inside and outside the classroom.”

In politics, Hidalgo believes equipping students to navigate these spaces effectively can potentially improve and increase civic engagement. Data, he says, can help defend ideas. “There’s so much information, it’s important to develop the skills and abilities necessary to understand and visualize it,” he says. “This has value for everyone.”

Second-year physics major Sean Wilson, who also took the class this spring, notes the value of data visualization and analysis both as a potential physicist and a voter. “Data analysis in both politics and in physics is essential work given that voting tendencies, public opinion, and government leadership change so often in the United States,” he says, “and that modeling can be used to support physical hypotheses and improve our understanding of how things work.”

For Wilson, the course can help anyone interested in understanding large groups’ behaviors. “Political scientists are constantly working to better understand how and why certain events occur in U.S. politics, and data analysis is an effective tool for doing so,” he says. “Members of a representative democracy can make better decisions with this kind of information.”

Hamilton, meanwhile, learned more about the behind-the-scenes machinery at work in electoral politics. “I had the opportunity to create a couple of budget trade-off questions, to get a sense of what people actually thought the government should spend money on when they had to make choices,” he says.
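A forced budget trade-off question of the kind Hamilton describes can be tallied in a few lines of code. The sketch below is purely illustrative — the spending categories and responses are invented, not taken from the class:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical responses: each respondent splits a fixed 100-unit budget
# across categories, so favoring one area forces cuts elsewhere.
responses = [
    {"education": 40, "transit": 35, "parks": 25},
    {"education": 55, "transit": 25, "parks": 20},
    {"education": 30, "transit": 50, "parks": 20},
]

def mean_allocation(responses):
    """Average share assigned to each category across respondents."""
    shares = defaultdict(list)
    for r in responses:
        assert sum(r.values()) == 100  # the trade-off constraint
        for category, amount in r.items():
            shares[category].append(amount)
    return {c: mean(v) for c, v in shares.items()}

print(mean_allocation(responses))
```

Because the shares must sum to 100, the averages reveal relative priorities rather than unconstrained enthusiasm — the property that makes trade-off questions informative.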

“Computer science and data science aren’t just useful for STEM applications; data science approaches can also be extremely useful in many social sciences,” Hamilton argues.

“[Hidalgo helped me realize] that I needed to understand and use data science approaches to gain a deeper understanding of my areas of interest,” Hamilton says. “He focuses on how different approaches in coding can be applied to different types of problems in political science.” 



from MIT News https://ift.tt/eXTh0vM

Professor Emeritus Barry Vercoe, a pioneering force in computer music, dies at 87

MIT Professor Emeritus Barry Lloyd Vercoe, a pioneering force in computer music, a founding faculty member of the MIT Media Lab, and a leader in the development of MIT’s Music and Theater Arts Section, passed away on June 15. He was 87.

Vercoe’s life was a rich symphony of artistry, science, and innovation that led to profound enhancements of musical experience for expert musicians as well as for the general public — and especially young people.

Born in Wellington, New Zealand, on July 24, 1937, Vercoe earned bachelor’s degrees in music (in 1959) and mathematics (in 1962) from the University of Auckland, followed by a doctor of musical arts in music composition from the University of Michigan in 1968.

After completing postdoctoral research in digital audio processing at Princeton University and a visiting lectureship at Yale University, Vercoe joined MIT’s Department of Humanities (Music) in 1971, beginning a tenure in the department that lasted through 1984. During this period, he played a key role in advancing what would become MIT’s Music and Theater Arts (MTA) Section, helping to shape its forward-thinking curriculum and interdisciplinary philosophy. Vercoe championed the integration of musical creativity with scientific inquiry, laying the groundwork for MTA’s enduring emphasis on music technology and experimental composition.

In 1973, Vercoe founded MIT’s Experimental Music Studio (EMS) — the Institute’s first dedicated computer music facility, and one of the first in the world. Operated under the auspices of the music program, EMS became a crucible for innovation in algorithmic composition, digital synthesis, and computer-assisted performance. His leadership not only positioned MIT as a hub for music technology, but also influenced how the Institute approached the intersection of the arts with engineering. This legacy is honored today by a commemorative plaque in the Kendall Square MBTA station.

Violist, faculty founder of the MIT Chamber Music Society, and Institute Professor Marcus Thompson says: “Barry was first and foremost a fine musician, and composer for traditional instruments and ensembles. As a young professor, he taught our MIT undergraduates to write and sing Renaissance counterpoint as he envisioned how the act of traditional music-making offered a guide to potential artistic interaction between humans and computers. In 1976, he enlisted me to premiere what became his iconic, and my most-performed, work, ‘Synapse for Viola and Computer.’”

During a Guggenheim Fellowship in 1982–83, Vercoe developed the Synthetic Performer, a groundbreaking real-time interactive accompaniment system, while working closely with flautist Larry Beauregard at the Institute for Research and Coordination in Acoustics/Music (IRCAM) in Paris.

In 1984, Vercoe became a founding faculty member of the MIT Media Lab, where he launched the Music, Mind, and Machine group. His research spanned machine listening, music cognition, and real-time digital audio synthesis. His Csound language, created in 1985, is still widely used for music programming, and his contributions helped define the MPEG-4 Structured Audio standard.

He also served as associate academic head of the Media Lab’s graduate program in Media Arts and Sciences (MAS). Vercoe mentored many future leaders in digital music and sound computation, including two of his MAS graduate students — Anna Huang SM ’08 and Paris Smaragdis PhD ’01 — who have recently joined MIT’s music faculty, as well as Miller Puckette, an emeritus faculty member at the University of California at San Diego, and Richard Boulanger, a professor of electronic production and design at the Berklee College of Music.

“Barry Vercoe will be remembered by designers, developers, researchers, and composers for his greatest ‘composition,’ Csound, his free and open-source software synthesis language,” states Boulanger. “I know that, through Csound, Barry’s musical spirit will live on, not only in my teaching, my research, and my music, but in the apps, plugins, and musical compositions of generations to come.”

Tod Machover, faculty director of the MIT Media Lab and Muriel R. Cooper Professor of Music and Media, reflects, “Barry Vercoe was a giant in the field of computer music whose innovations in software synthesis, interactive performance, and educational tools for young people influenced and inspired many, including myself. He was a superb mentor, always making sure that artistic sensibility drove music tech innovation, and that sophisticated expression was at the core of Media Lab — and MIT — culture.”

Vercoe’s work earned numerous accolades. In addition to the Guggenheim Fellowship, he was honored with the 1992 Computerworld Smithsonian Award for innovation and the 2004 SEAMUS Lifetime Achievement Award.

Beyond MIT, Vercoe consulted with Analog Devices and collaborated with international institutions like IRCAM under the direction of Pierre Boulez. His commitment to democratizing music technology was evident in his contributions to the One Laptop per Child initiative, which brought accessible digital sound tools to young people in underserved communities worldwide.

He is survived by his former wives, Kathryn Veda Vaughn and Elizabeth Vercoe; their children, Andrea Vercoe and Scott Vercoe; and generations of students and collaborators who continue to build on his groundbreaking work. A memorial service for family will be held in New Zealand later this summer, and a special event in his honor will take place at MIT in the fall. The Media Lab will share details about the MIT gathering as they become available.

Named professor emeritus at the MIT Media Lab upon his retirement in 2010, Vercoe leaves a legacy that embodies the lab’s — and MIT’s — vision of creative, ethical, interdisciplinary research at the convergence of art, science, and technology. His music, machines, and generously inventive spirit will forever shape the way we listen, learn, and communicate.



from MIT News https://ift.tt/fW2Ue6q

Study shows how a common fertilizer ingredient benefits plants

Lanthanides are a class of rare earth elements that in many countries are added to fertilizer as micronutrients to stimulate plant growth. But little is known about how they are absorbed by plants or influence photosynthesis, potentially leaving their benefits untapped.

Now, researchers from MIT have shed light on how lanthanides move through and operate within plants. These insights could help farmers optimize their use to grow some of the world’s most popular crops.

Published today in the Journal of the American Chemical Society, the study shows that a single nanoscale dose of lanthanides applied to seeds can make some of the world’s most common crops more resilient to UV stress. The researchers also uncovered the chemical processes by which lanthanides interact with the chlorophyll pigments that drive photosynthesis, showing that different lanthanide elements strengthen chlorophyll by replacing the magnesium at its center.

“This is a first step to better understand how these elements work in plants, and to provide an example of how they could be better delivered to plants, compared to simply applying them in the soil,” says Associate Professor Benedetto Marelli, who conducted the research with postdoc Giorgio Rizzo. “This is the first example of a thorough study showing the effects of lanthanides on chlorophyll, and their beneficial effects to protect plants from UV stress.”

Inside plant connections

Certain lanthanides are used as contrast agents in MRI and for applications including light-emitting diodes, solar cells, and lasers. Over the last 50 years, lanthanides have become increasingly used in agriculture to enhance crop yields, with China alone applying lanthanide-based fertilizers to nearly 4 million hectares of land each year.

“Lanthanides have been considered for a long time to be biologically irrelevant, but that’s changed in agriculture, especially in China,” says Rizzo, the paper’s first author. “But we largely don’t know how lanthanides work to benefit plants — nor do we understand their uptake mechanisms from plant tissues.”

Recent studies have shown that low concentrations of lanthanides can promote plant growth, root elongation, hormone synthesis, and stress tolerance, but higher doses can harm plants. Striking the right balance has been hard because little is understood about how lanthanides are absorbed by plants or how they interact with the soil around roots.

For the study, the researchers leveraged seed coating and treatment technologies they previously developed to investigate the way the plant pigment chlorophyll interacts with lanthanides, both inside and outside of plants. Up until now, researchers haven’t been sure whether chlorophyll interacts with lanthanide ions at all.

Chlorophyll drives photosynthesis, but the pigments lose their ability to efficiently absorb light when the magnesium ion at their core is removed. The researchers discovered that lanthanides can fill that void, helping chlorophyll pigments partially recover some of their optical properties in a process known as re-greening.

“We found that lanthanides can boost several parameters of plant health,” Marelli says. “They mostly accumulate in the roots, but a small amount also makes its way to the leaves, and some of the new chlorophyll molecules made in leaves have lanthanides incorporated in their structure.”

This study also offers the first experimental evidence that lanthanides can increase plant resilience to UV stress, something the researchers say was completely unexpected.

“Chlorophylls are very sensitive pigments,” Rizzo says. “They can convert light to energy in plants, but when they are isolated from the cell structure, they rapidly hydrolyze and degrade. However, in the form with lanthanides at their center, they are pretty stable, even after extracting them from plant cells.”

Using several spectroscopic techniques, the researchers found that the benefits held across a range of staple crops, including chickpea, barley, corn, and soybeans.

The findings could be used to boost crop yield and increase the resilience of some of the world’s most popular crops to extreme weather.

“As we move into an environment where extreme heat and extreme climate events are more common, and particularly where we can have prolonged periods of sun in the field, we want to provide new ways to protect our plants,” Marelli says. “There are existing agrochemicals that can be applied to leaves for protecting plants from stressors such as UV, but they can be toxic, increase microplastics, and can require multiple applications. This could be a complementary way to protect plants from UV stress.”

Identifying new applications

The researchers also found that larger lanthanide elements like lanthanum were more effective at strengthening chlorophyll pigments than smaller ones. Lanthanum is considered a low-value byproduct of rare earths mining, and can become a burden to the rare earth element (REE) supply chain due to the need to separate it from more desirable rare earths. Increasing the demand for lanthanum could diversify the economics of REEs and improve the stability of their supply chain, the scientists suggest.

“This study shows what we could do with these lower-value metals,” Marelli says. “We know lanthanides are extremely useful in electronics, magnets, and energy. In the U.S., there’s a big push to recycle them. That’s why for the plant studies, we focused on lanthanum, being the most abundant, cheapest lanthanide ion.”

Moving forward, the team plans to explore how lanthanides work with other biological molecules, including proteins in the human body.

In agriculture, the team hopes to scale up its research to include field and greenhouse studies to continue testing the results of UV resilience on different crop types and in experimental farm conditions.

“Lanthanides are already widely used in agriculture,” Rizzo says. “We hope this study provides evidence that allows more conscious use of them and also a new way to apply them through seed treatments.”

The research was supported by the MIT Climate Grand Challenge and the Office for Naval Research.



from MIT News https://ift.tt/Ynm1pH8

Friday, July 4, 2025

Robotic probe quickly measures key properties of new materials

Scientists are striving to discover new semiconductor materials that could boost the efficiency of solar cells and other electronics. But the pace of innovation is bottlenecked by the speed at which researchers can manually measure important material properties.

A fully autonomous robotic system developed by MIT researchers could speed things up.

Their system utilizes a robotic probe to measure an important electrical property known as photoconductance, which is how electrically responsive a material is to the presence of light.

The researchers inject materials-science-domain knowledge from human experts into the machine-learning model that guides the robot’s decision making. This enables the robot to identify the best places to contact a material with the probe to gain the most information about its photoconductance, while a specialized planning procedure finds the fastest way to move between contact points.

During a 24-hour test, the fully autonomous robotic probe took more than 125 unique measurements per hour, with more precision and reliability than other artificial intelligence-based methods.

By dramatically increasing the speed at which scientists can characterize important properties of new semiconductor materials, this method could spur the development of solar panels that produce more electricity.

“I find this paper to be incredibly exciting because it provides a pathway for autonomous, contact-based characterization methods. Not every important property of a material can be measured in a contactless way. If you need to make contact with your sample, you want it to be fast and you want to maximize the amount of information that you gain,” says Tonio Buonassisi, professor of mechanical engineering and senior author of a paper on the autonomous system.

His co-authors include lead author Alexander (Aleks) Siemenn, a graduate student; postdocs Basita Das and Kangyu Ji; and graduate student Fang Sheng. The work appears today in Science Advances.

Making contact

Since 2018, researchers in Buonassisi’s laboratory have been working toward a fully autonomous materials discovery laboratory. They’ve recently focused on discovering new perovskites, which are a class of semiconductor materials used in photovoltaics like solar panels.

In prior work, they developed techniques to rapidly synthesize and print unique combinations of perovskite material. They also designed imaging-based methods to determine some important material properties.

But photoconductance is most accurately characterized by placing a probe onto the material, shining a light, and measuring the electrical response.

“To allow our experimental laboratory to operate as quickly and accurately as possible, we had to come up with a solution that would produce the best measurements while minimizing the time it takes to run the whole procedure,” says Siemenn.

Doing so required the integration of machine learning, robotics, and material science into one autonomous system.

To begin, the robotic system uses its onboard camera to take an image of a slide with perovskite material printed on it.

Then it uses computer vision to cut that image into segments, which are fed into a neural network model that has been specially designed to incorporate domain expertise from chemists and materials scientists.

“These robots can improve the repeatability and precision of our operations, but it is important to still have a human in the loop. If we don’t have a good way to implement the rich knowledge from these chemical experts into our robots, we are not going to be able to discover new materials,” Siemenn adds.

The model uses this domain knowledge to determine the optimal points for the probe to contact based on the shape of the sample and its material composition. These contact points are fed into a path planner that finds the most efficient way for the probe to reach all points.

The adaptability of this machine-learning approach is especially important because the printed samples have unique shapes, from circular drops to jellybean-like structures.

“It is almost like measuring snowflakes — it is difficult to get two that are identical,” Buonassisi says.

Once the path planner finds the shortest path, it sends signals to the robot’s motors, which manipulate the probe and take measurements at each contact point in rapid succession.

Key to the speed of this approach is the self-supervised nature of the neural network model. The model determines optimal contact points directly on a sample image — without the need for labeled training data.

The researchers also accelerated the system by enhancing the path planning procedure. They found that adding a small amount of noise, or randomness, to the algorithm helped it find the shortest path.
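The paper's exact planner isn't described here, but the idea of injecting randomness to escape poor tours can be illustrated with a randomized nearest-neighbor heuristic over contact points. The coordinates, noise scale, and restart count below are invented for illustration:

```python
import math
import random

def tour_length(points, order):
    """Total Euclidean length of visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[i + 1]])
               for i in range(len(order) - 1))

def noisy_nearest_neighbor(points, noise, rng):
    """Greedy tour whose distance comparisons are perturbed by random
    noise, so repeated runs explore different candidate tours."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        cur = order[-1]
        nxt = min(unvisited,
                  key=lambda j: math.dist(points[cur], points[j])
                  * (1 + rng.uniform(-noise, noise)))
        unvisited.remove(nxt)
        order.append(nxt)
    return order

def plan(points, restarts=50, noise=0.1, seed=0):
    """Keep the shortest tour found across several noisy restarts."""
    rng = random.Random(seed)
    tours = [noisy_nearest_neighbor(points, noise, rng)
             for _ in range(restarts)]
    return min(tours, key=lambda t: tour_length(points, t))
```

With zero noise every restart produces the same greedy tour; a small perturbation lets some restarts deviate from greedy choices, and keeping the best of many runs often beats the deterministic tour.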

“As we progress in this age of autonomous labs, you really do need all three of these expertise — hardware building, software, and an understanding of materials science — coming together into the same team to be able to innovate quickly. And that is part of the secret sauce here,” Buonassisi says.

Rich data, rapid results

Once they had built the system from the ground up, the researchers tested each component. Their results showed that the neural network model found better contact points with less computation time than seven other AI-based methods. In addition, the path planning algorithm consistently found shorter path plans than other methods.

When they put all the pieces together to conduct a 24-hour fully autonomous experiment, the robotic system conducted more than 3,000 unique photoconductance measurements at a rate exceeding 125 per hour.

In addition, the level of detail provided by this precise measurement approach enabled the researchers to identify hotspots with higher photoconductance as well as areas of material degradation.

“Being able to gather such rich data that can be captured at such fast rates, without the need for human guidance, starts to open up doors to be able to discover and develop new high-performance semiconductors, especially for sustainability applications like solar panels,” Siemenn says.

The researchers want to continue building on this robotic system as they strive to create a fully autonomous lab for materials discovery.

This work is supported, in part, by First Solar, Eni through the MIT Energy Initiative, MathWorks, the University of Toronto’s Acceleration Consortium, the U.S. Department of Energy, and the U.S. National Science Foundation.



from MIT News https://ift.tt/lUaCDEJ

Thursday, July 3, 2025

MIT and Mass General Hospital researchers find disparities in organ allocation

In 1954, the world’s first successful organ transplant took place at Brigham and Women’s Hospital, in the form of a kidney donated from one twin to the other. At the time, a group of doctors and scientists had correctly theorized that the recipient’s antibodies were unlikely to reject an organ from an identical twin. One Nobel Prize and a few decades later, advancements in immune-suppressing drugs increased the viability of and demand for organ transplants. Today, over 1 million organ transplants have been performed in the United States, more than any other country in the world.

The impressive scale of this achievement was made possible due to advances in organ matching systems: The first computer-based organ matching system was released in 1977. Despite continued innovation in computing, medicine, and matching technology over the years, over 100,000 people in the U.S. are currently on the national transplant waiting list and 13 people die each day waiting for an organ transplant. 

Most computational research in organ allocation is focused on the initial stages, when waitlisted patients are being prioritized for organ transplants. In a new paper presented at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) in Athens, Greece, researchers from MIT and Massachusetts General Hospital focused on the final, less-studied stage: when an offer is made and the physician at the transplant center decides on behalf of the patient whether to accept or reject the offered organ. 

“I don’t think we were terribly surprised, but we were obviously disappointed,” co-first author and recent MIT PhD graduate Hammaad Adam says. Using computational models to analyze transplantation data from over 160,000 transplant candidates in the Scientific Registry of Transplant Recipients (SRTR) between 2010 and 2020, the researchers found that physicians were overall less likely to accept liver and lung offers on behalf of Black candidates, resulting in additional barriers for Black patients in the organ allocation process.  

For livers, Black patients had 7 percent lower odds of offer acceptance than white patients. For lungs, the disparity was even larger: Black patients had 20 percent lower odds of offer acceptance than white patients with similar characteristics.
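The "lower odds" figures come from comparing acceptance odds between groups (the study itself uses models that adjust for candidate characteristics). A bare-bones, unadjusted version of that comparison, using made-up counts rather than SRTR data, looks like this:

```python
def odds(accepted, rejected):
    """Odds of acceptance: accepted offers per rejected offer."""
    return accepted / rejected

def odds_ratio(acc_a, rej_a, acc_b, rej_b):
    """Odds ratio of group A relative to group B; a value below 1
    means group A's offers are accepted at lower odds."""
    return odds(acc_a, rej_a) / odds(acc_b, rej_b)

# Invented counts for illustration only:
# group A: 240 of 1,000 offers accepted; group B: 280 of 1,000.
or_ab = odds_ratio(240, 760, 280, 720)
print(round(or_ab, 3))  # 0.812, i.e. about 19 percent lower odds
```

Note that odds ratios compare accepted-to-rejected ratios, not raw acceptance rates, which is why "7 percent lower odds" is not the same as a 7-percentage-point gap in acceptance.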

The data don’t necessarily point to clinician bias as the main influence. “The bigger takeaway is that even if there are factors that justify clinical decision-making, there could be clinical conditions that we didn’t control for, that are more common for Black patients,” Adam explains. If the allocation process fails to account for such patterns in decision-making, those patterns could create obstacles for patients even if the process itself is “unbiased.”

The researchers also point out that high variability in offer acceptance and risk tolerances among transplant centers is a potential factor complicating the decision-making process. Their FAccT paper references a 2020 paper published in JAMA Cardiology, which concluded that wait-list candidates listed at transplant centers with lower offer acceptance rates have a higher likelihood of mortality. 

Another key finding was that an offer was more likely to be accepted if the donor and candidate were of the same race. The paper describes this trend as “concerning,” given the historical inequities in organ procurement that have limited donation from racial and ethnic minority groups. 

Previous work from Adam and his collaborators has aimed to address this gap. Last year, they compiled and released Organ Retrieval and Collection of Health Information for Donation (ORCHID), the first multi-center dataset describing the performance of organ procurement organizations (OPOs). ORCHID contains 10 years’ worth of OPO data, and is intended to facilitate research that addresses bias in organ procurement.

“Being able to do good work in this field takes time,” says Adam, who notes that the entirety of the organ allocation project took years to complete. To his knowledge, only one paper to date studies the association between offer acceptance and race. 

While the bureaucratic and highly interdisciplinary nature of clinical AI projects can dissuade computer science graduate students from pursuing them, Adam committed to the project for the duration of his PhD in the lab of associate professor of electrical engineering Marzyeh Ghassemi, an affiliate of the MIT Jameel Clinic and the Institute of Medical Engineering and Sciences.

To graduate students interested in pursuing clinical AI research projects, Adam recommends that they “free [themselves] from the cycle of publishing every four months.”

“I found it freeing, to be honest — it’s OK if these collaborations take a while,” he says. “It’s hard to avoid that. I made the conscious choice a few years ago and I was happy doing that work.”

This work was supported with funding from the MIT Jameel Clinic. This research was supported, in part, by Takeda Development Center Americas Inc. (successor in interest to Millennium Pharmaceuticals Inc.), an NIH Ruth L. Kirschstein National Research Service Award, a CIFAR AI Chair at the Vector Institute, and by the National Institutes of Health.



from MIT News https://ift.tt/TPrC3bD

Study: Babies’ poor vision may help organize visual brain pathways

Incoming information from the retina is channeled into two pathways in the brain’s visual system: one that’s responsible for processing color and fine spatial detail, and another that’s involved in spatial localization and detecting high temporal frequencies. A new study from MIT provides an account for how these two pathways may be shaped by developmental factors.

Newborns typically have poor visual acuity and poor color vision because their retinal cone cells are not well-developed at birth. This means that early in life, they are seeing blurry, color-reduced imagery. The MIT team proposes that such blurry, color-limited vision may result in some brain cells specializing in low spatial frequencies and low color tuning, corresponding to the so-called magnocellular system. Later, with improved vision, cells may tune to finer details and richer color, consistent with the other pathway, known as the parvocellular system.

To test their hypothesis, the researchers trained computational models of vision on a trajectory of input similar to what human babies receive early in life — low-quality images early on, followed by full-color, sharper images later. They found that these models developed processing units with receptive fields exhibiting some similarity to the division of magnocellular and parvocellular pathways in the human visual system. Vision models trained on only high-quality images did not develop such distinct characteristics.

“The findings potentially suggest a mechanistic account of the emergence of the parvo/magno distinction, which is one of the key organizing principles of the visual pathway in the mammalian brain,” says Pawan Sinha, an MIT professor of brain and cognitive sciences and the senior author of the study.

MIT postdocs Marin Vogelsang and Lukas Vogelsang are the lead authors of the study, which appears today in the journal Communications Biology. Sidney Diamond, an MIT research affiliate, and Gordon Pipa, a professor of neuroinformatics at the University of Osnabrueck, are also authors of the paper.

Sensory input

The idea that low-quality visual input might be beneficial for development grew out of studies of children who were born blind but later had their sight restored. An effort from Sinha’s laboratory, Project Prakash, has screened and treated thousands of children in India, where reversible forms of vision loss such as cataracts are relatively common. After their sight is restored, many of these children volunteer to participate in studies in which Sinha and his colleagues track their visual development.

In one of these studies, the researchers found that children who had cataracts removed exhibited a marked drop in object-recognition performance when presented with black-and-white images, compared to colored ones. Those findings led the researchers to hypothesize that the reduced color input characteristic of early typical development, far from being a hindrance, allows the brain to learn to recognize objects even in images with impoverished or shifted colors.

“Denying access to rich color at the outset seems to be a powerful strategy to build in resilience to color changes and make the system more robust against color loss in images,” Sinha says.

In that study, the researchers also found that when computational models of vision were initially trained on grayscale images, followed by color images, their ability to recognize objects was more robust than that of models trained only on color images. Similarly, another study from the lab found that models performed better when they were trained first on blurry images, followed by sharper images.

To build on those findings, the MIT team wanted to explore what might be the consequences of both of those features — color and visual acuity — being limited at the outset of development. They hypothesized that these limitations might contribute to the development of the magnocellular and parvocellular pathways.

In addition to being highly attuned to color, cells in the parvocellular pathway have small receptive fields, meaning that they receive input from more compact clusters of retinal ganglion cells. This helps them to process fine detail. Cells in the magnocellular pathway pool information across larger areas, allowing them to process more global spatial information.

To test their hypothesis that developmental progressions could contribute to the magno and parvo cell selectivities, the researchers trained models on two different sets of images. One model was presented with a standard dataset of images that are used to train models to categorize objects. The other dataset was designed to roughly mimic the input that the human visual system receives from birth. This “biomimetic” data consists of low-resolution, grayscale images in the first half of the training, followed by high-resolution, colorful images in the second half.
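The degraded-first curriculum can be mimicked with a simple transform applied to training images during the first half of training. The sketch below is a rough stand-in under assumed details — the 2x2 nested-list "image," the box-blur filter, and the halfway switchover are all invented for illustration; a real pipeline would operate on image tensors:

```python
def to_grayscale(image):
    """Replace each (r, g, b) pixel with its luminance, repeated per channel."""
    return [[tuple([round(0.299 * r + 0.587 * g + 0.114 * b)] * 3)
             for (r, g, b) in row] for row in image]

def box_blur(image):
    """Crude low-pass filter: average each pixel with its in-bounds neighbors."""
    h, w = len(image), len(image[0])
    def avg(y, x):
        nbrs = [image[j][i] for j in range(max(0, y - 1), min(h, y + 2))
                            for i in range(max(0, x - 1), min(w, x + 2))]
        return tuple(round(sum(p[c] for p in nbrs) / len(nbrs)) for c in range(3))
    return [[avg(y, x) for x in range(w)] for y in range(h)]

def degrade(image, epoch, total_epochs):
    """First half of training sees blurry grayscale input; the second half
    sees the image unchanged -- mimicking infant visual development."""
    if epoch < total_epochs // 2:
        return box_blur(to_grayscale(image))
    return image
```

Feeding every training batch through `degrade` yields the low-acuity, color-limited regime early on and full-quality input later, which is the essence of the biomimetic trajectory described above.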

After the models were trained, the researchers analyzed the models’ processing units — nodes within the network that bear some resemblance to the clusters of cells that process visual information in the brain. They found that the models trained on the biomimetic data developed a distinct subset of units that are jointly responsive to low-color and low-spatial-frequency inputs, similar to the magnocellular pathway. Additionally, these biomimetic models exhibited groups of more heterogeneous parvocellular-like units tuned predominantly to higher spatial frequencies or richer color signals. Such a distinction did not emerge in the models trained on full-color, high-resolution images from the start.

“This provides some support for the idea that the ‘correlation’ we see in the biological system could be a consequence of the types of inputs that are available at the same time in normal development,” Lukas Vogelsang says.

Object recognition

The researchers also performed additional tests to reveal what strategies the differently trained models were using for object recognition tasks. In one, they asked the models to categorize images of objects where the shape and texture did not match — for example, an animal with the shape of a cat but the texture of an elephant.

This is a technique several researchers in the field have employed to determine which image attributes a model is using to categorize objects: the overall shape or the fine-grained textures. The MIT team found that models trained on biomimetic input were markedly more likely to use an object’s shape to make those decisions, just as humans usually do. Moreover, when the researchers systematically removed the magnocellular-like units from the models, the models quickly lost their tendency to use shape to make categorizations.
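Shape bias on such cue-conflict stimuli is conventionally scored by counting how often the model's answer matches the shape label rather than the texture label, among trials where it picked either. Here is a minimal scoring helper in that spirit — our sketch of the standard convention, not the team's code:

```python
def shape_bias(predictions, shape_labels, texture_labels):
    """Fraction of cue-conflict trials decided by shape rather than texture.

    Only trials where the model picked one of the two conflicting labels
    count toward the denominator (the usual convention in the
    shape-vs-texture literature).
    """
    shape_hits = texture_hits = 0
    for pred, shape, texture in zip(predictions, shape_labels, texture_labels):
        if pred == shape:
            shape_hits += 1
        elif pred == texture:
            texture_hits += 1
    decided = shape_hits + texture_hits
    return shape_hits / decided if decided else float("nan")
```

A value near 1.0 indicates shape-driven decisions, as humans typically make; the ablation result above corresponds to this score dropping after magnocellular-like units are removed.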

In another set of experiments, the researchers trained the models on videos instead of images, which introduces a temporal dimension. In addition to its low spatial resolution and weak color sensitivity, the magnocellular pathway responds to high temporal frequencies, allowing it to quickly detect changes in the position of an object. When models were trained on biomimetic video input, the units most tuned to high temporal frequencies were indeed the ones that also exhibited magnocellular-like properties in the spatial domain.

Overall, the results support the idea that low-quality sensory input early in life may contribute to the organization of sensory processing pathways of the brain, the researchers say. The findings do not rule out innate specification of the magno and parvo pathways, but provide a proof of principle that visual experience over the course of development could also play a role.

“The general theme that seems to be emerging is that the developmental progression that we go through is very carefully structured in order to give us certain kinds of perceptual proficiencies, and it may also have consequences in terms of the very organization of the brain,” Sinha says.

The research was funded by the National Institutes of Health, the Simons Center for the Social Brain, the Japan Society for the Promotion of Science, and the Yamada Science Foundation.



from MIT News https://ift.tt/S70Zn1U

Wednesday, July 2, 2025

A new platform for developing advanced metals at scale

Companies building next-generation products and breakthrough technologies are often limited by the physical constraints of traditional materials. In aerospace, defense, energy, and industrial tooling, pushing those constraints introduces possible failure points into the system, but companies don’t have better options, given that producing new materials at scale involves multiyear timelines and huge expenses.

Foundation Alloy wants to break the mold. The company, founded by a team from MIT, is capable of producing a new class of ultra-high-performance metal alloys using a novel production process that doesn’t rely on melting raw materials. The company’s solid-state metallurgy technology, which simplifies development and manufacturing of next-generation alloys, was developed over many years of research by former MIT professor Chris Schuh and collaborators.

“This is an entirely new approach to making metals,” says CEO Jake Guglin MBA ’19, who co-founded Foundation Alloy with Schuh, Jasper Lienhard ’15, PhD ’22, and Tim Rupert PhD ’11. “It gives us a broad set of rules on the materials engineering side that allows us to design a lot of different compositions with previously unattainable properties. We use that to make products that work better for advanced industrial applications.”

Foundation Alloy says its metal alloys can be made twice as strong as traditional metals, with 10 times faster product development, allowing companies to test, iterate, and deploy new metals into products in months instead of years.

The company is already designing metals and shipping demonstration parts to companies manufacturing components for things like planes, bikes, and cars. It’s also making test parts for partners in industries with longer development cycles, such as defense and aerospace.

Moving forward, the company believes its approach enables companies to build higher-performing, more reliable systems, from rockets to cars, nuclear fusion reactors, and artificial intelligence chips.

“For advanced systems like rocket and jet engines, if you can run them hotter, you can get more efficient use of fuel and a more powerful system,” Guglin says. “The limiting factor is whether or not you have structural integrity at those higher temperatures, and that is fundamentally a materials problem. Right now, we’re also doing a lot of work in advanced manufacturing and tooling, which is the unsexy but super critical backbone of the industrial world, where being able to push properties up without multiplying costs can unlock efficiencies in operations, performance, and capacity, all in a way that’s only possible with different materials.”

From MIT to the world

Schuh joined MIT’s faculty in 2002 to study the processing, structure, and properties of metal and other materials. He was named head of the Department of Materials Science and Engineering in 2011 before becoming dean of engineering at Northwestern University in 2023, after more than 20 years at MIT.

“Chris wanted to look at metals from different perspectives and make things more economically efficient and higher performance than what’s possible with traditional processes,” Guglin says. “It wasn’t just for academic papers — it was about making new methods that would be valuable for the industrial world.”

Rupert and Lienhard completed their PhDs in Schuh’s lab, and Rupert, now a professor at the University of California at Irvine, invented technologies complementary to the solid-state processes developed by Schuh and his collaborators.

Guglin came to MIT’s Sloan School of Management in 2017 eager to work with high-impact technologies.

“I wanted to go somewhere where I could find the types of fundamental technological breakthroughs that create asymmetric value — the types of things where if they didn’t happen here, they weren’t going to happen anywhere else,” Guglin recalls.

In one of his classes, a PhD student in Schuh’s lab practiced his thesis defense by describing his research on a new way to create metal alloys.

“I didn’t understand any of it — I have a philosophy background,” Guglin says. “But I heard ‘stronger metals’ and I saw the potential of this incredible platform Chris’ lab was working on, and it tied into exactly why I wanted to come to MIT.”

Guglin connected with Schuh, and the pair stayed in touch over the next several years as Guglin graduated and went to work for aerospace companies SpaceX and Blue Origin, where he saw firsthand the problems being caused by the metal parts supply chain.

In 2022, the pair finally decided to launch a company, adding Rupert and Lienhard and licensing technology from MIT and UC Irvine.

The founders’ first challenge was scaling up the technology.

“There’s a lot of process engineering to go from doing something once at 5 grams to doing it 100 times a week at 100 kilograms per batch,” Guglin says.

Today, Foundation Alloy starts with its customers’ material requirements and decides on a precise mixture of the powdered raw materials that every metal starts out as. From there, it uses a specialized industrial mixer — Guglin calls it an industrial KitchenAid blender — to create a metal powder that is homogenous down to the atomic level.

“In our process, from raw material all the way through to the final part, we never melt the metal,” Guglin says. “That is uncommon if not unknown in traditional metal manufacturing.”

From there, the company’s material can be shaped into parts using traditional methods like metal injection molding, pressing, or 3D printing. The final step is sintering in a furnace.

“We also do a lot of work around how the metal reacts in the sintering furnace,” Guglin says. “Our materials are specifically designed to sinter at relatively low temperatures, relatively quickly, and all the way to full density.”

The advanced sintering process uses an order of magnitude less heat, saving on costs while allowing the company to forgo secondary processes for quality control. It also gives Foundation Alloy more control over the microstructure of the final parts.

“That’s where we get a lot of our performance boost from,” Guglin says. “And by not needing those secondary processing steps, we’re saving days if not weeks in addition to the costs and energy savings.”

A foundation for industry

Foundation Alloy is currently piloting its metals across the industrial base and has also received grants to develop parts for critical components of nuclear fusion reactors.

“The name Foundation Alloy in a lot of ways came from wanting to be the foundation for the next generation of industry,” Guglin says.

Unlike in traditional metals manufacturing, where new alloys require huge investments to scale, Guglin says the company’s process for developing new alloys is nearly the same as its production processes, allowing it to scale new materials production far more quickly.

“At the core of our approach is looking at problems like material scientists with a new technology,” Guglin says. “We’re not beholden to the idea that this type of steel must solve this type of problem. We try to understand why that steel is failing and then use our technology to solve the problem in a way that produces not a 10 percent improvement, but a two- or five-times improvement in terms of performance.”



from MIT News https://ift.tt/avxbyEN

Confronting the AI/energy conundrum

The explosive growth of AI-powered computing centers is creating an unprecedented surge in electricity demand that threatens to overwhelm power grids and derail climate goals. At the same time, artificial intelligence technologies could revolutionize energy systems, accelerating the transition to clean power.

“We’re at a cusp of potentially gigantic change throughout the economy,” said William H. Green, director of the MIT Energy Initiative (MITEI) and Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering, at MITEI’s Spring Symposium, “AI and energy: Peril and promise,” held on May 13. The event brought together experts from industry, academia, and government to explore solutions to what Green described as both “local problems with electric supply and meeting our clean energy targets” while seeking to “reap the benefits of AI without some of the harms.” The challenge of data center energy demand and potential benefits of AI to the energy transition is a research priority for MITEI.

AI’s startling energy demands

From the start, the symposium highlighted sobering statistics about AI’s appetite for electricity. After decades of flat electricity demand in the United States, computing centers now consume approximately 4 percent of the nation's electricity. Although there is great uncertainty, some projections suggest this demand could rise to 12-15 percent by 2030, largely driven by artificial intelligence applications.

Vijay Gadepally, senior scientist at MIT’s Lincoln Laboratory, emphasized the scale of AI’s consumption. “The power required for sustaining some of these large models is doubling almost every three months,” he noted. “A single ChatGPT conversation uses as much electricity as charging your phone, and generating an image consumes about a bottle of water for cooling.”
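The doubling rate quoted above compounds quickly. A back-of-the-envelope calculation — our illustration, not a figure from the talk — shows what doubling every three months implies over a year:

```python
# If power draw doubles every 3 months, the implied compounded growth
# over 12 months is 2^(12/3) = 16x. This is illustrative arithmetic
# applied to the quoted doubling period, not a quoted annual figure.
doubling_period_months = 3
months = 12
annual_factor = 2 ** (months / doubling_period_months)
print(annual_factor)  # 16.0
```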

Facilities requiring 50 to 100 megawatts of power are emerging rapidly across the United States and globally, driven by both casual use and institutional research relying on large language models such as ChatGPT and Gemini. Gadepally cited congressional testimony by Sam Altman, CEO of OpenAI, highlighting how fundamental this relationship has become: “The cost of intelligence, the cost of AI, will converge to the cost of energy.”

“The energy demands of AI are a significant challenge, but we also have an opportunity to harness these vast computational capabilities to contribute to climate change solutions,” said Evelyn Wang, MIT vice president for energy and climate and the former director at the Advanced Research Projects Agency-Energy (ARPA-E) at the U.S. Department of Energy.

Wang also noted that innovations developed for AI and data centers — such as efficiency, cooling technologies, and clean-power solutions — could have broad applications beyond computing facilities themselves.

Strategies for clean energy solutions

The symposium explored multiple pathways to address the AI-energy challenge. Some panelists presented models suggesting that while artificial intelligence may increase emissions in the short term, its optimization capabilities could enable substantial emissions reductions after 2030 through more efficient power systems and accelerated clean technology development.

Research shows regional variations in the cost of powering computing centers with clean electricity, according to Emre Gençer, co-founder and CEO of Sesame Sustainability and former MITEI principal research scientist. Gençer’s analysis revealed that the central United States offers considerably lower costs due to complementary solar and wind resources. However, achieving zero-emission power would require massive battery deployments — five to 10 times more than moderate carbon scenarios — driving costs two to three times higher.

“If we want to do zero emissions with reliable power, we need technologies other than renewables and batteries, which will be too expensive,” Gençer said. He pointed to “long-duration storage technologies, small modular reactors, geothermal, or hybrid approaches” as necessary complements.

Because of data center energy demand, there is renewed interest in nuclear power, noted Kathryn Biegel, manager of R&D and corporate strategy at Constellation Energy, adding that her company is restarting the reactor at the former Three Mile Island site, now called the “Crane Clean Energy Center,” to meet this demand. “The data center space has become a major, major priority for Constellation,” she said, emphasizing how their needs for both reliability and carbon-free electricity are reshaping the power industry.

Can AI accelerate the energy transition?

Artificial intelligence could dramatically improve power systems, according to Priya Donti, assistant professor and the Silverman Family Career Development Professor in MIT's Department of Electrical Engineering and Computer Science and the Laboratory for Information and Decision Systems. She showcased how AI can accelerate power grid optimization by embedding physics-based constraints into neural networks, potentially solving complex power flow problems at “10 times, or even greater, speed compared to your traditional models.”
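Donti's methods embed constraints directly into network architectures; as a loose illustration of the general idea of making a learned power-flow solver respect physics, here is a soft-penalty sketch under a linearized (DC) power-flow approximation. The setup, names, and penalty form are our assumptions, not her actual formulation:

```python
import numpy as np

def physics_informed_loss(theta_pred, p_injections, b_matrix, target, weight=10.0):
    """Supervised loss plus a penalty on violations of the DC power-flow
    balance P = B @ theta.

    This soft penalty is a simplified stand-in for the physics constraints
    described in the talk; the actual methods enforce constraints more
    tightly (e.g., as hard constraint layers rather than loss penalties).
    """
    data_term = np.mean((theta_pred - target) ** 2)      # fit to known solution
    residual = b_matrix @ theta_pred - p_injections      # physics violation
    physics_term = np.mean(residual ** 2)
    return data_term + weight * physics_term
```

A network trained with such a loss is pushed toward outputs that both match training data and satisfy the power-balance equations, which is one reason learned solvers can be trusted at high speed.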

AI is already reducing carbon emissions, according to examples shared by Antonia Gawel, global director of sustainability and partnerships at Google. Google Maps’ fuel-efficient routing feature has “helped to prevent more than 2.9 million metric tons of GHG [greenhouse gas] emissions since launch, which is the equivalent of taking 650,000 fuel-based cars off the road for a year,” she said. Another Google research project uses artificial intelligence to help pilots avoid creating contrails, which represent about 1 percent of global warming impact.

AI’s potential to speed materials discovery for power applications was highlighted by Rafael Gómez-Bombarelli, the Paul M. Cook Career Development Associate Professor in the MIT Department of Materials Science and Engineering. “AI-supervised models can be trained to go from structure to property,” he noted, enabling the development of materials crucial for both computing and efficiency.

Securing growth with sustainability

Throughout the symposium, participants grappled with balancing rapid AI deployment against environmental impacts. While AI training receives most attention, Dustin Demetriou, senior technical staff member in sustainability and data center innovation at IBM, quoted a World Economic Forum article that suggested that “80 percent of the environmental footprint is estimated to be due to inferencing.” Demetriou emphasized the need for efficiency across all artificial intelligence applications.

Jevons’ paradox, where “efficiency gains tend to increase overall resource consumption rather than decrease it,” is another factor to consider, cautioned Emma Strubell, the Raj Reddy Assistant Professor in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. Strubell advocated for viewing computing center electricity as a limited resource requiring thoughtful allocation across different applications.

Several presenters discussed novel approaches for integrating renewable sources with existing grid infrastructure, including potential hybrid solutions that combine clean installations with existing natural gas plants that have valuable grid connections already in place. These approaches could provide substantial clean capacity across the United States at reasonable costs while minimizing reliability impacts.

Navigating the AI-energy paradox

The symposium highlighted MIT’s central role in developing solutions to the AI-electricity challenge.

Green spoke of a new MITEI program on computing centers, power, and computation that will operate alongside the comprehensive spread of MIT Climate Project research. “We’re going to try to tackle a very complicated problem all the way from the power sources through the actual algorithms that deliver value to the customers — in a way that’s going to be acceptable to all the stakeholders and really meet all the needs,” Green said.

Randall Field, MITEI director of research, polled symposium participants about priorities for MIT’s research. The real-time results ranked “data center and grid integration issues” as the top priority, followed by “AI for accelerated discovery of advanced materials for energy.”

In addition, attendees revealed that most view AI's potential regarding power as a “promise,” rather than a “peril,” although a considerable portion remain uncertain about the ultimate impact. When asked about priorities in power supply for computing facilities, half of the respondents selected carbon intensity as their top concern, with reliability and cost following.



from MIT News https://ift.tt/H5ydaiD

3 Questions: How MIT’s venture studio is partnering with MIT labs to solve “holy grail” problems

MIT Proto Ventures is the Institute’s in-house venture studio — a program designed not to support existing startups, but to create entirely new ones from the ground up. Operating at the intersection of breakthrough research and urgent real-world problems, Proto Ventures proactively builds startups that leverage MIT technologies, talent, and ideas to address high-impact industry challenges. 

Each venture-building effort begins with a “channel” — a defined domain such as clean energy, fusion, or AI in health care — where MIT is uniquely positioned to lead, and where there are pressing real-world problems needing solutions. Proto Ventures hires full-time venture builders, deeply technical entrepreneurs who embed in MIT labs, connect with faculty, scout promising inventions, and explore unmet market needs. These venture builders work alongside researchers and aspiring founders from across MIT who are accepted into Proto Ventures’ fellowship program to form new teams, shape business concepts, and drive early-stage validation. Once a venture is ready to spin out, Proto Ventures connects it with MIT’s broader innovation ecosystem, including incubation programs, accelerators, and technology licensing. 

David Cohen-Tanugi SM ’12, PhD ’15, has been the venture builder for the fusion and clean energy channel since 2023.

Q: What are the challenges of launching startups out of MIT labs? In other words, why does MIT need a venture studio? 

A: MIT regularly takes on the world’s “holy grail” challenges, such as decarbonizing heavy industry, preventing future pandemics, or adapting to climate extremes. Yet despite its extraordinary depth in research, too few of MIT’s technical breakthroughs evolve into successful commercial efforts targeting these highest-impact problems.

There are a few reasons for this. Right now, it takes a great deal of serendipity for a technology or idea in the lab to evolve into a startup project within the Institute’s ecosystem. Great startups don’t just emerge from great technology alone — they emerge from combinations of great technology, unmet market needs, and committed people. 

A second reason is that many MIT researchers don’t have the time, professional incentives, or skill set to commercialize a technology. They often lack a partner who is technical enough to understand the technology but who also has experience bringing technologies to market.

Finally, while MIT excels at supporting entrepreneurial teams that are already in motion — thanks to world-class accelerators, mentorship services, and research funding programs — what’s missing is actually further upstream: a way to deliberately uncover and develop venture opportunities that haven’t even taken shape yet.  

MIT needs a venture studio because we need a new, proactive model for research translation — one that breaks down silos and that bridges deep technical talent with validated market needs. 

Q: How do you add value for MIT researchers?

A: As a venture builder, I act as a translational partner for researchers — someone who can take the lead on exploring commercial pathways in partnership with the lab. Many faculty and researchers believe their work could have real-world applications but don’t have the time, entrepreneurial expertise, or interested graduate students to pursue them. Proto Ventures fills that gap.

Having done my PhD studies at MIT a decade ago, I’ve seen firsthand how many researchers are interested in impact beyond academia but don’t know where to start. I help them think strategically about how their work fits into the real market, I break down tactical blockers such as intellectual property conversations or finding a first commercial partner, and I roll up my sleeves to do customer discovery, identify potential co-founders, or locate new funding opportunities. Even when the outcome isn’t a startup, the process often reveals new collaborators, use cases, or research directions. We’re not just scouting for IP — we’re building a deeper culture of tech translation at MIT, one lab at a time. 

Q: What counts as a success? 

A: We’ve launched five startups across two channels so far, including one that will provide energy-efficient propulsion systems for satellites and another that is developing advanced power supply units for data centers.  

But counting startups is not the only way to measure impact. While embedded at the MIT Plasma Science and Fusion Center, I have engaged with 75 researchers in translational activities — many for the first time. For example, I’ve helped research scientist Dongkeun Park craft funding proposals for next-generation MRI and aircraft engines enabled by high-temperature superconducting magnets. Working with Mike Nour from the MIT Sloan Executive MBA program, we’ve also developed an innovative licensing strategy for Professor Michael P. Short and his antifouling coating technology. Sometimes it takes an outsider like me to connect researchers across departments, suggest a new collaboration, or unearth an overlooked idea. Perhaps most importantly, we’ve validated that this model works: embedding entrepreneurial scientists in labs changes how research is translated. 

We’ve also seen that researchers are eager to translate their work — they just need a structure and a partner to help them do it. That’s especially true in the hard tech in which MIT excels. That’s what Proto Ventures offers. And based on our early results, we believe this model could be transformative not just for MIT, but for research institutions everywhere. 



from MIT News https://ift.tt/2UNJuVc

Tuesday, July 1, 2025

The high-tech wizardry of integrated photonics

Inspired by the “Harry Potter” stories and the Disney Channel show “Wizards of Waverly Place,” 7-year-old Sabrina Corsetti emphatically declared to her parents one afternoon that she was, in fact, a wizard.

“My dad turned to me and said that, if I really wanted to be a wizard, then I should become a physicist. Physicists are the real wizards of the world,” she recalls.

That conversation stuck with Corsetti throughout her childhood, all the way up to her decision to double-major in physics and math in college, which set her on a path to MIT, where she is now a graduate student in the Department of Electrical Engineering and Computer Science.

While her work may not involve incantations or magic wands, Corsetti’s research centers on an area that often produces astonishing results: integrated photonics. A relatively young field, integrated photonics involves building computer chips that route light instead of electricity, enabling compact and scalable solutions for applications ranging from communications to sensing.

Corsetti and her collaborators in the Photonics and Electronics Research Group, led by Professor Jelena Notaros, develop chip-sized devices that enable innovative applications pushing the boundaries of what is possible in optics.

For instance, Corsetti and the team developed a chip-based 3D printer, small enough to sit in the palm of one’s hand, that emits a reconfigurable beam of light into resin to create solid shapes. Such a device could someday enable a user to rapidly fabricate customized, low-cost objects on the go.

She also contributed to creating a miniature “tractor beam” that uses a beam of light to capture and manipulate biological particles using a chip. This could help biologists study DNA or investigate the mechanisms of disease without contaminating tissue samples.

More recently, Corsetti has been working on a project in collaboration with MIT Lincoln Laboratory, focused on trapped-ion quantum computing, which involves the manipulation of ions to store and process quantum information.

“Our team has a strong focus on designing devices and systems that interact with the environment. The opportunity to join a new research group, led by a supportive and engaged advisor, that works on projects with a lot of real-world impacts, is primarily what drew me to MIT,” Corsetti says.

Embracing challenges

Years before she set foot in a research lab, Corsetti was a science- and math-focused kid growing up with her parents and younger brother in the suburbs of Chicago, where her family operates a structural steelwork company.

Throughout her childhood, her teachers fostered her love of learning, from her early years in the Frankfort 157-C school district through her time at the Lincoln-Way East High School.

She enjoyed working on science experiments outside the classroom and relished the chance to tackle complex conundrums during independent study projects curated by her teachers (like working out the math behind the brachistochrone curve, the path of fastest descent between two points, a problem famously solved by Isaac Newton).

Corsetti decided to double-major in physics and math at the University of Michigan after graduating from high school a year early.

“When I went to the University of Michigan, I couldn’t wait to get started. I enrolled in the toughest math and physics track right off the bat,” she recalls.

But Corsetti soon found that she had bitten off a bit more than she could chew. A lot of her tough undergraduate courses assumed students had prior knowledge from AP physics and math classes, which Corsetti hadn’t taken because she graduated early.

She met with professors, attended office hours, and tried to pick up the lessons she had missed, but felt so discouraged she contemplated switching majors. Before she made the switch, Corsetti decided to try working in a physics lab to see if she liked a day in the life of a researcher.

After joining Professor Wolfgang Lorenzon’s lab at Michigan, Corsetti spent hours working with grad students and postdocs on a hands-on project to build cells that would hold liquid hydrogen for a particle physics experiment.

As they collaborated for hours at a time to roll material into tubes, she peppered the older students with questions about their experiences in the field.

“Being in the lab made me fall in love with physics. I really enjoyed that environment, working with my hands, and working with people as part of a bigger team,” she says.

Her affinity for hands-on lab work was amplified a few years later when she met Professor Tom Schwarz, her research advisor for the rest of her time at Michigan.

Following a chance conversation with Schwarz, she applied to a research abroad program at CERN in Switzerland, where she was mentored by Siyuan Sun. There, she had the opportunity to join thousands of physicists and engineers on the ATLAS project, writing code and optimizing circuits for new particle-detector technologies.

“That was one of the most transformative experiences of my life. After I came back to Michigan, I was ready to spend my career focusing on research,” she says.

Hooked on photonics

Corsetti began applying to graduate schools but decided to shift focus from the more theoretical particle physics to electrical engineering, with an interest in conducting hands-on chip-design and testing research.

She applied to MIT with a focus on standard electronic-chip design, so it came as a surprise when Notaros reached out to her to schedule a Zoom call. At the time, Corsetti was completely unfamiliar with integrated photonics. However, after one conversation with the new professor, she was hooked.

“Jelena has an infectious enthusiasm for integrated photonics,” she recalls. “After those initial conversations, I took a leap of faith.”

Corsetti joined Notaros’ team as it was just getting started. Closely mentored by a senior student, Milica Notaros, she and her cohort grew immersed in integrated photonics.

Over the years, she’s particularly enjoyed the collaborative and close-knit nature of the lab and how the work involves so many different aspects of the experimental process, from design to simulation to analysis to hardware testing.

“An exciting challenge that we’re always running up against is new chip-fabrication requirements. There is a lot of back-and-forth between new application areas that demand new fabrication technologies, followed by improved fabrication technologies motivating additional application areas. That cycle is constantly pushing the field forward,” she says.

Corsetti plans to stay at the cutting edge of the field after graduation as an integrated-photonics researcher in industry or at a national lab. She would like to focus on trapped-ion quantum computing, which scientists are rapidly scaling up toward commercially viable systems, or other high-performance computing applications.

“You really need accelerated computing for any modern research area. It would be exciting and rewarding to contribute to high-performance computing that can enable a lot of other interesting research areas,” she says.

Paying it forward

In addition to making an impact with research, Corsetti is focused on making a personal impact in the lives of others. Through her involvement in MIT Graduate Hillel, she joined the Jewish Big Brothers Big Sisters of Boston, where she volunteers for the friend-to-friend program.

Participating in the program, which pairs adults who have disabilities with friends in the community for fun activities like watching movies or painting, has been an especially uplifting and gratifying experience for Corsetti.

She’s also enjoyed the opportunity to support, mentor, and bond with her fellow MIT EECS students, drawing on the advice she’s received throughout her own academic journey.

“Don’t trust feelings of imposter syndrome,” she advises others. “Keep moving forward, ask for feedback and help, and be confident that you will reach a point where you can make meaningful contributions to a team.”

Outside the lab, she enjoys playing classical music on the clarinet (her favorite piece is Leonard Bernstein’s famous overture to “Candide”), reading, and caring for a family of fish in her aquarium.



from MIT News https://ift.tt/C6exNMb