Tuesday, February 24, 2026

MIT’s delta v accelerator receives $6M gift to supercharge startups being built by student founders

Artificial intelligence is changing how companies operate, and with it, how MIT students learn entrepreneurship and choose to create new ventures. To address how these student startups are being built, the Martin Trust Center for MIT Entrepreneurship undertook a months-long series of discussions with key stakeholders to help shape a new direction for delta v, MIT’s capstone entrepreneurship accelerator for student founders.

Two of Boston’s most successful tech entrepreneurs have stepped forward to fund this growth of new MIT ventures through a combined $6 million gift that supports the delta v accelerator run out of the Trust Center. Ed Hallen MBA ’12 and Andrew Bialecki, co-founders of Boston-based customer relationship management firm Klaviyo, are providing the donation to support the next wave of innovation-driven entrepreneurship taking place at MIT.

“In the early days of Klaviyo, we learned almost everything by building, testing assumptions, making mistakes, and figuring things out as we went,” Hallen says. “MIT delta v creates that same learning-by-doing environment for students, while surrounding them with mentorship and resources that help founders build with clarity and momentum. We’ve seen the difference delta v can make for founders, and we’re excited to help the Trust Center extend that opportunity to the next generation of students.”

“We’ve always believed the world needs more entrepreneurs, and that Boston should be one of the places leading the way,” adds Bialecki. “Boston is a hub of innovation with ambitious students and a strong community of builders. MIT delta v plays a critical role in developing founders early, not just helping them start companies but helping them build companies that last. Supporting that mission is something Ed and I care deeply about.”

The Martin Trust Center plans to “accelerate the accelerator” with the funding. Two factors are driving the changes: the opportunity created as AI reshapes how students are able to build companies, and growing student interest in learning entrepreneurship during their time on campus. One of the main changes is that delta v participants will be able to earn up to $75,000 in equity-free funding during the program, an increase from $20,000 in years past.

Also, delta v will introduce a partner model composed of leading founders from companies such as HubSpot, Okta, and Kayak; C-suite operators; subject-matter experts; and early-stage investors, all of whom will provide significant guidance and mentorship to the student ventures.

“Core to MIT’s mission is developing the innovative technologies and solutions that can help solve tough problems at global scale,” says MIT Provost Anantha Chandrakasan. “The AI revolution is creating exciting new opportunities for MIT students to build the next wave of impactful companies, and the delta v accelerator is a perfect vehicle to help them make that happen.”

In recent years, MIT-founded startups such as Cursor and Delve, which use AI as a core part of their business, have seen explosive growth in customers, revenue, and valuation. In addition, delta v alumni companies such as Klarity and Reducto are providing software-as-a-service (SaaS) platforms built on AI tools, while Vertical Semiconductor is growing by providing the energy solutions that data centers need to power today’s computing demands. These are just some of the businesses MIT students are looking to as models they can follow to build and launch successfully, whether they are working on solutions in health care, climate, finance, the future of work, or another global challenge.

“MIT Sloan is the place for entrepreneurship education, part of a unique ecosystem of collaboration across MIT to solve problems,” says Richard M. Locke, the John C Head III Dean at the MIT Sloan School of Management. “The delta v program is a great example of how MIT students dedicate their energy to starting a venture, connect with mentors, and incorporate proven frameworks for disciplined entrepreneurship. This gift from Ed Hallen and Andrew Bialecki will provide additional funding for this important program, and I’m so grateful for their support of entrepreneurship education at MIT.”

“I remember when Ed and Andrew were giving birth to Klaviyo at the Trust Center,” says Bill Aulet, the Ethernet Inventors Professor of the Practice and managing director of the Trust Center. “Through their ingenuity and drive, they have created an iconic tech company here in Boston with the support of our ecosystem. Through their willingness to give back, many more students will now be able to follow their path and become entrepreneurs who can create extraordinary positive impact in the world.”

Applications for the next delta v cohort will open on March 1 and close on April 1. Teams will be announced in May for the summer 2026 accelerator.

“MIT delta v is about creating belief in our most exceptional entrepreneurial talent — and turning that belief into consequential impact for the world. By supporting early-stage founders who take bold ideas from improbable to possible, we help them build companies that matter,” says Ana Bakshi, the Trust Center’s executive director. “Our students are the next generation of job creators, economic drivers, and thought leaders. To realize this potential, it is critical that we continue to invest in and scale startup programs and spaces so they can build at unprecedented levels. Ed and Andrew’s generosity gives us a powerful opportunity to change velocity — and make that future possible.”

Founded in 1991, the award-winning Martin Trust Center for MIT Entrepreneurship is today focused on teaching entrepreneurship as a craft. It combines evidence-based entrepreneurship frameworks, used in over a thousand other organizations, with experiential learning, experiences, and community building inside and outside the classroom to create the next generation of innovation-driven entrepreneurs. Alumni who have gone through Trust Center programs have started companies including Cursor, Delve, Okta, HubSpot, PillPack, Honey, WHOOP, Reducto, Klarity, and Biobot Analytics, and thousands more in industries as diverse as biotech, climate and energy, AI, health care, fintech, business and consumer software, and more. 

In the first 10 years of delta v, the program has helped create entrepreneurs who have gone on to extraordinary success. The five-year survival rate of their companies has been 69 percent, and they have raised well over $3 billion in funding while addressing the world’s greatest challenges: 89 percent of these ventures are directly aligned with the UN Sustainable Development Goals.



from MIT News https://ift.tt/SgsR4l2

Monday, February 23, 2026

More trees where they matter, please

One of the best forms of heat relief is pretty simple: trees. In cities, as studies have documented, more tree cover lowers surface temperatures and heat-related health risks.

However, as a new study led by MIT researchers shows, the amount of tree cover varies widely within cities, and is generally connected to wealth levels. After examining a cross-section of cities on four continents at different latitudes, the research finds a consistent link between wealth and neighborhood tree abundance within a city, with better-off residents usually enjoying much more shade on nearby sidewalks.

“Shade is the easiest way to counter warm weather,” says Fabio Duarte, an MIT urban studies scholar and co-author of a new paper detailing the study’s results. “Strictly by looking at which areas are shaded, we can tell where rich people and poor people live.”

That disparity is evident within a range of cities, and is present whether a city contains a large amount of tree cover overall or just a little. Either way, there are more trees in wealthier spots.

“When we compare the most well-shaded city in our study, Stockholm, with the worst-shaded, Belem in northern Brazil, we still see marked inequality,” says Duarte, the associate director of MIT’s Senseable City Lab in the Department of Urban Studies and Planning (DUSP). “Even though the most-shaded parts of Belem are less shaded than the least-shaded parts of Stockholm, shade inequality in Stockholm is greater. Rich people in Stockholm have much better shade provision as pedestrians than we see in poor areas of Stockholm.”

The paper, “Global patterns of pedestrian shade inequality,” is published today in Nature Communications. The authors are Xinyue Gu of Hong Kong Polytechnic University; Lukas Beuster, a research fellow at the Amsterdam Institute for Advanced Metropolitan Solutions and MIT’s Senseable City Lab; Xintao Liu, an associate professor at Hong Kong Polytechnic University; Eveline van Leeuwen, scientific director at the Amsterdam Institute for Advanced Metropolitan Solutions; Titus Venverloo, who leads the MIT Senseable City Amsterdam lab; and Duarte, who is also a lecturer in DUSP.

From Stockholm to Sydney

To conduct the study, the researchers used satellite data from multiple sources, along with urban mapping programs and granular economic data about the cities they examined. There are nine cities in the study: Amsterdam, Barcelona, Belem, Boston, Hong Kong, Milan, Rio de Janeiro, Stockholm, and Sydney. Those places are intended to create a cross-section of cities with different characteristics, including latitude, wealth levels, urban form, and more.

The scholars looked at the amount of shade available on city sidewalks on the summer solstice, as well as the hottest recorded day each year from 1991 to 2020. They then created a scale, ranging from 0 to 1, to rate the amount of shade available on sidewalks, both citywide and within neighborhoods.

“We focused on sidewalks because they are a major conduit of urban activity, even on hot summer days,” Gu says. “Adding tree cover for sidewalks is one crucial way cities can pursue heat-reduction measures.”

Duarte adds: “When it comes to those who are not protected by air conditioning, they are also using the city, walking, taking buses, and anybody who takes a bus is walking or biking to or from bus stops. They are using sidewalks as the main infrastructure.”

The cities in the study offer very different levels of tree coverage. On the 0-to-1 scale the researchers developed, much of Stockholm falls in the 0.6-0.9 range, with some neighborhoods being over 0.9. By contrast, large swaths of Rio de Janeiro are under the 0.1 mark. Much of Boston ranges from 0.15 to 0.4, with a few neighborhoods reaching 0.45 on the scale.

The overall pattern of disparities, however, is very consistent, and includes the more affluent cities. The bottom 20 percent of neighborhoods in Stockholm, in terms of shade coverage, rate 0.58 on the scale, while the top 20 percent of Belem neighborhoods rate 0.37; even so, the gap between Stockholm’s most-covered and least-covered neighborhoods is the larger one. To be sure, there is variety within many cities: Milan and Barcelona have some lower-income neighborhoods with abundant shade, for instance. But the aggregate trend is clear. Amsterdam, another well-off place on average, has a distinct pattern of less shade in lower-income areas.
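The quintile comparison used here can be illustrated with a toy calculation. This is a minimal sketch on made-up neighborhood shade indices, not the study's data or code; it shows how a city can be well shaded on average yet have the larger gap between its most- and least-shaded quintiles:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical neighborhood shade indices on the study's 0-to-1 scale.
# City A is well shaded but unequal; city B is poorly shaded but flatter.
city_a = np.clip(rng.normal(0.75, 0.12, size=200), 0.0, 1.0)
city_b = np.clip(rng.normal(0.20, 0.04, size=200), 0.0, 1.0)

def quintile_gap(scores: np.ndarray) -> float:
    """Mean shade of the top 20% of neighborhoods minus the bottom 20%."""
    s = np.sort(scores)
    k = len(s) // 5
    return float(s[-k:].mean() - s[:k].mean())

gap_a = quintile_gap(city_a)  # large: shade inequality despite high average
gap_b = quintile_gap(city_b)  # small: little shade, but evenly spread
```

On these synthetic numbers, the better-shaded city shows the larger quintile gap, mirroring the Stockholm-versus-Belem comparison.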

“In rich cities like Amsterdam, even though it’s relatively well-shaded, the disparity is still very high,” Beuster says. “For us the most surprising point was not that in poor cities and more unequal societies the disparity would be notable — that was expected. What was unexpected was how the disparity still happens and is sometimes more pronounced in rich countries.”

“Follow transit”

If the tree-shade disparity is this persistent, it raises the question of what to do about it. The researchers have a basic answer: Add trees in areas served by public transit, which generate a lot of pedestrian mileage.

“In each city, from Sydney to Rio to Amsterdam, there are people who, regardless of the weather, need to walk,” Duarte says. “And it’s those people who also take public transportation. Therefore, link a tree-planting scheme to a public transportation network. And secondly, they are also the medium- and low-income part of the population. So the action deriving from this result is quite clear: If you need to increase your tree coverage and don’t know where, follow transit. If you follow transit, you will have the right shading.”

Indeed, one takeaway from the study is to think of trees not just as a nice-to-have part of urban aesthetics, but in functional terms.

“Planners and city officials should think about tree placement at least partly in terms of the heat-mitigating effect they have,” Beuster says.

“It’s not just about planting trees,” Duarte observes. “It’s about providing shade by planting trees. If you remove a tree that’s providing shade in a pedestrian area and you plant two other trees in a park, you are still removing part of the public function of the tree.”

He adds: “With increasing temperatures, providing shade is an essential public amenity. Along with providing transportation, I think providing shade in pedestrian spaces should almost be a public right.”

The Amsterdam Institute for Advanced Metropolitan Solutions and all members of the MIT Senseable City Consortium (including FAE Technology, Dubai Foundation, Sondotécnica, Seoul AI Foundation, Arnold Ventures, Sidara, Toyota, Abu Dhabi’s Department of Municipal Transportation, A2A, UnipolTech, Consiglio per la Ricerca in Agricoltura e l’Analisi dell’Economia Agraria, Hospital Israelita Albert Einstein, KACST, KAIST, and the cities of Laval, Amsterdam, and Rio de Janeiro) supported the research.



from MIT News https://ift.tt/Xu4IQJj

Study reveals climatic fingerprints of wildfires and volcanic eruptions

Volcanoes and wildfires can inject millions of tons of gases and aerosol particles into the air, affecting temperatures on a global scale. But picking out the specific impact of individual events against a background of many contributing factors is like listening for one person’s voice from across a crowded concourse.

MIT scientists now have a way to quiet the noise and identify the specific signal of wildfires and volcanic eruptions, including their effects on Earth’s global atmospheric temperatures.

In a study appearing this week in the Proceedings of the National Academy of Sciences, the researchers report that they detected statistically significant changes in global atmospheric temperatures in response to three major natural events: the eruption of Mount Pinatubo in 1991, the Australian wildfires in 2019-2020, and the eruption of the underwater volcano Hunga Tonga in the South Pacific in 2022.

While the specifics of each event differed, all three appeared to significantly affect temperatures in the stratosphere. The stratosphere lies above the troposphere, the lowest layer of the atmosphere, closest to the surface, where global warming has accelerated in recent years. In the new study, Pinatubo showed the classic pattern of stratospheric warming paired with tropospheric cooling. The Australian wildfires significantly warmed the stratosphere and the Hunga Tonga eruption significantly cooled it, but neither produced a robust, globally detectable tropospheric signal over the first two years following each event. This new understanding will help scientists further pin down the effect of human-related emissions on global temperature change.

“Understanding the climate responses to natural forcings is essential for us to interpret anthropogenic climate change,” says study author Yaowei Li, a former postdoc and currently a visiting scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Unlike the global tropospheric and surface cooling caused by Pinatubo, our results also indicate that the Australian wildfires and Hunga Tonga eruption may not have played a role in the acceleration of global surface warming in recent years. So, there must be some other factors.”

The study’s co-authors include Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry at MIT, along with Benjamin Santer of the University of East Anglia, David Thompson of the University of East Anglia and Colorado State University, and Qiang Fu of the University of Washington.

Extraordinary events

The past several years have set back-to-back records for global average surface temperatures. The World Meteorological Organization recently confirmed that the years 2023 to 2025 were the three warmest years on record, while the past 11 years have been the 11 warmest years ever recorded. The world is warming, due mainly to human activities that have emitted huge amounts of greenhouse gases into the atmosphere over centuries.

In addition to greenhouse gases, the atmosphere has been on the receiving end of other large-scale emissions, including sulfur gases and water vapor from volcanic eruptions and smoke particles from wildfires. Li and his colleagues have wondered whether such natural events could have any global impact on temperatures, and whether such an effect would be detectable.

“These events are extraordinary and very unique in terms of the different materials they inject into different altitudes,” Li says. “So we asked the question: Do these events actually perturb the global temperature to a degree that could be identifiable from natural, meteorological noise, and could they contribute to some of the exceptional global surface warming we’ve seen in the last few years?”

In particular, the team looked for signals of global temperature change in response to three large-scale natural events. The Pinatubo eruption resulted in around 20 million tons of volcanic aerosols in the stratosphere, which was the largest volume ever recorded by modern satellite instruments. The Australian fires injected around 1 million tons of smoke particles into the upper troposphere and stratosphere. And the Hunga Tonga eruption produced the largest atmospheric explosion on satellite record, launching nearly 150 million tons of water vapor into the stratosphere.

If any natural event could measurably shift global temperatures, the team reasoned, it would be one of these three.

Natural signals

For their new study, the team took a signal-to-noise approach. They looked to minimize “noise” from other known influences on global temperatures in order to isolate the “signal,” such as a change in temperature associated specifically with one of the three natural events.

To do so, they looked first through satellite measurements taken by the Stratospheric Sounding Unit (SSU) and the Microwave and Advanced Microwave Sounding Units (MSU), which have been measuring global temperatures at different altitudes throughout the atmosphere since 1979. The team compiled SSU and MSU measurements from 1986 to the present day. From these measurements, the researchers could see long-term trends of steady tropospheric warming and stratospheric cooling. Those long-term trends are largely associated with anthropogenic greenhouse gases, which the team subtracted from the dataset.

What was left over was more of a level baseline, which still contained some confounding noise, in the form of natural variability. Global temperature changes can also be affected by phenomena such as El Niño and La Niña, which naturally warm and cool the Earth every few years. The sun also swings global temperatures on a roughly 11-year cycle. The team took this natural variability into account, and subtracted out the effects of these influences.

After minimizing such noise from their dataset, the team reasoned that whatever temperature changes remained could be more easily traced to the three large-scale natural events and quantified. And indeed, when they pinned the events to the temperature measurements, at the times that they occurred, they could plainly see how each event influenced temperatures around the world.
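The subtract-the-confounders idea can be sketched on synthetic data. The example below is purely illustrative (the study's actual statistical pipeline is more sophisticated): it builds a fake monthly temperature series from a trend, ENSO-like and solar-like cycles, noise, and an injected eruption-like cooling, then regresses out the known confounders so the event stands out in the residual:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(360.0)  # 30 years of synthetic monthly anomalies

# Known influences: greenhouse trend, ENSO-like and solar-like cycles.
trend = 0.002 * t
enso = 0.15 * np.sin(2 * np.pi * t / 42)
solar = 0.05 * np.sin(2 * np.pi * t / 132)
noise = 0.05 * rng.standard_normal(t.size)

# Injected "eruption": abrupt cooling at month 120 decaying over ~2 years.
event = np.where(t >= 120, -0.5 * np.exp(-(t - 120) / 24), 0.0)
temps = trend + enso + solar + noise + event

# Regress out the known confounders with ordinary least squares.
X = np.column_stack([
    np.ones_like(t), t,
    np.sin(2 * np.pi * t / 42), np.cos(2 * np.pi * t / 42),
    np.sin(2 * np.pi * t / 132), np.cos(2 * np.pi * t / 132),
])
beta, *_ = np.linalg.lstsq(X, temps, rcond=None)
residual = temps - X @ beta

# The residual shows the eruption's cooling against a quiet baseline.
post_event = residual[120:132].mean()  # first year after the event
baseline = residual[:100].mean()       # pre-event period
```

The cooling survives in `post_event` while `baseline` stays near zero, which is the sense in which removing known "noise" leaves the event "signal" detectable.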

The team found that Pinatubo decreased global tropospheric temperatures by up to about 0.7 degree Celsius, for more than two years following the eruption. The volcanic sulfate aerosols essentially acted as many tiny reflectors, cooling the troposphere and surface by scattering sunlight back into space. At the same time, the aerosols, which remained in the stratosphere, also absorbed heat that was emitted from the surface, subsequently warming the stratosphere.

This finding agreed with many other studies of the event, which confirmed that the team’s approach is accurate. They applied the same method to the 2019-2020 Australian wildfires, and the 2022 underwater eruption — events where the influence on global temperatures is less clear.

For the Australian wildfires, they found that the smoke particles warmed the global stratosphere by up to about 0.77 degree Celsius, an effect that persisted for about five months but did not produce a clear global tropospheric signal.

“In the end we found that the wildfire smoke caused a very strong warming in the stratosphere, because these materials are very different chemically from sulfate,” Li explains. “They are particles that are dark colored, meaning they are efficient at absorbing solar radiation. So, a relatively small amount of smoke particles can cause a dramatic warming.”

In the case of Hunga Tonga, the underwater eruption triggered a global cooling effect in the middle-to-upper stratosphere, of up to about half a degree Celsius, lasting for several years.

“The Australian fires and the Hunga Tonga really packed a punch at stratospheric altitudes, and this study shows for the first time how to quantify how strong that punch was,” says Solomon. “I find their impact up high quite remarkable, but the ongoing issue is why the last several years have been so warm lower down, in the troposphere — ruling out those natural events points even more strongly at human influences.”



from MIT News https://ift.tt/8UcqRCl

Friday, February 20, 2026

Fragile X study uncovers brain wave biomarker bridging humans and mice

Numerous potential treatments for neurological conditions, including autism spectrum disorders, have worked well in mice but then disappointed in humans. What would help is a non-invasive, objective readout of treatment efficacy that is shared in both species. 

In a new open-access study in Nature Communications, a team of MIT researchers, backed by collaborators across the United States and in the United Kingdom, identifies such a biomarker in fragile X syndrome, the most common inherited form of autism.

Led by postdoc Sara Kornfeld-Sylla and Picower Professor Mark Bear, the team measured the brain waves of human boys and men, with or without fragile X syndrome, and comparably aged male mice, with or without the genetic alteration that models the disorder. The novel approach Kornfeld-Sylla used for analysis enabled her to uncover specific and robust patterns of differences in low-frequency brain waves between typical and fragile X brains shared between species at each age range. In further experiments, the researchers related the brain waves to specific inhibitory neural activity in the mice and showed that the biomarker was able to indicate the effects of even single doses of a candidate treatment for fragile X called arbaclofen, which enhances inhibition in the brain.

Both Kornfeld-Sylla and Bear praised and thanked colleagues at Boston Children’s Hospital, the Phelan-McDermid Syndrome Foundation, Cincinnati Children’s Hospital, the University of Oklahoma, and King’s College London for gathering and sharing data for the study.

“This research weaves together these different datasets and finds the connection between the brain wave activity that’s happening in fragile X humans that is different from typically developed humans, and in the fragile X mouse model that is different than the ‘wild-type’ mice,” says Kornfeld-Sylla, who earned her PhD in Bear’s lab in 2024 and continued the research as a FRAXA postdoc. “The cross-species connection and the collaboration really makes this paper exciting.”

Bear, a faculty member in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT, says having a way to directly compare brain waves can advance treatment studies.

“Because that is something we can measure in mice and humans minimally invasively, you can pose the question: If drug treatment X affects this signature in the mouse, at what dose does that same drug treatment change that same signature in a human?” Bear says. “Then you have a mapping of physiological effects onto measures of behavior. And the mapping can go both ways.”

Peaks and powers

In the study, the researchers measured EEG over the occipital lobe of humans and on the surface of the visual cortex of the mice. They measured power across the frequency spectrum, replicating previous reports of altered low-frequency brain waves in adult humans with fragile X and showing for the first time how these disruptions differ in children with fragile X.

To enable comparisons with mice, Kornfeld-Sylla subtracted out background activity to specifically isolate only “periodic” fluctuations in power (i.e., the brain waves) at each frequency. She also disregarded the typical way brain waves are grouped by frequency (into distinct bands with Greek letter designations delta, theta, alpha, beta, and gamma) so that she could simply juxtapose the periodic power spectra of the humans and mice without trying to match them band by band (e.g., trying to compare the mouse “alpha” band to the human one). This turned out to be crucial because the significant, similar patterns exhibited by the mice actually occurred in a different low-frequency band than in the humans (theta vs. alpha). Both species also had alterations in higher-frequency bands in fragile X, but Kornfeld-Sylla noted that the differences in the low-frequency brain waves are easier to measure and more reliable in humans, making them a more promising biomarker.
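The idea of stripping away the aperiodic background to expose periodic peaks can be sketched on a synthetic spectrum. This is an illustrative stand-in with hypothetical numbers (the study's actual spectral parameterization may differ): fit the 1/f background in log-log space away from the peak, subtract it, and read off the residual peak frequency:

```python
import numpy as np

freqs = np.linspace(1.0, 50.0, 491)  # 1-50 Hz in 0.1-Hz steps

# Synthetic power spectrum: a 1/f aperiodic background plus one
# "periodic" alpha-range peak centered at 10 Hz (values hypothetical).
aperiodic = 10.0 / freqs
peak = 2.0 * np.exp(-0.5 * ((freqs - 10.0) / 1.5) ** 2)
power = aperiodic + peak

# Fit the aperiodic background as a line in log-log space, excluding
# the peak region so the fit isn't pulled upward by the oscillation.
mask = (freqs < 5.0) | (freqs > 15.0)
slope, intercept = np.polyfit(np.log10(freqs[mask]), np.log10(power[mask]), 1)
background = 10.0 ** (intercept + slope * np.log10(freqs))

# Subtracting the background isolates the periodic component, whose
# peak frequency can then be compared across groups and species.
periodic = power - background
peak_freq = float(freqs[np.argmax(periodic)])
```

Comparing `peak_freq` directly between groups, rather than power inside fixed Greek-letter bands, is the band-agnostic style of comparison described above.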

So what patterns constitute the biomarker? In adult men and mice alike, a peak in the power of low-frequency waves is shifted to a significantly slower frequency in fragile X cases compared to neurotypical cases. Meanwhile, in fragile X boys and juvenile mice, while the peak is somewhat shifted to a slower frequency, what is really significant is a reduced power in that same peak.

The researchers were also able to discern that the peak in question is actually made of two distinct subpeaks, and that the lower-frequency subpeak is the one that varies specifically with fragile X syndrome.

Curious about the neural activity underlying the measurements, the researchers engaged in experiments in which they turned off activity of two different kinds of inhibitory neurons that are known to help produce and shape brain wave patterns: somatostatin-expressing and parvalbumin-expressing interneurons. Manipulating the somatostatin neurons specifically affected the lower-frequency subpeak that contained the newly discovered biomarker in fragile X model mice.

Drug testing

Somatostatin interneurons exert their effects on the neurons they connect to via the neurotransmitter chemical GABA, and evidence from prior studies suggests that GABA receptivity is reduced in fragile X syndrome. A therapeutic approach pioneered by Bear and others has been to give the drug arbaclofen, which enhances GABA activity. In the new study, the researchers treated both control and fragile X model mice with arbaclofen to see how it affected the low-frequency biomarker.

Even the lowest administered single dose made a significant difference in the neurotypical mice, which is consistent with those mice having normal GABA responsiveness. Fragile X mice needed a higher dose, but after one was administered, there was a notable increase in the power of the key subpeak, reducing the deficit exhibited by juvenile mice.

The arbaclofen experiments therefore demonstrated that the biomarker provides a significant readout of an underlying pathophysiology of fragile X: the reduced GABA responsiveness. Bear also noted that it helped to identify a dose at which arbaclofen exerted a corrective effect, even though the drug was only administered acutely, rather than chronically. An arbaclofen therapy would, of course, be given over a long time frame, not just once.

“This is a proof of concept that a drug treatment could move this phenotype acutely in a direction that makes it closer to wild-type,” Bear says. “This effort reveals that we have readouts that can be sensitive to drug treatments.”

Meanwhile, Kornfeld-Sylla notes, there is a broad spectrum of brain disorders in which human patients exhibit significant differences in low-frequency (alpha) brain waves compared to neurotypical peers.

“Disruptions akin to the biomarker we found in this fragile X study might prove to be evident in mouse models of those other disorders, too,” she says. “Identifying this biomarker could broadly impact future translational neuroscience research.”

The paper’s other authors are Cigdem Gelegen, Jordan Norris, Francesca Chaloner, Maia Lee, Michael Khela, Maxwell Heinrich, Peter Finnie, Lauren Ethridge, Craig Erickson, Lauren Schmitt, Sam Cooke, and Carol Wilkinson.

The National Institutes of Health, the National Science Foundation, the FRAXA Foundation, the Pierce Family Fragile X Foundation, the Autism Science Foundation, the Thrasher Research Fund, Harvard University, the Simons Foundation, Wellcome, the Biotechnology and Biological Sciences Research Council, and the Freedom Together Foundation provided support for the research.



from MIT News https://ift.tt/Xni1Tlw

Thursday, February 19, 2026

Chip-processing method could assist cryptography schemes to keep data secure

Just like each person has unique fingerprints, every CMOS chip has a distinctive “fingerprint” caused by tiny, random manufacturing variations. Engineers can leverage this unforgeable ID for authentication, to safeguard a device from attackers trying to steal private data.

But these cryptographic schemes typically require secret information about a chip’s fingerprint to be stored on a third-party server. This creates security vulnerabilities and requires additional memory and computation.

To overcome this limitation, MIT engineers developed a manufacturing method that enables secure, fingerprint-based authentication, without the need to store secret information outside the chip.

They split a specially designed chip during fabrication in such a way that each half has an identical, shared fingerprint that is unique to these two chips. Each chip can be used to directly authenticate the other. This low-cost fingerprint fabrication method is compatible with standard CMOS foundry processes and requires no special materials.

The technique could be useful in power-constrained electronic systems with non-interchangeable device pairs, like an ingestible sensor pill and its paired wearable patch that monitor gastrointestinal health conditions. Using a shared fingerprint, the pill and patch can authenticate each other without a device in between to mediate.

“The biggest advantage of this security method is that we don’t need to store any information. All the secrets will always remain safe inside the silicon. This can give a higher level of security. As long as you have this digital key, you can always unlock the door,” says Eunseok Lee, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this security method.

Lee is joined on the paper by EECS graduate students Jaehong Jung and Maitreyi Ashok; as well as co-senior authors Anantha Chandrakasan, MIT provost and the Vannevar Bush Professor of Electrical Engineering and Computer Science, and Ruonan Han, a professor of EECS and a member of the MIT Research Laboratory of Electronics. The research was recently presented at the IEEE International Solid-State Circuits Conference.

“Creation of shared encryption keys in trusted semiconductor foundries could help break the tradeoffs between being more secure and more convenient to use for protection of data transmission,” Han says. “This work, which is digital-based, is still a preliminary trial in this direction; we are exploring how more complex, analog-based secrecy can be duplicated — and only duplicated once.”

Leveraging variations

Even though they are intended to be identical, each CMOS chip is slightly different due to unavoidable microscopic variations during fabrication. These random variations give each chip a unique identifier, known as a physical unclonable function (PUF), that is nearly impossible to replicate.

A chip’s PUF can be used to provide security just like the human fingerprint identification system on a laptop or door panel.

For authentication, a server sends a request to the device, which responds with a secret key based on its unique physical structure. If the key matches an expected value, the server authenticates the device.
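The challenge-response flow described above can be sketched in a few lines. This is an illustrative simulation only: a real PUF derives its response from physical randomness on the chip, whereas here a per-device secret byte string stands in for that fingerprint, and all names (`puf_response`, `Server`, the challenge values) are hypothetical.

```python
import hashlib

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    # The device derives its response from its "fingerprint" plus the challenge.
    # (A hash of a stand-in secret models the physical function here.)
    return hashlib.sha256(device_secret + challenge).digest()

class Server:
    """Stores challenge-response pairs enrolled before deployment."""
    def __init__(self):
        self.enrolled = {}  # device_id -> {challenge: expected_response}

    def enroll(self, device_id, device_secret, challenges):
        self.enrolled[device_id] = {
            c: puf_response(device_secret, c) for c in challenges
        }

    def authenticate(self, device_id, challenge, response) -> bool:
        expected = self.enrolled.get(device_id, {}).get(challenge)
        return expected is not None and expected == response

server = Server()
secret = b"chip-A-physical-randomness"  # stands in for the chip's PUF
server.enroll("chip-A", secret, [b"c1", b"c2"])

# Later: the server issues a challenge; the device answers from its PUF.
assert server.authenticate("chip-A", b"c1", puf_response(secret, b"c1"))
assert not server.authenticate("chip-A", b"c1", b"wrong-response-bytes")
```

Note that the server's `enrolled` table is exactly the stored secret information that creates the vulnerability the MIT approach is designed to eliminate.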

But the PUF authentication data must be registered and stored in a server for access later, creating a potential security vulnerability.

“If we don’t need to store information on these unique randomizations, then the PUF becomes even more secure,” Lee says.

The researchers wanted to accomplish this by developing a matched PUF pair on two chips. One could authenticate the other directly, without the need to store PUF data on third-party servers.

As an analogy, consider a sheet of paper torn in half. The torn edges are random and unique, but the pieces have a shared randomness because they fit back together perfectly along the torn edge.

While CMOS chips aren’t torn in half like paper, many are fabricated at once on a silicon wafer which is diced to separate the individual chips.

By incorporating shared randomness at the edge of two chips before they are diced to separate them, the researchers could create a twin PUF that is unique to these two chips.

“We needed to find a way to do this before the chip leaves the foundry, for added security. Once the fabricated chip enters the supply chain, we won’t know what might happen to it,” Lee explains.

Sharing randomness

To create the twin PUF, the researchers change the properties of a set of transistors fabricated along the edge of two chips, using a process called gate oxide breakdown.

Essentially, they pump high voltage into a pair of transistors by shining light with a low-cost LED until the first transistor breaks down. Because of tiny manufacturing variations, each transistor has a slightly different breakdown time. The researchers can use this unique breakdown state as the basis for a PUF.

To enable a twin PUF, the MIT researchers fabricate two pairs of transistors along the edge of two chips before they are diced to separate them. By connecting the transistors with metal layers, they create paired structures that have correlated breakdown states. In this way, they enable a unique PUF to be shared by each pair of transistors.

After shining LED light to create the PUF, they dice the chips between the transistors so there is one pair on each device, giving each separate chip a shared PUF.

“In our case, transistor breakdown has not been modeled well in many of the simulations we had, so there was a lot of uncertainty about how the process would work. Figuring out all the steps, and the order they needed to happen, to generate this shared randomness is the novelty of this work,” Lee says.

After fine-tuning their PUF generation process, the researchers developed a prototype pair of twin PUF chips in which the randomization was matched with more than 98 percent reliability. This would ensure the generated PUF key matches consistently, enabling secure authentication.
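Because the two readouts agree on most but not all bits, matching has to tolerate a small Hamming distance. The sketch below illustrates that idea with synthetic bitstrings; the key length, flip probability, and acceptance threshold are assumed values for illustration, not parameters from the paper.

```python
import random

KEY_BITS = 256
THRESHOLD = 25  # max differing bits accepted; an assumed tolerance

rng = random.Random(0)
# Randomness shared by the two chips at dicing time.
shared = [rng.randint(0, 1) for _ in range(KEY_BITS)]

def noisy_readout(bits, flip_prob=0.02):
    # Each chip's readout flips a bit with small probability (~2% error,
    # roughly matching the >98 percent reliability reported).
    return [b ^ (rng.random() < flip_prob) for b in bits]

def authenticate(key_a, key_b):
    distance = sum(a != b for a, b in zip(key_a, key_b))
    return distance <= THRESHOLD

chip_a, chip_b = noisy_readout(shared), noisy_readout(shared)
stranger = [rng.randint(0, 1) for _ in range(KEY_BITS)]  # unrelated chip

print(authenticate(chip_a, chip_b))   # twins fall within tolerance
print(authenticate(chip_a, stranger)) # an unrelated key is far outside it
```

An unrelated 256-bit key differs from either twin in roughly half its bits (about 128), so even a generous threshold cleanly separates the twin pair from strangers.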

Because they generated this twin PUF using circuit techniques and low-cost LEDs, the process would be easier to implement at scale than other methods that are more complicated or not compatible with standard CMOS fabrication.

“In the current design, shared randomness generated by transistor breakdown is immediately converted into digital data. Future versions could preserve this shared randomness directly within the transistors, strengthening security at the most fundamental physical level of the chip,” Lee says.

“There is a rapidly increasing demand for physical-layer security for edge devices, such as between medical sensors and devices on a body, which often operate under strict energy constraints. A twin-paired PUF approach enables secure communication between nodes without the burden of heavy protocol overhead, thereby delivering both energy efficiency and strong security. This initial demonstration paves the way for innovative advancements in secure hardware design,” Chandrakasan adds.

This work is funded by Lockheed Martin, the MIT School of Engineering MathWorks Fellowship, and the Korea Foundation for Advanced Studies Fellowship.



from MIT News https://ift.tt/o0VeDxa

Study: AI chatbots provide less-accurate information to vulnerable users

Large language models (LLMs) have been championed as tools that could democratize access to information worldwide, offering knowledge in a user-friendly interface regardless of a person’s background or location. However, new research from MIT’s Center for Constructive Communication (CCC) suggests these artificial intelligence systems may actually perform worse for the very users who could most benefit from them.

A study conducted by researchers at CCC, which is based at the MIT Media Lab, found that state-of-the-art AI chatbots — including OpenAI’s GPT-4, Anthropic’s Claude 3 Opus, and Meta’s Llama 3 — sometimes provide less-accurate and less-truthful responses to users who have lower English proficiency, less formal education, or who originate from outside the United States. The models also refuse to answer questions at higher rates for these users, and in some cases, respond with condescending or patronizing language.

“We were motivated by the prospect of LLMs helping to address inequitable information accessibility worldwide,” says lead author Elinor Poole-Dayan SM ’25, a technical associate in the MIT Sloan School of Management who led the research as a CCC affiliate and master’s student in media arts and sciences. “But that vision cannot become a reality without ensuring that model biases and harmful tendencies are safely mitigated for all users, regardless of language, nationality, or other demographics.”

A paper describing the work, “LLM Targeted Underperformance Disproportionately Impacts Vulnerable Users,” was presented at the AAAI Conference on Artificial Intelligence in January.

Systematic underperformance across multiple dimensions

For this research, the team tested how the three LLMs responded to questions from two datasets: TruthfulQA and SciQ. TruthfulQA is designed to measure a model’s truthfulness (by relying on common misconceptions and literal truths about the real world), while SciQ contains science exam questions testing factual accuracy. The researchers prepended short user biographies to each question, varying three traits: education level, English proficiency, and country of origin.
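The evaluation loop this describes can be sketched as below. The biographies, questions, and `query_model` stub are all hypothetical stand-ins (the actual study queried GPT-4, Claude 3 Opus, and Llama 3 via their APIs); the stub here answers uniformly, whereas the real study found accuracy gaps between groups.

```python
# Hypothetical user biographies prepended to each question.
BIOS = {
    "control": "",
    "low_edu_nonnative": ("I did not finish school and English "
                          "is not my first language. "),
}

# Stand-ins for SciQ-style factual questions.
QUESTIONS = [
    {"q": "What gas do plants absorb for photosynthesis?",
     "answer": "carbon dioxide"},
    {"q": "What is the boiling point of water at sea level in Celsius?",
     "answer": "100"},
]

def query_model(prompt: str) -> str:
    # Placeholder for a real LLM API call.
    return "carbon dioxide" if "photosynthesis" in prompt else "100"

def accuracy_by_group():
    results = {}
    for group, bio in BIOS.items():
        correct = 0
        for item in QUESTIONS:
            reply = query_model(bio + item["q"])  # biography + question
            correct += item["answer"].lower() in reply.lower()
        results[group] = correct / len(QUESTIONS)
    return results

print(accuracy_by_group())
```

Comparing per-group accuracies (and refusal rates, counted the same way) is what surfaces the disparities reported in the study.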

Across all three models and both datasets, the researchers found significant drops in accuracy when questions came from users described as having less formal education or being non-native English speakers. The effects were most pronounced for users at the intersection of these categories: those with less formal education who were also non-native English speakers saw the largest declines in response quality.

The research also examined how country of origin affected model performance. Testing users from the United States, Iran, and China with equivalent educational backgrounds, the researchers found that Claude 3 Opus in particular performed significantly worse for users from Iran on both datasets.

“We see the largest drop in accuracy for the user who is both a non-native English speaker and less educated,” says Jad Kabbara, a research scientist at CCC and a co-author on the paper. “These results show that the negative effects of model behavior with respect to these user traits compound in concerning ways, thus suggesting that such models deployed at scale risk spreading harmful behavior or misinformation downstream to those who are least able to identify it.”

Refusals and condescending language

Perhaps most striking were the differences in how often the models refused to answer questions altogether. For example, Claude 3 Opus refused to answer nearly 11 percent of questions for less educated, non-native English-speaking users — compared to just 3.6 percent for the control condition with no user biography.

When the researchers manually analyzed these refusals, they found that Claude responded with condescending, patronizing, or mocking language 43.7 percent of the time for less-educated users, compared to less than 1 percent for highly educated users. In some cases, the model mimicked broken English or adopted an exaggerated dialect.

The model also refused to provide information on certain topics specifically for less-educated users from Iran or Russia, including questions about nuclear power, anatomy, and historical events — even though it answered the same questions correctly for other users.

“This is another indicator suggesting that the alignment process might incentivize models to withhold information from certain users to avoid potentially misinforming them, although the model clearly knows the correct answer and provides it to other users,” says Kabbara.

Echoes of human bias

The findings mirror documented patterns of human sociocognitive bias. Research in the social sciences has shown that native English speakers often perceive non-native speakers as less educated, intelligent, and competent, regardless of their actual expertise. Similar biased perceptions have been documented among teachers evaluating non-native English-speaking students.

“The value of large language models is evident in their extraordinary uptake by individuals and the massive investment flowing into the technology,” says Deb Roy, professor of media arts and sciences, CCC director, and a co-author on the paper. “This study is a reminder of how important it is to continually assess systematic biases that can quietly slip into these systems, creating unfair harms for certain groups without any of us being fully aware.”

The implications are particularly concerning given that personalization features — like ChatGPT’s Memory, which tracks user information across conversations — are becoming increasingly common. Such features risk differentially treating already-marginalized groups.

“LLMs have been marketed as tools that will foster more equitable access to information and revolutionize personalized learning,” says Poole-Dayan. “But our findings suggest they may actually exacerbate existing inequities by systematically providing misinformation or refusing to answer queries to certain users. The people who may rely on these tools the most could receive subpar, false, or even harmful information.”



from MIT News https://ift.tt/Nri2kad

Exposing biases, moods, personalities, and abstract concepts hidden in large language models

By now, ChatGPT, Claude, and other large language models have accumulated so much human knowledge that they’re far from simple answer-generators; they can also express abstract concepts, such as certain tones, personalities, biases, and moods. However, it’s not obvious exactly how these models represent abstract concepts to begin with from the knowledge they contain.

Now a team from MIT and the University of California San Diego has developed a way to test whether a large language model (LLM) contains hidden biases, personalities, moods, or other abstract concepts. Their method can zero in on connections within a model that encode for a concept of interest. What’s more, the method can then manipulate, or “steer,” these connections to strengthen or weaken the concept in any answer a model is prompted to give.

The team proved their method could quickly root out and steer more than 500 general concepts in some of the largest LLMs used today. For instance, the researchers could home in on a model’s representations for personalities such as “social influencer” and “conspiracy theorist,” and stances such as “fear of marriage” and “fan of Boston.” They could then tune these representations to enhance or minimize the concepts in any answers that a model generates.

In the case of the “conspiracy theorist” concept, the team successfully identified a representation of this concept within one of the largest vision language models available today. When they enhanced the representation, and then prompted the model to explain the origins of the famous “Blue Marble” image of Earth taken from Apollo 17, the model generated an answer with the tone and perspective of a conspiracy theorist.

The team acknowledges there are risks to extracting certain concepts, which they also illustrate (and caution against). Overall, however, they see the new approach as a way to illuminate hidden concepts and potential vulnerabilities in LLMs that could then be turned up or down to improve a model’s safety or enhance its performance.

“What this really says about LLMs is that they have these concepts in them, but they’re not all actively exposed,” says Adityanarayanan “Adit” Radhakrishnan, assistant professor of mathematics at MIT. “With our method, there’s ways to extract these different concepts and activate them in ways that prompting cannot give you answers to.”

The team published their findings today in a study appearing in the journal Science. The study’s co-authors include Radhakrishnan, Daniel Beaglehole and Mikhail Belkin of UC San Diego, and Enric Boix-Adserà of the University of Pennsylvania.

A fish in a black box

As use of OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and other artificial intelligence assistants has exploded, scientists are racing to understand how models represent certain abstract concepts such as “hallucination” and “deception.” In the context of an LLM, a hallucination is a response that is false or contains misleading information, which the model has “hallucinated,” or constructed erroneously as fact.

To find out whether a concept such as “hallucination” is encoded in an LLM, scientists have often taken an approach of “unsupervised learning” — a type of machine learning in which algorithms broadly trawl through unlabeled representations to find patterns that might relate to a concept such as “hallucination.” But to Radhakrishnan, such an approach can be too broad and computationally expensive.

“It’s like going fishing with a big net, trying to catch one species of fish. You’re gonna get a lot of fish that you have to look through to find the right one,” he says. “Instead, we’re going in with bait for the right species of fish.”

He and his colleagues had previously developed the beginnings of a more targeted approach with a type of predictive modeling algorithm known as a recursive feature machine (RFM). An RFM is designed to directly identify features or patterns within data by leveraging a mathematical mechanism that neural networks — a broad category of AI models that includes LLMs — implicitly use to learn features.

Since the algorithm was an effective, efficient approach for capturing features in general, the team wondered whether they could use it to root out representations of concepts in LLMs, which are by far the most widely used type of neural network and perhaps the least well-understood.

“We wanted to apply our feature learning algorithms to LLMs to, in a targeted way, discover representations of concepts in these large and complex models,” Radhakrishnan says.

Converging on a concept

The team’s new approach identifies any concept of interest within an LLM and “steers,” or guides, a model’s response based on this concept. The researchers looked for 512 concepts within five classes: fears (such as of marriage, insects, and even buttons); experts (social influencer, medievalist); moods (boastful, detachedly amused); a preference for locations (Boston, Kuala Lumpur); and personas (Ada Lovelace, Neil deGrasse Tyson).

The researchers then searched for representations of each concept in several of today’s large language and vision models. They did so by training RFMs to recognize numerical patterns in an LLM that could represent a particular concept of interest.

A standard large language model is, broadly, a neural network that takes a natural language prompt, such as “Why is the sky blue?” and divides the prompt into individual words, each of which is encoded mathematically as a list, or vector, of numbers. The model takes these vectors through a series of computational layers, creating matrices of many numbers that, throughout each layer, are used to identify other words that are most likely to be used to respond to the original prompt. Eventually, the layers converge on a set of numbers that is decoded back into text, in the form of a natural language response.

The team’s approach trains RFMs to recognize numerical patterns in an LLM that could be associated with a specific concept. As an example, to see whether an LLM contains any representation of a “conspiracy theorist,” the researchers would first train the algorithm to identify patterns among LLM representations of 100 prompts that are clearly related to conspiracies, and 100 other prompts that are not. In this way, the algorithm would learn patterns associated with the conspiracy theorist concept. Then, the researchers can mathematically modulate the activity of the conspiracy theorist concept by perturbing LLM representations with these identified patterns. 
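The extract-then-perturb pattern described above can be illustrated with a much simpler stand-in. The paper's method is the recursive feature machine; the sketch below instead uses a plain mean-difference direction on synthetic hidden-state vectors, which captures the same workflow (learn a concept direction from labeled representations, then shift hidden states along it) without claiming to reproduce the RFM itself.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

# A synthetic "true" concept direction, used only to generate toy data.
concept_true = rng.normal(size=dim)
concept_true /= np.linalg.norm(concept_true)

# Toy representations of 100 concept-related and 100 unrelated prompts.
pos = rng.normal(size=(100, dim)) + 2.0 * concept_true
neg = rng.normal(size=(100, dim))

# Extract the concept direction from the labeled representations.
direction = pos.mean(axis=0) - neg.mean(axis=0)
direction /= np.linalg.norm(direction)

def steer(hidden_state, strength):
    # Perturb a hidden state along the concept direction: positive strength
    # amplifies the concept, negative strength suppresses it.
    return hidden_state + strength * direction

h = rng.normal(size=dim)
print(float(h @ direction), float(steer(h, 5.0) @ direction))
```

In a real model, the perturbation would be applied to the activations of one or more layers during generation, shifting every answer toward (or away from) the target concept.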

The method can be applied to search for and manipulate any general concept in an LLM. Among many examples, the researchers identified representations and manipulated an LLM to give answers in the tone and perspective of a “conspiracy theorist.” They also identified and enhanced the concept of “anti-refusal,” and showed that whereas a model would normally be programmed to refuse certain prompts, it instead answered them, for instance by giving instructions on how to rob a bank.

Radhakrishnan says the approach can be used to quickly search for and minimize vulnerabilities in LLMs. It can also be used to enhance certain traits, personalities, moods, or preferences, such as emphasizing the concept of “brevity” or “reasoning” in any response an LLM generates. The team has made the method’s underlying code publicly available.

“LLMs clearly have a lot of these abstract concepts stored within them, in some representation,” Radhakrishnan says. “There are ways where, if we understand these representations well enough, we can build highly specialized LLMs that are still safe to use but really effective at certain tasks.”

This work was supported, in part, by the National Science Foundation, the Simons Foundation, the TILOS institute, and the U.S. Office of Naval Research. 



from MIT News https://ift.tt/3K9rJov

A neural blueprint for human-like intelligence in soft robots

A new artificial intelligence control system enables soft robotic arms to learn a wide repertoire of motions and tasks once, then adjust to new scenarios on the fly, without needing retraining or sacrificing functionality. 

This breakthrough brings soft robotics closer to human-like adaptability for real-world applications, such as in assistive robotics, rehabilitation robots, and wearable or medical soft robots, by making them more intelligent, versatile, and safe.

The work was led by the Mens, Manus and Machina (M3S) interdisciplinary research group — a play on the Latin MIT motto “mens et manus,” or “mind and hand,” with the addition of “machina” for “machine” — within the Singapore-MIT Alliance for Research and Technology. Co-leading the project are researchers from the National University of Singapore (NUS), alongside collaborators from MIT and Nanyang Technological University in Singapore (NTU Singapore).

Unlike regular robots that move using rigid motors and joints, soft robots are made from flexible materials such as soft rubber and move using special actuators — components that act like artificial muscles to produce physical motion. While their flexibility makes them ideal for delicate or adaptive tasks, controlling soft robots has always been a challenge because their shape changes in unpredictable ways. Real-world environments are often complicated and full of unexpected disturbances, and even small changes in conditions — like a shift in weight, a gust of wind, or a minor hardware fault — can throw off their movements. 

Despite substantial progress in soft robotics, existing approaches often can only achieve one or two of the three capabilities needed for soft robots to operate intelligently in real-world environments: using what they’ve learned from one task to perform a different task, adapting quickly when the situation changes, and guaranteeing that the robot will stay stable and safe while adapting its movements. This lack of adaptability and reliability has been a major barrier to deploying soft robots in real-world applications until now.

In an open-access study titled “A general soft robotic controller inspired by neuronal structural and plastic synapses that adapts to diverse arms, tasks, and perturbations,” published Jan. 6 in Science Advances, the researchers describe how they developed a new AI control system that allows soft robots to adapt across diverse tasks and disturbances. The study takes inspiration from the way the human brain learns and adapts, and was built on extensive research in learning-based robotic control, embodied intelligence, soft robotics, and meta-learning.

The system uses two complementary sets of “synapses” — connections that adjust how the robot moves — working in tandem. The first set, known as “structural synapses,” is trained offline on a variety of foundational movements, such as bending or extending a soft arm smoothly. These form the robot’s built‑in skills and provide a strong, stable foundation. The second set, called “plastic synapses,” continually updates online as the robot operates, fine-tuning the arm’s behavior to respond to what is happening in the moment. A built-in stability measure acts like a safeguard, so even as the robot adjusts during online adaptation, its behavior remains smooth and controlled.
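A toy sketch of this two-synapse idea: fixed offline-trained weights supply the baseline control signal, while a small "plastic" weight matrix adapts online and is clipped so adaptation cannot drift arbitrarily far from the baseline. The update rule, dimensions, and bound here are illustrative assumptions, not the controller from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_actuators = 8, 4

W_structural = rng.normal(size=(n_actuators, n_sensors))  # trained offline, frozen
W_plastic = np.zeros((n_actuators, n_sensors))            # adapts online

PLASTIC_BOUND = 0.5   # stability safeguard: cap on online adaptation
LEARNING_RATE = 0.05

def control(sensors):
    # Actuator commands combine the fixed skills and the online correction.
    return (W_structural + W_plastic) @ sensors

def adapt(sensors, error):
    # Gradient-like update driven by tracking error, then clipped so the
    # adapted behavior stays within a bounded distance of the baseline.
    global W_plastic
    W_plastic += LEARNING_RATE * np.outer(error, sensors)
    np.clip(W_plastic, -PLASTIC_BOUND, PLASTIC_BOUND, out=W_plastic)

s = rng.normal(size=n_sensors)
err = rng.normal(size=n_actuators)
adapt(s, err)
print(np.abs(W_plastic).max() <= PLASTIC_BOUND)  # True: the clip enforces the bound
```

The clipping step plays the role of the stability safeguard described above: no matter what disturbances drive the online updates, the controller's deviation from its offline-trained behavior stays bounded.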

“Soft robots hold immense potential to take on tasks that conventional machines simply cannot, but true adoption requires control systems that are both highly capable and reliably safe. By combining structural learning with real-time adaptiveness, we’ve created a system that can handle the complexity of soft materials in unpredictable environments,” says MIT Professor Daniela Rus, co-lead principal investigator at M3S, director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and co-corresponding author of the paper. “It’s a step closer to a future where versatile soft robots can operate safely and intelligently alongside people — in clinics, factories, or everyday lives.”

“This new AI control system is one of the first general soft-robot controllers that can achieve all three key aspects needed for soft robots to be used in society and various industries. It can apply what it learned offline across different tasks, adapt instantly to new conditions, and remain stable throughout — all within one control framework,” says Associate Professor Zhiqiang Tang, first author and co-corresponding author of the paper, who was a postdoc at M3S and at NUS when he carried out the research and is now an associate professor at Southeast University in China (SEU China).

The system supports multiple task types, enabling soft robotic arms to execute trajectory tracking, object placement, and whole-body shape regulation within one unified approach. The method also generalizes across different soft-arm platforms, demonstrating cross-platform applicability. 

The system was tested and validated on two physical platforms — a cable-driven soft arm and a shape-memory-alloy–actuated soft arm — and delivered impressive results. It achieved a 44–55 percent reduction in tracking error under heavy disturbances; over 92 percent shape accuracy under payload changes, airflow disturbances, and actuator failures; and stable performance even when up to half of the actuators failed. 

“This work redefines what’s possible in soft robotics. We’ve shifted the paradigm from task-specific tuning and capabilities toward a truly generalizable framework with human-like intelligence. It is a breakthrough that opens the door to scalable, intelligent soft machines capable of operating in real-world environments,” says Professor Cecilia Laschi, co-corresponding author and principal investigator at M3S, Provost’s Chair Professor in the NUS Department of Mechanical Engineering at the College of Design and Engineering, and director of the NUS Advanced Robotics Centre.

This breakthrough opens the door to more robust soft robotic systems in manufacturing, logistics, inspection, and medical robotics that operate without the need for constant reprogramming — reducing downtime and costs. In health care, assistive and rehabilitation devices can automatically tailor their movements to a patient’s changing strength or posture, while wearable or medical soft robots can respond more sensitively to individual needs, improving safety and patient outcomes.

The researchers plan to extend this technology to robotic systems or components that can operate at higher speeds and more complex environments, with potential applications in assistive robotics, medical devices, and industrial soft manipulators, as well as integration into real-world autonomous systems.

The research conducted at SMART was supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise program.



from MIT News https://ift.tt/os49HO6

Wednesday, February 18, 2026

Parking-aware navigation system could prevent frustration and emissions

It happens every day: a motorist heading across town checks a navigation app to see how long the trip will take, only to find no parking spots available on arrival. By the time they finally park and walk the rest of the way, they’re significantly later than they expected to be.

Most popular navigation systems send drivers to a location without considering the extra time that could be needed to find parking. This causes more than just a headache for drivers. It can worsen congestion and increase emissions by causing motorists to cruise around looking for a parking spot. This underestimation could also discourage people from taking mass transit because they don’t realize it might be faster than driving and parking.

MIT researchers tackled this problem by developing a system that can be used to identify parking lots that offer the best balance of proximity to the desired location and likelihood of parking availability. Their adaptable method points users to the ideal parking area rather than their destination.

In simulated tests with real-world traffic data from Seattle, this technique achieved time savings of up to 66 percent in the most congested settings. For a motorist, this would reduce travel time by about 35 minutes, compared to waiting for a spot to open in the closest parking lot.

While they haven’t designed a system ready for the real world yet, their demonstrations show the viability of this approach and indicate how it could be implemented.

“This frustration is real and felt by a lot of people, and the bigger issue here is that systematically underestimating these drive times prevents people from making informed choices. It makes it that much harder for people to make shifts to public transit, bikes, or alternative forms of transportation,” says MIT graduate student Cameron Hickert, lead author on a paper describing the work.

Hickert is joined on the paper by Sirui Li PhD ’25; Zhengbing He, a research scientist in the Laboratory for Information and Decision Systems (LIDS); and senior author Cathy Wu, the Class of 1954 Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of LIDS. The research appears today in Transactions on Intelligent Transportation Systems.

Probable parking

To solve the parking problem, the researchers developed a probability-aware approach that considers all possible public parking lots near a destination, the distance to drive there from a point of origin, the distance to walk from each lot to the destination, and the likelihood of parking success.

The approach, based on dynamic programming, works backward from good outcomes to calculate the best route for the user.

Their method also considers the case where a user arrives at the ideal parking lot but can’t find a space. It takes into account the distance to other parking lots and the probability of parking successfully at each.

“If there are several lots nearby that have slightly lower probabilities of success, but are very close to each other, it might be a smarter play to drive there rather than going to the higher-probability lot and hoping to find an opening. Our framework can account for that,” Hickert says.

In the end, their system can identify the optimal lot that has the lowest expected time required to drive, park, and walk to the destination.
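The expected-time comparison at the heart of this can be sketched with made-up numbers. Each lot contributes its drive time, its walk time weighted by the chance of finding a space, and a fallback cost on failure; all lot names, times, probabilities, and the waiting penalty below are assumptions for illustration, and the failure branch is simplified to a single fallback hop rather than the paper's full dynamic program.

```python
LOTS = {
    # name: (drive_min, walk_min, p_success, fallback_lot or None)
    "close_lot": (10.0, 2.0, 0.3, "far_lot"),
    "far_lot":   (12.0, 8.0, 0.9, None),
}
WAIT_PENALTY = 30.0  # assumed cost of waiting when the last fallback is full

def expected_time(lot: str) -> float:
    drive, walk, p, fallback = LOTS[lot]
    if fallback is None:
        fail_cost = WAIT_PENALTY + walk
    else:
        fb_drive, fb_walk, fb_p, _ = LOTS[fallback]
        # On failure, drive to the fallback; this sketch ends the search
        # there, either parking or waiting. (Drive time between lots is
        # approximated by the fallback's drive time from the origin.)
        fail_cost = (fb_drive + fb_p * fb_walk
                     + (1 - fb_p) * (WAIT_PENALTY + fb_walk))
    return drive + p * walk + (1 - p) * fail_cost

best = min(LOTS, key=expected_time)
print(best, round(expected_time(best), 1))
```

With these numbers the farther, high-probability lot wins despite the longer walk — the same kind of trade-off Hickert describes, where chasing the closest lot and hoping for an opening is the slower strategy in expectation.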

But no motorist expects to be the only one trying to park in a busy city center. So, this method also incorporates the actions of other drivers, which affect the user’s probability of parking success.

For instance, another driver may arrive at the user’s ideal lot first and take the last parking spot. Or another motorist could try parking in another lot but then park in the user’s ideal lot if unsuccessful. In addition, another motorist may park in a different lot and cause spillover effects that lower the user’s chances of success.

“With our framework, we show how you can model all those scenarios in a very clean and principled manner,” Hickert says.

Crowdsourced parking data

The data on parking availability could come from several sources. For example, some parking lots have magnetic detectors or gates that track the number of cars entering and exiting.

But such sensors aren’t widely used, so to make their system more feasible for real-world deployment, the researchers studied the effectiveness of using crowdsourced data instead.

For instance, users could indicate available parking using an app. Data could also be gathered by tracking the number of vehicles circling to find parking, or how many enter a lot and exit after being unsuccessful.

Someday, autonomous vehicles could even report on open parking spots they drive by.

“Right now, a lot of that information goes nowhere. But if we could capture it, even by having someone simply tap ‘no parking’ in an app, that could be an important source of information that allows people to make more informed decisions,” Hickert adds.
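Noisy crowdsourced reports can still yield a reliable availability signal if several are aggregated. A minimal sketch, using the roughly 7 percent per-report error rate the study measured and a simple majority vote (an assumed aggregation rule, not one from the paper):

```python
import random

ERROR_RATE = 0.07
rng = random.Random(42)

def crowd_report(truly_available: bool) -> bool:
    # A user taps "available" or "full"; with probability ERROR_RATE
    # the report is wrong.
    return truly_available ^ (rng.random() < ERROR_RATE)

def estimate(reports) -> bool:
    # Majority vote over recent reports for one lot.
    return sum(reports) > len(reports) / 2

truth = True
reports = [crowd_report(truth) for _ in range(25)]
print(estimate(reports))  # overwhelmingly likely to match the true state
```

With 25 independent reports at a 7 percent error rate, the chance a majority is wrong is vanishingly small, which is why individually unreliable taps can still feed a useful probability estimate.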

The researchers evaluated their system using real-world traffic data from the Seattle area, simulating different times of day in a congested urban setting and a suburban area. In congested settings, their approach cut total travel time by about 60 percent compared to sitting and waiting for a spot to open, and by about 20 percent compared to a strategy of continually driving to the next closest parking lot.

They also found that crowdsourced observations of parking availability would have an error rate of only about 7 percent, compared to actual parking availability. This indicates it could be an effective way to gather parking probability data.

In the future, the researchers want to conduct larger studies using real-time route information in an entire city. They also want to explore additional avenues for gathering data on parking availability, such as using satellite images, and estimate potential emissions reductions.

“Transportation systems are so large and complex that they are really hard to change. What we look for, and what we found with this approach, is small changes that can have a big impact to help people make better choices, reduce congestion, and reduce emissions,” says Wu.

This research was supported, in part, by Cintra, the MIT Energy Initiative, and the National Science Foundation.



de MIT News https://ift.tt/QbvzJdn

How MIT OpenCourseWare is fueling one learner’s passion for education

Training for a clerical military role in France, Gustavo Barboza felt a spark he couldn’t ignore. He remembered his love of learning, which once guided him through two college semesters of mechanical engineering courses in his native Colombia, coupled with supplemental resources from MIT Open Learning’s OpenCourseWare. Now, thousands of miles away, he realized it was time to follow that spark again.

“I wasn’t ready to sit down in the classroom,” says Barboza, remembering his initial foray into higher education. “I left to try and figure out life. I realized I wanted more adventure.”

Joining the military in France in 2017 was his answer. For the first three years of service, he was very military-minded, only focused on his training and deployments. With more seniority, he took on more responsibilities, and eventually was sent to take a four-month training course on military correspondence and software. 

“I reminded myself that I like to study,” he says. “I started to go back to OpenCourseWare because I knew in the back of my mind that these very complete courses were out there.”

At that point, Barboza realized that military service was only a chapter in his life, and the next would lead him back to learning. He was still interested in engineering, and knew that MIT OpenCourseWare could help prepare him for what was next. 

He dove into OpenCourseWare’s free, online, open educational resources — which cover nearly the entire MIT curriculum — including classical mechanics, intro to electrical engineering, and single variable calculus with David Jerison, which he says was his most-visited resource. These allowed him to brush up on old skills and learn new ones, helping him tremendously in preparing for college entrance exams and his first-year courses. 

Now in his third year at Grenoble-Alpes University, Barboza studies electrical engineering, a shift from his initial interest in mechanical engineering.

“There is an OpenCourseWare lecture that explains all the specializations you can get into with electrical engineering,” he says. “They go from very natural things to things like microprocessors. What interests me is that if someone says they are an electrical engineer, there are so many different things they could be doing.” 

At this point in his academic career, Barboza is most interested in microelectronics and the study of radio frequencies and electromagnetic waves. But he admits he has more to learn and is open to where his studies may take him. 

MIT OpenCourseWare remains a valuable resource, he says. When thinking about his future, he checks out graduate course listings and considers the different paths he might take. When he is having trouble with a certain concept, he looks for a lecture on the subject, undeterred by the differences between French and U.S. conventions.  

“Of course, the science doesn't change, but the way you would write an equation or draw a circuit is different at my school in France versus what I see from MIT. So, you have to be careful,” he explains. “But it is still the first place I visit for problem sets, readings, and lecture notes. It’s amazing.”

The thoroughness and openness of MIT Open Learning’s courses and resources — like OpenCourseWare — stand out to Barboza. In the wide world of the internet, he has found resources from other universities, but he says their offerings are not as robust. And in a time of disinformation and questionable sources, he appreciates that MIT values transparency, accessibility, and knowledge. 

“Human knowledge has never been more accessible,” he says. “MIT puts coursework online and says, ‘here’s what we do.’ As long as you have an internet connection, you can learn all of it.”

“I just feel like MIT OpenCourseWare is what the internet was originally for,” Barboza continues. “A network for sharing knowledge. I’m a big fan.”

Explore lifelong learning opportunities from MIT, including courses, resources, and professional programs, on MIT Learn.



de MIT News https://ift.tt/ar6JzE5

3D-printing platform rapidly produces complex electric machines

A broken motor in an automated machine can bring production on a busy factory floor to a halt. If engineers can’t find a replacement part, they may have to order one from a distributor hundreds of miles away, leading to costly production delays.

It would be easier, faster, and cheaper to make a new motor onsite, but fabricating electric machines typically requires specialized equipment and complicated processes, which restricts production to a few manufacturing centers.

In an effort to democratize the manufacturing of complex devices, MIT researchers have developed a multimaterial 3D-printing platform that could be used to fully print electric machines in a single step.

They designed their system to process multiple functional materials, including electrically conductive materials and magnetic materials, using four extrusion tools that can handle varied forms of printable material. The printer switches between extruders, which deposit material by squeezing it through a nozzle as it fabricates a device one layer at a time.

The researchers used this system to produce a fully 3D-printed electric linear motor in a matter of hours using five materials. They only needed to perform one post-processing step for the motor to be fully functional.

The assembled device performed as well or better than similar motors that require more complex fabrication methods or additional post-processing steps.

In the long run, this 3D printing platform could be used to rapidly fabricate customizable electronic components for robots, vehicles, or medical equipment with much less waste.

“This is a great feat, but it is just the beginning. We have an opportunity to fundamentally change the way things are made by making hardware onsite in one step, rather than relying on a global supply chain. With this demonstration, we’ve shown that this is feasible,” says Luis Fernando Velásquez-García, a principal research scientist in MIT’s Microsystems Technology Laboratories (MTL) and senior author of a paper describing the 3D-printing platform, which appears today in Virtual and Physical Prototyping.

He is joined on the paper by electrical engineering and computer science (EECS) graduate students Jorge Cañada, who is the lead author, and Zoey Bigelow.

More materials

The researchers focused on extrusion 3D printing, a tried-and-true method that involves squirting material through a nozzle to fabricate an object one layer at a time.

To fabricate an electric machine, the researchers needed to be able to switch between multiple materials that offer different functionalities. For instance, the device would need an electrically conductive material to carry electric current and hard magnetic materials to generate magnetic fields for efficient energy conversion.

Most multimaterial extrusion 3D printing systems can only switch between two materials that come in the same form, such as filament or pellets, so the researchers had to design their own. They retrofit an existing printer with four extruders that can each handle a different form of feedstock.

They carefully designed each extruder to balance the requirements and limitations of the material. For instance, the electrically conductive material must be able to harden without the use of too much heat or UV light because this can degrade the dielectric material.

At the same time, the best-performing electrically conductive materials come in the form of inks which are extruded using a pressure system. This process has vastly different requirements than standard extruders that use heated nozzles to squirt melted filament or pellets.

“There were significant engineering challenges. We had to figure out how to marry together many different expressions of the same printing method — extrusion — seamlessly into one platform,” Velásquez-García says.

The researchers utilized strategically placed sensors and a novel control framework so each tool is picked up and put down consistently by the platform’s robotic arms, and so each nozzle moves precisely and predictably.

This ensures each layer of material lines up properly — even a slight misalignment can derail the performance of the finished machine.

Making a motor

After perfecting the printing platform, the researchers fabricated a linear motor, which generates straight-line motion (as opposed to a rotating motor, like the one in a car). Linear motors are used in applications like pick-and-place robotics, optical systems, and baggage conveyors.

They fabricated the motor in about three hours and only needed to magnetize the hard magnetic materials after printing to enable full functionality. The researchers estimate total material costs would be about 50 cents per device. Their 3D-printed motor was able to generate several times more actuation than a common type of linear motor that relies on complex hydraulic amplifiers.

“Even though we are excited by this engine and its performance, we are equally inspired because this is just an example of so many other things to come that could dramatically change how electronics are manufactured,” says Velásquez-García.

In the future, the researchers want to integrate the magnetization step into the multimaterial extrusion process, demonstrate the fabrication of fully 3D-printed rotary electrical motors, and add more tools to the platform to enable monolithic fabrication of more complex electronic devices.

This research is funded, in part, by Empiriko Corporation and the La Caixa Foundation.



de MIT News https://ift.tt/7UyZn6W

martes, 17 de febrero de 2026

New study unveils the mechanism behind “boomerang” earthquakes

An earthquake typically sets off ruptures that ripple out from its underground origins. But on rare occasions, seismologists have observed quakes that reverse course, further shaking up areas that they passed through only seconds before. These “boomerang” earthquakes often occur in regions with complex fault systems. But a new study by MIT researchers predicts that such ricochet ruptures can occur even along simple faults.

The study, which appears today in the journal AGU Advances, reports that boomerang earthquakes can happen along a simple fault under several conditions: if the quake propagates out in just one direction, over a large enough distance, and if friction along the rupturing fault builds and subsides rapidly during the quake. Under these conditions, even a simple straight fault, like some segments of the San Andreas fault in California, could experience a boomerang quake.

These newly identified conditions are relatively common, suggesting that many earthquakes that have occurred along simple faults may have experienced a boomerang effect, or what scientists term “back-propagating fronts.”

“Our work suggests that these boomerang quakes may have been undetected in a number of cases,” says study author Yudong Sun, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “We do think this behavior may be more common than we have seen so far in the seismic data.”

The new results could help scientists better assess future hazards in simple fault zones where boomerang quakes could potentially strike twice.

“In most cases, it would be impossible for a person to tell that an earthquake has propagated back just from the ground shaking, because ground motion is complex and affected by many factors,” says co-author Camilla Cattania, the Cecil and Ida Green Career Development Professor of Geophysics at MIT. “However, we know that shaking is amplified in the direction of rupture, and buildings would shake more in response. So there is a real effect in terms of the damage that results. That’s why understanding where these boomerang events could occur matters.”

Keep it simple

There have been a handful of instances where scientists have recorded seismic data suggesting that a quake reversed direction. In 2016, an earthquake in the middle of the Atlantic Ocean rippled eastward, and then seconds later ricocheted back west. Similar return rumblers may have occurred in 2011 during the magnitude 9 earthquake in Tohoku, Japan, and in 2023 during the destructive magnitude 7.8 quake in Turkey and Syria, among others.

These events took place in various fault regions, from complex zones of multiple intersecting fault lines to regions with just a single, straight fault. While seismologists have assumed that such complex quakes would be more likely to occur in multifault systems, the rare examples along simple faults got Sun and Cattania wondering: Could an earthquake reverse course along a simple fault? And if so, what could cause such a bounce-back in a seemingly simple system?

“When you see this boomerang-like behavior, it is tempting to explain this in terms of some complexity in the Earth,” Cattania says. “For instance, there may be many faults that interact, with earthquakes jumping between fault segments, or fault surfaces with prominent kinks and bends. In many cases, this could explain back-propagating behavior. But what we found was, you could have a very simple fault and still get this complex behavior.”


Faulty friction

In their new study, the team looked to simulate an earthquake along a simple fault system. In geology, a fault is a crack or fracture that runs through the Earth’s crust. An earthquake begins when the stress between rocks on either side of the fault is suddenly released, and one side slides against the other, setting off seismic waves that rupture rocks all along the fault. This seismic activity, which initiates deep in the crust, can sometimes reach and shake up the surface.

Cattania and Sun used a computer model to represent the fundamental physics at play during an earthquake along a simple fault. In their model, they simulated the Earth’s crust as a simple elastic material, in which they embedded a single straight fault. They then simulated how the fault would exhibit an earthquake under different scenarios. For instance, the team varied the length of the fault and the location of the quake’s initiation point below the surface, as well as whether the quake traveled in one versus two directions.

Over multiple simulations, they observed that only the unilateral quakes — those that traveled in one direction — exhibited a boomerang effect. Specifically, these quakes seemed to include a type that seismologists term “back-propagating” events, in which the rumbler splits at some point along the fault, partly continuing in the same direction and partly reversing back the way it came.

“When you look at a simulation, sometimes you don’t fully understand what causes a given behavior,” Cattania says. “So we developed mathematical models to understand it. And we went back and forth, to ultimately develop a simple theory that tells you that you should only see this back-propagation under certain conditions.”

Those conditions, as the team’s new theory lays out, have to do with the friction along the fault. In standard earthquake physics, it’s generally understood that an earthquake is triggered when the stress built up between rocks on either side of a fault is suddenly released. Rocks slide against each other in response, decreasing a fault’s friction. The reduction in fault friction creates a positive feedback that facilitates further sliding, sustaining the earthquake.

However, in their simulations, the team observed that when a quake travels along a fault in one direction, it can back-propagate when friction along the fault goes down, then up, and then down again.

“When the quake propagates in one direction, it produces a ‘braking’ effect that reduces the sliding velocity, increases friction, and allows only a narrow section of the fault to slide at a time,” Cattania says. “The region behind the quake, which stops sliding, can then rupture again, because it has accumulated more stress to slide again.”

The team found that, in addition to traveling in one direction and along a fault with changing friction, a boomerang is likely to occur if a quake has traveled over a large enough distance.

“This implies that large earthquakes are not simply ‘scaled-up’ versions of small earthquakes, but instead they have their own unique rupture behavior,” Sun says.

The team suspects that back-propagating quakes may be more common than scientists have thought, and they may occur along simple, straight faults, which are typically older than more complex fault systems.

“You shouldn’t only expect this complex behavior on a young, complex fault system. You can also see it on mature, simple faults,” Cattania says. “The key open question now is how often rupture reversals, or ‘boomerang’ earthquakes, occur in nature. Many observational studies so far have used methods that can’t detect back-propagating fronts. Our work motivates actively looking for them, to further advance our understanding of earthquake physics and ultimately mitigate seismic risk.”



de MIT News https://ift.tt/6XL034H

MIT community members elected to the National Academy of Engineering for 2026

Seven MIT researchers are among the 130 new members and 28 international members recently elected to the National Academy of Engineering (NAE) for 2026. Twelve additional MIT alumni were also elected as new members.

One of the highest professional distinctions for engineers, membership in the NAE is given to individuals who have made outstanding contributions to “engineering research, practice, or education,” and to “the pioneering of new and developing fields of technology, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education.”

The seven MIT electees this year are:

Moungi Gabriel Bawendi, the Lester Wolfe Professor of Chemistry in the Department of Chemistry, was honored for the synthesis and characterization of semiconductor quantum dots and their applications in displays, photovoltaics, and biology.

Charles Harvey, a professor in the Department of Civil and Environmental Engineering, was honored for contributions to hydrogeology regarding groundwater arsenic contamination, transport, and consequences.

Piotr Indyk, the Thomas D. and Virginia W. Cabot Professor in the Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory, was honored for contributions to approximate nearest neighbor search, streaming, and sketching algorithms for massive data processing.

John Henry Lienhard, the Abdul Latif Jameel Professor of Water and Mechanical Engineering in the Department of Mechanical Engineering, was honored for advances and technological innovations in desalination.

Ram Sasisekharan, the Alfred H. Caspary Professor of Biological Physics and Physics in the Department of Biological Engineering, was honored for discovering the U.S. heparin contaminant in 2008 and creating clinical antibodies for Zika, dengue, SARS-CoV-2, and other diseases.

Frances Ross, the TDK Professor in the Department of Materials Science and Engineering, was honored for ultra-high vacuum and liquid-cell transmission electron microscopies and their worldwide adoptions for materials research and semiconductor technology development.

Zoltán Sandor Spakovszky SM ’99, PhD ’01, the T. Wilson (1953) Professor in Aeronautics in the Department of Aeronautics and Astronautics, was honored for contributions, through rigorous discoveries and advancements, in aeroengine aerodynamic and aerostructural stability and acoustics.

“Each of the MIT faculty and alumni elected to the National Academy of Engineering has made extraordinary contributions to their fields through research, education, and innovation,” says Paula T. Hammond, dean of the School of Engineering and Institute Professor in the Department of Chemical Engineering. “They represent the breadth of excellence we have here at MIT. This honor reflects the impact of their work, and I’m proud to celebrate their achievement and offer my warmest congratulations.”

Twelve additional alumni were elected to the National Academy of Engineering this year. They are: Anne Hammons Aunins PhD ’91; Lars James Blackmore PhD ’07; John-Paul Clarke ’91, SM ’92, SCD ’97; Michael Fardis SM ’77, SM ’78, PhD ’79; David Hays PhD ’98; Stephen Thomas Kent ’76, EE ’78, ENG ’78, PhD ’81; Randal D. Koster SM ’85, SCD ’88; Fred Mannering PhD ’83; Peyman Milanfar SM ’91, EE ’93, ENG ’93, PhD ’93; Amnon Shashua PhD ’93; Michael Paul Thien SCD ’88; and Terry A. Winograd PhD ’70.



de MIT News https://ift.tt/rsft9K3

The strength of “infinite hope”

Dean of Engineering Paula Hammond ’84 PhD ’93 made a resounding call for the MIT community to “embrace endless hope” and “never stop looking forward,” in a keynote address at the Institute’s annual MLK Celebration on Wednesday, Feb. 11.

“We each have a role to play in contributing to our future, and we each must embrace endless hope and continuously renew our faith in ourselves to accomplish that dream,” Hammond said, to an audience of hundreds at the event.

She added: “Whether it is through caring for those in our community, teaching others, providing inspiration, leadership, or critical support to others in their moment of need, we provide support for one another on our journey … It is that future that will feed the optimism and faith that we need to move forward, to inspire and encourage, and to never stop looking forward.”

The MLK Celebration is an annual tribute to the life and legacy of Martin Luther King Jr., and is always thematically organized around a quotation of King’s. This year, that passage was, “We must accept finite disappointment, but never lose infinite hope.”

Hammond and multiple other speakers at the event organized their remarks around that idea, while weaving in personal reflections about the importance of community, family, and mentorship.

As Hammond noted, “We can lay the path toward a better, greater time with the steps that we take today even in the face of incredible disappointment, shock and disruption.” She added: “Principles founded in fear, ignorance, or injustice ultimately fail because they do not meet the needs of a growing and prosperous nation and world.”

The event, which took place in MIT’s Walker Memorial (Building 50), featured remarks by students, staff, and campus leaders, as well as musical performances by the recently reconstituted MIT Gospel Choir. (Listen to one of those performances by clicking on the player at the end of this article.)

MIT President Sally A. Kornbluth provided introductory remarks, noting that this year’s event was occurring during “a time when feeling fractured, isolated, and pitted against each other feels exhaustingly routine. A time when it’s easy to feel discouraged.” As such, she added, “the solace we take from [coming together at this event] couldn’t be more relevant now.”

Kornbluth also offered laudatory thoughts about Hammond, a highly accomplished research scientist who has held numerous leadership roles at MIT and elsewhere. Hammond, a chemical engineer, was named dean of the MIT School of Engineering in December. Prior to that, she served as vice provost for faculty, from 2023 to 2025, and head of the Department of Chemical Engineering, from 2015 to 2023. In honor of her accomplishments, Hammond was named an Institute Professor, MIT’s highest faculty honor. A member of MIT’s Koch Institute for Integrative Cancer Research, Hammond has developed polymers and nanoscale materials with multiple applications, including drug delivery, imaging, and even battery advances.

Hammond was awarded the National Medal of Technology and Innovation in 2024. That year she also received MIT’s Killian Award, for faculty achievement. And she has earned the rare distinction of having been elected to all three national academies — the National Academy of Engineering, the National Academy of Medicine, and the National Academy of Sciences.

“I’ve never met anyone who better represents MIT’s highest values and aspirations than Paula Hammond,” Kornbluth said, citing both Hammond’s record of academic excellence and Institute service.

Among other things, Kornbluth observed, “Paula has been a longtime champion of MIT’s culture of openness to people and ideas from everywhere. In fact, it’s hard to think of anyone more open to sharing what she knows — and more interested in hearing your point of view. And the respect she shows to everyone — no matter their job or background — is an example for us all.”

Michael Ewing ’27, a mechanical engineering major, provided welcoming remarks while introducing the speakers as well as the MLK Celebration planning committee.

Ewing noted that the event remains “extremely and vitally important” to the MIT community, and reflected on the meaning of this year’s motif, for individuals and larger communities.

“Dr. King’s hope constitutes the belief that one can make things better, even when current conditions are poor,” Ewing said. “In the face of adversity, we must remain connected to what’s most important, be grateful for both the challenges and the opportunities, and hold on to the long-term belief that no matter what, no matter what, there’s an opportunity for us to learn, grow, and improve.”

The annual MLK Celebration also highlighted further reflections from students and staff on King’s life and legacy and the value of his work.

“Everyone that has fought for a greater good in this world has left the battle without something that they came with,” said Oluwadara Deru, a senior in mechanical engineering and the featured undergraduate speaker. “But what they gained is invaluable.”

Ekua Beneman, a graduate student in chemistry, offered thoughts relating matters of academic achievement, and helping others in a university setting, to the larger themes of the celebration.

“Hope is not pretending disappointment doesn’t exist,” Beneman said. “Hope is choosing to pass forward what was once given to you. At a place like MIT, infinite hope looks like mentorship. It looks like making space. It looks like sharing knowledge instead of guarding or gatekeeping it. If we truly want to honor Dr. King’s legacy, beyond this beautiful celebration today, we do it by choosing community, mentorship, and hope in action.”

Denzil Streete, associate dean and director of the Office of Graduate Education, related the annual theme to everyday life at the Institute, as well as social life everywhere.

“Hope lies in small, often uncelebrated acts,” Streete said. “Showing up. Being present. Responding with patience. Translating complicated processes into next steps. Making one more call. Sending one more email.”

He concluded: “See your daily work as moral work … Every day, through joy and care, we choose infinite hope, for our students, and for one another.”

Reverend Thea Keith-Lucas, chaplain to the Institute and associate dean in the Office of Religious, Spiritual, and Ethical Life, offered both an invocation and a benediction at the event.

The annual celebration also includes the presentation of the Dr. Martin Luther King Jr. Leadership Awards, given this year to Melissa Smith PhD ’12, Fred Harris, Carissma McGee, Janine Medrano, and Edwin Marrero.

For all the turbulence in the world, Hammond said toward the conclusion of her address, people can continue to make progress in their own communities, and can be intentional about focusing, in part, on the possibilities of progress ahead.

At MIT, Hammond noted, “The commitment of our faculty, students, and staff to continuously learn, to ask deep questions and to apply our knowledge, our perspectives and our insights to the biggest world problems is something that gives me infinite hope and optimism for the future.”



de MIT News https://ift.tt/dMtRWpF

lunes, 16 de febrero de 2026

Exploring the promise of regenerative aquaculture at an Arkansas fish farm

In many academic circles, innovation is imagined as a lab-to-market pipeline that travels through patent filings, venture rounds, and coastal research hubs. But a growing movement inside U.S. universities is pushing students toward a different frontier: solving real engineering problems alongside rural communities whose challenges directly shape national food security. 

A compelling example of this shift can be found in the story of Kiyoko “Kik” Hayano, a second-year mechanical engineering student at MIT, and her work through MIT D-Lab with Keo Fish Farms, a commercial aquaculture operation in the Arkansas Delta.

Hayano’s journey — from a small, windswept town in rural Wyoming to MIT’s campus in Cambridge, Massachusetts, and on to a working Arkansas fish farm — offers a tangible glimpse into how applied engineering, academic partnerships, and on-the-ground innovation can create new models for regenerative agriculture in the United States.

Wyoming childhood and an engineering dream

Hayano grew up in Powell, Wyoming (population ~6,400), a community defined by agriculture, water scarcity, and long distances. Her early interests in gardening with her grandmother and tinkering with irrigation projects through her high school’s agricultural center formed the foundation for a more ambitious goal: studying mechanical engineering at MIT.

That ambition paid off. Shortly after arriving in Cambridge, Hayano connected with MIT D-Lab, a program founded to co-create engineering solutions with communities, rather than for them — especially in regions facing poverty, resource constraints, or climate-related disruptions. For many MIT students, D-Lab is their entry point into field-based development work across Africa, Latin America, and Southeast Asia. Increasingly, however, the program has expanded its domestic mission to include rural areas of the United States experiencing food, water, and energy insecurity.

MIT D-Lab meets the Arkansas Delta

That domestic shift set the stage for a new joint effort. In 2024, Keo Fish Farms — a commercial aquaculture farm near Keo, Arkansas — contacted D-Lab seeking technical collaboration on a growing water quality challenge. The farm had begun to observe elevated iron levels in its groundwater, leading to fish mortality events during peak summer conditions. The problem was both biological and mechanical: Aquaculture species like hybrid striped bass and triploid grass carp require consistent, clean water inputs, and well systems tapping iron-rich geologic layers were compromising fish health, hatchery performance, and long-term viability.

Kendra Leith, MIT D-Lab associate director for research, saw an opportunity. The Delta region represents a collision of three major realities that matter deeply to both public policy and academic research: high-value protein production, aging or inadequate water infrastructure, and generational rural decline.

For Hayano, the chance to work on an important engineering problem with environmental, agricultural, and economic implications was exactly why she chose mechanical engineering in the first place.

Applied engineering in a living laboratory

When Hayano arrived at Keo Fish Farms, the project was structured as a co-creative engineering engagement — D-Lab’s core model. She documented the existing water intake system, analyzed the well depth relative to geological iron strata, and evaluated filtration options including aeration, sedimentation, and emerging biochar-based media.

The collaboration delivered immediate academic value in three ways. First, the team reviewed real constraints, a process known as ground truthing. In this case, those constraints included iron levels that shift seasonally, capital budgets that do not assume infinite funding, and labor cycles tied to harvest seasons. Second, the team scoped technologies that could mitigate the problem: iron-reduction options ranged from drilling deeper wells to incorporating biochar and other regenerative filtration media capable of binding contaminants while improving soil and plant health elsewhere on the farm. Finally, they assessed policy relevance: Water quality in aquaculture sits at the intersection of U.S. Department of Agriculture (USDA) conservation programs, Environmental Protection Agency (EPA) water standards, climate-driven aquifer variability, and domestic protein security — issues central to U.S. food systems.

Leith notes that “the most transformative experiences happen when students and communities learn from one another.” The Keo project, she adds, is an example of how domestic food production systems can act as test beds for innovation that previously would have been deployed exclusively abroad.

Regenerative agriculture as a national opportunity

While Keo Fish Farms played a supporting role in the narrative, the project highlighted a broader challenge and opportunity: Can U.S. aquaculture transition toward regenerative agriculture principles?

Regenerative agriculture — long associated with row crops, grazing systems, and soil carbon — rarely includes aquaculture in the national conversation. Yet aquaculture sits at the nexus of water chemistry, nutrient cycling, renewable energy integration, biochar and filtration research, protein production, and greenhouse gas mitigation.

Hayano’s work helped illuminate that regenerative aquaculture will likely depend on regenerative water systems, where filtration, biochar, solar energy, and nutrient reuse form a closed-loop infrastructure, rather than a linear extract–use–discharge model.

D-Lab’s domestic projects increasingly intersect with this space, creating pathways for MIT students and faculty to engage with USDA, U.S. Department of Energy (DoE), and National Science Foundation (NSF) priorities around rural innovation, renewable energy, and water systems engineering.

The role of industry partners: less spotlight, more signal

Keo Fish Farms’ involvement served as a platform — not a spotlight — for the engineering and policy implications emerging from the project. The farm provided three critical ingredients academic institutions often lack: a real commercial engineering problem with economic consequences, a living laboratory for field research and prototyping, and a pathway for future regenerative adoption at scale.

The farm’s leadership has stated that its long-term goal is to become a first-in-class demonstration site for regenerative aquaculture in the United States, combining advanced iron and sediment filtration, biochar production from local rice hull waste streams, renewable solar energy systems, water recycling and nutrient recovery, reduced chemical inputs, and habitat and biodiversity considerations.

To be sure, the D-Lab collaboration did not solve that entire puzzle, but it created the blueprint for a pathway, showing how academic partnerships can accelerate regenerative transitions in rural U.S. agriculture and aquaculture systems.

Lessons for universities and policymakers

For universities, the Keo–MIT D-Lab partnership offers a replicable model of experiential learning for STEM students, field-based regenerative research, technology validation in live agricultural systems, and cross-disciplinary collaboration. For federal and state policymakers, it illustrates how rural communities can serve as innovation sites, why water infrastructure modernization matters to food security, how regenerative agriculture can expand beyond soil and grazing, and why public-private-academic partnerships deserve new funding pathways.

All of this aligns with emerging priorities at the USDA, DoE, NSF, and EPA around sustainability, climate resilience, and domestic protein systems.

For Hayano, the experience reinforced that engineering careers can be rooted not only in Silicon Valley labs or aerospace firms, but also in overlooked rural systems that feed the country. 

“I’m really grateful for the experience,” she reflected after the project. “It opened my eyes to how engineering can support sustainable food systems and rural communities.”

The sentiment echoes a broader trend among students seeking careers at the intersection of technology, environment, and public good. Whether Hayano returns to the Arkansas Delta or not, her path captures something deeply relevant to America’s innovation story: talent emerging from rural places, innovating at world-class institutions, and returning engineering capacity back into the country’s agricultural heartland.

It is, in many ways, a modern form of the American dream — one grounded not in abstraction, but in water, food, soil, and the systems that will define our next century.



from MIT News https://ift.tt/1uEem8V