Wednesday, April 22, 2026

New chip can protect wireless biomedical devices from quantum attacks

As quantum computers advance, they are expected to be able to break tried-and-true security schemes that currently keep most sensitive data secure from attackers. Scientists and policymakers are working to design and implement post-quantum cryptography to defend against these future attacks.

MIT researchers have developed an ultra-efficient microchip that can bring post-quantum cryptography techniques to wireless biomedical devices, like pacemakers and insulin pumps. Such wearable, ingestible, or implantable devices are usually too power-constrained to implement these computationally demanding security protocols.

Their tiny chip, which is about the size of a very fine needle tip, also includes built-in protections against physical hacking attempts that can bypass encryption to steal user data, such as a patient’s social security number or device credentials. Compared to prior designs, the new technology is more than an order of magnitude more energy-efficient.

In the long run, the new chip could enable next-generation wireless medical devices to maintain strong security even as quantum computing becomes more prevalent. In addition, it could be applied to many types of resource-constrained edge devices, like industrial sensors and smart inventory tags.

“Tiny edge devices are everywhere, and biomedical devices are often the most vulnerable attack targets because power constraints prevent them from having the most advanced levels of security. We’ve demonstrated a very practical hardware solution to secure the privacy of patients,” says Seoyoon Jang, an MIT electrical engineering and computer science (EECS) graduate student and lead author of a paper on the chip.

Jang is joined on the paper by Saurav Maji PhD ’23; visiting scholar Rashmi Agrawal; EECS graduate students Hyemin Stella Lee and Eunseok Lee; Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and an associate member of the Broad Institute of MIT and Harvard; and senior author Anantha Chandrakasan, MIT provost and the Vannevar Bush Professor of Electrical Engineering and Computer Science. The research was recently presented at the IEEE Custom Integrated Circuits Conference.

Stronger security

A large percentage of wireless biomedical devices, like ingestible biosensors for health monitoring, currently lack strong protection due to the computational demands of existing security protocols, Jang says.

And the complexity of post-quantum cryptography (PQC) compounds the problem: it can increase power consumption by two to three orders of magnitude.

Implementing PQC is of paramount importance, since standards bodies like the National Institute of Standards and Technology (NIST) will soon begin phasing out traditional cryptography protocols in favor of stronger PQC algorithms. In addition, some industry leaders believe rapid advances in quantum hardware make PQC implementation even more urgent.

To bring these power-hungry PQC protocols to wireless biomedical devices, the MIT researchers designed a customized microchip, known as an application-specific integrated circuit (ASIC), that greatly reduces energy overhead while guaranteeing the highest level of security.

“PQC is very secure algorithmically, but making a device resilient against physical attacks usually requires additional countermeasures that pump up the energy consumption at least two or three times. We want our chip to be robust to both security threats in a very lightweight manner,” Jang says.

A multi-pronged approach

To accomplish these goals, the researchers incorporated several design features into the chip.

First, they implemented two different PQC schemes to enhance robustness and “future-proof” their device in case one scheme is later proven to be insecure. To boost energy efficiency, they applied techniques that enable the PQC algorithms to share as much of the chip’s computational resources as possible.
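The article does not name the two schemes, but one concrete way to picture this kind of resource sharing is that many PQC algorithms, including the NIST-standardized lattice schemes ML-KEM and ML-DSA, are built on the same core operation: polynomial multiplication in a ring. A single multiplier block can therefore serve more than one scheme. The following Python sketch illustrates the idea in software; the moduli are the real constants from those two standards, but the chip's actual schemes and architecture are not described here.

```python
def poly_mul(a, b, q, n):
    """Schoolbook negacyclic multiplication in Z_q[x]/(x^n + 1), the
    workhorse of many lattice-based PQC schemes. Hardware would use a
    shared NTT datapath for the same job; this is the plain version."""
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                c[k] = (c[k] + ai * bj) % q
            else:  # x^n = -1, so overflow terms wrap with a sign flip
                c[k - n] = (c[k - n] - ai * bj) % q
    return c

# Two different schemes reuse the same kernel with their own modulus:
kem_prod = poly_mul([1] * 256, [2] * 256, q=3329, n=256)     # ML-KEM parameters
sig_prod = poly_mul([1] * 256, [2] * 256, q=8380417, n=256)  # ML-DSA parameters
```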

Second, the researchers designed a highly efficient, on-chip true random number generator. This device continually generates random numbers to use for secret keys, which is essential to implement PQC.

Their on-chip design improves energy efficiency and security over standard approaches that usually receive random numbers from an external chip.
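There is no software equivalent of a true hardware entropy circuit, but the principle is easy to sketch: secret keys must come from an unpredictable entropy source, never from a deterministic pseudorandom generator whose seed an attacker might recover. A minimal Python illustration, using the operating system's entropy pool as a stand-in for an on-chip TRNG:

```python
import random
import secrets

# A seeded pseudorandom generator is fully deterministic: anyone who
# learns or guesses the seed can regenerate every "random" key it makes.
random.seed(1234)
weak_key = random.getrandbits(256).to_bytes(32, "big")

# A cryptographic source backed by OS/hardware entropy (the software
# stand-in for an on-chip TRNG) yields keys an attacker cannot replay.
strong_key = secrets.token_bytes(32)  # 256-bit secret key

print("weak:  ", weak_key.hex())    # identical on every run with this seed
print("strong:", strong_key.hex())  # different on every run
```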

Third, they implemented countermeasures that prevent a type of physical hacking attempt, called a power side-channel attack, but only on the most vulnerable parts of the PQC protocols.

In power side-channel attacks, hackers steal secret information by analyzing the power consumption of a device while it processes data. The MIT researchers added just enough redundancy to the PQC operations to ensure the chip is protected from these types of attacks.
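The article does not detail the countermeasure, but a standard way to add this kind of redundancy is masking: each secret value is split into random shares, so the power drawn while processing any single share is statistically independent of the secret. Below is a minimal sketch of first-order Boolean masking on bytes, an illustration of the general technique rather than the chip's circuit:

```python
import secrets

def mask(secret_byte: int) -> tuple[int, int]:
    """Split a secret byte into two shares whose XOR equals the secret.
    Each share on its own is uniformly random, so operating on one share
    at a time leaks nothing useful through the power trace."""
    r = secrets.randbelow(256)
    return r, secret_byte ^ r

def masked_xor(a, b):
    """XOR two masked values share-by-share; the unmasked secret never
    appears in any intermediate variable."""
    return a[0] ^ b[0], a[1] ^ b[1]

def unmask(shares) -> int:
    return shares[0] ^ shares[1]

x, y = mask(0x3C), mask(0xA5)
z = masked_xor(x, y)
assert unmask(z) == 0x3C ^ 0xA5  # correctness check
```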

Fourth, they designed an early fault-detection mechanism so the chip will abort operations early if it detects a voltage glitch.

Wireless biomedical devices often have erratic power supplies, so they are susceptible to glitches that can cause an entire security procedure to fail. The MIT approach saves energy by stopping the chip from running a doomed procedure to completion.
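In software terms, the pattern is a guard that samples the supply rail before each round of the procedure and bails out at the first anomaly instead of finishing a doomed run. In the sketch below, the thresholds, the read_supply_voltage callback, and the round function are hypothetical stand-ins, not the chip's actual interfaces:

```python
import random

V_MIN, V_MAX = 1.62, 1.98  # nominal 1.8 V rail with a 10 percent guard band

class VoltageGlitch(Exception):
    pass

def secure_procedure(n_rounds, state, read_supply_voltage, round_fn):
    for i in range(n_rounds):
        v = read_supply_voltage()
        if not V_MIN <= v <= V_MAX:
            # Abort immediately: finishing a glitched run wastes energy
            # and risks emitting a faulty, exploitable result.
            raise VoltageGlitch(f"rail at {v:.2f} V in round {i}")
        state = round_fn(state)
    return state

# Toy demo: a noisy rail that occasionally strays outside the guard band.
noisy_rail = lambda: random.gauss(1.80, 0.08)
try:
    secure_procedure(100, 0, noisy_rail, lambda s: s + 1)
except VoltageGlitch as e:
    print("aborted early:", e)
```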

“At the end of the day, because of the techniques we utilized, we can apply these post-quantum cryptography primitives while adding nothing to the overhead, with the added benefit of robustness to side-channel attacks,” Jang says.

Their device achieved between 20 and 60 times higher energy efficiency than all other PQC security implementations they compared it to, with a more compact area than many existing chips.

“As we transition into post-quantum approaches, providing strong security for even the most resource-limited devices is essential. This work shows that robust cryptographic protection for biomedical and edge devices can be achieved alongside energy efficiency and programmability,” says Chandrakasan.

In the future, the researchers want to apply these techniques to other vulnerable applications and energy-constrained devices.

This research was funded, in part, by the U.S. Advanced Research Projects Agency for Health.



from MIT News https://ift.tt/EDGzMXi

Tuesday, April 21, 2026

How morality and ethics shaped India’s economic development

In a world leaning away from globalization, governments face a tough choice: Should they block dominant foreign companies to protect local businesses, or welcome them in hopes of fast-tracking economic growth and modernization? 

In his recently published book, “Traders, Speculators, and Captains of Industry: How Capitalist Legitimacy Shaped Foreign Investment Policy in India” (Harvard University Press, November 2025), Jason Jackson, associate professor in political economy and urban planning in the MIT Department of Urban Studies and Planning, explains that these policy decisions aren’t just math, but long-standing and often heated moral debates over how businesses should conduct themselves, and who they serve.

Jackson argues that morality has a long history in economics and deserves more attention because, while ever-present in economic policy discourse, moral beliefs are often under-recognized or underappreciated.

“India is an exemplary case of ways in which moral beliefs shape economic policy decisions,” says Jackson. “But at the same time, I think it’s representative of a general feature of capitalism. It’s the perfect case.”

Jackson’s focus on India for this book stems from his interest in industrial policy and the politics of international development. Multinational firms have long been a source of controversy. They are seen as bringing two crucial resources to developing countries: finance and technology. However, while multinationals are potentially valuable contributors to economic development through the mechanism of foreign direct investment (FDI), they can also be monopolistic, dominating local industries and displacing domestic firms.

This long-standing tension in foreign investment policy became the backdrop for several emerging markets in developing countries — Brazil, Russia, India, China, and South Africa (BRICS) — in the early 2000s. India was growing at an extremely high rate — 6-7 percent annually — and Indian companies were doing well, including those in industries seen as key to development, such as autos. Jackson wanted to understand why Indian companies were holding their own relative to foreign firms, which dominated manufacturing in other places, and planned to focus on the period from the 1980s through the 2010s, which coincides with economic liberalization in India and, more broadly, with globalization. But while conducting field work, Jackson noticed that in describing how they made industrial policy decisions, Indian policymakers drew distinctions between firms that were fashioned in moral terms. Some firms, policymakers believed, would invest in technology and provide good jobs; other firms — both foreign and domestic — were seen as exploitative and uninterested in activities that would advance economic growth and industrial transformation.

“I realized these distinctions had deep salience,” says Jackson. “My interlocutors would describe firms — especially foreign firms they saw as simply trading, or as exploitative — as ‘New East India’ companies, referencing the famous East India Company that was the governance authority in colonial India, but had been defunct for more than 150 years. That forced my research to become more historical, increasingly relying on archival work to make sense of these moralized distinctions between different types of business actors, whether foreign or domestic, and to understand how these beliefs became so powerful across Indian society.”

“Moral categories of capitalist legitimacy”

Jackson says there are several ways in which social scientists think that policymakers make decisions. One view considers the competing interest groups policymakers must negotiate with, in which case outcomes may depend on one group having more influence or power than others. Another approach assumes these individuals make decisions based on self-interest, particularly when their choices are perceived as corrupt.

“But what I found is that neither of these approaches gave enough credence to the ways in which policymakers in India grapple with quite technical and complex policy decisions regarding the type of development they want to promote in their country, and the types of companies they thought could help to achieve their development goals,” says Jackson. “Therefore, I was more interested in trying to understand what kind of ideas and beliefs animated their decision-making.”

What Jackson found was that Indian policymakers viewed both foreign firms and local Indian companies through what he terms “moral categories of capitalist legitimacy.” Would these firms invest in productive technologies? Would they provide good employment for the local population? Or would they be exploitative? These criteria were not only applied to multinational corporations. Even Indian family-controlled business groups were evaluated as to whether the gains accrued stayed within the confines of the extended family or whether they provided broader societal benefits. 

Coca-Cola goes to India

The story of Coca-Cola in India is an example of the tension experienced with regulating foreign investment where multinational companies were seen as exploitative. The company made its initial foray into India in the 1950s, and over the next two decades its reach became extensive. In the late 1970s, India’s Minister of Industry George Fernandes was visiting a village in Bihar — a state with one of the highest levels of poverty — when he asked for a glass of water. Instead, he was told the water was not suitable to drink, and was given Coca-Cola.

“This struck Fernandes as deeply problematic,” says Jackson. “He later recalled thinking that ‘after 30 years of freedom in India, our villages do not have clean drinking water, but they do have Coca-Cola — which, of course, is made with purified water, so safe to drink. How was this possible?’” Fernandes returned to his office in New Delhi determined to do something about it.

Just a few years earlier, India had passed a law, the Foreign Exchange Regulation Act (FERA), which required foreign companies to dilute their equity to no more than 40 percent. The law was explicitly designed to encourage technology transfer, but Coca-Cola had not complied. Fernandes told Coca-Cola that it had to take on an Indian partner or it would have to leave. Coca-Cola chose the latter. In the following year, IBM was also kicked out of India when it similarly balked at complying with FERA and sharing its technology.

“These companies were very much seen in the mold of the East India Co.,” says Jackson. “A firm comes from abroad and extracts resources from India while giving little benefit to the country. These are all very clearly morally coded beliefs that played a crucial role in these policy decisions.”

With Coca-Cola out of India, the beverage market became wide open, and several Indian companies emerged. Thums Up, an Indian cola brand founded by Ramesh Chauhan ’62, took off and became the dominant cola by the 1980s; Chauhan had developed its formula independently.

In 1991, India accelerated its economic liberalization, especially around FDI, and FERA’s standards were diluted. Coca-Cola returned to India, again without a partner. Other major brands, including Pepsi, had also entered the market. By then, Thums Up held well over 80 percent of the Indian cola market, but, concerned about its ability to compete in a war between the deep-pocketed American multinational giants, it sold out to Coca-Cola for $60 million in 1993, a figure later widely seen as too low.

Trader, speculator, or captain of industry?

Jackson says that in India, there were two competing interpretations of this story. In one version, Fernandes kicking out a global multinational firm was seen as a developing country establishing its economic sovereignty by making a bold policy decision and “risking all kinds of geopolitical blowback that might follow from the U.S.,” says Jackson. “In this view, the Indian government’s bold move allowed local entrepreneurs and local companies like Chauhan and Thums Up to emerge.”

Yet an important counternarrative emerged that challenged the view that companies like Thums Up and figures like Chauhan were enterprising entrepreneurs.

“Maybe they just took advantage of protectionism to form a company and make some money,” says Jackson. “So rather than being an intrepid captain of industry, observers wondered whether maybe Chauhan was ‘simply a trader’ who took advantage of policy protection, but sold out as soon as the market became competitive.”

Later developments added some credibility to this view. Ironically, Coca-Cola was unable to remove Thums Up and Limca, another soda brand from Chauhan’s company, from its product lineup, and both remained extremely popular and widely consumed. This suggested to many observers that Thums Up could have survived the cola wars had it not sold out to the American multinational. The public had acquired a taste for the distinctly Indian beverages that Chauhan had created.

“This narrative encapsulates this kind of tension policymakers face: If we provide policy support to our enterprising entrepreneurs and they thrive, will they also do well for the country? Or are they simply opportunists who will take advantage of policy support in ways that benefit themselves but have few broader benefits for the country?” says Jackson.

This episode was just one of dozens of instances of conflicts between Indian companies and multinational firms in the liberalizing 1990s and 2000s, which the government was often compelled to adjudicate. Throughout this period, the question persisted: How would policymakers identify the business figures who could be agents of industrial development and economic transformation, whether foreign or domestic? 

Ramesh Chauhan, for one, continued on an enterprising path. He turned his attention to the bottled water industry in India, and his brand — Bisleri — remains one of the country’s leading bottled water brands today.



from MIT News https://ift.tt/37rhlqV

Tackling the housing shortage with robotic microfactories

A national housing shortage is straining finances and communities across the United States. In Massachusetts, at least 222,000 homes will have to be built in the next 10 years to meet the population's needs. At the same time, there are numerous challenges in traditional construction. There's a shortage of skilled construction workers. Most projects involve multiple contractors and subcontractors, adding complexity and lag time. And the construction process, as well as the buildings themselves, can be a major source of emissions that contribute to climate change.

Reframe Systems, co-founded by Vikas Enti SM '20, uses robotics, software, and high-performance materials to address these problems. Founded in 2022, the company deploys microfactories that bring housing fabrication and production closer to the regions where the homes are needed. The first homes designed and manufactured in Reframe's first microfactory have been fully built in Arlington and Somerville, Massachusetts. 

Enti's experiences in MIT System Design and Management (SDM) shaped the company from its start. "Learning how to navigate the system and finding the optimal value for each stakeholder has been a key part of the business strategy," he says, "and that's rooted in what I learned at SDM."

Better tools for system-level problems

Enti applied to SDM's master of science in engineering and management while he was working at Kiva Systems, overseeing its acquisition by Amazon and transformation into Amazon Robotics. He found that the SDM program's fundamentals of systems engineering, system architecture, and project management provided him with the tools he needed to address system-level problems in his work.

While he was at MIT, Enti also served as an associate director for the MIT $100K Entrepreneurship Competition, which offers students and researchers mentorship, feedback, and potential funding for their startup ideas. He realized that "there isn't a single formula for how businesses start, or how long it takes to get them started," he says, which helped shape his plans to start his own business.

Enti took a leave of absence from MIT to oversee the expansion of Amazon Robotics in Europe. He returned and completed his degree in 2020, writing his thesis on developing technology that could mitigate falls among elderly people. This instinct to use his education for a good cause resurfaced when his daughters were born. He wanted his future business to address a real-world problem and have a social impact, while also reducing carbon emissions.

Growing housing, shrinking emissions

Enti concluded that housing, with immediate real-world impact and a significant share of global carbon emissions, was the right problem to work on. He reached out to his colleagues Aaron Small and Felipe Polido from Amazon Robotics to share his idea for advanced, low-cost factories that could be deployed quickly and close to where they were needed. The two joined him as co-founders.

Currently, the microfactory in Andover, Massachusetts, produces structural panels, with robotics completing wall and ceiling framing and people completing the rest of the work, including wiring and plumbing. Eventually, Reframe hopes to automate more of the building process through further use of robotics. The modular construction process allows for reduced waste and disruption on the eventual home site. And the finished homes are designed to be energy-efficient and ready for solar panel installation. The company is set to start work soon on a group of homes in Devens, Massachusetts.

In addition to the Andover location, Reframe is setting up in southern California to help rebuild homes that were destroyed in the area's January 2025 wildfires. The company's software-assisted design process and the adjustability of the microfactories allow it to meet local zoning and building codes and align with the local architectural aesthetic. This means that in Somerville, Reframe's completed buildings look like modernized versions of the neighboring three-story buildings, known locally as "triple-deckers." On the other side of the country, Reframe's design offerings include Spanish-style and craftsman homes.

"Housing is a complex systems problem," Enti says, explaining the impact SDM has had on his work at Reframe. The methods and tools taught in the integrated core class EM.412 (Foundations of System Design and Management) help him tackle systems-level problems and take the needs of multiple stakeholders into account. The Reframe team used technology roadmapping as they devised their overall business plan, inspired by the work of Olivier de Weck, associate head of the MIT Department of Aeronautics and Astronautics. And lectures on project management from Bryan Moser, SDM's academic director, remain relevant. 

"Embracing the fact that this is a systems problem, and learning how to navigate the system and the stakeholders to make sure we're finding the optimal value, has been a key part of the business strategy," Enti says.

Reframe Systems is set to continue learning through iteration as it expands its network of microfactories. The company remains committed to its core vision of sustainably meeting the country's need for more housing. "I'm grateful we get to do this," Enti says. "Once you strip away all the robotics, the advanced algorithms, and the factories, these are high-quality, healthy homes that families get to live in and grow."



from MIT News https://ift.tt/S5OokeK

How to expand the US economy

It’s an essential insight about our world: Innovation drives economic growth. For the U.S. to thrive, it must keep innovating. But how, and in what areas?

A new book co-authored by MIT faculty members focuses on six key areas where technology advances can drive the economy and support national security.

Those sectors — semiconductors, biotechnology, critical minerals, drones, quantum computing, and advanced manufacturing — are all built on U.S. know-how but are also areas where the country has either yielded a lead in production or innovation, or could yet fall behind.

As the book explains, a roadmap for U.S. prosperity and security involves sustaining notable areas of innovation and the national research ecosystem behind them, while rebuilding domestic manufacturing.

“In each of these areas, there are breakthroughs to be had, where the U.S. can leapfrog competitors and gain an advantage,” says Elisabeth Reynolds, an MIT expert on industrial innovation and editor of the new volume. “That’s a very exciting part of this.” She adds: “These areas are front and center for U.S. national economic and security policy.”

The book, “Priority Technologies: Ensuring U.S. Security and Shared Prosperity,” is published this week by the MIT Press. It features chapters by MIT faculty with expertise on the industrial sectors in question. Reynolds, a professor of the practice in MIT’s Department of Urban Studies and Planning, has long advocated for innovation-based growth that helps the U.S. workforce.

“All of this can be good for everyone,” says MIT economist Simon Johnson, who wrote the foreword to the book. “Out of that flow of innovations and ideas, we can create more good jobs for all Americans. Pushing the technological frontier and turning that into jobs is definitely going to help.”

Making more chips

“Priority Technologies” grew out of an ongoing MIT seminar by the same name, which Reynolds and Johnson began holding in 2023, often with appearances by other MIT faculty.

Both Reynolds and Johnson bring vast experience to the subject of innovation and production. Among other things, Reynolds headed MIT’s Industrial Performance Center for over a decade and was executive director of the MIT Task Force on the Work of the Future. She served in the White House National Economic Council as special assistant to the president for manufacturing and economic development.

Johnson, the Ronald A. Kurtz (1954) Professor of Entrepreneurship at the MIT Sloan School of Management, shared the 2024 Nobel Prize in economics with MIT’s Daron Acemoglu and the University of Chicago’s James Robinson, for work on the historical relationship between institutions and economic growth. He has co-authored numerous books, including, with Acemoglu, the 2023 book “Power and Progress,” about the trajectory and implications of artificial intelligence.

As it happens, “Priority Technologies” does not focus on AI, instead opting to examine other vital, and often related, areas of innovation.

“We do not think this is the entire list of priority technologies,” Johnson says. “This is a partial list, and there are lots of other ideas.”

In the chapter on semiconductors, Jesús A. del Alamo, the Donner Professor of Science in MIT’s Department of Electrical Engineering and Computer Science, calls them “the oxygen of modern society.” This U.S.-born industry has seen a large manufacturing shift away from the country, however, leaving it vulnerable in terms of security and the economy; about one-third of inflation experienced in 2021 stemmed from a chip shortage. As he notes, the U.S. is now in the process of rebuilding its capacity to make leading-edge logic chips, for one thing.

“With semiconductors, people thought the U.S. could lose the manufacturing, stay on top of the innovation and design side, and would be fine,” Reynolds says. “But it’s turned out to make the country quite vulnerable. So we’ve had a massive shift to rebuild semiconductor manufacturing capabilities here in the U.S., and I would argue that’s been a successful strategy in recent years.”

Bringing biotech back home

In biotechnology, relocating manufacturing to the U.S. is also key, using new technologies in the process. As J. Christopher Love, the Laurent Professor of Chemical Engineering, puts it in his chapter, while the U.S. is the leader in biotech research, it “lacks the manufacturing infrastructure and expertise necessary to bring these ideas to the market at the same pace as it generates innovative new products.” Among other remedies, he suggests that smaller, more flexible production facilities can help the U.S. “leapfrog” other countries on the manufacturing side. Love is also co-director of MIT’s Initiative for New Manufacturing, which aims to drive advances in U.S. production across industries.

“We have tremendous biotech innovation, we’re the leaders, but we have a bottleneck when it comes to manufacturing,” Reynolds observes. “If we can break through that with new technologies, new production processes, we’re in a position to make us less vulnerable, from a supply chain point of view, and capture more of what is going to be a $4 trillion market over the next 15 years.”

A similar story holds in other areas. Many drone innovations were developed in the U.S., while much manufacturing has shifted to China. Fiona Murray, the William Porter (1967) Professor of Entrepreneurship, writes that the U.S. has an “opportunity to rebuild its production at scale,” although that will also require significant strengthening of its supply chains.

Elsa Olivetti, the Jerry McAfee (1940) Professor of Engineering and a professor of materials science and engineering, recommends a multifaceted approach to help the U.S. regain traction in the production of critical minerals, including better forms of extraction, manufacturing, and recycling, to reduce potential scarcities.

And in the quantum computing chapter, two MIT co-authors — William D. Oliver, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and a professor of physics; and Jonathan Ruane, a senior lecturer at MIT Sloan — note that the sector could help accelerate drug discovery, materials science, and energy applications. Noting that the U.S. still leads in private-sector investment in the field but trails China in public-sector investment, they urge more research support and stronger supply chains for quantum computing components, among other recommendations.

“The country that achieves quantum leadership will gain decisive advantages in these strategically important industries,” they write.

The university engine

From industry to industry, the book makes clear that certain key issues are broadly important to U.S. competitiveness and growth. The partnership between the federal government and the world-leading research capacities of U.S. universities, for one thing, has given the country an initial lead in many economic sectors and promises to continue driving innovation.

At the same time, the U.S. would benefit from expanding and strengthening its domestic supply chains as it builds up more domestic manufacturing, and it needs capital investment to support growth in hardware-based, physically substantial industries.

“These common themes include supply chain resilience and manufacturing capability,” Reynolds says. “Can we help drive the country’s innovation ecosystem through expansion of our industrial system and manufacturing? That’s a big question.”

On the research front, she reflects, over the years, “It’s been amazing how much MIT-led research has aligned with national priorities — or maybe that’s not so surprising.”

The partnership between the U.S. federal government and universities as research engines was formalized in the 1940s, thanks in part to Vannevar Bush, an MIT engineer and former dean who led the federal government’s wartime research effort. According to some estimates, government investment in non-defense research and development alone has accounted for up to 25 percent of U.S. economic growth since World War II.

“Vannevar Bush realized it wasn’t about a stock of technology, it was about a flow of innovation,” Johnson says. “And that brilliant insight is still relevant today. I think that is the insight of the last century. And that’s what we’re trying to capture and reiterate and repeat.”

“This is not even the future. This is current.”

Scholars and industry leaders have praised “Priority Technologies.” Erica Fuchs, a professor of engineering and public policy at Carnegie Mellon University, has stated that when it comes to “ensuring American national security, economic competitiveness, and societal well-being,” the book underscores “the positive role technology can play in those outcomes.” Hemant Taneja, CEO of the venture capital firm General Catalyst, calls the volume “required reading for anyone interested in building the abundant, resilient future America deserves.”

For their part, Reynolds and Johnson hope the book will draw many kinds of readers interested in the economy, innovation, prosperity, and national security.

“We tried to make the volume accessible,” Reynolds says, noting that the book directly lays out “challenges for the country, and what we see as recommendations for next steps in how we position the country to succeed, and lead globally. Each of these chapters has something important to say.”

Johnson also notes the MIT scholars participating in the project want to enhance the ongoing policy conversation, in Washington and across the country, about supporting innovation and using it to drive U.S. economic and technological leadership.

“One reason to write a book is, you can’t pound the table with a podcast,” quips Johnson, who co-hosts a podcast, “Power and Consequences,” on major policy issues. In conversations with political leaders and their staffs, he adds, there is a core message to be transmitted about America and technology-driven growth: We have the knowledge and resources, but need to focus on supporting innovation while trying to increase domestic production.

“Here are the technologies we currently need,” Johnson says. “This is not imagination, this is not fanciful, this is not science fiction. This is not even the future. This is current. These are the technologies needed to defend the country and its interests. And we need to invest in these, and in everything we need to drive them forward.”



from MIT News https://ift.tt/Wy1wipM

Sunday, April 19, 2026

Managing traffic in space

Chances are, you’ve already used a satellite today. Satellites make it possible for us to stream our favorite shows, call and text a friend, check weather and navigation apps, and make an online purchase. Satellites also monitor the Earth’s climate, the extent of agricultural crops, wildlife habitats, and impacts from natural disasters.

As we’ve found more uses for them, satellites have exploded in number. Today, there are more than 10,000 satellites operating in low-Earth orbit. Another 5,000 decommissioned satellites drift through this region, along with over 100 million pieces of debris comprising everything from spent rocket stages to flecks of spacecraft paint.

For MIT’s Richard Linares, the rapid ballooning of satellites raises pressing questions: How can we safely manage traffic and growing congestion in space? And at what point will we reach orbital capacity, where adding more satellites is not sustainable, and may in fact compromise spacecraft and the services that we rely on?

“It is a judgement that society has to make, of what value do we derive from launching more satellites,” says Linares, who recently received tenure as an associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the things we try to do is approach these questions of traffic management and orbital capacity as engineering problems.”

Linares leads the MIT Astrodynamics, Space Robotics, and Controls Lab (ARCLab), a research group that applies astrodynamics (the study of the motion and trajectories of orbiting objects) to help track and manage the millions of objects in orbit around the Earth. The group also develops tools to predict how space traffic and debris will change as operators launch large satellite “mega-constellations” into space.

He is also exploring the effects of space weather on satellites, as well as how climate change on Earth may limit the number of satellites that can safely orbit in space. And, anticipating that satellites will have to be smarter and faster to navigate a more cluttered environment, Linares is looking into artificial intelligence to help satellites autonomously learn and reason to adapt to changing conditions and fix issues onboard.

“Our research is pretty diverse,” Linares says. “But overall, we want to enable all these economic opportunities that satellites give us. And we are figuring out engineering solutions to make that possible.”

Grounding practical problems

Linares was born and raised in Yonkers, New York. His parents both worked as school bus drivers to support their children, Linares being the youngest of six. He was an active kid and loved sports, playing football throughout high school.

“Sports was a way to stay focused and organized, and to develop a work ethic,” Linares says. “It taught me to work hard.”

When applying for colleges, rather than aim for Division I schools like some of his teammates, Linares looked for programs that were strong in science, specifically in aerospace. Growing up, he was fascinated with Carl Sagan’s “Cosmos” docuseries. And being close to Manhattan, he took regular trips to the Hayden Planetarium to take in the center’s immersive projections of space and the technologies used to explore it.

“My interest in science came from the universe and trying to understand our place within it,” Linares recalls.

Choosing to stay close to home, he applied to in-state schools with strong aeronautical engineering departments, and happily landed at the State University of New York at Buffalo (SUNY Buffalo), where he would ultimately earn his bachelor’s, master’s, and doctoral degrees, all in aerospace engineering.

As an undergraduate, Linares took on a research project in astrodynamics, looking to solve the problem of how to determine the relative orientation of satellites flying in formation.

“Formation flying was a big topic in the early 2000s,” Linares says. “I liked the flavor of the math involved, which allowed me to go a layer deeper toward a solution.”

He worked out the math to show that when three satellites fly together, they essentially form a triangle, the angles of which can be calculated to determine where each satellite is in relation to the other two at any moment in time. His work introduced a new controls approach to enable satellites to fly safely together. The research had direct applications for the U.S. Air Force, which helped to sponsor the work.
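The forward version of that geometry fits in a few lines: given three position vectors, the interior angle at each satellite follows from the dot products of the vectors toward its two neighbors. The sketch below works from known positions for illustration; the research problem runs in the other direction, inferring relative position and orientation from measured angles, and Linares's actual controls formulation is more involved.

```python
import numpy as np

def formation_angles(p1, p2, p3):
    """Interior angles (degrees) of the triangle formed by three
    satellite positions, from the dot product at each vertex."""
    def angle_at(a, b, c):
        u, v = b - a, c - a
        cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return (angle_at(p1, p2, p3), angle_at(p2, p1, p3), angle_at(p3, p1, p2))

# Three satellites a few hundred meters apart (positions in meters)
p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([400.0, 0.0, 0.0])
p3 = np.array([200.0, 300.0, 0.0])

angles = formation_angles(p1, p2, p3)
print(angles, "sum:", sum(angles))  # interior angles always sum to 180
```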

As he expanded the research into a master’s thesis, Linares also took opportunities to work directly with the Air Force on issues of satellite tracking and orientation. He served two internships with the U.S. Air Force Research Lab, one at Kirtland Air Force Base in Albuquerque, New Mexico, and the other in Maui, Hawaii.

“Being able to collaborate with the Air Force back then kind of grounded the research in practical problems,” Linares says.

For his PhD, he turned to another practical problem: “uncorrelated tracks.” At the time, the Air Force operated a network of telescopes to observe more than 20,000 objects in space, which they were working to label and record in a catalog to help them track the objects over time. But while detecting objects was relatively straightforward, the challenge came in correlating a detected object with what was already in the catalog. In other words, is what they were seeing something they had already seen?

Linares developed image analysis techniques to identify key characteristics of objects such as their shape and orientation, which helped the Air Force “fingerprint” satellites and pieces of space debris, and track their activity — and potential for collisions — over time.

After completing his PhD, Linares worked as a postdoc at Los Alamos National Laboratory and the U.S. Naval Observatory. During that time he expanded his aerospace work to other areas including space weather, using satellite measurements to model how Earth’s ionosphere — the upper layer of the atmosphere that is ionized by the sun’s radiation — affects satellite drag.
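The coupling he was modeling can be illustrated with the textbook drag relation: deceleration scales with the local density of the upper atmosphere, which solar activity can inflate severalfold. The numbers below are round illustrative values, not outputs of Linares's research models:

```python
def drag_acceleration(rho, v, c_d=2.2, area=1.0, mass=100.0):
    """Textbook drag deceleration a = 0.5 * Cd * (A/m) * rho * v^2.
    rho: local density (kg/m^3), v: orbital speed (m/s), c_d: drag
    coefficient, area: cross-section (m^2), mass: satellite mass (kg)."""
    return 0.5 * c_d * (area / mass) * rho * v ** 2

v_leo = 7_600.0  # m/s, roughly the circular orbital speed near 500 km

quiet = drag_acceleration(rho=5e-13, v=v_leo)  # calm sun
storm = drag_acceleration(rho=5e-12, v=v_leo)  # heated, puffed-up atmosphere
print(f"drag is {storm / quiet:.0f}x higher during the storm")
```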

He then accepted a position as assistant professor of aerospace engineering at the University of Minnesota at Minneapolis. For the next three years, he continued his research in modeling space weather, tracking space objects and coordinating satellites to fly in swarms.

Making space

In 2018, Linares made the move to MIT.

“I had a lot of respect for the people and for the history of the work that was done here,” says Linares, who was especially inspired by the legendary Charles Stark “Doc” Draper, who developed the first inertial guidance systems in the 1940s that would enable the self-navigation of airplanes, submarines, satellites, and spacecraft for decades to come. “This was essentially my field, and I knew MIT was the best place to continue my career.”

As a junior faculty member in AeroAstro, Linares spent his first years focused on an emerging challenge: space sustainability. Around that time, the first satellite mega-constellations were launching into low-Earth orbit, starting with SpaceX’s Starlink, which aimed to provide global internet coverage via a huge network of several thousand coordinated satellites. Launching so many satellites into orbits that already held other active and inactive satellites, along with millions of pieces of space debris, raised questions about how to safely manage satellite traffic and how much traffic an orbit can sustain.

“At what level do we reach a tipping point, where we have too many satellites in certain orbital regimes?” Linares says. “It was kind of a known problem at the time, but there weren’t many solutions.”

Linares’ group applied an understanding of astrodynamics, and the physics of how objects move in space, to figure out the best way to pack satellites into orbital “shells,” or lanes that would most likely prevent collisions. They also developed a state-of-the-art model of orbital traffic that can simulate the trajectories of more than 10 million individual objects in space; previous models could accurately simulate far fewer objects. Linares’ open-source model, called the MIT Orbital Capacity Assessment Tool, or MoCAT, accounts for the millions of pieces of space debris in addition to the many intact satellites in orbit.
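MoCAT itself tracks objects individually, but the flavor of orbital-capacity modeling can be conveyed with a far simpler source-sink sketch for a single orbital shell, in which launches add satellites, atmospheric decay removes objects, and collisions convert satellites into debris. Every rate below is invented for illustration; MoCAT's actual dynamics are far richer.

```python
# Toy source-sink model of one orbital shell, stepped with forward Euler.
launch_rate = 500.0   # satellites launched into the shell per year
sat_decay = 0.2       # fraction of satellites deorbited per year
deb_decay = 0.05      # fraction of debris decaying per year
collision_k = 1e-7    # collisions per satellite-debris pair per year
fragments = 100       # debris fragments produced per collision

sats, debris = 1_000.0, 10_000.0
dt = 0.1  # years
for _ in range(int(50 / dt)):  # simulate 50 years
    collisions = collision_k * sats * debris
    sats += (launch_rate - sat_decay * sats - collisions) * dt
    debris += (fragments * collisions - deb_decay * debris) * dt

print(f"after 50 years: {sats:.0f} satellites, {debris:.0f} debris fragments")
```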

The tools that his group has developed are used today by satellite operators to plan and predict safe spacecraft trajectories. His team is continuing to work on problems of space traffic management and orbital capacity. They are also branching out into space robotics. The team is testing ways to teleoperate a humanoid robot, which could potentially help to build future infrastructure and carry out long-duration tasks in space.

Linares is also exploring artificial intelligence, including ways that a satellite can autonomously “learn” from its experience and safely adapt to uncertain environments.

“Imagine if each satellite had a virtual Doc Draper onboard that could do the de-bugging that we did from the ground during the Apollo missions,” Linares says. “That way, satellites would become instantaneously more robust. And it’s not taking the human out of the equation. It’s allowing the human to be amplified. I think that’s within reach.”



from MIT News https://ift.tt/pxXU0QA

Friday, April 17, 2026

Why bother with plausible deniability?

Picture this scenario in a business: An employee, Brad, disclosed some information that wound up in the hands of a competitor. He may not have meant to, but he did, and a few people at the firm know this. So, at the next company meeting, another employee, Linda, looks pointedly at Brad and says, “I know that no one would ever dream of leaking information, intentionally or otherwise, from our discussions.”

Linda means the opposite of what she says, of course. She is letting people know that Brad is to blame. However, while Linda is making her message public, she also wants what we often call “plausible deniability” for her statement. If anyone asks later if she was insinuating anything about Brad, she can claim she was just making a general comment about the firm.

From the boardroom to the courtroom, the talk show, and beyond, people frequently seek plausible deniability for their statements. It seems to work, too. Indeed, to have plausible deniability, the denial need not be plausible.

“People can say, ‘That’s not what I meant,’ and completely get away with it, even though it’s totally obvious they’re lying,” says MIT philosopher Sam Berstler. “They wouldn’t be getting away with it in the same respect by putting the content in explicit words.”

She adds: “This should be very puzzling to us, because in both cases the intent is maximally obvious.”

So why does plausible deniability work, and work like this? And what does it tell us about how we interact? Berstler, who studies language and communication, has published a new paper on plausible deniability, examining these issues. It is part of a larger body of work Berstler is generating, focused on everyday interactions involving deception.

To understand plausible deniability, Berstler thinks we should recognize that our conversations cannot be understood simply by analyzing the words we use. Our interactions always take place in social contexts, often have a performative aspect, and occasionally intersect with “non-acknowledgement norms,” the practice of keeping quiet about what we all know. Plausible deniability is bound up with social practices that incentivize us to not be fully transparent.

“A lot of indirect speech is designed, as it were, to facilitate this kind of deniability,” Berstler says.

The paper, “Non-Epistemic Deniability,” is published in the journal MIND. Berstler, the Laurance S. Rockefeller Career Development Chair and assistant professor of philosophy at MIT, is the sole author.

Managing a personal “Cold War”

In Berstler’s view, there are multiple ways to create plausible deniability. One is through the practice of open secrets, the subject of one of her previous papers. An open secret is widely known information that is never acknowledged, for reasons of power or in-group identification, among other things. Indeed, no one even acknowledges that they are not acknowledging the open secret.

Examining open secrets led Berstler directly to her analysis of plausible deniability. However, the new paper focuses more on another way of creating plausible deniability, which she calls “two-tracking norms.” Two-tracking is when a group divides its communications into two parts: One track consists of official, limited, courteous interaction, and the second track consists more of informal, resentful, uncooperative interactions. Linda, in our example, is engaging in two-tracking.

But why do we two-track at all? Why not just be fully transparent? Well, in an office scenario, if Linda is mad that Brad divulged some company secrets, calling out Brad directly might lead to recriminations and conflict beyond what Linda is willing to tolerate for the sake of criticizing Brad on the record.

“It’s like a Cold War situation where we each have an interest in not letting the conflict go to a state where we’re firing warheads at each other, but we can’t just purely manage relations around the negotiating table because we’re adversaries,” Berstler says. “We’re going to aggress against each other, but in a limited way. In a two-track conversation, communicating in the second track is like fighting a proxy battle, but we’re also providing evidence to each other that we’re only going to engage in a proxy battle.”

In this way, Linda takes Brad to task and some people pick up on it, but Brad is not explicitly publicly shamed. And though he might be unhappy, he is less likely to wreck all company norms in an attempt to retaliate. The firm more or less rolls on as usual.

Waiting for Goffman

Where Berstler differs in part from other philosophers is in her emphasis on the extent to which social practices are integral to our ways of deploying deniability. Our interactions are not just limited to rhetoric, but have additional layers.

“What we mean can often be different from what we say, or enhanced from what we say,” Berstler says. “Sometimes we figure out what others mean by relying on what they say in literal language. But sometimes we’re relying on other things, like the context.”

So, back at the firm, the colleagues of Linda and Brad might have some knowledge of a confidentiality breach, or they might know that Linda does not usually speak up at meetings, or they might read things into her tone of voice and the way she appeared to look at Brad. There is more to be gleaned than her literal words.

In this kind of analysis, Berstler finds illumination in the work of the midcentury sociologist Erving Goffman, who studied in minute detail the performative parts of our everyday interactions and speech. Goffman, as Berstler notes in the paper, proposed that we have a ritualized, social self (or “face”) and that normal, everyday behavior generally allows us, and others, to keep this face intact.

Relatedly, Goffman and some of his intellectual followers concluded that habits such as two-tracking are very common in everyday life; the price we pay for saving face is a bit less transparency, and a bit more secrecy and deniability.

“What I’m suggesting is we have these other established practices like two-tracking and open secrecy, where the deniability is just a byproduct,” Berstler says.

What’s the solution?

By bringing sociological ideas into her work, Berstler is moving beyond the normal philosophical discussion of the subject. On the other hand, she is not directly disputing core ideas in linguistics or the philosophy of language; she is just suggesting we add another layer to our analysis of communication and meaning.

Digging into issues of plausible deniability also raises the question of what to do about it. There may be something pernicious in the practice, but calling out plausible deniability threatens to dismantle our social guardrails and break the “Cold War” norms used to help people co-exist.

Berstler, though, has another suggestion: Instead of calling out such subterfuge, we can become verbally and performatively skilled enough to counteract it.

“I think the actual answer is becoming rhetorically clever,” Berstler says. “It’s being the person who uses indirect speech to respond strategically, without violating these norms. That is possible. It also means you have agency. You could become very good at verbal sparring.”

Besides, Berstler says, “Often that can be more powerful than just calling them out, and demonstrates your own verbal fluency. I think we admire it when we see it. Conversational skill is an important component of being morally good, in these cases by reprimanding someone in a way that’s not going to be counterproductive.”

She adds: “People who buy into the rhetoric of transparency can be setting back their own interests. Maybe speaking transparently is morally virtuous in some respects, but given the reality of our speech practices, transparency is not necessarily going to be the most effective way of handling things.”



from MIT News https://ift.tt/iq0JoCM

Jacob Andreas and Brett McGuire named Edgerton Award winners

MIT Associate Professor Jacob Andreas of the Department of Electrical Engineering and Computer Science (EECS) and MIT Associate Professor Brett McGuire of the Department of Chemistry have been selected as the winners of the 2026 Harold E. Edgerton Faculty Achievement Award. Established in 1982 as a permanent tribute to Institute Professor Emeritus Harold E. Edgerton’s great and enduring support for younger faculty members, this award is given annually in recognition of exceptional distinction in teaching, research, and service.

“The Department of Chemistry is extremely delighted to see Brett recognized for science that has changed how we think about carbon in space,” says Class of 1942 Professor of Chemistry and Department Head Matthew D. Shoulders. “Brett’s lab combines laboratory spectroscopy, radio astronomy, and sophisticated signal-analysis methods to pull definitive molecular fingerprints out of extraordinarily faint data. His discovery of polycyclic aromatic hydrocarbons in the cold interstellar medium has opened a powerful new window on astrochemistry. Moreover, Brett is inventing the creative and unique tools that make discoveries like this possible.”

“Jacob Andreas represents the very best of MIT EECS,” says Asu Ozdaglar, EECS department head. “He is an innovative researcher whose work combines computational and linguistically informed approaches to build foundations of language learning. He is an extraordinary educator who has brought these forefront ideas into our core classes in natural language processing and machine learning. His ability to bridge foundational theory with real-world impact, while also advancing the social and ethical dimensions of computing, makes him truly deserving of the Edgerton Faculty Achievement Award.”

Andreas joined the MIT faculty in July 2019, and is affiliated with the Computer Science and Artificial Intelligence Laboratory. His work is in natural language processing (NLP), and more broadly in AI. He aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Among other honors, Andreas has received Samsung’s AI Researcher of the Year award, MIT’s Kolokotrones and Junior Bose teaching awards, a 2024 Sloan Research Fellowship, and paper awards from the North American Chapter of the Association for Computational Linguistics, the International Conference on Machine Learning, and the Association for Computational Linguistics.

Andreas received his BS from Columbia University, his MPhil from Cambridge University (where he studied as a Churchill scholar), and his PhD in natural language processing from the University of California at Berkeley. His work in natural language processing has taken on thorny problems in the capability gap between humans and computers. “The defining feature of human language use is our capacity for compositional generalization,” explains Antonio Torralba, Delta Electronics Professor and faculty head of Artificial Intelligence and Decision-Making in the Department of EECS. “Many of the core challenges in natural language processing are addressed by simply training larger and larger neural models, but this kind of compositional generalization remains a persistent difficulty, and without the ability to generalize compositionally, the deep learning toolkit will never be robust enough for the most challenging real-world NLP tasks. Jacob’s work on compositional modeling draws new connections between NLP and work in computer vision and physics aimed at modeling systems governed by symmetries and other algebraic structures and, using them, they have been able to build NLP models exhibiting a number of new, human-like language acquisition behaviors, including one-shot word learning, learning via mutual exclusivity constraints, and learning of grammatical rules in extremely low-resource settings.”

Within EECS, Andreas has developed multiple advanced courses in natural language processing, as well as new exercises designed to get students to grapple with important social and ethical considerations in machine learning deployment. “Jacob has taken a leading role in completely modernizing and extending our course offerings in natural language processing,” says award nominator Leslie Pack Kaelbling, Panasonic Professor in the Department of EECS. “He has led the development of a modern two-course sequence, which is a cornerstone of the new AI+D [artificial intelligence and decision-making] major, routinely enrolling several hundred students each semester. His command of the area is broad and deep, and his classes integrate classical structural understanding of language with the most modern learning-based approaches. He has put MIT EECS on the worldwide map as a place to study natural language at every level.”

Brett McGuire joined the MIT faculty in 2020 and was promoted to associate professor in 2025. His research operates at the intersection of physical chemistry, molecular spectroscopy, and observational astrophysics, where he seeks to uncover how the chemical building blocks of life evolve alongside and help shape the birth of stars and planets. A former Jansky Fellow and then Hubble Postdoctoral Fellow at the National Radio Astronomy Observatory, McGuire has a BS in chemistry from the University of Illinois and a PhD in physical chemistry from Caltech. His honors include a 2026 Sloan Fellowship, the Beckman Young Investigator Award, the Helen B. Warner Prize for Astronomy, and the MIT Award for Teaching with Digital Technology.

The faculty who nominated McGuire for this award praised his extraordinary public outreach, his immediate willingness to take on teaching class 5.111 (Principles of Chemical Science), a General Institute Requirement (GIR) course with 150–500 students, and his service to both the MIT and astrochemical communities.

“Brett is at the very top of astrochemical scientists in his age group due to his discovery of fused carbon ring compounds in the cold region of the ISM [interstellar medium], an observation that provides a route for carbon incorporation in planets,” says Sylvia Ceyer, the John C. Sheehan Professor of Chemistry in her nomination statement. “His extensive involvement in service-oriented activities within the astrochemical/physical community is highly unusual for a junior scientist, and is testament to the value that the astronomical community places in his wisdom and judgement. His phenomenal organizational skills have made his contributions to graduate admission protocols and seminar administration at MIT the envy of the department. And most importantly, Brett is a superb teacher, who cares deeply about students’ understanding and success, not only in his course, but in their future endeavors.”

“As an assistant professor, Brett volunteered to teach 5.111, a large GIR course with 150–500 students, and has received some of the best teaching evaluations among all faculty who have led the subject,” says Mei Hong, the David A. Leighty Professor of Chemistry. “He has a natural talent in explaining abstract physical chemistry concepts in an engaging manner. His slides, which he prepared from scratch instead of modifying from previous years’ material from other professors, are clear, and … the combination of lucid explanation and humor has generated great enthusiasm and interest in chemistry among students.”

Subject evaluations from McGuire’s courses praised his humor, the clarity of his explanations, and his ability to transform a lecture into a “science show.” “I haven't felt this sort of desire for the depth of understanding in a subject beyond just a straight grade [in some time],” says one student. “Brett definitely stimulated that love of learning for me.” 

“Brett is an outstanding faculty member who is dedicated to fostering student learning and success,” says Jennifer Weisman, assistant director of academic programs in chemistry. “He is thoughtful, caring, and goes above and beyond to help his colleagues, students, and staff.”

“I’m thrilled to be selected for the Edgerton Award this year,” says McGuire. “The award is nominally for teaching, research, and service; MIT and the chemistry department in particular have been an incredible place to learn and grow in all these areas. I’m incredibly grateful for the mentorship, enthusiasm, and support I have received from my colleagues, from my students both in the lab and in the classroom, and from the MIT community during my time here. I look forward to many more years of exciting discovery together with this one-of-a-kind community.”



from MIT News https://ift.tt/z6O7mkW

Thursday, April 16, 2026

Bringing AI-driven protein-design tools to biologists everywhere

Artificial intelligence is already proving it can accelerate drug development and improve our understanding of disease. But to turn AI into novel treatments we need to get the latest, most powerful models into the hands of scientists.

The problem is that most scientists aren’t machine-learning experts. Now the company OpenProtein.AI is helping scientists stay on the cutting edge of AI with a no-code platform that gives them access to powerful foundation models and a suite of tools for designing proteins, predicting protein structure and function, and training models.

The company, founded by Tristan Bepler PhD ’20 and former MIT associate professor Tim Lu PhD ’07, is already equipping researchers in pharmaceutical and biotech companies of all sizes with its tools, including internally developed foundation models for protein engineering. OpenProtein.AI also offers its platform to scientists in academia for free.

“It’s a really exciting time right now because these models can not only make protein engineering more efficient — which shortens development cycles for therapeutics and industrial uses — they can also enhance our ability to design new proteins with specific traits,” Bepler says. “We’re also thinking about applying these approaches to non-protein modalities. The big picture is we’re creating a language for describing biological systems.”

Advancing biology with AI

Bepler came to MIT in 2014 as part of the Computational and Systems Biology PhD Program, studying under Bonnie Berger, MIT’s Simons Professor of Applied Mathematics. It was there that he realized how little we understand about the molecules that make up the building blocks of biology.

“We hadn’t characterized biomolecules and proteins well enough to create good predictive models of what, say, a whole genome circuit will do, or how a protein interaction network will behave,” Bepler recalls. “It got me interested in understanding proteins at a more fine-grained level.”

Bepler began exploring ways to predict the chains of amino acids that make up proteins by analyzing evolutionary data. This was before Google DeepMind released AlphaFold, a powerful prediction model for protein structure. The work led to one of the first generative AI models for understanding and designing proteins — what the team calls a protein language model.

“I was really excited about the classical framework of proteins and the relationships between their sequence, structure, and function. We don’t understand those links well,” Bepler says. “So how could we use these foundation models to skip the ‘structure’ component and go straight from sequence to function?”

After earning his PhD in 2020, Bepler entered Lu’s lab in MIT’s Department of Biological Engineering as a postdoc.

“This was around the time when the idea of integrating AI with biology was starting to pick up,” Lu recalls. “Tristan helped us build better computational models for biologic design. We also realized there’s a disconnect between the most cutting-edge tools available and the biologists, who would love to use these things but don’t know how to code. OpenProtein came from the idea of broadening access to these tools.”

Bepler had worked at the forefront of AI as part of his PhD. He knew the technology could help scientists accelerate their work.

“We started with the idea to build a general-purpose platform for doing machine learning-in-the-loop protein engineering,” Bepler says. “We wanted to build something that was user friendly because machine-learning ideas are kind of esoteric. They require implementation, GPUs, fine-tuning, designing libraries of sequences. Especially at that time, it was a lot for biologists to learn.”

OpenProtein’s platform, in contrast, features an intuitive web interface for biologists to upload data and conduct protein engineering work with machine learning. It features a range of open-source models, including PoET, OpenProtein’s flagship protein language model.

PoET, short for Protein Evolutionary Transformer, was trained on protein groups to generate sets of related proteins. Bepler and his collaborators showed it could generalize about evolutionary constraints on proteins and incorporate new information on protein sequences without retraining, allowing other researchers to add experimental data to improve the model.

“Researchers can use their own data to train models and optimize protein sequences, and then they can use our other tools to analyze those proteins,” Bepler says. “People are generating libraries of protein sequences in silico [on computers] and then running them through predictive models to get validation and structural predictors. It’s basically a no-code front-end, but we also have APIs for people who want to access it with code.”

The models help researchers design proteins faster, then decide which ones are promising enough for further lab testing. Researchers can also input proteins of interest, and the models can generate new ones with similar properties.
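The shape of that machine-learning-in-the-loop cycle can be sketched in a few lines of Python. The sketch below is illustrative only, with a stand-in scoring heuristic and a made-up parent sequence; it does not use OpenProtein’s actual models or API:

```python
# Minimal sketch of machine-learning-in-the-loop protein design: generate
# an in-silico library of variants, score them with a predictive model,
# and keep the best candidates for lab validation. The scorer below is a
# toy stand-in, not OpenProtein's platform or a trained language model.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(sequence: str, n_mutations: int = 2) -> str:
    """Return a variant with random single-residue substitutions."""
    seq = list(sequence)
    for _ in range(n_mutations):
        seq[random.randrange(len(seq))] = random.choice(AMINO_ACIDS)
    return "".join(seq)

def score(sequence: str) -> float:
    """Stub fitness predictor; a real workflow would call a trained model."""
    return sum(sequence.count(aa) for aa in "LIV") / len(sequence)

def design_round(parent: str, library_size: int = 200, keep: int = 5) -> list[str]:
    """Generate an in-silico library, score it, and keep the top candidates."""
    library = [mutate(parent) for _ in range(library_size)]
    return sorted(library, key=score, reverse=True)[:keep]

if __name__ == "__main__":
    parent = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # hypothetical sequence
    for candidate in design_round(parent):
        print(f"{score(candidate):.3f}  {candidate}")
```

In a real workflow, the stub scorer would be replaced by a trained model such as PoET, and the surviving candidates would move on to experimental validation.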

Since its founding, OpenProtein’s team has continued to add tools to its platform for researchers regardless of their lab size or resources.

“We’ve tried really hard to make the platform an open-ended toolbox,” Bepler says. “It has specific workflows, but it’s not tied specifically to one protein function or class of proteins. One of the great things about these models is they are very good at understanding proteins broadly. They learn about the whole space of possible proteins.”

Enabling the next generation of therapies

The large pharmaceutical company Boehringer Ingelheim began using OpenProtein’s platform in early 2025. Recently, the companies announced an expanded collaboration that will see OpenProtein’s platform and models embedded into Boehringer Ingelheim’s work as it engineers proteins to treat diseases like cancer and autoimmune or inflammatory conditions.

Last year, OpenProtein also released a new version of its protein language model, PoET-2, that outperforms much larger models while using a small fraction of the computing resources and experimental data.

“We really want to solve the question of how we describe proteins,” Bepler says. “What’s the meaningful, domain-specific language of protein constraints we use as we generate them? How can we bring in more evolutionary constraints? How can we describe an enzymatic reaction a protein carries out such that a model can generate sequences to do that reaction?”

Moving forward, the founders are hoping to make models that factor in the changing, interconnected nature of protein function.

“The area I am excited about is going beyond protein binding events to use these models to predict and design dynamic features, where the protein has to engage two, three, or four biological mechanisms at the same time, or change its function after binding,” says Lu, who currently serves in an advisory role for the company.

As progress in AI races forward, OpenProtein continues to see its mission as giving scientists the best tools to develop new treatments faster.

“As work gets more complex, with approaches incorporating things like protein logic and dynamic therapies, the existing experimental toolsets become limiting,” Lu says. “It’s really important to create open ecosystems around AI and biology. There’s a risk that AI resources could get so concentrated that the average researcher can’t use them. Open access is super important for the scientific field to make progress.”



from MIT News https://ift.tt/4gzbMWC

With navigating nematodes, scientists map out how brains implement behaviors

Animal behavior reflects a complex interplay between an animal’s brain and its sensory surroundings. Only rarely have scientists been able to discern how actions emerge from this interaction. A new open-access study in Nature Neuroscience by researchers in The Picower Institute for Learning and Memory at MIT offers one example by revealing how circuits of neurons within C. elegans nematode worms respond to odors and generate movement as they pursue smells they like and evade ones they don’t.

“Across the animal kingdom, there are just so many remarkable behaviors,” says study senior author Steven Flavell, associate professor in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences and an investigator of the Howard Hughes Medical Institute. “With modern neuroscience tools, we are finally gaining the ability to map their mechanistic underpinnings.”

By the end of the study, which former graduate student Talya Kramer PhD ’25 led as her doctoral thesis research, the team was able to show exactly which neurons in the worm’s brain did which of the jobs needed to sense where smells were coming from, plan turns toward or away from them, shift to reverse (like old-fashioned radio-controlled cars, C. elegans worms turn in reverse), execute the turns, and then go back to moving forward. Not only did the study reveal the sequence and each neuron’s role in it, but it also demonstrated that worms are more skillful and intentional in these actions than perhaps they’ve received credit for. And finally, the study demonstrated that it’s all coordinated by the neuromodulatory chemical tyramine.

“One thing that really excited us about this study is that we were able to see what a sensorimotor arc looks like at the scale of a whole nervous system: all the bits and pieces, from responses to the sensory cue until the behavioral response is implemented,” Flavell says.

Seeing the sequence

To do the research, Kramer put worms in dishes with spots of odors they’d either want to navigate toward or slither away from. With the lab’s custom microscopes and software, she and her co-authors could track how the worms navigated and all the electrical activity of more than 100 neurons in their brains during those behaviors (the worms only have 302 neurons total).

The surveillance enabled Kramer, Flavell, and their colleagues to observe that the worms weren’t just ambling randomly until they happened to get where they’d want to be. Instead, the worms would execute turns with advantageous timing and at well-chosen angles. The worms seemed to know what they were doing as they navigated along the gradients of the odors.

Inside their heads, patterns of electrical activity among a cohort of 10 neurons (indicated by flashing green light tied to the flux of calcium ions in the cells) revealed the sequence of neural activation that enabled the worms to execute these sensible sensory-guided motions: forward, then into reverse, then into the turn, and then back to forward. Particular neurons guided each of these steps, including detecting the odors, planning the turn, switching into reverse, and then executing the turns.

A couple of neurons stood out as key gears in the sequence. A neuron called SAA proved pivotal for integrating odor detection with planning movement, as its activity predicted the direction of the eventual turn. Several neurons were flexible enough to show different activity patterns depending on factors such as where the odors were and whether the worm was moving forward or in reverse.

If these neurons are the gears of the navigation sequence, the neuromodulator tyramine (the worm analog of norepinephrine) is the signal that shifts them. After the worms started moving in reverse, tyramine from the neuron RIM enabled other neurons in the sequence to change their activity appropriately to execute the turns. In several experiments the scientists knocked out RIM tyramine and saw that the navigation behaviors and the sequence of neural activity largely fell apart.
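Read as an algorithm, the sequence the team observed resembles a small state machine whose reverse-to-turn transition is gated by tyramine. The following is a toy rendering of that description, not the authors’ model:

```python
# Toy state machine for the navigation sequence described above:
# forward -> reverse -> turn -> forward, with the reverse-to-turn step
# gated by tyramine from RIM. Purely illustrative, not the study's model.
from enum import Enum, auto

class State(Enum):
    FORWARD = auto()
    REVERSE = auto()
    TURN = auto()

def step(state: State, odor_worsening: bool, tyramine: bool) -> State:
    """Advance the behavioral sequence by one step."""
    if state is State.FORWARD:
        # A worsening odor gradient triggers the shift into reverse.
        return State.REVERSE if odor_worsening else State.FORWARD
    if state is State.REVERSE:
        # Without RIM tyramine, downstream neurons never switch into the
        # turn, and the sequence stalls, as in the knockout experiments.
        return State.TURN if tyramine else State.REVERSE
    return State.FORWARD  # the turn completes; forward motion resumes

# An intact worm runs the full pirouette sequence:
state = State.FORWARD
for worsening in (True, False, False):
    state = step(state, odor_worsening=worsening, tyramine=True)
    print(state.name)  # REVERSE, TURN, FORWARD
```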

“The neuromodulator tyramine plays a central role in organizing these sequential brain activity patterns,” Flavell says.

In addition to Flavell and Kramer, the paper’s other authors are Flossie Wan, Sara Pugliese, Adam Atanas, Sreeparna Pradhan, Alex Hiser, Lillie Godinez, Jinyue Luo, Eric Bueno, and Thomas Felt.

A MathWorks Science Fellowship, the National Institutes of Health, the National Science Foundation, The McKnight Foundation, The Alfred P. Sloan Foundation, the Freedom Together Foundation, and HHMI provided funding to support the work.



from MIT News https://ift.tt/jHNJYRF

Wednesday, April 15, 2026

Waves hit different on other planets

On a calm day, a light breeze might barely ripple the surface of a lake on Earth. But on Saturn’s largest moon Titan, a similar mild wind would kick up 10-foot-tall waves.

This otherworldly behavior is one prediction from a new wave model developed by scientists at MIT. The model is the first to capture the full dynamics of waves and what it takes to whip them up under different planetary conditions.

In a study published in the Journal of Geophysical Research: Planets, the MIT team introduces the model, which they’ve aptly coined “PlanetWaves.” They apply the model to predict how waves behave on planetary bodies that might host liquid lakes and oceans, including Titan, ancient Mars, and three planets beyond the solar system.

The model predicts that a gentle wind would be enough to stir up huge waves on Titan, where lakes are filled with light liquid hydrocarbons. In contrast, it would take hurricane-force winds to barely move the surface of a lake on the exoplanet 55-Cancri e, which is thought to be a lava world covered in hot, dense liquid rock. 

“On Earth, we get accustomed to certain wave dynamics,” says study author Andrew Ashton, associate scientist at the Woods Hole Oceanographic Institution (WHOI) and faculty member of the MIT-WHOI Joint Program. “But with this model, we can see how waves behave on planets with different liquids, atmospheres, and gravity, which can kind of challenge our intuition.”

The team is particularly keen to understand how waves form on Titan. The large moon is the only planetary body in the solar system besides Earth that is known to currently host liquid lakes.

“Anywhere there’s a liquid surface with wind moving over it, there’s potential to make waves,” says Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT. “For Titan, the tantalizing thing is that we don’t have any direct observation of what these lakes look like. So we don’t know for sure what kind of waves might exist there. Now this model gives us an idea.”

If humans were one day to send a probe to Titan’s lakes, the team’s new model could inform the design of wave-resilient spacecraft.

“You would want to build something that can withstand the energy of the waves,” says lead author Una Schneck, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “So it’s important to know what kind of waves these instruments would be up against.”

The study’s co-authors include Charlene Detelich and Alexander Hayes of Cornell University and Milan Curcic of the University of Miami.

“The first puff”

When wind blows over water, it creates waves that can be strong enough to carve out coastlines and redistribute sediment brought to the coast by rivers. Through this process, waves can be a significant force in shaping a landscape over time. Schneck and her colleagues, who study landscape evolution on Earth and other planets, wondered how waves might behave on other worlds where gravity, atmospheric conditions, and liquid compositions can be very different from what is found on Earth.

“There have been attempts in the past to predict how gravity will affect waves on other planets,” Schneck says. “But they don’t quantify other factors such as the composition of the liquid that is making waves. That was the big leap with this project.”

She and her colleagues developed a full wave model that takes into account not just a planet’s gravity, but also properties of its surface liquid, such as its density, viscosity, and surface tension, or how resistant a liquid is to rippling. The team also incorporated the effect of a planet’s atmospheric pressure. With this model, they aimed to predict how a planet’s liquid surface would evolve in response to winds of a given speed.

“Imagine a completely still lake,” Ashton offers. “We’re trying to figure out the first puff that will make those first little tiny ripples, on up to a full ocean wave.”
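One ingredient any such model must capture is the threshold at which wind can excite those first ripples. The sketch below uses the standard deep-liquid dispersion relation for capillary-gravity waves; the Titan parameter values (gravity, methane density, surface tension) are rough illustrative assumptions, not numbers from the study:

```python
# Dispersion relation for capillary-gravity waves on a deep liquid:
#   c(k) = sqrt(g/k + sigma*k/rho)
# Its minimum over wavenumber k, c_min = (4*g*sigma/rho)**0.25, sets
# roughly how fast a disturbance must travel before ripples can grow.
def min_phase_speed(g: float, sigma: float, rho: float) -> float:
    """Minimum phase speed (m/s) of capillary-gravity waves on a deep liquid."""
    return (4.0 * g * sigma / rho) ** 0.25

# Earth: water under terrestrial gravity.
earth = min_phase_speed(g=9.81, sigma=0.072, rho=1000.0)

# Titan (assumed values): weaker gravity, light liquid methane.
titan = min_phase_speed(g=1.35, sigma=0.017, rho=450.0)

print(f"Earth water:   c_min = {earth:.2f} m/s")
print(f"Titan methane: c_min = {titan:.2f} m/s")
```

With these assumed numbers, Titan’s threshold comes out at roughly half of Earth’s, one hint at why gentler winds can raise waves there; the full model also accounts for viscosity, atmospheric pressure, and how efficiently wind transfers energy to the surface.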

Making waves

The team first tested their new model with wave data on Earth. They used measurements of waves that were collected by buoys across Lake Superior over 20 years. They found that the model, which took into account Earth’s gravity, the composition of the liquid (water), and atmospheric conditions, was able to accurately predict what wind speeds it would take to generate waves across the lake, and how high the waves would grow at a given wind strength.

The researchers then applied the model to predict how waves would behave on other planetary bodies that are known to host liquid on their surface. They looked first to Titan, where NASA’s Cassini mission previously captured radar images of lake formations, which scientists suspect are currently filled with liquid methane and ethane. The team used the new model to calculate the moon’s wave dynamics given its gravity, atmospheric pressure, and liquid composition.

They found that on Titan, it’s surprisingly easy to make waves. The relatively light liquid, combined with low gravity and atmospheric pressure, means that even a gentle wind can stir up huge waves.

“It kind of looks like tall waves moving in slow motion,” Schneck says. “If you were standing on the shore of this lake, you might feel only a soft breeze but you would see these enormous waves flowing toward you, which is not what we would expect on Earth.”

The researchers also considered wave activity on ancient Mars. The Red Planet hosts many impact basins that may have once been filled with water, before the planet’s atmosphere dissipated and the water evaporated away. One of those basins is Jezero Crater, which is currently being explored by NASA’s Perseverance rover. With the new model, the team showed that as Mars’ atmosphere gradually disappeared, reducing its pressure over time, it would have required stronger winds to make the same waves.

Beyond the solar system, the researchers applied the model to three different exoplanets. The first, LHS1140b, is a “cool super-Earth,” meaning that it is colder and larger than Earth. The planet hosts liquid water, though because it is so large, its gravity is stronger. The model showed that a wind of the same speed as on Earth would generate much smaller water waves on the super-Earth, owing to that stronger gravity.

The team also considered Kepler 1649b, a Venus-like planet with gravity similar to Earth’s and lakes of sulfuric acid, a liquid about twice as dense as water. Under these conditions, the researchers found that it would take far stronger winds than on Earth to make even a ripple on the exo-Venus.

This effect is even more pronounced for the third planet, 55-Cancri e — a lava world that has both a higher gravity than Earth and a much denser, more viscous surface liquid. Scientists suspect that the planet hosts oceans of liquefied rock. In this environment, the model predicts that hurricane-force winds on Earth, of about 80 miles per hour, would generate only small waves of a few centimeters in height on the lava world.

Aside from illuminating new ways that waves can behave on other planets, Perron hopes the model will answer longstanding questions of planetary landscape formation.

“Unlike on Earth where there is often a delta where a river meets the coast, on Titan there are very few things that look like deltas, even though there are plenty of rivers and coasts. Could waves be responsible for this?” Perron wonders. “These are the kinds of mysteries that this model will help us solve.”

This work was supported, in part, by NASA and the National Science Foundation.



from MIT News https://ift.tt/LMlVWoU

Geothermal energy turns red hot

Drill deep and drill differently. That’s what’s needed to exploit the nearly bottomless promise of geothermal energy in the United States and around the globe, according to participants at the 2026 Spring Symposium, titled “Next-generation geothermal energy for firm power.” 

Sponsored by the MIT Energy Initiative (MITEI), the March 4 event drew 120 people, including MIT faculty and students, investors, and representatives from startups, multinational energy companies, and zero-carbon advocacy groups.

“The time feels right to pull together good policy, great corporate partners, and the research and technological innovations … to make significant advances in the widespread utilization of this incredible resource,” said Karen Knutson, the vice president for government affairs at MIT, in welcoming attendees.

Technology from the oil and gas industry helped usher in a first wave of geothermal energy. But chewing vertical holes through rocks in traditional ways can’t deliver on the full potential of this resource. And the real treasure — geologic formations radiating heat at 374 degrees Celsius and above — lies kilometers beneath Earth’s surface, far beyond the reach of most conventional drilling rigs.

Panelists explored the many innovations in accessing and circulating subsurface heat, as well as digging to unprecedented depths through extremely challenging geological conditions, discussing advanced drilling technologies, materials, and subsurface imaging.

This work is needed urgently, as demand for firm (always-on) power skyrockets in response to the electrification of industry and rise of data centers, said Pablo Dueñas‑Martínez, a MITEI research scientist. “We cannot get through this only with solar and wind; we need dense, deployable energy like geothermal.”

From “minuscule” to “almost inexhaustible” energy

In her opening remarks, Carolyn Ruppel, MITEI’s deputy director of science and technology, noted that despite decades of successful projects in places like the United States, Kenya, Iceland, Indonesia, and Turkey, geothermal still contributes only a “minuscule” share of global electricity. “The tremendous heat beneath our feet remains largely untouched,” she said.

Citing MIT’s milestone 2006 study “The Future of Geothermal Energy,” keynote speaker John McLennan, a professor at the University of Utah and co–principal investigator of the U.S. Department of Energy’s Utah FORGE enhanced geothermal systems (EGS) field laboratory, reminded attendees that the continental crust holds enough accessible heat to supply power for generations. “For practical purposes, it’s almost inexhaustible,” he said.

The question now, he said, is how to access that resource economically and responsibly.

At the Utah FORGE test site, McLennan has been part of a team investigating one method — adapting the oil and gas industry’s drilling and reservoir engineering expertise for hot, relatively impermeable rocks.

The project has drilled multiple deep wells into crystalline granitic rock, including a pair of wells that have been hydraulically stimulated and connected. In a recent circulation test, cold water was pumped down one well, flowed through fractures, and returned hot through the other.

“On a commercial basis … this hot water would be converted to electricity at the surface,” McLennan said. “This has now been demonstrated at Utah FORGE.”

The basic physics, in other words, work. The harder problems now are cost, repeatability, and scale.

Geothermal on the grid

Several panels highlighted the fact that next-generation geothermal is already beginning to deliver firm power.

At Lightning Dock, New Mexico, geothermal company Zanskar used a probabilistic modeling framework that simulated thousands of possible subsurface configurations to identify where to drill a new production well at an underperforming geothermal field. Measured by thermal power delivered, the resulting well is now “the most-productive pumped geothermal well in the country,” said Joel Edwards, Zanskar’s co-founder and chief technology officer — powering the entire 15-megawatt (MW) Lightning Dock plant from a single well.

This data-driven approach enables the company to find and develop new resources faster and more cheaply than traditional methods, said Edwards.
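The details of Zanskar’s framework aren’t public, but the general shape of probabilistic well siting can be sketched: simulate many plausible subsurface configurations, weight them by agreement with measured data, and drill where the expected payoff is highest. Everything below (the grid, the well, the numbers) is hypothetical:

```python
# Toy probabilistic siting: sample thousands of possible subsurface
# temperature maps, weight each by how well it matches a measurement
# from an existing well, and pick the site with the best expected
# temperature. Illustrative only; not Zanskar's actual framework.
import math
import random

GRID = 20          # coarse 20x20 grid of candidate drill sites
WELL = (5, 5)      # location of an existing well (hypothetical)
WELL_TEMP = 240.0  # temperature measured there, degrees C (hypothetical)

def sample_map():
    """One plausible subsurface realization: a hot anomaly at a random spot."""
    cx, cy = random.uniform(0, GRID), random.uniform(0, GRID)
    return [[150 + 120 * math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 8)
             for y in range(GRID)] for x in range(GRID)]

# Weight each simulated configuration by its fit to the well measurement.
maps, weights = [], []
for _ in range(2000):
    m = sample_map()
    misfit = m[WELL[0]][WELL[1]] - WELL_TEMP
    maps.append(m)
    weights.append(math.exp(-(misfit ** 2) / (2 * 10.0 ** 2)))

# Expected temperature at every candidate site, then the best site.
total = sum(weights)
expected = [[sum(w * m[x][y] for m, w in zip(maps, weights)) / total
             for y in range(GRID)] for x in range(GRID)]
bx, by = max(((x, y) for x in range(GRID) for y in range(GRID)),
             key=lambda s: expected[s[0]][s[1]])
print(f"Most promising site: ({bx}, {by}), expected {expected[bx][by]:.0f} C")
```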

José Bona, the director of next-generation geothermal at Turboden, explained how his company’s technology uses specialized turbines that circulate organic working fluids, which vaporize at lower temperatures than water, to convert heat efficiently into electrical power. This closed-cycle technology can utilize low- to medium-temperature heat sources. Turboden is supplying its technology both to the Lightning Dock geothermal facility in New Mexico and to Fervo Energy’s Cape Station in southwest Utah, an EGS project that will begin delivering 100 MW of baseload, clean electricity to the grid this year, aiming for 500 MW by 2028.

In Geretsried, Germany, Eavor has developed its own proprietary closed-loop system by creating a kind of underground radiator.

“We drilled to about 4.5 kilometers vertical depth, completed six horizontal multilateral pairs, and we delivered the first power to the grid in December,” said Christian Besoiu, the team lead of technology development at Eavor. The project will ultimately be capable of supplying 8.2 MW of electricity to the 32,000 households in the Bavarian town of Geretsried and 64 MW of thermal energy to the district in which the town lies, prioritizing heat when needed.

Beyond oil and gas technology

Early geothermal exploration typically targeted preexisting faults using vertical wells left by oil and gas drilling. Today, companies are experimenting with rock fracturing at multiple subsurface levels and creating heat reservoirs in previously untenable formations by using propping materials.

“Instead of vertical wells, we’re going to horizontal wells, we’re going to cased wells, we’re introducing proppants [solid materials that hold open hydraulically fractured rock] … we do dozens of stages with these designs,” said Koenraad Beckers, the geothermal engineering lead at ResFrac. This shale-style approach has already yielded much higher flow rates and more-reliable performance than earlier EGS.

Some current geothermal wells manage to achieve depths close to 15,000 feet using the oil and gas industry’s polycrystalline diamond compact drill bits, which can bore through hard rock like granite at more than 100 feet per hour. But these bits and the rigs that drive them are no match for conditions six or more kilometers down — and it is at those depths that the heat on hand begins to make an overwhelming economic case for geothermal.

“If we go to around 300 to 350 degrees, your power potential increases 10 times,” said Lev Ring, CEO of Sage Geosystems. “At that point, with reasonable CAPEX [capital expenditure] assumptions, levelized cost of electricity [a metric for comparing the cost of electricity across different generation technologies] is around 4 cents, and geothermal becomes cheaper than any other alternative.”
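For context on that figure, the levelized cost of electricity spreads a plant’s up-front capital cost over its lifetime output. A back-of-the-envelope sketch follows, with entirely illustrative inputs rather than figures from the talk:

```python
# Back-of-the-envelope levelized cost of electricity (LCOE). All inputs
# below are illustrative assumptions, not numbers from the symposium.
def capital_recovery_factor(rate: float, years: int) -> float:
    """Annualize an up-front capital cost over the plant lifetime."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex: float, om_per_year: float, mw: float,
         capacity_factor: float, rate: float, years: int) -> float:
    """Levelized cost of electricity in dollars per kWh."""
    annual_kwh = mw * 1000 * 8760 * capacity_factor  # kW x hours/year
    annual_cost = capex * capital_recovery_factor(rate, years) + om_per_year
    return annual_cost / annual_kwh

# Hypothetical superhot-rock plant: hotter wells yield more power per
# well, spreading the same drilling CAPEX over many more kilowatt-hours.
cost = lcoe(capex=250e6, om_per_year=5e6, mw=60,
            capacity_factor=0.9, rate=0.07, years=30)
print(f"LCOE = ${cost:.3f}/kWh")  # roughly 5 cents with these assumptions
```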

But “at 10 kilometers down … the largest land rigs in existence today cannot handle it,” Ring added. “We need alternatives — new materials, new ways to handle pressure, maybe even welding on the rig … a whole space that has not been addressed yet.”

One panel, featuring Quaise Energy, an MIT spinout with MITEI roots, spotlighted just how radically drilling might change. Co-founder Matt Houde described the company’s millimeter-wave drilling approach, which uses high-frequency electromagnetic waves derived from fusion research to vaporize rock instead of grinding it, as with conventional drilling. In a recent Texas field test, the team drilled 100 meters of hard basement rock in about a month, and is now planning kilometer-scale trials aimed at reaching superhot rock temperatures around 400 degrees Celsius, where each well could deliver many times the power of today’s geothermal projects.

Innovations for deep drilling

Moderating a panel on “MIT innovations for next-generation geothermal,” Andrew Inglis, the venture builder in residence with MIT Proto Ventures, whose position is sponsored by the U.S. Department of Energy GEODE program, framed the Institute’s role in getting such hard-tech ideas out of the lab and into the field. “The way MIT thinks about tech development, uniquely from other universities, can play a very singular role in geothermal commercial liftoff,” he said.

Materials researchers on that panel illustrated the point. Matěj Peč, an associate professor of geophysics in the Department of Earth, Atmospheric and Planetary Sciences, outlined work to build sensors that survive up to 900 degrees Celsius so that rock deformation and fracturing can be studied at supercritical conditions. Michael Short, the Class of 1941 Professor in the Department of Nuclear Science and Engineering, and C. Cem Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering, respectively described coatings and alloys designed to resist corrosion, fouling, and cracking in extreme environments. In response to audience questions after their talks, Tasan made an important point, highlighting how academics need input from industry to understand the real-world problems (e.g., corrosion of pipes by geofluids) that require engineering solutions.

Other researchers are rethinking how to detect geothermal resources: Wanju Yuan, a research scientist with the Geological Survey of Canada at Natural Resources Canada, is using satellite imagery and thermal infrared sensing to screen vast regions for subtle hot spots and structures, processing thousands of images to identify promising sites in just a few months of work. “It’s a very efficient way to screen potential areas before more expensive exploration, thus reducing exploration and drilling risks,” he said.

Policy as backdrop, not center stage

Policy loomed in the background of many discussions — from bipartisan support for geothermal exploration and tax incentives to issues of regulation and permitting.

For Ruppel, that was by design.

“We wanted this meeting to showcase what’s technically possible and what’s already happening on the ground,” she said. “The policy world is starting to pay attention. Our job is to make sure that when that spotlight turns our way, next-generation geothermal is ready.”

MITEI’s Spring Symposium was followed by a gathering of geothermal entrepreneurs, investors, and energy industry experts co-hosted by MITEI and the Clean Air Task Force. “GeoTech Summit: Accelerating geothermal technology, projects, and deal flow” explored the financing challenges and opportunities of geothermal energy today.



from MIT News https://ift.tt/YBjabVF

MIT faculty, alumni receive 2025-26 American Physical Society honors

The American Physical Society (APS) recently honored two MIT faculty members — professors Yoel Fink PhD ’00 and Mehran Kardar PhD ’83 — as well as six alumni with prizes and awards for their contributions to physics and academic leadership.

In addition, several MIT faculty members — Professor Jorn Dunkel, Professor Yen-Jie Lee PhD ’11, Associate Professor Mingda Li PhD ’15, and Associate Professor Julien Tailleur — as well as 12 additional alumni were named APS Fellows.

Yoel Fink PhD ’00, the Danae and Vasilis (1961) Salapatas Professor in the Department of Materials Science and Engineering, received the Andrei Sakharov Prize “for defending the academic freedom and human rights of scientists working in the U.S.”

The prize, named for physicist and human rights advocate Andrei Sakharov, recognizes scientists whose leadership and impact advance the principles of intellectual freedom and human dignity. Fink’s research focuses on “computing fabrics” — fibers and textiles that sense, communicate, store, and process information. By embedding functionality at the fiber level, fabrics become computing systems that can infer human activity and context while keeping the traditional qualities of garments. These textiles enable noninvasive monitoring of physiological and health conditions, with applications ranging from fetal and maternal health to human performance analytics, injury prevention in challenging environments, and defense.

Mehran Kardar PhD ’83, the Francis Friedman Professor of Physics, received the Lars Onsager Prize “for ground-breaking contributions to statistical physics, including the Kardar-Parisi-Zhang equation, Casimir forces, active matter, and aspects of biological physics.”

Kardar’s research focuses on how complex behavior emerges from simple interactions in systems both in and far from equilibrium, including stable ones like a still pond and rapidly changing ones such as growing surfaces. The Kardar-Parisi-Zhang equation, which he helped develop, provides a unifying framework for understanding how randomness and fluctuations shape evolving phenomena, from fluids and interfaces to biological and quantum systems. His work has also advanced the theoretical understanding of disordered materials, soft matter such as polymers and gels, and fluctuation-induced forces — including Casimir forces arising from quantum and thermal effects. More recently, he has applied these ideas to active matter — systems of self-driven units — and biological systems, helping reveal patterns in living and evolving systems.
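For reference, the Kardar-Parisi-Zhang equation describes the height h(x, t) of a growing interface; this is its standard form, not notation taken from the prize citation:

```latex
% Kardar-Parisi-Zhang equation for the height h(x, t) of a growing interface
\[
  \frac{\partial h}{\partial t}
    = \nu \nabla^{2} h
    + \frac{\lambda}{2} \left( \nabla h \right)^{2}
    + \eta(\mathbf{x}, t)
\]
```

The first term smooths the interface, the nonlinear second term makes growth depend on the local slope, and η(x, t) is random noise; the competition among the three is what lets the equation unify such varied growth phenomena.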

Alumni receiving awards

Joel Butler PhD ’75 was presented the W.K.H. Panofsky Prize in Experimental Particle Physics “for wide-ranging scientific, technical, and strategic contributions to particle physics, particularly exceptional leadership in fixed-target quark flavor experiments at Fermilab and collider physics at the Large Hadron Collider.”

Anthony Duncan PhD ’75 is the recipient of the Abraham Pais Prize for History of Physics “for research on the history of quantum physics between 1900 and 1927 that culminated in 'Constructing Quantum Mechanics,' an exemplary work that uses primary sources masterfully and employs scaffold and arch metaphors to describe developments in the quantum revolution.”

Laura A. Lopez ’04 was presented the Edward A. Bouchet Award “for pioneering contributions to X-ray astronomy, including foundational studies of supernova remnants, compact objects, and stellar feedback in galaxies, and for transformative leadership in advancing equity and inclusion in physics through innovative mentorship programs, national advocacy, and unwavering support for students from historically marginalized communities.”

Zhiquan Sun PhD ’25 is the recipient of the J.J. and Noriko Sakurai Dissertation Award in Theoretical Particle Physics “for applying effective field theory to advance our understanding of QCD [quantum chromodynamics], including establishing a new formalism to study heavy quark fragmentation, determining how confinement affects energy correlators, and revealing an overlooked complexity of the axion solution to the strong CP [charge conjugation symmetry and parity symmetry] problem.”

Charles B. Thorn III ’68 received the Dannie Heineman Prize for Mathematical Physics for “fundamental contributions to elementary particle physics, primarily the theory of strong interactions and the development of string theory.”

Christina Wang ’19 received the Mitsuyoshi Tanaka Dissertation Award in Experimental Particle Physics “for pioneering a novel technique using CMS [Compact Muon Solenoid] muon chambers to search for weakly-coupled sub-GeV [giga-electronvolt] mass dark matter using long-lived particle searches, and for groundbreaking work in quantum sensing to enable new probes of dark matter.”

APS Fellows

Several MIT faculty were elected 2025 APS Fellows:

Jorn Dunkel, MathWorks Professor of Mathematics, is the recipient of the Division of Statistical and Nonlinear Physics Fellowship “for pioneering contributions to statistical, nonlinear, and biological physics, notably in understanding pattern formation in soft matter and biology, cell positioning in tissues, and turbulence in active media.”

Yen-Jie Lee PhD ’11, professor of physics, received the Division of Nuclear Physics Fellowship “for pioneering measurements of jet quenching, medium response and heavy-quark diffusion in the quark-gluon plasma, and for using electron-positron collisions as an innovative control to understand collectivity in small collision systems.”

Mingda Li PhD ’15, associate professor of nuclear science and engineering, is the recipient of the Topical Group on Data Science Fellowship “for pioneering the integration of artificial intelligence with scattering and spectroscopy, enabling breakthroughs in phonons, topological states, optical and time-resolved spectra, and data-driven discovery for quantum and energy applications.”

Julien Tailleur, associate professor of physics, is the recipient of the Division of Soft Matter Fellowship “for foundational theoretical work on motility-induced phase separation and emergent collective behavior in scalar active matter.”

The following additional MIT alumni were also honored as APS Fellows:

Andrew Cross SM ’05, PhD ’08 (EECS), Division of Quantum Information Fellowship 

Kevin D. Dorfman SM ’01, PhD ’02 (ChemE), Division of Polymer Physics Fellowship

Geoffroy Hautier PhD ’11 (DMSE), Division of Computational Physics Fellowship

Douglas J. Jerolmack PhD ’06 (EAPS), Division of Statistical and Nonlinear Physics Fellowship

Brian Lantz ’92, PhD ’99 (Physics), Division of Gravitational Physics Fellowship

Valerio Lucarini SM ’03 (EAPS), Topical Group on Physics of Climate Fellowship

Giles Novak ’81 (Physics), Division of Astrophysics Fellowship

Steve Presse PhD ’08 (Physics), Division of Biological Physics Fellowship

Jonathan Rothstein PhD ’01 (MechE), Division of Fluid Dynamics Fellowship

Gray Rybka PhD ’07 (Physics), Division of Particles and Fields Fellowship

Sarah Sheldon ’08, PhD ’13 (Physics, NSE), Forum on Industrial and Applied Physics Fellowship

Lian Shen ScD ’01 (MechE), Division of Fluid Dynamics Fellowship



from MIT News https://ift.tt/lHQBpit