Tuesday, April 21, 2026

How morality and ethics shaped India’s economic development

In a world leaning away from globalization, governments face a tough choice: Should they block dominant foreign companies to protect local businesses, or welcome them in hopes of fast-tracking economic growth and modernization? 

In his recently published book, “Traders, Speculators, and Captains of Industry: How Capitalist Legitimacy Shaped Foreign Investment Policy in India” (Harvard University Press, November 2025), Jason Jackson, associate professor in political economy and urban planning in the MIT Department of Urban Studies and Planning, explains that these policy decisions aren’t just math, but long-standing and often heated moral debates over how businesses should conduct themselves, and who they serve.

Jackson argues that morality has a long history in economics and deserves more attention because, while ever-present in economic policy discourse, moral beliefs are often under-recognized or underappreciated.

“India is an exemplary case of ways in which moral beliefs shape economic policy decisions,” says Jackson. “But at the same time, I think it’s representative of a general feature of capitalism. It’s the perfect case.”

Jackson’s focus on India for this book stems from his interest in industrial policy and the politics of international development. Multinational firms have long been a source of controversy. They are seen as bringing two crucial resources to developing countries: finance and technology. However, while multinationals are potentially valuable contributors to economic development through the mechanism of foreign direct investment (FDI), they can also be monopolistic, dominating local industries and displacing domestic firms.

This long-standing tension in foreign investment policy became the backdrop for several emerging markets in the developing world — Brazil, Russia, India, China, and South Africa (the BRICS) — in the early 2000s. India was growing at an extremely high rate, 6-7 percent annually, and Indian companies were doing well, including those in industries seen as key to development, such as autos. Jackson wanted to understand why Indian companies were holding their own relative to foreign firms, which dominated manufacturing elsewhere. He planned to focus on the period from the 1980s through the 2010s, which coincides with economic liberalization in India and, more broadly, with globalization.

But while conducting field work, Jackson noticed that in describing how they made industrial policy decisions, Indian policymakers drew distinctions among firms in explicitly moral terms. There were some firms that policymakers believed would invest in technology and provide good jobs, and others — both foreign and domestic — seen as exploitative and uninterested in activities that would advance economic growth and industrial transformation.

“I realized these distinctions had deep salience,” says Jackson. “My interlocutors would describe firms — especially foreign firms they saw as simply trading, or as exploitative — as ‘New East India’ companies, referencing the famous East India Company that was the governance authority in colonial India, but had been defunct for more than 150 years. That forced my research to become more historical, increasingly relying on archival work to make sense of these moralized distinctions between different types of business actors, whether foreign or domestic, and to understand how these beliefs became so powerful across Indian society.”

“Moral categories of capitalist legitimacy”

Jackson says social scientists have several theories of how policymakers make decisions. One view considers the competing interest groups policymakers must negotiate with, in which case outcomes may depend on one group having more influence or power than others. Another approach assumes these individuals make decisions based on self-interest, particularly when their choices are perceived as corrupt.

“But what I found is that neither of these approaches gave enough credence to the ways in which policymakers in India grapple with quite technical and complex policy decisions regarding the type of development they want to promote in their country, and the types of companies they thought could help to achieve their development goals,” says Jackson. “Therefore, I was more interested in trying to understand what kind of ideas and beliefs animated their decision-making.”

What Jackson found was that Indian policymakers viewed both foreign firms and local Indian companies through what he terms “moral categories of capitalist legitimacy.” Would these firms invest in productive technologies? Would they provide good employment for the local population? Or would they be exploitative? These criteria were not only applied to multinational corporations. Even Indian family-controlled business groups were evaluated as to whether the gains accrued stayed within the confines of the extended family or whether they provided broader societal benefits. 

Coca-Cola goes to India

The story of Coca-Cola in India exemplifies the tension in regulating foreign investment when multinational companies are seen as exploitative. The company made its initial foray into India in the 1950s, and over the next two decades its reach became extensive. In the late 1970s, India’s Minister of Industry George Fernandes was visiting a village in Bihar — a state with one of the highest levels of poverty — when he asked for a glass of water. Instead, he was told the water was not suitable to drink, and was given Coca-Cola.

“This struck Fernandes as deeply problematic,” says Jackson. “He later recalled thinking that ‘after 30 years of freedom in India, our villages do not have clean drinking water, but they do have Coca-Cola — which, of course, is made with purified water, so safe to drink. How was this possible?’” Fernandes returned to his office in New Delhi determined to do something about it.

Just a few years earlier, India had passed a law, the Foreign Exchange Regulation Act (FERA), which required foreign companies to dilute their equity stakes to no more than 40 percent. The law was explicitly designed to encourage technology transfer, but Coca-Cola had not complied. Fernandes told Coca-Cola that it had to take on an Indian partner or leave the country. Coca-Cola chose the latter. The following year, IBM also exited India when it similarly balked at complying with FERA and sharing its technology.

“These companies were very much seen in the mold of the East India Co.,” says Jackson. “A firm comes from abroad and extracts resources from India while giving little benefit to the country. These are all very clearly morally coded beliefs that played a crucial role in these policy decisions.”

With Coca-Cola out of India, the beverage market became wide open, and several Indian companies emerged. Thums Up, an Indian cola brand founded by Ramesh Chauhan ’62, took off and became the dominant cola by the 1980s. Chauhan had developed its unique formula independently.

In 1991, India accelerated its economic liberalization, especially around FDI, and FERA’s standards were diluted. Coca-Cola returned to India, again without a partner. Other major brands, including Pepsi, had also entered the market. By then, Thums Up held well over 80 percent of the Indian cola market, but, concerned about its ability to compete in a war between deep-pocketed American multinational giants, Thums Up sold out to Coca-Cola for $60 million in 1993, a figure that was later seen as strikingly low.

Trader, speculator, or captain of industry?

Jackson says that in India, there were two competing interpretations of this story. In one version, Fernandes kicking out a global multinational firm was seen as a developing country establishing its economic sovereignty by making a bold policy decision and “risking all kind of geopolitical blowback that might follow from the U.S.,” says Jackson. “In this view, the Indian government’s bold move allowed local entrepreneurs and local companies like Chauhan and Thums Up to emerge.”

Yet an important counternarrative emerged that challenged the view that companies like Thums Up and figures like Chauhan were enterprising entrepreneurs.

“Maybe they just took advantage of protectionism to form a company and make some money,” says Jackson. “So rather than being an intrepid captain of industry, observers wondered whether maybe Chauhan was ‘simply a trader’ who took advantage of policy protection, but sold out as soon as the market became competitive.”

Later developments added some credibility to this view. Ironically, Coca-Cola was unable to remove Thums Up and Limca, another soda brand from Chauhan’s company, from its product lineup, and both remained extremely popular and widely consumed. This suggested to many observers that Thums Up could have survived the cola wars had it not sold out to the American multinational. The public had acquired a taste for the distinctly Indian beverages that Chauhan had created.

“This narrative encapsulates the kind of tension policymakers face: If we provide policy support to our enterprising entrepreneurs and they thrive, will they also do well for the country? Or are they simply opportunists who will take advantage of policy support in ways that benefit themselves but bring little broader benefit to the country?” says Jackson.

This episode was just one of dozens of instances of conflicts between Indian companies and multinational firms in the liberalizing 1990s and 2000s, which the government was often compelled to adjudicate. Throughout this period, the question persisted: How would policymakers identify the business figures who could be agents of industrial development and economic transformation, whether foreign or domestic? 

Ramesh Chauhan, for one, continued on an enterprising path. He turned his attention to the bottled water industry in India, and his brand, Bisleri, remains one of the country’s leading bottled water brands today.



from MIT News https://ift.tt/37rhlqV

Tackling the housing shortage with robotic microfactories

A national housing shortage is straining finances and communities across the United States. In Massachusetts, at least 222,000 homes will have to be built in the next 10 years to meet the population's needs. At the same time, there are numerous challenges in traditional construction. There's a shortage of skilled construction workers. Most projects involve multiple contractors and subcontractors, adding complexity and lag time. And the construction process, as well as the buildings themselves, can be a major source of emissions that contribute to climate change.

Reframe Systems, co-founded by Vikas Enti SM '20, uses robotics, software, and high-performance materials to address these problems. Founded in 2022, the company deploys microfactories that bring housing fabrication and production closer to the regions where the homes are needed. The first homes designed and manufactured in Reframe's initial microfactory have been completed in Arlington and Somerville, Massachusetts.

Enti's experiences in MIT System Design and Management (SDM) shaped the company from its start. "Learning how to navigate the system and finding the optimal value for each stakeholder has been a key part of the business strategy," he says, "and that's rooted in what I learned at SDM."

Better tools for system-level problems

Enti applied to SDM's master of science program in engineering and management while working at Kiva Systems, overseeing its acquisition by Amazon and transformation into Amazon Robotics. He found that the SDM program's fundamentals of systems engineering, system architecture, and project management provided him with the tools he needed to address system-level problems in his work.

While he was at MIT, Enti also served as an associate director for the MIT $100K Entrepreneurship Competition, which offers students and researchers mentorship, feedback, and potential funding for their startup ideas. He realized that "there isn't a single formula for how businesses start, or how long it takes to get them started," he says, which helped shape his plans to start his own business.

Enti took a leave of absence from MIT to oversee the expansion of Amazon Robotics in Europe. He returned and completed his degree in 2020, writing his thesis on developing technology that could mitigate falls for elderly people. This instinct to use his education for a good cause resurfaced when his daughters were born. He wanted his future business to address a real-world problem and have a social impact, while also reducing carbon emissions.

Growing housing, shrinking emissions

Enti concluded that housing, with immediate real-world impact and a significant share of global carbon emissions, was the right problem to work on. He reached out to his colleagues Aaron Small and Felipe Polido from Amazon Robotics to share his idea for advanced, low-cost factories that could be deployed quickly and close to where they were needed. The two joined him as co-founders.

Currently, the microfactory in Andover, Massachusetts, produces structural panels, with robotics completing wall and ceiling framing and people completing the rest of the work, including wiring and plumbing. Eventually, Reframe hopes to automate more of the building process through further use of robotics. The modular construction process allows for reduced waste and disruption on the eventual home site. And the finished homes are designed to be energy-efficient and ready for solar panel installation. The company is set to start work soon on a group of homes in Devens, Massachusetts.

In addition to the Andover location, Reframe is setting up in southern California to help rebuild homes that were destroyed in the area's January 2025 wildfires. The company's software-assisted design process and the adjustability of its microfactories allow it to meet local zoning and building codes and align with local architectural aesthetics. This means that in Somerville, Reframe's completed buildings look like modernized versions of the neighboring three-story buildings, known locally as "triple-deckers." On the other side of the country, Reframe's design offerings include Spanish-style and craftsman homes.

"Housing is a complex systems problem," Enti says, explaining the impact SDM has had on his work at Reframe. The methods and tools taught in the integrated core class EM.412 (Foundations of System Design and Management) help him tackle systems-level problems and take the needs of multiple stakeholders into account. The Reframe team used technology roadmapping as they devised their overall business plan, inspired by the work of Olivier de Weck, associate head of the MIT Department of Aeronautics and Astronautics. And lectures on project management from Bryan Moser, SDM's academic director, remain relevant. 

"Embracing the fact that this is a systems problem, and learning how to navigate the system and the stakeholders to make sure we're finding the optimal value, has been a key part of the business strategy," Enti says.

Reframe Systems is set to continue learning through iteration as it plans to expand its network of microfactories. The company remains committed to its core vision of sustainably meeting the country's need for more housing. "I'm grateful we get to do this," Enti says. "Once you strip away all the robotics, the advanced algorithms, and the factories, these are high-quality, healthy homes that families get to live in and grow."



from MIT News https://ift.tt/S5OokeK

How to expand the US economy

It’s an essential insight about our world: Innovation drives economic growth. For the U.S. to thrive, it must keep innovating. But how, and in what areas?

A new book co-authored by MIT faculty members focuses on six key areas where technology advances can drive the economy and support national security.

Those sectors — semiconductors, biotechnology, critical minerals, drones, quantum computing, and advanced manufacturing — are all built on U.S. know-how but are also areas where the country has either ceded a lead in production or innovation, or could yet fall behind.

As the book explains, a roadmap for U.S. prosperity and security involves sustaining notable areas of innovation and the national research ecosystem behind them, while rebuilding domestic manufacturing.

“In each of these areas, there are breakthroughs to be had, where the U.S. can leapfrog competitors and gain an advantage,” says Elisabeth Reynolds, an MIT expert on industrial innovation and editor of the new volume. “That’s a very exciting part of this.” She adds: “These areas are front and center for U.S. national economic and security policy.”

The book, “Priority Technologies: Ensuring U.S. Security and Shared Prosperity,” is published this week by the MIT Press. It features chapters by MIT faculty with expertise on the industrial sectors in question. Reynolds, a professor of the practice in MIT’s Department of Urban Studies and Planning, is a leading expert on industrial innovation and has long advocated for innovation-based growth that helps the U.S. workforce.

“All of this can be good for everyone,” says MIT economist Simon Johnson, who wrote the foreword to the book. “Out of that flow of innovations and ideas, we can create more good jobs for all Americans. Pushing the technological frontier and turning that into jobs is definitely going to help.”

Making more chips

“Priority Technologies” grew out of an ongoing MIT seminar by the same name, which Reynolds and Johnson began holding in 2023, often with appearances by other MIT faculty.

Both Reynolds and Johnson bring vast experience to the subject of innovation and production. Among other things, Reynolds headed MIT’s Industrial Performance Center for over a decade and was executive director of the MIT Task Force on the Work of the Future. She served on the White House National Economic Council as special assistant to the president for manufacturing and economic development.

Johnson, the Ronald A. Kurtz (1954) Professor of Entrepreneurship at the MIT Sloan School of Management, shared the 2024 Nobel Prize in economics, with MIT’s Daron Acemoglu and the University of Chicago’s James Robinson, for work about the historical relationship between institutions and economic growth. He has co-authored numerous books, including, with Acemoglu, the 2023 book “Power and Progress,” about the trajectory and implications of artificial intelligence.

As it happens, “Priority Technologies” does not focus on AI, instead opting to examine other vital, and often related, areas of innovation.

“We do not think this is the entire list of priority technologies,” Johnson says. “This is a partial list, and there are lots of other ideas.”

In the chapter on semiconductors, Jesús A. del Alamo, the Donner Professor of Science in MIT’s Department of Electrical Engineering and Computer Science, calls them “the oxygen of modern society.” This U.S.-born industry has seen a large manufacturing shift away from the country, however, leaving it vulnerable in terms of security and the economy; about one-third of inflation experienced in 2021 stemmed from a chip shortage. As he notes, the U.S. is now in the process of rebuilding its capacity to make leading-edge logic chips, for one thing.

“With semiconductors, people thought the U.S. could lose the manufacturing, stay on top of the innovation and design side, and would be fine,” Reynolds says. “But it’s turned out to make the country quite vulnerable. So we’ve had a massive shift to rebuild semiconductor manufacturing capabilities here in the U.S., and I would argue that’s been a successful strategy in recent years.”

Bringing biotech back home

In biotechnology, relocating manufacturing in the U.S. is also key, using new technologies in the process. As J. Christopher Love, the Laurent Professor of Chemical Engineering, puts it in his chapter, while the U.S. is the leader in biotech research, it “lacks the manufacturing infrastructure and expertise necessary to bring these ideas to the market at the same pace as it generates innovative new products.” Among other remedies, he suggests that smaller, more flexible production facilities can help the U.S. “leapfrog” other countries on the manufacturing side. Love is also co-director of MIT’s Initiative for New Manufacturing, which aims to drive advances in U.S. production across industries.

“We have tremendous biotech innovation, we’re the leaders, but we have a bottleneck when it comes to manufacturing,” Reynolds observes. “If we can break through that with new technologies, new production processes, we’re in a position to make us less vulnerable, from a supply chain point of view, and capture more of what is going to be a $4 trillion market over the next 15 years.”

A similar story holds in other areas. Many drone innovations were developed in the U.S., while much manufacturing has shifted to China. Fiona Murray, the William Porter (1967) Professor of Entrepreneurship, writes that the U.S. has an “opportunity to rebuild its production at scale,” although that will also require significantly strengthening its supply chains.

Elsa Olivetti, the Jerry McAfee (1940) Professor of Engineering and a professor of materials science and engineering, recommends a multifaceted approach to help the U.S. regain traction in the production of critical minerals, including better forms of extraction, manufacturing, and recycling, to reduce potential scarcities.

And in the quantum computing chapter, two MIT co-authors — William D. Oliver, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and a professor of physics; and Jonathan Ruane, a senior lecturer at MIT Sloan — note that the sector could help accelerate drug discovery, materials science, and energy applications. Noting that the U.S. still leads in private-sector investment in the field but trails China in public-sector investment, they urge more research support and stronger supply chains for quantum computing components, among other recommendations.

“The country that achieves quantum leadership will gain decisive advantages in these strategically important industries,” they write.

The university engine

From industry to industry, the book makes clear that certain key issues are broadly important to U.S. competitiveness and growth. The partnership between the federal government and the world-leading research capacities of U.S. universities, for one thing, has given the country an initial lead in many economic sectors and promises to continue driving innovation.

At the same time, the U.S. would benefit from expanding and strengthening its domestic supply chains as it builds up more domestic manufacturing, and it needs capital investment to support hardware-intensive, physically substantial industrial growth.

“These common themes include supply chain resilience and manufacturing capability,” Reynolds says. “Can we help drive the country’s innovation ecosystem through expansion of our industrial system and manufacturing? That’s a big question.”

On the research front, she reflects, over the years, “It’s been amazing how much MIT-led research has aligned with national priorities — or maybe that’s not so surprising.”

The partnership between the U.S. federal government and universities as research engines was formalized in the 1940s, thanks in part to then-MIT president Vannevar Bush. According to some estimates, government investment in non-defense research and development alone has accounted for up to 25 percent of U.S. economic growth since World War II.

“Vannevar Bush realized it wasn’t about a stock of technology, it was about a flow of innovation,” Johnson says. “And that brilliant insight is still relevant today. I think that is the insight of the last century. And that’s what we’re trying to capture and reiterate and repeat.”

“This is not even the future. This is current.”

Scholars and industry leaders have praised “Priority Technologies.” Erica Fuchs, a professor of engineering and public policy at Carnegie Mellon University, has stated that when it comes to “ensuring American national security, economic competitiveness, and societal well-being,” the book underscores “the positive role technology can play in those outcomes.” Hemant Taneja, CEO of the venture capital firm General Catalyst, calls the volume “required reading for anyone interested in building the abundant, resilient future America deserves.”

For their part, Reynolds and Johnson hope the book will draw many kinds of readers interested in the economy, innovation, prosperity, and national security.

“We tried to make the volume accessible,” Reynolds says, noting that the book directly lays out “challenges for the country, and what we see as recommendations for next steps in how we position the country to succeed, and lead globally. Each of these chapters has something important to say.”

Johnson also notes the MIT scholars participating in the project want to enhance the ongoing policy conversation, in Washington and across the country, about supporting innovation and using it to drive U.S. economic and technological leadership.

“One reason to write a book is, you can’t pound the table with a podcast,” quips Johnson, who co-hosts a podcast, “Power and Consequences,” on major policy issues. In conversations with political leaders and their staffs, he adds, there is a core message to be transmitted about America and technology-driven growth: We have the knowledge and resources, but need to focus on supporting innovation while trying to increase domestic production.

“Here are the technologies we currently need,” Johnson says. “This is not imagination, this is not fanciful, this is not science fiction. This is not even the future. This is current. These are the technologies needed to defend the country and its interests. And we need to invest in these, and in everything we need to drive them forward.”



from MIT News https://ift.tt/Wy1wipM

Sunday, April 19, 2026

Managing traffic in space

Chances are, you’ve already used a satellite today. Satellites make it possible for us to stream our favorite shows, call and text a friend, check weather and navigation apps, and make an online purchase. Satellites also monitor the Earth’s climate, the extent of agricultural crops, wildlife habitats, and impacts from natural disasters.

As we’ve found more uses for them, satellites have exploded in number. Today, there are more than 10,000 satellites operating in low-Earth orbit. Another 5,000 decommissioned satellites drift through this region, along with over 100 million pieces of debris comprising everything from spent rocket stages to flecks of spacecraft paint.

For MIT’s Richard Linares, the rapid ballooning of satellites raises pressing questions: How can we safely manage traffic and growing congestion in space? And at what point will we reach orbital capacity, where adding more satellites is not sustainable, and may in fact compromise spacecraft and the services that we rely on?

“It is a judgment that society has to make, of what value we derive from launching more satellites,” says Linares, who recently received tenure as an associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the things we try to do is approach these questions of traffic management and orbital capacity as engineering problems.”

Linares leads the MIT Astrodynamics, Space Robotics, and Controls Lab (ARCLab), a research group that applies astrodynamics (the motion and trajectory of orbiting objects) to help track and manage the millions of objects in orbit around the Earth. The group also develops tools to predict how space traffic and debris will change as operators launch large satellite “mega-constellations” into space.

He is also exploring the effects of space weather on satellites, as well as how climate change on Earth may limit the number of satellites that can safely orbit in space. And, anticipating that satellites will have to be smarter and faster to navigate a more cluttered environment, Linares is looking into artificial intelligence to help satellites autonomously learn and reason to adapt to changing conditions and fix issues onboard.

“Our research is pretty diverse,” Linares says. “But overall, we want to enable all these economic opportunities that satellites give us. And we are figuring out engineering solutions to make that possible.”

Grounding practical problems

Linares was born and raised in Yonkers, New York. His parents both worked as school bus drivers to support their children, Linares being the youngest of six. He was an active kid and loved sports, playing football throughout high school.

“Sports was a way to stay focused and organized, and to develop a work ethic,” Linares says. “It taught me to work hard.”

When applying for colleges, rather than aim for Division I schools like some of his teammates, Linares looked for programs that were strong in science, specifically in aerospace. Growing up, he was fascinated with Carl Sagan’s “Cosmos” docuseries. And being close to Manhattan, he took regular trips to the Hayden Planetarium to take in the center’s immersive projections of space and the technologies used to explore it.

“My interest in science came from the universe and trying to understand our place within it,” Linares recalls.

Choosing to stay close to home, he applied to in-state schools with strong aeronautical engineering departments, and happily landed at the State University of New York at Buffalo (SUNY Buffalo), where he would ultimately earn his bachelor’s, master’s, and doctoral degrees, all in aerospace engineering.

As an undergraduate, Linares took on a research project in astrodynamics, looking to solve the problem of how to determine the relative orientation of satellites flying in formation.

“Formation flying was a big topic in the early 2000s,” Linares says. “I liked the flavor of the math involved, which allowed me to go a layer deeper toward a solution.”

He worked out the math to show that when three satellites fly together, they essentially form a triangle, the angles of which can be calculated to determine where each satellite is in relation to the other two at any moment in time. His work introduced a new controls approach to enable satellites to fly safely together. The research had direct applications for the U.S. Air Force, which helped to sponsor the work.
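The triangle geometry here can be made concrete with a short sketch: given instantaneous position vectors for three satellites, the interior angles of the triangle they form follow from the dot-product (law-of-cosines) identity, and those angles fix each satellite's bearing relative to the other two. This is only an illustrative computation under assumed inputs; the `triangle_angles` helper and the example positions are hypothetical, not the controls formulation from Linares' thesis.

```python
import math

def triangle_angles(p1, p2, p3):
    """Interior angles (degrees) of the triangle formed by three
    satellite positions, via the dot-product identity."""
    def angle_at(a, b, c):
        # Angle at vertex a, between the edges a->b and a->c.
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        dot = sum(ui * vi for ui, vi in zip(u, v))
        nu = math.sqrt(sum(ui * ui for ui in u))
        nv = math.sqrt(sum(vi * vi for vi in v))
        return math.degrees(math.acos(dot / (nu * nv)))
    return (angle_at(p1, p2, p3), angle_at(p2, p1, p3), angle_at(p3, p1, p2))

# Illustrative positions only (km), roughly a low-Earth-orbit altitude.
angles = triangle_angles((7000.0, 0.0, 0.0),
                         (7000.0, 10.0, 0.0),
                         (7000.0, 5.0, 8.0))
print(angles)  # the three angles sum to 180 degrees
```

In a real formation-flying problem the positions would come from onboard relative-navigation sensors and would be filtered over time, but the underlying geometric relationship is this simple.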

As he expanded the research into a master’s thesis, Linares also took opportunities to work directly with the Air Force on issues of satellite tracking and orientation. He served two internships with the U.S. Air Force Research Lab, one at Kirtland Air Force Base in Albuquerque, New Mexico, and the other in Maui, Hawaii.

“Being able to collaborate with the Air Force back then kind of grounded the research in practical problems,” Linares says.

For his PhD, he turned to another practical problem: “uncorrelated tracks.” At the time, the Air Force operated a network of telescopes to observe more than 20,000 objects in space, which it was working to label and record in a catalog to help track the objects over time. But while detecting objects was relatively straightforward, the challenge came in correlating a detected object with what was already in the catalog. In other words, was what they were seeing something they had already seen?

Linares developed image analysis techniques to identify key characteristics of objects such as their shape and orientation, which helped the Air Force “fingerprint” satellites and pieces of space debris, and track their activity — and potential for collisions — over time.

After completing his PhD, Linares worked as a postdoc at Los Alamos National Laboratory and the U.S. Naval Observatory. During that time he expanded his aerospace work to other areas including space weather, using satellite measurements to model how Earth’s ionosphere — the upper layer of the atmosphere that is ionized by the sun’s radiation — affects satellite drag.

He then accepted a position as assistant professor of aerospace engineering at the University of Minnesota at Minneapolis. For the next three years, he continued his research in modeling space weather, tracking space objects and coordinating satellites to fly in swarms.

Making space

In 2018, Linares made the move to MIT.

“I had a lot of respect for the people and for the history of the work that was done here,” says Linares, who was especially inspired by the legendary Charles Stark “Doc” Draper, who developed the first inertial guidance systems in the 1940s that would enable the self-navigation of airplanes, submarines, satellites, and spacecraft for decades to come. “This was essentially my field, and I knew MIT was the best place to continue my career.”

As a junior faculty member in AeroAstro, Linares spent his first years focused on an emerging challenge: space sustainability. Around that time, the first satellite constellations were launching into low-Earth orbit with SpaceX’s Starlink, which aimed to provide global internet coverage via a huge network of several thousand coordinating satellites. Launching so many satellites into orbits that already held other active and inactive satellites, along with millions of pieces of space debris, raised questions about how to safely manage satellite traffic and how much traffic an orbit can sustain.

“At what level do we reach a tipping point, where we have too many satellites in certain orbital regimes?” Linares says. “It was kind of a known problem at the time, but there weren’t many solutions.”

Linares’ group applied an understanding of astrodynamics, the physics of how objects move in space, to figure out the best way to pack satellites into orbital “shells,” or lanes that would most likely prevent collisions. They also developed a state-of-the-art model of orbital traffic that can simulate the trajectories of more than 10 million individual objects in space. Previous models were much more limited in the number of objects they could accurately simulate. Linares’ open-source model, called the MIT Orbital Capacity Assessment Tool, or MoCAT, could account for the millions of pieces of space debris, in addition to the many intact satellites in orbit.
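One way to picture the shell-packing question is as a toy altitude-separation rule: grant each requested shell only if it keeps a minimum vertical spacing from the one below it. The spacing value and the greedy rule are both invented for illustration; this is a cartoon of the capacity question, not MoCAT's actual algorithm.

```python
def assign_shells(requested_altitudes, min_separation=10.0):
    """Toy shell-packing sketch: place constellations into altitude 'shells'.

    Each constellation requests an altitude (km); shells are granted in
    ascending order and must sit at least `min_separation` km apart so
    that traffic in adjacent shells stays decoupled.
    """
    granted = []
    for alt in sorted(requested_altitudes):
        if not granted or alt - granted[-1] >= min_separation:
            granted.append(alt)  # far enough above the last granted shell
    return granted

# Five requested shells; 555 km is too close to 550 km and is rejected
shells = assign_shells([550, 555, 570, 600, 1200], min_separation=10.0)
# shells -> [550, 570, 600, 1200]
```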

The tools that his group has developed are used today by satellite operators to plan and predict safe spacecraft trajectories. His team is continuing to work on problems of space traffic management and orbital capacity. They are also branching out into space robotics. The team is testing ways to teleoperate a humanoid robot, which could potentially help to build future infrastructure and carry out long-duration tasks in space.

Linares is also exploring artificial intelligence, including ways that a satellite can autonomously “learn” from its experience and safely adapt to uncertain environments.

“Imagine if each satellite had a virtual Doc Draper onboard that could do the de-bugging that we did from the ground during the Apollo missions,” Linares says. “That way, satellites would become instantaneously more robust. And it’s not taking the human out of the equation. It’s allowing the human to be amplified. I think that’s within reach.”



from MIT News https://ift.tt/pxXU0QA

Friday, April 17, 2026

Why bother with plausible deniability?

Picture this scenario in a business: An employee, Brad, disclosed some information that wound up in the hands of a competitor. He may not have meant to, but he did, and a few people at the firm know this. So, at the next company meeting, another employee, Linda, looks pointedly at Brad and says, “I know that no one would ever dream of leaking information, intentionally or otherwise, from our discussions.”

Linda means the opposite of what she says, of course. She is letting people know that Brad is to blame. However, while Linda is making her message public, she also wants what we often call “plausible deniability” for her statement. If anyone asks later if she was insinuating anything about Brad, she can claim she was just making a general comment about the firm.

From the boardroom to the courtroom, the talk show, and beyond, people frequently seek plausible deniability for their statements. It seems to work, too. Indeed, to have plausible deniability, the denial need not be plausible.

“People can say, ‘That’s not what I meant,’ and completely get away with it, even though it’s totally obvious they’re lying,” says MIT philosopher Sam Berstler. “They wouldn’t be getting away with it in the same respect by putting the content in explicit words.”

She adds: “This should be very puzzling to us, because in both cases the intent is maximally obvious.”

So why does plausible deniability work, and work like this? And what does it tell us about how we interact? Berstler, who studies language and communication, has published a new paper on plausible deniability, examining these issues. It is part of a larger body of work Berstler is generating, focused on everyday interactions involving deception.

To understand plausible deniability, Berstler thinks we should recognize that our conversations cannot be understood simply by analyzing the words we use. Our interactions always take place in social contexts, often have a performative aspect, and occasionally intersect with “non-acknowledgement norms,” the practice of keeping quiet about what we all know. Plausible deniability is bound up with social practices that incentivize us not to be fully transparent.

“A lot of indirect speech is designed, as it were, to facilitate this kind of deniability,” Berstler says.

The paper, “Non-Epistemic Deniability,” is published in the journal MIND. Berstler, the Laurance S. Rockefeller Career Development Chair and assistant professor of philosophy at MIT, is the sole author.

Managing a personal “Cold War”

In Berstler’s view, there are multiple ways to create plausible deniability. One is through the practice of open secrets, the subject of one of her previous papers. An open secret is widely known information that is never acknowledged, for reasons of power or in-group identification, among other things. Indeed, no one even acknowledges that they are not acknowledging the open secret.

Examining open secrets led Berstler directly to her analysis of plausible deniability. However, the new paper focuses more on another way of creating plausible deniability, which she calls “two-tracking norms.” Two-tracking is when a group divides its communications into two parts: One track consists of official, limited, courteous interaction, and the second track consists more of informal, resentful, uncooperative interactions. Linda, in our example, is engaging in two-tracking.

But why do we two-track at all? Why not just be fully transparent? Well, in an office scenario, if Linda is mad that Brad divulged some company secrets, calling out Brad directly might lead to recriminations and conflict beyond what Linda is willing to tolerate for the sake of criticizing Brad on the record.

“It’s like a Cold War situation where we each have an interest in not letting the conflict go to a state where we’re firing warheads at each other, but we can’t just purely manage relations around the negotiating table because we’re adversaries,” Berstler says. “We’re going to aggress against each other, but in a limited way. In a two-track conversation, communicating in the second track is like fighting a proxy battle, but we’re also providing evidence to each other that we’re only going to engage in a proxy battle.”

In this way, Linda takes Brad to task and some people pick up on it, but Brad is not explicitly publicly shamed. And though he might be unhappy, he is less likely to wreck all company norms in an attempt to retaliate. The firm more or less rolls on as usual.

Waiting for Goffman

Where Berstler differs in part from other philosophers is in her emphasis on the extent to which social practices are integral to our ways of deploying deniability. Our interactions are not just limited to rhetoric, but have additional layers.

“What we mean can often be different from what we say, or enhanced from what we say,” Berstler says. “Sometimes we figure out what others mean by relying on what they say in literal language. But sometimes we’re relying on other things, like the context.”

So, back at the firm, the colleagues of Linda and Brad might have some knowledge of a confidentiality breach, or they might know that Linda does not usually speak up at meetings, or they might read things into her tone of voice and the way she appeared to look at Brad. There is more to be gleaned than her literal words.

In this kind of analysis, Berstler finds illumination in the work of the midcentury sociologist Erving Goffman, who studied in minute detail the performative parts of our everyday interactions and speech. Goffman, as Berstler notes in the paper, proposed that we have a ritualized, social self (or “face”) and that normal, everyday behavior generally allows us, and others, to keep this face intact.

Relatedly, Goffman and some of his intellectual followers concluded that habits such as two-tracking are very common in everyday life; the price we pay for saving face is a bit less transparency, and a bit more secrecy and deniability.

“What I’m suggesting is we have these other established practices like two-tracking and open secrecy, where the deniability is just a byproduct,” Berstler says.

What’s the solution?

By bringing sociological ideas into her work, Berstler is moving beyond the normal philosophical discussion of the subject. On the other hand, she is not directly disputing core ideas in linguistics or the philosophy of language; she is just suggesting we add another layer to our analysis of communication and meaning.

Digging into issues of plausible deniability also raises the question of what to do about it. There may be something pernicious in the practice, but calling out plausible deniability threatens to dismantle our social guardrails and break the “Cold War” norms used to help people co-exist.

Berstler, though, has another suggestion: Instead of calling out such subterfuge, we can become verbally and performatively skilled enough to counteract it.

“I think the actual answer is becoming rhetorically clever,” Berstler says. “It’s being the person who uses indirect speech to respond strategically, without violating these norms. That is possible. It also means you have agency. You could become very good at verbal sparring.”

Besides, Berstler says, “Often that can be more powerful than just calling them out, and demonstrates your own verbal fluency. I think we admire it when we see it. Conversational skill is an important component of being morally good, in these cases by reprimanding someone in a way that’s not going to be counterproductive.”

She adds: “People who buy into the rhetoric of transparency can be setting back their own interests. Maybe speaking transparently is morally virtuous in some respects, but given the reality of our speech practices, transparency is not necessarily going to be the most effective way of handling things.”



from MIT News https://ift.tt/iq0JoCM

Jacob Andreas and Brett McGuire named Edgerton Award winners

MIT Associate Professor Jacob Andreas of the Department of Electrical Engineering and Computer Science [EECS] and MIT Associate Professor Brett McGuire of the Department of Chemistry have been selected as the winners of the 2026 Harold E. Edgerton Faculty Achievement Award. Established in 1982 as a permanent tribute to Institute Professor Emeritus Harold E. Edgerton’s great and enduring support for younger faculty members, this award is given annually in recognition of exceptional distinction in teaching, research, and service.

“The Department of Chemistry is extremely delighted to see Brett recognized for science that has changed how we think about carbon in space,” says Class of 1942 Professor of Chemistry and Department Head Matthew D. Shoulders. “Brett’s lab combines laboratory spectroscopy, radio astronomy, and sophisticated signal-analysis methods to pull definitive molecular fingerprints out of extraordinarily faint data. His discovery of polycyclic aromatic hydrocarbons in the cold interstellar medium has opened a powerful new window on astrochemistry. Moreover, Brett is inventing the creative and unique tools that make discoveries like this possible.”

“Jacob Andreas represents the very best of MIT EECS,” says Asu Ozdaglar, EECS department head. “He is an innovative researcher whose work combines computational and linguistically informed approaches to build foundations of language learning. He is an extraordinary educator who has brought these forefront ideas into our core classes in natural language processing and machine learning. His ability to bridge foundational theory with real-world impact, while also advancing the social and ethical dimensions of computing, makes him truly deserving of the Edgerton Faculty Achievement Award.”

Andreas joined the MIT faculty in July 2019, and is affiliated with the Computer Science and Artificial Intelligence Laboratory. His work is in natural language processing (NLP), and more broadly in AI. He aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Among other honors, Andreas has received Samsung’s AI Researcher of the Year award, MIT’s Kolokotrones and Junior Bose teaching awards, a 2024 Sloan Research Fellowship, and paper awards at the North American Chapter of the Association for Computational Linguistics, the International Conference on Machine Learning, and the Association for Computational Linguistics.

Andreas received his BS from Columbia University, his MPhil from Cambridge University (where he studied as a Churchill scholar), and his PhD in natural language processing from the University of California at Berkeley. His work in natural language processing has taken on thorny problems in the capability gap between humans and computers. “The defining feature of human language use is our capacity for compositional generalization,” explains Antonio Torralba, Delta Electronics Professor and faculty head of Artificial Intelligence and Decision-Making in the Department of EECS. “Many of the core challenges in natural language processing are addressed by simply training larger and larger neural models, but this kind of compositional generalization remains a persistent difficulty, and without the ability to generalize compositionally, the deep learning toolkit will never be robust enough for the most challenging real-world NLP tasks. Jacob’s work on compositional modeling draws new connections between NLP and work in computer vision and physics aimed at modeling systems governed by symmetries and other algebraic structures and, using them, he and his group have been able to build NLP models exhibiting a number of new, human-like language acquisition behaviors, including one-shot word learning, learning via mutual exclusivity constraints, and learning of grammatical rules in extremely low-resource settings.”

Within EECS, Andreas has developed multiple advanced courses in natural language processing, as well as new exercises designed to get students to grapple with important social and ethical considerations in machine learning deployment. “Jacob has taken a leading role in completely modernizing and extending our course offerings in natural language processing,” says award nominator Leslie Pack Kaelbling, Panasonic Professor in the Department of EECS. “He has led the development of a modern two-course sequence, which is a cornerstone of the new AI+D [artificial intelligence and decision-making] major, routinely enrolling several hundred students each semester. His command of the area is broad and deep, and his classes integrate classical structural understanding of language with the most modern learning-based approaches. He has put MIT EECS on the worldwide map as a place to study natural language at every level.”

Brett McGuire joined the MIT faculty in 2020 and was promoted to associate professor in 2025. His research operates at the intersection of physical chemistry, molecular spectroscopy, and observational astrophysics, where he seeks to uncover how the chemical building blocks of life evolve alongside and help shape the birth of stars and planets. A former Jansky Fellow and then Hubble Postdoctoral Fellow at the National Radio Astronomy Observatory, McGuire has a BS in chemistry from the University of Illinois and a PhD in physical chemistry from Caltech. His honors include a 2026 Sloan Fellowship, the Beckman Young Investigator Award, the Helen B. Warner Prize for Astronomy, and the MIT Award for Teaching with Digital Technology.

The faculty who nominated McGuire for this award praised his extraordinary public outreach, his immediate willingness to take on teaching class 5.111 (Principles of Chemical Science), a General Institute Requirement (GIR) course comprising 150–500 students, and his service to both the MIT and astrochemical communities.

“Brett is at the very top of astrochemical scientists in his age group due to his discovery of fused carbon ring compounds in the cold region of the ISM [interstellar medium], an observation that provides a route for carbon incorporation in planets,” says Sylvia Ceyer, the John C. Sheehan Professor of Chemistry in her nomination statement. “His extensive involvement in service-oriented activities within the astrochemical/physical community is highly unusual for a junior scientist, and is testament to the value that the astronomical community places in his wisdom and judgement. His phenomenal organizational skills have made his contributions to graduate admission protocols and seminar administration at MIT the envy of the department. And most importantly, Brett is a superb teacher, who cares deeply about students’ understanding and success, not only in his course, but in their future endeavors.”

“As an assistant professor, Brett volunteered to teach 5.111, a large GIR course with 150–500 students, and has received some of the best teaching evaluations among all faculty who have led the subject,” says Mei Hong, the David A. Leighty Professor of Chemistry. “He has a natural talent in explaining abstract physical chemistry concepts in an engaging manner. His slides, which he prepared from scratch instead of modifying from previous years’ material from other professors, are clear, and … the combination of lucid explanation and humor has generated great enthusiasm and interest in chemistry among students.”

Subject evaluations from McGuire’s courses praised his humor, the clarity of his explanations, and his ability to transform a lecture into a “science show.” “I haven’t felt this sort of desire for the depth of understanding in a subject beyond just a straight grade [in some time],” says one student. “Brett definitely stimulated that love of learning for me.”

“Brett is an outstanding faculty member who is dedicated to fostering student learning and success,” says Jennifer Weisman, assistant director of academic programs in chemistry. “He is thoughtful, caring, and goes above and beyond to help his colleagues, students, and staff.”

“I’m thrilled to be selected for the Edgerton Award this year,” says McGuire. “The award is nominally for teaching, research, and service; MIT and the chemistry department in particular have been an incredible place to learn and grow in all these areas. I’m incredibly grateful for the mentorship, enthusiasm, and support I have received from my colleagues, from my students both in the lab and in the classroom, and from the MIT community during my time here. I look forward to many more years of exciting discovery together with this one-of-a-kind community.”



from MIT News https://ift.tt/z6O7mkW

Thursday, April 16, 2026

Bringing AI-driven protein-design tools to biologists everywhere

Artificial intelligence is already proving it can accelerate drug development and improve our understanding of disease. But to turn AI into novel treatments, we need to get the latest, most powerful models into the hands of scientists.

The problem is that most scientists aren’t machine-learning experts. Now the company OpenProtein.AI is helping scientists stay on the cutting edge of AI with a no-code platform that gives them access to powerful foundation models and a suite of tools for designing proteins, predicting protein structure and function, and training models.

The company, founded by Tristan Bepler PhD ’20 and former MIT associate professor Tim Lu PhD ’07, is already equipping researchers in pharmaceutical and biotech companies of all sizes with its tools, including internally developed foundation models for protein engineering. OpenProtein.AI also offers its platform to scientists in academia for free.

“It’s a really exciting time right now because these models can not only make protein engineering more efficient — which shortens development cycles for therapeutics and industrial uses — they can also enhance our ability to design new proteins with specific traits,” Bepler says. “We’re also thinking about applying these approaches to non-protein modalities. The big picture is we’re creating a language for describing biological systems.”

Advancing biology with AI

Bepler came to MIT in 2014 as part of the Computational and Systems Biology PhD Program, studying under Bonnie Berger, MIT’s Simons Professor of Applied Mathematics. It was there that he realized how little we understand about the molecules that make up the building blocks of biology.

“We hadn’t characterized biomolecules and proteins well enough to create good predictive models of what, say, a whole genome circuit will do, or how a protein interaction network will behave,” Bepler recalls. “It got me interested in understanding proteins at a more fine-grained level.”

Bepler began exploring ways to predict the chains of amino acids that make up proteins by analyzing evolutionary data. This was before Google DeepMind released AlphaFold, a powerful prediction model for protein structure. The work led to one of the first generative AI models for understanding and designing proteins — what the team calls a protein language model.

“I was really excited about the classical framework of proteins and the relationships between their sequence, structure, and function. We don’t understand those links well,” Bepler says. “So how could we use these foundation models to skip the ‘structure’ component and go straight from sequence to function?”

After earning his PhD in 2020, Bepler entered Lu’s lab in MIT’s Department of Biological Engineering as a postdoc.

“This was around the time when the idea of integrating AI with biology was starting to pick up,” Lu recalls. “Tristan helped us build better computational models for biologic design. We also realized there’s a disconnect between the most cutting-edge tools available and the biologists, who would love to use these things but don’t know how to code. OpenProtein came from the idea of broadening access to these tools.”

Bepler had worked at the forefront of AI as part of his PhD. He knew the technology could help scientists accelerate their work.

“We started with the idea to build a general-purpose platform for doing machine learning-in-the-loop protein engineering,” Bepler says. “We wanted to build something that was user friendly because machine-learning ideas are kind of esoteric. They require implementation, GPUs, fine-tuning, designing libraries of sequences. Especially at that time, it was a lot for biologists to learn.”

OpenProtein’s platform, in contrast, features an intuitive web interface for biologists to upload data and conduct protein engineering work with machine learning. It features a range of open-source models, including PoET, OpenProtein’s flagship protein language model.

PoET, short for Protein Evolutionary Transformer, was trained on protein groups to generate sets of related proteins. Bepler and his collaborators showed it could generalize about evolutionary constraints on proteins and incorporate new information on protein sequences without retraining, allowing other researchers to add experimental data to improve the model.

“Researchers can use their own data to train models and optimize protein sequences, and then they can use our other tools to analyze those proteins,” Bepler says. “People are generating libraries of protein sequences in silico [on computers] and then running them through predictive models to get validation and structural predictors. It’s basically a no-code front-end, but we also have APIs for people who want to access it with code.”

The models help researchers design proteins faster, then decide which ones are promising enough for further lab testing. Researchers can also input proteins of interest, and the models can generate new ones with similar properties.
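The design-score-shortlist loop described above can be sketched generically: score each candidate sequence with a predictive model and keep only the top few for lab validation. The scoring function below is a toy stand-in (a made-up hydrophobicity heuristic), not PoET or any OpenProtein API, and the sequences are invented.

```python
def rank_candidates(candidates, score_fn, top_k=3):
    """Rank designed protein sequences by a model score (highest first).

    `score_fn` stands in for a trained predictive model, such as a fitness
    or log-likelihood score from a protein language model; here it is any
    callable mapping a sequence string to a float. Returns the top_k
    sequences to send on for lab validation.
    """
    scored = sorted(candidates, key=score_fn, reverse=True)
    return scored[:top_k]

# Toy stand-in score: fraction of hydrophobic residues (purely illustrative)
HYDROPHOBIC = set("AILMFWV")

def toy_score(seq):
    return sum(aa in HYDROPHOBIC for aa in seq) / len(seq)

# An in-silico library of invented candidate sequences
library = ["MKTAYIAKQR", "MLLAVFWILV", "MGGSGSGSGS", "MAVILWFAAA"]
shortlist = rank_candidates(library, toy_score, top_k=2)
```

In a real workflow the scored shortlist would go on to structural prediction and wet-lab assays, and the assay results would be fed back in to retrain or fine-tune the scoring model.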

Since its founding, OpenProtein’s team has continued to add tools to its platform for researchers regardless of their lab size or resources.

“We’ve tried really hard to make the platform an open-ended toolbox,” Bepler says. “It has specific workflows, but it’s not tied specifically to one protein function or class of proteins. One of the great things about these models is they are very good at understanding proteins broadly. They learn about the whole space of possible proteins.”

Enabling the next generation of therapies

The large pharmaceutical company Boehringer Ingelheim began using OpenProtein’s platform in early 2025. Recently, the companies announced an expanded collaboration that will see OpenProtein’s platform and models embedded into Boehringer Ingelheim’s work as it engineers proteins to treat diseases like cancer and autoimmune or inflammatory conditions.

Last year, OpenProtein also released a new version of its protein language model, PoET-2, that outperforms much larger models while using a small fraction of the computing resources and experimental data.

“We really want to solve the question of how we describe proteins,” Bepler says. “What’s the meaningful, domain-specific language of protein constraints we use as we generate them? How can we bring in more evolutionary constraints? How can we describe an enzymatic reaction a protein carries out such that a model can generate sequences to do that reaction?”

Moving forward, the founders are hoping to make models that factor in the changing, interconnected nature of protein function.

“The area I am excited about is going beyond protein binding events to use these models to predict and design dynamic features, where the protein has to engage two, three, or four biological mechanisms at the same time, or change its function after binding,” says Lu, who currently serves in an advisory role for the company.

As progress in AI races forward, OpenProtein continues to see its mission as giving scientists the best tools to develop new treatments faster.

“As work gets more complex, with approaches incorporating things like protein logic and dynamic therapies, the existing experimental toolsets become limiting,” Lu says. “It’s really important to create open ecosystems around AI and biology. There’s a risk that AI resources could get so concentrated that the average researcher can’t use them. Open access is super important for the scientific field to make progress.”



from MIT News https://ift.tt/4gzbMWC