Monday, February 9, 2026

A quick stretch switches this polymer’s capacity to transport heat

Most materials have an inherent capacity to handle heat. Plastic, for instance, is typically a poor thermal conductor, whereas materials like marble move heat more efficiently. If you were to place one hand on a marble countertop and the other on a plastic cutting board, the marble would conduct more heat away from your hand, creating a colder sensation compared to the plastic.

Typically, a material’s thermal conductivity cannot be changed without re-manufacturing it. But MIT engineers have now found that a relatively common material can switch its thermal conductivity. Simply stretching the material quickly dials up its heat conductance, from a baseline similar to that of plastic to a higher capacity closer to that of marble. When the material springs back to its unstretched form, it returns to its plastic-like properties.

The thermally reversible material is an olefin block copolymer — a soft and flexible polymer that is used in a wide range of commercial products. The team found that when the material is quickly stretched, its ability to conduct heat more than doubles. This transition occurs within just 0.22 seconds, which is the fastest thermal switching that has been observed in any material.

This material could be used to engineer systems that adapt to changing temperatures in real time. For instance, switchable fibers could be woven into apparel that normally retains heat. When stretched, the fabric would instantly conduct heat away from a person’s body to cool them down. Similar fibers could be built into laptops and infrastructure to keep devices and buildings from overheating. The researchers are working on further optimizing the polymer and on engineering new materials with similar properties.

“We need cheap and abundant materials that can quickly adapt to environmental temperature changes,” says Svetlana Boriskina, principal research scientist in MIT’s Department of Mechanical Engineering. “Now that we’ve seen this thermal switching, this changes the direction where we can look for and build new adaptive materials.”

Boriskina and her colleagues have published their results in a study appearing today in the journal Advanced Materials. The study’s co-authors include Duo Xu, Buxuan Li, You Lyu, and Vivian Santamaria-Garcia of MIT, and Yuan Zhu of Southern University of Science and Technology in Shenzhen, China.

Elastic chains

The key to the new phenomenon is that when the material is stretched, its microscopic structures align in ways that suddenly allow heat to travel through easily, increasing the material’s thermal conductivity. In its unstretched state, the same microstructures are tangled and bunched, effectively blocking heat’s path.

As it happens, Boriskina and her colleagues didn’t set out to find a heat-switching material. They were initially looking for more sustainable alternatives to spandex, which is a synthetic fabric made from petroleum-based plastics that is traditionally difficult to recycle. As a potential replacement, the team was investigating fibers made from a different polymer known as polyethylene.

“Once we started working with the material, we realized it had other properties that were more interesting than the fact that it was elastic,” Boriskina says. “What makes polyethylene unique is it has this backbone of carbon atoms arranged along a simple chain. And carbon is a very good conductor of heat.”

The microstructure of most polymer materials, including polyethylene, contains many carbon chains. However, these chains exist in a messy, spaghetti-like tangle known as an amorphous phase. Despite the fact that carbon is a good heat conductor, the disordered arrangement of chains typically impedes heat flow. Polyethylene and most other polymers, therefore, generally have low thermal conductivity.

In previous work, MIT Professor Gang Chen and his collaborators found ways to untangle the mess of carbon chains and push polyethylene to shift from a disordered amorphous state to a more aligned, crystalline phase. This transition effectively straightened the carbon chains, providing clear highways for heat to flow through and increasing the material’s thermal conductivity. In those experiments, however, the switch was permanent; once the material’s phase changed, it could not be reversed.

As Boriskina’s team explored polyethylene, they also considered other closely related materials, including olefin block copolymer (OBC). OBC is predominantly an amorphous material, made from highly tangled chains of carbon and hydrogen atoms. Scientists had therefore assumed that OBC would exhibit low thermal conductivity, and that any increase in its conductivity would likely be permanent, as with polyethylene.

But when the team carried out experiments to test the elasticity of OBC, they found something quite different.

“As we stretched and released the material, we realized that its thermal conductivity was really high when it was stretched and lower when it was relaxed, over thousands of cycles,” says study co-author and MIT graduate student Duo Xu. “This switch was reversible, while the material stayed mostly amorphous. That was unexpected.”

A stretchy mess

The team then took a closer look at OBC, and how it might be changing as it was stretched. The researchers used a combination of X-ray and Raman spectroscopy to observe the material’s microscopic structure as they stretched and relaxed it repeatedly. They observed that, in its unstretched state, the material consists mainly of amorphous tangles of carbon chains, with just a few islands of ordered, crystalline domains scattered here and there. When stretched, the crystalline domains seemed to align and the amorphous tangles straightened out, similar to what Gang Chen observed in polyethylene.

However, rather than transitioning entirely into a crystalline phase, the straightened tangles stayed in their amorphous state. In this way, the team found that the tangles were able to switch back and forth, from straightened to bunched and back again, as the material was stretched and relaxed repeatedly.

“Our material is always in a mostly amorphous state; it never crystallizes under strain,” Xu notes. “So it leaves you this opportunity to go back and forth in thermal conductivity a thousand times. It’s very reversible.”

The team also found that this thermal switching happens extremely fast: The material’s thermal conductivity more than doubled within just 0.22 seconds of being stretched.

“The resulting difference in heat dissipation through this material is comparable to a tactile difference between touching a plastic cutting board versus a marble countertop,” Boriskina says.

She and her colleagues are now taking the results of their experiments and working them into models to see how they can tweak a material’s amorphous structure, to trigger an even bigger change when stretched.

“Our fibers can quickly react to dissipate heat, for electronics, fabrics, and building infrastructure,” Boriskina says. “If we could make further improvements to switch their thermal conductivity from that of plastic to that closer to diamond, it would have a huge industrial and societal impact.”

This research was supported, in part, by the U.S. Department of Energy, the Office of Naval Research Global via Tec de Monterrey, the MIT Evergreen Graduate Innovation Fellowship, the MathWorks MechE Graduate Fellowship, and the MIT-SUSTech Centers for Mechanical Engineering Research and Education, and carried out, in part, with the use of MIT.nano and ISN facilities.



from MIT News https://ift.tt/MEaklcs

Sunday, February 8, 2026

How MIT’s 10th president shaped the Cold War

Today, MIT plays a key role in maintaining U.S. competitiveness, technological leadership, and national defense — and much of the Institute’s work to support the nation’s standing in these areas can be traced back to 1953.

Two months after he took office that year, U.S. President Dwight Eisenhower received a startling report from the military: The USSR had successfully exploded a nuclear bomb nine months sooner than intelligence sources had predicted. The rising Communist power had also detonated a hydrogen bomb using technology more sophisticated than that of the U.S. And lastly, there was evidence of a new Soviet bomber that rivaled the B-52 in size and range — and the aircraft was of an entirely original design from within the USSR. There was, the report concluded, a significant chance of a surprise nuclear attack on the United States.

Eisenhower’s understanding of national security was vast (he had led the Allies to victory in World War II and served as the first supreme commander of NATO), but the connections he’d made during his two-year stint as president of Columbia University would prove critical to navigating the emerging challenges of the Cold War. He sent his advisors in search of a plan for managing this threat, and he suggested they start with James Killian, then president of MIT.

Killian had an unlikely path to the presidency of MIT. “He was neither a scientist nor an engineer,” says David Mindell, the Dibner Professor of the History of Engineering and Manufacturing and a professor of aeronautics and astronautics at MIT. “But Killian turned out to be a truly gifted administrator.”

While he was serving as editor of MIT Technology Review (where he founded what became the MIT Press), Killian was tapped by then-president Karl Compton to join his staff. As the war effort ramped up on the MIT campus in the 1940s, Compton deputized Killian to lead the RadLab — a 4,000-person effort to develop and deploy the radar systems that proved decisive in the Allied victory.

Killian was named MIT’s 10th president in 1948. In 1951, he launched MIT Lincoln Laboratory, a federally funded research center where MIT and U.S. Air Force scientists and engineers collaborated on new air defense technologies to protect the nation against a nuclear attack.

Two years later, within weeks of Eisenhower’s 1953 request, Killian convened a group of leading scientists at MIT. The group proposed a three-part study: The U.S. needed to reassess its offensive capabilities, its continental defense, and its intelligence operations. Eisenhower agreed.

Killian mobilized 42 engineers and scientists from across the country into three panels matching the committee’s charge. Between September 1954 and February 1955, the panels held 307 meetings with every major defense and intelligence organization in the U.S. government. They had unrestricted access to every project, plan, and program involving national defense. The result, a 190-page report titled “Meeting the Threat of a Surprise Attack,” was delivered to Eisenhower’s desk on Feb. 14, 1955.

The Killian Report, as it came to be known, would go on to play a dramatic role in defining the frontiers of military technology, intelligence gathering, national security policy, and global affairs over the next several decades. Killian’s input would also have dramatic impacts on Eisenhower’s presidency and the relationship between the federal government and higher education.

Foreseeing an evolving competition

The Killian Report opens by anticipating four projected “periods” in the shifting balance of power between the U.S. and the Soviet Union.

In 1955, the U.S. had a decided offensive advantage over the USSR, but it was overly vulnerable to surprise attack. In 1956 and 1957, the U.S. would have an even larger offensive advantage and be only somewhat less vulnerable to surprise. By 1960, the U.S.’ offensive advantage would be narrower, but it would be in a better position to anticipate an attack. Within a decade, the report stated, the two nations would enter “Period IV” — during which “an attack by either side would result in mutual destruction … [a period] so fraught with danger to the U.S. that we should push all promising technological development so that we may stay in Periods II and III as long as possible.”

The report went on to make extensive, detailed recommendations — accelerated development of intercontinental ballistic missiles and high-energy aircraft fuels, expansion and increased ground security for “delivery system” facilities, increased cooperation with Canada and more studies about establishing monitoring stations on polar pack ice, and “studies directed toward better understanding of the radiological hazards that may result from the detonation of large numbers of nuclear weapons,” among others.

“Eisenhower really wanted to draw the perspectives of scientists and engineers into his decision-making,” says Mindell. “Generals and admirals tend to ask for more arms and more boots on the ground. The president didn’t want to be held captive by these views — and Killian’s report really delivered this for him.”

On the day it arrived, President Eisenhower circulated the Killian Report to the head of every department and agency in the federal government and asked them to comment on its recommendations. The Cold War arms race was on — and it would be between scientists and engineers in the United States and those in the Soviet Union.

An odd couple

The Killian Report made many recommendations based on “the correctness of the current national intelligence estimates” — even though “Eisenhower was frustrated with his whole intelligence apparatus,” says Will Hitchcock, the James Madison Professor of History at the University of Virginia and author of “The Age of Eisenhower.” “He felt it was still too much World War II ‘exploding-cigar’ stuff. There wasn’t enough work on advance warning, on seeing what’s over the hill. But that’s what Eisenhower really wanted to know.” The surprise attack on Pearl Harbor still lingered in the minds of many Americans, Hitchcock notes, and “that needed to be avoided.”

Killian needed an aggressive, innovative thinker to assess U.S. intelligence, so he turned to Edwin Land. The cofounder of Polaroid, Land was an astonishingly bold engineer and inventor. He also had military experience, having developed new ordnance targeting systems, aerial photography devices, and other photographic and visual surveillance technologies during World War II. Killian approached Land knowing their methods and work style were quite different. (When the offer to lead the intelligence panel was made, Land was in Hollywood advising filmmakers on the development of 3D movies; Land told Killian he had a personal rule that any committee he served on “must fit into a taxicab.”)

In fall 1954, Land and his five-person panel quickly confirmed Killian and Eisenhower’s suspicions: “We would go in and interview generals and admirals in charge of intelligence and come away worried,” Land reported to Killian later. “We were [young scientists] asking questions — and they couldn’t answer them.” Killian and Land realized this would set their report and its recommendations on a complicated path: While they needed to acknowledge and address the challenges of broadly upgrading intelligence activities, they also needed to make rapid progress on responding to the Soviet threat.

As work on the report progressed, Land and Killian held briefings with Eisenhower. They used these meetings to make two additional proposals — neither of which, President Eisenhower decided, would be spelled out in the final report for security reasons. The first was the development of missile-firing submarines, a long-term prospect that would take a decade to complete. (The technology developed for Polaris-class submarines, Mindell notes, transferred directly to the rockets that powered the Apollo program to the moon.)

The second proposal — to fast-track development of the U-2, a new high-altitude spy plane — could be accomplished within a year, Land told Eisenhower. The president agreed to both ideas, but he put a condition on the U-2 program. As Killian later wrote: “The president asked that it should be handled in an unconventional way so that it would not become entangled in the bureaucracy of the Defense Department or troubled by rivalries among the services.”

Powered by Land’s revolutionary imaging devices, the U-2 would become a critical tool in the U.S.’ ability to assess and understand the Soviet Union’s nuclear capacity. But the spy plane would also go on to have disastrous consequences for the peace process and for Eisenhower.

The aftermath(s)

The Killian Report has a very complex legacy, says Christopher Capozzola, the Elting Morison Professor of History. “There is a series of ironies about the whole undertaking,” he says. “For example, Eisenhower was trying to tamp down interservice rivalries by getting scientists to decide things. But within a couple of years those rivalries have all gotten worse.” Similarly, Capozzola notes, Eisenhower — who famously coined the phrase “military-industrial complex” and warned against it — amplified the militarization of scientific research “more than anyone else.”

Another especially painful irony emerged on May 1, 1960. Two weeks before a meeting between Eisenhower and Khrushchev in Paris to discuss how the U.S. and USSR could ease Cold War tensions and slow the arms race, a U-2 was shot down in Soviet airspace. After a public denial by the U.S. that the aircraft was being used for espionage, the Soviets produced the plane’s wreckage, cameras, and pilot — who admitted he was working for the CIA. The peace process, which had become the centerpiece of Eisenhower’s intended legacy, collapsed.

There were also some brighter outcomes of the Killian Report, Capozzola says. It marked a dramatic reset of the national government’s relationship with academic scientists and engineers — and with MIT specifically. “The report really greased the wheels between MIT scientists and Washington,” he notes. “Perhaps more than the report itself, the deep structures and relationships that Killian set up had implications for MIT and other research universities. They started to orient their missions toward the national interest,” he adds.

The report also cemented Eisenhower’s relationship with Killian. After the launch of Sputnik, which induced a broad public panic in the U.S. about Soviet scientific capabilities, the president called on Killian to guide the national response. Eisenhower later named Killian the first special assistant to the president for science and technology. In the years that followed, Killian would go on to help launch NASA, and MIT engineers would play a critical role in the Apollo mission that landed the first person on the moon. To this day, researchers at MIT and Lincoln Laboratory uphold this legacy of service, advancing knowledge in areas vital to national security, economic competitiveness, and quality of life for all Americans.

As Eisenhower’s special assistant, Killian met with him almost daily and became one of his most trusted advisors. “Killian could talk to the president, and Eisenhower really took his advice,” says Capozzola. “Not very many people can do that. The fact that Killian had that and used it was different.”

A key to their relationship, Capozzola notes, was Killian’s approach to his work. “He exemplified the notion that if you want to get something done, don’t take the credit. At no point did Killian think he was setting science policy. He was advising people on their best options, including decision-makers who would have to make very difficult decisions. That’s it.”

In 1977, after many tours of duty in Washington and his retirement from MIT, Killian summarized his experience working for Eisenhower in his memoir, “Sputnik, Scientists, and Eisenhower.” Killian said of his colleagues: “They were held together in close harmony not only by the challenge of the scientific and technical work they were asked to undertake but by their abiding sense of the opportunity they had to serve a president they admired and the country they loved. They entered the corridors of power in a moment of crisis and served there with a sense of privilege and of admiration for the integrity and high purpose of the White House.”



from MIT News https://ift.tt/H8YzCrG

Friday, February 6, 2026

“This is science!” – MIT president talks about the importance of America’s research enterprise on GBH’s Boston Public Radio

In a wide-ranging conversation, MIT President Sally Kornbluth joined Jim Braude and Margery Eagan live in studio for GBH’s Boston Public Radio on Thursday, February 5. They talked about MIT, the pressures facing America’s research enterprise, the importance of science, the 2023 Congressional hearing on antisemitism, and more – including Kornbluth’s experience as a Type 1 diabetic.

Reflecting on how research and innovation in the treatment of diabetes has advanced over decades of work, leading to markedly better patient care, Kornbluth exclaims: “This is science!”

With new financial pressures facing universities, increased competition for talented students and scholars from outside the U.S., as well as unprecedented pressures on university leaders and campuses, co-host Eagan asks Kornbluth what she thinks will happen in years to come.

“For us, one of the hardest things now is the endowment tax,” remarks Kornbluth. “That is $240 million a year. Think about how much science you can get for $240 million a year. Are we managing it? Yes. Are we still forging ahead on all of our exciting initiatives? Yes. But we’ve had to reconfigure things. We’ve had to merge things. And it’s not the way we should be spending our time and money.”   

Watch and listen to the full episode on YouTube. President Kornbluth appears one hour and seven minutes into the broadcast.

Following Kornbluth’s appearance, MIT Assistant Professor John Urschel – also a former offensive lineman for the Baltimore Ravens – joined Edgar B. Herwick III, host of GBH’s newest show, The Curiosity Desk, to talk about his love of his family, linear algebra, and football.

On how he eventually chose math over football, Urschel quips: “Well, I hate to break it to you, I like math better… let me tell you, when I started my PhD at MIT, I just fell in love with the place. I fell in love with this idea of being in this environment [where] everyone loves math, everyone wants to learn. I was just constantly excited every day showing up.”

Prof. Urschel appears about two hours and 40 minutes into the webcast on YouTube.

Coming up on Curiosity Desk later this month…

Airing weekday afternoons from 1-2 p.m., The Curiosity Desk will welcome additional MIT guests in the coming weeks. On Thursday, Feb. 12, Anette “Peko” Hosoi, Pappalardo Professor of Mechanical Engineering, and Jerry Lu MFin ’24, a former researcher at the MIT Sports Lab, visit The Curiosity Desk to discuss their work using AI to help Olympic figure skaters improve their jumps.

Then, on Thursday, Feb. 19, Professors Sangeeta Bhatia and Angela Belcher talk with Herwick about their research to improve diagnostics for ovarian cancer. We learn that about 80% of the time ovarian cancer starts in the fallopian tubes and how this points the way to a whole new approach to diagnosing and treating the disease. 



from MIT News https://ift.tt/frEd3P9

I’m walking here! A new model maps foot traffic in New York City

Early in the 1969 film “Midnight Cowboy,” Dustin Hoffman, playing the character of Ratso Rizzo, crosses a Manhattan street and angrily bangs on the hood of an encroaching taxi. Hoffman’s line — “I’m walking here!” — has since been repeated by thousands of New Yorkers. Where cars and people mix, tensions rise.

And yet, governments and planners across the U.S. haven’t thoroughly tracked where cars and people mix. Officials have long measured vehicle traffic closely while largely ignoring pedestrian traffic. Now, an MIT research group has assembled a routable dataset of sidewalks, crosswalks, and footpaths for all of New York City — a massive mapping project and the first complete model of pedestrian activity in any U.S. city.

The model could help planners decide where to make pedestrian infrastructure and public space investments, and illuminate how development decisions could affect non-motorized travel in the city. The study also helps pinpoint locations throughout the city where there are both lots of pedestrians and high pedestrian hazards, such as traffic crashes, and where streets or intersections are most in need of upgrades.

“We now have a first view of foot traffic all over New York City and can check planning decisions against it,” says Andres Sevtsuk, an associate professor in MIT’s Department of Urban Studies and Planning (DUSP), who led the study. “New York has very high densities of foot traffic outside of its most well-known areas.”

Indeed, one upshot of the model is that while Manhattan has the most foot traffic per block, the city’s other boroughs contain plenty of pedestrian-heavy stretches of sidewalk and could probably use more investment on behalf of walkers.

“Midtown Manhattan has by far the most foot traffic, but we found there is a probably unintentional Manhattan bias when it comes to policies that support pedestrian infrastructure,” Sevtsuk says. “There are a whole lot of streets in New York with very high pedestrian volumes outside of Manhattan, whether in Queens or the Bronx or Brooklyn, and we’re able to show, based on data, that a lot of these streets have foot-traffic levels similar to many parts of Manhattan.”

And, in an advance that could help cities anywhere, the model was used to quantify vehicle crashes involving pedestrians not only as raw totals, but on a per-pedestrian basis.

“A lot of cities put real investments behind keeping pedestrians safe from vehicles by prioritizing dangerous locations,” Sevtsuk says. “But that’s not only where the most crashes occur. Here we are able to calculate accidents per pedestrian, the risk people face, and that broadens the picture in terms of where the most dangerous intersections for pedestrians really are.”
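The per-pedestrian metric Sevtsuk describes is simple exposure arithmetic: divide a location’s crash count by the total pedestrian volume passing through it over the same period. A minimal sketch in Python, with made-up intersection names and numbers (not the study’s data or pipeline), shows how a ranking can flip once raw counts are normalized by exposure:

```python
# Illustrative sketch only: ranking intersections by crash risk per
# pedestrian rather than by raw crash totals. All names and figures
# below are hypothetical, invented for demonstration.

def crashes_per_million_pedestrians(crashes, pedestrians_per_hour, hours):
    """Normalize a raw crash count by total pedestrian exposure."""
    exposure = pedestrians_per_hour * hours  # total pedestrian-passages
    return crashes / exposure * 1_000_000

# (name, crashes over the period, avg pedestrians/hour, hours observed)
intersections = [
    ("Busy square",      40, 1700, 5000),  # many crashes, huge foot traffic
    ("Highway off-ramp",  6,   60, 5000),  # few crashes, very few walkers
]

# Sort by per-pedestrian risk, highest first.
ranked = sorted(
    intersections,
    key=lambda r: crashes_per_million_pedestrians(r[1], r[2], r[3]),
    reverse=True,
)

for name, c, p, h in ranked:
    rate = crashes_per_million_pedestrians(c, p, h)
    print(f"{name}: {c} crashes, {rate:.1f} per million pedestrian-passages")
```

In this toy example, the off-ramp tops the per-pedestrian ranking despite having far fewer total crashes, which is the kind of reversal the normalization is meant to surface.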

The paper, “Spatial Distribution of Foot-traffic in New York City and Applications for Urban Planning,” is published today in Nature Cities.

The authors are Sevtsuk, the Charles and Ann Spaulding Associate Professor of Urban Science and Planning in DUSP and head of the City Design and Development Group; Rounaq Basu, an assistant professor at Georgia Tech; Liu Liu, a PhD student at the City Form Lab in DUSP; Abdulaziz Alhassan, a PhD student at MIT’s Center for Complex Engineering Systems; and Justin Kollar, a PhD student at MIT’s Leventhal Center for Advanced Urbanism in DUSP.

Walking everywhere

The current study continues work Sevtsuk and his colleagues have conducted charting and modeling pedestrian traffic around the world, from Melbourne to MIT’s Kendall Square neighborhood in Cambridge, Massachusetts. Many cities collect some pedestrian count data — but not much. And while officials usually request vehicle traffic impact assessments for new development plans, they rarely study how new developments or infrastructure proposals affect pedestrians.

However, New York City does devote part of its Department of Transportation (DOT) to pedestrian issues, and about 41 percent of trips city-wide are made on foot, compared to just 28 percent by vehicle, likely the highest such ratio in any big U.S. city. To calibrate the model, the MIT team used pedestrian counts that New York City’s DOT recorded in 2018 and 2019, covering up to 1,000 city sidewalk segments on weekdays and roughly 450 segments on weekends.

The researchers were able to test the model — which incorporates a wide range of factors — against New York City’s pedestrian-count data. Once calibrated, the model could expand foot-traffic estimates throughout the whole city, not just the points where pedestrian counts were observed.

The results showed that in Midtown Manhattan, there are about 1,697 pedestrians, on average, per sidewalk segment per hour during the evening peak of foot traffic, the highest in the city. The financial district in lower Manhattan comes in second, at 740 pedestrians per hour, with Greenwich Village third at 656.

Other parts of Manhattan register lower levels of foot traffic, however. Morningside Heights and East Harlem register 226 and 227 pedestrians per sidewalk segment per hour, respectively. And that’s similar to, or lower than, some parts of other boroughs. Brooklyn Heights has 277 pedestrians per sidewalk segment per hour; University Heights in the Bronx has 263; Borough Park in Brooklyn and the Grand Concourse in the Bronx average 236; and a slice of Queens in the Corona area averages 222. Many other spots are over 200.

The model overlays many different types of pedestrian journeys for each time period and shows that people are generally headed to work and schools in the morning, but conduct more varied types of trips in mid-day and the evening, as they seek out amenities or conduct social or recreational visits.

“Because of jobs, transit stops are the biggest generators of foot traffic in the morning peak,” Liu observes. “In the evening peak, of course people need to get home too, but patterns are much more varied, and people are not just returning from work or school. More social and recreational travel happens after work, whether it’s getting together with friends or running errands for family or family care trips, and that’s what the model detects too.”

On the safety front, pedestrians face danger in many places, not just the intersections with the most total accidents. On a per-pedestrian basis, many parts of the city turn out to be riskier than the locations with the most pedestrian-related crashes.

“Places like Times Square and Herald Square in Manhattan may have numerous crashes, but they have very high pedestrian volumes, and it’s actually relatively safe to walk there,” Basu says. “There are other parts of the city, around highway off-ramps and heavy car-infrastructure, including the relatively low-density borough of Staten Island, which turn out to have a disproportionate number of crashes per pedestrian.”

Taking the model across the U.S.

The MIT model stands a solid chance of being applied in New York City policy and planning circles, since officials there are aware of the research and have been regularly communicating with the MIT team about it.

For his part, Sevtsuk emphasizes that, as distinct as New York City might be, the MIT model can be applied to cities and towns anywhere in the U.S. As it happens, the team is working with municipal officials in two other places at the moment. One is Los Angeles, where city officials are not only trying to upgrade pedestrian and public transit mobility for regular daily trips, but making plans to handle an influx of visitors for the 2028 summer Olympics.

Meanwhile the state of Maine is working with the MIT team to evaluate pedestrian movement in over 140 of its cities and towns, to better understand the kinds of upgrades and safety improvements it could make for pedestrians across the state. Sevtsuk hopes that still other places will take notice of the New York City study and recognize that the tools are in place to analyze foot traffic more broadly in U.S. cities, to address the urgent need to decarbonize cities, and to start balancing what he views as the disproportionate focus on car travel prevalent in 20th century urban planning.

“I hope this can inspire other cities to invest in modeling foot traffic and mapping pedestrian infrastructure as well,” Sevtsuk says. “Very few cities make plans for pedestrian mobility or examine rigorously how future developments will impact foot traffic. But they can. Our models serve as a test bed for making future changes.”



from MIT News https://ift.tt/mwSefEo

Thursday, February 5, 2026

Some early life forms may have breathed oxygen well before it filled the atmosphere

Oxygen is a vital and constant presence on Earth today. But that hasn’t always been the case. It wasn’t until around 2.3 billion years ago that oxygen became a permanent fixture in the atmosphere, during a pivotal period known as the Great Oxidation Event (GOE), which set the evolutionary course for oxygen-breathing life as we know it today.

A new study by MIT researchers suggests some early forms of life may have evolved the ability to use oxygen hundreds of millions of years before the GOE. The findings may represent some of the earliest evidence of aerobic respiration on Earth.

In a study appearing today in the journal Palaeogeography, Palaeoclimatology, Palaeoecology, MIT geobiologists traced the evolutionary origins of a key enzyme that enables organisms to use oxygen. The enzyme is found in the vast majority of aerobic, oxygen-breathing life forms today. The team discovered that this enzyme evolved during the Mesoarchean — a geological period that predates the Great Oxidation Event by hundreds of millions of years.

The team’s results may help to explain a longstanding puzzle in Earth’s history: Why did it take so long for oxygen to build up in the atmosphere?

The very first producers of oxygen on the planet were cyanobacteria — microbes that evolved the ability to use sunlight and water to photosynthesize, releasing oxygen as a byproduct. Scientists have determined that cyanobacteria emerged around 2.9 billion years ago. The microbes, then, were presumably churning out oxygen for hundreds of millions of years before the Great Oxidation Event. So, where did all of cyanobacteria’s early oxygen go?

Scientists suspect that rocks may have drawn down a large portion of oxygen early on, through various geochemical reactions. The MIT team’s new study now suggests that biology may have also played a role.

The researchers found that some organisms may have evolved the enzyme to use oxygen hundreds of millions of years before the Great Oxidation Event. This enzyme may have enabled the organisms living near cyanobacteria to gobble up any small amounts of oxygen that the microbes produced, in turn delaying oxygen’s accumulation in the atmosphere for hundreds of millions of years.

“This does dramatically change the story of aerobic respiration,” says study co-author Fatima Husain, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Our study adds to this very recently emerging story that life may have used oxygen much earlier than previously thought. It shows us how incredibly innovative life is at all periods in Earth’s history.”

The study’s other co-authors include Gregory Fournier, associate professor of geobiology at MIT, along with Haitao Shang and Stilianos Louca of the University of Oregon.

First respirers

The new study adds to a long line of work at MIT aiming to piece together oxygen’s history on Earth. This body of research has helped to pin down the timing of the Great Oxidation Event as well as the first evidence of oxygen-producing cyanobacteria. The overall understanding that has emerged is that oxygen was first produced by cyanobacteria around 2.9 billion years ago, while the Great Oxidation Event — when oxygen finally accumulated enough to persist in the atmosphere — took place much later, around 2.33 billion years ago.

For Husain and her colleagues, this apparent delay between oxygen’s first production and its eventual persistence inspired a question.

“We know that the microorganisms that produce oxygen were around well before the Great Oxidation Event,” Husain says. “So it was natural to ask, was there any life around at that time that could have been capable of using that oxygen for aerobic respiration?”

If there were in fact some life forms that were using oxygen, even in small amounts, they might have played a role in keeping oxygen from building up in the atmosphere, at least for a while.

To investigate this possibility, the MIT team looked to heme-copper oxygen reductases, which are a set of enzymes that are essential for aerobic respiration. The enzymes act to reduce oxygen to water, and they are found in the majority of aerobic, oxygen-breathing organisms today, from bacteria to humans.

“We targeted the core of this enzyme for our analyses because that’s where the reaction with oxygen is actually taking place,” Husain explains.

Tree dates

The team aimed to trace the enzyme’s evolution backward in time to see when the enzyme first emerged to enable organisms to use oxygen. They first identified the enzyme’s genetic sequence and then used an automated search tool to look for this same sequence in databases containing the genomes of millions of different species of organisms.

“The hardest part of this work was that we had too much data,” Fournier says. “This enzyme is just everywhere and is present in most modern living organisms. So we had to sample and filter the data down to a dataset that was representative of the diversity of modern life and also small enough to do computation with, which is not trivial.”

The team ultimately isolated the enzyme’s sequence from several thousand modern species and mapped these sequences onto an evolutionary tree of life, based on what scientists know about when each respective species likely evolved and branched off. They then looked through this tree for specific species that might offer related information about their origins.

If, for instance, there is a fossil record for a particular organism on the tree, that record would include an estimate of when that organism appeared on Earth. The team would use that fossil’s age to “pin” a date to that organism on the tree. In a similar way, they could place pins across the tree to effectively tighten their estimates for when in time the enzyme evolved from one species to the next.
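The pinning idea described above can be illustrated with a toy calculation. This is not the study's actual dating method (which involves far more sophisticated phylogenetic inference); it is only a sketch of the underlying logic: an uncalibrated node must be younger than any pinned ancestor and at least as old as its oldest pinned descendant, so each added pin tightens the bounds. The tree structure is implicit and the ages (in billions of years) are made up for illustration.

```python
def bound_node_age(pinned_ancestor_age, pinned_descendant_ages):
    """Bound the age of an unpinned node on an evolutionary tree.

    The node must be no older than its nearest pinned ancestor and
    at least as old as its oldest pinned descendant. Returns the
    (lower bound, upper bound, midpoint estimate) in billions of years.
    """
    low = max(pinned_descendant_ages)   # oldest fossil-dated descendant
    high = pinned_ancestor_age          # fossil-dated ancestor
    return low, high, (low + high) / 2

# One fossil pin below the node of interest:
print(bound_node_age(3.5, [2.5]))        # (2.5, 3.5, 3.0)

# A second, older pinned descendant tightens the estimate:
print(bound_node_age(3.5, [2.5, 3.0]))   # (3.0, 3.5, 3.25)
```

Placing many such pins across the tree is what lets the interval around each unpinned node, including the ancestral oxygen-reductase enzyme, shrink toward a usable date range.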

In the end, the researchers were able to trace the enzyme as far back as the Mesoarchean — a geological era that lasted from 3.2 to 2.8 billion years ago. It’s around this time that the team suspects the enzyme — and organisms’ ability to use oxygen — first emerged. This period predates the Great Oxidation Event by several hundred million years.

The new findings suggest that, shortly after cyanobacteria evolved the ability to produce oxygen, other living things evolved the enzyme to use that oxygen. Any such organism that happened to live near cyanobacteria would have been able to quickly take up the oxygen that the bacteria churned out. These early aerobic organisms may have then played some role in preventing oxygen from escaping to the atmosphere, delaying its accumulation for hundreds of millions of years.

“Considered all together, MIT research has filled in the gaps in our knowledge of how Earth’s oxygenation proceeded,” Husain says. “The puzzle pieces are fitting together and really underscore how life was able to diversify and live in this new, oxygenated world.”

This research was supported, in part, by the Research Corporation for Science Advancement Scialog program.



de MIT News https://ift.tt/m96ASXY

Helping AI agents search to get the best results out of large language models

Whether you’re a scientist brainstorming research ideas or a CEO hoping to automate a task in human resources or finance, you’ll find that artificial intelligence tools are becoming the assistants you didn’t know you needed. In particular, many professionals are tapping into the talents of semi-autonomous software systems called AI agents, which can call on AI at specific points to solve problems and complete tasks.

AI agents are particularly effective when they use large language models (LLMs) because those systems are powerful, efficient, and adaptable. One way to program such technology is by describing in code what you want your system to do (the “workflow”), including when it should use an LLM. If you were a software company trying to revamp your old codebase to use a more modern programming language for better optimizations and safety, you might build a system that uses an LLM to translate the codebase one file at a time, testing each file as you go.

But what happens when LLMs make mistakes? You’ll want the agent to backtrack to make another attempt, incorporating lessons it learned from previous mistakes. Coding this up can take as much effort as implementing the original agent; if your system for translating a codebase contained thousands of lines of code, then you’d be making thousands of lines of code changes or additions to support the logic for backtracking when LLMs make mistakes. 

To save programmers time and effort, researchers with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Asari AI have developed a framework called “EnCompass.” 

With EnCompass, you no longer have to make these changes yourself. Instead, when EnCompass runs your program, it automatically backtracks if LLMs make mistakes. EnCompass can also make clones of the program runtime to make multiple attempts in parallel in search of the best solution. In full generality, EnCompass searches over the different possible paths your agent could take as a result of the different possible outputs of all the LLM calls, looking for the path where the LLM finds the best solution.

Then, all you have to do is annotate the locations where you may want to backtrack or clone the program runtime, and record any information that may be useful to the strategy used to search over your agent’s possible execution paths (the search strategy). You can then specify the search strategy separately — either one that EnCompass provides out of the box or, if desired, a custom strategy of your own.

“With EnCompass, we’ve separated the search strategy from the underlying workflow of an AI agent,” says lead author Zhening Li ’25, MEng ’25, who is an MIT electrical engineering and computer science (EECS) PhD student, CSAIL researcher, and research consultant at Asari AI. “Our framework lets programmers easily experiment with different search strategies to find the one that makes the AI agent perform the best.” 

EnCompass was used for agents implemented as Python programs that call LLMs, where it demonstrated noticeable code savings: it reduced the coding effort of implementing search by up to 80 percent across agents, such as one for translating code repositories and another for discovering transformation rules of digital grids. In the future, EnCompass could enable agents to tackle large-scale tasks, including managing massive code libraries, designing and carrying out science experiments, and creating blueprints for rockets and other hardware.

Branching out

When programming your agent, you mark particular operations — such as calls to an LLM — where results may vary. These annotations are called “branchpoints.” If you imagine your agent program as generating a single plot line of a story, then adding branchpoints turns the story into a choose-your-own-adventure story game, where branchpoints are locations where the plot branches into multiple future plot lines. 

You can then specify the strategy that EnCompass uses to navigate that story game, in search of the best possible ending to the story. This can include launching parallel threads of execution or backtracking to a previous branchpoint when you get stuck in a dead end.
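The branchpoint-and-backtrack pattern described above can be sketched in a few lines of plain Python. To be clear, this is not EnCompass's actual API — the function names and retry logic here are invented for illustration. A flaky translator stands in for an LLM call (the branchpoint), and a simple loop retries that step whenever a check on its output fails:

```python
import random

def flaky_translate(line, rng):
    """Stand-in for an LLM call whose output may vary: it sometimes
    returns None, simulating a translation that fails its checks."""
    if rng.random() < 0.5:
        return None            # a "mistake" the agent must recover from
    return line.upper()        # a "successful" translation

def run_with_backtracking(lines, max_attempts=20, seed=0):
    """Translate each line, backtracking to retry the branchpoint
    whenever the stand-in LLM call produces a bad result."""
    rng = random.Random(seed)
    translated = []
    for line in lines:
        for _ in range(max_attempts):   # backtrack: retry this branchpoint
            out = flaky_translate(line, rng)
            if out is not None:         # the per-step check passed
                translated.append(out)
                break
        else:
            raise RuntimeError(f"gave up on {line!r}")
    return translated

print(run_with_backtracking(["a", "b", "c"]))  # ['A', 'B', 'C']
```

A framework like EnCompass aims to take the retry scaffolding off the programmer's hands: the workflow is written once, the variable steps are annotated, and the search machinery — including cloning the runtime to explore several attempts in parallel — is supplied separately.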

Users can also plug in a few common search strategies provided by EnCompass out of the box, or define their own custom strategy. For example, you could opt for Monte Carlo tree search, which builds a search tree by balancing exploration and exploitation, or beam search, which keeps the best few outputs from every step. EnCompass makes it easy to experiment with different approaches to find the best strategy to maximize the likelihood of successfully completing your task.
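Beam search, one of the off-the-shelf strategies mentioned above, can be illustrated generically: at each step, every surviving candidate is expanded into several possible continuations, and only the best few are kept. The toy problem and scoring function below are invented stand-ins for LLM outputs and their evaluations, not anything from EnCompass itself:

```python
def beam_search(start, expand, score, steps, beam_width=2):
    """Generic beam search: expand every candidate, rank by score,
    and keep only the best `beam_width` partial solutions per step."""
    beam = [start]
    for _ in range(steps):
        candidates = [nxt for state in beam for nxt in expand(state)]
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]      # prune to the best few
    return max(beam, key=score)

# Toy problem: grow a string one character at a time, scoring by
# how many 'b's it contains; the beam steers toward "bbb".
expand = lambda s: [s + "a", s + "b"]
score = lambda s: s.count("b")
print(beam_search("", expand, score, steps=3))  # bbb
```

In an agent setting, `expand` would correspond to sampling several outputs at a branchpoint and `score` to the recorded quality of each partial execution path.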

The coding efficiency of EnCompass

So just how code-efficient is EnCompass for adding search to agent programs? According to the researchers’ findings, the framework drastically cut the amount of code programmers needed to add to their agent programs to enable search, helping them experiment with different strategies to find the one that performs best.

For example, the researchers applied EnCompass to an agent that translates a repository of code from the Java programming language, which is commonly used to program apps and enterprise software, to Python. They found that implementing search with EnCompass — mainly involving adding branchpoint annotations and annotations that record how well each step did — required 348 fewer lines of code (about 82 percent) than implementing it by hand. They also demonstrated how EnCompass enabled them to easily try out different search strategies, identifying the best strategy to be a two-level beam search algorithm, achieving an accuracy boost of 15 to 40 percent across five different repositories at a search budget of 16 times the LLM calls made by the agent without search.

“As LLMs become a more integral part of everyday software, it becomes more important to understand how to efficiently build software that leverages their strengths and works around their limitations,” says co-author Armando Solar-Lezama, who is an MIT professor of EECS and CSAIL principal investigator. “EnCompass is an important step in that direction.”

The researchers add that EnCompass targets agents where a program specifies the steps of the high-level workflow; the current iteration of their framework is less applicable to agents that are entirely controlled by an LLM. “In those agents, instead of having a program that specifies the steps and then using an LLM to carry out those steps, the LLM itself decides everything,” says Li. “There is no underlying programmatic workflow, so you can execute inference-time search on whatever the LLM invents on the fly. In this case, there’s less need for a tool like EnCompass that modifies how a program executes with search and backtracking.”

Li and his colleagues plan to extend EnCompass to more general search frameworks for AI agents. They also plan to test their system on more complex tasks to refine it for real-world uses, including at companies. What’s more, they’re evaluating how well EnCompass helps agents work with humans on tasks like brainstorming hardware designs or translating much larger code libraries. For now, EnCompass is a powerful building block that enables humans to tinker with AI agents more easily, improving their performance.

“EnCompass arrives at a timely moment, as AI-driven agents and search-based techniques are beginning to reshape workflows in software engineering,” says Carnegie Mellon University Professor Yiming Yang, who wasn’t involved in the research. “By cleanly separating an agent’s programming logic from its inference-time search strategy, the framework offers a principled way to explore how structured search can enhance code generation, translation, and analysis. This abstraction provides a solid foundation for more systematic and reliable search-driven approaches to software development.”  

Li and Solar-Lezama wrote the paper with two Asari AI researchers: Caltech Professor Yisong Yue, an advisor at the company; and senior author Stephan Zheng, who is the founder and CEO. Their work was supported by Asari AI.

The team’s work was presented at the Conference on Neural Information Processing Systems (NeurIPS) in December.



de MIT News https://ift.tt/WUQb9RN

New vaccine platform promotes rare protective B cells

A longstanding goal of immunotherapies and vaccine research is to induce antibodies in humans that neutralize deadly viruses such as HIV and influenza. Of particular interest are antibodies that are “broadly neutralizing,” meaning they can in principle eliminate multiple strains of a virus such as HIV, which mutates rapidly to evade the human immune system.

Researchers at MIT and the Scripps Research Institute have now developed a vaccine that generates a significant population of rare precursor B cells that are capable of evolving to produce broadly neutralizing antibodies. Expanding these cells is the first step toward a successful HIV vaccine.

The researchers’ vaccine design uses DNA instead of protein as a scaffold to fabricate a virus-like particle (VLP) displaying numerous copies of an engineered HIV immunogen called eOD-GT8, which was developed at Scripps. This vaccine generated substantially more precursor B cells in a humanized mouse model compared to a protein-based virus-like particle that has shown significant success in human clinical trials.

Preclinical studies showed that the DNA-VLP generated eight times more of the desired, or “on-target,” B cells than the clinical product, which was already shown to be highly potent.

“We were all surprised that this already outstanding VLP from Scripps was significantly outperformed by the DNA-based VLP,” says Mark Bathe, an MIT professor of biological engineering and an associate member of the Broad Institute of MIT and Harvard. “These early preclinical results suggest a potential breakthrough as an entirely new, first-in-class VLP that could transform the way we think about active immunotherapies, and vaccine design, across a variety of indications.”

The researchers also showed that the DNA scaffold doesn’t induce an immune response when used to display the engineered HIV antigen. This means the DNA VLP might be used to deliver multiple antigens when boosting strategies are needed for challenging diseases like HIV.

“The DNA-VLP allowed us for the first time to assess whether B cells targeting the VLP itself limit the development of ‘on target’ B cell responses — a longstanding question in vaccine immunology,” says Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute and a Howard Hughes Medical Institute Investigator.

Bathe and Irvine are the senior authors of the study, which appears today in Science. The paper’s lead author is Anna Romanov PhD ’25.

Priming B cells

The new study is part of a major ongoing global effort to develop active immunotherapies and vaccines that expand specific lineages of B cells. All humans have the necessary genes to produce the right B cells that can neutralize HIV, but they are exceptionally rare and require many mutations to become broadly neutralizing. If exposed to the right series of antigens, however, these cells can in principle evolve to eventually produce the requisite broadly neutralizing antibodies.

In the case of HIV, one such target antibody, called VRC01, was discovered by National Institutes of Health researchers in 2010 when they studied humans living with HIV who did not develop AIDS. This set off a major worldwide effort to develop an HIV vaccine that would induce this target antibody, but this remains an outstanding challenge.

Generating HIV-neutralizing antibodies is believed to require three stages of vaccination, each one initiated by a different antigen that helps guide B cell evolution toward the correct target, the native HIV envelope protein gp120.

In 2013, William Schief, a professor of immunology and microbiology at Scripps, reported an engineered antigen called eOD-GT6 that could be used for the first step in this process, known as priming. His team subsequently upgraded the antigen to eOD-GT8. Vaccination with eOD-GT8 arrayed on a protein VLP generated early antibody precursors to VRC01 both in mice and more recently in humans, a key first step toward an HIV vaccine.

However, the protein VLP also generated substantial “off-target” antibodies that bound the irrelevant, and potentially highly distracting, protein VLP itself. This could have unknown consequences on propagating target B cells of interest for HIV, as well as other challenging immunotherapy applications.

The Bathe and Irvine labs set out to test if they could use a particle made from DNA, instead of protein, to deliver the priming antigen. These nanoscale particles are made using DNA origami, a method that offers precise control over the structure of synthetic DNA and allows researchers to attach viral antigens at specific locations.

In 2024, Bathe and Daniel Lingwood, an associate professor at Harvard Medical School and a principal investigator at the Ragon Institute, showed this DNA VLP could be used to deliver a SARS-CoV-2 vaccine in mice to generate neutralizing antibodies. From that study, the researchers learned that the DNA scaffold does not induce antibodies to the VLP itself, unlike proteins. They wondered whether this might also enable a more focused antibody response.

Building on these results, Romanov, co-advised by Bathe and Irvine, set off to apply the DNA VLP to the Scripps HIV priming vaccine, based on eOD-GT8.

“Our earlier work with SARS-CoV-2 antigens on DNA-VLPs showed that DNA-VLPs can be used to focus the immune response on an antigen of interest. This property seemed especially useful for a case like HIV, where the B cells of interest are exceptionally rare. Thus, we hypothesized that reducing the competition among other irrelevant B cells (by delivering the vaccine on a silent DNA nanoparticle) may help these rare cells have a better chance to survive,”  Romanov says.

Initial studies in mice, however, showed the vaccine did not induce a sufficient early B cell response to the first, priming dose.

After redesigning the DNA VLPs, Romanov and colleagues found that a smaller diameter version with 60 instead of 30 copies of the engineered antigen dramatically out-performed the clinical protein VLP construct, both in overall number of antigen-specific B cells and the fraction of B cells that were on-target to the specific HIV domain of interest. This was a result of improved retention of the particles in B cell follicles in lymph nodes and better collaboration with helper T cells, which promote B cell survival.

Overall, these improvements enabled the particles to generate eightfold more on-target B cells than the vaccine consisting of eOD-GT8 carried by a protein scaffold. Another key finding, elucidated by the Lingwood lab, was that the DNA particles promoted VRC01 precursor B cells toward the VRC01 antibody more efficiently than the protein VLP.

“In the field of vaccine immunology, the question of whether B cell responses to a targeted protective epitope on a vaccine antigen might be hindered by responses to neighboring off-target epitopes on the same antigen has been under intense investigation,” says Schief, who is also vice president for protein design at Moderna. “There are some data from other studies suggesting that off-target responses might not have much impact, but this study shows quite convincingly that reducing off-target responses by using a DNA VLP can improve desired on-target responses.”

“While nanoparticle formulations have been great at boosting antibody responses to various antigens, there is always this nagging question of whether competition from B cells specific for the particle’s own structural antigens won’t get in the way of antibody responses to targeted epitopes,” says Gabriel Victora, a professor of immunology, virology, and microbiology at Rockefeller University, who was not involved in the study. “DNA-based particles that leverage B cells’ natural tolerance to nucleic acids are a clever idea to circumvent this problem, and the research team’s elegant experiments clearly show that this strategy can be used to make difficult epitopes easier to target.”

A “silent” scaffold

The fact that the DNA-VLP scaffold doesn’t induce scaffold-specific antibodies means that it could be used to carry second and potentially third antigens needed in the vaccine series, as the researchers are currently investigating. It also might offer significantly improved on-target antibodies for numerous antigens that are outcompeted and dominated by off-target, irrelevant protein VLP scaffolds in this or other applications.

“A breakthrough of this paper is the rigorous, mechanistic quantification of how DNA-VLPs can ‘focus’ antibody responses on target antigens of interest, which is a consequence of the silent nature of this DNA-based scaffold we’ve previously shown is stealth to the immune system,” Bathe says.

More broadly, this new type of VLP could be used to generate other kinds of protective antibody responses against pandemic threats such as flu, or potentially against chemical warfare agents, the researchers suggest. Alternatively, it might be used as an active immunotherapy to generate antibodies that target amyloid beta or tau protein to treat degenerative diseases such as Alzheimer’s, or to generate antibodies that target noxious chemicals such as opioids or nicotine to help people suffering from addiction.

The research was funded by the National Institutes of Health; the Ragon Institute of MGH, MIT, and Harvard; the Howard Hughes Medical Institute; the National Science Foundation; the Novo Nordisk Foundation; a Koch Institute Support (core) Grant from the National Cancer Institute; the National Institute of Environmental Health Sciences; the Gates Foundation Collaboration for AIDS Vaccine Discovery; the IAVI Neutralizing Antibody Center; the National Institute of Allergy and Infectious Diseases; and the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies.



de MIT News https://ift.tt/jeIP1AF