Thursday, September 18, 2025

Meet the 2025 tenured professors in the School of Humanities, Arts, and Social Sciences

In 2025, six faculty were granted tenure in the MIT School of Humanities, Arts, and Social Sciences.

Sara Brown is an associate professor in the Music and Theater Arts Section. She develops stage designs for theater, opera, and dance by approaching the scenographic space as a catalyst for collective imagination. Her work is rooted in curiosity and interdisciplinary collaboration, and spans virtual environments, immersive performance installations, and evocative stage landscapes. Her recent projects include “Carousel” at the Boston Lyric Opera; the virtual dance performance “The Other Shore” at the Massachusetts Museum of Contemporary Art and Jacob’s Pillow; and “The Lehman Trilogy” at the Huntington Theatre Company. Her upcoming co-directed work, “Circlusion,” takes place within a fully immersive inflatable space and reimagines the female body’s response to power and violence. Her designs have been seen at the BAM Next Wave Festival in New York, the Festival d’Automne in Paris, and the American Repertory Theater in Cambridge.

Naoki Egami is a professor in the Department of Political Science. He is also a faculty affiliate of the MIT Institute for Data, Systems, and Society. Egami specializes in political methodology and develops statistical methods for questions in political science and the social sciences. His current research programs focus on three areas: external validity and generalizability; machine learning and AI for the social sciences; and causal inference with network and spatial data. His work has appeared in various academic journals in political science, statistics, and computer science, such as American Political Science Review, American Journal of Political Science, Journal of the American Statistical Association, Journal of the Royal Statistical Society (Series B), NeurIPS, and Science Advances. Before joining MIT, Egami was an assistant professor at Columbia University. He received a PhD from Princeton University (2020) and a BA from the University of Tokyo (2015).

Rachel Fraser is an associate professor in the Department of Linguistics and Philosophy. Before coming to MIT, Fraser taught at the University of Oxford, where she also completed her graduate work in philosophy. She has interests in epistemology, language, feminism, aesthetics, and political philosophy. At present, her main project is a book manuscript on the epistemology of narrative.

Brian Hedden PhD ’12 is a professor in the Department of Linguistics and Philosophy, with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. His research focuses on how we ought to form beliefs and make decisions. He works in epistemology, decision theory, and ethics, including the ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics including collective action problems, legal standards of proof, algorithmic fairness, and political polarization. Prior to joining MIT, he was a faculty member at the Australian National University and the University of Sydney, and a junior research fellow at Oxford. He received his BA from Princeton University in 2006 and his PhD from MIT in 2012.

Viola Schmitt is an associate professor in the Department of Linguistics and Philosophy. She is a linguist with a special interest in semantics. Much of her work focuses on understanding general constraints on human language meaning; that is, the principles regulating which meanings can be expressed by human languages and how languages can package meaning. Variants of this question were also central to grants she received from the Austrian and German research foundations. She earned her PhD in linguistics from the University of Vienna and held postdoc and lecturer positions at the universities of Vienna, Graz, and Göttingen and at the University of California at Los Angeles. Her most recent position was as a junior professor at Humboldt University in Berlin.

Miguel Zenón is an associate professor in the Music and Theater Arts Section. The Puerto Rican alto saxophonist, composer, band leader, music producer, and educator is a Grammy Award winner, the recipient of a Guggenheim Fellowship, a MacArthur Fellowship, and a Doris Duke Artist Award. He also holds an honorary doctorate degree in the arts from Universidad del Sagrado Corazón. Zenón has released 18 albums as a band leader and collaborated with some of the great musicians and ensembles of his time. As a composer, Zenón has been commissioned by Chamber Music America, Logan Center for The Arts, The Hyde Park Jazz Festival, Miller Theater, The Hewlett Foundation, Peak Performances, and many of his peers. Zenón has given hundreds of lectures and master classes at institutions all over the world, and in 2011 he founded Caravana Cultural — a program that presents jazz concerts free of charge in rural areas of Puerto Rico.



from MIT News https://ift.tt/FCDgWT2

Inflammation jolts “sleeping” cancer cells awake, enabling them to multiply again

Cancer cells have one relentless goal: to grow and divide. While most stick together within the original tumor, some rogue cells break away to traverse to distant organs. There, they can lie dormant — undetectable and not dividing — for years, like landmines waiting to go off.

This migration of cancer cells, called metastasis, is especially common in breast cancer. For many patients, the disease can return months — or even decades — after initial treatment, this time in an entirely different organ.

Robert Weinberg, the Daniel K. Ludwig Professor for Cancer Research at MIT and a Whitehead Institute for Biomedical Research founding member, has spent decades unraveling the complex biology of metastasis and pursuing research that could improve survival rates among patients with metastatic breast cancer — or prevent metastasis altogether.

In his latest study, Weinberg, postdoc Jingwei Zhang, and colleagues ask a critical question: What causes these dormant cancer cells to erupt into a frenzy of growth and division? The group’s findings, published Sept. 1 in The Proceedings of the National Academy of Sciences (PNAS), point to a unique culprit.

This awakening of dormant cancer cells, they’ve discovered, isn’t a spontaneous process. Instead, the wake-up call comes from the inflamed tissue surrounding the cells. One trigger for this inflammation is bleomycin, a common chemotherapy drug that can scar and thicken lung tissue.

“The inflammation jolts the dormant cancer cells awake,” Weinberg says. “Once awakened, they start multiplying again, seeding new life-threatening tumors in the body.”

Decoding metastasis

There’s a lot that scientists still don’t know about metastasis, but this much is clear: Cancer cells must undergo a long and arduous journey to achieve it. The first step is to break away from their neighbors within the original tumor.

Normally, cells stick to one another using surface proteins that act as molecular “velcro,” but some cancer cells can acquire genetic changes that disrupt the production of these proteins and make them more mobile and invasive, allowing them to detach from the parent tumor. 

Once detached, they can penetrate blood vessels and lymphatic channels, which act as highways to distant organs.

While most cancer cells die at some point during this journey, a few persist. These cells exit the bloodstream and invade different tissues — lungs, liver, bone, and even the brain — to give birth to new, often more-aggressive tumors.

“Almost 90 percent of cancer-related deaths occur not from the original tumor, but when cancer cells spread to other parts of the body,” says Weinberg. “This is why it’s so important to understand how these ‘sleeping’ cancer cells can wake up and start growing again.”

Setting up shop in new tissue comes with changes in surroundings — the “tumor microenvironment” — to which the cancer cells may not be well-suited. These cells face constant threats, including detection and attack by the immune system. 

To survive, they often enter a protective state of dormancy that puts a pause on growth and division. This dormant state also makes them resistant to conventional cancer treatments, which often target rapidly dividing cells.

To investigate what makes this dormancy reversible months or years down the line, researchers in the Weinberg Lab injected human breast cancer cells into mice. These cancer cells were modified to produce a fluorescent protein, allowing the scientists to track their behavior in the body.

The group then focused on cancer cells that had lodged themselves in the lung tissue. By examining them for specific proteins — Ki67, ITGB4, and p63 — that act as markers of cell activity and state, the researchers were able to confirm that these cells were in a non-dividing, dormant state.

Previous work from the Weinberg Lab had shown that inflammation in organ tissue can provoke dormant breast cancer cells to start growing again. In this study, the team tested bleomycin — a chemotherapy drug known to cause lung inflammation — that can be given to patients after surgery to lower the risk of cancer recurrence.

The researchers found that lung inflammation from bleomycin was sufficient to trigger the growth of large cancer colonies in the lungs of treated mice — and to shift the character of these once-dormant cells toward a more invasive and mobile state.

Zeroing in on the tumor microenvironment, the team identified a type of immune cell, called M2 macrophages, as a driver of this process. These macrophages release molecules called epidermal growth factor receptor (EGFR) ligands, which bind to receptors on the surface of dormant cancer cells. This activates a cascade of signals that provokes dormant cancer cells to start multiplying rapidly.

But EGFR signaling is only the initial spark that ignites the fire. “We found that once dormant cancer cells are awakened, they retain what we call an ‘awakening memory,’” Zhang says. “They no longer require ongoing inflammatory signals from the microenvironment to stay active [growing and multiplying] — they remember the awakened state.”

While signals related to inflammation are necessary to awaken dormant cancer cells, exactly how much signaling is needed remains unclear. “This aspect of cancer biology is particularly challenging, because multiple signals contribute to the state change in these dormant cells,” Zhang says.

The team has already identified one key player in the awakening process, but understanding the full set of signals and how each contributes is far more complex — a question they are continuing to investigate in their new work. 

Studying these pivotal changes in the lives of cancer cells — such as their transition from dormancy to active growth — will deepen our scientific understanding of metastasis and, as researchers in the Weinberg Lab hope, lead to more effective treatments for patients with metastatic cancers.



from MIT News https://ift.tt/4o6DQ2R

Biogen groundbreaking stirs optimism in Kendall Square

Nearly 300 people gathered Tuesday to mark the ceremonial groundbreaking for Biogen’s new state-of-the-art facility in Kendall Square. The project is the first building to be constructed at MIT’s Kendall Common on the former Volpe federal site, and will serve as a consolidated headquarters for the pioneering biotechnology company, which has called Cambridge home for more than 40 years.

In marking the start of construction, Massachusetts Governor Maura Healey addressed the enthusiastic crowd, saying, “Massachusetts science saves lives — saves lives here, saves lives around the world. We celebrate that in Biogen today, we celebrate that in Kendall Common, and we celebrate that in this incredible ecosystem that extends all across our great state. Today, Biogen is not just building a new facility, they are building the future of medicine and innovation.”

Emceed by Kirk Taylor, president and CEO of the Massachusetts Life Sciences Center, the event featured a specially created Lego model of the new building and a historic timeline of Biogen’s origin story overlaid on Kendall Square’s transformation. The program’s theme — “Making breakthroughs happen in Kendall Square” — seemed to elicit a palpable sense of pride among the Biogen and MIT employees, business leaders, and public officials in attendance.

MIT President Sally Kornbluth reflected on the vibrancy of the local innovation ecosystem: “I sometimes say that Kendall Square’s motto might as well be ‘talent in proximity.’ By following that essential recipe, Biogen’s latest decision to intensify its presence here promises great things for the whole region.” Kornbluth described Biogen’s move as “a very important signal to the world right now.”

Biogen’s March 2025 announcement that it will centralize operations at 75 Broadway was lauded as a show of strength for the historic company and the life sciences sector. The 580,000-square-foot research and development headquarters, designed by Elkus Manfredi Architects, will optimize Biogen’s scientific discovery and clinical processes. The new facility is scheduled to open in 2028.

CEO Chris Viehbacher shared his thoughts on Biogen’s decision: “I am proud to stand here with so many individuals who have shaped our past and who are dedicated to our future in Kendall Square. … We decided to invest in the next chapter of Kendall Square because of what this community represents: talent, energy, ingenuity, and collaboration.” Biogen was founded in 1978 by Nobel laureates Phillip Sharp (an MIT Institute Professor and professor of biology emeritus) and Wally Gilbert, both of whom were not only present, but received an impromptu standing ovation, led by Viehbacher.

Kendall Common is being developed by MIT’s Investment Management Company (MITIMCo) and will ultimately include four commercial buildings, four residential buildings (including affordable housing), open space, retail, entertainment, and a community center. MITIMCo’s joint venture partner for the Biogen project is BioMed Realty, a Blackstone Real Estate portfolio company.

Senior Vice President Patrick Rowe, who oversees MITIMCo’s real estate group, says, “Biogen is such a critical anchor for the area. I’m excited for the impact that this project will have on Kendall Square, and for the way that the Kendall Common development can help to further advance our innovation ecosystem.”



from MIT News https://ift.tt/7wyAITM

Wednesday, September 17, 2025

Could a primordial black hole’s last burst explain a mysteriously energetic neutrino?

The last gasp of a primordial black hole may be the source of the highest-energy “ghost particle” detected to date, a new MIT study proposes.

In a paper appearing today in Physical Review Letters, MIT physicists put forth a strong theoretical case that a recently observed, highly energetic neutrino may have been the product of a primordial black hole exploding outside our solar system.

Neutrinos are sometimes referred to as ghost particles, for their invisible yet pervasive nature: They are the most abundant particle type in the universe, yet they leave barely a trace. Scientists recently identified signs of a neutrino with the highest energy ever recorded, but the source of such an unusually powerful particle has yet to be confirmed.

The MIT researchers propose that the mysterious neutrino may have come from the inevitable explosion of a primordial black hole. Primordial black holes (PBHs) are hypothetical black holes that are microscopic versions of the much more massive black holes that lie at the center of most galaxies. PBHs are theorized to have formed in the first moments following the Big Bang. Some scientists believe that primordial black holes could constitute most or all of the dark matter in the universe today.

Like their more massive counterparts, PBHs should leak energy and shrink over their lifetimes, in a process known as Hawking radiation, which was predicted by the physicist Stephen Hawking. The more a black hole radiates, the hotter it gets and the more high-energy particles it releases. This is a runaway process that should produce an incredibly violent explosion of the most energetic particles just before a black hole evaporates away.
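
For reference, this inverse relationship between a black hole's mass and its temperature is captured by the standard Hawking temperature formula (a textbook result, not a finding of this study):

\[
T_H = \frac{\hbar c^3}{8\pi G M k_B},
\]

so as the mass \(M\) shrinks, the temperature \(T_H\) — and with it the typical energy of the emitted particles — climbs steeply until the black hole evaporates entirely.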

The MIT physicists calculate that, if PBHs make up most of the dark matter in the universe, then a small subpopulation of them would be undergoing their final explosions today throughout the Milky Way galaxy. There should also be a non-negligible chance that such an explosion could have occurred relatively close to our solar system. The explosion would have released a burst of high-energy particles, including neutrinos, one of which could plausibly have hit a detector on Earth.

If such a scenario had indeed occurred, the recent detection of the highest-energy neutrino would represent the first observation of Hawking radiation, which has long been assumed, but has never been directly observed from any black hole. What’s more, the event might indicate that primordial black holes exist and that they make up most of dark matter — a mysterious substance that comprises 85 percent of the total matter in the universe, the nature of which remains unknown.

“It turns out there’s this scenario where everything seems to line up, and not only can we show that most of the dark matter [in this scenario] is made of primordial black holes, but we can also produce these high-energy neutrinos from a fluke nearby PBH explosion,” says study lead author Alexandra Klipfel, a graduate student in MIT’s Department of Physics. “It’s something we can now try to look for and confirm with various experiments.”

The study’s other co-author is David Kaiser, professor of physics and the Germeshausen Professor of the History of Science at MIT.

High-energy tension

In February, scientists at the Cubic Kilometer Neutrino Telescope, or KM3NeT, reported the detection of the highest-energy neutrino recorded to date. KM3NeT is a large-scale underwater neutrino detector located at the bottom of the Mediterranean Sea, where the environment is meant to mute the effects of any particles other than neutrinos.

The scientists operating the detector picked up signatures of a passing neutrino with an energy of over 100 peta-electron volts. One peta-electron volt is equivalent to 1 quadrillion electron volts.

“This is an incredibly high energy, far beyond anything humans are capable of accelerating particles up to,” Klipfel says. “There’s not much consensus on the origin of such high-energy particles.”

Similarly high-energy neutrinos, though not as high as what KM3NeT observed, have been detected by the IceCube Observatory — a neutrino detector embedded deep in the ice at the South Pole. IceCube has detected about half a dozen such neutrinos, whose unusually high energies have also eluded explanation. Whatever their source, the IceCube observations enable scientists to work out a plausible rate at which neutrinos of those energies typically hit Earth. If this rate were correct, however, it would be extremely unlikely to have seen the ultra-high-energy neutrino that KM3NeT recently detected. The two detectors’ discoveries, then, seemed to be what scientists call “in tension.”

Kaiser and Klipfel, who had been working on a separate project involving primordial black holes, wondered: Could a PBH have produced both the KM3NeT neutrino and the handful of IceCube neutrinos, under conditions in which PBHs comprise most of the dark matter in the galaxy? If they could show a chance existed, it would raise an even more exciting possibility — that both observatories observed not only high-energy neutrinos but also the remnants of Hawking radiation.

“Our best chance”

The first step the scientists took in their theoretical analysis was to calculate how many particles would be emitted by an exploding black hole. All black holes should slowly radiate over time. The larger a black hole, the colder it is, and the lower-energy particles it emits as it slowly evaporates. Thus, any particles that are emitted as Hawking radiation from heavy stellar-mass black holes would be near impossible to detect. By the same token, however, much smaller primordial black holes would be very hot and emit high-energy particles in a process that accelerates the closer the black hole gets to disappearing entirely.

“We don’t have any hope of detecting Hawking radiation from astrophysical black holes,” Klipfel says. “So if we ever want to see it, the smallest primordial black holes are our best chance.”

The researchers calculated the number and energies of particles that a black hole should emit, given its temperature and shrinking mass. They estimate that in its final nanosecond, once a black hole has shrunk to smaller than an atom, it should emit a final burst of particles, including about 10^20 neutrinos (roughly a hundred quintillion particles) with energies of about 100 peta-electron volts (around the energy that KM3NeT observed).

They used this result to calculate the number of PBH explosions that would have to occur in a galaxy in order to explain the reported IceCube results. They found that, in our region of the Milky Way galaxy, about 1,000 primordial black holes should be exploding per cubic parsec per year. (A parsec is a unit of distance equal to about 3.3 light years, or roughly 31 trillion kilometers.)

They then calculated the distance at which one such explosion in the Milky Way could have occurred, such that just a handful of the high-energy neutrinos could have reached Earth and produced the recent KM3NeT event. They find that a PBH would have to explode relatively close to our solar system — at a distance roughly 2,000 times that between the Earth and the sun.
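
To put these figures in more familiar units, here is a quick back-of-the-envelope conversion script (an illustrative sketch using only the numbers quoted above, not calculations from the paper):

```python
# Quick unit conversions for the figures quoted in the article (illustrative only).

EV_TO_JOULE = 1.602176634e-19   # joules per electron volt (exact definition)
AU_KM = 1.495978707e8           # kilometers per astronomical unit (Earth-sun distance)
LIGHT_YEAR_KM = 9.4607e12       # kilometers per light year

# A 100 peta-electron-volt neutrino, expressed in everyday energy units.
neutrino_energy_joules = 100e15 * EV_TO_JOULE
print(f"100 PeV ≈ {neutrino_energy_joules:.2e} J")   # ~0.016 J carried by a single particle

# A PBH explosion roughly 2,000 times the Earth-sun distance away.
distance_km = 2000 * AU_KM
distance_ly = distance_km / LIGHT_YEAR_KM
print(f"2,000 AU ≈ {distance_km:.2e} km ≈ {distance_ly:.3f} light years")
# ~3e11 km, or ~0.03 light years: far beyond the planets, yet more than a
# hundred times closer than the nearest star.
```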

The particles emitted from such a nearby explosion would radiate in all directions. However, the team found there is a small, 8 percent chance that an explosion happens close enough to the solar system, once every 14 years, for enough ultra-high-energy neutrinos to hit Earth.

“An 8 percent chance is not terribly high, but it’s well within the range for which we should take such chances seriously — all the more so because so far, no other explanation has been found that can account for both the unexplained very-high-energy neutrinos and the even more surprising ultra-high-energy neutrino event,” Kaiser says.

The team’s scenario seems to hold up, at least in theory. To confirm their idea will require many more detections of particles, including neutrinos at “insanely high energies.” Then, scientists can build up better statistics regarding such rare events.

“In that case, we could use all of our combined experience and instrumentation, to try to measure still-hypothetical Hawking radiation,” Kaiser says. “That would provide the first-of-its-kind evidence for one of the pillars of our understanding of black holes — and could account for these otherwise anomalous high-energy neutrino events as well. That’s a very exciting prospect!”

In tandem, other efforts to detect nearby PBHs could further bolster the hypothesis that these unusual objects make up most or all of the dark matter.

This work was supported, in part, by the National Science Foundation, MIT’s Center for Theoretical Physics – A Leinweber Institute, and the U.S. Department of Energy.



from MIT News https://ift.tt/k8x6FYW

New 3D bioprinting technique may improve production of engineered tissue

The field of tissue engineering aims to replicate the structure and function of real biological tissues. This engineered tissue has potential applications in disease modeling, drug discovery, and implantable grafts.

3D bioprinting, which uses living cells, biocompatible materials, and growth factors to build three-dimensional tissue and organ structures, has emerged as a key tool in the field. To date, one of the most widely used approaches to bioprinting relies on additive manufacturing techniques and digital models: 2D layers of bio-inks, composed of cells in a soft gel, are deposited into a support bath, layer by layer, to build a 3D structure. While these techniques enable the fabrication of complex architectures with features that would be difficult to build manually, current approaches have limitations.

“A major drawback of current 3D bioprinting approaches is that they do not integrate process control methods that limit defects in printed tissues. Incorporating process control could improve inter-tissue reproducibility and enhance resource efficiency, for example limiting material waste,” says Ritu Raman, the Eugene Bell Career Development Chair of Tissue Engineering and an assistant professor of mechanical engineering.

She adds, “Given the diverse array of available 3D bioprinting tools, there is a significant need to develop process optimization techniques that are modular, efficient, and accessible.”

The need motivated Raman to seek the expertise of Professor Bianca Colosimo of the Polytechnic University of Milan, also known as Polimi. Colosimo recently completed a sabbatical at MIT, which was hosted by John Hart, Class of 1922 Professor, co-director of MIT’s Initiative for New Manufacturing, director of the Center for Advanced Production Technologies, and head of the Department of Mechanical Engineering.

“Artificial Intelligence and data mining are already reshaping our daily lives, and their impact will be even more profound in the emerging field of 3D bioprinting, and in manufacturing at large,” says Colosimo. During her MIT sabbatical, she collaborated with Raman and her team to co-develop a solution that represents a first step toward intelligent bioprinting.

“This solution is now available in both our labs at Polimi and MIT, serving as a twin platform to exchange data and results across different environments and paving the way for many new joint projects in the years to come,” Colosimo says.

A new paper by Raman, Colosimo, and lead authors Giovanni Zanderigo, a Rocca Fellow at Polimi, and Ferdows Afghah of MIT published this week in the journal Device presents a novel technique that addresses this challenge. The team built and validated a modular, low-cost, and printer-agnostic monitoring technique that integrates a compact tool for layer-by-layer imaging. In their method, a digital microscope captures high-resolution images of tissues during printing and rapidly compares them to the intended design with an AI-based image analysis pipeline.
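
As a rough illustration of the kind of layer-by-layer comparison such a pipeline performs, the sketch below binarizes a captured layer image and measures over- and under-deposition against the design mask. This is a simplified, hypothetical example; the team's actual AI-based analysis is more sophisticated.

```python
import numpy as np

def layer_deposition_error(printed, design, threshold=0.5):
    """Compare a printed-layer image against its intended design mask.

    printed: 2D array of grayscale intensities in [0, 1] from the layer camera
    design:  2D boolean array, True where bio-ink should have been deposited
    Returns (over, under): fractions of the design area with extra or missing ink.
    """
    deposited = printed > threshold                     # binarize the camera frame
    over = np.logical_and(deposited, ~design).sum()     # ink where none was planned
    under = np.logical_and(~deposited, design).sum()    # planned ink that is missing
    area = max(int(design.sum()), 1)
    return over / area, under / area

# Hypothetical usage: flag a layer if either error exceeds 10 percent.
frame = np.random.rand(256, 256)                # stand-in for a microscope image
mask = np.zeros((256, 256), dtype=bool)
mask[64:192, 64:192] = True                     # intended deposition region
over_frac, under_frac = layer_deposition_error(frame, mask)
if over_frac > 0.10 or under_frac > 0.10:
    print(f"Layer defect: over={over_frac:.2f}, under={under_frac:.2f} -> adjust print parameters")
```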

“This method enabled us to quickly identify print defects, such as depositing too much or too little bio-ink, thus helping us identify optimal print parameters for a variety of different materials,” says Raman. “The approach is a low-cost (less than $500), scalable, and adaptable solution that can be readily implemented on any standard 3D bioprinter. Here at MIT, the monitoring platform has already been integrated into the 3D bioprinting facilities in The SHED. Beyond MIT, our research offers a practical path toward greater reproducibility, improved sustainability, and automation in the field of tissue engineering. This research could have a positive impact on human health by improving the quality of the tissues we fabricate to study and treat debilitating injuries and disease.”

The authors indicate that the new method is more than a monitoring tool. It also serves as a foundation for intelligent process control in embedded bioprinting. Because it enables real-time inspection, adaptive correction, and automated parameter tuning, the researchers anticipate that the approach can improve reproducibility, reduce material waste, and accelerate process optimization for real-world applications in tissue engineering.



from MIT News https://ift.tt/tVHI5ZG

Working to make fusion a viable energy source

George Tynan followed a nonlinear path to fusion.

After earning his undergraduate degree in aerospace engineering, Tynan worked in industry, where his work spurred his interest in rocket propulsion technology. Because most methods for propulsion involve the manipulation of hot ionized matter, or plasmas, Tynan focused his attention on plasma physics.

It was then that he realized that plasmas could also drive nuclear fusion. “As a potential energy source, it could really be transformative, and the idea that I could work on something that could have that kind of impact on the future was really attractive to me,” he says.

That same ambition, to realize the promise of fusion by researching both plasma physics and fusion engineering, drives Tynan today. It’s work he will be pursuing as the Norman C. Rasmussen Adjunct Professor in the Department of Nuclear Science and Engineering (NSE) at MIT.

An early interest in fluid flow

Tynan’s enthusiasm for science and engineering traces back to his childhood. His electrical engineer father found employment in the U.S. space program and moved the family to Cape Canaveral in Florida.

“This was in the ’60s, when we were launching Saturn V to the moon, and I got to watch all the launches from the beach,” Tynan remembers. That experience was formative, and Tynan became fascinated with how fluids flow.

“I would stick my hand out the window and pretend it was an airplane wing and tilt it with oncoming wind flow and see how the force would change on my hand,” Tynan laughs. The interest eventually led to an undergraduate degree in aerospace engineering at California State Polytechnic University in Pomona.

The switch to a new career came after his work in the private sector, when Tynan discovered an interest in the use of plasmas for propulsion systems. He moved to the University of California at Los Angeles for graduate school, and it was there that the realization that plasmas could also anchor fusion moved him into the field.

This was in the ’80s, when climate change was not as much in the public consciousness as it is today. Even so, “I knew there’s not an infinite amount of oil and gas around, and that at some point we would have to have widespread adoption of nuclear-based sources,” Tynan remembers. He was also attracted by the sustained effort it would take to make fusion a reality.

Doctoral work

To create energy from fusion, it’s important to get an accurate measurement of the “energy confinement time,” which is a measure of how long it takes for the hot fuel to cool down when all heat sources are turned off. When Tynan started graduate school, this measure was still an empirical guess. He decided to focus his research on the physics underlying the observed confinement time.
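
In textbook terms (a standard definition, not taken from the article), the energy confinement time compares the thermal energy stored in the plasma to the rate at which that energy leaks away once heating stops:

\[
\tau_E = \frac{W}{P_{\mathrm{loss}}},
\]

where \(W\) is the plasma's stored thermal energy and \(P_{\mathrm{loss}}\) is the power lost through transport and radiation; the longer \(\tau_E\), the easier it is to keep the fuel hot enough for fusion.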

It was during this doctoral research that Tynan was able to study the fundamental differences in the behavior of turbulence in plasma as compared to conventional fluids. Typically, when an ordinary fluid is stirred with increasing vigor, the fluid’s motion eventually becomes chaotic or turbulent. However, plasmas can act in a surprising way: confined plasmas, when heated sufficiently strongly, spontaneously quench the turbulent transport at their boundary.

An experiment in Germany had unexpectedly discovered this plasma behavior. While subsequent work on other experimental devices confirmed this surprising finding, all earlier experiments lacked the ability to measure the turbulence in detail.

Brian LaBombard, now a senior research scientist at MIT’s Plasma Science and Fusion Center (PSFC), was a postdoc at UCLA at the time. Under LaBombard’s direction, Tynan developed a set of Langmuir probes, which are reasonably simple diagnostics for plasma turbulence studies, to further investigate this unusual phenomenon. It formed the basis for his doctoral dissertation. “I happened to be at the right place at the right time so I could study this turbulence quenching phenomenon in much more detail than anyone else could, up until that time,” Tynan says.

As a PhD student and then postdoc, Tynan studied the phenomenon in depth, shuttling between research facilities in Germany, Princeton University’s Plasma Physics Laboratory, and UCLA.

Fusion at UCSD

After completing his doctorate and postdoctoral work, Tynan spent a few years at a startup before learning that the University of California at San Diego was launching a new fusion research group at its engineering school. When the university reached out, Tynan joined the faculty and built a research program focused on plasma turbulence and plasma-material interactions in fusion systems. Eventually, he became associate dean of engineering and, later, chair of the Department of Mechanical and Aerospace Engineering, serving in these roles for nearly a decade.

Tynan visited MIT on sabbatical in 2023, when his conversations with NSE faculty members Dennis Whyte, Zach Hartwig, and Michael Short excited him about the challenges the private sector faces in making fusion a reality. He saw opportunities to solve important problems at MIT that complemented his work at UC San Diego.

Tynan is excited to tackle what he calls “the big physics and engineering challenges of fusion plasmas” at NSE: how to remove the heat and exhaust generated by burning plasma so it doesn’t damage the walls of the fusion device and the plasma does not choke on the helium ash. He also hopes to explore robust engineering solutions for practical fusion energy, with a particular focus on developing better materials for use in fusion devices that will make them longer-lasting while minimizing the production of radioactive waste.

“Ten or 15 years ago, I was somewhat pessimistic that I would ever see commercial exploitation of fusion in my lifetime,” Tynan says. But that outlook has changed, as he has seen collaborations between MIT and Commonwealth Fusion Systems (CFS) and other private-sector firms that seek to accelerate the timeline to the deployment of fusion in the real world.

In 2021, for example, MIT’s PSFC and CFS took a significant step toward commercial carbon-free power generation. They designed and built a high-temperature superconducting magnet, the strongest fusion magnet in the world.

The milestone was especially exciting because the promise of realizing the dream of fusion energy now felt closer. And being at MIT “seemed like a really quick way to get deeply connected with what’s going on in the efforts to develop fusion energy,” Tynan says.

In addition, “while on sabbatical at MIT, I saw how quickly research staff and students can capitalize on a suggestion of a new idea, and that intrigued me,” he adds.

Tynan brings his special blend of expertise to the table. In addition to extensive experience in plasma physics, he has spent a lot of time on hardcore engineering issues such as materials. “The key is to integrate the whole thing into a workable and viable system,” Tynan says.



from MIT News https://ift.tt/bXClGoz

Q&A: David Whelihan on the challenges of operating in the Arctic

To most, the Arctic can feel like an abstract place, difficult to imagine beyond images of ice and polar bears. But researcher David Whelihan of MIT Lincoln Laboratory's Advanced Undersea Systems and Technology Group is no stranger to the Arctic. Through Operation Ice Camp, a U.S. Navy–sponsored biennial mission to assess operational readiness in the Arctic region, he has traveled to this vast and remote wilderness twice over the past few years to test low-cost sensor nodes developed by the group to monitor loss in Arctic sea ice extent and thickness. The research team envisions establishing a network of such sensors across the Arctic that will persistently detect ice-fracturing events and correlate these events with environmental conditions to provide insights into why the sea ice is breaking up. Whelihan shared his perspectives on why the Arctic matters and what operating there is like.

Q: Why do we need to be able to operate in the Arctic?

A: Spanning approximately 5.5 million square miles, the Arctic is huge, and one of its salient features is that the ice covering much of the Arctic Ocean is decreasing in volume with every passing year. Melting ice opens up previously impassable areas, resulting in increasing interest from potential adversaries and allies alike for activities such as military operations, commercial shipping, and natural resource extraction. Through Alaska, the United States has approximately 1,060 miles of Arctic coastline that is becoming much more accessible because of reduced ice cover. So, U.S. operation in the Arctic is a matter of national security.  

Q: What are the technological limitations to Arctic operations?

A: The Arctic is an incredibly harsh environment. The cold kills battery life, so collecting sensor data at high rates over long periods of time is very difficult. The ice is dynamic and can easily swallow or crush sensors. In addition, most deployments involve "boots-on-the-ice," which is expensive and at times dangerous. One of the technological limitations is how to deploy sensors while keeping humans alive.

Q: How does the group's sensor node R&D work seek to support Arctic operations?

A: A lot of the work we put into our sensors pertains to deployability. Our ultimate goal is to free researchers from going onto the ice to deploy sensors. This goal will become increasingly necessary as the shrinking ice pack becomes more dynamic, unstable, and unpredictable. At the last Operation Ice Camp (OIC) in March 2024, we built and rapidly tested deployable and recoverable sensors, as well as novel concepts such as using UAVs (uncrewed aerial vehicles), or drones, as "data mules" that can fly out to and interrogate the sensors to see what they captured. We also built a prototype wearable system that cues automatic download of sensor data over Wi-Fi so that operators don't have to take off their gloves.

Q: The Arctic Circle is the northernmost region on Earth. How do you reach this remote place?

A: We usually fly on commercial airlines from Boston to Seattle to Anchorage to Prudhoe Bay on the North Slope of Alaska. From there, the Navy flies us on small prop planes, like Single and Twin Otters, about 200 miles north and lands us on an ice runway built by the Navy's Arctic Submarine Lab (ASL). The runway is part of a temporary camp that ASL establishes on floating sea ice for their operational readiness exercises conducted during OIC.

Q: Think back to the first time you stepped foot in the Arctic. Can you paint a picture of what you experienced?

A: My first experience was at Prudhoe Bay, coming out of the airport, which is a corrugated metal building with a single gate. Before you open the door to the outside, a sign warns you to be on the lookout for polar bears. Walking out into the sheer desolation and blinding whiteness of everything made me realize I was experiencing something very new.

When I flew out onto the ice and stepped out of the plane, I was amazed that the area could somehow be even more desolate. Bright white snowy ice goes in every direction, broken up by pressure ridges that form when ice sheets collide. The sun is low, and seems to move horizontally only. It is very hard to tell the time. The air temperature is really variable. On our first trip in 2022, it really wasn't (relatively) that cold — only around minus 5 or 10 degrees during the day. On our second trip in 2024, we were hit by minus 30 almost every day, and with winds of 20 to 25 miles per hour. The last night we were on the ice that year, it warmed up a bit to minus 10 to 20, but the winds kicked up and started blowing snow onto the heaters attached to our tents. Those heaters started failing one by one as the blowing snow covered them, blocking airflow. After our heater failed, I asked myself, while warm in my bed, whether I wanted to go outside to the command tent for help or try to make it until dawn in my thick sleeping bag. I picked the first option, but mostly because the heater control was beeping loudly right next to my bunk, so I couldn’t sleep anyway. Shout-out to the ASL staff who ran around fixing heaters all night!

Q: How do you survive in a place generally inhospitable to humans?

A: In partnership with the native population, ASL brings a lot of gear — from insulated, heated tents and communications equipment to large snowblowers to keep the runway clear. A few months before OIC, participants attend training on the conditions they will be exposed to, how to protect themselves with appropriate clothing, and how to use survival gear in case of an emergency.

Q: Do you have plans to return to the Arctic?  

A: We are hoping to go back this winter as part of OIC 2026! We plan to test a through-ice communication device. Communicating through 4 to 12 feet of ice is pretty tricky but could allow us to connect underwater drones and stationary sensors under the ice to the rest of the world. To support the through-ice communication system, we will repurpose our sensor-node boxes deployed during OIC 2024. If this setup works, those same boxes could be used as control centers for all sorts of undersea systems and relay information about the under-ice world back home via satellite.

Q: What lessons learned will you bring to your upcoming trip, and any potential future trips?

A: After the first trip, I had a visceral understanding of how hard operating there is. Prototyping of systems becomes a different game. Prototypes are often fragile, but fragility doesn't go over too well on the ice. So, there is a robustification step, which can take some time.

On this last trip, I realized that you have to really be careful with your energy expenditure and pace yourself. While the average adult may require about 2,000 calories a day, an Arctic explorer may burn several times more than that exerting themselves (we do a lot of walking around camp) and keeping warm. Usually, we live on the same freeze-dried food that you would take on camping trips. Each package only has so many calories, so you find yourself eating multiple of those and supplementing with lots of snacks such as Clif Bars or, my favorite, Babybel cheeses (which I bring myself). You also have to be really careful of dehydration. Your body's reaction to extreme cold is to reduce blood flow to your skin, which generally results in less liquid in your body. We have to drink constantly — water, cocoa, and coffee — to avoid dehydration.

We only have access to the ice every two years with the Navy, so we try to make the most of our time. In the several-day lead-up to our field expedition, my research partner Ben and I were really pushing ourselves to ready our sensor nodes for deployment and probably not eating and drinking as regularly as we should. When we ventured to our sensor deployment site about 5 kilometers outside of camp, I had to learn to slow down so I didn't sweat under my gear, as sweating in the extremely cold conditions can quickly lead to hypothermia. I also learned to pay more attention to exposed places on my face, as I got a bit of frostnip around my goggles.

Operating in the Arctic is a fine balance: you can't spend too much time out there, but you also can't rush.



from MIT News https://ift.tt/KCsUyk2

A more precise way to edit the genome

A genome-editing technique known as prime editing holds potential for treating many diseases by transforming faulty genes into functional ones. However, the process carries a small chance of inserting errors that could be harmful.

MIT researchers have now found a way to dramatically lower the error rate of prime editing, using modified versions of the proteins involved in the process. This advance could make it easier to develop gene therapy treatments for a variety of diseases, the researchers say.

“This paper outlines a new approach to doing gene editing that doesn’t complicate the delivery system and doesn’t add additional steps, but results in a much more precise edit with fewer unwanted mutations,” says Phillip Sharp, an MIT Institute Professor Emeritus, a member of MIT’s Koch Institute for Integrative Cancer Research, and one of the senior authors of the new study.

With their new strategy, the MIT team was able to improve the error rate of prime editors from about one error in seven edits to one in 101 for the most-used editing mode, or from one error in 122 edits to one in 543 for a high-precision mode.

“For any drug, what you want is something that is effective, but with as few side effects as possible,” says Robert Langer, the David H. Koch Institute Professor at MIT, a member of the Koch Institute, and one of the senior authors of the new study. “For any disease where you might do genome editing, I would think this would ultimately be a safer, better way of doing it.”

Koch Institute research scientist Vikash Chauhan is the lead author of the paper, which appears today in Nature.

The potential for error

The earliest forms of gene therapy, first tested in the 1990s, involved delivering new genes carried by viruses. Subsequently, gene-editing techniques that use enzymes such as zinc finger nucleases to correct genes were developed. These nucleases are difficult to engineer, however, so adapting them to target different DNA sequences is a very laborious process.

Many years later, the CRISPR genome-editing system was discovered in bacteria, offering scientists a potentially much easier way to edit the genome. The CRISPR system consists of an enzyme called Cas9 that can cut double-stranded DNA at a particular spot, along with a guide RNA that tells Cas9 where to cut. Researchers have adapted this approach to cut out faulty gene sequences or to insert new ones, following an RNA template.

In 2019, researchers at the Broad Institute of MIT and Harvard reported the development of prime editing: a new system, based on CRISPR, that is more precise and has fewer off-target effects. A recent study reported that prime editors were successfully used to treat a patient with chronic granulomatous disease (CGD), a rare genetic disease that affects white blood cells.

“In principle, this technology could eventually be used to address many hundreds of genetic diseases by correcting small mutations directly in cells and tissues,” Chauhan says.

One of the advantages of prime editing is that it doesn’t require making a double-stranded cut in the target DNA. Instead, it uses a modified version of Cas9 that cuts just one of the complementary strands, opening up a flap where a new sequence can be inserted. A guide RNA delivered along with the prime editor serves as the template for the new sequence.

Once the new sequence has been copied, however, it must compete with the old DNA strand to be incorporated into the genome. If the old strand outcompetes the new one, the extra flap of new DNA hanging off may accidentally get incorporated somewhere else, giving rise to errors.

Many of these errors might be relatively harmless, but it’s possible that some could eventually lead to tumor development or other complications. With the most recent version of prime editors, this error rate ranges from one per seven edits to one per 121 edits for different editing modes.

“The technologies we have now are really a lot better than earlier gene therapy tools, but there’s always a chance for these unintended consequences,” Chauhan says.

Precise editing

To reduce those error rates, the MIT team decided to take advantage of a phenomenon they had observed in a 2023 study. In that paper, they found that while Cas9 usually cuts in the same DNA location every time, some mutated versions of the protein show a relaxation of those constraints. Instead of always cutting the same location, those Cas9 proteins would sometimes make their cut one or two bases further along the DNA sequence.

This relaxation, the researchers discovered, makes the old DNA strands less stable, so they get degraded, making it easier for the new strands to be incorporated without introducing any errors.

In the new study, the researchers were able to identify Cas9 mutations that dropped the error rate to 1/20th its original value. Then, by combining pairs of those mutations, they created a Cas9 editor that lowered the error rate even further, to 1/36th the original amount.

To make the editors even more accurate, the researchers incorporated their new Cas9 proteins into a prime editing system that has an RNA binding protein that stabilizes the ends of the RNA template more efficiently. This final editor, which the researchers call vPE, had an error rate just 1/60th of the original, ranging from one in 101 edits to one in 543 edits for different editing modes. These tests were performed in mouse and human cells.

The MIT team is now working on further improving the efficiency of prime editors, through further modifications of Cas9 and the RNA template. They are also working on ways to deliver the editors to specific tissues of the body, which is a longstanding challenge in gene therapy.

They also hope that other labs will begin using the new prime editing approach in their research studies. Prime editors are commonly used to explore many different questions, including how tissues develop, how populations of cancer cells evolve, and how cells respond to drug treatment.

“Genome editors are used extensively in research labs,” Chauhan says. “So the therapeutic aspect is exciting, but we are really excited to see how people start to integrate our editors into their research workflows.”

The research was funded by the Life Sciences Research Foundation, the National Institute of Biomedical Imaging and Bioengineering, the National Cancer Institute, and the Koch Institute Support (core) Grant from the National Cancer Institute.



from MIT News https://ift.tt/VLZAypb

Tuesday, September 16, 2025

A new community for computational science and engineering

For the past decade, MIT has offered doctoral-level study in computational science and engineering (CSE) exclusively through an interdisciplinary program designed for students applying computation within a specific science or engineering field.

As interest grew among students focused primarily on advancing CSE methodology itself, it became clear that a dedicated academic home for this group — students and faculty deeply invested in the foundations of computational science and engineering — was needed.

Now, with a stand-alone CSE PhD program, they have not only a space for fostering discovery in the cross-cutting methodological dimensions of computational science and engineering, but also a tight-knit community.

“This program recognizes the existence of computational science and engineering as a discipline in and of itself, so you don’t have to be doing this work through the lens of mechanical or chemical engineering, but instead in its own right,” says Nicolas Hadjiconstantinou, co-director of the Center for Computational Science and Engineering (CCSE).

Offered by CCSE and launched in 2023, the stand-alone program blends both coursework and a thesis, much like other MIT PhD programs, yet its methodological focus sets it apart from other Institute offerings.

“What’s unique about this program is that it’s not hosted by one specific department. The stand-alone program is, at its core, about computational science and cross-cutting methodology. We connect this research with people in a lot of different application areas. We have oceanographers, people doing materials science, students with a focus on aeronautics and astronautics, and more,” says outgoing co-director Youssef Marzouk, now the associate dean of the MIT Schwarzman College of Computing.

Expanding horizons

Hadjiconstantinou, the Quentin Berg Professor of Mechanical Engineering, and Marzouk, the Breene M. Kerr Professor of Aeronautics and Astronautics, have led the center’s efforts since 2018, and developed the program and curriculum together. The duo was intentional about crafting a program that fosters students’ individual research while also exposing them to all the field has to offer.

To expand students’ horizons and continue to build a collaborative community, the PhD in CSE program features two popular seminar series: weekly community seminars that focus primarily on internal speakers (current graduate students, postdocs, research scientists, and faculty), and monthly distinguished seminars in CSE, which are Institute-wide and bring external speakers from various institutions and industry roles.

“Something surprising about the program has been the seminars. I thought it would be the same people I see in my classes and labs, but it’s much broader than that,” says Emily Williams, a fourth-year PhD student and a Department of Energy Computational Science graduate fellow. “One of the most interesting seminars was around simulating fluid flow for biomedical applications. My background is in fluids, so I understand that part, but seeing it applied in a totally different domain than what I work in was eye-opening,” says Williams.

That seminar, “Astrophysical Fluid Dynamics at Exascale,” presented by James Stone, a professor in the School of Natural Sciences at the Institute for Advanced Study and at Princeton University, represented one of many opportunities for CSE students to engage with practitioners in small groups, gaining academic insight as well as a wider perspective on future career paths.

Designing for impact

The interdisciplinary PhD program served as a departure point from which Hadjiconstantinou and Marzouk created a new offering that was uniquely its own.

For Marzouk, that meant designing the stand-alone program so it can continually grow and pivot to stay relevant as technology accelerates: “In my view, the vitality of this program is that science and engineering applications nowadays rest on computation in a really foundational way, whether it’s engineering design or scientific discovery. So it’s essential to perform research on the building blocks of this kind of computation. This research also has to be shaped by the way that we apply it so that scientists or engineers will actually use it,” Marzouk says.

The curriculum is structured around six core focus areas, or “ways of thinking,” that are fundamental to CSE:

  • Discretization and numerical methods for partial differential equations;
  • Optimization methods;
  • Inference, statistical computing, and data-driven modeling;
  • High performance computing, software engineering, and algorithms;
  • Mathematical foundations (e.g., functional analysis, probability); and
  • Modeling (i.e., a subject that treats computational modeling in any science or engineering discipline).

Students select and build their own thesis committee that consists of faculty from across MIT, not just those associated with CCSE. The combination of a curriculum that’s “modern and applicable to what employers are looking for in industry and academics,” according to Williams, and the ability to build your own group of engaged advisors, allows for a level of specialization that’s hard to find elsewhere.

“Academically, I feel like this program is designed in such a flexible and interdisciplinary way. You have a lot of control in terms of which direction you want to go in,” says Rosen Yu, a PhD student. Yu’s research is focused on engineering design optimization, an interest she discovered during her first year of research at MIT with Professor Faez Ahmed. The CSE PhD was about to launch, and it became clear that her research interests skewed more toward computation than the existing mechanical engineering degree; it was a natural fit.

“At other schools, you often see just a pure computer science program or an engineering department with hardly any intersection. But this CSE program, I like to say it’s like a glue between these two communities,” says Yu.

That “glue” is strengthening, with more students matriculating each year, as well as Institute faculty and staff becoming affiliated with CSE. While the thesis topics of students range from Williams’ stochastic methods for model reduction of multiscale chaotic systems to scalable and robust GPU-based optimization for energy systems, the goal of the program remains the same: develop students and research that will make a difference.

“That’s why MIT is an ‘Institute of Technology’ and not a ‘university.’ There’s always this question, no matter what you’re studying: what is it good for? Our students will go on to work in systems biology, simulators of climate models, electrification, hypersonic vehicles, and more, but the whole point is that their research is helping with something,” says Hadjiconstantinou.



from MIT News https://ift.tt/HG0OmFw

How to build AI scaling laws for efficient LLM training and budget maximization

When researchers are building large language models (LLMs), they aim to maximize performance under a particular computational and financial budget. Since training a model can amount to millions of dollars, developers need to be judicious with cost-impacting decisions about, for instance, the model architecture, optimizers, and training datasets before committing to a model. To anticipate the quality and accuracy of a large model’s predictions, practitioners often turn to scaling laws: using smaller, cheaper models to try to approximate the performance of a much larger target model. The challenge, however, is that there are thousands of ways to create a scaling law.

New work from MIT and MIT-IBM Watson AI Lab researchers addresses this by amassing and releasing a collection of hundreds of models and metrics concerning training and performance to approximate more than a thousand scaling laws. From this, the team developed a meta-analysis and guide for how to select small models and estimate scaling laws for different LLM model families, so that the budget is optimally applied toward generating reliable performance predictions.

“The notion that you might want to try to build mathematical models of the training process is a couple of years old, but I think what was new here is that most of the work that people had been doing before is saying, ‘can we say something post-hoc about what happened when we trained all of these models, so that when we’re trying to figure out how to train a new large-scale model, we can make the best decisions about how to use our compute budget?’” says Jacob Andreas, associate professor in the Department of Electrical Engineering and Computer Science and principal investigator with the MIT-IBM Watson AI Lab.

The research was recently presented at the International Conference on Machine Learning by Andreas, along with MIT-IBM Watson AI Lab researchers Leshem Choshen and Yang Zhang of IBM Research.

Extrapolating performance

No matter how you slice it, developing LLMs is an expensive endeavor: from decision-making regarding the numbers of parameters and tokens, data selection and size, and training techniques to determining output accuracy and tuning to the target applications and tasks. Scaling laws offer a way to forecast model behavior by relating a large model’s loss to the performance of smaller, less-costly models from the same family, avoiding the need to fully train every candidate. The smaller models in a family differ mainly in their number of parameters and the number of tokens they are trained on. According to Choshen, elucidating scaling laws not only enables better pre-training decisions, but also democratizes the field by allowing researchers without vast resources to understand and build effective scaling laws.

The functional form of scaling laws is relatively simple, incorporating components from the small models that capture the number of parameters and their scaling effect, the number of training tokens and their scaling effect, and the baseline performance for the model family of interest. Together, they help researchers estimate a target large model’s performance loss; the smaller the loss, the better the target model’s outputs are likely to be.
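
To make this concrete, here is a minimal sketch of one commonly used parametric form (a Chinchilla-style loss curve). The function, constant values, and example model sizes below are illustrative assumptions, not the exact formulation from the paper.

# Illustrative only: one commonly used parametric form for LLM scaling laws,
# L(N, D) = E + A / N**alpha + B / D**beta, where N is the number of model
# parameters and D the number of training tokens. The exact form and fitted
# values in the MIT-IBM study may differ.

def scaling_law_loss(N, D, E=1.7, A=400.0, B=4000.0, alpha=0.34, beta=0.28):
    """Predicted loss for a model with N parameters trained on D tokens."""
    return E + A / N**alpha + B / D**beta

# Example: rough predicted loss for a hypothetical 7-billion-parameter model
# trained on 1.4 trillion tokens (all numbers are made up for illustration).
print(scaling_law_loss(N=7e9, D=1.4e12))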

These laws allow research teams to weigh trade-offs efficiently and to test how best to allocate limited resources. They’re particularly useful for evaluating scaling of a certain variable, like the number of tokens, and for A/B testing of different pre-training setups.

In general, scaling laws aren’t new; however, in the field of AI, they emerged as models grew and costs skyrocketed. “It’s like scaling laws just appeared at some point in the field,” says Choshen. “They started getting attention, but no one really tested how good they are and what you need to do to make a good scaling law.” Further, scaling laws were themselves also a black box, in a sense. “Whenever people have created scaling laws in the past, it has always just been one model, or one model family, and one dataset, and one developer,” says Andreas. “There hadn’t really been a lot of systematic meta-analysis, as everybody is individually training their own scaling laws. So, [we wanted to know,] are there high-level trends that you see across those things?”

Building better

To investigate this, Choshen, Andreas, and Zhang created a large dataset. They collected LLMs from 40 model families, including Pythia, OPT, OLMO, LLaMA, Bloom, T5-Pile, ModuleFormer mixture-of-experts, GPT, and other families. These included 485 unique, pre-trained models and, where available, data about their training checkpoints, computational cost (FLOPs), training epochs, and random seeds, along with 1.9 million performance metrics of loss and downstream tasks. The models differed in their architectures, weights, and so on. Using these models, the researchers fit over 1,000 scaling laws and compared their accuracy across architectures, model sizes, and training regimes, as well as testing how the number of models, the inclusion of intermediate training checkpoints, and partial training affected the predictive power of scaling laws for target models. They measured accuracy using absolute relative error (ARE): the difference between a scaling law’s prediction and the observed loss of a large, trained model, expressed relative to that observed loss. With this, the team compared the scaling laws and, after analysis, distilled practical recommendations for AI practitioners about what makes effective scaling laws.
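
As a rough illustration of that workflow, the sketch below fits such a parametric law to synthetic measurements from a few small models and scores it with ARE against a larger held-out model. The data, parameter values, and fitting choices are assumptions for illustration, not the study’s actual pipeline.

# Hedged sketch: fit a scaling law to (parameters, tokens, loss) measurements
# from small models, then score it with absolute relative error (ARE) against
# a larger held-out model. Data here are synthetic; the real study fit over
# 1,000 such laws across 40 model families.

import numpy as np
from scipy.optimize import curve_fit

def loss_model(x, E, A, B, alpha, beta):
    N, D = x
    return E + A / N**alpha + B / D**beta

# Synthetic "observations" from seven small models, plus measurement noise.
N = np.array([1.4e8, 4.1e8, 1.0e9, 1.6e9, 2.8e9, 4.3e9, 6.9e9])
D = np.array([3.0e9, 9.0e9, 2.0e10, 3.2e10, 5.5e10, 8.6e10, 1.3e11])
rng = np.random.default_rng(0)
observed = (loss_model((N, D), 1.7, 400.0, 4000.0, 0.34, 0.28)
            + rng.normal(0.0, 0.01, size=N.size))

# Fit the five scaling-law coefficients to the small-model measurements.
popt, _ = curve_fit(loss_model, (N, D), observed,
                    p0=[1.5, 300.0, 3000.0, 0.3, 0.3],
                    bounds=([0, 0, 0, 0.01, 0.01], [10, 1e6, 1e8, 1.0, 1.0]))

# Evaluate against a (hypothetical) larger target model.
target_N, target_D = 3.0e10, 6.0e11
target_loss = loss_model((target_N, target_D), 1.7, 400.0, 4000.0, 0.34, 0.28)
pred = loss_model((target_N, target_D), *popt)
are = abs(pred - target_loss) / target_loss
print(f"predicted loss {pred:.3f}, ARE {are:.1%}")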

Their shared guidelines walk developers through the steps, options, and expectations to consider. First, it’s critical to decide on a compute budget and a target model accuracy. The team found that 4 percent ARE is about the best achievable accuracy one could expect due to random seed noise, but up to 20 percent ARE is still useful for decision-making. The researchers identified several factors that improve predictions, such as including intermediate training checkpoints rather than relying only on final losses; this made scaling laws more reliable. However, very early training data, before about 10 billion tokens, are noisy, reduce accuracy, and should be discarded. They recommend prioritizing the training of more models across a spread of sizes, not just larger models, to improve the robustness of the scaling law’s predictions; selecting five models provides a solid starting point.

Generally, including larger models improves prediction, but costs can be saved by partially training the target model to about 30 percent of its dataset and using that for extrapolation. If the budget is considerably constrained, developers should consider training one smaller model within the target model family and borrow scaling law parameters from a model family with similar architecture; however, this may not work for encoder–decoder models. Lastly, the MIT-IBM research group found that when scaling laws were compared across model families, there was strong correlation between two sets of hyperparameters, meaning that three of the five hyperparameters explained nearly all of the variation and could likely capture the model behavior. Together, these guidelines provide a systematic approach to making scaling law estimation more efficient, reliable, and accessible for AI researchers working under varying budget constraints.

Several surprises arose during this work: small models that are only partially trained are still very predictive, and further, the intermediate training stages of a fully trained model can be used (as if they were individual models) to predict another target model. “Basically, you don’t pay anything in the training, because you already trained the full model, so the half-trained model, for instance, is just a byproduct of what you did,” says Choshen. Andreas also noted that, when aggregated, the variability across model families and different experiments stood out and was noisier than expected. Unexpectedly, the researchers found that it’s possible to use scaling laws fit on large models to predict the performance of smaller models. Other research in the field has hypothesized that smaller models were a “different beast” compared to large ones; however, Choshen disagrees. “If they’re totally different, they should have shown totally different behavior, and they don’t.”

While this work focused on model training time, the researchers plan to extend their analysis to model inference. Andreas says it’s not, “how does my model get better as I add more training data or more parameters, but instead as I let it think for longer, draw more samples. I think there are definitely lessons to be learned here about how to also build predictive models of how much thinking you need to do at run time.” He says the theory of inference time scaling laws might become even more critical because, “it’s not like I'm going to train one model and then be done. [Rather,] it’s every time a user comes to me, they’re going to have a new query, and I need to figure out how hard [my model needs] to think to come up with the best answer. So, being able to build those kinds of predictive models, like we’re doing in this paper, is even more important.”

This research was supported, in part, by the MIT-IBM Watson AI Lab and a Sloan Research Fellowship. 



de MIT News https://ift.tt/0IXsCNP

Monday, September 15, 2025

How to get your business into the flow

In the late 1990s, a Harley-Davidson executive named Donald Kieffer became general manager of a company engine plant near Milwaukee. The iconic motorcycle maker had forged a celebrated comeback, and Kieffer, who learned manufacturing on the shop floor, had been part of it. Now Kieffer wanted to make his facility better. So he arranged for a noted Toyota executive, Hajime Oba, to pay a visit.

The meeting didn’t go as Kieffer expected. Oba walked around the plant for 45 minutes, diagrammed the setup on a whiteboard, and suggested one modest change. As a high-ranking manager, Kieffer figured he had to make far-reaching upgrades. Instead, Oba asked him, “What is the problem you are trying to solve?”

Oba’s point was subtle. Harley-Davidson had a good plant that could get better, but not by imposing grand, top-down plans. The key was to fix workflow issues the employees could identify. Even a small fix can have large effects, and, anyway, a modestly useful change is better than a big, formulaic makeover that derails things. So Kieffer took Oba’s prompt and started making specific, useful changes. 

“Organizations are dynamic places, and when we try to impose a strict, static structure on them, we drive all that dynamism underground,” says MIT professor of management Nelson Repenning. “And the waste and chaos it creates is 100 times more expensive than people anticipate.”

Now Kieffer and Repenning have written a book about flexible, sensible organizational improvement, “There’s Got to Be a Better Way,” published by PublicAffairs. They call their approach “dynamic work design,” which aims to help firms refine their workflow — and to stop people from making it worse through overconfident, cookie-cutter prescriptions.

“So much of management theory presumes we can predict the future accurately, including our impact on it,” Repenning says. “And everybody knows that’s not true. Yet we go along with the fiction. The premise underlying dynamic work design is, if we accept that we can’t predict the future perfectly, we might design the world differently.”

Kieffer adds: “Our principles address how work is designed. Not how leaders have to act, but how you design human work, and drive changes.”

One collaboration, five principles

This book is the product of a long collaboration: In 1996, Kieffer first met Repenning, who was then a new MIT faculty member, and they soon recognized they thought similarly about managing work. By 2008, Kieffer also became a lecturer at the MIT Sloan School of Management, where Repenning is now a distinguished professor of system dynamics and organization studies.

The duo began teaching executive education classes together at MIT Sloan, often working with firms tackling tough problems. In the 2010s, they worked extensively with BP executives after the Deepwater Horizon accident, finding ways to combine safety priorities with other operations.

Repenning is an expert on system dynamics, an MIT-developed field emphasizing how parts of a system interact. In a firm, making isolated changes may throw the system as a whole further off kilter. Instead, managers need to grasp the larger dynamics — and recognize that a firm’s problems are not usually its people, since most employees perform similarly when burdened by a faulty system.

Whereas many touted management systems prescribe set practices in advance — like culling the bottom 10 percent of your employees annually — Repenning and Kieffer believe a firm should study itself empirically and develop improvements from there.

“Managers lose touch with how work actually gets done,” Kieffer says. “We bring managers in touch with real-time work, to see the problems people have, to help them solve it and learn new ways to work.”

Over time, Repenning and Kieffer have codified their ideas about work design into five principles:

  • Solve the right problem: Use empiricism to develop a blame-free statement of issues to address;
  • Structure for discovery: Allow workers to see how their work fits into the bigger picture, and to help improve things;
  • Connect the human chain: Make sure the right information moves from one person to the next;
  • Regulate for flow: New tasks should only enter a system when there is capacity for them to be handled; and
  • Visualize the work: Create a visual method — think of a whiteboard with sticky notes — for mapping work operations.

No mugs, no t-shirts — just open your eyes

Applying dynamic work design to any given firm may sound simple, but Repenning and Kieffer note that many forces make it hard to implement. For instance, firm leaders may be tempted to opt for technology-based solutions when there are simpler, cheaper fixes available.

Indeed, “resorting to technology before fixing the underlying design risks wasting money and embedding the original problem even deeper in the organization,” they write in the book.

Moreover, dynamic work design is not itself a solution, but a way of trying to find a specific solution.

“One thing that keeps Don and I up at night is a CEO reading our book and thinking, ‘We’re going to be a dynamic work design company,’ and printing t-shirts and coffee mugs and holding two-day conferences where everyone signs the dynamic work design poster, and evaluating everyone every week on how dynamic they are,” Repenning says. “Then you’re being awfully static.”

After all, firms change, and their needs change. Repenning and Kieffer want managers to keep studying their firm’s workflow so they can stay current with its needs. In fairness, a certain number of managers already do this.

“Most people have experienced fleeting moments of good work design,” Repenning says. Building on that, he says, managers and employees can keep driving a process of improvement that is realistic and logical.

“Start small,” he adds. “Pick one problem you can work on in a couple of weeks, and solve that. Most cases, with open eyes, there’s low-hanging fruit. You find the places you can win, and change incrementally, rather than all at once. For senior executives, this is hard. They are used to doing big things. I tell our executive ed students, it’s going to feel uncomfortable at the beginning, but this is a much more sustainable path to progress.”



de MIT News https://ift.tt/9dTCIQB

Machine-learning tool gives doctors a more detailed 3D picture of fetal health

For pregnant women, an ultrasound is an informative (and sometimes necessary) procedure. Ultrasounds typically produce two-dimensional black-and-white scans of a fetus that can reveal key insights, including biological sex, approximate size, and abnormalities like heart issues or cleft lip. If your doctor wants a closer look, they may use magnetic resonance imaging (MRI), which uses magnetic fields to capture images that can be combined to create a 3D view of the fetus.

MRIs aren’t a catch-all, though; the 3D scans are difficult for doctors to interpret well enough to diagnose problems because our visual system is not accustomed to processing 3D volumetric scans (in other words, a wrap-around look that also shows us the inner structures of a subject). Enter machine learning, which could help model a fetus’s development more clearly and accurately from data — although no such algorithm has been able to model their somewhat random movements and various body shapes.

That is, until now: a new approach called “Fetal SMPL,” from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Boston Children’s Hospital (BCH), and Harvard Medical School, presents clinicians with a more detailed picture of fetal health. It was adapted from “SMPL” (Skinned Multi-Person Linear model), a 3D model developed in computer graphics to capture adult body shapes and poses, as a way to represent fetal body shapes and poses accurately. Fetal SMPL was then trained on 20,000 MRI volumes to predict the location and size of a fetus and create sculpture-like 3D representations. Inside each model is a skeleton with 23 articulated joints called a “kinematic tree,” which the system uses to pose and move like the fetuses it saw during training.

The extensive, real-world scans that Fetal SMPL learned from helped it develop pinpoint accuracy. Imagine stepping into a stranger’s footprint while blindfolded, and not only does it fit perfectly, but you correctly guess what shoe they wore — similarly, the tool closely matched the position and size of fetuses in MRI frames it hadn’t seen before. Fetal SMPL was only misaligned by an average of about 3.1 millimeters, a gap smaller than a single grain of rice.

The approach could enable doctors to precisely measure things like the size of a baby’s head or abdomen and compare these metrics with healthy fetuses at the same age. Fetal SMPL has demonstrated its clinical potential in early tests, where it achieved accurate alignment results on a small group of real-world scans.

“It can be challenging to estimate the shape and pose of a fetus because they’re crammed into the tight confines of the uterus,” says lead author, MIT PhD student, and CSAIL researcher Yingcheng Liu SM ’21. “Our approach overcomes this challenge using a system of interconnected bones under the surface of the 3D model, which represent the fetal body and its motions realistically. Then, it relies on a coordinate descent algorithm to make a prediction, essentially alternating between guessing pose and shape from tricky data until it finds a reliable estimate.”
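
The alternating strategy Liu describes can be sketched in a few lines. The helper functions below are hypothetical placeholders standing in for the real Fetal SMPL optimization steps; they are shown only to illustrate the coordinate-descent structure, not the actual implementation.

# Illustrative coordinate-descent loop for fitting a body model to an MRI
# volume: alternate between updating pose (joint angles) and shape parameters
# while holding the other fixed. The fit_* helpers are hypothetical stand-ins.

import numpy as np

def fit_pose(pose, shape, scan):
    # Placeholder: minimize alignment error over pose with shape held fixed.
    return pose

def fit_shape(pose, shape, scan):
    # Placeholder: minimize alignment error over shape with pose held fixed.
    return shape

def alignment_error(pose, shape, scan):
    # Placeholder: e.g., mean distance between model surface and the scan.
    return 0.0

def coordinate_descent(scan, n_joints=23, n_shape=10, max_iters=10, tol=1e-3):
    pose = np.zeros((n_joints, 3))   # axis-angle rotation for each joint
    shape = np.zeros(n_shape)        # low-dimensional shape coefficients
    prev = np.inf
    for _ in range(max_iters):
        pose = fit_pose(pose, shape, scan)
        shape = fit_shape(pose, shape, scan)
        err = alignment_error(pose, shape, scan)
        if prev - err < tol:         # stop once the fit stops improving
            break
        prev = err
    return pose, shape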

In utero

Fetal SMPL was tested on shape and pose accuracy against the closest baseline the researchers could find: a system that models infant growth called “SMIL.” Since babies out of the womb are larger than fetuses, the team shrank those models by 75 percent to level the playing field.

The system outperformed this baseline on a dataset of fetal MRIs between the gestational ages of 24 and 37 weeks taken at Boston Children’s Hospital. Fetal SMPL was able to recreate real scans more precisely, as its models closely lined up with real MRIs.

The method was also efficient at lining up its models with images, needing only three iterations to arrive at a reasonable alignment. In an experiment that counted how many incorrect guesses Fetal SMPL made before arriving at a final estimate, its accuracy plateaued from the fourth step onward.

The researchers have just begun testing their system in the real world, where it produced similarly accurate models in initial clinical tests. While these results are promising, the team notes that they’ll need to apply their results to larger populations, different gestational ages, and a variety of disease cases to better understand the system’s capabilities.

Only skin deep

Liu also notes that their system only helps analyze what doctors can see on the surface of a fetus, since only bone-like structures lie beneath the skin of the models. To better monitor babies’ internal health, such as liver, lung, and muscle development, the team intends to make their tool volumetric, modeling the fetus’s inner anatomy from scans. Such upgrades would make the models more human-like, but the current version of Fetal SMPL already presents a precise (and unique) upgrade to 3D fetal health analysis.

“This study introduces a method specifically designed for fetal MRI that effectively captures fetal movements, enhancing the assessment of fetal development and health,” says Kiho Im, Harvard Medical School associate professor of pediatrics and staff scientist in the Division of Newborn Medicine at BCH’s Fetal-Neonatal Neuroimaging and Developmental Science Center. Im, who was not involved with the paper, adds that this approach “will not only improve the diagnostic utility of fetal MRI, but also provide insights into the early functional development of the fetal brain in relation to body movements.”

“This work reaches a pioneering milestone by extending parametric surface human body models for the earliest shapes of human life: fetuses,” says Sergi Pujades, an associate professor at University Grenoble Alpes, who wasn’t involved in the research. “It allows us to detangle the shape and motion of a human, which has already proven to be key in understanding how adult body shape relates to metabolic conditions and how infant motion relates to neurodevelopmental disorders. In addition, the fact that the fetal model stems from, and is compatible with, the adult (SMPL) and infant (SMIL) body models, will allow us to study human shape and pose evolution over long periods of time. This is an unprecedented opportunity to further quantify how human shape growth and motion are affected by different conditions.”

Liu wrote the paper with three CSAIL members: Peiqi Wang SM ’22, PhD ’25; MIT PhD student Sebastian Diaz; and senior author Polina Golland, the Sunlin and Priscilla Chou Professor of Electrical Engineering and Computer Science, a principal investigator in MIT CSAIL, and the leader of the Medical Vision Group. BCH assistant professor of pediatrics Esra Abaci Turk, Inria researcher Benjamin Billot, and Harvard Medical School professor of pediatrics and professor of radiology Patricia Ellen Grant are also authors on the paper. This work was supported, in part, by the National Institutes of Health and the MIT CSAIL-Wistron Program.

The researchers will present their work at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in September.



de MIT News https://ift.tt/V5LJXCr

Climate Action Learning Lab helps state and local leaders identify and implement effective climate mitigation strategies

This spring, J-PAL North America — a regional office of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) — launched its first ever Learning Lab, centered on climate action. The Learning Lab convened a cohort of government leaders who are enacting a broad range of policies and programs to support the transition to a low-carbon economy. Through the Learning Lab, participants explored how to embed randomized evaluation into promising solutions to determine how to maximize changes in behavior — a strategy that can help advance decarbonization in the most cost-effective ways to benefit all communities. The inaugural cohort included more than 25 participants from state agencies and cities, including the Massachusetts Clean Energy Center, the Minnesota Housing Finance Agency, and the cities of Lincoln, Nebraska; Newport News, Virginia; Orlando, Florida; and Philadelphia.

“State and local governments have demonstrated tremendous leadership in designing and implementing decarbonization policies and climate action plans over the past few years,” said Peter Christensen, scientific advisor of the J-PAL North America Environment, Energy, and Climate Change Sector. “And while these are informed by scientific projections on which programs and technologies may effectively and equitably reduce emissions, the projection methods involve a lot of assumptions. It can be challenging for governments to determine whether their programs are actually achieving the expected level of emissions reductions that we desperately need. The Climate Action Learning Lab was designed to support state and local governments in addressing this need — helping them to rigorously evaluate their programs to detect their true impact.”

From May to July, the Learning Lab offered a suite of resources for participants to leverage rigorous evaluation to identify effective and equitable climate mitigation solutions. Offerings included training lectures, one-on-one strategy sessions, peer learning engagements, and researcher collaboration. State and local leaders built skills and knowledge in evidence generation and use, reviewed and applied research insights to their own programmatic areas, and identified priority research questions to guide evidence-building and decision-making practices. Programs prioritized for evaluation covered topics such as compliance with building energy benchmarking policies, take-up rates of energy-efficient home improvement programs such as heat pumps and Solar for All, and scoring criteria for affordable housing development programs.

“We appreciated the chance to learn about randomized evaluation methodology, and how this impact assessment tool could be utilized in our ongoing climate action planning. With so many potential initiatives to pursue, this approach will help us prioritize our time and resources on the most effective solutions,” said Anna Shugoll, program manager at the City of Philadelphia’s Office of Sustainability.

This phase of the Learning Lab was possible thanks to grant funding from J-PAL North America’s longtime supporter and collaborator Arnold Ventures. The work culminated in an in-person summit in Cambridge, Massachusetts, on July 23, where Learning Lab participants delivered a presentation on their jurisdiction’s priority research questions and strategic evaluation plans. They also connected with researchers in the J-PAL network to further explore impact evaluation opportunities for promising decarbonization programs.

“The Climate Action Learning Lab has helped us identify research questions for some of the City of Orlando’s deep decarbonization goals. J-PAL staff, along with researchers in the J-PAL network, worked hard to bridge the gap between behavior change theory and the applied, tangible benefits that we achieve through rigorous evaluation of our programs,” said Brittany Sellers, assistant director for sustainability, resilience and future-ready for Orlando. “Whether we’re discussing an energy-efficiency policy for some of the biggest buildings in the City of Orlando or expanding [electric vehicle] adoption across the city, it’s been very easy to communicate some of these high-level research concepts and what they can help us do to actually pursue our decarbonization goals.”

The next phase of the Climate Action Learning Lab will center on building partnerships between jurisdictions and researchers in the J-PAL network to explore the launch of randomized evaluations, deepening the community of practice among current cohort members, and cultivating a broad culture of evidence building and use in the climate space. 

“The Climate Action Learning Lab provided a critical space for our city to collaborate with other cities and states seeking to implement similar decarbonization programs, as well as with researchers in the J-PAL network to help rigorously evaluate these programs,” said Daniel Collins, innovation team director at the City of Newport News. “We look forward to further collaboration and opportunities to learn from evaluations of our mitigation efforts so we, as a city, can better allocate resources to the most effective solutions.”

The Climate Action Learning Lab is one of several offerings under the J-PAL North America Evidence for Climate Action Project. The project’s goal is to convene an influential network of researchers, policymakers, and practitioners to generate rigorous evidence to identify and advance equitable, high-impact policy solutions to climate change in the United States. In addition to the Learning Lab, J-PAL North America will launch a climate special topic request for proposals this fall to fund research on climate mitigation and adaptation initiatives. J-PAL will welcome applications from both research partnerships formed through the Learning Lab as well as other eligible applicants.

Local government leaders, researchers, potential partners, or funders committed to advancing climate solutions that work, and who want to learn more about the Evidence for Climate Action Project, may email na_eecc@povertyactionlab.org or subscribe to the J-PAL North America Climate Action newsletter.



de MIT News https://ift.tt/IC9B0dg

Sunday, September 14, 2025

3 Questions: On humanizing scientists

Alan Lightman has spent much of his authorial career writing about scientific discovery, the boundaries of knowledge, and remarkable findings from the world of research. His latest book “The Shape of Wonder,” co-authored with the lauded English astrophysicist Martin Rees and published this month by Penguin Random House, offers both profiles of scientists and an examination of scientific methods, humanizing researchers and making an affirmative case for the value of their work. Lightman is a professor of the practice of the humanities in MIT’s Comparative Media Studies/Writing Program; Rees is a fellow of Trinity College at Cambridge University and the UK’s Astronomer Royal. Lightman talked with MIT News about the new volume.

Q: What is your new book about?

A: The book tries to show who scientists are and how they think. Martin and I wrote it to address several problems. One is mistrust in scientists and their institutions, which is a worldwide problem. We saw this problem illustrated during the pandemic. That mistrust I think is associated with a belief by some people that scientists and their institutions are part of the elite establishment, a belief that is one feature of the populist movement worldwide. In recent years there’s been considerable misinformation about science. And, many people don’t know who scientists are.

Another thing, which is very important, is a lack of understanding about evidence-based critical thinking. When scientists get new data and information, their theories and recommendations change. But this process, part of the scientific method, is not well-understood outside of science. Those are issues we address in the book. We have profiles of a number of scientists and show them as real people, most of whom work for the benefit of society or out of intellectual curiosity, rather than being driven by political or financial interests. We try to humanize scientists while showing how they think.

Q: You profile some well-known figures in the book, as well as some lesser-known scientists. Who are some of the people you feature in it?

A: One person is a young neuroscientist, Lace Riggs, who works at the McGovern Institute for Brain Research at MIT. She grew up in difficult circumstances in southern California, decided to go into science, got a PhD in neuroscience, and works as a postdoc researching the effect of different compounds on the brain and how that might lead to drugs to combat certain mental illnesses. Another very interesting person is Magdalena Lenda, an ecologist in Poland. When she was growing up, her father sold fish for a living, and took her out in the countryside and would identify plants, which got her interested in ecology. She works on stopping invasive species. The intention is to talk about people’s lives and interests, and show them as full people.

While humanizing scientists in the book, we show how critical thinking works in science. By the way, critical thinking is not owned by scientists. Accountants, doctors, and many others use critical thinking. I’ve talked to my car mechanic about what kinds of problems come into the shop. People don’t know what causes the check engine light to go on — the catalytic converter, corroded spark plugs, etc. — so mechanics often start from the simplest and cheapest possibilities and go to the next potential problem, down the list. That’s a perfect example of critical thinking. In science, it is checking your ideas and hypotheses against data, then updating them if needed.

Q: Are there common threads linking together the many scientists you feature in the book?

A: There are common threads, but also no single scientific stereotype. There’s a wide range of personalities in the sciences. But one common thread is that all the scientists I know are passionate about what they’re doing. They’re working for the benefit of society, and out of sheer intellectual curiosity. That links all the people in the book, as well as other scientists I’ve known. I wish more people in America would realize this: Scientists are working for their overall benefit. Science is a great success story. Thanks to scientific advances, since 1900 the expected lifespan in the U.S. has increased from a little more than 45 years to almost 80 years, in just a century, largely due to our ability to combat diseases. What’s more vital than your lifespan?

This book is just a drop in the bucket in terms of what needs to be done. But we all do what we can. 



de MIT News https://ift.tt/tWwTh3G

Friday, September 12, 2025

Lidar helps gas industry find methane leaks and avoid costly losses

Each year, the U.S. energy industry loses an estimated 3 percent of its natural gas production, valued at $1 billion in revenue, to leaky infrastructure. Escaping invisibly into the air, these methane gas plumes can now be detected, imaged, and measured using a specialized lidar flown on small aircraft.

This lidar is a product of Bridger Photonics, a leading methane-sensing company based in Bozeman, Montana. MIT Lincoln Laboratory developed the lidar's optical-power amplifier, a key component of the system, by advancing its existing slab-coupled optical waveguide amplifier (SCOWA) technology. The methane-detecting lidar is 10 to 50 times more capable than other airborne remote sensors on the market.

"This drone-capable sensor for imaging methane is a great example of Lincoln Laboratory technology at work, matched with an impactful commercial application," says Paul Juodawlkis, who pioneered the SCOWA technology with Jason Plant in the Advanced Technology Division and collaborated with Bridger Photonics to enable its commercial application.

Today, the product is being adopted widely, including by nine of the top 10 natural gas producers in the United States. "Keeping gas in the pipe is good for everyone — it helps companies bring the gas to market, improves safety, and protects the outdoors," says Pete Roos, founder and chief innovation officer at Bridger. "The challenge with methane is that you can't see it. We solved a fundamental problem with Lincoln Laboratory."

A laser source "miracle"

In 2014, the Advanced Research Projects Agency-Energy (ARPA-E) was seeking a cost-effective and precise way to detect methane leaks. Highly flammable and a potent pollutant, methane gas (the primary constituent of natural gas) moves through the country via a vast and intricate pipeline network. Bridger submitted a research proposal in response to ARPA-E's call and was awarded funding to develop a small, sensitive aerial lidar.

Aerial lidar sends laser light down to the ground and measures the light that reflects back to the sensor. Such lidar is often used for producing detailed topography maps. Bridger's idea was to merge topography mapping with gas measurements. Methane absorbs light at the infrared wavelength of 1.65 microns. Operating a laser at that wavelength could allow a lidar to sense the invisible plumes and measure leak rates.
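
The underlying sensing principle can be illustrated with the Beer-Lambert law: light at an absorbing wavelength returns weaker in proportion to how much methane it passed through, so comparing it with a reference wavelength that methane does not absorb reveals the gas column along the beam. The sketch below is a generic differential-absorption illustration with made-up numbers, not Bridger's actual processing.

# Generic illustration of absorption-based gas sensing (Beer-Lambert law):
# comparing the return signal at a methane-absorbing wavelength (~1.65 microns)
# with an off-absorption reference gives the gas column along the beam.
# All numbers here are placeholders, not Bridger Photonics' actual values.

import math

def column_from_returns(p_on, p_off, sigma):
    """Infer gas column (molecules/cm^2) from on/off wavelength returns.

    p_on, p_off: received power at the absorbing and reference wavelengths
    sigma: absorption cross-section per molecule (cm^2)
    """
    return math.log(p_off / p_on) / (2.0 * sigma)  # factor of 2: down and back

# Example with made-up numbers: a 5 percent dip at the absorbing wavelength.
sigma_ch4 = 1.0e-20          # placeholder cross-section, cm^2 per molecule
column = column_from_returns(p_on=0.95, p_off=1.00, sigma=sigma_ch4)
print(f"methane column ~ {column:.2e} molecules/cm^2")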

"This laser source was one of the hardest parts to get right. It's a key element," Roos says. His team needed a laser source with specific characteristics to emit powerfully enough at a wavelength of 1.65 microns to work from useful altitudes. Roos recalled the ARPA-E program manager saying they needed a "miracle" to pull it off.

Through mutual connections, Bridger was introduced to a Lincoln Laboratory technology for optically amplifying laser signals: the SCOWA. When Bridger contacted Juodawlkis and Plant, they had been working on SCOWAs for a decade. Although they had never investigated SCOWAs at 1.65 microns, they thought that the fundamental technology could be extended to operate at that wavelength. Lincoln Laboratory received ARPA-E funding to develop 1.65-micron SCOWAs and provide prototype units to Bridger for incorporation into their gas-mapping lidar systems.

"That was the miracle we needed," Roos says.

A legacy in laser innovation

Lincoln Laboratory has long been a leader in semiconductor laser and optical emitter technology. In 1962, the laboratory was among the first to demonstrate the diode laser, which is now the most widespread laser used globally. Several spinout companies, such as Lasertron and TeraDiode, have commercialized innovations stemming from the laboratory's laser research, including those for fiber-optic telecommunications and metal-cutting applications.

In the early 2000s, Juodawlkis, Plant, and others at the laboratory recognized a need for a stable, powerful, and bright single-mode semiconductor optical amplifier, which could enhance lidar and optical communications. They developed the SCOWA (slab-coupled optical waveguide amplifier) concept by extending earlier work on slab-coupled optical waveguide lasers (SCOWLs). The initial SCOWA was funded under the laboratory's internal technology investment portfolio, a pool of R&D funding provided by the undersecretary of defense for research and engineering to seed new technology ideas. These ideas often mature into sponsored programs or lead to commercialized technology.

"Soon, we developed a semiconductor optical amplifier that was 10 times better than anything that had ever been demonstrated before," Plant says. Like other semiconductor optical amplifiers, the SCOWA guides laser light through semiconductor material. This process increases optical power as the laser light interacts with electrons, causing them to shed photons at the same wavelength as the input laser. The SCOWA's unique light-guiding design enables it to reach much higher output powers, creating a powerful and efficient beam. They demonstrated SCOWAs at various wavelengths and applied the technology to projects for the Department of Defense.

When Bridger Photonics reached out to Lincoln Laboratory, the most impactful application of the device yet emerged. Working iteratively through the ARPA-E funding and a Cooperative Research and Development Agreement (CRADA), the team increased Bridger's laser power by more than tenfold. This power boost enabled them to extend the range of the lidar to elevations over 1,000 feet.

"Lincoln Laboratory had the knowledge of what goes on inside the optical amplifier — they could take our input, adjust the recipe, and make a device that worked very well for us," Roos says.

The Gas Mapping Lidar was commercially released in 2019. That same year, the product won an R&D 100 Award, recognizing it as a revolutionary advancement in the marketplace.

A technology transfer takes off

Today, the United States is the world's largest natural gas supplier, driving growth in the methane-sensing market. Bridger Photonics deploys its Gas Mapping Lidar for customers nationwide, attaching the sensor to planes and drones and pinpointing leaks across the entire supply chain, from the sites where gas is extracted, through the pipelines that carry it across the country, to the businesses and homes where it is delivered. Customers buy the data from these scans to efficiently locate and repair leaks in their gas infrastructure. In January 2025, the Environmental Protection Agency provided regulatory approval for the technology.

According to Bruce Niemeyer, president of Chevron's shale and tight operations, the lidar capability has been game-changing: "Our goal is simple — keep methane in the pipe. This technology helps us assure we are doing that … It can find leaks that are 10 times smaller than other commercial providers are capable of spotting."

At Lincoln Laboratory, researchers continue to innovate new devices in the national interest. The SCOWA is one of many technologies in the toolkit of the laboratory's Microsystems Prototyping Foundry, which will soon be expanded to include a new Compound Semiconductor Laboratory – Microsystem Integration Facility. Government, industry, and academia can access these facilities through government-funded projects, CRADAs, test agreements, and other mechanisms.

At the direction of the U.S. government, the laboratory is also seeking industry transfer partners for a technology that couples SCOWA with a photonic integrated circuit platform. Such a platform could advance quantum computing and sensing, among other applications.

"Lincoln Laboratory is a national resource for semiconductor optical emitter technology," Juodawlkis says.



de MIT News https://ift.tt/D3tnCEU