Thursday, April 9, 2026

Bridging space research and policy

While earning her dual master’s degrees in aeronautics and astronautics and public policy, Carissma McGee SM ’25 learned to navigate between two seemingly distinct worlds, bridging rigorous technical analysis and policy decisions.

As an undergraduate congressional intern and researcher, she saw a persistent gap in space policymaking. Policymakers often lacked technical expertise, while researchers were rarely involved in increasingly complex questions surrounding intellectual property and international collaboration in space.

Her work on intellectual property frameworks for space collaborations directly addresses that gap, combining expertise in gravitational microlensing and space telescope operations with policy analysis to tackle emerging governance challenges.

“I want to bring an expert level of science into the rooms where policy decisions are made,” says McGee, now a doctoral student in aeronautics and astronautics. “That perspective is critical for shaping the future of research and exploration.”

Likewise, she wants to bring her expertise in public policy into the lab.

“I enjoy being able to ask questions about intellectual property, territorial claims, knowledge transfer, or allocation of resources early on in a research project,” adds McGee.

McGee’s fascination with space started during her high school years in Delaware, when she first volunteered at a local observatory and then interned at the NASA Goddard Space Flight Center in Maryland.

Following high school, McGee attended Howard University. She was selected to participate in the Karsh STEM Scholars Program, a full-ride scholarship track for students committed to working continuously toward earning doctoral degrees. Howard, which holds an R1 research classification from the Carnegie Foundation, is in close proximity to the Goddard Space Flight Center, as well as the American Astronomical Society and the D.C. Space Grant Consortium.

In 2020, after her first year at Howard, the Covid-19 pandemic sent McGee back to her hometown in Delaware. As it turned out, that gave her an opportunity to work with her local congresswoman, Lisa Blunt Rochester, then a U.S. representative. In addition to supporting the congresswoman’s constituents, she drafted dozens of letters related to STEM education and energy reform.

Working in government gave McGee an opportunity to use her voice to “advocate for astronomy and astrophysics with the American Astronomical Society, advocate for space sciences, and for science representation.”

As an undergraduate, McGee also conducted research linking computational physics and astronomy, working with both NASA’s Jet Propulsion Laboratory and Yale University’s Department of Astronomy. She also continued research begun in 2021 with the Harvard and Smithsonian Center for Astrophysics’ Black Hole Initiative, contributing to work associated with the Event Horizon Telescope.

When she visited MIT in 2023, McGee was struck by the Institute’s openness to interdisciplinary work and support of her interest in combining aeronautics and astronautics with policy.

Once at MIT, she started working in the Space, Telecommunications, Astronomy, and Radiation Laboratory (STAR Lab) with advisor Kerri Cahoy, professor of aeronautics and astronautics. McGee says she experienced a great deal of freedom to craft her own program.

“I was drawn to the lab’s work on satellite missions and CubeSats, and excited to discover that I could pursue exoplanet astrophysics research within this framework and that submitting a dual thesis or focusing on astrophysics applications was possible,” says McGee. “When I expressed interest in participating in the Technology [and] Policy Program for a dual thesis in a framework for space policy, my advisors encouraged me to explore how we could integrate these diverse interests into a path forward.”

In 2024, McGee was awarded a MathWorks Fellowship to pursue research associated with the Nancy Grace Roman Space Telescope and join a NASA mission.

“It was just amazing to join the exoplanet group at NASA,” she says. “I had a front-row seat to see how real researchers and workers navigate complex problems.”

McGee credits MathWorks with helping fellows to “be at the forefront of knowledge and shaping innovation.”

One of her proudest academic accomplishments is PyLIMASS, a software system she developed with collaborators at Louisiana State University, the Ohio State University, and NASA’s Goddard Space Flight Center. The tool enables more accurate mass and distance estimates in gravitational microlensing events, helping the Roman Space Telescope project meet its precision goals for studying exoplanets.

“To build software that didn’t previously exist — and to know it will be used for the Roman mission — is incredibly exciting,” McGee says.

In May 2025, McGee graduated with dual master’s degrees in aeronautics and astronautics and technology and policy. That same month, she presented her research at the American Astronomical Society meeting in Anchorage, Alaska, and at the Technology Management and Policy Conference in Portugal.

McGee remained at MIT to pursue her doctoral degree. Last fall, as an MIT BAMIT Community Advancement Program and Fund Fellow, she hosted a daylong conference for STEM students focused on how intellectual property frameworks shape technical fields.

McGee’s accomplishments and contributions have been celebrated with a number of honors recently. In 2026, she was named Miss Black Massachusetts United States, was recognized among MIT’s Graduate Students of Excellence, and received the MIT MLK Leadership Award in recognition of her service, integrity, and community impact.

Beyond her academic work, McGee is active across campus. She teaches Pilates with MIT Recreation, participates in the Graduate Women in Aerospace Engineering group, and serves as a graduate resident assistant in an undergraduate dorm on East Campus.

She credits the AeroAstro graduate community with keeping her momentum going.

“Even if we’re tired, there’s this powerful camaraderie among AeroAstro graduate students working together. Seeing my peers pushing through similar research milestones and solving daunting problems motivates you to advance beyond the finish line to further developments in the field.”



from MIT News https://ift.tt/drL6Zmk

New technique makes AI models leaner and faster while they’re still learning

Training a large artificial intelligence model is expensive, not just in dollars, but in time, energy, and computational resources. Traditionally, obtaining a smaller, faster model either requires training a massive one first and then trimming it down, or training a small one from scratch and accepting weaker performance. 

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), the Max Planck Institute for Intelligent Systems, the European Laboratory for Learning and Intelligent Systems (ELLIS), ETH Zurich, and Liquid AI have now developed a new method that sidesteps this trade-off entirely, compressing models during training rather than after.

The technique, called CompreSSM, targets a family of AI architectures known as state-space models, which power applications ranging from language processing to audio generation and robotics. By borrowing mathematical tools from control theory, the researchers can identify which parts of a model are pulling their weight and which are dead weight, before surgically removing the unnecessary components early in the training process.

"It's essentially a technique to make models grow smaller and faster as they are training," says Makram Chahine, a PhD student in electrical engineering and computer science, CSAIL affiliate, and lead author of the paper. "During learning, they're also getting rid of parts that are not useful to their development."

The key insight is that the relative importance of different components within these models stabilizes surprisingly early during training. Using a mathematical quantity called Hankel singular values, which measure how much each internal state contributes to the model's overall behavior, the team showed they can reliably rank which dimensions matter and which don't after only about 10 percent of the training process. Once those rankings are established, the less-important components can be safely discarded, and the remaining 90 percent of training proceeds at the speed of a much smaller model.
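In control-theory terms, that ranking step can be sketched concretely. The snippet below is a minimal illustration, not the authors' released code: it computes the Hankel singular values of a small discrete-time linear state-space model from its controllability and observability Gramians, and then uses standard balanced truncation to keep only the k most influential state dimensions. The matrix shapes and the stability assumption on A are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def hankel_singular_values(A, B, C):
    """Hankel singular values of the discrete-time system
    x[k+1] = A x[k] + B u[k],  y[k] = C x[k]  (A assumed stable)."""
    P = solve_discrete_lyapunov(A, B @ B.T)       # controllability Gramian
    Q = solve_discrete_lyapunov(A.T, C.T @ C)     # observability Gramian
    # Each value measures how much one internal direction contributes
    # to the model's overall input-output behavior.
    return np.sort(np.sqrt(np.abs(np.linalg.eigvals(P @ Q).real)))[::-1]

def balanced_truncation(A, B, C, k):
    """Keep the k state dimensions with the largest Hankel singular values."""
    P = solve_discrete_lyapunov(A, B @ B.T)
    Q = solve_discrete_lyapunov(A.T, C.T @ C)
    Lp, Lq = np.linalg.cholesky(P), np.linalg.cholesky(Q)
    U, s, Vt = np.linalg.svd(Lq.T @ Lp)           # s holds the Hankel values
    T = Lp @ Vt.T @ np.diag(s ** -0.5)            # balancing transform
    Tinv = np.diag(s ** -0.5) @ U.T @ Lq.T
    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T    # balanced realization
    return Ab[:k, :k], Bb[:k, :], Cb[:, :k]       # drop low-energy states
```

In CompreSSM, a ranking of this kind is computed on the model's learned state-space layers partway through training, rather than on a fixed, fully trained system as in this toy example.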

"What's exciting about this work is that it turns compression from an afterthought into part of the learning process itself,” says senior author Daniela Rus, MIT professor and director of CSAIL. “Instead of training a large model and then figuring out how to make it smaller, CompreSSM lets the model discover its own efficient structure as it learns. That's a fundamentally different way to think about building AI systems.”

The results are striking. On image classification benchmarks, compressed models maintained nearly the same accuracy as their full-sized counterparts while training up to 1.5 times faster. A compressed model reduced to roughly a quarter of its original state dimension achieved 85.7 percent accuracy on the CIFAR-10 benchmark, compared to just 81.8 percent for a model trained at that smaller size from scratch. On Mamba, one of the most widely used state-space architectures, the method achieved approximately 4x training speedups, compressing a 128-dimensional model down to around 12 dimensions while maintaining competitive performance.

"You get the performance of the larger model, because you capture most of the complex dynamics during the warm-up phase, then only keep the most-useful states," Chahine says. "The model is still able to perform at a higher level than training a small model from the start."

What makes CompreSSM distinct from existing approaches is its theoretical grounding. Conventional pruning methods train a full model and then strip away parameters after the fact, meaning you still pay the full computational cost of training the big model. Knowledge distillation, another popular technique, requires training a large "teacher" model to completion and then training a second, smaller "student" model on top of it, essentially doubling the training effort. CompreSSM avoids both of these costs by making informed compression decisions mid-stream.

The team benchmarked CompreSSM head-to-head against both alternatives. Compared to Hankel nuclear norm regularization, a recently proposed spectral technique for encouraging compact state-space models, CompreSSM was more than 40 times faster, while also achieving higher accuracy. The regularization approach slowed training by roughly 16 times because it required expensive eigenvalue computations at every single gradient step, and even then, the resulting models underperformed. Against knowledge distillation on CIFAR-10, CompreSSM held a clear advantage for heavily compressed models: At smaller state dimensions, distilled models saw significant accuracy drops, while CompreSSM-compressed models maintained near-full performance. And because distillation requires a forward pass through both the teacher and student at every training step, even its smaller student models trained slower than the full-sized baseline.

The researchers proved mathematically that the importance of individual model states changes smoothly during training, thanks to an application of Weyl's theorem, and showed empirically that the relative rankings of those states remain stable. Together, these findings give practitioners confidence that dimensions identified as negligible early on won't suddenly become critical later.
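The perturbation bound behind that argument is classical. In its standard form for singular values, Weyl's inequality says that changing a matrix by a small amount can move each singular value by at most the size of that change; written for a matrix H and an update E (this is the textbook statement, not necessarily the exact form used in the paper):

```latex
% Weyl's inequality for singular values: an update E to the matrix H
% shifts every singular value by at most the spectral norm of E.
\[
  \bigl| \sigma_i(H + E) - \sigma_i(H) \bigr| \;\le\; \lVert E \rVert_2
  \qquad \text{for all } i .
\]
```

If the model's parameters move only slightly from one training step to the next, the singular values that encode each state's importance can only drift slowly, so a ranking established early in training is unlikely to be overturned later.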

The method also comes with a pragmatic safety net. If a compression step causes an unexpected performance drop, practitioners can revert to a previously saved checkpoint. "It gives people control over how much they're willing to pay in terms of performance, rather than having to define a less-intuitive energy threshold," Chahine explains.
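A rough sketch of how that safety net might sit inside a training loop follows; the helper functions (train_steps, evaluate, compress_states) are hypothetical placeholders for a user's own routines, not CompreSSM's actual API.

```python
import copy

def train_with_compression(model, data, total_steps, warmup_frac=0.1,
                           keep_dims=32, max_drop=0.005):
    # Hypothetical compress-with-rollback loop: warm up at full size,
    # compress once the state ranking has stabilized, and revert to the
    # saved checkpoint if validation accuracy falls more than max_drop.
    warmup_steps = int(warmup_frac * total_steps)
    train_steps(model, data, warmup_steps)          # full-size warm-up
    checkpoint = copy.deepcopy(model)               # pragmatic safety net
    acc_before = evaluate(model, data.val)

    compress_states(model, keep_dims)               # drop low-ranked states
    if acc_before - evaluate(model, data.val) > max_drop:
        model = checkpoint                          # undo the compression

    train_steps(model, data, total_steps - warmup_steps)
    return model
```

The tolerance max_drop plays the role of the user-facing knob Chahine describes: it is stated directly in terms of performance given up, rather than as an abstract energy threshold on the discarded states.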

There are some practical boundaries to the technique. CompreSSM works best on models that exhibit a strong correlation between the internal state dimension and overall performance, a property that varies across tasks and architectures. The method is particularly effective on multi-input, multi-output (MIMO) models, where the relationship between state size and expressivity is strongest. For per-channel, single-input, single-output architectures, the gains are more modest, since those models are less sensitive to state dimension changes in the first place.

The theory applies most cleanly to linear time-invariant systems, although the team has developed extensions for the increasingly popular input-dependent, time-varying architectures. And because the family of state-space models extends to architectures like linear attention, a growing area of interest as an alternative to traditional transformers, the potential scope of application is broad.

Chahine and his collaborators see the work as a stepping stone. The team has already demonstrated an extension to linear time-varying systems like Mamba, and future directions include pushing CompreSSM further into matrix-valued dynamical systems used in linear attention mechanisms, which would bring the technique closer to the transformer architectures that underpin most of today's largest AI systems.

"This had to be the first step, because this is where the theory is neat and the approach can stay principled," Chahine says. "It's the stepping stone to then extend to other architectures that people are using in industry today."

"The work of Chahine and his colleagues provides an intriguing, theoretically grounded perspective on compression for modern state-space models (SSMs)," says Antonio Orvieto, ELLIS Institute Tübingen principal investigator and MPI for Intelligent Systems independent group leader, who wasn't involved in the research. "The method provides evidence that the state dimension of these models can be effectively reduced during training and that a control-theoretic perspective can successfully guide this procedure. The work opens new avenues for future research, and the proposed algorithm has the potential to become a standard approach when pre-training large SSM-based models."

The work, which was accepted as a conference paper at the International Conference on Learning Representations 2026, will be presented later this month. It was supported, in part, by the Max Planck ETH Center for Learning Systems, the Hector Foundation, Boeing, and the U.S. Office of Naval Research.



from MIT News https://ift.tt/aEQoFg7

Wednesday, April 8, 2026

The flawed fundamentals of failing banks

Bank runs are dramatic: Picture Depression-era footage of customers lined up, trying to get their deposits back. Or recall Lehman Brothers emptying out in 2008 or Silicon Valley Bank collapsing in 2023.

But what causes these runs in the first place? One viewpoint is that something of a self-fulfilling prophecy is involved. Panic spreads, and suddenly many customers are seeking their money back, until an otherwise solid institution is run into the ground.

That is not exactly Emil Verner’s position, however. Verner, an MIT economist, has been studying bank failures empirically for years and now has a different perspective. Verner and his collaborators have produced extensive evidence suggesting that when banks fail, it is usually because they are in a fundamentally shaky position. A bank run generally finishes off an already flawed business rather than upending a viable one.

“What we essentially find is that banks that fail are almost always very weak, and are in trouble,” says Verner, who is the Jerome and Dorothy Lemelson Professor of Management and Financial Economics at the MIT Sloan School of Management. “Most banks that have been subject to runs have been pretty insolvent. Runs are more the final spasm that brings down weak banks, rather than the causes of indiscriminate failures.”

This conclusion has plenty of policy relevance for the banking sector and follows a lengthy analysis of historical data. In one forthcoming paper, in the Quarterly Journal of Economics, Verner and two colleagues reviewed U.S. bank data from 1863 to 2024, concluding that “the primary cause of bank failures and banking crises is almost always and everywhere a deterioration of bank fundamentals.” In a 2021 paper in the same journal, Verner and two other colleagues studied banking data from 46 countries covering 1870-2016, and found that declining bank fundamentals usually preceded runs. And currently, Verner is working to make more historical U.S. bank data publicly available to scholars.

Seen in this light, sure, bank runs are damaging, but bank failures likely have more to do with bad portfolios, poor risk management, and minimal assets in reserve, rather than sentiment-driven client behavior.

“From the idea that bank crises are really about sudden runs on bank debt, we’re moving to thinking that runs are one symptom of a crisis that runs deeper,” Verner says. “For most people, we’re saying something reasonable, refining our knowledge, and just shifting the emphasis.”

For his research and teaching, Verner received tenure at MIT last year.

Landing in a “great place”

Verner is a native of Denmark who also lived in the U.S. for several years while growing up. Around the time he was finishing school, the U.S. housing market imploded, taking some financial institutions with it.

“Everything came crashing down,” Verner says. “I got obsessed with understanding it.”

As an undergraduate, he studied economics at the University of Copenhagen. After three years, Verner was unconvinced the discipline had fully explained financial crises. He decided to keep studying economics in graduate school, and was accepted into the PhD program at Princeton University.

Along the way, Verner became a historically minded economist, digging into data and cases from past decades to shed light on larger patterns about crises and bank insolvency.

“I’ve always thought history was extremely fascinating in itself,” Verner says. And while history may not repeat, he notes, it is “a really valuable tool. It helps you think through what could happen, what are similar scenarios, and how agents acted when facing similar constraints and incentives in the past.”

For studying financial crises in particular, he adds, history helps in multiple ways. Crises are rare, so historical cases add data. Changes over time, like more financial regulations and more complex investment tools, provide different settings to examine the same cause-and-effect issues. “History is a useful laboratory to study these questions,” Verner says.

After earning his PhD from Princeton, Verner went on the job market and landed his faculty position at MIT Sloan. Many aspects of Institute life — the classroom experience, the collegiality, the campus — have strongly resonated with him.

“MIT is a great place,” Verner says simply. “Great colleagues, great students.”

Focused on fundamentals

Over the last decade, Verner has published papers on numerous topics in addition to banking crises. As an outgrowth of his doctoral work, for instance, he published innovative papers examining the dampening effect that household debt has on economic growth in many countries. He also co-authored the lead paper in an issue of the American Economic Review last year examining the way German hyperinflation after World War I reallocated wealth to large businesses with substantial debt, leading them to grow faster.

Still, the main focus of Verner’s work right now is on banking crises and bank failures — including their causes. In a 2024 paper looking at private lending in 117 countries since 1940, Verner and economist Karsten Müller showed that financial crises are often preceded by credit booms in what scholars call the “non-tradeable” sector of the economy. That includes industries such as retail or construction, which do not produce easily tradeable goods. Firms in the non-tradeable sector tend to rely more heavily on loans secured by real estate; during real estate booms, such firms use high valuations to borrow more, and they become more vulnerable to crashes — which helps explain why bank portfolios, in turn, can crater as well.

In recent years, in the process of studying these topics, Verner has helped expand the domain of known U.S. historical data in the field. Working with economists Sergio Correia and Stephan Luck, Verner has helped apply large language models to historical newspaper collections, unearthing information about 3,421 runs on individual banks from 1863 to 1934; they are making that data freely available to other scholars.

This topic has important policy implications. If runs are a contagion bringing down worthy banks, then one solution is to provide banks with more liquidity to get through the crisis — something that has indeed been tried in the U.S. However, if bank failures are more based in fundamentals about risk and not keeping enough capital on hand, more systemic policy options about best practices might be logical. At a minimum, substantive new research can help alter the contents of those discussions.

“When banks fail, it’s usually because these banks have taken a lot of risk and have big losses,” Verner says. “It’s rarely unjustified. So that means these types of liquidity interventions alone are not enough to stop a crisis.”

The expansive research Verner has helped conduct includes a number of specific indicators that fundamentals are a big factor in failure. For instance, examining how infrequently banks recover all of their assets shows how shaky their foundations are.

“The recovery rate on assets is informative about how solvent a bank was,” Verner says. “This is where I think we’ve contributed something new.” Some economists in the past have cited particular examples of struggling banks making depositors whole, but those are exceptions, not the rule. “Sometimes people argue this or that bank was actually solvent because depositors ended up getting all their money back, and that might be true of one bank, but on aggregate it’s not the case,” Verner says.

Overall, Verner intends to keep following the facts, digging up more evidence, and seeing where it leads.

“While there is this notion that liquidity problems can arise pretty much out of nowhere, I think we are changing that emphasis by showing that financial crises happen basically because banks become insolvent,” Verner underscores. “And then the bank run is that final dramatic spasm — which slightly shifts how we teach and talk about it, and perhaps think about the policy response.”



from MIT News https://ift.tt/VBXN6su

Desirée Plata appointed associate dean of engineering

Desirée Plata, the School of Engineering Distinguished Climate and Energy Professor in the MIT Department of Civil and Environmental Engineering, has been named associate dean of engineering, effective July 1.

In her new role, Plata will focus on fostering early-stage research initiatives across the school’s faculty and on strengthening entrepreneurial and innovation efforts. She will also support the school’s Technical Leadership and Communication (TLC) Programs, including the Gordon Engineering Leadership Program, the Daniel J. Riccio Graduate Engineering Leadership Program, the School of Engineering Communication Lab, and the Undergraduate Practice Opportunities Program.

Plata will join Associate Dean Hamsa Balakrishnan, who continues to lead faculty searches, fellowships, and outreach programs. Together, the two associate deans will serve on key leadership groups including Engineering Council and the Dean’s Advisory Council to shape the school’s strategic priorities.

“Desirée’s leadership, scholarship, and commitment to excellence have already had a meaningful impact on the MIT community, and I look forward to the perspective and energy she will bring to this role,” says Paula T. Hammond, dean of the School of Engineering and Institute Professor in the Department of Chemical Engineering.

Plata’s research centers on the sustainable design of industrial processes and materials through environmental chemistry, with an emphasis on clean energy technologies. She develops ways to make industrial processes more environmentally sustainable, incorporating environmental objectives into the design phase of processes and materials. Her work spans nanomaterials and carbon-based materials for pollution reduction, as well as advanced methods for environmental cleanup and energy conversion.  Plata directs MIT’s Parsons Laboratory, which conducts interdisciplinary research on natural systems and human adaptation to environmental change.

Plata is a leader on campus and beyond in climate and sustainability initiatives. She serves as director of the MIT Climate and Sustainability Consortium (MCSC), an industry–academia collaboration launched to accelerate solutions for global climate challenges. She founded and directs the MIT Methane Network, a multi-institution effort to cut global methane emissions within this decade. Plata also co-directs the National Institute of Environmental Health Sciences MIT Superfund Research Program, which focuses on strategies to protect communities concerned about hazardous chemicals, pollutants, and other contaminants in their environment.

Beyond academia, Plata has co-founded two climate and energy startups, Nth Cycle and Moxair. Nth Cycle is redefining metal refining and the domestic battery supply chain. Earlier this month, the company signed a $1.1 billion off-take agreement to help establish a secure and circular technology for battery minerals.

Her company Moxair specializes in advanced approaches for low-level methane monitoring and destruction. In 2026, with support from the U.S. Department of Energy and collaboration with MIT, Moxair will build and demonstrate a first-of-a-kind dilute methane oxidation technology to tackle methane emissions using transition metal catalysts.

As an educator, Plata has helped develop programs that enhance research experience for students and postdocs. She played a pivotal role in the founding of the MIT Postdoctoral Fellowship Program for Engineering Excellence, serving on its faculty steering committee, overseeing admissions, and leading both the academic track and entrepreneurship track. She also helped design the MCSC Climate and Sustainability Scholars Program, a yearlong program open to juniors and seniors across MIT.

Plata earned a BS in chemistry from Union College in 2003 and a PhD in the joint MIT-Woods Hole Oceanographic Institution program in oceanography and applied ocean science in 2009. After completing her doctorate, she held faculty positions at Mount Holyoke College, Duke University, and Yale University. While at Yale, she served as associate director of research at the university’s Center for Green Chemistry and Green Engineering. In 2018, Plata joined MIT’s faculty in the Department of Civil and Environmental Engineering.

Her work as a scholar and educator has earned numerous awards and honors. She received MIT’s Harold E. Edgerton Faculty Achievement Award in 2020, recognizing her excellence in research, teaching, and service. She has also been honored with an NSF CAREER Award and the Odebrecht Award for Sustainable Development. Plata is a fellow of the American Chemical Society and was a Young Investigator Sustainability Fellow at Caltech.

Plata is a two-time National Academy of Engineering Frontiers of Engineering Fellow and a two-time National Academy of Sciences Kavli Frontiers of Science Fellow. Her dedication to mentoring was recognized with MIT’s Junior Bose Award for Excellence in Teaching and the Frank Perkins Graduate Advising Award.



from MIT News https://ift.tt/7jCTmnJ

Physicists zero in on the mass of the fundamental W boson particle

When fundamental particles are heavier or lighter than expected, physicists’ understanding of the universe can tip into the unknown. A particle that is just beyond its predicted mass can unravel scientists’ assumptions about the forces that make up all of matter and space. But now, a new precision measurement has reset the balance and confirmed scientists’ theories, at least for one of the universe’s core building blocks.

In a paper appearing today in the journal Nature, an international team including MIT physicists reports a new, ultraprecise measurement of the mass of the W boson.

The W boson is one of two elementary particles that embody the weak force, which is one of the four fundamental forces of nature. The weak force enables certain particles to change identities, such as from protons to neutrons and vice versa. This morphing is what drives radioactive decay, as well as nuclear fusion, which powers the sun.

Now, scientists have determined the mass of the W boson by analyzing more than 1 billion proton-collision events produced by the Large Hadron Collider (LHC) at CERN (the European Organization for Nuclear Research) in Switzerland. The LHC accelerates protons toward each other at close to the speed of light. When they collide, two protons can produce a W boson, among a shower of other particles.

Catching a W boson is nearly impossible, as it decays almost immediately into two types of particles, one of which, a neutrino, is so elusive that it cannot be detected. Scientists are left to measure the other particle, known as a muon, and model how it might add up to the total mass of its parent, the W boson. In the new study, scientists used the Compact Muon Solenoid (CMS) experiment, a particle detector at the LHC that precisely tracks muons and other particles produced in the aftermath of proton collisions.

From billions of proton-proton collisions, the team identified 100 million events that produced a W boson decaying to a muon and a neutrino. For each of these events, they carried out detailed analyses to narrow in on a precise mass measurement. In the end, they determined that the W boson has a mass of 80360.2 ± 9.9 megaelectron volts (MeV). This new mass is in line with predictions of the Standard Model, which is physicists’ best rulebook for describing the fundamental particles and forces of nature.

The precision of the new measurement is on par with a previous measurement made in 2022 by the Collider Detector at Fermilab (CDF). That measurement took physicists by surprise, as it was significantly heavier than what the Standard Model predicted, and therefore raised the possibility of “new physics,” such as particles and forces that have yet to be discovered.

Because the new CMS measurement is just as precise as the CDF result and agrees with the Standard Model along with a number of other experiments, it is more likely that physicists are on solid ground in terms of how they understand the W boson.

“It’s just a huge relief, to be honest,” says Kenneth Long, a lead author of the study, who is a senior postdoc in MIT’s Laboratory for Nuclear Science. “This new measurement is a strong confirmation that we can trust the Standard Model.”

The study is authored by more than 3,000 members of CERN’s CMS Collaboration. The core group who worked on the new measurement includes about 30 scientists from 10 institutions, led by a team at MIT that includes Long; Tianyu Justin Yang PhD ’24; David Walter and Jan Eysermans, who are both MIT postdocs in physics; Guillelmo Gomez-Ceballos, a principal research scientist in the Particle Physics Collaboration; Josh Bendavid, a former research scientist; and Christoph Paus, a professor of physics at MIT and principal investigator with the Particle Physics Collaboration.

Piecing together

The W boson was first discovered in 1983 and is predicted to be the fourth heaviest among all the fundamental particles. Multiple experiments have aimed to narrow in on the particle’s mass, with varying degrees of precision. For the most part, these experiments have produced measurements that agree with the Standard Model’s predictions. The 2022 measurement by Fermilab’s CDF experiment is the one significant outlier. It also happens to be the most precise experiment to date.

“If you take the CDF measurement at face value, you would say there must be physics beyond the Standard Model,” says co-author Christoph Paus. “And of course that was the big mystery.”

Paus and his colleagues sought to either support or refute the CDF’s findings by making an independent measurement, with an experiment that matches CDF’s precision. Their new W boson mass measurement is a product of 10 years’ worth of work, both to analyze actual particle collision events and to simulate all the scenarios that could produce those events.

For their new study, the physicists analyzed proton collision events that were produced at the LHC in 2016. When it is running, the particle collider generates proton collisions at a furious rate of about one every 25 nanoseconds. The team analyzed a portion of the LHC’s 2016 dataset that encompasses billions of proton-proton collisions. Among these, they identified about 100 million events that produced a very short-lived W boson.

“A particle like the W boson exists for a teeny tiny moment — something like 10⁻²⁴ seconds — before decaying to two particles, one of which is a neutrino that can’t be measured directly,” Long explains. “That’s the tricky part: You have to measure the other particle — a muon — really well, and be able to piece things together with only one piece of the puzzle.”

Gathering momentum

When a muon is produced from the decay of a W boson, it carries half of the W boson’s mass, which is converted into momentum that carries the muon away from the original collision. Due to the strong magnetic field inside the CMS detector, the electrically charged muon follows a path whose curvature is a function of its momentum. Scientists’ challenge is to track the muon’s path and every interaction it may have with other particles and its surroundings, in order to estimate its initial momentum.
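The relationship the detector exploits is the textbook one for a charged particle bending in a magnetic field: for a singly charged particle, the transverse momentum is proportional to the field strength and the radius of curvature of the track.

```latex
% Transverse momentum of a singly charged particle in a solenoid,
% with p_T in GeV/c, the field B in tesla, and the curvature radius r in meters.
\[
  p_T \;\approx\; 0.3 \, B \, r
\]
```

With the 3.8-tesla field of the CMS solenoid, a tightly curved track means low momentum and a nearly straight one means high momentum, which is how the detector turns a muon's trajectory into the momentum measurement the analysis needs.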

The muon’s momentum is also influenced by the momentum of the W boson before it decays. Decoding the impact of the W boson’s motion from the effects of its mass presented a major challenge. To infer the W boson mass, the team first carried out simulations of every scenario they could think of that a muon might experience after a proton-proton collision in the chaotic environment of the particle collider. In all, the team produced 4 billion such simulated events described by state-of-the-art theoretical calculations. The simulations encoded diverse hypotheses about how the muon momentum is affected by the physical features of the CMS detector, as well as uncertainties in the predictions that govern W boson production in LHC collisions.

The researchers compared their simulations with data from the 2016 LHC run. For every proton-proton collision event that occurs in the collider, scientists can use the CMS detector at CERN’s LHC to precisely measure the energy and momentum of resulting particles such as muons. The team analyzed CMS measurements of muons that were produced from over 100 million W boson events. They then overlaid this data onto their simulations of the muon momentum, which they then converted to a new mass for the W boson.
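Conceptually, this final step is a template fit: the observed muon-momentum spectrum is compared against simulated spectra generated under different hypothesized W masses, and the hypothesis that matches best becomes the measurement. The toy version below is only an illustration of that idea, with hypothetical histograms and a plain chi-square comparison; it is not the CMS analysis code.

```python
import numpy as np

def best_fit_mass(data_counts, templates):
    """Pick the W-mass hypothesis whose simulated muon-pT histogram best
    matches the observed one, using a simple chi-square comparison.

    data_counts : observed histogram of muon transverse momentum (counts per bin)
    templates   : dict mapping hypothesized W mass (MeV) -> expected histogram,
                  each normalized to the same total number of events
    """
    chi2 = {}
    for mass, expected in templates.items():
        safe = np.clip(expected, 1e-9, None)       # guard against empty bins
        chi2[mass] = np.sum((data_counts - expected) ** 2 / safe)
    return min(chi2, key=chi2.get), chi2
```

The real analysis replaces this bare comparison with a likelihood fit that simultaneously accounts for a large number of detector and theory uncertainties, but the underlying logic — choose the mass whose simulation looks most like the data — is the same.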

That mass — 80360.2 ± 9.9 megaelectron volts — is significantly lighter than the CDF experiment’s measurement. What’s more, the new estimate is within the range of what the Standard Model predicts for the W boson’s mass, bolstering physicists’ confidence in the Standard Model and its descriptions of the major particles and forces of nature.

“With the combination of our really precise result and other experiments that line up with the Standard Model’s predictions, I think that most people would place their bets on the Standard Model,” Long says. “Though I do think people should continue doing this measurement. We are not done.”

“We want to add more data, make our analysis techniques more precise, and basically squeeze the lemon a little harder. There is always some juice left,” Paus adds. “With a better look, then we can say for certain whether we truly understand this one fundamental building block.”

This work was supported, in part, by multiple funding agencies, including the U.S. Department of Energy, and the SubMIT computing facility, sponsored by the MIT Department of Physics. 



from MIT News https://ift.tt/JnmoKeN

Tuesday, April 7, 2026

Study reveals “two-factor authentication” system that controls microRNA destruction

Cells rely on tiny molecules called microRNAs to tune which genes are active and when. Cells must carefully control the lifespan of microRNAs to prevent widespread disruption to gene regulation.

A new study led by researchers at MIT’s Whitehead Institute for Biomedical Research and Germany’s Max Planck Institute of Biochemistry reveals how cells selectively eliminate certain microRNAs through an unexpectedly intricate molecular recognition system. The open-access work, published on March 18 in Nature, shows that the process requires two separate RNA signals, similar to how many digital systems require two forms of identity verification before granting access.

The findings explain how cells use this “two-factor authentication” system to ensure that only intended microRNAs are destroyed, leaving the rest of the gene regulation machinery in operation.

MicroRNAs are short strands of RNA that help control gene expression. Working together with a protein called Argonaute, they bind to specific messenger RNAs — the molecules that carry genetic instructions from DNA to the cell’s protein-making machinery — and trigger their destruction. In this way, microRNAs can reduce the production of specific proteins.

While scientists recognized that microRNAs could be destroyed through a pathway known as target-directed microRNA degradation, or TDMD, the details of how cells recognized which microRNAs to eliminate remained unclear.

“We knew there was a pathway that could target microRNAs for degradation, but the biochemical mechanism behind it wasn’t understood,” says MIT Professor David Bartel, a Whitehead Institute member and co-senior author of the study.

Earlier work from Bartel’s lab and others had identified a key player in this pathway: the ZSWIM8 E3 ubiquitin ligase. E3 ubiquitin ligases are involved in the cell’s recycling system and attach a small molecular tag called ubiquitin to other proteins, marking them for destruction.

The researchers first showed that the ZSWIM8 E3 ligase specifically binds and tags Argonaute, the protein that holds microRNAs and helps regulate genes. The researchers’ next challenge was to understand how this machinery recognized only Argonaute complexes carrying specific microRNAs that should be degraded.

The answer turned out to be surprisingly sophisticated.

Using a combination of biochemistry and cryo-electron microscopy — an imaging technique that reveals molecular structures at near-atomic resolution — the researchers discovered that the degradation system relies on a dual-RNA recognition process. First, Argonaute must carry a specific microRNA. Second, another RNA molecule called a “trigger RNA” must bind to that microRNA in a particular way.

The degradation machinery activates only when both signals are present.

This dual requirement ensures exquisite specificity. Each cell contains over a hundred thousand Argonaute–microRNA complexes regulating many genes, and destroying them indiscriminately would disrupt essential biological processes.

“The vast majority of Argonaute molecules in the cell are doing useful work regulating gene expression,” says Bartel, who is a professor of biology at MIT and also a Howard Hughes Medical Institute investigator. “You only want to degrade the ones carrying a particular microRNA and bound to the right trigger RNA. Without that specificity, the cell would lose its microRNAs and the essential regulation that they provide.”

The structural images revealed complex molecular interactions. The ZSWIM8 ligase detects multiple structural changes that occur when the two RNAs bind together within the Argonaute protein.

“When we saw the structure, everything clicked,” says Elena Slobodyanyuk, a graduate student in Bartel’s lab and co-first author of the study. “You could see how the pairing of the trigger RNA with the microRNA reshapes the Argonaute complex in a way that the ligase can recognize.”

Beyond explaining how TDMD works, the findings may impact how scientists think about the regulation of RNA molecules more broadly.

“A lot of E3 ligases recognize their targets through simpler signals,” says Jakob Farnung, co-first author and researcher in the Department of Molecular Machines and Signaling at the Max Planck Institute of Biochemistry. “It was like opening a treasure chest where every detail revealed something new and mesmerizing.”

MicroRNAs typically persist in cells for much longer time periods than most messenger RNAs, but some degrade far more quickly, and the TDMD pathway appears to account for many of these unusually short-lived microRNAs.

The researchers are now investigating whether other RNAs can trigger similar degradation pathways and whether additional microRNAs are regulated through variations of the mechanism shown in this study.

“This opens up a whole new way of thinking about how RNA molecules can control protein degradation,” says Brenda Schulman, study co-senior author and director of the Department of Molecular Machines and Signaling at the Max Planck Institute of Biochemistry. “Here, the recognition was far more elaborate than expected. There’s likely much more left to discover.”

Uncovering the details of this intricate regulatory system required interdisciplinary collaboration, combining expertise in RNA biochemistry, structural biology, and ubiquitin enzymology to solve this long-standing molecular puzzle.

“This was a project that required the strengths of two labs working at the forefront of their fields,” says Schulman, who is also an alum of Whitehead Institute. “It was an incredible team effort.”



from MIT News https://ift.tt/W0rUw8y

How bacteria suppress immune defenses in stubborn wound infections

Chronic wound infections are notoriously difficult to manage because some bacteria can actively interfere with the body’s immune defenses. In wounds, Enterococcus faecalis (E. faecalis) is particularly resilient — it can survive inside tissues, alter the wound environment, and weaken immune signals at the injury site. This disruption creates conditions where other microbes can easily establish themselves, resulting in multi-species infections that are complex and slow to resolve. Such persistent wounds, including diabetic foot ulcers and post-surgical infections, place a heavy burden on patients and health care systems, and sometimes lead to serious complications such as amputations.

Now, researchers have discovered how E. faecalis releases lactic acid to acidify its surroundings and suppresses the immune-cell signal needed to start a proper response to infection. By silencing the body’s defenses, the bacterium can cause persistent and hard-to-treat wound infections. This explains why some wounds struggle to heal, even with treatment, and why infections involving multiple bacteria are especially difficult to eradicate.

The work was led by researchers from the Singapore-MIT Alliance for Research and Technology (SMART) Antimicrobial Resistance (AMR) interdisciplinary research group, alongside collaborators from the Singapore Centre for Environmental Life Sciences Engineering at Nanyang Technological University (NTU Singapore), MIT, and the University of Geneva in Switzerland.

In a paper titled “Enterococcus faecalis-derived lactic acid suppresses macrophage activation to facilitate persistent and polymicrobial wound infections,” recently published in Cell Host & Microbe, the researchers documented how E. faecalis releases large amounts of lactic acid during infection. This acidity suppresses the activation of macrophages — immune cells that normally help to clear infections — and interferes with several important internal processes that help the cell recognize and respond to infection. As a result, the mechanisms that cells rely on to send out “danger” signals are suppressed, leaving the macrophages unable to fully activate.

Researchers found that E. faecalis uses a two‑step mechanism to achieve this. Lactic acid enters the macrophages through a lactate transporter called MCT‑1 and also binds to a lactate-sensing receptor, GPR81, on the cell surface. By engaging both pathways, the bacterium effectively shuts down downstream immune signalling and blocks the macrophage’s inflammatory response, allowing E. faecalis to persist in the wound much longer than it should. Specifically, the lactic acid prevents a key immune alarm signal, known as NF-κB, from switching on inside these cells.

This was proven in a mouse wound model, where strains of E. faecalis that could not make lactic acid were cleared much more quickly, and the wounds also showed stronger immune activity. In wounds infected with both E. faecalis and Escherichia coli, the weakened immune response caused by lactic acid also allowed E. coli to grow better. This explains why wound infections often involve multiple species of bacteria and become harder to treat over time, particularly since E. faecalis is among the most common bacteria found in chronic wounds.

“Chronic wound infections often fail not because antibiotics are powerless, but because the immune system has effectively been ‘switched off’ at the infection site. We found that E. faecalis floods the wound with lactic acid, lowering pH and muting the NF‑κB alarm inside macrophages — the very cells that should be calling for help. By pinpointing how acidity rewires immune signalling, we now have clear targets to reactivate the immune response,” says first author Ronni da Silva, research scientist at SMART AMR, former postdoc in the lab of co-author and MIT professor of biology Jianzhu Chen, and SCELSE-NTU visiting researcher.

“This discovery strengthens our understanding of host-pathogen interactions and offers new directions for developing treatments and wound care that target the bacteria’s immunosuppressive strategies. By revealing how the immune response is shut down, this research may help improve infection management and support better recovery outcomes for patients, especially those with chronic wounds or weakened immunity,” says Kimberly Kline, principal investigator at SMART AMR, SCELSE-NTU visiting academic, professor at the University of Geneva, and corresponding author of the paper.

By identifying lactic‑acid‑driven immune suppression as a root cause of persistent wound infections, this work highlights the potential of treatment approaches that support the immune system, rather than rely on antibiotics alone. This could lead to therapies that help wounds heal more reliably and reduce the risk of complications. Potential directions include reducing acidity in the wound or blocking the signals that lactic acid uses to switch off immune cells.

Building on their study, the researchers plan to explore validation in additional pathogens and human wound samples, followed by assessments in advanced preclinical models ahead of any potential clinical trials.

The research was partially supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise program.



from MIT News https://ift.tt/a93hKvU