Friday, April 17, 2026

Why bother with plausible deniability?

Picture this scenario in a business: An employee, Brad, disclosed some information that wound up in the hands of a competitor. He may not have meant to, but he did, and a few people at the firm know this. So, at the next company meeting, another employee, Linda, looks pointedly at Brad and says, “I know that no one would ever dream of leaking information, intentionally or otherwise, from our discussions.”

Linda means the opposite of what she says, of course. She is letting people know that Brad is to blame. However, while Linda is making her message public, she also wants what we often call “plausible deniability” for her statement. If anyone asks later if she was insinuating anything about Brad, she can claim she was just making a general comment about the firm.

From the boardroom to the courtroom, the talk show, and beyond, people frequently seek plausible deniability for their statements. It seems to work, too. Indeed, to have plausible deniability, the denial need not be plausible.

“People can say, ‘That’s not what I meant,’ and completely get away with it, even though it’s totally obvious they’re lying,” says MIT philosopher Sam Berstler. “They wouldn’t be getting away with it in the same respect by putting the content in explicit words.”

She adds: “This should be very puzzling to us, because in both cases the intent is maximally obvious.”

So why does plausible deniability work, and work like this? And what does it tell us about how we interact? Berstler, who studies language and communication, has published a new paper on plausible deniability, examining these issues. It is part of a larger body of work Berstler is generating, focused on everyday interactions involving deception.

To understand plausible deniability, Berstler thinks we should recognize that our conversations cannot be understood simply by analyzing the words we use. Our interactions always take place in social contexts, often have a performative aspect, and occasionally intersect with “non-acknowledgement norms,” the practice of keeping quiet about what we all know. Plausible deniability is bound up with social practices that incentivize us to not be fully transparent.

“A lot of indirect speech is designed, as it were, to facilitate this kind of deniability,” Berstler says.

The paper, “Non-Epistemic Deniability,” is published in the journal Mind. Berstler, the Laurance S. Rockefeller Career Development Chair and assistant professor of philosophy at MIT, is the sole author.

Managing a personal “Cold War”

In Berstler’s view, there are multiple ways to create plausible deniability. One is through the practice of open secrets, the subject of one of her previous papers. An open secret is widely known information that is never acknowledged, for reasons of power or in-group identification, among other things. Indeed, no one even acknowledges that they are not acknowledging the open secret.

Examining open secrets led Berstler directly to her analysis of plausible deniability. However, the new paper focuses more on another way of creating plausible deniability, which she calls “two-tracking norms.” Two-tracking is when a group divides its communications into two parts: One track consists of official, limited, courteous interaction, and the second track consists more of informal, resentful, uncooperative interactions. Linda, in our example, is engaging in two-tracking.

But why do we two-track at all? Why not just be fully transparent? Well, in an office scenario, if Linda is mad that Brad divulged some company secrets, calling out Brad directly might lead to recriminations and conflict beyond what Linda is willing to tolerate for the sake of criticizing Brad on the record.

“It’s like a Cold War situation where we each have an interest in not letting the conflict go to a state where we’re firing warheads at each other, but we can’t just purely manage relations around the negotiating table because we’re adversaries,” Berstler says. “We’re going to aggress against each other, but in a limited way. In a two-track conversation, communicating in the second track is like fighting a proxy battle, but we’re also providing evidence to each other that we’re only going to engage in a proxy battle.”

In this way, Linda takes Brad to task and some people pick up on it, but Brad is not explicitly publicly shamed. And though he might be unhappy, he is less likely to wreck all company norms in an attempt to retaliate. The firm more or less rolls on as usual.

Waiting for Goffman

Where Berstler differs in part from other philosophers is in her emphasis on the extent to which social practices are integral to our ways of deploying deniability. Our interactions are not just limited to rhetoric, but have additional layers.

“What we mean can often be different from what we say, or enhanced from what we say,” Berstler says. “Sometimes we figure out what others mean by relying on what they say in literal language. But sometimes we’re relying on other things, like the context.”

So, back at the firm, the colleagues of Linda and Brad might have some knowledge of a confidentiality breach, or they might know that Linda does not usually speak up at meetings, or they might read things into her tone of voice and the way she appeared to look at Brad. There is more to be gleaned than her literal words.

In this kind of analysis, Berstler finds illumination in the work of the midcentury sociologist Erving Goffman, who studied in minute detail the performative parts of our everyday interactions and speech. Goffman, as Berstler notes in the paper, proposed that we have a ritualized, social self (or “face”) and that normal, everyday behavior generally allows us, and others, to keep this face intact.

Relatedly, Goffman and some of his intellectual followers concluded that habits such as two-tracking are very common in everyday life; the price we pay for saving face is a bit less transparency, and a bit more secrecy and deniability.

“What I’m suggesting is we have these other established practices like two-tracking and open secrecy, where the deniability is just a byproduct,” Berstler says.

What’s the solution?

By bringing sociological ideas into her work, Berstler is moving beyond the normal philosophical discussion of the subject. On the other hand, she is not directly disputing core ideas in linguistics or the philosophy of language; she is just suggesting we add another layer to our analysis of communication and meaning.

Digging into issues of plausible deniability also raises the question of what to do about it. There may be something pernicious in the practice, but calling out plausible deniability threatens to dismantle our social guardrails and break the “Cold War” norms used to help people co-exist.

Berstler, though, has another suggestion: Instead of calling out such subterfuge, we can become verbally and performatively skilled enough to counteract it.

“I think the actual answer is becoming rhetorically clever,” Berstler says. “It’s being the person who uses indirect speech to respond strategically, without violating these norms. That is possible. It also means you have agency. You could become very good at verbal sparring.”

Besides, Berstler says, “Often that can be more powerful than just calling them out, and demonstrates your own verbal fluency. I think we admire it when we see it. Conversational skill is an important component of being morally good, in these cases by reprimanding someone in a way that’s not going to be counterproductive.”

She adds: “People who buy into the rhetoric of transparency can be setting back their own interests. Maybe speaking transparently is morally virtuous in some respects, but given the reality of our speech practices, transparency is not necessarily going to be the most effective way of handling things.”



From MIT News https://ift.tt/iq0JoCM

Jacob Andreas and Brett McGuire named Edgerton Award winners

MIT Associate Professor Jacob Andreas of the Department of Electrical Engineering and Computer Science [EECS] and MIT Associate Professor Brett McGuire of the Department of Chemistry have been selected as the winners of the 2026 Harold E. Edgerton Faculty Achievement Award. Established in 1982 as a permanent tribute to Institute Professor Emeritus Harold E. Edgerton’s great and enduring support for younger faculty members, this award is given annually in recognition of exceptional distinction in teaching, research, and service.

“The Department of Chemistry is extremely delighted to see Brett recognized for science that has changed how we think about carbon in space,” says Class of 1942 Professor of Chemistry and Department Head Matthew D. Shoulders. “Brett’s lab combines laboratory spectroscopy, radio astronomy, and sophisticated signal-analysis methods to pull definitive molecular fingerprints out of extraordinarily faint data. His discovery of polycyclic aromatic hydrocarbons in the cold interstellar medium has opened a powerful new window on astrochemistry. Moreover, Brett is inventing the creative and unique tools that make discoveries like this possible.”

“Jacob Andreas represents the very best of MIT EECS,” says Asu Ozdaglar, EECS department head. “He is an innovative researcher whose work combines computational and linguistically informed approaches to build foundations of language learning. He is an extraordinary educator who has brought these forefront ideas into our core classes in natural language processing and machine learning. His ability to bridge foundational theory with real-world impact, while also advancing the social and ethical dimensions of computing, makes him truly deserving of the Edgerton Faculty Achievement Award.”

Andreas joined the MIT faculty in July 2019, and is affiliated with the Computer Science and Artificial Intelligence Laboratory. His work is in natural language processing (NLP), and more broadly in AI. He aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Among other honors, Andreas has received Samsung’s AI Researcher of the Year award, MIT’s Kolokotrones and Junior Bose teaching awards, a 2024 Sloan Research Fellowship, and paper awards at the North American Chapter of the Association for Computational Linguistics, the International Conference on Machine Learning, and the Association for Computational Linguistics.

Andreas received his BS from Columbia University, his MPhil from Cambridge University (where he studied as a Churchill scholar), and his PhD in natural language processing from the University of California at Berkeley. His work in natural language processing has taken on thorny problems in the capability gap between humans and computers. “The defining feature of human language use is our capacity for compositional generalization,” explains Antonio Torralba, Delta Electronics Professor and faculty head of Artificial Intelligence and Decision-Making in the Department of EECS. “Many of the core challenges in natural language processing are addressed by simply training larger and larger neural models, but this kind of compositional generalization remains a persistent difficulty, and without the ability to generalize compositionally, the deep learning toolkit will never be robust enough for the most challenging real-world NLP tasks. Jacob’s work on compositional modeling draws new connections between NLP and work in computer vision and physics aimed at modeling systems governed by symmetries and other algebraic structures and, using them, they have been able to build NLP models exhibiting a number of new, human-like language acquisition behaviors, including one-shot word learning, learning via mutual exclusivity constraints, and learning of grammatical rules in extremely low-resource settings.”

Within EECS, Andreas has developed multiple advanced courses in natural language processing, as well as new exercises designed to get students to grapple with important social and ethical considerations in machine learning deployment. “Jacob has taken a leading role in completely modernizing and extending our course offerings in natural language processing,” says award nominator Leslie Pack Kaelbling, Panasonic Professor in the Department of EECS. “He has led the development of a modern two-course sequence, which is a cornerstone of the new AI+D [artificial intelligence and decision-making] major, routinely enrolling several hundred students each semester. His command of the area is broad and deep, and his classes integrate classical structural understanding of language with the most modern learning-based approaches. He has put MIT EECS on the worldwide map as a place to study natural language at every level.”

Brett McGuire joined the MIT faculty in 2020 and was promoted to associate professor in 2025. His research operates at the intersection of physical chemistry, molecular spectroscopy, and observational astrophysics, where he seeks to uncover how the chemical building blocks of life evolve alongside and help shape the birth of stars and planets. A former Jansky Fellow and then Hubble Postdoctoral Fellow at the National Radio Astronomy Observatory, McGuire has a BS in chemistry from the University of Illinois and a PhD in physical chemistry from Caltech. His honors include a 2026 Sloan Fellowship, the Beckman Young Investigator Award, the Helen B. Warner Prize for Astronomy, and the MIT Award for Teaching with Digital Technology.

The faculty who nominated McGuire for this award praised his extraordinary public outreach, his immediate willingness to take on teaching class 5.111 (Principles of Chemical Science), a General Institute Requirement (GIR) course comprising 150–500 students, and his service to both the MIT and astrochemical communities.

“Brett is at the very top of astrochemical scientists in his age group due to his discovery of fused carbon ring compounds in the cold region of the ISM [interstellar medium], an observation that provides a route for carbon incorporation in planets,” says Sylvia Ceyer, the John C. Sheehan Professor of Chemistry, in her nomination statement. “His extensive involvement in service-oriented activities within the astrochemical/physical community is highly unusual for a junior scientist, and is testament to the value that the astronomical community places in his wisdom and judgement. His phenomenal organizational skills have made his contributions to graduate admission protocols and seminar administration at MIT the envy of the department. And most importantly, Brett is a superb teacher, who cares deeply about students’ understanding and success, not only in his course, but in their future endeavors.”

“As an assistant professor, Brett volunteered to teach 5.111, a large GIR course with 150–500 students, and has received some of the best teaching evaluations among all faculty who have led the subject,” says Mei Hong, the David A. Leighty Professor of Chemistry. “He has a natural talent in explaining abstract physical chemistry concepts in an engaging manner. His slides, which he prepared from scratch instead of modifying from previous years’ material from other professors, are clear, and … the combination of lucid explanation and humor has generated great enthusiasm and interest in chemistry among students.”

Subject evaluations from McGuire’s courses praised his humor, the clarity of his explanations, and his ability to transform a lecture into a “science show.” “I haven’t felt this sort of desire for the depth of understanding in a subject beyond just a straight grade [in some time],” says one student. “Brett definitely stimulated that love of learning for me.”

“Brett is an outstanding faculty member who is dedicated to fostering student learning and success,” says Jennifer Weisman, assistant director of academic programs in chemistry. “He is thoughtful, caring, and goes above and beyond to help his colleagues, students, and staff.”

“I’m thrilled to be selected for the Edgerton Award this year,” says McGuire. “The award is nominally for teaching, research, and service; MIT and the chemistry department in particular have been an incredible place to learn and grow in all these areas. I’m incredibly grateful for the mentorship, enthusiasm, and support I have received from my colleagues, from my students both in the lab and in the classroom, and from the MIT community during my time here. I look forward to many more years of exciting discovery together with this one-of-a-kind community.”



From MIT News https://ift.tt/z6O7mkW

Thursday, April 16, 2026

Bringing AI-driven protein-design tools to biologists everywhere

Artificial intelligence is already proving it can accelerate drug development and improve our understanding of disease. But to turn AI into novel treatments we need to get the latest, most powerful models into the hands of scientists.

The problem is that most scientists aren’t machine-learning experts. Now the company OpenProtein.AI is helping scientists stay on the cutting edge of AI with a no-code platform that gives them access to powerful foundation models and a suite of tools for designing proteins, predicting protein structure and function, and training models.

The company, founded by Tristan Bepler PhD ’20 and former MIT associate professor Tim Lu PhD ’07, is already equipping researchers in pharmaceutical and biotech companies of all sizes with its tools, including internally developed foundation models for protein engineering. OpenProtein.AI also offers its platform to scientists in academia for free.

“It’s a really exciting time right now because these models can not only make protein engineering more efficient — which shortens development cycles for therapeutics and industrial uses — they can also enhance our ability to design new proteins with specific traits,” Bepler says. “We’re also thinking about applying these approaches to non-protein modalities. The big picture is we’re creating a language for describing biological systems.”

Advancing biology with AI

Bepler came to MIT in 2014 as part of the Computational and Systems Biology PhD Program, studying under Bonnie Berger, MIT’s Simons Professor of Applied Mathematics. It was there that he realized how little we understand about the molecules that make up the building blocks of biology.

“We hadn’t characterized biomolecules and proteins well enough to create good predictive models of what, say, a whole genome circuit will do, or how a protein interaction network will behave,” Bepler recalls. “It got me interested in understanding proteins at a more fine-grained level.”

Bepler began exploring ways to predict the chains of amino acids that make up proteins by analyzing evolutionary data. This was before Google DeepMind released AlphaFold, a powerful prediction model for protein structure. The work led to one of the first generative AI models for understanding and designing proteins — what the team calls a protein language model.

“I was really excited about the classical framework of proteins and the relationships between their sequence, structure, and function. We don’t understand those links well,” Bepler says. “So how could we use these foundation models to skip the ‘structure’ component and go straight from sequence to function?”

After earning his PhD in 2020, Bepler entered Lu’s lab in MIT’s Department of Biological Engineering as a postdoc.

“This was around the time when the idea of integrating AI with biology was starting to pick up,” Lu recalls. “Tristan helped us build better computational models for biologic design. We also realized there’s a disconnect between the most cutting-edge tools available and the biologists, who would love to use these things but don’t know how to code. OpenProtein came from the idea of broadening access to these tools.”

Bepler had worked at the forefront of AI as part of his PhD. He knew the technology could help scientists accelerate their work.

“We started with the idea to build a general-purpose platform for doing machine learning-in-the-loop protein engineering,” Bepler says. “We wanted to build something that was user friendly because machine-learning ideas are kind of esoteric. They require implementation, GPUs, fine-tuning, designing libraries of sequences. Especially at that time, it was a lot for biologists to learn.”

OpenProtein’s platform, in contrast, features an intuitive web interface for biologists to upload data and conduct protein engineering work with machine learning. It features a range of open-source models, including PoET, OpenProtein’s flagship protein language model.

PoET, short for Protein Evolutionary Transformer, was trained on protein groups to generate sets of related proteins. Bepler and his collaborators showed it could generalize about evolutionary constraints on proteins and incorporate new information on protein sequences without retraining, allowing other researchers to add experimental data to improve the model.

“Researchers can use their own data to train models and optimize protein sequences, and then they can use our other tools to analyze those proteins,” Bepler says. “People are generating libraries of protein sequences in silico [on computers] and then running them through predictive models to get validation and structural predictors. It’s basically a no-code front-end, but we also have APIs for people who want to access it with code.”

The models help researchers design proteins faster, then decide which ones are promising enough for further lab testing. Researchers can also input proteins of interest, and the models can generate new ones with similar properties.

Since its founding, OpenProtein’s team has continued to add tools to its platform for researchers regardless of their lab size or resources.

“We’ve tried really hard to make the platform an open-ended toolbox,” Bepler says. “It has specific workflows, but it’s not tied specifically to one protein function or class of proteins. One of the great things about these models is they are very good at understanding proteins broadly. They learn about the whole space of possible proteins.”

Enabling the next generation of therapies

The large pharmaceutical company Boehringer Ingelheim began using OpenProtein’s platform in early 2025. Recently, the companies announced an expanded collaboration that will see OpenProtein’s platform and models embedded into Boehringer Ingelheim’s work as it engineers proteins to treat diseases like cancer and autoimmune or inflammatory conditions.

Last year, OpenProtein also released a new version of its protein language model, PoET-2, that outperforms much larger models while using a small fraction of the computing resources and experimental data.

“We really want to solve the question of how we describe proteins,” Bepler says. “What’s the meaningful, domain-specific language of protein constraints we use as we generate them? How can we bring in more evolutionary constraints? How can we describe an enzymatic reaction a protein carries out such that a model can generate sequences to do that reaction?”

Moving forward, the founders are hoping to make models that factor in the changing, interconnected nature of protein function.

“The area I am excited about is going beyond protein binding events to use these models to predict and design dynamic features, where the protein has to engage two, three, or four biological mechanisms at the same time, or change its function after binding,” says Lu, who currently serves in an advisory role for the company.

As progress in AI races forward, OpenProtein continues to see its mission as giving scientists the best tools to develop new treatments faster.

“As work gets more complex, with approaches incorporating things like protein logic and dynamic therapies, the existing experimental toolsets become limiting,” Lu says. “It’s really important to create open ecosystems around AI and biology. There’s a risk that AI resources could get so concentrated that the average researcher can’t use them. Open access is super important for the scientific field to make progress.”



From MIT News https://ift.tt/4gzbMWC

With navigating nematodes, scientists map out how brains implement behaviors

Animal behavior reflects a complex interplay between an animal’s brain and its sensory surroundings. Only rarely have scientists been able to discern how actions emerge from this interaction. A new open-access study in Nature Neuroscience by researchers in The Picower Institute for Learning and Memory at MIT offers one example by revealing how circuits of neurons within C. elegans nematode worms respond to odors and generate movement as they pursue smells they like and evade ones they don’t.

“Across the animal kingdom, there are just so many remarkable behaviors,” says study senior author Steven Flavell, associate professor in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences and an investigator of the Howard Hughes Medical Institute. “With modern neuroscience tools, we are finally gaining the ability to map their mechanistic underpinnings.”

By the end of the study, which former graduate student Talya Kramer PhD ’25 led as her doctoral thesis research, the team was able to show exactly which neurons in the worm’s brain did which of the jobs needed to sense where smells were coming from, plan turns toward or away from them, shift to reverse (like old-fashioned radio-controlled cars, C. elegans worms turn in reverse), execute the turns, and then go back to moving forward. Not only did the study reveal the sequence and each neuron’s role in it, but it also demonstrated that worms are more skillful and intentional in these actions than perhaps they’ve received credit for. And finally, the study demonstrated that it’s all coordinated by the neuromodulatory chemical tyramine.

“One thing that really excited us about this study is that we were able to see what a sensorimotor arc looks like at the scale of a whole nervous system: all the bits and pieces, from responses to the sensory cue until the behavioral response is implemented,” Flavell says.

Seeing the sequence

To do the research, Kramer put worms in dishes with spots of odors they’d either want to navigate toward or slither away from. With the lab’s custom microscopes and software, she and her co-authors could track how the worms navigated and all the electrical activity of more than 100 neurons in their brains during those behaviors (the worms only have 302 neurons total).

The surveillance enabled Kramer, Flavell, and their colleagues to observe that the worms weren’t just ambling randomly until they happened to get where they’d want to be. Instead, the worms would execute turns with advantageous timing and at well-chosen angles. The worms seemed to know what they were doing as they navigated along the gradients of the odors.

Inside their heads, patterns of electrical activity among a cohort of 10 neurons (indicated by flashing green light tied to the flux of calcium ions in the cells) revealed the sequence of neural activation that enabled the worms to execute these sensible sensory-guided motions: forward, then into reverse, then into the turn, and then back to forward. Particular neurons guided each of these steps, including detecting the odors, planning the turn, switching into reverse, and then executing the turns.

A couple of neurons stood out as key gears in the sequence. A neuron called SAA proved pivotal for integrating odor detection with planning movement, as its activity predicted the direction of the eventual turn. Several neurons were flexible enough to show different activity patterns depending on factors such as where the odors were and whether the worm was moving forward or in reverse.

And if the neurons are indeed turning and shifting gears, then the neuromodulator tyramine (the worm analog of norepinephrine) is the signal essential to switch their gears. After the worms started moving in reverse, tyramine from the neuron RIM enabled other neurons in the sequence to change their activity appropriately to execute the turns. In several experiments the scientists knocked out RIM tyramine and saw that the navigation behaviors and the sequence of neural activity largely fell apart.

“The neuromodulator tyramine plays a central role in organizing these sequential brain activity patterns,” Flavell says.

In addition to Flavell and Kramer, the paper’s other authors are Flossie Wan, Sara Pugliese, Adam Atanas, Sreeparna Pradhan, Alex Hiser, Lillie Godinez, Jinyue Luo, Eric Bueno, and Thomas Felt.

A MathWorks Science Fellowship, the National Institutes of Health, the National Science Foundation, The McKnight Foundation, The Alfred P. Sloan Foundation, the Freedom Together Foundation, and HHMI provided funding to support the work.



From MIT News https://ift.tt/jHNJYRF

Wednesday, April 15, 2026

Waves hit different on other planets

On a calm day, a light breeze might barely ripple the surface of a lake on Earth. But on Saturn’s largest moon Titan, a similar mild wind would kick up 10-foot-tall waves.

This otherworldly behavior is one prediction from a new wave model developed by scientists at MIT. The model is the first to capture the full dynamics of waves and what it takes to whip them up under different planetary conditions.

In a study published in the Journal of Geophysical Research: Planets, the MIT team introduces the model, which they’ve aptly coined “PlanetWaves.” They apply the model to predict how waves behave on planetary bodies that might host liquid lakes and oceans, including Titan, ancient Mars, and three planets beyond the solar system.

The model predicts that a gentle wind would be enough to stir up huge waves on Titan, where lakes are filled with light liquid hydrocarbons. In contrast, it would take hurricane-force winds to barely move the surface of a lake on the exoplanet 55 Cancri e, which is thought to be a lava world covered in hot, dense liquid rock.

“On Earth, we get accustomed to certain wave dynamics,” says study author Andrew Ashton, associate scientist at the Woods Hole Oceanographic Institution (WHOI) and faculty member of the MIT-WHOI Joint Program. “But with this model, we can see how waves behave on planets with different liquids, atmospheres, and gravity, which can kind of challenge our intuition.”

The team is particularly keen to understand how waves form on Titan. The large moon is the only planetary body in the solar system, other than Earth, known to currently host liquid lakes.

“Anywhere there’s a liquid surface with wind moving over it, there’s potential to make waves,” says Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT. “For Titan, the tantalizing thing is that we don’t have any direct observation of what these lakes look like. So we don’t know for sure what kind of waves might exist there. Now this model gives us an idea.”

If humans were one day to send a probe to Titan’s lakes, the team’s new model could inform the design of wave-resilient spacecraft.

“You would want to build something that can withstand the energy of the waves,” says lead author Una Schneck, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “So it’s important to know what kind of waves these instruments would be up against.”

The study’s co-authors include Charlene Detelich and Alexander Hayes of Cornell University and Milan Curcic of the University of Miami.

“The first puff”

When wind blows over water, it creates waves that can be strong enough to carve out coastlines and redistribute sediment brought to the coast by rivers. Through this process, waves can be a significant force in shaping a landscape over time. Schneck and her colleagues, who study landscape evolution on Earth and other planets, wondered how waves might behave on other worlds where gravity, atmospheric conditions, and liquid compositions can be very different from what is found on Earth.

“There have been attempts in the past to predict how gravity will affect waves on other planets,” Schneck says. “But they don’t quantify other factors such as the composition of the liquid that is making waves. That was the big leap with this project.”

She and her colleagues developed a full wave model that takes into account not just a planet’s gravity, but also properties of its surface liquid, such as its density, viscosity, and surface tension, or how resistant a liquid is to rippling. The team also incorporated the effect of a planet’s atmospheric pressure. With this model, they aimed to predict how a planet’s liquid surface would evolve in response to winds of a given speed.

“Imagine a completely still lake,” Ashton offers. “We’re trying to figure out the first puff that will make those first little tiny ripples, on up to a full ocean wave.”
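The article doesn’t reproduce the model’s equations, but the classical capillary-gravity dispersion relation illustrates why gravity, density, and surface tension together set the threshold for that “first puff”: the slowest ripple a wind can excite travels at c_min = (4gσ/ρ)^(1/4), and wind slower than that cannot sustain waves. A minimal sketch, with illustrative liquid properties (the Titan numbers below are rough assumptions, not values from the study):

```python
# Minimum phase speed of capillary-gravity waves: the slowest ripple a steady
# wind must outpace before it can start transferring energy to the surface.
# Dispersion: c(k) = sqrt(g/k + sigma*k/rho), minimized at k = sqrt(rho*g/sigma),
# which gives c_min = (4*g*sigma/rho)**0.25.

def min_wave_speed(g, sigma, rho):
    """g: gravity [m/s^2], sigma: surface tension [N/m], rho: liquid density [kg/m^3]."""
    return (4.0 * g * sigma / rho) ** 0.25

# Earth: water
print(min_wave_speed(9.81, 0.072, 1000.0))   # ~0.23 m/s
# Titan: assumed methane-ethane mixture values (illustrative only)
print(min_wave_speed(1.352, 0.018, 590.0))   # ~0.11 m/s
```

The lower threshold for the assumed Titan liquid is consistent with the article’s point that a gentle breeze there can do what a stiffer wind does on Earth.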

Making waves

The team first tested their new model with wave data on Earth. They used measurements of waves collected by buoys across Lake Superior over 20 years. They found that the model, which took into account Earth’s gravity, the composition of the liquid (water), and atmospheric conditions, accurately predicted the wind speeds needed to generate waves across the lake, and how high the waves grew for a given wind strength.

The researchers then applied the model to predict how waves would behave on other planetary bodies that are known to host liquid on their surface. They looked first to Titan, where NASA’s Cassini mission previously captured radar images of lake formations, which scientists suspect are currently filled with liquid methane and ethane. The team used the new model to calculate the moon’s wave dynamics given its gravity, atmospheric pressure, and liquid composition.

They found that on Titan, it’s surprisingly easy to make waves. The relatively light liquid, combined with low gravity and atmospheric pressure, means that even a gentle wind can stir up huge waves.

“It kind of looks like tall waves moving in slow motion,” Schneck says. “If you were standing on the shore of this lake, you might feel only a soft breeze but you would see these enormous waves flowing toward you, which is not what we would expect on Earth.”

The researchers also considered wave activity on ancient Mars. The Red Planet hosts many impact basins that may have once been filled with water, before the planet’s atmosphere dissipated and the water evaporated away. One of those basins is Jezero Crater, which is currently being explored by NASA’s Perseverance rover. With the new model, the team showed that as Mars’ atmosphere gradually disappeared, reducing its pressure over time, it would have required stronger winds to make the same waves.

Beyond the solar system, the researchers applied the model to three different exoplanets. The first, LHS 1140b, is a “cool super-Earth,” meaning that it is colder and larger than Earth. The planet hosts liquid water, though because it is so large, its gravity is stronger than Earth’s. The model showed that a wind of a given speed would generate much smaller water waves on the super-Earth than on Earth, due to the difference in gravity.

The team also considered Kepler-1649b, a Venus-like planet with gravity similar to Earth’s and lakes of sulfuric acid, which is about twice as dense as water. Under these conditions, the researchers found, it would take much stronger winds than on Earth to make even a ripple on the exo-Venus.

This effect is even more pronounced for the third planet, 55 Cancri e, a lava world that has both higher gravity than Earth and a much denser, more viscous surface liquid. Scientists suspect that the planet hosts oceans of liquefied rock. In this environment, the model predicts that hurricane-force winds on Earth, of about 80 miles per hour, would generate only small waves of a few centimeters in height on the lava world.

Aside from illuminating new ways that waves can behave on other planets, Perron hopes the model will answer longstanding questions of planetary landscape formation.

“Unlike on Earth where there is often a delta where a river meets the coast, on Titan there are very few things that look like deltas, even though there are plenty of rivers and coasts. Could waves be responsible for this?” Perron wonders. “These are the kinds of mysteries that this model will help us solve.”

This work was supported, in part, by NASA and the National Science Foundation.



from MIT News https://ift.tt/LMlVWoU

Geothermal energy turns red hot

Drill deep and drill differently. That’s what’s needed to exploit the nearly bottomless promise of geothermal energy in the United States and around the globe, according to participants at the 2026 Spring Symposium, titled “Next-generation geothermal energy for firm power.” 

Sponsored by the MIT Energy Initiative (MITEI), the March 4 event drew 120 people, including MIT faculty and students, investors, and representatives from startups, multinational energy companies, and zero-carbon advocacy groups.

“The time feels right to pull together good policy, great corporate partners, and the research and technological innovations … to make significant advances in the widespread utilization of this incredible resource,” said Karen Knutson, the vice president for government affairs at MIT, in welcoming attendees.

Technology from the oil and gas industry helped usher in a first wave of geothermal energy. But chewing vertical holes through rocks in traditional ways can’t deliver on the full potential of this resource. And the real treasure — geologic formations radiating heat at 374 degrees Celsius and above — lies kilometers beneath Earth’s surface, far beyond the reach of most conventional drilling rigs.

Panelists explored the many innovations in accessing and circulating subsurface heat, as well as digging to unprecedented depths through extremely challenging geological conditions, discussing advanced drilling technologies, materials, and subsurface imaging.

This work is needed urgently, as demand for firm (always-on) power skyrockets in response to the electrification of industry and rise of data centers, said Pablo Dueñas‑Martínez, a MITEI research scientist. “We cannot get through this only with solar and wind; we need dense, deployable energy like geothermal.”

From “minuscule” to “almost inexhaustible” energy

In her opening remarks, Carolyn Ruppel, MITEI’s deputy director of science and technology, noted that despite decades of successful projects in places like the United States, Kenya, Iceland, Indonesia, and Turkey, geothermal still contributes only a “minuscule” share of global electricity. “The tremendous heat beneath our feet remains largely untouched,” she said.

Citing MIT’s milestone 2006 study “The Future of Geothermal Energy,” keynote speaker John McLennan, a professor at the University of Utah and co–principal investigator of the U.S. Department of Energy’s Utah FORGE enhanced geothermal systems (EGS) field laboratory, reminded attendees that the continental crust holds enough accessible heat to supply power for generations. “For practical purposes, it’s almost inexhaustible,” he said.

The question now, he said, is how to access that resource economically and responsibly.

At the Utah FORGE test site, McLennan has been part of a team investigating one method — adapting the oil and gas industry’s drilling and reservoir engineering expertise for hot, relatively impermeable rocks.

The project has drilled multiple deep wells into crystalline granitic rock, including a pair of wells that have been hydraulically stimulated and connected. In a recent circulation test, cold water was pumped down one well, flowed through fractures, and returned hot through the other.

“On a commercial basis … this hot water would be converted to electricity at the surface,” McLennan said. “This has now been demonstrated at Utah FORGE.”

The basic physics, in other words, work. The harder problems now are cost, repeatability, and scale.

Geothermal on the grid

Several panels highlighted the fact that next-generation geothermal is already beginning to deliver firm power.

At Lightning Dock, New Mexico, geothermal company Zanskar used a probabilistic modeling framework that simulated thousands of possible subsurface configurations to identify where to drill a new production well at an underperforming geothermal field. Measured by thermal power delivered, the resulting well is now “the most-productive pumped geothermal well in the country,” said Joel Edwards, Zanskar’s co-founder and chief technology officer; it powers the entire 15-megawatt (MW) Lightning Dock plant on its own.
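The article doesn’t detail Zanskar’s framework, but the core idea of this kind of probabilistic approach, scoring a drilling decision across many sampled subsurface scenarios rather than a single best guess, can be sketched in a few lines. Every name and number below (the candidate sites, their productivity distributions) is invented for illustration:

```python
import random

random.seed(0)

# Toy probabilistic siting model (an illustration of the general idea, not
# Zanskar's method): sample many possible subsurface "configurations" and
# pick the candidate site with the highest expected productivity.
SITES = {"A": (10.0, 4.0), "B": (12.0, 6.0), "C": (8.0, 1.0)}  # (mean, std), hypothetical

def sample_configuration():
    """One possible world: a productivity draw for each candidate site."""
    return {site: random.gauss(mu, sigma) for site, (mu, sigma) in SITES.items()}

n = 10_000
totals = {site: 0.0 for site in SITES}
for _ in range(n):
    for site, productivity in sample_configuration().items():
        totals[site] += productivity

best = max(totals, key=totals.get)
print(best)  # "B": highest expected productivity despite high uncertainty
```

The point of averaging over scenarios is that a site can look best in expectation even when no single model run makes it the obvious choice.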

This data-driven approach enables the company to find and develop new resources faster and more cheaply than traditional methods, said Edwards.

José Bona, the director of next-generation geothermal at Turboden, explained how his company’s technology uses specialized turbines to circulate organic fluids that conserve heat better than water, and then convert that heat efficiently into electrical power. This closed-cycle technology can utilize low- to medium-temperature heat sources. Turboden is supplying its technology both to the Lightning Dock geothermal facility in New Mexico and to Fervo Energy’s Cape Station in southwest Utah, an EGS project that will begin delivering 100 MW of baseload, clean electricity to the grid this year, aiming for 500 MW by 2028.

In Geretsried, Germany, Eavor has developed its own proprietary closed-loop system by creating a kind of underground radiator.

“We drilled to about 4.5 kilometers vertical depth, completed six horizontal multilateral pairs, and we delivered the first power to the grid in December,” said Christian Besoiu, the team lead of technology development at Eavor. The project will ultimately be capable of supplying 8.2 MW of electricity to the 32,000 households in the Bavarian town of Geretsried and 64 MW of thermal energy to the district in which the town lies, prioritizing heat when needed.

Beyond oil and gas technology

Early geothermal exploration typically targeted preexisting faults using vertical wells left by oil and gas drilling. Today, companies are experimenting with rock fracturing at multiple subsurface levels and creating heat reservoirs in previously untenable formations by using propping materials.

“Instead of vertical wells, we’re going to horizontal wells, we’re going to cased wells, we’re introducing proppants [solid materials that hold open hydraulically fractured rock] … we do dozens of stages with these designs,” said Koenraad Beckers, the geothermal engineering lead at ResFrac. This shale-style approach has already yielded much higher flow rates and more-reliable performance than earlier EGS.

Some current geothermal wells manage to achieve depths close to 15,000 feet using the oil and gas industry’s polycrystalline diamond compact drill bits, which can bore through hard rock like granite at more than 100 feet per hour. But these bits and the rigs that drive them are no match for conditions six or more kilometers down — and it is at those depths that the heat on hand begins to make an overwhelming economic case for geothermal.

“If we go to around 300 to 350 degrees, your power potential increases 10 times,” said Lev Ring, CEO of Sage Geosystems. “At that point, with reasonable CAPEX [capital expenditure] assumptions, levelized cost of electricity [a metric for comparing the cost of electricity across different generation technologies] is around 4 cents, and geothermal becomes cheaper than any other alternative.”
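Ring’s 4-cent figure can be sanity-checked with the standard levelized-cost formula, which annualizes capital expenditure with a capital recovery factor and divides total annual cost by annual generation. The plant size, CAPEX, O&M cost, and discount rate below are hypothetical placeholders, not numbers from the talk:

```python
def lcoe(capex, opex_per_year, mwh_per_year, rate=0.07, years=30):
    """Simplified levelized cost of electricity in $/MWh (no degradation or taxes)."""
    # Capital recovery factor: converts up-front CAPEX into an equivalent annual payment
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return (capex * crf + opex_per_year) / mwh_per_year

# Hypothetical 100 MW plant running at a 90% capacity factor
annual_mwh = 100 * 8760 * 0.90
print(round(lcoe(capex=250e6, opex_per_year=8e6, mwh_per_year=annual_mwh), 1))
# ~35.7 $/MWh, i.e., roughly 3.6 cents/kWh under these assumed inputs
```

The exercise shows why geothermal economics hinge on firm output: a high capacity factor spreads the heavy up-front drilling cost over many megawatt-hours.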

But “at 10 kilometers down … the largest land rigs in existence today cannot handle it,” Ring added. “We need alternatives — new materials, new ways to handle pressure, maybe even welding on the rig … a whole space that has not been addressed yet.”

One panel, featuring Quaise Energy, an MIT spinout with MITEI roots, spotlighted just how radically drilling might change. Co-founder Matt Houde described the company’s millimeter-wave drilling approach, which uses high-frequency electromagnetic waves derived from fusion research to vaporize rock instead of grinding it, as with conventional drilling. In a recent Texas field test, the team drilled 100 meters of hard basement rock in about a month, and is now planning kilometer-scale trials aimed at reaching superhot rock temperatures around 400 C, where each well could deliver many times the power of today’s geothermal projects.

Innovations for deep drilling

Moderating a panel on “MIT innovations for next-generation geothermal,” Andrew Inglis, the venture builder in residence with MIT Proto Ventures, whose position is sponsored by the U.S. Department of Energy GEODE program, framed the Institute’s role in getting such hard-tech ideas out of the lab and into the field. “The way MIT thinks about tech development, uniquely from other universities, can play a very singular role in geothermal commercial liftoff,” he said.

Materials researchers on that panel illustrated the point. Matěj Peč, an associate professor of geophysics in the Department of Earth, Atmospheric and Planetary Sciences, outlined work to build sensors that survive up to 900 C so that rock deformation and fracturing can be studied at supercritical conditions. Michael Short, the Class of 1941 Professor in the Department of Nuclear Science and Engineering, and C. Cem Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering, respectively described coatings and alloys designed to resist corrosion, fouling, and cracking in extreme environments. In response to audience questions after their talks, Tasan made an important point, highlighting how academics need input from industry to understand the real-world problems (e.g., corrosion of pipes by geofluids) that require engineering solutions.

Other researchers are rethinking how to detect geothermal resources: Wanju Yuan, a research scientist with the Geological Survey of Canada at Natural Resources Canada, is using satellite imagery and thermal infrared sensing to screen vast regions for subtle hot spots and structures, processing thousands of images to identify promising sites in just a few months of work. “It’s a very efficient way to screen potential areas before more expensive exploration, thus reducing exploration and drilling risks,” he said.

Policy as backdrop, not center stage

Policy loomed in the background of many discussions — from bipartisan support for geothermal exploration and tax incentives to issues of regulation and permitting.

For Ruppel, that was by design.

“We wanted this meeting to showcase what’s technically possible and what’s already happening on the ground,” she said. “The policy world is starting to pay attention. Our job is to make sure that when that spotlight turns our way, next-generation geothermal is ready.”

MITEI’s Spring Symposium was followed by a gathering of geothermal entrepreneurs, investors, and energy industry experts co-hosted by MITEI and the Clean Air Task Force. “GeoTech Summit: Accelerating geothermal technology, projects, and deal flow” explored the financing challenges and opportunities of geothermal energy today.



from MIT News https://ift.tt/YBjabVF

MIT faculty, alumni receive 2025-26 American Physical Society honors

The American Physical Society (APS) recently honored two MIT faculty members — professors Yoel Fink PhD ’00 and Mehran Kardar PhD ’83 — as well as six alumni with prizes and awards for their contributions to physics and academic leadership.

In addition, several MIT faculty members — Professor Jorn Dunkel, Professor Yen-Jie Lee PhD ’11, Associate Professor Mingda Li PhD ’15, and Associate Professor Julien Tailleur — as well as 12 additional alumni were named APS Fellows.

Yoel Fink PhD ’00, the Danae and Vasilis (1961) Salapatas Professor in the Department of Materials Science and Engineering, received the Andrei Sakharov Prize “for defending the academic freedom and human rights of scientists working in the U.S.”

The prize, named for physicist and human rights advocate Andrei Sakharov, recognizes scientists whose leadership and impact advance the principles of intellectual freedom and human dignity. Fink’s research focuses on “computing fabrics” — fibers and textiles that sense, communicate, store, and process information. By embedding functionality at the fiber level, fabrics become computing systems that can infer human activity and context while keeping the traditional qualities of garments. These textiles enable noninvasive monitoring of physiological and health conditions, with applications ranging from fetal and maternal health to human performance analytics, injury prevention in challenging environments, and defense.

Mehran Kardar PhD ’83, the Francis Friedman Professor of Physics, received the Lars Onsager Prize “for ground-breaking contributions to statistical physics, including the Kardar-Parisi-Zhang equation, Casimir forces, active matter, and aspects of biological physics.”

Kardar’s research focuses on how complex behavior emerges from simple interactions in systems both in and far from equilibrium, including stable ones like a still pond and rapidly changing ones such as growing surfaces. The Kardar-Parisi-Zhang equation, which he helped develop, provides a unifying framework for understanding how randomness and fluctuations shape evolving phenomena, from fluids and interfaces to biological and quantum systems. His work has also advanced the theoretical understanding of disordered materials, soft matter such as polymers and gels, and fluctuation-induced forces — including Casimir forces arising from quantum and thermal effects. More recently, he has applied these ideas to active matter — systems of self-driven units — and biological systems, helping reveal patterns in living and evolving systems.

Alumni receiving awards

Joel Butler PhD ’75 was presented the W.K.H. Panofsky Prize in Experimental Particle Physics “for wide-ranging scientific, technical, and strategic contributions to particle physics, particularly exceptional leadership in fixed-target quark flavor experiments at Fermilab and collider physics at the Large Hadron Collider.”

Anthony Duncan PhD ’75 is the recipient of the Abraham Pais Prize for History of Physics “for research on the history of quantum physics between 1900 and 1927 that culminated in 'Constructing Quantum Mechanics,' an exemplary work that uses primary sources masterfully and employs scaffold and arch metaphors to describe developments in the quantum revolution.”

Laura A. Lopez ’04 was presented the Edward A. Bouchet Award “for pioneering contributions to X-ray astronomy, including foundational studies of supernova remnants, compact objects, and stellar feedback in galaxies, and for transformative leadership in advancing equity and inclusion in physics through innovative mentorship programs, national advocacy, and unwavering support for students from historically marginalized communities.”

Zhiquan Sun PhD ’25 is the recipient of the J.J. and Noriko Sakurai Dissertation Award in Theoretical Particle Physics “for applying effective field theory to advance our understanding of QCD [quantum chromodynamics], including establishing a new formalism to study heavy quark fragmentation, determining how confinement affects energy correlators, and revealing an overlooked complexity of the axion solution to the strong CP [charge conjugation symmetry and parity symmetry] problem.”

Charles B. Thorn III ’68 received the Dannie Heineman Prize for Mathematical Physics for “fundamental contributions to elementary particle physics, primarily the theory of strong interactions and the development of string theory.”

Christina Wang ’19 received the Mitsuyoshi Tanaka Dissertation Award in Experimental Particle Physics “for pioneering a novel technique using CMS [Compact Muon Solenoid] muon chambers to search for weakly-coupled sub-GeV [giga-electronvolt] mass dark matter using long-lived particle searches, and for groundbreaking work in quantum sensing to enable new probes of dark matter.”

APS Fellows

Several MIT faculty were elected 2025 APS Fellows:

Jorn Dunkel, MathWorks Professor of Mathematics, is the recipient of the Division of Statistical and Nonlinear Physics Fellowship “for pioneering contributions to statistical, nonlinear, and biological physics, notably in understanding pattern formation in soft matter and biology, cell positioning in tissues, and turbulence in active media.”

Yen-Jie Lee PhD '11, professor of physics, received the Division of Nuclear Physics Fellowship “for pioneering measurements of jet quenching, medium response and heavy-quark diffusion in the quark-gluon plasma, and for using electron-positron collisions as an innovative control to understand collectivity in small collision systems.”

Mingda Li PhD '15, associate professor of nuclear science and engineering, is the recipient of the Topical Group on Data Science Fellowship “for pioneering the integration of artificial intelligence with scattering and spectroscopy, enabling breakthroughs in phonons, topological states, optical and time-resolved spectra, and data-driven discovery for quantum and energy applications.”

Julien Tailleur, associate professor of physics, is the recipient of the Division of Soft Matter Fellowship “for foundational theoretical work on motility-induced phase separation and emergent collective behavior in scalar active matter.”

The following additional MIT alumni were also honored as APS Fellows:

Andrew Cross SM ’05, PhD ’08 (EECS), Division of Quantum Information Fellowship 

Kevin D. Dorfman SM '01, PhD '02 (ChemE), Division of Polymer Physics Fellowship

Geoffroy Hautier PhD '11 (DMSE), Division of Computational Physics Fellowship

Douglas J. Jerolmack PhD '06 (EAPS), Division of Statistical and Nonlinear Physics Fellowship

Brian Lantz '92, PhD '99 (Physics), Division of Gravitational Physics Fellowship

Valerio Lucarini SM '03 (EAPS), Topical Group on Physics of Climate Fellowship

Giles Novak '81 (Physics), Division of Astrophysics Fellowship

Steve Presse PhD '08 (Physics), Division of Biological Physics Fellowship

Jonathan Rothstein PhD '01 (MechE), Division of Fluid Dynamics Fellowship

Gray Rybka PhD '07 (Physics), Division of Particles and Fields Fellowship

Sarah Sheldon '08, PhD '13 (Physics, NSE), Forum on Industrial and Applied Physics Fellowship

Lian Shen ScD '01 (MechE), Division of Fluid Dynamics Fellowship



from MIT News https://ift.tt/lHQBpit

Tuesday, April 14, 2026

Multitasking quantum sensors can measure several properties at once

A special class of sensors leverages quantum properties to measure tiny signals at levels that would be impossible using classical sensors alone. Such quantum sensors are currently being used to study the inner workings of cells and the outer depths of our universe.

Particularly promising are solid-state quantum sensors, which can operate at room temperature. Unfortunately, most solid-state quantum sensors today only measure one physical quantity at a time — such as the magnetic field, temperature, or strain in a material. Trying to measure both the magnetic field and temperature of a material at the same time causes their signals to get mixed up and measurements to become unreliable.

Now, MIT researchers have created a way to simultaneously measure multiple physical quantities with a solid-state quantum sensor. They achieved this by exploiting entanglement, where particles become correlated into a single quantum state. In a new paper, the team demonstrated its approach in a commonly used quantum sensor at room temperature, measuring the amplitude, frequency, and phase of a microwave field in a single measurement. They also showed the approach works better than sequentially measuring each property or using traditional sensors.

The researchers say the approach could enable quantum sensors that can deepen our understanding of the behavior of atoms and electrons inside materials and living systems like cancer cells.

“Quantum multiparameter estimation has been mostly theoretical to date,” says co-lead author of the paper Takuya Isogawa, a graduate student in nuclear science and engineering. “There have been very few experiments that actually demonstrate it, and that work focused on photons. We wanted to demonstrate multiparameter estimation in a more application-oriented setup: a solid-state quantum sensor in use today.”

Joining Isogawa on the paper are co-lead authors Guoqing Wang PhD ’23 and MIT PhD candidate Boning Li. The other authors on the paper are former MIT visiting students Zhiyao Hu and Ayumi Kanamoto; University of Tokyo PhD candidate Shunsuke Nishimura; Chinese University of Hong Kong Professor Haidong Yuan; and Paola Cappellaro, MIT’s Ford Professor of Engineering, a professor of nuclear science and engineering and of physics, and a member of the Research Laboratory of Electronics.

Quantum effects for measurement

Quantum sensors exploit quantum effects like entanglement, spin states, and superposition to measure changes in magnetic fields, electric fields, gravity, acceleration, and more. As such, they can be used to measure the activity of single molecules in ways that are useful for understanding biology and space, like tracking the activity of metabolites or enzymes inside cells.

One particularly useful sensor in biology leverages what’s known as nitrogen-vacancy (NV) centers in diamonds, a defect where a carbon atom in the diamond’s crystal lattice is replaced by a nitrogen atom, and a neighboring lattice site is missing, or vacant. The defect hosts an electronic spin whose transition frequencies can be read out optically. The NV center’s spin state is extremely sensitive to external effects, such as magnetic fields and temperature, which can shift the spin state in ways that can be measured at extremely high resolution.

Unfortunately, different external effects change the energy resonances of the spin in similar ways, making it difficult to measure multiple effects at once. The result is that most solid-state quantum sensor applications measure a single physical quantity at one time.

“If you can only measure one quantity at a time, you have to repeat experiments to measure quantities one by one,” Isogawa says. “That takes more time, which means less sensitivity. It also makes experiments more susceptible to errors.”

For their experiment, the researchers used NV centers inside a 5-square-millimeter diamond. They pointed a laser into the diamond and studied its fluorescence to make their measurements, a common approach for such sensors. To study the electronic spin of the NV center, they used a microwave antenna. To study the spin of the nitrogen atom, they used a radio frequency field.

“We used those two spins as two qubits,” Isogawa says, referring to the building blocks of quantum computing systems. “If you have only one qubit, you can only measure one outcome: basically, 0 or 1. It’s the probability that it spins up or down. Think of it like a coin toss, with the probability of getting heads or tails. With two qubits, we increased the parameters that we could extract.”

The system worked because the spins of the sensor qubit and auxiliary qubit were entangled, meaning the state of one depends on the state of the other. With one qubit, you get a binary outcome. With two, you get four possible outcomes, whose probabilities encode up to three independent parameters.
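The counting argument can be made concrete: a projective measurement on n qubits has 2^n outcomes, and because the outcome probabilities must sum to one, only 2^n − 1 of them are independent. A tiny sketch of that arithmetic (an illustration of the counting, not the team’s measurement protocol):

```python
# Outcome counting for projective measurements on n qubits:
# 2**n outcomes, probabilities sum to 1, so 2**n - 1 free parameters.

def free_parameters(num_qubits):
    outcomes = 2 ** num_qubits  # one qubit: {0, 1}; two qubits: {00, 01, 10, 11}
    return outcomes - 1         # normalization removes one degree of freedom

print(free_parameters(1))  # 1 -> a single quantity per measurement
print(free_parameters(2))  # 3 -> e.g., amplitude, detuning, and phase at once
```

This is why the entangled two-qubit readout can extract three quantities in one shot, where a single qubit yields only one.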

The two qubits allowed researchers to measure those three quantities simultaneously using a technique known as the Bell state measurement.

Other researchers had used the Bell state measurement at extremely low temperatures before, but the MIT researchers developed a new technique to perform the measurement at room temperature. That technique was first proposed by Wang, who was previously a graduate student in Professor Cappellaro’s lab.

The researchers used the approach to simultaneously measure the amplitude, detuning, and phase of a microwave magnetic field. The researchers also say the approach could be used to measure electric fields, temperature, pressure, and strain.

“Measuring these parameters simultaneously can help us explore spin waves in materials, which is an important topic in condensed matter physics,” Isogawa says. “NV center sensors have extremely high spatial resolution and versatility. It can measure a lot of different physical quantities.”

More practical quantum sensing

The researchers say this work is an important step toward using solid-state quantum sensors to more fully characterize systems in biomedical research and materials characterization. That’s because multiparameter estimation had never been achieved in realistic settings or in widely used quantum sensors.

“What makes the NV center quantum sensors so special is they can operate at room temperature,” Isogawa says. “It’s very suitable for biological measurements or condensed matter physics experiments.”

The researchers note that their sensor did not measure each quantity at the highest possible precision; in future work, they plan to explore whether their approach can achieve higher precision for each parameter.

They also plan to explore how their approach works to characterize heterogeneous materials.

“In an extremely uniform environment, you could use many different classical and quantum sensors and measure each physical quantity at the same time,” Isogawa says. “But if the physical quantities change at different locations, you need high spatial sensors, and you need a sensor that can measure multiple physical quantities. This approach has major advantages in such situations.”

The work was supported, in part, by the U.S. National Science Foundation, the National Research Foundation of Korea, and the Research Grants Council of Hong Kong.



from MIT News https://ift.tt/5o8Tkza

Flying at the edge of the stratosphere

All the ingredients to leave the first layer of the atmosphere were lying on a picnic table. T-minus 30 minutes before launch from the New York Catskills, students in MIT's reborn 16.00 (Introduction to Aerospace Engineering) course tore open hand warmers to fight the December morning chill. One hot pack for cold hands. One for the electronics payload, which would need the warmth on the way up. The balloons in this series of launches rose to more than 20 kilometers above the surface.

Five student teams completed stratospheric balloon launches for a final project in the MIT Department of Aeronautics and Astronautics (AeroAstro) first-year exploratory course. This fall semester was the first iteration of the reimagined 16.00. The course was co-taught by MIT professors Jeffrey Hoffman, a former NASA astronaut, and Oliver de Weck, Apollo Program Professor of Astronautics and Engineering Systems. The course was reintroduced to the curriculum in 2025 to give first-year students a design-build experience from the very start, says de Weck, who is also AeroAstro's associate department head.

"This course had been taught for more than 25 years. And then the pandemic came," he explains. "We felt that it was time to bring the course back, to revive it, give it new life."

De Weck taught a version of this hands-on project from 2012 to 2016 in Unified Engineering, with 20 balloon launches over that time. Hoffman taught a version that focused on blimps, indoor flights, and achieving neutral buoyancy and control. Those prior courses inspired the new program. The current 16.00 course is an early introduction to design-build flying, offered before the well-known Unified Engineering course for Course 16 sophomores.

"Students don't want to sit through long lectures, with lots of PowerPoints and notes and blackboards," says de Weck. He referenced feedback from students that is framing the department's upcoming strategic plan. "Those hands-on visceral experiences is what we want to provide them."

The AeroAstro program enrolls about 60 undergraduates per year. Future students can expect to see different versions of the 16.00 course, including those focused on fixed-wing aircraft, quadcopter drones, and rockets. Future balloon courses will be called 16.00B. A fixed-wing remote-controlled aircraft course will be 16.00A.

Over 13 weeks, the students attended lectures on subjects including atmospheric composition, radio waves, and flight planning and regulations. In labs, they practiced building Arduino-based pressure and temperature sensors, and testing communication systems.

On that cold launch day, Jackson Lunfelt kept his grip against the pull of an oversized helium balloon moments before his team's launch. His team had worked for weeks configuring GPS and radio communications and testing balloon buoyancy. Through trial and error, they had to find the right weight for a 3D-printed frame attaching the balloon and parachute. It was too heavy at first, so they figured out how to reduce the weight of the plastic to keep the payload buoyant.

"Fortunately, a lot of preparation had helped us," he says.

Lunfelt, a first-year student, grew up just a few hours away from the Catskills in upstate New York. In high school, he was active in Future Farmers of America, welding, and robotics. On launch day, his team was worried their onboard GoPro would shut off from the cold high-altitude temperatures. They got the green light to add a battery bank. They would need to re-calculate the weight and helium needed at the final hour.

“It was one of those things that if you don't do this, you're not gonna launch,” says Lunfelt.
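The last-minute recalculation comes down to a simple buoyancy budget: net lift is the air-helium density difference times the helium volume, minus the total mass carried. A rough sketch of that budget, using illustrative numbers (the battery-bank mass and free lift here are assumptions, not the team's actual figures):

```python
# Rough buoyancy budget for a latex weather balloon at sea level.
# All numbers are illustrative, not the class's actual figures.
RHO_AIR = 1.225     # kg/m^3, sea-level air density
RHO_HELIUM = 0.179  # kg/m^3, helium density at the same conditions

def helium_volume_needed(payload_kg: float, free_lift_kg: float) -> float:
    """Helium volume (m^3) needed to lift the payload, plus the desired
    free lift (the excess lift that sets the ascent rate)."""
    return (payload_kg + free_lift_kg) / (RHO_AIR - RHO_HELIUM)

payload = 3.5 * 0.4536   # the 3.5-lb payload, converted to kg
battery_bank = 0.2       # hypothetical added battery bank, kg
volume = helium_volume_needed(payload + battery_bank, free_lift_kg=1.0)
print(f"Helium needed: {volume:.2f} m^3")
```

Adding mass (the battery bank) or wanting a faster ascent both push the required helium volume up, which is why the fill had to be recomputed at the final hour.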

That first week of December brought frigid air, gusts, and wind patterns that meant the class would have to rethink its launch site. The team aimed to fly east, over Massachusetts, and land before reaching the ocean. The new weather pattern pushed the launch site even farther west, across the border into New York.

The balloon lifted the 3.5-pound payload from the Catskills while the mission control group monitored progress from Cambridge, Massachusetts. Rising hundreds of feet per minute, it climbed out of the troposphere and flew across western Massachusetts at 100 miles per hour, pushed by the strong upper-level winds of the jet stream. It reached an estimated 22 kilometers above the surface. At that height, an onboard GoPro camera recorded the curvature of the Earth.

"Every single moment of that video was amazing. It was truly a story in itself," says Lunfelt.

Then the latex balloon burst, as designed, and descended back down — aided by a parachute. The GoPros captured that spectacular moment, too. The winds carried them just north of the Massachusetts-New Hampshire border. They landed in a neighborhood around Nashua, New Hampshire. Locals saw the MIT identifiers written on the side of the payloads and helped the teams recover them. The landing made it onto the local news.

After a very early morning and a late evening monitoring the launch returns, de Weck, alongside teaching assistant Jonathan Stoppani and Senior Technical Instructor Dave Robertson, agreed that the pride felt by the whole class was palpable. The payloads all came back in one piece, a testament to successful design-builds and last-minute adjustments. The AeroAstro flying tradition is back for first-year students.



from MIT News https://ift.tt/yulmi6C

Monday, April 13, 2026

Carbon removal project supports Maine’s blue economy, broader marine health

Oceans absorb roughly 25 to 30 percent of the carbon dioxide (CO2) that is released into the atmosphere. When this CO2 dissolves in seawater, it forms carbonic acid, making the water more acidic and altering its chemistry. Elevated levels of acidity are harmful to marine life like corals, oysters, and certain plankton that rely on calcium carbonate to build shells and skeletons.

“As the oceans absorb more CO2, the chemistry shifts — increasing bicarbonate while reducing carbonate ion availability — which means shellfish have less carbonate to form shells,” explains Kripa Varanasi, professor of mechanical engineering at MIT. “These changes can propagate through marine ecosystems, affecting organism health and, over time, broader food webs.”
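The shift Varanasi describes follows from the standard seawater carbonate equilibria; writing them out makes the mechanism explicit:

```latex
\mathrm{CO_2 + H_2O \;\rightleftharpoons\; H_2CO_3 \;\rightleftharpoons\; H^+ + HCO_3^- \;\rightleftharpoons\; 2\,H^+ + CO_3^{2-}}
```

Added CO2 pushes these reactions to the right, releasing H+ and lowering pH; much of that H+ is then taken up by existing carbonate ions (H+ + CO3^2- → HCO3-), so bicarbonate accumulates while the carbonate available for shell-building falls.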

Loss of shellfish can lead to water quality decline, coastal erosion, and other ecosystem disruptions, including significant economic consequences for coastal communities. “The U.S. has such an extensive coastline, and shellfish aquaculture is globally valued at roughly $60 billion,” says Varanasi. “With the right innovations, there is a substantial opportunity to expand domestic production.”

“One might think, ‘this [depletion] could happen in 100 years or something,’ but what we’re finding is that they are already affecting hatcheries and coastal systems today,” he adds. “Without intervention, these trends could significantly alter marine ecosystems and the coastal economies that rely on them over time.”

Varanasi and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering, Post-Tenure, at MIT, have been collaborating for years to develop methods for removing carbon dioxide from seawater and turning acidic water back to alkaline. In recent years, they’ve partnered with researchers at the University of Maine Darling Marine Center to deploy the method in hatcheries.

“The way we farm oysters, we spawn them in special tanks and rear them through about a two-week larval period … until they’re big enough so that they can be transferred out into the river as the water warms up,” explains Bill Mook, founder of Mook Sea Farm. Around 2009, he noticed problems with production of early-stage larvae. “It was a catastrophe. We lost several hundred thousand dollars’ worth of production,” he says.

Ultimately, the problem was identified as the low pH of the water that was being brought in: The water was too acidic. The farm’s initial strategy, a common practice in oyster farming, was to buffer the water by adding sodium bicarbonate. The new approach avoids the use of chemicals or minerals.

“A lot of researchers are studying direct air capture, but very few are working in the ocean-capture space,” explains Hatton. “Our approach is to use electricity, in an electrochemical manner, rather than add chemicals to manipulate the solution pH.”

The method collects seawater and feeds it into electrochemical cells, where reactive electrodes release protons, driving off the dissolved carbon dioxide. This cyclic process acidifies the water, converting dissolved inorganic bicarbonates into molecular carbon dioxide, which is collected as a gas under vacuum. The water is then fed to a second set of cells with a reversed voltage to recover the protons and turn the acidic water back to alkaline before it is released to the sea.

Maine’s Damariscotta River Estuary, where Mook Sea Farm is located, provides about 70 percent of the state’s oyster crop. Damian Brady, a professor of oceanography based at the University of Maine and key collaborator on the project, says the Damariscotta community has “grown into an oyster-producing powerhouse … [that is] not only part of the economy, but part of the culture.” He adds, “there’s actually a huge amount that we could learn if we couple the engineering at MIT with the aquaculture science here at the University of Maine.”

“The scientific underpinning of our hypothesis was that these bivalve shellfish, including oysters, need calcium carbonate in order to form their shells,” says Simon Rufer PhD ’25, a former student in Varanasi’s lab and now CEO and co-founder of CoFlo Medical. “By alkalizing the water, we actually make it easier for the oysters to form and maintain their shells.”

In trials conducted by the team, results first showed that the approach is biocompatible and doesn’t kill the larvae, and later showed that oysters treated with MIT’s buffering approach fared better than those treated with mineral or chemical approaches. Importantly, Hatton notes, the process creates no waste products: ocean water goes in, CO2 comes out. The captured CO2 can potentially be used for other applications, such as growing algae as food for shellfish.

Varanasi and Hatton first introduced their approach in 2023. Their most recent paper, “Thermodynamics of Electrochemical Marine Inorganic Carbon Removal,” which was published last year in the journal Environmental Science & Technology, outlines the overall thermodynamics of the process and presents a design tool for comparing different carbon removal processes. The team received a “plus-up award” from ARPA-E to collaborate with the University of Maine and further develop and scale the technology for application in aquaculture environments.

Brady says the project represents another avenue for aquaculture to contribute to climate change mitigation and adaptation. “It pushes a new technology for removing carbon dioxide from ocean environments forward simultaneously,” says Brady. “If they can be coupled, aquaculture and carbon dioxide removal improve each other’s bottom line.”

Through the collaboration, the team is improving the robustness of the cells and learning about their function in real ocean environments. The project aims to scale up the technology and to have a significant impact on climate and the environment, but it also has another major focus.

“It’s also about jobs,” says Varanasi. “It’s about supporting the local economy and coastal communities who rely on aquaculture for their livelihood. We could usher in a whole new resilient blue economy. We think that this is only the beginning. What we have developed can really be scaled.”

Mook says the work is very much an applied science, “[and] because it’s applied science, it means that we benefit hugely from being connected and plugged into academic institutions that are doing research very relevant to our livelihoods. Without science, we don’t have a prayer of continuing this industry.”



from MIT News https://ift.tt/IWB2mwL

Saturday, April 11, 2026

Jazz in the key of life

It is not hard to find glowing reviews of saxophonist Miguel Zenón, a creative jazz artist whose compositions incorporate musical elements from his native Puerto Rico.

For instance, The Jazz Times called “Jibaro,” Zenón’s breakthrough 2005 album, “profound yet joyful.” The New York Times called the same music “strong and light,” adding that we have “rarely seen a jazz composer step forward with a project so impressively organized, intellectually powerful and well played from the start.”

In 2009, when Zenón won a prestigious MacArthur Fellowship, the MacArthur Foundation called Zenón’s work “elegant and innovative,” with “a high degree of daring and sophistication.” In 2012, The New York Times reviewed another Zenón work, “Puerto Rico Nació en Mi: Tales From the Diaspora,” by calling the music “deeply hybridized and original, complex but clear.”

As you may have noticed, these notices all contain multiple descriptive terms. That’s because Zenón’s work is many things at once: jazz, combined with other musical genres; technically rigorous, and supple; novel, yet steeped in tradition. Indeed, Zenón has always seen jazz as being multifaceted.

“What I discovered, when I first encountered jazz, was this idea that you were using improvisation to portray your personality directly to your listeners,” Zenón explains. “And it was connected to a very interesting and intricate improvisational language. That provided something I hadn’t encountered in music before, this idea that you could have something personal and heartfelt walking hand in hand with something that was intellectual and brainy. That balance spoke to me.”

It is still speaking. In 2024, Zenón won the Grammy Award for Best Latin Jazz Album for “El Arte Del Bolero Vol. 2,” a collaboration with Venezuelan pianist Luis Perdomo, a musical partner in the Miguel Zenón Quartet.

Zenón has taught at MIT for three years now. He became a tenured faculty member last year, in MIT’s Music and Theater Arts program, where he helps students find the same satisfaction in music that he does.

“When I first got into music, I was looking for fulfillment,” Zenón says. “It wasn’t about success. I was just looking for music to fulfill something within me. And I still search for that now. And sometimes it still feels like it did 25 or 30 years ago, when I first encountered that feeling. It’s nice to have that in your pocket, to say, this is what I’m looking for, that initial feeling.”

Paradise in the Back Bay

Zenón grew up in San Juan, Puerto Rico. Around age 11, he started attending a performing arts school and playing the saxophone. In his last year of school, Zenón was admitted into college to study engineering. However, a few years before, he had encountered something new: jazz. Zenón’s training had been in classical music. But jazz felt different.

“Discovering jazz music ignited a passion for music in me that had not existed up to that point,” says Zenón, who decided to pursue music in college. “I kind of jumped ship, and it was a blind jump. I didn’t know what to expect, I didn’t know what was on the other side, I didn’t have any artists or any musicians in my family. I just followed a hunch, followed my heart.”

After teachers recommended he study at the renowned Berklee College of Music in Boston, Zenón worked to find a scholarship and funding.

“This was way before the internet. I was looking at catalogs,” Zenón recalls. “I had never been to Boston in my life, I didn’t even know what Berklee looked like. But at Berklee it was the first time I was able to connect with a jazz teacher in a formal way, to learn about history, theory, harmony, and I soaked in it. Also, I was surrounded by young people like myself, who were as enamored and passionate about music as I was. It really felt like paradise.”

After earning his BA from Berklee in 1998, Zenón moved to New York City. He earned an MA from the Manhattan School of Music in 2001 and began playing more extensively with new bandmates.

“I just wanted to be able to play with people who were better than me, and learn from the experience,” Zenón says. He started generating new ideas, writing music, and performing publicly. With Antonio Sánchez, Hans Glawischnig, and Perdomo, he founded the Miguel Zenón Quartet.

“That led to going into the studio and making an album,” Zenón recounts. “And that led to more experience, and more albums.”

Did it ever. Zenón has now been the leader for about 20 albums, mostly featuring the quartet. (After several years, Henry Cole replaced Sánchez as the group’s drummer.) Zenón has played on many recordings by other artists, and helped found the SFJAZZ Collective.

Not many prolific musicians will name any one recording as their best, and Zenón is the same way, but he is willing to cite a few that were milestones for him.

“Jibaro” draws on the music of Puerto Rico’s jibaro singers, troubadours using 10-line stanzas with eight-syllable lines, something Zenón adopted for jazz-quartet use. “Esta Plena,” a 2009 record, fuses jazz and the structures of “plena,” a traditional percussion-based Puerto Rican song form. “Alma Adentro,” a 2011 album, covers classic songs from Puerto Rico.

“It would be impossible for me to pick one favorite, but what I would say is, there are a couple of albums in the earlier part of my career that explored a balance between things coming from the jazz world and things coming from traditional Puerto Rican music and folklore. When I was able to feel like that balance was right, it felt like me,” Zenón says. “This is what I have to give. This is my persona.”

In 2008, Zenón was also honored with a Guggenheim Fellowship, which helped him conduct music research, another facet of his career. Zenón has often extensively interviewed traditional Puerto Rican musicians about the intricacies of their works before writing material in those forms.

And Zenón has made a point of giving back, founding the Caravana Cultural, a project that brings free jazz concerts to rural Puerto Rico.

Work, joy, and love

Zenón is now settled in at MIT, which boasts a vibrant music program. More than 1,500 MIT students take a music class each year, and over 500 students participate in one of 30 campus ensembles. Last year, MIT opened its new Edward and Joyce Linde Music Building, a purpose-built performance, rehearsal, and teaching space.

“There are definitely students at MIT who could be at some of the best music schools in the world,” Zenón says. “That’s not in question.”

Moreover, among MIT students, Zenón says, “There is a communal approach to music. Everything they do, they do for each other. They look out for each other, they work together. And that has been one of the most rewarding things to see.”

He continues: “Of course the students are brilliant and the faculty are too. In terms of what I like to teach, it’s been a good fit for me personally, and I couldn’t be happier about the opportunity. There’s more and more interest in jazz, more and more interest in creating things together, and there’s a unique mindset being built in front of our eyes.”

He is also pleased to work in the Linde Music Building: “It’s amazing to have the building, not only in terms of the facilities, but it’s also a symbol of the place music has within the Institute. We’re not just talking about music, we’re creating it. It’s a great commitment from the school and says a lot about our leadership.”

Meanwhile, along with teaching, Zenón’s own recording career continues at full speed. With Luis Perdomo, he is working on “El Arte Del Bolero Vol. 3,” the follow-up to his Grammy-winning album. And Zenón has plans for still another album, to be recorded in Puerto Rico with a large ensemble, based on music he is writing about Puerto Rico’s history and present.

“Things are always linked,” Zenón explains. “Once you finish one project, the next one starts. It feels natural for me to do it that way.”

In conversation, Zenón is engaging, genial, and reflective. So what advice does he have for younger musicians? Not everyone who plays an instrument will become Miguel Zenón. But what about people who want to pursue music, not knowing how far it will take them?

“If you find something you enjoy, just enjoy it for the sake of it,” Zenón says. “Find what brings joy, and make sure you don’t lose that. Having said that, with music, like any art form, or anything else in life, in order to make progress, it takes work and commitment. There’s no hiding that. So if music is something you’re serious about, set goals you can achieve over time, so you always have something to work for. In my experience, that’s key. But I always pair that with the idea of joy and love for music — keeping that love close to your heart.”



from MIT News https://ift.tt/je3FUvJ

Friday, April 10, 2026

Professor Emeritus Jack Dennis, pioneering developer of dataflow models of computation, dies at 94

Jack Dennis, an influential MIT professor emeritus of computer science and engineering, died on March 14 at age 94. The original leader of the Computation Structures Group within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), he pioneered the development of dataflow models of computation, and, subsequently, many novel principles of computer architecture inspired by dataflow models.

The second child of an engineer and a textile designer, Dennis showed early interest in both engineering and music, rewriting Gilbert and Sullivan lyrics with his parents and playing piano with the Norwalk Symphony Orchestra in Connecticut as a teen, while building a canoe at home with his father. As an undergraduate at MIT, he developed his wide array of interests further, joining the VI-A Cooperative Program in Electrical Engineering; working at the Air Force Cambridge Research Laboratories on projects in speech processing and novel radar systems; participating in the model railroad club; and joining the MIT Symphony Orchestra, where he met his first wife, Jane Hodgson ’55, SM ’56, PhD ’61. (The two later separated when she went to study medicine in Florida.) 

Dennis earned his BS (1953), MS (1954), and ScD (1958) from MIT before joining the then-Department of Electrical Engineering as a faculty member. He was promoted to full professor in 1969. His doctoral thesis, "Mathematical Programming and Electrical Networks," explored analogies between electric circuit theory and quadratic programming problems. Ideas he developed in that thesis further crystallized in his 1964 paper, "Distributed solution of network programming problems," which created an important early class of digital distributed optimization solvers.

In a 2003 piece that Dennis wrote for his undergraduate class’s 50th reunion, he remembered his earliest encounters with computers at the Institute: “I prepared programs written in assembly language on punched paper tape using Friden 'Flexowriters,' and stood aside watching the myriad lights blink and flash while operator Mike Solamita fed the tapes [...] That was 1954. Fifty years later, much has changed: A room full of vacuum tubes has become a tiny chip with millions of transistors. A phenomenon once limited to research laboratories has become an industry producing commodity products that anyone can own and use beneficially.”

Dennis’ influence in steering that change was profound. As a collaborator with the teams behind both Project MAC and Multics, the earliest attempts to allow multiple users to work with a single computer seemingly simultaneously (i.e., a time-shared operating system), Dennis helped to specify the unique segment addressing and paging mechanisms that became a fundamental part of the General Electric Model 645 computer. His insights stemmed from a tendency to pay equal attention to both hard- and software when others considered themselves specialists in one or the other. 

“I formed the Computation Structures Group [within CSAIL] and focused on architectural concepts that could narrow the acknowledged gap between programming concepts and the organization of computer hardware,” Dennis explained in his 2003 recollection. “I found myself dismayed that people would consider themselves to be either hardware or software experts, but paid little heed to how joint advances in programming and architecture could lead to a synergistic outcome that might revolutionize computing practice.”

Dennis’ emphasis on synergy did not go unnoticed. Gerald Sussman, the Panasonic Professor of Electrical Engineering, points out “the relationship of [Dennis’] dataflow architecture to single-assignment programs, and thus to pure functional programs. This coupled the virtue of referential transparency in programming to the effective use of hardware parallelism. Dennis also pioneered the use of self-timed circuits in digital systems. The ideas from that work generalize to much of the work on highly distributed systems.” 

The Computation Structures Group attracted multiple scholars interested in developing asynchronous computing and dataflow architecture, many of whom became lifelong friends and collaborators. These included Peter Denning, with whom Dennis and Joseph Qualitz co-authored the textbook “Machines, Languages, and Computation” (1978); the late Arvind, who became faculty head of computer science for the Department of Electrical Engineering and Computer Science (EECS), and the late Guang R. Gao, who became distinguished professor of electrical and computer engineering at the University of Delaware. 

In recognition of his contributions to the Multics project, Dennis was elected fellow of the Institute of Electrical and Electronics Engineers (IEEE). Many additional honors would follow: He received the Association for Computing Machinery (ACM)/IEEE Eckert-Mauchly Award in 1984; was inducted as a fellow of the ACM (1994); was named to the National Academy of Engineering (2009); was elected to the ACM Special Interest Group on Operating Systems (SIGOPS) Hall of Fame (2012); and was awarded the IEEE John von Neumann Medal (2013).

A successful researcher, Dennis was perhaps equally influential in the development of EECS’ curriculum, developing six subjects in areas of computer theory and systems: Theoretical Models for Computation; Computation Structures; Structure of Computer Systems; Semantic Theory for Computer Systems; Semantics of Parallel Computation; and Computer System Architecture (taught in collaboration with Arvind). Several of the courses that Dennis developed continue to be taught, in updated form, to this day.

Following his retirement from teaching in 1987, he consulted on projects relating to parallel computer hardware and software for such varied groups as the NASA Research Institute for Advanced Computer Science; Boeing Aerospace; McGill University; the Architecture Group of Carlstedt Elektronik in Gothenburg, Sweden; and Acorn Networks, Inc. His fruitful relationship with former student Guang Gao continued in the form of a lecture tour through China, as well as co-authorship of a book, “Dataflow Architecture,” currently in progress at MIT Press.

A voracious lifelong learner, Dennis was fond of repeating a friend’s observation that “a scholar is just a book’s way of making another book.” In a full and active retirement, he still made room for music, trying his hand at composing; performing at Tanglewood as a tenor in Chorus Pro Musica; playing piano at the marriage of Guang Gao’s son Nick; and joining the chorus at the First Church in Belmont, Massachusetts, where his celebration of life (with concurrent livestreaming) will be held on Monday, June 8, at 2 p.m. 

Dennis is survived by his wife Therese Smith ’75; children David Hodgson Dennis of North Miami, Florida; Randall Dennis of Connecticut; and Galen Dennis, a resident of Australia. 



from MIT News https://ift.tt/50mnXlk

Thursday, April 9, 2026

Slice and dice

What if the Trojan horse had been pulled to pieces, revealing the ruse and fending off the invasion, just as it entered the gates of Troy?

That’s an apt description of a newly characterized bacterial defense system that chops up foreign DNA.

Bacteria and the viruses that infect them, bacteriophages — phages for short — are ceaselessly at odds, with bacteria developing methods to protect themselves against phages that are constantly striving to overcome those safeguards.

New research from the Department of Biology at MIT, recently published in Nature, describes a defense system that is integrated into the protective membrane that encapsulates bacteria. SNIPE, which stands for surface-associated nuclease inhibiting phage entry, contains a nuclease domain that cleaves genetic material, chopping the invading phage genome into harmless fragments before it can appropriate the host’s molecular machinery to make more phages. 

Daniel Saxton, a postdoc in the Laub Lab and the paper’s first author, was initially drawn to studying this bacterial defense system in E. coli, in part because it is highly unusual to have a nuclease that localizes to the membrane, as most nucleases are free-floating in the cytoplasm, the gelatinous fluid that fills the space inside cells.

“The other thing that caught my attention is that this is something we call a direct defense system, meaning that when a phage infects a cell, that cell will actually survive the attack,” Saxton says. “It’s hard to fend off a phage directly in a cell and survive — but this defense system can do it.” 

Light it up

For Saxton, the project came into focus during a fluorescence-based experiment in which viral genetic material would light up if it successfully penetrated the bacteria. 

“SNIPE was obliterating the phage DNA so fast that we couldn’t even see a fluorescent spot,” Saxton recalls. “I don’t think I’ve ever seen such an effective defense system before — you can barrage the bacteria with hundreds of phage per cell, but SNIPE is like god-tier protection.”

When the nuclease domain of SNIPE was mutated so it couldn’t chop up DNA, fluorescent spots appeared as usual, and the bacteria succumbed to the phage infection. 

Bacteria maintain tight control over all their defense systems, lest they be turned against their host. Some systems remain dormant until they flare up, for example to halt all protein translation in the cell, while others can distinguish between bacterial DNA and foreign, invading phage DNA. Only two mechanisms in the latter category had been characterized before researchers uncovered SNIPE.

“Right now, the phage field is at a really interesting spot where people are discovering phage defense systems at a breakneck pace,” Saxton says. 

Problems at the periphery

Saxton says they had to approach the work in a somewhat roundabout way because there are currently no published structures depicting all the steps of phage genome injection. Studying processes at the membrane is challenging: Membranes are dense and chaotic, and phage genome injection is a highly transient process, lasting only a few minutes. 

SNIPE seems to discern viral DNA by interacting with proteins the phage uses to tunnel through the bacteria’s protective membrane. This “subcellular localization,” according to Saxton, may also prevent SNIPE from inadvertently chopping up the bacteria’s own genetic material.

The model outlined in the paper is that one region of SNIPE binds to a bacterial membrane protein called ManYZ, while another region likely binds to the tape measure protein from the phage. 

The tape measure protein got its name because it determines the length of the phage tail — the part of the phage between the small, leglike protrusions and the bulbous head, which contains the phage’s genetic material. The researchers revealed that the phage’s tape measure protein enters the cytoplasm during injection, a phenomenon that had not been physically demonstrated before. 

There may also be other proteins or interactions involved. 

“If you shunt the phage genome injection through an alternate pathway that isn’t ManYZ, suddenly SNIPE doesn’t defend against the phage nearly as well,” Saxton says. “It’s unclear exactly how these proteins interact, but we do know that these two proteins are involved in this genome injection process.” 

Future directions

Saxton hopes that future work will expand our understanding of what occurs during phage genome injection and uncover the structures of the proteins involved, especially the tunnel complex in the membrane through which phages insert their genome.

Members of the Laub Lab are already collaborating with another lab to determine the structure of SNIPE. In the meantime, Saxton has been working on a new defense system in which molecular mimicry — bacterial proteins imitating phage proteins — may play a role. 

Michael T. Laub, the Salvador E. Luria Professor of Biology and a Howard Hughes Medical Institute investigator, notes that one of the breakthrough experiments for demonstrating how SNIPE works came from a brainstorming session at a lab retreat.

“Daniel and I were kind of stuck with how to directly measure the effect of SNIPE during infection, but another postdoc in the lab, Ian Roney, who is a co-author on the paper, came up with a very clever idea that ultimately worked perfectly,” Laub recalls. “It’s a great example of how powerful internal collaborations can be in pushing our science forward.”



from MIT News https://ift.tt/wrWjCOi