Friday, March 6, 2026

Personal tech, social media, and the “decline of humanity”

Social psychologist Jonathan Haidt presented a forceful analysis of the damage smartphones and social media are doing to our cognition, our civic fabric, and our children’s wellbeing, and called for renewed action to ward off those effects, in the latest of MIT’s Compton Lectures on Wednesday.

“Around the world, people are getting diminished,” Haidt said. “Less intelligent, less happy, less competent. And it’s happening very fast … My argument is that if we continue with current trends as AI is coming in, it’s going to accelerate. The decline of humanity is going to accelerate.”

Haidt is the Thomas Cooley Professor of Ethical Leadership at New York University’s Stern School of Business and the author of the recent bestseller “The Anxious Generation,” which suggests that the widespread adoption of social media in the 2010s has been especially damaging to young women, making them prone to anxiety and depression.

But as Haidt has continued to examine the effects of social media on society, he has started focusing on additional issues. Our inability to put our phones away, our compulsion to check social media, and the way we spend hours a day watching short-form videos may be causing problems that go far beyond any rise in anxiety and depression.

“It turns out, it’s not the biggest thing,” Haidt said. “There’s something bigger. It is the destruction of the human capacity to pay attention. Because this is affecting most people, including most adults. And if you imagine humanity with 10 to 50 percent of its attentional ability sucked out of it, there’s not much left. We’re not very capable of doing things if we can’t focus or stay on a task for more than 30 seconds.”

Whatever solution may emerge to these problems, Haidt declared, is going to have to come from “human agency. People see a problem, they figure out a way around it. That’s what I’m hoping to promote here [to] this very important audience. So please consider what I’m saying, these trends, and then work to change them.”

Haidt’s lecture, titled, “Life After Babel: Democracy and Human Development in the Fractured, Lonely World That Technology Gave Us,” was delivered before a capacity audience of over 400 people in MIT’s Huntington Hall (Room 10-250).

The lecture spanned a variety of related topics, with Haidt presenting chart after chart showing declines in cognition, educational achievement, and happiness, all of which appear to have begun soon after the widespread adoption of smartphones in the 2010s. The individual adoption of smartphones, he noted, has been compounded by the way schools brought internet-connected computing devices into classrooms around the same time.

“The biggest, the most costly mistake we’ve ever made in the history of American education [was] to put computers and high tech on people’s desks,” Haidt said.

Distractible students with shorter attention spans are reading fewer books, he noted; some cinema students cannot sit through films. The top quartile of students continues to do well, but for most students, proficiency levels have dipped notably since the 2010s.

“Fifty years of progress in education, 50 years of progress, up in smoke, gone,” Haidt said. “We’re back to where we were 50 years ago. That’s pretty big, that’s pretty serious.”

As Haidt mentioned multiple times in his remarks, he is not an opponent of all forms of technology, or even personal communication technology, but rather is seeking to mitigate its harmful effects.

“I love tech, I love modernity, we’re all dependent on it, I love my iPhone,” Haidt said. Just as he finished that sentence, an audience member’s cellphone started ringing loudly — drawing a huge laugh from the audience.

“I did not plant that, that was a truly spontaneous demonstration of what I’m talking about,” Haidt said.

Haidt was introduced by MIT President Sally A. Kornbluth, who called him “a leading voice for reforming society’s relationship with technology.” She praised Haidt’s work, noting that he wants to “encourage us to imagine a more positive role for technology in humanity’s future.”

The Karl Taylor Compton Lecture Series was introduced in 1957. It is named for MIT’s ninth president, who led the Institute from 1930 to 1948 and also served as chair of the MIT Corporation from 1948 to 1954.

Compton, as Kornbluth observed, helped MIT evolve from being more strictly an engineering school into “a great global university” with “a new focus on fundamental scientific research.” During World War II, she added, Compton “helped invent the longstanding partnership between the federal government and America’s research universities.”

Haidt received his undergraduate degree from Yale University and his PhD from the University of Pennsylvania. He taught on the faculty at the University of Virginia for 16 years before joining New York University. He has written several widely discussed books about contemporary civic life. Haidt observed that the problems stemming from device distraction and compulsion appear to have hit so-called Gen Z — those born from roughly the mid-1990s to the early 2010s — especially hard, though he emphasized that people in that cohort are essentially victims of circumstance.

“I am not blaming Gen Z,” Haidt said. “I am saying we raised our kids in a way — we allowed the technology companies to take over childhood. We allowed a few giant companies to own our children’s attention, to show them millions of short videos, to destroy their ability to pay attention, to stop them from reading books, and this is the result.”

For a portion of his remarks, Haidt also examined the consequences of social media for politics, showing data that chart the global diminishment of democracy since the 2010s, while the world has become soaked in misinformation and conflictual online interactions.

“That, I think, is what digital technology has done to us,” Haidt said. “It was supposed to connect us, but instead it has broken things, divided us, and made it very, very hard to ever have common facts, common truths, common stories again.”

Towards the end of his remarks, Haidt also speculated that the effects of using AI will be corrosive as well, intellectually and psychologically.

“AI is not exactly going to make us better at interacting with human beings,” Haidt said.

With all this in mind, what is to be done to limit the intellectual and social damage from tech devices and social media? For one thing, Haidt suggested, we should be less impressed by high-tech innovations and social media.

“We need to disenthrall ourselves from technology,” Haidt said, paraphrasing a line written by President Abraham Lincoln. He added: “I suggest that we have a generally negative view … of social media and of AI.” This kind of “more emotionally negative or ambivalent view” will make it easier for us to reverse the way technology seems to control us.

As a practical matter, Haidt suggested, that means taking steps to limit our exposure to technology. His own public-advocacy group, The Anxious Generation Movement, suggests a set of four reforms: No smartphones for kids before they are high-school age; no social media before age 16; making schools phone-free, from bell to bell; and giving kids more independence, free play, and responsibility in the world.

Certainly there is movement toward some of these concepts. Some school districts in the U.S. are banning or limiting phone usage; Australia has also instituted a ban on social media for anyone under 16, while a handful of other countries have announced similar plans.

“There’s a gigantic techlash happening right now,” Haidt suggested. For all the sudden changes technology has introduced within the last 15 years, it is still possible, for now, for people to find a way out of our tech-induced predicament.

“The good news is, there is human agency,” Haidt said.



from MIT News https://ift.tt/dyQtriB

Thursday, March 5, 2026

Seeds of something different

In Berlin in the early 1870s, tourists began visiting a neighborhood called Barackia. It did not have museums, palaces, or any other typical attractions. Barackia was a working-class neighborhood where people grew their own food, lived in small dwellings, and established communal arrangements outside the normal reach of government. For a while, anyway: In 1872, authorities moved in and cleared out Barackia.

Still, the concept of small urban farming caught on, and by 1900, about 50,000 Berlin households were growing food, often in so-called arbor colonies. The practice has never really been abandoned: Today, by law, Germany provides residents the right to garden, still a very popular activity in urban areas.

“In a little space, you can grow a lot of produce,” says MIT Professor Kate Brown, author of a new history of urban gardening. “Once you set things up, it need not take too much of your time. You can have another job and still grow food. You go to Berlin, and many German cities, and you’re surrounded by these allotment gardens.”

But as the residents of Barackia found out, there is a politics that comes with growing your own food on common land. Other interests may want to claim or at least control the land themselves. Or they may want to tap into the labor being applied to gardening. One way or another, when many people start gardening for themselves, core questions about the organization of society seem to sprout up, too.

Brown examines urban gardening and its politics in her book, “Tiny Gardens Everywhere: The Past, Present, and Future of the Self-Provisioning City,” published by W.W. Norton. Brown is the Thomas M. Siebel Distinguished Professor in History of Science within MIT’s Program in Science, Technology, and Society. In a book with global scope, ranging from Estonia to Amsterdam and Washington, Brown contends that urban gardening has many positive spillover effects, from health and environmental benefits to community-building — apart from periods of pushback when others are trying to eliminate it.

“Community after community, people work together to create food provisioning practices,” Brown says. “And after people come together for food and gardening, then they start to solve other problems they have.”

Whose land?

“Tiny Gardens Everywhere” was several years in the making, featuring extensive archival research, with firsthand material interspersed too. Brown’s story begins in England, which had a very long tradition of people farming on common land, often in ingenious, productive ways. “Every bit of space was used,” Brown says.

Then in the late 18th century, the advent of “enclosures” for wealthy landowners privatized much land and changed social life for many. Poorer residents, even when given allotments, found them too small for self-sustaining farming.

“Private property is largely an English invention of the late 18th century,” Brown says. “Before that, and in many parts of the world to this day, people live with a communal sense of the ownership of the land.”

In Brown’s interpretation, the enclosure movement did not just claim more land for Britain’s upper class. In an industrializing society, it forced peasants into the factory labor force, whether in cities or in rural mills.

“Really what they were doing when they were enclosing land was trying to control labor, as much as controlling land,” Brown says. “Because of their reliance on the commons, peasants were self-sufficient. Who wants to go work in a factory when you could be out having fun in the forest? Expelling people was a way to force them to become homeless, the landless proletariat, with nothing to sell but their labor, for 10 or 18 hours a day.”

As Brown chronicles in detail, conflicts between communal agriculture and propertied classes have often arisen since then, in varying forms. And sometimes, in now-surprising places, because urban gardening has been more extensive than we realize.

A core section of “Tiny Gardens Everywhere” focuses on Washington, in the middle of the 20th century. During the Great Migration, which started a few decades earlier, African Americans moved north en masse, resettling in cities. They brought extensive knowledge with them about agricultural practices. In the part of Washington east of the Anacostia River, Black neighborhoods relied heavily on local gardening.

“They set up workers’ cooperatives and food cooperatives,” Brown observes. Despite often living in difficult circumstances, she adds, “I think it’s very interesting that people found really smart ways to adapt. If the neighborhood had no garbage collection, they’ll compost. No sewers, they’ll compost.”

Over time, though, authorities started claiming more land, designating homes to be torn down, and restricting the ability of residents to garden. And as Brown chronicles in the book, local officials have used restrictions on urban gardening as a form of social control, with one outcome being a homogenized social and physical landscape characterized by grass lawns for the affluent.

How much food?

Even if urban gardening has been fairly common in the past, it is natural to ask: How much food can it really provide? As Brown sees it, there is not one simple answer to that question. Victory gardens, for one thing, at one point provided about 40 percent of all produce grown in the U.S. during World War II. More recently, in 1996, 91 percent of the potatoes Russians ate came from urban allotment gardens on 1.5 percent of the country’s arable land.

As Brown also points out in the book, we may not be growing as much produce on giant farms as we think. Only 2 percent of agricultural land in the U.S. is used to produce fruits and vegetables, for instance. The U.S., as a variety of analysts and writers have observed, has corn- and soy-heavy agricultural systems at its largest scales, principally yielding corn-based products. That means, Brown says, “They’re really inefficiently [working] to produce ethanol, corn syrup, chips, and cookies.”

In sum, she adds, “Yes, I do think it’s possible to take an urban space and grow a good part of the fruits and vegetables that people need there.”

It is possible, Brown believes, for things to change on this front. For instance, Florida, Illinois, and Maine, three fairly different states in terms of politics, all have laws providing the right to garden. Oklahoma has a similar bill in the works.

“I think this approach to looking at our right to grow food, to self-provision, to step outside of markets for our most essential needs, is something that represents a unifying set of desires in our hyperpolarized political landscape,” Brown says.

Other scholars have praised “Tiny Gardens Everywhere.” Sunil Amrith, a professor of history at Yale University, has said that Brown uses “enviable skill, craft, and insight” to show “that the past of small-scale urban provisioning contains the seeds of a more resilient future for us all.”

For her part, Brown hopes the book will not only appeal to readers, but spur them to become more active about the issue, as gardeners, local policy advocates, or both.

“One of the drumbeats of this book is that people do — and maybe we all should — win the right to garden,” Brown says. 



from MIT News https://ift.tt/NvDSObB

X-raying rocks reveals their carbon-storing capacity

To avoid the worst effects of climate change, many billions of metric tons of industrially generated carbon dioxide will have to be captured and stored away by the end of this century. One place to store such an enormous amount of greenhouse gas is in the Earth itself. If carbon dioxide were pumped into the cracks and crevices of certain underground rocks, the fluid would react with the rocks and solidify carbon into minerals. In this way, carbon dioxide could potentially be locked in the rocks in stable form for millions of years without escaping back into the atmosphere.

Some pilot projects are already underway to demonstrate such “carbon mineralization.” These efforts have shown promising results in terms of successfully mineralizing a large fraction of injected CO2. However, it’s less clear how the rocks will evolve in response. As carbonate minerals build up, could they clog up cracks and crevices, and ultimately limit the amount of CO2 that can be stored there?

In a new study appearing today in the journal AGU Advances, MIT geophysicists explored this question by injecting fluid into rocks and using X-ray imaging to reveal how the rocks’ pores and cracks changed as the fluid mineralized over time.

Their experiments showed that as fluid was pumped into a rock, the rock’s permeability (the ability of fluid to flow through the rock) dropped sharply. Meanwhile, the rock’s porosity (its total amount of empty space, in the form of pores, cracks, and crevices) remained relatively the same.

The researchers found that the minerals were precipitating out of the fluid in the narrower tunnels connecting larger pores, preventing the fluid from flowing into larger pore spaces. Even so, the fluid did keep flowing through the rock, albeit at a lower rate, and minerals continued to form in some cracks and crevices.

“This study gives you information about what the rock does during this complex mineralization process, which could give you ideas of how to engineer it in your favor,” says study co-author Matěj Peč, an associate professor of geophysics at MIT.

“If you were injecting CO2 into the Earth and saw a massive drop in permeability, some operators might think they clogged up the well,” adds co-author Jonathan Simpson, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “But as this study shows, in some cases, it might not matter that much. As long as you maintain some flow rate, you could still form minerals and sequester carbon.”

The study’s co-authors include EAPS Research Scientist Hoagy O’Ghaffari as well as Sharath Mahavadi and Jean Elkhoury of the Schlumberger-Doll Research Center.

Drilling down

Basalt is a type of erupted volcanic rock that is found in places such as Hawaii and Iceland. When fresh, it’s highly porous, with many pores, cracks, and fractures running through the rock. The material is also rich in iron, calcium, and magnesium. When these elements come in contact with fluid that is rich in carbon dioxide, they can dissolve and mix with the CO2, eventually forming new carbonate minerals such as calcite or dolomite.

A project based in Iceland and piloted by the company CarbFix is currently injecting CO2-rich water into the region’s underground basalt to see how much of the gas can be converted and stored as minerals in the rock. The company’s runs have shown that more than 95 percent of the CO2 injected into the ground turns into minerals within two years. The project is proving that the chemistry works: CO2 can be stored as stone.

But the MIT team wondered how this mineralization process would change the basalt itself and its capacity to store carbon over time.

“Most studies investigating carbon mineralization have focused on optimizing the geochemistry, but we wanted to know how mineralization would affect real reservoir rocks,” Peč says.

Rocky X-rays

The team set out to study how the permeability and porosity of basalt changes as carbonate-rich fluid is pumped into and mineralized throughout the rock.

“Porosity refers to the total amount of open space in the rock, which could be in the form of vesicles, or fractures that connect vesicles, or even areas between sand grains,” Simpson explains. “Because there is so much variability in porosity patterns, there is no one-to-one relationship between porosity and permeability. You could have a lot of pores that are not necessarily connected. So, even if 20 percent of the rock is porous, if they’re not connected, then permeability would be zero.”
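Simpson’s point — that a rock can be quite porous while permitting no flow at all — can be sketched with a toy grid model. This is purely an illustration, not the study’s method: porosity is just the fraction of open cells, while flow requires a connected path of open cells from inlet to outlet, checked here with a breadth-first search.

```python
# Toy model of porosity vs. connectivity (illustration only, not the
# study's analysis). Grid cells are 1 (pore) or 0 (solid).
from collections import deque

def porosity(grid):
    """Fraction of the grid that is open pore space."""
    cells = [c for row in grid for c in row]
    return sum(cells) / len(cells)

def spans(grid):
    """True if pore cells connect the left edge (inlet) to the right
    edge (outlet) — a crude stand-in for nonzero permeability."""
    rows, cols = len(grid), len(grid[0])
    seen = {(r, 0) for r in range(rows) if grid[r][0]}
    queue = deque(seen)
    while queue:
        r, c = queue.popleft()
        if c == cols - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

# Two grids with identical porosity (25%) but opposite connectivity:
channel  = [[1, 1, 1, 1],   # one connected channel -> flow possible
            [0, 0, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0]]
isolated = [[1, 0, 1, 0],   # isolated pores -> no flow path
            [0, 0, 0, 0],
            [0, 1, 0, 1],
            [0, 0, 0, 0]]
```

Both grids have a porosity of 0.25, yet only `channel` lets fluid cross — which mirrors the experiments, where blocking a few narrow connecting cracks collapsed permeability while barely changing porosity.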

“The details of that are important to understand for all these problems of injecting fluids into the subsurface,” Peč emphasizes.

For their experiments, the team used samples of basalt that Peč and others collected during a trip to Iceland in 2023. They placed small samples of basalt in a custom-built holder that they connected to two tubes, through which they flowed two different fluids, each containing a solution that, when mixed, quickly forms carbonate minerals. The team chose this combination of fluids in order to speed up the mineralization process.

In the actual process of injecting CO2 into the ground, CO2 is mixed with water. When it is pumped through rock, the fluid first goes through a “dissolution” phase, in which it draws elements such as iron, calcium, and magnesium out from the basalt and into the CO2-rich fluid. This dissolution process can take some time, before the mineralization process, in which CO2 mixes with the drawn-out elements, can proceed.

The researchers used two different fluids that quickly mineralize when combined, in order to skip over the dissolution phase and efficiently study the effects of the mineralization process. The team was able to see the mineralization process occurring within the rock, at an unprecedented level of detail, by performing experiments inside an X-ray CT scanner. The team set up their experiment in a CT scanner (similar to the ones used for medical imaging in hospitals) and took high-resolution, three-dimensional snapshots of the basalt periodically, over several days to weeks, as they flowed the fluids through.

Their imaging revealed how the pores, cracks, and crevices in the rock evolved, and filled in with minerals as the fluid flowed through over time. Over multiple experiments, they found that the rock’s permeability quickly dropped within a day, by an order of magnitude. The rock’s porosity, however, decreased at a much slower rate. At the end of the longest-duration experiments, only about 5 percent of the original pore space was filled with new minerals.

“Our findings tell us that the minerals are initially forming in really small microcracks that connect the bigger pore spaces, and clogging up those spaces,” Simpson says. “You don’t need much to clog up the tiny microfractures. But when you do clog them up, that really drops the permeability.”

Even after the initial drop in permeability, however, the team could continue to flow fluid through, and minerals continued to form in tight spaces within the rock. This suggests that even when it seems like an underground reservoir is full, it might still be able to store more carbon.

The researchers also monitored the rock with ultrasonic sensors during each experiment and found that the sensor could track even small changes in the rock’s porosity. The less porous, or more filled in the rock was with minerals, the faster sound waves traveled through the material. These results suggest that acoustic sensors could be a reliable way to monitor the porosity of underground rocks and ultimately their capacity to store carbon.

“Overall, we think that carbon mineralization seems like a promising avenue to permanently store large volumes of CO2,” Peč concludes. “There are plenty of reservoirs and they should be injectable over extended periods of time if our results can be extrapolated.”

This work was supported by MIT’s Advanced Carbon Mineralization Initiative funded by Beth Siegelman SM ’84 and Russ Siegelman ’84, with additional funding from the Chan Zuckerberg Initiative.



from MIT News https://ift.tt/jG5sq83

For one learner, online MIT courses are “like getting a Ferrari for the price of an electric scooter”

As a professional mechanical engineer, Badri Ratnam was inspired when MIT started offering massive open online courses (MOOCs) in engineering and science in 2012. He wondered if he was up to the challenge of solving problem sets and successfully completing exams from MIT.

Ratnam first began his journey with the course 8.MReVx/8.MReV (Mechanics ReView), and he hasn’t looked back since. As he grew in his career in mechanical design and computer-aided engineering, he also completed nearly 40 MITx courses in physics, mechanical engineering, and materials science. 

Part of MIT Open Learning, MITx offers free online courses across a wide variety of subjects to learners around the world. Learners may also opt for the certificate track for a low fee. 

Ratnam has worked for companies such as Freudenberg e-Power Systems, Siemens, GE, and Westport Fuel Systems. His continued learning through MITx courses, as well as courses offered by other universities, has expanded his expertise to include areas such as physics, mechanics of materials, transport phenomena, failure and root cause analysis, validation and verification testing, vibration signal processing, certification and compliance, statistical quality control, manufacturing, reliability, supplier selection, and more.

“There are many different learning styles,” says Ratnam. “Some people might need to be in a classroom, and others might be able to learn entirely on their own from a textbook. Personally, I benefit from some amount of structure, including having timelines and deadlines, as well as assignments and discussion forums. With MITx, there is also the excitement of the rigor that can be a boost of adrenaline — trying to see whether you can tackle some of the toughest material, presented by a top institution.”

Supplementing engineering education with extensive course offerings

Ratnam earned a bachelor’s degree in engineering from the University of Delhi. He says during his undergraduate program he tended to study the night before exams, and was “more focused on passing the subject than deep learning.”

He followed his undergrad studies with a master of science degree in mechanical engineering from the University of South Florida and an MS in computational and applied mathematics from Simon Fraser University in British Columbia. Even with all of his degrees, he felt that he needed to revisit the engineering subjects he had initially learned as an undergraduate student, pursuing online courses to review the fundamentals and gain greater understanding and mastery.

The MITx courses Ratnam has taken have covered many different areas within engineering, physics, mathematics, supply chains, and manufacturing. He has recently completed Vibrations and Waves, taught by Yen-Jie Lee, Alex Shvonski, and Michelle Tomasik.

“It’s an 18-week class with over 40 lessons, 13 assignments, and three exams, all designed very deliberately. I don’t think I could have ever learned this very difficult subject without this structure,” says Ratnam. “It’s also important to note that I paid less than $100 for this class. MITx does not follow the dictum that ‘you get what you pay for.’ It’s like getting a Ferrari for the price of an electric scooter.”

Ratnam has also recently finished Information Entropy: Energy and Exergy, taught by former MIT Open Learning dean for digital learning Krishna Rajagopal, Peter Dourmashkin, and Aidan MacDonagh, as well as Shvonski and Tomasik.

Although Ratnam says he can’t pick a favorite course — and is hard-pressed to even pick a few favorites of the many MITx courses he has taken — he says he has especially liked these recent courses and Elements of Structures, taught by Alexie M. Kolpak and Simona Socrate. In addition to the many MITx courses he has taken, he has also completed a few MIT Professional Education programs in smart manufacturing and design. 

“As I’ve taken more and more courses, I’ve learned to never fear learning new things and exploring new areas,” says Ratnam. “I used to think of more unfamiliar subjects and feel a little terrified, not knowing where to start, but I don’t feel that anymore. I know that with some time and effort, I can pick up new skills and knowledge.”

Ratnam has found the discussion forums for MITx courses to be especially useful to the learning process.

“This is where the rigorous, engaging, yet automated, courses come to life,” says Ratnam. “Learners from all over the world help each other in the problem sets and discuss their conceptual doubts. And the forums are diligently monitored by MIT staff to ensure there are no open questions, and all errors are corrected.”

Increasing value in the workplace

Ratnam says that his MITx studies have deepened his understanding of a variety of engineering topics, giving him new insights to apply as an engineer.

“My learnings from MITx courses have really helped me gain the confidence of having a deep understanding on the theoretical side,” says Ratnam. “I’ve developed a wide base of knowledge and have become the go-to person whom people come to with questions.”

Ratnam has found MITx to be an excellent professional development resource. He notes that while many professionals have access to and complete courses offered at or through their workplaces, these usually aim to enable people to complete a very specific goal — such as performing a set task at work — within a short period of time. He says that with online courses, it’s a much different timeline and result.

“MITx classes have provided me with a much broader overview of engineering phenomena,” says Ratnam. “The benefit of the classes might not always come immediately. It can be a long gestation period for the information to all gel together. It’s much more of a profound and long-term benefit.”

Explore lifelong learning opportunities from the Institute, including online courses, resources, and professional programs, on MIT Learn.



from MIT News https://ift.tt/jlzWirR

New catalog more than doubles the number of gravitational-wave detections made by LIGO, Virgo, and KAGRA observatories

When the densest objects in the universe collide and merge, the violence sets off ripples, in the form of gravitational waves, that reverberate across space and time, over hundreds of millions and even billions of years. By the time they pass through Earth, such cosmic ripples are barely discernible.

And yet, scientists are able to detect them, thanks to a global network of gravitational-wave observatories: the U.S.-based National Science Foundation Laser Interferometer Gravitational-Wave Observatory (NSF LIGO), the Virgo interferometer in Italy, and the Kamioka Gravitational Wave Detector (KAGRA) in Japan. Together, the observatories “listen” for faint wobbles in the gravitational field that could have come from far-off astrophysical smash-ups.

Now the LIGO-Virgo-KAGRA (LVK) Collaboration is publishing its latest compilation of gravitational-wave detections, presented in a forthcoming special issue of Astrophysical Journal Letters. From the findings, it appears that the universe is echoing all over with a kaleidoscope of cosmic collisions.

The LVK’s Gravitational-Wave Transient Catalog-4.0 (GWTC-4) comprises detections of gravitational waves from a portion of the observatories’ fourth and most recent observing run, which occurred between May 2023 and January 2024. During this nine-month period, the observatories detected 128 new gravitational-wave “candidates,” meaning that the signals are likely from extreme, far-off astrophysical sources. (The LVK has detected about 300 mergers so far in the fourth run, but not all of these appear yet in the LVK catalog.)

This newest crop more than doubles the size of the gravitational-wave catalog, which previously contained 90 candidates compiled from all three previous observing runs.

“The beautiful science that we are able to do with this catalog is enabled by significant improvements in the sensitivity of the gravitational-wave detectors as well as more powerful analysis techniques,” says LVK member Nergis Mavalvala, who is dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics.

“In the past decade, gravitational wave astronomy has progressed from the first detection to the observation of hundreds of black hole mergers,” says Stephen Fairhurst, a professor at Cardiff University and LIGO Scientific Collaboration spokesperson. “These observations enable us to better understand how black holes form from the collapse of massive stars, probe the cosmological evolution of the universe and provide increasingly rigorous confirmations of the theory of general relativity.”

“Pushing the edges”

Black holes are created when all the matter in a dying star collapses into a single point, making them among the densest objects in the universe. They often form in pairs, bound together by gravitational attraction. As they spiral toward each other, they emit enormous amounts of energy in the form of gravitational waves before merging into a single, more massive black hole.

A binary black hole was the source of the very first gravitational-wave detection, made by NSF’s LIGO observatories in 2015, and colliding black holes are the source of many of the gravitational waves detected since then. Such “bread-and-butter” binaries typically consist of two black holes of similar size (usually several tens of times more massive than the sun) that merge into one larger black hole.

Gravitational waves can also be produced by the collision of a black hole with a neutron star, which is an extremely dense remnant core of a massive star. While the collision of two black holes only produces gravitational waves, a smash-up involving a neutron star can also generate light, which provides more information about the event that scientists can probe. In its first three observing runs, the LVK observatories detected signals from a handful of collisions involving a black hole and neutron star, as well as two collisions between two neutron stars.

The newest detections published today reveal a greater variety of binaries that produce gravitational waves. In addition to typical black hole binaries, the updated catalog includes the heaviest black hole binary detected to date; a binary with lopsided, asymmetric masses; and a binary in which both black holes have exceptionally high spins. The catalog also holds two black hole-neutron star binaries.

“The message from this catalog is: We are expanding into new parts of what we call ‘parameter space’ and a whole new variety of black holes,” says co-author Daniel Williams, a research fellow at the University of Glasgow and a member of the LVK. “We are really pushing the edges, and are seeing things that are more massive, spinning faster, and are more astrophysically interesting and unusual.”

Unusual signals

The LIGO, Virgo, and KAGRA observatories detect gravitational waves using L-shaped, kilometer-scale instruments called interferometers. Scientists send laser light down the length of each arm and precisely measure the time it takes each beam to return to its source. Any slight difference in their timing can mean that a gravitational wave passed through and minutely wobbled the laser light.

For the first segment of the LVK’s fourth observing run, gravitational-wave detections were made using only LIGO’s identical interferometers — one located in Hanford, Washington, and the other in Livingston, Louisiana. Recent upgrades to LIGO’s detectors enabled them to search for signals from binary neutron stars as far out as 360 megaparsecs, or about 1 billion light-years away, and for signals from binaries including black holes tens of times farther away.
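
As a quick sanity check on those figures, the unit conversion is one line of arithmetic (using the standard value of about 3.26 million light-years per megaparsec):

```python
# Convert the quoted binary-neutron-star detection range from
# megaparsecs to light-years (1 Mpc ≈ 3.26 million light-years).
MPC_IN_LIGHT_YEARS = 3.26e6

range_mpc = 360
range_light_years = range_mpc * MPC_IN_LIGHT_YEARS

# About 1.2 billion light-years, consistent with the rough
# "1 billion light-years" figure quoted above.
print(f"{range_light_years:.2e} light-years")
```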

“You can’t ever predict when a gravitational wave is going to come into your detector,” says co-author and LVK member Amanda Baylor, a graduate student at the University of Wisconsin at Milwaukee who was involved in the signal search process. “We could have five detections in one day, or one detection every 20 days. The universe is just so random.”

Among the more unusual signals that LIGO detected in the first phase of the fourth (O4) observing run was GW231123_135430, the heaviest black hole binary detected to date. Scientists estimate that the signal arose from the collision of two heavier-than-normal black holes, each roughly 130 times as massive as the sun. (Most of the detected merging black holes are around 30 solar masses.) The much heavier black holes of GW231123_135430 suggest that each may be the product of a prior collision of lighter “progenitor” black holes.

Another standout is GW231028_153006, a black hole binary with the highest inspiral spin detected to date, meaning that both black holes appear to be spinning very fast, at about 40 percent of the speed of light. Again, scientists suspect that these black holes were products of previous mergers that spun them up as they formed from two smaller, inspiraling black holes.

The O4 run also detected GW231118_005626 — an unusually lopsided pair, with one black hole twice as massive as the other. 

“One of the striking things about our collection of black holes is their broad range of properties,” says co-author LVK member Jack Heinzel, an MIT graduate student who contributed to the catalog’s analysis. “Some of them are over 100 times the mass of our sun, others are as small as only a few times the mass of the sun. Some black holes are rapidly spinning, others have no measurable spin. We still don’t completely understand how black holes form in the universe, but our observations offer a crucial insight into these questions.”

Cosmic connections

From the newest gravitational-wave detections, scientists have begun to make connections about the properties of black holes as a population.

“For instance, this dataset has increased our belief that black holes that collided earlier in the history of the universe could more easily have had larger spins than the ones that collided later,” says LVK member Salvatore Vitale, associate professor of physics at MIT and member of the MIT LIGO Lab.

This idea raises interesting questions about what sort of conditions could have spun up black holes in the early universe.

The new detections have also allowed scientists to test Albert Einstein’s general theory of relativity, which describes gravity as a geometric property of space and time.

“Black holes are one of the most iconic and mind-bending predictions of general relativity,” says co-author and LVK member Aaron Zimmerman, associate professor of physics at the University of Texas at Austin, adding that when black holes collide, they “shake up space and time more intensely than almost any other process we can imagine observing. When testing our physical theories, it’s good to look at the most extreme situations we can, since this is where our theories are most likely to break down, and where we have the best chance of discovery.”

Scientists put Einstein’s theory to the test using GW230814_230901, one of the “loudest” gravitational-wave signals observed to date. The surprisingly clear signal gave scientists a chance to probe it in detail, to see whether any aspects of the signal deviate from what Einstein’s theory predicts. The signal pushed their tests of general relativity to their limits, passing most with flying colors while illustrating how environmental noise can complicate others in such an extreme scenario.

“So far, the theory is passing all our tests,” Zimmerman says. “But we’re also learning that we have to make even more accurate predictions to keep up with all the data the universe is giving us.”

The updated catalog is also helping scientists to nail down a key mystery in cosmology: How fast is the universe expanding today? Scientists have tried to answer this by measuring a rate known as the Hubble constant. Various methods, using different astrophysical sources, have given conflicting answers.

Gravitational waves offer an alternative way to measure the Hubble constant, since scientists are able to work out, in relatively straightforward fashion, how far these waves traveled from their source.

“Merging black holes have a really unique property: We can tell how far away they are from Earth just from analyzing their signals,” says co-author and LVK member Rachel Gray, a lecturer at the University of Glasgow who was involved in the cosmological interpretations of the catalog’s data. “So, every merging black hole gives us a measurement of the Hubble constant, and by combining all of the gravitational wave sources together, we can vastly improve how accurate this measurement is.”

By analyzing all the gravitational-wave detections in the LVK’s entire catalog, scientists have come up with a new, independent estimate of the Hubble constant, which suggests the universe is expanding at a rate of 76 kilometers per second per megaparsec (a megaparsec is a distance of about 3.26 million light-years).
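
At low redshift, the idea behind this measurement reduces to the Hubble law: the constant is recession velocity divided by distance, with velocity approximated as the speed of light times the redshift. A toy calculation with assumed, illustrative numbers (not LVK data):

```python
# Toy standard-siren estimate of the Hubble constant.
# The redshift and distance below are illustrative values,
# not measurements from the LVK catalog.
C_KM_PER_S = 299_792.458  # speed of light in km/s

def hubble_constant(redshift, distance_mpc):
    # For small redshifts, recession velocity ≈ c * z (Hubble's law).
    velocity_km_per_s = C_KM_PER_S * redshift
    return velocity_km_per_s / distance_mpc  # km/s per megaparsec

h0 = hubble_constant(redshift=0.1, distance_mpc=400.0)
print(f"H0 ≈ {h0:.1f} km/s/Mpc")
```

The gravitational-wave signal supplies the distance directly from the waveform, while the redshift must come from elsewhere, such as an identified host galaxy or statistical methods, which is what makes this approach independent of other cosmological probes.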

“It’s still early days for this method, and we expect to significantly improve our precision as we detect more gravitational wave sources,” Gray says.

“Each new gravitational-wave detection allows us to unlock another piece of the universe’s puzzle in ways we couldn’t just a decade ago,” says Lucy Thomas, who led part of the catalog’s analysis, and is a postdoc in the Caltech LIGO Lab. “It’s incredibly exciting to think about what astrophysical mysteries and surprises we can uncover with future observing runs.”



from MIT News https://ift.tt/bZj5QTr

Wednesday, March 4, 2026

Nitrous oxide, a product of fertilizer use, may harm some soil bacteria

Plant growth is supported by millions of tiny soil microbes competing and cooperating with each other as they perform important roles at the plant root, including improving access to nutrients and protecting against pathogens. As a byproduct of their metabolism, soil microbes can also produce nitrous oxide, or N2O, a potent greenhouse gas that has mostly been studied for its impact on the climate. While some N2O occurs naturally, its production can spike due to fertilizer application and other factors.

While it has long been believed that nitrous oxide doesn’t meaningfully interact with living organisms, a new paper by two MIT researchers shows that it may in fact shape microbial communities, making some bacterial strains more likely to grow than others.

Based on the prevalence of the biological processes disrupted by nitrous oxide, the researchers estimate about 30 percent of all bacteria with sequenced genomes are susceptible to nitrous oxide toxicity, suggesting the substance could play an important and underappreciated role in the intricate microbial ecosystems that influence plant growth.

The researchers have published their findings today in mBio, a journal of the American Society for Microbiology. If their lab findings carry over to agricultural settings, it could influence the way farmers go about everyday tasks that expose crops to spikes in nitrous oxide, such as watering and fertilization.

“This work suggests N2O production in agricultural settings is worth paying attention to for plant health,” says senior author Darcy McRose, MIT’s Thomas D. and Virginia W. Cabot Career Development Professor, who wrote the paper with lead author and PhD student Philip Wasson. “It hasn’t been on people’s radar, but it is particularly harmful for certain microbes. This could be another knock against N2O in addition to its climate impact. With more research, you might be able to understand how the timing of N2O production influences these microbial relationships, and that timing could be managed to improve crop health.”

A toxic gas

Nitrous oxide was shown to be toxic decades ago, when researchers realized it can deactivate vitamin B12 in the human body. Since then, it has mostly drawn attention as a long-lived greenhouse gas that can eat away at the ozone layer. But when it comes to agricultural settings, most people have assumed it doesn’t interact with organisms growing in the soil around the plant root, a region called the rhizosphere.

“In general, there’s an assumption that N2O is not harmful at all despite this history of published studies showing that it can be toxic in specific contexts,” says McRose, who joined the faculty of the Department of Civil and Environmental Engineering in 2022. “People have not extended that understanding to microbial communities in the rhizosphere.”

While some studies have shown nitrous oxide sensitivity in a handful of microorganisms, less is known about how it impacts the distribution of microbial communities at the plant root. McRose and Wasson sought to fill that research gap.

They started by looking at a ubiquitous process that cells use to grow, called methionine biosynthesis. Methionine biosynthesis can be carried out by enzymes that are dependent on vitamin B12, and by other enzymes that are not. Many bacteria have both types.

Using a well-studied microbe named Pseudomonas aeruginosa, the researchers genetically removed the enzyme that isn’t dependent on B12 and found the microbe became sensitive to nitrous oxide, with its growth harmed even by nitrous oxide it produced itself.

Next the researchers looked at a synthetic microbial community derived from the plant Arabidopsis thaliana, finding that many root-based microbes were also sensitive to nitrous oxide. Combining sensitive microbes with nitrous oxide-producing bacteria hampered the sensitive strains’ growth.

“This suggests that N2O-producing bacteria can affect the survival of their immediate neighbors,” Wasson explains. Together, the experiments confirmed the researchers’ suspicion that the production of nitrous oxide can hamper the growth of soil bacteria dependent on vitamin B12 to make methionine.

“These results suggest nitrous oxide producers shape microbial communities,” McRose says. “In the lab the result is very clear, and the work goes beyond just looking at a single organism. The co-culture experiments aren’t the same as a study in the field, but it’s a strong demonstration.”

From the lab to the farm

In farms, soil commonly experiences spikes of nitrous oxide for days or weeks from the addition of nitrogen fertilizer, rainfall, thawing, and other events. The researchers caution that their lab experiments are only the first step toward understanding how nitrous oxide affects microbial populations in agricultural settings.

Wasson calls the paper a proof of concept and plans to study agricultural soil next.

“In agricultural environments, N2O has been historically high,” Wasson says. “We want to see if we can detect a signature for this N2O exposure through genome sequencing studies, where the only microbes sticking around are not sensitive to N2O. This is the obvious next step.”

McRose says the findings could lead to a new way for researchers and farmers to think about nitrous oxide.

“What’s important and exciting about this case is it predicts that microbes with one version of an enzyme are going to be sensitive to N2O and those with a different version of the enzyme are not going to be sensitive,” McRose says. “This suggests that in the environment, exposure to N2O is going to select for certain types of organisms based on their genomic content, which is a highly testable hypothesis.”

The work was supported, in part, by the MIT Research Support Committee and an MIT Health and Life Sciences Collaborative Graduate Fellowship (HEALS).



from MIT News https://ift.tt/iGL6ogO

Tuesday, March 3, 2026

A “ChatGPT for spreadsheets” helps solve difficult engineering challenges faster

Many engineering challenges come down to the same headache — too many knobs to turn and too few chances to test them. Whether tuning a power grid or designing a safer vehicle, each evaluation can be costly, and there may be hundreds of variables that could matter.

Consider car safety design. Engineers must integrate thousands of parts, and many design choices can affect how a vehicle performs in a collision. Classic optimization tools can struggle to search through so many possible combinations for the best one.

MIT researchers developed a new approach that rethinks how a classic method, known as Bayesian optimization, can be used to solve problems with hundreds of variables. In tests on realistic engineering-style benchmarks, like power-system optimization, the approach found top solutions 10 to 100 times faster than widely used methods.

Their technique leverages a foundation model trained on tabular data that automatically identifies the variables that matter most for improving performance, repeating the process to home in on better and better solutions. Foundation models are huge artificial intelligence systems trained on vast, general datasets, which allows them to adapt to many different applications.

The researchers’ tabular foundation model does not need to be constantly retrained as it works toward a solution, increasing the efficiency of the optimization process. The technique also delivers greater speedups for more complicated problems, so it could be especially useful in demanding applications like materials development or drug discovery.

“Modern AI and machine-learning models can fundamentally change the way engineers and scientists create complex systems. We came up with one algorithm that can not only solve high-dimensional problems, but is also reusable so it can be applied to many problems without the need to start everything from scratch,” says Rosen Yu, a graduate student in computational science and engineering and lead author of a paper on this technique.

Yu is joined on the paper by Cyril Picard, a former MIT postdoc and research scientist, and Faez Ahmed, associate professor of mechanical engineering and a core member of the MIT Center for Computational Science and Engineering. The research will be presented at the International Conference on Learning Representations.

Improving a proven method

When scientists seek to solve a multifaceted problem for which each evaluation is expensive, like crash-testing a car to judge how good each design is, they often use a tried-and-true method called Bayesian optimization. This iterative method finds the best configuration for a complicated system by building a surrogate model that estimates what to explore next while accounting for the uncertainty of its predictions.
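
The loop described here can be sketched in a few lines. This is a generic toy version, not the researchers’ algorithm: the hypothetical objective is a simple quadratic, the surrogate is a crude kernel-smoothing estimate with a distance-based uncertainty term, and the acquisition rule is an optimistic mean-plus-uncertainty score:

```python
import math
import random

def expensive_objective(x):
    # Stand-in for a costly evaluation, e.g., one crash-test simulation.
    return -(x - 0.3) ** 2

def surrogate(x, xs, ys, length=0.2):
    # Kernel-weighted mean prediction plus a crude uncertainty term:
    # far from observed points, the weights shrink and uncertainty grows.
    weights = [math.exp(-((x - xi) / length) ** 2) for xi in xs]
    total = sum(weights)
    mean = sum(w * y for w, y in zip(weights, ys)) / total
    uncertainty = 1.0 / (1.0 + total)
    return mean, uncertainty

def optimize(n_init=3, n_iter=10, seed=0):
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n_init)]
    ys = [expensive_objective(x) for x in xs]
    candidates = [i / 200 for i in range(201)]
    for _ in range(n_iter):
        # Acquisition: pick the candidate with the best optimistic score
        # (predicted mean plus an exploration bonus).
        def score(x):
            mean, unc = surrogate(x, xs, ys)
            return mean + 2.0 * unc
        x_next = max(candidates, key=score)
        xs.append(x_next)
        ys.append(expensive_objective(x_next))
    best_y, best_x = max(zip(ys, xs))
    return best_x, best_y

best_x, best_y = optimize()
```

Each iteration spends one expensive evaluation where the surrogate is either promising or uncertain; managing that trade-off at much higher dimension is what the researchers’ tabular-foundation-model surrogate is built for.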

But the surrogate model must be retrained after each iteration, which can quickly become computationally intractable when the space of potential solutions is very large. In addition, scientists need to build a new model from scratch any time they want to tackle a different scenario.

To address both shortcomings, the MIT researchers utilized a generative AI system known as a tabular foundation model as the surrogate model inside a Bayesian optimization algorithm.

“A tabular foundation model is like a ChatGPT for spreadsheets. The input and output of these models are tabular data, which in the engineering domain is much more common to see and use than language,” Yu says.

Just like large language models such as ChatGPT, Claude, and Gemini, the model has been pre-trained on an enormous amount of tabular data. This makes it well-equipped to tackle a range of prediction problems. In addition, the model can be deployed as-is, without the need for any retraining.

To make their system more accurate and efficient for optimization, the researchers employed a trick that enables the model to identify features of the design space that will have the biggest impact on the solution.

“A car might have 300 design criteria, but not all of them are the main driver of the best design if you are trying to increase some safety parameters. Our algorithm can smartly select the most critical features to focus on,” Yu says.

It does this by using the tabular foundation model to estimate which variables, or combinations of variables, most influence the outcome.

It then focuses the search on those high-impact variables instead of wasting time exploring everything equally. For instance, if the size of the front crumple zone increased significantly and the car’s safety rating improved, that feature likely played a role in the improvement.
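
One simple way to see how such a screening step can work is a perturbation test: resample one variable at a time and measure how much the objective moves. The paper’s method uses the tabular foundation model for this estimate; the sketch below is a hypothetical brute-force sensitivity screen on a toy objective where only two of 20 variables matter:

```python
import random

def objective(x):
    # Hypothetical design objective: only the first two of many
    # variables actually influence the outcome.
    return -(x[0] - 0.5) ** 2 - 2.0 * (x[1] - 0.2) ** 2

def estimate_importance(objective, dim, n_samples=200, seed=1):
    # Crude sensitivity screen: resample one coordinate at a time and
    # accumulate how much the objective changes.
    rng = random.Random(seed)
    importance = [0.0] * dim
    for _ in range(n_samples):
        x = [rng.random() for _ in range(dim)]
        base = objective(x)
        for i in range(dim):
            perturbed = list(x)
            perturbed[i] = rng.random()
            importance[i] += abs(objective(perturbed) - base)
    return importance

dim = 20
scores = estimate_importance(objective, dim)
# Focus the subsequent search on the highest-impact variables.
top = sorted(range(dim), key=lambda i: scores[i], reverse=True)[:2]
print(top)
```

In this toy case, the 18 irrelevant variables accumulate exactly zero sensitivity, so the search can safely be restricted to the two that matter.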

Bigger problems, better solutions

One of their biggest challenges was finding the best tabular foundation model for this task, Yu says. Then they had to connect it with a Bayesian optimization algorithm in such a way that it could identify the most prominent design features.

“Finding the most prominent dimension is a well-known problem in math and computer science, but coming up with a way that leveraged the properties of a tabular foundation model was a real challenge,” Yu says.

With the algorithmic framework in place, the researchers tested their method by comparing it to five state-of-the-art optimization algorithms.

On 60 benchmark problems, including realistic situations like power grid design and car crash testing, their method consistently found the best solution between 10 and 100 times faster than the other algorithms.

“When an optimization problem gets more and more dimensions, our algorithm really shines,” Yu says.

But their method did not outperform the baselines on every problem; one example is robotic path planning. This likely indicates that such scenarios were not well represented in the model’s training data, Yu says.

In the future, the researchers want to study methods that could boost the performance of tabular foundation models. They also want to apply their technique to problems with thousands or even millions of dimensions, like the design of a naval ship.

“At a higher level, this work points to a broader shift: using foundation models not just for perception or language, but as algorithmic engines inside scientific and engineering tools, allowing classical methods like Bayesian optimization to scale to regimes that were previously impractical,” says Ahmed.

“The approach presented in this work, using a pretrained foundation model together with high-dimensional Bayesian optimization, is a creative and promising way to reduce the heavy data requirements of simulation-based design. Overall, this work is a practical and powerful step toward making advanced design optimization more accessible and easier to apply in real-world settings,” says Wei Chen, the Wilson-Cook Professor in Engineering Design and chair of the Department of Mechanical Engineering at Northwestern University, who was not involved in this research.



from MIT News https://ift.tt/VYw0po3