Monday, December 8, 2025

NIH Director Jay Bhattacharya visits MIT

National Institutes of Health (NIH) Director Jay Bhattacharya visited MIT on Friday, engaging in a wide-ranging discussion about policy issues and research aims at an event also featuring Rep. Jake Auchincloss MBA ’16 of Massachusetts.

The forum consisted of a dialogue between Auchincloss and Bhattacharya, followed by a question-and-answer session with an audience that included researchers from the greater Boston area. The event was part of a daylong series of stops Bhattacharya and Auchincloss made around Boston, a world-leading hub of biomedical research.

“I was joking with Dr. Bhattacharya that when the NIH director comes to Massachusetts, he gets treated like a celebrity, because we do science, and we take science very seriously here,” Auchincloss quipped at the outset.

Bhattacharya said he was “delighted” to be visiting, and credited the thousands of scientists who participate in peer review for the NIH. “The reason why the NIH succeeds is the willingness and engagement of the scientific community,” he said.

In response to an audience question, Bhattacharya also outlined his overall vision of the NIH’s portfolio of projects.

“You both need investments in ideas that are not tested, just to see if something works. You don’t know in advance,” he said. “And at the same time, you need an ecosystem that tests those ideas rigorously and winnows those ideas to the ones that actually work, that are replicable. A successful portfolio will have both elements in it.”

MIT President Sally A. Kornbluth gave opening remarks at the event, welcoming Bhattacharya and Auchincloss to campus and noting that the Institute’s earliest known NIH grant on record dates to 1948. In recent decades, biomedical research at MIT has boomed, expanding across a wide range of frontier fields.

Indeed, Kornbluth noted, MIT’s federally funded research projects during U.S. President Trump’s first term included a method for making anesthesia safer, especially for children and the elderly; a new type of expanding heart valve for children that eliminates the need for repeated surgeries; and a noninvasive Alzheimer’s treatment using sound and light stimulation, which is currently in clinical trials.

“Today, researchers across our campus pursue pioneering science on behalf of the American people, with profoundly important results,” Kornbluth said.

“The hospitals, universities, startups, investors, and companies represented here today have made greater Boston an extraordinary magnet for talent,” Kornbluth added. “Both as a force for progress in human health and an engine of economic growth, this community of talent is a precious national asset. We look forward to working with Dr. Bhattacharya to build on its strengths.”

The discussion occurred amid uncertainty about future science funding levels and pending changes in the NIH’s grant-review processes. The NIH has announced a “unified strategy” for reviewing grant applications that may lead to more direct involvement in grant decisions by directors of the 27 NIH institutes and centers, along with other changes that could shift the types of awards being made.

Auchincloss asked multiple questions about the ongoing NIH changes; about 10 audience members from a variety of institutions also posed a range of questions to Bhattacharya, often about the new grant-review process and the aims of the changes.

“The unified funding strategy is a way to allow institute directors to look at the full range of scoring, including scores on innovation, and pick projects that look like they are promising,” Bhattacharya said in response to one of Auchincloss’ queries.

One audience member also emphasized concerns about the long-term effects of funding uncertainties on younger scientists in the U.S.

“The future success of the American biomedical enterprise depends on us training the next generation of scientists,” Bhattacharya acknowledged.

Bhattacharya is the 18th director of the NIH, having been confirmed by the U.S. Senate in March. He has served as a faculty member at Stanford University, where he received his BA, MA, MD, and PhD, and is currently a professor emeritus. During his career, Bhattacharya’s work has often examined the economics of health care, though his research has ranged broadly across topics, in over 170 published papers. He has also served as director of the Center on the Demography and Economics of Health and Aging at Stanford University.

Auchincloss is in his third term representing Massachusetts’ 4th Congressional District in the U.S. House, having first been elected in 2020. He is also a major in the Marine Corps Reserve, and received his MBA from the MIT Sloan School of Management.

Ian Waitz, MIT’s vice president for research, concluded the session with a note of thanks to Auchincloss and Bhattacharya for their “visit to the greater Boston ecosystem which has done so much for so many and contributed obviously to the NIH mission that you articulated.” He added: “We have such a marvelous history in this region in making such great gains for health and longevity, and we’re here to do more to partner with you.”



from MIT News https://ift.tt/iaRNrSZ

Sunday, December 7, 2025

When companies “go green,” air quality impacts can vary dramatically

Many organizations are taking actions to shrink their carbon footprint, such as purchasing electricity from renewable sources or reducing air travel.

Both actions would cut greenhouse gas emissions, but which offers greater societal benefits?

In a first step toward answering that question, MIT researchers found that even if each activity reduces the same amount of carbon dioxide emissions, the broader air quality impacts can be quite different.

They used a multifaceted modeling approach to quantify the air quality impacts of each activity, using data from three organizations. Their results indicate that air travel causes about three times more damage to air quality than comparable electricity purchases.

Exposure to major air pollutants, including ground-level ozone and fine particulate matter, can lead to cardiovascular and respiratory disease, and even premature death.

In addition, air quality impacts can vary dramatically across regions, because each decarbonization action influences pollution at a different scale. For example, for organizations in the northeast U.S., the air quality impacts of energy use are felt regionally, while the impacts of air travel are felt globally, because the associated pollutants are emitted at higher altitudes.

Ultimately, the researchers hope this work highlights how organizations can prioritize climate actions to provide the greatest near-term benefits to people’s health.

“If we are trying to get to net zero emissions, that trajectory could have very different implications for a lot of other things we care about, like air quality and health impacts. Here we’ve shown that, for the same net zero goal, you can have even more societal benefits if you figure out a smart way to structure your reductions,” says Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS); director of the Center for Sustainability Science and Strategy; and senior author of the study.

Selin is joined on the paper by lead author Yuang (Albert) Chen, an MIT graduate student; Florian Allroggen, a research scientist in the MIT Department of Aeronautics and Astronautics; Sebastian D. Eastham, an associate professor in the Department of Aeronautics at Imperial College London; Evan Gibney, an MIT graduate student; and William Clark, the Harvey Brooks Research Professor of International Science at Harvard University. The research was published Friday in Environmental Research Letters.

A quantification quandary

Climate scientists often focus on the air quality benefits of national or regional policies because the aggregate impacts are more straightforward to model.

Organizations’ efforts to “go green” are much harder to quantify because they exist within larger societal systems and are impacted by these national policies.

To tackle this challenging problem, the MIT researchers used data from two universities and one company in the greater Boston area. They studied whether organizational actions that cut the same amount of CO2 emissions would deliver equivalent air quality benefits.

“From a climate standpoint, CO2 has a global impact because it mixes through the atmosphere, no matter where it is emitted. But air quality impacts are driven by co-pollutants that act locally, so where those emissions occur really matters,” Chen says.

For instance, burning fossil fuels leads to emissions of nitrogen oxides and sulfur dioxide along with CO2. These co-pollutants react with chemicals in the atmosphere to form fine particulate matter and ground-level ozone, which is a primary component of smog.

Different fossil fuels cause varying amounts of co-pollutant emissions. In addition, local factors like weather and existing emissions affect the formation of smog and fine particulate matter. The impacts of these pollutants also depend on the local population distribution and overall health.

“You can’t just assume that all CO2-reduction strategies will have equivalent near-term impacts on sustainability. You have to consider all the other emissions that go along with that CO2,” Selin says.

The researchers used a systems-level approach that involved connecting multiple models. They fed the organizational energy consumption and flight data into this systems-level model to examine local and regional air quality impacts.

Their approach incorporated many interconnected elements, such as power plant emissions data, statistical linkages between air quality and mortality outcomes, and aviation emissions associated with specific flight routes. They fed those data into an atmospheric chemistry transport model to calculate air quality and climate impacts for each activity.

The sheer breadth of the system created many challenges.

“We had to do multiple sensitivity analyses to make sure the overall pipeline was working,” Chen says.

Analyzing air quality

In the end, the researchers monetized air quality impacts to compare them with climate impacts in a consistent way. Based on prior literature, the monetized climate impacts of CO2 emissions are about $170 per ton (expressed in 2015 dollars), representing the financial cost of damages caused by climate change.

Using the same method used to monetize the impact of CO2, the researchers calculated that air quality damages associated with electricity purchases add $88 per ton of CO2, while the damages from air travel add $265 per ton.
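As a minimal back-of-the-envelope sketch (not the study’s actual model), the article’s rounded per-ton figures can be combined to compare the total monetized damages of each activity:

```python
# Monetized damages per ton of CO2 (2015 dollars), using the article's
# rounded figures. Climate damages are common to both activities; air
# quality damages differ by activity.
CLIMATE_COST = 170  # $/ton CO2, climate damages

AIR_QUALITY_COST = {
    "electricity purchases": 88,   # $/ton CO2, added air quality damages
    "air travel": 265,             # $/ton CO2, added air quality damages
}

for activity, aq_cost in AIR_QUALITY_COST.items():
    total = CLIMATE_COST + aq_cost
    print(f"{activity}: ${total} total damages per ton of CO2")
# Air travel's total ($435/ton) is roughly 1.7 times that of
# electricity purchases ($258/ton).
```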

This highlights how the air quality impacts of a ton of emitted CO2 depend strongly on where and how the emissions are produced.

“A real surprise was how much aviation impacted places that were really far from these organizations. Not only were flights more damaging, but the pattern of damage, in terms of who is harmed by air pollution from that activity, is very different than who is harmed by energy systems,” Selin says.

Most airplane emissions occur at high altitudes, where differences in atmospheric chemistry and transport can amplify their air quality impacts. These emissions are also carried across continents by atmospheric winds, affecting people thousands of miles from their source.

Nations like India and China face outsized air quality impacts from such emissions due to the higher level of existing ground-level emissions, which exacerbates the formation of fine particulate matter and smog.

The researchers also conducted a deeper analysis of short-haul flights. Their results showed that regional flights have a relatively larger impact on local air quality than longer domestic flights.

“If an organization is thinking about how to benefit the neighborhoods in their backyard, then reducing short-haul flights could be a strategy with real benefits,” Selin says.

Even in electricity purchases, the researchers found that location matters.

For instance, fine particulate matter emissions from power plants attributable to one university fall over a densely populated region, while emissions attributable to the corporation fall over less populated areas.

Due to these population differences, the university’s emissions resulted in 16 percent more estimated premature deaths than those of the corporation, even though the climate impacts are identical.

“These results show that, if organizations want to achieve net zero emissions while promoting sustainability, which unit of CO2 gets removed first really matters a lot,” Chen says.

In the future, the researchers want to quantify the air quality and climate impacts of train travel, to see whether replacing short-haul flights with train trips could provide benefits.

They also want to explore the air quality impacts of other energy uses in the U.S., such as data centers.

This research was funded, in part, by Biogen, Inc., the Italian Ministry for Environment, Land, and Sea, and the MIT Center for Sustainability Science and Strategy. 



from MIT News https://ift.tt/aDZezEk

Friday, December 5, 2025

Cultivating confidence and craft across disciplines

Both Rohit Karnik and Nathan Wilmers personify the type of mentorship that any student would be fortunate to receive — one rooted in intellectual rigor and grounded in humility, empathy, and personal support. They show that transformative academic guidance is not only about solving research problems, but about lifting up the people working on them.

Whether it’s Karnik’s quiet integrity and commitment to scientific ethics, or Wilmers’ steadfast encouragement of his students in the face of challenges, both professors cultivate spaces where students are not only empowered to grow as researchers, but affirmed as individuals. Their mentees describe feeling genuinely seen and supported; mentored not just in theory or technique, but in resilience. It’s this attention to the human element that leaves a lasting impact.

Professors Karnik and Wilmers are two of the 2023–25 Committed to Caring cohort who are cultivating confidence and craft across disciplines. The Committed to Caring program recognizes faculty who go above and beyond in mentoring MIT graduate students.

Rohit Karnik: Rooted in rigor, guided by care

Rohit Karnik is the Abdul Latif Jameel Professor in the Department of Mechanical Engineering at MIT, where he leads the Microfluidics and Nanofluidics Research Group and serves as director of the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). His research explores the physics of micro- and nanofluidic flows and systems. Applications of his work include the development of water filters, portable diagnostic tools, and sensors for environmental monitoring.

Karnik is genuinely excited about his students’ ideas, and open to their various academic backgrounds. He validates students by respecting their research, encouraging them to pursue their interests, and showing enthusiasm for their exploration within mechanical engineering and beyond.

One student reflected on the manner in which Karnik helped them feel more confident in their academic journey. When a student from a non-engineering field joined the mechanical engineering graduate program, Karnik never viewed their background as a barrier to success. The student wrote, “from the start, he was enthusiastic about my interdisciplinarity and the perspective I could bring to the lab.”

He allowed the student to take remedial undergraduate classes to learn engineering basics, provided guidance on leveraging their previous academic background, and encouraged them to write grants and apply for fellowships that would support their interdisciplinary work. In addition to these concrete supports, Karnik also provided the student with the freedom to develop their own ideas, offering constructive, realistic feedback on what was attainable. 

“This transition took time, and Karnik honored that, prioritizing my growth in a completely new field over getting quick results,” the nominator reflected. Ultimately, Karnik’s mentorship, patience, and thoughtful encouragement led the student to excel in the engineering field.

Karnik encourages his advisees to explore their interests in mechanical engineering and beyond. This holistic approach extends beyond academics and into Karnik’s view of his students as whole individuals. One student wrote that he treats them as complete humans, with ambitions, aspirations, and passions worthy of his respect and consideration — and remains truly selfless in his commitment to their growth and success.

Karnik emphasizes that “it’s important to have dreams,” regularly encouraging his mentees to take advantage of opportunities that align with their goals and values. This sentiment is felt deeply by his students, with one nominator sharing that Karnik “encourag[ed] me to think broadly and holistically about my life, which has helped me structure and prioritize my time at MIT.”

Nathan Wilmers: Cultivating confidence, craft, and care

Nathan Wilmers is the Sarofim Family Career Development Associate Professor of Work and Organizations at the MIT Sloan School of Management. His research spans wage and earnings inequality, economic sociology, and the sociology of labor, bringing insights from economic sociology to the study of labor markets and the wage structure. He is also affiliated with the Institute for Work and Employment Research and the Economic Sociology program at Sloan.

A remarkable mentor, Wilmers is known for guiding his students through different projects while also teaching them more broadly about the system of academia. As one nominator illustrates, “he … helped me learn the ‘tacit’ knowledge to understand how to write a paper,” while also emphasizing the learning process of the PhD as a whole, and never reprimanding any mistakes along the way. 

Students say that Wilmers “reassures us that making mistakes is a natural part of the learning process and encourages us to continuously check, identify, and rectify them.” He welcomes all questions without judgment, and generously invests his time and patience in teaching students.

Wilmers is a strong advocate for his students, both academically and personally. He emphasizes the importance of learning, growth, and practical experience, rather than solely focusing on scholarly achievements and goals. Students feel this care, describing “an environment that maximizes learning opportunities and fosters the development of skills,” allowing them to truly collaborate rather than simply aim for the “right” answers.

In addition to his role in the classroom and lab, Wilmers also provides informal guidance to advisees, imparting valuable knowledge about the academic system, emphasizing the significance of networking, and sharing insider information. 

“Nate’s down-to-earth nature is evident in his accessibility to students,” expressed one nominator, who wrote that “sometimes we can freely approach his office without an appointment and receive valuable advice on both work-related and personal matters.” Moreover, Wilmers prioritizes his advisees’ career advancement, dedicating a substantial amount of time to providing feedback on thesis projects, and even encouraging students to take a lead in publishing research.

True mentorship often lies in the patient, careful transmission of craft — the behind-the-scenes work that forms the backbone of rigorous research. “I care about the details,” says Wilmers, reflecting a philosophy shaped by his own graduate advisors. Wilmers’ mentors instilled in him a deep respect for the less-glamorous but essential elements of scholarly work: data cleaning, thoughtful analysis, and careful interpretation. These technical and analytical skills are where real learning happens, he believes. 

By modeling this approach with his own students, Wilmers creates a culture where precision and discipline are valued just as much as innovation. His mentorship is grounded in the belief that becoming a good researcher requires not just vision, but also an intimate understanding of process — of how ideas are sharpened through methodical practice, and how impact comes from doing the small things well. His thoughtful, detail-oriented mentorship leaves a lasting impression on his students.

One nominator wrote, “Nate’s strong enthusiasm for my research, coupled with his expressed confidence and affirmation of its value, served as a significant source of motivation for me to persistently pursue my ideas.”



from MIT News https://ift.tt/POocfNT

Thursday, December 4, 2025

Robots that spare warehouse workers the heavy lifting

There are some jobs human bodies just weren’t meant to do. Unloading trucks and shipping containers is a repetitive, grueling task — and a big reason warehouse injury rates are more than twice the national average.

The Pickle Robot Company wants its machines to do the heavy lifting. The company’s one-armed robots autonomously unload trailers, picking up boxes weighing up to 50 pounds and placing them onto onboard conveyor belts for warehouses of all types.

The company name, an homage to The Apple Computer Company, hints at the ambitions of founders AJ Meyer ’09, Ariana Eisenstein ’15, SM ’16, and Dan Paluska ’97, SM ’00. The founders want to make the company the technology leader for supply chain automation.

The company’s unloading robots combine generative AI and machine-learning algorithms with sensors, cameras, and machine-vision software to navigate new environments on day one and improve performance over time. Much of the company’s hardware is adapted from industrial partners. You may recognize the arm, for instance, from car manufacturing lines — though you may not have seen it in bright pickle-green.

The company is already working with customers like UPS, Ryobi Tools, and Yusen Logistics to take a load off warehouse workers, freeing them to solve other supply chain bottlenecks in the process.

“Humans are really good edge-case problem solvers, and robots are not,” Paluska says. “How can the robot, which is really good at the brute force, repetitive tasks, interact with humans to solve more problems? Human bodies and minds are so adaptable, the way we sense and respond to the environment is so adaptable, and robots aren’t going to replace that anytime soon. But there’s so much drudgery we can get rid of.”

Finding problems for robots

Meyer and Eisenstein majored in computer science and electrical engineering at MIT, but they didn’t work together until after graduation, when Meyer started the technology consultancy Leaf Labs, which specializes in building embedded computer systems for things like robots, cars, and satellites.

“A bunch of friends from MIT ran that shop,” Meyer recalls, noting it’s still running today. “Ari worked there, Dan consulted there, and we worked on some big projects. We were the primary software and digital design team behind Project Ara, a smartphone for Google, and we worked on a bunch of interesting government projects. It was really a lifestyle company for MIT kids. But 10 years go by, and we thought, ‘We didn’t get into this to do consulting. We got into this to do robots.’”

When Meyer graduated in 2009, problems like robot dexterity seemed insurmountable. By 2018, the rise of algorithmic approaches like neural networks had brought huge advances to robotic manipulation and navigation.

To figure out what problem to solve with robots, the founders talked to people in industries as diverse as agriculture, food prep, and hospitality. At some point, they started visiting logistics warehouses, bringing a stopwatch to see how long it took workers to complete different tasks.

“In 2018, we went to a UPS warehouse and watched 15 guys unloading trucks during a winter night shift,” Meyer recalls. “We spoke to everyone, and not a single person had worked there for more than 90 days. We asked, ‘Why not?’ They laughed at us. They said, ‘Have you tried to do this job before?’”

It turns out warehouse turnover is one of the industry’s biggest problems, limiting productivity as managers constantly grapple with hiring, onboarding, and training.

The founders raised a seed funding round and built robots that could sort boxes because it was an easier problem that allowed them to work with technology like grippers and barcode scanners. Their robots eventually worked, but the company wasn’t growing fast enough to be profitable. Worse yet, the founders were having trouble raising money.

“We were desperately low on funds,” Meyer recalls. “So we thought, ‘Why spend our last dollar on a warm-up task?’”

With money dwindling, the founders built a proof-of-concept robot that could unload trucks reliably for about 20 seconds at a time and posted a video of it on YouTube. Hundreds of potential customers reached out. The interest was enough to get investors back on board to keep the company alive.

The company piloted its first unloading system for a year with a customer in the desert of California, sparing human workers from unloading shipping containers that can reach temperatures up to 130 degrees in the summer. It has since scaled deployments with multiple customers and gained traction among third-party logistics centers across the U.S.

The company’s robotic arm is made by the German industrial robotics giant KUKA. The robots are mounted on a custom mobile base with an onboard computing system, so they can navigate to docks and adjust their positions inside trailers autonomously while lifting. The end of each arm features a suction gripper that clings to packages and moves them to the onboard conveyor belt.

The company’s robots can pick up boxes ranging in size from 5-inch cubes to 24-by-30-inch boxes. The robots can unload anywhere from 400 to 1,500 cases per hour, depending on size and weight. The company fine-tunes pre-trained generative AI models and uses a number of smaller models to ensure the robot runs smoothly in every setting.

The company is also developing a software platform it can integrate with third-party hardware, from humanoid robots to autonomous forklifts.

“Our immediate product roadmap is load and unload,” Meyer says. “But we’re also hoping to connect these third-party platforms. Other companies are also trying to connect robots. What does it mean for the robot unloading a truck to talk to the robot palletizing, or for the forklift to talk to the inventory drone? Can they do the job faster? I think there’s a big network coming in which we need to orchestrate the robots and the automation across the entire supply chain, from the mines to the factories to your front door.”

“Why not us?”

The Pickle Robot Company employs about 130 people in its office in Charlestown, Massachusetts, where a standard — if green — office gives way to a warehouse where its robots can be seen loading boxes onto conveyor belts alongside human workers and manufacturing lines.

This summer, Pickle will be ramping up production of a new version of its system, with further plans to begin designing a two-armed robot sometime after that.

“My supervisor at Leaf Labs once told me ‘No one knows what they’re doing, so why not us?’” Eisenstein says. “I carry that with me all the time. I’ve been very lucky to be able to work with so many talented, experienced people in my career. They all bring their own skill sets and understanding. That’s a massive opportunity — and it’s the only way something as hard as what we’re doing is going to work.”

Moving forward, the company sees many other robot-shaped problems for its machines.

“We didn’t start out by saying, ‘Let’s load and unload a truck,’” Meyer says. “We said, ‘What does it take to make a great robot business?’ Unloading trucks is the first chapter. Now we’ve built a platform to make the next robot that helps with more jobs, starting in logistics but then ultimately in manufacturing, retail, and hopefully the entire supply chain.”



from MIT News https://ift.tt/I2sovG6

Revisiting a revolution through poetry

There are several narratives surrounding the American Revolution, a well-traveled and -documented series of events leading to the drafting and signing of the Declaration of Independence and the war that followed. 

MIT philosopher Brad Skow is taking a new approach to telling this story: a collection of 47 poems about the former American colonies’ journey from England’s imposition of the Stamp Act in 1765 to the war for America’s independence that began in 1775.

When asked why he chose poetry to retell the story, Skow, the Laurence S. Rockefeller Professor in the Department of Linguistics and Philosophy, said he “wanted to take just the great bits of these speeches and writings, while maintaining their intent and integrity.” Poetry, Skow argues, allows for that kind of nuance and specificity.

“American Independence in Verse,” published by Pentameter Press, traces a story of America’s origins through a collection of vignettes featuring some well-known characters, like politician and orator Patrick Henry, alongside some lesser-known but no less important ones, like royalist and former chief justice of North Carolina Martin Howard. Each is rendered in blank verse, a nursery-style rhyme, or free verse.

The book is divided into three segments: “Taxation Without Representation,” “Occupation and Massacre,” and “War and Independence.” Themes like freedom, government, and authority, rendered in a style of writing and oratory seldom seen today, lent themselves to being reimagined as poems. “The options available with poetic license offer opportunities for readers that might prove more difficult with prose,” Skow reports.

Skow based each of the poems on actual speeches, letters, pamphlets, and other printed materials produced by people on both sides of the debate about independence. “While reviewing a variety of primary sources for the book, I began to see the poetry in them,” he says. 

In the poem “Everywhere, the spirit of equality prevails,” during an “Interlude” between the “Occupation and Massacre” and “War and Independence” sections of the book, British commissioner of customs Henry Hulton, writing to Robert Nicholson in Liverpool, England, describes the America he experienced during a trip with his wife:

The spirit of equality prevails.
Regarding social differences, they’ve no
Notion of rank, and will show more respect
To one another than to those above them.
They’ll ask a thousand strange impertinent
Questions, sit down when they should wait at a table,
React with puzzlement when you do not
Invite your valet to come share your meal.

Here, Skow, using Hulton’s words, illustrates the tension between agreed-upon social conventions — remnants of the Old World — and the society being built in the New World, a tension that animates part of the disconnect that led both sides toward war. “These writings are really powerful, and poetry offers a way to convey that power,” Skow says.

The journey to the printed page 

Skow’s interest in exploring the American Revolution came, in part, from watching the Tony Award-winning musical “Hamilton.” The book ends where the musical begins. “It led me to want to learn more,” he says of the show and his experience watching it. “Its focus on the Revolution made the era more exciting for me.”

While conducting research for another poetry project, Skow read an interview conducted in 1766 with American diplomat, inventor, and publisher Benjamin Franklin before the House of Commons. “There were lots of amazing poetic moments in the interview,” he says. Skow began reading additional pamphlets, letters, and other writings, disconnecting his work as a philosopher from the research that would yield the book.

“I wanted to remove my philosopher hat with this project,” he says. “Poetry can encourage ambiguity and, unlike philosophy, can focus on emotional and non-rational connections between ideas.” 

Although eager to approach the work as a poet and author, rather than a philosopher, Skow discovered that more primary sources than he expected were themselves often philosophical treatises. “Early in the resistance movement there were sophisticated arguments, often printed in newspapers, that it was unjust to tax the colonies without granting them representation in Parliament,” he notes. 

A series of new perspectives and lessons

Skow made some discoveries that further enhanced his passion for the project. “Samuel Adams is an important figure who isn’t as well-known as he should be,” he says. “I wanted to raise his profile.”

Skow also notes that American separatists used strong-arm tactics to “encourage” support for independence, and that prevailing narratives regarding America and its eventual separation from England are more complex and layered than we might believe. “There were arguments underway about legitimate forms of government and which kind of government was right,” he says, “and many Americans wanted to retain the existing relationship with England.”

Skow says the American Revolution is a useful benchmark when considering subsequent political movements, a notion he hopes readers will take away from the book. “The book is meant to be fun and not just a collection of dry, abstract ideas,” he says.

“There’s a simple version of the independence story we tell when we’re in a hurry; and there is the more complex truth, printed in long history books,” he continues. “I wanted to write something that was both short and included a variety of perspectives.”

Skow believes the book and its subjects are a testament to ideas he’d like to see return to political and practical discourse. “The ideals around which this country rallied for its independence are still good ideals, and the courage the participants exhibited is still worth admiring,” he says.



from MIT News https://ift.tt/xw42MyV

What’s the best way to expand the US electricity grid?

Growing energy demand means the U.S. will almost certainly have to expand its electricity grid in coming years. What’s the best way to do this? A new study by MIT researchers examines legislation introduced in Congress and identifies relative tradeoffs involving reliability, cost, and emissions, depending on the proposed approach.

The researchers evaluated two policy approaches to expanding the U.S. electricity grid: One would concentrate on regions with more renewable energy sources, and the other would create more interconnections across the country. For instance, some of the best untapped wind-power resources in the U.S. lie in the center of the country, so one type of grid expansion would situate relatively more grid infrastructure in those regions. Alternatively, the other scenario involves building more infrastructure everywhere in roughly equal measure, which the researchers call the “prescriptive” approach. How does each pencil out?

After extensive modeling, the researchers found that a grid expansion could make improvements on all fronts, with each approach offering different advantages. A more geographically unbalanced grid buildout would be 1.13 percent less expensive, and would reduce carbon emissions by 3.65 percent compared to the prescriptive approach. And yet, the prescriptive approach, with more national interconnection, would significantly reduce power outages due to extreme weather, among other things.

“There’s a tradeoff between the two things that are most on policymakers’ minds: cost and reliability,” says Christopher Knittel, an economist at the MIT Sloan School of Management, who helped direct the research. “This study makes it more clear that the more prescriptive approach ends up being better in the face of extreme weather and outages.”

The paper, “Implications of Policy-Driven Transmission Expansion on Costs, Emissions and Reliability in the United States,” is published today in Nature Energy.

The authors are Juan Ramon L. Senga, a postdoc in the MIT Center for Energy and Environmental Policy Research; Audun Botterud, a principal research scientist in the MIT Laboratory for Information and Decision Systems; John E. Parsons, the deputy director for research at MIT’s Center for Energy and Environmental Policy Research; Drew Story, the managing director at MIT’s Policy Lab; and Knittel, who is the George P. Shultz Professor at MIT Sloan, and associate dean for climate and sustainability at MIT.

The new study is a product of the MIT Climate Policy Center, housed within MIT Sloan and committed to bipartisan research on energy issues. The center is also part of the Climate Project at MIT, founded in 2024 as a high-level Institute effort to develop practical climate solutions.

In this case, the project was developed from work the researchers did with federal lawmakers who have introduced legislation aimed at bolstering and expanding the U.S. electric grid. One of these bills, the BIG WIRES Act, co-sponsored by Sen. John Hickenlooper of Colorado and Rep. Scott Peters of California, would require each transmission region in the U.S. to be able to send at least 30 percent of its peak load to other regions by 2035.

That would represent a substantial change for a national transmission system in which grids have largely been developed regionally, without much national oversight.
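As a back-of-envelope illustration of the requirement described above, the bill can be read as setting a capacity floor per region. The region names, peak loads, and existing export capacities below are invented for this sketch, not figures from the study or the legislation.

```python
# Hypothetical illustration of a BIG WIRES-style requirement:
# each region must be able to send >= 30% of its peak load to neighbors.
# Region names and figures (in GW) are invented for this sketch.
TRANSFER_FRACTION = 0.30

regions = {
    "Region A": {"peak_load_gw": 80.0, "export_capacity_gw": 18.0},
    "Region B": {"peak_load_gw": 45.0, "export_capacity_gw": 16.0},
    "Region C": {"peak_load_gw": 120.0, "export_capacity_gw": 30.0},
}

def required_buildout(region):
    """Extra interregional capacity (GW) needed to meet the floor."""
    floor = TRANSFER_FRACTION * region["peak_load_gw"]
    return max(0.0, floor - region["export_capacity_gw"])

for name, r in regions.items():
    gap = required_buildout(r)
    status = "meets floor" if gap == 0 else f"needs {gap:.1f} GW more"
    print(f"{name}: {status}")
```

In this toy example, Region B already meets the 30 percent floor, while Regions A and C would each need additional interregional transmission built out.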

“The U.S. grid is aging and it needs an upgrade,” Senga says. “Implementing these kinds of policies is an important step for us to get to that future where we improve the grid, lower costs, lower emissions, and improve reliability. Some progress is better than none, and in this case, it would be important.”

To conduct the study, the researchers looked at how policies like the BIG WIRES Act would affect energy distribution. The scholars used a model of energy generation developed at the MIT Energy Initiative — the model is called “GenX” — and examined the changes proposed by the legislation.

With a 30 percent level of interregional connectivity, the study estimates, the number of outages due to extreme cold would drop by 39 percent, for instance, a substantial increase in reliability. That would help avoid scenarios such as the one Texas experienced in 2021, when winter storms damaged distribution capacity.

“Reliability is what we find to be most salient to policymakers,” Senga says.

On the other hand, as the paper details, a future grid that is “optimized” with more transmission capacity near geographic spots of new energy generation would be less expensive.

“On the cost side, this kind of optimized system looks better,” Senga says.

A more geographically imbalanced grid would also have a greater impact on reducing emissions. Globally, the levelized cost of solar and wind dropped by 89 percent and 69 percent, respectively, from 2010 to 2022, meaning that incorporating less-expensive renewables into the grid would help with both cost and emissions.

“On the emissions side, a priori it’s not clear the optimized system would do better, but it does,” Knittel says. “That’s probably tied to cost, in the sense that it’s building more transmission links to where the good, cheap renewable resources are, because they’re cheap. Emissions fall when you let the optimizing action take place.”

To be sure, these two differing approaches to grid expansion are not the only paths forward. The study also examines a hybrid approach, which involves both national interconnectivity requirements and local buildouts based around new power sources on top of that. Still, the model does show that there may be some tradeoffs lawmakers will want to consider when developing and considering future grid legislation.

“You can find a balance between these factors, where you’re still going to have an increase in reliability while also getting the cost and emission reductions,” Senga observes.

For his part, Knittel emphasizes that working with legislation as the basis for academic studies, while not generally common, can be productive for everyone involved. Scholars get to apply their research tools and models to real-world scenarios, and policymakers get a sophisticated evaluation of how their proposals would work.

“Compared to the typical academic path to publication, this is different, but at the Climate Policy Center, we’re already doing this kind of research,” Knittel says. 



from MIT News https://ift.tt/wO1cdp0

Wednesday, December 3, 2025

A smarter way for large language models to think about hard problems

To make large language models (LLMs) more accurate when answering harder questions, researchers can let the model spend more time thinking about potential solutions.

But common approaches that give LLMs this capability set a fixed computational budget for every problem, regardless of how complex it is. This means the LLM might waste computational resources on simpler questions or be unable to tackle intricate problems that require more reasoning.

To address this, MIT researchers developed a smarter way to allocate computational effort as the LLM solves a problem. Their method enables the model to dynamically adjust its computational budget based on the difficulty of the question and the likelihood that each partial solution will lead to the correct answer.

The researchers found that their new approach enabled LLMs to use as little as half the computation of existing methods, while achieving comparable accuracy on a range of questions with varying difficulties. In addition, their method allows smaller, less resource-intensive LLMs to perform as well as or even better than larger models on complex problems.

By improving the reliability and efficiency of LLMs, especially when they tackle complex reasoning tasks, this technique could reduce the energy consumption of generative AI systems and enable the use of LLMs in more high-stakes and time-sensitive applications.

“The computational cost of inference has quickly become a major bottleneck for frontier model providers, and they are actively trying to find ways to improve computational efficiency per user query. For instance, the recent GPT-5.1 release highlights the efficacy of the ‘adaptive reasoning’ approach our paper proposes. By endowing the models with the ability to know what they don’t know, we can enable them to spend more compute on the hardest problems and most promising solution paths, and use far fewer tokens on easy ones. That makes reasoning both more reliable and far more efficient,” says Navid Azizan, the Alfred H. and Jean M. Hayes Career Development Assistant Professor in the Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), a principal investigator of the Laboratory for Information and Decision Systems (LIDS), and the senior author of a paper on this technique.

Azizan is joined on the paper by lead author Young-Jin Park, a LIDS/MechE graduate student; Kristjan Greenewald, a research scientist in the MIT-IBM Watson AI Lab; Kaveh Alim, an IDSS graduate student; and Hao Wang, a research scientist at the MIT-IBM Watson AI Lab and the Red Hat AI Innovation Team. The research is being presented this week at the Conference on Neural Information Processing Systems.

Computation for contemplation

A recent approach called inference-time scaling lets a large language model take more time to reason about difficult problems.

Using inference-time scaling, the LLM might generate multiple solution attempts at once or explore different reasoning paths, then choose the best ones to pursue from those candidates.

A separate model, known as a process reward model (PRM), scores each potential solution or reasoning path. The LLM uses these scores to identify the most promising ones.     
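The selection step can be sketched in a few lines. In this illustration, `prm_score` is a toy stand-in for a learned process reward model, and the candidate solutions are invented; a real PRM would be a trained neural network.

```python
# Sketch of best-of-N selection with a process reward model (PRM).
# `prm_score` is a placeholder for a learned model that rates how
# promising a partial solution is; here it is a toy heuristic.
def prm_score(question, partial_solution):
    # Toy heuristic: longer, more structured attempts score higher.
    return len(partial_solution.split()) / 10.0

def select_top_k(question, candidates, k):
    """Keep the k candidates the PRM rates most promising."""
    ranked = sorted(candidates, key=lambda c: prm_score(question, c), reverse=True)
    return ranked[:k]

candidates = [
    "x = 2",
    "Let x be the unknown; then 2x + 3 = 7, so x = 2",
    "guess 5",
]
best = select_top_k("Solve 2x + 3 = 7", candidates, k=1)
print(best[0])
```

The LLM generates the candidates; the PRM only ranks them, and the highest-scoring ones are carried forward to the next reasoning step.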

Typical inference-time scaling approaches assign a fixed amount of computation for the LLM to break the problem down and reason about the steps.

Instead, the researchers’ method, known as instance-adaptive scaling, dynamically adjusts the number of potential solutions or reasoning steps based on how likely they are to succeed, as the model wrestles with the problem.

“This is how humans solve problems. We come up with some partial solutions and then decide, should I go further with any of these, or stop and revise, or even go back to my previous step and continue solving the problem from there?” Wang explains.

To do this, the framework uses the PRM to estimate the difficulty of the question, helping the LLM assess how much computational budget to utilize for generating and reasoning about potential solutions.

At every step in the model’s reasoning process, the PRM looks at the question and partial answers and evaluates how promising each one is for getting to the right solution. If the LLM is more confident, it can reduce the number of potential solutions or reasoning trajectories to pursue, saving computational resources.
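A conceptual sketch of that budget decision is below. The thresholds and the budget schedule are illustrative choices for this example, not values from the paper.

```python
# Conceptual sketch of instance-adaptive scaling: shrink the number of
# reasoning paths kept alive as the PRM's success estimates rise.
# Thresholds and the budget schedule are illustrative, not from the paper.
def adaptive_budget(success_probs, max_beams=8, min_beams=1):
    """Choose how many candidate paths to keep for the next step.

    success_probs: PRM estimates (0..1) for the current partial solutions.
    Higher confidence -> fewer paths -> less computation.
    """
    best = max(success_probs)
    if best >= 0.9:       # nearly certain: commit to one path
        return min_beams
    if best >= 0.6:       # promising: keep a small frontier
        return max(min_beams, max_beams // 4)
    return max_beams      # hard or uncertain: spend the full budget

print(adaptive_budget([0.95, 0.40]))  # confident -> 1
print(adaptive_budget([0.65, 0.30]))  # promising -> 2
print(adaptive_budget([0.20, 0.10]))  # uncertain -> 8
```

Because the decision is re-evaluated at every reasoning step, easy questions collapse to a single path quickly, while hard ones keep the full frontier alive.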

But the researchers found that existing PRMs often overestimate the model’s probability of success.

Overcoming overconfidence

“If we were to just trust current PRMs, which often overestimate the chance of success, our system would reduce the computational budget too aggressively. So we first had to find a way to better calibrate PRMs to make inference-time scaling more efficient and reliable,” Park says.

The researchers introduced a calibration method that enables PRMs to generate a range of probability scores rather than a single value. In this way, the PRM creates more reliable uncertainty estimates that better reflect the true probability of success.

With a well-calibrated PRM, their instance-adaptive scaling framework can use the probability scores to effectively reduce computation while maintaining the accuracy of the model’s outputs.
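One way to picture why a range of scores helps: pruning and budget decisions can act on the conservative end of the interval. This is an illustrative sketch of that idea, not the paper's calibration method, and the numbers are invented.

```python
# Illustrative sketch: why an uncertainty range beats a point estimate.
# A calibrated PRM here returns (low, high) bounds on success probability;
# decisions use the conservative end of the interval. Numbers are invented.
def should_prune(prob_interval, threshold=0.5):
    """Prune a candidate path only if even the optimistic bound is weak."""
    low, high = prob_interval
    return high < threshold

def can_shrink_budget(prob_interval, threshold=0.8):
    """Cut the compute budget only when the *lower* bound is strong."""
    low, high = prob_interval
    return low >= threshold

# An overconfident point estimate of 0.85 would trigger budget cuts,
# but the interval (0.55, 0.85) says the model is not sure yet.
print(can_shrink_budget((0.55, 0.85)))  # False: keep exploring
print(can_shrink_budget((0.82, 0.95)))  # True: safe to save compute
print(should_prune((0.10, 0.30)))       # True: hopeless path
```

An overconfident point score would shrink the budget too aggressively; the interval delays that cut until the lower bound itself is strong.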

When they compared their method to standard inference-time scaling approaches on a series of mathematical reasoning tasks, it utilized less computation to solve each problem while achieving similar accuracy.

“The beauty of our approach is that this adaptation happens on the fly, as the problem is being solved, rather than happening all at once at the beginning of the process,” says Greenewald.

In the future, the researchers are interested in applying this technique to other applications, such as code generation and AI agents. They are also planning to explore additional uses for their PRM calibration method, like for reinforcement learning and fine-tuning.

“Human employees learn on the job — some CEOs even started as interns — but today’s agents remain largely static pieces of probabilistic software. Work like this paper is an important step toward changing that: helping agents understand what they don’t know and building mechanisms for continual self-improvement. These capabilities are essential if we want agents that can operate safely, adapt to new situations, and deliver consistent results at scale,” says Akash Srivastava, director and chief architect of Core AI at IBM Software, who was not involved with this work.

This work was funded, in part, by the MIT-IBM Watson AI Lab, the MIT-Amazon Science Hub, the MIT-Google Program for Computing Innovation, and MathWorks. 



from MIT News https://ift.tt/Ps4kyJq

New bioadhesive strategy can prevent fibrous encapsulation around device implants on peripheral nerves

Peripheral nerves — the network connecting the brain, spinal cord, and central nervous system to the rest of the body — transmit sensory information, control muscle movements, and regulate automatic bodily functions. Bioelectronic devices implanted on these nerves offer remarkable potential for the treatment and rehabilitation of neurological and systemic diseases. However, because the body perceives these implants as foreign objects, they often trigger the formation of dense fibrotic tissue at bioelectronic device–tissue interfaces, which can significantly compromise device performance and longevity.

New research published in the journal Science Advances presents a robust bioadhesive strategy that establishes non-fibrotic bioelectronic interfaces on diverse peripheral nerves — including the occipital, vagus, deep peroneal, sciatic, tibial, and common peroneal nerves — for up to 12 weeks.

“We discovered that adhering the bioelectrodes to peripheral nerves can fully prevent the formation of fibrosis on the interfaces,” says Xuanhe Zhao, the Uncas and Helen Whitaker Professor, and professor of mechanical engineering and civil engineering at MIT. “We further demonstrated long-term, drug-free hypertension mitigation using non-fibrotic bioelectronics over four weeks, and ongoing.”

The approach inhibits immune cell infiltration at the device-tissue interface, thereby preventing the formation of fibrous capsules within the inflammatory microenvironment. In preclinical rodent models, the team demonstrated that the non-fibrotic, adhesive bioelectronic device maintained stable, long-term regulation of blood pressure.

“Our long-term blood pressure regulation approach was inspired by traditional acupuncture,” says Hyunmin Moon, lead author of the study and a postdoc in the Department of Mechanical Engineering. “The lower leg has long been used in hypertension treatment, and the deep peroneal nerve lies precisely at an acupuncture point. We were thrilled to see that stimulating this nerve achieved blood pressure regulation for the first time. The convergence of our non-fibrotic, adhesive bioelectronic device with this long-term regulation capability holds exciting promise for translational medicine.”

Importantly, after 12 weeks of implantation with continuous nerve stimulation, only minimal macrophage activity and limited deposition of smooth muscle actin and collagen were detected, underscoring the device’s potential to deliver long-term neuromodulation without triggering fibrosis. “The contrast between the immune response of the adhered device and that of the non-adhered control is striking,” says Bastien Aymon, a study co-author and a PhD candidate in mechanical engineering. “The fact that we can observe immunologically pristine interfaces after three months of adhesive implantation is extremely encouraging for future clinical translation.”

This work offers a broadly applicable strategy for all implantable bioelectronic systems by preventing fibrosis at the device interface, paving the way for more effective and long-lasting therapies such as hypertension mitigation.

Hypertension is a major contributor to cardiovascular diseases, the leading cause of death worldwide. Although medications are effective in many cases, more than 50 percent of patients remain hypertensive despite treatment — a condition known as resistant hypertension. Traditional carotid sinus or vagus nerve stimulation methods are often accompanied by side effects including apnea, bradycardia, cough, and paresthesia.

“In contrast, our non-fibrotic, adhesive bioelectronic device targeting the deep peroneal nerve enables long-term blood pressure regulation in resistant hypertensive patients without metabolic side effects,” says Moon.



from MIT News https://ift.tt/tGXNIqo

Tuesday, December 2, 2025

MIT chemists synthesize a fungal compound that holds promise for treating brain cancer

For the first time, MIT chemists have synthesized a fungal compound known as verticillin A, which was discovered more than 50 years ago and has shown potential as an anticancer agent.

The compound has a complex structure that made it more difficult to synthesize than related compounds, even though it differed by only a couple of atoms.

“We have a much better appreciation for how those subtle structural changes can significantly increase the synthetic challenge,” says Mohammad Movassaghi, an MIT professor of chemistry. “Now we have the technology where we can not only access them for the first time, more than 50 years after they were isolated, but also we can make many designed variants, which can enable further detailed studies.”

In tests in human cancer cells, a derivative of verticillin A showed particular promise against a type of pediatric brain cancer called diffuse midline glioma. More tests will be needed to evaluate its potential for clinical use, the researchers say.

Movassaghi and Jun Qi, an associate professor of medicine at Dana-Farber Cancer Institute/Boston Children’s Cancer and Blood Disorders Center and Harvard Medical School, are the senior authors of the study, which appears today in the Journal of the American Chemical Society. Walker Knauss PhD ’24 is the lead author of the paper. Xiuqi Wang, a medicinal chemist and chemical biologist at Dana-Farber, and Mariella Filbin, research director in the Pediatric Neurology-Oncology Program at Dana-Farber/Boston Children’s Cancer and Blood Disorders Center, are also authors of the study.

A complex synthesis

Researchers first reported the isolation of verticillin A from fungi, which use it for protection against pathogens, in 1970. Verticillin A and related fungal compounds have drawn interest for their potential anticancer and antimicrobial activity, but their complexity has made them difficult to synthesize.

In 2009, Movassaghi’s lab reported the synthesis of (+)-11,11'-dideoxyverticillin A, a fungal compound similar to verticillin A. That molecule has 10 rings and eight stereogenic centers, or carbon atoms that have four different chemical groups attached to them. These groups have to be attached in a way that ensures they have the correct orientation, or stereochemistry, with respect to the rest of the molecule.

Once that synthesis was achieved, however, synthesis of verticillin A remained challenging, even though the only difference between verticillin A and (+)-11,11'-dideoxyverticillin A is the presence of two oxygen atoms.

“Those two oxygens greatly limit the window of opportunity that you have in terms of doing chemical transformations,” Movassaghi says. “It makes the compound so much more fragile, so much more sensitive, so that even though we had had years of methodological advances, the compound continued to pose a challenge for us.”

Both of the verticillin A compounds consist of two identical fragments that must be joined together to form a molecule called a dimer. To create (+)-11,11'-dideoxyverticillin A, the researchers had performed the dimerization reaction near the end of the synthesis, then added four critical carbon-sulfur bonds.

Yet when trying to synthesize verticillin A, the researchers found that waiting to add those carbon-sulfur bonds at the end did not result in the correct stereochemistry. As a result, the researchers had to rethink their approach and ended up creating a very different synthetic sequence.

“What we learned was the timing of the events is absolutely critical. We had to significantly change the order of the bond-forming events,” Movassaghi says.

The verticillin A synthesis begins with an amino acid derivative known as beta-hydroxytryptophan, and then step-by-step, the researchers add a variety of chemical functional groups, including alcohols, ketones, and amides, in a way that ensures the correct stereochemistry.

Functional groups containing two carbon-sulfur bonds and a disulfide bond were introduced early on to help control the stereochemistry of the molecule, but the sensitive disulfides had to be “masked” and protected as pairs of sulfides to prevent them from breaking down during subsequent chemical reactions. The disulfide-containing groups were then regenerated after the dimerization reaction.

“This particular dimerization really stands out in terms of the complexity of the substrates that we’re bringing together, which have such a dense array of functional groups and stereochemistry,” Movassaghi says.

The overall synthesis requires 16 steps from the beta-hydroxytryptophan starting material to verticillin A.

Killing cancer cells

Once the researchers had successfully completed the synthesis, they were also able to tweak it to generate derivatives of verticillin A. Researchers at Dana-Farber then tested these compounds against several types of diffuse midline glioma (DMG), a rare brain tumor that has few treatment options.

The researchers found that the DMG cell lines most susceptible to these compounds were those that have high levels of a protein called EZHIP. This protein, which plays a role in the methylation of DNA, has been previously identified as a potential drug target for DMG.

“Identifying the potential targets of these compounds will play a critical role in further understanding their mechanism of action, and more importantly, will help optimize the compounds from the Movassaghi lab to be more target specific for novel therapy development,” Qi says.

The verticillin derivatives appear to interact with EZHIP in a way that increases DNA methylation, which induces the cancer cells to undergo programmed cell death. The compounds that were most successful at killing these cells were N-sulfonylated (+)-11,11'-dideoxyverticillin A and N-sulfonylated verticillin A. N-sulfonylation — the addition of a functional group containing sulfur and oxygen — makes the molecules more stable.

“The natural product itself is not the most potent, but it’s the natural product synthesis that brought us to a point where we can make these derivatives and study them,” Movassaghi says.

The Dana-Farber team is now working on further validating the mechanism of action of the verticillin derivatives, and they also hope to begin testing the compounds in animal models of pediatric brain cancers.

“Natural compounds have been valuable resources for drug discovery, and we will fully evaluate the therapeutic potential of these molecules by integrating our expertise in chemistry, chemical biology, cancer biology, and patient care. We have also profiled our lead molecules in more than 800 cancer cell lines, and will be able to understand their functions more broadly in other cancers,” Qi says.

The research was funded by the National Institute of General Medical Sciences, the Ependymoma Research Foundation, and the Curing Kids Cancer Foundation.



from MIT News https://ift.tt/hlenZi5

MIT researchers demonstrate ship hull modifications to cut fuel use

Researchers at MIT have demonstrated that wedge-shaped vortex generators attached to a ship’s hull can reduce drag by up to 7.5 percent, which reduces overall ship emissions and fuel expenses. The paper, “Net Drag Reduction in High Block Coefficient Ships and Vehicles Using Vortex Generators,” was presented at the Society of Naval Architects and Marine Engineers 2025 Maritime Convention in Norfolk, Virginia.

The work offers a promising path toward decarbonization, addressing the pressing need to meet the International Maritime Organization (IMO) goal to reduce carbon intensity of international shipping by at least 40 percent by 2030, compared to 2008 levels. Achieving such ambitious emissions reduction will require a coordinated approach, employing multiple methods, from redesigning ship hulls, propellers, and engines to using novel fuels and operational methods.

The researchers — José del Águila Ferrandis, Jack Kimmeth, and Michael Triantafyllou of MIT Sea Grant and the Department of Mechanical Engineering, along with Alfonso Parra Rubio and Neil Gershenfeld of the Center for Bits and Atoms — determined the optimized vortex generator shape and size using a combination of computational fluid dynamics (CFD) and experimental methods guided by AI optimization methods. 

The team first established parametric trends through extensive CFD analysis, and then tested multiple hulls through rapid prototyping to validate the results experimentally. Scale models of an axisymmetric hull with a bare tail, a tail with delta wing vortex generators, and a tail with wedge vortex generators were produced and tested. The team identified wedge-like vortex generators as the key shape that could achieve this level of drag reduction. 

Through flow visualization, the researchers could see that drag was reduced by delaying turbulent flow separation, helping water flow more smoothly along the ship’s hull, shrinking the wake behind the vessel. This also allows the propeller and rudder to work more efficiently in a uniform flow. “We document for the first time experimentally a reduction in fuel required by ships using vortex generators, relatively small structures in the shape of a wedge attached at a specific point of the ship’s hull,” explains Michael Triantafyllou, professor of mechanical engineering and director of MIT Sea Grant. 

Vortex generators have long been used in aircraft-wing design to maintain lift and delay stalling. This study pioneers the translation of these aerodynamic techniques into hydrodynamic design.

The modular adaptability of the wedge vortex generators would allow integration into a broad range of hull forms, including bulk carriers and tankers, and the devices can synergize with, or even replace, existing technologies like pre-swirl stators (fixed fins mounted in front of propellers), improving overall system performance. As an example case, the researchers estimate that installing the vortex generators on a 300-meter Newcastlemax bulk carrier operating at 14.5 knots over a cross-Pacific route would result in significantly reduced emissions and approximately $750,000 in fuel savings per year.
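The rough scale of those savings follows from simple arithmetic, if one assumes fuel consumption scales roughly in proportion to hull drag at a fixed speed. That proportionality is a simplification for this sketch (the real relationship depends on the propulsion system and operating profile), and the implied fuel bill is derived from the article's figures, not stated in the study.

```python
# Back-of-envelope sketch: mapping a drag reduction to fuel savings,
# assuming fuel use scales roughly with hull drag at fixed speed
# (a simplification; the real relationship is more complex).
def annual_savings(annual_fuel_cost_usd, drag_reduction):
    return annual_fuel_cost_usd * drag_reduction

# Working backward from the article's example figures: a 7.5% drag
# reduction yielding ~$750,000/year implies a fuel bill near $10M/year.
implied_fuel_bill = 750_000 / 0.075
print(f"Implied annual fuel cost: ${implied_fuel_bill:,.0f}")
print(f"Savings at 7.5%: ${annual_savings(implied_fuel_bill, 0.075):,.0f}")
```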

The findings offer a practical, cost-effective solution that could be implemented efficiently across existing fleets. This study was supported through the CBA Consortium, working with Oldendorff Carriers, which operates about 700 bulk carriers around the world. An extension of this research is supported by the MIT Maritime Consortium, led by MIT professors Themis Sapsis and Fotini Christia. The Maritime Consortium was formed in 2025 to address critical gaps in the modernization of the commercial fleet through interdisciplinary research and collaboration across academia, industry, and regulatory agencies.



from MIT News https://ift.tt/Lfgdokj

Monday, December 1, 2025

Driving American battery innovation forward

Advancements in battery innovation are transforming both mobility and energy systems alike, according to Kurt Kelty, vice president of battery, propulsion, and sustainability at General Motors (GM). At the MIT Energy Initiative (MITEI) Fall Colloquium, Kelty explored how GM is bringing next-generation battery technologies from lab to commercialization, driving American battery innovation forward. The colloquium is part of the ongoing MITEI Presents: Advancing the Energy Transition speaker series.

At GM, Kelty’s team is primarily focused on three things: first, improving affordability to get more electric vehicles (EVs) on the road. “How do you drive down the cost?” Kelty asked the audience. “It’s the batteries. The batteries make up about 30 percent of the cost of the vehicle.” Second, his team strives to improve battery performance, including charging speed and energy density. Third, they are working on localizing the supply chain. “We’ve got to build up our resilience and our independence here in North America, so we’re not relying on materials coming from China,” Kelty explained.

To aid their efforts, resources are being poured into the virtualization space, significantly cutting down on time dedicated to research and development. Now, Kelty’s team can do modeling up front using artificial intelligence, reducing what previously would have taken months to a couple of days.

“If you want to modify … the nickel content ever so slightly, we can very quickly model: ‘OK, how’s that going to affect the energy density? The safety? How’s that going to affect the charge capability?’” said Kelty. “We can look at that at the cell level, then the pack level, then the vehicle level.”

Kelty revealed that they have found a solution that addresses affordability, accessibility, and commercialization: lithium manganese-rich (LMR) batteries. Previously, the industry looked to reduce costs by lowering the amount of cobalt in batteries by adding greater amounts of nickel. These high-nickel batteries are in most cars on the road in the United States due to their high range. LMR batteries, though, take things a step further by reducing the amount of nickel and adding more manganese, which drives the cost of batteries down even further while maintaining range.

Lithium-iron-phosphate (LFP) batteries are the chemistry of choice in China, known for low cost, high cycle life, and high safety. With LMR batteries, the cost is comparable to LFP with a range that is closer to high-nickel. “That’s what’s really a breakthrough,” said Kelty.

LMR batteries are not new, but there have been challenges to adopting them, according to Kelty. “People knew about it, but they didn’t know how to commercialize it. They didn’t know how to make it work in an EV,” he explained. Now that GM has figured out commercialization, they will be the first to market these batteries in their EVs in 2028.

Kelty also expressed excitement over the use of vehicle-to-grid technologies in the future. Using a bidirectional charger with a two-way flow of energy, EVs could charge, but also send power from their batteries back to the electrical grid. This would allow customers to charge “their vehicles at night when the electricity prices are really low, and they can discharge it during the day when electricity rates are really high,” he said.
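The economics Kelty describes amount to simple price arbitrage across the daily rate cycle. A minimal sketch, with invented prices and a hypothetical round-trip efficiency (not figures from the talk):

```python
# Back-of-the-envelope vehicle-to-grid arbitrage estimate.
# All prices and the efficiency figure are invented for illustration.

def daily_arbitrage_savings(energy_kwh, offpeak_price, peak_price, efficiency=0.90):
    """Savings from charging off-peak and discharging at peak.

    energy_kwh    : energy cycled through the battery each day (kWh)
    offpeak_price : $/kWh paid to charge at night
    peak_price    : $/kWh earned (or avoided) when discharging by day
    efficiency    : assumed round-trip efficiency of the cycle
    """
    cost_to_charge = energy_kwh / efficiency * offpeak_price
    value_discharged = energy_kwh * peak_price
    return value_discharged - cost_to_charge

# Cycling 20 kWh/day between $0.10 (night) and $0.30 (day) per kWh:
savings = daily_arbitrage_savings(20, 0.10, 0.30)
print(f"${savings:.2f} per day")  # prints: $3.78 per day
```

The round-trip efficiency term matters: arbitrage only pays when the peak/off-peak price spread exceeds the energy lost in the charge-discharge cycle.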

In addition to working in the transportation sector, GM is exploring ways to extend its battery expertise into grid-scale energy storage. “It’s a big market right now, but it’s growing very quickly because of the data center growth,” said Kelty.

When looking to the future of battery manufacturing and EVs in the United States, Kelty remains optimistic: “We’ve got the technology here to make it happen. We’ve always had the innovation here. Now, we’re getting more and more of the manufacturing. We’re getting that all together. We’ve got just tremendous opportunity here that I’m hopeful we’re going to be able to take advantage of and really build a massive battery industry here.”

This speaker series highlights energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. Visit MITEI’s Events page for more information on this and additional events.



from MIT News https://ift.tt/TFiKDHb

Exploring how AI will shape the future of work

“MIT hasn’t just prepared me for the future of work — it’s pushed me to study it. As AI systems become more capable, more of our online activity will be carried out by artificial agents. That raises big questions: How should we design these systems to understand our preferences? What happens when AI begins making many of our decisions?”

These are some of the questions MIT Sloan School of Management PhD candidate Benjamin Manning is researching. Part of his work investigates how to design and evaluate artificial intelligence agents that act on behalf of people, and how their behavior shapes markets and institutions. 

Previously, he received a master’s degree in public policy from the Harvard Kennedy School and a bachelor’s in mathematics from Washington University in St. Louis. After working as a research assistant, Manning knew he wanted to pursue an academic career.

“There’s no better place in the world to study economics and computer science than MIT,” he says. “Nobel and Turing award winners are everywhere, and the IT group lets me explore both fields freely. It was my top choice — when I was accepted, the decision was clear.” 

After receiving his PhD, Manning hopes to secure a faculty position at a business school and do the same type of work that MIT Sloan professors — his mentors — do every day.

“Even in my fourth year, it still feels surreal to be an MIT student. I don’t think that feeling will ever fade. My mom definitely won’t ever get over telling people about it.”

Of his MIT Sloan experience, Manning says he didn’t know it was possible to learn so much so quickly. “It’s no exaggeration to say I learned more in my first year as a PhD candidate than in all four years of undergrad. While the pace can be intense, wrestling with so many new ideas has been incredibly rewarding. It’s given me the tools to do novel research in economics and AI — something I never imagined I’d be capable of.”

As an economist studying AI simulations of humans, Manning believes the future of work means not only understanding how AI acts on our behalf, but also radically improving and accelerating social scientific discovery.

“Another part of my research agenda explores how well AI systems can simulate human responses. I envision a future where researchers test millions of behavioral simulations in minutes, rapidly prototyping experimental designs, and identifying promising research directions before investing in costly human studies. This isn’t about replacing human insight, but amplifying it: Scientists can focus on asking better questions, developing theory, and interpreting results while AI handles the computational heavy lifting.”

He’s excited by the prospect: “We are possibly moving toward a world where the pace of understanding may get much closer to the speed of economic change.”



from MIT News https://ift.tt/5xkKUyG

Artificial tendons give muscle-powered robots a boost

Our muscles are nature’s actuators. The sinewy tissue is what generates the forces that make our bodies move. In recent years, engineers have used real muscle tissue to actuate “biohybrid robots” made from both living tissue and synthetic parts. By pairing lab-grown muscles with synthetic skeletons, researchers are engineering a menagerie of muscle-powered crawlers, walkers, swimmers, and grippers.

But for the most part, these designs are limited in the amount of motion and power they can produce. Now, MIT engineers are aiming to give bio-bots a power lift with artificial tendons.

In a study appearing today in the journal Advanced Science, the researchers developed artificial tendons made from tough and flexible hydrogel. They attached the rubber band-like tendons to either end of a small piece of lab-grown muscle, forming a “muscle-tendon unit.” Then they connected the ends of each artificial tendon to the fingers of a robotic gripper.

When they stimulated the central muscle to contract, the tendons pulled the gripper’s fingers together. The robot pinched its fingers together three times faster, and with 30 times greater force, compared with the same design without the connecting tendons.

The researchers envision that the new muscle-tendon unit could be fitted to a wide range of biohybrid robot designs, much like a universal engineering element.

“We are introducing artificial tendons as interchangeable connectors between muscle actuators and robotic skeletons,” says lead author Ritu Raman, an assistant professor of mechanical engineering (MechE) at MIT. “Such modularity could make it easier to design a wide range of robotic applications, from microscale surgical tools to adaptive, autonomous exploratory machines.”

The study’s MIT co-authors include graduate students Nicolas Castro, Maheera Bawa, Bastien Aymon, Sonika Kohli, and Angel Bu; undergraduate Annika Marschner; postdoc Ronald Heisser; alumni Sarah J. Wu ’19, SM ’21, PhD ’24 and Laura Rosado ’22, SM ’25; and MechE professors Martin Culpepper and Xuanhe Zhao.

Muscle’s gains

Raman and her colleagues at MIT are at the forefront of biohybrid robotics, a relatively new field that has emerged in the last decade. They focus on combining synthetic, structural robotic parts with living muscle tissue as natural actuators.

“Most actuators that engineers typically work with are really hard to make small,” Raman says. “Past a certain size, the basic physics doesn’t work. The nice thing about muscle is, each cell is an independent actuator that generates force and produces motion. So you could, in principle, make robots that are really small.”

Muscle actuators also come with other advantages, which Raman’s team has already demonstrated: The tissue can grow stronger as it works out, and can naturally heal when injured. For these reasons, Raman and others envision that muscly droids could one day be sent out to explore environments that are too remote or dangerous for humans. Such muscle-bound bots could build up their strength for unforeseen traverses or heal themselves when help is unavailable. Biohybrid bots could also serve as small, surgical assistants that perform delicate, microscale procedures inside the body.

All these future scenarios are motivating Raman and others to find ways to pair living muscles with synthetic skeletons. Designs to date have involved growing a band of muscle and attaching either end to a synthetic skeleton, similar to looping a rubber band around two posts. When the muscle is stimulated to contract, it can pull the parts of a skeleton together to generate a desired motion.

But Raman says this method produces a lot of wasted muscle that is used to attach the tissue to the skeleton rather than to make it move. And that connection isn’t always secure. Muscle is quite soft compared with skeletal structures, and the difference can cause muscle to tear or detach. What’s more, it is often only the contractions in the central part of the muscle that end up doing any work — an amount that’s relatively small and generates little force.

“We thought, how do we stop wasting muscle material, make it more modular so it can attach to anything, and make it work more efficiently?” Raman says. “The solution the body has come up with is to have tendons that are halfway in stiffness between muscle and bone, that allow you to bridge this mechanical mismatch between soft muscle and rigid skeleton. They’re like thin cables that wrap around joints efficiently.”

“Smartly connected”

In their new work, Raman and her colleagues designed artificial tendons to connect natural muscle tissue with a synthetic gripper skeleton. Their material of choice was hydrogel — a squishy yet sturdy polymer-based gel. Raman obtained hydrogel samples from her colleague and co-author Xuanhe Zhao, who has pioneered the development of hydrogels at MIT. Zhao’s group has derived recipes for hydrogels of varying toughness and stretch that can stick to many surfaces, including synthetic and biological materials.

To figure out how tough and stretchy artificial tendons should be in order to work in their gripper design, Raman’s team first modeled the design as a simple system of three types of springs, each representing the central muscle, the two connecting tendons, and the gripper skeleton. They assigned a certain stiffness to the muscle and skeleton, which were previously known, and used this to calculate the stiffness of the connecting tendons that would be required in order to move the gripper by a desired amount.
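The series-spring reasoning can be sketched numerically: for springs in series, compliances (1/k) add, so the softest element dominates the effective stiffness of the whole chain. The function names and stiffness values below are illustrative assumptions, not the model or numbers from the paper:

```python
# Sketch of the three-spring model described above: one muscle spring,
# two tendon springs, and a skeleton spring, all in series. Stiffness
# values (N/m) are invented for illustration only.

def effective_stiffness(k_muscle, k_tendon, k_skeleton):
    """Springs in series: compliances (1/k) add.

    The two tendons contribute twice the tendon compliance, since one
    tendon sits on each end of the muscle.
    """
    compliance = 1 / k_muscle + 2 / k_tendon + 1 / k_skeleton
    return 1 / compliance

# Soft muscle (5 N/m), intermediate tendons (50 N/m), stiff skeleton (500 N/m):
k_eff = effective_stiffness(5, 50, 500)
print(f"{k_eff:.2f} N/m")  # prints: 4.13 N/m

# Displacement under a small muscle force follows Hooke's law, x = F / k_eff:
displacement_m = 0.01 / k_eff
```

Working this relation backward — fixing the known muscle and skeleton stiffnesses and the desired gripper displacement — yields the tendon stiffness the hydrogel recipe must hit, which mirrors the calculation the team describes.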

From this modeling, the team derived a recipe for hydrogel of a certain stiffness. Once the gel was made, the researchers carefully etched the gel into thin cables to form artificial tendons. They attached two tendons to either end of a small sample of muscle tissue, which they grew using lab-standard techniques. They then wrapped each tendon around a small post at the end of each finger of the robotic gripper — a skeleton design that was developed by MechE professor Martin Culpepper, an expert in designing and building precision machines.

When the team stimulated the muscle to contract, the tendons in turn pulled on the gripper to pinch its fingers together. Over multiple experiments, the researchers found that the muscle-tendon gripper worked three times faster and produced 30 times more force compared with the same gripper actuated by a band of muscle tissue alone, without any artificial tendons. The new tendon-based design also maintained this performance over 7,000 cycles, or muscle contractions.

Overall, Raman saw that the addition of artificial tendons increased the robot’s power-to-weight ratio by 11 times, meaning that the system required far less muscle to do just as much work.

“You just need a small piece of actuator that’s smartly connected to the skeleton,” Raman says. “Normally, if a muscle is really soft and attached to something with high resistance, it will just tear itself before moving anything. But if you attach it to something like a tendon that can resist tearing, it can really transmit its force through the tendon, and it can move a skeleton that it wouldn’t have been able to move otherwise.”

The team’s new muscle-tendon design successfully merges biology with robotics, says biomedical engineer Simone Schürle-Finke, associate professor of health sciences and technology at ETH Zürich.

“The tough-hydrogel tendons create a more physiological muscle–tendon–bone architecture, which greatly improves force transmission, durability, and modularity,” says Schürle-Finke, who was not involved with the study. “This moves the field toward biohybrid systems that can operate repeatably and eventually function outside the lab.”

With the new artificial tendons in place, Raman’s group is moving forward to develop other elements, such as skin-like protective casings, to enable muscle-powered robots in practical, real-world settings.

This research was supported, in part, by the U.S. Department of Defense Army Research Office, the MIT Research Support Committee, and the National Science Foundation.



from MIT News https://ift.tt/3V4HgxI