Friday, April 4, 2025

Carsten Rasmussen, LEGO Group COO, discusses the production network that enables the builders of tomorrow

LEGOs are no strangers to many members of the MIT community. Faculty, staff, and students alike have developed a love of building and mechanics while playing with the familiar plastic bricks. In just a few hours, a heap of bricks can become a house, a ship, an airplane, or a cat. The simplicity lends itself to creativity and ingenuity, and it has inspired many MIT faculty members to bring LEGOs into the classroom, including in class 2.S00 (Introduction to Manufacturing), where students use LEGO bricks to learn about manufacturing processes and systems.

It was perhaps no surprise, then, that the lecture hall in the MIT Schwarzman College of Computing was packed with students, faculty, staff, and guests to hear Carsten Rasmussen, chief operating officer of the LEGO Group, speak as part of the Manufacturing@MIT Distinguished Speaker Series on March 20.

In his engaging and inspiring talk, Rasmussen asked one of the most important questions in manufacturing: How do you balance innovation with sustainability while keeping a complex global supply chain running smoothly? He emphasized that success in modern manufacturing isn’t just about cutting costs — it’s about creating value across the entire network, and integrating every aspect of the business.

Successful manufacturing is all about balance

The way the toy industry views success is evolving, Rasmussen said. In the past, focusing on “cost, quality, safety, delivery, and service” may have been enough, but today’s landscape is far more demanding. “Now, it’s about availability, customers’ happiness, and innovation,” he said.

Rasmussen, who has been with the LEGO Group since 2001, started as a buyer before moving to various leadership roles within the organization. Today, he oversees the LEGO Group’s operations strategy, including manufacturing and supply chain planning, quality, engineering, and sales and operations planning.

“The way we can inspire the builders of tomorrow is basically, whatever we develop, we are able to produce, and we are able to sell,” he said.

The LEGO Group’s operations are intricate. Focusing on areas such as capacity and infrastructure, network utilization, analysis and design, and sustainability keeps the company true to its mission, “to inspire and develop the builders of tomorrow.” Within the organization, departments operate with a focus on how their decisions will impact the rest of the company. To do this, they need to communicate effectively.

Intuition and experience play a big role in effective decision-making

In a time where data analytics is a huge part of decision-making in manufacturing and supply-chain management, Rasmussen highlighted the importance of blending data with intuition and experience.

“Many of the decisions you have to make are very, very complex,” he explained. “A lot of the data you’re going to provide me is based on history. And what happened in history is not what you’re facing right now. So, you need to really be able to take great data and blend that with your intuition and your experience to make a decision.”

This shift reflects a broader trend in industries where leaders are beginning to see the benefits of looking beyond purely data-driven decision-making. With global supply chains disrupted by unforeseen events like the Covid-19 pandemic, there’s growing acknowledgement that historical data may not be the most effective way to predict the future. Rasmussen said that the audience should practice blending their own intuition and experience with data by asking themselves: “Does it make sense? Does it feel right?”

Prioritizing sustainability 

Rasmussen also highlighted the LEGO Group’s ambitious sustainability goals, signaling that innovation cannot come at the expense of environmental responsibility. “There is no excuse for us to not leave a better planet for the next generation, for the next hundred years,” he said.

With an ambition to make its products from more renewable or recycled materials by 2032 and to eliminate single-use packaging, the company aims to lead a broader shift toward more environmentally friendly manufacturing, including an effort to turn waste into bricks.

Innovation doesn’t exist in a vacuum

Throughout his talk, Rasmussen underscored the importance of innovation. The only way to stay on top is to be constantly thinking of new ideas, he said.

“Are you daring to put new products into the market?” he asked, adding that it’s not enough to come up with a novel product or approach. How its implementation will work within the system is essential, too. “Our challenge that you need to help me with,” he said to the audience, “is how can we bring in innovation, because we can’t stand still either. We also need to be fit for the future … that is actually one of our bigger challenges.”

He reminded the audience that innovation is not a linear path. It involves risk, some failure, and continuous evolution. “Resilience is absolutely key,” he said.

Q&A

After his presentation, Rasmussen sat down with Professor John Hart for a brief Q&A, followed by audience questions. Among the questions that Hart asked Rasmussen was how he would respond to a designer who presented a model of an MIT-themed LEGO set, assuring Rasmussen it would break sales records. “Oh, I’ve heard that so many times,” Rasmussen laughed.

Hart asked what it would take to turn an idea into reality. “How long does it take from bricks to having it on my doorstep?” he asked.

“Typically, a new product takes between 12 to 18 months from idea to when we put it out on the market,” said Rasmussen, explaining that the process requires a good deal of integration and that there is a lot of planning to make sure that new ideas can be implemented across the organization.

Then the microphone was opened up to the crowd. The first audience questions came from Emerson Linville-Engler, the youngest audience member at just 5 years old, who wanted to know what the most difficult LEGO set to make was (the Technic round connector pieces), as well as Rasmussen’s favorite LEGO set (complex builds, like buildings or Technic models).

Other questions showcased how much LEGO inspired the audience. One member asked Rasmussen if it ever got old being told that he worked for a company that inspires the inner child. “No. It motivates me every single day when you meet them,” he said.

Through the Q&A, the audience was also able to ask more about the manufacturing process from ideas to execution, as well as whether Rasmussen was threatened by imitators (he welcomes healthy competition, but not direct copycats), and whether the LEGO Group plans on bringing back some old favorites (they are discussing whether to bring back old sets, but there are no set plans to do so at this time).

For the aspiring manufacturing leaders and innovators in the room, the lesson of Rasmussen’s talk was clear: Success isn’t just about making the right decision, it’s about understanding the entire system, having the courage to innovate, and being resilient enough to navigate unexpected challenges.

The event was hosted by the Manufacturing@MIT Working Group as part of the Manufacturing@MIT Distinguished Speaker Series. Past speakers include TSMC founder Morris Chang, Office of Science and Technology Policy Director Arati Prabhakar, Under Secretary of Defense for Research and Engineering Heidi Shyu, and Pennsylvania Governor Tom Wolf.



from MIT News https://ift.tt/Wu1b2eP

Lincoln Laboratory honored for technology transfer of hurricane-tracking satellites

The Federal Laboratory Consortium (FLC) has awarded MIT Lincoln Laboratory a 2025 FLC Excellence in Technology Transfer Award. The award recognizes the laboratory's exceptional efforts in commercializing microwave sounders hosted on small satellites called CubeSats. The laboratory first developed the technology for NASA, demonstrating that such satellites could work in tandem to collect hurricane data more frequently than previously possible and significantly improve hurricane forecasts. The technology is now licensed to the company Tomorrow.io, which will launch a large constellation of the sounder-equipped satellites to enhance hurricane prediction and expand global weather coverage. 

"This FLC award recognizes a technology with significant impact, one that could enhance hourly weather forecasting for aviation, logistics, agriculture, and emergency management, and highlights the laboratory's important role in bringing federally funded innovation to the commercial sector," says Asha Rajagopal, Lincoln Laboratory's chief technology transfer officer.

A nationwide network of more than 300 government laboratories, agencies, and research centers, the FLC helps facilitate the transfer of technologies out of federal labs and into the marketplace to benefit the U.S. economy, society, and national security.

Lincoln Laboratory originally proposed and demonstrated the technology for NASA's TROPICS (Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of SmallSats) mission. For TROPICS, the laboratory put its microwave sounders on low-cost, commercially available CubeSats for the first time.

Of all the technology used for sensing hurricanes, microwave sounders provide the greatest improvement to forecasting models. From space, these instruments detect a range of microwave frequencies that penetrate clouds, allowing them to measure 3D temperature, humidity, and precipitation in a storm. State-of-the-art instruments are typically large (the size of a washing machine) and hosted aboard $2 billion polar-orbiting satellites, which collectively may revisit a storm every six hours. If sounders could be miniaturized, laboratory researchers imagined, then they could be put on small satellites and launched in large numbers, working together to revisit storms more often.

The TROPICS sounder is the size of a coffee cup. The laboratory team worked for several years to develop and demonstrate the technology that resulted in a miniaturized instrument, while maintaining performance on par with traditional sounders for the frequencies that provide the most useful tropical cyclone observations. In 2023, NASA launched a constellation of four TROPICS satellites, which have since collected rapidly refreshed data on many tropical storms.

Now, Tomorrow.io plans to increase that constellation to a global network of 18 satellites. The resulting high-rate observations — under an hour — are expected to improve weather forecasts, hurricane tracking, and early-warning systems.
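A rough sense of why constellation size matters comes from a simple scaling argument: if satellites are evenly phased and each contributes equally to coverage, the revisit interval falls roughly as one over the number of spacecraft. The sketch below illustrates that scaling with a hypothetical 12-hour single-satellite revisit time; it is a back-of-envelope illustration, not the actual TROPICS or Tomorrow.io orbit design.

```python
# Back-of-envelope estimate of storm revisit time vs. constellation size.
# Assumes evenly phased satellites that each contribute equally to coverage,
# so the revisit interval scales as 1/N. The 12-hour single-satellite figure
# is a hypothetical placeholder, not a TROPICS or Tomorrow.io specification.

SINGLE_SAT_REVISIT_HOURS = 12.0  # assumed revisit time for one satellite

def revisit_hours(n_satellites: int) -> float:
    """Crude 1/N scaling of revisit interval with constellation size."""
    return SINGLE_SAT_REVISIT_HOURS / n_satellites

for n in (1, 4, 18):
    print(f"{n:2d} satellites -> ~{revisit_hours(n) * 60:.0f} minutes between revisits")
```

Under these assumptions, 18 satellites bring the interval to well under an hour, consistent with the high-rate observations described above.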

"This partnership with Tomorrow.io expands the impact of the TROPICS mission. Tomorrow.io’s increased constellation size, software pipeline, and resilient business model enable it to support a number of commercial and government organizations. This transfer to industry has resulted in a self-sustaining national capability, one that is expected to help the economy and the government for years to come," says Tom Roy, who managed the transfer of the technology to Tomorrow.io.

The technology transfer spanned 18 months. Under a cooperative research and development agreement (CRADA), the laboratory team adapted the TROPICS payload to an updated satellite design and delivered to Tomorrow.io the first three units, two of which were launched in September 2024. The team also provided in-depth training to Tomorrow.io and seven industry partners who will build, test, launch, and operate the future full commercial constellation. The remaining satellites are expected to launch before the end of this year.

"With these microwave sounders, we can set a new standard in atmospheric data collection and prediction. This technology allows us to capture atmospheric data with exceptional accuracy, especially over oceans and remote areas where traditional observations are scarce," said Rei Goffer, co-founder of Tomorrow.io, in a press release announcing the September launches.

Tomorrow.io will use the sounder data as input into their weather forecasts, data products, and decision support tools available to their customers, who range from major airlines to governments. Tomorrow.io's nonprofit partner, TomorrowNow, also plans to use the data as input to its climate model for improving food security in Africa.

This technology is especially relevant as hurricanes and severe weather events continue to cause significant destruction. In 2024, the United States experienced a near-record 27 disaster events that each exceeded $1 billion in damage, resulting in a total cost of approximately $182.7 billion and the deaths of at least 568 people. Globally, these storm systems cause thousands of deaths and billions of dollars in damage each year.

“It has been great to see the Lincoln Laboratory, Tomorrow.io, and industry partner teams work together so effectively to rapidly incorporate the TROPICS technology and bring the new Tomorrow.io microwave sounder constellation online,” says Bill Blackwell, principal investigator of the NASA TROPICS mission and the CRADA with Tomorrow.io. “I expect that the improved revisit rate provided by the Tomorrow.io constellation will drive further improvements in hurricane forecasting performance over and above what has already been demonstrated by TROPICS.”

The team behind the transfer includes Tom Roy, Bill Blackwell, Steven Gillmer, Rebecca Keenan, Nick Zorn, and Mike DiLiberto of Lincoln Laboratory and Kai Lemay, Scott Williams, Emma Watson, and Jan Wicha of Tomorrow.io. Lincoln Laboratory will be honored among other winners of 2025 FLC Awards at the FLC National Meeting to be held virtually on May 13.



from MIT News https://ift.tt/vNfj7u4

Study: Burning heavy fuel oil with scrubbers is the best available option for bulk maritime shipping

When the International Maritime Organization enacted a mandatory cap on the sulfur content of marine fuels in 2020, with an eye toward reducing harmful environmental and health impacts, it left shipping companies with three main options.

They could burn low-sulfur fossil fuels, like marine gas oil, or install cleaning systems to remove sulfur from the exhaust gas produced by burning heavy fuel oil. Biofuels with lower sulfur content offer another alternative, though their limited availability makes them a less feasible option.

While installing exhaust gas cleaning systems, known as scrubbers, is the most feasible and cost-effective option, there has been a great deal of uncertainty among firms, policymakers, and scientists as to how “green” these scrubbers are.

Through a novel lifecycle assessment, researchers from MIT, Georgia Tech, and elsewhere have now found that burning heavy fuel oil with scrubbers in the open ocean can match or surpass the environmental performance of low-sulfur fuels, when a wide variety of environmental factors is considered.

The scientists combined data on the production and operation of scrubbers and fuels with emissions measurements taken onboard an oceangoing cargo ship.

They found that, when the entire supply chain is considered, burning heavy fuel oil with scrubbers was the least harmful option in terms of nearly all 10 environmental impact factors they studied, such as greenhouse gas emissions, terrestrial acidification, and ozone formation.

“In our collaboration with Oldendorff Carriers to broadly explore reducing the environmental impact of shipping, this study of scrubbers turned out to be an unexpectedly deep and important transitional issue,” says Neil Gershenfeld, an MIT professor, director of the Center for Bits and Atoms (CBA), and senior author of the study.

“Claims about environmental hazards and policies to mitigate them should be backed by science. You need to see the data, be objective, and design studies that take into account the full picture to be able to compare different options from an apples-to-apples perspective,” adds lead author Patricia Stathatou, an assistant professor at Georgia Tech, who began this study as a postdoc in the CBA.

Stathatou is joined on the paper by Michael Triantafyllou, the Henry L. and Grace Doherty Professor in Ocean Science and Engineering at MIT, as well as others at the National Technical University of Athens in Greece and the maritime shipping firm Oldendorff Carriers. The research appears today in Environmental Science and Technology.

Slashing sulfur emissions

Heavy fuel oil, traditionally burned by bulk carriers that make up about 30 percent of the global maritime fleet, usually has a sulfur content around 2 to 3 percent. This is far higher than the International Maritime Organization’s 2020 cap of 0.5 percent in most areas of the ocean and 0.1 percent in areas near population centers or environmentally sensitive regions.

Sulfur oxide emissions contribute to air pollution and acid rain, and can damage the human respiratory system.

In 2018, fewer than 1,000 vessels employed scrubbers. After the cap went into place, higher prices of low-sulfur fossil fuels and limited availability of alternative fuels led many firms to install scrubbers so they could keep burning heavy fuel oil.

Today, more than 5,800 vessels utilize scrubbers, the majority of which are wet, open-loop scrubbers.

“Scrubbers are a very mature technology. They have traditionally been used for decades in land-based applications like power plants to remove pollutants,” Stathatou says.

A wet, open-loop marine scrubber is a huge, metal, vertical tank installed in a ship’s exhaust stack, above the engines. Inside, seawater drawn from the ocean is sprayed through a series of nozzles downward to wash the hot exhaust gases as they exit the engines.

The seawater interacts with sulfur dioxide in the exhaust, converting it to sulfates — water-soluble, environmentally benign compounds that naturally occur in seawater. The washwater is released back into the ocean, while the cleaned exhaust escapes to the atmosphere with little to no sulfur dioxide emissions.

But the acidic washwater can contain other combustion byproducts like heavy metals, so scientists wondered if scrubbers were comparable, from a holistic environmental point of view, to burning low-sulfur fuels.

Several studies explored toxicity of washwater and fuel system pollution, but none painted a full picture.

The researchers set out to fill that scientific gap.

A “well-to-wake” analysis

The team conducted a lifecycle assessment using a global environmental database on production and transport of fossil fuels, such as heavy fuel oil, marine gas oil, and very-low sulfur fuel oil. Considering the entire lifecycle of each fuel is key, since producing low-sulfur fuel requires extra processing steps in the refinery, causing additional emissions of greenhouse gases and particulate matter.

“If we just look at everything that happens before the fuel is bunkered onboard the vessel, heavy fuel oil is significantly more low-impact, environmentally, than low-sulfur fuels,” Stathatou says.

The researchers also collaborated with a scrubber manufacturer to obtain detailed information on all materials, production processes, and transportation steps involved in marine scrubber fabrication and installation.

“If you consider that the scrubber has a lifetime of about 20 years, the environmental impacts of producing the scrubber over its lifetime are negligible compared to producing heavy fuel oil,” she adds.
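A well-to-wake comparison of this kind folds fuel production and transport, equipment manufacturing, and onboard operation into per-category totals before any option is ranked. The sketch below shows that bookkeeping with invented stage names and numbers; the study's actual inventory data, impact categories, and characterization factors are far more detailed.

```python
# Minimal well-to-wake bookkeeping: sum impacts over life-cycle stages per
# category, then compare fuel options. All numbers below are illustrative
# placeholders, not values from the study.

options = {
    "HFO + scrubber": {
        "fuel production": {"GHG": 8.0, "acidification": 0.02},
        "scrubber manufacturing": {"GHG": 0.1, "acidification": 0.001},
        "onboard combustion": {"GHG": 78.0, "acidification": 0.05},
    },
    "low-sulfur fuel": {
        "fuel production": {"GHG": 12.0, "acidification": 0.03},
        "onboard combustion": {"GHG": 77.0, "acidification": 0.04},
    },
}

def totals(stages):
    """Aggregate impacts across all stages for each impact category."""
    out = {}
    for impacts in stages.values():
        for category, value in impacts.items():
            out[category] = out.get(category, 0.0) + value
    return out

for name, stages in options.items():
    print(name, totals(stages))
```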

For the final piece, Stathatou spent a week onboard a bulk carrier vessel in China to measure emissions and gather seawater and washwater samples. The ship burned heavy fuel oil with a scrubber and low-sulfur fuels under similar ocean conditions and engine settings.

Collecting these onboard data was the most challenging part of the study.

“All the safety gear, combined with the heat and the noise from the engines on a moving ship, was very overwhelming,” she says.

Their results showed that scrubbers reduce sulfur dioxide emissions by 97 percent, putting heavy fuel oil on par with low-sulfur fuels according to that measure. The researchers saw similar trends for emissions of other pollutants like carbon monoxide and nitrous oxide.
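Using the article's own figures, that 97 percent reduction applied to a typical 2.5 percent-sulfur heavy fuel oil corresponds to effective sulfur emissions of about 0.075 percent of fuel mass, below both the 0.5 percent global cap and the 0.1 percent limit for sensitive areas. The quick check below is only an equivalence on fuel sulfur content, not a full emissions model.

```python
# Quick equivalence check: sulfur emitted per unit of fuel, before and after
# a scrubber, compared with the IMO fuel-sulfur caps cited in the article.

hfo_sulfur_pct = 2.5          # typical heavy fuel oil sulfur content (2-3%)
scrubber_removal = 0.97       # fraction of sulfur dioxide removed by the scrubber

effective_pct = hfo_sulfur_pct * (1 - scrubber_removal)
print(f"Effective sulfur: {effective_pct:.3f}% of fuel mass")   # ~0.075%
print("Below 0.5% global cap:", effective_pct < 0.5)
print("Below 0.1% sensitive-area limit:", effective_pct < 0.1)
```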

In addition, they tested washwater samples for more than 60 chemical parameters, including nitrogen, phosphorus, polycyclic aromatic hydrocarbons, and 23 metals.

The concentrations of chemicals regulated by the IMO were far below the organization’s requirements. For unregulated chemicals, the researchers compared the concentrations to the strictest limits for industrial effluents from the U.S. Environmental Protection Agency and European Union.

Most chemical concentrations were at least an order of magnitude below these requirements.

In addition, since washwater is diluted thousands of times as it is dispersed by a moving vessel, the concentrations of such chemicals would be even lower in the open ocean.

These findings suggest that the use of scrubbers with heavy fuel oil can be considered equal to, or more environmentally friendly than, low-sulfur fuels across many of the impact categories the researchers studied.

“This study demonstrates the scientific complexity of the waste stream of scrubbers. Having finally conducted a multiyear, comprehensive, and peer-reviewed study, commonly held fears and assumptions are now put to rest,” says Scott Bergeron, managing director at Oldendorff Carriers and co-author of the study.

“This first-of-its-kind study on a well-to-wake basis provides very valuable input to ongoing discussion at the IMO,” adds Thomas Klenum, executive vice president of innovation and regulatory affairs at the Liberian Registry, emphasizing the need “for regulatory decisions to be made based on scientific studies providing factual data and conclusions.”

Ultimately, this study shows the importance of incorporating lifecycle assessments into future environmental impact reduction policies, Stathatou says.

“There is all this discussion about switching to alternative fuels in the future, but how green are these fuels? We must do our due diligence to compare them equally with existing solutions to see the costs and benefits,” she adds.

This study was supported, in part, by Oldendorff Carriers.



from MIT News https://ift.tt/3MDsUOe

Thursday, April 3, 2025

New method assesses and improves the reliability of radiologists’ diagnostic reports

Due to the inherent ambiguity in medical images like X-rays, radiologists often use words like “may” or “likely” when describing the presence of a certain pathology, such as pneumonia.

But do the words radiologists use to express their confidence level accurately reflect how often a particular pathology occurs in patients? A new study shows that when radiologists express confidence about a certain pathology using a phrase like “very likely,” they tend to be overconfident, and vice versa when they express less confidence using a word like “possibly.”

Using clinical data, a multidisciplinary team of MIT researchers in collaboration with researchers and clinicians at hospitals affiliated with Harvard Medical School created a framework to quantify how reliable radiologists are when they express certainty using natural language terms.

They used this approach to provide clear suggestions that help radiologists choose certainty phrases that would improve the reliability of their clinical reporting. They also showed that the same technique can effectively measure and improve the calibration of large language models by better aligning the words models use to express confidence with the accuracy of their predictions.

By helping radiologists more accurately describe the likelihood of certain pathologies in medical images, this new framework could improve the reliability of critical clinical information.

“The words radiologists use are important. They affect how doctors intervene, in terms of their decision making for the patient. If these practitioners can be more reliable in their reporting, patients will be the ultimate beneficiaries,” says Peiqi Wang, an MIT graduate student and lead author of a paper on this research.

He is joined on the paper by senior author Polina Golland, a Sunlin and Priscilla Chou Professor of Electrical Engineering and Computer Science (EECS), a principal investigator in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and the leader of the Medical Vision Group; as well as Barbara D. Lam, a clinical fellow at the Beth Israel Deaconess Medical Center; Yingcheng Liu, an MIT graduate student; Ameneh Asgari-Targhi, a research fellow at Massachusetts General Brigham (MGB); Rameswar Panda, a research staff member at the MIT-IBM Watson AI Lab; William M. Wells, a professor of radiology at MGB and a research scientist in CSAIL; and Tina Kapur, an assistant professor of radiology at MGB. The research will be presented at the International Conference on Learning Representations.

Decoding uncertainty in words

A radiologist writing a report about a chest X-ray might say the image shows a “possible” pneumonia, which is an infection that inflames the air sacs in the lungs. In that case, a doctor could order a follow-up CT scan to confirm the diagnosis.

However, if the radiologist writes that the X-ray shows a “likely” pneumonia, the doctor might begin treatment immediately, such as by prescribing antibiotics, while still ordering additional tests to assess severity.

Trying to measure the calibration, or reliability, of ambiguous natural language terms like “possibly” and “likely” presents many challenges, Wang says.

Existing calibration methods typically rely on the confidence score provided by an AI model, which represents the model’s estimated likelihood that its prediction is correct.

For instance, a weather app might predict an 83 percent chance of rain tomorrow. That model is well-calibrated if, across all instances where it predicts an 83 percent chance of rain, it rains approximately 83 percent of the time.
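That numeric notion of calibration can be checked by grouping predictions that share a stated confidence and comparing the claimed probability with how often the event actually occurred. The sketch below implements that standard reliability check on toy data; it is the conventional baseline the researchers build on, not their phrase-level method.

```python
from collections import defaultdict

# Standard reliability check: for each stated confidence level, compare the
# claimed probability with the observed frequency of the event.

predictions = [  # (stated probability of rain, did it rain?) -- toy data
    (0.83, True), (0.83, True), (0.83, False), (0.83, True),
    (0.30, False), (0.30, True), (0.30, False),
]

by_confidence = defaultdict(list)
for p, outcome in predictions:
    by_confidence[p].append(outcome)

for p, outcomes in sorted(by_confidence.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {p:.2f} -> observed {observed:.2f} over {len(outcomes)} cases")
```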

“But humans use natural language, and if we map these phrases to a single number, it is not an accurate description of the real world. If a person says an event is ‘likely,’ they aren’t necessarily thinking of the exact probability, such as 75 percent,” Wang says.

Rather than trying to map certainty phrases to a single percentage, the researchers’ approach treats them as probability distributions. A distribution describes the range of possible values and their likelihoods — think of the classic bell curve in statistics.

“This captures more nuances of what each word means,” Wang adds.

Assessing and improving calibration

The researchers leveraged prior work that surveyed radiologists to obtain probability distributions that correspond to each diagnostic certainty phrase, ranging from “very likely” to “consistent with.”

For instance, since more radiologists believe the phrase “consistent with” means a pathology is present in a medical image, its probability distribution climbs sharply to a high peak, with most values clustered around the 90 to 100 percent range.

In contrast, the phrase “may represent” conveys greater uncertainty, leading to a broader, bell-shaped distribution centered around 50 percent.
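One simple way to encode those shapes is to assign each phrase a Beta distribution: sharply peaked near 1.0 for “consistent with” and broad around 0.5 for “may represent.” The parameters below are illustrative stand-ins; the study works with distributions elicited from radiologist surveys rather than an assumed parametric family.

```python
from scipy.stats import beta

# Illustrative stand-ins for phrase-level probability distributions.
# Parameters are assumed for illustration, not taken from the study.
phrase_distributions = {
    "consistent with": beta(a=18, b=2),   # peaked near 0.9-1.0
    "may represent":   beta(a=4, b=4),    # broad, centered near 0.5
}

for phrase, dist in phrase_distributions.items():
    lo, hi = dist.interval(0.8)           # central 80% probability interval
    print(f"{phrase!r}: mean {dist.mean():.2f}, 80% interval [{lo:.2f}, {hi:.2f}]")
```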

Typical methods evaluate calibration by comparing how well a model’s predicted probability scores align with the actual number of positive results.

The researchers’ approach follows the same general framework but extends it to account for the fact that certainty phrases represent probability distributions rather than probabilities.

To improve calibration, the researchers formulated and solved an optimization problem that adjusts how often certain phrases are used, to better align confidence with reality.

They derived a calibration map that suggests certainty terms a radiologist should use to make the reports more accurate for a specific pathology.

“Perhaps, for this dataset, if every time the radiologist said pneumonia was ‘present,’ they changed the phrase to ‘likely present’ instead, then they would become better calibrated,” Wang explains.
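A toy version of such a calibration map can be built by comparing how often findings reported with each phrase actually turned out to be present against each phrase's nominal meaning, then remapping to the closest match. The greedy nearest-mean remapping below, with invented numbers, is a simplification of the optimization described in the paper.

```python
# Toy calibration map: remap each phrase to the phrase whose nominal meaning
# best matches how often the finding was actually present. The nominal means
# and observed rates below are invented for illustration.

nominal_mean = {"possibly": 0.35, "may represent": 0.5,
                "likely present": 0.75, "present": 0.95}

observed_rate = {"present": 0.78, "likely present": 0.70,
                 "may represent": 0.45, "possibly": 0.40}

calibration_map = {
    phrase: min(nominal_mean, key=lambda p: abs(nominal_mean[p] - rate))
    for phrase, rate in observed_rate.items()
}
print(calibration_map)
# e.g. 'present' (observed 0.78) maps to 'likely present' (nominal 0.75)
```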

When the researchers used their framework to evaluate clinical reports, they found that radiologists were generally underconfident when diagnosing common conditions like atelectasis, but overconfident with more ambiguous conditions like infection.

In addition, the researchers evaluated the reliability of language models using their method, providing a more nuanced representation of confidence than classical methods that rely on confidence scores. 

“A lot of times, these models use phrases like ‘certainly.’ But because they are so confident in their answers, it does not encourage people to verify the correctness of the statements themselves,” Wang adds.

In the future, the researchers plan to continue collaborating with clinicians in the hopes of improving diagnoses and treatment. They are working to expand their study to include data from abdominal CT scans.

In addition, they are interested in studying how receptive radiologists are to calibration-improving suggestions and whether they can mentally adjust their use of certainty phrases effectively.

“Expression of diagnostic certainty is a crucial aspect of the radiology report, as it influences significant management decisions. This study takes a novel approach to analyzing and calibrating how radiologists express diagnostic certainty in chest X-ray reports, offering feedback on term usage and associated outcomes,” says Atul B. Shinagare, associate professor of radiology at Harvard Medical School, who was not involved with this work. “This approach has the potential to improve radiologists’ accuracy and communication, which will help improve patient care.”

The work was funded, in part, by a Takeda Fellowship, the MIT-IBM Watson AI Lab, the MIT CSAIL Wistrom Program, and the MIT Jameel Clinic.



from MIT News https://ift.tt/PSDNRzs

Taking the “training wheels” off clean energy

Renewable power sources have seen unprecedented levels of investment in recent years. But with political uncertainty clouding the future of subsidies for green energy, these technologies must begin to compete with fossil fuels on equal footing, said participants at the 2025 MIT Energy Conference.

“What these technologies need less is training wheels, and more of a level playing field,” said Brian Deese, an MIT Institute Innovation Fellow, during a conference-opening keynote panel.

The theme of the two-day conference, which is organized each year by MIT students, was “Breakthrough to deployment: Driving climate innovation to market.” Speakers largely expressed optimism about advancements in green technology, balanced by occasional notes of alarm about a rapidly changing regulatory and political environment.

Deese defined what he called “the good, the bad, and the ugly” of the current energy landscape. The good: Clean energy investment in the United States hit an all-time high of $272 billion in 2024. The bad: Announcements of future investments have tailed off. And the ugly: Macro conditions are making it more difficult for utilities and private enterprise to build out the clean energy infrastructure needed to meet growing energy demands.

“We need to build massive amounts of energy capacity in the United States,” Deese said. “And the three things that are the most allergic to building are high uncertainty, high interest rates, and high tariff rates. So that’s kind of ugly. But the question … is how, and in what ways, that underlying commercial momentum can drive through this period of uncertainty.”

A shifting clean energy landscape

During a panel on artificial intelligence and growth in electricity demand, speakers said that the technology may serve as a catalyst for green energy breakthroughs, in addition to putting strain on existing infrastructure. “Google is committed to building digital infrastructure responsibly, and part of that means catalyzing the development of clean energy infrastructure that is not only meeting the AI need, but also benefiting the grid as a whole,” said Lucia Tian, head of clean energy and decarbonization technologies at Google.

Across the two days, speakers emphasized that the cost-per-unit and scalability of clean energy technologies will ultimately determine their fate. But they also acknowledged the impact of public policy, as well as the need for government investment to tackle large-scale issues like grid modernization.

Vanessa Chan, a former U.S. Department of Energy (DoE) official and current vice dean of innovation and entrepreneurship at the University of Pennsylvania School of Engineering and Applied Sciences, warned of the “knock-on” effects of the move to slash National Institutes of Health (NIH) funding for indirect research costs, for example. “In reality, what you’re doing is undercutting every single academic institution that does research across the nation,” she said.

During a panel titled “No clean energy transition without transmission,” Maria Robinson, former director of the DoE’s Grid Deployment Office, said that ratepayers alone will likely not be able to fund the grid upgrades needed to meet growing power demand. “The amount of investment we’re going to need over the next couple of years is going to be significant,” she said. “That’s where the federal government is going to have to play a role.”

David Cohen-Tanugi, a clean energy venture builder at MIT, noted that extreme weather events have changed the climate change conversation in recent years. “There was a narrative 10 years ago that said … if we start talking about resilience and adaptation to climate change, we’re kind of throwing in the towel or giving up,” he said. “I’ve noticed a very big shift in the investor narrative, the startup narrative, and more generally, the public consciousness. There’s a realization that the effects of climate change are already upon us.”

“Everything on the table”

The conference featured panels and keynote addresses on a range of emerging clean energy technologies, including hydrogen power, geothermal energy, and nuclear fusion, as well as a session on carbon capture.

Alex Creely, a chief engineer at Commonwealth Fusion Systems, explained that fusion (the combining of small atoms into larger atoms, which is the same process that fuels stars) is safer and potentially more economical than traditional nuclear power. Fusion facilities, he said, can be powered down instantaneously, and companies like his are developing new, less-expensive magnet technology to contain the extreme heat produced by fusion reactors.

By the early 2030s, Creely said, his company hopes to be operating 400-megawatt power plants that use only 50 kilograms of fuel per year. “If you can get fusion working, it turns energy into a manufacturing product, not a natural resource,” he said.

Quinn Woodard Jr., senior director of power generation and surface facilities at geothermal energy supplier Fervo Energy, said his company is making geothermal energy more economical through standardization, innovation, and economies of scale. Traditionally, he said, drilling is the largest cost in producing geothermal power. Fervo has “completely flipped the cost structure” with advances in drilling, Woodard said, and now the company is focused on bringing down its power plant costs.

“We have to continuously be focused on cost, and achieving that is paramount for the success of the geothermal industry,” he said.

One common theme across the conference: a number of approaches are making rapid advancements, but experts aren’t sure when — or, in some cases, if — each specific technology will reach a tipping point where it is capable of transforming energy markets.

“I don’t want to get caught in a place where we often descend in this climate solution situation, where it’s either-or,” said Peter Ellis, global director of nature climate solutions at The Nature Conservancy. “We’re talking about the greatest challenge civilization has ever faced. We need everything on the table.”

The road ahead

Several speakers stressed the need for academia, industry, and government to collaborate in pursuit of climate and energy goals. Amy Luers, senior global director of sustainability for Microsoft, compared the challenge to the Apollo spaceflight program, and she said that academic institutions need to focus more on how to scale and spur investments in green energy.

“The challenge is that academic institutions are not currently set up to be able to learn the how, in driving both bottom-up and top-down shifts over time,” Luers said. “If the world is going to succeed in our road to net zero, the mindset of academia needs to shift. And fortunately, it’s starting to.”

During a panel called “From lab to grid: Scaling first-of-a-kind energy technologies,” Hannan Happi, CEO of renewable energy company Exowatt, stressed that electricity is ultimately a commodity. “Electrons are all the same,” he said. “The only thing [customers] care about with regards to electrons is that they are available when they need them, and that they’re very cheap.”

Melissa Zhang, principal at Azimuth Capital Management, noted that energy infrastructure development cycles typically take at least five to 10 years — longer than a U.S. political cycle. However, she warned that green energy technologies are unlikely to receive significant support at the federal level in the near future. “If you’re in something that’s a little too dependent on subsidies … there is reason to be concerned over this administration,” she said.

World Energy CEO Gene Gebolys, the moderator of the lab-to-grid panel, listed off a number of companies founded at MIT. “They all have one thing in common,” he said. “They all went from somebody’s idea, to a lab, to proof-of-concept, to scale. It’s not like any of this stuff ever ends. It’s an ongoing process.”



from MIT News https://ift.tt/yHLphZu

Surprise discovery could lead to improved catalysts for industrial reactions

The process of catalysis — in which a material speeds up a chemical reaction — is crucial to the production of many of the chemicals used in our everyday lives. But even though these catalytic processes are widespread, researchers often lack a clear understanding of exactly how they work.

A new analysis by researchers at MIT has shown that an important industrial synthesis process, the production of vinyl acetate, requires a catalyst to take two different forms, which cycle back and forth from one to the other as the chemical process unfolds.

Previously, it had been thought that only one of the two forms was needed. The new findings are published today in the journal Science, in a paper by MIT graduate students Deiaa Harraz and Kunal Lodaya, Bryan Tang PhD ’23, and MIT professor of chemistry and chemical engineering Yogesh Surendranath.

There are two broad classes of catalysts: homogeneous catalysts, which consist of dissolved molecules, and heterogeneous catalysts, which are solid materials whose surface provides the site for the chemical reaction. “For the longest time,” Surendranath says, “there’s been a general view that you either have catalysis happening on these surfaces, or you have them happening on these soluble molecules.” But the new research shows that in the case of vinyl acetate — an important material that goes into many polymer products such as the rubber in the soles of your shoes — there is an interplay between both classes of catalysis.

“What we discovered,” Surendranath explains, “is that you actually have these solid metal materials converting into molecules, and then converting back into materials, in a cyclic dance.”

He adds: “This work calls into question this paradigm where there’s either one flavor of catalysis or another. Really, there could be an interplay between both of them in certain cases, and that could be really advantageous for having a process that’s selective and efficient.”

The synthesis of vinyl acetate has been a large-scale industrial reaction since the 1960s, and it has been well-researched and refined over the years to improve efficiency. This has happened largely through a trial-and-error approach, without a precise understanding of the underlying mechanisms, the researchers say.

While chemists are often more familiar with homogeneous catalysis mechanisms, and chemical engineers are often more familiar with surface catalysis mechanisms, fewer researchers study both. This is perhaps part of the reason that the full complexity of this reaction was not previously captured. But Harraz says he and his colleagues are working at the interface between disciplines. “We’ve been able to appreciate both sides of this reaction and find that both types of catalysis are critical,” he says.

The reaction that produces vinyl acetate requires something to activate the oxygen molecules that are one of the constituents of the reaction, and something else to activate the other ingredients, acetic acid and ethylene. The researchers found that the form of the catalyst that worked best for one part of the process was not the best for the other. It turns out that the molecular form of the catalyst does the key chemistry with the ethylene and the acetic acid, while it’s the surface that ends up doing the activation of the oxygen.

They found that the underlying process involved in interconverting the two forms of the catalyst is actually corrosion, similar to the process of rusting. “It turns out that in rusting, you actually go through a soluble molecular species somewhere in the sequence,” Surendranath says.

The team borrowed techniques traditionally used in corrosion research to study the process. They used electrochemical tools to study the reaction, even though the overall reaction does not require a supply of electricity. By making potential measurements, the researchers determined that the corrosion of the palladium catalyst material to soluble palladium ions is driven by an electrochemical reaction with the oxygen, converting it to water. Corrosion is “one of the oldest topics in electrochemistry,” says Lodaya, “but applying the science of corrosion to understand catalysis is much newer, and was essential to our findings.”

By correlating measurements of catalyst corrosion with other measurements of the chemical reaction taking place, the researchers proposed that it was the corrosion rate that was limiting the overall reaction. “That’s the choke point that’s controlling the rate of the overall process,” Surendranath says.
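The “choke point” idea is the familiar one for steps that must occur in sequence: at steady state, the slowest step caps the overall throughput, so accelerating any other step barely helps. The numbers in the sketch below are arbitrary and schematic; the paper's conclusion rests on measured corrosion and reaction rates, not this toy model.

```python
# Schematic rate-limiting-step illustration: for steps in series at steady
# state, overall throughput is capped by the slowest step. Rates are arbitrary.

step_rates = {
    "palladium corrosion to soluble ions": 1.0,   # slowest step (choke point)
    "molecular coupling of ethylene and acetic acid": 12.0,
    "redeposition back to the metal surface": 8.0,
}

bottleneck = min(step_rates, key=step_rates.get)
print(f"Overall rate ~ {step_rates[bottleneck]} (limited by: {bottleneck})")

# Doubling a non-limiting step leaves the overall rate unchanged.
step_rates["redeposition back to the metal surface"] *= 2
print(f"After speeding up a fast step: still ~ {min(step_rates.values())}")
```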

The interplay between the two types of catalysis works efficiently and selectively “because it actually uses the synergy of a material surface doing what it’s good at and a molecule doing what it’s good at,” Surendranath says. The finding suggests that, when designing new catalysts, rather than focusing on either solid materials or soluble molecules alone, researchers should think about how the interplay of both may open up new approaches.

“Now, with an improved understanding of what makes this catalyst so effective, you can try to design specific materials or specific interfaces that promote the desired chemistry,” Harraz says. Since this process has been worked on for so long, these findings may not necessarily lead to improvements in this specific process of making vinyl acetate, but it does provide a better understanding of why the materials work as they do, and could lead to improvements in other catalytic processes.

Understanding that “catalysts can transit between molecule and material and back, and the role that electrochemistry plays in those transformations, is a concept that we are really excited to expand on,” Lodaya says.

Harraz adds: “With this new understanding that both types of catalysis could play a role, what other catalytic processes are out there that actually involve both? Maybe those have a lot of room for improvement that could benefit from this understanding.”

This work is “illuminating, something that will be worth teaching at the undergraduate level,” says Christophe Coperet, a professor of inorganic chemistry at ETH Zurich, who was not associated with the research. “The work highlights new ways of thinking. ... [It] is notable in the sense that it not only reconciles homogeneous and heterogeneous catalysis, but it describes these complex processes as half reactions, where electron transfers can cycle between distinct entities.”

The research was supported, in part, by the National Science Foundation as a Phase I Center for Chemical Innovation; the Center for Interfacial Ionics; and the Gordon and Betty Moore Foundation.



from MIT News https://ift.tt/cHQnE7D

Engineers develop a way to mass manufacture nanoparticles that deliver cancer drugs directly to tumors

Polymer-coated nanoparticles loaded with therapeutic drugs show significant promise for cancer treatment, including ovarian cancer. These particles can be targeted directly to tumors, where they release their payload while avoiding many of the side effects of traditional chemotherapy.

Over the past decade, MIT Institute Professor Paula Hammond and her students have created a variety of these particles using a technique known as layer-by-layer assembly. They’ve shown that the particles can effectively combat cancer in mouse studies.

To help move these nanoparticles closer to human use, the researchers have now come up with a manufacturing technique that allows them to generate larger quantities of the particles, in a fraction of the time.

“There’s a lot of promise with the nanoparticle systems we’ve been developing, and we’ve been really excited more recently with the successes that we’ve been seeing in animal models for our treatments for ovarian cancer in particular,” says Hammond, who is also MIT’s vice provost for faculty and a member of the Koch Institute for Integrative Cancer Research. “Ultimately, we need to be able to bring this to a scale where a company is able to manufacture these on a large level.”

Hammond and Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute, are the senior authors of the new study, which appears today in Advanced Functional Materials. Ivan Pires PhD ’24, now a postdoc at Brigham and Women’s Hospital and a visiting scientist at the Koch Institute, and Ezra Gordon ’24 are the lead authors of the paper. Heikyung Suh, an MIT research technician, is also an author.

A streamlined process

More than a decade ago, Hammond’s lab developed a novel technique for building nanoparticles with highly controlled architectures. This approach allows layers with different properties to be laid down on the surface of a nanoparticle by alternately exposing the surface to positively and negatively charged polymers.

Each layer can be embedded with drug molecules or other therapeutics. The layers can also carry targeting molecules that help the particles find and enter cancer cells.

Using the strategy that Hammond’s lab originally developed, one layer is applied at a time, and after each application, the particles go through a centrifugation step to remove any excess polymer. This is time-intensive and would be difficult to scale up to large-scale production, the researchers say.

More recently, a graduate student in Hammond’s lab developed an alternative approach to purifying the particles, known as tangential flow filtration. However, while this streamlined the process, it still was limited by its manufacturing complexity and maximum scale of production.

“Although the use of tangential flow filtration is helpful, it’s still a very small-batch process, and a clinical investigation requires that we would have many doses available for a significant number of patients,” Hammond says.

To create a larger-scale manufacturing method, the researchers used a microfluidic mixing device that allows them to sequentially add new polymer layers as the particles flow through a microchannel within the device. For each layer, the researchers can calculate exactly how much polymer is needed, which eliminates the need to purify the particles after each addition.

“That is really important because separations are the most costly and time-consuming steps in these kinds of systems,” Hammond says.
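The exact-dosing idea can be illustrated with simple geometry: the polymer needed for a layer scales with the total particle surface area in the batch times an assumed adsorbed mass per unit area. The sketch below does that arithmetic with hypothetical particle sizes, counts, and surface coverage, since the team's actual dosing rules are not given in the article.

```python
import math

# Rough estimate of polymer needed to coat one layer on a batch of particles.
# All parameter values are hypothetical placeholders.

particle_diameter_nm = 100.0          # assumed particle size
n_particles = 1e13                    # assumed number of particles in the batch
adsorbed_mass_mg_per_m2 = 1.0         # assumed polymer adsorbed per unit area
excess_factor = 1.0                   # exact dosing: no excess to wash away

radius_m = particle_diameter_nm * 1e-9 / 2
area_per_particle_m2 = 4 * math.pi * radius_m**2
total_area_m2 = area_per_particle_m2 * n_particles

polymer_mg = total_area_m2 * adsorbed_mass_mg_per_m2 * excess_factor
print(f"Total surface area: {total_area_m2:.2f} m^2")
print(f"Polymer needed for this layer: {polymer_mg:.2f} mg")
```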

This strategy eliminates the need for manual polymer mixing, streamlines production, and integrates good manufacturing practice (GMP)-compliant processes. The FDA’s GMP requirements ensure that products meet safety standards and can be manufactured in a consistent fashion, which would be highly challenging and costly using the previous step-wise batch process. The microfluidic device that the researchers used in this study is already used for GMP manufacturing of other types of nanoparticles, including mRNA vaccines.

“With the new approach, there’s much less chance of any sort of operator mistake or mishaps,” Pires says. “This is a process that can be readily implemented in GMP, and that’s really the key step here. We can create an innovation within the layer-by-layer nanoparticles and quickly produce it in a manner that we could go into clinical trials with.”

Scaled-up production

Using this approach, the researchers can generate 15 milligrams of nanoparticles (enough for about 50 doses) in just a few minutes, while the original technique would take close to an hour to create the same amount. This could enable the production of more than enough particles for clinical trials and patient use, the researchers say.

“To scale up with this system, you just keep running the chip, and it is much easier to produce more of your material,” Pires says.
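Taking the article's figures at face value, 15 milligrams per run and about 50 doses works out to roughly 0.3 milligrams per dose. The quick comparison below contrasts doses per hour for the two methods, assuming a five-minute run time for the microfluidic process as a stand-in for “a few minutes.”

```python
# Throughput comparison from the article's figures. The 5-minute run time is
# an assumption standing in for "a few minutes."

batch_mg = 15.0
doses_per_batch = 50
mg_per_dose = batch_mg / doses_per_batch        # ~0.3 mg per dose

old_batch_minutes = 60.0                         # "close to an hour"
new_batch_minutes = 5.0                          # assumed

for label, minutes in [("original method", old_batch_minutes),
                       ("microfluidic method", new_batch_minutes)]:
    doses_per_hour = doses_per_batch * 60.0 / minutes
    print(f"{label}: ~{doses_per_hour:.0f} doses/hour ({mg_per_dose:.2f} mg/dose)")
```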

To demonstrate their new production technique, the researchers created nanoparticles coated with a cytokine called interleukin-12 (IL-12). Hammond’s lab has previously shown that IL-12 delivered by layer-by-layer nanoparticles can activate key immune cells and slow ovarian tumor growth in mice.

In this study, the researchers found that IL-12-loaded particles manufactured using the new technique showed similar performance as the original layer-by-layer nanoparticles. And, not only do these nanoparticles bind to cancer tissue, but they show a unique ability to not enter the cancer cells. This allows the nanoparticles to serve as markers on the cancer cells that activate the immune system locally in the tumor. In mouse models of ovarian cancer, this treatment can lead to both tumor growth delay and even cures.

The researchers have filed for a patent on the technology and are now working with MIT’s Deshpande Center for Technological Innovation in hopes of potentially forming a company to commercialize the technology. While they are initially focusing on cancers of the abdominal cavity, such as ovarian cancer, the work could also be applied to other types of cancer, including glioblastoma, the researchers say.

The research was funded by the U.S. National Institutes of Health, the Marble Center for Nanomedicine, the Deshpande Center for Technological Innovation, and the Koch Institute Support (core) Grant from the National Cancer Institute.



from MIT News https://ift.tt/hktXW7U