Saturday, November 30, 2024

A data designer driven to collaborate with communities

It is fairly common in public discourse for someone to announce, “I brought data to this discussion,” thus casting their own conclusions as empirical and rational. It is less common to ask: Where did the data come from? How was it collected? Why is there data about some things but not others?

MIT Associate Professor Catherine D’Ignazio SM ’14 does ask those kinds of questions. A scholar with a far-reaching portfolio of work, she has a strong interest in applying data to social issues — often to help the disempowered gain access to numbers, and to help provide a fuller picture of civic problems we are trying to address.

“If we want an educated citizenry to participate in our democracy with data and data-driven arguments, we should think about how we design our data infrastructures to support that,” says D’Ignazio.

Take, for example, the problem of feminicide, the killing of women as a result of gender-based violence. Activists throughout Latin America began tabulating cases and building databases that were often more thorough than official state records. D’Ignazio has studied the issue and, with colleagues, co-designed AI tools with human rights defenders to support their monitoring work.

In turn, D’Ignazio’s 2024 book on the subject, “Counting Feminicide,” chronicled the entire process and has helped bring the issue to a new audience. Where there was once a data void, now there are substantial databases helping people recognize the reality of the problem on multiple continents, thanks to innovative citizens. The book outlines how grassroots data science and citizen data activism are generally rising forms of civic participation.

“When we talk about innovation, I think: Innovation for whom? And by whom? For me those are key questions,” says D’Ignazio, a faculty member in MIT’s Department of Urban Studies and Planning and director of MIT’s Data and Feminism Lab. For her research and teaching, D’Ignazio was awarded tenure earlier this year.

Out of the grassroots

D’Ignazio has long cultivated an interest in data science, digital design, and global matters. She received her BA in international relations from Tufts University, then became a software developer in the private sector. Returning to her studies, she earned an MFA from the Maine College of Art, and then an MS from the MIT Media Lab, which helped her synthesize her intellectual outlook.

“The Media Lab for me was the place where I was able to converge all those interests I had been thinking about,” D’Ignazio says. “How can we have more creative applications of software and databases? How can we have more socially just applications of AI? And how do we organize our technology and resources for a more participatory and equitable future for all of us?”

To be sure, D’Ignazio did not spend all her time at the Media Lab examining database issues. In 2014 and 2018 she co-organized a feminist hackathon called “Make the Breast Pump Not Suck,” in which hundreds of participants developed innovative technologies and policies to address postpartum health and infant feeding. Still, much of her work has focused on data architecture, data visualization, and the analysis of the relationship between data production and society.

D'Ignazio started her teaching career as a lecturer in the Digital + Media graduate program at Rhode Island School of Design, then became an assistant professor of data visualization and civic media in Emerson College’s journalism department. She joined the MIT faculty as an assistant professor in 2020.

D’Ignazio’s first book, “Data Feminism,” co-authored with Lauren Klein of Emory University and published in 2020, took a wide-ranging look at many ways that everyday data reflects the civic society that it emerges from. The reported rates of sexual assault on college campuses, for instance, could be deceptive because the institutions with the lowest rates might be those with the most problematic reporting climates for survivors.

D'Ignazio’s global outlook — she has lived in France, Argentina, and Uruguay, among other places — has helped her understand the regional and national politics behind these issues, as well as the challenges citizen watchdogs can face in terms of data collection. No one should think such projects are easy.

“So much grassroots labor goes into the production of data,” D’Ignazio says. “One thing that’s really interesting is the huge amount of work it takes on the part of grassroots or citizen science groups to actually make data useful. And oftentimes that’s because of institutional data structures that are really lacking.”

Letting students thrive

Overall, the issue of who participates in data science is, as D’Ignazio and Klein have written, “the elephant in the server room.” As an associate professor, D’Ignazio works to encourage all students to think openly about data science and its social underpinnings. In turn, she also draws inspiration from productive students.

“Part of the joy and privilege of being a professor is you have students who take you in directions you would not have gone in yourself,” D’Ignazio says.

One of D’Ignazio’s graduate students at the moment, Wonyoung So, has been digging into housing data issues. It is fairly simple for property owners to access information about tenants, but less so the other way around; this makes it hard to find out if landlords have abnormally high eviction rates, for example.

“There are all of these technologies that allow landlords to get almost every piece of information about tenants, but there are so few technologies allowing tenants to know anything about landlords,” D’Ignazio explains. The availability of data “often ends up reproducing asymmetries that already exist in the world.” Moreover, even where housing data is published by jurisdictions, she notes, “it’s incredibly fragmented, and published poorly and differently, from place to place. There are massive inequities even in open data.”

In this way, housing is yet another area where new ideas and better data structures can be developed. It is not a topic D’Ignazio would have focused on by herself, but she views herself as a facilitator of innovative work by others. There is much progress to be made in the application of data science to society, often by developing new tools for people to use.

“I’m interested in thinking about how information and technology can challenge structural inequalities,” D’Ignazio says. “The question is: How do we design technologies that help communities build power?”



from MIT News https://ift.tt/U40bPKC

Tuesday, November 26, 2024

Creating innovative health solutions for individuals and populations

The factors impacting successful patient care are many and varied. Early diagnosis, proper adherence to prescription medication schedules, and effective monitoring and management of chronic disease, for example, all contribute to better outcomes. However, each of these factors can be hindered by outside influences — medication doesn’t work as well if it isn’t taken as prescribed, and disease can be missed or misdiagnosed in early stages if symptoms are mild or not present.

Giovanni Traverso, the Karl Van Tassel Career Development Professor, an associate professor of mechanical engineering, and a gastroenterologist in the Division of Gastroenterology, Brigham and Women’s Hospital (BWH), is working on a variety of innovative solutions to improve patient care. As a physician and an engineer, he brings a unique perspective.

“Bringing those two domains together is what really can help transform and accelerate our capacity to develop new biomedical devices or new therapies for a range of conditions,” he says. “As physicians, we're extremely fortunate to be able to help individuals. As scientists and engineers, not only can we help individuals … we can help populations.”

Traverso found a passion for this work early in life. His family lived in his father’s native Peru through much of his childhood, but left in the late 1980s at the height of the nation’s political instability, emigrating to Canada, where he began high school.

“In high school, I had the incredible opportunity to actually spend time in a lab,” he says. “I really fell in love with molecular genetics. I loved the lab environment and that ability to investigate a very specific problem, with the hopes that those developments would eventually help people.”

He started medical school immediately after high school, attending the University of Cambridge, but paused his medical training to pursue a PhD in medical sciences at Johns Hopkins University before returning to Cambridge. After finishing medical school, he completed his internal medicine residency at BWH and his gastroenterology fellowship training at Massachusetts General Hospital, both affiliated with Harvard Medical School. For his postdoctoral research, he transitioned to the fields of chemical and biomedical engineering in the laboratory of Professor Robert Langer.

Traverso’s research interests today include biomedical device development, ingestible and implantable robotics, and drug delivery for optimal drug adherence. His academic home at MIT is in the Department of Mechanical Engineering, but his work integrates multiple domains, including mechanical engineering, electrical engineering, material science, and synthetic biology.

“The mechanical engineering department is a tremendous place to engage with students, as well as faculty, towards the development of the next generation of medical devices,” he says. “At the core of many of those medical devices are fundamental mechanical principles.”

Traverso’s team in the Laboratory for Translational Engineering is developing pioneering biomedical devices such as drug delivery systems to enable safe, efficient delivery of therapeutics, and novel diagnostic tests to support early detection of diseases.

The heart of his work, he says, is “about trying to help others. Patients, of course, but also students, to help them see the arc of bench-to-bedside and help stimulate their interest in careers applying engineering to help improve human health.”



from MIT News https://ift.tt/zgnMqZy

Troy Van Voorhis to step down as department head of chemistry

Troy Van Voorhis, the Robert T. Haslam and Bradley Dewey Professor of Chemistry, will step down as department head of the Department of Chemistry at the end of this academic year. Van Voorhis has served as department head since 2019, previously serving the department as associate department head since 2015.

“Troy has been an invaluable partner and sounding board who could always be counted on for a wonderful mix of wisdom and pragmatism,” says Nergis Mavalvala, the Kathleen and Curtis Marble Professor of Astrophysics and dean of the MIT School of Science. “While department head, Troy provided calm guidance during the Covid pandemic, encouraging and financially supporting additional programs to improve his community’s quality of life.”

“I have had the pleasure of serving as head of our department for the past five-plus years. It has been a period of significant upheaval in our world,” says Van Voorhis. “Throughout it all, one of my consistent joys has been the privilege of working within the chemistry department and across the wider MIT community on research, education, and community building.”

Under Van Voorhis’ leadership, the Department of Chemistry implemented a department-wide statement of values that launched the Diversity, Equity, and Inclusion Committee, a Future Faculty Symposium that showcases rising stars in chemistry, and the Creating Bonds in Chemistry program that partners MIT faculty with chemistry faculty at select historically Black colleges and universities and minority-serving institutions.

Van Voorhis also oversaw a period of tremendous faculty growth in the department, with the addition of nine new faculty members. During his tenure as head, he guided the department through a surge of interest in chemistry, with the number of undergraduate majors, enrolled students, graduate students, and graduate student yields all rising significantly.

Van Voorhis also had the honor of celebrating with the entire Institute for Professor Moungi Bawendi’s Nobel Prize in Chemistry — the department’s first win in 18 years, since Professor Richard R. Schrock’s win in 2005.

In addition to his service to the department within the School of Science, Van Voorhis also co-chaired the Working Group on Curricula and Degrees for the MIT Stephen A. Schwarzman College of Computing. This service relates to Van Voorhis’ own research interests and programs.

Van Voorhis’ research lies at the nexus of chemistry and computation, and his work has implications for renewable energy and quantum computing. His lab is focused on developing new methods that provide an accurate description of electron dynamics in molecules and materials. Over the years, his research has led to advances in light-emitting diodes, solar cells, and other devices and technologies crucial to addressing 21st-century energy concerns.

Van Voorhis received his bachelor's degree in chemistry and mathematics from Rice University and his PhD in chemistry from the University of California at Berkeley in 2001. Following a postdoctoral fellowship at Harvard University, he joined the faculty of MIT in 2003 and was promoted to professor of chemistry in 2012.

He has received many honors and awards, including being named an Alfred P. Sloan Research Fellow, a fellow of the David and Lucile Packard Foundation, and a recipient of a National Science Foundation CAREER award. He has also received the MIT School of Science’s award for excellence in graduate teaching.



from MIT News https://ift.tt/3nOPGbW

Is there enough land on Earth to fight climate change and feed the world?

Capping global warming at 1.5 degrees Celsius is a tall order. Achieving that goal will not only require a massive reduction in greenhouse gas emissions from human activities, but also a substantial reallocation of land to support that effort and sustain the biosphere, including humans. More land will be needed to accommodate a growing demand for bioenergy and nature-based carbon sequestration while ensuring sufficient acreage for food production and ecological sustainability.

The expanding role of land in a 1.5 C world will be twofold — to remove carbon dioxide from the atmosphere and to produce clean energy. Land-based carbon dioxide removal strategies include bioenergy with carbon capture and storage; direct air capture; and afforestation/reforestation and other nature-based solutions (NBS). Land-based clean energy production includes wind and solar farms and sustainable bioenergy cropland. Any decision to allocate more land for climate mitigation must also address competing needs for long-term food security and ecosystem health.

Land-based climate mitigation choices vary in terms of costs — amount of land required, implications for food security, impact on biodiversity and other ecosystem services — and benefits — potential for sequestering greenhouse gases and producing clean energy.

Now a study in the journal Frontiers in Environmental Science provides the most comprehensive analysis to date of competing land-use and technology options to limit global warming to 1.5 C. Led by researchers at the MIT Center for Sustainability Science and Strategy (CS3), the study applies the MIT Integrated Global System Modeling (IGSM) framework to evaluate costs and benefits of different land-based climate mitigation options in Sky2050, a 1.5 C climate-stabilization scenario developed by Shell.

Under this scenario, demand for bioenergy and natural carbon sinks increases along with the need for sustainable farming and food production. To determine if there is enough land to meet all of these growing demands, the research team measures land in billions of hectares (gha), a hectare being an area of 10,000 square meters, or 2.471 acres, and uses current estimates of the Earth’s total habitable land area (about 10 gha) and of the land area used for food production and bioenergy (5 gha).

The team finds that with transformative changes in policy, land management practices, and consumption patterns, global land is sufficient to provide a sustainable supply of food and ecosystem services throughout this century while also reducing greenhouse gas emissions in alignment with the 1.5 C goal. These transformative changes include policies to protect natural ecosystems; stop deforestation and accelerate reforestation and afforestation; promote advances in sustainable agriculture technology and practice; reduce agricultural and food waste; and incentivize consumers to purchase sustainably produced goods.

If such changes are implemented, 2.5–3.5 gha of land would be used for NBS practices to sequester 3–6 gigatonnes (Gt) of CO2 per year, and 0.4–0.6 gha of land would be allocated for energy production — 0.2–0.3 gha for bioenergy and 0.2–0.35 gha for wind and solar power generation.
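For readers who want to check the arithmetic, here is a minimal Python sketch using only the land figures quoted above; pairing the low ends and the high ends of each range is an illustrative simplification, not a result from the study.

```python
# Rough land-budget check using the figures quoted above (all values in gha,
# i.e., billions of hectares). The ranges come from the article; pairing the
# low and high ends of each range is an illustrative simplification.
habitable_land = 10.0        # total habitable land (gha)
food_and_bioenergy = 5.0     # land already used for food production and bioenergy (gha)

nbs_range = (2.5, 3.5)       # nature-based solutions for CO2 sequestration (gha)
energy_range = (0.4, 0.6)    # bioenergy plus wind and solar generation (gha)

for label, nbs, energy in [("low end", nbs_range[0], energy_range[0]),
                           ("high end", nbs_range[1], energy_range[1])]:
    allocated = food_and_bioenergy + nbs + energy
    remaining = habitable_land - allocated
    print(f"{label}: {allocated:.1f} gha allocated, {remaining:.1f} gha left for other uses")
```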

“Our scenario shows that there is enough land to support a 1.5 degree C future as long as effective policies at national and global levels are in place,” says CS3 Principal Research Scientist Angelo Gurgel, the study’s lead author. “These policies must not only promote efficient use of land for food, energy, and nature, but also be supported by long-term commitments from government and industry decision-makers.”



from MIT News https://ift.tt/1hqgnRa

Monday, November 25, 2024

Decarbonizing heavy industry with thermal batteries

Whether you’re manufacturing cement, steel, chemicals, or paper, you need a large amount of heat. Almost without exception, manufacturers around the world create that heat by burning fossil fuels.

In an effort to clean up the industrial sector, some startups are changing manufacturing processes for specific materials. Some are even changing the materials themselves. Daniel Stack SM ’17, PhD ’21 is trying to address industrial emissions across the board by replacing the heat source.

Since coming to MIT in 2014, Stack has worked to develop thermal batteries that use electricity to heat up a conductive version of ceramic firebricks, which have been used as heat stores and insulators for centuries. In 2021, Stack co-founded Electrified Thermal Solutions, which has since demonstrated that its firebricks can store heat efficiently for hours and discharge it by heating air or gas up to 3,272 degrees Fahrenheit — hot enough to power the most demanding industrial applications.

Achieving temperatures north of 3,000 F represents a breakthrough for the electric heating industry, as it enables some of the world’s hardest-to-decarbonize sectors to utilize renewable energy for the first time. It also unlocks a new, low-cost model for using electricity when it’s at its cheapest and cleanest.

“We have a global perspective at Electrified Thermal, but in the U.S. over the last five years, we've seen an incredible opportunity emerge in energy prices that favors flexible offtake of electricity,” Stack says. “Throughout the middle of the country, especially in the wind belt, electricity prices in many places are negative for more than 20 percent of the year, and the trend toward decreasing electricity pricing during off-peak hours is a nationwide phenomenon. Technologies like our Joule Hive Thermal Battery will enable us to access this inexpensive, clean electricity and compete head to head with fossil fuels on price for industrial heating needs, without even factoring in the positive climate impact.”

A new approach to an old technology

Stack’s research plans changed quickly when he joined MIT’s Department of Nuclear Science and Engineering as a master’s student in 2014.

“I went to MIT excited to work on the next generation of nuclear reactors, but what I focused on almost from day one was how to heat up bricks,” Stack says. “It wasn’t what I expected, but when I talked to my advisor, [Principal Research Scientist] Charles Forsberg, about energy storage and why it was valuable to not just nuclear power but the entire energy transition, I realized there was no project I would rather work on.”

Firebricks are ubiquitous, inexpensive clay bricks that have been used for millennia in fireplaces and ovens. In 2017, Forsberg and Stack co-authored a paper showing firebricks’ potential to store heat from renewable resources, but the system still used electric resistance heaters — like the metal coils in toasters and space heaters — which limited its temperature output.

For his doctoral work, Stack worked with Forsberg to make firebricks that were electrically conductive, replacing the resistance heaters so the bricks produced the heat directly.

“Electric heaters are your biggest limiter: They burn out too fast, they break down, they don’t get hot enough,” Stack explains. “The idea was to skip the heaters because firebricks themselves are really cheap, abundant materials that can go to flame-like temperatures and hang out there for days.”

Forsberg and Stack were able to create conductive firebricks by tweaking the chemical composition of traditional firebricks. Electrified Thermal’s bricks are 98 percent similar to existing firebricks and are produced using the same processes, allowing existing manufacturers to make them inexpensively.

Toward the end of his PhD program, Stack realized the invention could be commercialized. He started taking classes at the MIT Sloan School of Management and spending time at the Martin Trust Center for MIT Entrepreneurship. He also entered the StartMIT program and the I-Corps program, and received support from the U.S. Department of Energy and MIT’s Venture Mentoring Service (VMS).

“Through the Boston ecosystem, the MIT ecosystem, and with help from the Department of Energy, we were able to launch this from the lab at MIT,” Stack says. “What we spun out was an electrically conductive firebrick, or what we refer to as an e-Brick.”

Electrified Thermal contains its firebrick arrays in insulated, off-the-shelf metal boxes. Although the system is highly configurable depending on the end use, the company’s standard system can take in and put out energy at a rate of about 5 megawatts and store about 25 megawatt-hours.
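A quick back-of-the-envelope check of those standard-system figures (the 5-megawatt and 25-megawatt-hour values come from the article; thermal losses and ramping are ignored) shows how long a full charge can sustain the rated output:

```python
# Back-of-the-envelope sizing for the standard system described above.
# Figures are from the article; thermal losses and ramping are ignored.
power_mw = 5.0        # charge/discharge rate (MW)
capacity_mwh = 25.0   # storage capacity (MWh)

hours_at_full_power = capacity_mwh / power_mw
print(f"A full charge supplies {power_mw:.0f} MW for about {hours_at_full_power:.0f} hours")
# -> A full charge supplies 5 MW for about 5 hours
```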

The company has demonstrated its system’s ability to produce high temperatures and has been cycling its system at its headquarters in Medford, Massachusetts. That work has collectively earned Electrified Thermal $40 million from various Department of Energy offices to scale the technology and work with manufacturers.

“Compared to other electric heating, we can run hotter and last longer than any other solution on the market,” Stack says. “That means replacing fossil fuels at a lot of industrial sites that couldn't otherwise decarbonize.”

Scaling to solve a global problem

Electrified Thermal is engaging with hundreds of industrial companies, including manufacturers of cement, steel, glass, basic and specialty chemicals, food and beverage, and pulp and paper.

“The industrial heating challenge affects everyone under the sun,” Stack says. “They all have fundamentally the same problem, which is getting their heat in a way that is affordable and zero carbon for the energy transition.”

The company is currently building a megawatt-scale commercial version of its system, which it expects to be operational in the next seven months.

“Next year will be a huge proof point to the industry,” Stack says. “We’ll be using the commercial system to showcase a variety of operating points that customers need to see, and we’re hoping to be running systems on customer sites by the end of the year. It’ll be a huge achievement and a first for electric heating because no other solution in the market can put out the kind of temperatures that we can put out.”

By working with manufacturers to produce its firebricks and casings, Electrified Thermal hopes to be able to deploy its systems rapidly and at low cost across a massive industry.

“From the very beginning, we engineered these e-bricks to be rapidly scalable and rapidly producible within existing supply chains and manufacturing processes,” Stack says. “If you want to decarbonize heavy industry, there will be no cheaper way than turning electricity into heat from zero-carbon electricity assets. We’re seeking to be the premier technology that unlocks those capabilities, with double digit percentages of global energy flowing through our system as we accomplish the energy transition.”



from MIT News https://ift.tt/FjSOq26

Professor Emeritus James Harris, a scholar of Spanish language, dies at 92

James Wesley “Jim” Harris PhD ’67, professor emeritus of Spanish and linguistics, passed away on Nov. 10. He was 92.

Harris attended the University of Georgia, the Instituto Tecnológico de Estudios Superiores de Monterrey, and the Universidad Nacional Autónoma de México. He later earned a master’s degree in linguistics from Louisiana State University and a PhD in linguistics from MIT.

Harris joined the MIT faculty as an assistant professor in 1967, where he remained until his retirement in 1996. During his tenure, he served as head of what was then called the Department of Foreign Languages and Literatures.

“I met Jim when I came to MIT in 1977 as department head of the neonatal Department of Linguistics and Philosophy,” says Samuel Jay Keyser, MIT professor emeritus of linguistics. “Throughout his career in the department, he never relinquished his connection to the unit that first employed him at MIT.”

In his early days at MIT, when French, German, and Russian dominated as elite “languages of science and world literature,” Harris championed, over some opposition, the introduction of Spanish language and literature courses.

He later oversaw the inclusion of Japanese and Chinese courses as language offerings at MIT. He promoted undergraduate courses in linguistics, leading to a full undergraduate degree program and later broadening the focus of the prestigious PhD program. 

His research in linguistics centered on theoretical phonology and morphology. His books, presentations at professional meetings, and articles in peer-reviewed journals were among the most discussed — in both positive and negative assessments, as he noted — by prominent scholars in the field. His ability to teach complex technical material comfortably in Spanish, along with the status of an MIT professorship, brought invitations to teach at universities across Spain and Latin America. He was also highly valued as a member of the editorial boards of several professional journals.

“I remember Jim most of all for being the consummate scholar,” Keyser says. “His articles were models of argumentation. They were assembled with all the precision of an Inca wall and all the beauty of a Faberge Egg. You couldn’t slip a credit card through any of its arguments, they were so superbly sculpted.”

Having achieved national recognition as an English-Spanish bilingual teacher and teacher-trainer, Harris was engaged as a writer at the Modern Language Materials Development Center in New York. Later, he co-authored, with Guillermo Segreda, a series of popular college-level Spanish textbooks.

“Harris belonged to Noam Chomsky and Morris Halle’s first generation of graduate students,” says MIT linguist Michael John Kenstowicz. “Together they overturned the distributionalist model of the structuralists in favor of ordered generative rules.”

After retiring from MIT, he remained internationally recognized as a highly influential figure in the area of Romance linguistics, and “el decano” (“the dean”) of Spanish phonology.

Harris was married to Florence Warshawsky Harris for 50 years until her passing in 2020. In 2011, in celebration of the program’s 50th anniversary, they partnered to prepare and publish a detailed history of the linguistics program’s origins. Warshawsky Harris, formerly an MIT graduate student, also edited Chomsky and Halle’s influential "The Sound Pattern of English" and numerous other important linguistic texts.

Harris’ scholarship was reflected in the diverse body of scholarly articles and textbooks he authored, co-authored, edited, and published.

Harris was born outside Atlanta, Georgia, in 1932. During the Korean War, he performed his military service as the clarinet and saxophone instructor at the U.S. Naval School of Music in Washington. After his discharge, he directed the band at the Charlotte Hall School in Maryland, where he also taught Spanish, French, and Latin.

Harris is survived by his daughter, Lynn Corinne Harris; his son-in-law, Rabbi David Adelson; and his grandchildren, Bee Adelson and Sam Harris.



from MIT News https://ift.tt/z3RjAdX

To design better water filters, MIT engineers look to manta rays

Filter feeders are everywhere in the animal world, from tiny crustaceans and certain types of coral and krill, to various molluscs, barnacles, and even massive basking sharks and baleen whales. Now, MIT engineers have found that one filter feeder has evolved to sift food in ways that could improve the design of industrial water filters.

In a paper appearing this week in the Proceedings of the National Academy of Sciences, the team characterizes the filter-feeding mechanism of the mobula ray — a family of aquatic rays that includes two manta species and seven devil rays. Mobula rays feed by swimming open-mouthed through plankton-rich regions of the ocean and filtering plankton particles into their gullet as water streams into their mouths and out through their gills.

The floor of the mobula ray’s mouth is lined on either side with parallel, comb-like structures, called plates, that siphon water into the ray’s gills. The MIT team has shown that the dimensions of these plates may allow for incoming plankton to bounce all the way across the plates and further into the ray’s cavity, rather than out through the gills. What’s more, the ray’s gills absorb oxygen from the outflowing water, helping the ray to simultaneously breathe while feeding.

“We show that the mobula ray has evolved the geometry of these plates to be the perfect size to balance feeding and breathing,” says study author Anette “Peko” Hosoi, the Pappalardo Professor of Mechanical Engineering at MIT.

The engineers fabricated a simple water filter modeled after the mobula ray’s plankton-filtering features. They studied how water flowed through the filter when it was fitted with 3D-printed plate-like structures. The team took the results of these experiments and drew up a blueprint, which they say designers can use to optimize industrial cross-flow filters, which are broadly similar in configuration to that of the mobula ray.

“We want to expand the design space of traditional cross-flow filtration with new knowledge from the manta ray,” says lead author and MIT postdoc Xinyu Mao PhD ’24. “People can choose a parameter regime of the mobula ray so they could potentially improve overall filter performance.”

Hosoi and Mao co-authored the new study with Irmgard Bischofberger, associate professor of mechanical engineering at MIT.

A better trade-off

The new study grew out of the group’s focus on filtration during the height of the Covid pandemic, when the researchers were designing face masks to filter out the virus. Since then, Mao has shifted focus to study filtration in animals and how certain filter-feeding mechanisms might improve filters used in industry, such as in water treatment plants.

Mao observed that any industrial filter must strike a balance between permeability (how easily fluid can flow through a filter), and selectivity (how successful a filter is at keeping out particles of a target size). For instance, a membrane that is studded with large holes might be highly permeable, meaning a lot of water can be pumped through using very little energy. However, the membrane’s large holes would let many particles through, making it very low in selectivity. Likewise, a membrane with much smaller pores would be more selective yet also require more energy to pump the water through the smaller openings.

“We asked ourselves, how do we do better with this tradeoff between permeability and selectivity?” Hosoi says.

As Mao looked into filter-feeding animals, he found that the mobula ray has struck an ideal balance between permeability and selectivity: The ray is highly permeable, in that it can let water into its mouth and out through its gills quickly enough to capture oxygen to breathe. At the same time, it is highly selective, filtering and feeding on plankton rather than letting the particles stream out through the gills.

The researchers realized that the ray’s filtering features are broadly similar to that of industrial cross-flow filters. These filters are designed such that fluid flows across a permeable membrane that lets through most of the fluid, while any polluting particles continue flowing across the membrane and eventually out into a reservoir of waste.

The team wondered whether the mobula ray might inspire design improvements to industrial cross-flow filters. For that, they took a deeper dive into the dynamics of mobula ray filtration.

A vortex key

As part of their new study, the team fabricated a simple filter inspired by the mobula ray. The filter’s design is what engineers refer to as a “leaky channel” — effectively, a pipe with holes along its sides. In this case, the team’s “channel” consists of two flat, transparent acrylic plates that are glued together at the edges, with a slight opening between the plates through which fluid can be pumped. At one end of the channel, the researchers inserted 3D-printed structures resembling the grooved plates that run along the floor of the mobula ray’s mouth.

The team then pumped water through the channel at various rates, along with colored dye to visualize the flow. They took images across the channel and observed an interesting transition: At slow pumping rates, the flow was “very peaceful,” and fluid easily slipped through the grooves in the printed plates and out into a reservoir. When the researchers increased the pumping rate, the faster-flowing fluid did not slip through, but appeared to swirl at the mouth of each groove, creating a vortex, similar to a small knot of hair between the tips of a comb’s teeth.

“This vortex is not blocking water, but it is blocking particles,” Hosoi explains. “Whereas in a slower flow, particles go through the filter with the water, at higher flow rates, particles try to get through the filter but are blocked by this vortex and are shot down the channel instead. The vortex is helpful because it prevents particles from flowing out.”

The team surmised that vortices are the key to mobula rays’ filter-feeding ability. The ray is able to swim at just the right speed that water, streaming into its mouth, can form vortices between the grooved plates. These vortices effectively block any plankton particles — even those that are smaller than the space between plates. The particles then bounce across the plates and head further into the ray’s cavity, while the rest of the water can still flow between the plates and out through the gills.

The researchers used the results of their experiments, along with dimensions of the filtering features of mobula rays, to develop a blueprint for cross-flow filtration.

“We have provided practical guidance on how to actually filter as the mobula ray does,” Mao offers.

“You want to design a filter such that you’re in the regime where you generate vortices,” Hosoi says. “Our guidelines tell you: If you want your plant to pump at a certain rate, then your filter has to have a particular pore diameter and spacing to generate vortices that will filter out particles of this size. The mobula ray is giving us a really nice rule of thumb for rational design.”

This work was supported, in part, by the U.S. National Institutes of Health, and the Harvey P. Greenspan Fellowship Fund. 



from MIT News https://ift.tt/Pak2DKL

New solar projects will grow renewable energy generation for four major campus buildings

In the latest step to implement commitments made in MIT’s Fast Forward climate action plan, staff from the Department of Facilities; Office of Sustainability; and Environment, Health and Safety Office are advancing new solar panel installations this fall and winter on four major campus buildings: The Stratton Student Center (W20), the Dewey Library building (E53), and two newer buildings, New Vassar (W46) and the Theater Arts building (W97).

These four new installations, in addition to existing rooftop solar installations on campus, are “just one part of our broader strategy to reduce MIT’s carbon footprint and transition to clean energy,” says Joe Higgins, vice president for campus services and stewardship.

The installations will not only meet but exceed the target for total solar energy production on campus set in the Fast Forward climate action plan, which was issued in 2021. Against an initial target of 500 kilowatts of installed solar capacity on campus, the new installations, along with those already in place, will bring the total installed capacity to roughly 650 kW. The solar installations are an important facet of MIT’s approach to eliminating all direct campus emissions by 2050.

Placing solar panels on campus rooftops is far more complex than installing them on an ordinary house. The process began with a detailed assessment of the potential for reducing the campus greenhouse gas footprint. A first cut eliminated rooftops that were too shaded by trees or other buildings. Then, the schedule for regular replacement of roofs had to be taken into account — it’s better to put new solar panels on top of a roof that will not need replacement in a few years. Other roofs, especially on lab buildings, simply had too much existing equipment on them to leave a large open area for solar panels.

Randa Ghattas, senior sustainability project manager, and Taya Dixon, assistant director for capital budgets and contracts within the Department of Facilities, spearheaded the project. Their initial assessment identified many buildings with significant solar potential, but it took the impetus of the Fast Forward plan to kick things into action.

Even after winnowing down the list of campus buildings based on shading and the life cycle of roof replacements, there were still many other factors to consider. Some buildings that had ample roof space were of older construction that couldn’t bear the loads of a full solar installation without significant reconstruction. “That actually has proved trickier than we thought,” Ghattas says. For example, one building that seemed a good candidate, and already had some solar panels on it, proved unable to sustain the greater weight and wind loads of a full solar installation. Structural capacity, she says, turned out to be “probably the most important” factor in this case.

The roofs on the Student Center and on the Dewey Library building were replaced in the last few years with the intention of the later addition of solar panels. And the two newer buildings were designed from the beginning with solar in mind, even though the solar panels were not part of the initial construction. “The designs were built into them to accommodate solar,” Dixon says, “so those were easy options for us because we knew the buildings were solar-ready and could support solar being integrated into their systems, both the electrical system and the structural system of the roof.”

But there were also other considerations. The Student Center is considered a historically significant building, so the installation, including a safety railing that had to be built around the solar array, had to be designed to be invisible from street level. That turned out not to be a problem. “It was fine for this building,” Ghattas says, because the geometry of the building and its roofs hid the safety railing from view below.

Each installation will connect directly to the building’s electrical system, and thus into the campus grid. The power they produce will be used in the buildings they are on, though none will be sufficient to fully power its building. Overall, the new installations, together with the existing ones on the MIT Sloan School of Management building (E62) and the Alumni Pool (57), and the planned array on the new Graduate Junction dorm (W87-W88), will be enough to supply 5 to 10 percent of those buildings’ electricity needs and offset about 190 metric tons of carbon dioxide emissions each year, Ghattas says. This is equivalent to the electricity use of 35 homes annually.

Each building installation is expected to take just a couple of weeks. “We’re hopeful that we’re going to have everything installed and operational by the end of this calendar year,” she says.

Other buildings could be added in coming years, as their roof replacement cycles come around. With the lessons learned along the way in getting to this point, Ghattas says, “now that we have a system in place, hopefully it’s going to be much easier in the future.”

Higgins adds that “in parallel with the solar projects, we’re working on expanding electric vehicle charging stations and the electric vehicle fleet and reducing energy consumption in campus buildings.”

Besides the on-campus improvements, he says, “MIT is focused on both the local and the global.” In addition to solar installations on campus buildings, which can only mitigate a small portion of campus emissions, “large-scale aggregation partnerships are key to moving the actual market landscape for adding cleaner energy generation to power grids,” which must ultimately lead to zero emissions, he says. “We are spurring the development of new utility-grade renewable energy facilities in regions with high carbon-intensive electrical grids. These projects have an immediate and significant impact in the urgently needed decarbonization of regional power grids.”

Higgins says that other technologies, strategies, and practices are being evaluated for heating, cooling, and power for the campus, “with zero carbon emissions by 2050, utilizing cleaner energy sources.” He adds that these campus initiatives “are part of MIT’s larger Climate Project, aiming to drive progress both on campus and beyond, advancing broader partnerships, new market models, and informing approaches to climate policy.” 



from MIT News https://ift.tt/GqailC1

New AI tool generates realistic satellite images of future flooding

Visualizing the potential impacts of a hurricane on people’s homes before it hits can help residents prepare and decide whether to evacuate.

MIT scientists have developed a method that generates satellite imagery from the future to depict how a region would look after a potential flooding event. The method combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird’s-eye-view images of a region, showing where flooding is likely to occur given the strength of an oncoming storm.

As a test case, the team applied the method to Houston and generated satellite images depicting what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit, as well as with AI-generated images produced without the physics-based flood model.

The team’s physics-reinforced method generated satellite images of future flooding that were more realistic and accurate. The AI-only method, in contrast, generated images of flooding in places where flooding is not physically possible.

The team’s method is a proof-of-concept, meant to demonstrate a case in which generative AI models can generate realistic, trustworthy content when paired with a physics-based model. In order to apply the method to other regions to depict flooding from future storms, it will need to be trained on many more satellite images to learn how flooding would look in other regions.

“The idea is: One day, we could use this before a hurricane, where it provides an additional visualization layer for the public,” says Björn Lütjens, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a doctoral student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness.”

To illustrate the potential of the new method, which they have dubbed the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.

The researchers report their results today in the journal IEEE Transactions on Geoscience and Remote Sensing. The study’s MIT co-authors include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.

Generative adversarial images

The new study is an extension of the team’s efforts to apply generative AI tools to visualize future climate scenarios.

“Providing a hyper-local perspective of climate seems to be the most effective way to communicate our scientific results,” says Newman, the study’s senior author. “People relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable.”

For this study, the authors use a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or “adversarial,” neural networks. The first “generator” network is trained on pairs of real data, such as satellite images before and after a hurricane. The second “discriminator” network is then trained to distinguish between real satellite imagery and imagery synthesized by the first network.

Each network automatically improves its performance based on feedback from the other network. The idea, then, is that such an adversarial push and pull should ultimately produce synthetic images that are indistinguishable from the real thing. Nevertheless, GANs can still produce “hallucinations,” or factually incorrect features in an otherwise realistic image that shouldn’t be there.
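The article does not include the team’s code; the following is a generic, minimal PyTorch sketch of the adversarial setup described above, with a conditional generator, a discriminator, and toy random tensors standing in for pre- and post-storm image pairs. It illustrates the structure of one training step only, not the study’s actual model.

```python
# Minimal, generic sketch of one conditional-GAN training step (not the authors' code).
# The generator maps a "pre-storm" image to a synthetic "post-storm" image; the
# discriminator scores (pre, post) pairs as real or fake. Random tensors stand in for data.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, pre):             # condition on the pre-storm image
        return self.net(pre)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),
        )

    def forward(self, pre, post):       # score a (pre, post) image pair
        return self.net(torch.cat([pre, post], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

pre, post_real = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)  # toy batch
real, fake = torch.ones(4, 1), torch.zeros(4, 1)

# Discriminator step: real pairs should score 1, generated pairs 0.
post_fake = G(pre).detach()
loss_d = bce(D(pre, post_real), real) + bce(D(pre, post_fake), fake)
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: try to make the discriminator score generated pairs as real.
loss_g = bce(D(pre, G(pre)), real)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```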

“Hallucinations can mislead viewers,” says Lütjens, who began to wonder whether such hallucinations could be avoided, such that generative AI tools can be trusted to help inform people, particularly in risk-sensitive scenarios. “We were thinking: How can we use these generative AI models in a climate-impact setting, where having trusted data sources is so important?”

Flood hallucinations

In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future flooding that could be trustworthy enough to inform decisions about how to prepare and potentially evacuate people out of harm’s way.

Typically, policymakers can get an idea of where flooding might occur based on visualizations in the form of color-coded maps. These maps are the final product of a pipeline of physical models that usually begins with a hurricane track model, which then feeds into a wind model that simulates the pattern and strength of winds over a local region. This is combined with a flood or storm surge model that forecasts how wind might push any nearby body of water onto land. A hydraulic model then maps out where flooding will occur based on the local flood infrastructure and generates a visual, color-coded map of flood elevations over a particular region.
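As a schematic of that chain of models, here is a toy, runnable Python sketch; every stage below is a placeholder transform invented for illustration, not a real hurricane-track, wind, surge, or hydraulic model.

```python
# Toy schematic of the physics-based flood-mapping pipeline described above.
# Every stage is a stand-in transform, not a real physical model.
import numpy as np

def hurricane_track_model(obs):
    return {"lat": obs["lat"], "lon": obs["lon"], "intensity": obs["wind_speed"]}

def wind_model(track, grid_shape=(32, 32)):
    return np.full(grid_shape, track["intensity"])        # uniform wind field (toy)

def storm_surge_model(winds, elevation):
    return np.clip(winds * 0.01 - elevation, 0.0, None)   # more surge where land is low (toy)

def hydraulic_model(surge, drainage_capacity=0.3):
    return np.clip(surge - drainage_capacity, 0.0, None)  # drainage removes some water (toy)

# Run the chain end to end on synthetic inputs.
observations = {"lat": 29.76, "lon": -95.37, "wind_speed": 60.0}   # rough Houston location
elevation = np.random.default_rng(0).uniform(0.0, 1.0, (32, 32))   # synthetic terrain (arbitrary units)

track = hurricane_track_model(observations)
flood_depths = hydraulic_model(storm_surge_model(wind_model(track), elevation))
print(f"Fraction of grid cells flooded: {(flood_depths > 0).mean():.0%}")
```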

“The question is: Can visualizations of satellite imagery add another level to this, that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?” Lütjens says.

The team first tested how generative AI alone would produce satellite images of future flooding. They trained a GAN on actual satellite images taken over Houston before and after Hurricane Harvey. When they tasked the generator to produce new flood images of the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinations in some images, in the form of floods where flooding should not be possible (for instance, in locations at higher elevation).

To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as an approaching hurricane’s trajectory, storm surge, and flood patterns. With this physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as forecasted by the flood model.

“We show a tangible way to combine machine learning with physics for a use case that’s risk-sensitive, which requires us to analyze the complexity of Earth’s systems and project future actions and possible scenarios to keep people out of harm’s way,” Newman says. “We can’t wait to get our generative AI tools into the hands of decision-makers at the local community level, which could make a significant difference and perhaps save lives.”

The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.



from MIT News https://ift.tt/SlMOL9s

Friday, November 22, 2024

Consortium led by MIT, Harvard University, and Mass General Brigham spurs development of 408 MW of renewable energy

MIT is co-leading an effort to enable the development of two new large-scale renewable energy projects in regions with carbon-intensive electrical grids: Big Elm Solar in Bell County, Texas, came online this year, and the Bowman Wind Project in Bowman County, North Dakota, is expected to be operational in 2026. Together, they will add a combined 408 megawatts (MW) of new renewable energy capacity to the power grid. This work is a critical part of MIT’s strategy to achieve its goal of net-zero carbon emissions by 2026.

The Consortium for Climate Solutions, which includes MIT and 10 other Massachusetts organizations, seeks to eliminate close to 1 million metric tons of greenhouse gases each year — more than five times the annual direct emissions from MIT’s campus — by committing to purchase an estimated 1.3 million megawatt-hours of new solar and wind electricity generation annually.

“MIT has mobilized on multiple fronts to expedite solutions to climate change,” says Glen Shor, executive vice president and treasurer. “Catalyzing these large-scale renewable projects is an important part of our comprehensive efforts to reduce carbon emissions from generating energy. We are pleased to work in partnership with other local enterprises and organizations to amplify the impact we could achieve individually.”

The two new projects complement MIT’s existing 25-year power purchase agreement established with Summit Farms in 2016, which enabled the construction of a roughly 650-acre, 60 MW solar farm on farmland in North Carolina, leading to the early retirement of a coal-fired plant nearby. Its success has inspired other institutions to implement similar aggregation models.

A collective approach to enable global impact

MIT, Harvard University, and Mass General Brigham formed the consortium in 2020 to provide a structure to accelerate global emissions reductions through the development of large-scale renewable energy projects — accelerating and expanding the impact of each institution’s greenhouse gas reduction initiatives. As the project’s anchors, they collectively procured the largest volume of energy through the aggregation.  

The consortium engaged with PowerOptions, a nonprofit energy-buying consortium, which offered its members the opportunity to participate in the projects. The City of Cambridge, Beth Israel Lahey, Boston Children’s Hospital, Dana-Farber Cancer Institute, Tufts University, the Mass Convention Center Authority, the Museum of Fine Arts, and GBH later joined the consortium through PowerOptions. 
 
The consortium vetted over 125 potential projects against its rigorous project evaluation criteria. With faculty and MIT stakeholder input on a short list of the highest-ranking projects, it ultimately chose Bowman Wind and Big Elm Solar. Collectively, these two projects will achieve large greenhouse gas emissions reductions in two of the most carbon-intensive electrical grid regions in the United States and create clean energy generation sources to reduce negative health impacts.

“Enabling these projects in regions where the grids are most carbon-intensive allows them to have the greatest impact. We anticipate these projects will prevent two times more emissions per unit of generated electricity than would a similar-scale project in New England,” explains Vice President for Campus Services and Stewardship Joe Higgins.

Because all consortium institutions made significant 15-to-20-year financial commitments to buy electricity, the developer was able to obtain the critical external financing needed to build the projects. Owned and operated by Apex Clean Energy, the projects will add new renewable electricity to the grid equivalent to powering 130,000 households annually, displacing over 950,000 metric tons of greenhouse gas emissions each year from highly carbon-intensive power plants in the region.
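A quick consistency check relates the figures quoted in this article (the 1.3 million megawatt-hours is the consortium-wide annual purchase mentioned earlier); the derived per-household and per-megawatt-hour values are illustrative, not numbers published by the consortium.

```python
# Relating the consortium figures quoted above (illustrative back-of-the-envelope check).
annual_purchase_mwh = 1_300_000   # estimated new solar and wind generation purchased per year
households = 130_000              # households the two projects are said to be equivalent to powering
emissions_avoided_t = 950_000     # metric tons of greenhouse gases displaced per year

mwh_per_household = annual_purchase_mwh / households
t_per_mwh = emissions_avoided_t / annual_purchase_mwh
print(f"Implied household use: {mwh_per_household:.0f} MWh/year")        # ~10 MWh, roughly typical in the U.S.
print(f"Implied displaced grid intensity: {t_per_mwh:.2f} t CO2e/MWh")
```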

Complementary decarbonization work underway 

In addition to investing in offsite renewable energy projects, many consortium members have developed strategies to reduce and eliminate their own direct emissions. At MIT, accomplishing this requires transformative change in how energy is generated, distributed, and used on campus. Efforts underway include the installation of solar panels on campus rooftops that will increase renewable energy generation four-fold by 2026; continuing to transition our heat distribution infrastructure from steam-based to hot water-based; utilizing design and construction that minimizes emissions and increases energy efficiency; employing AI-enabled sensors to optimize temperature set points and reduce energy use in buildings; and converting MIT’s vehicle fleet to all-electric vehicles while adding more electric car charging stations.

The Institute has also upgraded the Central Utilities Plant, which uses advanced co-generation technology to produce power that is up to 20 percent less carbon-intensive than that from the regional power grid. MIT is charting the course toward a next-generation district energy system, with a comprehensive planning initiative to revolutionize its campus energy infrastructure. The effort is exploring leading-edge technology, including industrial-scale heat pumps, geothermal exchange, micro-reactors, bio-based fuels, and green hydrogen derived from renewable sources as solutions to achieve full decarbonization of campus operations by 2050.

“At MIT, we are focused on decarbonizing our own campus as well as the role we can play in solving climate at the largest of scales, including supporting a cleaner grid in line with the call to triple renewables globally by 2030. By enabling these large-scale renewable projects, we can have an immediate and significant impact of reducing emissions through the urgently needed decarbonization of regional power grids,” says Julie Newman, MIT’s director of sustainability.  



from MIT News https://ift.tt/DQRMUfO

A vision for U.S. science success

White House science advisor Arati Prabhakar expressed confidence in U.S. science and technology capacities during a talk on Wednesday about major issues the country must tackle.

“Let me start with the purpose of science and technology and innovation, which is to open possibilities so that we can achieve our great aspirations,” said Prabhakar, who is the director of the Office of Science and Technology Policy (OSTP) and a co-chair of the President’s Council of Advisors on Science and Technology (PCAST). 

“The aspirations that we have as a country today are as great as they have ever been,” she added.

Much of Prabhakar’s talk focused on three major issues in science and technology development: cancer prevention, climate change, and AI. In the process, she also emphasized the necessity for the U.S. to sustain its global leadership in research across domains of science and technology, which she called “one of America’s long-time strengths.”

“Ever since the end of the Second World War, we said we’re going in on basic research, we’re going to build our universities’ capacity to do it, we have an unparalleled basic research capacity, and we should always have that,” said Prabhakar.

“We have gotten better, I think, in recent years at commercializing technology from our basic research,” Prabhakar added, noting, “Capital moves when you can see profit and growth.” The Biden administration, she said, has invested in a variety of new ways for the public and private sector to work together to massively accelerate the movement of technology into the market.

Wednesday’s talk drew a capacity audience of nearly 300 people in MIT’s Wong Auditorium and was hosted by the Manufacturing@MIT Working Group. The event included introductory remarks by Suzanne Berger, an Institute Professor and a longtime expert on the innovation economy, and Nergis Mavalvala, dean of the School of Science and an astrophysicist and leader in gravitational-wave detection.

Introducing Mavalvala, Berger said the 2015 announcement of the discovery of gravitational waves “was the day I felt proudest and most elated to be a member of the MIT community,” and noted that U.S. government support helped make the research possible. Mavalvala, in turn, said MIT was “especially honored” to hear Prabhakar discuss leading-edge research and acknowledge the role of universities in strengthening the country’s science and technology sectors.

Prabhakar has extensive experience in both government and the private sector. She has been OSTP director and co-chair of PCAST since October of 2022. She served as director of the Defense Advanced Research Projects Agency (DARPA) from 2012 to 2017 and director of the National Institute of Standards and Technology (NIST) from 1993 to 1997.

She has also held executive positions at Raychem and Interval Research, and spent a decade at the investment firm U.S. Venture Partners. An engineer by training, Prabhakar earned a BS in electrical engineering from Texas Tech University in 1979, an MA in electrical engineering from Caltech in 1980, and a PhD in applied physics from Caltech in 1984.

Among other remarks about medicine, Prabhakar touted the Biden administration’s “Cancer Moonshot” program, which aims to cut the cancer death rate in half over the next 25 years through multiple approaches, from better health care provision and cancer detection to limiting public exposure to carcinogens. We should be striving, Prabhakar said, for “a future in which people take good health for granted and can get on with their lives.”

On AI, she spoke to both the promise of the technology and the concerns about it, saying, “I think it’s time for active steps to get on a path to where it actually allows people to do more and earn more.”

When it comes to climate change, Prabhakar said, “We all understand that the climate is going to change. But it’s in our hands how severe those changes get. And it’s possible that we can build a better future.” She noted the bipartisan infrastructure bill signed into law in 2021 and the Biden administration’s Inflation Reduction Act as important steps forward in this fight.

“Together those are making the single biggest investment anyone anywhere on the planet has ever made in the clean energy transition,” she said. “I used to feel hopeless about our ability to do that, and it gives me tremendous hope.”

After her talk, Prabhakar was joined onstage for a group discussion with the three co-presidents of the MIT Energy and Climate Club: Laurentiu Anton, a doctoral candidate in electrical engineering and computer science; Rosie Keller, an MBA candidate at the MIT Sloan School of Management; and Thomas Lee, a doctoral candidate in MIT’s Institute for Data, Systems, and Society.

Asked about the seemingly sagging public confidence in science today, Prabhakar offered a few thoughts.

“The first thing I would say is, don’t take it personally,” Prabhakar said, noting that any dip in public regard for science is less severe than the diminished public confidence in other institutions.

Adding some levity, she observed that in polling about which occupations are regarded as being desirable for a marriage partner to have, “scientist” still ranks highly.

“Scientists still do really well on that front, we’ve got that going for us,” she quipped.

More seriously, Prabhakar observed, rather than “preaching” at the public, scientists should recognize that “part of the job for us is to continue to be clear about what we know are the facts, and to present them clearly but humbly, and to be clear that we’re going to continue working to learn more.” At the same time, she continued, scientists can always reinforce that “oh, by the way, facts are helpful things that can actually help you make better choices about how the future turns out. I think that would be better in my view.”

Prabhakar said that her White House work had been guided, in part, by one of the overarching themes that President Biden has often reinforced.

“He thinks about America as a nation that can be described in a single word, and that word is ‘possibilities,’” she said. “And that idea, that is such a big idea, it lights me up. I think of what we do in the world of science and technology and innovation as really part and parcel of creating those possibilities.”

Ultimately, Prabhakar said, at all times and all points in American history, scientists and technologists must continue “to prove once more that when people come together and do this work … we do it in a way that builds opportunity and expands opportunity for everyone in our country. I think this is the great privilege we all have in the work we do, and it’s also our responsibility.”



de MIT News https://ift.tt/2ysXIZz

jueves, 21 de noviembre de 2024

Catherine Wolfram: High-energy scholar

In the mid-2000s, Catherine Wolfram PhD ’96 reached what she calls “an inflection point” in her career. After about a decade of studying U.S. electricity markets, she had come to recognize that “you couldn’t study the energy industries without thinking about climate mitigation,” as she puts it.

At the same time, Wolfram understood that the trajectory of energy use in the developing world was a massively important part of the climate picture. To get a comprehensive grasp on global dynamics, she says, “I realized I needed to start thinking about the rest of the world.”

An accomplished scholar and policy expert, Wolfram has been on the faculty at Harvard University, the University of California at Berkeley — and now MIT, where she is the William Barton Rogers Professor in Energy. She has also served as deputy assistant secretary for climate and energy economics at the U.S. Treasury.

Yet even leading experts want to keep learning. So, when she hit that inflection point, Wolfram started carving out a new phase of her research career.

“One of the things I love about being an academic is, I could just decide to do that,” Wolfram says. “I didn’t need to check with a boss. I could just pivot my career to being more focused to thinking about energy in the developing world.”

Over the last decade, Wolfram has published a wide array of original studies about energy consumption in the developing world. From Kenya to Mexico to South Asia, she has shed light on the dynamics of economic growth and energy consumption — while spending some of that time serving in government as well. Last year, Wolfram joined the faculty of the MIT Sloan School of Management, where her work bolsters the Institute’s growing effort to combat climate change.

Studying at MIT

Wolfram largely grew up in Minnesota, where her father was a legal scholar, although he moved to Cornell University around the time she started high school. As an undergraduate, she majored in economics at Harvard University, and after graduation she worked first for a consultant, then for the Massachusetts Department of Public Utilities, the agency regulating energy rates. 

In the latter job, Wolfram kept noticing that people were often citing the research of MIT scholars Paul Joskow (now the Elizabeth and James Killian Professor of Economics Emeritus in MIT’s Department of Economics) and Richard Schmalensee (a former dean of the MIT Sloan School of Management and now the Howard W. Johnson Professor of Management Emeritus). Seeing how consequential economics research could be for policymaking, Wolfram decided to get a PhD in the field and was accepted into MIT’s doctoral program.

“I went into graduate school with an unusually specific view of what I wanted to do,” Wolfram says. “I wanted to work with Paul Joskow and Dick Schmalensee on electricity markets, and that’s how I wound up here.”

At MIT, Wolfram also ended up working extensively with Nancy Rose, the Charles P. Kindleberger Professor of Applied Economics and a former head of the Department of Economics, who helped oversee Wolfram’s thesis; Rose has extensively studied market regulation as well.

Wolfram’s dissertation research largely focused on price-setting behavior in the U.K.’s newly deregulated electricity markets, which, it turned out, applied handily to the U.S., where a similar process was taking place. “I was fortunate because this was around the time California was thinking about restructuring, as it was known,” Wolfram says. She spent four years on the faculty at Harvard, then moved to UC Berkeley. Wolfram’s studies have shown that deregulation has had some medium-term benefits, for instance in making power plants operate more efficiently.

Turning on the AC

By around 2010, though, Wolfram began shifting her scholarly focus in earnest, conducting innovative studies about energy in the developing world. One strand of her research has centered on Kenya, to better understand how more energy access for people without electricity might fit into growth in the developing world.

In this case, Wolfram’s perhaps surprising conclusion is that electrification itself is not a magic ticket to prosperity; people without electricity are more eager to adopt it when they have a practical economic need for it. Meanwhile, they have other essential needs that are not necessarily being addressed.

“The 800 million people in the world who don’t have electricity also don’t have access to good health care or running water,” Wolfram says. “Giving them better housing infrastructure is important, and harder to tackle. It’s not clear that bringing people electricity alone is the single most useful thing from a development perspective. Although electricity is a super-important component of modern living.”

Wolfram has even delved into topics such as air conditioner use in the developing world — an important driver of energy use. As her research shows, many countries with a combined population far larger than that of the U.S. are among the fastest-growing adopters of air conditioners and have an even greater need for them, based on their climates. Adoption of air conditioning within those countries is also characterized by marked economic inequality.

From early 2021 until late 2022, Wolfram also served in the administration of President Joe Biden, where her work centered on global energy issues. Among other things, Wolfram was part of the team working out a price-cap policy for Russian oil exports, a concept that she thinks could be applied to many other products globally. She notes, though, that working with countries heavily dependent on exporting energy materials will always require careful engagement.

“We need to be mindful of that dependence and importance as we go through this massive effort to decarbonize the energy sector and shift it to a whole new paradigm,” Wolfram says.

At MIT again

Still, she notes, the world does need a whole new energy paradigm, and fast. Her arrival at MIT overlaps with the emergence of a new Institute-wide effort, the Climate Project at MIT, that aims to accelerate and scale climate solutions and good climate policy, including through the new Climate Policy Center at MIT Sloan. That kind of effort, Wolfram says, matters to her.

“It’s part of why I’ve come to MIT,” Wolfram says. “Technology will be one part of the climate solution, but I do think an innovative mindset, how can we think about doing things better, can be productively applied to climate policy.” On being at MIT, she adds: “It’s great, it’s awesome. One of the things that pleasantly surprised me is how tight-knit and friendly the MIT faculty all are, and how many interactions I’ve had with people from other departments.”

Wolfram has also been enjoying her teaching at MIT, and will be offering a large class in spring 2025, 15.016 (Climate and Energy in the Global Economy), that she debuted this past academic year.

“It’s super fun to have students from around the world, who have personal stories and knowledge of energy systems in their countries and can contribute to our discussions,” she says.

When it comes to tackling climate change, many things seem daunting. But there is still a world of knowledge to be acquired while we try to keep the planet from overheating, and Wolfram has a can-do attitude about learning more and applying those lessons.

“We’ve made a lot of progress,” Wolfram says. “But we still have a lot more to do.”



de MIT News https://ift.tt/b0S95ZP

Advancing urban tree monitoring with AI-powered digital twins

The Irish philosopher George Berkeley, best known for his theory of immaterialism, once famously mused, “If a tree falls in a forest and no one is around to hear it, does it make a sound?”

What about AI-generated trees? They probably wouldn’t make a sound, but they will be critical nonetheless for applications such as adaptation of urban flora to climate change. To that end, the novel “Tree-D Fusion” system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Google, and Purdue University merges AI and tree-growth models with Google's Auto Arborist data to create accurate 3D models of existing urban trees. The project has produced the first-ever large-scale database of 600,000 environmentally aware, simulation-ready tree models across North America.

“We’re bridging decades of forestry science with modern AI capabilities,” says Sara Beery, MIT electrical engineering and computer science (EECS) assistant professor, MIT CSAIL principal investigator, and a co-author on a new paper about Tree-D Fusion. “This allows us to not just identify trees in cities, but to predict how they’ll grow and impact their surroundings over time. We’re not ignoring the past 30 years of work in understanding how to build these 3D synthetic models; instead, we’re using AI to make this existing knowledge more useful across a broader set of individual trees in cities around North America, and eventually the globe.”

Tree-D Fusion builds on previous urban forest monitoring efforts that used Google Street View data, but branches it forward by generating complete 3D models from single images. While earlier attempts at tree modeling were limited to specific neighborhoods, or struggled with accuracy at scale, Tree-D Fusion can create detailed models that include typically hidden features, such as the back side of trees that aren’t visible in street-view photos.

The technology’s practical applications extend far beyond mere observation. City planners could use Tree-D Fusion to one day peer into the future, anticipating where growing branches might tangle with power lines, or identifying neighborhoods where strategic tree placement could maximize cooling effects and air quality improvements. These predictive capabilities, the team says, could change urban forest management from reactive maintenance to proactive planning.

A tree grows in Brooklyn (and many other places)

The researchers took a hybrid approach to their method, using deep learning to create a 3D envelope of each tree’s shape, then using traditional procedural models to simulate realistic branch and leaf patterns based on the tree’s genus. This combo helped the model predict how trees would grow under different environmental conditions and climate scenarios, such as different possible local temperatures and varying access to groundwater.
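In rough outline, that hybrid pipeline can be sketched in code. The sketch below is illustrative only: the function names, parameters, and growth rule are placeholder assumptions of ours, not the actual Tree-D Fusion implementation. It simply shows how a learned shape envelope and a genus-conditioned procedural growth step might fit together.

```python
# Illustrative sketch (not the actual Tree-D Fusion code) of the two-stage idea:
# a learned model estimates a coarse 3D "envelope" of a tree from one street-level
# image, and a genus-specific procedural growth model fills that envelope with
# plausible future shape. All names and numbers here are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class TreeEnvelope:
    height_m: float        # estimated tree height
    crown_radius_m: float  # estimated crown radius
    genus: str             # predicted genus, e.g. "Acer"


def estimate_envelope(image_path: str) -> TreeEnvelope:
    """Stand-in for the deep-learning stage: infer a coarse shape envelope
    and genus from a single street-view image."""
    # A real system would run a trained network here; we return a fixed guess.
    return TreeEnvelope(height_m=8.0, crown_radius_m=2.5, genus="Acer")


def grow_procedural_model(env: TreeEnvelope, years: int, water_index: float) -> dict:
    """Stand-in for the procedural stage: simulate growth constrained by the
    envelope, under a toy environmental assumption (more water, faster growth)."""
    growth_rate_m_per_year = 0.3 * water_index
    future_height = env.height_m + growth_rate_m_per_year * years
    return {
        "genus": env.genus,
        "height_m": round(future_height, 1),
        "crown_radius_m": round(env.crown_radius_m * future_height / env.height_m, 1),
    }


if __name__ == "__main__":
    envelope = estimate_envelope("street_view_tree.jpg")
    projection = grow_procedural_model(envelope, years=10, water_index=0.8)
    print(projection)  # a simulation-ready summary of the projected tree shape
```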

Now, as cities worldwide grapple with rising temperatures, this research offers a new window into the future of urban forests. In a collaboration with MIT’s Senseable City Lab, the Purdue University and Google team is embarking on a global study that re-imagines trees as living climate shields. Their digital modeling system captures the intricate dance of shade patterns throughout the seasons, revealing how strategic urban forestry could hopefully change sweltering city blocks into more naturally cooled neighborhoods.

“Every time a street mapping vehicle passes through a city now, we’re not just taking snapshots — we’re watching these urban forests evolve in real-time,” says Beery. “This continuous monitoring creates a living digital forest that mirrors its physical counterpart, offering cities a powerful lens to observe how environmental stresses shape tree health and growth patterns across their urban landscape.”

AI-based tree modeling has emerged as an ally in the quest for environmental justice: By mapping urban tree canopy in unprecedented detail, a sister project from the Google AI for Nature team has helped uncover disparities in green space access across different socioeconomic areas. “We’re not just studying urban forests — we’re trying to cultivate more equity,” says Beery. The team is now working closely with ecologists and tree health experts to refine these models, ensuring that as cities expand their green canopies, the benefits branch out to all residents equally.

It’s a breeze

While Tree-D Fusion marks some major “growth” in the field, trees can be uniquely challenging for computer vision systems. Unlike the rigid structures of buildings or vehicles that current 3D modeling techniques handle well, trees are nature’s shape-shifters — swaying in the wind, interweaving branches with neighbors, and constantly changing their form as they grow. The Tree-D Fusion models are “simulation-ready” in that they can estimate the shape of the trees in the future, depending on the environmental conditions.

“What makes this work exciting is how it pushes us to rethink fundamental assumptions in computer vision,” says Beery. “While 3D scene understanding techniques like photogrammetry or NeRF [neural radiance fields] excel at capturing static objects, trees demand new approaches that can account for their dynamic nature, where even a gentle breeze can dramatically alter their structure from moment to moment.”

The team’s approach of creating rough structural envelopes that approximate each tree’s form has proven remarkably effective, but certain issues remain unsolved. Perhaps the most vexing is the “entangled tree problem”: When neighboring trees grow into each other, their intertwined branches create a puzzle that no current AI system can fully unravel.

The scientists see their dataset as a springboard for future innovations in computer vision, and they’re already exploring applications beyond street view imagery, looking to extend their approach to platforms like iNaturalist and wildlife camera traps.

“This marks just the beginning for Tree-D Fusion,” says Jae Joong Lee, a Purdue University PhD student who developed, implemented and deployed the Tree-D-Fusion algorithm. “Together with my collaborators, I envision expanding the platform’s capabilities to a planetary scale. Our goal is to use AI-driven insights in service of natural ecosystems — supporting biodiversity, promoting global sustainability, and ultimately, benefiting the health of our entire planet.”

Beery and Lee’s co-authors are Jonathan Huang, Scaled Foundations head of AI (formerly of Google); and four others from Purdue University: PhD student Bosheng Li, Professor and Dean's Chair of Remote Sensing Songlin Fei, Assistant Professor Raymond Yeh, and Professor and Associate Head of Computer Science Bedrich Benes. Their work is based on efforts supported by the United States Department of Agriculture’s (USDA) Natural Resources Conservation Service and is directly supported by the USDA’s National Institute of Food and Agriculture. The researchers presented their findings at the European Conference on Computer Vision this month.



de MIT News https://ift.tt/kJPajC0

Your child, the sophisticated language learner

As young children, how do we build our vocabulary? Even by age 1, many infants seem to think that if they hear a new word, it means something different from the words they already know. But why they think so has remained a subject of inquiry among scholars for the last 40 years.

A new study carried out at the MIT Language Acquisition Lab offers a novel insight into the matter: Sentences contain subtle hints in their grammar that tell young children about the meaning of new words. The finding, based on experiments with 2-year-olds, suggests that even very young kids are capable of absorbing grammatical cues from language and leveraging that information to acquire new words.

“Even at a surprisingly young age, kids have sophisticated knowledge of the grammar of sentences and can use that to learn the meanings of new words,” says Athulya Aravind, an associate professor of linguistics at MIT.

The new insight stands in contrast to a prior explanation for how children build vocabulary: that they rely on the concept of “mutual exclusivity,” meaning they treat each new word as corresponding to a new object or category. Instead, the new research shows how extensively children respond directly to grammatical information when interpreting words.

“For us it’s very exciting because it’s a very simple idea that explains so much about how children understand language,” says Gabor Brody, a postdoc at Brown University, who is the first author of the paper.

The paper is titled, “Why Do Children Think Words Are Mutually Exclusive?” It is published in advance online form in Psychological Science. The authors are Brody; Roman Feiman, the Thomas J. and Alice M. Tisch Assistant Professor of Cognitive and Psychological Sciences and Linguistics at Brown; and Aravind, the Alfred Henry and Jean Morrison Hayes Career Development Associate Professor in MIT’s Department of Linguistics and Philosophy.

Focusing on focus

Many scholars have thought that young children, when learning new words, have an innate bias toward mutual exclusivity, which could explain how children learn some of their new words. However, the concept of mutual exclusivity has never been airtight: Words like “bat” refer to multiple kinds of objects, while any object can be described using countless words. For instance, a rabbit can be called not only a “rabbit” or a “bunny,” but also an “animal,” a “beauty,” and in some contexts even a “delicacy.” Despite this lack of perfect one-to-one mapping between words and objects, mutual exclusivity has still been posited as a strong tendency in children’s word learning.

What Aravind, Brody, and Feiman propose is that children have no such tendency, and instead rely on so-called “focus” signals to decide what a new word means. Linguists use the term “focus” to refer to the way we emphasize or stress certain words to signal some kind of contrast. Depending on which word is focused, the same sentence can have different implications. “Carlos gave Lewis a FERRARI” (with stress on the car) implies contrast with other possible cars — he could have given Lewis a Mercedes. But “Carlos gave LEWIS a Ferrari” (with stress on the recipient) implies contrast with other people — he could have given Alexandra a Ferrari.

The researchers manipulated focus across three experiments with a total of 106 children. The participants watched videos of a cartoon fox who asked them to point to different objects.

The first experiment established how focus influences kids’ choice between two objects when they hear a label, like “toy,” that could, in principle, correspond to either of the two. After giving a name to one of the two objects (“Look, I am pointing to the blicket”), the fox told the child, “Now you point to the toy!” Children were divided into two groups. One group heard “toy” without emphasis, while the other heard it with emphasis.

In the first version, “blicket” and “toy” plausibly refer to the same object. But in the second version, the added focus, through intonation, implies that “toy” contrasts with the previously discussed “blicket.” Without focus, only 24 percent of the respondents thought the words were mutually exclusive, whereas with the focus created by emphasizing “toy,” 89 percent of participants thought “blicket” and “toy” referred to different objects.

The second and third experiments showed that focus is not just key when it comes to words like “toy,” but it also affects the interpretation of new words children have never encountered before, like “wug” or “dax.” If a new word was said without focus, children thought the word meant the previously named object 71 percent of the time. But when hearing the new word spoken with focus, they thought it must refer to a new object 87 percent of the time.

“Even though they know nothing about this new word, when it was focused, that still told them something: Focus communicated to children the presence of a contrasting alternative, and they correspondingly understood the noun to refer to an object that had not previously been labeled,” Aravind explains.

She adds: “The particular claim we’re making is that there is no inherent bias in children toward mutual exclusivity. The only reason we make the corresponding inference is because focus tells you that the word means something different from another word. When focus goes away, children don’t draw those exclusivity inferences any more.”

The researchers believe the full set of experiments sheds new light on the issue.

“Earlier explanations of mutual exclusivity introduced a whole new problem,” Feiman says. “If kids assume words are mutually exclusive, how do they learn words that are not? After all, you can call the same animal either a rabbit or a bunny, and kids have to learn both of those at some point. Our finding explains why this isn't actually a problem. Kids won’t think the new word is mutually exclusive with the old word by default, unless adults tell them that it is — all adults have to do if the new word is not mutually exclusive is just say it without focusing it, and they’ll naturally do that if they're thinking about it as compatible.”

Learning language from language

The experiment, the researchers note, is the result of interdisciplinary research bridging psychology and linguistics — in this case, mobilizing the linguistics concept of focus to address an issue of interest in both fields.

“We are hopeful this will be a paper that shows that small, simple theories have a place in psychology,” Brody says. “It is a very small theory, not a huge model of the mind, but it completely flips the switch on some phenomena we thought we understood.”

If the new hypothesis is correct, the researchers may have developed a more robust explanation about how children correctly apply new words.

“An influential idea in language development is that children can use their existing knowledge of language to learn more language,” Aravind says. “We’re in a sense building on that idea, and saying that even in the simplest cases, aspects of language that children already know, in this case an understanding of focus, help them grasp the meanings of unknown words.”

The scholars acknowledge that more studies could further advance our knowledge about the issue. Future research, they note in the paper, could reexamine prior studies about mutual exclusivity, record and study naturalistic interactions between parents and children to see how focus is used, and examine the issue in other languages, especially those marking focus in alternate ways, such as word order.

The research was supported, in part, by a Jacobs Foundation Fellowship awarded to Feiman.



de MIT News https://ift.tt/t2fHGzy

miércoles, 20 de noviembre de 2024

3 Questions: Claire Wang on training the brain for memory sports

On Nov. 10, some of the country’s top memorizers converged on MIT’s Kresge Auditorium to compete in a “Tournament of Memory Champions” in front of a live audience.

The competition was split into four events: long-term memory, words-to-remember, auditory memory, and double-deck of cards, in which competitors must memorize the exact order of two decks of cards. In between the events, MIT faculty who are experts in the science of memory provided short talks and demos about memory and how to improve it. Among the competitors was MIT’s own Claire Wang, a sophomore majoring in electrical engineering and computer science. Wang has competed in memory sports for years, a hobby that has taken her around the world to learn from some of the best memorizers on the planet. At the tournament, she tied for first place in the words-to-remember competition.

The event commemorated the 25th anniversary of the USA Memory Championship Organization (USAMC). USAMC sponsored the event in partnership with MIT’s McGovern Institute for Brain Research, the Department of Brain and Cognitive Sciences, the MIT Quest for Intelligence, and the company Lumosity.

MIT News sat down with Wang to learn more about her experience with memory competitions — and see if she had any advice for those of us with less-than-amazing memory skills.

Q: How did you come to get involved in memory competitions?

A: When I was in middle school, I read the book “Moonwalking with Einstein,” which is about a journalist’s journey from average memory to being named memory champion in 2006. My parents were also obsessed with this TV show where people were memorizing decks of cards and performing other feats of memory. I had already known about the concept of “memory palaces,” so I was inspired to explore memory sports. Somehow, I convinced my parents to let me take a gap year after seventh grade, and I travelled the world going to competitions and learning from memory grandmasters. I got to know the community in that time and I got to build my memory system, which was really fun. I did a lot less of those competitions after that year and some subsequent competitions with the USA memory competition, but it’s still fun to have this ability.

Q: What was the Tournament of Memory Champions like?

A: USAMC invited a lot of winners from previous years to compete, which was really cool. It was nice seeing a lot of people I haven’t seen in years. I didn’t compete in every event because I was too busy to do the long-term memory, which takes you two weeks of memorization work. But it was a really cool experience. I helped a bit with the brainstorming beforehand because I know one of the professors running it. We thought about how to give the talks and structure the event.

Then I competed in the words event, which is when they give you 300 words over 15 minutes, and the competitors have to recall each one in order in a round robin competition. You got two strikes. A lot of other competitions just make you write the words down. The round robin makes it more fun for people to watch. I tied with someone else — I made a dumb mistake — so I was kind of sad in hindsight, but being tied for first is still great.

Since I hadn't done this in a while (and I was coming back from a trip where I didn’t get much sleep), I was a bit nervous that my brain wouldn’t be able to remember anything, and I was pleasantly surprised I didn’t just blank on stage. Also, since I hadn’t done this in a while, a lot of my loci and memory palaces were forgotten, so I had to speed-review them before the competition. The words event doesn’t get easier over time — it’s just 300 random words (which could range from “disappointment” to “chair”) and you just have to remember the order.

Q: What is your approach to improving memory?

A: The whole idea is that we memorize images, feelings, and emotions much better than numbers or random words. The way it works in practice is we make an ordered set of locations in a “memory palace.” The palace could be anything. It could be a campus or a classroom or a part of a room, but you imagine yourself walking through this space, so there’s a specific order to it, and in every location I place certain information. This is information related to what I’m trying to remember. I have pictures I associate with words and I have specific images I correlate with numbers. Once you have a correlated image system, all you need to remember is a story, and then when you recall, you translate that back to the original information.

Doing memory sports really helps you with visualization, and being able to visualize things faster and better helps you remember things better. You start remembering with spaced repetition that you can talk yourself through. Allowing things to have an emotional connection is also important, because you remember emotions better. Doing memory competitions made me want to study neuroscience and computer science at MIT.

The specific memory sports techniques are not as useful in everyday life as you’d think, because a lot of the information we learn is more operative and requires intuitive understanding, but I do think they help in some ways. First, sometimes you have to initially remember things before you can develop a strong intuition later. Also, since I have to get really good at telling a lot of stories over time, I have gotten great at visualization and manipulating objects in my mind, which helps a lot. 
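As a loose illustration of the technique Wang describes, here is a minimal sketch of a memory palace as a data structure: an ordered list of familiar locations, each holding a vivid image tied to the item to be recalled. The locations, items, and image associations are invented examples, not Wang's own system.

```python
# A minimal sketch of the "memory palace" idea: an ordered list of locations,
# each holding an image tied to the item to be remembered. The locations and
# image associations below are invented examples.

palace = ["front door", "hallway mirror", "kitchen table", "stairs", "desk"]

# Items to remember, in order, and the vivid images a memorizer might attach.
items = ["disappointment", "chair", "rocket", "violin", "pineapple"]
images = {
    "disappointment": "a deflated balloon sagging to the floor",
    "chair": "a golden throne far too big for the room",
    "rocket": "a tiny rocket launching off the table",
    "violin": "a violin playing itself",
    "pineapple": "a pineapple wearing sunglasses",
}

# "Place" each image at a location, in order.
placed = list(zip(palace, (images[item] for item in items)))

# Recall: walk the palace in order and translate each image back to its word.
image_to_item = {image: item for item, image in images.items()}
for location, image in placed:
    print(f"At the {location}: {image} -> {image_to_item[image]}")
```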



de MIT News https://ift.tt/tBOADdj

Reality check on technologies to remove carbon dioxide from the air

In 2015, 195 nations plus the European Union signed the Paris Agreement and pledged to undertake plans designed to limit the global temperature increase to 1.5 degrees Celsius. Yet in 2023, the world exceeded that target for most, if not all, of the year — calling into question the long-term feasibility of achieving that target.

To do so, the world must reduce the levels of greenhouse gases in the atmosphere, and strategies for achieving levels that will “stabilize the climate” have been both proposed and adopted. Many of those strategies combine dramatic cuts in carbon dioxide (CO2) emissions with the use of direct air capture (DAC), a technology that removes CO2 from the ambient air. As a reality check, a team of researchers in the MIT Energy Initiative (MITEI) examined those strategies, and what they found was alarming: The strategies rely on overly optimistic — indeed, unrealistic — assumptions about how much CO2 could be removed by DAC. As a result, the strategies won’t perform as predicted. Nevertheless, the MITEI team recommends that work to develop the DAC technology continue so that it’s ready to help with the energy transition — even if it’s not the silver bullet that solves the world’s decarbonization challenge.

DAC: The promise and the reality

Including DAC in plans to stabilize the climate makes sense. Much work is now under way to develop DAC systems, and the technology looks promising. While companies may never run their own DAC systems, they can already buy “carbon credits” based on DAC. Today, a multibillion-dollar market exists on which entities or individuals that face high costs or excessive disruptions to reduce their own carbon emissions can pay others to take emissions-reducing actions on their behalf. Those actions can involve undertaking new renewable energy projects or “carbon-removal” initiatives such as DAC or afforestation/reforestation (planting trees in areas that have never been forested or that were forested in the past). 

DAC-based credits are especially appealing for several reasons, explains Howard Herzog, a senior research engineer at MITEI. With DAC, measuring and verifying the amount of carbon removed is straightforward; the removal is immediate, unlike with planting forests, which may take decades to have an impact; and when DAC is coupled with CO2 storage in geologic formations, the CO2 is kept out of the atmosphere essentially permanently — in contrast to, for example, sequestering it in trees, which may one day burn and release the stored CO2.

Will current plans that rely on DAC be effective in stabilizing the climate in the coming years? To find out, Herzog and his colleagues Jennifer Morris and Angelo Gurgel, both MITEI principal research scientists, and Sergey Paltsev, a MITEI senior research scientist — all affiliated with the MIT Center for Sustainability Science and Strategy (CS3) — took a close look at the modeling studies on which those plans are based.

Their investigation identified three unavoidable engineering challenges that together lead to a fourth challenge — high costs for removing a single ton of CO2 from the atmosphere. The details of their findings are reported in a paper published in the journal One Earth on Sept. 20.

Challenge 1: Scaling up

When it comes to removing CO2 from the air, nature presents “a major, non-negotiable challenge,” notes the MITEI team: The concentration of CO2 in the air is extremely low — just 420 parts per million, or roughly 0.04 percent. In contrast, the CO2 concentration in flue gases emitted by power plants and industrial processes ranges from 3 percent to 20 percent. Companies now use various carbon capture and sequestration (CCS) technologies to capture CO2 from their flue gases, but capturing CO2 from the air is much more difficult. To explain, the researchers offer the following analogy: “The difference is akin to needing to find 10 red marbles in a jar of 25,000 marbles of which 24,990 are blue [the task representing DAC] versus needing to find about 10 red marbles in a jar of 100 marbles of which 90 are blue [the task for CCS].”

Given that low concentration, removing a single metric ton (tonne) of CO2 from air requires processing about 1.8 million cubic meters of air, which is roughly equivalent to the volume of 720 Olympic-sized swimming pools. And all that air must be moved across a CO2-capturing sorbent — a feat requiring large equipment. For example, one recently proposed design for capturing 1 million tonnes of CO2 per year would require an “air contactor” equivalent in size to a structure about three stories high and three miles long.
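A rough back-of-envelope check reproduces those figures. The calculation below assumes 420 parts per million of CO2, a molar volume of about 24.5 liters per mole for air, and a capture fraction of roughly 75 percent; the capture fraction is our assumption rather than a number from the article.

```python
# Back-of-envelope check of the air volume needed to capture 1 tonne of CO2,
# under stated assumptions (420 ppm CO2, ~24.5 L/mol molar volume of air near
# room conditions, and ~75% of the CO2 in the processed air actually captured).

CO2_FRACTION = 420e-6        # atmospheric CO2 fraction by mole (420 ppm)
MOLAR_MASS_CO2 = 44.01       # g/mol
MOLAR_VOLUME_AIR = 0.02445   # m^3/mol at roughly 25 C and 1 atm
CAPTURE_FRACTION = 0.75      # assumed fraction of CO2 removed from processed air

moles_co2 = 1e6 / MOLAR_MASS_CO2                            # moles in 1 tonne of CO2
moles_air = moles_co2 / (CO2_FRACTION * CAPTURE_FRACTION)   # air that must be processed
volume_air_m3 = moles_air * MOLAR_VOLUME_AIR

print(f"Air processed per tonne of CO2: {volume_air_m3 / 1e6:.1f} million m^3")
print(f"Olympic pools (2,500 m^3 each): {volume_air_m3 / 2500:.0f}")
# -> roughly 1.8 million m^3, close to the ~720 pools cited above
```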

Recent modeling studies project DAC deployment on the scale of 5 to 40 gigatonnes of CO2 removed per year. (A gigatonne equals 1 billion metric tonnes.) But in their paper, the researchers conclude that the likelihood of deploying DAC at the gigatonne scale is “highly uncertain.”

Challenge 2: Energy requirement

Given the low concentration of CO2 in the air and the need to move large quantities of air to capture it, it’s no surprise that even the best DAC processes proposed today would consume large amounts of energy — energy that’s generally supplied by a combination of electricity and heat. Including the energy needed to compress the captured CO2 for transportation and storage, most proposed processes require an equivalent of at least 1.2 megawatt-hours of electricity for each tonne of CO2 removed.

The source of that electricity is critical. For example, using coal-based electricity to drive an all-electric DAC process would generate 1.2 tonnes of CO2 for each tonne of CO2 captured. The result would be a net increase in emissions, defeating the whole purpose of the DAC. So clearly, the energy requirement must be satisfied using either low-carbon electricity or electricity generated using fossil fuels with CCS. All-electric DAC deployed at large scale — say, 10 gigatonnes of CO2 removed annually — would require 12,000 terawatt-hours of electricity, which is more than 40 percent of total global electricity generation today.
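The arithmetic behind those figures is straightforward. In the sketch below, the 1.2 megawatt-hours per tonne comes from the article; the coal emissions intensity of roughly 1 tonne of CO2 per megawatt-hour and the figure of about 29,000 terawatt-hours for current global electricity generation are our assumptions.

```python
# Arithmetic behind the energy figures above, using the article's 1.2 MWh per
# tonne of CO2 removed. The coal emissions intensity and the global generation
# figure are assumptions on our part.

ENERGY_PER_TONNE_MWH = 1.2       # electricity per tonne of CO2 removed (from article)
COAL_INTENSITY_T_PER_MWH = 1.0   # assumed tonnes of CO2 emitted per MWh of coal power
GLOBAL_GENERATION_TWH = 29_000   # assumed current global electricity generation

# Coal-powered DAC: emissions caused per tonne of CO2 captured.
emitted_per_tonne = ENERGY_PER_TONNE_MWH * COAL_INTENSITY_T_PER_MWH
print(f"CO2 emitted per tonne captured (coal power): {emitted_per_tonne:.1f} tonnes")

# Electricity needed to remove 10 gigatonnes of CO2 per year.
tonnes_removed = 10e9
electricity_twh = tonnes_removed * ENERGY_PER_TONNE_MWH / 1e6   # MWh -> TWh
share = 100 * electricity_twh / GLOBAL_GENERATION_TWH
print(f"Electricity for 10 Gt/yr: {electricity_twh:,.0f} TWh ({share:.0f}% of global generation)")
```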

Electricity consumption is expected to grow due to increasing overall electrification of the world economy, so low-carbon electricity will be in high demand for many competing uses — for example, in power generation, transportation, industry, and building operations. Using clean electricity for DAC instead of for reducing CO2 emissions in other critical areas raises concerns about the best uses of clean electricity.

Many studies assume that a DAC unit could also get energy from “waste heat” generated by some industrial process or facility nearby. In the MITEI researchers’ opinion, “that may be more wishful thinking than reality.” The heat source would need to be within a few miles of the DAC plant for transporting the heat to be economical; given its high capital cost, the DAC plant would need to run nonstop, requiring constant heat delivery; and heat at the temperature required by the DAC plant would have competing uses, for example, for heating buildings. Finally, if DAC is deployed at the gigatonne per year scale, waste heat will likely be able to provide only a small fraction of the needed energy.

Challenge 3: Siting

Some analysts have asserted that, because air is everywhere, DAC units can be located anywhere. But in reality, siting a DAC plant involves many complex issues. As noted above, DAC plants require significant amounts of energy, so having access to enough low-carbon energy is critical. Likewise, having nearby options for storing the removed CO2 is also critical. If storage sites or pipelines to such sites don’t exist, major new infrastructure will need to be built, and building new infrastructure of any kind is expensive and complicated, involving issues related to permitting, environmental justice, and public acceptability — issues that are, in the words of the researchers, “commonly underestimated in the real world and neglected in models.”

Two more siting needs must be considered. First, meteorological conditions must be acceptable. By definition, any DAC unit will be exposed to the elements, and factors like temperature and humidity will affect process performance and process availability. And second, a DAC plant will require some dedicated land — though how much is unclear, as the optimal spacing of units is as yet unresolved. Like wind turbines, DAC units need to be properly spaced to ensure maximum performance such that one unit is not sucking in CO2-depleted air from another unit.

Challenge 4: Cost

Considering the first three challenges, the final challenge is clear: The cost per tonne of CO2 removed is inevitably high. Recent modeling studies assume DAC costs as low as $100 to $200 per tonne of CO2 removed. But the researchers found evidence suggesting far higher costs.

To start, they cite typical costs for power plants and industrial sites that now use CCS to remove CO2 from their flue gases. The cost of CCS in such applications is estimated to be in the range of $50 to $150 per tonne of CO2 removed. As explained above, the far lower concentration of CO2 in the air will lead to substantially higher costs.

As explained under Challenge 1, the DAC units needed to capture the required amount of air are massive. The capital cost of building them will be high, given labor, materials, permitting costs, and so on. Some estimates in the literature exceed $5,000 per tonne captured per year.

Then there are the ongoing costs of energy. As noted under Challenge 2, removing 1 tonne of CO2 requires the equivalent of 1.2 megawatt-hours of electricity. If that electricity costs $0.10 per kilowatt-hour, the cost of just the electricity needed to remove 1 tonne of CO2 is $120. The researchers point out that assuming such a low price is “questionable,” given the expected increase in electricity demand, future competition for clean energy, and higher costs on a system dominated by renewable — but intermittent — energy sources.
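For completeness, the electricity-cost figure works out as follows, using the article's 1.2 megawatt-hours per tonne and its assumed price of $0.10 per kilowatt-hour.

```python
# The electricity-cost arithmetic quoted above: 1.2 MWh per tonne of CO2
# removed, at an assumed price of $0.10 per kWh.

energy_kwh_per_tonne = 1.2 * 1000     # 1.2 MWh expressed in kWh
price_per_kwh = 0.10                  # dollars, the article's assumed price
electricity_cost = energy_kwh_per_tonne * price_per_kwh
print(f"Electricity cost per tonne of CO2 removed: ${electricity_cost:.0f}")  # $120
```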

Then there’s the cost of storage, which is ignored in many DAC cost estimates.

Clearly, many considerations show that prices of $100 to $200 per tonne are unrealistic, and assuming such low prices will distort assessments of strategies, leading them to underperform going forward.

The bottom line

In their paper, the MITEI team calls DAC a “very seductive concept.” Using DAC to suck CO2 out of the air and generate high-quality carbon-removal credits can offset reduction requirements for industries that have hard-to-abate emissions. By doing so, DAC would minimize disruptions to key parts of the world’s economy, including air travel, certain carbon-intensive industries, and agriculture. However, the world would need to generate billions of tonnes of CO2 credits at an affordable price. That prospect doesn’t look likely. The largest DAC plant in operation today removes just 4,000 tonnes of CO2 per year, and the price to buy the company’s carbon-removal credits on the market today is $1,500 per tonne.

The researchers recognize that there is room for energy efficiency improvements in the future, but DAC units will always be subject to higher work requirements than CCS applied to power plant or industrial flue gases, and there is not a clear pathway to reducing work requirements much below the levels of current DAC technologies.

Nevertheless, the researchers recommend that work to develop DAC continue “because it may be needed for meeting net-zero emissions goals, especially given the current pace of emissions.” But their paper concludes with this warning: “Given the high stakes of climate change, it is foolhardy to rely on DAC to be the hero that comes to our rescue.”



de MIT News https://ift.tt/IHYoB9m