Tuesday, April 30, 2024

Studies in empathy and analytics

Upon the advice of one of his soccer teammates, James Simon enrolled in 14.73 (The Challenge of World Poverty) as a first-year student to fulfill a humanities requirement. He went from knowing nothing about economics to learning about the subject from Nobel laureates.

The lessons created by professors Esther Duflo and Abhijit Banerjee revealed to Simon an entirely new way to use science to help humanity. One of the projects Simon learned about in this class assessed an area of India with a low vaccination rate and created a randomized, controlled trial to figure out the best way to fix this problem.

“What was really cool about the class was that it talked about huge problems in the world, like poverty, hunger, and lack of vaccinations, and it talked about how you could break them down using experiments and quantify the best way to solve them,” he says.

Galvanized by this experience, Simon joined a research project in the economics department and committed to a blended major in computer science, economics, and data science. He began working with Senior Lecturer Sara Ellison in 2021 and has since contributed to multiple research papers published by the group, many concerning development economics. One of his most memorable projects explored whether internet access helps bridge the gap between poor and wealthy countries. Simon collected data, conducted interviews, and performed statistical analysis to develop answers to the group’s questions. The resulting paper was published in Competition Policy International in 2021.

Further bridging his economics studies with real-world efforts, Simon has become involved with the Guatemalan charity Project Somos, which is dedicated to challenging poverty through access to food and education. Through MIT’s Global Research and Consulting Group, he led a team of seven students to analyze the program’s data, measure its impact in the community, and provide the organization with easy-to-use data analytics tools. He has continued working with Project Somos through his undergraduate years and has joined its board of directors.

Simon hopes to quantify the most effective solutions for the people and groups he works with. “The charity I work for says ‘Use your head and your heart.’ If you can approach the problems in the world with empathy and analytics, I think that is a really important way to help a lot of people,” he says.

Simon’s desire to positively impact his community is threaded through other areas of his life at MIT. He is a member of the varsity soccer team and the Phi Beta Epsilon fraternity, and has volunteered for the MIT Little Beavers Special Needs Running Club.

On the field, court, and trail

Athletics are a major part of Simon’s life, year-round. Soccer has long been his main sport; he joined the varsity soccer team as a first-year and has played ever since. In his second year with the team, Simon was recognized as an Academic All-American. He also earned the honor of NEWMAC First Team All-Conference in 2021.

Despite the long hours of practice, Simon says he is most relaxed when it’s game season. “It’s a nice, competitive outlet to have every day. You’re working with people that you like spending time with, to win games and have fun and practice to get better. Everything going on kind of fades away, and you’re just focused on playing your sport,” he explains.

Simon has also used his time at MIT to try new sports. In winter 2023, he joined the wrestling club. “I thought, ‘I’ve never done anything like this before. But maybe I’ll try it out,’” he says. “And so I tried it out knowing nothing. They were super welcoming and there were people with all experience levels, and I just really fell in love with it.” Simon also joined the MIT basketball team as a walk-on his senior year.

When not competing, Simon enjoys hiking. One of his favorite memories from the past four years is a trip to Yosemite National Park with friends while he was interning in San Francisco; there, he hiked upwards of 20 miles each day. Simon also embarks on hiking trips with friends closer to campus, in New Hampshire and at Acadia National Park.

Social impact

Simon believes his philanthropic work has been pivotal to his experience at MIT. Through the MIT Global Research and Consulting Group, where he served as a case leader, he has connected with charity groups around the world, including in Guatemala and South Africa.

On campus, Simon has worked to build social connections within both his school and city-wide community. During his sophomore year, he spent his Sundays with the Little Beavers Running Team, a program that pairs children from the Boston area who are on the autism spectrum with an MIT student to practice running and other sports activities. “Throughout the course of a semester when you’re working with a kid, you’re able to see their confidence and social skills improve. That’s really rewarding to me,” Simon says.

Simon is also a member of the Phi Beta Epsilon fraternity. He joined the group in his first year at MIT and has lived with the other members of the fraternity since his sophomore year. He appreciates the group’s strong focus on supporting the social and professional skills of its members. Simon served as the chapter’s president for one semester and describes his experience as “very impactful.”

“There’s something really cool about having 40 of your friends all live in a house together,” he says. “A lot of my good memories from college are of sitting around in our common rooms late at night and just talking about random stuff.”

Technical projects and helping others

Next fall, Simon will continue his studies at MIT, pursuing a master’s degree in economics. Following this, he plans to move to New York to work in finance. In the summer of 2023 he interned at BlackRock, a large finance company, where he worked on a team that invested on behalf of people looking to grow their retirement funds. Simon says, “I thought it was cool that I was able to apply things I learned in school to have an impact on a ton of different people around the country by helping them prepare for retirement.”

Simon has done similar work in past internships. In the summer after his first year at MIT, he worked for Surge Employment Solutions, a startup that connected formerly incarcerated people to jobs. His responsibility was to quantify the startup’s social impact, which was shown to lower unemployment among formerly incarcerated individuals and to help high-turnover businesses save money by retaining employees.

On his community work, Simon says, “There’s always a lot more similarities between people than differences. So, I think getting to know people and being able to use what I learned to help people make their lives even a little bit better is cool. You think maybe as a college student, you wouldn’t be able to do a lot to make an impact around the world. But I think even with just the computer science and economics skills that I’ve learned in college, it’s always kind of surprising to me how much of an impact you can make on people if you just put in the effort to seek out opportunities.”



from MIT News https://ift.tt/YUFyDT0

Alison Badgett named director of the Priscilla King Gray Public Service Center

Vice Chancellor for Undergraduate and Graduate Education Ian A. Waitz announced recently that Alison Badgett has been appointed the new associate dean and director of the Priscilla King Gray (PKG) Public Service Center. She succeeds Jill Bassett, who left that role to become chief of staff to Chancellor Melissa Nobles.

“Alison is a thought leader on how to integrate community-engaged learning with systematic change, making her ideally suited to actualize MIT’s mission of educating transformative leaders,” Waitz says. “I have no doubt she will make the PKG Center a model for all of higher ed, given her wealth of experience, finely honed skills, and commitment to social change.”

“I’m excited to help the PKG Center, and broader MIT community, develop a collective vision for public service education that builds on the PKG Center’s strength in social innovation programming, and leverages the Institute’s unique culture of innovation,” Badgett says. “MIT’s institutional commitment to tackling complex societal and environmental challenges, taking responsibility for outcomes and not just inputs, is exceedingly rare. I’m also especially excited to engage STEM majors, who may be less likely to enter the nonprofit or public sector, but who can have a tremendous impact on social and environmental outcomes within the systems they work.”

Badgett has over 20 years of experience leading public policy and nonprofit organizations, particularly those addressing challenging issues like affordable housing and homelessness, criminal justice, and public education. She is the founding principal of a consulting firm, From Charity to Change, which works with nonprofit leaders, educators, and philanthropists to apply systems-change strategies that target the root causes of complex social problems.

Prior to her consulting role, Badgett was executive director of the Petey Greene Program, which recruits and trains 1,000 volunteers annually from 30 universities to tutor justice-impacted students in 50 prisons and reentry programs. In addition, the program educates volunteers on the injustice of our prison system and encourages both volunteers and students to advocate for reforms.

She also served as executive director of Raise Your Hand Texas, an organization that aims to improve education by piloting innovative learning practices. During her tenure, the organization launched a five-year, $10 million initiative to showcase and scale blended learning, and a 10-year, $50 million initiative to improve teacher preparation and the status of teaching.

Before leading Raise Your Hand Texas, Badgett was executive director of several organizations related to housing and homelessness in New York and New Jersey. During that time, she developed a $3.6 million demonstration program to permanently house the chronically homeless, which served as a model for state and national replication. She also served as senior policy advisor to the governor of New Jersey, providing counsel on land use, redevelopment, and housing. 

Badgett holds a global executive EdD from the University of Southern California, an MA in philosophy and education from Columbia University Teachers College, and a BA in politics from Princeton University.

Her appointment at the PKG Center is especially timely. Student demand for social impact experiential learning opportunities has increased significantly at MIT in recent years, and the center is expected to play a sizable role in increasing student engagement in social impact work and in helping to integrate social innovation into teaching and research.

At the same time, the Institute has made a commitment to help address complex issues with global impacts, such as climate change, economic inequality, and artificial intelligence. As part of that effort, the Office of Experiential Learning launched the Social Impact Experiential Learning Opportunity initiative last year, which has awarded nearly $1 million to fund hundreds of student opportunities. Projects cater to a broad range of interests and take place around the world — from using new computational methods to understand the role of special-interest-group funding in U.S. public policy to designing and testing a solar-powered, water-vapor condensing chamber in Madagascar.

Badgett, who is currently writing a book on re-imagining civic education at elite private schools, will begin her new role at the PKG Center in July. In the meantime, she is looking forward to bringing her experience to bear at MIT. “While leading public interest organizations was highly rewarding, I recognized that I could have a far greater impact educating future public interest leaders, and that higher education was the place to do it,” she says.



from MIT News https://ift.tt/mvEga4t

Monday, April 29, 2024

Offering clean energy around the clock

As remarkable as the rise of solar and wind farms has been over the last 20 years, achieving complete decarbonization is going to require a host of complementary technologies. That’s because renewables offer only intermittent power. They also can’t directly provide the high temperatures necessary for many industrial processes.

Now, 247Solar is building high-temperature concentrated solar power systems that use overnight thermal energy storage to provide round-the-clock power and industrial-grade heat.

The company’s modular systems can be used as standalone microgrids for communities or to provide power in remote places like mines and farms. They can also be used in conjunction with wind and conventional solar farms, giving customers 24/7 power from renewables and allowing them to offset use of the grid.

“One of my motivations for working on this system was trying to solve the problem of intermittency,” 247Solar CEO Bruce Anderson ’69, SM ’73, says. “I just couldn’t see how we could get to zero emissions with solar photovoltaics (PV) and wind. Even with PV, wind, and batteries, we can’t get there, because there’s always bad weather, and current batteries aren’t economical over long periods. You have to have a solution that operates 24 hours a day.”

The company’s system is inspired by the design of a high-temperature heat exchanger by the late MIT Professor Emeritus David Gordon Wilson, who co-founded the company with Anderson. The company integrates that heat exchanger into what Anderson describes as a conventional, jet-engine-like turbine, enabling the turbine to produce power by circulating ambient pressure hot air with no combustion or emissions — what the company calls a first in the industry.

Here’s how the system works: Each 247Solar system uses a field of sun-tracking mirrors called heliostats to reflect sunlight to the top of a central tower. The tower features a proprietary solar receiver that heats air to around 1,000 Celsius at atmospheric pressure. The air is then used to drive 247Solar’s turbines and generate 400 kilowatts of electricity and 600 kilowatts of heat. Some of the hot air is also routed through a long-duration thermal energy storage system, where it heats solid materials that retain the heat. The stored heat is then used to drive the turbines when the sun stops shining.

“We offer round-the-clock electricity, but we also offer a combined heat and power option, with the ability to take heat up to 970 Celsius for use in industrial processes,” Anderson says. “It’s a very flexible system.”
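
To get a feel for the scale those figures imply, here is a rough back-of-the-envelope sketch in Python. The overnight window and the heat-to-electricity conversion efficiency are assumptions for illustration, not 247Solar specifications; only the 400-kilowatt turbine output comes from the description above.

# Back-of-the-envelope estimate of overnight thermal storage requirements.
# Assumed values are marked; they are illustrative, not 247Solar specifications.
turbine_output_kw = 400            # electrical output cited above
overnight_hours = 14               # assumed hours without useful sunlight
heat_to_power_efficiency = 0.35    # assumed thermal-to-electric conversion efficiency

electricity_needed_kwh = turbine_output_kw * overnight_hours
stored_heat_needed_kwh = electricity_needed_kwh / heat_to_power_efficiency

print(f"Electricity needed overnight: {electricity_needed_kwh:,.0f} kWh")
print(f"Heat to bank during the day:  {stored_heat_needed_kwh:,.0f} kWh")

At these assumed numbers, the plant would need to bank roughly 16 megawatt-hours of heat each day on top of its daytime output, which is why concentrated solar plants with storage generally size their mirror fields well above the turbine’s nameplate capacity.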

The company’s first deployment will be with a large utility in India. If that goes well, 247Solar hopes to scale up rapidly with other utilities, corporations, and communities around the globe.

A new approach to concentrated solar

Anderson kept in touch with his MIT network after graduating in 1973. He served as the director of MIT’s Industrial Liaison Program (ILP) between 1996 and 2000 and was elected as an alumni member of the MIT Corporation in 2013. The ILP connects companies with MIT’s network of students, faculty, and alumni to facilitate innovation, and the experience changed the course of Anderson’s career.

“That was an extremely fascinating job, and from it two things happened,” Anderson says. “One is that I realized I was really an entrepreneur and was not well-suited to the university environment, and the other is that I was reminded of the countless amazing innovations coming out of MIT.”

After leaving as director, Anderson began a startup incubator where he worked with MIT professors to start companies. Eventually, one of those professors was Wilson, who had invented the new heat exchanger and a ceramic turbine. Anderson and Wilson ended up putting together a small team to commercialize the technology in the early 2000s.

Anderson had done his MIT master’s thesis on solar energy in the 1970s, and the team realized the heat exchanger made possible a novel approach to concentrated solar power. In 2010, they received a $6 million development grant from the U.S. Department of Energy. But their first solar receiver was damaged during shipping to a national laboratory for testing, and the company ran out of money.

It wasn’t until 2015 that Anderson was able to raise money to get the company back off the ground. By that time, a new high-temperature metal alloy had been developed, and Anderson swapped it in for Wilson’s ceramic heat exchanger.

The Covid-19 pandemic further slowed 247’s plans to build a demonstration facility at its test site in Arizona, but strong customer interest has kept the company busy. Concentrated solar power doesn’t work everywhere — Arizona’s clear sunshine is a better fit than Florida’s hazy skies, for example — but Anderson is currently in talks with communities in parts of the U.S., India, Africa, and Australia where the technology would be a good fit.

These days, the company is increasingly proposing combining its systems with traditional solar PV, which lets customers reap the benefits of low-cost solar electricity during the day while using 247’s energy at night.

“That way we can get at least 24, if not more, hours of energy from a sunny day,” Anderson says. “We’re really moving toward these hybrid systems, which work like a Prius: Sometimes you’re using one source of energy, sometimes you’re using the other.”

The company also sells its HeatStorE thermal batteries as standalone systems. Instead of being heated by the solar system, the thermal storage is heated by circulating air through an electric coil that’s been heated by electricity, either from the grid, standalone PV, or wind. The heat can be stored for nine hours or more on a single charge and then dispatched as electricity plus industrial process heat at 250 Celsius, or as heat only, up to 970 Celsius.

Anderson says 247’s thermal battery is about one-seventh the cost of lithium-ion batteries per kilowatt hour produced.

Scaling a new model

The company is keeping its system flexible for whatever path customers want to take to complete decarbonization.

In addition to 247’s India project, the company is in advanced talks with off-grid communities in the United States and Egypt, mining operators around the world, and the government of a small country in Africa. Anderson says the company’s next customer will likely be an off-grid community in the U.S. that currently relies on diesel generators for power.

The company has also partnered with a financial company that will allow it to access capital to fund its own projects and sell clean energy directly to customers, which Anderson says will help 247 grow faster than relying solely on selling entire systems to each customer.

As it works to scale up its deployments, Anderson believes 247 offers a solution to help customers respond to increasing pressure from governments as well as community members.

“Emerging economies in places like Africa don’t have any alternative to fossil fuels if they want 24/7 electricity,” Anderson says. “Our owning and operating costs are less than half that of diesel gen-sets. Customers today really want to stop producing emissions if they can, so you’ve got villages, mines, industries, and entire countries where the people inside are saying, ‘We can’t burn diesel anymore.’”



from MIT News https://ift.tt/25490Nz

An AI dataset carves new paths to tornado detection

The return of spring in the Northern Hemisphere touches off tornado season. A tornado's twisting funnel of dust and debris seems an unmistakable sight. But that sight can be obscured to radar, the tool of meteorologists. It's hard to know exactly when a tornado has formed, or even why.

A new dataset could hold answers. It contains radar returns from thousands of tornadoes that have hit the United States in the past 10 years. Storms that spawned tornadoes are flanked by other severe storms, some with nearly identical conditions, that never did. MIT Lincoln Laboratory researchers who curated the dataset, called TorNet, have now released it open source. They hope to enable breakthroughs in detecting one of nature's most mysterious and violent phenomena.

“A lot of progress is driven by easily available, benchmark datasets. We hope TorNet will lay a foundation for machine learning algorithms to both detect and predict tornadoes,” says Mark Veillette, the project's co-principal investigator with James Kurdzo. Both researchers work in the Air Traffic Control Systems Group. 

Along with the dataset, the team is releasing models trained on it. The models show promise for machine learning's ability to spot a twister. Building on this work could open new frontiers for forecasters, helping them provide more accurate warnings that might save lives. 

Swirling uncertainty

About 1,200 tornadoes occur in the United States every year, causing millions to billions of dollars in economic damage and claiming 71 lives on average. Last year, one unusually long-lasting tornado killed 17 people and injured at least 165 others along a 59-mile path in Mississippi.  

Yet tornadoes are notoriously difficult to forecast because scientists don't have a clear picture of why they form. “We can see two storms that look identical, and one will produce a tornado and one won't. We don't fully understand it,” Kurdzo says.

A tornado’s basic ingredients are thunderstorms with instability caused by rapidly rising warm air and wind shear that causes rotation. Weather radar is the primary tool used to monitor these conditions. But tornadoes lie too low to be detected, even when moderately close to the radar. As the radar beam with a given tilt angle travels farther from the antenna, it gets higher above the ground, mostly seeing reflections from rain and hail carried in the “mesocyclone,” the storm's broad, rotating updraft. A mesocyclone doesn't always produce a tornado.
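
That geometry can be made concrete with the standard beam-height approximation used in radar meteorology, the 4/3-earth-radius model. The short sketch below is illustrative only; the tilt angle and ranges are arbitrary example values, not parameters from TorNet.

import math

def beam_height_km(range_km, elevation_deg, antenna_height_km=0.0):
    """Approximate height of the beam center above ground (4/3-earth-radius model)."""
    ke_a = (4.0 / 3.0) * 6371.0  # effective Earth radius in km
    theta = math.radians(elevation_deg)
    return (math.sqrt(range_km ** 2 + ke_a ** 2 + 2 * range_km * ke_a * math.sin(theta))
            - ke_a + antenna_height_km)

# At the lowest common operational tilt (0.5 degrees), the beam center is already
# well above the lowest kilometer of the storm by about 100 km from the radar.
for r in (25, 50, 100, 150):
    print(f"{r:>3} km range -> beam center ~{beam_height_km(r, 0.5):.2f} km above ground")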

With this limited view, forecasters must decide whether or not to issue a tornado warning. They often err on the side of caution. As a result, the rate of false alarms for tornado warnings is more than 70 percent. “That can lead to boy-who-cried-wolf syndrome,” Kurdzo says.  

In recent years, researchers have turned to machine learning to better detect and predict tornadoes. However, raw datasets and models have not always been accessible to the broader community, stifling progress. TorNet is filling this gap.

The dataset contains more than 200,000 radar images, 13,587 of which depict tornadoes. The rest of the images are non-tornadic, taken from storms in one of two categories: randomly selected severe storms or false-alarm storms (those that led a forecaster to issue a warning but that didn’t produce a tornado).

Each sample of a storm or tornado comprises two sets of six radar images. The two sets correspond to different radar sweep angles. The six images portray different radar data products, such as reflectivity (showing precipitation intensity) or radial velocity (indicating if winds are moving toward or away from the radar).
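
As a rough sketch of how one such sample might be represented in code, the snippet below stacks two sweeps of six products into a single array with a binary label. The grid size and the product names beyond reflectivity and radial velocity are illustrative assumptions, not the released TorNet schema.

import numpy as np

IMAGE_SIZE = (120, 240)   # assumed range-by-azimuth grid, for illustration only
PRODUCTS = ["reflectivity", "radial_velocity", "spectrum_width",
            "differential_reflectivity", "correlation_coefficient", "differential_phase"]
SWEEPS = 2                # two radar sweep angles per sample, as described above

def make_sample(is_tornadic: bool) -> dict:
    """Build one placeholder sample: 2 sweeps x 6 products of radar imagery plus a label."""
    images = np.zeros((SWEEPS, len(PRODUCTS), *IMAGE_SIZE), dtype=np.float32)
    return {"images": images, "products": PRODUCTS, "label": int(is_tornadic)}

sample = make_sample(is_tornadic=True)
print(sample["images"].shape)   # (2, 6, 120, 240)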

A challenge in curating the dataset was first finding tornadoes. Within the corpus of weather radar data, tornadoes are extremely rare events. The team then had to balance those tornado samples with difficult non-tornado samples. If the dataset were too easy, say by comparing tornadoes to snowstorms, an algorithm trained on the data would likely over-classify storms as tornadic.

“What's beautiful about a true benchmark dataset is that we're all working with the same data, with the same level of difficulty, and can compare results,” Veillette says. “It also makes meteorology more accessible to data scientists, and vice versa. It becomes easier for these two parties to work on a common problem.”

Both researchers represent the progress that can come from cross-collaboration. Veillette is a mathematician and algorithm developer who has long been fascinated by tornadoes. Kurdzo is a meteorologist by training and a signal processing expert. In grad school, he chased tornadoes with custom-built mobile radars, collecting data to analyze in new ways.

“This dataset also means that a grad student doesn't have to spend a year or two building a dataset. They can jump right into their research,” Kurdzo says.

This project was funded by Lincoln Laboratory's Climate Change Initiative, which aims to leverage the laboratory's diverse technical strengths to help address climate problems threatening human health and global security.

Chasing answers with deep learning

Using the dataset, the researchers developed baseline artificial intelligence (AI) models. They were particularly eager to apply deep learning, a form of machine learning that excels at processing visual data. On its own, deep learning can extract features (key observations that an algorithm uses to make a decision) from images across a dataset. Other machine learning approaches require humans to first manually label features. 

“We wanted to see if deep learning could rediscover what people normally look for in tornadoes and even identify new things that typically aren't searched for by forecasters,” Veillette says.

The results are promising. Their deep learning model performed similarly to or better than all tornado-detecting algorithms known in the literature. The trained algorithm correctly classified 50 percent of weaker EF-1 tornadoes and over 85 percent of tornadoes rated EF-2 or higher, which make up the most devastating and costly of these storms.
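
For readers curious what a baseline detector might look like, here is a minimal sketch of a convolutional classifier over stacked radar channels, written in PyTorch. It is not the Lincoln Laboratory team's published architecture; the layer sizes and the 12-channel input (two sweeps of six products, matching the sample sketch above) are assumptions for illustration.

import torch
import torch.nn as nn

class TornadoClassifier(nn.Module):
    """Minimal CNN: stacked radar channels in, estimated tornado probability out."""
    def __init__(self, in_channels: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),       # collapse each feature map to one value
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return torch.sigmoid(self.head(z))

model = TornadoClassifier()
batch = torch.zeros(4, 12, 120, 240)       # four samples of 2 sweeps x 6 products
print(model(batch).shape)                  # torch.Size([4, 1])

A real system would train such a model with a class-weighted loss, since tornadic samples make up only a small fraction of the dataset.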

They also evaluated two other types of machine-learning models, and one traditional model to compare against. The source code and parameters of all these models are freely available. The models and dataset are also described in a paper submitted to a journal of the American Meteorological Society (AMS). Veillette presented this work at the AMS Annual Meeting in January.

“The biggest reason for putting our models out there is for the community to improve upon them and do other great things,” Kurdzo says. “The best solution could be a deep learning model, or someone might find that a non-deep learning model is actually better.”

TorNet could be useful in the weather community for other uses too, such as for conducting large-scale case studies on storms. It could also be augmented with other data sources, like satellite imagery or lightning maps. Fusing multiple types of data could improve the accuracy of machine learning models.

Taking steps toward operations

On top of detecting tornadoes, Kurdzo hopes that models might help unravel the science of why they form.

“As scientists, we see all these precursors to tornadoes — an increase in low-level rotation, a hook echo in reflectivity data, specific differential phase (KDP) foot and differential reflectivity (ZDR) arcs. But how do they all go together? And are there physical manifestations we don't know about?” he asks.

Teasing out those answers might be possible with explainable AI. Explainable AI refers to methods that allow a model to provide its reasoning, in a format understandable to humans, of why it came to a certain decision. In this case, these explanations might reveal physical processes that happen before tornadoes. This knowledge could help train forecasters, and models, to recognize the signs sooner. 

“None of this technology is ever meant to replace a forecaster. But perhaps someday it could guide forecasters' eyes in complex situations, and give a visual warning to an area predicted to have tornadic activity,” Kurdzo says.

Such assistance could be especially useful as radar technology improves and future networks potentially grow denser. Data refresh rates in a next-generation radar network are expected to increase from every five minutes to approximately one minute, perhaps faster than forecasters can interpret the new information. Because deep learning can process huge amounts of data quickly, it could be well-suited for monitoring radar returns in real time, alongside humans. Tornadoes can form and disappear in minutes.

But the path to an operational algorithm is a long road, especially in safety-critical situations, Veillette says. “I think the forecaster community is still, understandably, skeptical of machine learning. One way to establish trust and transparency is to have public benchmark datasets like this one. It's a first step.”

The next steps, the team hopes, will be taken by researchers across the world who are inspired by the dataset and energized to build their own algorithms. Those algorithms will in turn go into test beds, where they'll eventually be shown to forecasters, to start a process of transitioning into operations.

In the end, the path could circle back to trust.

“We may never get more than a 10- to 15-minute tornado warning using these tools. But if we could lower the false-alarm rate, we could start to make headway with public perception,” Kurdzo says. “People are going to use those warnings to take the action they need to save their lives.”



from MIT News https://ift.tt/oJQ159x

MIT faculty, instructors, students experiment with generative AI in teaching and learning

How can MIT’s community leverage generative AI to support learning and work on campus and beyond?

At MIT’s Festival of Learning 2024, faculty and instructors, students, staff, and alumni exchanged perspectives about the digital tools and innovations they’re experimenting with in the classroom. Panelists agreed that generative AI should be used to scaffold — not replace — learning experiences.

This annual event, co-sponsored by MIT Open Learning and the Office of the Vice Chancellor, celebrates teaching and learning innovations. When introducing new teaching and learning technologies, panelists stressed the importance of iteration and teaching students how to develop critical thinking skills while leveraging technologies like generative AI.

“The Festival of Learning brings the MIT community together to explore and celebrate what we do every day in the classroom,” said Christopher Capozzola, senior associate dean for open learning. “This year's deep dive into generative AI was reflective and practical — yet another remarkable instance of ‘mind and hand’ here at the Institute.”  

Incorporating generative AI into learning experiences 

MIT faculty and instructors aren’t just willing to experiment with generative AI — some believe it’s a necessary tool to prepare students to be competitive in the workforce. “In a future state, we will know how to teach skills with generative AI, but we need to be making iterative steps to get there instead of waiting around,” said Melissa Webster, lecturer in managerial communication at MIT Sloan School of Management. 

Some educators are revisiting their courses’ learning goals and redesigning assignments so students can achieve the desired outcomes in a world with AI. Webster, for example, previously paired written and oral assignments so students would develop ways of thinking. But, she saw an opportunity for teaching experimentation with generative AI. If students are using tools such as ChatGPT to help produce writing, Webster asked, “how do we still get the thinking part in there?”

One of the new assignments Webster developed asked students to generate cover letters through ChatGPT and critique the results from the perspective of future hiring managers. Beyond learning how to refine generative AI prompts to produce better outputs, Webster shared that “students are thinking more about their thinking.” Reviewing their ChatGPT-generated cover letter helped students determine what to say and how to say it, supporting their development of higher-level strategic skills like persuasion and understanding audiences.

Takako Aikawa, senior lecturer at the MIT Global Studies and Languages Section, redesigned a vocabulary exercise to ensure students developed a deeper understanding of the Japanese language, rather than just right or wrong answers. Students compared short sentences written by themselves and by ChatGPT and developed broader vocabulary and grammar patterns beyond the textbook. “This type of activity enhances not only their linguistic skills but stimulates their metacognitive or analytical thinking,” said Aikawa. “They have to think in Japanese for these exercises.”

While these panelists and other Institute faculty and instructors are redesigning their assignments, many MIT undergraduate and graduate students across different academic departments are leveraging generative AI for efficiency: creating presentations, summarizing notes, and quickly retrieving specific ideas from long documents. But this technology can also creatively personalize learning experiences. Its ability to communicate information in different ways allows students with different backgrounds and abilities to adapt course material in a way that’s specific to their particular context. 

Generative AI, for example, can help with student-centered learning at the K-12 level. Joe Diaz, program manager and STEAM educator for MIT pK-12 at Open Learning, encouraged educators to foster learning experiences where the student can take ownership. “Take something that kids care about and they’re passionate about, and they can discern where [generative AI] might not be correct or trustworthy,” said Diaz.

Panelists encouraged educators to think about generative AI in ways that move beyond a course policy statement. When incorporating generative AI into assignments, the key is to be clear about learning goals and open to sharing examples of how generative AI could be used in ways that align with those goals. 

The importance of critical thinking

Although generative AI can have positive impacts on educational experiences, users need to understand why large language models might produce incorrect or biased results. Faculty, instructors, and student panelists emphasized that it’s critical to contextualize how generative AI works. “[Instructors] try to explain what goes on in the back end and that really does help my understanding when reading the answers that I’m getting from ChatGPT or Copilot,” said Joyce Yuan, a senior in computer science. 

Jesse Thaler, professor of physics and director of the National Science Foundation Institute for Artificial Intelligence and Fundamental Interactions, warned about trusting a probabilistic tool to give definitive answers without uncertainty bands. “The interface and the output needs to be of a form that there are these pieces that you can verify or things that you can cross-check,” Thaler said.

When introducing tools like calculators or generative AI, the faculty and instructors on the panel said it’s essential for students to develop critical thinking skills in those particular academic and professional contexts. Computer science courses, for example, could permit students to use ChatGPT for help with their homework if the problem sets are broad enough that generative AI tools wouldn’t capture the full answer. However, introductory students who haven’t yet developed an understanding of programming concepts need to be able to discern whether the information ChatGPT generates is accurate.

Ana Bell, senior lecturer in the Department of Electrical Engineering and Computer Science and MITx digital learning scientist, dedicated one class toward the end of the semester of Course 6.100L (Introduction to Computer Science and Programming Using Python) to teaching students how to use ChatGPT for programming questions. She wanted students to understand that setting up generative AI tools with the context of a programming problem, inputting as many details as possible, helps achieve the best possible results. “Even after it gives you a response back, you have to be critical about that response,” said Bell. By waiting to introduce ChatGPT until this stage, students were able to look at generative AI’s answers critically because they had spent the semester developing the skills to identify whether problem sets were incorrect or might not work for every case.
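
As an illustration of what "setting up the tool with context" can look like in practice, here is a small sketch of how a student might assemble a detailed prompt before pasting it into a tool like ChatGPT. The particular fields are assumptions about useful context, not material from 6.100L.

# Illustrative sketch: gather as much context as possible before asking an AI tool.
context = {
    "task": "Write a Python function that returns the n-th Fibonacci number.",
    "constraints": "Iterative, O(n) time, no recursion; handle n = 0 correctly.",
    "signature": "def fib(n: int) -> int:",
    "examples": "fib(0) == 0, fib(1) == 1, fib(7) == 13",
    "what_i_tried": "A recursive version that was too slow for large n.",
}

prompt = "\n".join(f"{key}: {value}" for key, value in context.items())
print(prompt)   # paste into the chat tool, then critique the response it returns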

A scaffold for learning experiences

The bottom line from the panelists during the Festival of Learning was that generative AI should provide scaffolding for engaging learning experiences where students can still achieve desired learning goals. The MIT undergraduate and graduate student panelists found it invaluable when educators set expectations for the course about when and how it’s appropriate to use AI tools. Informing students of the learning goals allows them to understand whether generative AI will help or hinder their learning. Student panelists asked for trust that they would use generative AI as a starting point, or treat it like a brainstorming session with a friend for a group project. Faculty and instructor panelists said they will continue iterating their lesson plans to best support student learning and critical thinking. 

Panelists from both sides of the classroom discussed the importance of generative AI users being responsible for the content they produce and avoiding automation bias — trusting the technology’s response implicitly without thinking critically about why it produced that answer and whether it’s accurate. But since generative AI is built by people making design decisions, Thaler told students, “You have power to change the behavior of those tools.”



from MIT News https://ift.tt/HhGqd2t

Julie Shah named head of the Department of Aeronautics and Astronautics

Julie Shah ’04, SM ’06, PhD ’11, the H.N. Slater Professor in Aeronautics and Astronautics, has been named the new head of the Department of Aeronautics and Astronautics (AeroAstro), effective May 1.

“Julie brings an exceptional record of visionary and interdisciplinary leadership to this role. She has made substantial technical contributions in the field of robotics and AI, particularly as it relates to the future of work, and has bridged important gaps in the social, ethical, and economic implications of AI and computing,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

In addition to her role as a faculty member in AeroAstro, Shah served as associate dean of Social and Ethical Responsibilities of Computing in the MIT Schwarzman College of Computing from 2019 to 2022, helping launch a coordinated curriculum that engages more than 2,000 students a year at the Institute. She currently directs the Interactive Robotics Group in MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), and MIT’s Industrial Performance Center.

Shah and her team at the Interactive Robotics Group conduct research that aims to imagine the future of work by designing collaborative robot teammates that enhance human capability. She is expanding the use of human cognitive models for artificial intelligence and has translated her work to manufacturing assembly lines, health-care applications, transportation, and defense. In 2020, Shah co-authored the popular book “What to Expect When You’re Expecting Robots,” which explores the future of human-robot collaboration.

As an expert on how humans and robots interact in the workforce, Shah was named co-director of the Work of the Future Initiative, a successor group of MIT’s Task Force on the Work of the Future, alongside Ben Armstrong, executive director and research scientist at MIT’s Industrial Performance Center. In March of this year, Shah was named a co-leader of the Working Group on Generative AI and the Work of the Future, alongside Armstrong and Kate Kellogg, the David J. McGrath Jr. Professor of Management and Innovation. The group is examining how generative AI tools can contribute to higher-quality jobs and inclusive access to the latest technologies across sectors.

Shah’s contributions as both a researcher and educator have been recognized with many awards and honors throughout her career. She was named an associate fellow of the American Institute of Aeronautics and Astronautics (AIAA) in 2017, and in 2018 she was the recipient of the IEEE Robotics and Automation Society Academic Early Career Award. Shah was also named a Bisplinghoff Faculty Fellow, was named to MIT Technology Review’s TR35 List, and received an NSF Faculty Early Career Development Award. In 2013, her work on human-robot collaboration was included on MIT Technology Review’s list of 10 Breakthrough Technologies.

In January 2024, she was appointed to the first-ever AIAA Aerospace Artificial Intelligence Advisory Group, which was founded “to advance the appropriate use of AI technology particularly in aeronautics, aerospace R&D, and space.” Shah currently serves as editor-in-chief of Foundations and Trends in Robotics, as an editorial board member of the AIAA Progress Series, and as an executive council member of the Association for the Advancement of Artificial Intelligence.

A dedicated educator, Shah has been recognized for her collaborative and supportive approach as a mentor. She was honored by graduate students as “Committed to Caring” (C2C) in 2019. For the past 10 years, she has served as an advocate, community steward, and mentor for students in her role as head of house of the Sidney Pacific Graduate Community.

Shah received her bachelor’s and master’s degrees in aeronautical and astronautical engineering, and her PhD in autonomous systems, all from MIT. After receiving her doctoral degree, she joined Boeing as a postdoc, before returning to MIT in 2011 as a faculty member.

Shah succeeds Professor Steven Barrett, who has led AeroAstro as both interim department head and then department head since May 2023.



from MIT News https://ift.tt/MCrcyVP

Remembering Chasity Nunez, a shining star at MIT Health

On March 5, the MIT community lost one of its shining stars when Chasity Nunez passed away. She was 27.

“Chas,” as her friends and colleagues called her, served as the patient safety and clinical quality program coordinator at MIT Health. In her role, Nunez helped MIT Health maintain its high safety standards, working to train staff on reporting procedures and best practices for patient safety.

Director of Clinical Collaborations and Partnerships Elene Scheff was Nunez’s hiring manager and remembers her as a “perpetual learner.” Nunez put herself through both college and graduate school and was working on a graduate degree in informatics — her second master’s degree. “She loved to be challenged … She also loved collaborating with everybody,” Scheff remembers.

“Chas was passionate about the health and well-being of the MIT community,” adds MIT Chief Health Officer Cecilia Stuopis. “She was beloved by the colleagues who worked closely with her, and her dedication to our patients was powerful and impactful.”

Nunez’s dedication to helping patients within the MIT community was only matched by her desire to give back and be of service to her country. She was an active member of the U.S. Army National Guard, where she was stationed in Connecticut and served as an IT support specialist.

“[Chas] was always looking to improve upon herself,” says Janis Puibello, Nunez’s manager and MIT Health’s associate chief of nursing and clinical quality. “[She] was hungry for what we had to offer.”

Michele David, chief of clinical quality and patient safety, agrees. David recalls Nunez’s can-do spirit: “If she didn’t know how to do something, she would tell you, ‘I don’t know how to do it, but I will find out!’”

“She brought a lot to MIT Health and will always be with us,” says Puibello.

Nunez is survived by her mother and a daughter. To honor Nunez, MIT Health set up a GoFundMe campaign to help raise funds for her surviving daughter. The $5,000 campaign exceeded its goal by more than $3,000. All proceeds collected were donated to Nunez’s family to be used toward her daughter’s future education.



from MIT News https://ift.tt/tcAFDqS

Saturday, April 27, 2024

Exploring the history of data-driven arguments in public life

Political debates today may not always be exceptionally rational, but they are often infused with numbers. If people are discussing the economy or health care or climate change, sooner or later they will invoke statistics.

It was not always thus. Our habit of using numbers to make political arguments has a history, and William Deringer is a leading historian of it. Indeed, in recent years Deringer, an associate professor in MIT’s Program in Science, Technology, and Society (STS), has carved out a distinctive niche through his scholarship showing how quantitative reasoning has become part of public life.

In his prize-winning 2018 book “Calculated Values” (Harvard University Press), Deringer identified a time in British public life from the 1680s to the 1720s as a key moment when the practice of making numerical arguments took hold — a trend deeply connected with the rise of parliamentary power and political parties. Crucially, freedom of the press also expanded, allowing greater scope for politicians and the public to have frank discussions about the world as it was, backed by empirical evidence.

Deringer’s second book project, in progress and under contract to Yale University Press, digs further into a concept from the first book — the idea of financial discounting. This is a calculation to estimate what money (or other things) in the future is worth today, to assign those future objects a “present value.” Some skilled mathematicians understood discounting in medieval times; its use expanded in the 1600s; today it is very common in finance and is the subject of debate in relation to climate change, as experts try to estimate ideal spending levels on climate matters.

“The book is about how this particular technique came to have the power to weigh in on profound social questions,” Deringer says. “It’s basically about compound interest, and it’s at the center of the most important global question we have to confront.”
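
The arithmetic behind that power is compact: a payoff F received t years from now is assigned a present value of F / (1 + r)^t, so over long horizons the choice of discount rate r overwhelms everything else. A small illustrative calculation (the rates, horizon, and dollar figure are arbitrary examples, not numbers from Deringer's book):

def present_value(future_value: float, rate: float, years: int) -> float:
    """Standard discounting: the value today of a payoff received 'years' from now."""
    return future_value / (1 + rate) ** years

# $1 million of benefits arriving 100 years from now, at two candidate discount rates.
for rate in (0.01, 0.07):
    pv = present_value(1_000_000, rate, 100)
    print(f"discount rate {rate:.0%}: present value ~${pv:,.0f}")

At 1 percent the future benefits still carry real weight today; at 7 percent they all but vanish, which is why the discount rate itself becomes the contested question in climate policy.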

Numbers alone do not make a debate rational or informative; they can be false, misleading, used to entrench interests, and so on. Indeed, a key theme in Deringer’s work is that when quantitative reasoning gains more ground, the question is why, and to whose benefit. In this sense his work aligns with the long-running and always-relevant approach of the Institute’s STS faculty, in thinking carefully about how technology and knowledge are applied to the world.

“The broader culture has become more attuned to STS. Whether it’s conversations about AI or algorithmic fairness or climate change or energy, these are simultaneously technical and social issues,” Deringer says. “Teaching undergraduates, I’ve found the awareness of that at MIT has only increased.” For both his research and teaching, Deringer received tenure from MIT earlier this year.

Dig in, work outward

Deringer has been focused on these topics since he was an undergraduate at Harvard University.

“I found myself becoming really interested in the history of economics, the history of practical mathematics, data, statistics, and how it came to be that so much of our world is organized quantitatively,” he says.

Deringer wrote a college thesis about how England measured the land it was seizing from Ireland in the 1600s, and then, after graduating, went to work in the finance sector, which gave him a further chance to think about the application of quantification to modern life.

“That was not what I wanted to do forever, but for some of the conceptual questions I was interested in, the societal life of calculations, I found it to be a really interesting space,” Deringer says.

He returned to academia by pursuing his PhD in the history of science at Princeton University. There, in his first year of graduate school, in the archives, Deringer found 18th-century pamphlets about financial calculations concerning the value of stock involved in the infamous episode of speculation known as the South Sea Bubble. That became part of his dissertation; skeptics of the South Sea Bubble were among the prominent early voices bringing data into public debates. It has also helped inform his second book.

First, though, Deringer earned his doctorate from Princeton in 2012, then spent three years as a Mellon Postdoctoral Research Fellow at Columbia University. He joined the MIT faculty in 2015. At the Institute, he finished turning his dissertation into the “Calculated Values” book — which won the 2019 Oscar Kenshur Prize for the best book from the Center for Eighteenth-Century Studies at Indiana University, and was co-winner of the 2021 Joseph J. Spengler Prize for best book from the History of Economics Society.

“My method as a scholar is to dig into the technical details, then work outward historically from them,” Deringer says.

A long historical chain

Even as Deringer was writing his first book, the idea for the second one was taking root in his mind. Those South Sea Bubble pamphlets he had found while at Princeton incorporated discounting, a topic that appears intermittently in “Calculated Values.” Deringer was intrigued by how adept 18th-century figures were at discounting.

“Something that I thought of as a very modern technique seemed to be really well-known by a lot of people in the 1720s,” he says.

At the same time, a conversation with an academic colleague in philosophy made it clear to Deringer how contested the conclusions drawn from discounting had become in climate change policy. He soon resolved to write the “biography of a calculation” about financial discounting.

“I knew my next book had to be about this,” Deringer says. “I was very interested in the deep historical roots of discounting, and it has a lot of present urgency.”

Deringer says the book will incorporate material about the financing of English cathedrals, the heavy use of discounting in the mining industry during the Industrial Revolution, a revival of discounting in 1960s policy circles, and climate change, among other things. In each case, he is carefully looking at the interests and historical dynamics behind the use of discounting.

“For people who use discounting regularly, it’s like gravity: It’s very obvious that to be rational is to discount the future according to this formula,” Deringer says. “But if you look at history, what is thought of as rational is part of a very long historical chain of people applying this calculation in various ways, and over time that’s just how things are done. I’m really interested in pulling apart that idea that this is a sort of timeless rational calculation, as opposed to a product of this interesting history.”

Working in STS, Deringer notes, has helped encourage him to link together numerous historical time periods into one book about the numerous ways discounting has been used.

“I’m not sure that pursuing a book that stretches from the 17th century to the 21st century is something I would have done in other contexts,” Deringer says. He is also quick to credit his colleagues in STS and in other programs for helping create the scholarly environment in which he is thriving.

“I came in with a really amazing cohort of other scholars in SHASS,” Deringer notes, referring to the MIT School of Humanities, Arts, and Social Sciences. He cites others receiving tenure in the last year such as his STS colleague Robin Scheffler, historian Megan Black, and historian Caley Horan, with whom Deringer has taught graduate classes on the concept of risk in history. In all, Deringer says, the Institute has been an excellent place for him to pursue interdisciplinary work on technical thought in history.

“I work on very old things and very technical things,” Deringer says. “But I’ve found a wonderful welcoming at MIT from people in different fields who light up when they hear what I’m interested in.”



from MIT News https://ift.tt/irgCXVY

Friday, April 26, 2024

Three from MIT awarded 2024 Guggenheim Fellowships

MIT faculty members Roger Levy, Tracy Slatyer, and Martin Wainwright are among 188 scientists, artists, and scholars awarded 2024 fellowships from the John Simon Guggenheim Memorial Foundation. Working across 52 disciplines, the fellows were selected from almost 3,000 applicants for “prior career achievement and exceptional promise.”

Each fellow receives a monetary stipend to pursue independent work at the highest level. Since its founding in 1925, the Guggenheim Foundation has awarded over $400 million in fellowships to more than 19,000 fellows. This year, MIT professors were recognized in the categories of neuroscience, physics, and data science.

Roger Levy is a professor in the Department of Brain and Cognitive Sciences. Combining computational modeling of large datasets with psycholinguistic experimentation, his work furthers our understanding of the cognitive underpinning of language processing, and helps to design models and algorithms that will allow machines to process human language. He is a recipient of the Alfred P. Sloan Research Fellowship, the NSF Faculty Early Career Development (CAREER) Award, and a fellowship at the Center for Advanced Study in the Behavioral Sciences.

Tracy Slatyer is a professor in the Department of Physics as well as the Center for Theoretical Physics in the MIT Laboratory for Nuclear Science and the MIT Kavli Institute for Astrophysics and Space Research. Her research focuses on dark matter — novel theoretical models, predicting observable signals, and analysis of astrophysical and cosmological datasets. She was a co-discoverer of the giant gamma-ray structures known as the “Fermi Bubbles” erupting from the center of the Milky Way, for which she received the New Horizons in Physics Prize in 2021. She is also a recipient of a Simons Investigator Award and Presidential Early Career Awards for Scientists and Engineers.

Martin Wainwright is the Cecil H. Green Professor in Electrical Engineering and Computer Science and Mathematics, and is affiliated with the Laboratory for Information and Decision Systems and the Statistics and Data Science Center. He is interested in statistics, machine learning, information theory, and optimization. Wainwright has been recognized with an Alfred P. Sloan Foundation Fellowship, the Medallion Lectureship and Award from the Institute of Mathematical Statistics, and the COPSS Presidents’ Award from the Joint Statistical Societies. Wainwright has also co-authored books on graphical and statistical modeling, and solo-authored a book on high-dimensional statistics.

“Humanity faces some profound existential challenges,” says Edward Hirsch, president of the foundation. “The Guggenheim Fellowship is a life-changing recognition. It’s a celebrated investment into the lives and careers of distinguished artists, scholars, scientists, writers and other cultural visionaries who are meeting these challenges head-on and generating new possibilities and pathways across the broader culture as they do so.”



from MIT News https://ift.tt/dKSo9LZ

A musical life: Carlos Prieto ’59 in conversation and concert

World-renowned cellist Carlos Prieto ’59 returned to campus for an event to perform and to discuss his new memoir, “Mi Vida Musical.”

At the April 9 event in the Samberg Conference Center, Prieto spoke about his formative years at MIT and his subsequent career as a professional cellist. The talk was followed by performances of J.S. Bach’s “Cello Suite No. 3” and Eugenio Toussaint’s “Bachriation.” Valerie Chen, a 2022 Sudler Prize winner and Emerson/Harris Fellow, also performed Philip Glass’s “Orbit.”

Prieto was born in Mexico City and began studying the cello when he was 4. He graduated from MIT in 1959 with BS degrees in Course 3, then called Metallurgical Engineering and today Materials Science and Engineering, and in Course 14 (Economics). He was the first cellist and a soloist of the MIT Symphony Orchestra. While at MIT, he took all available courses in Russian, which allowed him, years later, to study at Lomonosov University in Moscow.

After graduation from MIT, Prieto returned to Mexico, where he rose to become the head of an integrated iron and steel company.

“When I returned to Mexico, I was very active in my business life, but I was also very active in my music life,” he told the audience. “And at one moment, the music overcame all the other activities and I left my business activities to devote all my time to the cello and I’ve been doing this for the past 50 years.”

During his musical career, Prieto has performed all over the world and has played and recorded the world premieres of 115 compositions, most of which were written for him. He is the author of 14 books, some of which have been translated into English, Russian, and Portuguese.

Prieto’s honors include the Order of the Arts and Letters from France, the Order of Civil Merit from the King of Spain, and the National Prize for Arts and Sciences from the president of Mexico. In 1993 he was appointed a member of the MIT Music and Theater Advisory Committee. In 2014, the School of Humanities, Arts, and Social Sciences awarded Prieto the Robert A. Muh Alumni Award.



de MIT News https://ift.tt/mad1wCz

Thursday, April 25, 2024

The MIT Edgerton Center’s third annual showcase dazzles onlookers

On April 9, a trailer with the words “Born by Fire” emblazoned on the back pulled down MIT's North Corridor (a.k.a. the Outfinite). Students, clad in orange construction vests, maneuvered their futuristic creation out of the trailer, drawing a crowd of curious bystanders. The aerodynamic shell is covered by 5 square meters of solar panels. This multi-occupancy solar car, Gemini, designed and built by the Solar Electric Vehicle Team (SEVT), is slated to race in the 2024 American Solar Challenge. Positioned just outside Building 13, Gemini made its inaugural public appearance at this year’s Edgerton Center Student Teams Showcase. The team’s first-place trophy from an earlier competition sat atop the car, glistening in the sunlight.

Next, MIT Motorsports arrived with their shiny red electric race car, MY24. SEVT, embodying MIT's spirit of collaboration, paused their own installation to assist the Motorsports team in transporting MY24 into Lobby 13. Such camaraderie is commonplace among Edgerton teams. MY24 is slated to compete in two upcoming events: the FSAE Hybrid event in Loudon, New Hampshire, on May 1, followed by the FSAE Motorsports event in Michigan in June.

At the Third Annual Edgerton Center Showcase, Lobby 13 was abuzz with students, faculty, and visitors drawn in by the passion and excitement of members of 14 Edgerton Center student teams. Team members excitedly unveiled a wide range of technologies, including autonomous waterborne craft, rockets, wind turbines, assistive devices, and hydrogen-powered turbine engines. “Seeing the culmination of what MIT students can build in so many different forms was inspiring. It was great to see everyone's passion and creativity thriving in each of the team's projects,” says junior Anhad Sawhney, president of the MIT Electronics Research Society (MITERS) and captain of the Combat Robotics Club.

In one corner, children congregated around the Combat Robotics table, captivated by clips of the team competing on the Discovery Channel’s “BattleBots” series. Nearby, towering rockets almost brushing the ceiling captured the gaze of onlookers. Suddenly, a symphony of electrical crackles filled the air. Visitors quickly discovered the source was not an AV malfunction, but a Tesla coil created by MITERS, its lightning dancing to pitches played on a computer keyboard. Established in 1973, MITERS — a member-run project space and machine shop — continues to give students the chance to tinker and create quirky inventions such as the motorized shopping cart, DOOMsled.

Adjacent to MITERS, students on the Spokes team dished ice cream into a bike-powered blender. A quick ride down the street created milkshakes for many to enjoy. Spokes is an Edgerton team of students who will bike across the country this summer, teaching STEM outreach classes along the way. Their curriculum is inspired by MIT's hands-on approach to education.

One of the newest Edgerton Center teams, The Assistive Technology Club, showed an array of innovations poised to revolutionize lives. Their blind assistance team is designing an app that uses machine learning to describe the most relevant features of the environment to visually impaired users. Their adaptive game controller team is designing a one-handed game controller for a user who is paralyzed on one side of her body due to a stroke. Junior Ben Lou, from the robotic self-feeding device team, has a rare disease called spinal muscular atrophy. He shares, “Eating is a basic necessity, but current devices that help people like me eat are not versatile with different foods, unaccommodating to users with different positional needs, generally difficult to set up, and extremely expensive. The self-feeding team is completely re-imagining the way a self-feeding device can work. Instead of operating with a spoon, which cannot handle a wide range of foods and is prone to spillage (among other issues), our device operates with an entirely new utensil.”

Beyond showcasing projects, the event served as a forum for idea exchange and collaboration. The MIT Wind team brought their first working prototype of their model wind turbine, which they will use as a baseline for competing in the Collegiate Wind Competition next year. “We hope to continue working on rotor optimization and blade fabrication, power conversion, and offshore foundation design to be competitive with the other CWC teams next year,” says team captain Kirby Heck. “As a new Edgerton Center team, the showcase was an amazing opportunity for our team members to engage with industry partners, interact with the MIT community, and explore how we fit within the broader constellation of teams within Edgerton at MIT. We also received helpful feedback on our current design and have plenty of new ideas on how we can innovate for our next design iteration.” 

The event included a short program, where SEVT captain Adrienne Wing Suen Lai and first-year Rachel Mohommed of the Electric Vehicle Team gave a shout-out to all the teams. A special tribute was also paid to Peggy Eysenbach, the event's organizer and the development officer at the Edgerton Center, with a bouquet of flowers. Edgerton Center Director and Professor Kim Vandiver welcomed the MIT community to the event and gave a brief review of the 30-year history of engineering teams sponsored by the Edgerton Center.

Vandiver believes that through all the fun and creativity, strong careers emerge. “Participation in an engineering team is great professional preparation. Upon graduation, these leaders are unafraid of hard problems, and rapidly rise in project management roles,” Vandiver says.



de MIT News https://ift.tt/kIMpShH

Wednesday, April 24, 2024

3 Questions: A shared vocabulary for how infectious diseases spread

On April 18, the World Health Organization (WHO) released new guidance on airborne disease transmission that seeks to create a consensus around the terminology used to describe the transmission of infectious pathogens through the air.

Lydia Bourouiba, the director of the MIT Fluid Dynamics of Disease Transmission Laboratory and the Fluids and Health Network, an associate professor in the MIT departments of Civil and Environmental Engineering and Mechanical Engineering, and a core member of the Institute for Medical Engineering and Science, served on the WHO expert team that developed the guidance. For more than a decade, Bourouiba’s laboratory has been researching fundamental physical processes underlying how infectious diseases spread from person to person.

The new WHO guidance puts forth new definitions of key terminology pertaining to respiratory infectious disease transmission. This reflects a new, shared understanding of how respiratory infectious pathogens move from one person to the next: through the exhalations of turbulent “puff clouds” that carry infectious contaminants in a continuum of droplet sizes and can lead to exposure at a range of distances.

Bourouiba’s lab has pioneered this physical picture and worked closely with a range of stakeholders over the years to ensure that public health guidance incorporates the latest science, improving preparedness for emerging respiratory pathogens. Bourouiba spoke with MIT News about the new WHO guidance.

Q: How did you become involved in creating these new guidelines?

A: I have been researching exhalation emissions for more than a decade. After the first SARS outbreak in 2003, I realized that the mechanisms by which respiratory pathogens are transmitted from one host to the next were essentially considered too random and too brief to be amenable to systematic investigation. Hence, the physical act of pathogen transmission was relegated to a black box. However, I also realized the fundamental importance of understanding these events mechanistically, to ultimately be able to mitigate such transmission events in a rational and principled manner. For this, we needed to understand the fluid physics and biophysics of respiratory emissions.

In the Fluid Dynamics of Disease Transmission Laboratory at MIT, we have been investigating these respiratory emissions. Our work showed that prior guidelines — specifically, the dichotomy of “large” versus “small” drops and isolated droplet emissions (essentially from spray bottles) — were not at all what we actually see and quantify when investigating respiratory emissions. We focused on establishing the full physics of such processes, from emission physiology to the fluid dynamics and biophysics of the exhalation flows and the interaction of the exhaled turbulent multiphase flow with the conditions of the ambient environment (air currents, temperature, and humidity).

Since 2015, I have also been working with the MIT Policy Lab at the Center for International Studies to disseminate our findings to public health officials and various agencies. We organized multiple conferences where we brought in scientists, clinicians, virologists, epidemiologists, microbiologists, and representatives from the U.S. Centers for Disease Control and Prevention and other groups, both before and during the pandemic.

In 2022, I was asked to serve on the World Health Organization’s technical consultation expert team, which was tasked with reaching a consensus on a new framework on respiratory infectious disease transmission. That process lasted about two years and culminated so far in the publication of the new guidelines. The process was obviously accelerated by the Covid-19 pandemic and the issues it brought to the fore regarding the inadequate old definitions. The goal of convening the consultation group was to bring together leading experts from around the globe and from very diverse fields — ranging from fluid physics to clinical medicine and epidemiology — to think through how best to redefine terms related to respiratory infectious disease transmission in light of the latest science. These new guidelines are very much a first step in a series of important consultations and efforts.

Q: How did your research change the WHO’s description of how diseases are transmitted through the air?

A: Our research established that these droplets are not exhaled as isolated drops moving semiballistically [that will settle out of the air relatively near to the person who released them]. Instead, they are part of a multiphase turbulent puff gas cloud that contains a continuum of droplet sizes, where the cloud provides a comparatively warm and moist — and hence protective — environment for these droplets and the pathogens they contain, with respect to ambient air. One of our first papers establishing this concept was published in 2014. And we have shown since that models that do not include the proper physics of these turbulent puff clouds can dramatically underestimate the ranges of propagation and also completely shift estimates of risk and pathogen persistence in an indoor space.

These turbulent puff clouds are inhomogeneous, with potential for highly concentrated pathogen-bearing droplet load regions that can persist for a comparatively long time while moving very quickly across an indoor space in some of the most violent exhalations. Their dynamics enable potential effective inhalation exposure at a range of distances, long and short. This continuum and physical picture of concentrated packets of droplets and their impact on persistence of pathogen infectivity and exposure are in complete contrast with the notion of homogeneous mixing indoors, and with the prior false dichotomy of “large” droplets that fall ballistically and “small” droplets that essentially evaporate immediately to form aerosols assumed to be deactivated. The prior picture led to the belief that very few infectious diseases are airborne or require air management. This dichotomy, along with other misconceptions rooted in science from the 1930s, has surprisingly persisted in guidelines for decades.

The new guidance is a major milestone, not only because such guidelines do not change very often — every 10 or 15 years at best — but also because, in addition to the WHO, five national or transnational health agencies have already endorsed the findings, including the U.S. Centers for Disease Control and Prevention, which also acknowledged the importance of the shift.

Q: What are the biggest implications of these changes?

A: An agreed-upon common terminology is critical in infectious disease research and mitigation. The new guidelines set the foundation for such a common understanding and process. One might think it is just semantics or a small, incremental change in our understanding. However, risk calculations actually vary tremendously based on the framework one uses. We used mathematical models and physical experiments and found that the physical picture change has dramatic implications on risk estimations.

Another major implication was discussed in one of our publications from the very early stages of the pandemic, which stressed the urgent need for health care workers to have N95 masks because of these cloud dynamics and the associated importance of paying attention to indoor air management. Here again, risk calculations without the puff cloud dynamics would suggest that a typical hospital room or emergency department would dilute the pathogen load enough not to pose a high risk. But with the puff cloud, and the dynamics of the continuum of droplet sizes within and coupled to it, it becomes clear that health care workers could still be exposed via inhalation to significant viral loads. Thus, they should have been provided N95 masks, in most conditions, when entering the space hosting a Covid-19 patient, even if they were not in their immediate vicinity. That article was the first to call attention to the importance of masking of health care workers due to the actual exhalation puff cloud and continuum of droplet sizes shaping airborne transmission.

It took public health agencies more than six months to start considering shifting their masking guidelines during Covid-19. But this WHO document is broader than Covid-19. It redefines the basic definitions surrounding all respiratory infectious diseases — those that we know and those yet to come. That means there will be a different risk assessment and thereby different decision trees and policies, trickling down to different choices of protective equipment and mitigation protocols, and different parts of health agencies or facilities that might be activated or deployed.

The new guidelines are also a major acknowledgement that infectious disease transmission is truly an interdisciplinary area where scientists, clinicians, and public health officials of different backgrounds need to communicate with each other efficiently and clearly and share their insights, be it fundamental physics or clinical infectious diseases.  So, it is not just the content of these guidelines, but also the way this update unfolded. Hopefully it changes the mindset for responding to such public health threats.



de MIT News https://ift.tt/FbmYSaJ

Two MIT teams selected for NSF sustainable materials grants

Two teams led by MIT researchers were selected in December 2023 by the U.S. National Science Foundation (NSF) Convergence Accelerator, part of NSF’s Directorate for Technology, Innovation and Partnerships (TIP), to receive awards of $5 million each over three years to pursue research aimed at bringing cutting-edge new sustainable materials and processes from the lab into practical, full-scale industrial production. The selection was made after 16 teams from around the country were chosen last year for one-year grants to develop detailed plans for further research aimed at solving problems of sustainability and scalability for advanced electronic products.

Of the two MIT-led teams chosen for this current round of funding, one team, Topological Electric, is led by Mingda Li, an associate professor in the Department of Nuclear Science and Engineering. This team will be finding pathways to scale up sustainable topological materials, which have the potential to revolutionize next-generation microelectronics by showing superior electronic performance, such as dissipationless states or high-frequency response. The other team, led by Anuradha Agarwal, a principal research scientist at MIT’s Materials Research Laboratory, will be focusing on developing new materials, devices, and manufacturing processes for microchips that minimize energy consumption using electronic-photonic integration, and that detect and avoid the toxic or scarce materials used in today’s production methods.

Scaling the use of topological materials

Li explains that some materials based on quantum effects have achieved successful transitions from lab curiosities to mass production, such as blue-light LEDs and giant magnetoresistance (GMR) devices used for magnetic data storage. But he says a variety of equally promising materials have yet to make it into real-world applications.

“What we really wanted to achieve is to bring newer-generation quantum materials into technology and mass production, for the benefit of broader society,” he says. In particular, he says, “topological materials are really promising to do many different things.”

Topological materials are ones whose electronic properties are fundamentally protected against disturbance. For example, Li points to the fact that just in the last two years, some topological materials have been shown to be even better electrical conductors than copper, which is typically used for the wires interconnecting electronic components. But unlike the blue-light LEDs or the GMR devices, which have been widely produced and deployed, when it comes to topological materials, “there’s no company, no startup, there’s really no business out there,” adds Tomas Palacios, the Clarence J. Lebel Professor in Electrical Engineering at MIT and co-principal investigator on Li’s team. Part of the reason is that many versions of such materials are studied “with a focus on fundamental exotic physical properties with little or no consideration on the sustainability aspects,” says Liang Fu, an MIT professor of physics and also a co-PI. Their team will be looking for alternative formulations that are more amenable to mass production.

One possible application of these topological materials is for detecting terahertz radiation, explains Keith Nelson, an MIT professor of chemistry and co-PI. Radiation at these extremely high frequencies can carry far more information than conventional radio or microwaves, but at present there are no mature electronic devices available that are scalable in this frequency range. “There’s a whole range of possibilities for topological materials” that could work at these frequencies, he says. In addition, he says, “we hope to demonstrate an entire prototype system like this in a single, very compact solid-state platform.”

Li says that among the many possible applications of topological devices for microelectronics devices of various kinds, “we don’t know which, exactly, will end up as a product, or will reach real industrial scaleup. That’s why this opportunity from NSF is like a bridge, which is precious, to allow us to dig deeper to unleash the true potential.”

In addition to Li, Palacios, Fu, and Nelson, the Topological Electric team includes Qiong Ma, assistant professor of physics at Boston College; Farnaz Niroui, assistant professor of electrical engineering and computer science at MIT; Susanne Stemmer, professor of materials at the University of California at Santa Barbara; Judy Cha, professor of materials science and engineering at Cornell University; industrial partners including IBM, Analog Devices, and Raytheon; and professional consultants. “We are taking this opportunity seriously,” Li says. “We really want to see if the topological materials are as good as we show in the lab when being scaled up, and how far we can push to broadly industrialize them.”

Toward sustainable microchip production and use

The microchips behind everything from smartphones to medical imaging are associated with a significant percentage of greenhouse gas emissions today, and every year the world produces more than 50 million metric tons of electronic waste, the equivalent of about 5,000 Eiffel Towers. Further, the data centers necessary for complex computations and huge amounts of data transfer — think AI and on-demand video — are growing and are projected to require 10 percent of the world’s electricity by 2030.

“The current microchip manufacturing supply chain, which includes production, distribution, and use, is neither scalable nor sustainable, and cannot continue. We must innovate our way out of this crisis,” says Agarwal.

The name of Agarwal’s team, FUTUR-IC, refers to the future of integrated circuits, or chips, to be pursued through a global alliance for sustainable microchip manufacturing. Says Agarwal, “We bring together stakeholders from industry, academia, and government to co-optimize across three dimensions: technology, ecology, and workforce. These were identified as key interrelated areas by some 140 stakeholders. With FUTUR-IC we aim to cut waste and CO2-equivalent emissions associated with electronics by 50 percent every 10 years.”

The market for microelectronics in the next decade is predicted to be on the order of a trillion dollars, but most of the manufacturing for the industry occurs only in limited geographical pockets around the world. FUTUR-IC aims to diversify and strengthen the supply chain for manufacturing and packaging of electronics. The alliance has 26 collaborators and is growing. Current external collaborators include the International Electronics Manufacturing Initiative (iNEMI), Tyndall National Institute, SEMI, Hewlett Packard Enterprise, Intel, and the Rochester Institute of Technology.

Agarwal leads FUTUR-IC in close collaboration with others, including, from MIT, Lionel Kimerling, the Thomas Lord Professor of Materials Science and Engineering; Elsa Olivetti, the Jerry McAfee Professor in Engineering; Randolph Kirchain, principal research scientist in the Materials Research Laboratory; and Greg Norris, director of MIT’s Sustainability and Health Initiative for NetPositive Enterprise (SHINE). All are affiliated with the Materials Research Laboratory. They are joined by Samuel Serna, an MIT visiting professor and assistant professor of physics at Bridgewater State University. Other key personnel include Sajan Saini, education director for the Initiative for Knowledge and Innovation in Manufacturing in MIT’s Department of Materials Science and Engineering; Peter O’Brien, a professor from Tyndall National Institute; and Shekhar Chandrashekhar, CEO of iNEMI.

“We expect the integration of electronics and photonics to revolutionize microchip manufacturing, enhancing efficiency, reducing energy consumption, and paving the way for unprecedented advancements in computing speed and data-processing capabilities,” says Serna, who is the co-lead on the project’s technology “vector.”

Common metrics for these efforts are needed, says Norris, co-lead for the ecology vector, adding, “The microchip industry must have transparent and open Life Cycle Assessment (LCA) models and data, which are being developed by FUTUR-IC.” This is especially important given that microelectronics production transcends industries. “Given the scale and scope of microelectronics, it is critical for the industry to lead in the transition to sustainable manufacture and use,” says Kirchain, another co-lead and the co-director of the Concrete Sustainability Hub at MIT. To bring about this cross-fertilization, co-lead Olivetti, also co-director of the MIT Climate and Sustainability Consortium (MCSC), will collaborate with FUTUR-IC to enhance the benefits from microchip recycling, leveraging the learning across industries.

Saini, the co-lead for the workforce vector, stresses the need for agility. “With a workforce that adapts to a practice of continuous upskilling, we can help increase the robustness of the chip-manufacturing supply chain, and validate a new design for a sustainability curriculum,” he says.

“We have become accustomed to the benefits forged by the exponential growth of microelectronic technology performance and market size,” says Kimerling, who is also director of MIT’s Materials Research Laboratory and co-director of the MIT Microphotonics Center. “The ecological impact of this growth in terms of materials use, energy consumption and end-of-life disposal has begun to push back against this progress. We believe that concurrently engineered solutions for these three dimensions will build a common learning curve to power the next 40 years of progress in the semiconductor industry.”

The MIT teams are two of six that received awards addressing sustainable materials for global challenges through phase two of the NSF Convergence Accelerator program. Launched in 2019, the program targets solutions to especially compelling challenges at an accelerated pace by incorporating a multidisciplinary research approach.



de MIT News https://ift.tt/cPX5gBS

Ian Waitz named vice president for research

In a letter to the MIT community today, President Sally Kornbluth announced the appointment of Ian A. Waitz to the position of vice president for research. In the role, Waitz will report to the president and oversee MIT’s vast research enterprise. The appointment is effective May 1.

Waitz, who is also the Jerome C. Hunsaker Professor of Aeronautics and Astronautics, brings deep knowledge of MIT to the position. Over more than 30 years, he has served in a wide range of roles across the Institute, where he has made his mark through energy, optimism, persistence, and a commitment to MIT’s mission of using education and innovation to create a better world.

“Ian brings a rare range and depth of understanding of MIT’s research and educational enterprise, our daily operations, our institutional challenges and opportunities, our history and our values — and an unmatched record of solving hard problems and getting big, high-stakes things done well,” Kornbluth wrote. 

“MIT’s research enterprise is a critical part of our mission, not just for the impact that innovation and discovery have on the world, but also for the way it enables us to educate people by giving them problems that no one else has ever solved before,” Waitz says. “That builds a sort of intellectual capacity and resilience to work on really hard problems, and the nation and the world need us to work on hard problems.”

Waitz will step down from his current role as vice chancellor overseeing undergraduate and graduate education, where he was instrumental in advancing the priorities of the Chancellor’s Office, currently led by Melissa Nobles.

In that role, which he has held since 2017, Waitz worked with students, faculty, and staff from across the Institute to revamp the first-year undergraduate academic experience, helped steer the Institute through the Covid-19 pandemic, and led efforts to respond to graduate student unionization. Waitz also led a strategic restructuring to integrate the former offices of the Dean for Undergraduate Education and the Dean for Graduate Education, creating the Office of the Vice Chancellor and leading to a more aligned and efficient organization. And, he spearheaded projects to expand professional development opportunities for graduate students, created the MIT Undergraduate Advising Center, worked to significantly expand undergraduate financial aid, and broadly expanded support for graduate students.

“I think my experience gives me a unique perspective on research and education at MIT,” Waitz says. “Education is obviously an amazing part of MIT, and working with students bridges education and the research. That’s one of the things that’s special about a research university. I’m excited for this new role and to continue to work to further strengthen MIT’s exceptional research enterprise.”

Waitz will be filling a role previously held by Maria Zuber, the E. A. Griswold Professor of Geophysics, who now serves as MIT’s presidential advisor for science and technology policy. Waitz says he’s eager to dive in and work to identify ways to help MIT’s prolific research engine run more smoothly. The move is just the latest example of Waitz leaning into new opportunities in service to MIT.

Prior to assuming his current role as vice chancellor, Waitz served as the dean of the School of Engineering between 2011 and 2017, supporting the school’s ability to attract and support exceptional students and faculty. He oversaw the launch of programs including the Institute for Data, Systems, and Society (IDSS), the Institute for Medical Engineering and Science (IMES), the Sandbox Innovation Fund, and the MIT Beaver Works program with Lincoln Laboratory. He also strengthened co-curricular and enrichment programs for undergraduate and graduate students, and worked with department heads to offer more flexible degrees.

Prior to that, Waitz served as the head of MIT’s Department of Aeronautics and Astronautics, where he has been a faculty member since 1991. His research focuses on developing technological, operational, and policy options to mitigate the environmental impacts of aviation. He is a member of the National Academy of Engineering, a fellow of the American Institute of Aeronautics and Astronautics, and has worked closely with industry and government throughout his career.

“One lesson I’ve learned is that the greatest strength of MIT is our students, faculty, and staff,” Waitz says. “We identify people who are real intellectual entrepreneurs. Those are the people that really thrive here, and what you want to do is create a low-friction, high-resource environment for them. Amazing things bubble up from that.”



de MIT News https://ift.tt/EZiMKX0

A closed-loop drug-delivery system could improve chemotherapy

When cancer patients undergo chemotherapy, the dose of most drugs is calculated based on the patient’s body surface area. This is estimated by plugging the patient’s height and weight into an equation, dating to 1916, that was formulated from data on just nine patients.

This simplistic dosing doesn’t take into account other factors and can lead to patients receiving either too much or too little of a drug. As a result, some patients likely experience avoidable toxicity or insufficient benefit from the chemotherapy they receive.
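The 1916 equation referred to above is commonly identified as the Du Bois formula, which estimates body surface area (BSA) from height and weight. As a rough illustration of how such a dose is computed (the patient values and per-square-meter dose below are placeholders, not clinical guidance), a minimal sketch in Python:

```python
def bsa_du_bois(weight_kg: float, height_cm: float) -> float:
    """Estimate body surface area in square meters (Du Bois formula, 1916)."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)


def bsa_based_dose(weight_kg: float, height_cm: float, dose_per_m2_mg: float) -> float:
    """Scale a per-square-meter dose by the estimated body surface area."""
    return dose_per_m2_mg * bsa_du_bois(weight_kg, height_cm)


# A hypothetical 70 kg, 175 cm patient on a hypothetical 2,400 mg/m^2 regimen.
print(round(bsa_du_bois(70, 175), 2))        # about 1.85 m^2
print(round(bsa_based_dose(70, 175, 2400)))  # about 4,400 mg total
```

Any two patients whose height and weight yield the same BSA receive the same dose under this scheme, regardless of differences in body composition or metabolism.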

To make chemotherapy dosing more accurate, MIT engineers have come up with an alternative approach that can enable the dose to be personalized to the patient. Their system measures how much drug is in the patient’s system, and these measurements are fed into a controller that can adjust the infusion rate accordingly.

This approach could help to compensate for differences in drug pharmacokinetics caused by body composition, genetic makeup, chemotherapy-induced toxicity of the organs that metabolize the drugs, interactions with other medications being taken and foods consumed, and circadian fluctuations in the enzymes responsible for breaking down chemotherapy drugs, the researchers say.

“Recognizing the advances in our understanding of how drugs are metabolized, and applying engineering tools to facilitate personalized dosing, we believe, can help transform the safety and efficacy of many drugs,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

Louis DeRidder, an MIT graduate student, is the lead author of the paper, which appears today in the journal Med.

Continuous monitoring

In this study, the researchers focused on a drug called 5-fluorouracil, which is used to treat colorectal cancers, among others. The drug is typically infused over a 46-hour period, and the dosage is determined using a formula based on the patient’s height and weight, which gives the estimated body surface area.

However, that approach doesn’t account for differences in body composition that can affect how the drug spreads through the body, or genetic variations that influence how it is metabolized. Those differences can lead to harmful side effects if too much drug is present; if not enough drug is circulating, it may not kill the tumor as expected.

“People with the same body surface area could have very different heights and weights, could have very different muscle masses or genetics, but as long as the height and the weight plugged into this equation give the same body surface area, their dose is identical,” says DeRidder, a PhD candidate in the Medical Engineering and Medical Physics program within the Harvard-MIT Program in Health Sciences and Technology.

Another factor that can alter the amount of drug in the bloodstream at any given time is circadian fluctuation of an enzyme called dihydropyrimidine dehydrogenase (DPD), which breaks down 5-fluorouracil. DPD’s expression, like that of many other enzymes in the body, is regulated on a circadian rhythm. Thus, the degradation of 5-fluorouracil by DPD is not constant but changes with the time of day. These circadian rhythms can lead to tenfold fluctuations in the amount of 5-fluorouracil in a patient’s bloodstream over the course of an infusion.
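To make the circadian effect concrete, the following toy one-compartment model (not the authors’ model; all parameter values are illustrative assumptions) infuses drug at a constant rate while the elimination rate oscillates on a 24-hour cycle, standing in for circadian variation in DPD activity:

```python
import math


def simulate_open_loop(hours=46.0, dt=0.01, infusion_rate=100.0,
                       volume=40.0, k_mean=0.5, k_amplitude=0.4):
    """Forward-Euler integration of dC/dt = R/V - k(t)*C, where the
    elimination rate k(t) varies sinusoidally with a 24-hour period."""
    t, c, concentrations = 0.0, 0.0, []
    while t <= hours:
        k = k_mean * (1.0 + k_amplitude * math.sin(2.0 * math.pi * t / 24.0))
        c += dt * (infusion_rate / volume - k * c)
        concentrations.append(c)
        t += dt
    return concentrations


concs = simulate_open_loop()
late = concs[len(concs) // 2:]  # ignore the initial ramp-up
print(f"late-infusion concentration swings between {min(late):.1f} and {max(late):.1f}")
```

Even with a fixed infusion rate, the simulated concentration drifts up and down as the enzyme activity cycles, which is the kind of variability a fixed BSA-based dose cannot correct for.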

“Using body surface area to calculate a chemotherapy dose, we know that two people can have profoundly different toxicity from 5-fluorouracil chemotherapy. Looking at one patient, they can have cycles of treatment with minimal toxicity and then have a cycle with miserable toxicity. Something changed in how that patient metabolized chemo from one cycle to the next. Our antiquated dosing fails to capture that change, and patients suffer as a result,” says Douglas Rubinson, a clinical oncologist at Dana-Farber Cancer Institute and an author of the paper.

One way to try to counteract the variability in chemotherapy pharmacokinetics is a strategy called therapeutic drug monitoring, in which the patient gives a blood sample at the end of one treatment cycle. After this sample is analyzed for the drug concentration, the dosage can be adjusted, if needed, at the beginning of the next cycle (usually two weeks later for 5-fluorouracil). This approach has been shown to result in better outcomes for patients, but it is not widely used for chemotherapies such as 5-fluorouracil.

The MIT researchers wanted to develop a similar type of monitoring, but in a manner that is automated and enables real-time drug personalization, which could result in better outcomes for patients. In their “closed-loop” system, drug concentrations can be continually monitored, and that information is used to automatically adjust the infusion rate of the chemotherapy drug and keep the dose within the target range. Such a closed-loop system enables personalization of the drug dose in a manner that considers circadian rhythm changes in the levels of drug-metabolizing enzymes, as well as any changes in the patient’s pharmacokinetics since their last treatment, such as chemotherapy-induced toxicity of the organs that metabolize the drugs.

The new system they designed, known as CLAUDIA (Closed-Loop AUtomated Drug Infusion regulAtor), makes use of commercially available equipment for each step. Blood samples are taken every five minutes and rapidly prepared for analysis. The concentration of 5-fluorouracil in the blood is measured and compared to the target range. The difference between the target and measured concentration is input to a control algorithm, which then adjusts the infusion rate if necessary, to keep the dose within the range of concentrations between which the drug is effective and nontoxic.
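The article does not spell out CLAUDIA’s control algorithm, so the sketch below stands in with a simple proportional controller acting on the same kind of toy model as above: every five simulated minutes the “measured” concentration is compared with the target and the infusion rate is nudged in proportion to the error. The gains, rates, and bounds are hypothetical choices for illustration, not CLAUDIA’s actual parameters.

```python
import math


def plant_step(c, rate, t, dt=0.01, volume=40.0, k_mean=0.5, k_amplitude=0.4):
    """One Euler step of the toy model dC/dt = R/V - k(t)*C."""
    k = k_mean * (1.0 + k_amplitude * math.sin(2.0 * math.pi * t / 24.0))
    return c + dt * (rate / volume - k * c)


def closed_loop(hours=46.0, dt=0.01, sample_every=5.0 / 60.0,
                target=5.0, gain=20.0, rate=100.0, max_rate=400.0):
    """Adjust the infusion rate toward the target at each sampling time."""
    t, c, next_sample, trace = 0.0, 0.0, 0.0, []
    while t <= hours:
        if t >= next_sample:
            error = target - c  # "measured" concentration vs. target
            rate = min(max(rate + gain * error, 0.0), max_rate)
            next_sample += sample_every
        c = plant_step(c, rate, t, dt)
        trace.append(c)
        t += dt
    return trace


trace = closed_loop()
in_band = sum(1 for c in trace if 4.0 <= c <= 6.0) / len(trace)
print(f"fraction of the infusion spent within 20 percent of target: {in_band:.2f}")
```

The design point is the feedback itself: frequent measurement lets the controller compensate for slow drifts such as the circadian swing, rather than committing to a single dose at the start of the infusion.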

“What we’ve developed is a system where you can constantly measure the concentration of drug and adjust the infusion rate accordingly, to keep the drug concentration within the therapeutic window,” DeRidder says.

Rapid adjustment

In tests in animals, the researchers found that using CLAUDIA, they could keep the amount of drug circulating in the body within the target range around 45 percent of the time. Drug levels in animals that received chemotherapy without CLAUDIA remained in the target range only 13 percent of the time, on average. In this study, the researchers did not do any tests of the effectiveness of the drug levels, but keeping the concentration within the target window is believed to lead to better outcomes and less toxicity.

CLAUDIA was also able to keep the dose of 5-fluorouracil within the target range even when the researchers administered a drug that inhibits the DPD enzyme. In animals that received this inhibitor without continuous monitoring and adjustment, levels of 5-fluorouracil increased by up to eightfold.

For this demonstration, the researchers manually performed each step of the process, using off-the-shelf equipment, but they now plan to work on automating each step so that the monitoring and dose adjustment can be done without any human intervention.

To measure drug concentrations, the researchers used high-performance liquid chromatography-mass spectrometry (HPLC-MS), a technique that could be adapted to detect nearly any type of drug.

“We foresee a future where we’re able to use CLAUDIA for any drug that has the right pharmacokinetic properties and is detectable with HPLC-MS, thereby enabling the personalization of dosing for many different drugs,” DeRidder says.

The research was funded by the National Science Foundation Graduate Research Fellowship Program, a MathWorks Fellowship, MIT’s Karl van Tassel Career Development Professorship, the MIT Department of Mechanical Engineering, and the Bridge Project, a partnership between the Koch Institute for Integrative Cancer Research at MIT and the Dana-Farber/Harvard Cancer Center.

Other authors of the paper include Kyle A. Hare, Aaron Lopes, Josh Jenkins, Nina Fitzgerald, Emmeline MacPherson, Niora Fabian, Josh Morimoto, Jacqueline N. Chu, Ameya R. Kirtane, Wiam Madani, Keiko Ishida, Johannes L. P. Kuosmanen, Naomi Zecharias, Christopher M. Colangelo, Hen-Wei Huang, Makaya Chilekwa, Nikhil B. Lal, Shriya S. Srinivasan, Alison M Hayward, Brian M. Wolpin, David Trumper, Troy Quast, and Robert Langer.



de MIT News https://ift.tt/lFcAvgK

Geologists discover rocks with the oldest evidence yet of Earth’s magnetic field

Geologists at MIT and Oxford University have uncovered ancient rocks in Greenland that bear the oldest remnants of Earth’s early magnetic field.

The rocks appear to be exceptionally pristine, having preserved their properties for billions of years. The researchers determined that the rocks are about 3.7 billion years old and retain signatures of a magnetic field with a strength of at least 15 microtesla. The ancient field is similar in magnitude to the Earth’s magnetic field today.

The open-access findings, appearing today in the Journal of Geophysical Research, represent some of the earliest evidence of a magnetic field surrounding the Earth. The results potentially extend the age of the Earth’s magnetic field by hundreds of millions of years, and may shed light on the planet’s early conditions that helped life take hold.


“The magnetic field is, in theory, one of the reasons we think Earth is really unique as a habitable planet,” says Claire Nichols, a former MIT postdoc who is now an associate professor of the geology of planetary processes at Oxford University. “It’s thought our magnetic field protects us from harmful radiation from space, and also helps us to have oceans and atmospheres that can be stable for long periods of time.”

Previous studies have shown evidence for a magnetic field on Earth that is at least 3.5 billion years old. The new study extends the record of the magnetic field by another 200 million years.

“That’s important because that’s the time when we think life was emerging,” says Benjamin Weiss, the Robert R. Shrock Professor of Planetary Sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “If the Earth’s magnetic field was around a few hundred million years earlier, it could have played a critical role in making the planet habitable.”

Nichols and Weiss are co-authors of the new study, which also includes Craig Martin and Athena Eyster at MIT, Adam Maloof at Princeton University, and additional colleagues from institutions including Tufts University and the University of Colorado at Boulder.

A slow churn

Today, the Earth’s magnetic field is powered by its molten iron core, which slowly churns up electric currents in a self-generating “dynamo.” The resulting magnetic field extends out and around the planet like a protective bubble. Scientists suspect that, early in its evolution, the Earth was able to foster life, in part due to an early magnetic field that was strong enough to retain a life-sustaining atmosphere and simultaneously shield the planet from damaging solar radiation.

Exactly how early and robust this magnetic shield was is up for debate, though there has been evidence dating its existence to about 3.5 billion years ago.

“We wanted to see if we could extend this record back beyond 3.5 billion years and nail down how strong that early field was,” Nichols says.

In 2018, Nichols, then a postdoc working in Weiss’s lab, and her team set off on an expedition to the Isua Supracrustal Belt, a 20-mile stretch of exposed rock formations surrounded by towering ice sheets in the southwest of Greenland. There, scientists have discovered the oldest preserved rocks on Earth, which have been extensively studied in hopes of answering a slew of scientific questions about Earth’s ancient conditions.

For Nichols and Weiss, the objective was to find rocks that still held signatures of the Earth’s magnetic field from when they first formed. Rocks form over many millions of years, as grains of sediment and minerals accumulate and are progressively packed and buried under subsequent deposits. Any magnetic minerals in the deposits, such as iron oxides, align with the pull of the Earth’s magnetic field as they form. This collective orientation, and the imprint of the magnetic field, are preserved in the rocks.

However, this preserved magnetic record can be scrambled or completely erased if the rocks subsequently undergo extreme thermal or aqueous events, such as hydrothermal activity or plate tectonics, that pressurize and crush the deposits. Determining the age of a magnetic field recorded in ancient rocks has therefore been a highly contested area of study.

To get to rocks that were hopefully preserved and unaltered since their original deposition, the team sampled from rock formations in the Isua Supracrustal Belt, a remote location that was only accessible by helicopter.

“It’s about 150 kilometers away from the capital city, and you get helicoptered in, right up against the ice sheet,” Nichols says. “Here, you have the world’s oldest rocks essentially, surrounded by this dramatic expression of the ice age. It’s a really spectacular place.”

Dynamic history

The team returned to MIT with whole rock samples of banded iron formations — a rock type that appears as stripes of iron-rich and silica-rich rock. The iron-oxide minerals found in these rocks can act as tiny magnets that orient with any external magnetic field. Given their composition, the researchers suspect the rocks were originally formed in primordial oceans prior to the rise in atmospheric oxygen around 2.5 billion years ago.

“Back when there wasn’t oxygen in the atmosphere, iron didn’t oxidize so easily, so it was in solution in the oceans until it reached a critical concentration, when it precipitated out,” Nichols explains. “So, it’s basically a result of iron raining out of the oceans and depositing on the seafloor.”

“They’re very beautiful, weird rocks that don’t look like anything that forms on Earth today,” Weiss adds.

Previous studies had used uranium-lead dating to determine the age of the iron oxides in these rock samples. The ratio of uranium to lead (U-Pb) gives scientists an estimate of a rock’s age. This analysis found that some of the magnetized minerals were likely about 3.7 billion years old. The MIT team, in collaboration with researchers from Rensselaer Polytechnic Institute, showed in a paper published last year that the U-Pb age also dates the age of the magnetic record in these minerals.

The researchers then set out to determine whether the ancient rocks preserved a record of the magnetic field from that far back, and how strong that field might have been.

“The samples we think are best and have that very old signature, we then demagnetize in the lab, in steps. We apply a laboratory field that we know the strength of, and we remagnetize the rocks in steps, so you can compare the gradient of the demagnetization to the gradient of the lab magnetization. That gradient tells you how strong the ancient field was,” Nichols explains.
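The gradient comparison Nichols describes is the basis of Thellier-style paleointensity estimation: the ancient field strength is the known laboratory field multiplied by the slope of natural remanence lost versus laboratory remanence gained across the stepwise experiment. A minimal sketch with invented step data (these are not the study’s measurements):

```python
import numpy as np


def paleointensity_estimate(nrm_remaining, trm_gained, lab_field_ut):
    """Slope of NRM lost vs. lab-field TRM gained, scaled by the known lab field."""
    nrm_lost = nrm_remaining[0] - np.asarray(nrm_remaining)
    slope, _intercept = np.polyfit(np.asarray(trm_gained), nrm_lost, 1)
    return abs(slope) * lab_field_ut


# Synthetic stepwise demagnetization / remagnetization results (arbitrary units).
nrm = [1.00, 0.85, 0.66, 0.48, 0.30, 0.12]   # natural remanence remaining after each step
trm = [0.00, 0.30, 0.68, 1.04, 1.40, 1.76]   # remanence gained in a 30-microtesla lab field
print(f"estimated ancient field: {paleointensity_estimate(nrm, trm, 30.0):.1f} microtesla")
```

With these made-up numbers the slope is about 0.5, so a 30-microtesla laboratory field would imply an ancient field of roughly 15 microtesla, the order of magnitude reported in the study.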

Through this careful process of remagnetization, the team concluded that the rocks likely harbored an ancient, 3.7-billion-year-old magnetic field, with a magnitude of at least 15 microtesla. Today, Earth’s magnetic field measures around 30 microtesla.

“It’s half the strength, but the same order of magnitude,” Nichols says. “The fact that it’s similar in strength as today’s field implies whatever is driving Earth’s magnetic field has not changed massively in power over billions of years.”

The team’s experiments also showed that the rocks retained the ancient field, despite having undergone two subsequent thermal events. Any extreme thermal event, such as a tectonic shake-up of the subsurface or hydrothermal eruptions, could potentially heat up and erase a rock’s magnetic field. But the team found that the iron in their samples likely oriented, then crystallized, 3.7 billion years ago, in some initial, extreme thermal event. Around 2.8 billion years ago, and then again at 1.5 billion years ago, the rocks may have been reheated, but not to the extreme temperatures that would have scrambled their magnetization.

“The rocks that the team has studied have experienced quite a bit during their long geological journey on our planet,” says Annique van der Boon, a planetary science researcher at the University of Oslo who was not involved in the study. “The authors have done a lot of work on constraining which geological events have affected the rocks at different times.” 

“The team have taken their time to deliver a very thorough study of these complex rocks, which do not give up their secrets easily,” says Andy Biggin, professor of geomagnetism at the University of Liverpool, who did not contribute to the study. “These new results tell us that the Earth’s magnetic field was alive and well 3.7 billion years ago. Knowing it was there and strong contributes a significant boundary constraint on the early Earth’s environment.”

The results also raise questions about how the ancient Earth could have powered such a robust magnetic field. While today’s field is powered by crystallization of the solid iron inner core, it’s thought that the inner core had not yet formed so early in the planet’s evolution.

“It seems like evidence for whatever was generating a magnetic field back then was a different power source from what we have today,” Weiss says. “And we care about Earth because there’s life here, but it’s also a touchstone for understanding other terrestrial planets. It suggests planets throughout the galaxy probably have lots of ways of powering a magnetic field, which is important for the question of habitability elsewhere.”

This research was supported, in part, by the Simons Foundation.



de MIT News https://ift.tt/bWUF0kN