Friday, June 6, 2025

How the brain distinguishes between ambiguous hypotheses

When navigating a place that we’re only somewhat familiar with, we often rely on unique landmarks to help us find our way. However, if we’re looking for an office in a brick building, and there are many brick buildings along our route, we might use a rule like looking for the second building on the street, rather than relying on the appearance of the building itself.

Until that ambiguity is resolved, we must hold in mind that there are multiple possibilities (or hypotheses) for where we are in relation to our destination. In a study of mice, MIT neuroscientists have now discovered that these hypotheses are explicitly represented in the brain by distinct neural activity patterns.

This is the first time that neural activity patterns encoding simultaneous hypotheses have been seen in the brain. The researchers found that these representations, which were observed in the brain’s retrosplenial cortex (RSC), not only encode hypotheses but could also be used by the animals to choose the correct way to go.

“As far as we know, no one has shown in a complex reasoning task that there’s an area in association cortex that holds two hypotheses in mind and then uses one of those hypotheses, once it gets more information, to actually complete the task,” says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Jakob Voigts PhD ’17, a former postdoc in Harnett’s lab and now a group leader at the Howard Hughes Medical Institute Janelia Research Campus, is the lead author of the paper, which appears today in Nature Neuroscience.

Ambiguous landmarks

The RSC receives input from the visual cortex, the hippocampal formation, and the anterior thalamus, which it integrates to help guide navigation.

In a 2020 paper, Harnett’s lab found that the RSC uses both visual and spatial information to encode landmarks used for navigation. In that study, the researchers showed that neurons in the RSC of mice integrate visual information about the surrounding environment with spatial feedback of the mice’s own position along a track, allowing them to learn where to find a reward based on landmarks that they saw.

In their new study, the researchers wanted to delve further into how the RSC uses spatial information and situational context to guide navigational decision-making. To do that, the researchers devised a much more complicated navigational task than is typically used in mouse studies. They set up a large, round arena with 16 small openings, or ports, along the side walls. One of these openings would give the mice a reward when they stuck their nose through it. In the first set of experiments, the researchers trained the mice to go to different reward ports indicated by dots of light on the floor that were only visible when the mice got close to them.

Once the mice learned to perform this relatively simple task, the researchers added a second dot. The two dots were always the same distance from each other and from the center of the arena. But now the mice had to go to the port by the counterclockwise dot to get the reward. Because the dots were identical and only became visible at close distances, the mice could never see both dots at once and could not immediately determine which dot was which.

To solve this task, the mice therefore had to remember where they expected a dot to appear, integrating their own body position, the direction they were heading, and the path they took to figure out which landmark was which. By measuring RSC activity as the mice approached the ambiguous landmarks, the researchers could determine whether the RSC encodes hypotheses about spatial location. The task was carefully designed to require the mice to use the visual landmarks to obtain rewards, rather than other strategies such as odor cues or dead reckoning.

“What is important about the behavior in this case is that mice need to remember something and then use that to interpret future input,” says Voigts, who worked on this study while a postdoc in Harnett’s lab. “It’s not just remembering something, but remembering it in such a way that you can act on it.”

The researchers found that as the mice accumulated information about which dot might be which, populations of RSC neurons displayed distinct activity patterns while the information remained incomplete. Each of these patterns appeared to correspond to a hypothesis about where the mouse thought it was with respect to the reward.

When the mice got close enough to figure out which dot indicated the reward port, these patterns collapsed into the one representing the correct hypothesis. The findings suggest that these patterns do not just passively store hypotheses; they can also be used to compute how to get to the correct location, the researchers say.

“We show that RSC has the required information for using this short-term memory to distinguish the ambiguous landmarks. And we show that this type of hypothesis is encoded and processed in a way that allows the RSC to use it to solve the computation,” Voigts says.
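The hold-then-collapse dynamic described here can be caricatured as a Bayesian update over two candidate locations: both hypotheses stay roughly balanced while the evidence is ambiguous, then the posterior collapses onto one once a disambiguating observation arrives. This toy sketch is purely illustrative; it is not the neural model or the analysis used in the study.

```python
import numpy as np

def update(posterior, likelihoods):
    """One Bayesian update step: weight each hypothesis by the evidence, renormalize."""
    posterior = posterior * likelihoods
    return posterior / posterior.sum()

# Two hypotheses about which dot marks the reward port, initially equally likely
posterior = np.array([0.5, 0.5])

# Two ambiguous observations (near-equal likelihoods keep both hypotheses alive),
# then one disambiguating observation that collapses the posterior
for lik in [np.array([0.55, 0.45]),
            np.array([0.50, 0.50]),
            np.array([0.95, 0.05])]:
    posterior = update(posterior, lik)

print(posterior)  # hypothesis 1 now dominates
```

After the first two observations the posterior is still close to 50/50, mirroring the coexisting activity patterns; the final observation drives it above 0.9 for one hypothesis, mirroring the collapse the researchers observed.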

Interconnected neurons

When analyzing their initial results, Harnett and Voigts consulted with MIT Professor Ila Fiete, who had run a study about 10 years ago using an artificial neural network to perform a similar navigation task.

That study, previously published on bioRxiv, showed that the neural network displayed activity patterns that were conceptually similar to those seen in the animal studies run by Harnett’s lab. The neurons of the artificial neural network ended up forming highly interconnected low-dimensional networks, like the neurons of the RSC.

“That interconnectivity seems, in ways that we still don’t understand, to be key to how these dynamics emerge and how they’re controlled. And it’s a key feature of how the RSC holds these two hypotheses in mind at the same time,” Harnett says.

In his lab at Janelia, Voigts now plans to investigate how other brain areas involved in navigation, such as the prefrontal cortex, are engaged as mice explore and forage in a more naturalistic way, without being trained on a specific task.

“We’re looking into whether there are general principles by which tasks are learned,” Voigts says. “We have a lot of knowledge in neuroscience about how brains operate once the animal has learned a task, but in comparison we know extremely little about how mice learn tasks or what they choose to learn when given freedom to behave naturally.”

The research was funded, in part, by the National Institutes of Health, a Simons Center for the Social Brain at MIT postdoctoral fellowship, the National Institute of General Medical Sciences, and the Center for Brains, Minds, and Machines at MIT, funded by the National Science Foundation.



from MIT News https://ift.tt/QkGW5Mb

Thursday, June 5, 2025

Animation technique simulates the motion of squishy objects

Animators could create more realistic bouncy, stretchy, and squishy characters for movies and video games thanks to a new simulation method developed by researchers at MIT.

Their approach allows animators to simulate rubbery and elastic materials in a way that preserves the physical properties of the material and avoids pitfalls like instability.

The technique simulates elastic objects for animation and other applications with improved reliability over existing methods, many of which produce elastic animations that become erratic or sluggish, or can even break down entirely.

To achieve this improvement, the MIT researchers uncovered a hidden mathematical structure in equations that capture how elastic materials deform on a computer. By leveraging this property, known as convexity, they designed a method that consistently produces accurate, physically faithful simulations.

Wiggly gummy bears

“The way animations look often depends on how accurately we simulate the physics of the problem,” says Leticia Mattos Da Silva, an MIT graduate student and lead author of a paper on this research. “Our method aims to stay true to physical laws while giving more control and stability to animation artists.”

Beyond 3D animation, the researchers also see potential future uses in the design of real elastic objects, such as flexible shoes, garments, or toys. The method could be extended to help engineers explore how stretchy objects will perform before they are built.

She is joined on the paper by Silvia Sellán, an assistant professor of computer science at Columbia University; Natalia Pacheco-Tallaj, an MIT graduate student; and senior author Justin Solomon, an associate professor in the MIT Department of Electrical Engineering and Computer Science and leader of the Geometric Data Processing Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the SIGGRAPH conference.

Truthful to physics

If you drop a rubber ball on a wooden floor, it bounces back up. Viewers expect to see the same behavior in an animated world, but recreating such dynamics convincingly can be difficult. Many existing techniques simulate elastic objects using fast solvers that trade physical realism for speed, which can result in excessive energy loss or even simulation failure.

More accurate approaches, including a class of techniques called variational integrators, preserve the physical properties of the object, such as its total energy or momentum, and, in this way, mimic real-world behavior more closely. But these methods are often unreliable because they depend on complex equations that are hard to solve efficiently.

The MIT researchers tackled this problem by rewriting the equations of variational integrators to reveal a hidden convex structure. They broke the deformation of elastic materials into a stretch component and a rotation component, and found that the stretch portion forms a convex problem that is well-suited for stable optimization algorithms.

“If you just look at the original formulation, it seems fully non-convex. But because we can rewrite it so that it is convex in at least some of its variables, we can inherit some advantages of convex optimization algorithms,” she says.

These convex optimization algorithms, when applied under the right conditions, come with guarantees of convergence, meaning they are more likely to find the correct answer to the problem. This generates more stable simulations over time, avoiding issues like a bouncing rubber ball losing too much energy or exploding mid-animation.
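The stretch/rotation split the researchers describe corresponds to the classical polar decomposition of the deformation gradient, F = R S, where R is a rotation and S is a symmetric positive-definite stretch. Below is a minimal NumPy sketch of that split; the function name is my own, and the paper's specific energy, discretization, and solver are not reproduced here.

```python
import numpy as np

def polar_decompose(F):
    """Split a deformation gradient F into rotation R and stretch S, with F = R @ S.

    Energies expressed in the stretch factor S (e.g. ||S - I||^2) are convex in S,
    which is the kind of structure convex optimization algorithms can exploit.
    """
    U, sigma, Vt = np.linalg.svd(F)
    R = U @ Vt                       # closest rotation to F (proper when det(F) > 0)
    S = Vt.T @ np.diag(sigma) @ Vt   # symmetric positive-definite stretch
    return R, S

# A small shear-plus-rotation deformation of a 2D element (det(F) > 0)
F = np.array([[1.2, 0.4],
              [0.1, 0.9]])
R, S = polar_decompose(F)
```

Isolating S is what makes the stretch portion amenable to stable optimization: the non-convexity of the original formulation is confined to the rotation factor.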

One of the biggest challenges the researchers faced was reinterpreting the formulation to extract that hidden convexity. Prior work had explored hidden convexity in static problems, but it was not clear whether the structure would carry over to dynamic problems like simulating elastic objects in motion, Mattos Da Silva says.

Stability and efficiency

In experiments, their solver was able to simulate a wide range of elastic behavior, from bouncing shapes to squishy characters, with preservation of important physical properties and stability over long periods of time. Other simulation methods quickly ran into trouble: Some became unstable, causing erratic behavior, while others showed visible damping.


“Because our method demonstrates more stability, it can give animators more reliability and confidence when simulating anything elastic, whether it’s something from the real world or even something completely imaginary,” she says.

While the solver is not as fast as some simulation tools that prioritize speed over accuracy, it avoids many of the trade-offs those methods make. Compared to other physics-based approaches, it also avoids the need for complex, nonlinear solvers that can be sensitive and prone to failure.

In the future, the researchers want to explore techniques to further reduce computational cost. In addition, they want to explore applications of this technique in fabrication and engineering, where reliable simulations of elastic materials could support the design of real-world objects, like garments and toys.

“We were able to revive an old class of integrators in our work. My guess is there are other examples where researchers can revisit a problem to find a hidden convexity structure that could offer a lot of advantages,” she says.

This research is funded, in part, by a MathWorks Engineering Fellowship, the Army Research Office, the National Science Foundation, the CSAIL Future of Data Program, the MIT-IBM Watson AI Laboratory, the Wistron Corporation, and the Toyota-CSAIL Joint Research Center.




Former MIT researchers advance a new model for innovation

Academic research groups and startups are essential drivers of scientific progress. But some projects, like the Hubble Space Telescope or the Human Genome Project, are too big for any one academic lab or loose consortium. They’re also not immediately profitable enough for industry to take on.

That’s the gap researchers at MIT were trying to fill when they created the concept of focused research organizations, or FROs. They describe a FRO as a new type of entity, often philanthropically funded, that undertakes large research efforts using tightly coordinated teams to create a public good that accelerates scientific progress.

The original idea for focused research organizations came out of talks among researchers, most of whom were working to map the brain in MIT Professor Ed Boyden’s lab. After they began publishing their ideas, however, the researchers realized FROs could be a powerful tool to unlock scientific advances across many other applications.

“We were quite pleasantly surprised by the range of fields where we see FRO-shaped problems,” says Adam Marblestone, a former MIT research scientist who co-founded the nonprofit Convergent Research to help launch FROs in 2021. “Convergent has FRO proposals from climate, materials science, chemistry, biology — we’ve even launched a FRO on software for math. You wouldn’t expect math to be something with a large-scale technological research bottleneck, but it turns out even there, we found a software engineering bottleneck that needed to be solved.”

Marblestone helped formulate the idea for focused research organizations at MIT with a group including Andrew Payne SM ’17, PhD ’21 and Sam Rodriques PhD ’19, who were PhD students in Boyden’s lab at the time. Since then, the FRO concept has caught on. Convergent has helped attract philanthropic funding for FROs working to decode the immune system, identify the unintended targets of approved drugs, and understand the impacts of carbon dioxide removal in our oceans.

In total, Convergent has supported the creation of 10 FROs since its founding in 2021. Many of those groups have already released important tools for better understanding our world — and their leaders believe the best is yet to come.

“We’re starting to see these first open-source tools released in important areas,” Marblestone says. “We’re seeing the first concrete evidence that FROs are effective, because no other entity could have released these tools, and I think 2025 is going to be a significant year in terms of our newer FROs putting out new datasets and tools.”

A new model

Marblestone joined Boyden’s lab in 2014 as a research scientist after completing his PhD at Harvard University. He also worked in a new position at the MIT Media Lab that Boyden helped create, director of scientific architecting, through which he tried to organize individual research efforts into larger projects. His own research focused on overcoming the challenges of measuring brain activity across large scales.

Marblestone discussed this and other large-scale neuroscience problems with Payne and Rodriques, and the researchers began thinking about gaps in scientific funding more broadly.

“The combination of myself, Sam, Andrew, Ed, and others’ experiences trying to start various large brain-mapping projects convinced us of the gap in support for medium-sized science and engineering teams with startup-inspired structures, built for the nonprofit purpose of building scientific infrastructure,” Marblestone says.

Through MIT, the researchers also connected with Tom Kalil, who was at the time working as the U.S. deputy director for technology and innovation. Rodriques wrote about the concept of a focused research organization as the last chapter of his PhD thesis in 2019.

“Ed always encouraged us to dream very, very big,” Rodriques says. “We were always trying to think about the hardest problems in biology and how to tackle them. My thesis basically ended with me explaining why we needed a new structure that is like a company, but nonprofit and dedicated to science.”

As part of a fellowship with the Federation of American Scientists in 2020, and working with Kalil, Marblestone interviewed scientists in dozens of fields outside of neuroscience and learned that the funding gap existed across disciplines.

When Rodriques and Marblestone published an essay about their findings, it helped attract philanthropic funding, which Marblestone, Kalil, and co-founder Anastasia Gamick used to launch Convergent Research, a nonprofit science studio for launching FROs.

“I see Ed’s lab as a melting pot where myself, Ed, Sam, and others worked on articulating a need and identifying specific projects that might make sense as FROs,” Marblestone says. “All those ideas later got crystallized when we created Convergent Research.”

In 2021, Convergent helped launch the first FROs: E11 Bio, which is led by Payne and committed to developing tools to understand how the brain is wired, and Cultivarium, a FRO making microorganisms more accessible for work in synthetic biology.

“From our brain mapping work we started asking the question, ‘Are there other projects that look like this that aren’t getting funded?’” Payne says. “We realized there was a gap in the research ecosystem, where some of these interdisciplinary, team science projects were being systematically overlooked. We knew a lot of amazing things would come out of getting those projects funded.”

Tools to advance science

Early progress from the first focused research organizations has strengthened Marblestone’s conviction that they’re filling a gap.

[C]Worthy is the FRO building tools to ensure safe, ocean-based carbon dioxide removal. It recently released an interactive map of alkaline activity to improve our understanding of one method for sequestering carbon known as ocean alkalinity enhancement. Last year, a math FRO, Lean, released a programming language and proof assistant that was used by Google’s DeepMind AI lab to solve problems in the International Mathematical Olympiad, for the first time achieving the level of a silver medalist in the competition. The synthetic biology FRO Cultivarium, in turn, has already released software that can predict growth conditions for microbes based on their genome.

Last year, E11 Bio previewed a new method for mapping the brain called PRISM, which it has used to map out a portion of the mouse hippocampus. It will be making the data and mapping tool available to all researchers in coming months.

“A lot of this early work has proven you can put a really talented team together and move fast to go from zero to one,” Payne says. “The next phase is proving FROs can continue to build on that momentum and develop even more datasets and tools, establish even bigger collaborations, and scale their impact.”

Payne credits Boyden for fostering an ecosystem where researchers could think about problems beyond their narrow area of study.

“Ed’s lab was a really intellectually stimulating, collaborative environment,” Payne says. “He trains his students to think about impact first and work backward. It was a bunch of people thinking about how they were going to change the world, and that made it a particularly good place to develop the FRO idea.”

Marblestone says supporting FROs has been the highest-impact thing he’s been able to do in his career. Still, he believes the success of FROs should be judged over closer to 10-year periods and will depend on not just the tools they produce but also whether they spin out companies, partner with other institutes, and create larger, long-lasting initiatives to deploy what they built.

“We were initially worried people wouldn’t be willing to join these organizations because it doesn’t offer tenure and it doesn’t offer equity in a startup,” Marblestone says. “But we’ve been able to recruit excellent leaders, scientists, engineers, and others to create highly motivated teams. That’s good evidence this is working. As we get strong projects and good results, I hope it will create this flywheel where it becomes easier to fund these ideas, more scientists will come up with them, and I think we’re starting to get there.”




Scene at MIT: Reflecting on a shared journey toward MIT PhDs

“My wife, Erin Tevonian, and I both graduated last week with our PhDs in biological engineering, a program we started together when we arrived at MIT in fall 2019. At the time, we had already been dating for three years, having met as classmates in the bioengineering program at the University of Illinois at Urbana-Champaign in 2015. We went through college together — taking classes, vacationing with friends, and biking cross-country, all side-by-side — and so we were lucky to be able to continue doing so by coming to Course 20 at MIT together. It was during our graduate studies at MIT that we got engaged (spring 2022) and married (last September), a milestone that we were able to celebrate with the many wonderful friends we found at MIT.

First-year students in the MIT Biological Engineering PhD program rotate through labs of interest before picking where they will complete their doctorates, and so we found our way to research groups by January 2020 just before the Covid-19 pandemic disrupted on-campus research and caused social distancing. Erin completed her PhD in Doug Lauffenburger and Linda Griffith’s labs, during which she used computational and experimental models to study human insulin resistance and built better liver tissue models for recapitulating disease pathology. I completed my PhD in Anders Hansen’s lab and studied how DNA folds in 3D space to drive gene regulation by building and applying a new method for mapping DNA architecture at finer resolutions than previously possible. The years flew by as we dove into our research projects, and we defended our PhDs a week apart back in April.

Erin and I were standing at Commencement with the Class of 2025 at the moment this photo was snapped, smiling as we listened to MIT’s school song. Graduation is a bittersweet milestone because it represents the end of what has been an incredible adventure for us, an adventure that made campus feel like home, so I must admit that I wasn’t sure how I would feel going into graduation week. This moment, though, felt like a fitting close for our time at MIT, and I was filled with gratitude for the many memories, opportunities, and adventures I got to share with Erin over the course of grad school. I also graduated from the MIT Sloan School of Management/School of Engineering’s Leaders for Global Operations program (hence the stole), so I was also reflecting on the many folks I’ve met across campus that make MIT the wonderful place that it is, and how special it is to be a part of a community that makes it so hard to say goodbye.”

—Viraat Goel MBA ’25, PhD ’25

Have a creative photo of campus life you'd like to share? Submit it to Scene at MIT.




Wednesday, June 4, 2025

Physicists observe a new form of magnetism for the first time

MIT physicists have demonstrated a new form of magnetism that could one day be harnessed to build faster, denser, and less power-hungry “spintronic” memory chips.

The new magnetic state is a mash-up of two main forms of magnetism: the ferromagnetism of everyday fridge magnets and compass needles, and antiferromagnetism, in which materials have magnetic properties at the microscale yet are not macroscopically magnetized.

The team has termed this new state “p-wave magnetism.”

Physicists have long observed that electrons of atoms in regular ferromagnets share the same orientation of “spin,” like so many tiny compasses pointing in the same direction. This spin alignment generates a magnetic field, which gives a ferromagnet its inherent magnetism. Electrons belonging to magnetic atoms in an antiferromagnet also have spin, although these spins alternate, with electrons orbiting neighboring atoms aligning their spins antiparallel to each other. Taken together, the equal and opposite spins cancel out, and the antiferromagnet does not exhibit macroscopic magnetization.

The team discovered the new p-wave magnetism in nickel iodide (NiI2), a two-dimensional crystalline material that they synthesized in the lab. Like a ferromagnet, the electrons exhibit a preferred spin orientation, and, like an antiferromagnet, equal populations of opposite spins result in a net cancellation. However, the spins on the nickel atoms exhibit a unique pattern, forming spiral-like configurations within the material that are mirror-images of each other, much like the left hand is the right hand’s mirror image.

What’s more, the researchers found this spiral spin configuration enabled them to carry out “spin switching”: Depending on the direction of spiraling spins in the material, they could apply a small electric field in a related direction to easily flip a left-handed spiral of spins into a right-handed spiral of spins, and vice-versa.

The ability to switch electron spins is at the heart of “spintronics,” which is a proposed alternative to conventional electronics. With this approach, data can be written in the form of an electron’s spin, rather than its electronic charge, potentially allowing orders of magnitude more data to be packed onto a device while using far less power to write and read that data.   

“We showed that this new form of magnetism can be manipulated electrically,” says Qian Song, a research scientist in MIT’s Materials Research Laboratory. “This breakthrough paves the way for a new class of ultrafast, compact, energy-efficient, and nonvolatile magnetic memory devices.”

Song and his colleagues published their results May 28 in the journal Nature. MIT co-authors include Connor Occhialini, Batyr Ilyas, Emre Ergeçen, Nuh Gedik, and Riccardo Comin, along with Rafael Fernandes at the University of Illinois Urbana-Champaign, and collaborators from multiple other institutions.

Connecting the dots

The discovery expands on work by Comin’s group in 2022. At that time, the team probed the magnetic properties of the same material, nickel iodide. At the microscopic level, nickel iodide resembles a triangular lattice of nickel and iodine atoms. Nickel is the material’s main magnetic ingredient, as the electrons on the nickel atoms exhibit spin, while those on iodine atoms do not.

In those experiments, the team observed that the spins of those nickel atoms were arranged in a spiral pattern throughout the material’s lattice, and that this pattern could spiral in two different orientations.

At the time, Comin had no idea that this unique pattern of atomic spins could enable precise switching of spins in surrounding electrons. This possibility was later raised by collaborator Rafael Fernandes, who along with other theorists was intrigued by a recently proposed idea for a new, unconventional, “p-wave” magnet, in which electrons moving along opposite directions in the material would have their spins aligned in opposite directions.

Fernandes and his colleagues recognized that if the spins of atoms in a material form the geometric spiral arrangement that Comin observed in nickel iodide, that would be a realization of a “p-wave” magnet. Then, when an electric field is applied to switch the “handedness” of the spiral, it should also switch the spin alignment of the electrons traveling along the same direction.

In other words, such a p-wave magnet could enable simple and controllable switching of electron spins, in a way that could be harnessed for spintronic applications.

“It was a completely new idea at the time, and we decided to test it experimentally because we realized nickel iodide was a good candidate to show this kind of p-wave magnet effect,” Comin says.

Spin current

For their new study, the team synthesized single-crystal flakes of nickel iodide by first depositing powders of the respective elements on a crystalline substrate, which they placed in a high-temperature furnace. The process causes the elements to settle into layers, each arranged microscopically in a triangular lattice of nickel and iodine atoms.

“What comes out of the oven are samples that are several millimeters wide and thin, like cracker bread,” Comin says. “We then exfoliate the material, peeling off even smaller flakes, each several microns wide, and a few tens of nanometers thin.”

The researchers wanted to know whether the spiral geometry of the nickel atoms’ spins would indeed force electrons traveling in opposite directions to have opposite spins, as Fernandes predicted a p-wave magnet should. To observe this, the group applied to each flake a beam of circularly polarized light — light that produces an electric field that rotates in a particular direction, for instance, either clockwise or counterclockwise.

They reasoned that if traveling electrons interacting with the spin spirals have spins aligned in the same direction, then incoming light polarized in that same direction should resonate and produce a characteristic signal. Such a signal would confirm that the traveling electrons’ spins align because of the spiral configuration, and furthermore, that the material does in fact exhibit p-wave magnetism.

And indeed, that’s what the group found. In experiments with multiple nickel iodide flakes, the researchers directly observed that the direction of the electron’s spin was correlated to the handedness of the light used to excite those electrons. Such is a telltale signature of p-wave magnetism, here observed for the first time.

Going a step further, they looked to see whether they could switch the spins of the electrons by applying an electric field, or a small amount of voltage, along different directions through the material. They found that when the direction of the electric field was in line with the direction of the spin spiral, the effect switched electrons along the route to spin in the same direction, producing a current of like-spinning electrons.

“With such a current of spin, you can do interesting things at the device level, for instance, you could flip magnetic domains that can be used for control of a magnetic bit,” Comin explains. “These spintronic effects are more efficient than conventional electronics because you’re just moving spins around, rather than moving charges. That means you’re not subject to any dissipation effects that generate heat, which is essentially the reason computers heat up.”

“We just need a small electric field to control this magnetic switching,” Song adds. “P-wave magnets could save five orders of magnitude of energy, which is huge.”

“We are excited to see these cutting-edge experiments confirm our prediction of p-wave spin polarized states,” says Libor Šmejkal, head of the Max Planck Research Group in Dresden, Germany, who is one of the authors of the theoretical work that proposed the concept of p-wave magnetism but was not involved in the new paper. “The demonstration of electrically switchable p-wave spin polarization also highlights the promising applications of unconventional magnetic states.”

The team observed p-wave magnetism in nickel iodide flakes only at ultracold temperatures of about 60 kelvins.

“That’s below liquid nitrogen, which is not necessarily practical for applications,” Comin says. “But now that we’ve realized this new state of magnetism, the next frontier is finding a material with these properties, at room temperature. Then we can apply this to a spintronic device.”

This research was supported, in part, by the National Science Foundation, the Department of Energy, and the Air Force Office of Scientific Research.



from MIT News https://ift.tt/96aBGxi

Day of Climate inspires young learners to take action

“Close your eyes and imagine we are on the same team. Same arena. Same jersey. And the game is on the line,” Jaylen Brown, the 2024 NBA Finals MVP for the Boston Celtics, said to a packed room of about 200 people at the recent Day of Climate event at the MIT Museum.

“Now think about this: We aren’t playing for ourselves; we are playing for the next generation,” Brown added, encouraging attendees to take climate action. 

The inaugural Day of Climate event brought together local learners, educators, community leaders, and the MIT community. Featuring project showcases, panels, and a speaker series, the event sparked hands-on learning and inspired climate action across all ages.

The event marked the celebration of the first year of a larger initiative by the same name. Led by the pK-12 team at MIT Open Learning, Day of Climate has brought together learners and educators by offering free, hands-on curriculum lessons and activities designed to introduce learners to climate change, teach how it shapes their lives, and consider its effects on humanity. 

Cynthia Breazeal, dean of digital learning at MIT Open Learning, notes the breadth of engagement across MIT that made the event, and the larger initiative, possible: more than 10 MIT departments, labs, centers, and initiatives contributed.

“MIT is passionate about K-12 education,” she says. “It was truly inspiring to witness how our entire community came together to demonstrate the power of collaboration and advocacy in driving meaningful change.”

From education to action 

The event kicked off with a showcase, where the Day of Climate grantees and learners invited attendees to learn about their projects and meaningfully engage with lessons and activities. Aranya Karighattam, a local high school senior, adapted the curriculum Urban Heat Islands — developed by Lelia Hampton, a PhD student in electrical engineering and computer science at MIT, and Chris Rabe, program director at the MIT Environmental Solutions Initiative — sharing how this phenomenon affects the Boston metropolitan area.

Karighattam discussed what could be done to shield local communities from urban heat islands. They suggested doubling the tree cover in areas with the lowest quartile tree coverage as one mitigating strategy, but noted that even small steps, like building a garden and raising awareness for this issue, can help.

Day of Climate echoed a consistent call to action, urging attendees to meaningfully engage in both education and action. Brown, who is an MIT Media Lab Director’s Fellow, spoke about how education and collective action will pave the way to tackle big societal challenges. “We need to invest in sustainability communities,” he said. “We need to invest in clean technology, and we need to invest in education that fosters environmental stewardship.”

Part of MIT’s broader sustainability efforts, including The Climate Project, the event reflected a commitment to building a resilient and sustainable future for all. Influenced by Climate Action Through Education (CATE), Day of Climate panelist Sophie Shen shared how climate education shaped her civic life. “Learning about climate change has inspired me to take action on a wider systemic level,” she said.

Shen, a senior at Arlington High School and local elected official, emphasized how engagement and action look different for everyone. “There are so many ways to get involved,” she said. “That could be starting a community garden — those can be great community hubs and learning spaces — or it could include advocating to your local or state governments.”

Becoming a catalyst for change 

The larger Day of Climate initiative encourages young people to understand the interdisciplinary nature of climate change and consider how the changing climate impacts many aspects of life. With curriculum available for learners from ages 4 to 18, these free activities range from Climate Change Charades — where learners act out words like “deforestation” and “recycling” — to Climate Change Happens Below Water, where learners use sensors to analyze water quality data like pH and solubility.

Many of the speakers at the event shared personal anecdotes from their childhoods about how climate education, both in and out of the classroom, changed the trajectory of their lives. Addaline Jorroff, deputy climate chief and director of mitigation and community resilience in the Office of Climate Resilience and Innovation for the Commonwealth of Massachusetts, explained how resources from MIT were instrumental in her education as a middle and high schooler, while Jaylen Brown described how his grandmother helped him see the importance of taking care of the planet when he was young, through recycling and picking up trash together.

Claudia Urrea, director of the pK-12 team at Open Learning and director of Day of Climate, emphasizes that providing opportunities at schools — through new curricula, classroom resources, and mentorship — is crucial, but that other educational opportunities matter as well: in particular, opportunities that support learners in becoming strong leaders.

“I strongly believe that this event not only inspired young learners to take meaningful action, both large and small, towards a better future, but also motivated all the stakeholders to continue to create opportunities for these young learners to emerge as future leaders,” Urrea says.

The team plans to hold the Day of Climate event annually, bringing together young people, educators, and the MIT community. Urrea hopes the event will act as a catalyst for change — for everyone.

“We hope Day of Climate serves as the opportunity for everyone to recognize the interconnectedness of our actions,” Urrea says. “Understanding this larger system is crucial for addressing current and future challenges, ultimately making the world a better place for all.”

The Day of Climate event was hosted by the Day of Climate team in collaboration with MIT Climate Action Through Education (CATE) and Earth Day Boston.



from MIT News https://ift.tt/Bfr59uT

Highlights from MIT’s first-ever Artfinity festival

When people think of MIT, they may first think of code, circuits, and cutting-edge science. But the school has a rich history of interweaving art, science, and technology in unexpected and innovative ways — and that’s never been more clear than with the Institute’s latest festival, Artfinity: A Celebration of Creativity and Community at MIT.

After an open-call invitation to the MIT community in early 2024, the inaugural Artfinity delivered an extended multi-week exploration of art and ideas, with more than 80 free performing and visual arts events between Feb. 15 and May 2, including a two-day film festival, interactive augmented reality art installations, an evening at the MIT Museum, a simulated lunar landing, and concerts by both student groups and internationally renowned musicians. 

“Artfinity was a fantastic celebration of MIT’s creative excellence, offering so many different ways to explore our thriving arts culture,” says MIT president Sally Kornbluth. “It was wonderful to see people from our community getting together with family, friends, and neighbors from Cambridge and Boston to experience the joy of music and the arts.”

Among the highlights were a talk by Tony-winning scenic designer Es Devlin, a concert by Grammy-winning rapper and visiting scholar Lupe Fiasco, and a series of events commemorating the opening of the Edward and Joyce Linde Music Building.

Devlin shared art tied to her recent spring residency at MIT as the latest honoree of the Eugene McDermott Award in the Arts. Working with MIT faculty, students, and staff, she inspired a site-specific installation called “Face to Face,” in which more than 100 community members were paired with strangers to draw each other. In recent years, Devlin has focused her work on fostering interpersonal connection, as in her London multimedia exhibition “Congregation,” in which she drew 50 people displaced from their homelands and documented their stories on video.

Fiasco’s May 2 performance centered around a new project inspired by MIT’s public art collection, developed this year in collaboration with students and faculty as part of his work as a visiting scholar and teaching the class “Rap Theory and Practice.” With the backing of MIT’s Festival Jazz Ensemble, Fiasco presented original compositions based on famed campus sculptures such as Alexander Calder’s La Grande Voile [The Big Sail] and Jaume Plensa’s Alchemist, with members of the MIT Rap Ensemble also jumping on board for many of the pieces. Several students in the ensemble also spearheaded complex multi-instrument arrangements of some of Fiasco’s most popular songs, including “The Show Goes On” and “Kick, Push.” 

Artfinity’s programming also encompassed an eclectic mix of concerts commemorating the new Linde Music Building, which features the 390-seat Tull Hall, rehearsal rooms, a recording studio, and a research lab to help support a new music technology graduate program launching this fall. Events included performances by multiple student ensembles, the Boston Symphony Chamber Players, the Boston Chamber Music Society, Sanford Biggers’ group Moonmedicin, and Grammy-winning jazz saxophonist Miguel Zenón, an assistant professor of music at MIT.

“Across campus, from our new concert hall to the Great Dome, in gallery spaces and in classrooms, our community was inspired by the visual and performing arts of the Artfinity festival,” says MIT provost Cynthia Barnhart. “Artfinity has been an incredible celebration and display of the collective creativity and innovative spirit of our community of students, faculty, and staff.” 

A handful of other Artfinity pieces also made use of MIT’s iconic architecture, including Creative Lumens and Media Lab professor Behnaz Farahi’s “Gaze to the Stars.” Taking place March 12–14 and coinciding with the total lunar eclipse, the large-scale video projections illuminated a wide range of campus buildings, transforming the exteriors of the new Linde Music Building, the MIT Chapel, the Stratton Student Center, the Zesiger Sports & Fitness Center, and even the Great Dome, which Farahi’s team affixed with images of eyes from the MIT community.

Other popular events included the MIT Museum’s After Dark series and its Argus Installation, which examined the interplay of light and hand-blown glass. A two-day Bartos Theatre film festival featured works by students, staff, and faculty, ranging from shorts to 30-minute productions, and spanning the genres of fiction, nonfiction, animation, and experimental pieces. The Welcome Center also hosted “All Our Relations,” a multimedia celebration of MIT's Indigenous community through song, dance, and story.

An Institute event, Artfinity was organized by the Office of the Arts, and led by professor of art, culture, and technology Azra Akšamija and Institute Professor of Music Marcus A. Thompson. Both professors spoke about the importance of spotlighting the arts and demonstrating a diverse breadth and depth of programming for future iterations of the event.

“People think of MIT as a place you go to only for technology. But, in reality, MIT has always attracted students with broad interests and required them to explore balance in their programs with substantive world-class offerings in the humanities, social sciences, and visual and performing arts,” says Thompson. “We are hoping this festival, Artfinity, will showcase the infinite variety and quality we have been offering and actually doing in the arts for quite some time.”

Professor of music and theater arts Jay Scheib sees the mix of art and technology as a way for students to approach research challenges from new angles. “In the arts, we tend to look at problems in a different way … framed by ideas of aesthetics, civic discourse, and experience,” says Scheib. “This approach can help students in physics, aerospace design, or artificial intelligence to ask different, yet equally useful, questions.”

A campus-wide event, Artfinity represents MIT’s largest arts festival since the Institute’s 150th anniversary in 2011. Akšamija, who is director of MIT’s Art, Culture, and Technology (ACT) program, says that the festival serves as both a student spotlight and an opportunity to interact with, and meaningfully give back to, MIT’s surrounding community in Cambridge and greater Boston.

“What became evident during the planning of this festival was the quantity and quality of art here at MIT, and how much of that work is cutting-edge,” says Akšamija. “We wanted to celebrate the creativity and joyfulness of the brilliant minds on campus [and] to bring joy and beauty to MIT and the surrounding community.”



from MIT News https://ift.tt/UmsvHVQ

Women’s track and field wins first NCAA Division III Outdoor National Championship

With a dramatic victory in the 4x400m relay, the MIT women's track and field team clinched the 2025 NCAA Division III Outdoor Track and Field National Championship on May 24 at the SPIRE Institute's outdoor track and field facility. The title was MIT's first NCAA women's outdoor track and field national championship. MIT finished first among 79 teams with 56 points; Washington University was the runner-up with 47 points, and the University of Wisconsin-La Crosse was third with 38 points.

With the victory, MIT completed a sweep of the 2024-25 NCAA Division III women's cross country, indoor track and field, and outdoor track and field titles — becoming the first women's program to sweep all three in the same year.

MIT earned 20 All-America honors across three days, including the program's first relay national championship in the 4x400m on Saturday and Alexis Boykin's eighth career national title with an NCAA record-breaking performance in the shot put on Friday.

On Thursday, Boykin opened the championships with a third-place performance in the discus as MIT quickly moved to the top of the team leaderboard on the first day of competition. Boykin and classmate Emily Ball each earned a spot on the podium. Boykin was third with a throw of 45.12m (148' 0") on her second attempt and Ball was seventh with a mark of 41.90m (137' 5") on her final throw of prelims.

In the pole vault, junior Katelyn Howard tied for fifth, clearing 3.85m (12' 7.5") to pick up three points for MIT. Howard passed on the first height and cleared both 3.75m and 3.85m, but could not clear the next height in the progression. Classmate Hailey Surace was 14th, clearing 3.75m (12' 3.5").

Junior Elaine Wang picked up a big point for MIT with an eighth-place finish in the javelin. Wang's second attempt traveled 40.44m (132' 8"), briefly moving her into sixth place; she would eventually finish eighth on the strength of that throw.

The opening day concluded with junior Kate Sanderson finishing fourth with a personal best of 34:48.601 in the 10,000m to earn a spot on the podium, as MIT continued to lead the team standings. 

On Friday, Boykin set the NCAA Division III women's shot put all-time record, winning her eighth career national championship with a throw of 16.80m (55’ 1/2”) on her final preliminary attempt. The mark broke Robyn Jarocki's NCAA Division III record, and Boykin won the event by more than 2 meters.

MIT wrapped up the day's action with the 3,000m steeplechase final, where sophomore Liv Girand finished 10th in 10:58.71 to earn the first All-America honor of her career. MIT continued to lead the team standings at the end of the second day of competition.

On Saturday, Boykin earned her third All-America honor in three events at the championships with a third-place finish in the hammer with a throw of 58.79m (192' 10”), while junior Nony Otu Ugwu took 10th with a jump of 11.91m (39' 1") on her final attempt of prelims. Otu Ugwu did not advance to the final.

MIT shined on the track to secure the title, as grad student Gillian Roeder and senior Christina Crow picked up seven big points in the 1,500m final. Roeder was fifth in 4:27.76 and Crow was one spot back, finishing sixth in 4:28.81.

Senior Marina Miller followed and picked up six more points while earning the first of two All-America honors on the day with a third-place finish and a personal record of 54.32 in the 400m.

Junior Rujuta Sane, Roeder, and junior Kate Sanderson finished 13th, 14th, and 16th, respectively, in the 5,000m. Sane had a time of 16:51.45, with Roeder finishing in 16:54.07 and Sanderson clocking in at 17:00.55.

With MIT leading second-place Washington University by seven points heading into the final event, MIT's 4x400m relay team of senior Olivia Dias, junior Shreya Kalyan, junior Krystal Montgomery, and Miller left no doubt, securing the team championship with a national title of their own as Miller moved from third to first over the final 50m to win an electric final race.



from MIT News https://ift.tt/s6tWAUj

Tuesday, June 3, 2025

Study helps pinpoint areas where microplastics will accumulate

The accumulation of microplastics in the environment, and within our bodies, is an increasingly worrisome issue. But predicting where these ubiquitous particles will accumulate, and therefore where remediation efforts should be focused, has been difficult because of the many factors that contribute to their dispersal and deposition.

New research from MIT shows that one key factor in determining where microparticles are likely to build up has to do with the presence of biofilms. These thin, sticky biopolymer layers are shed by microorganisms and can accumulate on surfaces, including along sandy riverbeds or seashores. The study found that, all other conditions being equal, microparticles are less likely to accumulate in sediment infused with biofilms, because if they land there, they are more likely to be resuspended by flowing water and carried away.

The open-access findings appear in the journal Geophysical Research Letters, in a paper by MIT postdoc Hyoungchul Park and professor of civil and environmental engineering Heidi Nepf. “Microplastics are definitely in the news a lot,” Nepf says, “and we don’t fully understand where the hotspots of accumulation are likely to be. This work gives a little bit of guidance” on some of the factors that can cause these particles, and small particles in general, to accumulate in certain locations.

Most experiments looking at the ways microparticles are transported and deposited have been conducted over bare sand, Park says. “But in nature, there are a lot of microorganisms, such as bacteria, fungi, and algae, and when they adhere to the stream bed they generate some sticky things.” These substances are known as extracellular polymeric substances, or EPS, and they “can significantly affect the channel bed characteristics,” he says. The new research focused on determining exactly how these substances affected the transport of microparticles, including microplastics.

The research involved a flow tank with a bottom lined with fine sand, and sometimes with vertical plastic tubes simulating the presence of mangrove roots. In some experiments the bed consisted of pure sand, and in others the sand was mixed with a biological material to simulate the natural biofilms found in many riverbed and seashore environments.

Water mixed with tiny plastic particles was pumped through the tank for three hours, and then the bed surface was photographed under ultraviolet light that caused the plastic particles to fluoresce, allowing a quantitative measurement of their concentration.
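In general terms, a fluorescence photograph like this can be reduced to a single concentration number by counting bright pixels. Here is a minimal sketch of that idea (generic image thresholding, not the authors' actual analysis pipeline; the image, threshold, and values are invented for illustration):

```python
import random

def fluorescent_coverage(image, threshold):
    """Fraction of pixels brighter than `threshold`: a simple proxy
    for particle surface concentration in a UV fluorescence photo."""
    pixels = [p for row in image for p in row]
    return sum(p > threshold for p in pixels) / len(pixels)

# Synthetic 100x100 "photo": dim sandy background plus one bright
# 5x5 cluster standing in for fluorescing plastic particles.
random.seed(0)
image = [[random.uniform(0.0, 0.2) for _ in range(100)] for _ in range(100)]
for r in range(10, 15):
    for c in range(40, 45):
        image[r][c] = 0.9

print(f"coverage = {fluorescent_coverage(image, threshold=0.5):.4f}")  # coverage = 0.0025
```

Comparing such a coverage fraction across bed treatments (bare sand versus biofilm-infused sand) would mirror the relative deposition comparisons described in the study.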

The results revealed two different phenomena that affected how much of the plastic accumulated on the different surfaces. Immediately around the rods that stood in for above-ground roots, turbulence prevented particle deposition. In addition, as the amount of simulated biofilms in the sediment bed increased, the accumulation of particles also decreased.

Nepf and Park concluded that the biofilms filled up the spaces between the sand grains, leaving less room for the microparticles to fit in. Because the particles penetrated less deeply between the sand grains, they were more exposed, and as a result they were much more easily resuspended and carried away by the flowing water.

“These biological films fill the pore spaces between the sediment grains,” Park explains, “and that makes the deposited particles — the particles that land on the bed — more exposed to the forces generated by the flow, which makes it easier for them to be resuspended. What we found was that in a channel with the same flow conditions and the same vegetation and the same sand bed, if one is without EPS and one is with EPS, then the one without EPS has a much higher deposition rate than the one with EPS.”

Nepf adds: “The biofilm is blocking the plastics from accumulating in the bed because they can’t go deep into the bed. They just stay right on the surface, and then they get picked up and moved elsewhere. So, if I spilled a large amount of microplastic in two rivers, and one had a sandy or gravel bottom, and one was muddier with more biofilm, I would expect more of the microplastics to be retained in the sandy or gravelly river.”

All of this is complicated by other factors, such as the turbulence of the water or the roughness of the bottom surface, she says. But it provides a “nice lens” for offering suggestions to people who are trying to study the impacts of microplastics in the field. “They’re trying to determine what kinds of habitats these plastics are in, and this gives a framework for how you might categorize those habitats,” she says. “It gives guidance to where you should go to find more plastics versus less.”

As an example, Park notes, in mangrove ecosystems microplastics may preferentially accumulate in the outer edges, which tend to be sandy, while the interior zones have sediment with more biofilm. Thus, this work suggests “the sandy outer regions may be potential hotspots for microplastic accumulation,” he says, which could make them priority zones for monitoring and protection.

“This is a highly relevant finding,” says Isabella Schalko, a research scientist at ETH Zurich, who was not associated with this research. “It suggests that restoration measures such as re-vegetation or promoting biofilm growth could help mitigate microplastic accumulation in aquatic systems. It highlights the powerful role of biological and physical features in shaping particle transport processes.”

The work was supported by Shell International Exploration and Production through the MIT Energy Initiative.



from MIT News https://ift.tt/qBx0wRo

Professor Emeritus Stanley Fischer, a towering figure in academic macroeconomics and global economic policymaking, dies at 81

Stanley Fischer PhD ’69, MIT professor emeritus of economics and a towering figure in both academic macroeconomics and global economic policymaking, passed away on May 31. He was 81. Fischer was a foundational scholar as well as a wise mentor and a central force in shaping the macroeconomic tradition of MIT’s Department of Economics that continues today.

“Together with Rudi Dornbusch and later Olivier Blanchard, Stan was one of the intellectual engines that powered MIT macroeconomics in the 1970s and beyond,” says Ricardo Caballero PhD ’88, one of Fischer’s advisees and now the Ford International Professor of Economics at MIT. “He was quietly brilliant, never flashy, and always razor-sharp. His students learned not just from his lectures or his groundbreaking work on New Keynesian models and rational expectations, but from the clarity of his mind and the gentleness of his wit. Nearly 40 years later, I can still hear him saying: ‘Isn’t it easier to do it right the first time than to explain why you didn’t?’ That line has stayed with me ever since. A simple comment from Stan during a seminar — often offered with a disarming smile — could puncture a weak argument or crystallize a central insight. He taught generations of macroeconomists to prize discipline, clarity, and policy relevance.”

Olivier Blanchard PhD ’77, the Robert M. Solow Professor of Economics Emeritus at MIT and another advisee, explains that Fischer “was one of the most popular teachers, and one of the most popular thesis advisers. We flocked to his office, and I suspect that the only time for research he had was during the night. What we admired most were his technical skills — he knew how to use stochastic calculus — and his ability to take on big questions and simplify them to the point where the answer, ex post, looked obvious. When Rudi Dornbusch joined him in 1975, macro and international quickly became the most exciting fields at MIT.” Within a decade of his joining the MIT faculty, “Stan had acquired near-guru status.”

Fischer built bridges between economic theory and the practice of economic policy. He served as chief economist of the World Bank (1988-90), first deputy managing director at the International Monetary Fund (IMF, 1994-2001), governor of the Bank of Israel (2005-13), and vice chair of the U.S. Federal Reserve (2014-17). These leadership roles gave him a rare platform to implement ideas he helped develop in the classroom, and he was widely praised for his successes in averting financial crises across several decades and continents. Yet even as he moved through the highest circles of global policymaking, he remained a teacher at heart — accessible, thoughtful, and generous with his time.

At MIT, Fischer is best remembered for inspiring generations of graduate students who moved between academics and policy just as he did. Over the course of two decades before he began his active policy role, he was primary adviser for 49 PhD students, secondary adviser to another 23, and a celebrated teacher for many more. 

Many of his students became important macroeconomic policymakers, including Ben Bernanke PhD ’79; Mario Draghi PhD ’77; Ilan Goldfajn PhD ’95; Philip Lowe PhD ’91; and Kazuo Ueda PhD ’80, who chaired the Federal Reserve Board, the European Central Bank, the Banco Central do Brazil, the Reserve Bank of Australia, and the Bank of Japan. Students Gregory Mankiw PhD ’84 and Christina Romer PhD ’85 chaired the Council of Economic Advisors; Maurice Obstfeld PhD ’79 and Kenneth Rogoff PhD ’80 were chief economist at the International Monetary Fund; and Frederic Mishkin PhD ’76 was a governor of the Federal Reserve. Another of his students, former Treasury Secretary Lawrence Summers ’75, explains that “no one had more cumulative influence on the macroeconomic policymakers of the last generation than Stanley Fischer … We all were shaped by his clarity of thought, intellectual balance, personal decency, and quality of character. In a broader sense, everyone who was involved in the macro policy enterprise was Stan Fischer’s disciple. People all over the world who never knew his name lived better, more secure, lives because of all that he did through his teaching, writing, and service.”

Fischer grew up in Northern Rhodesia (now Zambia), living behind the general store his family ran before moving to Southern Rhodesia (now Zimbabwe) at the age of 13. Inspired by the quality of writing in John Maynard Keynes’ “The General Theory of Employment, Interest, and Money,” he applied for and won a scholarship to study at the London School of Economics. He moved to MIT for his graduate studies, where his dissertation was supervised by Franklin M. Fisher. After several years on the University of Chicago faculty, he returned to MIT in 1973, where he stayed for the remainder of his academic career. He held the Elizabeth and James Killian Class of 1926 professorship from 1992 to 1995, serving as department chair in 1993–94, before being called away to the IMF.

Fischer’s intellectual journey from MIT to Chicago and back culminated in his most influential academic work. Ivan Werning, the Robert M. Solow Professor of Economics at MIT, notes, “his research was pathbreaking and paved the way to the modern approach to macroeconomics. By merging nominal rigidities associated with MIT’s Keynesian tradition with rational expectations emanating from the Chicago school, his 1977 paper on ‘Long-Term Contracts, Rational Expectations, and the Optimal Money Supply Rule’ showed how the non-neutrality of money did not require agent irrationality or confusion.” The dynamic stochastic general equilibrium models now used at every central bank to evaluate monetary policy options are direct descendants of Fischer’s thinking.

Fischer’s influence goes beyond what has become known as New Keynesian Economics. Werning continues, “Fischer’s research combined theoretical insights to very applied questions. His textbook with Blanchard was instrumental to an entire generation of macroeconomists, showing macroeconomics as a rich and evolving field, ripe with tools and great questions to study. Along with Bob Solow, Rudi Dornbusch, and others, Fischer had a huge impact within the MIT economics department and helped build its day-to-day culture, with an inquisitive, open-minded, and friendly atmosphere.”

Macroeconomics — and MIT — owe him a profound debt.

Fischer is survived by his three sons, Michael, David, and Jonathan, and nine grandchildren.



from MIT News https://ift.tt/XaU74H5

Study shows making hydrogen with soda cans and seawater is scalable and sustainable

Hydrogen has the potential to be a climate-friendly fuel since it doesn’t release carbon dioxide when used as an energy source. Currently, however, most methods for producing hydrogen involve fossil fuels, making hydrogen less of a “green” fuel over its entire life cycle.

A new process developed by MIT engineers could significantly shrink the carbon footprint associated with making hydrogen.

Last year, the team reported that they could produce hydrogen gas by combining seawater, recycled soda cans, and caffeine. The question then was whether the benchtop process could be applied at an industrial scale, and at what environmental cost.

Now, the researchers have carried out a “cradle-to-grave” life cycle assessment, taking into account every step in the process at an industrial scale. For instance, the team calculated the carbon emissions associated with acquiring and processing aluminum, reacting it with seawater to produce hydrogen, and transporting the fuel to gas stations, where drivers could tap into hydrogen tanks to power engines or fuel cell cars. They found that, from end to end, the new process could generate a fraction of the carbon emissions associated with conventional hydrogen production.

In a study appearing today in Cell Reports Sustainability, the team reports that for every kilogram of hydrogen produced, the process would generate 1.45 kilograms of carbon dioxide over its entire life cycle. In comparison, fossil-fuel-based processes emit 11 kilograms of carbon dioxide per kilogram of hydrogen generated.
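Taken at face value, those two figures imply a large relative saving; a quick back-of-the-envelope check using only the numbers reported above:

```python
# Cradle-to-grave emissions, kg CO2 per kg H2, as reported in the article.
ALUMINUM_SEAWATER = 1.45  # new aluminum-seawater process
FOSSIL_BASED = 11.0       # conventional fossil-fuel-based production

reduction = 1 - ALUMINUM_SEAWATER / FOSSIL_BASED
print(f"emissions reduction: {reduction:.0%}")  # emissions reduction: 87%
```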

The low-carbon footprint is on par with other proposed “green hydrogen” technologies, such as those powered by solar and wind energy.

“We’re in the ballpark of green hydrogen,” says lead author Aly Kombargi PhD ’25, who graduated this spring from MIT with a doctorate in mechanical engineering. “This work highlights aluminum’s potential as a clean energy source and offers a scalable pathway for low-emission hydrogen deployment in transportation and remote energy systems.”

The study’s MIT co-authors are Brooke Bao, Enoch Ellis, and professor of mechanical engineering Douglas Hart.

Gas bubble

Dropping an aluminum can in water won’t normally cause much of a chemical reaction. That’s because when aluminum is exposed to oxygen, it instantly forms a shield-like layer. Without this layer, aluminum exists in its pure form and can readily react when mixed with water. The reaction that occurs involves aluminum atoms that efficiently break up molecules of water, producing aluminum oxide and pure hydrogen. And it doesn’t take much of the metal to bubble up a significant amount of the gas.
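The underlying chemistry is the reaction 2Al + 3H2O → Al2O3 + 3H2. A back-of-the-envelope sketch using standard molar masses gives a feel for how little metal is needed; the figures below are ours, not the study’s:

```python
# Stoichiometry of the aluminum-water reaction described above:
#   2 Al + 3 H2O -> Al2O3 + 3 H2
# Standard molar masses; assumes the aluminum reacts completely.

M_AL = 26.98   # g/mol, aluminum
M_H2 = 2.016   # g/mol, hydrogen gas

def hydrogen_yield_kg(aluminum_kg: float) -> float:
    """Mass of H2 (kg) produced per `aluminum_kg` of aluminum,
    assuming 3 mol H2 for every 2 mol Al."""
    mol_al = aluminum_kg * 1000 / M_AL
    mol_h2 = mol_al * 3 / 2
    return mol_h2 * M_H2 / 1000

print(f"{hydrogen_yield_kg(1.0):.3f} kg H2 per kg Al")  # roughly 0.112
```

By this estimate, a kilogram of aluminum yields on the order of a hundred grams of hydrogen gas, which is consistent with the article’s point that a small amount of metal bubbles up a significant amount of gas.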

“One of the main benefits of using aluminum is the energy density per unit volume,” Kombargi says. “With a very small amount of aluminum fuel, you can conceivably supply much of the power for a hydrogen-fueled vehicle.”

Last year, he and Hart developed a recipe for aluminum-based hydrogen production. They found they could puncture aluminum’s natural shield by treating it with a small amount of gallium-indium, which is a rare-metal alloy that effectively scrubs aluminum into its pure form. The researchers then mixed pellets of pure aluminum with seawater and observed that the reaction produced pure hydrogen. What’s more, the salt in the water helped to precipitate gallium-indium, which the team could subsequently recover and reuse to generate more hydrogen, in a cost-saving, sustainable cycle.

“We were explaining the science of this process in conferences, and the questions we would get were, ‘How much does this cost?’ and, ‘What’s its carbon footprint?’” Kombargi says. “So we wanted to look at the process in a comprehensive way.”

A sustainable cycle

For their new study, Kombargi and his colleagues carried out a life cycle assessment to estimate the environmental impact of aluminum-based hydrogen production, at every step of the process, from sourcing the aluminum to transporting the hydrogen after production. They set out to calculate the amount of carbon associated with generating 1 kilogram of hydrogen — an amount that they chose as a practical, consumer-level illustration.

“With a hydrogen fuel cell car using 1 kilogram of hydrogen, you can go between 60 to 100 kilometers, depending on the efficiency of the fuel cell,” Kombargi notes.

They performed the analysis using Earthster — an online life cycle assessment tool that draws data from a large repository of products and processes and their associated carbon emissions. The team considered a number of scenarios for producing hydrogen with aluminum, including starting with “primary” aluminum mined from the Earth versus “secondary” aluminum recycled from soda cans and other products, and using various methods to transport the aluminum and hydrogen.

After running life cycle assessments for about a dozen scenarios, the team identified the one with the lowest carbon footprint. That scenario centers on recycled aluminum, which avoids a significant amount of the emissions associated with mining, and seawater, whose salt helps recover the gallium-indium for reuse, saving costs. They found that this scenario, from start to finish, would generate about 1.45 kilograms of carbon dioxide for every kilogram of hydrogen produced. The fuel itself, they calculated, would cost about $9 per kilogram, comparable to the price of hydrogen generated with other green technologies such as wind and solar energy.

The researchers envision that if the low-carbon process were ramped up to a commercial scale, it would look something like this: The production chain would start with scrap aluminum sourced from a recycling center. The aluminum would be shredded into pellets and treated with gallium-indium. Then, drivers could transport the pretreated pellets as aluminum “fuel,” rather than directly transporting hydrogen, which is potentially volatile. The pellets would be transported to a fuel station that ideally would be situated near a source of seawater, which could then be mixed with the aluminum, on demand, to produce hydrogen. A consumer could then directly pump the gas into a car with either an internal combustion engine or a fuel cell.

The entire process does produce an aluminum-based byproduct, boehmite, which is a mineral that is commonly used in fabricating semiconductors, electronic elements, and a number of industrial products. Kombargi says that if this byproduct were recovered after hydrogen production, it could be sold to manufacturers, further bringing down the cost of the process as a whole.

“There are a lot of things to consider,” Kombargi says. “But the process works, which is the most exciting part. And we show that it can be environmentally sustainable.”

The group is continuing to develop the process. They recently designed a small reactor, about the size of a water bottle, that takes in aluminum pellets and seawater to generate hydrogen, enough to power an electric bike for several hours. They previously demonstrated that the process can produce enough hydrogen to fuel a small car. The team is also exploring underwater applications and is designing a hydrogen reactor that would take in surrounding seawater to power a small boat or underwater vehicle.

This research was supported, in part, by the MIT Portugal Program.



from MIT News https://ift.tt/jD7h6BO

Monday, June 2, 2025

Teaching AI models what they don’t know

Artificial intelligence systems like ChatGPT provide plausible-sounding answers to any question you might ask. But they don’t always reveal the gaps in their knowledge or areas where they’re uncertain. That problem can have huge consequences as AI systems are increasingly used to do things like develop drugs, synthesize information, and drive autonomous cars.

Now, the MIT spinout Themis AI is helping quantify model uncertainty and correct outputs before they cause bigger problems. The company’s Capsa platform can work with any machine-learning model to detect and correct unreliable outputs in seconds. It works by modifying AI models to enable them to detect patterns in their data processing that indicate ambiguity, incompleteness, or bias.
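As a loose illustration of the wrapping idea — not Capsa’s actual API, which the article does not detail — disagreement across an ensemble of models is a standard stand-in for this kind of uncertainty signal:

```python
# Hypothetical sketch of "wrap a model to report uncertainty."
# NOT Themis AI's Capsa interface; ensemble disagreement is used
# here as a simple, generic uncertainty estimate.

from statistics import mean, pstdev

class UncertaintyWrapper:
    def __init__(self, models):
        self.models = models  # several independently trained predictors

    def predict(self, x):
        preds = [m(x) for m in self.models]
        # Return the averaged prediction and the spread across models:
        # large spread means the ensemble disagrees, so flag the output.
        return mean(preds), pstdev(preds)

# Toy ensemble: three "models" that agree near x=0 and diverge far from it.
ensemble = UncertaintyWrapper([
    lambda x: 2.0 * x,
    lambda x: 2.0 * x + 0.1 * x * x,
    lambda x: 2.0 * x - 0.1 * x * x,
])

y, sigma = ensemble.predict(0.5)    # low disagreement: trustworthy
y2, sigma2 = ensemble.predict(10)   # high disagreement: unreliable output
```

The wrapper leaves the underlying models untouched, which mirrors the article’s claim that the platform “can work with any machine-learning model.”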

“The idea is to take a model, wrap it in Capsa, identify the uncertainties and failure modes of the model, and then enhance the model,” says Themis AI co-founder and MIT Professor Daniela Rus, who is also the director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “We’re excited about offering a solution that can improve models and offer guarantees that the model is working correctly.”

Rus founded Themis AI in 2021 with Alexander Amini ’17, SM ’18, PhD ’22 and Elaheh Ahmadi ’20, MEng ’21, two former research affiliates in her lab. Since then, they’ve helped telecom companies with network planning and automation, helped oil and gas companies use AI to understand seismic imagery, and published papers on developing more reliable and trustworthy chatbots.

“We want to enable AI in the highest-stakes applications of every industry,” Amini says. “We’ve all seen examples of AI hallucinating or making mistakes. As AI is deployed more broadly, those mistakes could lead to devastating consequences. Our software can make these systems more transparent.”

Helping models know what they don’t know

Rus’ lab has been researching model uncertainty for years. In 2018, she received funding from Toyota to study the reliability of a machine learning-based autonomous driving solution.

“That is a safety-critical context where understanding model reliability is very important,” Rus says.

In separate work, Rus, Amini, and their collaborators built an algorithm that could detect racial and gender bias in facial recognition systems and automatically reweight the model’s training data, showing it eliminated bias. The algorithm worked by identifying the unrepresentative parts of the underlying training data and generating new, similar data samples to rebalance it.
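A simplified sketch of the reweighting idea, with inverse-frequency weights standing in for the authors’ actual algorithm (which also generated new data samples):

```python
# Hypothetical sketch of rebalancing training data: upweight samples
# from underrepresented groups in inverse proportion to their frequency,
# so every group contributes equally to the training loss.

from collections import Counter

def inverse_frequency_weights(groups):
    """One weight per sample; each group's weights sum to n/k."""
    counts = Counter(groups)
    k = len(counts)       # number of distinct groups
    n = len(groups)       # number of samples
    return [n / (k * counts[g]) for g in groups]

weights = inverse_frequency_weights(["A", "A", "A", "B"])
# Group A samples each get 4/(2*3); the lone group B sample gets 4/(2*1),
# so both groups contribute equal total weight.
```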

In 2021, the eventual co-founders showed a similar approach could be used to help pharmaceutical companies use AI models to predict the properties of drug candidates. They founded Themis AI later that year.

“Guiding drug discovery could potentially save a lot of money,” Rus says. “That was the use case that made us realize how powerful this tool could be.”

Today Themis is working with companies in a wide variety of industries, and many of those companies are building large language models. By using Capsa, the models are able to quantify their own uncertainty for each output.

“Many companies are interested in using LLMs that are based on their data, but they’re concerned about reliability,” observes Stewart Jamieson SM ’20, PhD ’24, Themis AI's head of technology. “We help LLMs self-report their confidence and uncertainty, which enables more reliable question answering and flagging unreliable outputs.”

Themis AI is also in discussions with semiconductor companies building AI solutions on their chips that can work outside of cloud environments.

“Normally these smaller models that work on phones or embedded systems aren’t very accurate compared to what you could run on a server, but we can get the best of both worlds: low latency, efficient edge computing without sacrificing quality,” Jamieson explains. “We see a future where edge devices do most of the work, but whenever they’re unsure of their output, they can forward those tasks to a central server.”
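The edge/server split Jamieson describes can be sketched with a confidence threshold; the models, scores, and threshold below are invented for illustration, not Themis AI’s system:

```python
# Toy sketch of uncertainty-gated deferral: the small edge model answers
# when it is confident, and otherwise forwards the task to a larger
# server-side model. All names and scores here are hypothetical.

def edge_model(query):
    # Pretend the small on-device model is only confident on short queries.
    uncertainty = 0.05 if len(query) < 20 else 0.9
    return f"edge-answer:{query}", uncertainty

def server_model(query):
    return f"server-answer:{query}"

def answer(query, max_uncertainty=0.2):
    prediction, uncertainty = edge_model(query)
    if uncertainty <= max_uncertainty:
        return prediction, "edge"         # confident: answer locally
    return server_model(query), "server"  # unsure: defer to the server

print(answer("short query"))                        # handled on-device
print(answer("a much longer and harder question"))  # deferred to server
```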

Pharmaceutical companies can also use Capsa to improve AI models being used to identify drug candidates and predict their performance in clinical trials.

“The predictions and outputs of these models are very complex and hard to interpret — experts spend a lot of time and effort trying to make sense of them,” Amini remarks. “Capsa can give insights right out of the gate to understand if the predictions are backed by evidence in the training set or are just speculation without a lot of grounding. That can accelerate the identification of the strongest predictions, and we think that has a huge potential for societal good.”

Research for impact

Themis AI’s team believes the company is well-positioned to improve the cutting edge of constantly evolving AI technology. For instance, the company is exploring Capsa’s ability to improve accuracy in an AI technique known as chain-of-thought reasoning, in which LLMs explain the steps they take to get to an answer.

“We’ve seen signs Capsa could help guide those reasoning processes to identify the highest-confidence chains of reasoning,” Amini says. “We think that has huge implications in terms of improving the LLM experience, reducing latencies, and reducing computation requirements. It’s an extremely high-impact opportunity for us.”

For Rus, who has co-founded several companies since coming to MIT, Themis AI is an opportunity to ensure her MIT research has impact.

“My students and I have become increasingly passionate about going the extra step to make our work relevant for the world,” Rus says. “AI has tremendous potential to transform industries, but AI also raises concerns. What excites me is the opportunity to help develop technical solutions that address these challenges and also build trust and understanding between people and the technologies that are becoming part of their daily lives.”



from MIT News https://ift.tt/kiGB4Cu

Teaching AI models the broad strokes to sketch more like humans do

When you’re trying to communicate or understand ideas, words don’t always do the trick. Sometimes the more efficient approach is to do a simple sketch of that concept — for example, diagramming a circuit might help make sense of how the system works.

But what if artificial intelligence could help us explore these visualizations? While these systems are typically proficient at creating realistic paintings and cartoonish drawings, many models fail to capture the essence of sketching: its stroke-by-stroke, iterative process, which helps humans brainstorm and edit how they want to represent their ideas.

A new drawing system from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stanford University can sketch more like we do. Their method, called “SketchAgent,” uses a multimodal language model — AI systems that train on text and images, like Anthropic’s Claude 3.5 Sonnet — to turn natural language prompts into sketches in a few seconds. For example, it can doodle a house either on its own or through collaboration, drawing with a human or incorporating text-based input to sketch each part separately.

The researchers showed that SketchAgent can create abstract drawings of diverse concepts, like a robot, butterfly, DNA helix, flowchart, and even the Sydney Opera House. One day, the tool could be expanded into an interactive art game that helps teachers and researchers diagram complex concepts or gives users a quick drawing lesson.

CSAIL postdoc Yael Vinker, who is the lead author of a paper introducing SketchAgent, notes that the system introduces a more natural way for humans to communicate with AI.

“Not everyone is aware of how much they draw in their daily life. We may draw our thoughts or workshop ideas with sketches,” she says. “Our tool aims to emulate that process, making multimodal language models more useful in helping us visually express ideas.”

SketchAgent teaches these models to draw stroke-by-stroke without training on any data — instead, the researchers developed a “sketching language” in which a sketch is translated into a numbered sequence of strokes on a grid. The system was given an example of how things like a house would be drawn, with each stroke labeled according to what it represented — such as the seventh stroke being a rectangle labeled as a “front door” — to help the model generalize to new concepts.
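That stroke-sequence representation might look something like the following; the grid coordinates and labels are invented for illustration, and the paper’s actual format may differ:

```python
# Hypothetical rendering of a "sketching language": a sketch as a
# numbered sequence of labeled strokes, each stroke a list of
# (column, row) cells on a coarse grid.

house = [
    # (stroke number, label, grid points the stroke passes through)
    (1, "base",       [(2, 8), (8, 8)]),
    (2, "left wall",  [(2, 8), (2, 4)]),
    (3, "right wall", [(8, 8), (8, 4)]),
    (4, "roof",       [(2, 4), (5, 1), (8, 4)]),
    (5, "front door", [(4, 8), (4, 6), (6, 6), (6, 8)]),
]

def describe(sketch):
    """Serialize the sketch as text a language model could consume."""
    return "\n".join(
        f"stroke {n} ({label}): " + " -> ".join(f"({c},{r})" for c, r in pts)
        for n, label, pts in sketch
    )

print(describe(house))
```

Because each stroke is numbered and labeled, a model can be prompted with a few such examples and asked to emit the same format for a new concept, which is the generalization step the article describes.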

Vinker wrote the paper alongside three CSAIL affiliates — postdoc Tamar Rott Shaham, undergraduate researcher Alex Zhao, and MIT Professor Antonio Torralba — as well as Stanford University Research Fellow Kristine Zheng and Assistant Professor Judith Ellen Fan. They’ll present their work at the 2025 Conference on Computer Vision and Pattern Recognition (CVPR) this month.

Assessing AI’s sketching abilities

While text-to-image models such as DALL-E 3 can create intriguing drawings, they lack a crucial component of sketching: the spontaneous, creative process where each stroke can impact the overall design. On the other hand, SketchAgent’s drawings are modeled as a sequence of strokes, appearing more natural and fluid, like human sketches.

Prior systems have mimicked this process as well, but they were trained on human-drawn datasets, which are often limited in scale and diversity. SketchAgent instead uses pre-trained language models, which are knowledgeable about many concepts but don’t know how to sketch. Once the researchers taught language models this process, SketchAgent began to sketch diverse concepts it hadn’t explicitly trained on.

Still, Vinker and her colleagues wanted to see if SketchAgent was actively working with humans on the sketching process, or if it was working independently of its drawing partner. The team tested their system in collaboration mode, where a human and a language model work toward drawing a particular concept in tandem. Removing SketchAgent’s contributions revealed that their tool’s strokes were essential to the final drawing. In a drawing of a sailboat, for instance, removing the artificial strokes representing a mast made the overall sketch unrecognizable.

In another experiment, CSAIL and Stanford researchers plugged different multimodal language models into SketchAgent to see which could create the most recognizable sketches. Their default backbone model, Claude 3.5 Sonnet, generated the most human-like vector graphics (essentially text-based files that can be converted into high-resolution images). It outperformed models like GPT-4o and Claude 3 Opus.

“The fact that Claude 3.5 Sonnet outperformed other models like GPT-4o and Claude 3 Opus suggests that this model processes and generates visual-related information differently,” says co-author Tamar Rott Shaham.

She adds that SketchAgent could become a helpful interface for collaborating with AI models beyond standard, text-based communication. “As models advance in understanding and generating other modalities, like sketches, they open up new ways for users to express ideas and receive responses that feel more intuitive and human-like,” says Shaham. “This could significantly enrich interactions, making AI more accessible and versatile.”

While SketchAgent’s drawing prowess is promising, it can’t make professional sketches yet. It renders simple representations of concepts using stick figures and doodles, but struggles to doodle things like logos, sentences, complex creatures like unicorns and cows, and specific human figures.

At times, the model also misunderstood users’ intentions in collaborative drawings, as when SketchAgent drew a bunny with two heads. According to Vinker, this may be because the model breaks each task down into smaller steps (known as chain-of-thought reasoning). When working with humans, the model creates a drawing plan and can misinterpret which part of that plan a human is contributing to. The researchers could possibly refine these drawing skills by training on synthetic data from diffusion models.

Additionally, SketchAgent often requires a few rounds of prompting to generate human-like doodles. In the future, the team aims to make it easier to interact and sketch with multimodal language models, including refining their interface. 

Still, the tool suggests AI could draw diverse concepts the way humans do, with step-by-step human-AI collaboration that results in more aligned final designs.

This work was supported, in part, by the U.S. National Science Foundation, a Hoffman-Yee Grant from the Stanford Institute for Human-Centered AI, the Hyundai Motor Co., the U.S. Army Research Laboratory, the Zuckerman STEM Leadership Program, and a Viterbi Fellowship.



from MIT News https://ift.tt/8r1K3eI

Eight with MIT ties win 2025 Hertz Foundation Fellowships

The Hertz Foundation announced that it has awarded fellowships to eight MIT affiliates. The prestigious award provides each recipient with five years of doctoral-level research funding (up to a total of $250,000), which gives them an unusual measure of independence in their graduate work to pursue groundbreaking research.

The MIT-affiliated awardees are Matthew Caren ’25; April Qiu Cheng ’24; Arav Karighattam, who begins his PhD at the Institute this fall; Benjamin Lou ’25; Isabelle A. Quaye ’22, MNG ’24; Albert Qin ’24; Ananthan Sadagopan ’24; and Gianfranco (Franco) Yee ’24.

“Hertz Fellows embody the promise of future scientific breakthroughs, major engineering achievements and thought leadership that is vital to our future,” said Stephen Fantone, chair of the Hertz Foundation board of directors and president and CEO of Optikos Corp., in the announcement. “The newest recipients will direct research teams, serve in leadership positions in our government and take the helm of major corporations and startups that impact our communities and the world.”

In addition to funding, fellows receive lifelong access to Hertz Foundation programs, including events, mentoring, and networking. They join the ranks of more than 1,300 Hertz Fellows named since the fellowship was established in 1963, who are leaders and scholars in a range of technology, science, and engineering fields. Former fellows have contributed to breakthroughs in such areas as advanced medical therapies, computational systems used by billions of people daily, global defense networks, and the recent launch of the James Webb Space Telescope.

This year’s MIT recipients are among a total of 19 Hertz Foundation Fellows selected from across the United States.

Matthew Caren ’25 studied electrical engineering and computer science, mathematics, and music at MIT. His research focuses on computational models of how people use their voices to communicate sound at the Computer Science and Artificial Intelligence Lab (CSAIL) and interpretable real-time machine listening systems at the MIT Music Technology Lab. He spent several summers developing large language model systems and bioinformatics algorithms at Apple and a year researching expressive digital instruments at Stanford University’s Center for Computer Research in Music and Acoustics. He chaired the MIT Schwarzman College of Computing Undergraduate Advisory Group, where he led undergraduate committees on interdisciplinary computing and AI, and was a founding member of the MIT Voxel Lab for music and arts technology. In addition, Caren has invented novel instruments used by Grammy-winning musicians on international stages. He plans to pursue a doctorate at Stanford.

April Qiu Cheng ’24 majored in physics at MIT, graduating in just three years. Their research focused on black hole phenomenology, gravitational-wave inference, and the use of fast radio bursts as a statistical probe of large-scale structure. They received numerous awards, including an MIT Outstanding Undergraduate Research Award, the MIT Barrett Prize, the Astronaut Scholarship, and the Princeton President’s Fellowship. Cheng contributed to the physics department community by serving as vice president of advocacy for Undergraduate Women in Physics and as the undergraduate representative on the Physics Values Committee. In addition, they have participated in various science outreach programs for middle and high school students. Since graduating, they have been a Fulbright Fellow at the Max Planck Institute for Gravitational Physics, where they have been studying gravitational-wave cosmology. Cheng will begin a doctorate in astrophysics at Princeton in the fall.

Arav Karighattam was home schooled, and by age 14 had completed most of the undergraduate and graduate courses in physics and mathematics at the University of California at Davis. He graduated from Harvard University in 2024 with a bachelor’s degree in mathematics and will attend MIT to pursue a PhD, also in mathematics. Karighattam is fascinated by algebraic number theory and arithmetic geometry and seeks to understand the mysteries underlying the structure of solutions to Diophantine equations. He also wants to apply his mathematical skills to mitigating climate change and biodiversity loss. At a recent conference at MIT titled “Mordell’s Conjecture 100 Years Later,” Karighattam distinguished himself as the youngest speaker to present a paper among graduate students, postdocs, and faculty members.

Benjamin Lou ’25 graduated from MIT in May with a BS in physics and is interested in finding connections between fundamental truths of the universe. One of his research projects applies symplectic techniques to understand the nature of precision measurements using quantum states of light. Another is about geometrically unifying several theorems in quantum mechanics using the Prüfer transformation. For his work, Lou was honored with the Barry Goldwater Scholarship. Lou will pursue his doctorate at MIT, where he plans to work on unifying quantum mechanics and gravity, with an eye toward uncovering experimentally testable predictions. Living with the debilitating disease spinal muscular atrophy, which causes severe, full-body weakness and makes scratchwork unfeasible, Lou has developed a unique learning style emphasizing mental visualization. He also co-founded and helped lead the MIT Assistive Technology Club, dedicated to empowering those with disabilities using creative technologies. He is working on a robotic self-feeding device for those who cannot eat independently.

Isabelle A. Quaye ’22, MNG ’24 studied electrical engineering and computer science as an undergraduate at MIT, with a minor in economics. She was awarded competitive fellowships and scholarships from Hyundai, Intel, D. E. Shaw, and Palantir, and received the Albert G. Hill Prize, given to juniors and seniors who have maintained high academic standards and have made continued contributions to improving the quality of life for underrepresented students at MIT. While obtaining her master’s degree at MIT, she focused on theoretical computer science and systems. She is currently a software engineer at Apple, where she continues to develop frameworks that harness intelligence from data to improve systems and processes. Quaye also believes in contributing to the advancement of science and technology through teaching and has volunteered in summer programs to teach programming and informatics to high school students in the United States and Ghana.

Albert Qin ’24 majored in physics and mathematics at MIT. He also pursued an interest in biology, researching single-molecule approaches to study transcription factor diffusion in living cells and studying the cell circuits that control animal development. His dual interests have motivated him to find common ground between physics and biological fields. Inspired by his MIT undergraduate advisors, he hopes to become a teacher and mentor for aspiring young scientists. Qin is currently pursuing a PhD at Princeton University, addressing questions about the behavior of neural networks — both artificial and biological — using a variety of approaches and ideas from physics and neuroscience.

Ananthan Sadagopan ’24 is currently pursuing a doctorate in biological and biomedical science at Harvard University, focusing on chemical biology and the development of new therapeutic strategies for intractable diseases. He earned his BS at MIT in chemistry and biology in three years and led projects characterizing somatic perturbations of X chromosome inactivation in cancer, developing machine learning tools for cancer dependency prediction, using small molecules for targeted protein relocalization and creating a generalizable strategy to drug the most mutated gene in cancer (TP53). He published as the first author in top journals, such as Cell, during his undergraduate career. He also holds patents related to his work on cancer dependency prediction and drugging TP53. While at the Institute, he served as president of the Chemistry Undergraduate Association, winning both the First-Year and Senior Chemistry Achievement Awards, and was head of the events committee for the MIT Science Olympiad.

Gianfranco (Franco) Yee ’24 majored in biological engineering at MIT, conducting research in the Manalis Lab on chemical gradients in the gut microenvironment and helping to develop a novel gut-on-a-chip platform for culturing organoids under these gradients. His senior thesis extended this work to the microbiome, investigating host-microbe interactions linked to intestinal inflammation and metabolic disorders. Yee also earned a concentration in education at MIT, and is committed to increasing access to STEM resources in underserved communities. He co-founded Momentum AI, an educational outreach program that teaches computer science to high school students across Greater Boston. The inaugural program served nearly 100 students and included remote outreach efforts in Ukraine and China. Yee has also worked with MIT Amphibious Achievement and the MIT Office of Engineering Outreach Programs. He currently attends Gerstner Sloan Kettering Graduate School, where he plans to leverage the gut microbiome and immune system to develop innovative therapeutic treatments.

Former Hertz Fellows include two Nobel laureates; recipients of 11 Breakthrough Prizes and three MacArthur Foundation “genius awards;” and winners of the Turing Award, the Fields Medal, the National Medal of Technology, the National Medal of Science, and the Wall Street Journal Technology Innovation Award. In addition, 54 are members of the National Academies of Sciences, Engineering and Medicine, and 40 are fellows of the American Association for the Advancement of Science. Hertz Fellows hold over 3,000 patents, have founded more than 375 companies, and have created hundreds of thousands of science and technology jobs.



from MIT News https://ift.tt/JXEW1N6

3 Questions: How to help students recognize potential bias in their AI datasets

Every year, thousands of students take courses that teach them how to deploy artificial intelligence models that can help doctors diagnose disease and determine appropriate treatments. However, many of these courses omit a key element: training students to detect flaws in the training data used to develop the models.

Leo Anthony Celi, a senior research scientist at MIT’s Institute for Medical Engineering and Science, a physician at Beth Israel Deaconess Medical Center, and an associate professor at Harvard Medical School, has documented these shortcomings in a new paper and hopes to persuade course developers to teach students to more thoroughly evaluate their data before incorporating it into their models. Many previous studies have found that models trained mostly on clinical data from white males don’t work well when applied to people from other groups. Here, Celi describes the impact of such bias and how educators might address it in their teachings about AI models.

Q: How does bias get into these datasets, and how can these shortcomings be addressed?

A: Any problems in the data will be baked into any modeling of the data. In the past we have described instruments and devices that don’t work well across individuals. As one example, we found that pulse oximeters overestimate oxygen levels for people of color, because there weren’t enough people of color enrolled in the clinical trials of the devices. We remind our students that medical devices and equipment are optimized on healthy young males. They were never optimized for an 80-year-old woman with heart failure, and yet we use them for those purposes. And the FDA does not require that a device work well across the diverse population we will actually be using it on. All they need is proof that it works on healthy subjects.

Additionally, the electronic health record system is in no shape to be used as the building blocks of AI. Those records were not designed to be a learning system, and for that reason, you have to be really careful about using electronic health records. The electronic health record system will eventually be replaced, but that’s not going to happen anytime soon, so we need to be smarter. We need to be more creative about using the data that we have now, no matter how bad they are, in building algorithms.

One promising avenue that we are exploring is the development of a transformer model of numeric electronic health record data, including but not limited to laboratory test results. Modeling the underlying relationship between the laboratory tests, the vital signs and the treatments can mitigate the effect of missing data as a result of social determinants of health and provider implicit biases.

Q: Why is it important for courses in AI to cover the sources of potential bias? What did you find when you analyzed such courses’ content?

A: Our course at MIT started in 2016, and at some point we realized that we were encouraging people to race to build models that are overfitted to some statistical measure of model performance, when in fact the data that we’re using is rife with problems that people are not aware of. At that time, we were wondering: How common is this problem?

Our suspicion was that if you looked at the courses where the syllabus is available online, or at the online courses, none of them would even bother to tell the students that they should be paranoid about the data. And true enough, when we looked at the different online courses, it’s all about building the model. How do you build the model? How do you visualize the data? We found that of 11 courses we reviewed, only five included sections on bias in datasets, and only two contained any significant discussion of bias.

That said, we cannot discount the value of these courses. I've heard lots of stories where people self-study based on these online courses, but at the same time, given how influential and impactful they are, we need to really double down on requiring them to teach the right skill sets, as more and more people are drawn to this AI multiverse. It's important for people to really equip themselves with the agency to be able to work with AI. We're hoping that this paper will shine a spotlight on the huge gap in the way we teach AI to our students now.

Q: What kind of content should course developers be incorporating?

A: One, giving them a checklist of questions in the beginning. Where did this data come from? Who were the observers? Who were the doctors and nurses who collected the data? And then learn a little bit about the landscape of those institutions. If it's an ICU database, they need to ask who makes it to the ICU and who doesn't, because that already introduces a sampling selection bias. If minority patients don't even get admitted to the ICU because they cannot reach the ICU in time, then the models are not going to work for them. Truly, to me, 50 percent of the course content, if not more, should be understanding the data, because the modeling itself is easy once you understand the data.
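The sampling selection bias described above can be seen in a toy simulation. Everything here is invented for illustration: the group sizes, severity distribution, and the admission rule are hypothetical assumptions, not real clinical data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical population: 30% belong to a group with poorer ICU access.
group_b = rng.random(n) < 0.30
severity = rng.random(n)  # 0 = mild, 1 = critical

# Illustrative admission rule: group B patients only make it to the ICU
# when severely ill; group A is admitted at any severity above a low bar.
admitted = np.where(group_b, severity > 0.7, severity > 0.2)

pop_frac_b = group_b.mean()            # share of group B in the population
db_frac_b = group_b[admitted].mean()   # share of group B in the ICU database
print(f"group B share - population: {pop_frac_b:.2f}, database: {db_frac_b:.2f}")

# Admitted group B patients are also much sicker than group B overall,
# so the database misrepresents both who group B is and how sick they are.
print(f"mean severity of group B - admitted: "
      f"{severity[admitted & group_b].mean():.2f}, "
      f"all: {severity[group_b].mean():.2f}")
```

Under these assumptions, group B is both underrepresented in the database and represented only by its sickest members, so any model fit to the database alone has little chance of working well for that group.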

Since 2014, the MIT Critical Data consortium has been organizing datathons (data "hackathons") around the world. At these gatherings, doctors, nurses, other health care workers, and data scientists get together to comb through databases and try to examine health and disease in the local context. Textbooks and journal papers present diseases based on observations and trials involving a narrow demographic, typically from countries with resources for research.

Our main objective now, what we want to teach them, is critical thinking skills. And the main ingredient for critical thinking is bringing together people with different backgrounds.

You cannot teach critical thinking in a room full of CEOs or in a room full of doctors. The environment is just not there. When we have datathons, we don't even have to teach them how to think critically. As soon as you bring the right mix of people, not just from different backgrounds but from different generations, you don't even have to tell them how to think critically. It just happens. The environment is right for that kind of thinking. So, we now tell our participants and our students: please, please do not start building any model unless you truly understand how the data came about, which patients made it into the database, what devices were used to measure, and whether those devices are consistently accurate across individuals.

When we have events around the world, we encourage them to look for data sets that are local, so that they are relevant. There's resistance, because they know that they will discover how bad their data sets are. We say that that's fine. This is how you fix that. If you don't know how bad they are, you're going to continue collecting them in a very bad manner, and they'll be useless. You have to acknowledge that you're not going to get it right the first time, and that's perfectly fine. MIMIC (the Medical Information Mart for Intensive Care database built at Beth Israel Deaconess Medical Center) took a decade before we had a decent schema, and we only have a decent schema because people were telling us how bad MIMIC was.

We may not have the answers to all of these questions, but we can evoke something in people that helps them realize that there are so many problems in the data. I’m always thrilled to look at the blog posts from people who attended a datathon, who say that their world has changed. Now they’re more excited about the field because they realize the immense potential, but also the immense risk of harm if they don’t do this correctly.



from MIT News https://ift.tt/QopgSuM