Friday, March 13, 2026

Financial Times ranks MIT Sloan No. 1 in 2026 Global MBA Ranking

The Financial Times has placed MIT Sloan School of Management at the top of its recently released 2026 Global MBA Ranking. It is the first time the school has earned the No. 1 spot on the list.

In its announcement of the rankings, the publication noted MIT’s school of management tops the list “at a time of sharpening focus from students on the importance of technology, including artificial intelligence, as they prepare for disruptions in the workplace.”

Global education editor Andrew Jack said in the Financial Times News Briefing podcast that MIT is “very much at the center of the tech revolution that we are seeing.” He added, “there’s no question that we’re talking more and more about artificial intelligence and expertise around some of the technical skills related and notably how you might apply AI in the workplace. That certainly reflects both its technical and engineering computer science skills historically. And [MIT Sloan] is doing a lot with those other departments in the university. So I think that says something very much about how the wider job market and the aspirations of students are evolving.”

“MIT Sloan operates at the intersection of management and technology,” says Richard Locke, the John C Head III Dean of the MIT Sloan School of Management. “Our students and alumni are employing artificial intelligence to solve complex problems in the world and across industries. At MIT Sloan, we focus on doing that work in a way that centers human capabilities, ensuring artificial intelligence extends what humans can do to improve organizations and the world.”

To determine its rankings, the Financial Times considers 21 criteria. Eight of those — accounting for 56 percent of the ranking’s weight — are determined by surveying alumni three years after they have completed their MBA program. School data account for 34 percent of the ranking, and the remaining 10 percent measures how often full-time faculty publish in top journals.

MIT Sloan ranked fourth for its alumni network, which measures how effectively alumni support one another through career advice, internships, job opportunities, and recruiting efforts. 

“This ranking underscores the strength of our global alumni community,” says Kathy Hawkes, senior associate dean of external engagement. “'Sloanies Helping Sloanies' isn’t just a phrase — it’s a lived experience. Our 31,000 alumni actively open doors, share expertise, and invest in each other’s success.”



from MIT News https://ift.tt/6fgX7DM

Next-generation geothermal energy: Promise, progress, and challenges

Geothermal energy, a clean, continuous energy source accessible in many locations, has been slow to catch on. Nearly 2,000 years ago, the Romans made extensive use of geothermal energy — heat from the Earth — including at the spa complex at present-day Bath, England. Electricity was first produced from geothermal sources in the early 1900s in Italy. In the United States, the Geysers geothermal field in California began generating electricity at scale in 1960, and routinely produces more than 725 megawatts of baseload power today. 

According to the International Energy Agency (IEA), geothermal energy still supplies less than 1 percent of global electricity demand, although countries like Kenya (more than 40 percent of electricity generation) and Iceland (nearly 30 percent of electricity and 90 percent of the heating) have seen widespread adoption.

In recent years, technological advances, an influx of private capital, and shifting energy and environmental policies have driven renewed interest in expanding development of geothermal energy. If project costs continue to decline, the IEA predicts that geothermal energy could meet 15 percent of the growth in global electricity demand between 2024 and 2050. Many countries, including the United States, Indonesia, New Zealand, and Turkey, are prioritizing an expansion of geothermal energy as part of their broader energy strategies.

Achieving large-scale electricity generation from geothermal sources will depend on a significant expansion of so-called next-generation geothermal. This refers to tapping heat from source rocks at temperatures of 100 degrees Celsius to more than 400 C, often at depths of several kilometers below the surface. Last month, U.S. Reps. Jake Auchincloss (D-MA) and Mark Amodei (R-NV) introduced bipartisan legislation to promote research, testing, and development of one type of next-generation geothermal energy known as superhot rock.

Geothermal energy at MIT

MIT and the predecessor of the MIT Energy Initiative (MITEI) played an important role in national geothermal strategy two decades ago through the influential 2006 report “The Future of Geothermal Energy,” led by former MIT professor Jeff Tester. In 2008, researchers at the Plasma Science and Fusion Center (PSFC) invented millimeter-wave drilling with support from one of the first MITEI seed innovation grants. The technology, which could be particularly useful for geothermal installations in superhot and deep rock, is being commercialized by MIT spinout Quaise Energy.

MITEI is sponsoring next-generation geothermal projects through its Future Energy Systems Center. A project led by MITEI Research Scientist Pablo Duenas-Martinez focuses on the techno-economics of electricity generation from a geothermal plant co-located with a data center, a timely topic given the proliferation of data center power purchase agreements for geothermally generated electricity. MITEI’s March 4 Spring Symposium focused on next-generation geothermal energy for firm power generation, and many of the leading exploration, drilling, reservoir development, and advanced technology companies working in this area sent panelists and speakers. On March 5, MITEI collaborated with the Clean Air Task Force (CATF) to co-host the GeoTech Summit, which explored accelerating technology development for and investment in next-generation geothermal.

To prepare for the recent symposium, MITEI organized a geothermal bootcamp during MIT’s Independent Activities Period (IAP) that introduced more than 40 members of the MIT community to geothermal basics, key technologies, and related MIT research. Carolyn Ruppel, MITEI’s deputy director of science and technology and the organizer of the IAP bootcamp and Spring Symposium, says, “MITEI’s member companies, which represent leading voices on energy, power generation, infrastructure, heavy industry, and digital technology, are increasingly approaching us about their interest in next-generation geothermal. There is also good momentum building across MIT, ranging from projects at the Earth Resources Laboratory to the millimeter-wave testbed being developed by PSFC and its MIT collaborators, individual projects in academic departments, and of course the work MITEI has been funding.”

Geothermal basics

Temperatures a few tens of meters below the ground are typically stable year-round. In some locations, these temperatures are warmer than the surface in winter and cooler in summer, making it possible to use geothermal heat pumps to moderate temperatures in buildings throughout the year. Overlooking the Charles River, Boston University’s 19-story Center for Computing and Data Science meets an estimated 90 percent of its heating and cooling needs using this kind of geothermal system. At the scale of large institutions or whole towns, thermal networks, district heating, and other approaches can efficiently supply heat from shallow geothermal sources without producing greenhouse gas emissions.

Tapping hotter and usually deeper geothermal sources could generate large amounts of electricity for decades at a single site. Next-generation geothermal is the term applied to these higher-temperature systems developed using enhanced, advanced, and superhot technologies. Enhanced geothermal refers to circulating fluids through engineered fracture systems in deep, dry rock with relatively low native permeability. Advanced geothermal adopts a closed loop approach, in which a working fluid is heated by circulating it through pipes embedded in the subsurface. Superhot geothermal, which is in its infancy, will likely use enhanced geothermal technology to circulate supercritical water through rock at almost 400 C.   

Next-generation geothermal

Drill deep enough and higher-temperature resources are nearly ubiquitous beneath the continents, but early-stage development must focus on the most promising sites, where the methods and technologies to routinely reach these hotter rocks can be tested and refined. Locations like Iceland and the southwestern U.S. state of Nevada, where tectonic plates are separating or the Earth’s outer layer is thinning, have hotter temperatures closer to the surface than areas like the northeastern United States, where the Earth’s crust is old, thick, and cooler. Even in the southwestern United States, though, reaching the high temperatures required for generating electricity via geothermal systems will require routinely drilling to depths of greater than 4 kilometers in crystalline rock. This is significantly more challenging than drilling in the sedimentary basins that host most of the world’s oil and gas reserves. 
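The depth-versus-temperature trade-off described above can be made concrete with a back-of-the-envelope estimate. The sketch below is purely illustrative and not from the article: the surface temperature and geothermal gradient values are typical textbook figures, and real gradients vary widely with geology.

```python
def depth_for_temperature(target_c, surface_c=15.0, gradient_c_per_km=30.0):
    """Estimate the depth (km) at which rock reaches target_c,
    assuming a constant geothermal gradient below the surface."""
    return (target_c - surface_c) / gradient_c_per_km

# A typical continental gradient (~30 C/km) vs. a hotter tectonic
# setting like Nevada or Iceland (~60 C/km), for 150 C rock:
print(depth_for_temperature(150))                        # ~4.5 km
print(depth_for_temperature(150, gradient_c_per_km=60))  # ~2.25 km
```

Under these assumed gradients, the same target temperature sits roughly twice as deep in cool, thick crust as in a thinned or rifting region, which is why early development concentrates on the latter.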

A location suitable for a next-generation geothermal installation requires not only heat, but also a fluid (usually water) to carry the heat. Water circulated through the rock formation to extract heat can be present naturally or brought from elsewhere and injected into the reservoir. This type of system also requires connected permeability, such as an engineered fracture network oriented to prevent significant fluid losses and to channel fluid toward the extraction well. Closed-loop (advanced) systems replace the freely circulating water with a working fluid that has favorable thermal characteristics and that is confined in piping.

Various geophysical methods are used to find sites with sufficient heat within a few kilometers of the surface, a prerequisite for their development as next-generation geothermal installations. Apart from direct measurements of temperatures in test boreholes, electrical resistivity and magnetotelluric surveys are among the most useful for inferring subsurface temperature regimes. Both techniques infer the electrical conductivity structure beneath the ground, permitting the identification of relatively warmer and more permeable rocks.

Drilling is often the most time-consuming and expensive part of preparing a site for a geothermal plant. This is particularly true for next-generation geothermal, where the targets can be deep, or the system design may require large-scale horizontal drilling. Over the past few years, numerous innovations have increased drilling rates and attainable depths and temperatures and also lowered costs. Nonetheless, even with high-quality geophysical surveys, “you may spend $10 million on an exploratory well and find no heat,” says Andrew Inglis, the geothermal channel venture builder at MIT Proto Ventures. 

Superhot geothermal, a next-generation geothermal approach that is advancing rapidly, presents special challenges. The metal drilling tools, the rocks in the formation, and circulating fluids all behave differently at temperatures of several hundred degrees, and standard practices, materials, and sensors must be significantly modified to tolerate the tough conditions. Once temperatures exceed 374 C in a borehole even ~1 km deep, water reaches a supercritical state. This presents substantial advantages for extracting heat from the formation, but introduces the specter of rapid metal corrosion and precipitation of salts and silica that can quickly foul a borehole. Researchers are investigating substitution of supercritical carbon dioxide for water as a working fluid for superhot geothermal.

MIT innovations advancing next-generation geothermal

The millimeter-wave drilling technology invented at PSFC and being commercialized by Quaise Energy is the highest-profile next-generation geothermal innovation to emerge from MIT so far. Millimeter-wave technology uses microwave energy to vaporize rock and could prove to be several times faster than conventional drilling. PSFC and a multidisciplinary MIT team are devising a dedicated laboratory to study how millimeter-wave drilling interacts with crystalline rock at realistic pressure and temperature conditions, and to test improvements to the existing technology. Steve Wukitch, interim director and principal research scientist at PSFC, notes that “the facility we are building at MIT will allow us to test samples 500 times larger than is currently possible. This is an important step for investigating technologies that could unlock superhot geothermal energy."

MIT Proto Ventures, which focuses on creating startups based on technology invented at MIT, currently hosts a dedicated geothermal energy channel led by Inglis. Since arriving at MIT in late 2024, Inglis has identified inventions and research that could advance next-generation geothermal from disciplines as disparate as mechanical and materials engineering, earth sciences, and chemistry. Examples of technologies originating with MIT researchers include sensors that measure micro-cracking in high-temperature rock, advanced metal alloys that could handle superhot fluids at a fraction of the cost of titanium, and anti-fouling coatings to protect pipes from the caustic geofluids common in hot, deep systems.

MITEI Spring Symposium

At the recent MITEI Spring Symposium, these MIT innovators introduced their technology to MITEI member companies in a session led by Inglis. Wukitch, who moderated a panel on advanced drilling, described the planned millimeter-wave testbed, and Duenas-Martinez led a panel on power generation and storage. Terra Rogers, director for superhot rock geothermal energy at the CATF and the organizer of the joint CATF-MITEI GeoTech Summit on March 5, led a discussion of international and U.S. policies and the regulatory environment for expansion of next-generation geothermal. 

Poster presenters included MIT graduate students and researchers, MIT’s D-Lab, and the Geo@MIT geothermal-focused MIT student group, which was recognized with a 2024 bonus award by the U.S. Department of Energy’s Geothermal Technologies Office in the nationwide EnergyTech University Prize competition.  



from MIT News https://ift.tt/iPXhg4Y

How the brain handles the “cocktail party problem”

MIT neuroscientists have figured out how the brain is able to focus on a single voice among a cacophony of many, shedding light on a longstanding question in neuroscience known as the cocktail party problem.

This attentional focus becomes necessary when you’re in any crowded environment, such as a cocktail party, with many conversations going on at once. Somehow, your brain is able to follow the voice of the person you’re talking to, despite all the other voices that you’re hearing in the background.

Using a computational model of the auditory system, the MIT team found that amplifying the activity of the neural processing units that respond to features of a target voice, such as its pitch, allows that voice to be boosted to the forefront of attention.

“That simple motif is enough to cause much of the phenotype of human auditory attention to emerge, and the model ends up reproducing a very wide range of human attentional behaviors for sound,” says Josh McDermott, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.

The findings are consistent with previous studies showing that when people or animals focus on a specific auditory input, neurons in the auditory cortex that respond to features of the target stimulus amplify their activity. This is the first study to show that this extra boost is enough to explain how the brain solves the cocktail party problem.

Ian Griffith, a graduate student in the Harvard Program in Speech and Hearing Biosciences and Technology, who is advised by McDermott, is the lead author of the paper. MIT graduate student R. Preston Hess is also an author of the paper, which appears today in Nature Human Behaviour.

Modeling attention

Neuroscientists have been studying the phenomenon of selective attention for decades. Many studies in people and animals have shown that when focusing on a particular stimulus like the sound of someone’s voice, neurons that are tuned to features of that voice — for example, high pitch — amplify their activity.

When this amplification occurs, neurons’ firing rates are scaled upward, as though multiplied by a number greater than one. It has been proposed that these “multiplicative gains” allow the brain to focus its attention on certain stimuli. Neurons that aren’t tuned to the target feature exhibit a corresponding reduction in activity.

“The responses of neurons tuned to features that are in the target of attention get scaled up,” Griffith says. “Those effects have been known for a very long time, but what’s been unclear is whether that effect is sufficient to explain what happens when you’re trying to pay attention to a voice or selectively attend to one object.”

This question has remained unanswered because computational models of perception haven’t been able to perform attentional tasks such as picking one voice out of many. Such models can readily perform auditory tasks when there is an unambiguous target sound to identify, but they haven’t been able to perform those tasks when other stimuli are competing for their attention.

“None of our models has had the ability that humans have, to be cued to a particular object or a particular sound and then to base their response on that object or that sound. That’s been a real limitation,” McDermott says.

In this study, the MIT team wanted to see if they could train models to perform those types of tasks by enabling the model to produce neuronal activity boosts like those seen in the human brain.

To do that, they began with a neural network that they and other researchers have used to model audition, and then modified the model to allow each of its stages to implement multiplicative gains. Under this architecture, the activation of processing units within the model can be boosted up or down depending on the specific features they represent, such as pitch.

To train the model, on each trial the researchers first fed it a “cue”: an audio clip of the voice that they wanted the model to pay attention to. The unit activations produced by the cue then determined the multiplicative gains that were applied when the model heard a subsequent stimulus.

“Imagine the cue is an excerpt of a voice that has a low pitch. Then, the units in the model that represent low pitch would get multiplied by a large gain, whereas the units that represent high pitch would get attenuated,” Griffith says.

Then, the model was given clips featuring a mix of voices, including the target voice, and asked to identify the second word said by the target voice. The model activations to this mixture were multiplied by the gains that resulted from the previous cue stimulus. This was expected to cause the target voice to be “amplified” within the model, but it was not clear whether this effect would be enough to yield human-like attentional behavior.
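The cue-then-gain procedure described above can be sketched as a toy example. This is an illustrative simplification, not the authors' actual model (which is a deep auditory network): the four "pitch-tuned channels," their activation values, and the `gains_from_cue` function with its sharpening exponent are all assumptions made for illustration.

```python
import numpy as np

def gains_from_cue(cue_activations, strength=2.0):
    """Units strongly driven by the cue get gains > 1; weakly driven units get gains < 1."""
    relative = cue_activations / cue_activations.mean()
    return relative ** strength  # sharpen the contrast between tuned and untuned units

# Four imaginary channels, ordered low -> high pitch.
# A low-pitched cue voice mostly drives the low-pitch channels.
cue = np.array([0.9, 0.8, 0.2, 0.1])
gains = gains_from_cue(cue)

# A mixture of a low-pitched target and a high-pitched distractor
# drives all channels roughly equally...
mixture = np.array([0.5, 0.5, 0.5, 0.5])

# ...but multiplying by the cue-derived gains boosts the target's channels,
# pushing the low-pitch (target) representation to the forefront.
attended = gains * mixture
print(attended)
```

The key design point mirrors the study's question: the gains are fixed by the cue alone, before the mixture arrives, so any success in separating the voices comes entirely from multiplicative scaling of unit activations.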

The researchers found that under a variety of conditions, the model performed very similarly to humans, and it tended to make errors similar to those that humans make. For example, like humans, it sometimes made mistakes when trying to focus on one of two male voices or one of two female voices, which are more likely to have similar pitches.

“We did experiments measuring how well people can select voices across a pretty wide range of conditions, and the model reproduces the pattern of behavior pretty well,” Griffith says.

Effects of location

Previous research has shown that in addition to pitch, spatial location is a key factor that helps people focus on a particular voice or sound. The MIT team found that the model also learned to use spatial location for attentional selection, performing better when the target voice was at a different location from distractor voices.

The researchers then used the model to discover new properties of human spatial attention. With the model, they were able to test all possible combinations of target and distractor locations, an undertaking that would be hugely time-consuming with human subjects.

“You can use the model as a way to screen large numbers of conditions to look for interesting patterns, and then once you find something interesting, you can go and do the experiment in humans,” McDermott says.

These experiments revealed that the model was much better at correctly selecting the target voice when the target and distractor were at different locations in the horizontal plane. When the sounds were instead separated in the vertical plane, this task became much more difficult. When the researchers ran a similar experiment with human subjects, they observed the same result.

“That was just one example where we were able to use the model as an engine for discovery, which I think is an exciting application for this kind of model,” McDermott says.

Another application the researchers are pursuing is using this kind of model to simulate listening through a cochlear implant. These studies, they hope, could lead to improvements in cochlear implants that could help people with such implants focus their attention more successfully in noisy environments.

The research was funded by the National Institutes of Health.



from MIT News https://ift.tt/LDSO7nw

Thursday, March 12, 2026

Discovering the joy of future-forward electrical engineering

“It’s a real validation of all the work behind the scenes,” says Karl Berggren, faculty head of electrical engineering within the MIT Department of Electrical Engineering and Computer Science (EECS). He’s looking at the numbers of new enrollees in Course 6-5, Electrical Engineering With Computing, the flagship electrical engineering degree offered by EECS, which was launched last fall. 

The new major has been embraced by the MIT student community. “The fact that Course 6-5 is now the third-most selected major among first-year students shows that the department is clearly meeting a growing need for a curriculum that bridges electrical engineering and computing. This growth is coming from students already interested in pursuing a degree in EECS,” says Anantha Chandrakasan, MIT’s provost. “The major was thoughtfully designed to offer a strong foundation in core electrical engineering concepts — such as circuits, signals, systems, and architecture — while also providing well-structured specialization tracks that prepare students for the future of the field.”

Those tracks include structured paths to explore not only the traditional domains of electrical engineering (such as hardware design and energy systems), but cutting-edge fields such as nanoelectronics, quantum systems engineering, and photonics. 

“They are very flexible, and essentially allow me to take whatever I want, with the tracks filling up almost automatically,” says 6-5 major Charles Reischer. “For me, it essentially reduces the amount of specific required classes in the major, which has been helpful for choosing the classes I find interesting.” 

Jelena Notaros, who helped develop the Electromagnetics and Photonics track within the new major, has seen the new wave of student interest from the other side. “It’s been incredibly rewarding … I think students are excited to have the opportunity to take a class where they can learn about a cutting-edge field and test real state-of-the-art chip hardware using industry-standard equipment.” Notaros’s class, 6.2320 (Silicon Photonics), includes features not found in a university class anywhere else, such as a sequence in which students can test actual chips at three electronic-photonic probe stations. 

Another 6-5 track, Quantum Systems Engineering, features direct student access to quantum hardware, including electron-nuclear systems and state-of-the-art simulations methods and tools. Professor Dirk Englund, who teaches multiple courses within the track, explains, “it’s been so successful in part through strong industry support, including from QuTools Inc. Students work with the same tech we use in the Boston-Area Quantum Network Testbed — the metro quantum network linking MIT, Lincoln Lab, and Harvard, and the NSF CQN.” 

Many of Englund’s students have gone on to pursue a career in quantum information science, either in grad school or in industry. “Students recognize quantum engineering is the future. They see they’re building the foundation for metro-scale quantum networks.” 

The new curriculum’s emphasis on hands-on learning is deliberate, and ubiquitous throughout 6-5. Within the Circuits track, students who enroll in class 6.208 (Semiconductor Electronic Circuits) will get an opportunity not only to design a circuit, but to actually see their design made, in a process called “tape-out.” Professor Ruonan Han, who helped design the course, explains, “a tape-out is perfect training that poses [real-life] constraints and forces the students to solve practical engineering problems. Through circuit simulation using mainstream industry CAD tools, the students better understand how deep-scaled transistors differ from the ideal behaviors taught in textbooks. By drawing the layouts of the silicon and metal patterns, the students learn how a modern chip is made, layer by layer. The complex (and often frustrating) rules of the layout also keep reminding the students of all the technical limitations during chip manufacturing, and make them better appreciate all the accomplishments in semiconductor manufacturing. Even the firm and non-negotiable tape-out submission deadline forces the students to not only wisely manage their development timeline, but also to experience heart-beating moments when decisions on critical engineering trade-offs must be made (in order to deliver). To these students, it was this relentless effort that gave them lots of satisfaction and pride when they finally held their own chips in hand.” 

The sense of completing a full problem-solving cycle is echoed in class 6.900 (Engineering for Impact), a capstone course designed by Professor Joel Voldman, a former faculty head of electrical engineering, along with Senior Lecturer Joe Steinmeyer. Over the course of a semester, students team with city governments and nonprofits to solve complex local issues. The course is designed not only to introduce students to realistic project management factors (such as budgets, timelines, and stakeholders), but also to give them a taste of the satisfaction of engineering a solution that meets a real community’s need. 

“I’ve taken 6.900, and it’s been eye-opening in the collaboration of hardware, firmware, and software to create a cohesive and working product,” says Andrea Leang, a senior majoring in 6-2 who nonetheless decided to try the new course. “In my 6-2 experience, I spent the first two years taking more CS [computer science] classes, but as I went into junior year, I wanted to explore more EE [electrical engineering].” That desire led Leang to Voltage, the student group for electrical engineers. “Honestly, it was the first big community of EE I’ve joined. Joining Voltage opened my eyes to what MIT had to offer on EE, and a community who was enthusiastic to share their knowledge.” 

Matthew Kim, one of the executives of the Voltage group, echoes Leang’s experience. “It has been great working [...] to build a community for EE. We heard faculty say that they wanted to be more engaged with students and communicate more, and it has definitely been felt with the restart and support of Voltage. And I’m hopeful that the community will continue to grow.” 

That growth has been rapid. The new major’s enrollment is now roughly equivalent to the combined enrollment in the older 6-1 and 6-2 programs, showing the desirability of a major that incorporates fundamentals of both computing and electrical engineering. 

Department head Professor Asu Ozdaglar is thrilled with the energizing effect of the new major. “We are delighted to see the initial success of the 6-5 major, which provides our students an exciting and forward-looking curriculum, developed through extensive work and a great deal of thought by electrical engineering faculty. The new curriculum reflects the critical role computing plays in electrical engineering, whether in designing new devices and circuits, analyzing data, or in studying complex systems, which almost invariably combine hardware and software.”

“What excites me most about this major is how it empowers students to bring ideas to life — from the invisible signals that connect our world to the complex systems that drive modern technology,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Warren Professor of Electrical Engineering and Computer Science. “Students are using computation as a creative and analytical tool to expand the boundaries of engineering. They gain a deep understanding of how hardware and software come together to drive technological progress.” 

The new degree program’s designers are gratified by the swell of student interest. 

“The buzz surrounding the classes and the new 6-5 degree program is fantastic,” says Voldman. “It’s great to see the strong student interest in what we’ve put together.” 



from MIT News https://ift.tt/o3Ww7bk

3 Questions: On the future of AI and the mathematical and physical sciences

Curiosity-driven research has long sparked technological transformations. A century ago, curiosity about atoms led to quantum mechanics, and eventually the transistor at the heart of modern computing. Conversely, the steam engine was a practical breakthrough, but it took fundamental research in thermodynamics to fully harness its power. 

Today, artificial intelligence and science find themselves at a similar inflection point. The current AI revolution has been fueled by decades of research in the mathematical and physical sciences (MPS), which provided the challenging problems, datasets, and insights that made modern AI possible. The 2024 Nobel Prizes in physics and chemistry, recognizing foundational AI methods rooted in physics and AI applications for protein design, made this connection impossible to miss.

In 2025, MIT hosted a Workshop on the Future of AI+MPS, funded by the National Science Foundation with support from the MIT School of Science and the MIT departments of Physics, Chemistry, and Mathematics. The workshop brought together leading AI and science researchers to chart how the MPS domains can best capitalize on — and contribute to — the future of AI. Now a white paper, with recommendations for funding agencies, institutions, and researchers, has been published in Machine Learning: Science and Technology. In this interview, Jesse Thaler, MIT professor of physics and chair of the workshop, describes key themes and how MIT is positioning itself to lead in AI and science.

Q: What are the report’s key themes regarding last year’s gathering of leaders across the mathematical and physical sciences?

A: Gathering so many researchers at the forefront of AI and science in one room was illuminating. Though the workshop participants came from five distinct scientific communities — astronomy, chemistry, materials science, mathematics, and physics — we found many similarities in how we are each engaging with AI. A real consensus emerged from our animated discussions: Coordinated investment in computing and data infrastructures, cross-disciplinary research techniques, and rigorous training can meaningfully advance both AI and science.

One of the central insights was that this has to be a two-way street. It’s not just about using AI to do better science; science can also make AI better. Scientists excel at distilling insights from complex systems, including neural networks, by uncovering underlying principles and emergent behaviors. We call this the “science of AI,” and it comes in three flavors: science driving AI, where scientific reasoning informs foundational AI approaches; science inspiring AI, where scientific challenges push the development of new algorithms; and science explaining AI, where scientific tools help illuminate how machine intelligence actually works.

In my own field of particle physics, for instance, researchers are developing real-time AI algorithms to handle the data deluge from collider experiments. This work has direct implications for discovering new physics, but the algorithms themselves turn out to be valuable well beyond our field. The workshop made clear that the science of AI should be a community priority — it has the potential to transform how we understand, develop, and control AI systems.
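The real-time filtering Thaler describes can be pictured with a toy sketch: a tiny fixed-weight neural network scores each incoming event in a single cheap forward pass, and only events clearing a threshold are kept for offline analysis. Everything here (the network size, weights, features, and threshold) is arbitrary and illustrative, not drawn from any actual trigger system.

```python
import numpy as np

# Illustrative sketch only: a tiny fixed-weight network acting as a
# real-time event filter, loosely in the spirit of collider triggers.
# Weights and threshold are arbitrary, not from any real experiment.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))  # 4 summary features -> 8 hidden units
W2 = rng.normal(size=8)       # hidden units -> single score

def event_score(features: np.ndarray) -> float:
    """Score one event in a single forward pass (ReLU hidden layer)."""
    hidden = np.maximum(features @ W1, 0.0)
    return float(hidden @ W2)

def keep_event(features: np.ndarray, threshold: float = 0.5) -> bool:
    """Trigger decision: keep the event only if its score clears the threshold."""
    return event_score(features) > threshold

# Stream of simulated events: retain a small subset for offline analysis.
events = rng.normal(size=(1000, 4))
kept = [e for e in events if keep_event(e)]
print(f"kept {len(kept)} of {len(events)} events")
```

The point of the sketch is the constraint, not the model: in a trigger setting the decision must be made per event, in real time, with a fixed and inexpensive computation.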

Of course, bridging science and AI requires people who can work across both worlds. Attendees consistently emphasized the need for “centaur scientists” — researchers with genuine interdisciplinary expertise. Supporting these polymaths at every career stage, from integrated undergraduate courses to interdisciplinary PhD programs to joint faculty hires, emerged as essential.

Q: How do MIT’s AI and science efforts align with the workshop recommendations?

A: The workshop framed its recommendations around three pillars: research, talent, and community. As director of the NSF Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) — a collaborative AI and physics effort among MIT and Harvard, Northeastern, and Tufts universities — I’ve seen firsthand how effective this framework can be. Scaling this up to MIT, we can see where progress is being made and where opportunities lie.

On the research front, MIT is already enabling AI-and-science work in both directions. Even a quick scroll through MIT News shows how individual researchers across the School of Science are pursuing AI-driven projects, building a pipeline of knowledge and surfacing new opportunities. At the same time, collaborative efforts like IAIFI and the Accelerated AI Algorithms for Data-Driven Discovery (A3D3) Institute concentrate interdisciplinary energy for greater impact. The MIT Generative AI Impact Consortium is also supporting application-driven AI work at the university scale.

To foster early-career AI-and-science talent, several initiatives are training the next generation of centaur scientists. The MIT Schwarzman College of Computing’s Common Ground for Computing Education program helps students become “bilingual” in computing and their home discipline. Interdisciplinary PhD pathways are also gaining traction; IAIFI worked with the MIT Institute for Data, Systems, and Society to create one in physics, statistics, and data science, and about 10 percent of physics PhD students now opt for it — a number that’s likely to grow. Dedicated postdoctoral roles like the IAIFI Fellowship and Tayebati Fellowship give early-career researchers the freedom to pursue interdisciplinary work. Funding centaur scientists and giving them space to build connections across domains, universities, and career stages has been transformative.

Finally, community-building ties it all together. From focused workshops to large symposia, organizing interdisciplinary events signals that AI and science isn’t siloed work — it’s an emerging field. MIT has the talent and resources to make a significant impact, and hosting these gatherings at multiple scales helps establish that leadership.

Q: What lessons can MIT draw about further advancing its AI-and-science efforts?

A: The workshop crystallized something important: The institutions that lead in AI and science will be the ones that think systematically, not piecemeal. Resources are finite, so priorities matter. Workshop attendees were clear about what becomes possible when an institution coordinates hires, research, and training around a cohesive strategy.

MIT is well positioned to build on what’s already underway with more structural initiatives — joint faculty lines across computing and scientific domains, expanded interdisciplinary degree pathways, and deliberate “science of AI” funding. We’re already seeing moves in this direction; this year, the MIT Schwarzman College of Computing and the Department of Physics are conducting their first-ever joint faculty search, which is exciting to see.

The virtuous cycle of AI-and-science has the potential to be truly transformative — offering deeper insight into AI, accelerating scientific discovery, and producing robust tools for both. By developing an intentional strategy, MIT will be well positioned to lead in, and benefit from, the coming waves of AI.



from MIT News https://ift.tt/j975feU

Wednesday, March 11, 2026

2026 MacVicar Faculty Fellows named

Two outstanding MIT educators have been named MacVicar Faculty Fellows: professor of mechanical engineering Amos Winter and professor of electrical engineering and computer science Nickolai Zeldovich.

For more than 30 years, the MacVicar Faculty Fellows Program has recognized exemplary and sustained contributions to undergraduate education at MIT. The program is named in honor of Margaret MacVicar, MIT’s first dean for undergraduate education and founder of the Undergraduate Research Opportunities Program (UROP). Fellows are chosen through an annual and highly competitive nomination process. The Registrar’s Office coordinates and administers the award on behalf of the Division of Graduate and Undergraduate Education. Nominations are reviewed by an advisory committee, and the provost selects the fellows.

Amos Winter: Bringing excitement to the classroom

Amos Winter is the Germeshausen Professor in the Department of Mechanical Engineering (MechE). He joined the faculty in 2012 and is best known for teaching class 2.007 (Design and Manufacturing I).

A hallmark of Winter’s pedagogy is the way he connects technical learning and core engineering science with real-world impacts. His approach keeps students actively engaged and encourages critical thinking while developing their competence and confidence as design engineers. Current graduate student Ariel Mobius ’24 writes, “Professor Winter is a transformative educator. He successfully blends rigorous technical instruction with lessons on problem scoping and hands-on learning and backs it all up with personalized mentorship. He is a committed advocate for his students and has fundamentally shaped my path as a mechanical engineer.”

Especially notable are Winter’s energetic style and his use of interactive materials and demonstrations to make fundamental topics tangible. “He wheels in a large steamer trunk filled with demos he has built or collected to illustrate the day’s topic,” writes Class of 1948 Career Development Professor and assistant professor of mechanical engineering Kaitlyn Becker. “Some demos are enduring classics and others newly designed each year.” In his “Gearhead Moment of Zen,” Winter shares an astonishing car stunt and then explains its mechanics using course material. “The theatrics stay in students’ minds,” says Becker, highlighting how Winter’s dramatic examples reinforce learning.

These techniques, combined with a supportive culture, have allowed Winter to transform 2.007 from a core introductory engineering design subject into a celebration of student effort and learning. Throughout the term, students learn how to design and build machines, culminating in a robot competition in which their creations tackle themed challenges on a life-size game board. In the past, fewer than half the students were able to compete; today, boosted by Winter’s mentorship and enthusiasm, nearly 97 percent finish a competition-ready robot.

Ralph E. and Eloise F. Cross Professor of Mechanical Engineering David Hardt writes, “Thanks to Amos, this subject has become transformative for many MechE undergraduates.” Becker concurs: “He is the heart and captain of the 2.007 ‘cheer squad,’ cultivating a caring and motivated teaching team.”

Current graduate student Aidan Salazar ’25 notes, “His teaching philosophy is grounded in empowerment: he encourages students to take risks when designing while giving them the confidence and support needed to do so with thoughtful engineering analysis.”

Winter is also deeply invested in students’ growth outside the classroom. He serves as faculty supervisor for MIT’s Formula SAE (Society of Automotive Engineers) and Solar Car teams and guides related UROP projects. In fall 2025 alone, he advised nearly 50 UROP students from the teams, demonstrating his commitment to experiential learning and ability to mentor students at scale.

Salazar continues: “He has offered extraordinary contributions in helping MIT undergraduates embody the Institute’s ‘mens-et-manus’ [‘mind-and-hand’] motto, and I am grateful to be one of the individuals shaped by his teaching.”

“I have always looked up to my colleagues who are MacVicar Fellows as the best educators at the Institute,” writes Winter. “What makes this acknowledgement even more special to me is that I earned it by teaching 2.007, which I often cite as one of the best parts of my job. The class is where most mechanical engineering undergraduates gain their first real engineering experience by physically realizing a machine of their own conception. It has been extremely gratifying to watch a generation of students translate their knowledge of engineering and design from the class into their careers … I am honored to have played a role in their intellectual growth and done so meaningfully enough to be recognized as a MacVicar Fellow.”

Nickolai Zeldovich: Inspiring independent thinkers and future teachers

Nickolai Zeldovich is the Joan and Irwin M. (1957) Jacobs Professor of Electrical Engineering and Computer Science (EECS). Student testimonials highlight his unique ability to activate their problem-solving skills, cultivate their intellectual curiosity, and infuse learning with joy.

Katarina Cheng ’25 writes, “From my first day of lecture in the course, I was immediately drawn in by Professor Zeldovich’s joy and enthusiasm for every facet of security and its power,” and Rotem Hemo ’17, ’18 says that Zeldovich “empowers students to find solutions themselves.”

Yael Tauman Kalai, the Ellen Swallow Richards (1873) Professor and professor of EECS, concurs. She notes that his lectures — with back-and-forth discussion and probing questions — encourage independent thinking and ensure that “everyone feels a little smarter at the end. It is not surprising that students love him.”

Zeldovich’s affinity for problem-solving translates to his curricular work as well. When he arrived at MIT in 2008, Course 6 offered classes in theoretical and applied cryptography, but lacked a dedicated systems security subject. Recognizing this as a significant gap, Zeldovich took it upon himself to create class 6.566/6.858 (Computer Systems Security) in 2009. Since then, the subject has become a central part of the curriculum, but sustained interest from undergraduates revealed another need, and in 2021 he partnered with colleagues to create a dedicated introductory course: 6.1600 (Foundations of Computer Security).

Edwin Sibley Webster Professor of EECS Srini Devadas writes: “What our curriculum was sorely in need of was a systems security class, and Nickolai immediately and single-handedly created [it],” and has “taught this class to rave reviews ever since.”

The impact of Zeldovich’s thoughtful, inquiry-driven approach to pedagogy extends beyond the walls of his classroom, inspiring future educators, teaching assistants (TAs), and even his faculty colleagues at MIT.

Henry Corrigan-Gibbs, the Douglas Ross (1954) Career Development Professor of Software Technology and associate professor of computer science, writes that Zeldovich has “proven himself to be a dedicated teacher of teachers … One of the things that makes teaching with Nickolai so much fun is that he shares his passion with the undergraduates and MEng students who join the course staff as TAs.”

“[He] encourages the TAs to contribute their own creative ideas to the course,” continues Corrigan-Gibbs. “It should not be a surprise then that 100% of the TAs that we have had in our class have signed up to teach with Nickolai again.”

“Due, in no small part, to how I saw Nickolai lead his classroom, I was inspired to become an educator myself,” writes MIT alumna Anna Arpaci-Dusseau ’23, SM ’24. “I saw that the role of an instructor is not only to teach, but to innovate by thinking of creative projects, and to connect by listening to students’ concerns. As I go forward in my career, I am grateful to have such a wonderful example of an educator to look up to.”

Kalai adds, “I have learned a great deal from the two times that I have ‘taken’ (part of) the class from Nickolai. His extensive knowledge and experience are evident in every lecture. There is so much variety to Nickolai’s teaching.”

Zeldovich is the recipient of numerous awards, including the EECS Spira Teaching Award (2013), the Edgerton Faculty Achievement Award (2014), the EECS Faculty Research Innovation Fellowship (2018), and the EECS Jamieson Award for Excellence in Teaching (2024).

On receiving this award, Zeldovich says, “MIT has a culture of strong undergraduate education, so being selected as a MacVicar Fellow was truly an honor. It’s a joy to teach smart students about computer systems, and the tradition of co-teaching classes in the EECS department helped me improve as a teacher. Most of all, I look forward to continuing to teach MIT’s students!”

Learn more about the MacVicar Faculty Fellows Program on the Registrar’s Office website. 



from MIT News https://ift.tt/tqQMiRB
