Wednesday, April 29, 2026

An engineer’s guide to birds

Feathers give birds their dazzling colors. They repel water and trap heat, keeping them warm and dry. They can even stifle sound, allowing species such as owls to hunt in virtual silence.

All of these functions come from the remarkable structure of feathers, explored in two chapters in “Birds Up Close,” by MIT materials engineer and lifelong birder Lorna J. Gibson. The book takes a microscopic look at birds’ feathers, bones, bills, eggs, and the mechanics of flight to explain their extraordinary abilities — how they can hover in place, silently swoop down on prey, and fly hundreds of miles without tiring.

Gibson spent four decades studying the mechanical behavior of materials — examining their underlying structure to determine what makes them hard or soft, supple or brittle. She specialized in cellular materials, such as engineered honeycombs and foams, as well as natural ones such as wood and bamboo.

Now a post-tenure professor, she’s turned her materials engineering perspective to birds, a subject that has long fascinated her. She’s given talks on the properties of feathers, including the Department of Materials Science and Engineering’s Wulff Lecture in 2017, and studied how sandgrouse carry water in their feathers to their young and how woodpeckers avoid brain damage despite their constant battering.

As a graduate student, she recalls, a colleague told her that woodpeckers have foam between their skulls and brains to cushion the blows of pecking. Intrigued, she dug into the topic and discovered a 1976 study in which neurologists dissected a woodpecker’s head and found no foam at all. So how do woodpeckers avoid brain injury?

“Eventually I recognized that because woodpeckers have such tiny brains, they don’t need the kind of protection that larger animals would need,” Gibson writes in the preface. That understanding led to talks for birders — and eventually to the idea for a whole book explaining how birds work through the lens of materials science and mechanics.

Engineering meets birding

Gibson describes “Birds Up Close,” published by the MIT Press, as a book by an engineer for anyone interested in birds, drawing largely on published research. Readers need no scientific or engineering background; sidebars include calculations for those who want more detail.

“I wasn’t writing it for engineers; I was writing it for birders — people who are curious about natural history,” she says. “I think engineers will enjoy it because there are engineering pieces to it, but I really wrote it for birders.”

Birding has surged in popularity in the United States; 96 million people — about one in three Americans — consider themselves birders, according to the U.S. Fish and Wildlife Service.

Those readers will find no shortage of memorable facts. Two chapters on feathers, titled “Fantastic Feathers,” explore striking features such as the wood duck’s brilliant colors and hummingbirds’ iridescence. Gibson explains both the science behind feather colors and how we perceive them.

“The color we see comes from light that is reflected from a surface,” she writes, describing the pigments responsible for the blacks and grays of seagulls as well as the vibrant reds and greens of the African turaco. But the blue jay in your backyard gets its color another way: blue is not a pigment at all, but a structural color, produced by the interaction of light with microscopic structures within the feathers.

Using photography, sketches, and microscope images, Gibson examines the microscopic structures of feathers. The contour feathers covering a bird’s body, for example, are branched structures with connecting barbs and parallel barbules. Scanning electron microscope images reveal details invisible to the naked eye, including the foamy core of a feather shaft.

She uses the same approach to explore how male hummingbirds produce high-pitched buzzing sounds with their tail feathers during dramatic courtship dives. The sound comes from the fluttering edges of the outermost tail feathers — like blowing across a blade of grass.

Gibson also shows how barn owl feathers enable stealthy flight. Comb-like serrations on the wing break up airflow and reduce noise.

Richard Prum, an ornithologist at Yale University who contributed images to the book, says Gibson’s engineering perspective deepens how we think about birds and evolution. Prum, author of “The Evolution of Beauty,” notes that her approach helps explain not just how birds survive, but how their unique features evolve and function.

“The public has absorbed generations of statements about survival and adaptation and ecology,” Prum says. “But that really sweeps under the rug: How do birds do it?”

Each chapter focuses on a different feature and can be read independently — readers can skip to the second feathers chapter to learn how water literally rolls off a duck’s back, or to the bills chapter to explore the bristles on a woodpecker’s tongue that help it capture insects inside a tree.

The final chapters focus on flight — how raptors soar thousands of feet and glide effortlessly, and how geese gain energy efficiency flying in V formations. Gibson is frank: These chapters are more technical, focusing on forces like lift and drag. “The reward is that you’ll learn some of the secrets of bird flight,” she writes.

The human side of the science

It’s not just natural history and science that fill “Birds Up Close.” In her preface, Gibson recalls childhood walks along the Niagara River in Ontario and a summer trip watching breeding colonies of puffins and guillemots in the Farne Islands in her parents’ native England.

In the epilogue, she reflects on writing an earlier version of the book while her wife, Jeannie, faced an aggressive brain cancer in 2019 and died a year later, as the world shut down amid the Covid-19 pandemic. Unable to visit friends and family in Canada — or host them in Boston — Gibson found solace exploring local green spaces, such as Jamaica Pond and the Arnold Arboretum.

On difficult days, “I would be out on a walk, spot something — the kingfisher cackling from its perch on a branch overhanging Leverett Pond, or a wood duck paddling on Jamaica Pond or a hawk circling overhead — and stop in awe and think: Oh, wow, I love seeing that. And for that moment, the grief would disappear.”

Returning to the manuscript, she noticed it was missing something critical. “I had all the science there, but I felt it was too much like a textbook,” she says. She consulted friends, colleagues, and an editor who helped turn “this textbook-y thing into something people would enjoy reading.”

The mix of the scientific and personal stood out to Scott Edwards, professor of organismic and evolutionary biology at Harvard University.

“That’s what science is,” Edwards says. “Science is done by humans. It’s not like we can morph into some ultra-objective person when we’re being a scientist. We bring to science our whole selves.”

He also praises the book’s clear writing and illustrations, an approach that “cuts through all the noise and gets right to the core of the message.” He plans to use the book in his class on birds at Harvard.

“Birds Up Close” goes on sale May 5. Gibson is scheduled to discuss the work at the MIT Museum on May 6. She will also appear at other events; a full list is available on the book’s website. She reflects on the book’s reach:

“Part of it was my own sense of awe and wonder. I couldn’t believe the things that I found out about birds,” Gibson says. “I think a lot of birders are into what’s called listing — seeing lots of species and keeping track of how many different species of birds they see. That’s great, if that’s what you want to do. But this book is really a different way of looking at birds.”



from MIT News https://ift.tt/YwRyWoG

A month in Panama: Rethinking what real estate development can be

Cherry Tang, a master of science in real estate development student at the MIT Center for Real Estate, recently participated in an experiential learning opportunity in Panama working with Conservatorio, a development firm based in Casco Viejo. What began as a modeling exercise quickly became a deeper exploration of how development, community, and environment intersect, shaped as much by people and culture as by the work itself.

“I went in expecting to build a financial model. I didn’t expect that the experience would fundamentally reshape how I think about development,” Tang reflects.

The project centered on Santa Catalina, a remote surf town on Panama’s Pacific coast. The development comprises approximately 140 residential units across condos, villas, and homes, along with vacant lots, four retail spaces, a surf school with a stadium, and a restaurant with a pool — all envisioned as the town’s first true center.

At first glance, Tang says, Santa Catalina didn’t resemble a typical “prime” development market. It had limited infrastructure, low density, and no established core.

“What it does have is something powerful: world-class surf and access to Coiba National Park, a premier diving destination,” Tang says. “Here, the ocean becomes the anchor tenant.”

The project is designed as an open, walkable master-planned community that integrates seamlessly with the existing town. Anchored by surfing and diving, it introduces a diverse product mix and a 600-meter linear park, positioning it as the future heart of Santa Catalina and a differentiated alternative to both local developments and traditional resort-style communities.

Tang saw this as a different vision of place-making. “It wasn’t about building a resort. It was about building a center of gravity for a community that has never really had one.”

Tang’s primary role was to build the project’s financial model from the ground up. The capital structure, with land contributed as equity and sales deposits used to fund construction, required a different way of thinking than the institutional frameworks she had used in previous roles in Toronto and Boston.

“It was more than a technical exercise,” she explains. “It reinforced how financial, physical, and strategic decisions are deeply interconnected, and how thoughtful structuring can unlock projects that might otherwise not be feasible.”

Working directly with KC Hardin, founder and CEO of Conservatorio, and the broader leadership team, Tang gained firsthand exposure to real-time development decision-making. She presented her financial model to leadership and prospective investors, and her assumptions helped shape conversations around phasing, design, and construction.

“Development is a feedback loop between underwriting and the built environment,” Tang says.

Throughout the month, Tang and her colleagues met with a range of people shaping the project’s future. They spent time with local developers and brokers, learning about infrastructure improvements and ongoing real estate activity in the region. 

Tang described meeting one family with long-standing ties to the area as one of the more memorable moments.

“Their coastline conservation work in Panama is deeply inspiring,” she says.

They also met with scientists from the Smithsonian Tropical Research Institute, trekking through mangroves and learning about coastal ecosystems and the long-term environmental implications of development.

“It was a vivid reminder that development decisions don’t exist in isolation,” says Tang.

Outside of work, Panama had its way of leaving an impression. Sailing through the Panama Canal ... watching cargo ships pass through landscapes filled with monkeys and sloths ... living in Casco Viejo — each added another layer to the experience for Tang. The neighborhood itself served as a real-life case study in thoughtful, community-oriented development.

“What stayed with me most was Conservatorio’s approach to revitalization, not through displacement, but through deep engagement, trust-building, and creating pathways for local residents to be part of the area’s transformation.”

That same spirit was reflected in everyday moments, starting with co-workers who went out of their way to make interns feel welcome.

“Strangers greeted us like neighbors,” says Tang. “The level of warmth and hospitality defined the experience as much as the work itself.”

By the end of the month, the experience left her with more than technical skills — she had a shift in perspective.

“I began to see development less as a formula and more as a system,” she explains. “One that sits at the intersection of finance, design, environment, and community.”

Her takeaway is that value can be created in unconventional ways, and leadership in real estate is grounded in trust, curiosity, and a deep respect for place.

Tang arrived in Panama to build a model. She left with a deeper understanding of what it means to build thoughtfully — as a developer, and as a steward of place.



from MIT News https://ift.tt/hvQVg6X

The MIT-IBM Computing Research Lab launches to shape the future of AI and quantum computing

The following is a joint announcement by the MIT Schwarzman College of Computing and IBM.

IBM and MIT today announced the launch of the MIT-IBM Computing Research Lab, advancing their long-standing collaboration to shape the next era of computing. The new lab expands its scope to include quantum computing, alongside foundational artificial intelligence research, with the goal of unlocking new computational approaches that go beyond the limits of today’s classical systems.

The MIT-IBM Computing Research Lab builds on a distinguished history of scientific excellence at the intersection of research and academia. Evolving from the MIT-IBM Watson AI Lab, which originated in 2017 on MIT’s campus, the new lab reflects a transformed technology landscape — one in which AI has entered mainstream deployment, and quantum computing is rapidly advancing toward practical impact. Together, MIT and IBM aim to help lead research in AI and quantum and to redefine mathematical foundations across both domains.

“We expect the MIT-IBM Computing Research Lab to emerge as one of the world’s premier academic and industrial hubs accelerating the future of computing,” says Jay Gambetta, director of IBM Research and IBM Fellow, and IBM chair of the MIT-IBM Computing Research Lab. “Together, the brightest minds at MIT and IBM will rethink how models, algorithms, and systems are designed for an era that will be defined by the sum of what’s possible when AI and quantum computing come together.”

“For a decade, the collaboration between MIT and IBM has produced leading-edge research and innovation, and provided mentorship and supported the professional growth of researchers both at MIT and IBM,” says Anantha Chandrakasan, MIT’s provost, who, as then-dean of the School of Engineering, spearheaded the creation of the MIT-IBM Watson AI Lab and will continue as MIT chair of the lab. “The incredible technical achievements set the bar high for our work together over the next 10 years. I look forward to another decade of impact.”

Addressing the next frontiers in computation

The MIT-IBM Computing Research Lab will serve as a focal point for joint research between MIT and IBM in AI, algorithms, and quantum computing, as well as the integration of these technologies into hybrid computing systems. The lab is designed to accelerate progress toward powerful new computational approaches that take advantage of rapid advances in AI and quantum-centric supercomputing, including those that combine maturing quantum hardware with classical systems and advanced AI methods.

This research initiative will include improving capabilities and integrating AI with traditional computing, alongside pursuing advances in small, efficient, modular language model architectures, novel AI computing paradigms, and enterprise-focused AI systems designed for deployment in real-world environments, where reliability, transparency, and trust are essential.

In parallel, the lab will rethink the mathematical and algorithmic foundations that underpin the next era of computing by accelerating the development of novel quantum algorithms for complex problems, with impacts in areas such as materials science, chemistry, and biology.

Additionally, the lab will investigate mathematical and algorithmic foundations of machine learning, optimization, Hamiltonian simulations, and partial differential equations, which are used to approximate the behaviors of dynamical systems that currently stump classical systems beyond limited scales and accuracy. Innovations from the lab could have wide implications for global industries, from more accurate weather and air turbulence prediction to better forecasts of financial market performance. Similarly, with improved optimization approaches, research from the lab could help lower risks in areas like finance, predict protein structures for more targeted medicine, and streamline global supply chains.

With its focus on AI, algorithms, and quantum, the MIT-IBM Computing Research Lab will complement and enhance the work of two of MIT’s strategic initiatives, the MIT Generative AI Impact Consortium and the MIT Quantum Initiative. MIT President Sally Kornbluth launched these strategic initiatives to broaden and deepen MIT’s impact in developing solutions to serious global challenges. The MIT-IBM Computing Research Lab will also leverage IBM’s longtime leadership and expertise in quantum computing. As part of its ambitious roadmap, IBM has laid out a clear path to delivering the world’s first fault-tolerant quantum computer by 2029, and is working across industries to drive value from quantum-centric supercomputing, tightly integrating quantum computers with high-performance computing and AI accelerators to solve the world’s toughest problems.

Deep integration with scientific domains

The MIT-IBM Computing Research Lab will also continue to serve as a foundation for training the next generation of computational scientists and innovators. It will do so by engaging faculty and students across MIT departments, enabling new computational approaches to accelerate discoveries in the physical and life sciences.

The lab will continue to be co-directed by Aude Oliva, senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, and David Cox, vice president of AI Foundations at IBM Research. MIT and IBM have appointed leads for each of the lab’s three focus areas — AI, algorithms, and quantum. Jacob Andreas, associate professor in the Department of Electrical Engineering and Computer Science (EECS), and Kenney Ng, principal research scientist at IBM Research and the MIT-IBM science program manager, will co-lead AI; Vinod Vaikuntanathan, the Ford Foundation Professor of Engineering in EECS, and Vasileios Kalantzis, IBM Research senior research scientist, will co-lead algorithms; and Aram Harrow, professor of physics, and Hanhee Paik, IBM director of Quantum Algorithm Centers, will co-lead quantum.

“The MIT-IBM Computing Research Lab reflects an important expansion of the collaboration between MIT and IBM and the increasing connections across AI, algorithms, and quantum. This deepened focus also underscores a strong alignment with the MIT Schwarzman College of Computing’s mission to advance the forefront of computing and its integration across disciplines,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and MIT co-chair of the lab. “I’m excited about what this next chapter will enable in these three areas, and their impact broadly.”

Building on nearly a decade of collaboration

The MIT-IBM Watson AI Lab helped pioneer a model for academic-industry research collaboration, aligning long-term scientific inquiry with real-world impact. Since its inception, the lab has funded over 210 research projects involving over 150 MIT faculty members and over 200 IBM researchers. Collectively, the projects have led to over 1,500 peer-reviewed articles. The lab also helped shape the career growth of a number of MIT students and junior researchers, funding more than 500 students and postdocs.

“The true measure of this lab is not just innovation, but transformation of a field. Hundreds of students have contributed to thousands of publications in top conferences and journals, demonstrating their capabilities to address meaningful problems,” says Oliva. “The MIT-IBM Computing Research Lab builds on an extraordinary legacy of impact to advance a trusted collaboration that will redefine the future of AI and quantum computing in a way never seen before.”

“By coupling academic rigor with industrial scale, the lab aims to define the computational foundations that will power the next generation of AI, quantum, and scientific breakthroughs,” says Cox. “By bringing together advances in AI, algorithms, and quantum computing under one integrated research effort, we’re creating the conditions to rethink the mathematical and computational foundations of science and engineering.”

The MIT-IBM Computing Research Lab will capitalize on this foundation, expanding both the scientific scope and the ecosystem of collaborators across the Cambridge-Boston region and beyond.



from MIT News https://ift.tt/9cKzX1x

MIT engineers’ virtual violin produces realistic sounds

There is no question that violin-making is an art form. It requires a musician’s ear, a craftsperson’s skill, and an historian’s appreciation of lessons learned over time. Making a violin also takes trust: Violin makers, or luthiers, often must wait until the instrument is finished before they can hear how all their hard work will sound.

But a new tool developed by MIT engineers could help luthiers play around with a violin’s design and tweak its sound even before a single part is carved.

In a study appearing today in the journal npj Acoustics, the MIT team reports on a new “computational violin” — a computer simulation that captures the detailed physics of the instrument and realistically produces the sound of a violin when its strings are plucked.

While there are software programs and plug-ins that enable users to play around with virtual violins, their sounds are typically the result of sampling and averaging over thousands of notes played by actual violins.

In contrast, the new computational violin takes a physics-based approach: It produces sound based on the way the instrument, including its vibrating strings, physically interacts with the surrounding air.

As a demonstration, the researchers applied the computational violin to play two short excerpts: one from “Bach’s Fugue in G Minor,” and another from “Daisy Bell” — a nod to the first song that was ever produced by a computer-synthesized voice.

The computational violin currently simulates the sound of plucked strings — a type of playing that musicians know as “pizzicato.” Violin bowing, the researchers say, is a much more complicated interaction to model. However, the computational violin represents the first physics-based foundation of a plucked violin sound that could one day be paired with a model of bowing to produce realistic, bowed violin music.

For now, the team says the new virtual violin could be used in the initial stages of violin design. Luthiers can tweak certain parameters such as a violin’s wood type or the thickness of its body, and then listen to the sound that the instrument would make in response.

“These days, people try to improve designs little by little by building a violin, comparing the sound, then making a change to the next instrument,” says Yuming Liu, senior research scientist at MIT. “It’s very slow and expensive. Now they can make a change virtually and see what the sound would be.”

“We’re not saying that we can reproduce the artisan’s magic,” adds Nicholas Makris, professor of mechanical engineering at MIT. “We’re just trying to understand the physics of violin sound, and perhaps help luthiers in the design process.”

Makris and Liu’s MIT co-authors include Arun Krishnadas PhD ’23 and former postdoc Bryce Campbell, along with Roman Barnas of the North Bennet Street School.

Sound matrix

The quality of a violin’s sound is determined by its dimensions and design. The instrument is made from thoughtfully crafted parts and materials that all work to generate and amplify sound. In recent years, scientists have sought to understand what artisans have intuited for centuries, in terms of what specific parameters shape a violin’s sound.

In one early effort in 2006, scientists, as part of the Strad3D project, put a rare Stradivarius violin through a CT scanner. The violin was crafted in 1715 by the master violinmaker Antonio Stradivari, during what is considered the “Golden Age” of violin making. To better understand the violin’s anatomy and its relation to sound, the scientists scanned the instrument and produced 600 “slices,” or views, of the violin.

The CT scans are available online for people to view and use as data for their own experiments. For their study, Makris and his colleagues first imported the CT scans into a solid modeling software program to generate a detailed three-dimensional model of the violin. They then ran a finite element simulation, essentially dividing the violin into millions of tiny individual cubes, or “elements.”

For each cube, they noted its material type, such as if a cube from the violin’s back plate is made from maple or spruce, or if a string is made from steel or natural fibers. They then applied physics-based equations of stress and motion to predict how each material element would move in relation to every other element across the instrument.

They also carried out a similar process for the air surrounding the violin, dividing up a roughly cubic-meter volume of air and applying acoustic wave equations to predict how each tiny parcel of air would move and contribute to generating sound.

“The entire thing is a matrix of millions of individual elements,” explains Krishnadas. “And ultimately, you see this whole three-dimensional being, which is the violin and the air all connected and interacting with each other.”

A plucky model

The team then simulated how the new computational violin would sound when plucked. When a violinist plucks a string, they pull the string sideways and let it go, causing the string to vibrate. These vibrations travel across the instrument and inside it; the air’s vibrations are amplified as they travel out of the violin and into the surroundings, where a listener hears the vibrations as sound.

For their purposes, the engineers simulated a simple string pluck by directing one of the virtual violin’s strings to stretch out and then rebound. The simulation computed all the resulting motions and vibrations of the millions of elements in the violin, and the sound that the pluck would produce.

For notes that require pressing down on a violin’s fingerboard, they simulated the same plucking, and in addition, included a condition in which the string is held fixed in the section of the fingerboard where a violinist’s finger would press down.
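The paper’s model is a full three-dimensional finite-element simulation of the violin and the surrounding air, which is far beyond a short example. As a much simpler illustration of the same pluck-and-release setup, here is a toy one-dimensional string solved with finite differences; the grid size, wave speed, and pluck position are arbitrary choices for illustration, not values from the study.

```python
import numpy as np

# Toy 1-D plucked string via an explicit finite-difference scheme.
# The MIT study uses a 3-D finite-element model of the violin and air;
# this only illustrates the "stretch, release, and vibrate" idea.
N = 100            # grid points along the string
c = 1.0            # wave speed (arbitrary units)
dx = 1.0 / (N - 1)
dt = 0.5 * dx / c  # time step satisfying the CFL stability condition
r2 = (c * dt / dx) ** 2

x = np.linspace(0.0, 1.0, N)
# "Pluck": displace the string into a triangle peaked at x = 0.3, then release.
u = np.where(x < 0.3, x / 0.3, (1.0 - x) / 0.7)
u[0] = u[-1] = 0.0          # ends pinned, like a string fixed at nut and bridge
u_prev = u.copy()           # released from rest: zero initial velocity

for _ in range(500):        # march the wave equation forward in time
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

print(float(np.abs(u).max()))  # the string is still vibrating after release
```

In this picture, the fingered-note condition described above would amount to forcing an additional interior grid point to zero, shortening the vibrating length of the string.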

The researchers carried out this computational process to virtually pluck out the notes in several measures of “Daisy Bell” and “Bach’s Fugue in G Minor.”

“If there’s anything that’s sounding mechanical to it, it’s because we’re using the exact same time function, or standard way of plucking, for each note,” says Makris, who is himself a lute player. “A musician will adapt the way they’re plucking, to put a little more feeling on certain notes than others. But there could be subtleties which we could incorporate and refine.”

As it is, the new computational model is the first to generate realistic sound based on the laws of physics and acoustics. The researchers say that violin makers could use the model to test how a violin might sound when certain dimensions or properties are changed. For instance, when the researchers varied the thickness of the virtual violin’s back plate or changed its wood type, they could hear clear differences in the resulting sounds.

“You can tweak the model, to hear the effect on the sound,” Makris says. “Since everything obeys the laws of physics, including a violin and the music it makes, this approach can add an appreciation to what makes violin sound. But ultimately, we get most of our inspiration from the artisans.”

This work was supported, in part, by an MIT Bose Research Fellowship.



from MIT News https://ift.tt/u7lgihe

Tuesday, April 28, 2026

Enabling privacy-preserving AI training on everyday devices

A new method developed by MIT researchers can accelerate a privacy-preserving artificial intelligence training method by about 81 percent. This advance could enable a wider array of resource-constrained edge devices, like sensors and smartwatches, to deploy more accurate AI models while keeping user data secure.

The MIT researchers boosted the efficiency of a technique known as federated learning, which involves a network of connected devices that work together to train a shared AI model.

In federated learning, the model is broadcast from a central server to wireless devices. Each device trains the model using its local data and then transfers model updates back to the server. Data are kept secure because they remain on each device.

But not all devices in the network have enough capacity, computational capability, and connectivity to store, train, and transfer the model back and forth with the server in a timely manner. This causes delays that worsen training performance.

The MIT researchers developed a technique to overcome these memory constraints and communication bottlenecks. Their method is designed to handle a heterogeneous network of wireless devices with varied limitations.

This new approach could make it more feasible for AI models to be used in high-stakes applications with strict security and privacy standards, like health care and finance.

“This work is about bringing AI to small devices where it is not currently possible to run these kinds of powerful models. We carry these devices around with us in our daily lives. We need AI to be able to run on these devices, not just on giant servers and GPUs, and this work is an important step toward enabling that,” says Irene Tenison, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.

Her co-authors include Anna Murphy ’25, a machine-learning engineer at Lincoln Laboratory; Charles Beauville, a visiting student from Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and a machine-learning engineer at Flower Labs; and senior author Lalana Kagal, a principal research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. The research will be presented at the IEEE International Joint Conference on Neural Networks. 

Reducing lag time

Many federated learning approaches assume all devices in the network have enough memory to train the full AI model, and stable connectivity to transmit updates back to the server quickly.

But these assumptions fall short with a network of heterogeneous devices, like smartwatches, wireless sensors, and mobile phones. These edge devices have limited memory and computational power, and often face intermittent network connectivity.

The central server usually waits to receive model updates from all devices, then averages them to complete the training round. This process repeats until training is complete.

“This lag time can slow down the training procedure or even cause it to fail,” Tenison says.

To overcome these limitations, the MIT researchers developed a new framework called FTTE (Federated Tiny Training Engine) that reduces the memory and communication overhead needed by each mobile device.

Their framework involves three main innovations.

First, rather than broadcasting the entire model to all devices, FTTE sends a smaller subset of model parameters instead, reducing the memory requirement for each device. Parameters are internal variables the model adjusts during training.

FTTE uses a special search procedure to identify parameters that will maximize the model’s accuracy while staying within a certain memory budget. That limit is set based on the most memory-constrained device.
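The paper does not spell out the search procedure, but the idea of picking a high-value subset of parameters under a fixed memory budget can be sketched as a simple greedy selection. Everything here (the scoring, the function name, the greedy rule) is illustrative, not FTTE's actual algorithm:

```python
import numpy as np

def select_trainable_subset(param_scores, param_sizes, memory_budget):
    """Greedily keep the highest-scoring parameters (e.g., scored by
    estimated impact on accuracy) until the memory budget of the most
    constrained device is exhausted. A stand-in sketch, not the
    paper's search procedure."""
    order = np.argsort(param_scores)[::-1]  # highest score first
    chosen, used = [], 0
    for idx in order:
        if used + param_sizes[idx] <= memory_budget:
            chosen.append(int(idx))
            used += param_sizes[idx]
    return chosen

# Four parameter groups with hypothetical scores and memory costs:
scores = np.array([0.9, 0.2, 0.7, 0.4])
sizes = np.array([3, 1, 2, 2])
print(select_trainable_subset(scores, sizes, memory_budget=5))  # [0, 2]
```

Only the selected indices would be sent to (and trained on) each device, so per-device memory stays under the budget set by the weakest device in the network.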

Second, the server updates the model using an asynchronous approach. Rather than waiting for responses from all devices, the server accumulates incoming updates until it reaches a fixed capacity, then proceeds with the training round.

Third, the server weights updates from each device based on when it received them. In this way, older updates don’t contribute as much to the training process. These outdated data can hold the model back, slowing the training process and reducing accuracy.

“We use this semi-asynchronous approach because we want to involve the least powerful devices in the training process so they can contribute their data to the model, but we don’t want the more powerful devices in the network to stay idle for a long time and waste resources,” Tenison says.
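The buffered aggregation with staleness weighting described above can be sketched as follows. The exponential-decay weighting and all names are assumptions for illustration; the paper's exact weighting rule may differ:

```python
import numpy as np

def aggregate_when_full(buffer, global_model, capacity, decay=0.5):
    """Semi-asynchronous aggregation sketch: once `capacity` updates
    have arrived, combine them with weights that shrink with
    staleness, so updates computed against older model versions
    contribute less. `buffer` holds (update_vector, staleness) pairs,
    where staleness counts how many rounds old the sender's model was.
    Illustrative only, not the paper's exact rule."""
    if len(buffer) < capacity:
        return global_model, buffer  # keep waiting for more updates
    weights = np.array([decay ** staleness for _, staleness in buffer])
    weights /= weights.sum()  # normalize to a weighted average
    updates = np.stack([update for update, _ in buffer])
    step = (weights[:, None] * updates).sum(axis=0)
    return global_model + step, []  # apply step, clear the buffer

model = np.zeros(2)
# One fresh update and one that is two rounds stale:
buf = [(np.array([1.0, 0.0]), 0), (np.array([0.0, 1.0]), 2)]
model, buf = aggregate_when_full(buf, model, capacity=2)
print(model)  # [0.8 0.2] -- the stale update is down-weighted
```

With `decay=0.5`, the two-rounds-stale update gets weight 0.25 before normalization, so the fresh device dominates the averaged step without the stale device being excluded entirely.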

Achieving acceleration

The researchers tested their framework in simulations with hundreds of heterogeneous devices and a variety of models and datasets. On average, FTTE enabled the training procedure to reach completion 81 percent faster than standard federated learning approaches.

Their method reduced the on-device memory overhead by 80 percent and the communication payload by 69 percent, while attaining accuracy close to that of other techniques.

“Because we want the model to train as fast as possible to save the battery life of these resource-constrained devices, we do have a tradeoff in accuracy. But a small drop in accuracy could be acceptable in some applications, especially since our method performs so much faster,” she says.

FTTE also demonstrated effective scalability and delivered higher performance gains for larger groups of devices.

In addition to these simulations, the researchers tested FTTE on a small network of real devices with varying computational capabilities.

“Not everyone has the latest Apple iPhone. In many developing countries, for instance, users might have less powerful mobile phones. With our technique, we can bring the benefits of federated learning to these settings,” she says.

In the future, the researchers want to study how their method could be used to increase the personalized performance of AI models on each device, rather than focusing on the average performance of the model. They also want to conduct larger experiments on real hardware.

This work was funded, in part, by a Takeda PhD Fellowship.



from MIT News https://ift.tt/C8dpkOl

With a swipe of a magnet, microscopic “magno-bots” perform complex maneuvers

Under a microscope, a bouquet of lollipop-like structures, each smaller than a grain of sand, waves gently in a petri dish of liquid. Suddenly, they snap together, like the jaws of a Venus flytrap, as a scientist waves a small magnet over the dish. What was previously an assemblage of tiny passive structures has transformed instantly into an active robotic gripper.

The lollipop gripper is one demonstration of a new type of soft magnetic hydrogel developed by engineers at MIT and their collaborators at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and the University of Cincinnati. In a study appearing today in the journal Matter, the MIT team reports on a new method to print and fabricate the gel, which can be made into complex, magnetically activated three-dimensional structures.

The new gel could be the basis for soft, microscopic, magnetically responsive robots and materials. Such magno-bots could be used in medicine, for instance to release drugs or grab biopsies when directed by an external magnet.

Making objects move with magnets is nothing new, at least at the macroscale. We can, for example, wave a refrigerator magnet over a pile of paper clips that will trail the magnet in response. And at the microscale, scientists have designed a variety of magnetic “micro-swimmers” — components that are smaller than a millimeter and can be directed remotely by a magnet to squeeze through small spaces. For the most part, these designs work by mixing magnetic particles into a printable resin and pulling the entire swimmer in the direction of an external magnet.

In contrast, the MIT team’s new material can be made into even more complex and deformable structures with micron-scale precision. These capabilities could enable a magnetic millibot to move individual components and perform more complex maneuvers.

“We can now make a soft, intricate 3D architecture with components that can move and deform in complex ways within the same microscopic structure,” says study author Carlos Portela, the Robert N. Noyce Career Development Associate Professor of Mechanical Engineering at MIT. “For soft microscopic robotics, or stimuli-responsive matter, that could be a game-changing capability.”

The study’s MIT co-authors include graduate students Rachel Sun and Andrew Chen, along with Yiming Ji and Daryl Yee of EPFL and Eric Stewart of the University of Cincinnati.

In a flash

At MIT, Portela’s group develops new metamaterials — materials engineered with unique, microscopic architectures that give rise to beyond-normal material properties. Portela has fabricated a variety of such metamaterials, including extremely tough and stretchy architectures and designs that can manipulate sound and withstand violent impacts.

Most recently, he’s expanded his research to “programmable” materials, which can be engineered to change their properties in response to stimuli, such as certain chemicals, light, and electric and magnetic fields.

From the team’s perspective, magnetic stimuli stand out from the rest.

“With a magnetically responsive material, we have control at a distance and the response is instantaneous,” says co-lead author Andrew Chen. “We don’t have to wait for a slow chemical reaction or physical process, and we can manipulate the material without touching it.”

For the new study, the team aimed to create a magnetically responsive metamaterial that can be made into structures smaller than a millimeter. Researchers typically fabricate microstructures by using two-photon lithography — a high-resolution 3D printing technique that flashes a laser into a small pool of resin. With repeated flashes, the laser traces a microscopic pattern into the resin, which solidifies into the same pattern, ultimately creating a tiny, three-dimensional structure, layer by layer.

While 3D resin printing produces intricate microstructures, using the same process to print magnetic structures has been a challenge. Researchers have tried to combine the resin with magnetic nanoparticles before printing the mixture. But magnetic particles are essentially bits of metal that inherently scatter light away or agglomerate and sediment unintentionally. Scientists have found that any magnetic particles in the resin can reduce the laser’s power at a given spot and weaken the resulting structure or prevent its printing altogether.

“Directly 3D printing deformable micron-scale structures with a high fraction of magnetic particles is extremely difficult, often involving a tradeoff between magnetic functionality and structural integrity,” says Sun, a co-lead author on the work.

A printed double-dip

The researchers created a new way to fabricate magnetic microstructures, by combining 3D resin printing with a double-dip process. The researchers first applied conventional resin printing to create a microstructure using a typical polymer gel, with no added magnetic particles. Then they dipped the printed gel into a solution containing iron ions, which the gel can absorb. The iron-soaked structure is then dipped again in a second solution of hydroxide ions. The iron ions in the gel bond with the hydroxide ions, creating iron-oxide nanoparticles that are inherently magnetic.

With this new process, the team can print intricate structures smaller than a millimeter, and add magnetic properties to the structures after printing. What’s more, they can control how magnetic a structure’s individual features are: by tuning the laser’s power as they print certain features, they can set how cross-linked, or “tight,” the gel is when printed. The tighter the gel, the fewer magnetic particles it can form. In this way, the researchers can determine how magnetic each tiny feature becomes.

“This provides unprecedented design freedom to print multifunctional structures and materials at the microscale,” Sun says.

As a demonstration, the team fabricated ball-and-stick structures resembling tiny lollipops. The structures were less than a millimeter in height, with balls that were smaller than a grain of sand. The researchers printed the lollipops out of polymer gel and infused each ball with different amounts of magnetic particles, giving them various degrees of magnetism. Under a microscope, they observed that when they passed an ordinary refrigerator magnet over the structures, the lollipops pulled toward the magnet in various degrees, in a configuration that mimicked gripping fingers.

“You could imagine a magnetic architecture like this could act as a small robot that you could guide through the body with an external magnet, and it could latch onto something, for instance to take a biopsy,” Portela says. “That is a vision that others can take from this work.”

The team also fabricated a magnetically responsive, “bistable” switch. They first printed a small, millimeter-long rectangle of polymer gel and attached four tiny, oar-like magnetic structures to either side. Each oar measured about 8 microns thick — about the size of a red blood cell. When the team applied a magnet to one end of the rectangle, the oars flipped toward the magnet, pulling the rectangle in the same direction and locking it in that position. When the magnet was applied to the other side, the oars flipped again, pulling the rectangle, like a switch, in the opposite direction.

“We think this is a new kind of bistable mechanism that could be used, for instance, in a microfluidic device, as a magnetic valve to open or shut some flow,” Portela says. “For now, we’ve figured out how to fabricate magnetic complex architectures at the microscale and also spatially tune their properties. That opens up a lot of interesting ideas for soft miniature robots going forward.”

This research was supported, in part, by the National Science Foundation and the MathWorks seed grant program.



from MIT News https://ift.tt/V9ZM2Sk

Monday, April 27, 2026

Six from MIT awarded 2026 Paul and Daisy Soros Fellowships for New Americans

Six MIT affiliates — Denisse Córdova Carrizales SM ’26; Ria Das ’21, MNG ’22; Ronak Desai; Stacy Godfreey-Igwe ’22; Arya Rao; and Ananthan Sadagopan ’24 — have been named 2026 P.D. Soros Fellows. In addition, P.D. Soros Fellow Avinash Vadali will begin a PhD in condensed-matter physics at MIT this fall.

The fellowship provides immigrants and the children of immigrants up to $90,000 in tuition and stipend support for up to two years of graduate studies. Interested students should contact Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development.

Denisse Córdova Carrizales

Córdova Carrizales SM '26 is a PhD student in nuclear science and engineering in the lab of Professor Mingda Li, where she completed her master's work earlier this year. She is working on synthesizing and characterizing quantum materials with the goal of bridging fundamental science and industry to make our technology more energy-efficient and sustainable.

Córdova Carrizales, who is of Mexican descent, grew up in Houston, Texas, before attending Harvard University, where she graduated in 2023 with a BA in physics. At Harvard, she dove into experimental condensed-matter research. She also conducted research with the Princeton Plasma Physics Laboratory, Commonwealth Fusion Systems, and VEIR, spanning computational plasma physics and high-temperature superconducting magnet and cable engineering.

Her work includes coauthored papers in Nature Physics, Nature Materials, and Advanced Materials, as well as lead-author publications in Nano Letters and Physical Review Materials. In 2023, she received the LeRoy Apker Award from the American Physical Society.

Beyond research, Córdova Carrizales has advocated in Congress for nuclear disarmament and risk reduction and has written a piece on the nuclear stockpile stewardship program. At Harvard, she founded an organization to support first-generation college students studying physics. In a completely different arena, she performed as the lead in an off-Broadway show in New York.

Ria Das

Das ’21, MNG ’22 is a PhD student in the MIT Department of Electrical Engineering and Computer Science. She graduated from MIT in 2021 with a dual BS in mathematics and in electrical engineering and computer science, and received her master of engineering degree in 2022.

The daughter of Indian immigrant parents, Das grew up in Nashua, New Hampshire, where she struggled with issues of belonging and identity. These questions came to the forefront during her PhD studies at Stanford University. Das decided to step off the academic treadmill by taking a leave from her PhD to think more deeply about these topics.

During her leave, she traveled around the country before moving to New York to work at Basis Research Institute, an AI research nonprofit. As a research associate, Das developed an urban data team that worked with federal and municipal government agencies on issues of economic and housing equity, blending her interests in science and social problems. She then returned to MIT to complete her doctoral studies.

Today, Das works with Professor Joshua Tenenbaum in the Department of Brain and Cognitive Sciences to study how people undergo conceptual change to build more robust, accessible systems for automated (social) science and improved educational design. Looking ahead, she hopes to become a professor, collaborating closely with policy practitioners.

Ronak Desai

Desai is currently a student in the Harvard/MIT MD-PhD program, where his PhD focuses on chemistry. The son of immigrants from Gujarat, India, Desai was born in Tyler, Texas, and grew up in nearby Lindale. He earned his undergraduate degree at the University of Texas at Austin.

Desai spent a semester interning at the U.S. House of Representatives as a Bill Archer Fellow. He also completed biomedical research focused on studying and engineering novel polyketide synthases, aspiring to produce next-generation antibiotics by harnessing such newly engineered synthases.

Desai graduated with degrees in chemistry and biochemistry as a first-generation college student, Health Science Scholar, and Dean’s Honored Graduate, receiving nine scholarships throughout college. His research has resulted in publications in journals such as Cell and Nature Communications.

Desai hopes to combine his passions for medicine, science, and public policy in his career to advance the treatment of infectious diseases. He is conducting his doctoral research under Professor James J. Collins in the MIT Department of Biological Engineering and the Harvard-MIT Program in Health Sciences and Technology. Desai’s research centers on using artificial intelligence to discover and design novel antibiotics, an opportunity to advance treatments for patients worldwide.

Stacy Godfreey-Igwe

Godfreey-Igwe ’22 attended MIT as a QuestBridge and Gates Scholar, graduating in 2022 with a BS in mechanical engineering and a concentration in sustainable design. A Burchard Scholar, she also became the first student at MIT to complete a major in African and African diaspora studies. After graduating, she pursued a science policy fellowship in Washington and interned at the U.S. Department of Energy’s Building Technologies Office, where she worked to broaden adoption of heat pump technologies across diverse stakeholders.

Growing up in Richardson, Texas, as the daughter of Nigerian immigrants, Godfreey-Igwe developed an early awareness of structural inequality, particularly in how families like hers managed the burden of the severe Texas heat and high electricity costs. These experiences formed the basis of her lifelong journey seeking to address systemic inequities embedded in everyday systems.

Godfreey-Igwe is currently a doctoral student in the joint engineering and public policy - civil and environmental engineering program at Carnegie Mellon University (CMU), where she was selected for the inaugural CMU Rales Fellowship cohort. At CMU, she studies the impact of extreme heat on household energy use, particularly in vulnerable communities.

Beyond her research, Godfreey-Igwe organizes outreach and programming for local underrepresented students in STEM and participates in institutional efforts to expand access and belonging among graduate students. She aims to be a scholar and advocate whose work, drawing on her personal experiences, informs equitable energy solutions in a warming world.

Arya Rao

Rao is a student in the Harvard/MIT MD-PhD program. She completed her undergraduate degrees in biochemistry and computer science at Columbia University. Working with professors Pardis Sabeti (Harvard University) and Sangeeta Bhatia (MIT), Rao uses evolution as a lens for therapeutic design, developing artificial intelligence methods that read the genetic record and guide new intervention strategies.

Leveraging her dual training in medicine and computer science, Rao also leads the MESH AI Research Group at Mass General Brigham, where she develops simulation-based tools that test clinical AI systems in realistic educational settings before they reach patients.

Rao has been recognized for her work with a Forbes 30 Under 30 honor, the Massachusetts Medical Society Information Technology Award, the Harvard Presidential Public Service Fellowship, a Harvard Medical School Dean’s Innovation Award, and a Ladders to Cures Accelerator Award. She has published more than 30 manuscripts in publications including JAMA, Nature, and NEJM AI.

Growing up in rural northern Michigan, Rao was inspired by her parents, Konkani immigrants from India, who served as two of the area’s only physicians. She has always imagined a career that could leverage scientific innovation to improve patient care, especially for communities without access like her own. Going forward, she envisions a career as a surgeon-scientist that keeps her close to patients while taking on leadership that shapes how new technologies are evaluated, implemented, and made usable in the places that need them most.

Ananthan Sadagopan 

Sadagopan ’24 grew up in Westborough, Massachusetts, as the child of immigrants from Chennai, India. He participated in chemistry competitions, winning the You Be the Chemist Challenge in middle school and earning a gold medal at the International Chemistry Olympiad for the United States in high school. He attended MIT for college, graduating in three years in 2024 with a bachelor’s degree in chemistry and biology.

At MIT, Sadagopan worked with Srinivas Viswanathan on computational biology projects and with William Gibson, Matthew Meyerson, and Stuart Schreiber on chemical biology projects. He led projects characterizing somatic perturbations of X chromosome inactivation in cancer, developing a machine-learning tool for cancer dependency prediction, using small molecules to relocalize proteins in cells, and creating a generalizable strategy to drug the most mutated gene in cancer, TP53. Sadagopan’s work has been patented and published in journals such as Cell and Nature Chemical Biology.

Sadagopan was president of the chemistry undergraduate association and led the events committee for MIT Science Olympiad. He is currently pursuing a PhD in biological and biomedical science at Harvard University as a Hertz Fellow and Herchel Smith Fellow. He is interested in de-risking new therapeutic strategies and hopes that his work will inspire pharma companies to bring first-in-class therapies to patients.



from MIT News https://ift.tt/yC3jDeK