Wednesday, May 13, 2026

A new approach to cancer vaccination yields more powerful T cells

MIT engineers have developed a new way to amplify the T-cell response to mRNA vaccines — an advance that could lead to much more powerful cancer vaccines and stronger protection against infectious diseases.

Most vaccines work by activating antigen-presenting cells, such as dendritic cells, generating both antibodies and T cells that can target the vaccine antigen. In this study, the researchers boosted the T-cell response with a new type of vaccine adjuvant (a material that helps stimulate the immune system). The new adjuvant consists of mRNA molecules encoding genes that turn on immune signaling pathways and promote a supercharged T-cell response.

In studies in mice, this mRNA-encoded adjuvant enabled the immune system to completely eradicate most tumors, either on its own or delivered along with a tumor antigen. The adjuvant also boosted the T-cell response to vaccines against influenza and Covid-19.

“When these adjuvant mRNAs are included in the vaccines, the number of antigen-targeted T cells is substantially increased. These T cells play an important role in the immune response, assisting in the clearance of virally infected cells or, in the case of cancer, killing cancerous cells,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science.

Anderson and Christopher Garris, an assistant professor at Harvard Medical School and Massachusetts General Hospital, are the senior authors of the study, which appears today in Nature Biotechnology. The paper’s lead authors are Akash Gupta, a former Koch Institute research scientist who is now an assistant professor at the University of Houston; Kaelan Reed, an MIT graduate student; and Riddha Das, a research fellow at Harvard Medical School and MGH. Robert Langer, the David H. Koch Institute Professor at MIT, and Ralph Weissleder, a professor of radiology and systems biology at MGH and Harvard Medical School, are also authors.

More powerful vaccines

Vaccines that stimulate the body’s immune system to attack tumors have shown promise in clinical trials, and a handful have been FDA-approved for certain cancers. In some patients, these vaccines stimulate a strong response, but in others the response is too weak to kill the cancerous cells.

The MIT-MGH team wanted to find a way to make those immune responses more powerful. One way to do that is to deliver immune-stimulating molecules called cytokines along with a vaccine. However, cytokines can overstimulate the immune system, leading to potentially severe side effects.

As an alternative approach, the researchers decided to deliver mRNA strands encoding two genes, IRF8 and NIK, which are involved in antigen presentation and can switch immune cells into a more active state.

NIK is an enzyme that activates a signaling pathway involved in immunity and inflammation, while IRF8 is a transcription factor that helps program dendritic cells, particularly a subset called cDC1, which are especially effective at activating T cells. These antigen-presenting cells can digest foreign antigens and present them to T cells, stimulating the T cells to mount an immune response against the antigen.

“We see that the dendritic cells start shifting toward a more cDC1 phenotype, which is the most important dendritic cell phenotype and can generate a stronger T-cell response,” Gupta says. 

The researchers packaged the mRNA in lipid nanoparticles similar to those used to deliver mRNA Covid vaccines, but with a different chemical composition that promotes their delivery to the spleen after being injected intravenously. 

Inside the spleen, the particles encounter antigen-presenting cells, including dendritic cells. Within 24 hours, these cells begin expressing IRF8 and NIK, and both of these pathways help drive dendritic cells to mature and become activated so that they can prime an anti-tumor response. 

Over a few days to a week, the T-cell population expands. These T cells, along with other immune cells such as natural killer (NK) cells, can then recognize and attack tumors.

“Most cancer immunotherapies rely on external signals to activate immune cells. We take a different approach — reprogramming immune cells from within by targeting their internal signaling machinery, enabling a more potent and durable anti-tumor response,” Das says. 

Stronger T cells

The researchers tested the immune-remodeling mRNAs in several mouse models of cancer, including an aggressive bladder cancer, colon carcinoma, melanoma, and metastatic lung cancer. In nearly all of these mice, the injected mRNA stimulated a strong T-cell response that significantly slowed tumor growth and in many cases completely eradicated the tumors. This happened even when the mice were not given a vaccine against a specific cancer antigen. When they were, the response was even stronger.

“We showed that you can get an anti-cancer response with these adjuvants without including the antigen, just by activating the immune system. However, cancer-specific antigens with the adjuvants in a vaccine further improved the responses,” Anderson says.

The mRNA adjuvant also enhanced the immune response to immunotherapy drugs called checkpoint inhibitors. These drugs, which work by lifting a brake that tumor cells put on T cells, are FDA-approved to treat several kinds of cancer. They don’t work for all patients, but combining them with the mRNA vaccine adjuvant could offer a way to make them more effective, the researchers say.

“The microenvironment of solid tumors is often hostile to T cells and represents a major barrier to effective immunotherapy. We find that immune remodeling with these adjuvants creates a T cell–permissive environment and promotes tumor rejection,” Garris says.

The researchers also explored whether their new adjuvant could boost the immune response to vaccination against viral infection. When they delivered the mRNA particles along with Covid or flu vaccines, they found that the vaccines generated a 10-to-15-fold stronger T-cell response in the mice.

The researchers now plan to test this approach in additional animal models, in hopes of developing it for use in both cancer and infectious diseases. 

“While there are differences between the mouse systems that we’ve worked in and humans, we are optimistic that these adjuvants will work in humans and could improve a range of different vaccines,” Anderson says.

The research was funded by Sanofi, the National Institutes of Health, the Marble Center for Cancer Nanomedicine, and the Koch Institute Support (core) Grant from the National Cancer Institute.



from MIT News https://ift.tt/XRIrhxM

Tuesday, May 12, 2026

Improving the reliability of circuits for quantum computers

Quantum computers could someday solve pressing problems that are too convoluted for classical computers, such as modeling complex molecular interactions to streamline drug discovery and materials development. 

But to build a superconducting quantum computer that is large and resilient enough for real-world applications, scientists must precisely engineer thousands of quantum circuits so they perform operations with the lowest possible error rate.

To help scientists design more predictable circuits, researchers from MIT and Lincoln Laboratory developed a technique to measure a property that can unexpectedly cause a superconducting quantum circuit to deviate from its expected behavior. Their analysis revealed the source of these distortions, known as second-order harmonic corrections, which can lead to underperforming circuit architectures.

The MIT researchers fabricated a device to detect second-order harmonic corrections, identify their origin, and precisely measure their strength. This technique could help scientists deliberately design quantum circuits that can counteract the effects of these deviations.

This is especially important in larger and more complicated quantum circuits, where the negative impact of second-order harmonic corrections can be amplified. 

“As we make our quantum computers bigger and we want to have more precise control over the parameters of these devices, identifying and measuring these effects is going to be important for us to have a precise understanding of how these systems are constructed. It is always important to keep diving down into the circuit to see if there is an effect you didn’t expect, which impacts how your device is performing,” says Max Hays, a research scientist in the Engineering Quantum Systems (EQuS) group of the Research Laboratory of Electronics (RLE) and co-lead author of a paper on this research.

Hays is joined on the paper by co-lead author Junghyun Kim, an electrical engineering and computer science (EECS) graduate student in the EQuS group; senior author William D. Oliver, the Henry Ellis Warren (1894) Professor of EECS and professor of physics, leader of the EQuS group, director of the Center for Quantum Engineering, and associate director of RLE; as well as others at MIT and Lincoln Laboratory. The research appears today in Nature Physics.

A pair-wise problem

In a quantum computer that utilizes superconducting circuits, which is one of many potential computing platforms, Josephson junctions are critical elements that enable the transfer and manipulation of information. These devices consist of two superconducting wires brought very close together, with a nanometer-scale barrier between them. As in a traditional circuit, the electric charge in Josephson junctions is carried by electrons.

But in a superconducting circuit, charge-carrying electrons pair up, forming what are called Cooper pairs. These Cooper pairs can “quantum tunnel” through the barrier between the two wires, transporting current from one wire to the other.

Cooper pairs can usually only tunnel one pair at a time, which is a key property that makes quantum computation possible. 

“If you try to force more Cooper pairs through, it just doesn’t work. This non-linear effect is extremely important for all our circuits. If we didn’t have that effect, then we wouldn’t be able to control or manipulate any quantum information that we store in these circuits,” Hays explains.

But sometimes, Cooper pairs can unexpectedly squeeze through the barrier two at a time, an effect that is known as a second-order harmonic correction. This effect limits the performance of a quantum circuit that has been configured to only allow single-pair tunneling.
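In the standard physics of Josephson junctions (general background, not a result claimed in this paper), the two-pair process shows up as a second harmonic in the junction’s current-phase relation:

```latex
% Current-phase relation with a second-order harmonic correction.
% The I_1 term is ordinary single-Cooper-pair tunneling; the much
% smaller I_2 term corresponds to two pairs tunneling at once.
I(\varphi) = I_1 \sin(\varphi) + I_2 \sin(2\varphi), \qquad |I_2| \ll |I_1|
```

Suppressing the first-harmonic term, as the device described below was designed to do, leaves the weak second-harmonic contribution as the dominant signal, which is what makes it measurable.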

“If two Cooper pairs tunnel at the same time, then the assumption we used to build our circuit doesn’t apply anymore. We need to fix the circuit so it can handle that,” Kim says.

But before they can fix the circuit, scientists need to know the source and strength of these distortions.

To obtain this information, the MIT researchers fabricated a quantum circuit so it would be very sensitive to these effects. Essentially, the device is designed to suppress the quantum tunneling process of single Cooper pairs, while allowing the two-pair tunneling process to continue. 

In this way, they can detect the presence of second-order harmonic corrections and precisely measure their strength. 

Straight to the source

They can also use this circuit to pinpoint the source of these harmonics, which helps researchers identify the best way to correct for them. 

There are two potential sources of second-order harmonics — one source is intrinsic to the dynamics of the Josephson junction and the other is caused by the wires connecting the junction to other circuit elements. 

While prior research had indicated the second-order harmonics could be due to the dynamics of the junction, the MIT researchers found that additional inductance (the tendency to oppose changes in the flow of electric current) from wires in the circuit was the actual source in their devices.

“This is important because, if we know where the second-order harmonic correction is coming from, we can predict how strong it is likely to be, and use that information to engineer more predictable circuits that will hopefully perform better,” Hays says.

In the future, the researchers want to design experiments that more accurately predict how a device will perform when second-order harmonic corrections occur. They also want to study other sources of second-order harmonic corrections and whether those sources could have negative impacts on a circuit under different fabrication conditions.

This work is funded, in part, by the U.S. Department of Energy, the U.S. Co-design Center for Quantum Advantage, the U.S. Air Force, the Korea Foundation for Advanced Studies, and the Intelligence Community Postdoctoral Research Fellowship Program at MIT. 



from MIT News https://ift.tt/9y4ivc6

Monday, May 11, 2026

Solving hard problems in soft electronics

A crepe cake.

That’s how Camille Cunin describes the polymer-metal “sandwiches” that became a highlight of her doctoral thesis at MIT’s Department of Materials Science and Engineering (DMSE). Over close to five years, these composites were a key component of her research on bioelectronics — devices designed to interface with the human body.

Cunin completed her PhD in February — she’ll attend commencement in May — but traces her interest in bioelectronics to a formative summer internship at Massachusetts General Hospital (MGH) in Boston in 2019. There, she saw a patient with Parkinson’s disease struggle to swallow a tethered “capsule” intended to function as an exploratory gut probe. The device failed, and the gap between lab-based design and real life became all too apparent.

The incident validated the career path Cunin had already begun to pursue: to make usable products that have a positive impact on people’s lives. It’s a purpose that hasn’t gone unnoticed. “Some might be happy with a sketch of a concept and no actual demonstration, but Camille has a remarkable ability in that she wants to do materials science that can translate to real-world applications,” says her advisor, Aristide Gumyusenge.

Building blocks

The daughter of a psychologist and an engineer, Cunin grew up in Paris, encouraged by her parents to be curious about the world around her. LEGO blocks featured prominently in her childhood. When her father found some old lights in a box in the attic, 9-year-old Camille strung them to decorate her LEGO castle by creating a circuit, complete with a fuse.

Strong grades earned her a spot in France’s elite post-secondary preparatory classes for admission to the country’s prestigious grandes écoles. The intensive and competitive prep classes, however, left Cunin with a sour aftertaste — “for a while I hated science, because the environment was too competitive for me,” she says — and a bit rudderless in engineering school.

It was the research internship thousands of miles from home, at MGH — part of her master’s in engineering at École Centrale de Marseille in France — that rebooted her love of science. The open-ended nature of research appealed to her curiosity and helped her regain confidence in solving problems. She was delighted to be accepted at MIT DMSE for her doctoral studies. “In Boston, I thrived in collaborative environments, and it felt like anything was possible,” she says.

Stretching possibilities

Before starting at MIT, Cunin had a wealth of interdisciplinary experience, from internships and her graduate studies. Unsure about how to slot it all together, she was looking for an advisor at a time when Gumyusenge, Henry L. Doherty Career Development Professor in Ocean Utilization and assistant professor of materials science and engineering, was himself just establishing his lab at DMSE.

When Gumyusenge shared plans to work on projects to turn biological signals into electronic data, Cunin was excited to build on her prior research in biomedical devices. “Here was a chance to fine-tune the materials and to optimize the performance of bioelectronic devices. I really felt I could leverage my strengths in Aristide’s lab,” she remembers.

Gumyusenge proved a great fit, supporting Cunin’s broad research ambitions while helping her shape and integrate them into a coherent doctoral project. She tackled everything from developing and characterizing new materials to fabricating transistors and learning surgery to test the devices in animal models. The final dissertation focused on organic transistors, which boost body signals for easier detection in soft electronics.

Biological signals, like those from nerves in the body, are weak, and transistors amplify them so they can be measured. The challenge with developing bioelectronic devices is that traditional components are hard and rigid, while the human body is not. Devices must perform as needed and be soft and flexible to avoid irritating human tissue.

Another complication: Biological processes involve charged ions moving through fluids, while electronics rely on electrons moving through materials. Before transistors can amplify signals, they first have to convert biological signals into electronic ones for circuits to pick up.

Cunin’s transistor design needed to solve two major challenges: first, to facilitate the movement of electrons and ions in the “channel,” the hub of all signal activity, in soft, hydrated environments; and second, to be pliable enough to conform to the human body.

It was no easy task.

Elegant simplicity

Gumyusenge’s lab typically uses chemistry to modify material behavior, but Cunin took a different tack. Since polymers are soft and metals are good conductors, she looked to the classic French pastry mille-feuille, which inspired the layered design: thin metal sheets sandwiched between layers of porous elastomer. The metal stretches with the elastomer and forms microcracks. Charges get trapped in the cracks but can still flow through the stack, while the elastomer’s strong adhesion keeps the layers together.

Her approach won Cunin high marks from her advisor. “Camille was working on a complex problem, but she found a way to simplify it with a straightforward approach,” Gumyusenge says.

Of course, even an elegant solution needs test drives. “The more crystalline the polymers are, the better the charges percolate and travel in the material,” Cunin points out, referring to how ordered the semiconducting polymers in the transistor channel are. But if they’re packed too tightly, ions don’t move freely, and the transistor channel can’t switch properly. The arrangement of the spaghetti-like polymer chains controls this balance, so Cunin studied the composites’ structure to optimize both ionic and electronic performance.

Professor Polina Anikeeva, who co-advised Cunin with Gumyusenge and calls her “unstoppable,” says her innovation in the lab was remarkable — but not surprising.

“She didn’t have to be pushed into trying something new,” says Anikeeva, head of DMSE. “I would have higher and higher expectations, and she would consistently meet those higher and higher expectations.”

That drive continues in industry. Cunin now works at the Cambridge-based neurotechnology startup Axoft — just minutes from her former lab at MIT — researching soft electrodes that can be implanted in the brain. The electrodes detect electrical signals that can shed light on the brain’s many functions. “By understanding the brain better, we can eventually develop therapies and treatments that improve patient outcomes,” Cunin says.

Creative outlets

During her time at MIT, Cunin also made time for activities outside the lab, driven by the same curiosity that fueled her research. Committed to sharing her love of materials science and engineering, she was a leading member of the Polymer Graduate Student Association and organized several editions of MIT Polymer Day, a one-day symposium connecting students, faculty, and industry to showcase cutting-edge polymer research.

She also pursued creative outlets. After learning to use 3D graphics software Blender, Cunin illustrated some of the journal covers featuring her work.

She is also a diehard salsa fan and teaches the dance style a couple of times a week. Salsa’s social and collaborative forms appeal to Cunin, who enjoys sharing her passion, experimenting with choreography, and helping fellow dancers improve. “Salsa is fast — I love the mental challenge it brings. I also like that it exposes you to different aspects of the community; it pushes you out of your bubble,” she says.

Gumyusenge appreciates that Cunin made time for other pursuits throughout the grueling demands of a doctoral degree. “She’d work 14 hours a day in the lab, but also go do some hiking and take a break. I love that — it’s something that other PhD students seem to forget sometimes,” he says.

That balance reflects her determination and resolve. “Camille has never been shy about facing challenging research problems,” he says. “She had a research vision and was dedicated to learning the lessons she needed to get it all done. I learned to not get in her way because when Camille told you she would learn how to do something, she would.”



from MIT News https://ift.tt/RwTaKXE

Thursday, May 7, 2026

Mapping the ocean with autonomous sensors

In late October 2025, Tropical Storm Melissa moved through the Caribbean Sea with moderate winds that didn’t get much attention. But on Oct. 25, aided by a patch of warm ocean, the storm rapidly intensified. By the time it made landfall in Jamaica, it was one of the strongest Atlantic hurricanes on record, uprooting trees, tearing the roofs from buildings, and causing catastrophic flooding and power outages.

Ravi Pappu SM ’95, PhD ’01 blames the surprise on our inability to gather high-quality ocean data.

“The storm intensified because of a small pool of hot water in the Caribbean Sea that fed it energy,” Pappu explains. “These pools are everywhere. They can be hundreds of kilometers wide and are literally invisible to us. If we knew about that pool, we could say very precisely how the hurricane would intensify and better deal with it.”

Pappu thinks he has a way to solve that problem. He is the founder of Apeiron Labs, a company deploying low-cost autonomous ocean sensors to capture more data, in more places, and at a lower cost than is possible today. The company’s devices roam the ocean up to a quarter mile below the surface and continuously gather data on temperature, acoustics, salinity, and more, providing a real-time look at one of the planet’s last great mysteries. He says the sensors can do for the ocean what small, modular CubeSat satellites did for Earth observation from space.

When the devices are ready to be recharged, trackers make it easy to scoop them from the ocean surface. Pappu envisions the recovery process being done by autonomous boats in the future.

“Humanity needs ocean measurements, and we need them at a scale that has never been attempted before,” Pappu says. “It’s a massively hard problem. In the last century, oceanographers resigned themselves to calling it the century of undersampling. If we are successful, we will have a much more fine-grained understanding of our oceans and how they impact humans. That’s what drives us.”

Homework

Pappu came to MIT after completing a 10-year homework assignment. It started when he was a child in India in the 1980s, when he saw a hologram on the cover of National Geographic for the first time.

“I was so taken by it that I decided I needed to learn how to make those three-dimensional images,” Pappu recalls. “I learned what I could by reading books and papers. I didn’t know who invented the hologram until I read a book about MIT’s Media Lab. The book named the person who invented the rainbow hologram, so I wrote him a letter. I didn’t know his address, so I just wrote on the envelope, ‘Steve Benton, holography researcher, MIT, USA.’”

To Pappu’s surprise, the letter reached Benton, and the former Media Lab professor even wrote back with some further topics he needed to learn about.

Pappu never forgot that. He earned a bachelor’s degree in electrical engineering in India, then earned his master’s degree at Villanova University, taking all the optics classes he could.

“Eventually, about 10 years after I saw my first hologram, I wrote to Steve and I said, ‘I did all these things you asked me, now I want to study with you,’” Pappu says. “That’s how I got into MIT.”

Pappu studied under Benton for the next three years. He also studied under Professor Neil Gershenfeld as part of his PhD. Following graduation, Pappu and four classmates started ThingMagic, a consulting company that eventually sold RFID readers. ThingMagic was acquired in 2010. Pappu returned to MIT for two years as a visiting scientist around the time of the acquisition.

Following that experience, Pappu worked at In-Q-Tel, an organization that invested in ThingMagic and other companies with potential to advance national security. It was there that Pappu realized how badly the world needed large-scale, inexpensive ocean sensing.

“All of the ocean sensing up to that point, and even today, was about making a really expensive thing that costs $20 million, goes to the bottom of the ocean, and stays there for five years,” Pappu says. “We needed things that are cheap and scalable to deploy wherever you need them for as long as you want.”

Pappu officially founded Apeiron Labs in 2022.

“What we’re focused on is figuring out how the ocean works,” Pappu says. “How warm is it? What is the pH? How salty is it? These things vary from place to place every 10 kilometers or so. They vary over time, and they vary by season. If we knew the details of the ocean with the same fidelity we have for the atmosphere, we would be able to tell exactly when and where hurricanes hit. It would mean less uncertainty.”

Apeiron’s ocean-sensing devices are each 3 feet long and about 20 pounds. They’re designed to be dropped off a boat or plane with biodegradable parachutes and stay in the ocean for six months. Each device continuously sends data to the cloud, is controllable through a cloud-based ocean operating system, and is accessible on a mobile phone.

“We lower the carbon footprint and cost of gathering ocean data because everything else needs a diesel ship — and a fully crewed ship costs $100,000 a day,” Pappu says. “By the time you collect the first data in the old model, you’ve already committed to a lot of money in addition to millions of dollars for the sensors.”

The company’s devices currently have two types of sensors: one for measuring salinity, temperature, and depth, and the other that uses a hydrophone to passively listen for things like submarines and whales.
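Apeiron has not published its data format, but the kind of record such a device streams to the cloud can be sketched as a simple data structure. Every field name below is a hypothetical illustration of the measurements the article describes, not the company’s actual schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SensorReading:
    """One CTD-style measurement from a drifting ocean sensor.

    Hypothetical sketch: the fields mirror what the article says the
    devices measure (salinity, temperature, depth) plus position.
    """
    device_id: str
    timestamp_utc: str   # ISO 8601
    latitude: float
    longitude: float
    depth_m: float       # the devices roam up to ~400 m (a quarter mile)
    temperature_c: float
    salinity_psu: float  # practical salinity units

    def to_json(self) -> str:
        """Serialize the reading for upload to a cloud backend."""
        return json.dumps(asdict(self))

# A sample reading from a hypothetical unit drifting in the Caribbean.
reading = SensorReading(
    device_id="ap-0042",
    timestamp_utc="2026-05-07T12:00:00Z",
    latitude=17.5,
    longitude=-77.9,
    depth_m=150.0,
    temperature_c=28.4,
    salinity_psu=36.1,
)
print(reading.to_json())
```

A continuous stream of such records, geotagged and timestamped, is what would let a cloud-based system assemble the real-time ocean picture described above.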

That could be used to detect the low-frequency calls and clicks of endangered whales and other marine species. Currently, fishermen must look for whales manually with spotters on ships or planes. The data could also be used to improve weather forecasts, monitor noise from offshore energy projects, and track currents.

“Currents are determined by temperature and salinity, so if there’s an oil spill, our data could help determine where that spill is going,” Pappu says. “Or if you’re a fisherman, knowing where the water changes from warm to cold, which is where the fish hang out, is very useful.”
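The link Pappu draws between temperature, salinity, and currents runs through density: warm water is lighter, salty water is heavier, and those density differences drive circulation. A minimal sketch of the relationship, using a textbook linear equation of state (real oceanographic work uses the far more detailed TEOS-10 standard, and these coefficients are illustrative, not Apeiron’s):

```python
# Simplified linear equation of state for seawater density.
# Coefficients are typical textbook approximations.
RHO0 = 1027.0   # reference density, kg/m^3
T0 = 10.0       # reference temperature, deg C
S0 = 35.0       # reference salinity, psu
ALPHA = 1.7e-4  # thermal expansion coefficient, 1/deg C
BETA = 7.6e-4   # haline contraction coefficient, 1/psu

def seawater_density(temp_c: float, salinity_psu: float) -> float:
    """Approximate seawater density from temperature and salinity."""
    return RHO0 * (1.0 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

# Warmer water is lighter; saltier water is heavier. These small
# density contrasts are what set density-driven currents in motion.
print(seawater_density(28.0, 36.0))  # warm, salty surface water
print(seawater_density(5.0, 35.0))   # cold deep water
```

Feeding sensor readings of temperature and salinity through a relation like this (or its full TEOS-10 equivalent) is one way a density and current field could be estimated from the fleet’s data.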

An ocean of possibilities

Apeiron Labs has worked with government defense agencies including the U.S. Navy over the last two years. The company has also tested its devices off the coast of California and in Boston Harbor.

“The most important thing is, when we show people our approach and what we’ve demonstrated so far, they are no longer asking, ‘Can it be done?’ they’re asking, ‘What can we do with it?’” Pappu says. “Our customers have spent decades working in the ocean and they understand how novel these capabilities are.”

Of all the possibilities, improved storm forecasting could be the one Pappu is most excited about.

“Our mission is to lower the barriers to ocean data,” Pappu says. “The ocean is a huge determinant of weather, climate, and short-term forecasting. Despite our best efforts to predict the intensity of storms, sudden changes are still the norm, and much of that comes down to a lack of understanding of our oceans. If we were monitoring these things over long periods of time and finer spatial scales, we could see these storms coming much earlier with more certainty.”



from MIT News https://ift.tt/p1lxtk8

Rethinking how our brains use categories to make sense of the world

In a new review article, “Categorization is Baked into the Brain,” cognitive scientists Earl K. Miller, Picower Professor of Neuroscience at MIT, and Lisa Feldman Barrett, university distinguished professor at Northeastern University, contend that categorization is part of a predictive process the brain uses to efficiently meet the body’s needs in a fast-paced, otherwise overwhelming sensory world. In that sense, their paper in Nature Reviews Neuroscience challenges decades of dogma about how and why the brain boils down what it sees, hears, smells, tastes, and feels.

Categories are groups of things that are similar enough to be considered functionally equivalent. When you walk through a neighborhood, you’ll naturally experience the furry, four-legged, barking animal ahead of you as a “dog.” In the classic view of cognition, your brain arrives at that categorization by soaking in lots of basic sensory features of the hound — its shape, its size, the sounds it makes, its behavior — and compares that to some prototype “dog” stored in your memory. Hundreds of milliseconds after the first sensory inputs, you can then decide what you might want to do about the dog.

Barrett and Miller argue that that’s wrong. Instead, they propose that your brain comes prepared for sensory patterns with predictions of the motor action plans that are most likely to achieve the needs and goals you bring to the moment. Those prediction signals can be described as a momentary category that the brain constructs to shape the processing of sensory signals. 

From the very start, incoming sensory signals are compressed and abstracted into that category to efficiently select the best predicted plan. If you are in an unfamiliar neighborhood, your brain might construct the category “dog” to avoid being bitten, resulting in: “Back away slowly while saying nice doggie.” If you are on your own block and encounter a familiar dog, your brain might construct a category to kneel and open your arms, summoning your neighbor’s adorable pup for some satisfying petting.

In either case, the category “dog” arises in the context of your needs and your prediction from a menu of learned action plans for similar situations, not from an intellectual exercise of neutrally regarding sensory inputs, comparing them to a fixed prototype, and then planning from there. If the brain really worked the classically believed way, you’d be on the back foot when the unfamiliar dog lunged at you.

“One of the main things your brain has to do is predict the world,” says Miller, a faculty member of The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT. “It takes several hundred milliseconds to process things, and meanwhile the world is moving on. Your brain has to anticipate things.”

The most pragmatic and efficient way to survive and thrive in such a world, Barrett says, is to have your needs and potential plans ready for the sensory situation. If your predictions are right, you’re prepared in time. If they are wrong, you adjust and learn from it.

“The stimulus, cognition, response model of the brain is wrong,” says Barrett, a faculty member in Northeastern’s Department of Psychology and co-director of the Interdisciplinary Affective Science Laboratory. “The brain prepares for a response and then perceives a stimulus. A brain is not reactive. It’s predictive. Action planning comes first. Perception comes second, as a function of the action plan.”

Anatomical and functional evidence

Throughout the review, Barrett and Miller ground the provocative proposal in copious anatomical, electrophysiological, and imaging evidence about the brain. They cite numerous experimental results that show how the brain is structured to broadcast memories to create motor plans that flow back toward signals that arrive from the body’s sensory surfaces, actively whittling them down and shaping them to give them meaning.

“The capacity to create similarities from differences — to abstract — is embedded in the architecture of the nervous system, and you can see that by looking at what is connected to what and by observing signal flow,” Barrett says.

For example, as circuits feed signals “forward” from sensory surfaces (such as the retina) to regions of the cerebral cortex that are focused on sensory processing (such as the visual cortex) toward the areas that are important for executive control (the prefrontal cortex) and control of the body (limbic cortex), information passes from many small, barely connected neurons to fewer, bigger, and more well-connected neurons. Such an architecture compresses sensory details into increasingly abstract representations that group many different features into smaller groups of similar features, and in doing so helps to select a predicted action plan from the broader category that’s already there.

“Your brain is a big funnel to take the outside world and turn it into an output,” Miller says.

Moreover, anatomical evidence shows that the neurons in the cortex maintain many more connections that carry feedback from memory to control sensory regions than connections that feed sensory information forward. As much as 90 percent of synapses in the visual cortex are “feedback” instead of “feedforward,” Barrett and Miller wrote. In other words, the brain is built to use memory to filter incoming sensory signals, consistent with imposing needs and goals on what would otherwise be a deluge of sights, sounds, and other sensations.

Yet another line of evidence is the numerous studies from Miller’s own lab showing that, at the broad network level of information flow in the cortex, the brain uses beta-frequency waves, which carry information about goals and plans, to constrain the expression of gamma-frequency waves, which carry information about specific sensory inputs.

Finally, the dominance of “feedback” over “feedforward” signals in the cortical architecture allows for the possibility that sensory signals are made meaningful in terms of predicted plans. When these plans are wrong, the resulting surprise can be integrated for future use.

“In science, there is a special name for that: learning,” Barrett says.

Implications for human thought and disease

In the end, Barrett and Miller’s proposal completely changes the idea of categorization, shifting it from being a particular intellectual skill to being a fundamental function for predictively meeting the body’s needs (or, “allostasis”).

“A category may not be a representation that an animal has, but a signal processing event that an animal does, predictively, to constrain the meaning of a high-dimensional ensemble of signals in a particular situation,” the authors wrote. “Categorization renders these signals meaningful — similar to one another and to past allostatic events — in terms of some goal or function.”

Humans, Barrett says, have a relatively massive amount of the neural network architecture needed to perform these pragmatic abstractions, and therefore can make categorizations that seem outright metaphorical (e.g., a functional similarity between “climbing the career ladder” and climbing a literal physical ladder).

But these processes can also go awry in disease, Barrett and Miller note. Depression can be seen as a disorder in which the brain imposes overly broad categories, such as “threat” or “criticism,” on sensory episodes that don’t have to be perceived that way. By contrast, autism can manifest with features of inadequate compression of incoming sensory signals, not generalizing enough to recognize when a situation is similar enough to a prior one to select the appropriate plan.

Funding to support the paper came from the National Institutes of Health, The U.S. Army Research Institute for the Behavioral and Social Sciences, the Office of Naval Research, the Unlikely Collaborators Foundation, The Freedom Together Foundation, and The Picower Institute for Learning and Memory.



from MIT News https://ift.tt/SreCjuP

Photonics advance could enable compact, high-performance lidar sensors

Lidar systems use pulses of infrared light to measure distance and map a 3D scene with high resolution, allowing autonomous vehicles to rapidly react to obstacles that appear in their path. But traditional lidar sensors are expensive, bulky systems with many moving parts that degrade over time, limiting how the sensors can be deployed.

A new study from MIT researchers could help to enable next-generation lidar sensors that are compact, durable, and have no moving parts. The key advance is a novel design for a silicon-photonics chip, which is a semiconductor device that manipulates light rather than electricity. 

Typically, such silicon-photonics chip-based systems have a restricted field of view, so a silicon-photonics-based lidar would not be able to scan angles in the periphery. Existing workarounds to this problem increase noise and hamper precision.

To avoid these drawbacks, the MIT researchers designed and demonstrated an array of integrated antennas that minimizes unwanted crosstalk between the antennas. Their innovation allows a lidar chip to scan a wider field of view while maintaining low-noise operation compared to other silicon-photonics-based approaches.

This novel demonstration could fuel the development of advanced lidar sensors for demanding applications like autonomous vehicle navigation, aerial surveying, and construction site monitoring.

“The functionality we demonstrated in this work solves a fundamental problem for integrated optical-phased-array technology, enabling future lidar sensors that can achieve significantly higher performance than we could demonstrate previously,” says Jelena Notaros, the Robert J. Shillman Career Development Associate Professor of Electrical Engineering and Computer Science (EECS) at MIT, a member of the Research Laboratory of Electronics, and senior author of a paper on this innovation.

She is joined on the paper by lead author and EECS graduate student Henry Crawford-Eng as well as EECS graduate students Andres Garcia Coleto, Benjamin M. Mazur, Daniel M. DeSantis, and Tal Sneh. The research appears today in Nature Communications.

Adjusting an antenna array

Many traditional lidar systems map a scene using a bulky box that spins to send pulses of light in multiple directions. The light bounces off nearby objects and returns to the sensor, providing data that are used to reconstruct the environment. 

Instead, silicon-photonics-based lidar sensors systematically scan an emitted light beam in multiple directions non-mechanically using a system called an integrated optical phased array (OPA).

Key to an OPA is an array of integrated antennas that have tiny perturbations placed periodically along their length. These corrugations allow the antenna to scatter light from an input source up and out of the photonic chip.

By adjusting the phase of light routed to each antenna, the researchers can change the angle at which the light is emitted out of the array. In this way, they can steer the beam with no moving parts.

But if engineers place the antennas too close together, the antennas will couple with each other and the light they emit will get jumbled. To avoid this, scientists typically space the antennas farther apart, but this also has downsides.

If the antennas are spaced too far apart, the array will emit multiple copies of the light beam at different angles. The researchers can only steer the primary beam so far in either direction before it becomes indistinguishable from its neighboring copies.

“This limits our field of view, so the autonomous vehicle now only knows what is in front of it for a certain angular range,” Garcia Coleto explains.

These beam copies, known as grating lobes, can cause false positives by confusing the sensor. They also waste power.
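The spacing tradeoff described above follows from the standard far-field pattern of a uniform phased array, which is textbook antenna theory rather than anything specific to the MIT design. A minimal sketch (the antenna count, spacings, and lobe threshold are illustrative assumptions, not parameters from the paper) shows how half-wavelength spacing yields one clean beam while sparse spacing produces grating-lobe copies:

```python
import numpy as np

def array_factor(theta_deg, n_antennas, spacing_wavelengths, steer_deg=0.0):
    """Normalized far-field array factor of a uniform 1-D phased array.

    Each antenna gets a linear phase offset that steers the main beam
    toward steer_deg; spacing is measured in optical wavelengths.
    """
    theta = np.radians(theta_deg)
    steer = np.radians(steer_deg)
    k = 2 * np.pi  # wavenumber in units of 1/wavelength
    n = np.arange(n_antennas)
    # Phase of antenna n at observation angle theta, minus the steering phase
    phase = k * spacing_wavelengths * n[:, None] * (np.sin(theta) - np.sin(steer))
    return np.abs(np.exp(1j * phase).sum(axis=0)) / n_antennas

angles = np.linspace(-90, 90, 3601)

# Dense spacing (half a wavelength): a single main lobe, wide steering range.
af_dense = array_factor(angles, n_antennas=32, spacing_wavelengths=0.5)

# Sparse spacing (two wavelengths): grating lobes -- extra beam copies.
af_sparse = array_factor(angles, n_antennas=32, spacing_wavelengths=2.0)

def count_strong_lobes(af, threshold=0.9):
    # Count separate angular regions where the pattern is near its peak
    # (threshold chosen for illustration).
    above = af > threshold
    return int(np.sum(above[1:] & ~above[:-1]) + above[0])

print(count_strong_lobes(af_dense))   # -> 1 (single main lobe)
print(count_strong_lobes(af_sparse))  # -> 5 (main beam plus four grating-lobe copies)
```

With two-wavelength spacing, full-strength copies appear wherever the spacing times the sine of the angle is a whole number of wavelengths, here at 0, ±30, and ±90 degrees, which is exactly the field-of-view limit the researchers set out to remove.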

The MIT researchers solved this problem by designing a set of reduced-crosstalk antennas that can be placed close together without causing a significant coupling effect.

In a standard OPA, all the antennas have the same design, meaning the same arrangement of corrugations. These identical antennas couple very strongly when placed close together.

To address this fundamental roadblock, the MIT researchers designed a set of three antennas with different geometries, varying the width of each antenna and the size and arrangement of corrugations. With varied geometries, each antenna has a different propagation coefficient, which determines how light travels down the antenna.

“Because the antennas have very different propagation coefficients, when we put them close together, essentially each antenna doesn’t ‘see’ the antenna next to it. Therefore, it won’t couple with its neighbor,” Garcia Coleto says. 
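The suppression Garcia Coleto describes matches the standard coupled-mode-theory result for two parallel waveguides: the peak fraction of power that leaks from one to the other falls off with the square of the propagation-constant mismatch. A minimal sketch, with purely illustrative values for the coupling strength and mismatch (not parameters from the paper):

```python
def max_power_transfer(kappa, delta_beta):
    """Peak fraction of power coupled between two parallel waveguides,
    per standard coupled-mode theory: kappa is the coupling strength and
    delta_beta the difference in propagation constants (same units)."""
    return kappa**2 / (kappa**2 + (delta_beta / 2) ** 2)

kappa = 1.0  # illustrative coupling strength, rad per unit length
print(max_power_transfer(kappa, 0.0))   # -> 1.0 (identical antennas couple fully)
print(max_power_transfer(kappa, 20.0))  # -> ~0.0099 (strong mismatch: ~1 percent)
```

The qualitative takeaway is the design principle in the article: identical neighbors (zero mismatch) exchange all their power, while neighbors with sufficiently different propagation coefficients effectively stop "seeing" each other.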

A photonic balancing act

But even though the antennas have different propagation coefficients, the researchers still need them to emit light in the same way. 

They achieved this by carefully designing the antennas to meet three parameters. 

First, each antenna must emit the same amount of light. Second, each antenna must emit a beam at the same angle for the same wavelength of light. Third, the emission angle must change uniformly across the array as the researchers steer it.

“We have this challenge where we require the antennas to have different geometries to reduce the crosstalk, but we need to simultaneously design the antennas to have the same emission characteristics. While it is possible to engineer this, it is extremely difficult because, typically, when antennas are designed with different geometries, they tend to behave differently,” Crawford-Eng says.

The researchers first developed the fundamental electromagnetic theory behind how radiative modes couple. They used that theory as a guide to design and simulate their antennas.

Building on those analyses, they fabricated the OPA with reduced-crosstalk antennas spaced significantly closer than they would be in a traditional OPA, then experimentally tested the system.

While a typical OPA would have coupling of about 100 percent in this experiment, their OPA reduced coupling to about 1 percent while generating a single, precise beam. Using this design, they demonstrated accurate beam steering across a wide field of view without any grating lobes. 

In the future, the researchers plan to further improve their technique to enable an even wider field of view. In addition, they are exploring a new potential solution to wide field-of-view functionality that they discovered while developing the underlying theory.

“This work addresses a longstanding challenge in integrated optical phased arrays: simultaneously achieving both a wide field of view, which requires dense antenna spacing, and high beam quality, which requires low crosstalk between neighboring antennas. The authors solve this problem with an elegant antenna design. Their innovation is an important step forward for chip-scale, solid-state beam-steering technology,” says Joyce Poon, professor of electrical and computer engineering at the University of Toronto and director of the Max Planck Institute of Microstructure Physics, who was not involved with this work.

This research was supported, in part, by the Semiconductor Research Corporation, the National Science Foundation, an MIT MathWorks Fellowship, the U.S. Department of War, and the MIT Rolf G. Locher Endowed Fellowship.



from MIT News https://ift.tt/7tfjIZg

Wednesday, May 6, 2026

Study: Firms often use automation to control certain workers’ wages

When we hear about automation and artificial intelligence replacing jobs, it may seem like a tsunami of technology is going to wipe out workers broadly, in the name of greater efficiency. But a study co-authored by an MIT economist shows markedly different dynamics in the U.S. since 1980. 

Rather than implement automation in pursuit of maximal productivity, firms have often used automation to replace employees who specifically receive a “wage premium,” earning higher salaries than other comparable workers. In practice, that means automation has frequently reduced the earnings of non-college-educated workers who had obtained better salaries than most employees with similar qualifications. 

This finding has at least two big implications. For one thing, automation has affected the growth in U.S. income inequality even more than many observers realize. At the same time, automation has yielded a mediocre productivity boost, plausibly due to the focus of firms on controlling wages rather than finding more tech-driven ways to enhance efficiency and long-term growth.

“There has been an inefficient targeting of automation,” says MIT’s Daron Acemoglu, co-author of a published paper detailing the study’s results. “The higher the wage of the worker in a particular industry or occupation or task, the more attractive automation becomes to firms.” In theory, he notes, firms could automate efficiently. But they have not; by emphasizing automation as a tool for shedding salaries, they improve their own short-term numbers without building an optimal path for growth.

The study estimates that automation is responsible for 52 percent of the growth in income inequality from 1980 to 2016, and that about 10 percentage points derive specifically from firms replacing workers who had been earning a wage premium. This inefficient targeting of certain employees has offset 60-90 percent of the productivity gains from automation during the time period.

“It’s one of the possible reasons productivity improvements have been relatively muted in the U.S., despite the fact that we’ve had an amazing number of new patents, and an amazing number of new technologies,” Acemoglu says. “Then you look at the productivity statistics, and they are fairly pitiful.”

The paper, “Automation and Rent Dissipation: Implications for Wages, Inequality, and Productivity,” appears in the May print issue of the Quarterly Journal of Economics. The authors are Acemoglu, who is an Institute Professor at MIT; and Pascual Restrepo, an associate professor of economics at Yale University.

Inequality implications

Dating back to the 2010s, Acemoglu and Restrepo have collaborated on many studies about automation and its effects on employment, wages, productivity, and firm growth. In general, their findings have suggested that the effects of automation on the workforce after 1980 are more significant than many other scholars have believed. 

To conduct the current study, the researchers used data from many sources, including U.S. Census Bureau statistics, data from the bureau’s American Community Survey, industry numbers, and more. Acemoglu and Restrepo analyzed 500 detailed demographic groups, sorted by five levels of education, as well as gender, age, and ethnic background. The study links this information to an analysis of changes in 49 U.S. industries, for a granular look at the way automation affected the workforce. 

Ultimately, the analysis allowed the scholars to estimate not just the overall number of jobs erased by automation, but how much of that total came from firms very specifically trying to remove the wage premium accruing to some of their workers. 

Among other findings, the study shows that within groups of workers affected by automation, the biggest effects occur for workers in the 70th-95th percentile of the salary range, indicating that higher-earning employees bear much of the brunt of this process. 

And as the analysis indicates, about one-fifth of the overall growth in income inequality is attributable to this sole factor.

“I think that is a big number,” says Acemoglu, who shared the 2024 Nobel Prize in economic sciences with his longtime collaborators Simon Johnson of MIT and James Robinson of the University of Chicago.

He adds: “Automation, of course, is an engine of economic growth and we’re going to use it, but it does create very large inequalities between capital and labor, and between different labor groups, and hence it may have been a much bigger contributor to the increase in inequality in the United States over the last several decades.” 

The productivity puzzle

The study also illuminates a basic choice for firm managers, but one that gets overlooked. Imagine a type of automation — call-center technology, for instance — that might actually be inefficient for a business. Even so, firm managers have incentive to adopt it, reduce wages, and oversee a less productive business with increased net profits.

Writ large, some version of this seems to have been happening to the U.S. economy since 1980: Greater profitability is not the same as increased productivity.

“Those two things are different,” says Acemoglu. “You can reduce costs while reducing productivity.” 

Indeed, the current study by Acemoglu and Restrepo calls to mind an observation by the late MIT economist Robert M. Solow, who in 1987 wrote, “You can see the computer age everywhere but in the productivity statistics.” 

In that vein, Acemoglu observes, “If managers can reduce productivity by 1 percent but increase profits, many of them might be happy with that. It depends on their priorities and values. So the other important implication of our paper is that good automation at the margins is being bundled with not-so-good automation.” 

To be clear, the study does not necessarily imply that less automation is always better. Certain types of automation can boost productivity and feed a virtuous cycle in which a firm makes more money and hires more workers. 

But currently, Acemoglu believes, the complexities of automation are not yet recognized clearly enough. Perhaps seeing the broad historical pattern of U.S. automation, since 1980, will help people better grasp the tradeoffs involved — and not just economists, but firm managers, workers, and technologists. 

“The important thing is whether it becomes incorporated into people’s thinking and where we land in terms of the overall holistic assessment of automation, in terms of inequality, productivity and labor market effects,” Acemoglu says. “So we hope this study moves the dial there.”

Or, as he concludes, “We could be missing out on potentially even better productivity gains by calibrating the type and extent of automation more carefully, and in a more productivity-enhancing way. It’s all a choice, 100 percent.”



from MIT News https://ift.tt/zJRZ75N