Wednesday, November 6, 2024

Neuroscientists create a comprehensive map of the cerebral cortex

By analyzing brain scans taken as people watched movie clips, MIT researchers have created the most comprehensive map yet of the functions of the brain’s cerebral cortex.

Using functional magnetic resonance imaging (fMRI) data, the research team identified 24 networks with different functions, which include processing language, social interactions, visual features, and other types of sensory input.

Many of these networks have been seen before but haven’t been precisely characterized under naturalistic conditions. While the new study mapped networks in subjects watching engaging movies, previous studies used a small number of specific tasks or examined correlations across the brain in subjects who were simply resting.

“There’s an emerging approach in neuroscience to look at brain networks under more naturalistic conditions. This is a new approach that reveals something different from conventional approaches in neuroimaging,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “It’s not going to give us all the answers, but it generates a lot of interesting ideas based on what we see going on in the movies that's related to these network maps that emerge.”

The researchers hope that their new map will serve as a starting point for further study of what each of these networks is doing in the brain.

Desimone and John Duncan, a program leader in the MRC Cognition and Brain Sciences Unit at Cambridge University, are the senior authors of the study, which appears today in Neuron. Reza Rajimehr, a research scientist in the McGovern Institute and a former graduate student at Cambridge University, is the lead author of the paper.

Precise mapping

The cerebral cortex of the brain contains regions devoted to processing different types of sensory information, including visual and auditory input. Over the past few decades, scientists have identified many networks that are involved in this kind of processing, often using fMRI to measure brain activity as subjects perform a single task such as looking at faces.

In other studies, researchers have scanned people’s brains as they do nothing, or let their minds wander. From those studies, researchers have identified networks such as the default mode network, a network of areas that is active during internally focused activities such as daydreaming.

“Up to now, most studies of networks were based on doing functional MRI in the resting-state condition. Based on those studies, we know some main networks in the cortex. Each of them is responsible for a specific cognitive function, and they have been highly influential in the neuroimaging field,” Rajimehr says.

However, during the resting state, many parts of the cortex may not be active at all. To gain a more comprehensive picture of what all these regions are doing, the MIT team analyzed data recorded while subjects performed a more natural task: watching a movie.

“By using a rich stimulus like a movie, we can drive many regions of the cortex very efficiently. For example, sensory regions will be active to process different features of the movie, and high-level areas will be active to extract semantic information and contextual information,” Rajimehr says. “By activating the brain in this way, now we can distinguish different areas or different networks based on their activation patterns.”

The data for this study was generated as part of the Human Connectome Project. Using a 7-Tesla MRI scanner, which offers higher resolution than a typical MRI scanner, researchers imaged brain activity in 176 people as they watched one hour of movie clips showing a variety of scenes.

The MIT team used a machine-learning algorithm to analyze the activity patterns of each brain region, allowing them to identify 24 networks with different activity patterns and functions.
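
The article doesn’t spell out the study’s algorithm, so the sketch below only illustrates the general idea of grouping cortical regions into networks by the similarity of their activity over time. It applies k-means clustering to simulated parcel time courses; the data shapes, the preprocessing, and the choice of k-means are assumptions made for illustration, not the paper’s actual method.

```python
# Illustrative sketch only: cluster cortical parcels into candidate "networks"
# by the similarity of their movie-driven activity time courses. The shapes,
# preprocessing, and use of k-means are assumptions, not the study's method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Simulated stand-in for parcel-averaged fMRI data:
# 360 cortical parcels x 900 time points (one value per fMRI volume).
parcel_timeseries = rng.standard_normal((360, 900))

# Z-score each parcel's time course so clustering reflects response shape,
# not overall signal amplitude.
z = (parcel_timeseries - parcel_timeseries.mean(axis=1, keepdims=True)) / \
    parcel_timeseries.std(axis=1, keepdims=True)

# Group parcels with similar activation patterns into 24 candidate networks.
n_networks = 24
labels = KMeans(n_clusters=n_networks, n_init=10, random_state=0).fit_predict(z)

for k in range(n_networks):
    print(f"network {k}: {np.sum(labels == k)} parcels")
```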

Some of these networks are located in sensory areas such as the visual cortex or auditory cortex, as expected for regions with specific sensory functions. Other areas respond to features such as actions, language, or social interactions. Many of these networks have been seen before, but this technique offers more precise definition of where the networks are located, the researchers say.

“Different regions are competing with each other for processing specific features, so when you map each function in isolation, you may get a slightly larger network because it is not getting constrained by other processes,” Rajimehr says. “But here, because all the areas are considered together, we are able to define more precise boundaries between different networks.”

The researchers also identified networks that hadn’t been seen before, including one in the prefrontal cortex, which appears to be highly responsive to visual scenes. This network was most active in response to pictures of scenes within the movie frames.

Executive control networks

Three of the networks found in this study are involved in “executive control,” and were most active during transitions between different clips. The researchers also observed that these control networks appear to have a “push-pull” relationship with networks that process specific features such as faces or actions. When networks specific to a particular feature were very active, the executive control networks were mostly quiet, and vice versa.

“Whenever the activations in domain-specific areas are high, it looks like there is no need for the engagement of these high-level networks,” Rajimehr says. “But in situations where perhaps there is some ambiguity and complexity in the stimulus, and there is a need for the involvement of the executive control networks, then we see that these networks become highly active.”

Using a movie-watching paradigm, the researchers are now studying some of the networks they identified in more detail, to identify subregions involved in particular tasks. For example, within the social processing network, they have found regions that are specific to processing social information about faces and bodies. In a new network that analyzes visual scenes, they have identified regions involved in processing memory of places.

“This kind of experiment is really about generating hypotheses for how the cerebral cortex is functionally organized. Networks that emerge during movie watching now need to be followed up with more specific experiments to test the hypotheses. It’s giving us a new view into the operation of the entire cortex during a more naturalistic task than just sitting at rest,” Desimone says.

The research was funded by the McGovern Institute, the Cognitive Science and Technology Council of Iran, the MRC Cognition and Brain Sciences Unit at the University of Cambridge, and a Cambridge Trust scholarship.



from MIT News https://ift.tt/Q0L2GhA

A portable light system that can digitize everyday objects

When Nikola Tesla predicted we’d have handheld phones that could display videos, photographs, and more, his musings seemed like a distant dream. Nearly 100 years later, smartphones are like an extra appendage for many of us.

Digital fabrication engineers are now working toward expanding the display capabilities of other everyday objects. One avenue they’re exploring is reprogrammable surfaces — or items whose appearances we can digitally alter — to help users present important information, such as health statistics, as well as new designs on things like a wall, mug, or shoe.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the University of California at Berkeley, and Aarhus University have taken an intriguing step forward by fabricating “PortaChrome,” a portable light system and design tool that can change the color and textures of various objects. Equipped with ultraviolet (UV) and red, green, and blue (RGB) LEDs, the device can be attached to everyday objects like shirts and headphones. Once a user creates a design and sends it to a PortaChrome machine via Bluetooth, the surface can be programmed into multicolor displays of health data, entertainment, and fashion designs.

To make an item reprogrammable, the object must be coated with photochromic dye, an invisible ink that can be turned into different colors with light patterns. Once it’s coated, individuals can create and relay patterns to the item via the team’s graphic design software, or use the team’s API to interact with the device directly and embed data-driven designs. When attached to a surface, PortaChrome’s UV lights saturate the dye while the RGB LEDs desaturate it, activating the colors and ensuring each pixel is toned to match the intended design.
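
As a rough illustration of that saturate-then-desaturate idea (in the spirit of the team’s earlier photochromic color-mixing work, not PortaChrome’s actual control software), the sketch below computes hypothetical per-channel LED exposure times for a target color, assuming a fully UV-saturated coating and a made-up linear bleaching rate.

```python
# Illustrative sketch of photochromic color mixing, not the PortaChrome firmware.
# Assumption: after UV light fully saturates the coating, each dye channel is
# bleached (desaturated) by a roughly linear amount per second of R/G/B
# exposure. The bleach rate and resulting times are made-up values.

def exposure_times(target_rgb, bleach_rate=0.2):
    """Return per-channel LED exposure times (seconds) for a target RGB color.

    target_rgb: (r, g, b) values in [0, 1]. A channel's displayed brightness is
    assumed to rise as its dye is bleached, so the amount to bleach equals the
    target value for that channel.
    """
    times = []
    for channel_value in target_rgb:
        amount_to_bleach = channel_value  # 0 = leave fully saturated, 1 = fully bleach
        times.append(amount_to_bleach / bleach_rate)
    return tuple(times)

# Example: a warm yellow pixel needs long red and green exposures, little blue.
print(exposure_times((0.9, 0.8, 0.1)))  # -> (4.5, 4.0, 0.5) seconds
```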

The researchers’ integrated light system changes objects’ colors in less than four minutes on average, eight times faster than their prior system, “Photo-Chromeleon.” This speed boost comes from switching to a light source that makes direct contact with the object to transmit UV and RGB light. Photo-Chromeleon instead relied on a projector to activate the color-changing properties of the photochromic dye, so the light reaching the object’s surface was far less intense.

“PortaChrome provides a more convenient way to reprogram your surroundings,” says Yunyi Zhu ’20, MEng ’21, an MIT PhD student in electrical engineering and computer science, affiliate of CSAIL, and lead author on a paper about the work. “Compared with our projector-based system from before, PortaChrome is a more portable light source that can be placed directly on top of the photochromic surface. This allows the color change to happen without user intervention and helps us avoid contaminating our environment with UV. As a result, users can wear their heart rate chart on their shirt after a workout, for instance.”

Giving everyday objects a makeover

In demos, PortaChrome displayed health data on different surfaces. A user hiked with PortaChrome sewn onto their backpack, putting it into direct contact with the back of their shirt, which was coated in photochromic dye. Altitude and heart-rate sensors sent data to the lighting device, and a reprogramming script developed by the researchers converted that data into a chart, creating a health visualization on the back of the user’s shirt. In a similar demonstration, MIT researchers displayed a heart gradually coming together on the back of a tablet to show how a user was progressing toward a fitness goal.

PortaChrome also showed a flair for customizing wearables. For example, the researchers redesigned some white headphones with sideways blue lines and horizontal yellow and purple stripes. The photochromic dye was coated on the headphones and the team then attached the PortaChrome device to the inside of the headphone case. Finally, the researchers successfully reprogrammed their patterns onto the object, which resembled watercolor art. Researchers also recolored a wrist splint to match different clothes using this process.

Eventually, the work could be used to digitize consumers’ belongings. Imagine putting on a cloak that can change your entire shirt design, or using your car cover to give your vehicle a new look.

PortaChrome’s main ingredients

On the hardware end, PortaChrome combines four main ingredients. The portable device consists of a textile base that serves as a backbone, a textile layer with the UV LEDs soldered on, another with the RGB LEDs attached, and a silicone diffusion layer on top. Resembling a translucent honeycomb, the silicone layer covers the interlaced UV and RGB LEDs and directs their light toward individual pixels to properly illuminate a design over a surface.

This device can be flexibly wrapped around objects with different shapes. For tables and other flat surfaces, you could place PortaChrome on top, like a placemat. For a curved item like a thermos, you could wrap the light source around like a coffee cup sleeve to ensure it reprograms the entire surface.

The portable, flexible light system is crafted with tools commonly available in maker spaces, such as laser cutters, and the same method can be replicated with flexible PCB materials and other mass-manufacturing systems.

Although the system can already convert surroundings into dynamic displays fairly quickly, Zhu and her colleagues believe it could benefit from further speed boosts. They'd like to use smaller LEDs, which would increase light intensity and likely allow a higher-resolution design to be reprogrammed in seconds.

“The surfaces of our everyday things are encoded with colors and visual textures, delivering crucial information and shaping how we interact with them,” says Georgia Tech postdoc Tingyu Cheng, who was not involved with the research. “PortaChrome is taking a leap forward by providing reprogrammable surfaces with the integration of flexible light sources (UV and RGB LEDs) and photochromic pigments into everyday objects, pixelating the environment with dynamic color and patterns. The capabilities demonstrated by PortaChrome could revolutionize the way we interact with our surroundings, particularly in domains like personalized fashion and adaptive user interfaces. This technology enables real-time customization that seamlessly integrates into daily life, offering a glimpse into the future of ‘ubiquitous displays.’”

Zhu is joined by nine CSAIL affiliates on the paper: MIT PhD student and MIT Media Lab affiliate Cedric Honnet; former visiting undergraduate researchers Yixiao Kang, Angelina J. Zheng, and Grace Tang; MIT undergraduate student Luca Musk; University of Michigan Assistant Professor Junyi Zhu SM ’19, PhD ’24; recent postdoc and Aarhus University assistant professor Michael Wessely; and senior author Stefanie Mueller, the TIBCO Career Development Associate Professor in the MIT departments of Electrical Engineering and Computer Science and Mechanical Engineering and leader of the HCI Engineering Group at CSAIL.

This work was supported by the MIT-GIST Joint Research Program and was presented at the ACM Symposium on User Interface Software and Technology in October.



from MIT News https://ift.tt/RV6ItYO

Tuesday, November 5, 2024

Startup gives surgeons a real-time view of breast cancer during surgery

Breast cancer is the second most common type of cancer and cause of cancer death for women in the United States, affecting one in eight women overall.

Most women with breast cancer undergo lumpectomy surgery to remove the tumor and a rim of healthy tissue surrounding it. After the procedure, the removed tissue is sent to a pathologist, who examines its edges for signs of disease. Unfortunately, about 20 percent of women who have lumpectomies must undergo a second surgery to remove more tissue.

Now, an MIT spinout is giving surgeons a real-time view of cancerous tissue during surgery. Lumicell has developed a handheld device and an optical imaging agent that, when combined, allow surgeons to scan the tissue within the surgical cavity to visualize residual cancer cells. Surgeons view these images on a monitor, which can guide them in removing additional tissue during the procedure.

In a clinical trial of 357 patients, Lumicell’s technology not only reduced the need for second surgeries but also revealed tissue suspected to contain cancer cells that may have otherwise been missed by a standard-of-care lumpectomy.

The company received U.S. Food and Drug Administration approval for the technology earlier this year, marking a major milestone for Lumicell and the founders, who include MIT professors Linda Griffith and Moungi Bawendi along with PhD candidate W. David Lee ’69, SM ’70. Much of the early work developing and testing the system took place at the Koch Institute for Integrative Cancer Research at MIT, beginning in 2008.

The FDA approval also held deep personal significance for some of Lumicell’s team members, including Griffith, a two-time breast cancer survivor, and Lee, whose wife’s passing from the disease in 2003 changed the course of his life.

An interdisciplinary approach

Lee ran a technology consulting group for 25 years before his wife was diagnosed with breast cancer. Watching her battle the disease inspired him to develop technologies that could help cancer patients.

His neighbor at the time was Tyler Jacks, the founding director of the Koch Institute. Jacks invited Lee to a series of meetings at the Koch involving professors Robert Langer and Bawendi, and Lee eventually joined the Koch Institute as an integrative program officer in 2008, where he began exploring an approach for improving imaging in living organisms with single-cell resolution using charge-coupled device (CCD) cameras.

“CCD pixels at the time were each 2 or 3 microns and spaced 2 or 3 microns,” Lee explains. “So the idea was very simple: to stabilize a camera on a tissue so it would move with the breathing of the animal, so the pixels would essentially line up with the cells without any fancy magnification.”

That work led Lee to begin meeting regularly with a multidisciplinary group including Lumicell co-founders Bawendi, currently the Lester Wolfe Professor of Chemistry at MIT and winner of the 2023 Nobel Prize in Chemistry; Griffith, the School of Engineering Professor of Teaching Innovation in MIT’s Department of Biological Engineering and an extramural faculty member at the Koch Institute; Ralph Weissleder, a professor at Harvard Medical School; and David Kirsch, formerly a postdoc at the Koch Institute and now a scientist at the Princess Margaret Cancer Center.

“On Friday afternoons, we’d get together, and Moungi would teach us some chemistry, Lee would teach us some engineering, and David Kirsch would teach some biology,” Griffith recalls.

Through those meetings, the researchers began to explore the effectiveness of combining Lee’s imaging approach with engineered proteins that would light up where the immune system meets the edge of tumors, for use during surgery. To begin testing the idea, the group received funding from the Koch Institute Frontier Research Program via the Kathy and Curt Marble Cancer Research Fund.

“Without that support, this never would have happened,” Lee says. “When I was learning biology at MIT as an undergrad, genetics weren’t even in the textbooks yet. But the Koch Institute provided education, funding, and most importantly, connections to faculty, who were willing to teach me biology.”

In 2010, Griffith was diagnosed with breast cancer.

“Going through that personal experience, I understood the impact that we could have,” Griffith says. “I had a very unusual situation and a bad kind of tumor. The whole thing was nerve-wracking, but one of the most nerve-wracking times was waiting to find out if my tumor margins were clear after surgery. I experienced that uncertainty and dread as a patient, so I became hugely sensitized to our mission.”

The approach Lumicell’s founders eventually settled on begins two to six hours before surgery, when patients receive the optical imaging agent through an IV. Then, during surgery, surgeons use Lumicell’s handheld imaging device to scan the walls of the breast cavity. Lumicell’s cancer detection software shows spots that highlight regions suspected to contain residual cancer on the computer monitor, which the surgeon can then remove. The process adds less than 7 minutes on average to the procedure.

“The technology we developed allows the surgeon to scan the actual cavity, whereas pathology only looks at the lump removed, and [pathologists] make their assessment based on looking at about 1 or 2 percent of the surface area,” Lee says. “Not only are we detecting cancer that was left behind to potentially eliminate second surgeries, we are also, very importantly, finding cancer in some patients that wouldn't be found in pathology and may not generate a second surgery.”

Exploring other cancer types

Lumicell is currently exploring whether its imaging agent is activated in other tumor types, including prostate, sarcoma, esophageal, and gastric cancers.

Lee ran Lumicell between 2008 and 2020. After stepping down as CEO, he decided to return to MIT to pursue a PhD in neuroscience, a full 50 years after he earned his master’s. Shortly thereafter, Howard Hechler took over as Lumicell’s president and chief operating officer.

Looking back, Griffith credits MIT’s culture of learning for the formation of Lumicell.

“People like David [Lee] and Moungi care about solving problems,” Griffith says. “They’re technically brilliant, but they also love learning from other people, and that’s what makes MIT special. People are confident about what they know, but they are also comfortable in that they don’t know everything, which drives great collaboration. We work together so that the whole is bigger than the sum of the parts.”



from MIT News https://ift.tt/9o7s2yL

A new approach to modeling complex biological systems

Over the past two decades, new technologies have helped scientists generate a vast amount of biological data. Large-scale experiments in genomics, transcriptomics, proteomics, and cytometry can produce enormous quantities of data from a given cellular or multicellular system.

However, making sense of this information is not always easy. This is especially true when trying to analyze complex systems such as the cascade of interactions that occur when the immune system encounters a foreign pathogen.

MIT biological engineers have now developed a new computational method for extracting useful information from these datasets. Using their new technique, they showed that they could unravel a series of interactions that determine how the immune system responds to tuberculosis vaccination and subsequent infection.

This strategy could be useful to vaccine developers and to researchers who study any kind of complex biological system, says Douglas Lauffenburger, the Ford Professor of Engineering in the departments of Biological Engineering, Biology, and Chemical Engineering.

“We’ve landed on a computational modeling framework that allows prediction of effects of perturbations in a highly complex system, including multiple scales and many different types of components,” says Lauffenburger, the senior author of the new study.

Shu Wang, a former MIT postdoc who is now an assistant professor at the University of Toronto, and Amy Myers, a research manager in the lab of University of Pittsburgh School of Medicine Professor JoAnne Flynn, are the lead authors of a new paper on the work, which appears today in the journal Cell Systems.

Modeling complex systems

When studying complex biological systems such as the immune system, scientists can extract many different types of data. Sequencing cell genomes tells them which gene variants a cell carries, while analyzing messenger RNA transcripts tells them which genes are being expressed in a given cell. Using proteomics, researchers can measure the proteins found in a cell or biological system, and cytometry allows them to quantify a myriad of cell types present.

Using computational approaches such as machine learning, scientists can use this data to train models to predict a specific output based on a given set of inputs — for example, whether a vaccine will generate a robust immune response. However, that type of modeling doesn’t reveal anything about the steps that happen in between the input and the output.

“That AI approach can be really useful for clinical medical purposes, but it’s not very useful for understanding biology, because usually you’re interested in everything that’s happening between the inputs and outputs,” Lauffenburger says. “What are the mechanisms that actually generate outputs from inputs?”

To create models that can identify the inner workings of complex biological systems, the researchers turned to a type of model known as a probabilistic graphical network. These models represent each measured variable as a node, generating maps of how each node is connected to the others.

Probabilistic graphical networks are often used for applications such as speech recognition and computer vision, but they have not been widely used in biology.

Lauffenburger’s lab has previously used this type of model to analyze intracellular signaling pathways, which required analyzing just one kind of data. To adapt this approach to analyze many datasets at once, the researchers applied a mathematical technique that can filter out any correlations between variables that are not directly affecting each other. This technique, known as graphical lasso, is an adaptation of the method often used in machine learning models to strip away results that are likely due to noise.

“With correlation-based network models generally, one of the problems that can arise is that everything seems to be influenced by everything else, so you have to figure out how to strip down to the most essential interactions,” Lauffenburger says. “Using probabilistic graphical network frameworks, one can really boil down to the things that are most likely to be direct and throw out the things that are most likely to be indirect.”
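
As a minimal sketch of that idea (not the study’s actual pipeline), scikit-learn’s graphical lasso implementation can estimate a sparse inverse covariance matrix from multivariate measurements: nonzero off-diagonal entries flag likely direct links, while pairs whose correlation is explained away by the other variables are pushed toward zero. The variable names and simulated data below are invented for illustration.

```python
# Minimal sketch of sparse graphical-model estimation with the graphical lasso.
# The variable names and simulated data are placeholders, not the study's data.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(1)

# Simulate a simple chain of direct effects:
# dose -> cytokine -> b_cells -> antibody, each step adding noise.
n = 200
dose = rng.standard_normal(n)
cytokine = 0.8 * dose + 0.3 * rng.standard_normal(n)
b_cells = 0.7 * cytokine + 0.3 * rng.standard_normal(n)
antibody = 0.9 * b_cells + 0.3 * rng.standard_normal(n)

X = np.column_stack([dose, cytokine, b_cells, antibody])
names = ["dose", "cytokine", "b_cells", "antibody"]

# Fit a sparse precision (inverse covariance) matrix; entries near zero mean
# two variables have no direct link once the others are accounted for.
model = GraphicalLassoCV().fit(X)
precision = model.precision_

for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if abs(precision[i, j]) > 1e-2:
            print(f"direct link kept: {names[i]} -- {names[j]}")
```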

Mechanism of vaccination

To test their modeling approach, the researchers used data from studies of a tuberculosis vaccine. This vaccine, known as BCG, is an attenuated form of Mycobacterium bovis. It is used in many countries where TB is common, but it isn’t always effective, and its protection can weaken over time.

In hopes of developing more effective TB protection, researchers have been testing whether delivering the BCG vaccine intravenously or by inhalation might provoke a better immune response than injecting it. Those studies, performed in animals, found that the vaccine did work much better when given intravenously. In the MIT study, Lauffenburger and his colleagues attempted to discover the mechanism behind this success.

The data that the researchers examined in this study included measurements of about 200 variables, including levels of cytokines, antibodies, and different types of immune cells, from about 30 animals.

The measurements were taken before vaccination, after vaccination, and after TB infection. By analyzing the data using their new modeling approach, the MIT team was able to determine the steps needed to generate a strong immune response. They showed that the vaccine stimulates a subset of T cells, which produce a cytokine that activates a set of B cells that generate antibodies targeting the bacterium.

“Almost like a roadmap or a subway map, you could find what were really the most important paths. Even though a lot of other things in the immune system were changing one way or another, they were really off the critical path and didn't matter so much,” Lauffenburger says.

The researchers then used the model to make predictions for how a specific disruption, such as suppressing a subset of immune cells, would affect the system. The model predicted that if B cells were nearly eliminated, there would be little impact on the vaccine response, and experiments showed that prediction was correct.
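
The paper’s specific perturbation procedure isn’t described here, but one standard way to turn a fitted Gaussian graphical model into perturbation predictions, sketched below, is to clamp the perturbed variable at a fixed value and read off the conditional means of the remaining variables from the precision matrix. The toy precision matrix and variable names are invented and are not meant to reproduce the study’s findings.

```python
# Sketch: predict how other variables shift when one node is clamped low, using
# the conditional mean of a zero-mean Gaussian graphical model. The precision
# matrix and names are invented for illustration, not taken from the study.
import numpy as np

# Illustrative precision matrix for a chain: dose -- cytokine -- b_cells -- antibody.
# Off-diagonal zeros mean "no direct link"; the values are made up.
names = ["dose", "cytokine", "b_cells", "antibody"]
precision = np.array([
    [ 1.6, -0.8,  0.0,  0.0],
    [-0.8,  2.0, -0.7,  0.0],
    [ 0.0, -0.7,  2.1, -0.9],
    [ 0.0,  0.0, -0.9,  1.5],
])

def predict_perturbation(precision, names, clamp_name, clamp_value):
    """Conditional means of the remaining variables given one clamped variable."""
    idx = names.index(clamp_name)
    keep = [i for i in range(len(names)) if i != idx]
    theta_aa = precision[np.ix_(keep, keep)]
    theta_ab = precision[np.ix_(keep, [idx])]
    cond_mean = -np.linalg.solve(theta_aa, theta_ab * clamp_value).ravel()
    return {names[i]: float(m) for i, m in zip(keep, cond_mean)}

# Example: clamp "cytokine" two standard deviations low and read off the
# predicted shifts in the other variables (purely illustrative numbers).
print(predict_perturbation(precision, names, "cytokine", -2.0))
```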

This modeling approach could be used by vaccine developers to predict the effect their vaccines may have, and to make tweaks that would improve them before testing them in humans. Lauffenburger’s lab is now using the model to study the mechanism of a malaria vaccine that has been given to children in Kenya, Ghana, and Malawi over the past few years.

His lab is also using this type of modeling to study the tumor microenvironment, which contains many types of immune cells and cancerous cells, in hopes of predicting how tumors might respond to different kinds of treatment.

The research was funded by the National Institute of Allergy and Infectious Diseases.



from MIT News https://ift.tt/cUuWVZd

Monday, November 4, 2024

Despite its impressive output, generative AI doesn’t have a coherent understanding of the world

Large language models can do impressive things, like write poetry or generate viable computer programs, even though these models are trained to predict words that come next in a piece of text.

Such surprising capabilities can make it seem like the models are implicitly learning some general truths about the world.

But that isn’t necessarily the case, according to a new study. The researchers found that a popular type of generative AI model can provide turn-by-turn driving directions in New York City with near-perfect accuracy — without having formed an accurate internal map of the city.

Despite the model’s uncanny ability to navigate effectively, when the researchers closed some streets and added detours, its performance plummeted.

When they dug deeper, the researchers found that the New York maps the model implicitly generated contained many nonexistent streets curving across the grid and connecting faraway intersections.

This could have serious implications for generative AI models deployed in the real world, since a model that seems to be performing well in one context might break down if the task or environment slightly changes.

“One hope is that, because LLMs can accomplish all these amazing things in language, maybe we could use these same tools in other parts of science, as well. But the question of whether LLMs are learning coherent world models is very important if we want to use these techniques to make new discoveries,” says senior author Ashesh Rambachan, assistant professor of economics and a principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).

Rambachan is joined on a paper about the work by lead author Keyon Vafa, a postdoc at Harvard University; Justin Y. Chen, an electrical engineering and computer science (EECS) graduate student at MIT; Jon Kleinberg, Tisch University Professor of Computer Science and Information Science at Cornell University; and Sendhil Mullainathan, an MIT professor in the departments of EECS and of Economics, and a member of LIDS. The research will be presented at the Conference on Neural Information Processing Systems.

New metrics

The researchers focused on a type of generative AI model known as a transformer, which forms the backbone of LLMs like GPT-4. Transformers are trained on a massive amount of language-based data to predict the next token in a sequence, such as the next word in a sentence.

But if scientists want to determine whether an LLM has formed an accurate model of the world, measuring the accuracy of its predictions doesn’t go far enough, the researchers say.

For example, they found that a transformer can predict valid moves in a game of Connect 4 nearly every time without understanding any of the rules.

So, the team developed two new metrics that can test a transformer’s world model. The researchers focused their evaluations on a class of problems called deterministic finite automata, or DFAs.

A DFA is a problem with a sequence of states, like intersections one must traverse to reach a destination, and a concrete way of describing the rules one must follow along the way.

They chose two problems to formulate as DFAs: navigating on streets in New York City and playing the board game Othello.

“We needed test beds where we know what the world model is. Now, we can rigorously think about what it means to recover that world model,” Vafa explains.

The first metric they developed, called sequence distinction, says a model has formed a coherent world model if it sees two different states, like two different Othello boards, and recognizes how they are different. Sequences, that is, ordered lists of data points, are what transformers use to generate outputs.

The second metric, called sequence compression, says a transformer with a coherent world model should know that two identical states, like two identical Othello boards, have the same sequence of possible next steps.
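
To make the two metrics concrete, the toy sketch below checks a stand-in sequence model against a known three-state DFA: sequences that reach the same DFA state should get identical continuation sets (compression), and sequences that reach different states should be treated differently (distinction). The DFA, the placeholder model, and the pass/fail rules are simplified assumptions, not the paper’s exact definitions.

```python
# Toy sketch of the intuition behind the two metrics, evaluated against a known
# DFA. The DFA, the "model" below, and the pass/fail rules are simplified
# placeholders, not the paper's definitions.

# A 3-state DFA over tokens "a" and "b": transitions[state][token] -> next state.
transitions = {
    0: {"a": 1, "b": 0},
    1: {"a": 2, "b": 0},
    2: {"a": 2, "b": 2},
}

def dfa_state(seq, start=0):
    state = start
    for tok in seq:
        state = transitions[state][tok]
    return state

def model_continuations(seq):
    """Stand-in for a trained sequence model: the set of next tokens it would
    emit after seq. A model with a faithful world model should depend only on
    the DFA state that seq reaches."""
    return {"a", "b"} if dfa_state(seq) < 2 else {"a"}  # placeholder behavior

def compression_ok(seq1, seq2):
    # Same DFA state -> the model should allow the same continuations.
    assert dfa_state(seq1) == dfa_state(seq2)
    return model_continuations(seq1) == model_continuations(seq2)

def distinction_ok(seq1, seq2):
    # Different DFA states -> the model should treat the sequences differently.
    assert dfa_state(seq1) != dfa_state(seq2)
    return model_continuations(seq1) != model_continuations(seq2)

print(compression_ok(["a", "b"], ["b", "b"]))   # both reach state 0 -> True
print(distinction_ok(["a", "a"], ["a", "b"]))   # states 2 vs. 0 -> True
```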

They used these metrics to test two common classes of transformers: one trained on data generated from randomly produced sequences, and the other trained on data generated by following strategies.

Incoherent world models

Surprisingly, the researchers found that transformers which made choices randomly formed more accurate world models, perhaps because they saw a wider variety of potential next steps during training. 

“In Othello, if you see two random computers playing rather than championship players, in theory you’d see the full set of possible moves, even the bad moves championship players wouldn’t make,” Vafa explains.

Even though the transformers generated accurate directions and valid Othello moves in nearly every instance, the two metrics revealed that only one generated a coherent world model for Othello moves, and none performed well at forming coherent world models in the wayfinding example.

The researchers demonstrated the implications of this by adding detours to the map of New York City, which caused all the navigation models to fail.

“I was surprised by how quickly the performance deteriorated as soon as we added a detour. If we close just 1 percent of the possible streets, accuracy immediately plummets from nearly 100 percent to just 67 percent,” Vafa says.

When they recovered the city maps the models generated, they looked like an imaginary New York City with hundreds of streets crisscrossing the grid. The maps often contained random flyovers above other streets or multiple streets with impossible orientations.

These results show that transformers can perform surprisingly well at certain tasks without understanding the rules. If scientists want to build LLMs that can capture accurate world models, they need to take a different approach, the researchers say.

“Often, we see these models do impressive things and think they must have understood something about the world. I hope we can convince people that this is a question to think very carefully about, and we don’t have to rely on our own intuitions to answer it,” says Rambachan.

In the future, the researchers want to tackle a more diverse set of problems, such as those where some rules are only partially known. They also want to apply their evaluation metrics to real-world, scientific problems.

This work is funded, in part, by the Harvard Data Science Initiative, a National Science Foundation Graduate Research Fellowship, a Vannevar Bush Faculty Fellowship, a Simons Collaboration grant, and a grant from the MacArthur Foundation.



from MIT News https://ift.tt/6zO7ysU

Bridging Talents and Opportunities Forum connects high school and college students with STEAM leaders and resources

Bridging Talents and Opportunities (BTO) held its second annual forum at the Stratton Student Center at MIT Oct. 11-12. The two-day event gathered over 500 participants, including high school students and their families, undergraduate students, professors, and leaders across STEAM (science, technology, engineering, arts, and mathematics) fields.

The forum sought to empower talented students from across the United States and Latin America to dream big and pursue higher education, demonstrating that access to prestigious institutions like MIT is possible regardless of socioeconomic barriers. The event featured inspirational talks from world-renowned scientists, innovators, entrepreneurs, social leaders, and major figures in entertainment — from Nobel laureate Rigoberta Menchú Tum to musician and producer Emilio Estefan, and more.

“Our initiative is committed to building meaningful connections among talented young individuals, their families, foundations, and leaders in science, art, mathematics, and technology,” says Ronald Garcia Ruiz, the Thomas A. Frank Career Development Assistant Professor of Physics at MIT and an organizer of the forum. “Recognizing that talent is universal but opportunities are often confined to select sectors of society, we are dedicated to bridging this gap. BTO provides a platform for sharing inspiring stories and offering support to promising young talents, empowering them to seize the diverse opportunities that await them.”

During their talks and panel discussions, speakers shared their insight into topics such as access to STEAM education, overcoming challenges and socioeconomic barriers, and strategies for fostering inclusion in STEAM fields. Students also had the opportunity to network with industry leaders and professionals, building connections to foster future collaborations.

Attendees also took part in hands-on scientific demonstrations, interactions with robots, and tours of MIT labs, which offered a view of cutting-edge scientific research. In addition, the event featured musical performances by Latin American students from Berklee College of Music.

“I was thrilled to see the enthusiasm of young people and their parents and to be inspired by the great life stories of accomplished scientists and individuals from other fields making a positive impact in the real world,” says Edwin Pedrozo Peñafiel, assistant professor of physics at the University of Florida and an organizer. “This is why I strongly believe that representation matters.”

Welcoming a Nobel laureate

The first day of the forum opened with welcoming remarks from Nergis Mavalvala, dean of the School of Science, and Boleslaw Wyslouch, director of the Laboratory for Nuclear Science and the MIT Bates Research and Engineering Center, and concluded with a keynote address by human rights activist Rigoberta Menchú Tum, 1992 Nobel Peace laureate and founder of the Rigoberta Menchú Tum Foundation. Reflecting upon Indigenous perspectives on science, she emphasized the importance of maintaining a humanistic perspective in scientific discovery. “My struggle has been one of constructing a humanistic perspective … that science, technology … are products of the strength of human beings,” Menchú remarked. She also shared her extraordinary story, encouraging students to persevere no matter the obstacles.

Diana Grass, a PhD student in the Harvard-MIT Health Sciences and Technology program and an organizer, says, “As a woman in science and a first-generation student, I’ve experienced firsthand the impact of breaking barriers and the importance of representation. At Bridging Talents and Opportunities (BTO), we are shaping a future where opportunities are available to all. Seeing students from disadvantaged backgrounds, along with their parents, engage with some of today’s most influential scientists and leaders — who shared their own stories of resilience — was both inspiring and transformative. It ignited crucial conversations about how interdisciplinary collaboration in STEAM, grounded in humanity, is essential for tackling the critical challenges of our era.”

Power of the Arts

The second day concluded with a panel on “The Power of the Arts,” featuring actor, singer, and songwriter Carlos Ponce, as well as musician and producer Emilio Estefan. They were joined by journalist and author Luz María Doria, who moderated the discussion. Throughout the panel, the speakers recounted their inspiring journeys toward success in the entertainment industry. “This forum reaffirmed our commitment to bridging talent with opportunity,” says Ponce. “The energy and engagement from students, families, and speakers were incredible, fostering a space of learning, empowerment, and possibility.”

During the forum, a two-hour workshop brought together scientists, nonprofit foundations, and business leaders to discuss concrete proposals for creating opportunities for young talents and to share ideas with one another. Key takeaways included developing strategic programs to match talented young students with mentors from diverse backgrounds who can serve as role models, making better use of existing programs that support underserved populations, spreading information about those programs, improving financial support for students pursuing higher education, and fostering longer-term collaborations among the three groups represented at the workshop.

Maria Angélica Cuellar, CEO of Incontact Group and a BTO organizer, says, “The event was absolutely spectacular and exceeded our expectations. We not only brought together leaders making a global impact in STEAM and business, but also secured financial commitments to support young talents. Through media coverage and streaming, our message reached every corner of the world, especially Latin America and the U.S. I’m deeply grateful for the commitment of each speaker and for the path now open to turn this dream of connecting stakeholders into tangible results and actions. An exciting challenge lies ahead, driving us to work even harder to create opportunities for these talented young people.”

“Bridging Talents and Opportunities was a unique event that brought together students, parents, professors, and leaders in different fields in a relatable and inspiring environment,” says Sebastián Ruiz Lopera, a PhD candidate in the Department of Electrical Engineering and Computer Science and an organizer. “Every speaker, panelist, and participant shared a story of resilience and passion that will motivate the next generation of young talents from disadvantaged backgrounds to become the new leaders and stakeholders.”

The 2024 BTO forum was made possible with the support of the Latinx Graduate Student Association at MIT, the Laboratory for Nuclear Science, the MIT MLK Scholars Program, the Institute Community and Equity Office, the School of Science, the U.S. Department of Energy, the University of Florida, CHN, JGMA Architects, Berklee College of Music, and the Harvard Colombian Student Society.



from MIT News https://ift.tt/XFhwpi9

Artist and designer Es Devlin awarded Eugene McDermott Award in the Arts at MIT

Artist and designer Es Devlin is the recipient of the 2025 Eugene McDermott Award in the Arts at MIT. The $100,000 prize, to be awarded at a gala in her honor, also includes an artist residency at MIT in spring 2025, during which Devlin will present her work in a lecture open to the public on May 1, 2025.

Devlin’s work explores biodiversity, linguistic diversity, and collective AI-generated poetry, all areas that also are being explored within the MIT community. She is known for public art and installations at major museums such as the Tate Modern, kinetic stage designs for the Metropolitan Opera, the Super Bowl, and the Olympics, as well as monumental stage sculptures for large-scale stadium concerts.

“I am always most energized by works I have not yet made, so I am immensely grateful to have this trust and investment in ideas I’ve yet to conceive,” says Devlin. “I’m honored to receive an award that has been granted to so many of my heroes, and look forward to collaborating closely with the brilliant minds at MIT.”

“We look forward to presenting Es Devlin with MIT’s highest award in the arts. Her work will be an inspiration for our students studying the visual arts, theater, media, and design. Her interest in AI and the arts dovetails with a major initiative at MIT to address the societal impact of GenAI [generative artificial intelligence],” says MIT vice provost and Ford International Professor of History Philip S. Khoury. “With a new performing arts center opening this winter and a campus-wide arts festival taking place this spring, there could not be a better moment to expose MIT’s creative community to Es Devlin’s extraordinary artistic practice.”

The Eugene McDermott Award in the Arts at MIT recognizes innovative artists working in any field or cross-disciplinary activity. The $100,000 prize represents an investment in the recipient’s future creative work, rather than a prize for a particular project or lifetime of achievement. The official announcement was made at the Council for the Arts at MIT’s 51st annual meeting on Oct. 24. Since it was established in 1974, the award has been bestowed upon 38 individuals who work in performing, visual, and media arts, as well as authors, art historians, and patrons of the arts. Past recipients include Santiago Calatrava, Gustavo Dudamel, Olafur Eliasson, Robert Lepage, Audra McDonald, Suzan-Lori Parks, Bill Viola, and Pamela Z, among others.

A distinctive feature of the award is a short residency at MIT, which includes a public presentation of the artist’s work, substantial interaction with students and faculty, and a gala that convenes national and international leaders in the arts. The goal of the residency is to provide the recipient with unparalleled access to the creative energy and cutting-edge research at the Institute and to develop mutually enlightening relationships in the MIT community.

The Eugene McDermott Award in the Arts at MIT was established in 1974 by Margaret McDermott (1912-2018) in honor of her husband, Eugene McDermott (1899-1973), a co-founder of Texas Instruments and longtime friend and benefactor of MIT. The award is presented by the Council for the Arts at MIT.

The award is bestowed upon individuals whose artistic trajectory and body of work have achieved the highest distinction in their field and indicate they will remain leaders for years to come. The McDermott Award reflects MIT’s commitment to risk-taking, problem-solving, and connecting creative minds across disciplines.

Es Devlin, born in London in 1971, views an audience as a temporary society and often invites public participation in communal choral works. Her canvas ranges from public sculptures and installations at Tate Modern, V&A, Serpentine, Imperial War Museum, and Lincoln Center, to kinetic stage designs at the Royal Opera House, the National Theatre, and the Metropolitan Opera, as well as Olympic ceremonies, Super Bowl halftime shows, and monumental illuminated stage sculptures for large-scale stadium concerts.

Devlin is the subject of a major monographic book, “An Atlas of Es Devlin,” described by Thames and Hudson as their most intricate and sculptural publication to date, and a retrospective exhibition at the Cooper Hewitt Smithsonian Design Museum in New York. In 2020, she became the first female architect of the U.K. Pavilion at a World Expo, conceiving a building that used AI to co-author poetry with visitors on its 20-meter-diameter facade. Her practice was the subject of the 2017 Netflix documentary series “Abstract: The Art of Design.” She is a fellow of the Royal Academy of Music, University of the Arts London, and a Royal Designer for Industry at the Royal Society of Arts. She has been awarded the London Design Medal, three Olivier Awards, a Tony Award, an Ivor Novello Award, and doctorates from the Universities of Bristol and Kent, and she has been appointed Commander of the Order of the British Empire.



from MIT News https://ift.tt/ivPxm04