Friday, January 31, 2025

With generative AI, MIT chemists quickly calculate 3D genomic structures

Every cell in your body contains the same genetic sequence, yet each cell expresses only a subset of those genes. These cell-specific gene expression patterns, which ensure that a brain cell is different from a skin cell, are partly determined by the three-dimensional structure of the genetic material, which controls the accessibility of each gene.

MIT chemists have now come up with a new way to determine those 3D genome structures, using generative artificial intelligence. Their technique can predict thousands of structures in just minutes, making it much speedier than existing experimental methods for analyzing the structures.

Using this technique, researchers could more easily study how the 3D organization of the genome affects individual cells’ gene expression patterns and functions.

“Our goal was to try to predict the three-dimensional genome structure from the underlying DNA sequence,” says Bin Zhang, an associate professor of chemistry and the senior author of the study. “Now that we can do that, which puts this technique on par with the cutting-edge experimental techniques, it can really open up a lot of interesting opportunities.”

MIT graduate students Greg Schuette and Zhuohan Lao are the lead authors of the paper, which appears today in Science Advances.

From sequence to structure

Inside the cell nucleus, DNA and proteins form a complex called chromatin, which has several levels of organization, allowing cells to cram 2 meters of DNA into a nucleus that is only one-hundredth of a millimeter in diameter. Long strands of DNA wind around proteins called histones, giving rise to a structure somewhat like beads on a string.

Chemical tags known as epigenetic modifications can be attached to DNA at specific locations, and these tags, which vary by cell type, affect the folding of the chromatin and the accessibility of nearby genes. These differences in chromatin conformation help determine which genes are expressed in different cell types, or at different times within a given cell.

Over the past 20 years, scientists have developed experimental techniques for determining chromatin structures. One widely used technique, known as Hi-C, works by linking together neighboring DNA strands in the cell’s nucleus. Researchers can then determine which segments are located near each other by shredding the DNA into many tiny pieces and sequencing it.

This method can be used on large populations of cells to calculate an average structure for a section of chromatin, or on single cells to determine structures within that specific cell. However, Hi-C and similar techniques are labor-intensive, and it can take about a week to generate data from one cell.

To overcome those limitations, Zhang and his students developed a model that takes advantage of recent advances in generative AI to create a fast, accurate way to predict chromatin structures in single cells. The AI model that they designed can quickly analyze DNA sequences and predict the chromatin structures that those sequences might produce in a cell.

“Deep learning is really good at pattern recognition,” Zhang says. “It allows us to analyze very long DNA segments, thousands of base pairs, and figure out what is the important information encoded in those DNA base pairs.”

ChromoGen, the model that the researchers created, has two components. The first component, a deep learning model taught to “read” the genome, analyzes the information encoded in the underlying DNA sequence and chromatin accessibility data, the latter of which is widely available and cell type-specific.

The second component is a generative AI model that predicts physically accurate chromatin conformations, having been trained on more than 11 million chromatin conformations. These data were generated from experiments using Dip-C (a variant of Hi-C) on 16 cells from a line of human B lymphocytes.

When integrated, the first component informs the generative model how the cell type-specific environment influences the formation of different chromatin structures, and this scheme effectively captures sequence-structure relationships. For each sequence, the researchers use their model to generate many possible structures. That’s because DNA is a very disordered molecule, so a single DNA sequence can give rise to many different possible conformations.
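The two-stage idea described above can be caricatured in a few lines: embed a sequence and its accessibility data, then sample many 3D conformations from a generative model. Every function below is a hypothetical toy stand-in, not the authors' ChromoGen implementation:

```python
import numpy as np

def encode_region(sequence, accessibility, dim=8):
    """Toy stand-in for the first component: map a DNA sequence and its
    accessibility signal to a fixed-size embedding."""
    base_vals = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}
    seq = np.array([base_vals[b] for b in sequence])
    # Combine coarse sequence and accessibility statistics into features.
    feats = np.array([seq.mean(), seq.std(),
                      np.mean(accessibility), np.std(accessibility)])
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((feats.size, dim))
    return feats @ proj

def sample_conformations(embedding, n_samples=1000, n_beads=64, seed=1):
    """Toy stand-in for the generative component: draw many 3D conformations
    (n_beads x 3 coordinates) conditioned on the embedding. A single sequence
    maps to a *distribution* of structures, so the model is sampled repeatedly."""
    rng = np.random.default_rng(seed)
    mean = np.tanh(embedding).mean()  # crude conditioning on the embedding
    return mean + rng.standard_normal((n_samples, n_beads, 3))

emb = encode_region("ACGTACGTAC",
                    accessibility=[0.1, 0.9, 0.5, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5])
structs = sample_conformations(emb, n_samples=100)
print(structs.shape)  # (100, 64, 3): many conformations for one sequence
```

The point the sketch preserves is that one sequence yields a distribution of structures, so the model is sampled many times rather than queried for a single answer.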

“A major complicating factor of predicting the structure of the genome is that there isn’t a single solution that we’re aiming for. There’s a distribution of structures, no matter what portion of the genome you’re looking at. Predicting that very complicated, high-dimensional statistical distribution is something that is incredibly challenging to do,” Schuette says.

Rapid analysis

Once trained, the model can generate predictions on a much faster timescale than Hi-C or other experimental techniques.

“Whereas you might spend six months running experiments to get a few dozen structures in a given cell type, you can generate a thousand structures in a particular region with our model in 20 minutes on just one GPU,” Schuette says.

After training their model, the researchers used it to generate structure predictions for more than 2,000 DNA sequences, then compared them to the experimentally determined structures for those sequences. They found that the structures generated by the model were the same or very similar to those seen in the experimental data.

“We typically look at hundreds or thousands of conformations for each sequence, and that gives you a reasonable representation of the diversity of the structures that a particular region can have,” Zhang says. “If you repeat your experiment multiple times, in different cells, you will very likely end up with a very different conformation. That’s what our model is trying to predict.”

The researchers also found that the model could make accurate predictions for data from cell types other than the one it was trained on. This suggests that the model could be useful for analyzing how chromatin structures differ between cell types, and how those differences affect their function. The model could also be used to explore different chromatin states that can exist within a single cell, and how those changes affect gene expression.

Another possible application would be to explore how mutations in a particular DNA sequence change the chromatin conformation, which could shed light on how such mutations may cause disease.

“There are a lot of interesting questions that I think we can address with this type of model,” Zhang says.

The researchers have made all of their data and the model available to others who wish to use it.

The research was funded by the National Institutes of Health.



from MIT News https://ift.tt/Ru3j7Sb

Thursday, January 30, 2025

MIT engineers help multirobot systems stay in the safety zone

Drone shows are an increasingly popular form of large-scale light display. These shows incorporate hundreds to thousands of airborne bots, each programmed to fly in paths that together form intricate shapes and patterns across the sky. When they go as planned, drone shows can be spectacular. But when one or more drones malfunction, as has happened recently in Florida, New York, and elsewhere, they can be a serious hazard to spectators on the ground.

Drone show accidents highlight the challenges of maintaining safety in what engineers call “multiagent systems” — systems of multiple coordinated, collaborative, and computer-programmed agents, such as robots, drones, and self-driving cars.

Now, a team of MIT engineers has developed a training method for multiagent systems that can guarantee their safe operation in crowded environments. The researchers found that once the method is used to train a small number of agents, the safety margins and controls learned by those agents can automatically scale to any larger number of agents, in a way that ensures the safety of the system as a whole.

In real-world demonstrations, the team trained a small number of palm-sized drones to safely carry out different objectives, from simultaneously switching positions midflight to landing on designated moving vehicles on the ground. In simulations, the researchers showed that the same programs, trained on a few drones, could be copied and scaled up to thousands of drones, enabling a large system of agents to safely accomplish the same tasks.

“This could be a standard for any application that requires a team of agents, such as warehouse robots, search-and-rescue drones, and self-driving cars,” says Chuchu Fan, associate professor of aeronautics and astronautics at MIT. “This provides a shield, or safety filter, saying each agent can continue with their mission, and we’ll tell you how to be safe.”

Fan and her colleagues report on their new method in a study appearing this month in the journal IEEE Transactions on Robotics. The study’s co-authors are MIT graduate students Songyuan Zhang and Oswin So as well as former MIT postdoc Kunal Garg, who is now an assistant professor at Arizona State University.

Mall margins

When engineers design for safety in any multiagent system, they typically have to consider the potential paths of every single agent with respect to every other agent in the system. This pair-wise path-planning is a time-consuming and computationally expensive process. And even then, safety is not guaranteed.

“In a drone show, each drone is given a specific trajectory — a set of waypoints and a set of times — and then they essentially close their eyes and follow the plan,” says Zhang, the study’s lead author. “Since they only know where they have to be and at what time, if there are unexpected things that happen, they don’t know how to adapt.”

The MIT team looked instead to develop a method to train a small number of agents to maneuver safely, in a way that could efficiently scale to any number of agents in the system. And, rather than plan specific paths for individual agents, the method would enable agents to continually map their safety margins, or boundaries beyond which they might be unsafe. An agent could then take any number of paths to accomplish its task, as long as it stays within its safety margins.

In some sense, the team says the method is similar to how humans intuitively navigate their surroundings.

“Say you’re in a really crowded shopping mall,” So explains. “You don’t care about anyone beyond the people who are in your immediate neighborhood, like the 5 meters surrounding you, in terms of getting around safely and not bumping into anyone. Our work takes a similar local approach.”

Safety barrier

In their new study, the team presents their method, GCBF+, which stands for “Graph Control Barrier Function.” A barrier function is a mathematical term used in robotics that calculates a sort of safety barrier, or a boundary beyond which an agent has a high probability of being unsafe. For any given agent, this safety zone can change moment to moment, as the agent moves among other agents that are themselves moving within the system.

When designers calculate barrier functions for any one agent in a multiagent system, they typically have to take into account the potential paths and interactions with every other agent in the system. Instead, the MIT team’s method calculates the safety zones of just a handful of agents, in a way that is accurate enough to represent the dynamics of many more agents in the system.

“Then we can sort of copy-paste this barrier function for every single agent, and then suddenly we have a graph of safety zones that works for any number of agents in the system,” So says.

To calculate an agent’s barrier function, the team’s method first takes into account an agent’s “sensing radius,” or how much of the surroundings an agent can observe, depending on its sensor capabilities. Just as in the shopping mall analogy, the researchers assume that the agent only cares about the agents that are within its sensing radius, in terms of keeping safe and avoiding collisions with those agents.

Then, using computer models that capture an agent’s particular mechanical capabilities and limits, the team simulates a “controller,” or a set of instructions for how the agent and a handful of similar agents should move around. They then run simulations of multiple agents moving along certain trajectories, and record whether and how they collide or otherwise interact.

“Once we have these trajectories, we can compute some laws that we want to minimize, like say, how many safety violations we have in the current controller,” Zhang says. “Then we update the controller to be safer.”
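The simulate-count-update loop Zhang describes can be sketched with point agents. This is a toy caricature, not the authors' GCBF+ implementation: the controller is a goal-seeking velocity plus repulsion from neighbors within a local sensing radius, and "training" is reduced to picking the controller parameter with the fewest counted safety violations:

```python
import numpy as np

SENSE_RADIUS = 2.0   # agents only react to neighbors within this radius
SAFE_DIST = 0.5      # minimum allowed inter-agent distance

def step(pos, goals, repulse_gain, dt=0.05):
    """One control step: move toward goal, push away from sensed neighbors."""
    vel = goals - pos
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d)
            if dist < SENSE_RADIUS:  # local sensing only, as in the mall analogy
                vel[i] += repulse_gain * d / (dist**2 + 1e-6)
    return pos + dt * vel

def count_violations(pos):
    """Safety violations: pairs of agents closer than SAFE_DIST."""
    n = len(pos)
    return sum(np.linalg.norm(pos[i] - pos[j]) < SAFE_DIST
               for i in range(n) for j in range(i + 1, n))

def rollout(repulse_gain, steps=200):
    """Simulate agents swapping positions and total up safety violations."""
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1, 1, (6, 2))
    goals = -pos.copy()  # each agent's goal is the opposite point, forcing crossings
    total = 0
    for _ in range(steps):
        pos = step(pos, goals, repulse_gain)
        total += count_violations(pos)
    return total

# "Training": keep the controller parameter that minimizes violations.
best_gain = min([0.0, 0.1, 0.5, 1.0], key=rollout)
print(best_gain, rollout(best_gain))
```

A real implementation would learn a neural barrier function and controller by gradient descent over many simulated trajectories; the sketch keeps only the structure of the loop.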

In this way, a controller can be programmed into actual agents, which would enable them to continually map their safety zone based on any other agents they can sense in their immediate surroundings, and then move within that safety zone to accomplish their task.

“Our controller is reactive,” Fan says. “We don’t preplan a path beforehand. Our controller is constantly taking in information about where an agent is going, what is its velocity, how fast other drones are going. It’s using all this information to come up with a plan on the fly and it’s replanning every time. So, if the situation changes, it’s always able to adapt to stay safe.”

The team demonstrated GCBF+ on a system of eight Crazyflies — lightweight, palm-sized quadrotor drones that they tasked with flying and switching positions in midair. If the drones were to do so by taking the straightest path, they would surely collide. But after training with the team’s method, the drones were able to make real-time adjustments to maneuver around each other, keeping within their respective safety zones, to successfully switch positions on the fly.

In similar fashion, the team tasked the drones with flying around, then landing on specific Turtlebots — wheeled robots with shell-like tops. The Turtlebots drove continuously around in a large circle, and the Crazyflies were able to avoid colliding with each other as they made their landings.

“Using our framework, we only need to give the drones their destinations instead of the whole collision-free trajectory, and the drones can figure out how to arrive at their destinations without collision themselves,” says Fan, who envisions the method could be applied to any multiagent system to guarantee its safety, including collision avoidance systems in drone shows, warehouse robots, autonomous driving vehicles, and drone delivery systems.

This work was partly supported by the U.S. National Science Foundation, MIT Lincoln Laboratory under the Safety in Aerobatic Flight Regimes (SAFR) program, and the Defence Science and Technology Agency of Singapore.



from MIT News https://ift.tt/EtSLmG4

From bench to bedside, and beyond

In medical school, Matthew Dolan ’81 briefly considered specializing in orthopedic surgery because of the materials science nature of the work — but he soon realized that he didn’t have the innate skills required for that type of work.

“I’ll be honest with you — I can’t parallel park,” he jokes. “You can consider a lot of things, but if you find the things that you’re good at and that excite you, you can hopefully move forward with those.”

Dolan certainly has, tackling problems from bench to bedside and beyond. Both in the United States and abroad through the U.S. Air Force, Dolan has emerged as a leader in immunology and virology, and has served as director of the Defense Institute for Medical Operations. He’s worked on everything from foodborne illnesses and Ebola to biological weapons and Covid-19, and has even been a guest speaker on NPR’s “Science Friday.”

“This is fun and interesting, and I believe that, and I work hard to convey that — and it’s contagious,” he says. “You can affect people with that excitement.”

Pieces of the puzzle

Dolan fondly recalls his years at MIT, and is still in touch with many of the “brilliant” and “interesting” friends he made while in Cambridge.

He notes that the challenges that were the most rewarding in his career were also the ones that MIT had uniquely prepared him for. Dolan, a Course 7 major, naturally took many classes outside of biology as part of his undergraduate studies: organic chemistry was foundational for understanding toxicology while studying chemical weapons, while outbreaks of pathogens like Legionella, which causes pneumonia and can spread through water systems such as ice machines or air conditioners, are tackled at the interface between public health and ecology.

“I learned that learning can be a high-intensity experience,” Dolan recalls. “You can be aggressive in your learning; you can learn and excel in a wide variety of things and gather up all the knowledge and knowledgeable people to work together towards solutions.”

Dolan, for example, worked in the Amazon Basin in Peru on a public health crisis of a sharp rise in childhood mortality due to malaria. The cause was a few degrees removed from the immediate problem: human agriculture had affected the Amazon’s tributaries, leading to still and stagnant water where before there had been rushing streams and rivers. This change in the environment allowed a certain mosquito species of “avid human biters” to thrive. 

“It can be helpful and important for some people to have a really comprehensive and contextual view of scientific problems and biological problems,” he says. “It’s very rewarding to put the pieces in a puzzle like that together.”

Choosing to serve

Dolan says a key to finding meaning in his work, especially during difficult times, is a sentiment from Alsatian polymath and Nobel Peace Prize winner Albert Schweitzer: “The only ones among you who will be really happy are those who will have sought and found how to serve.”

One of Dolan’s early formative experiences was working in the heart of the HIV/AIDS epidemic, at a time when there was no effective treatment. No matter how hard he worked, the patients would still die.

“Failure is not an option — unless you have to fail. You can’t let the failures destroy you,” he says. “There are a lot of other battles out there, and it’s self-indulgent to ignore them and focus on your woe.”

Lasting impacts

Dolan couldn’t pick a favorite country, but notes that he’s always impressed seeing how people value the chance to excel with science and medicine when offered resources and respect. Ultimately, everyone he’s worked with, no matter their differences, was committed to solving problems and improving lives.

Dolan worked in Russia after the Berlin Wall fell, on HIV/AIDS in Moscow and tuberculosis in the Russian Far East. Although relations with Russia are currently tense, to say the least, Dolan remains optimistic for a brighter future.

“People that were staunch adversaries can go on to do well together,” he says. “Sometimes, peace leads to partnership. Remembering that it was once possible gives me great hope.”

Dolan understands that his most lasting impact has likely been teaching: Time marches on, and discoveries can be lost to history, but teaching and training people continues and propagates. In addition to guiding the next generation of health-care specialists, Dolan also developed programs in laboratory biosafety and biosecurity with the U.S. departments of State and Defense, and taught those programs around the world.

“Working in prevention gives you the chance to take care of process problems before they become people problems — patient care problems,” he says. “I have been so impressed with the courageous and giving people that have worked with me.” 



from MIT News https://ift.tt/DYu5Z2W

Wednesday, January 29, 2025

MIT spinout Gradiant reduces companies’ water use and waste by billions of gallons each day

When it comes to water use, most of us think of the water we drink. But industrial uses for things like manufacturing account for billions of gallons of water each day. For instance, making a single iPhone, by one estimate, requires more than 3,000 gallons.

Gradiant is working to reduce the world’s industrial water footprint. Founded by a team from MIT, Gradiant offers water recycling, treatment, and purification solutions to some of the largest companies on Earth, including Coca-Cola, Tesla, and the Taiwan Semiconductor Manufacturing Company. By serving as an end-to-end water company, Gradiant says it helps companies reuse 2 billion gallons of water each day and saves another 2 billion gallons of fresh water from being withdrawn.

The company’s mission is to preserve water for generations to come in the face of rising global demand.

“We work on both ends of the water spectrum,” Gradiant co-founder and CEO Anurag Bajpayee SM ’08, PhD ’12 says. “We work with ultracontaminated water, and we can also provide ultrapure water for use in areas like chip fabrication. Our specialty is in the extreme water challenges that can’t be solved with traditional technologies.”

For each customer, Gradiant builds tailored water treatment solutions that combine chemical treatments with membrane filtration and biological process technologies, leveraging a portfolio of patents to drastically cut water usage and waste.

“Before Gradiant, 40 million liters of water would be used in the chip-making process. It would all be contaminated and treated, and maybe 30 percent would be reused,” explains Gradiant co-founder and COO Prakash Govindan PhD ’12. “We have the technology to recycle, in some cases, 99 percent of the water. Now, instead of consuming 40 million liters, chipmakers only need to consume 400,000 liters, which is a huge shift in the water footprint of that industry. And this is not just with semiconductors. We’ve done this in food and beverage, we’ve done this in renewable energy, we’ve done this in pharmaceutical drug production, and several other areas.”

Learning the value of water

Govindan grew up in a part of India that experienced a years-long drought beginning when he was 10. Without tap water, one of Govindan’s chores was to haul water up the stairs of his apartment complex each time a truck delivered it.

“However much water my brother and I could carry was how much we had for the week,” Govindan recalls. “I learned the value of water the hard way.”

Govindan attended the Indian Institute of Technology as an undergraduate, and when he came to MIT for his PhD, he sought out the groups working on water challenges. He began working on a water treatment method called carrier gas extraction for his PhD under Gradiant co-founder and MIT Professor John Lienhard.

Bajpayee also worked on water treatment methods at MIT, and after brief stints as postdocs at MIT, he and Govindan licensed their work and founded Gradiant.

Carrier gas extraction became Gradiant’s first proprietary technology when the company launched in 2013. The founders began by treating wastewater created by oil and gas wells, signing a Texas company as their first partner. But Gradiant gradually expanded to solving water challenges in power generation, mining, textiles, and refineries. Then the founders noticed opportunities in industries like electronics, semiconductors, food and beverage, and pharmaceuticals. Today, oil and gas wastewater treatment makes up a small percentage of Gradiant’s work.

As the company expanded, it added technologies to its portfolio, patenting new water treatment methods around reverse osmosis, selective contaminant extraction, and free radical oxidation. Gradiant has also created a digital system that uses AI to measure, predict, and control water treatment facilities.

“The advantage Gradiant has over every other water company is that R&D is in our DNA,” Govindan says, noting Gradiant has a world-class research lab at its headquarters in Boston. “At MIT, we learned how to do cutting-edge technology development, and we never let go of that.”

The founders compare their suite of technologies to LEGO bricks they can mix and match depending on a customer’s water needs. Gradiant has built more than 2,500 of these end-to-end systems for customers around the world.

“Our customers aren’t water companies; they are industrial clients like semiconductor manufacturers, drug companies, and food and beverage companies,” Bajpayee says. “They aren’t about to start operating a water treatment plant. They look at us as their water partner who can take care of the whole water problem.”

Continuing innovation

The founders say Gradiant has been roughly doubling its revenue each year over the last five years, and it’s continuing to add technologies to its platform. For instance, Gradiant recently developed a critical minerals recovery solution to extract materials like lithium and nickel from customers’ wastewater, which could expand access to critical materials essential to the production of batteries and other products.

“If we can extract lithium from brine water in an environmentally and economically feasible way, the U.S. can meet all of its lithium needs from within the U.S.,” Bajpayee says. “What’s preventing large-scale extraction of lithium from brine is technology, and we believe what we have now deployed will open the floodgates for direct lithium extraction and completely revolutionize the industry.”

The company has also validated a method for eliminating PFAS — so-called toxic “forever chemicals” — in a pilot project with a leading U.S. semiconductor manufacturer. In the near future, it hopes to bring that solution to municipal water treatment plants to protect cities.

At the heart of Gradiant’s innovation is the founders’ belief that industrial activity doesn’t have to deplete one of the world’s most vital resources.

“Ever since the industrial revolution, we’ve been taking from nature,” Bajpayee says. “By treating and recycling water, by reducing water consumption and making industry highly water efficient, we have this unique opportunity to turn the clock back and give nature water back. If that’s your driver, you can’t choose not to innovate.”



from MIT News https://ift.tt/aA0RzxV

MIT students' works redefine human-AI collaboration

Imagine a boombox that tracks your every move and suggests music to match your personal dance style. That’s the idea behind “Be the Beat,” one of several projects from MIT course 4.043/4.044 (Interaction Intelligence), taught by Marcelo Coelho in the Department of Architecture, that were presented at the 38th annual NeurIPS (Neural Information Processing Systems) conference in December 2024. With over 16,000 attendees converging in Vancouver, NeurIPS is a competitive and prestigious conference dedicated to research and science in the field of artificial intelligence and machine learning, and a premier venue for showcasing cutting-edge developments.

The course investigates the emerging field of large language objects, and how artificial intelligence can be extended into the physical world. While “Be the Beat” transforms the creative possibilities of dance, other student submissions span disciplines such as music, storytelling, critical thinking, and memory, creating generative experiences and new forms of human-computer interaction. Taken together, these projects illustrate a broader vision for artificial intelligence: one that goes beyond automation to catalyze creativity, reshape education, and reimagine social interactions.

Be the Beat 

“Be the Beat,” by Ethan Chang, an MIT mechanical engineering and design student, and Zhixing Chen, an MIT mechanical engineering and music student, is an AI-powered boombox that suggests music from a dancer's movement. Dance has traditionally been guided by music throughout history and across cultures, yet the concept of dancing to create music is rarely explored.

“Be the Beat” creates a space for human-AI collaboration on freestyle dance, empowering dancers to rethink the traditional dynamic between dance and music. It uses PoseNet to describe movements for a large language model, enabling it to analyze dance style and query APIs to find music with similar style, energy, and tempo. Dancers interacting with the boombox reported having more control over artistic expression and described the boombox as a novel approach to discovering dance genres and choreographing creatively.
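The pipeline, as described, turns pose estimates into text an LLM can reason about before querying music APIs. Below is a minimal, entirely hypothetical sketch of that first step; PoseNet output is simplified to bare 2D keypoints, and the "description" is a crude energy heuristic, not the project's actual logic:

```python
import math

def describe_movement(keypoint_frames):
    """Turn a sequence of pose keypoints (as a pose estimator like PoseNet
    might produce) into a plain-text description an LLM could consume."""
    # Average per-frame displacement of the first keypoint as an energy proxy.
    speeds = [
        math.dist(a[0], b[0])
        for a, b in zip(keypoint_frames, keypoint_frames[1:])
    ]
    energy = sum(speeds) / len(speeds)
    tempo = "fast, high-energy" if energy > 0.5 else "slow, fluid"
    return f"The dancer's movement is {tempo} (avg displacement {energy:.2f})."

# Four toy frames, each holding one (x, y) keypoint.
frames = [
    [(0.0, 0.0)], [(0.2, 0.1)], [(0.9, 0.5)], [(1.4, 1.1)],
]
prompt = describe_movement(frames)
print(prompt)  # would feed into an LLM prompt asking for matching genres/tempo
```

A real system would use all keypoints, tempo estimation over time, and a genuine music-search API; the sketch only shows the pose-to-text hand-off.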

A Mystery for You

“A Mystery for You,” by Mrinalini Singha SM ’24, a recent graduate in the Art, Culture, and Technology program, and Haoheng Tang, a recent graduate of the Harvard University Graduate School of Design, is an educational game designed to cultivate critical thinking and fact-checking skills in young learners. The game leverages a large language model (LLM) and a tangible interface to create an immersive investigative experience. Players act as citizen fact-checkers, responding to AI-generated “news alerts” printed by the game interface. By inserting cartridge combinations to prompt follow-up “news updates,” they navigate ambiguous scenarios, analyze evidence, and weigh conflicting information to make informed decisions.

This human-computer interaction experience challenges our news-consumption habits by eliminating touchscreen interfaces, replacing perpetual scrolling and skim-reading with a haptically rich analog device. By combining the affordances of slow media with new generative media, the game promotes thoughtful, embodied interactions while equipping players to better understand and challenge today’s polarized media landscape, where misinformation and manipulative narratives thrive.

Memorscope

“Memorscope,” by MIT Media Lab research collaborator Keunwook Kim, is a device that creates collective memories by merging the deeply human experience of face-to-face interaction with advanced AI technologies. Inspired by how we use microscopes and telescopes to examine and uncover hidden and invisible details, Memorscope allows two users to “look into” each other’s faces, using this intimate interaction as a gateway to the creation and exploration of their shared memories.

The device leverages AI models such as OpenAI and Midjourney, introducing different aesthetic and emotional interpretations, which results in a dynamic and collective memory space. This space transcends the limitations of traditional shared albums, offering a fluid, interactive environment where memories are not just static snapshots but living, evolving narratives, shaped by the ongoing relationship between users.

Narratron

“Narratron,” by Harvard Graduate School of Design students Xiying (Aria) Bao and Yubo Zhao, is an interactive projector that co-creates and co-performs children's stories through shadow puppetry using large language models. Users press the shutter to “capture” the protagonists they want in the story, supplying hand shadows (such as animal shapes) as the main characters. The system then develops the story plot as new shadow characters are introduced. The story appears through a projector as a backdrop for shadow puppetry while being narrated through a speaker as users turn a crank to “play” it in real time. By combining visual, auditory, and bodily interactions in one system, the project aims to spark creativity in shadow-play storytelling and enable multimodal human-AI collaboration.

Perfect Syntax

“Perfect Syntax,” by Karyn Nakamura ’24, is a video art piece examining the syntactic logic behind motion and video. Using AI to manipulate video fragments, the project explores how the fluidity of motion and time can be simulated and reconstructed by machines. Drawing inspiration from both philosophical inquiry and artistic practice, Nakamura's work interrogates the relationship between perception, technology, and the movement that shapes our experience of the world. By reimagining video through computational processes, Nakamura investigates the complexities of how machines understand and represent the passage of time and motion.



from MIT News https://ift.tt/6Li8f2C

Smart carbon dioxide removal yields economic and environmental benefits

Last year the Earth exceeded 1.5 degrees Celsius of warming above preindustrial times, a threshold beyond which wildfires, droughts, floods, and other climate impacts are expected to escalate in frequency, intensity, and lethality. To cap global warming at 1.5 C and avert that scenario, the nearly 200 signatory nations of the Paris Agreement on climate change will need to not only dramatically lower their greenhouse gas emissions, but also take measures to remove carbon dioxide (CO2) from the atmosphere and durably store it at or below the Earth’s surface.

Past analyses of the climate mitigation potential, costs, benefits, and drawbacks of different carbon dioxide removal (CDR) options have focused primarily on three strategies: bioenergy with carbon capture and storage (BECCS), in which CO2-absorbing plant matter is converted into fuels or directly burned to generate energy, with some of the plant’s carbon content captured and then stored safely and permanently; afforestation/reforestation, in which CO2-absorbing trees are planted in large numbers; and direct air carbon capture and storage (DACCS), a technology that captures and separates CO2 directly from ambient air, and injects it into geological reservoirs or incorporates it into durable products. 

To provide a more comprehensive and actionable analysis of CDR, a new study by researchers at the MIT Center for Sustainability Science and Strategy (CS3) first expands the option set to include biochar (charcoal produced from plant matter and stored in soil) and enhanced weathering (EW) (spreading finely ground rock particles on land to accelerate storage of CO2 in soil and water). The study then evaluates portfolios of all five options — in isolation and in combination — to assess their capability to meet the 1.5 C goal, and their potential impacts on land, energy, and policy costs.

The study appears in the journal Environmental Research Letters. Aided by their global multi-region, multi-sector Economic Projection and Policy Analysis (EPPA) model, the MIT CS3 researchers produce three key findings.

First, the most cost-effective, low-impact strategy that policymakers can take to achieve global net-zero emissions — an essential step in meeting the 1.5 C goal — is to diversify their CDR portfolio, rather than rely on any single option. This approach minimizes overall cropland and energy consumption, and negative impacts such as increased food insecurity and decreased energy supplies.

Diversifying across multiple CDR options achieves the highest CDR deployment, around 31.5 gigatons of CO2 per year by 2100, and is also the most cost-effective net-zero strategy. The study identifies BECCS and biochar as the most cost-competitive options for removing CO2 from the atmosphere, followed by EW; DACCS is uncompetitive due to its high capital and energy requirements. While posing logistical and other challenges, biochar and EW have the potential to improve soil quality and productivity across 45 percent of all croplands by 2100.

“Diversifying CDR portfolios is the most cost-effective net-zero strategy because it avoids relying on a single CDR option, thereby reducing and redistributing negative impacts on agriculture, forestry, and other land uses, as well as on the energy sector,” says Solene Chiquier, lead author of the study who was a CS3 postdoc during its preparation.

The second finding: There is no single optimal CDR portfolio that will work well at both the global and national levels. The ideal CDR portfolio for a particular region will depend on local technological, economic, and geophysical conditions. For example, afforestation and reforestation would be of great benefit in places such as Brazil, the rest of Latin America, and Africa, not only by sequestering carbon in more acreage of protected forest but also by helping to preserve planetary well-being and human health.

“In designing a sustainable, cost-effective CDR portfolio, it is important to account for regional availability of agricultural, energy, and carbon-storage resources,” says Sergey Paltsev, CS3 deputy director, MIT Energy Initiative senior research scientist, and supervising co-author of the study. “Our study highlights the need for enhancing knowledge about local conditions that favor some CDR options over others.”

Finally, the MIT CS3 researchers show that delaying large-scale deployment of CDR portfolios could be very costly, leading to considerably higher carbon prices across the globe — a development sure to deter the climate mitigation efforts needed to achieve the 1.5 C goal. They recommend near-term implementation of policy and financial incentives to help fast-track those efforts.



from MIT News https://ift.tt/cUVnmtB

Tuesday, January 28, 2025

New training approach could help AI agents perform better in uncertain conditions

A home robot trained to perform household tasks in a factory may fail to effectively scrub the sink or take out the trash when deployed in a user’s kitchen, since this new environment differs from its training space.

To avoid this, engineers often try to match the simulated training environment as closely as possible with the real world where the agent will be deployed.

However, researchers from MIT and elsewhere have now found that, despite this conventional wisdom, sometimes training in a completely different environment yields a better-performing artificial intelligence agent.

Their results indicate that, in some situations, training a simulated AI agent in a world with less uncertainty, or “noise,” can enable it to perform better than a competing AI agent trained in the same noisy world used to test both agents.

The researchers call this unexpected phenomenon the indoor training effect.

“If we learn to play tennis in an indoor environment where there is no noise, we might be able to more easily master different shots. Then, if we move to a noisier environment, like a windy tennis court, we could have a higher probability of playing tennis well than if we started learning in the windy environment,” explains Serena Bono, a research assistant in the MIT Media Lab and lead author of a paper on the indoor training effect.

The researchers studied this phenomenon by training AI agents to play Atari games, which they modified by adding some unpredictability. They were surprised to find that the indoor training effect consistently occurred across Atari games and game variations.

They hope these results fuel additional research toward developing better training methods for AI agents.

“This is an entirely new axis to think about. Rather than trying to match the training and testing environments, we may be able to construct simulated environments where an AI agent learns even better,” adds co-author Spandan Madan, a graduate student at Harvard University.

Bono and Madan are joined on the paper by Ishaan Grover, an MIT graduate student; Mao Yasueda, a graduate student at Yale University; Cynthia Breazeal, professor of media arts and sciences and leader of the Personal Robotics Group in the MIT Media Lab; Hanspeter Pfister, the An Wang Professor of Computer Science at Harvard; and Gabriel Kreiman, a professor at Harvard Medical School. The research will be presented at the Association for the Advancement of Artificial Intelligence Conference.

Training troubles

The researchers set out to explore why reinforcement learning agents tend to have such dismal performance when tested on environments that differ from their training space.

Reinforcement learning is a trial-and-error method in which the agent explores a training space and learns to take actions that maximize its reward.

The team developed a technique to explicitly add a certain amount of noise to one element of the reinforcement learning problem called the transition function. The transition function defines the probability an agent will move from one state to another, based on the action it chooses.

If the agent is playing Pac-Man, a transition function might define the probability that ghosts on the game board will move up, down, left, or right. In standard reinforcement learning, the AI would be trained and tested using the same transition function.
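The article doesn't spell out the paper's exact noise model, but one common way to perturb a transition function is to blend each move distribution with a uniform distribution. A minimal sketch, in which the function names and the `chase` distribution are illustrative rather than taken from the study:

```python
import random

def noisy_transition(base_probs, noise=0.2):
    """Blend a move distribution with uniform noise.

    base_probs: dict mapping moves ('up', 'down', 'left', 'right')
                to probabilities that sum to 1.
    noise: fraction of probability mass spread uniformly over all moves;
           noise=0 reproduces the original transition function,
           noise=1 makes every move equally likely.
    """
    n = len(base_probs)
    return {move: (1 - noise) * p + noise / n for move, p in base_probs.items()}

def sample_move(probs, rng=random):
    """Draw one move from the (possibly noisy) distribution."""
    moves, weights = zip(*probs.items())
    return rng.choices(moves, weights=weights, k=1)[0]

# A ghost that mostly chases upward becomes less predictable as noise grows.
chase = {"up": 0.7, "down": 0.1, "left": 0.1, "right": 0.1}
print(noisy_transition(chase, noise=0.2))  # up: 0.61, each other move: 0.13
```

In these terms, the conventional setup trains and tests with the same `noise` value, while the indoor training effect corresponds to training with `noise=0` and testing with `noise>0`.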

Using this conventional approach, the researchers added noise to the transition function and, as expected, found that it hurt the agent’s Pac-Man performance.

But when the researchers trained the agent with a noise-free Pac-Man game, then tested it in an environment where they injected noise into the transition function, it performed better than an agent trained on the noisy game.

“The rule of thumb is that you should try to capture the deployment condition’s transition function as well as you can during training to get the most bang for your buck. We really tested this insight to death because we couldn’t believe it ourselves,” Madan says.

Injecting varying amounts of noise into the transition function let the researchers test many environments, but it didn’t create realistic games. The more noise they injected into Pac-Man, the more likely the ghosts were to randomly teleport to different squares.

To see if the indoor training effect occurred in normal Pac-Man games, they adjusted underlying probabilities so ghosts moved normally but were more likely to move up and down, rather than left and right. AI agents trained in noise-free environments still performed better in these realistic games.

“It was not only due to the way we added noise to create ad hoc environments. This seems to be a property of the reinforcement learning problem. And that was even more surprising to see,” Bono says.

Exploration explanations

When the researchers dug deeper in search of an explanation, they saw some correlations in how the AI agents explore the training space.

When both AI agents explore mostly the same areas, the agent trained in the non-noisy environment performs better, perhaps because it is easier for the agent to learn the rules of the game without the interference of noise.

If their exploration patterns are different, then the agent trained in the noisy environment tends to perform better. This might occur because the agent needs to understand patterns it can’t learn in the noise-free environment.

“If I only learn to play tennis with my forehand in the non-noisy environment, but then in the noisy one I have to also play with my backhand, I won’t play as well in the non-noisy environment,” Bono explains.

In the future, the researchers hope to explore how the indoor training effect might occur in more complex reinforcement learning environments, or with other techniques like computer vision and natural language processing. They also want to build training environments designed to leverage the indoor training effect, which could help AI agents perform better in uncertain environments.



from MIT News https://ift.tt/iWURODz

Monday, January 27, 2025

MIT Climate and Energy Ventures class spins out entrepreneurs — and successful companies

In 2014, a team of MIT students in course 15.366 (Climate and Energy Ventures) developed a plan to commercialize MIT research on how to move information between chips with light instead of electricity, reducing energy usage.

After completing the class, which challenges students to identify early customers and pitch their business plan to investors, the team went on to win both grand prizes at the MIT Clean Energy Prize. Today the company, Ayar Labs, has raised a total of $370 million from a group including chip leaders AMD, Intel, and NVIDIA, to scale the manufacturing of its optical chip interconnects.

Ayar Labs is one of many companies whose roots can be traced back to 15.366. In fact, more than 150 companies have been founded by alumni of the class since its founding in 2007.

In the class, student teams select a technology or idea and determine the best path for its commercialization. The semester-long project, which is accompanied by lectures and mentoring, equips students with real-world experience in launching a business.

“The goal is to educate entrepreneurs on how to start companies in the climate and energy space,” says Senior Lecturer Tod Hynes, who co-founded the course and has been teaching since 2008. “We do that through hands-on experience. We require students to engage with customers, talk to potential suppliers, partners, investors, and to practice their pitches to learn from that feedback.”

The class attracts hundreds of student applications each year. As one of the catalysts for MIT spinoffs, it is also one reason a 2015 report found that MIT alumni-founded companies had generated roughly $1.9 trillion in annual revenues. If MIT were a country, that figure would make it the 10th largest economy in the world, according to the report.

“’Mens et manus’ (‘mind and hand’) is MIT's motto, and the hands-on experience we try to provide in this class is hard to beat,” Hynes says. “When you actually go through the process of commercialization in the real world, you learn more and you’re in a better spot. That experiential learning approach really aligns with MIT’s approach.”

Simulating a startup

The course was started by Bill Aulet, a professor of the practice at the MIT Sloan School of Management and the managing director of the Martin Trust Center for MIT Entrepreneurship. After serving as an advisor the first year and helping Aulet launch the class, Hynes began teaching the class with Aulet in the fall of 2008. The pair also launched the Climate and Energy Prize around the same time, which continues today and recently received over 150 applications from teams from around the world.

A core feature of the class is connecting students in different academic fields. Each year, organizers aim to enroll students with backgrounds in science, engineering, business, and policy.

“The class is meant to be accessible to anybody at MIT,” Hynes says, noting the course has also since opened to students from Harvard University. “We’re trying to pull across disciplines.”

The class quickly grew in popularity around campus. Over the last few years, the course has had about 150 students apply for 50 spots.

“I mentioned Climate and Energy Ventures in my application to MIT,” says Chris Johnson, a second-year graduate student in the Leaders for Global Operations (LGO) Program. “Coming into MIT, I was very interested in sustainability, and energy in particular, and also in startups. I had heard great things about the class, and I waited until my last semester to apply.”

The course’s organizers select mostly graduate students, whom they prefer to be in the final year of their program so they can more easily continue working on the venture after the class is finished.

“Whether or not students stick with the project from the class, it’s a great experience that will serve them in their careers,” says Jennifer Turliuk, the practice leader for climate and energy artificial intelligence at the Martin Trust Center for MIT Entrepreneurship, who helped teach the class this fall.

Hynes describes the course as a venture-building simulation. Before it begins, organizers select up to 30 technologies and ideas that are in the right stage for commercialization. Students can also come into the class with ideas or technologies they want to work on.

After a few weeks of introductions and lectures, students form into multidisciplinary teams of about five and begin going through each of the 24 steps of building a startup described in Aulet’s book “Disciplined Entrepreneurship,” which includes things like engaging with potential early customers, quantifying a value proposition, and establishing a business model. Everything builds toward a one-hour final presentation that’s designed to simulate a pitch to investors or government officials.

“It’s a lot of work, and because it’s a team-based project, your grade is highly dependent on your team,” Hynes says. “You also get graded by your team; that’s about 10 percent of your grade. We try to encourage people to be proactive and supportive teammates.”

Students say the process is fast-paced but rewarding.

“It’s definitely demanding,” says Sofie Netteberg, a graduate student who is also in the LGO program at MIT. “Depending on where you’re at with your technology, you can be moving very quickly. That’s the stage that I was in, which I found really engaging. We basically just had a lab technology, and it was like, ‘What do we do next?’ You also get a ton of support from the professors.”

From the classroom to the world

This fall’s final presentations took place at the headquarters of the MIT-affiliated venture firm The Engine in front of an audience of professors, investors, members of foundations supporting entrepreneurship, and more.

“We got to hear feedback from people who would be the real next step for the technology if the startup gets up and running,” said Johnson, whose team was commercializing a method for storing energy in concrete. “That was really valuable. We know that these are not only people we might see in the next month or the next funding rounds, but they’re also exactly the type of people that are going to give us the questions we should be thinking about. It was clarifying.”

Throughout the semester, students treated the project like a real venture they’d be working on well beyond the length of the class.

“No one’s really thinking about this class for the grade; it’s about the learning,” says Netteberg, whose team was encouraged to keep working on their electrolyzer technology designed to more efficiently produce green hydrogen. “We’re not stressed about getting an A. If we want to keep working on this, we want real feedback: What do you think we did well? What do we need to keep working on?”

Hynes says several investors expressed interest in supporting the businesses coming out of the class. Moving forward, he hopes students embrace the test-bed environment his team has created for them and try bold new things.

“People have been very pragmatic over the years, which is good, but also potentially limiting,” Hynes says. “This is also an opportunity to do something that’s a little further out there — something that has really big potential impact if it comes together. This is the time where students get to experiment, so why not try something big?”



from MIT News https://ift.tt/ZBGYqnu

Expanding robot perception

Robots have come a long way since the Roomba. Today, drones are starting to deliver door to door, self-driving cars are navigating some roads, robo-dogs are aiding first responders, and still more bots are doing backflips and helping out on the factory floor. Still, Luca Carlone thinks the best is yet to come.

Carlone, who recently received tenure as an associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), directs the SPARK Lab, where he and his students are bridging a key gap between humans and robots: perception. The group does theoretical and experimental research, all toward expanding a robot’s awareness of its environment in ways that approach human perception. And perception, as Carlone often says, is more than detection.

While robots have grown by leaps and bounds in terms of their ability to detect and identify objects in their surroundings, they still have a lot to learn when it comes to making higher-level sense of their environment. As humans, we perceive objects with an intuitive sense not just of their shapes and labels but also of their physics — how they might be manipulated and moved — and how they relate to each other, their larger environment, and ourselves.

That kind of human-level perception is what Carlone and his group are hoping to impart to robots, in ways that enable them to safely and seamlessly interact with people in their homes, workplaces, and other unstructured environments.

Since joining the MIT faculty in 2017, Carlone has led his team in developing and applying perception and scene-understanding algorithms for various applications, including autonomous underground search-and-rescue vehicles, drones that can pick up and manipulate objects on the fly, and self-driving cars. They might also be useful for domestic robots that follow natural language commands and potentially even anticipate humans’ needs based on higher-level contextual clues.

“Perception is a big bottleneck toward getting robots to help us in the real world,” Carlone says. “If we can add elements of cognition and reasoning to robot perception, I believe they can do a lot of good.”

Expanding horizons

Carlone was born and raised near Salerno, Italy, close to the scenic Amalfi coast, where he was the youngest of three boys. His mother is a retired elementary school teacher who taught math, and his father is a retired history professor and publisher, who has always taken an analytical approach to his historical research. The brothers may have unconsciously adopted their parents’ mindsets, as all three went on to be engineers — the older two pursued electronics and mechanical engineering, while Carlone landed on robotics, or mechatronics, as it was known at the time.

He didn’t come around to the field, however, until late in his undergraduate studies. Carlone attended the Polytechnic University of Turin, where he focused initially on theoretical work, specifically on control theory — a field that applies mathematics to develop algorithms that automatically control the behavior of physical systems, such as power grids, planes, cars, and robots. Then, in his senior year, Carlone signed up for a course on robotics that explored advances in manipulation and how robots can be programmed to move and function.

“It was love at first sight. Using algorithms and math to develop the brain of a robot and make it move and interact with the environment is one of the most fulfilling experiences,” Carlone says. “I immediately decided this is what I want to do in life.”

He went on to a dual-degree program at the Polytechnic University of Turin and the Polytechnic University of Milan, where he received master’s degrees in mechatronics and automation engineering, respectively. As part of this program, called the Alta Scuola Politecnica, Carlone also took courses in management, in which he and students from various academic backgrounds had to team up to conceptualize, build, and draw up a marketing pitch for a new product design. Carlone’s team developed a touch-free table lamp designed to follow a user’s hand-driven commands. The project pushed him to think about engineering from different perspectives.

“It was like having to speak different languages,” he says. “It was an early exposure to the need to look beyond the engineering bubble and think about how to create technical work that can impact the real world.”

The next generation

Carlone stayed in Turin to complete his PhD in mechatronics. During that time, he was given freedom to choose a thesis topic, which he went about, as he recalls, “a bit naively.”

“I was exploring a topic that the community considered to be well-understood, and for which many researchers believed there was nothing more to say,” Carlone says. “I underestimated how established the topic was, and thought I could still contribute something new to it, and I was lucky enough to just do that.”

The topic in question was “simultaneous localization and mapping,” or SLAM — the problem of generating and updating a map of a robot’s environment while simultaneously keeping track of where the robot is within that environment. Carlone came up with a way to reframe the problem, such that algorithms could generate more precise maps without having to start with an initial guess, as most SLAM methods did at the time. His work helped to crack open a field where most roboticists thought one could not do better than the existing algorithms.
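The flavor of SLAM can be shown with a toy one-dimensional version: a robot takes odometry steps and ranges a single landmark, and a least-squares solve recovers the map (the landmark position) and the trajectory jointly. Everything here, the poses, the landmark, and the measurement values, is an invented illustration, not drawn from Carlone's work:

```python
import numpy as np

# Toy 1D SLAM: a robot starts at x0 = 0 (held fixed), moves twice, and at
# each pose measures its distance to one landmark l. Unknowns: x1, x2, l.
odometry = [1.0, 1.0]      # u_i ~ x_{i+1} - x_i
ranges = [5.0, 4.0, 3.0]   # z_i ~ l - x_i, one per pose

# Stack every measurement as a linear equation A @ [x1, x2, l] = b.
A = np.array([
    [ 1.0,  0.0, 0.0],   # x1 - x0 = u0   (x0 = 0)
    [-1.0,  1.0, 0.0],   # x2 - x1 = u1
    [ 0.0,  0.0, 1.0],   # l  - x0 = z0
    [-1.0,  0.0, 1.0],   # l  - x1 = z1
    [ 0.0, -1.0, 1.0],   # l  - x2 = z2
])
b = np.array(odometry + ranges)

# Solving for map and trajectory together is what puts the
# "simultaneous" in SLAM.
x1, x2, l = np.linalg.lstsq(A, b, rcond=None)[0]
print(f"poses: 0.0, {x1:.1f}, {x2:.1f}  landmark: {l:.1f}")
# -> poses: 0.0, 1.0, 2.0  landmark: 5.0
```

Real systems solve the same joint-estimation problem nonlinearly, over thousands of poses and landmarks in 2D or 3D, typically with factor-graph optimizers.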

“SLAM is about figuring out the geometry of things and how a robot moves among those things,” Carlone says. “Now I’m part of a community asking, what is the next generation of SLAM?”

In search of an answer, he accepted a postdoc position at Georgia Tech, where he dove into coding and computer vision — an interest that, in retrospect, may have been inspired by a brush with blindness: As he was finishing up his PhD in Italy, he suffered a medical complication that severely affected his vision.

“For one year, I could have easily lost an eye,” Carlone says. “That was something that got me thinking about the importance of vision, and artificial vision.”

He was able to receive good medical care, and the condition resolved entirely, such that he could continue his work. At Georgia Tech, his advisor, Frank Dellaert, showed him ways to code in computer vision and formulate elegant mathematical representations of complex, three-dimensional problems. His advisor was also one of the first to develop an open-source SLAM library, called GTSAM, which Carlone quickly recognized to be an invaluable resource. More broadly, he saw that making software available to all unlocked a huge potential for progress in robotics as a whole.

“Historically, progress in SLAM has been very slow, because people kept their codes proprietary, and each group had to essentially start from scratch,” Carlone says. “Then open-source pipelines started popping up, and that was a game changer, which has largely driven the progress we have seen over the last 10 years.”

Spatial AI

Following Georgia Tech, Carlone came to MIT in 2015 as a postdoc in the Laboratory for Information and Decision Systems (LIDS). During that time, he collaborated with Sertac Karaman, professor of aeronautics and astronautics, in developing software to help palm-sized drones navigate their surroundings using very little on-board power. A year later, he was promoted to research scientist, and then in 2017, Carlone accepted a faculty position in AeroAstro.

“One thing I fell in love with at MIT was that all decisions are driven by questions like: What are our values? What is our mission? It’s never about low-level gains. The motivation is really about how to improve society,” Carlone says. “As a mindset, that has been very refreshing.”

Today, Carlone’s group is developing ways to represent a robot’s surroundings, beyond characterizing their geometric shape and semantics. He is utilizing deep learning and large language models to develop algorithms that enable robots to perceive their environment through a higher-level lens, so to speak. Over the last six years, his lab has released more than 60 open-source repositories, which are used by thousands of researchers and practitioners worldwide. The bulk of his work fits into a larger, emerging field known as “spatial AI.”

“Spatial AI is like SLAM on steroids,” Carlone says. “In a nutshell, it has to do with enabling robots to think and understand the world as humans do, in ways that can be useful.”

It’s a huge undertaking that could have wide-ranging impacts, in terms of enabling more intuitive, interactive robots to help out at home, in the workplace, on the roads, and in remote and potentially dangerous areas. Carlone says there will be plenty of work ahead, in order to come close to how humans perceive the world.

“I have 2-year-old twin daughters, and I see them manipulating objects, carrying 10 different toys at a time, navigating across cluttered rooms with ease, and quickly adapting to new environments. Robot perception cannot yet match what a toddler can do,” Carlone says. “But we have new tools in the arsenal. And the future is bright.”



from MIT News https://ift.tt/7dSe90v

Professor Emeritus Gerald Schneider, discoverer of the “two visual systems,” dies at 84

Gerald E. Schneider, a professor emeritus of psychology and member of the MIT community for over 60 years, passed away on Dec. 11, 2024. He was 84.

Schneider was an authority on the relationships between brain structure and behavior, concentrating on neuronal development, regeneration or altered growth after brain injury, and the behavioral consequences of altered connections in the brain.

Using the Syrian golden hamster as his test subject of choice, Schneider made numerous contributions to the advancement of neuroscience. He laid out the concept of two visual systems — one for locating objects and one for the identification of objects — in a 1969 issue of Science, a milestone in the study of brain-behavior relationships. In 1973, he described a “pruning effect” in the optic tract axons of adult hamsters who had brain lesions early in life. In 2006, his lab reported a new nanobiomedical technology for tissue repair and restoration in Biological Sciences. The paper showed how a designed self-assembling peptide nanofiber scaffold could create a permissive environment for axons not only to regenerate through the site of an acute injury in the optic tract of hamsters, but also to knit the brain tissue together.

His work shaped the research and thinking of numerous colleagues and trainees. Mriganka Sur, the Newton Professor of Neuroscience and former Department of Brain and Cognitive Sciences (BCS) department head, recalls how Schneider’s paper, “Is it really better to have your brain lesion early? A revision of the ‘Kennard Principle,’” published in 1979 in the journal Neuropsychologia, influenced his work on rewiring retinal projections to the auditory thalamus, which was used to derive principles of functional plasticity in the cortex.

“Jerry was an extremely innovative thinker. His hypothesis of two visual systems — for detailed spatial processing and for movement processing — based on his analysis of visual pathways in hamsters presaged and inspired later work on form and motion pathways in the primate brain,” Sur says. “His description of conservation of axonal arbor during development laid the foundation for later ideas about homeostatic mechanisms that co-regulate neuronal plasticity.”

Institute Professor Ann Graybiel was a colleague of Schneider’s for over five decades. She recalls early in her career being asked by Schneider to help make a map of the superior colliculus.

“I took it as an honor to be asked, and I worked very hard on this, with great excitement. It was my first such mapping, to be followed by much more in the future,” Graybiel recalls. “Jerry was fascinated by animal behavior, and from early on he made many discoveries using hamsters as his main animals of choice. He found that they could play. He found that they could operate in ways that seemed very sophisticated. And, yes, he mapped out pathways in their brains.”

Schneider was raised in Wheaton, Illinois, and graduated from Wheaton College in 1962 with a degree in physics. He was recruited to MIT by Hans-Lukas Teuber, one of the founders of the Department of Psychology, which eventually became the Department of Brain and Cognitive Sciences. Walle Nauta, another founder of the department, taught Schneider neuroanatomy. The pair were deeply influential in shaping his interests in neuroscience and his research.

“He admired them both very much and was very attached to them,” his daughter, Nimisha Schneider, says. “He was an interdisciplinary scholar and he liked that aspect of neuroscience, and he was fascinated by the mysteries of the human brain.”

Shortly after completing his PhD in psychology in 1966, he was hired as an assistant professor in 1967. He was named an associate professor in 1970, received tenure in 1975, and was appointed a full professor in 1977.

After his retirement in 2017, Schneider remained involved with the Department of BCS. Professor Pawan Sinha brought Schneider to campus for what would be his last on-campus engagement, as part of the “SilverMinds Series,” an initiative in the Sinha Lab to engage with scientists now in their “silver years.”

Schneider’s research made an indelible impact on Sinha, beginning as a graduate student when he was inspired by Schneider’s work linking brain structure and function. His work on nerve regeneration, which merged fundamental science and real-world impact, served as a “North Star” that guided Sinha’s own work as he established his lab as a junior faculty member.

“Even through the sadness of his loss, I am grateful for the inspiring example he has left for us of a life that so seamlessly combined brilliance, kindness, modesty, and tenacity,” Sinha says. “He will be missed.”

Schneider’s life centered around his research and teaching, but he also had many other skills and hobbies. Early in his life, he enjoyed painting, and as he grew older he was drawn to poetry. He was also skilled in carpentry and making furniture. He built the original hamster cages for his lab himself, along with numerous pieces of home furniture and shelving. He enjoyed nature anywhere it could be found, from the bees in his backyard to hiking and visiting state and national parks.

He was a Type 1 diabetic, and at the time of his death, he was nearing the completion of a book on the effects of hypoglycemia on the brain, which his family hopes to have published in the future. He was also the author of “Brain Structure and Its Origins,” published in 2014 by MIT Press.

He is survived by his wife, Aiping; his children, Cybele, Aniket, and Nimisha; and step-daughter Anna. He was predeceased by a daughter, Brenna. He is also survived by eight grandchildren and 10 great-grandchildren. A memorial in his honor was held on Jan. 11 at Saint James Episcopal Church in Cambridge.



from MIT News https://ift.tt/0WExsO8

Friday, January 24, 2025

Is this the new playbook for curing rare childhood diseases?

“There is no treatment available for your son. We can’t do anything to help him.”

When Fernando Goldsztein MBA ’03 heard those words, something inside him snapped.

“I refused to accept what the doctors were saying. I transformed my fear into my greatest strength and started fighting.”

Goldsztein’s 12-year-old son Frederico was diagnosed with relapsing medulloblastoma, a life-threatening pediatric brain tumor. Goldsztein's life — and career plan — changed in an instant. He had to learn to become a different kind of leader altogether.

While Goldsztein never set out to become a founder, the MIT Sloan School of Management taught him the importance of networking, building friendships, and making career connections with peers and faculty from all walks of life. He began using those skills in a new way — boldly reaching out to the top medulloblastoma doctors and scientists at hospitals around the world to ask for help.

“I knew that I had to do something to save Frederico, but also the other estimated 15,000 children diagnosed with the disease around the world each year,” he says.

In 2021, Goldsztein launched The Medulloblastoma Initiative (MBI), a nonprofit organization dedicated to finding a cure using a remarkable new model for funding rare disease research.

In just 18 months, the organization — which is still in startup mode — has raised $11 million in private funding and brought together 14 of the world’s most prestigious labs and hospitals from across North America, Europe, and Brazil.

Two promising trials will launch in the coming months, and three additional trials are in the pipeline and currently awaiting U.S. Food and Drug Administration approval.

All of this in an industry that is notorious for bureaucratic red tape, and where the timeline from an initial lab discovery to a patient receiving a first treatment averages seven to 15 years.

While government research grants typically allocate just 4 cents on the dollar toward pediatric cancer research — pennies doled out across multiple labs pursuing uncoordinated efforts — MBI is laser-focused on pushing 100 percent of its funding toward a singular goal, without any overhead or administrative costs.

“There is no time to lose,” Goldsztein says. “We are making science move faster than it ever has before.”

The MBI blueprint for funding cures for rare diseases is replicable, and likely to disrupt the standard way health care research is funded and carried out by radically shortening the timeline.

From despair to strength

After his initial diagnosis at age 9, Frederico went through a nine-hour brain surgery and came to the United States to receive standard treatment. Goldsztein looked on helplessly as his son received radiation and then nine grueling rounds of chemotherapy.

Pioneered in the 1980s, this standard treatment protocol cures 70 percent of children. Still, it leaves most of them with lifelong side effects like cognitive problems, endocrine issues that stunt growth, and secondary tumors. Frederico was on the wrong side of that statistic. Just three years later, his tumor relapsed.

Goldsztein grimaces as he recalls the prognosis he and his wife heard from the doctors.

“It was unbelievable to me that there had been almost no discoveries in 40 years,” he says.

Ultimately, he found hope and partnership in Roger Packer, the director of the Brain Tumor Institute and the Gilbert Family Neurofibromatosis Institute of Children’s National Hospital. He is also the very doctor who created the standard treatment years before.

Packer explains that finding effective therapies for medulloblastoma was complex for 30 years because it is an umbrella term for 13 types of tumors. Frederico suffers from the most common one, Group 4.

Part of the reason the treatment has not changed is that, until recently, medicine has not advanced enough to detect differences between the different tumor types. Packer explains, “Now with molecular genetic testing and methylation, which is a way to essentially sort tumors, that has changed.”

The problem for Frederico was that very few researchers were working on Group 4, the sub-type of medulloblastoma that is the most common tumor, yet also the one that scientists know the least about.

Goldsztein challenged Packer: “If I can get you the funding, what can your lab do to advance medulloblastoma research quickly?”

An open-source consortium model

Packer advised that they work together to “try something different,” instead of just throwing money at research without any guideposts.

“We set up a consortium of leading institutions around the world doing medulloblastoma research, asked them to change their lab approach to focus on the Group 4 tumor, and assigned each lab a question to answer. We charged them with coming up with therapy — not in seven to 10 years, which is the normal transition from discovery to developing a drug and getting it to a patient, but within a two-year timeline,” he says.

Initially, seven labs signed on. Today, the Cure Group 4 Consortium is made up of 14 partners and reads like a who’s who of medulloblastoma heavy hitters: Children’s National Hospital, SickKids, Hopp Children’s Cancer Center, and Texas Children’s Hospital.

Labs can only join the consortium if they agree to follow some unusual rules. As Goldsztein explains, “To be accepted into this group and receive funding, there are no silos, and there is no duplicated work. Everyone has a piece of the puzzle, and we work together to move fast. That is the magic of our model.”

Inspired by MIT’s open-source methods, researchers must share data freely with one another to accelerate the group’s overall progress. This kind of partnership across labs and borders is unprecedented in a highly competitive sector.

Mariano Gargiulo MBA ’03 met Goldsztein on the first day of their MIT Sloan Fellows MBA program orientation and has been his dear friend ever since. An early-stage donor to MBI and a Houston-based executive in the energy sector, Gargiulo sat down with Goldsztein as he first conceptualized MBI’s operating model.

“Usually, startup business models plot out the next 10-15 years; Fernando’s timeline was only two years, and his benchmarks were in three-month increments.” It was audaciously optimistic, says Gargiulo, but so was the founder.

“When I saw it, I did not doubt that he would achieve his goals. I’m seeing Fernando hit those first targets now and it’s amazing to watch,” Gargiulo says.

Children’s National Hospital endorsed MBI in 2023 and invited Goldsztein to sit on its foundation’s board, adding credibility to the initiative and strengthening his ability to fundraise more ambitiously.

According to Packer, in the next few months, the first two MBI protocols will reach patients for the first time: an immunotherapy protocol, which “leverages the body’s immune response to target cancer cells more effectively and safely than traditional therapies,” and a medulloblastoma vaccine, which “adapts similar methodologies used in Covid-19 vaccine development. This approach aims to provide a versatile and mobile treatment that could be distributed globally.”

A matter of when

When Goldsztein is not with his own family in Brazil, fundraising, or managing MBI, he is on Zoom with a network of more than 70 other families with children with relapsed medulloblastoma. “I’m not a doctor and I don’t give out medical advice, but with these trials, we are giving each other hope,” he explains.

Hope and purpose are commodities that Goldsztein has in spades. “I don’t understand the idea of doing business and accumulating assets, but not helping others,” he says. He shared that message with an auditorium of his fellow alumni at his 2023 MIT Sloan Reunion.

Frederico, who defied all odds and lived with the threat of recurrence, recently graduated high school. He is interested in international relations and passionate about photography. “This is about finding a cure for Frederico and for all kids,” Goldsztein says.

When asked how the world would be impacted if MBI found a cure for medulloblastoma, Goldsztein shakes his head.

“We are going to find the cure. It’s not if, it’s a matter of when.”

His next goal is to scale MBI and have it serve as a resource for groups that want to replicate its playbook to solve other childhood diseases.

“I’m never going to stop,” he says.



from MIT News https://ift.tt/EPfaKbQ

Thursday, January 23, 2025

A platform to expedite clean energy projects

Businesses and developers often face a steep learning curve when installing clean energy technologies, such as solar installations and EV chargers. To get a fair deal, they need to navigate a complex bidding process that involves requesting proposals, evaluating bids, and ultimately contracting with a provider.

Now the startup Station A, founded by a pair of MIT alumni and their colleagues, is streamlining the process of deploying clean energy. The company has developed a marketplace for clean energy that helps real estate owners and businesses analyze properties to calculate returns on clean energy projects, create detailed project listings, collect and compare bids, and select a provider.

The platform helps real estate owners and businesses adopt clean energy technologies like solar panels, batteries, and EV chargers at the lowest possible prices, in places with the highest potential to reduce energy costs and emissions.

“We do a lot to make adopting clean energy simple,” explains Manos Saratsis MArch ’15, who co-founded Station A with Kevin Berkemeyer MBA ’14. “Imagine if you were trying to buy a plane ticket and your travel agent only used one carrier. It would be more expensive, and you couldn’t even get to some places. Our customers want to have multiple options and easily learn about the track record of whoever they’re working with.”

Station A has already partnered with some of the largest real estate companies in the country, some with thousands of properties, to reduce the carbon footprint of their buildings. The company is also working with grocery chains, warehouses, and other businesses to accelerate the clean energy transition.

“Our platform uses a lot of AI and machine learning to turn addresses into building footprints and to understand their electricity costs, available incentives, and where they can expect the highest ROI,” says Saratsis, who serves as Station A’s head of product. “This would normally require tens or hundreds of thousands of dollars’ worth of consulting time, and we can do it for next to no money very quickly.”
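As a rough illustration of the kind of screening arithmetic such a platform automates — with hypothetical numbers and a deliberately simplified formula, not Station A's actual model — a simple-payback estimate for a rooftop solar project might look like this:

```python
def simple_payback_years(system_kw, cost_per_watt, annual_kwh_per_kw,
                         electricity_rate, incentive_fraction=0.30):
    """Rough solar simple-payback sketch (illustrative only).

    Net installed cost divided by the annual electricity-bill
    savings the array offsets; ignores degradation, financing,
    and rate escalation.
    """
    capex = system_kw * 1000 * cost_per_watt * (1 - incentive_fraction)
    annual_savings = system_kw * annual_kwh_per_kw * electricity_rate
    return capex / annual_savings

# e.g. 100 kW rooftop at $2/W, 1,300 kWh per kW-year,
# $0.15/kWh retail rate, 30% incentive
print(round(simple_payback_years(100, 2.0, 1300, 0.15), 1))  # ≈ 7.2 years
```

Running such an estimate across thousands of addresses, with location-specific rates and incentives filled in automatically, is what replaces the consulting time Saratsis describes.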

Building the foundation

As a graduate student in MIT’s Department of Architecture, Saratsis studied environmental design modeling, using data from sources like satellite imagery to understand how communities consume energy and to propose the most impactful potential clean energy solutions. He says classes with professors Christoph Reinhart and Kent Larson were particularly eye-opening.

“My ability to build a thermal energy model and simulate electricity usage in a building started at MIT,” Saratsis says.

Berkemeyer served as president of the MIT Energy Club while at the MIT Sloan School of Management. He was also a research assistant at the MIT Energy Initiative as part of the Future of Solar report and a teaching assistant for course 15.366 (Climate and Energy Ventures). He says classes in entrepreneurship with professor of the practice Bill Aulet and in sustainability with Senior Lecturer Jason Jay were formative. Prior to his studies at MIT, Berkemeyer had extensive experience developing solar and storage projects and selling clean energy products to commercial customers. The eventual co-founders didn’t cross paths at MIT, but they ended up working together at the utility NRG Energy after graduation.

“As co-founders, we saw an opportunity to transform how businesses approach clean energy,” said Berkemeyer, who is now Station A’s CEO. “Station A was born out of a shared belief that data and transparency could unlock the full potential of clean energy technologies for everyone.”

At NRG, the founders built software to help identify decarbonization opportunities for customers without having to send analysts to the sites for in-person audits.

“If they worked with a big grocery chain or a big retailer, we would use proprietary analytics to evaluate that portfolio and come up with recommendations for things like solar projects, energy efficiency, and demand response that would yield positive returns within a year,” Saratsis explains.

The tools were a huge success within the company. In 2018, the pair, along with co-founders Jeremy Lucas and Sam Steyer, decided to spin out the technology into Station A.

The founders started by working with energy companies but soon shifted their focus to real estate owners with huge portfolios and large businesses with long-term leasing contracts. Many customers have hundreds or even thousands of addresses to evaluate. Using just the addresses, Station A can provide detailed financial return estimates for clean energy investments.

In 2020, the company widened its focus from selling access to its analytics to creating a marketplace for clean energy transactions, helping businesses run the competitive bidding process for clean energy projects. After a project is installed, Station A can also evaluate whether it’s achieving its expected performance and track financial returns.

“When I talk to people outside the industry, they’re like, ‘Wait, this doesn’t exist already?’” Saratsis says. “It’s kind of crazy, but the industry is still very nascent, and no one’s been able to figure out a way to run the bidding process transparently and at scale.”

From the campus to the world

Today, about 2,500 clean energy developers are active on Station A’s platform. A number of large real estate investment trusts also use its services, in addition to businesses like HP, Nestle, and Goldman Sachs. If Station A were a developer, Saratsis says it would now rank in the top 10 in terms of annual solar deployments.

The founders credit their time at MIT with helping them scale.

“A lot of these relationships originated within the MIT network, whether through folks we met at Sloan or through engagement with MIT,” Saratsis says. “So much of this business is about reputation, and we’ve established a really good reputation.”

Since its founding, Station A has also been sponsoring classes at the Sustainability Lab at MIT, where Saratsis conducted research as a student. As they work to grow Station A’s offerings, the founders say they use the skills they gained as students every day.

“Everything we do around building analysis is inspired in some ways by the stuff that I did when I was at MIT,” Saratsis says.

“Station A is just getting started,” Berkemeyer says. “Clean energy adoption isn’t just about technology — it’s about making the process seamless and accessible. That’s what drives us every day, and we’re excited to lead this transformation.”



from MIT News https://ift.tt/1RcGVXt

Toward video generative models of the molecular world

As the capabilities of generative AI models have grown, you've probably seen how they can transform simple text prompts into hyperrealistic images and even extended video clips.

More recently, generative AI has shown potential in helping chemists and biologists explore static molecules, like proteins and DNA. Models like AlphaFold can predict molecular structures to accelerate drug discovery, and the MIT-assisted “RFdiffusion,” for example, can help design new proteins. One challenge, though, is that molecules are constantly moving and jiggling, which is important to model when constructing new proteins and drugs. Simulating these motions on a computer using physics — a technique known as molecular dynamics — can be very expensive, requiring billions of time steps on supercomputers.

As a step toward simulating these behaviors more efficiently, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and Department of Mathematics researchers have developed a generative model that learns from prior data. The team’s system, called MDGen, can take a frame of a 3D molecule and simulate what will happen next like a video, connect separate stills, and even fill in missing frames. By hitting the “play button” on molecules, the tool could potentially help chemists design new molecules and closely study how well their drug prototypes for cancer and other diseases would interact with the molecular structures they intend to target.

Co-lead author Bowen Jing SM ’22 says that MDGen is an early proof of concept, but it suggests the beginning of an exciting new research direction. “Early on, generative AI models produced somewhat simple videos, like a person blinking or a dog wagging its tail,” says Jing, a PhD student at CSAIL. “Fast forward a few years, and now we have amazing models like Sora or Veo that can be useful in all sorts of interesting ways. We hope to instill a similar vision for the molecular world, where dynamics trajectories are the videos. For example, you can give the model the first and 10th frame, and it’ll animate what’s in between, or it can remove noise from a molecular video and guess what was hidden.”

The researchers say that MDGen represents a paradigm shift from previous comparable works with generative AI in a way that enables much broader use cases. Previous approaches were “autoregressive,” meaning they relied on the previous still frame to build the next, starting from the very first frame to create a video sequence. In contrast, MDGen generates the frames in parallel with diffusion. This means MDGen can be used to, for example, connect frames at the endpoints, or “upsample” a low frame-rate trajectory in addition to pressing play on the initial frame.
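The mask-conditioned setup described above can be sketched in a few lines. This is a hypothetical illustration of the interface, not the authors' code: the same non-autoregressive model can cover forecasting, interpolation, and upsampling simply by changing which frames are marked as given.

```python
import numpy as np

def make_mask(n_frames, task, stride=10):
    """Boolean conditioning mask over trajectory frames.

    True = frame is observed (conditioning); False = frame to generate.
    A non-autoregressive, diffusion-style model denoises all False
    frames jointly, so one network handles every task via the mask.
    """
    mask = np.zeros(n_frames, dtype=bool)
    if task == "forecast":        # press play from the first frame
        mask[0] = True
    elif task == "interpolate":   # connect two endpoint stills
        mask[0] = mask[-1] = True
    elif task == "upsample":      # densify a low frame-rate trajectory
        mask[::stride] = True
    else:
        raise ValueError(f"unknown task: {task}")
    return mask

# e.g. interpolation: only the two endpoints are observed
m = make_mask(100, "interpolate")
print(int(m.sum()))  # 2 conditioning frames, 98 to generate
```

An autoregressive model, by contrast, is locked into the "forecast" mask: it can only extend forward from known frames.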

The work was presented in a paper at the Conference on Neural Information Processing Systems (NeurIPS) this past December. Last summer, it received an award for its potential commercial impact at the International Conference on Machine Learning’s ML4LMS Workshop.

Some small steps forward for molecular dynamics

In experiments, Jing and his colleagues found that MDGen’s simulations were similar to running the physical simulations directly, while producing trajectories 10 to 100 times faster.

The team first tested their model’s ability to take in a 3D frame of a molecule and generate the next 100 nanoseconds. Their system pieced together successive 10-nanosecond blocks for these generations to reach that duration. The team found that MDGen was able to compete with the accuracy of a baseline model, while completing the video generation process in roughly a minute — a mere fraction of the three hours that it took the baseline model to simulate the same dynamic.
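The block-stitching described above can be sketched as a simple rollout loop. This is illustrative only; `model` is a stand-in generator, not MDGen's interface:

```python
import numpy as np

def rollout(model, frame0, block_len=10, n_blocks=10):
    """Chain fixed-length generated blocks into a long trajectory.

    `model(start_frame, n_steps)` stands in for a trajectory generator
    returning an array of shape (n_steps, *frame0.shape); each new
    block is conditioned on the last frame of the previous one.
    """
    frames = [frame0[None]]          # keep the initial frame
    current = frame0
    for _ in range(n_blocks):
        block = model(current, block_len)
        frames.append(block)
        current = block[-1]          # hand off to the next block
    return np.concatenate(frames, axis=0)

# toy "model": drift every coordinate by 1 per step
toy = lambda x, n: x + np.arange(1, n + 1)[:, None]
traj = rollout(toy, np.zeros(3), block_len=10, n_blocks=10)
print(traj.shape)  # (101, 3): initial frame plus 100 generated steps
```

In MDGen's case each block would itself be a jointly generated 10-nanosecond segment rather than a step-by-step physics integration, which is where the speedup comes from.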

When given the first and last frame of a one-nanosecond sequence, MDGen also modeled the steps in between. The researchers’ system demonstrated a degree of realism in over 100,000 different predictions: It simulated more likely molecular trajectories than its baselines on clips shorter than 100 nanoseconds. In these tests, MDGen also indicated an ability to generalize on peptides it hadn’t seen before.

MDGen’s capabilities also include simulating frames within frames, “upsampling” the steps between each nanosecond to capture faster molecular phenomena more adequately. It can even ​​“inpaint” structures of molecules, restoring information about them that was removed. These features could eventually be used by researchers to design proteins based on a specification of how different parts of the molecule should move.

Toying around with protein dynamics

Jing and co-lead author Hannes Stärk say that MDGen is an early sign of progress toward generating molecular dynamics more efficiently. Still, they lack the data to make these models immediately impactful in designing drugs or molecules that induce the movements chemists will want to see in a target structure.

The researchers aim to scale MDGen from modeling molecules to predicting how proteins will change over time. “Currently, we’re using toy systems,” says Stärk, also a PhD student at CSAIL. “To enhance MDGen’s predictive capabilities to model proteins, we’ll need to build on the current architecture and data available. We don’t have a YouTube-scale repository for those types of simulations yet, so we’re hoping to develop a separate machine-learning method that can speed up the data collection process for our model.”

For now, MDGen presents an encouraging path forward in modeling molecular changes invisible to the naked eye. Chemists could also use these simulations to delve deeper into the behavior of medicine prototypes for diseases like cancer or tuberculosis.

“Machine learning methods that learn from physical simulation represent a burgeoning new frontier in AI for science,” says Bonnie Berger, MIT Simons Professor of Mathematics, CSAIL principal investigator, and senior author on the paper. “MDGen is a versatile, multipurpose modeling framework that connects these two domains, and we’re very excited to share our early models in this direction.”

“Sampling realistic transition paths between molecular states is a major challenge,” says fellow senior author Tommi Jaakkola, who is the MIT Thomas Siebel Professor of Electrical Engineering and Computer Science and of the Institute for Data, Systems, and Society, and a CSAIL principal investigator. “This early work shows how we might begin to address such challenges by shifting generative modeling to full simulation runs.”

Researchers across the field of bioinformatics have heralded this system for its ability to simulate molecular transformations. “MDGen models molecular dynamics simulations as a joint distribution of structural embeddings, capturing molecular movements between discrete time steps,” says Chalmers University of Technology associate professor Simon Olsson, who wasn’t involved in the research. “Leveraging a masked learning objective, MDGen enables innovative use cases such as transition path sampling, drawing analogies to inpainting trajectories connecting metastable phases.”

The researchers’ work on MDGen was supported, in part, by the National Institute of General Medical Sciences, the U.S. Department of Energy, the National Science Foundation, the Machine Learning for Pharmaceutical Discovery and Synthesis Consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the Defense Threat Reduction Agency, and the Defense Advanced Research Projects Agency.



from MIT News https://ift.tt/awT9GZl

Physicists discover — and explain — unexpected magnetism in an atomically thin material

MIT physicists have created a new ultrathin, two-dimensional material with unusual magnetic properties that initially surprised the researchers before they went on to solve the complicated puzzle behind those properties’ emergence. As a result, the work introduces a new platform for studying how materials behave at the most fundamental level — the world of quantum physics.

Ultrathin materials made of a single layer of atoms have riveted scientists’ attention since the discovery of the first such material — graphene, composed of carbon — about 20 years ago. Among other advances since then, researchers have found that stacking individual sheets of the 2D materials, and sometimes twisting them at a slight angle to each other, can give them new properties, from superconductivity to magnetism. Enter the field of twistronics, which was pioneered at MIT by Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT.

In the current research, reported in the Jan. 7 issue of Nature Physics, the scientists, led by Jarillo-Herrero, worked with three layers of graphene. Each layer was twisted on top of the next at the same angle, creating a helical structure akin to the DNA helix or a hand of three cards that are fanned apart.

“Helicity is a fundamental concept in science, from basic physics to chemistry and molecular biology. With 2D materials, one can create special helical structures, with novel properties which we are just beginning to understand. This work represents a new twist in the field of twistronics, and the community is very excited to see what else we can discover using this helical materials platform!” says Jarillo-Herrero, who is also affiliated with MIT’s Materials Research Laboratory.

Do the twist

Twistronics can lead to new properties in ultrathin materials because arranging sheets of 2D materials in this way results in a unique pattern called a moiré lattice. And a moiré pattern, in turn, has an impact on the behavior of electrons.

“It changes the spectrum of energy levels available to the electrons and can provide the conditions for interesting phenomena to arise,” says Sergio C. de la Barrera, one of three co-first authors of the recent paper. De la Barrera, who conducted the work while a postdoc at MIT, is now an assistant professor at the University of Toronto.

In the current work, the helical structure created by the three graphene layers forms two moiré lattices. One is created by the first two overlapping sheets; the other is formed between the second and third sheets.

The two moiré patterns together form a third moiré, a supermoiré, or “moiré of a moiré,” says Li-Qiao Xia, a graduate student in MIT physics and another of the three co-first authors of the Nature Physics paper. “It’s like a moiré hierarchy.” While the first two moiré patterns are only nanometers, or billionths of a meter, in scale, the supermoiré appears at a scale of hundreds of nanometers superimposed over the other two. You can only see it if you zoom out to get a much wider view of the system.
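For reference, the scale hierarchy follows from standard moiré geometry (a textbook relation, not a result of this paper): two identical lattices with lattice constant $a$, twisted by a small angle $\theta$, produce a moiré pattern with period

```latex
% Moire period of two identical lattices twisted by a small angle
\lambda_m = \frac{a}{2\sin(\theta/2)} \approx \frac{a}{\theta}
\quad (\text{small } \theta)
```

For graphene ($a \approx 0.246$ nm), a twist near 1 degree gives $\lambda_m \approx 14$ nm; a slight mismatch between the two moiré lattices then repeats the same geometry at the moiré scale, yielding a supermoiré with a period another order of magnitude or more longer.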

A major surprise

The physicists expected to observe signatures of this moiré hierarchy. They got a huge surprise, however, when they applied and varied a magnetic field. The system responded with an experimental signature for magnetism, one that arises from the motion of electrons. In fact, this orbital magnetism persisted to -263 degrees Celsius — the highest temperature reported in carbon-based materials to date.

But that magnetism can only occur in a system that lacks a specific symmetry — one that the team’s new material should have had. “So the fact that we saw this was very puzzling. We didn’t really understand what was going on,” says Aviram Uri, an MIT Pappalardo postdoc in physics and the third co-first author of the new paper.

Other authors of the paper include MIT professor of physics Liang Fu; Aaron Sharpe of Sandia National Laboratories; Yves H. Kwan of Princeton University; Ziyan Zhu, David Goldhaber-Gordon, and Trithep Devakul of Stanford University; and Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.

What was happening?

It turns out that the new system did indeed break the symmetry that prohibits the orbital magnetism the team observed, but in a very unusual way. “What happens is that the atoms in this system aren’t very comfortable, so they move in a subtle orchestrated way that we call lattice relaxation,” says Xia. And the new structure formed by that relaxation does indeed break the symmetry locally, on the moiré length scale.

This opens the possibility for the orbital magnetism the team observed. However, if you zoom out to view the system on the supermoiré scale, the symmetry is restored. “The moiré hierarchy turns out to support interesting phenomena at different length scales,” says de la Barrera.

Concludes Uri: “It’s a lot of fun when you solve a riddle and it’s such an elegant solution. We’ve gained new insights into how electrons behave in these complex systems, insights that we couldn’t have had unless our experimental observations forced us to think about these things.”

This work was supported by the Army Research Office, the National Science Foundation, the Gordon and Betty Moore Foundation, the Ross M. Brown Family Foundation, an MIT Pappalardo Fellowship, the VATAT Outstanding Postdoctoral Fellowship in Quantum Science and Technology, the JSPS KAKENHI, and a Stanford Science Fellowship.



from MIT News https://ift.tt/5hO7DYV

New START.nano cohort is developing solutions in health, data storage, power, and sustainable energy

MIT.nano has announced seven new companies to join START.nano, a program aimed at speeding the transition of hard-tech innovation to market. The program supports new ventures through discounted use of MIT.nano’s facilities and access to the MIT innovation ecosystem.

The advancements pursued by the newly engaged startups include wearables for health care, green alternatives to fossil fuel-based energy, novel battery technologies, enhancements in data systems, and platforms that interconnect nanofabrication knowledge networks, among others.

“The transition of the grand idea that is imagined in the laboratory to something that a million people can use in their hands is a journey fraught with many challenges,” MIT.nano Director Vladimir Bulović said at the 2024 Nano Summit, where nine START.nano companies presented their work. The program provides resources to ease startups over the first two hurdles — finding stakeholders and building a well-developed prototype.

In addition to access to laboratory tools necessary to advance their technologies, START.nano companies receive advice from MIT.nano expert staff, are connected to MIT.nano Consortium companies, gain a broader exposure at MIT conferences and community events, and are eligible to join the MIT Startup Exchange.

“MIT.nano has allowed us to push our project to the frontiers of sensing by implementing advanced fabrication techniques using their machinery,” said Uroš Kuzmanović, CEO and founder of Biosens8. “START.nano has surrounded us with exciting peers, a strong support system, and a spotlight to present our work. By taking advantage of all that the program has to offer, BioSens8 is moving faster than we could anywhere else.”

Here are the seven new START.nano participants:

Analog Photonics is developing lidar and optical communications technology using silicon photonics.

Biosens8 is engineering novel devices to enable health ownership. Their research focuses on multiplexed wearables for hormones, neurotransmitters, organ health markers, and drug use that will give insight into the body's health state, opening the door to personalized medicine and proactive, data-driven health decisions.

Casimir, Inc. is working on power-generating nanotechnology that interacts with quantum fields to create a continuous source of power. The team compares their technology to a solar panel that works in the dark or a battery that never needs to be recharged.

Central Spiral focuses on lossless data compression. Their technology allows for the compression of any type of data, including those that are already compressed, reducing data storage and transmission costs, lowering carbon dioxide emissions, and enhancing efficiency.

FabuBlox connects stakeholders across the nanofabrication ecosystem and resolves issues of scattered, unorganized, and isolated fab knowledge. Their cloud-based platform combines a generative process design and simulation interface with GitHub-like repository building capabilities.

Metal Fuels is converting industrial waste aluminum to onsite energy and high-value aluminum/aluminum-oxide powders. Their approach combines the existing mature technologies of molten metal purification and water atomization to develop a self-sustaining reactor that produces alumina of higher value than the input scrap aluminum feedstock, while also collecting the hydrogen off-gas.

PolyJoule, Inc. is an energy storage startup working on conductive polymer battery technology. The team’s goal is a grid battery of the future that is ultra-safe, sustainable, long living, and low-cost.

In addition to the seven startups that are actively using MIT.nano, nine other companies have been invited to join the latest START.nano cohort:

  • Acorn Genetics
  • American Boronite Corp.
  • Copernic Catalysts
  • Envoya Bio
  • Helix Carbon
  • Minerali
  • Plaid Semiconductors
  • Quantum Network Technologies
  • Wober Tech

Launched in 2021, START.nano now comprises over 20 companies and eight graduates — ventures that have moved beyond the initial startup stages and some into commercialization. 



from MIT News https://ift.tt/NdPE2yu