Tuesday, November 29, 2022

Looking beyond “technology for technology’s sake”

Austen Roberson’s favorite class at MIT is 2.S007 (Design and Manufacturing I-Autonomous Machines), in which students design, build, and program a fully autonomous robot to accomplish tasks laid out on a themed game board.

“The best thing about that class is everyone had a different idea,” says Roberson. “We all had the same game board and the same instructions given to us, but the robots that came out of people’s minds were so different.”

The game board was Mars-themed, with a model shuttle that could be lifted to score points. Roberson’s robot, nicknamed Tank Evans after a character from the movie “Surf’s Up,” employed a clever strategy to accomplish this task. Instead of spinning the gears that would raise the entire mechanism, Roberson realized a claw gripper could wrap around the outside of the shuttle and lift it manually.

“That wasn’t the intended way,” says Roberson, but his outside-of-the-box strategy ended up winning him the competition at the conclusion of the class, which was part of the New Engineering Education Transformation (NEET) program. “It was a really great class for me. I get a lot of gratification out of building something with my hands and then using my programming and problem-solving skills to make it move.”

Roberson, a senior, is majoring in aerospace engineering with a minor in computer science. As his winning robot demonstrates, he thrives at the intersection of both fields. He references the Mars Curiosity Rover as the type of project that inspires him; he even keeps a Lego model of Curiosity on his desk. 

“You really have to trust that the hardware you’ve made is up to the task, but you also have to trust your software equally as much,” says Roberson, referring to the challenges of operating a rover from millions of miles away. “Is the robot going to continue to function after we’ve put it into space? Both of those things have to come together in such a perfect way to make this stuff work.”

Outside of formal classwork, Roberson has pursued multiple research opportunities at MIT that blend his academic interests. He’s worked on satellite situational awareness with the Space Systems Laboratory, tested drone flight in different environments with the Aerospace Controls Laboratory, and is currently working on zero-shot machine learning for anomaly detection in big datasets with the Mechatronics Research Laboratory.

Even while tackling these challenging technical problems head-on, Roberson is also actively thinking about the social impact of his work. He takes classes in the Program on Science, Technology, and Society, which has taught him not only how societal change throughout history has been driven by technological advancements, but also how to be a thoughtful engineer in his own career.

“Learning about the social implications of the technology you’re working on is really important,” says Roberson, acknowledging that his work in automation and machine learning needs to address these questions. “Sometimes, we get caught up in technology for technology’s sake. How can we take these same concepts and bring them to people to help in a tangible, physical way? How have we come together as a scientific community to really affect social change, and what can we do in the future to continue affecting that social change?”

Roberson is already working through what these questions mean for him personally. He’s been a member of the National Society of Black Engineers (NSBE) throughout his entire college experience, which includes serving on the executive board for two years. He’s helped to organize workshops focused on everything from interview preparation to financial literacy, as well as social events to build community among members.

“The mission of the organization is to increase the number of culturally responsible Black engineers that excel academically, succeed professionally, and positively impact the community,” says Roberson. “My goal with NSBE was to be able to provide a resource to help everybody get to where they wanted to be, to be the vehicle to really push people to be their best, and to provide the resources that people needed and wanted to advance themselves professionally.”

In fact, one of his most memorable MIT experiences is the first conference he attended as a member of NSBE.

“Being able to see all these different people from all of these different schools able to come together as a family and just talk to each other, it’s a very rewarding experience,” Roberson says. “It’s important to be able to surround yourself with people who have similar professional goals and share similar backgrounds and experiences with you. It’s definitely the proudest I’ve been of any club at MIT.”

Looking toward his own career, Roberson wants to find a way to work on fast-paced, cutting-edge technologies that move society forward in a positive way.

“Whether that be space exploration or something else, all I can hope for is that I’m making an impact, and that I’m making a difference in people’s lives,” says Roberson. “I think learning about space is learning about ourselves as well. The more you can learn about the stuff that’s out there, you can take those lessons to reflect on what’s down here as well.”



from MIT News https://ift.tt/CfOw5r8

A breakthrough on “loss and damage,” but also disappointment, at UN climate conference

As the 2022 United Nations climate change conference, known as COP27, stretched into its final hours on Saturday, Nov. 19, it was uncertain what kind of agreement might emerge from two weeks of intensive international negotiations.

In the end, COP27 produced mixed results: on the one hand, a historic agreement for wealthy countries to compensate low-income countries for “loss and damage,” but on the other, limited progress on new plans for reducing the greenhouse gas emissions that are warming the planet.

“We need to drastically reduce emissions now — and this is an issue this COP did not address,” said U.N. Secretary-General António Guterres in a statement at the conclusion of COP27. “A fund for loss and damage is essential — but it’s not an answer if the climate crisis washes a small island state off the map — or turns an entire African country to desert.”

Throughout the two weeks of the conference, a delegation of MIT students, faculty, and staff was at the Sharm El-Sheikh International Convention Center to observe the negotiations, conduct and share research, participate in panel discussions, and forge new connections with researchers, policymakers, and advocates from around the world.

Loss and damage

A key issue coming into COP27 (COP stands for “conference of the parties” to the U.N. Framework Convention on Climate Change, held for the 27th time) was loss and damage: a term used by the U.N. to refer to harms caused by climate change — either through acute catastrophes like extreme weather events or slower-moving impacts like sea level rise — to which communities and countries are unable to adapt.

Ultimately, a deal on loss and damage proved to be COP27’s most prominent accomplishment. Negotiators reached an eleventh-hour agreement to “establish new funding arrangements for assisting developing countries that are particularly vulnerable to the adverse effects of climate change.” 

“Providing financial assistance to developing countries so they can better respond to climate-related loss and damage is not only a moral issue, but also a pragmatic one,” said Michael Mehling, deputy director of the MIT Center for Energy and Environmental Policy Research, who attended COP27 and participated in side events. “Future emissions growth will be squarely centered in the developing world, and offering support through different channels is key to building the trust needed for more robust global cooperation on mitigation.”

Youssef Shaker, a graduate student in the MIT Technology and Policy Program and a research assistant with the MIT Energy Initiative, attended the second week of the conference, where he followed the negotiations over loss and damage closely. 

“While the creation of a fund is certainly an achievement,” Shaker said, “significant questions remain to be answered, such as the size of the funding available as well as which countries receive access to it.” A loss-and-damage fund that is not adequately funded, Shaker noted, “would not be an impactful outcome.” 

The agreement on loss and damage created a new committee, made up of 24 country representatives, to “operationalize” the new funding arrangements, including identifying funding sources. The committee is tasked with delivering a set of recommendations at COP28, which will take place next year in Dubai.

Advising the U.N. on net zero

Though the decisions reached at COP27 did not include major new commitments on reducing emissions from the combustion of fossil fuels, the transition to a clean global energy system was nevertheless a key topic of conversation throughout the conference.

The Council of Engineers for the Energy Transition (CEET), an independent, international body of engineers and energy systems experts formed to provide advice to the U.N. on achieving net-zero emissions globally by 2050, convened for the first time at COP27. Jessika Trancik, a professor in the MIT Institute for Data, Systems, and Society and a member of CEET, spoke on a U.N.-sponsored panel on solutions for the transition to clean energy.

Trancik noted that the energy transition will look different in different regions of the world. “As engineers, we need to understand those local contexts and design solutions around those local contexts — that’s absolutely essential to support a rapid and equitable energy transition.”

At the same time, Trancik noted that there is now a set of “low-cost, ready-to-scale tools” available to every region — tools that resulted from a globally competitive process of innovation, stimulated by public policies in different countries, that dramatically drove down the costs of technologies like solar energy and lithium-ion batteries. The key, Trancik said, is for regional transition strategies to “tap into global processes of innovation.”

Reinventing climate adaptation

Elfatih Eltahir, the H. M. King Bhumibol Professor of Hydrology and Climate, traveled to COP27 to present plans for the Jameel Observatory Climate Resilience Early Warning System (CREWSnet), one of the five projects selected in April 2022 as a flagship in MIT’s Climate Grand Challenges initiative. CREWSnet focuses on climate adaptation, the term for adapting to climate impacts that are unavoidable.

The aim of CREWSnet, Eltahir told the audience during a panel discussion, is “nothing short of reinventing the process of climate change adaptation,” so that it is proactive rather than reactive; community-led; data-driven and evidence-based; and so that it integrates different climate risks, from heat waves to sea level rise, rather than treating them individually.

“However, it’s easy to talk about these changes,” said Eltahir. “The real challenge, which we are now just launching and engaging in, is to demonstrate that on the ground.” Eltahir said that early demonstrations will happen in a couple of key locations, including southwest Bangladesh, where multiple climate risks — rising sea levels, increasing soil salinity, and intensifying heat waves and cyclones — are combining to threaten the area’s agricultural production.

Building on COP26

Some members of MIT’s delegation attended COP27 to advance efforts that had been formally announced at last year’s U.N. climate conference, COP26, in Glasgow, Scotland.

At an official U.N. side event co-organized by MIT on Nov. 11, Greg Sixt, the director of the Food and Climate Systems Transformation (FACT) Alliance led by the Abdul Latif Jameel Water and Food Systems Lab, provided an update on the alliance’s work since its launch at COP26.

Food systems are a major source of greenhouse gas emissions — and are increasingly vulnerable to climate impacts. The FACT Alliance works to better connect researchers to farmers, food businesses, policymakers, and other food systems stakeholders to make food systems (which include food production, consumption, and waste) more sustainable and resilient. 

Sixt told the audience that the FACT Alliance now counts over 20 research and stakeholder institutions around the world among its members, but also collaborates with other institutions in an “open network model” to advance work in key areas — such as a new research project exploring how climate scenarios could affect global food supply chains.

Marcela Angel, research program director for the Environmental Solutions Initiative (ESI), helped convene a meeting at COP27 of the Afro-InterAmerican Forum on Climate Change, which also launched at COP26. The forum works with Afro-descendant leaders across the Americas to address significant environmental issues, including climate risks and biodiversity loss. 

At the event — convened with the Colombian government and the nonprofit Conservation International — ESI brought together leaders from six countries in the Americas and presented recent work that estimates that there are over 178 million individuals who identify as Afro-descendant living in the Americas, in lands of global environmental importance. 

“There is a significant overlap between biodiversity hot spots, protected areas, and areas of high Afro-descendant presence,” said Angel. “But the role and climate contributions of these communities are understudied, and often made invisible.”

Limiting methane emissions

Methane is a short-lived but potent greenhouse gas: When released into the atmosphere, it immediately traps about 120 times more heat than carbon dioxide does. More than 150 countries have now signed the Global Methane Pledge, launched at COP26, which aims to reduce methane emissions by at least 30 percent by 2030 compared to 2020 levels.

Sergey Paltsev, the deputy director of the Joint Program on the Science and Policy of Global Change and a senior research scientist at the MIT Energy Initiative, gave the keynote address at a Nov. 17 event on methane, where he noted the importance of methane reductions from the oil and gas sector to meeting the 2030 goal.

“The oil and gas sector is where methane emissions reductions could be achieved the fastest,” said Paltsev. “We also need to employ an integrated approach to address methane emissions in all sectors and all regions of the world because methane emissions reductions provide a near-term pathway to avoiding dangerous tipping points in the global climate system.”

“Keep fighting relentlessly”

Arina Khotimsky, a senior majoring in materials science and engineering and a co-president of the MIT Energy and Climate Club, attended the first week of COP27. She reflected on the experience in a social media post after returning home. 

“COP will always have its haters. Is there greenwashing? Of course! Is everyone who should have a say in this process in the room? Not even close,” wrote Khotimsky. “So what does it take for COP to matter? It takes everyone who attended to not only put ‘climate’ on front-page news for two weeks, but to return home and keep fighting relentlessly against climate change. I know that I will.”



from MIT News https://ift.tt/enMzmH9

Monday, November 28, 2022

MIT Policy Hackathon produces new solutions for technology policy challenges

Almost three years ago, the Covid-19 pandemic changed the world. Many are still looking to uncover a “new normal.”

“Instead of going back to normal, [there’s a new generation that] wants to build back something different, something better,” says Jorge Sandoval, a second-year graduate student in MIT’s Technology and Policy Program (TPP) at the Institute for Data, Systems and Society (IDSS). “How do we communicate this mindset to others, that the world cannot be the same as before?”

This was the inspiration behind “A New (Re)generation,” this year’s theme for the IDSS-student-run MIT Policy Hackathon, which Sandoval helped to organize as the event chair. The Policy Hackathon is a weekend-long, interdisciplinary competition that brings together participants from around the globe to explore potential solutions to some of society’s greatest challenges. 

Unlike other competitions of its kind, Sandoval says MIT’s event emphasizes a humanistic approach. “The idea of our hackathon is to promote applications of technology that are humanistic or human-centered,” he says. “We take the opportunity to examine aspects of technology in the spaces where they tend to interact with society and people, an opportunity most technical competitions don't offer because their primary focus is on the technology.”

The competition started with 50 teams spread across four challenge categories. This year’s categories included Internet and Cybersecurity, Environmental Justice, Logistics, and Housing and City Planning. While some people come into the challenge with friends, Sandoval said most teams form organically during an online networking meeting hosted by MIT.

“We encourage people to pair up with others outside of their country and to form teams of different diverse backgrounds and ages,” Sandoval says. “We try to give people who are often not invited to the decision-making table the opportunity to be a policymaker, bringing in those with backgrounds in not only law, policy, or politics, but also medicine, and people who have careers in engineering or experience working in nonprofits.”

Once an in-person event, the Policy Hackathon has gone through its own regeneration process these past three years, according to Sandoval. After going entirely online during the pandemic's height, last year they successfully hosted the first hybrid version of the event, which served as their model again this year.

“The hybrid version of the event gives us the opportunity to allow people to connect in a way that is lost if it is only online, while also keeping the wide range of accessibility, allowing people to join from anywhere in the world, regardless of nationality or income, to provide their input,” Sandoval says.

For Swetha Tadisina, an undergraduate computer science major at Lafayette College and participant in the internet and cybersecurity category, the hackathon was a unique opportunity to meet and work with people much more advanced in their careers. “I was surprised how such a diverse team that had never met before was able to work so efficiently and creatively,” Tadisina says.

Erika Spangler, a public high school teacher from Massachusetts and member of the environmental justice category’s winning team, says that while each member of “Team Slime Mold” came to the table with a different set of skills, they managed to be in sync from the start — even working across the nine-and-a-half-hour time difference the four-person team faced when working with policy advocate Shruti Nandy from Calcutta, India.

“We divided the project into data, policy, and research and trusted each other’s expertise,” Spangler says, “Despite having separate areas of focus, we made sure to have regular check-ins to problem-solve and cross-pollinate ideas.”

During the 48-hour period, her team proposed the creation of an algorithm to identify high-quality brownfields that could be cleaned up and used as sites for building renewable energy. Their corresponding policy sought to mandate additional requirements for renewable energy businesses seeking tax credits from the Inflation Reduction Act.

“Their policy memo had the most in-depth technical assessment, including deep dives in a few key cities to show the impact of their proposed approach for site selection at a very granular level,” says Amanda Levin, director of policy analysis for the Natural Resources Defense Council (NRDC). Levin acted as both a judge and challenge provider for the environmental justice category.

“They also presented their policy recommendations in the memo in a well-thought-out way, clearly noting the relevant actor,” she adds. “This clarity around what can be done, and who would be responsible for those actions, is highly valuable for those in policy.”

Levin says the NRDC, one of the largest environmental nonprofits in the United States, provided five “challenge questions,” making it clear that teams did not need to address all of them. She notes that this gave teams significant leeway, bringing a wide variety of recommendations to the table. 

“As a challenge partner, the work put together by all the teams is already being used to help inform discussions about the implementation of the Inflation Reduction Act,” Levin says. “Being able to tap into the collective intelligence of the hackathon helped uncover new perspectives and policy solutions that can help make an impact in addressing the important policy challenges we face today.”

While having partners with experience in data science and policy definitely helped, fellow Team Slime Mold member Sara Sheffels, a PhD candidate in MIT’s biomaterials program, says she was surprised how much her experiences outside of science and policy were relevant to the challenge: “My experience organizing MIT’s Graduate Student Union shaped my ideas about more meaningful community involvement in renewables projects on brownfields. It is not meaningful to merely educate people about the importance of renewables or ask them to sign off on a pre-planned project without addressing their other needs.”

“I wanted to test my limits, gain exposure, and expand my world,” Tadisina adds. “The exposure, friendships, and experiences you gain in such a short period of time are incredible.”

For Willy R. Vasquez, an electrical and computer engineering PhD student at the University of Texas, the hackathon is not to be missed. “If you’re interested in the intersection of tech, society, and policy, then this is a must-do experience.”



from MIT News https://ift.tt/dYfUp2z

Industrializing 3D printing

The cutting edge of additive manufacturing offers a world of possibilities for companies looking to transform their manufacturing processes and create new products. But companies that want to tap into that world have traditionally had to invest huge sums of money into the latest 3D printing machines and then figure out how to integrate them into their operations.

That’s a tough sell considering 3D printers can struggle with throughput and consistency for many industrial applications.

Now, VulcanForms, founded by Martin C. Feldmann MEng ’14 and MIT Professor John Hart, is offering digital manufacturing as a service for companies to build industrial products at scale. The company assists customers with materials selection and product design, and then crafts a scalable manufacturing workflow in its production foundry.

At the heart of each of those workflows is a proprietary laser powder bed fusion (LPBF) metal 3D printer that uses an array of finely choreographed laser beams to produce high performance metal parts with complex designs. The printers are integrated with VulcanForms’ machining, robotics, and postprocessing equipment through a digital thread that also monitors parts as they’re produced.

“Even though LPBF technology is well-established for several applications including jet engine fuel nozzles and orthopedic implants, it’s barely scratching the surface of the opportunity,” Hart says. “VulcanForms sees a tremendous market opportunity to realize additive manufacturing at industrial scale and integrate it with a digital production system.”

VulcanForms is currently producing parts for companies in the medical, defense, semiconductor, and aerospace industries, turning designs into finished parts in a matter of days. The founders say VulcanForms’ quality exceeds industry standards with materials like titanium as well as nickel-based and advanced steel alloys.

VulcanForms is currently completing its first two digital manufacturing facilities in Devens and Newburyport, Massachusetts. When it’s done, the Devens facility will house several dozen of the company’s additive manufacturing systems in addition to having postprocessing capabilities. The founders say those systems will make Devens the highest-throughput metal additive manufacturing facility in the world. The Newburyport facility focuses on precision machining, industrial automation, and assembly operations. Merging these technologies with a digital thread, VulcanForms is building U.S.-based digital manufacturing infrastructure that the founders say will define the way products are designed, built, and delivered.

Making 3D printing industrially relevant

Hart calls his entry into additive manufacturing serendipitous. In 2013, he was asked by a colleague to teach a class for MIT’s Master of Engineering in Advanced Manufacturing and Design program.

“I don’t remember what led me to propose that the class focus on additive manufacturing, because I wasn’t yet doing research in the area,” Hart says. “The class was an experiment I used to explore a new interest and to tap into the passion and curiosity of the students.”

One of the students in that class was Feldmann, then in his first semester at MIT. The project-based class tasked students with measuring the accuracy of 3D-printed parts, improving their properties and contributing to lectures relating 3D printing to the core principles of manufacturing.

“MIT throws so much at you — highly technical stuff but very applicable stuff,” Feldmann says. “At MIT, learning additive manufacturing wasn’t just calculating things. It was, ‘Here are [fused deposition modeling] printers, tell me what their capabilities are.’ And you use them and make things. I really enjoyed that. It prepares one for leading research efforts in industry and startups because you have to approach things like you know what you’re doing and have the confidence that you'll figure it out.”

After earning his degree, Feldmann became a research specialist in Hart’s lab, where he studied nanomaterials and battery electrodes. But Feldmann and Hart continued brainstorming ways to make additive manufacturing more industrially relevant.

Eventually they decided to build a new kind of LPBF metal printer that would enable a large number of lasers to operate at the same time, improving throughput while maintaining the quality of the finished part. The pair received early guidance through MIT’s Venture Mentoring Service.

“Our goal was to rearchitect the LPBF process, and to do it in a way that enables a much higher and more consistent quality, which we saw to be the main impediment to industrialization of additive manufacturing,” Hart says.

With that mission in mind, Feldmann and Hart decided to take the leap and start VulcanForms, with Feldmann essentially supporting himself for nearly two years while he worked on the first printer prototype. Today the company’s printers use hundreds of weld tracks in each layer through which lasers move in a synchronized dance. The lasers collectively deliver up to 100 kilowatts of power to make parts at a higher resolution and scale than the founders say other printers can achieve.

VulcanForms’ production foundry also includes CNC machining and post processing equipment, and the founders say the company’s software stack is a key differentiator.

“From the start, we saw 3D printing as a cornerstone of digital manufacturing, where the software and hardware work hand-in-hand to encode and execute production instructions,” Hart says. “We’ve built the software that allows each part to receive the same temperature locally in each voxel in each layer. It also allows us to move quickly to the end product while maintaining that consistency in production.”

Ultimately the founders are focused on what those capabilities unlock for customers.

“What really gets me excited is how we’re able to take a customer part and turn it into reality in a production setting — not in a coat hanger or desk ornament setting,” Feldmann says. “Everyone in additive is just super excited about what a 3D printer can do and not how this works in a production value stream. That’s why we have an entire production value stream in house. It’s why our motto isn’t ‘VulcanForms: 100 kilowatts laser power in a printer.’ It’s ‘VulcanForms: accelerating innovation.’”

Helping 3D printing reach its potential

Last year, a supercomputer manufacturer sent VulcanForms designs for a cooling component in its processors. The titanium part, which contained dozens of microscopic tunnels, was so complex it could only be made using additive manufacturing. As The New York Times reported, VulcanForms came back with a part two days later.

VulcanForms has also produced medical implants, industrial tooling and tire molds, and components for aviation and defense contractors.

Feldmann sees innovations enabled by additive manufacturing driving technological progress in a number of industries.

“I don’t think there are going to be orthopedic implants that aren’t LPBF-printed in the future,” Feldmann says.

That technological progress, in turn, will yield even more use cases.

“The only thing I’m 100 percent sure of is the highest-value applications for additive manufacturing have not yet been found,” Feldmann says.

The company also sees the transformation of manufacturing enabled by digital production technologies as an opportunity for the United States to improve both economic prosperity and its ecosystem of innovation.

“VulcanForms believes that one of the greatest opportunities in the United States is rebuilding its industrial ecosystem around digital production systems,” Hart says. “Digital-first production technologies, including additive manufacturing and automated precision machining, enable more innovative, resource efficient, and resilient supply chains. Innovation in manufacturing is the backbone of the American economy.”



from MIT News https://ift.tt/0AnC6Uv

The task of magnetic classification suddenly looks easier

Knowing the magnetic structure of crystalline materials is critical to many applications, including data storage, high-resolution imaging, spintronics, superconductivity, and quantum computing. Information of this sort, however, is difficult to come by. Although magnetic structures can be obtained from neutron diffraction and scattering studies, the number of machines that can support these analyses — and the time available at these facilities — is severely limited.

As a result, the experimentally determined magnetic structures of only about 1,500 materials have been tabulated to date. Researchers have also predicted magnetic structures by numerical means, but lengthy calculations are required, even on large, state-of-the-art supercomputers. These calculations, moreover, become increasingly expensive, with power demands growing exponentially, as the size of the crystal structures under consideration goes up.

Now, researchers at MIT, Harvard University, and Clemson University — led by Mingda Li, MIT assistant professor of nuclear science and engineering, and Tess Smidt, MIT assistant professor of electrical engineering and computer science — have found a way to streamline this process by employing the tools of machine learning. “This might be a quicker and cheaper approach,” Smidt says.

The team’s results were recently published in the journal iScience. One unusual feature of this paper, apart from its novel findings, is that its first authors are three MIT undergraduates — Helena Merker, Harry Heiberger, and Linh Nguyen — plus one PhD student, Tongtong Liu.

Merker, Heiberger, and Nguyen joined the project as first-years in fall 2020, and they were given a sizable challenge: to design a neural network that can predict the magnetic structure of crystalline materials. They did not start from scratch, however, making use of “equivariant Euclidean neural networks” that were co-invented by Smidt in 2018. The advantage of this kind of network, Smidt explains, “is that we won’t get a different prediction for the magnetic order if a crystal is rotated or translated, which we know should not affect the magnetic properties.” That feature is especially helpful for examining 3D materials.

The elements of structure

The MIT group drew upon a database of nearly 150,000 substances compiled by the Materials Project at the Lawrence Berkeley National Laboratory, which provided information concerning the arrangement of atoms in the crystal lattice. The team used this input to assess two key properties of a given material: magnetic order and magnetic propagation.

Figuring out the magnetic order involves classifying materials into three categories: ferromagnetic, antiferromagnetic, and nonmagnetic. The atoms in a ferromagnetic material act like little magnets with their own north and south poles. Each atom has a magnetic moment, which points from its south to north pole. In a ferromagnetic material, Liu explains, “all the atoms are lined up in the same direction — the direction of the combined magnetic field produced by all of them.” In an antiferromagnetic material, the magnetic moments of the atoms point in a direction opposite to that of their neighbors — canceling each other out in an orderly pattern that yields zero magnetization overall. In a nonmagnetic material, all the atoms could be nonmagnetic, having no magnetic moments whatsoever. Or the material could contain magnetic atoms, but their magnetic moments would point in random directions so that the net result, again, is zero magnetism.
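
The three categories can be illustrated with a toy computation on per-atom moment vectors. This is a sketch for intuition only, not the team's classifier, and all names are hypothetical:

```python
import numpy as np

def classify_order(moments, tol=1e-6):
    """Toy classification of magnetic order from per-atom moment vectors.

    moments: (N, 3) array of magnetic moment vectors, one per atom.
    Returns 'nonmagnetic', 'ferromagnetic', or 'antiferromagnetic'.
    """
    moments = np.asarray(moments, dtype=float)
    norms = np.linalg.norm(moments, axis=1)
    if np.all(norms < tol):
        return "nonmagnetic"          # no atomic moments at all
    net = np.linalg.norm(moments.sum(axis=0))
    if net < tol:
        return "antiferromagnetic"    # ordered moments cancel to zero
    return "ferromagnetic"            # moments add to a net magnetization

# Two atoms with aligned moments
print(classify_order([[0, 0, 1], [0, 0, 1]]))   # ferromagnetic
# Two atoms with opposite moments
print(classify_order([[0, 0, 1], [0, 0, -1]]))  # antiferromagnetic
```

Note the toy version's blind spot: summing moments cannot distinguish an ordered antiferromagnet from magnetic atoms whose randomly oriented moments happen to cancel; learning that distinction from the crystal structure is precisely what the trained network is for.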

The concept of magnetic propagation relates to the periodicity of a material’s magnetic structure. If you think of a crystal as a 3D arrangement of bricks, a unit cell is the smallest possible building block — the smallest number, and configuration, of atoms that can make up an individual “brick.” If the magnetic moments of every unit cell are aligned, the MIT researchers accorded the material a propagation value of zero. However, if the magnetic moment changes direction, and hence “propagates,” in moving from one cell to the next, the material is given a non-zero propagation value.
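
In the brick picture, a propagation value of zero means corresponding atoms in neighboring unit cells carry identical moments. A minimal check of that condition, with hypothetical names and purely illustrative inputs:

```python
import numpy as np

def propagation_is_zero(cell_a, cell_b, tol=1e-6):
    """Toy check of magnetic propagation between two adjacent unit cells.

    cell_a, cell_b: (N, 3) arrays of moment vectors for corresponding atoms.
    Returns True (propagation value zero) if every moment keeps its
    direction and magnitude from one cell to the next, False otherwise.
    """
    a = np.asarray(cell_a, dtype=float)
    b = np.asarray(cell_b, dtype=float)
    return bool(np.allclose(a, b, atol=tol))

# Aligned cells: propagation value zero
print(propagation_is_zero([[0, 0, 1]], [[0, 0, 1]]))   # True
# Moment flips between cells: non-zero propagation
print(propagation_is_zero([[0, 0, 1]], [[0, 0, -1]]))  # False
```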

A network solution

So much for the goals. How can machine learning tools help achieve them? The students’ first step was to take a portion of the Materials Project database to train the neural network to find correlations between a material’s crystalline structure and its magnetic structure. The students also learned — through educated guesses and trial-and-error — that they achieved the best results when they included not just information about the atoms’ lattice positions, but also the atomic weight, atomic radius, electronegativity (which reflects an atom’s tendency to attract an electron), and dipole polarizability (which indicates how far the electron is from the atom’s nucleus). During the training process, a large number of so-called “weights” are repeatedly fine-tuned.
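
As a rough sketch of that featurization, each atom's lattice position can be concatenated with the four scalar properties named above. The property values below are placeholders rather than reference data, and the actual equivariant network treats positions geometrically rather than as plain feature entries:

```python
import numpy as np

# Illustrative per-element scalar properties (placeholder values,
# not authoritative reference data).
ELEMENT_PROPS = {
    # element: (atomic weight, atomic radius [pm], electronegativity,
    #           dipole polarizability [a.u.])
    "Fe": (55.85, 126.0, 1.83, 62.0),
    "O":  (16.00,  66.0, 3.44,  5.3),
}

def atom_features(element, position):
    """Concatenate a lattice position with scalar atomic properties."""
    return np.concatenate([np.asarray(position, dtype=float),
                           np.asarray(ELEMENT_PROPS[element])])

feats = atom_features("Fe", [0.0, 0.0, 0.5])
print(feats.shape)  # (7,): 3 coordinates + 4 scalar properties
```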

“A weight is like the coefficient m in the equation y = mx + b,” Heiberger explains. “Of course, the actual equation, or algorithm, we use is a lot messier, with not just one coefficient but perhaps a hundred; x, in this case, is the input data, and you choose m so that y is predicted most accurately. And sometimes you have to change the equation itself to get a better fit.”
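
Heiberger's analogy can be made concrete in a few lines: below, the single weight m (plus the bias b) is repeatedly fine-tuned by gradient descent on synthetic data, the same loop a neural network runs over its many weights. Everything here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 0.5                 # ground truth: m = 2.0, b = 0.5

m, b = 0.0, 0.0                   # the "weights" to be fine-tuned
lr = 0.1                          # learning rate
for _ in range(500):
    err = (m * x + b) - y
    # Gradients of the mean squared error with respect to m and b
    m -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(round(m, 3), round(b, 3))   # converges close to 2.0 and 0.5
```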

Next comes the testing phase. “The weights are kept as-is,” Heiberger says, “and you compare the predictions you get to previously established values [also found in the Materials Project database].”
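
With the weights frozen, the testing phase reduces to comparing predicted labels against the reference ones. A minimal accuracy computation, using made-up labels:

```python
def accuracy(predicted, reference):
    """Fraction of held-out examples where the prediction matches."""
    matches = sum(p == r for p, r in zip(predicted, reference))
    return matches / len(reference)

# Hypothetical held-out labels (FM = ferromagnetic,
# AFM = antiferromagnetic, NM = nonmagnetic)
reference = ["FM", "AFM", "NM", "NM", "FM"]
predicted = ["FM", "AFM", "NM", "FM", "FM"]
print(accuracy(predicted, reference))  # 0.8
```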

As reported in iScience, the model had an average accuracy of about 78 percent for predicting magnetic order and about 74 percent for predicting magnetic propagation. Its accuracy in identifying nonmagnetic materials was 91 percent, even when the material contained magnetic atoms.

Charting the road ahead

The MIT investigators believe this approach could be applied to large molecules whose atomic structures are hard to discern and even to alloys, which lack crystalline structures. “The strategy there is to take as big a unit cell — as big a sample — as possible and try to approximate it as a somewhat disordered crystal,” Smidt says.

The current work, the authors wrote, represents one step toward “solving the grand challenge of full magnetic structure determination.” The “full structure” in this case means determining “the specific magnetic moments of every atom, rather than the overall pattern of the magnetic order,” Smidt explains.

“We have the math in place to take this on,” Smidt adds, “though there are some tricky details to be worked out. It’s a project for the future, but one that appears to be within reach.”

The undergraduates won’t participate in that effort, having already completed their work in this venture. Nevertheless, they all appreciated the research experience. “It was great to pursue a project outside the classroom that gave us the chance to create something exciting that didn’t exist before,” Merker says.

“This research, entirely led by undergraduates, started in 2020 when they were first-years. With Institute support from the ELO [Experiential Learning Opportunities] program and later guidance from PhD student Tongtong Liu, we were able to bring them together even while physically remote from each other. This work demonstrates how we can expand the first-year learning experience to include a real research product,” Li adds. “Being able to support this kind of collaboration and learning experience is what every educator strives for. It is wonderful to see their hard work and commitment result in a contribution to the field.”

“This really was a life-changing experience,” Nguyen agrees. “I thought it would be fun to combine computer science with the material world. That turned out to be a pretty good choice.”



from MIT News https://ift.tt/RJCHmOM

Reversing the charge

Owners of electric vehicles (EVs) are accustomed to plugging into charging stations at home and at work and filling up their batteries with electricity from the power grid. But someday soon, when these drivers plug in, their cars will also have the capacity to reverse the flow and send electrons back to the grid. As the number of EVs climbs, the fleet’s batteries could serve as a cost-effective, large-scale energy source, with potentially dramatic impacts on the energy transition, according to a new paper published by an MIT team in the journal Energy Advances.

“At scale, vehicle-to-grid (V2G) can boost renewable energy growth, displacing the need for stationary energy storage and decreasing reliance on firm [always-on] generators, such as natural gas, that are traditionally used to balance wind and solar intermittency,” says Jim Owens, lead author and a doctoral student in the MIT Department of Chemical Engineering. Additional authors include Emre Gençer, a principal research scientist at the MIT Energy Initiative (MITEI), and Ian Miller, a research specialist for MITEI at the time of the study.

The group’s work is the first comprehensive, systems-based analysis of future power systems, drawing on a novel mix of computational models integrating such factors as carbon emission goals, variable renewable energy (VRE) generation, and costs of building energy storage, production, and transmission infrastructure.

“We explored not just how EVs could provide service back to the grid — thinking of these vehicles almost like energy storage on wheels — but also the value of V2G applications to the entire energy system and if EVs could reduce the cost of decarbonizing the power system,” says Gençer. “The results were surprising; I personally didn’t believe we’d have so much potential here.”

Displacing new infrastructure

As the United States and other nations pursue stringent goals to limit carbon emissions, electrification of transportation has taken off, with the rate of EV adoption rapidly accelerating. (Some projections show EVs supplanting internal combustion vehicles over the next 30 years.) With the rise of emission-free driving, though, there will be increased demand for energy. “The challenge is ensuring both that there’s enough electricity to charge the vehicles and that this electricity is coming from renewable sources,” says Gençer.

But solar and wind energy are intermittent. Without adequate backup for these sources, such as stationary energy storage facilities using lithium-ion batteries or large-scale power plants fueled by natural gas or hydrogen, achieving clean energy goals will prove elusive. More vexing, the costs of building the necessary new energy infrastructure run to the hundreds of billions of dollars.

This is precisely where V2G can play a critical, and welcome, role, the researchers reported. In their case study of a theoretical New England power system meeting strict carbon constraints, for instance, the team found that participation from just 13.9 percent of the region’s 8 million light-duty (passenger) EVs displaced 14.7 gigawatts of stationary energy storage. This added up to $700 million in savings — the anticipated costs of building new storage capacity.
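
A quick back-of-envelope check of those figures (our arithmetic, not a result from the paper) shows what they imply per vehicle:

```python
fleet = 8_000_000        # light-duty EVs in the region
participation = 0.139    # 13.9 percent take part in V2G
displaced_gw = 14.7      # stationary storage displaced, in gigawatts

participants = fleet * participation
kw_per_ev = displaced_gw * 1e6 / participants   # GW -> kW
print(f"{participants:,.0f} EVs, ~{kw_per_ev:.1f} kW each")
```

Roughly 13 kW per participating vehicle is comparable to the power of a higher-end home Level 2 charger, which makes the displacement figure plausible.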

Their paper also described the role EV batteries could play at times of peak demand, such as hot summer days. “V2G technology has the ability to inject electricity back into the system to cover these episodes, so we don’t need to install or invest in additional natural gas turbines,” says Owens. “The way that EVs and V2G can influence the future of our power systems is one of the most exciting and novel aspects of our study.”

Modeling power

To investigate the impacts of V2G on their hypothetical New England power system, the researchers integrated their EV travel and V2G service models with two of MITEI’s existing modeling tools: the Sustainable Energy System Analysis Modeling Environment (SESAME) to project vehicle fleet and electricity demand growth, and GenX, which models the investment and operation costs of electricity generation, storage, and transmission systems. They incorporated such inputs as different EV participation rates, costs of generation for conventional and renewable power suppliers, charging infrastructure upgrades, travel demand for vehicles, changes in electricity demand, and EV battery costs.

Their analysis found benefits from V2G applications in power systems (in terms of displacing energy storage and firm generation) at all levels of carbon emission restrictions, including one with no emissions caps at all. However, their models suggest that V2G delivers the greatest value to the power system when carbon constraints are most aggressive — at 10 grams of carbon dioxide per kilowatt hour load. Total system savings from V2G ranged from $183 million to $1,326 million, reflecting EV participation rates between 5 percent and 80 percent.

“Our study has begun to uncover the inherent value V2G has for a future power system, demonstrating that there is a lot of money we can save that would otherwise be spent on storage and firm generation,” says Owens.

Harnessing V2G

For scientists seeking ways to decarbonize the economy, the vision of millions of EVs parked in garages or in office spaces and plugged into the grid for 90 percent of their operating lives proves an irresistible provocation. “There is all this storage sitting right there, a huge available capacity that will only grow, and it is wasted unless we take full advantage of it,” says Gençer.

This is not a distant prospect. Startup companies are currently testing software that would allow two-way communication between EVs and grid operators or other entities. With the right algorithms, EVs would charge from and dispatch energy to the grid according to profiles tailored to each car owner’s needs, never depleting the battery or endangering a commute.
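
In its simplest form, an owner-tailored profile might look like the following toy decision rule. The thresholds and names are entirely hypothetical, not drawn from any actual V2G software:

```python
def v2g_action(soc, price, reserve_soc=0.4, high_price=0.30, low_price=0.10):
    """Toy V2G decision rule for one time step.

    soc: battery state of charge (0.0 to 1.0)
    price: current electricity price in $/kWh
    Returns 'discharge', 'charge', or 'idle'. The battery never dips
    below the owner's reserve, so the commute is never endangered.
    """
    if price >= high_price and soc > reserve_soc:
        return "discharge"   # sell back to the grid at a premium
    if price <= low_price and soc < 1.0:
        return "charge"      # cheap power: fill the battery
    return "idle"

print(v2g_action(soc=0.8, price=0.35))   # discharge
print(v2g_action(soc=0.35, price=0.35))  # idle (below the reserve)
```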

“We don’t assume all vehicles will be available to send energy back to the grid at the same time, at 6 p.m. for instance, when most commuters return home in the early evening,” says Gençer. He believes that the vastly varied schedules of EV drivers will make enough battery power available to cover spikes in electricity use over an average 24-hour period. And there are other potential sources of battery power down the road, such as electric school buses that are employed only for short stints during the day and then sit idle.

The MIT team acknowledges the challenges of V2G consumer buy-in. While EV owners relish a clean, green drive, they may not be as enthusiastic about handing over access to their car’s battery to a utility or an aggregator working with power system operators. Policies and incentives would help.

“Since you’re providing a service to the grid, much as solar panel users do, you could be paid for your participation, and paid at a premium when electricity prices are very high,” says Gençer.

“People may not be willing to participate ’round the clock, but if we have blackout scenarios like in Texas last year, or hot-day congestion on transmission lines, maybe we can turn on these vehicles for 24 to 48 hours, sending energy back to the system,” adds Owens. “If there’s a power outage and people wave a bunch of money at you, you might be willing to talk.”

“Basically, I think this comes back to all of us being in this together, right?” says Gençer. “As you contribute to society by giving this service to the grid, you will get the full benefit of reducing system costs, and also help to decarbonize the system faster and to a greater extent.”

Actionable insights

Owens, who is building his dissertation on V2G research, is now investigating the potential impact of heavy-duty electric vehicles in decarbonizing the power system. “The last-mile delivery trucks of companies like Amazon and FedEx are likely to be the earliest adopters of EVs,” Owens says. “They are appealing because they have regularly scheduled routes during the day and go back to the depot at night, which makes them very useful for providing electricity and balancing services in the power system.”

Owens is committed to “providing insights that are actionable by system planners, operators, and to a certain extent, investors,” he says. His work might come into play in determining what kind of charging infrastructure should be built, and where.

“Our analysis is really timely because the EV market has not yet been developed,” says Gençer. “This means we can share our insights with vehicle manufacturers and system operators — potentially influencing them to invest in V2G technologies, avoiding the costs of building utility-scale storage, and enabling the transition to a cleaner future. It’s a huge win, within our grasp.”

The research for this study was funded by MITEI’s Future Energy Systems Center.



from MIT News https://ift.tt/eJRx7VM

MIT forever!

Excited cheers and applause filled the 26-100 lecture hall on Nov. 20, as 77 Massachusetts Avenue — the main entrance to MIT — appeared on the big screen during a showing of Marvel Studios’ “Black Panther: Wakanda Forever.” The several hundred MIT students in the audience had been waiting with eager anticipation for the first glimpse of their campus, which was used as a filming location in summer 2021. 

“Knowing that they shot scenes on campus was really exciting,” said graduate student Jatin Patil as he waited in line for the special screening. “I’m excited to see a bit of campus in the movie.”

Several students at the screening were at MIT when scenes were shot last summer, and they noted feeling a special connection to the movie. 

“This has been a long time coming for me,” explained sophomore Ananda Santos Figueiredo. “They started filming on campus the exact day I arrived at MIT last year. It was all new and exciting, and it was amazing to be on campus at that time and start getting to know the MIT community through the lens of this film.”

The movie aired in 26-100, a student-run movie theater operated by the MIT Lecture Series Committee (LSC), during a special event hosted by the Institute Office of Communications and the Division of Student Life. The LSC brings film screenings to the MIT community and surrounding neighborhoods throughout the year, showcasing everything from current blockbuster films to foreign cinema and classic flicks. Erika Yang, a senior and chairperson of the LSC, explained that the group has “all of the equipment that you would find at a chain cinema.”

“I love film and I love being able to bring all sorts of different films to our school and the surrounding community,” Yang said.

Figueiredo, who is also a member of the LSC, added that being able to “come and watch this movie and help set this screening up has been really special. It feels so nice to be able to share this movie with the MIT community that embraced me the moment that I arrived on campus.” 

The LSC operated concessions, offering complimentary popcorn and water for all those who attended, and worked together on the technical production with Boston Light and Sound, which supported the projection on behalf of Disney.

“A lot of excitement”

While a number of students noted that they had already seen the film in theaters the previous week, many saw it for the first time at MIT. Students and staff members expressed a sense of pride at seeing the Institute portrayed in a film that celebrates racial and gender diversity in science. 

“We are very excited to see MIT on the big screen,” said Stuart Schmill ’86, dean of MIT Admissions and Student Financial Services. “Over the years we have had such a connection with the Marvel cinematic universe. MIT as a place of technological preeminence makes sense and Wakanda as a place of technological preeminence — I think the two go together very well. I think there is a lot of excitement at MIT about this movie and a good amount of pride, too.”

Schmill also emphasized the significance of having one of the film’s Black female stars, Riri Williams (also known as Ironheart), portrayed as a student at MIT. “I think we have always tried to show the fullness of MIT and who the people of MIT are, and I think we will see that on the screen tonight,” Schmill said. 

In the comics, Riri Williams made her first appearance in 2016 in “Invincible Iron Man (Vol. 3) #7,” written by Brian Michael Bendis. “Ironheart Vol. 1: Those With Courage,” written by Eve Ewing and published in 2019, features the hero as a student at MIT residing in Simmons Hall, and the cover illustration shows Ironheart flying above the Institute’s iconic dome.

“I think part of my excitement for this movie was because Tony Stark, and all the Marvel superheroes, are very inspiring, very hands on, very smart people who embody the ‘mens-et-manus’ MIT culture,” said senior Jenny Zhang, referring to MIT’s motto of “mind and hand.” “That’s very inspiring, but at the same time, representation matters, and just being female engineers, female scientists, having Ironheart also there is important.”

For Zhang and fellow senior Claire McLellan-Cassivi — who became friends through their “shared passions for puzzles, engineering, and the MCU,” as Zhang put it — the screening was an opportunity not only to see the latest Marvel film but also to relive their day as professional actors. McLellan-Cassivi and Zhang served as extras on the set of “Wakanda Forever” during a day of filming at MIT they describe as “unforgettable.” Both enjoyed seeing how the iconic spaces on MIT’s campus and the Institute’s maker culture came to life through the movie.  

“Being on set and feeling that energy,” said Zhang, “it was pretty amazing. I still can’t really believe it.”

“Those with courage”

Ironheart was also the subject of a TEDxMIT talk this summer on the importance of inclusivity in STEM by alumna Selam Gano ’18, who referenced the superhero in her discussion of the work required to effect social progress.

Gano, who spent two years as an undergraduate completing work to bring a clean water well to schoolchildren in her father’s hometown in rural Ethiopia, said she often asked herself as a student: Who is innovation really for?

“If you’re a scientist or engineer, don’t you want your ideas and inventions to be available to everyone rather than a select few? And don’t you want a future designed for you and me and everyone, rather than one that only benefits those with the most resources?”

Moving back to the hero of Ironheart, Gano described how she takes heart in the future. “We create the hero, and we have the impact of a super being,” she said. “Not because we were born that way or because we were bitten by a radioactive spider. But because we wake up each day and we have the courage to choose to believe that we can.”



from MIT News https://ift.tt/ly3K0BA

New device can control light at unprecedented speeds

In a scene from “Star Wars: Episode IV — A New Hope,” R2D2 projects a three-dimensional hologram of Princess Leia making a desperate plea for help. That scene, filmed more than 45 years ago, involved a bit of movie magic — even today, we don’t have the technology to create such realistic and dynamic holograms.

Generating a freestanding 3D hologram would require extremely precise and fast control of light beyond the capabilities of existing technologies, which are based on liquid crystals or micromirrors.

An international group of researchers, led by a team at MIT, spent more than four years tackling this problem of high-speed optical beam forming. They have now demonstrated a programmable, wireless device that can control light, such as by focusing a beam in a specific direction or manipulating the light’s intensity, and do it orders of magnitude more quickly than commercial devices.

They also pioneered a fabrication process that ensures the device quality remains near-perfect when it is manufactured at scale. This would make their device more feasible to implement in real-world settings.

Known as a spatial light modulator, the device could be used to create super-fast lidar (light detection and ranging) sensors for self-driving cars, which could image a scene about a million times faster than existing mechanical systems. It could also accelerate brain scanners, which use light to “see” through tissue. By being able to image tissue faster, the scanners could generate higher-resolution images that aren’t affected by noise from dynamic fluctuations in living tissue, like flowing blood.

“We are focusing on controlling light, which has been a recurring research theme since antiquity. Our development is another major step toward the ultimate goal of complete optical control — in both space and time — for the myriad applications that use light,” says lead author Christopher Panuski PhD ’22, who recently graduated with his PhD in electrical engineering and computer science.

The paper is a collaboration between researchers at MIT; Flexcompute, Inc.; the University of Strathclyde; the State University of New York Polytechnic Institute; Applied Nanotools, Inc.; the Rochester Institute of Technology; and the U.S. Air Force Research Laboratory. The senior author is Dirk Englund, an associate professor of electrical engineering and computer science at MIT and a researcher in the Research Laboratory of Electronics (RLE) and Microsystems Technology Laboratories (MTL). The research is published today in Nature Photonics.

Manipulating light

A spatial light modulator (SLM) is a device that manipulates light by controlling its emission properties. Similar to an overhead projector or computer screen, an SLM transforms a passing beam of light, focusing it in one direction or refracting it to many locations for image formation.

Inside the SLM, a two-dimensional array of optical modulators controls the light. But light wavelengths are only a few hundred nanometers, so to precisely control light at high speeds the device needs an extremely dense array of nanoscale controllers. The researchers used an array of photonic crystal microcavities to achieve this goal. These photonic crystal resonators allow light to be controllably stored, manipulated, and emitted at the wavelength-scale.

When light enters a cavity, it is held for about a nanosecond, bouncing around more than 100,000 times before leaking out into space. While a nanosecond is only one billionth of a second, this is enough time for the device to precisely manipulate the light. By varying the reflectivity of a cavity, the researchers can control how light escapes. Simultaneously controlling the array modulates an entire light field, so the researchers can quickly and precisely steer a beam of light.
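
Those figures imply a very small, high-quality cavity. A back-of-envelope check, assuming a telecom wavelength near 1,550 nanometers (the article does not state the wavelength):

```python
import math

tau = 1e-9            # photon lifetime: about one nanosecond
bounces = 100_000     # round trips before the light leaks out
c = 3e8               # speed of light in vacuum, m/s
wavelength = 1550e-9  # assumed telecom wavelength, m

round_trip_time = tau / bounces                       # ~10 femtoseconds
path_length = c * round_trip_time                     # vacuum value; slower in silicon
quality_factor = 2 * math.pi * (c / wavelength) * tau # Q = 2*pi*f*tau

print(f"round-trip path ~{path_length * 1e6:.1f} um")
print(f"quality factor ~{quality_factor:.2e}")
```

A round-trip path of a few micrometers is consistent with the “wavelength-scale” cavities described above, and a quality factor above a million indicates an exceptionally low-loss resonator.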

“One novel aspect of our device is its engineered radiation pattern. We want the reflected light from each cavity to be a focused beam because that improves the beam-steering performance of the final device. Our process essentially makes an ideal optical antenna,” Panuski says.

To achieve this goal, the researchers developed a new algorithm to design photonic crystal devices that form light into a narrow beam as it escapes each cavity, he explains.

Using light to control light

The team used a micro-LED display to control the SLM. The LED pixels line up with the photonic crystals on the silicon chip, so turning on one LED tunes a single microcavity. When a laser hits that activated microcavity, the cavity responds differently to the laser based on the light from the LED.

“This application of high-speed LED-on-CMOS displays as micro-scale optical pump sources is a perfect example of the benefits of integrated photonic technologies and open collaboration. We have been thrilled to work with the team at MIT on this ambitious project,” says Michael Strain, professor at the Institute of Photonics of the University of Strathclyde.  

The use of LEDs to control the device means the array is not only programmable and reconfigurable, but also completely wireless, Panuski says.

“It is an all-optical control process. Without metal wires, we can place devices closer together without worrying about absorption losses,” he adds.

Figuring out how to fabricate such a complex device in a scalable fashion was a years-long process. The researchers wanted to use the same techniques that create integrated circuits for computers, so the device could be mass produced. But microscopic deviations occur in any fabrication process, and with micron-sized cavities on the chip, those tiny deviations could lead to huge fluctuations in performance.

The researchers partnered with the Air Force Research Laboratory to develop a highly precise mass-manufacturing process that stamps billions of cavities onto a 12-inch silicon wafer. Then they incorporated a postprocessing step to ensure the microcavities all operate at the same wavelength.

“Getting a device architecture that would actually be manufacturable was one of the huge challenges at the outset. I think it only became possible because Chris worked closely for years with Mike Fanto and a wonderful team of engineers and scientists at AFRL, AIM Photonics, and with our other collaborators, and because Chris invented a new technique for machine vision-based holographic trimming,” says Englund.

For this “trimming” process, the researchers shine a laser onto the microcavities. The laser heats the silicon to more than 1,000 degrees Celsius, creating silicon dioxide, or glass. The researchers created a system that blasts all the cavities with the same laser at once, adding a layer of glass that perfectly aligns the resonances — that is, the natural frequencies at which the cavities vibrate.

“After modifying some properties of the fabrication process, we showed that we were able to make world-class devices in a foundry process that had very good uniformity. That is one of the big aspects of this work — figuring out how to make these manufacturable,” Panuski says.

The device demonstrated near-perfect control — in both space and time — of an optical field with a joint “spatiotemporal bandwidth” 10 times greater than that of existing SLMs. Being able to precisely control a huge bandwidth of light could enable devices that can carry massive amounts of information extremely quickly, such as high-performance communications systems.

Now that they have perfected the fabrication process, the researchers are working to make larger devices for quantum control or ultrafast sensing and imaging.

This research was funded, in part, by the Hertz Foundation, the NDSEG Fellowship Program, the Schmidt Postdoctoral Award, the Israeli Vatat Scholarship, the U.S. Army Research Office, the U.S. Air Force Research Laboratory, the UK’s Engineering and Physical Sciences Research Council, and the Royal Academy of Engineering.



from MIT News https://ift.tt/s9YQVPq

Channeling creativity through art and engineering

Emily Satterfield likes to create. Whether she’s crocheting a dress she saw on TikTok, baking a cake, dancing at Cambridge’s Havana Club, or tinkering on a project, she fills her days with activities that channel her seemingly endless creativity. 

“Being creative has always been a huge part of who I am. I get a new hobby every week. I just love anything that involves making things,” says Satterfield ’22, who recently graduated from MIT with a degree in mechanical engineering.

Raised in Lowell, Massachusetts, Satterfield was surrounded by creativity from a young age. Her mother is a teacher with a passion for art and oil painting. Her father is an electrical engineer with a knack for do-it-yourself automation projects. Growing up, she thought of art and engineering as two separate entities. You couldn’t be both an artist and an engineer.

“I always thought that engineering and art were opposites and you couldn’t really do both,” she says.

Upon enrolling at MIT, Satterfield set out to study electrical engineering. But she quickly found herself gravitating more toward mechanical engineering. For her, making robots move was exhilarating.

One of her first opportunities to build a robot came in the spring of her sophomore year, when she enrolled in 2.007 (Design and Manufacturing I). Students in the class design and build their own robots, and the semester culminates in a boisterous final competition.

But halfway through the semester, Satterfield and her fellow students were sent home due to the Covid-19 pandemic. Stuck at home, she craved a creative outlet and took up drawing. Her time in quarantine helped her realize that her twin passions of art and engineering didn’t have to be mutually exclusive.

“That’s when I started to realize that art was who I was,” she says. “Especially as a mechanical engineer, I realized how engineering and art aren’t opposites. They actually go hand-in-hand. When you’re designing or building something, you are literally creating something new.”

Her drive for creation led her to an undergraduate research opportunity, known as a SuperUROP, with Professor David Hardt. The project examined the use of additive manufacturing to build low-cost homes for individuals experiencing homelessness. The goal is to utilize techniques like 3D printing to build lightweight homes made out of recycled plastics. She continued work on the project for her senior thesis.

In the senior capstone class 2.009 (Product Engineering Processes), Satterfield had an opportunity to merge her love of art and engineering further. Rather fittingly, her team built a prototype for a device named “Palette.” The portable product enables painters to tint paint to the exact shade they need onsite, eliminating time-consuming trips to the paint store. The team worked with Benjamin Moore to develop their product.

Working with her fellow mechanical engineering students on a large, intensive project like Palette gave Satterfield a preview of what working on an engineering team in industry would be like.

“Most Course 2 students like building things and talking about the things they make, which lends well to teamwork and teaching each other different things. Creative engineers are really good teammates, and I think that’s very true about most Course 2 students,” she says.

After graduating in May, Satterfield joined the creative engineers at SpaceX. Over the summer, she participated in the company’s associate program.

Satterfield now works as a structures engineer for the SpaceX Dragon spacecraft team. She and her coworkers develop technologies for the spacecraft. In early October, as the crewed Dragon took off from Cape Canaveral, technologies that Satterfield worked on were on the spacecraft.

“It was really cool to see something that I helped work on have an impact. Knowing that there are people inside the spacecraft really put things into perspective,” she adds.

Despite her busy schedule, she still has managed to find new hobbies — the latest of which involves refinishing furniture for her new apartment in California. Whatever the future holds, Satterfield will continue to pursue outlets for her creativity.

“I’m excited to see long term how I can take my weird, kind of discombobulated interests and combine them into my own thing,” she says.



from MIT News https://ift.tt/wsHCyEK

Thursday, November 24, 2022

New CRISPR-based tool inserts large DNA sequences at desired sites in cells

Building on the CRISPR gene-editing system, MIT researchers have designed a new tool that can snip out faulty genes and replace them with new ones, in a safer and more efficient way.

Using this system, the researchers showed that they could deliver genes as long as 36,000 DNA base pairs to several types of human cells, as well as to liver cells in mice. The new technique, known as PASTE, could hold promise for treating diseases that are caused by defective genes with a large number of mutations, such as cystic fibrosis.

“It’s a new genetic way of potentially targeting these really hard-to-treat diseases,” says Omar Abudayyeh, a McGovern Fellow at MIT’s McGovern Institute for Brain Research. “We wanted to work toward what gene therapy was supposed to do at its original inception, which is to replace genes, not just correct individual mutations.”

The new tool combines the precise targeting of CRISPR-Cas9, a set of molecules originally derived from bacterial defense systems, with enzymes called integrases, which viruses use to insert their own genetic material into a bacterial genome.

“Just like CRISPR, these integrases come from the ongoing battle between bacteria and the viruses that infect them,” says Jonathan Gootenberg, also a McGovern Fellow. “It speaks to how we can keep finding an abundance of interesting and useful new tools from these natural systems.”

Gootenberg and Abudayyeh are the senior authors of the new study, which appears today in Nature Biotechnology. The lead authors of the study are MIT technical associates Matthew Yarnall and Rohan Krajeski, former MIT graduate student Eleonora Ioannidi, and MIT graduate student Cian Schmitt-Ulms.

DNA insertion

The CRISPR-Cas9 gene editing system consists of a DNA-cutting enzyme called Cas9 and a short RNA strand that guides the enzyme to a specific area of the genome, directing Cas9 where to make its cut. When Cas9 and the guide RNA targeting a disease gene are delivered into cells, a specific cut is made in the genome, and the cells’ DNA repair processes glue the cut back together, often deleting a small portion of the genome.

If a DNA template is also delivered, the cells can incorporate a corrected copy into their genomes during the repair process. However, this process requires cells to make double-stranded breaks in their DNA, which can cause chromosomal deletions or rearrangements that are harmful to cells. Another limitation is that it only works in cells that are dividing, as nondividing cells don’t have active DNA repair processes.

The MIT team wanted to develop a tool that could cut out a defective gene and replace it with a new one without inducing any double-stranded DNA breaks. To achieve this goal, they turned to a family of enzymes called integrases, which viruses called bacteriophages use to insert themselves into bacterial genomes.

For this study, the researchers focused on serine integrases, which can insert huge chunks of DNA, as large as 50,000 base pairs. These enzymes target specific genome sequences known as attachment sites, which function as “landing pads.” When they find the correct landing pad in the host genome, they bind to it and integrate their DNA payload.

In past work, scientists have found it challenging to develop these enzymes for human therapy because the landing pads are very specific, and it’s difficult to reprogram integrases to target other sites. The MIT team realized that combining these enzymes with a CRISPR-Cas9 system that inserts the correct landing site would enable easy reprogramming of the powerful insertion system.

The new tool, PASTE (Programmable Addition via Site-specific Targeting Elements), includes a Cas9 enzyme that cuts at a specific genomic site, guided by a strand of RNA that binds to that site. This allows them to target any site in the genome for insertion of the landing site, which contains 46 DNA base pairs. This insertion can be done without introducing any double-stranded breaks by adding one DNA strand first via a fused reverse transcriptase, then its complementary strand.

Once the landing site is incorporated, the integrase can come along and insert its much larger DNA payload into the genome at that site. 
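Purely as an analogy, the two-step logic described above can be sketched as string edits on a toy DNA sequence. The target, landing-pad, and payload strings below are invented for illustration and are not the real biological sequences:

```python
def paste_edit(genome, target, landing_pad, payload):
    """Two PASTE-like steps as string edits (toy analogy, invented sequences).
    Step 1: write a landing pad right after the Cas9 target site.
    Step 2: the integrase swaps its large payload in at the landing pad
    (a simplification -- real integration happens within the att site)."""
    assert target in genome, "Cas9 guide must match a site in the genome"
    genome = genome.replace(target, target + landing_pad, 1)  # step 1
    return genome.replace(landing_pad, payload, 1)            # step 2

genome = "ATGGCGTACGATCGAT"                     # toy 16-bp "genome"
edited = paste_edit(genome, target="TACG",
                    landing_pad="[attB]", payload="[NEW-GENE]")
assert edited == "ATGGCGTACG[NEW-GENE]ATCGAT"  # payload sits at the target site
```

The point of the sketch is the ordering: the small landing pad is written without double-stranded breaks first, and only then does the integrase deliver the large cargo.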

“We think that this is a large step toward achieving the dream of programmable insertion of DNA,” Gootenberg says. “It’s a technique that can be easily tailored both to the site that we want to integrate as well as the cargo.”

Gene replacement

In this study, the researchers showed that they could use PASTE to insert genes into several types of human cells, including liver cells, T cells, and lymphoblasts (immature white blood cells). They tested the delivery system with 13 different payload genes, including some that could be therapeutically useful, and were able to insert them into nine different locations in the genome.

In these cells, the researchers were able to insert genes with a success rate ranging from 5 to 60 percent. This approach also yielded very few unwanted “indels” (insertions or deletions) at the sites of gene integration.

“We see very few indels, and because we’re not making double-stranded breaks, you don’t have to worry about chromosomal rearrangements or large-scale chromosome arm deletions,” Abudayyeh says.

The researchers also demonstrated that they could insert genes in “humanized” livers in mice. Livers in these mice consist of about 70 percent human hepatocytes, and PASTE successfully integrated new genes into about 2.5 percent of these cells.

The DNA sequences that the researchers inserted in this study were up to 36,000 base pairs long, but they believe even longer sequences could also be used. A human gene can range from a few hundred to more than 2 million base pairs, although for therapeutic purposes only the coding sequence of the protein needs to be used, drastically reducing the size of the DNA segment that needs to be inserted into the genome.

“The ability to site-specifically make large genomic integrations is of huge value to both basic science and biotechnology studies. This toolset will, I anticipate, be very enabling for the research community,” says Prashant Mali, a professor of bioengineering at the University of California at San Diego, who was not involved in the study.

The researchers are now exploring the possibility of using this tool to replace the defective cystic fibrosis gene. The technique could also be useful for treating blood diseases caused by faulty genes, such as hemophilia and G6PD deficiency, as well as Huntington’s disease, a neurological disorder caused by a defective gene containing too many repeated DNA sequences.

The researchers have also made their genetic constructs available online for other scientists to use.

“One of the fantastic things about engineering these molecular technologies is that people can build on them, develop and apply them in ways that maybe we didn’t think of or hadn’t considered,” Gootenberg says. “It’s really great to be part of that emerging community.”

The research was funded by a Swiss National Science Foundation Postdoc Mobility Fellowship, the U.S. National Institutes of Health, the McGovern Institute Neurotechnology Program, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, the G. Harold and Leila Y. Mathers Charitable Foundation, the MIT John W. Jarve Seed Fund for Science Innovation, Impetus Grants, a Cystic Fibrosis Foundation Pioneer Grant, Google Ventures, Fast Grants, the Harvey Family Foundation, and the McGovern Institute.



from MIT News https://ift.tt/MLe659y

Wednesday, November 23, 2022

Teresa Gao named 2024 Mitchell Scholar

MIT senior Teresa Gao has been named one of the 12 winners of the George J. Mitchell Scholarship’s Class of 2024. After graduating next spring with a double major in computer science and engineering as well as brain and cognitive sciences, she will study augmented and virtual reality at Trinity College Dublin. Gao is the fifth MIT student to be named a Mitchell Scholar.

Mitchell Scholars are selected on the basis of academic achievement, leadership, and dedication to public service. The scholarship is named in honor of U.S. Senator George Mitchell’s contributions to the Northern Ireland peace process. This year, over 300 American students were endorsed to apply for the prestigious fellowship, which is sponsored by the U.S.-Ireland Alliance and funds a year of graduate studies in Ireland.

“Teresa’s excellent work at the intersections of engineering, music, and science communication makes the Mitchell Scholarship in Ireland a perfect fit for her next step,” says Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development. “We are proud that she will be representing MIT there, as she exemplifies the mind and hand ethos of our education.”

Gao, a resident of Provo, Utah, is interested in artificial intelligence and the development of autonomous agents. She has conducted research in a range of fields, including psycholinguistics in the Department of Brain and Cognitive Sciences, social robots for mental health in the Media Lab, and machine learning architectures for biological images at the Broad Institute. Currently, she is working to establish cognitive benchmarks for AI with the MIT Quest for Intelligence.

Gao’s love for science is only equaled by her passion for creativity and the arts. She hosts an educational radio show, “Psycholochat: Where Neuroscience Meets Philosophy,” on the MIT campus radio station WMBR 88.1 FM, where she investigates topics in psychology, neuroscience, and philosophy.

Completely self-taught on the viola, Gao earned a highly competitive seat in the MIT Chamber Music Society. She also serves as co-president of Ribotones, a student group that plays music in service to hospital patients and nursing home residents throughout the Greater Boston community, and she performs with the competitive MIT Bhangra dance team.

Outside of the arts, Gao tutors fellow MIT students through the IEEE-Eta Kappa Nu Honor Society, manages logistics for the annual Battlecode programming competition run by MIT’s computer science department, and volunteers with Lean On Me, an anonymous peer-support campus textline.



from MIT News https://ift.tt/4up7vNf

Tuesday, November 22, 2022

A simpler path to better computer vision

Before a machine-learning model can complete a task, such as identifying cancer in medical images, the model must be trained. Training image classification models typically involves showing the model millions of example images gathered into a massive dataset.

However, using real image data can raise practical and ethical concerns: The images could run afoul of copyright laws, violate people’s privacy, or be biased against a certain racial or ethnic group. To avoid these pitfalls, researchers can use image generation programs to create synthetic data for model training. But these techniques are limited because expert knowledge is often needed to hand-design an image generation program that can create effective training data. 

Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere took a different approach. Instead of designing customized image generation programs for a particular training task, they gathered a dataset of 21,000 publicly available programs from the internet. Then they used this large collection of basic image generation programs to train a computer vision model.

These programs produce diverse images that display simple colors and textures. The researchers didn’t curate or alter the programs, which each comprised just a few lines of code.

The models they trained with this large dataset of programs classified images more accurately than other synthetically trained models. And, while their models underperformed those trained with real data, the researchers showed that increasing the number of image programs in the dataset also increased model performance, revealing a path to attaining higher accuracy.

“It turns out that using lots of programs that are uncurated is actually better than using a small set of programs that people need to manipulate. Data are important, but we have shown that you can go pretty far without real data,” says Manel Baradad, an electrical engineering and computer science (EECS) graduate student working in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper describing this technique.

Co-authors include Tongzhou Wang, an EECS grad student in CSAIL; Rogerio Feris, principal scientist and manager at the MIT-IBM Watson AI Lab; Antonio Torralba, the Delta Electronics Professor of Electrical Engineering and Computer Science and a member of CSAIL; and senior author Phillip Isola, an associate professor in EECS and CSAIL; along with others at JPMorgan Chase Bank and Xyla, Inc. The research will be presented at the Conference on Neural Information Processing Systems. 

Rethinking pretraining

Machine-learning models are typically pretrained, which means they are trained on one dataset first to help them build parameters that can be used to tackle a different task. A model for classifying X-rays might be pretrained using a huge dataset of synthetically generated images before it is trained for its actual task using a much smaller dataset of real X-rays.

These researchers previously showed that they could use a handful of image generation programs to create synthetic data for model pretraining, but the programs needed to be carefully designed so the synthetic images matched up with certain properties of real images. This made the technique difficult to scale up.

In the new work, they used an enormous dataset of uncurated image generation programs instead.

They began by gathering a collection of 21,000 image generation programs from the internet. All the programs are written in a simple programming language and comprise just a few snippets of code, so they generate images rapidly.

“These programs have been designed by developers all over the world to produce images that have some of the properties we are interested in. They produce images that look kind of like abstract art,” Baradad explains.

These simple programs can run so quickly that the researchers didn’t need to produce images in advance to train the model. The researchers found they could generate images and train the model simultaneously, which streamlines the process.
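As a rough sketch of that idea (not the team’s actual code or any of the real image programs), a tiny procedural “program” can render simple textures on demand, so training batches exist only at the moment the loop consumes them:

```python
import math
import random

def random_texture_program(rng, size=8):
    """One tiny stand-in 'program': a few lines that render a sine texture.
    (Illustrative only -- the real programs were gathered from the internet.)"""
    fx, fy = rng.uniform(0.1, 2.0), rng.uniform(0.1, 2.0)  # random frequencies
    phase = rng.uniform(0.0, 2.0 * math.pi)
    return [[0.5 * (1.0 + math.sin(fx * x + fy * y + phase))  # values in [0, 1]
             for x in range(size)]
            for y in range(size)]

def stream_batches(n_batches, batch_size=4, seed=0):
    """Yield training batches rendered on the fly -- nothing stored in advance."""
    rng = random.Random(seed)
    for _ in range(n_batches):
        yield [random_texture_program(rng) for _ in range(batch_size)]

# A pretraining loop would consume batches as they are generated:
for batch in stream_batches(n_batches=3):
    assert len(batch) == 4 and len(batch[0]) == 8  # batch of four 8x8 images
```

Because each image is a few arithmetic operations, generation keeps pace with training, which is what removes the need for a precomputed dataset.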

They used their massive dataset of image generation programs to pretrain computer vision models for both supervised and unsupervised image classification tasks. In supervised learning, the image data are labeled, while in unsupervised learning the model learns to categorize images without labels.

Improving accuracy

When they compared their pretrained models to state-of-the-art computer vision models that had been pretrained using synthetic data, their models were more accurate, meaning they put images into the correct categories more often. While the accuracy levels were still less than models trained on real data, their technique narrowed the performance gap between models trained on real data and those trained on synthetic data by 38 percent.

“Importantly, we show that for the number of programs you collect, performance scales logarithmically. We do not saturate performance, so if we collect more programs, the model would perform even better. So, there is a way to extend our approach,” Baradad says.

The researchers also used each individual image generation program for pretraining, in an effort to uncover factors that contribute to model accuracy. They found that when a program generates a more diverse set of images, the model performs better. They also found that colorful images with scenes that fill the entire canvas tend to improve model performance the most.

Now that they have demonstrated the success of this pretraining approach, the researchers want to extend their technique to other types of data, such as multimodal data that include text and images. They also want to continue exploring ways to improve image classification performance.

“There is still a gap to close with models trained on real data. This gives our research a direction that we hope others will follow,” he says.



from MIT News https://ift.tt/feghJa6

A far-sighted approach to machine learning

Picture two teams squaring off on a football field. The players can cooperate to achieve an objective, and compete against other players with conflicting interests. That’s how the game works.

Creating artificial intelligence agents that can learn to compete and cooperate as effectively as humans remains a thorny problem. A key challenge is enabling AI agents to anticipate future behaviors of other agents when they are all learning simultaneously.

Because of the complexity of this problem, current approaches tend to be myopic; the agents can only guess the next few moves of their teammates or competitors, which leads to poor performance in the long run. 

Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere have developed a new approach that gives AI agents a farsighted perspective. Their machine-learning framework enables cooperative or competitive AI agents to consider what other agents will do as time approaches infinity, not just over the next few steps. The agents then adapt their behaviors accordingly to influence other agents’ future behaviors and arrive at an optimal, long-term solution.

This framework could be used by a group of autonomous drones working together to find a lost hiker in a thick forest, or by self-driving cars that strive to keep passengers safe by anticipating future moves of other vehicles driving on a busy highway.

“When AI agents are cooperating or competing, what matters most is when their behaviors converge at some point in the future. There are a lot of transient behaviors along the way that don’t matter very much in the long run. Reaching this converged behavior is what we really care about, and we now have a mathematical way to enable that,” says Dong-Ki Kim, a graduate student in the MIT Laboratory for Information and Decision Systems (LIDS) and lead author of a paper describing this framework.

The senior author is Jonathan P. How, the Richard C. Maclaurin Professor of Aeronautics and Astronautics and a member of the MIT-IBM Watson AI Lab. Co-authors include others at the MIT-IBM Watson AI Lab, IBM Research, Mila-Quebec Artificial Intelligence Institute, and Oxford University. The research will be presented at the Conference on Neural Information Processing Systems.

More agents, more problems

The researchers focused on a problem known as multiagent reinforcement learning. Reinforcement learning is a form of machine learning in which an AI agent learns by trial and error. Researchers give the agent a reward for “good” behaviors that help it achieve a goal. The agent adapts its behavior to maximize that reward until it eventually becomes an expert at a task.
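The trial-and-error loop described above can be sketched with a minimal tabular Q-learning example on a toy corridor task. This is a generic illustration of single-agent reinforcement learning, not the multiagent method in the paper:

```python
import random

def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Generic tabular Q-learning on a toy corridor (illustration only).
    The agent moves left/right; only reaching the rightmost state is rewarded."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]          # q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit the best-known action, sometimes explore
            a = rng.choice((0, 1)) if rng.random() < eps \
                else max((0, 1), key=lambda a: q[s][a])
            s2 = max(0, s - 1) if a == 0 else s + 1    # 0 = left, 1 = right
            r = 1.0 if s2 == n_states - 1 else 0.0     # reward only at the goal
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])  # TD update
            s = s2
    return q

q = q_learning()
assert all(q[s][1] > q[s][0] for s in range(4))  # learned to prefer "right"
```

The reward-driven update is the whole mechanism: behaviors that move the agent toward the goal accumulate higher values, so the agent gradually becomes an expert at the task.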

But when many cooperative or competing agents are simultaneously learning, things become increasingly complex. As agents consider more future steps of their fellow agents, and how their own behavior influences others, the problem soon requires far too much computational power to solve efficiently. This is why other approaches only focus on the short term.

“The AIs really want to think about the end of the game, but they don’t know when the game will end. They need to think about how to keep adapting their behavior into infinity so they can win at some far time in the future. Our paper essentially proposes a new objective that enables an AI to think about infinity,” says Kim.

But since it is impossible to plug infinity into an algorithm, the researchers designed their system so agents focus on a future point where their behavior will converge with that of other agents, known as equilibrium. An equilibrium point determines the long-term performance of agents, and multiple equilibria can exist in a multiagent scenario. Therefore, an effective agent actively influences the future behaviors of other agents in such a way that they reach a desirable equilibrium from the agent’s perspective. If all agents influence each other, they converge to a general concept that the researchers call an “active equilibrium.”

The machine-learning framework they developed, known as FURTHER (which stands for FUlly Reinforcing acTive influence witH averagE Reward), enables agents to learn how to adapt their behaviors as they interact with other agents to achieve this active equilibrium.

FURTHER does this using two machine-learning modules. The first, an inference module, enables an agent to guess the future behaviors of other agents and the learning algorithms they use, based solely on their prior actions.

This information is fed into the reinforcement learning module, which the agent uses to adapt its behavior and influence other agents in a way that maximizes its reward.
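The division of labor between the two modules can be illustrated with a deliberately simplified toy, in which an “inference” step predicts the other player’s move from observed frequencies and a “learning” step best-responds to that prediction. This is only an analogy, not the FURTHER algorithm:

```python
import random

class TwoModuleAgent:
    """Toy sketch of the two-module idea (not the actual FURTHER method):
    an inference module predicts the other agent's next move from its history,
    and a learning module picks a best response to that prediction."""

    def __init__(self):
        self.counts = {"H": 1, "T": 1}   # inference module: opponent move counts

    def infer_opponent(self):
        # Predict the opponent's most frequent move so far.
        return max(self.counts, key=self.counts.get)

    def act(self):
        # Learning module (here: a simple best response to the prediction).
        return self.infer_opponent()     # match the predicted move to win

    def observe(self, opponent_move):
        self.counts[opponent_move] += 1

# Matching pennies vs. an opponent that plays heads 80% of the time:
rng = random.Random(0)
agent = TwoModuleAgent()
wins = 0
for _ in range(1000):
    opp = "H" if rng.random() < 0.8 else "T"
    if agent.act() == opp:   # the matcher wins whenever the moves match
        wins += 1
    agent.observe(opp)
assert wins > 600  # best-responding to the prediction beats random play
```

The real framework replaces the frequency counter with a learned model of the other agents’ policies and learning rules, and replaces the fixed best response with reinforcement learning toward a long-run equilibrium.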

“The challenge was thinking about infinity. We had to use a lot of different mathematical tools to enable that, and make some assumptions to get it to work in practice,” Kim says.

Winning in the long run

They tested their approach against other multiagent reinforcement learning frameworks in several different scenarios, including a pair of robots fighting sumo-style and a battle pitting two 25-agent teams against one another. In both instances, the AI agents using FURTHER won the games more often.

Since their approach is decentralized, which means the agents learn to win the games independently, it is also more scalable than other methods that require a central computer to control the agents, Kim explains.

The researchers used games to test their approach, but FURTHER could be used to tackle any kind of multiagent problem. For instance, it could be applied by economists seeking to develop sound policy in situations where many interacting entities have behaviors and interests that change over time.

Economics is one application Kim is particularly excited about studying. He also wants to dig deeper into the concept of an active equilibrium and continue enhancing the FURTHER framework.

This research is funded, in part, by the MIT-IBM Watson AI Lab.



from MIT News https://ift.tt/bLpoi4a

International team observes innermost structure of quasar jet

At the heart of nearly every galaxy lurks a supermassive black hole. But not all supermassive black holes are alike. Quasars, or quasi-stellar objects, are among the brightest and most active varieties.

An international group of scientists has published new observations of the first quasar ever identified, known as 3C 273 and located in the Virgo constellation, that show the innermost, deepest parts of the quasar’s prominent plasma jet. 

Active supermassive black holes emit narrow, incredibly powerful jets of plasma that escape at nearly the speed of light. These jets have been studied over many decades, yet their formation process is still a mystery to astronomers and astrophysicists. An unresolved issue has been how and where the jets are collimated, or concentrated into a narrow beam, which allows them to extend to extreme distances beyond their host galaxy and even affect galactic evolution. These new observations are thus far the deepest into the heart of a black hole, where the plasma flow is collimated into a narrow beam.

This new study, published today in The Astrophysical Journal, includes observations of the 3C 273 jet at the highest angular resolution to date, obtaining data for the innermost portion of the jet, close to the central black hole. The ground-breaking work was made possible by using a closely coordinated set of radio antennas around the globe, a combination of the Global Millimeter VLBI Array (GMVA) and the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile. Coordinated observations were also made with the High Sensitivity Array to study 3C 273 on different scales, in order to also measure the global shape of the jet. The data in this study were collected in 2017, around the same time that the Event Horizon Telescope (EHT) observations revealed the first images of a black hole.

The image of the 3C 273 jet gives scientists the very first view of the innermost part of the jet in a quasar, where the collimation occurs. The team also found that the plasma stream flowing from the black hole narrows gradually over a very long distance. This narrowing part of the jet continues incredibly far, well beyond the area where the black hole’s gravity rules.

“It is striking to see that the shape of the powerful stream is slowly formed over a long distance in an extremely active quasar. This has also been discovered nearby in much fainter and less active supermassive black holes,” says Kazunori Akiyama, research scientist at MIT Haystack Observatory and project lead. “The results pose a new question: How does the jet collimation happen so consistently across such varied black hole systems?”

“3C 273 has been studied for decades as the ideal closest laboratory for quasar jets,” says Hiroki Okino, lead author of this paper and a PhD student at the University of Tokyo and National Astronomical Observatory of Japan. “However, even though the quasar is a close neighbor, until recently, we didn’t have an eye sharp enough to see where this narrow powerful flow of plasma is shaped.”

The new, incredibly sharp images of the 3C 273 jet were made possible by the inclusion of the ALMA array. The GMVA and ALMA were connected across continents using a technique called very long baseline interferometry (VLBI) to obtain highly detailed information about distant astronomical sources. The remarkable VLBI capability of ALMA was enabled by the ALMA Phasing Project (APP) team. The international APP team, led by MIT Haystack Observatory, developed the hardware and software to turn ALMA, an array of 66 telescopes, into the world’s most sensitive astronomical interferometry station. Collecting data at these wavelengths greatly increases the resolution and sensitivity of the array. This capability was fundamental to the EHT’s black hole imaging work as well. 

“The ability to use ALMA as part of global VLBI networks has been a complete game-changer for black hole science,” says Lynn Matthews, MIT Haystack Observatory principal research scientist and commissioning scientist for the APP. “It enabled us to obtain the first-ever images of supermassive black holes, and now it is helping us to see for the first time incredible new details about how black holes power their jets.”

This study opens the door to further exploration of jet collimation processes in other types of black holes. Data obtained at higher frequencies, such as 230 and 345 GHz with the EHT, will allow scientists to observe even finer details within quasars and other black holes. 

“This discovery sheds new light on jet collimation in the quasar jets,” says Keiichi Asada, associate research fellow at the Academia Sinica, Institute of Astronomy and Astrophysics (ASIAA) in Taiwan. “The sharper eyes of the EHT will enable access to similar regions in more distant quasar jets. We hope to be able to make progress on our new ‘homework’ from this study, which may allow us to finally answer the hundred-year-old problem of how jets are collimated.”

The GMVA observes at the 3mm wavelength, using the following stations for this research in April 2017: eight antennas of the Very Long Baseline Array (VLBA), the Effelsberg 100m Radio Telescope of the Max-Planck-Institut für Radioastronomie (MPIfR), the IRAM 30m Telescope, the 20m telescope of the Onsala Space Observatory, and the 40m Radio Telescope of Yebes Observatory. The data were correlated at the DiFX VLBI correlator at the MPIfR in Bonn, Germany.

ALMA is a partnership of European Southern Observatory (ESO, representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ.

APP partner organizations include MIT Haystack Observatory, USA; Max-Planck-Institut für Radioastronomie (MPIfR), Germany; University of Concepción, Chile; National Astronomical Observatory of Japan (NAOJ), Japan; National Radio Astronomy Observatory (NRAO), USA; Institute of Astronomy and Astrophysics, Academia Sinica (ASIAA), Taiwan; Onsala Space Observatory, Sweden; Harvard-Smithsonian Center for Astrophysics (CfA), USA; and the University of Valencia, Spain. Funding for the APP was provided by the National Science Foundation Major Research Instrumentation Program, the ALMA North America Development Program, and international cost-sharing partners.

The VLBA is an instrument of the National Radio Astronomy Observatory, a facility of the U.S. National Science Foundation operated under cooperative agreement by Associated Universities, Inc.



from MIT News https://ift.tt/L1hQEus

Alzheimer’s risk gene undermines insulation of brain’s “wiring”

It’s well-known that carrying one copy of the APOE4 gene variant increases one’s risk for Alzheimer’s disease threefold and two copies about tenfold, but the fundamental reasons why, and what can be done to help patients, remain largely unknown. A study published by an MIT-based team Nov. 16 in Nature provides some new answers as part of a broader line of research that has demonstrated APOE4’s consequences, cell-type-by-cell-type, in the brain.

The new study combines evidence from postmortem human brains, lab-based human brain cell cultures, and Alzheimer’s model mice to show that when people have one or two copies of APOE4, rather than the more common and risk-neutral APOE3 version, cells called oligodendrocytes mismanage cholesterol, failing to transport the fat molecule to wrap the long vine-like axon “wiring” that neurons project to make brain circuit connections. Deficiency of this fatty insulation, called myelin, may be a significant contributor to the pathology and symptoms of Alzheimer’s disease because without proper myelination, communications among neurons are degraded.

Recent studies by the research group, led by MIT Professor Li-Huei Tsai, director of The Picower Institute for Learning and Memory and the Aging Brain Initiative at MIT, have found distinct ways that APOE4 disrupts how fat molecules, or lipids, are handled by key brain cell types including neurons, astrocytes, and microglia. In both the new and earlier studies, the team has identified compounds that appear in the lab to correct these different problems, yielding potential pharmaceutical-based treatment strategies.

The new study extends that work not only by discovering how APOE4 disrupts myelination, but also by providing the first systematic analysis across major brain cell types using single nucleus RNA sequencing (snRNAseq) to compare how gene expression differs in people with APOE4 compared to APOE3.

“This paper shows very clearly from the snRNAseq of postmortem human brains in a genotype-specific manner that APOE4 influences different brain cell types very distinctly,” says Tsai, a member of MIT’s Department of Brain and Cognitive Sciences faculty. “We see convergence of lipid metabolism being disrupted, but when you really look into further detail at the kind of lipid pathways being disturbed in different brain cell types, they are all different.

“I feel that lipid dysregulation could be this very fundamental biology underlying a lot of the pathology we observe,” she says.

The paper’s lead authors are Joel Blanchard, an assistant professor at Mount Sinai’s Icahn School of Medicine who began the work as a postdoc in Tsai’s MIT lab; Djuna von Maydell and Leyla Akay, who are graduate students in Tsai’s lab; and Jose Davila Velderrain, a research group leader at Human Technopole and former postdoc in the lab of co-corresponding author Manolis Kellis, a professor of computer science at MIT.

Many methods to examine myelination

Postmortem human brain samples came from the Religious Orders Study and the Rush Memory and Aging Project. The team’s snRNAseq results, a dataset that von Maydell has made freely available, encompass more than 160,000 individual cells of 11 different types from the prefrontal cortex of 32 people: 12 with two copies of APOE3, 12 with one copy each of APOE3 and APOE4, and eight with two copies of APOE4. The APOE3/3 and APOE3/4 samples were balanced by Alzheimer’s diagnosis, gender, and age. All APOE4/4 carriers had Alzheimer’s, and five of the eight were female.

Some results reflected known Alzheimer’s pathology, but other patterns were novel. One in particular showed that APOE4-carrying oligodendrocytes exhibited greater expression of cholesterol synthesis genes and disruptions to cholesterol transport. The more APOE4 copies people had, the greater the effect. This was especially interesting given results from a prior analysis by Tsai’s and Kellis’s labs in 2019 that linked Alzheimer’s disease to reduced expression of myelination genes among oligodendrocytes.

Using a variety of techniques to look directly at the tissue, the team saw that in APOE4 brains aberrant amounts of cholesterol accumulated within cell bodies, especially those of oligodendrocytes, while cholesterol was relatively lacking around neural axons.

To understand why, the team used patient-derived induced pluripotent stem cells to create lab cultures of oligodendrocytes engineered to differ only in whether they carried APOE4 or APOE3. Again, the APOE4 cells showed major lipid disruptions: the afflicted oligodendrocytes hoarded extra cholesterol within their cell bodies, showed signs that the excess internal fat was stressing the endoplasmic reticulum, an organelle with a role in cholesterol transport, and indeed transported less cholesterol out to their membranes. When later co-cultured with neurons, the APOE4 oligodendrocytes failed to myelinate the neurons as well as APOE3 cells did, regardless of whether the neurons carried APOE4 or APOE3.

The team also observed that in postmortem brains there was less myelination in APOE4 carriers than in APOE3 carriers. For instance, the sheaths around axons running through the corpus callosum (the structure that connects the brain’s hemispheres) were notably thinner in APOE4 brains. The same was true in mice engineered to harbor human APOE4 versus those engineered to have APOE3.

A productive intervention

Eager to find a potential intervention, the team focused on drugs that affect cholesterol, including statins (which suppress its synthesis) and cyclodextrin, which aids cholesterol transport. The statins didn’t help, but applying cyclodextrin to APOE4 oligodendrocytes cultured in a dish reduced the accumulation of cholesterol within the cells and improved myelination in co-cultures with neurons. It had the same effects in APOE4 mice.

Finally, the team treated some APOE4 mice with cyclodextrin, left others untreated, and subjected them all to two different memory tests. The cyclodextrin-treated mice performed significantly better on both tests, suggesting an association between improved myelination and improved cognition.

Tsai says a clear picture is emerging in which intervening to correct specific lipid dysregulations, cell type by cell type, could potentially help counteract APOE4’s contributions to Alzheimer’s pathology.

“It’s encouraging that we’ve seen a way to rescue oligodendrocyte function and myelination in lab and mouse models,” Tsai says. “But in addition to oligodendrocytes, we may also need to find clinically effective ways to take care of microglia, astrocytes, and vasculature to really combat the disease.”

In addition to the lead authors, Tsai, and Kellis, the paper’s other authors are Hansruedi Mathys, Shawn Davidson, Audrey Effenberger, Chih-Yu Chen, Kristan Maner-Smith, Ihab Jahhar, Eric Orlund, Michael Bula, Emre Agbas, Ayesha Ng, Xueqiao Jiang, Martin Kahn, Cristina Blanco-Duque, Nicolas Lavoie, Liwang Liu, Ricardo Reyes, Yuan-Ta Lin, Tak Ko, Lea R’Bibo, William Ralvenius, David Bennett, and Hugh Cam.

The Robert A. and Renee E. Belfer Foundation, the JPB Foundation, the Carol and Gene Ludwig Family Foundation, the Cure Alzheimer’s Fund, and the National Institutes of Health funded the study.



from MIT News https://ift.tt/nZE7OaY