Thursday, December 30, 2021

Speeding up directed evolution of molecules in the lab

Natural evolution is a slow process that relies on the gradual accumulation of genetic mutations. In recent years, scientists have found ways to speed up the process on a small scale, allowing them to rapidly create new proteins and other molecules in their lab.

This widely-used technique, known as directed evolution, has yielded new antibodies to treat cancer and other diseases, enzymes used in biofuel production, and imaging agents for magnetic resonance imaging (MRI).

Researchers at MIT have now developed a robotic platform that can perform 100 times as many directed-evolution experiments in parallel, giving many more populations the chance to come up with a solution, while monitoring their progress in real time. In addition to helping researchers develop new molecules more rapidly, the technique could also be used to simulate natural evolution and answer fundamental questions about how it works.

“Traditionally, directed evolution has been much more of an art than a science, let alone an engineering discipline. And that remains true until you can systematically explore different permutations and observe the results,” says Kevin Esvelt, an assistant professor in MIT’s Media Lab and the senior author of the new study.

MIT graduate student Erika DeBenedictis and postdoc Emma Chory are the lead authors of the paper, which appears today in Nature Methods.

Rapid evolution

Directed evolution works by speeding up the accumulation and selection of novel mutations. For example, if scientists wanted to create an antibody that binds to a cancerous protein, they would start with a test tube of hundreds of millions of yeast cells or other microbes that have been engineered to express mammalian antibodies on their surfaces. These cells would be exposed to the cancer protein that the researchers want the antibody to bind to, and researchers would pick out those that bind the best.

Scientists would then introduce random mutations into the antibody sequence and screen these new proteins again. The process can be repeated many times until the best candidate emerges.

About 10 years ago, as a graduate student at Harvard University, Esvelt developed a way to speed up directed evolution. This approach harnesses bacteriophages (viruses that infect bacteria) to help proteins evolve faster toward a desired function. The gene that the researchers hope to optimize is linked to a gene needed for bacteriophage survival, and the viruses compete against each other to optimize the protein. The selection process is run continuously, shortening each mutation round to the lifespan of the bacteriophage, which is about 20 minutes, and can be repeated many times, with no human intervention needed.

Using this method, known as phage-assisted continuous evolution (PACE), directed evolution can be performed 1 billion times faster than traditional directed evolution experiments. However, evolution often fails to come up with a solution, requiring the researchers to guess which new set of conditions will do better.
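To put that cadence in perspective, a quick back-of-the-envelope calculation (using the roughly 20-minute phage generation time cited above) shows how many rounds of selection PACE can complete in a single day:

$$\frac{24\ \text{hours} \times 60\ \text{min/hour}}{20\ \text{min/round}} = 72\ \text{rounds per day},$$

whereas a single round of conventional mutagenesis and screening typically takes days.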

The technique described in the new Nature Methods paper, which the researchers have named phage and robotics-assisted near-continuous evolution (PRANCE), can evolve 100 times as many populations in parallel, using different conditions.

In the new PRANCE system, bacteriophage populations (which can only infect a specific strain of bacteria) are grown in wells of a 96-well plate, instead of a single bioreactor. This allows for many more evolutionary trajectories to occur simultaneously. Each viral population is monitored by a robot as it goes through the evolution process. When the virus succeeds in generating the desired protein, it produces a fluorescent protein that the robot can detect.

“The robot can babysit this population of viruses by measuring this readout, which allows it to see whether the viruses are performing well, or whether they’re really struggling and something needs to be done to help them,” DeBenedictis says.

If the viruses are struggling to survive, meaning that the target protein is not evolving in the desired way, the robot can help save them from extinction by replacing the bacteria they’re infecting with a different strain that makes it easier for the viruses to replicate. This prevents the population from dying out, which is a cause of failure for many directed evolution experiments.

“We can tune these evolutions in real-time, in direct response to how well these evolutions are occurring,” Chory says. “We can tell when an experiment is succeeding and we can change the environment, which gives us many more shots on goal, which is great from both a bioengineering perspective and a basic science perspective.”
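The feedback loop Chory and DeBenedictis describe can be pictured as a simple monitor-and-intervene cycle. The sketch below is a minimal illustration of that idea, not the team's actual control software; the threshold, timing, and helper methods (read_fluorescence, replace_host_strain) are hypothetical names invented for this example.

```python
import time

FLUORESCENCE_THRESHOLD = 0.2   # hypothetical cutoff for a "struggling" population
MEASUREMENT_INTERVAL_S = 600   # assumed: check each well every 10 minutes

def monitor_prance_plate(plate, permissive_strain):
    """Babysit 96 evolving phage populations, rescuing any that falter.

    `plate` is assumed to expose the wells and the robot's liquid handling;
    every method name here is a placeholder for illustration only.
    """
    while plate.experiment_running():
        for well in plate.wells:  # 96 parallel evolutionary trajectories
            signal = well.read_fluorescence()  # reporter tied to the desired activity
            if signal < FLUORESCENCE_THRESHOLD:
                # The population is failing to evolve the target function:
                # swap in a host strain the phage can infect more easily,
                # preventing that lineage from going extinct.
                well.replace_host_strain(permissive_strain)
        time.sleep(MEASUREMENT_INTERVAL_S)
```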

Novel molecules

In this study, the researchers used their new platform to engineer a molecule that allows viruses to encode their genes in a new way. The genetic code of all living organisms stipulates that three DNA base pairs specify one amino acid. However, the MIT team was able to evolve several viral transfer RNA (tRNA) molecules that read codons of four bases instead of three.

In another experiment, they evolved a molecule that allows viruses to incorporate a synthetic amino acid into the proteins they make. All viruses and living cells use the same 20 naturally occurring amino acids to build their proteins, but the MIT team was able to generate an enzyme that can incorporate an additional amino acid called Boc-lysine.

The researchers are now using PRANCE to try to make novel small-molecule drugs. Other possible applications for this kind of large-scale directed evolution include trying to evolve enzymes that degrade plastic more efficiently, or molecules that can edit the epigenome, similarly to how CRISPR can edit the genome, the researchers say.

With this system, scientists can also gain a better understanding of the step-by-step process that leads to a particular evolutionary outcome. Because they can study so many populations in parallel, they can tweak factors such as the mutation rate, the size of the original population, and environmental conditions, and then analyze how those variations affect the outcome. This type of large-scale, controlled experiment could allow them to answer fundamental questions about how evolution naturally occurs.

“Our system allows us to actually perform these evolutions with substantially more understanding of what's happening in the system,” Chory says. “We can learn about the history of the evolution, not just the end point.”

The research was funded by the MIT Media Lab, an Alfred P. Sloan Research Fellowship, the Open Philanthropy Project, the Reid Hoffman Foundation, the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of Allergy and Infectious Diseases, and a Ruth L. Kirschstein NRSA Fellowship from the National Cancer Institute.



from MIT News https://ift.tt/32AAAU0

Monday, December 27, 2021

Scientists build new atlas of ocean’s oxygen-starved waters

Life is teeming nearly everywhere in the oceans, except in certain pockets where oxygen naturally plummets and waters become unlivable for most aerobic organisms. These desolate pools are “oxygen-deficient zones,” or ODZs. And though they make up less than 1 percent of the ocean’s total volume, they are a significant source of nitrous oxide, a potent greenhouse gas. Their boundaries can also limit the extent of fisheries and marine ecosystems.

Now MIT scientists have generated the most detailed, three-dimensional “atlas” of the largest ODZs in the world. The new atlas provides high-resolution maps of the two major, oxygen-starved bodies of water in the tropical Pacific. These maps reveal the volume, extent, and varying depths of each ODZ, along with fine-scale features, such as ribbons of oxygenated water that intrude into otherwise depleted zones.

The team used a new method to process over 40 years’ worth of ocean data, comprising nearly 15 million measurements taken by many research cruises and autonomous robots deployed across the tropical Pacific. The researchers compiled and then analyzed this vast, fine-grained dataset to generate maps of oxygen-deficient zones at various depths, similar to the many slices of a three-dimensional scan.

From these maps, the researchers estimated the total volume of the two major ODZs in the tropical Pacific, more precisely than previous efforts. The first zone, which stretches out from the coast of South America, measures about 600,000 cubic kilometers — roughly the volume of water that would fill 240 billion Olympic-sized pools. The second zone, off the coast of Central America, is roughly three times larger.
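The pool comparison holds up to a quick check, assuming a standard Olympic pool of 50 by 25 by 2 meters, or 2,500 cubic meters:

$$\frac{6 \times 10^{5}\ \text{km}^3 \times 10^{9}\ \text{m}^3/\text{km}^3}{2.5 \times 10^{3}\ \text{m}^3\ \text{per pool}} = 2.4 \times 10^{11} \approx 240\ \text{billion pools}.$$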

The atlas serves as a reference for where ODZs lie today. The team hopes scientists can add to this atlas with continued measurements, to better track changes in these zones and predict how they may shift as the climate warms.

“It’s broadly expected that the oceans will lose oxygen as the climate gets warmer. But the situation is more complicated in the tropics where there are large oxygen-deficient zones,” says Jarek Kwiecinski ’21, who developed the atlas along with Andrew Babbin, the Cecil and Ida Green Career Development Professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “It’s important to create a detailed map of these zones so we have a point of comparison for future change.”

The team’s study appears today in the journal Global Biogeochemical Cycles.

Airing out artifacts

Oxygen-deficient zones are large, persistent regions of the ocean that occur naturally, as a consequence of marine microbes gobbling up sinking phytoplankton along with all the available oxygen in the surroundings. These zones lie in regions bypassed by ocean currents, which would normally replenish them with oxygenated water. As a result, ODZs are locations of relatively permanent, oxygen-depleted waters and can exist at mid-ocean depths of roughly 35 to 1,000 meters below the surface. For some perspective, the oceans on average run about 4,000 meters deep.

Over the last 40 years, research cruises have explored these regions by dropping bottles down to various depths and hauling up seawater that scientists then measure for oxygen.

“But there are a lot of artifacts that come from a bottle measurement when you’re trying to measure truly zero oxygen,” Babbin says. “All the plastic that we deploy at depth is full of oxygen that can leach out into the sample. When all is said and done, that artificial oxygen inflates the ocean’s true value.”

Rather than rely on measurements from bottle samples, the team looked at data from sensors attached to the outside of the bottles or integrated with robotic platforms that can change their buoyancy to measure water at different depths. These sensors measure a variety of signals, including changes in electrical currents or the intensity of light emitted by a photosensitive dye to estimate the amount of oxygen dissolved in water. In contrast to seawater samples that represent a single discrete depth, the sensors record signals continuously as they descend through the water column.

Scientists have attempted to use these sensor data to estimate the true value of oxygen concentrations in ODZs, but have found it incredibly tricky to convert these signals accurately, particularly at concentrations approaching zero.

“We took a very different approach, using measurements not to look at their true value, but rather how that value changes within the water column,” Kwiecinski says. “That way we can identify anoxic waters, regardless of what a specific sensor says.”

Bottoming out

The team reasoned that, if sensors showed a constant, unchanging value of oxygen in a continuous, vertical section of the ocean, regardless of the true value, then it would likely be a sign that oxygen had bottomed out, and that the section was part of an oxygen-deficient zone.

The researchers brought together nearly 15 million sensor measurements collected over 40 years by various research cruises and robotic floats, and mapped the regions where oxygen did not change with depth.
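In code, that criterion (flag any contiguous stretch of the water column where the reading stops changing) reduces to a few lines. The sketch below is an illustrative reconstruction; the tolerance, minimum span length, and data layout are assumptions, not values from the paper.

```python
import numpy as np

def find_anoxic_spans(depths, oxygen_signal, tol=1e-3, min_points=10):
    """Flag contiguous depth ranges where an oxygen sensor reads a constant value.

    A constant reading over many consecutive depths suggests the sensor has
    bottomed out, i.e. the water is effectively anoxic, regardless of the
    sensor's (possibly biased) absolute value. `tol` and `min_points` are
    illustrative tuning knobs.
    """
    flat = np.abs(np.diff(oxygen_signal)) < tol  # True where the signal is unchanging
    spans, start = [], None
    for i, is_flat in enumerate(flat):
        if is_flat and start is None:
            start = i                            # a flat stretch begins
        elif not is_flat and start is not None:
            if i - start >= min_points:
                spans.append((depths[start], depths[i]))
            start = None
    if start is not None and len(flat) - start >= min_points:
        spans.append((depths[start], depths[-1]))  # flat stretch runs to the bottom
    return spans  # list of (top_depth, bottom_depth) candidate anoxic intervals
```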

“We can now see how the distribution of anoxic water in the Pacific changes in three dimensions,” Babbin says. 

The team mapped the boundaries, volume, and shape of two major ODZs in the tropical Pacific, one in the Northern Hemisphere, and the other in the Southern Hemisphere. They were also able to see fine details within each zone. For instance, oxygen-depleted waters are “thicker,” or more concentrated towards the middle, and appear to thin out toward the edges of each zone.

“We could also see gaps, where it looks like big bites were taken out of anoxic waters at shallow depths,” Babbin says. “There’s some mechanism bringing oxygen into this region, making it oxygenated compared to the water around it.”

Such observations of the tropical Pacific’s oxygen-deficient zones are more detailed than what’s been measured to date.

“How the borders of these ODZs are shaped, and how far they extend, could not be previously resolved,” Babbin says. “Now we have a better idea of how these two zones compare in terms of areal extent and depth.”

“This gives you a sketch of what could be happening,” Kwiecinski says. “There’s a lot more one can do with this data compilation to understand how the ocean’s oxygen supply is controlled.”

This research is supported, in part, by the Simons Foundation.



from MIT News https://ift.tt/3z2fTMy

Monday, December 20, 2021

MIT engineers produce the world’s longest flexible fiber battery

Researchers have developed a rechargeable lithium-ion battery in the form of an ultra-long fiber that could be woven into fabrics. The battery could enable a wide variety of wearable electronic devices, and might even be used to make 3D-printed batteries in virtually any shape.

The researchers envision new possibilities for self-powered communications, sensing, and computational devices that could be worn like ordinary clothing, as well as devices whose batteries could also double as structural parts.

In a proof of concept, the team behind the new battery technology has produced the world’s longest flexible fiber battery, 140 meters long, to demonstrate that the material can be manufactured to arbitrarily long lengths. The work is described today in the journal Materials Today. MIT postdoc Tural Khudiyev (now an assistant professor at National University of Singapore), former MIT postdoc Jung Tae Lee (now a professor at Kyung Hee University), and Benjamin Grena SM ’13, PhD ’17 (currently at Apple) are the lead authors on the paper. Other co-authors are MIT professors Yoel Fink, Ju Li, and John Joannopoulos, and seven others at MIT and elsewhere.

Researchers, including members of this team, have previously demonstrated fibers that contain a wide variety of electronic components, including light emitting diodes (LEDs), photosensors, communications, and digital systems. Many of these are weavable and washable, making them practical for use in wearable products, but all have so far relied on an external power source. Now, this fiber battery, which is also weavable and washable, could enable such devices to be completely self-contained.

The new fiber battery is manufactured using novel battery gels and a standard fiber-drawing system that starts with a larger cylinder containing all the components and then heats it to just below its melting point. The material is drawn through a narrow opening to compress all the parts to a fraction of their original diameter, while maintaining the original arrangement of parts.

While others have attempted to make batteries in fiber form, Khudiyev says, those were structured with key materials on the outside of the fiber, whereas this system embeds the lithium and other materials inside the fiber, with a protective outside coating, making this version stable and waterproof. This is the first demonstration of a sub-kilometer-long fiber battery that is both long enough and durable enough to have practical applications, he says.

The fact that they were able to make a 140-meter fiber battery shows that “there’s no obvious upper limit to the length. We could definitely do a kilometer-scale length,” he says. A demonstration device using the new fiber battery incorporated a “Li-Fi” communications system, in which pulses of light are used to transmit data, and included a microphone, pre-amp, transistor, and diodes to establish an optical data link between two woven fabric devices.

“When we embed the active materials inside the fiber, that means sensitive battery components already have a good sealing,” Khudiyev says, “and all the active materials are very well-integrated, so they don’t change their position” during the drawing process. In addition, the resulting fiber battery is much thinner and more flexible, yielding an aspect ratio (the ratio of length to width) of up to a million, far beyond other designs. This makes it practical to use standard weaving equipment to create fabrics that incorporate the batteries as well as electronic systems.

The 140-meter fiber produced so far has an energy storage capacity of 123 milliamp-hours, which can charge smartwatches or phones, he says. The fiber device is only a few hundred microns in thickness, thinner than any previous attempts to produce batteries in fiber form.
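Those figures are consistent with the aspect ratio claimed above. Assuming a representative thickness of 300 microns (an assumed mid-range value for “a few hundred microns”):

$$\frac{140\ \text{m}}{300 \times 10^{-6}\ \text{m}} \approx 4.7 \times 10^{5},$$

so a kilometer-scale fiber of the same thickness would push the ratio past a million.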

“The beauty of our approach is that we can embed multiple devices in an individual fiber,” Lee says, “unlike other approaches which need integration of multiple fiber devices.” They demonstrated integration of an LED and a Li-ion battery in a single fiber, and he believes more than three or four devices could be combined in such a small space in the future. “When we integrate these fibers containing multi-devices, the aggregate will advance the realization of a compact fabric computer.”

In addition to individual one-dimensional fibers, which can be woven to produce two-dimensional fabrics, the material can also be used in 3D printing or custom-shape systems to create solid objects, such as casings that could provide both the structure of a device and its power source. To demonstrate this capability, a toy submarine was wrapped with the battery fiber to provide it with power. Incorporating the power source into the structure of such devices could lower the overall weight and so improve the efficiency and range they can achieve.

“This is the first 3D printing of a fiber battery device,” Khudiyev says. “If you want to make complex objects” through 3D printing that incorporate a battery device, he says, this is the first system that can achieve that. “After printing, you do not need to add anything else, because everything is already inside the fiber, all the metals, all the active materials. It’s just a one-step printing. That’s a first.”

That means that now, he says, “Computational units can be put inside everyday objects, including Li-Fi.”

The team has already applied for a patent on the process and continues to develop further improvements in power capacity and variations on the materials used to improve efficiency. Khudiyev says such fiber batteries could be ready for use in commercial products within a few years.

“The shape flexibility of the new battery cell allows designs and applications that have not been possible before,” says Martin Winter, a professor of physical chemistry at the University of Muenster in Germany, who was not involved in this work. Calling this work “very creative,” he adds: “As most academic works on batteries look now at grid storage and electric vehicles, this is a wonderful deviation from mainstream.”

The research was supported by the MIT MRSEC program of the National Science Foundation, the U.S. Army Research Laboratory through the Institute for Soldier Nanotechnologies, the National Science Foundation’s graduate research fellowship program, and the National Research Foundation of Korea.



from MIT News https://ift.tt/3J5VMS5

Friday, December 17, 2021

Selective separation could help alleviate critical metals shortage

New processing methods developed by MIT researchers could help ease looming shortages of the essential metals that power everything from phones to automotive batteries, by making it easier to separate these rare metals from mining ores and recycled materials.

Selective adjustments within a chemical process called sulfidation allowed professor of metallurgy Antoine Allanore and his graduate student Caspar Stinn to successfully target and separate rare metals, such as the cobalt in a lithium-ion battery, from mixed-metal materials.

As they report in the journal Nature, their processing techniques allow the metals to remain in solid form and be separated without dissolving the material. This avoids traditional but costly liquid separation methods that require significant energy. The researchers developed processing conditions for 56 elements and tested these conditions on 15 elements.

Their sulfidation approach, they write in the paper, could reduce the capital costs of separating metals from mixed-metal oxides by 65 to 95 percent. Their selective processing could also reduce greenhouse gas emissions by 60 to 90 percent compared to traditional liquid-based separation.

“We were excited to find replacements for processes that had really high levels of water usage and greenhouse gas emissions, such as lithium-ion battery recycling, rare-earth magnet recycling, and rare-earth separation,” says Stinn. “Those are processes that make materials for sustainability applications, but the processes themselves are very unsustainable.”

The findings offer one way to alleviate a growing demand for minor metals like cobalt, lithium, and rare earth elements that are used in “clean” energy products like electric cars, solar cells, and electricity-generating windmills. According to a 2021 report by the International Energy Agency, the average amount of minerals needed for a new unit of power generation capacity has risen by 50 percent since 2010, as renewable energy technologies using these metals expand their reach.

Opportunity for selectivity

For more than a decade, the Allanore group has been studying the use of sulfide materials in developing new electrochemical routes for metal production. Sulfides are common materials, but the MIT scientists are experimenting with them under extreme conditions like very high temperatures — from 800 to 3,000 degrees Fahrenheit — that are used in manufacturing plants but not in a typical university lab.

“We are looking at very well-established materials in conditions that are uncommon compared to what has been done before,” Allanore explains, “and that is why we are finding new applications or new realities.”

In the process of synthesizing high-temperature sulfide materials to support electrochemical production, Stinn says, “we learned we could be very selective and very controlled about what products we made. And it was with that understanding that we realized, ‘OK, maybe there’s an opportunity for selectivity in separation here.’”

The reaction the researchers exploit converts a material containing a mix of metal oxides into new metal-sulfur compounds, or sulfides. By altering factors like temperature, gas pressure, and the addition of carbon in the reaction process, Stinn and Allanore found that they could selectively create a variety of sulfide solids that can be physically separated by a variety of methods, including crushing the material and sorting different-sized sulfides or using magnets to separate different sulfides from one another.

Current methods of rare metal separation rely on large quantities of energy, water, acids, and organic solvents which have costly environmental impacts, says Stinn. “We are trying to use materials that are abundant, economical, and readily available for sustainable materials separation, and we have expanded that domain to now include sulfur and sulfides.”

Stinn and Allanore used selective sulfidation to separate out economically important metals like cobalt in recycled lithium-ion batteries. They also used their techniques to separate dysprosium — a rare-earth element used in applications ranging from data storage devices to optoelectronics — from rare-earth-boron magnets, or from the typical mixture of oxides available from mining minerals such as bastnaesite.

Leveraging existing technology

Metals like cobalt and rare earths are only found in small amounts in mined materials, so industries must process large volumes of material to retrieve or recycle enough of these metals to be economically viable, Allanore explains. “It’s quite clear that these processes are not efficient. Most of the emissions come from the lack of selectivity and the low concentration at which they operate.”

By eliminating the need for liquid separation and the extra steps and materials it requires to dissolve and then reprecipitate individual elements, the MIT researchers’ process significantly reduces the costs incurred and emissions produced during separation.

“One of the nice things about separating materials using sulfidation is that a lot of existing technology and process infrastructure can be leveraged,” Stinn says. “It’s new conditions and new chemistries in established reactor styles and equipment.”

The next step is to show that the process can work for large amounts of raw material — separating out 16 elements from rare-earth mining streams, for example. “Now we have shown that we can handle three or four or five of them together, but we have not yet processed an actual stream from an existing mine at a scale to match what’s required for deployment,” Allanore says.

Stinn and colleagues in the lab have built a reactor that can process about 10 kilograms of raw material per day, and the researchers are starting conversations with several corporations about the possibilities.

“We are discussing what it would take to demonstrate the performance of this approach with existing mineral and recycling streams,” Allanore says.

This research was supported by the U.S. Department of Energy and the U.S. National Science Foundation.



from MIT News https://ift.tt/3p7hKfP

Wednesday, December 15, 2021

Nonsense can make sense to machine-learning models

For all that neural networks can accomplish, we still don’t really understand how they operate. Sure, we can program them to learn, but making sense of a machine’s decision-making process remains much like a fancy puzzle with a dizzying, complex pattern where plenty of integral pieces have yet to be fitted. 

If a model was trying to classify an image of said puzzle, for example, it could encounter well-known, but annoying adversarial attacks, or even more run-of-the-mill data or processing issues. But a new, more subtle type of failure recently identified by MIT scientists is another cause for concern: “overinterpretation,” where algorithms make confident predictions based on details that don’t make sense to humans, like random patterns or image borders. 

This could be particularly worrisome for high-stakes environments, like split-second decisions for self-driving cars, and medical diagnostics for diseases that need more immediate attention. Autonomous vehicles in particular rely heavily on systems that can accurately understand surroundings and then make quick, safe decisions. In the team’s experiments, networks used specific backgrounds, edges, or particular patterns of the sky to classify traffic lights and street signs, irrespective of what else was in the image.

The team found that neural networks trained on popular datasets like CIFAR-10 and ImageNet suffered from overinterpretation. Models trained on CIFAR-10, for example, made confident predictions even when 95 percent of an input image was missing and the remainder was senseless to humans.

“Overinterpretation is a dataset problem that's caused by these nonsensical signals in datasets. Not only are these high-confidence images unrecognizable, but they contain less than 10 percent of the original image in unimportant areas, such as borders. We found that these images were meaningless to humans, yet models can still classify them with high confidence,” says Brandon Carter, MIT Computer Science and Artificial Intelligence Laboratory PhD student and lead author on a paper about the research. 

Deep-image classifiers are widely used. In addition to medical diagnosis and boosting autonomous vehicle technology, there are use cases in security, gaming, and even an app that tells you if something is or isn’t a hot dog, because sometimes we need reassurance. The tech in discussion works by processing individual pixels from tons of pre-labeled images for the network to “learn.” 

Image classification is hard, because machine-learning models have the ability to latch onto these nonsensical subtle signals. Then, when image classifiers are trained on datasets such as ImageNet, they can make seemingly reliable predictions based on those signals. 

Although these nonsensical signals can lead to model fragility in the real world, the signals are actually valid in the datasets, meaning overinterpretation can’t be diagnosed using typical evaluation methods based on accuracy.

To find the rationale for the model’s prediction on a particular input, the methods in the present study start with the full image and repeatedly ask: what can be removed from this image? Essentially, the method keeps covering up the image until what remains is the smallest piece that still yields a confident decision.
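As a rough sketch, that reduction procedure (repeatedly removing pixels while the classifier stays confident) can be written as a greedy loop. The code below is a simplified illustration of the idea, not the paper’s exact algorithm; the confidence threshold and the zeroing-out form of “removal” are assumptions.

```python
import numpy as np

def smallest_confident_subset(image, model, target_class, conf_threshold=0.9):
    """Greedily mask pixels of an (H, W, C) float image while the model stays confident.

    `model(masked)` is assumed to return a vector of class probabilities for a
    single image; pixel "removal" is simulated by zeroing. Illustrative only.
    """
    masked = image.copy()
    h, w = image.shape[:2]
    visible = {(r, c) for r in range(h) for c in range(w)}
    changed = True
    while changed:                       # repeat until no more pixels can go
        changed = False
        for (r, c) in list(visible):
            saved = masked[r, c].copy()
            masked[r, c] = 0             # tentatively remove this pixel
            if model(masked)[target_class] >= conf_threshold:
                visible.discard((r, c))  # still confident: leave it removed
                changed = True
            else:
                masked[r, c] = saved     # confidence dropped: restore it
    return masked  # a small pixel subset that still yields a confident prediction
```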

To that end, it could also be possible to use these methods as a type of validation criteria. For example, if you have an autonomously driving car that uses a trained machine-learning method for recognizing stop signs, you could test that method by identifying the smallest input subset that constitutes a stop sign. If that consists of a tree branch, a particular time of day, or something that's not a stop sign, you could be concerned that the car might come to a stop at a place it's not supposed to.

While it may seem that the model is the likely culprit here, the datasets are more likely to blame. “There's the question of how we can modify the datasets in a way that would enable models to be trained to more closely mimic how a human would think about classifying images and therefore, hopefully, generalize better in these real-world scenarios, like autonomous driving and medical diagnosis, so that the models don't have this nonsensical behavior,” says Carter. 

This may mean creating datasets in more controlled environments. Currently, training images are simply extracted from public sources and then labeled. But if you want to do object identification, for example, it might be necessary to train models with objects photographed against an uninformative background.

This work was supported by Schmidt Futures and the National Institutes of Health. Carter wrote the paper alongside Siddhartha Jain and Jonas Mueller, scientists at Amazon, and MIT Professor David Gifford. They are presenting the work at the 2021 Conference on Neural Information Processing Systems.



from MIT News https://ift.tt/3E3elTk

3 Questions: Kristin Knouse on the liver’s regenerative capabilities

Why is the liver the only human organ that can regenerate? How does it know when it’s been injured? What can our understanding of the liver contribute to regenerative medicine? These are just some of the questions that new assistant professor of biology Kristin Knouse and her lab members are asking in their research at the Koch Institute for Integrative Cancer Research. Knouse sat down to discuss why the liver is so unique, what lessons we might learn from the organ, and what its regeneration might teach us about cancer.

Q: Your lab is interested in questions about how body tissues sense and respond to damage. What is it about the liver that makes it a good tool to model those questions?

A: I've always felt that we, as scientists, have so much to gain from treasuring nature’s exceptions, because those exceptions can shine light onto a completely unknown area of biology and provide building blocks to confer such novelty to other systems. When it comes to organ regeneration in mammals, the liver is that exception. It is the only solid organ that can completely regenerate itself. You can damage or remove over 75 percent of the liver and the organ will completely regenerate in a matter of weeks. The liver therefore contains the instructions for how to regenerate a solid organ; however, we have yet to access and interpret those instructions. If we could fully understand how the liver is able to regenerate itself, perhaps one day we could coax other solid organs to do the same.

There are some things we already know about liver regeneration, such as when it begins, what genes are expressed, and how long it takes. However, we still don’t understand why the liver can regenerate but other organs cannot. Why is it that these fully differentiated liver cells — cells that have already assumed specialized roles in the liver — can re-enter the cell cycle and regenerate the organ? We don’t have a molecular explanation for this. Our lab is working to answer this fundamental question of cell and organ biology and apply our discoveries to unlock new approaches for regenerative medicine. In this regard, I don't necessarily consider myself exclusively a liver biologist, but rather someone who is leveraging the liver to address this much broader biological problem.

Q: As an MD/PhD student, you conducted your graduate research in the lab of the late Professor Angelika Amon here at MIT. How did your work in her lab lead to an interest in studying the liver’s regenerative capacities?

A: What was incredible about being in Angelika’s lab was that she had an interest in almost everything and gave me tremendous independence in what I pursued. I began my graduate research in her lab with an interest in cell division, and I was doing experiments to observe how cells from different mammalian tissues divide. I was isolating cells from different mouse tissues and then studying them in culture. In doing that, I found that when the cells were isolated and grown in a dish they could not segregate their chromosomes properly, suggesting that the tissue environment was essential for accurate cell division. In order to further study and compare these two different contexts — cells in a tissue versus cells in culture — I was keen to study a tissue in which I could observe a lot of cells undergoing cell division at the same time.

So I thought back to my time in medical school, and I remembered that the liver has the ability to completely regenerate itself. With a single surgery to remove part of the liver, I could stimulate millions of cells to divide. I therefore began exploiting liver regeneration as a means of studying chromosome segregation in tissue. But as I continued to perform surgeries on mice and watch the liver rapidly regenerate itself, I couldn’t help but become absolutely fascinated by this exceptional biological process. It was that fascination with this incredibly unique but poorly understood phenomenon — alongside the realization that there was a huge, unmet medical need in the area of regeneration — that convinced me to dedicate my career to studying this.

Q: What kinds of clinical applications might a better understanding of organ regeneration lead to, and what role do you see your lab playing in that research?

A: The most proximal medical application for our work is to confer regenerative capacity to organs that are currently non-regenerative. As we begin to achieve a molecular understanding of how and why the liver can regenerate, we put ourselves in a powerful position to identify and surmount the barriers to regeneration in non-regenerative tissues, such as the heart and nervous system. By answering these complementary questions, we bring ourselves closer to the possibility that, one day, if someone has a heart attack or a spinal cord injury, we could deliver a therapy that stimulates the tissue to regenerate itself. I realize that may sound like a moonshot now, but I don’t think any problem is insurmountable so long as it can be broken down into a series of tractable questions.

Beyond regenerative medicine, I believe our work studying liver regeneration also has implications for cancer. At first glance this may seem counterintuitive, as rapid regrowth is the exact opposite of what we want cancer cells to do. However, the reality is that the majority of cancer-related deaths are attributable not to the rapidly proliferating cells that constitute primary tumors, but rather to the cells that disperse from the primary tumor and lie dormant for years before manifesting as metastatic disease and creating another tumor. These dormant cells evade most of the cancer therapies designed to target rapidly proliferating cells. If you think about it, these dormant cells are not unlike the liver: they are quiet for months, maybe years, and then suddenly awaken. I hope that as we start to understand more about the liver, we might learn how to target these dormant cancer cells, prevent metastatic disease, and thereby offer lasting cancer cures.



from MIT News https://ift.tt/3s9PkDC

From counting blood cells to motion capture, sensors drive patient-centered research

Sensors and sensing systems — from devices that count white blood cells to technologies that monitor muscle coordination during rehabilitation — can positively impact medical research, scientists said at the 2021 SENSE.nano Symposium.

The virtual event focused on how sensing technologies are enabling current medical studies and aiding translation of their findings to improve human health. Featuring leaders from research and industry, MIT-launched startup companies, and graduate students, the event was the fifth annual meeting organized by SENSE.nano.

“In this era of big data, sensors are everywhere — in our homes and vehicles, medical devices, phones, and even clothing,” says MIT.nano Director Vladimir Bulović. “This year’s symposium was an exploration of how this breadth of new sensors and new sensing techniques will propel the standards of current medical work, bringing forward new clinical practice and better health for all.”

The SENSE.nano 2021 speakers discussed a range of technologies under the research themes of human motion studies, physiological monitoring, imaging at multiple scales, and devices and strategies for collecting specimens and performing biopsies. Presenters described novel research methods — such as drawing inspiration from dancers’ movement to study how muscles represent rhythm — and novel applications such as neural interface wearables to help humans better interact with robots and other electronic systems.

The symposium also celebrated the re-opening of the MIT Center for Clinical and Translational Research (CCTR, formerly the MIT Research Clinical Center). Along with remodeled health labs for research participants, the CCTR features a prototype workshop, motion capture lab, and observation and instrumentation suites for MIT and visiting human health researchers.

“SENSE.nano 2021 brought together nanoscience, nanotechnology, and the practice of medicine through our shared and central facilities — MIT.nano and the new CCTR,” says Brian Anthony, the associate director of MIT.nano and principal research scientist in the Department of Mechanical Engineering. “MIT.nano has the tools to support fabrication and design of sensors, and the CCTR has the clinical research space to study how these sensors can support medical practice.”

The patient-centered application of many sensing technologies used in medical research, including motion capture and wearable analytic devices, makes it more important than ever to include patients as active participants in such research, said keynote speaker Cecilia Stuopis, medical director of MIT Medical.

“We want the evaluation of questions and outcomes meaningful and important to patients and caregivers to be central to the process, because we want to recognize that they have unique perspectives, values, and goals for what we are trying to learn or accomplish,” Stuopis said.

Encouraging collaborative relationships between researchers and health-care providers has the potential to shorten the usual 17-year gap between basic research and its widespread acceptance in the clinic, she added, in part by connecting researchers with underserved populations who may not normally participate in clinical trials.

The symposium featured speakers from more than 10 MIT departments, labs, or centers (DLCs), including mechanical engineering, biological engineering, chemistry, and computer science and artificial intelligence. Their presentations underscored the multidisciplinary reach of sensors research. Mechanical engineering Associate Professor Jeehwan Kim demonstrated a perforated electronic skin, which can collect physiological data from the body without being damaged by sweat. Inspired by her grandmother’s stroke, Kaymie Shiozawa, a mechanical engineering graduate student, shared her work on human balance that she hopes will lead to a new robotic cane. In the imaging session, Lester Wolfe Professor of Chemistry Moungi Bawendi discussed a noninvasive method of using near-infrared and shortwave infrared to track the progression of liver disease.

As in previous years, SENSE.nano 2021 also highlighted the innovation ecosystem at MIT, with presentations by MIT-launched startups working to grow their ideas to scale.

MIT.nano and the CCTR are united by their active engagement with startup companies, said Brian Anthony. For instance, ongoing studies at the center have helped Leuko, a startup that makes a device for non-invasive, at-home white blood cell monitoring, refine and improve its product. Leuko was one of three medical sensor startups featured in this year’s symposium, along with Pison Technology and Stratagen Bio.

SENSE.nano 2021 was sponsored by MIT.nano, the MIT Industrial Liaison Program, and the MIT Center for Clinical and Translational Research.



from MIT News https://ift.tt/3dRvCEu

Monday, December 13, 2021

Super-bright stellar explosion is likely a dying star giving birth to a black hole or neutron star

In June of 2018, telescopes around the world picked up a brilliant blue flash from the spiral arm of a galaxy 200 million light years away. The powerful burst appeared at first to be a supernova, though it was much faster and far brighter than any stellar explosion scientists had yet seen. The signal, procedurally labeled AT2018cow, has since been dubbed simply “the Cow,” and astronomers have catalogued it as a fast blue optical transient, or FBOT — a bright, short-lived event of unknown origin.

Now an MIT-led team has found strong evidence for the signal’s source. In addition to a bright optical flash, the scientists detected a strobe-like pulse of high-energy X-rays. They traced hundreds of millions of such X-ray pulses back to the Cow, and found the pulses occurred like clockwork, every 4.4 milliseconds, over a span of 60 days.

Based on the frequency of the pulses, the team calculated that the X-rays must have come from an object measuring no more than 1,000 kilometers wide, with a mass smaller than 800 suns. By astrophysical standards, such an object would be considered compact, much like a small black hole or a neutron star.

Their findings, published today in the journal Nature Astronomy, strongly suggest that AT2018cow was likely a product of a dying star that, in collapsing, gave birth to a compact object in the form of a black hole or neutron star. The newborn object continued to devour surrounding material, eating the star from the inside — a process that released an enormous burst of energy.

“We have likely discovered the birth of a compact object in a supernova,” says lead author Dheeraj “DJ” Pasham, a research scientist in MIT’s Kavli Institute for Astrophysics and Space Research. “This happens in normal supernovae, but we haven’t seen it before because it’s such a messy process. We think this new evidence opens possibilities for finding baby black holes or baby neutron stars.”

“The core of the Cow”

AT2018cow is one of many “astronomical transients” discovered in 2018. The “cow” in its name is a random coincidence of the astronomical naming process (for instance, “aaa” refers to the very first astronomical transient discovered in 2018). The signal is among a few dozen known FBOTs, and it is one of only a few such signals that have been observed in real time. Its powerful flash — up to 100 times brighter than a typical supernova — was detected by a survey in Hawaii, which immediately sent out alerts to observatories around the world.

“It was exciting because loads of data started piling up,” Pasham says. “The amount of energy was orders of magnitude more than the typical core collapse supernova. And the question was, what could produce this additional source of energy?”

Astronomers have proposed various scenarios to explain the super-bright signal. For instance, it could have been a product of a black hole born in a supernova. Or it could have resulted from a middle-weight black hole stripping away material from a passing star. However, the data collected by optical telescopes haven’t resolved the source of the signal in any definitive way. Pasham wondered whether an answer could be found in X-ray data.

“This signal was close and also bright in X-rays, which is what got my attention,” Pasham says. “To me, the first thing that comes to mind is, some really energetic phenomenon is going on to generate X-rays. So, I wanted to test out the idea that there is a black hole or compact object at the core of the Cow.”

Finding a pulse

The team looked to X-ray data collected by NASA’s Neutron Star Interior Composition Explorer (NICER), an X-ray-monitoring telescope aboard the International Space Station. NICER started observing the Cow about five days after its initial detection by optical telescopes, monitoring the signal over the next 60 days. This data was recorded in a publicly available archive, which Pasham and his colleagues downloaded and analyzed.

The team looked through the data to identify X-ray signals emanating near AT2018cow, and confirmed that the emissions were not from other sources such as instrument noise or cosmic background phenomena. They focused on the X-rays and found that the Cow appeared to be giving off bursts at a frequency of 225 hertz, or once every 4.4 milliseconds.

Pasham seized on this pulse, recognizing that its frequency could be used to directly calculate the size of whatever was pulsing. In this case, the size of the pulsing object cannot be larger than the distance light can travel in 4.4 milliseconds. By this reasoning, he calculated that the object must be no larger than 1.3×10⁸ centimeters, or roughly 1,000 kilometers wide.
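The light-travel-time argument behind that bound is a one-line calculation: a source cannot vary coherently on timescales shorter than the time light takes to cross it, so

$$d \lesssim c\,\Delta t = (3 \times 10^{10}\ \text{cm/s}) \times (4.4 \times 10^{-3}\ \text{s}) \approx 1.3 \times 10^{8}\ \text{cm} \approx 1{,}300\ \text{km}.$$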

“The only thing that can be that small is a compact object — either a neutron star or black hole,” Pasham says.

The team further calculated that, based on the energy emitted by AT2018cow, the object’s mass must be no more than 800 solar masses.

“This rules out the idea that the signal is from an intermediate black hole,” Pasham says.

Apart from pinning down the source for this particular signal, Pasham says the study demonstrates that X-ray analyses of FBOTs and other ultrabright phenomena could be a new tool for studying infant black holes.

“Whenever there’s a new phenomenon, there’s excitement that it could tell something new about the universe,” Pasham says. “For FBOTs, we have shown we can study their pulsations in detail, in a way that’s not possible in the optical. So, this is a new way to understand these newborn compact objects.”

This research was supported, in part, by NASA.



from MIT News https://ift.tt/323kim8

David Li wins 2022 Marshall Scholarship

David Li, from Woodbury, Minnesota, has been selected as a Marshall Scholar and will commence graduate studies in the U.K. next fall. Funded by the British government, the Marshall Scholarship provides exceptional American students with the opportunity to pursue two years of advanced study in any field at any university in the U.K.

Li, along with MIT’s other endorsed Marshall candidates, was mentored by the distinguished fellowships team in Career Advising and Professional Development, and the Presidential Committee on Distinguished Fellowships. “We are very proud of all the MIT students who applied for the Marshall this year,” says Professor Will Broadhead, who chairs the committee along with Professor Tamar Schapiro. “These are students whose undergraduate experience has been disrupted by all kinds of turmoil, and yet they maintain an optimism about the future and their ability to improve it that we on the committee found truly inspiring. David stands out as a richly deserving winner of the scholarship and will no doubt thrive as he continues his studies in the U.K. We offer him our warmest congratulations!”

Li is majoring in electrical engineering and computer science with minors in mechanical engineering and economics. As a Marshall Scholar, he will complete an MPhil in biological science at the MRC Laboratory of Molecular Biology through Cambridge University, and then an MS in neuroscience at Oxford University. After returning to the U.S., Li intends to pursue a PhD in bioengineering, combining engineering and computer science with biology to develop transformative technologies.

Li has been interested in CRISPR technology since middle school, when he enrolled in an online version of MIT’s introductory biology class. This experience led him to conduct research during high school in the Hendrickson Laboratory at the University of Minnesota. Li was a national winner of the 2018 Genes in Space competition as part of a team that designed an experiment to measure changes in DNA double-strand break repair pathway choice in microgravity. This experiment was performed on the International Space Station (ISS) in 2019, presented at the 2018/2019 ISS R&D conferences, and published in PLOS One in 2021.

When he arrived at MIT, Li immediately began working in the Zhang Lab at the Broad Institute, which seeks to develop molecular and cellular tools for manipulating biological systems. Li focused on helping to engineer CRISPR technologies for genome editing as well as new approaches for directed evolution and Covid-19 testing. Within his first year with the lab, he was a co-first author on an article published in Molecular Cell. Since then, he has worked on three other projects, two of which have led to publications.

Li has served as a community teaching assistant for the MIT biology department’s online courses on edX, and has tested content for new online courses since middle school. He teaches courses on molecular biology and CRISPR/Cas9 to high school students through MIT’s Splash educational outreach program, and has volunteered with MIT Science Bowl as a session moderator and biology question writer. In his free time, Li enjoys recreational swimming and participating in the MIT Asian Christian Fellowship.



from MIT News https://ift.tt/30kEqzq

Wednesday, December 8, 2021

A tool to speed development of new solar cells

In the ongoing race to develop ever-better materials and configurations for solar cells, there are many variables that can be adjusted to try to improve performance, including material type, thickness, and geometric arrangement. Developing new solar cells has generally been a tedious process of making small changes to one of these parameters at a time. While computational simulators have made it possible to evaluate such changes without having to actually build each new variation for testing, the process remains slow.

Now, researchers at MIT and Google Brain have developed a system that makes it possible not just to evaluate one proposed design at a time, but to provide information about which changes will provide the desired improvements. This could greatly increase the rate of discovery of new, improved configurations.

The new system, called a differentiable solar cell simulator, is described in a paper published today in the journal Computer Physics Communications, written by MIT junior Sean Mann, research scientist Giuseppe Romano of MIT’s Institute for Soldier Nanotechnologies, and four others at MIT and at Google Brain.

Traditional solar cell simulators, Romano explains, take the details of a solar cell configuration and produce as their output a predicted efficiency — that is, what percentage of the energy of incoming sunlight actually gets converted to an electric current. But this new simulator both predicts the efficiency and shows how much that output is affected by any one of the input parameters. “It tells you directly what happens to the efficiency if we make this layer a little bit thicker, or what happens to the efficiency if we for example change the property of the material,” he says.

In short, he says, “we didn't discover a new device, but we developed a tool that will enable others to discover more quickly other higher performance devices.” Using this system, “we are decreasing the number of times that we need to run a simulator to give quicker access to a wider space of optimized structures.” In addition, he says, “our tool can identify a unique set of material parameters that has been hidden so far because it's very complex to run those simulations.”

While traditional approaches use essentially a random search of possible variations, Mann says, with his tool “we can follow a trajectory of change because the simulator tells you what direction you want to be changing your device. That makes the process much faster because instead of exploring the entire space of opportunities, you can just follow a single path” that leads directly to improved performance.
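That trajectory-following workflow amounts to gradient ascent on the simulator’s output. In the sketch below, `simulate` stands in for a differentiable solar cell simulator that returns both the efficiency and its derivatives with respect to the design parameters; the function, parameter values, and step size are hypothetical.

```python
import numpy as np

def optimize_design(simulate, params, learning_rate=0.05, steps=200):
    """Follow the simulator's gradient toward a higher-efficiency design.

    `simulate(params)` is a stand-in for a differentiable simulator: it returns
    (efficiency, gradient), where `gradient` holds the derivative of efficiency
    with respect to each design parameter (layer thicknesses, doping levels,
    bandgaps, ...). All names and values here are illustrative.
    """
    for _ in range(steps):
        efficiency, gradient = simulate(params)
        params = params + learning_rate * gradient  # step uphill in efficiency
    return params

# Hypothetical usage with two layer thicknesses (meters) and a doping level:
# initial = np.array([200e-9, 50e-9, 1e16])
# best = optimize_design(my_differentiable_simulator, initial)
```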

Since advanced solar cells often are composed of multiple layers interlaced with conductive materials to carry electric charge from one to the other, this computational tool reveals how changing the relative thicknesses of these different layers will affect the device’s output. “This is very important because the thickness is critical. There is a strong interplay between light propagation and the thickness of each layer and the absorption of each layer,” Mann explains.

Other variables that can be evaluated include the amount of doping (the introduction of atoms of another element) that each layer receives, or the dielectric constant of insulating layers, or the bandgap, a measure of the energy levels of photons of light that can be captured by different materials used in the layers.

This simulator is now available as an open-source tool that can be used immediately to help guide research in this field, Romano says. “It is ready, and can be taken up by industry experts.” To make use of it, researchers would couple this device’s computations with an optimization algorithm, or even a machine learning system, to rapidly assess a wide variety of possible changes and home in quickly on the most promising alternatives.

At this point, the simulator is based on just a one-dimensional version of the solar cell, so the next step will be to expand its capabilities to include two- and three-dimensional configurations. But even this 1D version “can cover the majority of cells that are currently under production,” Romano says. Certain variations, such as so-called tandem cells using different materials, cannot yet be simulated directly by this tool, but “there are ways to approximate a tandem solar cell by simulating each of the individual cells,” Mann says.

The simulator is “end-to-end,” Romano says, meaning it computes the sensitivity of the efficiency, also taking into account light absorption. He adds: “An appealing future direction is composing our simulator with advanced existing differentiable light-propagation simulators, to achieve enhanced accuracy.”

Moving forward, Romano says, because this is an open-source code, “that means that once it's up there, the community can contribute to it. And that's why we are really excited.” Although this research group is “just a handful of people,” he says, now anyone working in the field can make their own enhancements and improvements to the code and introduce new capabilities.

“Differentiable physics is going to provide new capabilities for the simulations of engineered systems,” says Venkat Viswanathan, an associate professor of mechanical engineering at Carnegie Mellon University, who was not associated with this work. “The differentiable solar cell simulator is an incredible example of differentiable physics, that can now provide new capabilities to optimize solar cell device performance,” he says, calling the study “an exciting step forward.”

In addition to Mann and Romano, the team included Eric Fadel and Steven Johnson at MIT, and Samuel Schoenholz and Ekin Cubuk at Google Brain. The work was supported in part by Eni S.p.A., the MIT Energy Initiative, and the MIT Quest for Intelligence.



from MIT News https://ift.tt/31NejBt

Machine-learning system flags remedies that might do more harm than good

Sepsis claims the lives of nearly 270,000 people in the U.S. each year. The unpredictable medical condition can progress rapidly, leading to a swift drop in blood pressure, tissue damage, multiple organ failure, and death.

Prompt interventions by medical professionals save lives, but some sepsis treatments can also contribute to a patient’s deterioration, so choosing the optimal therapy can be a difficult task. For instance, in the early hours of severe sepsis, administering too much fluid intravenously can increase a patient’s risk of death.

To help clinicians avoid remedies that may potentially contribute to a patient’s death, researchers at MIT and elsewhere have developed a machine-learning model that could be used to identify treatments that pose a higher risk than other options. Their model can also warn doctors when a septic patient is approaching a medical dead end — the point when the patient will most likely die no matter what treatment is used — so that they can intervene before it is too late.

When applied to a dataset of sepsis patients in a hospital intensive care unit, the researchers’ model indicated that about 12 percent of treatments given to patients who died were detrimental. The study also reveals that about 3 percent of patients who did not survive entered a medical dead end up to 48 hours before they died.

“We see that our model is almost eight hours ahead of a doctor’s recognition of a patient’s deterioration. This is powerful because in these really sensitive situations, every minute counts, and being aware of how the patient is evolving, and the risk of administering certain treatment at any given time, is really important,” says Taylor Killian, a graduate student in the Healthy ML group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Joining Killian on the paper are his advisor, Assistant Professor Marzyeh Ghassemi, head of the Healthy ML group and senior author; lead author Mehdi Fatemi, a senior researcher at Microsoft Research; and Jayakumar Subramanian, a senior research scientist at Adobe India. The research is being presented at this week’s Conference on Neural Information Processing Systems.  

A dearth of data

This research project was spurred by a 2019 paper Fatemi wrote that explored the use of reinforcement learning in situations where it is too dangerous to explore arbitrary actions, which makes it difficult to generate enough data to effectively train algorithms. These situations, where more data cannot be proactively collected, are known as “offline” settings.

In reinforcement learning, the algorithm is trained through trial and error and learns to take actions that maximize its accumulation of reward. But in a health care setting, it is nearly impossible to generate enough data for these models to learn the optimal treatment, since it isn’t ethical to experiment with possible treatment strategies.

So, the researchers flipped reinforcement learning on its head. They used the limited data from a hospital ICU to train a reinforcement learning model to identify treatments to avoid, with the goal of keeping a patient from entering a medical dead end.

Learning what to avoid is a more statistically efficient approach that requires fewer data, Killian explains.

“When we think of dead ends in driving a car, we might think that is the end of the road, but you could probably classify every foot along that road toward the dead end as a dead end. As soon as you turn away from another route, you are in a dead end. So, that is the way we define a medical dead end: Once you’ve gone on a path where whatever decision you make, the patient will progress toward death,” Killian says.

“One core idea here is to decrease the probability of selecting each treatment in proportion to its chance of forcing the patient to enter a medical dead-end — a property that is called treatment security. This is a hard problem to solve as the data do not directly give us such an insight. Our theoretical results allowed us to recast this core idea as a reinforcement learning problem,” Fatemi says.

To develop their approach, called Dead-end Discovery (DeD), they created two copies of a neural network. The first neural network focuses only on negative outcomes — when a patient died — and the second network only focuses on positive outcomes — when a patient survived. Using two neural networks separately enabled the researchers to detect a risky treatment in one and then confirm it using the other.

They fed each neural network patient health statistics and a proposed treatment. The networks output an estimated value of that treatment and also evaluate the probability the patient will enter a medical dead end. The researchers compared those estimates against preset thresholds to see if the situation raises any flags.

A yellow flag means that a patient is entering an area of concern while a red flag identifies a situation where it is very likely the patient will not recover.
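As a rough illustration of that flagging logic, here is a schematic Python sketch; it is not the authors’ released code, and the stand-in scorers, score ranges, and thresholds are illustrative placeholders rather than the paper’s values. The structure mirrors the description above: a treatment raises a flag only when the negative-outcome network rates it risky and the positive-outcome network agrees that recovery looks unlikely.

```python
# Schematic sketch of Dead-end Discovery-style flagging (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
W_death = rng.normal(size=8)     # stand-in weights for the negative-outcome net
W_survive = rng.normal(size=8)   # stand-in weights for the positive-outcome net

def q_death(state, treatment):
    # Placeholder score in (-1, 0): closer to -1 means this treatment is
    # more likely to push the patient toward a medical dead end.
    x = np.concatenate([state, treatment])
    return -1.0 / (1.0 + np.exp(-x @ W_death))

def q_survival(state, treatment):
    # Placeholder score in (0, 1): closer to 0 means recovery looks unlikely.
    x = np.concatenate([state, treatment])
    return 1.0 / (1.0 + np.exp(-x @ W_survive))

YELLOW, RED = -0.25, -0.75       # illustrative thresholds, not the paper's

def flag(state, treatment):
    d, s = q_death(state, treatment), q_survival(state, treatment)
    # Flag only when both networks independently agree the treatment is risky.
    if d <= RED and s <= 1.0 + RED:
        return "red"             # recovery is very unlikely
    if d <= YELLOW and s <= 1.0 + YELLOW:
        return "yellow"          # entering an area of concern
    return "none"

state, treatment = rng.normal(size=4), rng.normal(size=4)
print(flag(state, treatment))
```

In the actual system, the two estimates come from reinforcement-learned value networks trained on the ICU records, rather than the random stand-in weights used here.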

Treatment matters

The researchers tested their model using a dataset of patients presumed to be septic from the Beth Israel Deaconess Medical Center intensive care unit. This dataset contains about 19,300 admissions with observations drawn from a 72-hour period centered around when the patients first manifested symptoms of sepsis. Their results confirmed that some patients in the dataset encountered medical dead ends.

The researchers also found that 20 to 40 percent of patients who did not survive raised at least one yellow flag prior to their death, and many raised that flag at least 48 hours before they died. The results also showed that, when comparing the trends of patients who survived versus patients who died, once a patient raises their first flag, there is a very sharp deviation in the value of administered treatments. The window of time around the first flag is a critical point when making treatment decisions.

“This helped us confirm that treatment matters and the treatment deviates in terms of how patients survive and how patients do not. We found that upward of 11 percent of suboptimal treatments could have potentially been avoided because there were better alternatives available to doctors at those times. This is a pretty substantial number, when you consider the worldwide volume of patients who have been septic in the hospital at any given time,” Killian says.

Ghassemi is also quick to point out that the model is intended to assist doctors, not replace them.

“Human clinicians are who we want making decisions about care, and advice about what treatment to avoid isn’t going to change that,” she says. “We can recognize risks and add relevant guardrails based on the outcomes of 19,000 patient treatments — that’s equivalent to a single caregiver seeing more than 50 septic patient outcomes every day for an entire year.”

Moving forward, the researchers also want to estimate causal relationships between treatment decisions and the evolution of patient health. They plan to continue enhancing the model so it can create uncertainty estimates around treatment values that would help doctors make more informed decisions. Another way to provide further validation of the model would be to apply it to data from other hospitals, which they hope to do in the future.

This research was supported in part by Microsoft Research, a Canadian Institute for Advanced Research Azrieli Global Scholar Chair, a Canada Research Chair, and a Natural Sciences and Engineering Research Council of Canada Discovery Grant.



from MIT News https://ift.tt/3lQNqUF

EECS graduate women’s research summit increases research visibility and strengthens community

The MIT Department of Electrical Engineering and Computer Science (EECS) group Graduate Women in Course 6 (GW6) held its third annual research summit on Nov. 5, with attendees convening in person along with a simultaneous webcast. The summit featured research lightning talks from graduate women and other underrepresented genders across EECS, as well as a keynote from Institute Professor Barbara Liskov and a panel of five prominent women in industry and academia. (Registration was open to all students and faculty.)

“We aimed to increase the visibility of work being done by women and underrepresented genders in the department,” says Katie Matton, one of GW6’s three co-presidents. Much of the summit was devoted to a whirlwind of three-minute lightning talks, with graduates giving attendees mini crash courses in their latest research pursuits. “I was really impressed with how excited people were to share their research,” says Hallee Wong, another GW6 co-president.

The talks spanned a wide variety of research areas, from photonics to wireless communication to machine learning, reflecting the department’s diverse spectrum of electrical engineering, computer science, and artificial intelligence and decision-making disciplines. The speakers were not only pushing the boundaries of their respective fields, but they were also targeting real-world applications, with work focusing on the next generation of technology, such as quantum computing and augmented reality, as well as quality of life, including health care, agriculture, and waste management.

“It was fun presenting to an audience, and it was good seeing what other women are doing in the department,” says Leticia Mattos Da Silva, one of the lightning talk speakers.

In addition to technical talks, the summit also hosted a panel on navigating research careers, which was “the crown jewel of the event,” says Anna Zeng, a former GW6 co-president. “Every single year, the panel has been a smash hit, and this year was no exception.” The panelists were five accomplished women: MIT president emerita and professor of neuroscience Susan Hockfield; MIT assistant professors Jelena Notaros and Sixian You; Google principal scientist and Harvard SEAS Professor Fernanda Viegas; and Aude Oliva, director of the MIT-IBM Watson AI Lab. “It was cool to have women who have so much experience” and who are “pioneers in their fields,” says Matton.

Throughout the panel, there was a strong sense of community among those in the room. The panelists served as role models and mentors, giving advice on finding interesting research problems, remaining resilient to rejection, and establishing a strong support system. And when the panelists shared personal stories about overcoming gender barriers and building their own confidence, the entire room reacted in support with audible gasps and loud applause. Male attendees were also eager to lend their support, with one student asking the panelists for advice on how he can help better the experience of women students on campus. “It was nice having them hearing us and seeing the incredible work we’re doing,” says Mattos Da Silva.

After being virtual last year, the GW6 summit returned to a primarily in-person environment, much to the excitement of both the summit organizers and attendees. To take advantage of this, the summit included a coffee break and banquet dinner for attendees to connect with each other. “We know how important these interactions are with people on-campus,” says Wong.

During these networking opportunities, speakers, panelists, and other attendees flocked to chat, eat, and drink, filling the venue’s atrium with laughter and conversation. Attendees took the time to both catch up with familiar faces and also make new friends. “People seemed to have renewed enthusiasm to get to know one another,” says Zeng.

The GW6 summit is one of several major events that GW6 hosts each year to support the professional development and community building of EECS graduate women and underrepresented genders. Recently, GW6 partnered with the newly announced EECS Thriving Stars initiative to help further improve gender representation and increase support for underrepresented genders. The GW6 summit complements a series of research summits being planned under this initiative, providing a platform for women and other underrepresented genders to share their research.

“I loved [the GW6 summit],” says Mattos Da Silva. “Definitely one of the best experiences I’ve had at MIT so far.”



from MIT News https://ift.tt/3Gvoaez

Tiny machine learning design alleviates a bottleneck in memory usage on internet-of-things devices

Machine learning provides researchers with powerful tools to identify and predict patterns and behaviors, as well as to learn, optimize, and perform tasks. Applications range from vision systems on autonomous vehicles and social robots to smart thermostats to wearable and mobile devices like smartwatches and apps that can monitor health changes. While these algorithms and their architectures are becoming more powerful and efficient, they typically require tremendous amounts of memory, computation, and data to train and make inferences.

At the same time, researchers are working to reduce the size and complexity of the devices that these algorithms can run on, all the way down to a microcontroller unit (MCU) that’s found in billions of internet-of-things (IoT) devices. An MCU is a memory-limited minicomputer housed in a compact integrated circuit that lacks an operating system and runs simple commands. These relatively cheap edge devices require little power, computing, and bandwidth, and offer many opportunities to inject AI technology to expand their utility, increase privacy, and democratize their use — a field called TinyML.

Now, an MIT team working in TinyML in the MIT-IBM Watson AI Lab and the research group of Song Han, assistant professor in the Department of Electrical Engineering and Computer Science (EECS), has designed a technique to shrink the amount of memory needed even further, while improving its performance on image recognition in live videos.

“Our new technique can do a lot more and paves the way for tiny machine learning on edge devices,” says Han, who designs TinyML software and hardware.

To increase TinyML efficiency, Han and his colleagues from EECS and the MIT-IBM Watson AI Lab analyzed how memory is used on microcontrollers running various convolutional neural networks (CNNs). CNNs are biologically inspired models, patterned after neurons in the brain, that are often applied to evaluate and identify visual features within imagery, like a person walking through a video frame. In their study, they discovered an imbalance in memory utilization, causing front-loading on the computer chip and creating a bottleneck. By developing a new inference technique and neural architecture, the team alleviated the problem and reduced peak memory usage by four to eight times. Further, the team deployed it on their own tinyML vision system, equipped with a camera and capable of human and object detection, creating its next generation, dubbed MCUNetV2. When compared to other machine learning methods running on microcontrollers, MCUNetV2 outperformed them with high accuracy on detection, opening the doors to additional vision applications not before possible.

The results will be presented in a paper at the conference on Neural Information Processing Systems (NeurIPS) this week. The team includes Han, lead author and graduate student Ji Lin, postdoc Wei-Ming Chen, graduate student Han Cai, and MIT-IBM Watson AI Lab Research Scientist Chuang Gan.

A design for memory efficiency and redistribution

TinyML offers numerous advantages over deep machine learning that happens on larger devices, like remote servers and smartphones. These, Han notes, include privacy, since the data are not transmitted to the cloud for computing but processed on the local device; robustness, as the computing is quick and the latency is low; and low cost, because IoT devices cost roughly $1 to $2. Further, some larger, more traditional AI models can emit as much carbon as five cars in their lifetimes, require many GPUs, and cost billions of dollars to train. “So, we believe such TinyML techniques can enable us to go off-grid to save the carbon emissions and make the AI greener, smarter, faster, and also more accessible to everyone — to democratize AI,” says Han.

However, small MCU memory and digital storage limit AI applications, so efficiency is a central challenge. MCUs contain only 256 kilobytes of memory and 1 megabyte of storage. By comparison, mobile AI on smartphones and cloud computing may have 256 gigabytes and terabytes of storage, respectively, along with 16,000 and 100,000 times more memory. Because memory is such a precious resource, the team wanted to optimize its use, so they profiled the MCU memory usage of CNN designs — a task that had been overlooked until now, Lin and Chen say.

Their findings revealed that memory usage peaked in the first five convolutional blocks, out of about 17. Each block contains many connected convolutional layers, which help to filter for the presence of specific features within an input image or video, creating a feature map as the output. During the initial memory-intensive stage, most of the blocks operated beyond the 256KB memory constraint, leaving plenty of room for improvement. To reduce the peak memory, the researchers developed a patch-based inference schedule, which operates on only a small fraction, roughly 25 percent, of the layer’s feature map at one time, before moving on to the next quarter, until the whole layer is done. This method saved four to eight times the memory of the previous layer-by-layer computational method, without adding latency.
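A back-of-the-envelope calculation shows why the patch-based schedule helps. The layer shapes in this Python sketch are hypothetical, not MCUNetV2’s real architecture; the arithmetic simply illustrates how holding roughly a quarter of each early feature map in memory at a time can pull the peak under the 256KB constraint.

```python
# Toy peak-memory estimate: layer-by-layer vs. patch-based inference.
# Feature-map shapes (height, width, channels) are made up for illustration.
feature_maps = [(144, 144, 16), (144, 144, 16), (72, 72, 32), (72, 72, 32)]

def mem_kb(shape):
    h, w, c = shape
    return h * w * c / 1024.0    # int8 activations: one byte per element

# Layer-by-layer: a layer's whole input and output maps coexist in memory.
layer_peak = max(mem_kb(a) + mem_kb(b)
                 for a, b in zip(feature_maps, feature_maps[1:]))

# Patch-based: only ~25 percent of each map is resident at a time
# (ignoring the small overlap between patches discussed below).
patch_peak = layer_peak / 4.0

print(f"layer-by-layer peak: {layer_peak:.0f} KB")  # ~648 KB, over budget
print(f"patch-based peak:    {patch_peak:.0f} KB")  # ~162 KB, fits in 256 KB
```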

“As an illustration, say we have a pizza. We can divide it into four chunks and only eat one chunk at a time, so you save about three-quarters. This is the patch-based inference method,” says Han. “However, this was not a free lunch.” Like photoreceptors in the human eye, the patches can only take in and examine part of an image at a time; this receptive field is a patch of the total image or field of view. As the size of these receptive fields (or pizza slices, in this analogy) grows, the overlap between them increases, which amounts to redundant computation that the researchers found to be about 10 percent. The researchers proposed to also redistribute the neural network across the blocks, in parallel with the patch-based inference method, without losing any of the accuracy in the vision system. However, the question remained of which blocks needed the patch-based inference method and which could use the original layer-by-layer one, together with the redistribution decisions; hand-tuning all of these knobs would be labor-intensive, a task better left to AI.

“We want to automate this process by doing a joint automated search for optimization, including both the neural network architecture, like the number of layers, number of channels, the kernel size, and also the inference schedule including number of patches, number of layers for patch-based inference, and other optimization knobs,” says Lin, “so that non-machine learning experts can have a push-button solution to improve the computation efficiency but also improve the engineering productivity, to be able to deploy this neural network on microcontrollers.”
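A toy version of that joint search might look like the following Python sketch, in which every number is a made-up stand-in: it enumerates a few architecture knobs (channel width, kernel size) and one schedule knob (patches per layer), keeps only configurations whose estimated peak memory fits a 256KB budget, and ranks the survivors with a crude capacity proxy. The real system replaces both the memory estimate and the proxy with measured and learned quantities.

```python
# Toy joint search over architecture and inference-schedule knobs.
import itertools

def peak_memory_kb(channels, kernel, patches):
    # Hypothetical analytic estimate: activations shrink with more patches,
    # weights grow with kernel size and channel width.
    activations = 144 * 144 * channels * 2 / 1024 / patches
    weights = kernel * kernel * channels * channels / 1024
    return activations + weights

def capacity_proxy(channels, kernel, patches):
    # Stand-in for predicted accuracy; penalize patch-overlap overhead.
    return channels * kernel - 2 * (patches - 1)

search_space = itertools.product([8, 16, 24, 32],  # channels
                                 [3, 5, 7],        # kernel size
                                 [1, 2, 4])        # patches per layer

feasible = [cfg for cfg in search_space if peak_memory_kb(*cfg) <= 256]
best = max(feasible, key=lambda cfg: capacity_proxy(*cfg))
print("best (channels, kernel, patches):", best,
      f"~{peak_memory_kb(*best):.0f} KB")
```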

A new horizon for tiny vision systems

The co-design of the network architecture with the neural network search optimization and inference scheduling provided significant gains and was adopted into MCUNetV2; it outperformed other vision systems in peak memory usage, and image and object detection and classification. The MCUNetV2 device includes a small screen and a camera, and is about the size of an earbud case. Compared to the first version, the new version needed four times less memory for the same amount of accuracy, says Chen. When placed head-to-head against other tinyML solutions, MCUNetV2 was able to detect the presence of objects in image frames, like human faces, with an improvement of nearly 17 percent. Further, it set a record for accuracy, at nearly 72 percent, for thousand-class image classification on the ImageNet dataset, using 465KB of memory. The researchers also tested for what’s known as visual wake words, or how well their MCU vision model could identify the presence of a person within an image, and even with a limited memory of only 30KB, it achieved greater than 90 percent accuracy, beating the previous state-of-the-art method. This means the method is accurate enough and could be deployed to help in, say, smart-home applications.

With the high accuracy and low energy utilization and cost, MCUNetV2’s performance unlocks new IoT applications. Due to their limited memory, Han says, vision systems on IoT devices were previously thought to be only good for basic image classification tasks, but their work has helped to expand the opportunities for TinyML use. Further, the research team envisions it in numerous fields, from monitoring sleep and joint movement in the health-care industry to sports coaching and movements like a golf swing to plant identification in agriculture, as well as in smarter manufacturing, from identifying nuts and bolts to detecting malfunctioning machines.

“We really push forward for these larger-scale, real-world applications,” says Han. “Without GPUs or any specialized hardware, our technique is so tiny it can run on these small cheap IoT devices and perform real-world applications like these visual wake words, face mask detection, and person detection. This opens the door for a brand-new way of doing tiny AI and mobile vision.”

This research was sponsored by the MIT-IBM Watson AI Lab, Samsung, Woodside Energy, and the National Science Foundation.



from MIT News https://ift.tt/33cMoMe

Climbing new heights across New England

The MIT Outing Club (MITOC) is dedicated to helping the MIT and Cambridge communities enjoy the great outdoors. Whether it's hiking, climbing, skiing, biking, camping, backpacking, snowshoeing, or canoeing, think of an outdoor activity and they probably offer it.

MITOC is a network of MIT community members — students, staff, alumni, faculty — and even affiliates from other area schools who come together for year-round outdoor recreation in the company of other outdoor recreation enthusiasts. Together, new and longstanding members have the thrilling opportunity to experience some of the highest, widest, most scenic vistas in the New England region.

One of the group’s largest programs in the warmer months, School of Rock has been running since 2014. However, MITOCers were rock climbing as early as the 1940s and have pioneered many climbs in the Northeast since the 1950s. “The program is a great way to teach people how to rock climb outside and advance their skills to more complex styles of climbing,” says Nicolas Romeo, a graduate student in the Department of Physics. “It’s a great way for us to foster our climbing community and recruit the next generation of leaders.”

MITOC members travel to their destinations by carpooling to small and large cliffs in the Northeast, known to climbers as “crags.” Rumney or Cathedral Ledges in New Hampshire, Farley Ledges in central Massachusetts, and the Shawangunk Ridge near New Paltz, New York, are locations the group often frequents for its daring climbs.

Building community outdoors

Over the last decade, rock climbing has continued to grow in popularity, especially after sport climbing made its Olympic debut during the 2020 Tokyo Summer Games. Climbing and hiking are MITOC’s core activities, and “climbing has a higher barrier of entry than hiking because of the increased risk and technical skills required to climb safely,” says Cole Crawford, a Harvard University affiliate and member of MITOC.

Community safety and well-being are top priorities of MITOC. The club has a rigorous leadership selection process that involves a substantial application and recommendation system. Climbing leaders are required to have adequate technical, physical, and interpersonal skills in order to lead specific trips. In addition, climbing leaders have formal first-aid training for certain programs. Participants in climbs organized by MITOC and School of Rock are required to attend a number of mandatory lectures and technical review sessions prior to trips in order to learn skills such as building and cleaning anchors, safe belaying and rappelling, placing gear efficiently, picking appropriate climbs, and more.

Once safety measures are reviewed and in place, community members can choose between two climbing tracks: the sport track or the traditional climbing track. Beginner climbers typically start on the sport track, a roped style of climbing whose goal is to bring indoor climbers to the point where they can confidently and safely climb outdoors on permanently bolted sport routes up to 35 meters in length. The traditional track, also known as the “trad track,” is a more advanced style of roped climbing that uses no permanent equipment. Trad climbing requires more gear and more rock-climbing experience, and is more difficult to get into safely without active mentoring.

The primary goal of MITOC’s School of Rock program is to help people become self-sufficient outdoor climbers. Crawford notes that climbing partnerships often form and grow organically out of the dedicated time spent climbing together and sharing adventures high on a remote mountain. Alongside the community-building aspect of the sport, rock climbing is known to be a great workout with strong benefits for an individual’s mental, physical, emotional, and social well-being. Also, there are climbs for every ability level. “It is not just about strength. A large part of the sport is learning how to overcome the fear of falling, deciphering complex sequences of movement, and staying focused through them,” says Romeo. “Math and climbing are really the only things that get my brain to focus like that, and I love it.”

To some members, rock climbing is like solving a puzzle that requires focus. “I love it because it is both a great workout and involves some level of mental engagement that lifting or running or something doesn’t really have,” says Grady Thomas, a sophomore in electrical engineering and computer science. “The problem-solving aspect of it is really fun. It is also a great excuse to get outside and hang out in a pretty place.”

While there can be a number of barriers to outdoor recreation — money for transportation or gear, lack of experience, or lack of partners — the School of Rock program tries its best to eliminate these barriers and allow community members to learn how to climb outside safely. “We really just want to get people stoked on climbing outside!” says Crawford.

As the weather turns colder, MITOC is preparing for Winter School, its largest annual event held during MIT’s Independent Activities Period (IAP), and they encourage anyone interested to join them outside in January.



from MIT News https://ift.tt/31IyOQ9

With “Hello!” as its theme, 2.009 returns to the stage

On Monday night, Kresge Auditorium was lit up in the colors of the rainbow as a vibrant welcome for the final presentations of 2.009, MIT’s popular Product Design Processes course. After going virtual in 2020, the annual event was back in exuberant, pom-pom-waving form, with Covid-19 precautions in place to help ensure a safe and spectacular in-person show.

To attend the night’s festivities, everyone 12 years and older was required to be vaccinated, and all guests were required to wear masks indoors. At the door, course members handed out KF-94 masks to those who did not have a comparably protective face covering. With these safety measures in place, nearly 1,000 attendees, including more than 50 course staff and volunteers, filled Kresge to capacity as this year’s 149 students, working in eight design teams, prepared to pitch and demonstrate their new products. Another 4,100-plus viewers tuned in to watch live online.

The products are the culmination of 2.009, a course that gives students a taste of what engineers in a product development firm might go through to design a new product. Students are grouped into color-coded design teams and work together through the semester to envision, design, build, and draw up a business plan for a product, inspired each year by a different theme.

This year’s theme was simply, “Hello!” which course instructors chose as “a greeting, a beginning, a friendly signal, and an invitation to engage.” Students were encouraged to design products that would welcome users to new and meaningful experiences.

Greetings, Earthlings

The night kicked off in playful fashion with a video of a cartoon alien, which made similar appearances throughout the night. In the opening video, the animated visitor hovered over Kresge Auditorium, playing a five-note tune that mirrored the five letters in the word “Hello.” The tune was echoed by Kresge’s pipe organ, played live in the auditorium by an organist who then riffed on the musical greeting, transforming it into a sprawling improvisation. A house band then revved up the proceedings by belting out a rendition of Måneskin’s “Beggin,” sung to lyrics tailored to 2.009:

We’re designin’

Double oh-ninin’

Now launching our prototype, oh yeah!

Onstage, large illuminated block letters spelled out the word HELLO, then parted in the middle with a dramatic whoosh of smoke, to usher each team onstage to pitch their products. Leading the festivities was 2.009 instructor David Wallace, professor of mechanical engineering and the event’s longtime master of ceremonies, who sported his signature rainbow-sequined vest and top hat.

Staged pitches

The Blue Team was first to take the stage, with an eye-opening statistic: Over 2 million people worldwide have photosensitive epilepsy, a condition in which flashing lights can trigger debilitating seizures. The team’s solution is Eclipse, a pair of glasses that automatically darkens in response to flashing lights in the environment. The glasses have a built-in microprocessor and circuit board that processes light levels and sends a signal through the glasses to block incoming light if it senses flashing within a range that doctors have identified as inducing seizures. As part of its business plan, the team estimates the glasses could retail for around $600, potentially offering patients “a new way to see the world.”
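For a sense of what such a detection loop could involve, here is a speculative Python sketch, not the Blue Team’s actual firmware: it estimates the dominant flicker frequency in a window of light-sensor readings and trips when that frequency falls in the 3 to 30 Hz band commonly cited in photosensitive-epilepsy guidance. The sampling rate, peak test, and simulated signals are all hypothetical.

```python
# Speculative sketch of a flash-detection loop for light-reactive glasses.
import numpy as np

SAMPLE_HZ = 120                  # hypothetical light-sensor sampling rate
HAZARD_BAND = (3.0, 30.0)        # flash frequencies commonly cited as risky

def flashing_detected(samples, sample_hz=SAMPLE_HZ):
    """True if the dominant brightness oscillation sits in the hazard band."""
    samples = np.asarray(samples, dtype=float)
    spectrum = np.abs(np.fft.rfft(samples - samples.mean()))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_hz)
    dominant = freqs[np.argmax(spectrum)]
    strong = spectrum.max() > 5.0 * (spectrum.mean() + 1e-9)  # crude peak test
    return strong and HAZARD_BAND[0] <= dominant <= HAZARD_BAND[1]

# Simulated check: a 10 Hz flash should trip the detector; steady light should not.
t = np.arange(120) / SAMPLE_HZ
flashing = 100 + 80 * (np.sin(2 * np.pi * 10 * t) > 0)   # square-ish 10 Hz flash
steady = 100 + np.random.default_rng(1).normal(0, 1, 120)
print(flashing_detected(flashing))   # expected: True  -> darken the lenses
print(flashing_detected(steady))     # expected: False -> lenses stay clear
```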

Next was the Orange Team, with Escalate — a lightweight wheelchair attachment that helps boost a user up and over a curb. Escalate consists of two antislip rods that attach to the back of a wheelchair and can extend or retract wirelessly. A volunteer demonstrated the device onstage by first popping a wheelie to lift the front of his wheelchair over a manufactured curb. He then used a remote control to extend the rods toward the ground, which in turn lifted the back of the chair up and over the curb. Once stable, the user pressed a button to retract the rods. The system can lift up to 400 pounds in under 20 seconds, and should retail at around $900, with the potential to help wheelchair users “rise above life’s challenges.”

The Purple Team made its entrance with Vulcan, a cordless handheld tool designed to make the task of soldering less cumbersome. Soldering is the process of applying heat to join together different metals such as wires. Often, multiple people are required to carefully hold wires together and solder them in place. Vulcan combines the work of four hands into one tool. A team member demonstrated the product by soldering two wires, live onstage. He slid the wires through the tool’s clamping mechanism, similar to a hole-puncher. A small butane torch inside the tool heated the wires in a few seconds, soldering them together. The team estimates Vulcan can be a time-saving advantage for hobbyists, makers, and engineers, potentially selling for around $150.

The Red Team was next with Dextra, a “prosthesis accessory system” designed to simplify everyday tasks for people with below-elbow hook prosthetics. Such prosthetic hooks enable users to lift bags, for example, but other daily tasks such as opening jars can be more difficult. Instead, the team designed a circular connector that fits onto the wrist of a hook prosthetic, along with accessory tools that can slide into the connector to support and assist the user. A prosthetic user, who also happens to be a professional chef, demonstrated the tool onstage in a large kitchen setup. He inserted and used multiple tools to help him slice an onion and open a jar of pickles. The team estimates Dextra can retail at around $750, and aims to expand to prosthetic accessories for the garden, art studio, and other environments.

Colorful assists

The Green Team presented Delta Therapy, a system that provides continuous hot and cold therapy for knee surgery recovery patients. Following knee surgeries, patients must alternately apply ice packs and heat pads, which quickly lose their temperature and routinely need replacing. Delta Therapy includes a soft, flexible bandage that contains fluid piping, connected to a compact housing, inside of which is a system that continuously cools or heats the fluid as it flows through. To demo the product, the team invited onstage MIT Undergraduate Association President Danielle Geathers, who underwent knee surgery three years ago. The team wrapped the bandage around Geathers’ knee, taking its temperature before and after the presentation, and found it dropped by 16 degrees in six minutes. They estimate that Delta Therapy, “the coolest way to recover,” could retail for around $5,500.

The Silver Team followed with Aisle Assist, a wheelchair designed to help people into airplane seats. Currently, when a wheelchair user boards a plane, they ride up to the gate, where an attendant helps them into an airplane wheelchair designed to fit through a plane’s narrow aisles. The attendant then must lift the person up and into their assigned seat — an awkward and physically demanding task. Aisle Assist is designed to replace the conventional airplane wheelchair. The team set the stage with rows of airplane seats, and wheeled an able-bodied team member out in Aisle Assist. The new wheelchair has a stabilizing element, and a seat that can slide a person across and into an airplane seat with minimal assistance. At a cost of $7,000, the team hopes the product will give passengers “power to inspire confidence.”

The Pink Team was next with ReVise, a new take on the workbench vise. To show the limitations of a traditional vise, a team member attempted to close a vise, set up onstage, around an uncooked egg, drawing groans from the crowd when it inevitably splattered. ReVise resembles a conventional vise, but with soft, granule-filled pads where a vise’s hard blocks would be. The pads connect to a pneumatic system that pumps air in and out of them, adjusting their stiffness to match the object they hold. The same team member closed ReVise around another uncooked egg, this time without cracking it. The team envisions selling ReVise to engineering and trade schools, at around $450.

Finally, the Yellow Team presented Palette, a suitcase-sized device, fitted with paint tint cartridges, that automatically mixes tints to a desired shade. The product is meant for housepainters, who can mix specific tints on-site rather than spend time doing so at a hardware store. The team wheeled out two yellow walls, each needing touchups with a color named “Viking Yellow.” One student used a store-mixed can of paint, while another programmed Palette to reproduce the same shade onstage. The system quickly mixed the color, and both members repainted their walls, demonstrating that the colors matched. The system, they estimate, could sell for around $1,800, promising to “color your world faster.”

“To a better world”

As has become a 2.009 tradition, the student presentations ended with a group gift to Wallace. This year, they presented the professor with a set of newly minted nonfungible tokens (NFTs), in the form of past 2.009 Wallace avatars, along with a token for each product designed this year, signed by every student.

In closing the event, Wallace led the audience through a round of tejime, a Japanese custom of hand-clapping and whooping, often performed to conclude a celebration. To his students, he had one message:

“Please see your potential to bring beauty to this Earth. Imagine the stories that you have yet to tell, and dream about the lives you improve through your endeavors. And say hello to a better world.”



from MIT News https://ift.tt/3dzMBLs

Tuesday, December 7, 2021

Study reveals a protein’s key contribution to heterogeneity of neurons

The versatility of the nervous system comes not only from the diversity of ways in which neurons communicate in circuits, but also from their “plasticity,” or ability to change those connections when new information has to be remembered, when their circuit partners change, or when other conditions emerge. A new study by neuroscientists at The Picower Institute for Learning and Memory of MIT shows how just one protein situated on the front lines of neural connections, or synapses, can profoundly change how some neurons communicate and implement plasticity.

The team found that expression of the tomosyn protein is a major determining factor in whether the “presynaptic” neurons that send signals to control muscle contraction will be “phasic,” meaning they quickly release a lot of the neurotransmitter glutamate across synapses to drive communication, or will be “tonic,” meaning they will apportion glutamate in measured doses, keeping some in reserve. Because tonic neurons have those reserves, the study shows, they can step up glutamate release when receptors across the synapse begin to falter, a plasticity known as presynaptic homeostatic potentiation (PHP). Phasic neurons, with little or no tomosyn-mediated reserve, cannot respond similarly.

“If you break the synapse on the postsynaptic side, the presynaptic neuron will recognize that and generate more output to keep the overall synaptic response the same. This critical type of adaptive plasticity requires tomosyn,” says Troy Littleton, senior author of the new study in eLife and the Menicon Professor of Neuroscience in The Picower Institute and MIT’s departments of Biology and Brain and Cognitive Sciences. “Diversity in the ability of different neurons to express this form of plasticity depends on whether they normally express the protein or not.”

Understanding tomosyn’s role in neurons is important not only for defining the fundamental workings of synapses and plasticity mechanisms, a long-term goal of Littleton’s lab, but also because, like flies, humans make tomosyn proteins and have tonic and phasic classes of neurons.

A decoy diversion

Before the study, tomosyn was known to interact with the “SNARE” molecular machinery of presynaptic neurons. SNARE proteins dock packets, or vesicles, of neurotransmitters such as glutamate on the membrane of neurons so they can be released across the synapse. Tomosyn was also suspected to be a target of an enzyme considered important for learning, memory, and plasticity, Littleton says.

Picower Fellow and former graduate student Chad Sauvola led the new study in Littleton’s lab to determine exactly what tomosyn does. He picked up on work started by co-author Nicole Aponte-Santiago PhD ’20, who had made (but not yet tested) mutations of the tomosyn gene in her research on tonic and phasic neuron plasticity.

When Sauvola started recording synaptic transmission from neurons with the tomosyn mutations, which were designed to disable the protein, he saw that the synapses engaged in much more glutamate transmission, with the muscles having much larger responses than normal. The loss of normal tomosyn apparently took the brakes off of glutamate release. Notably, he could repair the effects of the mutation by swapping in the human tomosyn protein, suggesting the protein’s function is conserved across species.

To learn how tomosyn works, Sauvola studied its structure and found the protein prevented synaptic vesicles from docking to the membrane by acting as a decoy that sequesters SNARE proteins on the plasma membrane. He confirmed this with electron microscopy of neurons: synapses lacking tomosyn showed 50 percent more vesicles at the membrane than those with tomosyn present. He also purposely stimulated synapses to encourage glutamate release and found that while tomosyn normally kept a lid on activity in wild-type animals, the mutants could not properly brake the amount of synaptic transmission.

A stark difference

Given the difference in glutamate release behavior between tonic and phasic neurons, Sauvola decided to examine tomosyn levels in those cell types. The weaker tonic neurons turned out to have more than twice as much tomosyn as the stronger phasic neurons, suggesting that tomosyn levels could account for the difference in glutamate release style.

To determine whether tomosyn had such a pivotal role, Sauvola did more stimulation experiments in the two neuronal types. After stimulation in normal animals, phasic neurons emitted much more glutamate than tonic neurons, as expected. In the tomosyn mutants, however, the two neuronal classes behaved similarly, with tonic neurons releasing glutamate much like their phasic counterparts.

Enabling plasticity

If tomosyn was holding back vesicle release of glutamate specifically in tonic neurons, then that might account for why only tonic neurons are able to exhibit PHP plasticity. Sure enough, when Sauvola disrupted glutamate receptors in muscle cells to induce the PHP response, he found that tonic neurons lacking tomosyn, just like control phasic neurons, could not trigger this form of plasticity. But when he looked at the response in normal tonic neurons, he found that synapse by synapse there were major increases in glutamate release — even synapses that showed very little propensity beforehand seemed to gain substantial capability to release synaptic signals.

“That’s really an amazing discovery that I hadn’t anticipated,” Littleton says. “It’s very surprising to see that these weak synapses could act much more mature on a very rapid timescale.”

One of the next steps for the lab will be to figure out what molecular interaction causes tomosyn to ease off the brakes when PHP is needed, Littleton says. Another future direction will be to look at other neuron types, especially in the brain, to see how tomosyn levels vary and how that affects their synaptic output.

But the new results definitively show that tomosyn’s ability to prevent SNARE binding of vesicles and resulting glutamate release makes a dramatic difference in neural communication style between tonic and phasic neurons.

In addition to Sauvola, Littleton, and Aponte-Santiago, the paper’s other authors are Yulia Akbergenova and Karen Cunningham.

The National Institutes of Health and the JPB Foundation provided funding for the research.



from MIT News https://ift.tt/3rGL4eI