Saturday, February 26, 2022

Bridging the worlds of research and industry

Graduate student Nidhi Juthani was not content with just one graduate degree. Instead, she decided to earn two in one fell swoop, via MIT’s PhD in Chemical Engineering Practice (PhDCEP) program, which allows her to obtain both a doctorate and an MBA in a single course of study. The combination is a perfect fit for Juthani, who wants to pursue a career bridging scientific research and industry.

An undergraduate internship helped spark her interest in combining the two fields. As a chemical engineering manufacturing process intern at Procter & Gamble, she worked in the business unit that produced feminine care products, starting her days on the manufacturing floor at 6:30 a.m. There, she gained an appreciation for the efficiency, procedures, and processes required to manufacture a product. “I could also see how a material [the absorbent core of the pads] that was conceivably once a lab project maybe 10 years ago had to be scaled up to get mass produced,” she says.

Now in her fifth year at MIT, Juthani has already completed her PhD, working in the lab of chemical engineering professor Patrick Doyle. Her research involved designing a microRNA-based diagnostic that could potentially help with early detection of certain cancers.  

Juthani began the MBA portion of the program at the Sloan School of Management last fall. She misses the freedom she had as a doctoral student to work on her own schedule. But the experience has been worthwhile. “My worldview has definitely been expanded,” she says. “I’ve learned about different industries and fields that I didn’t know existed, learned about different cultures and countries, ranging from Brazilian New Year’s traditions to how hierarchy works aboard a Navy ship, and developed a great support network.”

Finding a “perfect fit” graduate program

A native of Waterloo, Ontario, Juthani knew early on that she wanted to pursue a PhD. When she was 16, her family attended an open house at the University of Waterloo’s new Institute for Quantum Computing. The connections she made there led to the opportunity to work in Professor David Cory’s lab as a high schooler. She credits Cory, a physical chemist, with “opening up this whole world of academia to me, [a world] that I quite literally didn’t know existed before that.” Moreover, he planted the seed in her mind that she should pursue engineering and maybe a PhD. He even suggested that she consider MIT — a school that seemed out of reach to her at the time.

Juthani went on to study chemical engineering at the University of Waterloo. She was particularly enthralled by the school’s co-op program, which enabled her to try out a diverse array of careers. “You get to know what you like, but more importantly, you get to know what you don’t like,” she recalls.

Her first internship took her to Cambridge, Massachusetts, where she did research in the Aizenberg Lab at Harvard’s Wyss Institute for Biologically Inspired Engineering. “I had an immense amount of freedom to structure my entire project,” Juthani recalls. She immersed herself in her work, which ultimately helped lead to the publication of two papers.

Employing a different set of skills, Juthani next worked at a materials science and data analytics startup, focused on the energy sector, that had spun out of the Aizenberg Lab. As employee number five, she learned how to be a jack-of-all-trades, doing anything and everything, including taking calls at 6 a.m. from customs officials to ensure orders arrived on time. (Happily, she also met her now-husband that summer, at another startup in the same building.)

In all, Juthani tackled five distinct internships during her undergraduate career. While each one helped inform her thinking about her professional trajectory, there was no question in her mind that she still wanted to pursue a PhD after graduation. However, she also recognized that a lifelong research career would not fulfill her. She wanted the scientific foundation that only a doctorate can provide, but ultimately hoped to focus on the business management of science. To succeed at this kind of career, Juthani wanted to learn to be “bilingual” in the language of science and the language of business, so that she could serve as a bridge between the technical and business teams on a project.

A friend suggested that the unique PhD in Chemical Engineering Practice program at MIT would be a perfect fit. The program is very small, with only two to four students per year. “It’s so specifically geared for people who want to go into business out of a PhD that it just made sense for me,” Juthani says.

Making every minute count

Asked to describe her research, Juthani excitedly launches into a detailed technical discussion, noting that she hasn’t been able to explain her work in such depth to her MBA classmates. Her PhD focused on developing hydrogel microparticles for the detection of microRNAs and extracellular vesicles (EVs), both of which serve as biomarkers for a variety of diseases, including cancer, and may make cancer detection possible before it manifests as a tumor. “There is a need for better tools to enable research and diagnostics with EVs, since it is such a nascent field and there is much to learn,” she says.

Juthani developed a colorimetric assay using the microparticles, whose different shapes enable detection of multiple targets simultaneously. The round particles can be used for one specific microRNA, and the cuboid particles can be used to identify a different microRNA. Moreover, the process doesn’t require specialized equipment; the particles can be imaged with just a phone camera. Her animated description of the color theory involved in creating “perfect images” of the microparticles for her thesis is just one more manifestation of her many diverse passions.

Ever since she arrived at MIT, Juthani has had to grapple with the aggressive deadline of the PhDCEP program, which typically requires completing the PhD within three years. Despite a year-long setback due to the pandemic, she defended last August and started the MBA in September.

Changing gears has been eye-opening. “The MBA experience has been completely different from anything I’ve experienced in engineering — grad or undergrad — and in research,” she says. “There is a significant emphasis on group work, discussion-driven learning, and learning from each other’s experiences … and I’ve also learned how to think in a more systematic, framework-driven manner.” After she graduates, Juthani is considering life sciences consulting or venture capital, so she can use her business experience, satisfy her scientific side, and be exposed to a wide variety of companies and projects.

Outside the lab and classroom, Juthani seems to make the most of every minute. She has attended countless seminars, taken pottery classes, participated in MIT Figure Skating, joined a Bollywood dance group, and made time for coffee dates with friends. And yet, her advice to other students is to “take time to look back and see how far you’ve come.” It’s a practice that has served her well. During some difficult weeks of her PhD experience, when she was overwhelmed by seemingly impossible problem sets, she would take a slow stroll down the Infinite Corridor. She says seeing the stream of flyers that line the walls prompted her to savor all the possibilities that MIT offers — and to remind herself of how lucky she is.

“I have this great opportunity to be here at MIT, and I want to try to do as much as possible,” she says. “I want to come out of MIT satisfied that I learned and tried new things.” True to form, Juthani politely says goodbye and rushes off to her glassblowing class, one of MIT’s most iconic experiences.



from MIT News https://ift.tt/rJjpL1Q

Thursday, February 24, 2022

A new, inexpensive catalyst speeds the production of oxygen from water

An electrochemical reaction that splits apart water molecules to produce oxygen is at the heart of multiple approaches aiming to produce alternative fuels for transportation. But this reaction has to be facilitated by a catalyst material, and today’s versions require the use of rare and expensive elements such as iridium, limiting the potential of such fuel production.

Now, researchers at MIT and elsewhere have developed an entirely new type of catalyst material, called a metal hydroxide-organic framework (MHOF), which is made of inexpensive and abundant components. The family of materials allows engineers to precisely tune the catalyst’s structure and composition to the needs of a particular chemical process, and it can then match or exceed the performance of conventional, more expensive catalysts.

The findings are described today in the journal Nature Materials, in a paper by MIT postdoc Shuai Yuan, graduate student Jiayu Peng, Professor Yang Shao-Horn, Professor Yuriy Román-Leshkov, and nine others.

The oxygen evolution reaction is one of the reactions common to the electrochemical production of fuels, chemicals, and materials. These processes include the generation of hydrogen, a byproduct of the oxygen evolution, which can be used directly as a fuel or undergo chemical reactions to produce other transportation fuels; the manufacture of ammonia, for use as a fertilizer or chemical feedstock; and the reduction of carbon dioxide in order to control emissions.

But without help, “these reactions are sluggish,” Shao-Horn says. “For a reaction with slow kinetics, you have to sacrifice voltage or energy to promote the reaction rate.” Because of the extra energy input required, “the overall efficiency is low. So that’s why people use catalysts,” she says, as these materials naturally promote reactions by lowering energy input.
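
For context, the underlying half-reaction is textbook electrochemistry, not a result of this study. In acidic solution, the oxygen evolution reaction and its equilibrium potential are

$$ 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-, \qquad E^\circ = 1.23\ \mathrm{V}, $$

and the extra voltage that sluggish kinetics demand is the overpotential

$$ \eta = E_{\mathrm{applied}} - E^\circ. $$

A better catalyst drives the same current at a smaller $\eta$, which is the reduced energy input Shao-Horn describes.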

But the catalysts available until now “are all relying on expensive materials or late transition metals that are very scarce, for example iridium oxide, and there has been a big effort in the community to find alternatives based on Earth-abundant materials that have the same performance in terms of activity and stability,” Román-Leshkov says. The team says they have found materials that provide exactly that combination of characteristics.

Other teams have explored the use of metal hydroxides, such as nickel-iron hydroxides, Román-Leshkov says. But such materials have been difficult to tailor to the requirements of specific applications. Now, though, “the reason our work is quite exciting and quite relevant is that we’ve found a way of tailoring the properties by nanostructuring these metal hydroxides in a unique way.”

The team borrowed from research that has been done on a related class of compounds known as metal-organic frameworks (MOFs), which are a kind of crystalline structure made of metal oxide nodes linked together with organic linker molecules. By replacing the metal oxide in such materials with certain metal hydroxides, the team found, it became possible to create precisely tunable materials that also had the necessary stability to be potentially useful as catalysts.

“You put these chains of these organic linkers next to each other, and they actually direct the formation of metal hydroxide sheets that are interconnected with these organic linkers, which are then stacked, and have a higher stability,” Román-Leshkov says. This has multiple benefits, he says: it allows precise control over the nanostructured patterning and over the electronic properties of the metal, and it provides greater stability, enabling the materials to stand up to long periods of use.

In testing such materials, the researchers found the catalysts’ performance to be “surprising,” Shao-Horn says. “It is comparable to that of the state-of-the-art oxide materials catalyzing the oxygen evolution reaction.”

Being composed largely of nickel and iron, these materials should be at least 100 times cheaper than existing catalysts, they say, although the team has not yet done a full economic analysis.

This family of materials “really offers a new space to tune the active sites for catalyzing water splitting to produce hydrogen with reduced energy input,” Shao-Horn says, to meet the exact needs of any given chemical process where such catalysts are needed.

The materials can provide “five times greater tunability” than existing nickel-based catalysts, Peng says, simply by substituting different metals in place of nickel in the compound. “This would potentially offer many relevant avenues for future discoveries.” The materials can also be produced in extremely thin sheets, which could then be coated onto another material, further reducing the material costs of such systems.

So far, the materials have been tested in small-scale laboratory test devices, and the team is now addressing the issues of trying to scale up the process to commercially relevant scales, which could still take a few years. But the idea has great potential, Shao-Horn says, to help catalyze the production of clean, emissions-free hydrogen fuel, so that “we can bring down the cost of hydrogen from this process while not being constrained by the availability of precious metals. This is important, because we need hydrogen production technologies that can scale.”

The research team included others at MIT, Stockholm University in Sweden, SLAC National Accelerator Laboratory, and the Institute of Ion Beam Physics and Materials Research in Dresden, Germany. The work was supported by the Toyota Research Institute.



from MIT News https://ift.tt/GDNEbwF

More sensitive X-ray imaging

Scintillators are materials that emit light when bombarded with high-energy particles or X-rays. In medical or dental X-ray systems, they convert incoming X-ray radiation into visible light that can then be captured using film or photosensors. They’re also used for night-vision systems and for research, such as in particle detectors or electron microscopes.

Researchers at MIT have now shown how one could improve the efficiency of scintillators by at least tenfold, and perhaps even a hundredfold, by changing the material’s surface to create certain nanoscale configurations, such as arrays of wave-like ridges. While past attempts to develop more efficient scintillators have focused on finding new materials, the new approach could in principle work with any of the existing materials.

Though it will require more time and effort to integrate their scintillators into existing X-ray machines, the team believes that this method might lead to improvements in medical diagnostic X-rays or CT scans, to reduce dose exposure and improve image quality. In other applications, such as X-ray inspection of manufactured parts for quality control, the new scintillators could enable inspections with higher accuracy or at faster speeds.

The findings are described today in the journal Science, in a paper by MIT doctoral students Charles Roques-Carmes and Nicholas Rivera; MIT professors Marin Soljacic, Steven Johnson, and John Joannopoulos; and 10 others.

While scintillators have been in use for some 70 years, much of the research in the field has focused on developing new materials that produce brighter or faster light emissions. The new approach instead applies advances in nanotechnology to existing materials. By creating patterns in scintillator materials at a length scale comparable to the wavelengths of the light being emitted, the team found that it was possible to dramatically change the material’s optical properties.

To make what they coined “nanophotonic scintillators,” Roques-Carmes says, “you can directly make patterns inside the scintillators, or you can glue on another material that would have holes on the nanoscale. The specifics depend on the exact structure and material.” For this research, the team took a scintillator and made holes spaced apart by roughly one optical wavelength, or about 500 nanometers (billionths of a meter).

“The key to what we’re doing is a general theory and framework we have developed,” Rivera says. This allows the researchers to calculate the scintillation levels that would be produced by any arbitrary configuration of nanophotonic structures. The scintillation process itself involves a series of steps, making it complicated to unravel. The framework the team developed involves integrating three different types of physics, Roques-Carmes says. Using this system, they have found a good match between their predictions and the results of their subsequent experiments.

The experiments showed a tenfold improvement in emission from the treated scintillator. “So, this is something that might translate into applications for medical imaging, which are optical photon-starved, meaning the conversion of X-rays to optical light limits the image quality. [In medical imaging,] you do not want to irradiate your patients with too much of the X-rays, especially for routine screening, and especially for young patients as well,” Roques-Carmes says.
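
As a back-of-envelope illustration of why brighter scintillators matter in photon-starved imaging (a sketch whose constants are made up for illustration, not taken from the paper): under shot noise, image quality scales as the square root of the number of detected optical photons, so a tenfold gain in light output permits roughly a tenfold dose reduction at the same image quality.

```python
import math

def optical_photons(dose_mGy: float, photons_per_mGy: float) -> float:
    """Detected optical photons per pixel at a given X-ray dose.

    photons_per_mGy lumps together X-ray absorption, scintillator light
    yield, and light-collection efficiency (an illustrative constant).
    """
    return dose_mGy * photons_per_mGy

def shot_noise_snr(n_photons: float) -> float:
    """Shot-noise-limited SNR scales as sqrt(N)."""
    return math.sqrt(n_photons)

baseline_yield = 1_000.0              # photons/pixel/mGy (made-up value)
enhanced_yield = 10 * baseline_yield  # the reported ~10x emission improvement

dose = 1.0  # mGy
target_snr = shot_noise_snr(optical_photons(dose, baseline_yield))

# Matching the baseline SNR with the brighter scintillator:
# sqrt(dose_new * enhanced_yield) = sqrt(dose * baseline_yield)
dose_new = dose * baseline_yield / enhanced_yield
print(f"baseline SNR at {dose:.1f} mGy: {target_snr:.1f}")
print(f"same SNR at {dose_new:.2f} mGy with the 10x brighter scintillator")
```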

“We believe that this will open a new field of research in nanophotonics,” he adds. “You can use a lot of the existing work and research that has been done in the field of nanophotonics to improve significantly on existing materials that scintillate.”

“The research presented in this paper is hugely significant,” says Rajiv Gupta, chief of neuroradiology at Massachusetts General Hospital and an associate professor at Harvard Medical School, who was not associated with this work. “Nearly all detectors used in the $100 billion [medical X-ray] industry are indirect detectors,” which is the type of detector the new findings apply to, he says. “Everything that I use in my clinical practice today is based on this principle. This paper improves the efficiency of this process by 10 times. If this claim is even partially true, say the improvement is two times instead of 10 times, it would be transformative for the field!”

Soljacic says that while their experiments proved a tenfold improvement in emission could be achieved in particular systems, by further fine-tuning the design of the nanoscale patterning, “we also show that you can get up to 100 times [improvement] in certain scintillator systems, and we believe we also have a path toward making it even better.”

Soljacic points out that in other areas of nanophotonics, a field that deals with how light interacts with materials that are structured at the nanometer scale, the development of computational simulations has enabled rapid, substantial improvements, for example in the development of solar cells and LEDs. The new models this team developed for scintillating materials could facilitate similar leaps in this technology, he says.

Nanophotonics techniques “give you the ultimate power of tailoring and enhancing the behavior of light,” Soljacic says. “But until now, this promise, this ability to do this with scintillation was unreachable because modeling the scintillation was very challenging. Now, this work for the first time opens up this field of scintillation, fully opens it, for the application of nanophotonics techniques.” More generally, the team believes that the combination of nanophotonics and scintillators might ultimately enable higher resolution, reduced X-ray dose, and energy-resolved X-ray imaging.

This work is “very original and excellent,” says Eli Yablonovitch, a professor of Electrical Engineering and Computer Sciences at the University of California at Berkeley, who was not associated with this research. “New scintillator concepts are very important in medical imaging and in basic research.”

Yablonovitch adds that the concept still needs to be proven in a practical device. “After years of research on photonic crystals in optical communication and other fields, it's long overdue that photonic crystals should be applied to scintillators, which are of great practical importance yet have been overlooked” until this work, he says.

The research team included Ali Ghorashi, Steven Kooi, Yi Yang, Zin Lin, Justin Beroz, Aviram Massuda, Jamison Sloan, and Nicolas Romeo at MIT; Yang Yu at Raith America, Inc.; and Ido Kaminer at Technion in Israel. The work was supported, in part, by the U.S. Army Research Office and the U.S. Army Research Laboratory through the Institute for Soldier Nanotechnologies, by the Air Force Office of Scientific Research, and by a Mathworks Engineering Fellowship.



from MIT News https://ift.tt/DewBqi0

Chemical synthesis yields potential antibiotic

Chemists at MIT have developed a novel way to synthesize himastatin, a natural compound that has shown potential as an antibiotic.

Using their new synthesis, the researchers were able not only to produce himastatin but also to generate variants of the molecule, some of which also showed antimicrobial activity. They also discovered that the compound appears to kill bacteria by disrupting their cell membranes. The researchers now hope to design other molecules that could have even stronger antibiotic activity.

“What we want to do right now is learn the molecular details about how it works, so we can design structural motifs that could better support that mechanism of action. A lot of our effort right now is to learn more about the physicochemical properties of this molecule and how it interacts with the membrane,” says Mohammad Movassaghi, an MIT professor of chemistry and one of the senior authors of the study.

Brad Pentelute, an MIT professor of chemistry, is also a senior author of the study, which appears today in Science. MIT graduate student Kyan D’Angelo is the lead author of the study, and graduate student Carly Schissel is also an author.

Mimicking nature

Himastatin, which is produced by a species of soil bacteria, was first discovered in the 1990s. In animal studies, it was found to have anticancer activity, but the required doses had toxic side effects. The compound also showed potential antimicrobial activity, but that potential hasn’t been explored in detail, Movassaghi says.

Himastatin is a complex molecule that consists of two identical subunits, known as monomers, that join together to form a dimer. The two subunits are hooked together by a bond that connects a six-carbon ring in one of the monomers to the identical ring in the other monomer.

This carbon-carbon bond is critical for the molecule’s antimicrobial activity. In previous efforts to synthesize himastatin, researchers have tried to make that bond first, using two simple subunits, and then to add more complex chemical groups onto the monomers.

The MIT team took a different approach, inspired by the way this reaction is performed in bacteria that produce himastatin. Those bacteria have an enzyme that can join the two monomers as the very last step of the synthesis, by turning each of the carbon atoms that need to be joined together into highly reactive radicals.

To mimic that process, the researchers first built complex monomers from amino acid building blocks, helped by a rapid peptide synthesis technology previously developed by Pentelute’s lab.

“By using solid-phase peptide synthesis, we could fast-forward through many synthetic steps and mix-and-match building blocks easily,” D’Angelo says. “That’s just one of the ways that our collaboration with the Pentelute Lab was very helpful.”

The researchers then used a new dimerization strategy developed in the Movassaghi lab to connect two complex molecules together. This new dimerization is based on the oxidation of aniline to form carbon radicals in each molecule. These radicals can react to form the carbon-carbon bond that hooks the two monomers together. Using this approach, the researchers can create dimers that contain different types of subunits, in addition to naturally occurring himastatin dimers.

“The reason we got excited about this type of dimerization is because it allows you to really diversify the structure and access other potential derivatives very quickly,” Movassaghi says.

Membrane disruption

One of the variants that the researchers created has a fluorescent tag, which they used to visualize how himastatin interacts with bacterial cells. Using these fluorescent probes, the researchers found that the drug accumulates in the bacterial cell membranes. This led them to hypothesize that it works by disrupting the cell membrane, which is also a mechanism used by at least one FDA-approved antibiotic, daptomycin.

The researchers also designed several other himastatin variants by swapping in different atoms in specific parts of the molecule, and tested their antimicrobial activity against six bacterial strains. They found that some of these compounds had strong activity, but only if they included one naturally occurring monomer along with one that was different.

“By bringing two complete halves of the molecule together, we could make a himastatin derivative with only a single fluorescent label. Only with this version could we do microscopy studies that offered evidence of himastatin’s localization within bacterial membranes, because symmetric versions with two labels did not have the right activity,” D’Angelo says.

Andrew Myers, a professor of chemistry at Harvard University, says that the new synthesis features “fascinating new chemical innovations.”

“This approach permits oxidative dimerization of fully synthetic monomer subunits to prepare the antibiotic himastatin, in a manner related to its biosynthesis,” says Myers, who was not involved in the research. “By synthesizing a number of analogs, important structure-activity relationships were revealed, as well as evidence that the natural product functions at the level of the bacterial envelope.”

The researchers now plan to design more variants that they hope might have more potent antibiotic activity.

“We’ve already identified positions that we can derivatize that could potentially either retain or enhance the activity. What’s really exciting to us is that a significant number of the derivatives that we accessed through this design process retain their antimicrobial activity,” Movassaghi says.

The research was funded by the National Institutes of Health, the Natural Sciences and Engineering Research Council of Canada, and a National Science Foundation graduate research fellowship.



from MIT News https://ift.tt/2T3avpV

Wednesday, February 23, 2022

A security technique to fool would-be cyber attackers

Multiple programs running on the same computer may not be able to directly access each other’s hidden information, but because they share the same memory hardware, their secrets could be stolen by a malicious program through a “memory timing side-channel attack.”

This malicious program notices delays when it tries to access a computer’s memory, because the hardware is shared among all programs using the machine. It can then interpret those delays to obtain another program’s secrets, like a password or cryptographic key.

One way to prevent these types of attacks is to allow only one program to use the memory controller at a time, but this dramatically slows down computation. Instead, a team of MIT researchers has devised a new approach that allows memory sharing to continue while providing strong security against this type of side-channel attack. Their method is able to speed up programs by 12 percent when compared to state-of-the-art security schemes.

In addition to providing better security while enabling faster computation, the technique could be applied to a range of different side-channel attacks that target shared computing resources, the researchers say.

“Nowadays, it is very common to share a computer with others, especially if you are doing computation in the cloud or even on your own mobile device. A lot of this resource sharing is happening. Through these shared resources, an attacker can seek out even very fine-grained information,” says senior author Mengjia Yan, the Homer A. Burnell Career Development Assistant Professor of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

The co-lead authors are CSAIL graduate students Peter Deutsch and Yuheng Yang. Additional co-authors include Joel Emer, a professor of the practice in EECS, and CSAIL graduate students Thomas Bourgeat and Jules Drean. The research will be presented at the International Conference on Architectural Support for Programming Languages and Operating Systems.

Committed to memory

One can think about a computer’s memory as a library, and the memory controller as the library door. A program needs to go to the library to retrieve some stored information, so that program opens the library door very briefly to go inside.

There are several ways a malicious program can exploit shared memory to access secret information. This work focuses on a contention attack, in which an attacker needs to determine the exact instant when the victim program is going through the library door. The attacker does that by trying to use the door at the same time.

“The attacker is poking at the memory controller, the library door, to say, ‘is it busy now?’ If they get blocked because the library door is opening already — because the victim program is already using the memory controller — they are going to get delayed. Noticing that delay is the information that is being leaked,” says Emer.
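
In schematic form, the attacker’s probe is just a timing loop. (This is a conceptual sketch only; a real contention attack times actual DRAM accesses from native code, and the threshold below is a hypothetical calibrated value.)

```python
import time

BUSY_THRESHOLD_NS = 500  # hypothetical threshold, calibrated on an idle machine

def timed_access(access) -> int:
    """Time one memory access; contention with the victim inflates latency."""
    start = time.perf_counter_ns()
    access()  # e.g., a load that must go through the memory controller
    return time.perf_counter_ns() - start

def contention_trace(access, samples: int) -> list[bool]:
    """Repeatedly ask the 'library door' whether it is busy.

    Each True marks an instant when the probe was delayed, i.e. the victim
    was using the memory controller; the sequence of these bits over time
    is the leaked access pattern.
    """
    return [timed_access(access) > BUSY_THRESHOLD_NS for _ in range(samples)]
```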

To prevent contention attacks, the researchers developed a scheme that “shapes” a program’s memory requests into a predefined pattern that is independent of when the program actually needs to use the memory controller. Before a program can access the memory controller, and before it could interfere with another program’s memory request, it must go through a “request shaper” that uses a graph structure to process requests and send them to the memory controller on a fixed schedule. This type of graph is known as a directed acyclic graph (DAG), and the team’s security scheme is called DAGguise.

Fooling an attacker

Under that rigid schedule, DAGguise will sometimes delay a program’s request until the next time it is permitted to access memory, and sometimes it will submit a fake request if the program does not need to access memory at the next scheduled interval.

“Sometimes the program will have to wait an extra day to go to the library and sometimes it will go when it didn’t really need to. But by doing this very structured pattern, you are able to hide from the attacker what you are actually doing. These delays and these fake requests are what ensures security,” Deutsch says.

DAGguise represents a program’s memory access requests as a graph, where each request is stored in a “node,” and the “edges” that connect the nodes are time dependencies between requests. (Request A must be completed before request B.) The edges between the nodes — the time between each request — are fixed.

A program can submit a memory request to DAGguise whenever it needs to, and DAGguise will adjust the timing of that request to always ensure security. No matter how long it takes to process a memory request, the attacker can only see when the request is actually sent to the controller, which happens on a fixed schedule.
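
To make the defense concrete, here is a minimal sketch of the fixed-schedule shaping idea. (Illustrative only, not the authors’ implementation; the full DAGguise scheme also encodes dependencies between requests as a DAG, which this sketch omits.)

```python
from __future__ import annotations
from collections import deque
from dataclasses import dataclass

@dataclass
class MemRequest:
    address: int
    is_fake: bool = False

class RequestShaper:
    """Release one request every `interval` ticks, real or fake.

    An attacker timing the memory controller sees the same fixed pattern
    regardless of the program's actual demand.
    """
    def __init__(self, interval: int):
        self.interval = interval
        self.queue: deque[MemRequest] = deque()

    def submit(self, address: int) -> None:
        # Programs enqueue real requests whenever they like.
        self.queue.append(MemRequest(address))

    def tick(self, t: int) -> MemRequest | None:
        # Requests reach the memory controller only on scheduled ticks.
        if t % self.interval != 0:
            return None
        if self.queue:
            return self.queue.popleft()  # a real request, possibly delayed
        return MemRequest(address=0, is_fake=True)  # cover traffic

# The controller sees a request at t = 0, 4, 8, ... no matter what:
shaper = RequestShaper(interval=4)
shaper.submit(0xDEAD)
for t in range(12):
    if (req := shaper.tick(t)) is not None:
        print(t, "fake" if req.is_fake else hex(req.address))
```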

This graph structure enables the memory controller to be dynamically shared. DAGguise can adapt if there are many programs trying to use memory at once and adjust the fixed schedule accordingly, which enables a more efficient use of the shared memory hardware while still maintaining security.

A performance boost

The researchers tested DAGguise by simulating how it would perform in an actual implementation. They constantly sent signals to the memory controller, which is how an attacker would try to determine another program’s memory access patterns, and they formally verified that no private data were leaked under any possible attack attempt.

Then they used a simulated computer to see how their system could improve performance, compared to other security approaches.

“When you add these security features, you are going to slow down compared to a normal execution. You are going to pay for this in performance,” Deutsch explains.

While their method was slower than a baseline insecure implementation, DAGguise led to a 12 percent increase in performance compared to other security schemes.

With these encouraging results in hand, the researchers want to apply their approach to other computational structures that are shared between programs, such as on-chip networks. They are also interested in using DAGguise to quantify how threatening certain types of side-channel attacks might be, in an effort to better understand performance and security tradeoffs, Deutsch says.

This work was funded, in part, by the National Science Foundation and the Air Force Office of Scientific Research.



from MIT News https://ift.tt/mguH0ZO

Alan Grossman to step down as head of the Department of Biology

Alan D. Grossman, the Praecis Professor of Biology at MIT, has announced he will step down as the head of the Department of Biology before the start of the next academic year. He will continue to lead the department until the new head is selected. A search committee will convene later this spring to recommend candidates for Grossman’s successor.

“Alan Grossman is an outstanding biologist who is, and has been, deeply committed to the research and educational missions of the biology department,” says Nergis Mavalvala, the Curtis and Kathleen Marble Professor of Astrophysics and the dean of the MIT School of Science. “He has time and again established MIT biology as a leader in the life sciences at the Institute, in Kendall Square, and beyond.”

“It has been a privilege to lead this department and its talented members — faculty, staff, and students — for the past eight years,” says Grossman. “With the dedication and drive of this community, we have accomplished so much together and set new and ambitious goals for the future of life sciences research and education.”

Grossman was instrumental in securing a $50 million gift from Professor Emeritus Paul Schimmel PhD ’66 and his family to support life sciences across the Institute. Schimmel’s initial gift of $25 million established the Schimmel Family Program for Life Sciences that matched $25 million secured from other sources in support of the Department of Biology. The remaining $25 million from the Schimmel family will support the Schimmel Family Program in the form of matching funds.

“This transformative gift provides students with the resources they need to be successful in their education, research, and careers,” says Institute Professor Phillip A. Sharp, who also contributed to the matching gift. “Alan’s leadership and vision provided the framework to make this gift a reality for graduate students who perform life sciences research across the Institute, not just in biology.”

For many years, Grossman was deeply involved in graduate education. He served on the committees that oversee the graduate program in biology and the interdepartmental graduate program in computational and systems biology. For seven years, Grossman was director or co-director of the biology graduate program. He helped establish the interdepartmental graduate program in microbiology in 2007 and served as its founding director until 2012.

Before assuming the role of department head, Grossman also served the department as associate head and served MIT on several committees, including as a member of the Committee on Curriculum and the Faculty Advisory Committee for the Office of Minority Education. Through the work of the department’s academic officers, student leaders, and advisors, Grossman oversaw the development of the most recent interdisciplinary undergraduate biology major, Course 5-7 (Chemistry and Biology).

Within his department, Grossman raised funds to endow support for students in the MIT Summer Research Program in Biology (MSRP-Biology). He worked with others to expand the diversity of the graduate program, the applicant pool for biology faculty positions, and the scientific workforce through a variety of outreach programs and endeavors.

Recently, Grossman raised additional funds to endow MSRP-Biology. Michael Gould and Sara Moss supplemented their initial gift in 2015 with an additional donation to further support, endow, and rename MSRP-Biology as the Bernard S. and Sophie G. Gould MIT Summer Research Program in Biology, honoring Gould’s parents.

“Sara and I are grateful for Alan’s nurturing of the program,” said Gould. “Without Alan, we never would have supported this wonderful program; and with Alan at the helm and Mandana Sassanfar as the director of outreach, we knew that many talented individuals would benefit from the research opportunities at MIT.”

Grossman’s tenure also saw the establishment of a cryo-electron microscopy (cryo-EM) facility at MIT. An anonymous donation of $5 million and a $2.5 million gift from the Arnold and Mabel Beckman Foundation supported the purchase of two cryo-electron microscopes that are housed in MIT.nano. These microscopes are used by life science researchers from many departments across MIT and throughout the Boston area.

“The existence of this facility has made it possible for MIT to recruit outstanding junior faculty members focused on using cryo-EM to address fundamental biological problems,” says associate department head Professor Jacqueline Lees. “At a more general level, Alan has been remarkably successful at junior faculty recruitment and in increasing the diversity of our faculty.”

During Grossman’s tenure as department head, in collaboration with the MIT-affiliated life sciences institutes and through the hard work of search committees, the department has hired more than 20 faculty members, more than half of whom are women and/or from groups underrepresented in STEM. This faculty renewal involved forging a relationship with the Ragon Institute of MGH, MIT, and Harvard and includes three new faculty members located at the Ragon Institute. With the influx of new faculty members, the department’s senior faculty instituted a robust plan for mentoring junior faculty, supplementing programs that are offered at the school and Institute levels.

In his own research, Grossman combines a range of approaches — genetic, molecular, physiological, biochemical, cell-biological, and genomic — to study fundamental biological processes in bacteria. His current work focuses on the mechanisms controlling horizontal gene transfer, the process by which bacteria move genes from one organism to another and the primary means by which antibiotic resistance spreads among bacteria.

Grossman received a BA in biochemistry from Brown University in 1979, and a PhD in molecular biology from the University of Wisconsin at Madison in 1984. After a postdoctoral fellowship in the Department of Cellular and Developmental Biology at Harvard University, Grossman joined MIT’s Department of Biology in 1988. He is a fellow of the American Academy of Arts and Sciences, the American Academy of Microbiology, and is a member of the National Academy of Sciences. He received a life-saving heart transplant in 2006.



from MIT News https://ift.tt/QFkj1ad

How sectoral employment training can advance economic mobility for workers who face barriers to employment

For many U.S. workers, it has become increasingly difficult to gain employment in jobs that offer living wages, opportunities for career advancement, or economic mobility — especially for workers without a college degree. Rising wage inequality has reinforced significant and persistent racial gaps in earnings stemming from structural barriers to opportunity faced by people of color in the American job market.

In the face of rising income inequality over the past several decades, policymakers and workforce development organizations have made supporting workers’ ability to access high-quality jobs a priority. Sectoral employment programs, which train job seekers for employment in specific industries considered to have strong labor demand and opportunities for career growth, offer a promising pathway to higher-wage jobs for workers who may face barriers to employment, typically those without college degrees. But rigorous research is necessary to truly understand how effective these programs are and through what mechanisms they generate impacts.

J-PAL North America’s new publication, “Sectoral employment programs as a path to quality jobs: Lessons from randomized evaluations,” summarizes an academic paper that examines four randomized evaluations of sectoral employment programs and describes the mechanisms behind their success. This analysis finds that sectoral employment programs generate consistently large, positive impacts on worker employment and earnings. These benefits are largely driven by workers gaining access to higher-wage and higher-quality jobs after participating in the training programs. 

The magnitude and consistency of the findings point to sectoral employment programs as a promising tool to advance worker prosperity. Key findings include: 

  • Increased earnings: Sectoral employment programs generate substantial earnings increases in the year following training completion, and these gains persist in the evaluations with longer-term follow-up evidence. Earnings gains from high-performing sectoral employment programs are among the largest found in evaluations of U.S. training and employment services programs.
  • Higher levels of credential and certificate attainment: Sectoral employment programs substantially increase the training and career services received and the educational credentials and certificates attained, particularly those related to targeted sectors.
  • Pathways to new jobs: Earnings gains from access to sectoral employment programs are driven by an increased share of participants working in higher-wage jobs after training, rather than by increased employment rates or hours worked, most likely because participants gain employment in the targeted sectors.

The most effective sectoral employment training programs include a combination of the following key features: upfront screening for applicants on basic skills and motivation; occupational skills training targeted to high-wage sectors and leading to an industry-recognized certificate; career readiness training (also sometimes referred to as soft skills); wraparound support services for participants; and strong connections to employers.

This publication is meant to serve as a resource for policymakers, practitioners, and researchers who are working to reduce barriers to high-opportunity employment, improve earnings for workers, and minimize the growth of wage inequality. To that end, J-PAL North America is partnering with WorkRise, an Urban Institute initiative, to host a discussion of this publication, other research on sectoral employment programs, and key open questions on Wednesday, March 9 at 2:30 p.m. ET. Speakers include Jukay Hsu, co-founder and chief executive officer at Pursuit Fellowship, a sectoral employment program; Maurice Jones, chief executive officer of OneTen, an organization whose mission is to hire, promote, and advance 1 million Black Americans into family-sustaining careers; and Lawrence Katz, the Elisabeth Allison Professor of Economics at Harvard University and co–scientific director of J-PAL North America. The panelists will offer insights into the role sectoral employment programs can play in improving economic mobility and closing racial equity gaps in the labor market. 

The publication also outlines ongoing questions about how to effectively scale programs, support displaced and discouraged workers, and deliver economic mobility to workers across the country. J-PAL North America is seeking to answer these and other key questions on supporting workers through its Worker Prosperity Initiative. Readers interested in discussing the evidence review, pursuing opportunities to rigorously evaluate questions related to sectoral employment and worker prosperity, or learning more about J-PAL North America’s labor work can visit the J-PAL North America website, subscribe to the Worker Prosperity Initiative newsletter, or contact J-PAL North America Labor Sector Lead Toby Chaiken

J-PAL North America is a regional office of the Abdul Latif Jameel Poverty Action Lab (J-PAL), a global research center based at MIT.



from MIT News https://ift.tt/q9knXUC

Tuesday, February 22, 2022

A “hot Jupiter’s” dark side is revealed in detail for first time

MIT astronomers have obtained the clearest view yet of the perpetual dark side of an exoplanet that is “tidally locked” to its star. Their observations, combined with measurements of the planet’s permanent day side, provide the first detailed view of an exoplanet’s global atmosphere.

“We’re now moving beyond taking isolated snapshots of specific regions of exoplanet atmospheres, to study them as the 3D systems they truly are,” says Thomas Mikal-Evans, who led the study as a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research.

The planet at the center of the new study, which appears today in Nature Astronomy, is WASP-121b, a massive gas giant nearly twice the size of Jupiter. The planet is an ultrahot Jupiter and was discovered in 2015 orbiting a star about 850 light years from Earth. WASP-121b has one of the shortest orbits detected to date, circling its star in just 30 hours. It is also tidally locked, such that its star-facing “day” side is permanently roasting, while its “night” side is turned forever toward space.  

“Hot Jupiters are famous for having very bright day sides, but the night side is a different beast. WASP-121b's night side is about 10 times fainter than its day side,” says Tansu Daylan, an MIT postdoc working on NASA’s MIT-led mission, TESS, who co-authored the study.

Astronomers had previously detected water vapor and studied how the atmospheric temperature changes with altitude on the planet’s day side.

The new study captures a much more detailed picture. The researchers were able to map the dramatic temperature changes from the day to the night side, and to see how these temperatures change with altitude. They also tracked the presence of water through the atmosphere to show, for the first time, how water circulates between a planet’s day and night sides.

On Earth, water cycles by first evaporating, then condensing into clouds, then raining out. On WASP-121b, the water cycle is far more intense: On the day side, the atoms that make up water are ripped apart at temperatures over 3,000 kelvins. These atoms are blown around to the night side, where colder temperatures allow hydrogen and oxygen atoms to recombine into water molecules, which then blow back to the day side, where the cycle starts again.

The team calculates that the planet’s water cycle is sustained by winds that whip the atoms around the planet at speeds of up to 5 kilometers per second, or more than 11,000 miles per hour.

It also appears that water isn’t alone in circulating around the planet. The astronomers found that the night side is cold enough to host exotic clouds of iron and corundum — a mineral that makes up rubies and sapphires. These clouds, like water vapor, may whip around to the day side, where high temperatures vaporize the metals into gas form. On the way, exotic rain might be produced, such as liquid gems from the corundum clouds.

“With this observation, we’re really getting a global view of an exoplanet’s meteorology,” Mikal-Evans says.

The study’s co-authors include collaborators from MIT, Johns Hopkins University, Caltech, and other institutions.  

Day and night

The team observed WASP-121b using a spectroscopic camera aboard NASA’s Hubble Space Telescope. The instrument observes the light from a planet and its star, and breaks that light down into its constituent wavelengths, the intensities of which give astronomers clues to an atmosphere’s temperature and composition.

Through spectroscopic studies, scientists have observed atmospheric details on the day sides of many exoplanets. But doing the same for the night side is far trickier, as it requires watching for tiny changes in the planet’s entire spectrum as it circles its star.

For the new study, the team observed WASP-121b throughout two full orbits — one in 2018, and the other in 2019. For both observations, the researchers looked through the light data for a specific line, or spectral feature, that indicated the presence of water vapor.

“We saw this water feature and mapped how it changed at different parts of the planet’s orbit,” Mikal-Evans says. “That encodes information about what the temperature of the planet’s atmosphere is doing as a function of altitude.”

The changing water feature helped the team map the temperature profile of both the day and night sides. They found that the day side ranges from 2,500 K at its deepest observable layer to 3,500 K in its topmost layers, while the night side ranges from 1,800 K at its deepest layer to 1,500 K in its upper atmosphere. Interestingly, the temperature profiles appeared to flip-flop, rising with altitude on the day side — a “thermal inversion,” in meteorological terms — and dropping with altitude on the night side.

The researchers then passed the temperature maps through various models to identify chemicals that are likely to exist in the planet’s atmosphere, given specific altitudes and temperatures. This modeling revealed the potential for metal clouds, such as iron, corundum, and titanium on the night side.

From their temperature mapping, the team also observed that the planet’s hottest region is shifted to the east of the “substellar” region directly below the star. They deduced that this shift is due to extreme winds.

“The gas gets heated up at the substellar point but is getting blown eastward before it can reradiate to space,” Mikal-Evans explains.

From the size of the shift, the team estimates that the wind speeds clock in at around 5 kilometers per second.

“These winds are much faster than our jet stream, and can probably move clouds across the entire planet in about 20 hours,” says Daylan, who led previous work on the planet using TESS.
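
That figure is consistent with a back-of-envelope check. Taking the planet’s radius to be roughly 1.75 Jupiter radii (an assumed value for a planet “nearly twice the size of Jupiter”), the day-to-night crossing distance is half the circumference, so

$$ t \approx \frac{\pi R}{v} \approx \frac{\pi \times 1.75 \times 7.1 \times 10^{7}\ \mathrm{m}}{5 \times 10^{3}\ \mathrm{m/s}} \approx 7.8 \times 10^{4}\ \mathrm{s} \approx 22\ \mathrm{hours}. $$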

The astronomers have reserved time on the James Webb Space Telescope to observe WASP-121b later this year, and hope to map changes in not just water vapor but also carbon monoxide, which scientists suspect should reside in the atmosphere.

“That would be the first time we could measure a carbon-bearing molecule in this planet’s atmosphere,” Mikal-Evans says. “The amount of carbon and oxygen in the atmosphere provides clues on where these kinds of planet form.”

This research was supported, in part, by NASA through a grant from the Space Telescope Science Institute.



de MIT News https://ift.tt/av4jt6G

Singing in the brain

For the first time, MIT neuroscientists have identified a population of neurons in the human brain that lights up when we hear singing, but not other types of music.

These neurons, found in the auditory cortex, appear to respond to the specific combination of voice and music, but not to either regular speech or instrumental music. Exactly what they are doing is unknown and will require more work to uncover, the researchers say.

“The work provides evidence for relatively fine-grained segregation of function within the auditory cortex, in a way that aligns with an intuitive distinction within music,” says Sam Norman-Haignere, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Rochester Medical Center.

The work builds on a 2015 study in which the same research team used functional magnetic resonance imaging (fMRI) to identify a population of neurons in the brain’s auditory cortex that responds specifically to music. In the new work, the researchers used recordings of electrical activity taken at the surface of the brain, which gave them much more precise information than fMRI.

“There’s one population of neurons that responds to singing, and then very nearby is another population of neurons that responds broadly to lots of music. At the scale of fMRI, they’re so close that you can’t disentangle them, but with intracranial recordings, we get additional resolution, and that’s what we believe allowed us to pick them apart,” says Norman-Haignere.

Norman-Haignere is the lead author of the study, which appears today in the journal Current Biology. Josh McDermott, an associate professor of brain and cognitive sciences, and Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, both members of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines (CBMM), are the senior authors of the study.

Neural recordings

In their 2015 study, the researchers used fMRI to scan the brains of participants as they listened to a collection of 165 sounds, including different types of speech and music, as well as everyday sounds such as finger tapping or a dog barking. For that study, the researchers devised a novel method of analyzing the fMRI data, which allowed them to identify six neural populations with different response patterns, including the music-selective population and another population that responds selectively to speech.
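The 2015 decomposition method was a custom algorithm of the team’s own design; purely as a loose analogy for the idea of explaining many voxels as mixtures of a few shared response profiles, here is a minimal sketch using off-the-shelf non-negative matrix factorization, with hypothetical shapes and random placeholder data:

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical response matrix: 165 sounds x 10,000 voxels, non-negative.
# NMF is a stand-in here, not the team's actual decomposition method.
rng = np.random.default_rng(0)
responses = rng.random((165, 10_000))

model = NMF(n_components=6, init="nndsvda", random_state=0, max_iter=500)
profiles = model.fit_transform(responses)  # 165 x 6: response profile per sound
loadings = model.components_               # 6 x 10,000: component weight per voxel

# Each column of `profiles` is a candidate response pattern; in the 2015
# study, one recovered pattern was music-selective and another speech-selective.
print(profiles.shape, loadings.shape)
```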

In the new study, the researchers hoped to obtain higher-resolution data using a technique known as electrocorticography (ECoG), which allows electrical activity to be recorded by electrodes placed inside the skull. This offers a much more precise picture of electrical activity in the brain compared to fMRI, which measures blood flow in the brain as a proxy of neuron activity.

“With most of the methods in human cognitive neuroscience, you can’t see the neural representations,” Kanwisher says. “Most of the kind of data we can collect can tell us that here’s a piece of brain that does something, but that’s pretty limited. We want to know what’s represented in there.”

Electrocorticography cannot typically be performed in humans because it is an invasive procedure, but it is often used to monitor patients with epilepsy who are about to undergo surgery to treat their seizures. Patients are monitored over several days so that doctors can determine where their seizures are originating before operating. During that time, if patients agree, they can participate in studies that involve measuring their brain activity while performing certain tasks. For this study, the MIT team was able to gather data from 15 participants over several years.

For those participants, the researchers played the same set of 165 sounds that they used in the earlier fMRI study. The location of each patient’s electrodes was determined by their surgeons, so some electrodes did not pick up any responses to auditory input, but many did. Using a novel statistical analysis that they developed, the researchers were able to infer the types of neural populations that produced the data recorded by each electrode.

“When we applied this method to this data set, this neural response pattern popped out that only responded to singing,” Norman-Haignere says. “This was a finding we really didn’t expect, so it very much justifies the whole point of the approach, which is to reveal potentially novel things you might not think to look for.”

That song-specific population of neurons had very weak responses to both speech and instrumental music, and is therefore distinct from the music- and speech-selective populations identified in their 2015 study.

Music in the brain

In the second part of their study, the researchers devised a mathematical method to combine the data from the intracranial recordings with the fMRI data from their 2015 study. Because fMRI can cover a much larger portion of the brain, this allowed them to determine more precisely the locations of the neural populations that respond to singing.

“This way of combining ECoG and fMRI is a significant methodological advance,” McDermott says. “A lot of people have been doing ECoG over the past 10 or 15 years, but it’s always been limited by this issue of the sparsity of the recordings. Sam is really the first person who figured out how to combine the improved resolution of the electrode recordings with fMRI data to get better localization of the overall responses.”

The song-specific hotspot that they found is located at the top of the temporal lobe, near regions that are selective for language and music. That location suggests that the song-specific population may be responding to features such as the perceived pitch, or the interaction between words and perceived pitch, before sending information to other parts of the brain for further processing, the researchers say.

The researchers now hope to learn more about what aspects of singing drive the responses of these neurons. They are also working with MIT Professor Rebecca Saxe’s lab to study whether infants have music-selective areas, in hopes of learning more about when and how these brain regions develop.

The research was funded by the National Institutes of Health, the U.S. Army Research Office, the National Science Foundation, the NSF Science and Technology Center for Brains, Minds, and Machines, the Fondazione Neurone, and the Howard Hughes Medical Institute.



de MIT News https://ift.tt/qELW3bX

lunes, 21 de febrero de 2022

New power sources

In the mid-1990s, a few energy activists in Massachusetts had a vision: What if citizens had a choice about the energy they consumed? Instead of being force-fed electricity sources selected by a utility company, what if cities, towns, and groups of individuals could purchase power that was cleaner and cheaper?

The small group of activists — including a journalist, the head of a small nonprofit, a local county official, and a legislative aide — drafted model legislation along these lines that reached the state Senate in 1995. The measure stalled out. In 1997, they tried again. Massachusetts legislators were busy passing a bill to reform the state power industry in other ways, and this time the activists got their low-profile policy idea included in it — as a provision so marginal it only got a brief mention in The Boston Globe’s coverage of the bill.

Today, this idea, often known as Community Choice Aggregation (CCA), is used by roughly 36 million people in the U.S., or 11 percent of the population. Under CCA, local residents purchase energy as a bloc, with certain specifications attached; more than 1,800 communities across six states have adopted the policy, and others are testing pilot programs. From such modest beginnings, CCA has become a big deal.

“It started small, then had a profound impact,” says David Hsu, an associate professor at MIT who studies energy policy issues. Indeed, the trajectory of CCA is so striking that Hsu has researched its origins, combing through a variety of archival sources and interviewing the principals. He has now written a journal article examining the lessons and implications of this episode.

Hsu’s paper, “Straight out of Cape Cod: The origin of community choice aggregation and its spread to other states,” appears in advance online form in the journal Energy Research and Social Science, and in the April print edition of the publication.

“I wanted to show people that a small idea could take off into something big,” Hsu says. “For me that’s a really hopeful democratic story, where people could do something without feeling they had to take on a whole giant system that wouldn’t immediately respond to only one person.”

Local control

Aggregating consumers to purchase energy was not a novelty in the 1990s. Companies within many industries have long joined forces to gain purchasing power for energy. And Rhode Island tried a form of CCA slightly earlier than Massachusetts did.

However, it is the Massachusetts model that has been adopted widely: Cities or towns can require power purchases from, say, renewable sources, while individual citizens can opt out of those agreements. More state funding (for things like efficiency improvements) is redirected to cities and towns as well.

In both ways, CCA policies provide more local control over energy delivery. They have been adopted in California, Illinois, New Jersey, New York, and Ohio. Meanwhile, Maryland, New Hampshire, and Virginia have recently passed similar legislation (also known as municipal or government aggregation, or community choice energy).

For cities and towns, Hsu says, “Maybe you don’t own outright the whole energy system, but let’s take away one particular function of the utility, which is procurement.”

That vision motivated a handful of Massachusetts activists and policy experts in the 1990s, including journalist Scott Ridley, who co-wrote a 1986 book, “Power Struggle,” with the University of Massachusetts historian Richard Rudolph and had spent years thinking about ways to reconfigure the energy system; Matt Patrick, chair of a local nonprofit focused on energy efficiency; Rob O’Leary, a local official in Barnstable County, on Cape Cod; and Paul Fenn, a staff aide to the state senator who chaired the legislature’s energy committee.

“It started with these political activists,” Hsu says.

Hsu’s research emphasizes several lessons to be learned from the fact that the legislation first failed in 1995, before unexpectedly passing in 1997. Ridley remained an author and public figure; Patrick and O’Leary would each eventually be elected to the state legislature, but only after 2000; and Fenn had left his staff position by 1995 and worked with the group long-distance from California (where he became a long-term advocate on the issue). Thus, at the time CCA passed in 1997, none of its main advocates held an insider position in state politics. How did it succeed?

Lessons of the legislation

In the first place, Hsu believes, a legislative process resembles what the political theorist John Kingdon has called a “multiple streams framework,” in which “many elements of the policymaking process are separate, meandering, and uncertain.” Legislation isn’t entirely controlled by big donors or other interest groups, and “policy entrepreneurs” can find success in unpredictable windows of opportunity.

“It’s the most true-to-life theory,” says Hsu.  

Second, Hsu emphasizes, finding allies is crucial. In the case of CCA, that came about in a few ways. Many towns in Massachusetts have a town-level legislature known as Town Meeting; the activists got those bodies in about 20 towns to pass nonbinding resolutions in favor of community choice. O’Leary helped create a regional county commission in Barnstable County, while Patrick crafted an energy plan for it. High electricity rates were affecting all of Cape Cod at the time, so community choice also served as an economic benefit for Cape Cod’s working-class service-industry employees. The activists also found that adding an opt-out clause to the 1997 version appealed to legislators, who would support CCA if their constituents were not all bound to it.

“You really have to stick with it, and you have to look for coalition partners,” Hsu says. “It’s fun to hear them [the activists] talk about going to Town Meetings, and how they tried to build grassroots support. If you look for allies, you can get things done. [I hope] the people can see [themselves] in other people’s activism even if they’re not exactly the same as you are.”

By 1997, the CCA legislation had more geographic support, was understood as both an economic and environmental benefit for voters, and would not force membership upon anyone. The activists, by giving media interviews and holding conferences, had also found traction in the principle of citizen choice.

“It’s interesting to me how the rhetoric of [citizen] choice and the rhetoric of democracy proves to be effective,” Hsu says. “Legislators feel like they have to give everyone some choice. And it expresses a collective desire for a choice that the utilities take away by being monopolies.”

He adds: “We need to set out principles that shape systems, rather than just taking the system as a given and trying to justify principles that are 150 years old.”

One last element in CCA passage was good timing. The governor and legislature in Massachusetts were already seeking a “grand bargain” to restructure electricity delivery and loosen the grip of utilities; the CCA fit in as part of this larger reform movement. Still, CCA adoption has been gradual; about one-third of Massachusetts towns with CCA have only adopted it within the last five years.

CCA’s growth does not mean it’s invulnerable to repeal or utility-funded opposition efforts — “In California there’s been pretty intense pushback,” Hsu notes. Still, Hsu concludes, the fact that a handful of activists could start a national energy-policy movement is a useful reminder that everyone’s actions can make a difference.

“It wasn’t like they went charging through a barricade, they just found a way around it,” Hsu says. “I want my students to know you can organize and rethink the future. It takes some commitment and work over a long time.”



de MIT News https://ift.tt/0NsD4mJ

Can machine-learning models overcome biased datasets?

Artificial intelligence systems may be able to complete tasks quickly, but that doesn’t mean they always do so fairly. If the dataset used to train a machine-learning model contains biased data, the model is likely to exhibit that same bias when it makes decisions in practice.

For instance, if a dataset contains mostly images of white men, then a facial-recognition model trained with these data may be less accurate for women or people with different skin tones.

A group of researchers at MIT, in collaboration with researchers at Harvard University and Fujitsu Ltd., sought to understand when and how a machine-learning model is capable of overcoming this kind of dataset bias. They used an approach from neuroscience to study how training data affects whether an artificial neural network can learn to recognize objects it has not seen before. A neural network is a machine-learning model that mimics the human brain in the way it contains layers of interconnected nodes, or “neurons,” that process data.

The new results show that diversity in training data has a major influence on whether a neural network is able to overcome bias, but at the same time dataset diversity can degrade the network’s performance. They also show that how a neural network is trained, and the specific types of neurons that emerge during the training process, can play a major role in whether it is able to overcome a biased dataset.

“A neural network can overcome dataset bias, which is encouraging. But the main takeaway here is that we need to take into account data diversity. We need to stop thinking that if you just collect a ton of raw data, that is going to get you somewhere. We need to be very careful about how we design datasets in the first place,” says Xavier Boix, a research scientist in the Department of Brain and Cognitive Sciences (BCS) and the Center for Brains, Minds, and Machines (CBMM), and senior author of the paper.  

Co-authors include former MIT graduate students Timothy Henry, Jamell Dozier, Helen Ho, Nishchal Bhandari, and Spandan Madan, a corresponding author who is currently pursuing a PhD at Harvard; Tomotake Sasaki, a former visiting scientist now a senior researcher at Fujitsu Research; Frédo Durand, a professor of electrical engineering and computer science at MIT and a member of the Computer Science and Artificial Intelligence Laboratory; and Hanspeter Pfister, the An Wang Professor of Computer Science at the Harvard School of Engineering and Applied Sciences. The research appears today in Nature Machine Intelligence.

Thinking like a neuroscientist

Boix and his colleagues approached the problem of dataset bias by thinking like neuroscientists. In neuroscience, Boix explains, it is common to use controlled datasets in experiments, meaning a dataset in which the researchers know as much as possible about the information it contains.

The team built datasets that contained images of different objects in varied poses, and carefully controlled the combinations so some datasets had more diversity than others. In this case, a dataset had less diversity if it contained more images showing objects from only one viewpoint, while a more diverse dataset had more images showing objects from multiple viewpoints. Each dataset contained the same number of images.

The researchers used these carefully constructed datasets to train a neural network for image classification, and then studied how well it was able to identify objects from viewpoints the network did not see during training (known as an out-of-distribution combination). 

For example, if researchers are training a model to classify cars in images, they want the model to learn what different cars look like. But if every Ford Thunderbird in the training dataset is shown from the front, when the trained model is given an image of a Ford Thunderbird shot from the side, it may misclassify it, even if it was trained on millions of car photos.
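A minimal sketch of that design might look like the following, with hypothetical category and viewpoint names; note that the actual study also held the total number of images fixed across datasets, which this sketch does not model:

```python
from itertools import product

# Every image is a (category, viewpoint) pair. Diversity is controlled by
# how many viewpoints each category is seen from during training; held-out
# pairs test out-of-distribution generalization (e.g., a car category only
# ever photographed head-on).
categories = ["thunderbird", "sedan", "truck", "bus"]   # hypothetical
viewpoints = ["front", "side", "rear", "top"]           # hypothetical

all_pairs = set(product(categories, viewpoints))

def make_split(viewpoints_per_category: int):
    """Less diverse split = fewer viewpoints per category in training."""
    train = {(c, v) for c in categories
             for v in viewpoints[:viewpoints_per_category]}
    test = all_pairs - train  # combinations never seen during training
    return train, test

low_train, low_test = make_split(1)    # low diversity: one viewpoint each
high_train, high_test = make_split(3)  # high diversity: three viewpoints each
print(len(low_train), len(low_test), len(high_train), len(high_test))
```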

The researchers found that if the dataset is more diverse — if more images show objects from different viewpoints — the network is better able to generalize to new images or viewpoints. Data diversity is key to overcoming bias, Boix says.

“But it is not like more data diversity is always better; there is a tension here. When the neural network gets better at recognizing new things it hasn’t seen, then it will become harder for it to recognize things it has already seen,” he says.

Testing training methods

The researchers also studied methods for training the neural network.

In machine learning, it is common to train a network to perform multiple tasks at the same time. The idea is that if a relationship exists between the tasks, the network will learn to perform each one better if it learns them together.

But the researchers found the opposite to be true — a model trained separately for each task was able to overcome bias far better than a model trained for both tasks together.
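As an illustration of the contrast (not the researchers’ actual architecture), the joint setup can be sketched in PyTorch as a shared trunk feeding two heads with a combined loss, versus two independent networks trained separately; all layer sizes below are hypothetical:

```python
import torch
from torch import nn

class Trunk(nn.Module):
    """Tiny shared feature extractor for 1 x 32 x 32 images (hypothetical sizes)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU())
    def forward(self, x):
        return self.net(x)

# Joint setup: one trunk, two heads, a single combined loss.
trunk = Trunk()
category_head = nn.Linear(128, 10)   # 10 hypothetical object categories
viewpoint_head = nn.Linear(128, 4)   # 4 hypothetical viewpoints

def joint_loss(x, y_cat, y_view):
    feats = trunk(x)
    ce = nn.functional.cross_entropy
    return ce(category_head(feats), y_cat) + ce(viewpoint_head(feats), y_view)

# Separate setup: an independent trunk + head per task, trained on its own.
cat_net = nn.Sequential(Trunk(), nn.Linear(128, 10))
view_net = nn.Sequential(Trunk(), nn.Linear(128, 4))

x = torch.randn(8, 1, 32, 32)  # dummy batch of images
y_cat = torch.randint(0, 10, (8,))
y_view = torch.randint(0, 4, (8,))

print(joint_loss(x, y_cat, y_view).item())  # joint objective, one backward pass
sep_loss = (nn.functional.cross_entropy(cat_net(x), y_cat)
            + nn.functional.cross_entropy(view_net(x), y_view))
print(sep_loss.item())  # separate objectives would be optimized independently
```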

“The results were really striking. In fact, the first time we did this experiment, we thought it was a bug. It took us several weeks to realize it was a real result because it was so unexpected,” he says.

They dove deeper inside the neural networks to understand why this occurs.

They found that neuron specialization seems to play a major role. When the neural network is trained to recognize objects in images, it appears that two types of neurons emerge — one that specializes in recognizing the object category and another that specializes in recognizing the viewpoint.

When the network is trained to perform tasks separately, those specialized neurons are more prominent, Boix explains. But if a network is trained to do both tasks simultaneously, some neurons become diluted and don’t specialize for one task. These unspecialized neurons are more likely to get confused, he says.
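One toy way to quantify such specialization is to ask how much of each hidden unit’s activation variance is explained by category labels versus viewpoint labels; the sketch below runs on random placeholder activations and is not necessarily the paper’s exact metric:

```python
import numpy as np

# Random placeholders; in the study this analysis would use the trained
# network's actual hidden-unit activations and the true image labels.
rng = np.random.default_rng(1)
n_images, n_units = 1000, 64
acts = rng.random((n_images, n_units))
cats = rng.integers(0, 10, n_images)    # category label per image
views = rng.integers(0, 4, n_images)    # viewpoint label per image

def variance_explained(acts, labels):
    """Between-group variance / total variance, computed per unit."""
    groups = np.unique(labels)
    total = acts.var(axis=0)
    group_means = np.stack([acts[labels == g].mean(axis=0) for g in groups])
    counts = np.array([(labels == g).sum() for g in groups])
    grand = acts.mean(axis=0)
    between = (counts[:, None] * (group_means - grand) ** 2).sum(axis=0) / len(labels)
    return between / total

cat_sel = variance_explained(acts, cats)
view_sel = variance_explained(acts, views)
# A "category neuron" would score high on cat_sel and low on view_sel;
# a diluted, unspecialized neuron scores low on both.
print(cat_sel.mean(), view_sel.mean())
```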

“But the next question now is, how did these neurons get there? You train the neural network and they emerge from the learning process. No one told the network to include these types of neurons in its architecture. That is the fascinating thing,” he says.

That is one area the researchers hope to explore with future work. They want to see if they can force a neural network to develop neurons with this specialization. They also want to apply their approach to more complex tasks, such as objects with complicated textures or varied illuminations.

Boix is encouraged that a neural network can learn to overcome bias, and he is hopeful their work can inspire others to be more thoughtful about the datasets they are using in AI applications.

This work was supported, in part, by the National Science Foundation, a Google Faculty Research Award, the Toyota Research Institute, the Center for Brains, Minds, and Machines, Fujitsu Research, and the MIT-Sensetime Alliance on Artificial Intelligence.



de MIT News https://ift.tt/2oL8VwF

domingo, 20 de febrero de 2022

A revolution in learning

To understand a country, it helps to know its schools. To grasp Mexico, MIT historian Tanalís Padilla believes, one must learn about its rural “normales,” teacher-training schools with outsized historical influence on the country’s politics.

This might seem surprising. At its height, the system of rural normales consisted of only 35 such boarding schools, scattered in the countryside, populated by the children of peasants and indigenous residents. Yet these schools had been founded on the ideals of the Mexican Revolution of the 1910s, which promised justice for the poor, land reform, workers’ rights, and education. The normales turned out to be among the places where those ideals were taken most seriously.

“Their legacy is really profound,” says Padilla, a professor of history at MIT. At normales, she adds, from their creation in the 1920s onward, “Teachers were not just supposed to teach students to read and write, but also teach them about their rights — their rights to land, their right to form unions, to education. The schools wanted to form the students to be leaders.”

Rather than simply inculcating loyalty to the Mexican state and its policies, these schools churned out generations of activists working to realize the unfulfilled promises of Mexico’s revolutionary moment.

“The very schools meant to shape a loyal citizenry became hotbeds of radicalism,” Padilla writes in a new book on the subject, “Unintended Lessons of Revolution: Student Teachers and Political Radicalism in Twentieth-Century Mexico,” just published by Duke University Press.

“Their perspective, their struggle, brings into sharp relief the power relations that created the past and produced the present,” Padilla writes. That present includes the 2014 kidnapping and disappearance of 43 protesting normalista students from Ayotzinapa, an event that generated headlines and protests, and tragically underscored the ongoing salience of the schools in the country’s political disputes.

Action learning

The origins of Padilla’s new book lie partly in her 2008 book, “Rural Resistance in the Land of Zapata,” which examined agrarian uprisings in midcentury Mexico.

“Throughout my research on the peasant movement, I kept [noticing] the role of rural teachers,” Padilla says. “Earlier works assumed rural teachers were agents of state consolidation, sent to the four corners of the country to instill patriotism to the Mexican nation. But I came across these schools that had a very radical tradition, based on [promises] of the Mexican state. I wanted to study how these schools came to be.”

Padilla’s research draws on many kinds of archival sources, as well as interviews with students and teachers. Her work establishes a chronology for the schools that reflects the contours of Mexican politics over the last century. Founded in the 1920s, the normales grew and flourished in the 1930s under the reform-minded presidency of Lázaro Cárdenas. Abetted by an undersecretary of education named Moisés Sáenz, who had studied with John Dewey at Columbia University, the normales emphasized an “action pedagogy” and learning by doing.

“The schools gave students a big say in running the schools themselves,” Padilla says. “A lot of student activism leads to skills — public speaking, how to organize a meeting, how to mobilize people, how to get on a bus and talk to people. These skills become valuable later on for leadership, and this is one of the ways the graduates of these normales are different than graduates from other schools. Everyone says that, even opponents who want to co-opt the schools.”

A conservative turn in the 1940s decreased state support for the rural normales as institutions, and by the 1970s their numbers had decreased to 15, about where they remain today. Still, through the decades, where reform movements have occurred in Mexico, students from the rural normales have often been involved.

“They have an outsized role in political movements,” Padilla says. At the same time, she acknowledges, not every last graduate of a rural normal became a leftist organizer: “The legacy of these schools is based on radical politics, but once students graduate, you have a full gamut of [life trajectories]. Some people remain committed to teaching, others take seriously being activists, and some will go on to be conservative officials in party politics.”

Still, as much as some conservatives in Mexico might like to shutter the normales, it has never happened, due to the activism of their students.

“They’re a really important safety net, and that’s one reason why they’re so valued to the rural population,” Padilla says. “While there is poverty in Mexico, these schools will continue to have a reason to exist.”

Broader themes

In reconstructing the history of 35 teacher-training schools, Padilla’s work uses that narrower slice of history to get at larger ideas. One such theme is thinking of the Mexican revolution as unfinished business, a movement that has only delivered part of what it promised to the masses, with political fractures stemming from that situation.

Another, related point involves questioning the vaunted political stability of Mexico and its long-term one-party system. “For a long time scholars thought of Mexico as the most stable Latin American country,” Padilla says. Instead, she prefers to focus on the struggles to change the country, in the wake of “the abandoning of the revolutionary reforms which were the constituting elements of modern Mexico,” as she puts it.

Still one more theme of Padilla’s work involves the centrality of rural and agrarian life to the country’s politics. Mexico has become quite urbanized in the postwar era, and some political events, like the protests of 1968, are thought of as urban events. But, as Padilla details in one chapter of the book, student protests in the 1960s were much more heavily rural than people now realize, and often preceded 1968 — with a rural normal influence, of course.

“Unintended Lessons of Revolution” has received praise from other scholars in the field. Brooke Larson, a professor of history at Stony Brook University, calls the book “a tremendously impressive study of the rural normal school,” adding that it casts “new light on a series of larger questions concerning Mexico's legacy of revolution, its failed rural policies, and the explosion of unrest among rural teachers and activists.”

For her part, Padilla says she hopes readers will both reflect on Mexican history and relate her narrative to other countries where similar issues pertain. 

“The book is both specific to Mexico, and also universal, where it relates to the effects of the power of education,” Padilla says. “Education is not just reading and writing or [training for] a profession, but understanding the world around you. Historically, education can be assimilationist, forming loyalty to a country, but it can also have liberatory qualities. These schools speak to the power, and potentially liberatory power, of education.”



de MIT News https://ift.tt/YwemgWo

sábado, 19 de febrero de 2022

On a mission to alleviate chronic pain

About 50 million Americans suffer from chronic pain, which interferes with their daily life, social interactions, and ability to work. MIT Professor Fan Wang wants to develop new ways to help relieve that pain, by studying and potentially modifying the brain’s own pain control mechanisms.

Her recent work has identified an “off switch” for pain, located in the brain’s amygdala. She hopes that finding ways to control this switch could lead to new treatments for chronic pain.

“Chronic pain is a major societal issue,” Wang says. “By studying pain-suppression neurons in the brain’s central amygdala, I hope to create a new therapeutic approach for alleviating pain.”

Wang, who joined the MIT faculty in January 2021, is also the leader of a new initiative at the McGovern Institute for Brain Research that is studying drug addiction, with the goal of developing more effective treatments for addiction.

“Opioid prescription for chronic pain is a major contributor to the opioid epidemic. With the Covid pandemic, I think addiction and overdose are becoming worse. People are more anxious, and they seek drugs to alleviate such mental pain,” Wang says. “As scientists, it’s our duty to tackle this problem.”

Sensory circuits

Wang, who grew up in Beijing, describes herself as “a nerdy child” who loved books and math. In high school, she took part in science competitions, then went on to study biology at Tsinghua University. She arrived in the United States in 1993 to begin her PhD at Columbia University. There, she worked on tracing the connection patterns of olfactory receptor neurons in the lab of Richard Axel, who later won the Nobel Prize for his discoveries of odorant receptors and how the olfactory system is organized.

After finishing her PhD, Wang decided to switch gears. As a postdoc at the University of California at San Francisco and then Stanford University, she began studying how the brain perceives touch. 

In 2003, Wang joined the faculty at Duke University School of Medicine. There, she began developing techniques to study the brain circuits that underlie the sense of touch, tracing circuits that carry sensory information from the whiskers of mice to the brain. She also studied how the brain integrates movements of touch organs with signals of sensory stimuli to generate perception (such as using stretching movements to sense elasticity).

As she pursued her sensory perception studies, Wang became interested in studying pain perception, but she felt she needed to develop new techniques to tackle it. While at Duke, she invented a technique called CANE (capturing activated neural ensembles), which can identify networks of neurons that are activated by a particular stimulus.

Using this approach in mice, she identified neurons that become active in response to pain, but so many neurons across the brain were activated that it didn’t offer much useful information. As a way to indirectly get at how the brain controls pain, she decided to use CANE to explore the effects of drugs used for general anesthesia. During general anesthesia, drugs render a patient unconscious, but Wang hypothesized that the drugs might also shut off pain perception.

“At that time, it was just a wild idea,” Wang recalls. “I thought there may be other mechanisms — that instead of just a loss of consciousness, anesthetics may do something to the brain that actually turns pain off.”

Support for the existence of an “off switch” for pain came from the observation that wounded soldiers on a battlefield can continue to fight, essentially blocking out pain despite their injuries.

In a study of mice treated with anesthesia drugs, Wang discovered that the brain does have this kind of switch, in an unexpected location: the amygdala, which is involved in regulating emotion. She showed that this cluster of neurons can turn off pain when activated, and when it is suppressed, mice become highly sensitive to ordinary gentle touch.

“There’s a baseline level of activity that makes the animals feel normal, and when you activate these neurons, they’ll feel less pain. When you silence them, they’ll feel more pain,” Wang says.

Turning off pain

That finding, which Wang reported in 2020, raised the possibility of somehow modulating that switch in humans to try to treat chronic pain. This is a long-term goal of Wang’s, but more work is required to achieve it, she says. Currently her lab is working on analyzing the RNA expression patterns of the neurons in the cluster she identified. They also are measuring the neurons’ electrical activity and how they interact with other neurons in the brain, in hopes of identifying circuits that could be targeted to tamp down the perception of pain.

One way of modulating these circuits could be to use deep brain stimulation, which involves implanting electrodes in certain areas of the brain. Focused ultrasound, which is still in early stages of development and does not require surgery, could be a less invasive alternative.

Another approach Wang is interested in exploring is pairing brain stimulation with a context such as looking at a smartphone app. This kind of pairing could help train the brain to shut off pain using the app, without the need for the original stimulation (deep brain stimulation or ultrasound).

“Maybe you don’t need to constantly stimulate the brain. You may just need to reactivate it with a context,” Wang says. “After a while you would probably need to be restimulated, or reconditioned, but at least you have a longer window where you don't need to go to the hospital for stimulation, and you just need to use a context.”

Wang, who was drawn to MIT in part by its focus on fostering interdisciplinary collaborations, is now working with several other McGovern Institute members who are taking different angles to try to figure out how the brain generates the state of craving that occurs in drug addiction, including opioid addiction.

“We’re going to focus on trying to understand this craving state: how it’s created in the brain and how can we sort of erase that trace in the brain, or at least control it. And then you can neuromodulate it in real time, for example, and give people a chance to get back their control,” she says.



de MIT News https://ift.tt/WIVGFyT