Monday, December 25, 2017

Cleaner air, longer lives

The air we breathe contains particulate matter from a range of natural and human-related sources. Particulate matter is responsible for thousands of premature deaths in the United States each year, but legislation from the U.S. Environmental Protection Agency (EPA) is credited with significantly decreasing this number, as well as the amount of particulate matter in the atmosphere. However, the EPA may not be getting the full credit it deserves: New research from MIT’s Department of Civil and Environmental Engineering (CEE) proposes that the EPA’s legislation may have saved even more lives than initially reported.

“In the United States, the number of premature deaths associated with exposure to outdoor particulate matter exceeds the number of car accident fatalities every year. This highlights the vital role that the EPA plays in reducing the exposure of people living in the United States to harmful pollutants,” says Colette Heald, associate professor in CEE and the Department of Earth, Atmospheric and Planetary Sciences.

The EPA’s 1970 Clean Air Act and amendments in 1990 address the health effects of particulate matter, specifically by regulating emissions of air pollutants and promoting research into cleaner alternatives. In 2011 the EPA announced that the legislation was responsible for a considerable decrease in particulate matter in the atmosphere, estimating that over 100,000 lives were saved every year from 2000 to 2010. However, the report did not consider organic aerosol, a major component of atmospheric particulate matter, to be a large contributor to the decline in particulate matter during this period. Organic aerosol is emitted directly from fossil fuel combustion (e.g. vehicles), residential burning, and wildfires but is also chemically produced in the atmosphere from the oxidation of both natural and anthropogenically emitted hydrocarbons.

The CEE research team, including Heald; Jesse Kroll, an associate professor of CEE and of chemical engineering; David Ridley, a research scientist in CEE; and Kelsey Ridley SM ’15, looked at surface measurements of organic aerosol from across the United States from 1990 to 2012, creating a comprehensive picture of organic aerosol in the United States.

“Widespread monitoring of air pollutant concentrations across the United States enables us to verify changes in air quality over time in response to regulations. Previous work has focused on the decline in particulate matter associated with efforts to reduce acid rain in the United States. But to date, no one had really explored the long-term trend in organic aerosol,” Heald says. 

The MIT researchers found a more dramatic decline in organic aerosol across the U.S. than previously reported, which may account for more lives saved than the EPA anticipated. Their work showed that these changes are likely due to anthropogenic, or human, behaviors. The paper is published this week in Proceedings of the National Academy of Sciences.

“The EPA report showed a very large impact from the decline in particulate matter, but we were surprised to see very little change in the organic aerosol concentration in their estimates,” explains Ridley. “The observations suggest that the decrease in organic aerosol between 2000 and 2010 was six times larger than the EPA report estimated.”

Using data from the Interagency Monitoring of Protected Visual Environments (IMPROVE) network, the researchers found that organic aerosol decreased across the entire country in both the winter and summer seasons. The decline is surprising given the increase in wildfires, but the researchers found that organic aerosol has continued to fall despite them.

The researchers also used information from the NASA Modern-Era Retrospective analysis for Research and Applications to analyze the impact of other natural influences on organic aerosol, such as precipitation and temperature, finding that the decline occurred despite changes in cloud cover, rain, and temperature.

The absence of a clear natural cause for the decline in organic aerosol suggests the decline was the result of anthropogenic causes. Further, the decline in organic aerosol was similar to the decrease in other measured atmospheric pollutants, such as nitrogen dioxide and carbon monoxide, which are likewise thought to be due to EPA regulations. Also, similarities in trends across both urban and rural areas suggest that the declines may also be the result of behavioral changes stemming from EPA regulations.

By leveraging the emissions data of organic aerosol and its precursors, from both natural and anthropogenic sources, the researchers simulated organic aerosol concentrations from 1990 to 2012 in a model. They found that more than half of the decline in organic aerosol is accounted for by changes in human emissions behaviors, including vehicle emissions and residential and commercial fuel burning. 

“We see that the model captures much of the observed trend of organic aerosol across the U.S., and we can explain a lot of that purely through changes in anthropogenic emissions. The changes in organic aerosol emissions are likely to be indirectly driven by controls by the EPA on different species, like black carbon from fuel burning and nitrogen dioxide from vehicles,” says Ridley. “This wasn’t really something that the EPA was anticipating, so it’s an added benefit of the Clean Air Act.”

In considering mortality rates and the impact of organic aerosol over time, the researchers used a previously established method that relates exposure to particulate matter to increased risk of mortality through different diseases such as cardiovascular disease or respiratory disease. The researchers could thus figure out the change in mortality rate based on the change in particulate matter. Since the researchers knew how much organic aerosol was in the particulate matter samples, they were able to determine how much changes in organic aerosol levels decreased mortality.
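
The scale of this kind of calculation can be illustrated with a minimal sketch. The code below assumes the commonly used log-linear concentration-response form and a risk coefficient of roughly a 6 percent increase in all-cause mortality per 10 micrograms per cubic meter of fine particulate matter, a widely cited epidemiological value; the function, inputs, and numbers are illustrative and are not the study’s actual model or coefficients.

    import numpy as np

    def avoided_deaths(delta_pm25, baseline_deaths, beta=np.log(1.06) / 10.0):
        """Illustrative log-linear concentration-response calculation.

        delta_pm25: reduction in fine particulate matter (micrograms per cubic meter)
        baseline_deaths: annual deaths in the exposed population
        beta: assumed risk coefficient (~6% higher mortality per 10 ug/m3,
              a commonly cited value, not the paper's exact number)
        """
        relative_risk = np.exp(beta * delta_pm25)        # risk at the higher exposure level
        attributable_fraction = 1.0 - 1.0 / relative_risk
        return baseline_deaths * attributable_fraction

    # Example: deaths avoided if organic aerosol reductions lowered PM2.5 by 3 ug/m3
    print(round(avoided_deaths(delta_pm25=3.0, baseline_deaths=2_500_000)))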

“There are costs and benefits to implementing regulations such as those in the Clean Air Act, but it seems that we are reaping even greater benefits from the reduced mortality associated with particulate matter because of the change in organic aerosol,” Ridley says. “There are health benefits to reducing organic aerosol further, especially in urban locations. As we do, natural sources will contribute a larger fraction, so we need to understand how they will vary into the future too.”

This research was funded, in part, by the National Science Foundation, the National Aeronautics and Space Administration, and the National Oceanic and Atmospheric Administration.



from MIT News http://ift.tt/2BOmgc7

How the brain selectively remembers new places

When you enter a room, your brain is bombarded with sensory information. If the room is a place you know well, most of this information is already stored in long-term memory. However, if the room is unfamiliar to you, your brain creates a new memory of it almost immediately.

MIT neuroscientists have now discovered how this occurs. A small region of the brainstem, known as the locus coeruleus, is activated in response to novel sensory stimuli, and this activity triggers the release of a flood of dopamine into a certain region of the hippocampus to store a memory of the new location.

“We have the remarkable ability to memorize some specific features of an experience in an entirely new environment, and such ability is crucial for our adaptation to the constantly changing world,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience and director of the RIKEN-MIT Center for Neural Circuit Genetics at the Picower Institute for Learning and Memory.

“This study opens an exciting avenue of research into the circuit mechanism by which behaviorally relevant stimuli are specifically encoded into long-term memory, ensuring that important stimuli are stored preferentially over incidental ones,” adds Tonegawa, the senior author of the study.

Akiko Wagatsuma, a former MIT research scientist, is the lead author of the study, which appears in the Proceedings of the National Academy of Sciences the week of Dec. 25.

New places

In a study published about 15 years ago, Tonegawa’s lab found that a part of the hippocampus called the CA3 is responsible for forming memories of novel environments. They hypothesized that the CA3 receives a signal from another part of the brain when a novel place is encountered, stimulating memory formation.

They believed this signal to be carried by chemicals known as neuromodulators, which influence neuronal activity. The CA3 receives neuromodulators from both the locus coeruleus (LC) and a region called the ventral tegmental area (VTA), which is a key part of the brain’s reward circuitry. The researchers decided to focus on the LC because it has been shown to project to the CA3 extensively and to respond to novelty, among many other functions.

The LC responds to an array of sensory input, including visual information as well as sound and odor, then sends information on to other brain areas, including the CA3. To uncover the role of LC-CA3 communication, the researchers genetically engineered mice so that they could block the neuronal activity between those regions by shining light on neurons that form the connection.

To test the mice’s ability to form new memories, the researchers placed the mice in a large open space that they had never seen before. The next day, they placed them in the same space again. Mice whose LC-CA3 connections were not disrupted spent much less time exploring the space on the second day, because the environment was already familiar to them. However, when the researchers interfered with the LC-CA3 connection during the first exposure to the space, the mice explored the area on the second day just as much as they had on the first. This suggests that they were unable to form a memory of the new environment.

The LC appears to exert this effect by releasing the neuromodulator dopamine into the CA3 region, which was surprising because the LC is known to be a major source of norepinephrine to the hippocampus. The researchers believe that this influx of dopamine helps to boost CA3’s ability to strengthen synapses and form a memory of the new location.

They found that this mechanism was not required for other types of memory, such as memories of fearful events, but appears to be specific to memory of new environments. The connections between the LC and CA3 are necessary for long-term spatial memories to form in CA3.

“The selectivity of successful memory formation has long been a puzzle,” says Richard Morris, a professor of neuroscience at the University of Edinburgh, who was not involved in the research. “This study goes a long way toward identifying the brain mechanisms of this process. Activity in the pathway between the locus coeruleus and CA3 occurs most strongly during novelty, and it seems that activity fixes the representations of everyday experience, helping to register and retain what’s been happening and where we’ve been.”

Choosing to remember

This mechanism likely evolved as a way to help animals survive, allowing them to remember new environments without wasting brainpower on recording places that are already familiar, the researchers say.

“When we are exposed to sensory information, we unconsciously choose what to memorize. For an animal’s survival, certain things are necessary to be remembered, and other things, familiar things, probably can be forgotten,” Wagatsuma says.

Still unknown is how the LC recognizes that an environment is new. The researchers hypothesize that some part of the brain is able to compare new environments with stored memories or with expectations of the environment, but more studies are needed to explore how this might happen.

“That’s the next big question,” Tonegawa says. “Hopefully new technology will help to resolve that.”

The research was funded by the RIKEN Brain Science Institute, the Howard Hughes Medical Institute, and the JPB Foundation.



from MIT News http://ift.tt/2zt5Dwv

Friday, December 22, 2017

Dongkeun Park: Winding his way to medical insights

Research engineer Dongkeun Park watches a thin, coppery tape of high-temperature superconductor (HTS) wind its way from one spool on his plywood worktable to another, cautiously overseeing the speed and tension of the tape’s journey.

When completed, in about half a day, this HTS double-pancake (DP) winding will look like two flat coils, one atop the other, but they will be one, connected internally, leaving both terminal ends on the outside. Park has been managing this process on and off for eight years, knowing that every turn of the coil creates a stronger magnet. This is just one of 96 double pancake coils that have been wound over the past five years for an 800 MHz HTS insert coil, the H800, being built in the Francis Bitter Magnet Laboratory (FBML) at MIT’s Plasma Science and Fusion Center. 

High-field superconducting magnets are vital for nuclear magnetic resonance (NMR) spectroscopy, a technology that provides a unique insight into biological processes. The stronger the NMR magnet, the greater the detail and resolution in imaging the molecular structure of proteins, providing researchers with the information they may need to develop medications for combating disease.

Park joined the laboratory as a postdoc in 2009. He traces his interest in superconductivity, and MIT, to a lecture given by visiting FBML magnetic technology division head Yuki Iwasa at Yonsei University in Seoul, South Korea. Park says that as a graduate student in electrical engineering, “I wanted to make something by hand, not only by calculation.”

When Park first arrived at FBML, the lab had been working on high-resolution HTS-based NMR magnets since 1999 as part of a program sponsored by the National Institutes of Health (NIH) to complete a 1-GHz NMR magnet with a combination of low temperature superconductor (LTS) and HTS double-pancake insert coils. The lab’s work on LTS-based NMR began several decades earlier.

At the time of his arrival, NIH and MIT had recently agreed to increase the target strength of the magnet being developed from 1 GHz to 1.3 GHz. To reach this strength, FBML planned to create an H600 magnet and nest it inside a 700 MHz LTS (L700) magnet, which could be purchased elsewhere. Park notes that this combination translates to a magnetic field strength of 30.5 Tesla, “which would make it the world’s strongest magnet for NMR applications.”
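
The conversion behind that figure follows from the proton gyromagnetic ratio of roughly 42.58 MHz per tesla. The short check below is illustrative arithmetic only, not the laboratory’s design calculation.

    # NMR frequency-to-field conversion for protons (gyromagnetic ratio ~42.58 MHz per tesla)
    GAMMA_MHZ_PER_TESLA = 42.58
    for freq_mhz in (500, 800, 1300):
        print(f"{freq_mhz} MHz NMR corresponds to about {freq_mhz / GAMMA_MHZ_PER_TESLA:.1f} T")
    # 1300 MHz (1.3 GHz) works out to roughly 30.5 T, the field strength quoted above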

One responsibility given to Park, along with his colleague research engineer Juan Bascuñán, was to wind each DP, then test it in liquid nitrogen. The DPs would then be stacked, compressed, joined together and retested as a finished coil. Finally, this stacked coil would be over-banded with layers of stainless steel tape to support the much larger electromagnetic forces generated during high-current operation in liquid helium. Park and his colleagues needed to create two of these coils, one slightly larger than the other, and nest them inside a series of LTS coils to create the final magnet. The combined coils would create a magnet that could provide the sharpest imaging yet for investigating protein structure, possibly three times the image resolution from FBML’s current 900-MHz NMR.

In December 2011, Park and his colleagues had virtually finished the preliminary DP windings, and were looking forward to stacking them for further testing. But returning from MIT’s winter recess, they discovered that the coils were missing. The 112 double pancake coils they had carefully crafted and wound for the H600 had been stolen.

Park’s current PSFC colleague, research scientist Phil Michael, suggests that the theft, though traumatic to the project, “ultimately made the magnet better.” To save money, MIT and NIH decided that instead of purchasing an L700 magnet to surround the H600 coils as originally planned, they could use an L500 coil already on hand at FBML, and create for it a higher strength HTS magnet: the H800.

With new security measures in place, Iwasa’s group set out to accomplish this goal by adopting a new HTS magnet technology known as no-insulation winding, developed by Park along with former FBML research engineer Seungyong Hahn. All previous coils had been created from HTS tape insulated with plastic film or highly resistive metal. The new coils would be made without the insulation, allowing them to become more compact and mechanically robust, with increased current density.

Park did not take part in the early production of the H800. In February of 2012, he decided to pursue an opportunity to make a new commercial magnetic resonance imaging (MRI) magnet for Samsung Electronics in South Korea and the UK. In 2016 he happily returned to MIT as a research engineer, his hiatus having provided him an appreciation for the benefits of an academic environment.

“A company’s objective is to make a profit. So you must always be concerned with reducing costs,” he says. “This is very different from exploring basic science and engineering on innovative ideas at MIT.”

Although many coils for the H800 had been wound in his absence, he returned in time to complete and test more than half the required DP coils, along with team members Bascuñán, Phil Michael, Jiho Lee, Yoonhyuck Choi, and Yi Li. As 2018 approaches, the three HTS coils necessary to create the H800 are nearly complete. Only Coil 3 remains to be tested in liquid helium. As the new year begins, the coils will be combined and tested as the H800.

But even after the H800 is nested in the L500 coils and the target 1.3 GHz magnet is created, there will still be three to four years of work to ready it for the high-resolution NMR spectroscopy that will provide new insights into biological structures. Until then, Park will remain patient as he looks to other projects he is overseeing, including one developing an MRI magnet for screening osteoporosis.

And yes, his new project requires superconducting coils. Park is always ready to start winding. 



from MIT News http://ift.tt/2BBcLNj

Turning real estate data into decision-making tools

The unprecedented amount of commercial real estate information being generated today presents new opportunities for analysts to develop models that translate masses of data into predictive tools for investors. Recognizing that potential, the MIT Center for Real Estate (CRE) has launched the Real Estate Price Dynamics Research Platform (REPD Platform) to explore models and analytics that can lay the foundation for providing real-world solutions. The platform builds on CRE’s earlier work in the field of commercial property price index development. 

The lead researcher for the platform is postdoc Alexander van de Minne, with David Geltner, professor of real estate finance, serving as principal investigator. Geltner is the lead author of “Commercial Real Estate Analysis and Investment,” a standard graduate textbook in the field.

“Real estate investment has always been a world with a lack of good empirical data,” says Geltner, a pioneer in the development of transaction-price-based commercial property price and investment performance indices over a decade ago. “But with the digital revolution, there’s an explosion of data aggregators, information companies, and other sources of empirical data relevant to commercial real estate investment.”

In addition to the increase in data availability, Geltner says, the other crucial component for the REPD Platform has been the advancement of econometric capability to handle the new data. Econometrics, a toolkit of statistical methods used by economists to test hypotheses using real-world data, provides a means to turn enormous quantities of data into actionable information.

The aim of the platform, whose research and analysis will be available to the public, is to advance real estate investment-related analytics in such areas as price and rent indexing (how prices change over time) and automated valuation models. These can ultimately have a real-world impact by improving investment and management decisions. One feature that distinguishes the REPD Platform from most other property investment research is the application of Bayesian techniques, as distinguished from classical statistics. By employing Bayesian econometrics, researchers are able to use prior knowledge and economic theory to help inform the statistical analysis, which Geltner says makes the analysis more efficient.

Van de Minne says that this is important because of a seeming paradox: “Even though we have much more data than we’ve ever had before in commercial real estate, we still find ourselves typically in situations of scarce data.”

This occurs because the analysis of investment properties is subject to a host of variables, including market and submarket location as well as varying data sources, which make the study of real estate pricing very challenging. Geltner adds that because the values of properties are so market specific — with market rents tied to the value of the asset — “you’ve really got to track locally.” 

“What is going on in San Francisco in terms of asset pricing may be totally different from what is going on in Dallas,” he says. “And even what’s going on in the Dallas central business district is different from what’s going on in North Dallas.”

Van de Minne, using the example of how an insufficient number of property sales within a given period can produce skewed results, says there are inherent flaws in using a classical statistical model for real estate price indexing.

“If you’re looking at a price index that has only two data points [property sales], for instance, and you try to use that sample to tell us that prices went down 85 percent in one quarter, can you really take that conclusion seriously?” he asks. “So what our models allow us to do is to still use that information, but to weigh that data against our a priori knowledge.”
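
The weighing he describes can be sketched with a standard normal-normal Bayesian update. The numbers below are invented for illustration, and this is not the platform’s actual model; it only shows how a prior pulls an estimate based on two sales back toward something plausible.

    # Prior belief about a quarterly price change vs. a noisy estimate from two sales (made-up numbers).
    prior_mean, prior_var = 0.00, 0.02 ** 2                  # prior: roughly flat prices, ~2% uncertainty
    sample_mean, n_sales, sample_var = -0.85, 2, 0.30 ** 2   # two sales suggesting an 85% drop, very noisy

    like_var = sample_var / n_sales                          # uncertainty of the two-sale average
    post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
    post_mean = post_var * (prior_mean / prior_var + sample_mean / like_var)

    # Two sales carry little information relative to prior market knowledge,
    # so the posterior lands far from -85 percent.
    print(f"posterior estimate of the quarterly price change: {post_mean:.1%}")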

Although the primary focus of the REPD Platform is on commercial property asset prices, related subjects are being explored, such as rents and space market dynamics, with the platform already being used to study office markets in India. The platform also engages with other research organizations within CRE, including the Real Estate Innovation Lab and the newly created China Future City Lab, which focuses on China’s rapidly growing urban areas. The researchers also collaborate with academics from other disciplines within MIT, such as Youssef M. Marzouk, the director of the Aerospace Computational Design Laboratory.

The REPD Platform was seed-funded with a gift from long-time CRE industry partner Real Capital Analytics Inc. In classic MIT “mens et manus” fashion, the platform serves as a bridge between pioneering academic research and industry practice.

“This is an academic entity in an academic institution, so we’re not particularly driven by ‘Is there a profit?’ in producing this information product,” says Geltner. “We’re more about discovering fundamental things about the real estate investment industry — the markets and how they work.”

The function of the platform is not purely academic either, says van de Minne.

“We are interested in the actual needs of people in the industry,” he says. “We want to have an impact, so we’re not just living in an academic bubble.”



from MIT News http://ift.tt/2kHF4iF

Jennifer Rupp wins international electrochemistry prize

Jennifer L. M. Rupp, who holds joint appointments at MIT as an assistant professor in the departments of Materials Science and Engineering (DMSE) and Electrical Engineering and Computer Science (EECS), has won the 2017 “Science Award Electrochemistry” from Volkswagen and BASF. Rupp was honored for her work on energy storage systems.

Rupp received the award, which is worth about $47,000, on Dec. 1 at ceremonies held at Karlsruhe Institute of Technology in Germany. Rupp’s Electrochemical Materials Laboratory at MIT is working to replace the flammable liquid electrolyte in lithium batteries with a safer solid-state lithium electrolyte.

“The team was honored to receive the award for their work on processing and designing new solid-state, garnet-type batteries, and for their commitment to integrate cathodes with socioeconomically acceptable elements,” Rupp says. “Designing lithium conducting glass-ceramics and battery electrode alloys can be interesting strategies for future battery architectures based on garnets to avoid lithium dendrites that often lead to performance failure.” Dendrites are lithium filaments shaped like tree leaves or snowflakes that can form in rechargeable lithium metal batteries, and their unchecked growth can cause a cell to short-circuit.

Ulrich Eichhorn, head of group research and development for Volkswagen AG, says the winners of the Science Award “are an excellent example of innovative and creative ideas in this field.” The German automaker plans to reach a goal of 25 percent of its vehicles being battery-powered electrics by 2025.

The award citation noted Rupp’s work on ceramic engineering for fast lithium transfer in garnet-type batteries and on a novel glassy-type lithium-ion conductor that may lead to new design principles for solid-state batteries.

“BASF creates chemistry for a sustainable future. We all know that batteries are at the core of electromobility, and there is great potential for specific technological progress in this area. Yet, there are scientific hurdles we must first overcome,” says Martin Brudermüller, vice chairman of the board of executive directors and chief technology officer at BASF. “Electrochemistry is a key technology for sustainable future mobility. That is why we need first-class research around the globe conducted by excellent scientists who inspire each other to continuously develop new and better solutions.”

Rupp joined the DMSE in January as the Thomas Lord Assistant Professor of Materials Science and Engineering at MIT, and recently was appointed as an assistant professor in EECS. She also conducts research on materials for solid oxide fuel cells, electrochemical sensors, and information storage devices.

The BASF and Volkswagen International “Science Award Electrochemistry” has been given out annually since 2012.



from MIT News http://ift.tt/2kIXfUU

MIT custodian Francisco Rodriguez released from detention

After being held in detention for more than five months by U.S. Immigration and Customs Enforcement (ICE), MIT custodian and father of four Francisco Rodriguez will be spending Christmas at home with his family.

While in detention, Rodriguez missed the birth of his son and spent his own birthday away from his family. On Friday, Rodriguez thanked all of the supporters who worked for his release and said that his faith in God helped to sustain him throughout his detention.

“I feel very happy today. The most fabulous gift that I have in my life is to be with my family,” Rodriguez said at a press conference today at Bellingham Hill Park in Chelsea, Massachusetts. “It’s a miracle. Miracles can happen any time. It’s fantastic to be home and to be free.”

Rodriguez, who has been employed at MIT for five years, left his home country of El Salvador in 2006, fearing violence after a co-worker was murdered. He applied for asylum and although his application was denied, he has been granted yearly stays of removal and permits authorizing him to work in the United States. In July, he was detained by ICE during one of his regular check-ins with the agency.

Earlier this month, the 10th Circuit Court of Appeals in Denver granted a request to stay his removal from the United States while he appeals the rejection of his asylum request. On Monday, his legal team filed a motion to seek his immediate release from custody, and on Wednesday, the U.S. Attorney’s office in Boston told Rodriguez’ lawyers that they would not oppose his request to be released pending the outcome of the appeal.

"That Francisco has been released in time to share Christmas with his family is a gift to us all. He is a beloved member of the MIT community whose hard work and eagerness to serve his community are a beautiful reflection of the values of MIT, and of America," says MIT President L. Rafael Reif. "While we celebrate his return to his wife, daughters, and newborn son, however, we recognize that this reprieve is not the same as a permanent offer of asylum. We strongly believe that Francisco should be allowed to remain in the United States permanently."

After Rodriguez was taken into custody in July, the MIT community came together to support him, holding rallies, signing petitions, and raising money. MIT’s Office of the General Counsel secured the law firm of Goodwin Procter to join Rodriguez’ legal team, pro bono.

“I want to thank the whole MIT community — professors, students, [Director of Campus Services and Chief of Police] John DiFava, the lawyers for MIT, the president, my coworkers — for all the help,” Rodriguez said after Friday’s press conference. “They really support people. MIT is a family. That’s the way that they are.”

Rodriguez described a joyous reunion with his daughters, who had not been expecting him until Friday. He was released earlier than planned, on Thursday, and was able to surprise them after school.

“Yesterday when my first daughter came home from school, she wasn’t expecting I would be home, and when she saw me she just jumped in the air, she was so happy,” he said.

Just days ago, Rodriguez was thinking about how he might celebrate Christmas with some of his fellow detainees, but now he plans to spend the holiday attending church and celebrating with his family.

“My daughters said the best gift they have is ‘you being here with us, Daddy,’” Rodriguez said. “We shared this great gift already, before Christmas.”

In the meantime, Rodriguez’ legal team will continue its efforts to obtain asylum for Rodriguez.

“This is just the first step for us,” said John Bennett, a Goodwin Procter partner working on the case. “This has been a gratifying past few days, and our work with Francisco continues. We believe Francisco is a strong candidate for [asylum]. We think that keeping him here will make us a stronger country and all of us a better people.”

Thomas Kochan, a professor at MIT’s Sloan School of Management who has helped raise money for Rodriguez’ expenses, came to the press conference to convey good wishes from MIT. “Welcome back to our community, and we look forward to seeing you back here,” he told Rodriguez.



from MIT News http://ift.tt/2BSxpFs

Tea with teachers

Everyone knows that the Student Center is a community space students use to eat and study. What happens on the fourth floor may come as a surprise to many in the MIT community: A semiprofessional film studio is home to Tea With Teachers, a project founded to make MIT professors more approachable to students.

Sina Booeshagi '17 co-founded Tea With Teachers to address a discomfort he felt when approaching professors. “I felt that myself and other students faced confidence or language barriers that made it difficult to approach professors and get to know them,” says Booeshagi. A casual conversation over tea, he thought, would help others to see just how approachable these professors are.

The Tea with Teachers team applied to the MindHandHeart Innovation Fund, a grant program sponsored by MIT Medical and the Office of the Chancellor for advancing mental health, well-being, and community at MIT, in order to start a YouTube series profiling MIT professors. MindHandHeart enthusiastically supported Tea with Teachers’ innovative model of fostering connectedness on campus.

Along with co-founders senior Tchelet Segev, junior Nicholas Curtis, and sophomore Melissa Cao, Booeshagi was able to move forward with the series and served as its first host. He was well-suited to the role. “I enjoy getting to know people, talking about current events, and simply relaxing over a cup of tea,” he says. “It therefore felt pretty natural to do the same with MIT professors.”

Professors who appear on "Tea with Teachers" field a number of questions designed to provide a glimpse into their lives outside of the classroom, such as what their guilty pleasures are, what pranks they've pulled, and what superstitions they believe in. At the end of each episode, professors are asked if they have any wisdom to impart to MIT students.

To date, the channel has amassed over 16,000 views in the course of seven months, featuring the likes of MIT Chancellor Cynthia Barnhart, Walter M. May and A. Hazel May Professor Alexander Slocum, Vice Chancellor Ian Waitz, and Institute Professor Robert Langer. Its second season, featuring new host sophomore Talia Khan, debuted on Nov. 21 with an interview with associate professor of physics Pablo Jarillo-Herrero.

"Tea with Teachers has been an amazing experience," Kahn says. "I have really appreciated the opportunity to engage in so many meaningful conversations, and I am thankful to be a part of an initiative that helps bridge the gap between students and professors. I am excited to see how Tea with Teachers will grow and evolve in the future.”

The professors seem to love the idea as well. After being interviewed in early November, professor of biology Eric Lander reflected: “I loved the chance to sit down and have tea with Tea with Teachers. What a great program to help students connect with teachers!”

After the successful launch of Tea With Teachers, the group doesn’t plan on letting up. One can hardly walk down a hallway at MIT without seeing a poster advertising the project, or open their mailbox without receiving an email touting the most recent episode. Their first season is available on the Tea With Teachers YouTube channel, and new videos are released every Tuesday at 10 p.m.



from MIT News http://ift.tt/2l25ZES

Thursday, December 21, 2017

New depth sensors could be sensitive enough for self-driving cars

For the past 10 years, the Camera Culture group at MIT’s Media Lab has been developing innovative imaging systems — from a camera that can see around corners to one that can read text in closed books — by using “time of flight,” an approach that gauges distance by measuring the time it takes light projected into a scene to bounce back to a sensor.

In a new paper appearing in IEEE Access, members of the Camera Culture group present a new approach to time-of-flight imaging that increases its depth resolution 1,000-fold. That’s the type of resolution that could make self-driving cars practical.

The new approach could also enable accurate distance measurements through fog, which has proven to be a major obstacle to the development of self-driving cars.

At a range of 2 meters, existing time-of-flight systems have a depth resolution of about a centimeter. That’s good enough for the assisted-parking and collision-detection systems on today’s cars.

But as Achuta Kadambi, a joint PhD student in electrical engineering and computer science and media arts and sciences and first author on the paper, explains, “As you increase the range, your resolution goes down exponentially. Let’s say you have a long-range scenario, and you want your car to detect an object further away so it can make a fast update decision. You may have started at 1 centimeter, but now you’re back down to [a resolution of] a foot or even 5 feet. And if you make a mistake, it could lead to loss of life.”

At a range of 2 meters, the MIT researchers’ system, by contrast, has a depth resolution of 3 micrometers. Kadambi also conducted tests in which he sent a light signal through 500 meters of optical fiber with regularly spaced filters along its length, to simulate the power falloff incurred over longer distances, before feeding it to his system. Those tests suggest that at a range of 500 meters, the MIT system should still achieve a depth resolution of only a centimeter.

Kadambi is joined on the paper by his thesis advisor, Ramesh Raskar, an associate professor of media arts and sciences and head of the Camera Culture group.

Slow uptake

With time-of-flight imaging, a short burst of light is fired into a scene, and a camera measures the time it takes to return, which indicates the distance of the object that reflected it. The longer the light burst, the more ambiguous the measurement of how far it’s traveled. So light-burst length is one of the factors that determines system resolution.
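
At its core, the distance calculation is just the round-trip travel time multiplied by the speed of light and halved. The sketch below is a generic illustration of that principle, not the group’s system.

    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    def depth_from_round_trip(round_trip_seconds):
        """Distance to the reflecting object, from the round-trip time of a light pulse."""
        return SPEED_OF_LIGHT * round_trip_seconds / 2.0

    # A reflection arriving about 13.3 nanoseconds after the pulse left corresponds to ~2 meters,
    # so centimeter resolution at that range requires timing good to tens of picoseconds.
    print(depth_from_round_trip(13.34e-9))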

The other factor, however, is detection rate. Modulators, which turn a light beam off and on, can switch a billion times a second, but today’s detectors can make only about 100 million measurements a second. Detection rate is what limits existing time-of-flight systems to centimeter-scale resolution.

There is, however, another imaging technique that enables higher resolution, Kadambi says. That technique is interferometry, in which a light beam is split in two, and half of it is kept circulating locally while the other half — the “sample beam” — is fired into a visual scene. The reflected sample beam is recombined with the locally circulated light, and the difference in phase between the two beams — the relative alignment of the troughs and crests of their electromagnetic waves — yields a very precise measure of the distance the sample beam has traveled.

But interferometry requires careful synchronization of the two light beams. “You could never put interferometry on a car because it’s so sensitive to vibrations,” Kadambi says. “We’re using some ideas from interferometry and some of the ideas from LIDAR, and we’re really combining the two here.”

On the beat

They’re also, he explains, using some ideas from acoustics. Anyone who’s performed in a musical ensemble is familiar with the phenomenon of “beating.” If two singers, say, are slightly out of tune — one producing a pitch at 440 hertz and the other at 437 hertz — the interplay of their voices will produce another tone, whose frequency is the difference between those of the notes they’re singing — in this case, 3 hertz.

The same is true with light pulses. If a time-of-flight imaging system is firing light into a scene at the rate of a billion pulses a second, and the returning light is combined with light pulsing 999,999,999 times a second, the result will be a light signal pulsing once a second — a rate easily detectable with a commodity video camera. And that slow “beat” will contain all the phase information necessary to gauge distance.
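
That arithmetic can be demonstrated at scaled-down frequencies, which keeps the simulation small. The sketch below mixes a delayed 1 kHz “return” with a 999 Hz reference and reads the delay off the phase of the resulting 1 Hz beat; the frequencies and delay are stand-ins chosen for illustration, not the actual gigahertz parameters described above.

    import numpy as np

    f_signal, f_reference = 1000.0, 999.0    # scaled-down stand-ins for 1 GHz and 999,999,999 Hz
    true_delay = 2.0e-4                      # round-trip delay of the returning signal, in seconds
    fs, duration = 10_000.0, 2.0             # sample rate and record length
    t = np.arange(int(fs * duration)) / fs

    returned = np.cos(2 * np.pi * f_signal * (t - true_delay))  # delayed return
    reference = np.cos(2 * np.pi * f_reference * t)             # local reference
    mixed = returned * reference             # contains a slow 1 Hz beat plus a ~2 kHz term

    spectrum = np.fft.rfft(mixed)
    beat_bin = int((f_signal - f_reference) * duration)         # FFT bin of the 1 Hz beat
    beat_phase = -np.angle(spectrum[beat_bin])                  # equals 2*pi*f_signal*delay
    recovered_delay = beat_phase / (2 * np.pi * f_signal)       # ignoring whole-cycle ambiguity

    print(f"true delay {true_delay:.6f} s, recovered {recovered_delay:.6f} s")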

But rather than try to synchronize two high-frequency light signals — as interferometry systems must — Kadambi and Raskar simply modulate the returning signal, using the same technology that produced it in the first place. That is, they pulse the already pulsed light. The result is the same, but the approach is much more practical for automotive systems.

“The fusion of the optical coherence and electronic coherence is very unique,” Raskar says. “We’re modulating the light at a few gigahertz, so it’s like turning a flashlight on and off millions of times per second. But we’re changing that electronically, not optically. The combination of the two is really where you get the power for this system.”

Through the fog

Gigahertz optical systems are naturally better at compensating for fog than lower-frequency systems. Fog is problematic for time-of-flight systems because it scatters light: It deflects the returning light signals so that they arrive late and at odd angles. Trying to isolate a true signal in all that noise is too computationally challenging to do on the fly.

With low-frequency systems, scattering causes a slight shift in phase, one that simply muddies the signal that reaches the detector. But with high-frequency systems, the phase shift is much larger relative to the frequency of the signal. Scattered light signals arriving over different paths will actually cancel each other out: The troughs of one wave will align with the crests of another. Theoretical analyses performed at the University of Wisconsin and Columbia University suggest that this cancellation will be widespread enough to make identifying a true signal much easier.
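
The cancellation argument can be sketched numerically: scattering adds extra, randomly distributed delays, and the higher the modulation frequency, the more the corresponding phases spread around the circle and average toward zero. The delay spread and frequencies below are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    extra_delays = rng.uniform(0.0, 2e-9, size=100_000)  # extra path delays from scattering, 0-2 ns

    for f_mod in (1e7, 1e9):                              # 10 MHz vs. 1 GHz modulation
        phasors = np.exp(-2j * np.pi * f_mod * extra_delays)
        print(f"{f_mod:.0e} Hz: scattered light sums to {abs(phasors.mean()):.3f} of its full strength")

    # At 10 MHz the scattered phases barely spread, so the clutter adds up almost coherently;
    # at 1 GHz they wrap around the circle several times and largely cancel.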

“I am excited about medical applications of this technique,” says Rajiv Gupta, director of the Advanced X-ray Imaging Sciences Center at Massachusetts General Hospital and an associate professor at Harvard Medical School. “I was so impressed by the potential of this work to transform medical imaging that we took the rare step of recruiting a graduate student directly to the faculty in our department to continue this work.”

“I think it is a significant milestone in development of time-of-flight techniques because it removes the most stringent requirement in mass deployment of cameras and devices that use time-of-flight principles for light, namely, [the need for] a very fast camera,” he adds. “The beauty of Achuta and Ramesh’s work is that by creating beats between lights of two different frequencies, they are able to use ordinary cameras to record time of flight.”



from MIT News http://ift.tt/2BMZ9Lw

Recalculating time

Whether it’s tracking brain activity in the operating room, seismic vibrations during an earthquake, or biodiversity in a single ecosystem over a million years, measuring the frequency of an occurrence over a period of time is a fundamental data analysis task that yields critical insight in many scientific fields. But when it comes to analyzing these time series data, researchers are limited to looking at pieces of the data at a time to assemble the big picture, instead of being able to look at the big picture all at once.

In a new study, MIT researchers have developed a novel approach to analyzing time series data sets using a new algorithm, termed state-space multitaper time-frequency analysis (SS-MT). SS-MT provides a framework to analyze time series data in real time, enabling researchers to work in a more informed way with large sets of data that are nonstationary, i.e., when their characteristics evolve over time. It allows researchers to not only quantify the shifting properties of data but also make formal statistical comparisons between arbitrary segments of the data.

“The algorithm functions similarly to the way a GPS calculates your route when driving. If you stray away from your predicted route, the GPS triggers the recalculation to incorporate the new information,” says Emery Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience, a member of the Picower Institute for Learning and Memory, associate director of the Institute for Medical Engineering and Science, and senior author on the study.

“This allows you to use what you have already computed to get a more accurate estimate of what you’re about to compute in the next time period,” Brown says. “Current approaches to analyses of long, nonstationary time series ignore what you have already calculated in the previous interval, leading to an enormous information loss.”

In the study, Brown and his colleagues combined the strengths of two existing statistical analysis paradigms: state-space modeling and multitaper methods. State-space modeling is a flexible paradigm, which has been broadly applied to analyze data whose characteristics evolve over time. Examples include GPS, tracking learning, and performing speech recognition. Multitaper methods are optimal for computing spectra on a finite interval. When combined, the two methods bring together the local optimality properties of the multitaper approach with the ability to combine information across intervals with the state-space framework to produce an analysis paradigm that provides increased frequency resolution, increased noise reduction and formal statistical inference.
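
A rough feel for that combination: compute a multitaper spectrum on each interval, then treat the log-spectrum at each frequency as a slowly evolving state and carry it across intervals with a Kalman-style update. The sketch below is only an illustration of the idea under simplifying assumptions (a random-walk state model with hand-picked noise variances); it is not the SS-MT algorithm as published.

    import numpy as np
    from scipy.signal.windows import dpss

    def smoothed_multitaper_spectrogram(x, win_len=1024, nw=3.0, n_tapers=5, q=0.05, r=1.0):
        """Multitaper spectra per interval, linked across intervals by a per-frequency
        scalar Kalman filter. Illustrative only; q and r are assumed noise variances."""
        tapers = dpss(win_len, nw, n_tapers)             # Slepian tapers, shape (n_tapers, win_len)
        n_intervals = len(x) // win_len
        n_freqs = win_len // 2 + 1
        est = np.zeros(n_freqs)                          # state: log power at each frequency
        var = np.full(n_freqs, r)                        # state uncertainty
        spectrogram = np.zeros((n_intervals, n_freqs))
        for i in range(n_intervals):
            seg = x[i * win_len:(i + 1) * win_len]
            power = np.mean([np.abs(np.fft.rfft(tp * seg)) ** 2 for tp in tapers], axis=0)
            obs = np.log(power + 1e-12)                  # this interval's multitaper log-spectrum
            if i == 0:
                est = obs                                # initialize from the first interval
            else:
                var_pred = var + q                       # predict: the spectrum drifts slowly
                gain = var_pred / (var_pred + r)         # how much to trust the new interval
                est = est + gain * (obs - est)           # reuse what was already computed
                var = (1.0 - gain) * var_pred
            spectrogram[i] = est
        return spectrogram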

To test the SS-MT algorithm, Brown and colleagues first analyzed electroencephalogram (EEG) recordings measuring brain activity from patients receiving general anesthesia for surgery. The SS-MT algorithm provided a highly denoised spectrogram characterizing the changes in power across frequencies over time. In a second example, they used the SS-MT’s inference paradigm to compare different levels of unconsciousness in terms of the differences in the spectral properties of these behavioral states.

“The SS-MT analysis produces cleaner, sharper spectrograms,” says Brown. “The more background noise we can remove from a spectrogram, the easier it is to carry out formal statistical analyses.”

Going forward, Brown and his team will use this method to investigate in detail the nature of the brain’s dynamics under general anesthesia. He further notes that the algorithm could find broad use in other applications of time-series analyses.

“Spectrogram estimation is a standard analytic technique applied commonly in a number of problems such as analyzing solar variations, seismic activity, stock market activity, neuronal dynamics and many other types of time series,” says Brown. “As use of sensor and recording technologies becomes more prevalent, we will need better, more efficient ways to process data in real time. Therefore, we anticipate that the SS-MT algorithm could find many new areas of application.”

Seong-Eun Kim, Michael K. Behr, and Demba E. Ba are lead authors of the paper, which was published online the week of Dec. 18 in Proceedings of the National Academy of Sciences PLUS. This work was partially supported by a National Research Foundation of Korea Grant, Guggenheim Fellowships in Applied Mathematics, the National Institutes of Health including NIH Transformative Research Awards, funds from Massachusetts General Hospital, and funds from the Picower Institute for Learning and Memory.



from MIT News http://ift.tt/2kYdVay

Can computers help us synthesize new materials?

Last month, three MIT materials scientists and their colleagues published a paper describing a new artificial-intelligence system that can pore through scientific papers and extract “recipes” for producing particular types of materials.

That work was envisioned as the first step toward a system that can originate recipes for materials that have been described only theoretically. Now, in a paper in the journal npj Computational Materials, the same three materials scientists, with a colleague in MIT’s Department of Electrical Engineering and Computer Science (EECS), take a further step in that direction, with a new artificial-intelligence system that can recognize higher-level patterns that are consistent across recipes.

For instance, the new system was able to identify correlations between “precursor” chemicals used in materials recipes and the crystal structures of the resulting products. The same correlations, it turned out, had been documented in the literature.

The system also relies on statistical methods that provide a natural mechanism for generating original recipes. In the paper, the researchers use this mechanism to suggest alternative recipes for known materials, and the suggestions accord well with real recipes.

The first author on the new paper is Edward Kim, a graduate student in materials science and engineering. The senior author is his advisor, Elsa Olivetti, the Atlantic Richfield Assistant Professor of Energy Studies in the Department of Materials Science and Engineering (DMSE). They’re joined by Kevin Huang, a postdoc in DMSE, and by Stefanie Jegelka, the X-Window Consortium Career Development Assistant Professor in EECS.

Sparse and scarce

Like many of the best-performing artificial-intelligence systems of the past 10 years, the MIT researchers’ new system is a so-called neural network, which learns to perform computational tasks by analyzing huge sets of training data. Traditionally, attempts to use neural networks to generate materials recipes have run up against two problems, which the researchers describe as sparsity and scarcity.

Any recipe for a material can be represented as a vector, which is essentially a long string of numbers. Each number represents a feature of the recipe, such as the concentration of a particular chemical, the solvent in which it’s dissolved, or the temperature at which a reaction takes place.

Since any given recipe will use only a few of the many chemicals and solvents described in the literature, most of those numbers will be zero. That’s what the researchers mean by “sparse.”
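
As a concrete illustration (the feature names and values here are invented), a single recipe occupies only a handful of positions in a vector with one slot for every chemical, solvent, and condition mentioned anywhere in the literature:

    # Hypothetical feature vocabulary covering everything seen in the literature
    vocabulary = ["MnO2_conc", "LiOH_conc", "ethanol", "water", "anneal_temp_C"] + [
        f"other_feature_{i}" for i in range(995)
    ]  # 1,000 features in total

    # One recipe touches only a few of them; every other entry is zero
    recipe = {"MnO2_conc": 0.5, "water": 1.0, "anneal_temp_C": 350.0}
    vector = [recipe.get(name, 0.0) for name in vocabulary]

    nonzero = sum(v != 0.0 for v in vector)
    print(f"{nonzero} of {len(vector)} entries are nonzero")  # sparse: 3 of 1,000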

Similarly, to learn how modifying reaction parameters — such as chemical concentrations and temperatures — can affect final products, a system would ideally be trained on a huge number of examples in which those parameters are varied. But for some materials — particularly newer ones — the literature may contain only a few recipes. That’s scarcity.

“People think that with machine learning, you need a lot of data, and if it’s sparse, you need more data,” Kim says. “When you’re trying to focus on a very specific system, where you’re forced to use high-dimensional data but you don’t have a lot of it, can you still use these neural machine-learning techniques?”

Neural networks are typically arranged into layers, each consisting of thousands of simple processing units, or nodes. Each node is connected to several nodes in the layers above and below. Data is fed into the bottom layer, which manipulates it and passes it to the next layer, which manipulates it and passes it to the next, and so on. During training, the connections between nodes are constantly readjusted until the output of the final layer consistently approximates the result of some computation.

The problem with sparse, high-dimensional data is that for any given training example, most nodes in the bottom layer receive no data. It would take a prohibitively large training set to ensure that the network as a whole sees enough data to learn to make reliable generalizations.

Artificial bottleneck

The purpose of the MIT researchers’ network is to distill input vectors into much smaller vectors, all of whose numbers are meaningful for every input. To that end, the network has a middle layer with just a few nodes in it — only two, in some experiments.

The goal of training is simply to configure the network so that its output is as close as possible to its input. If training is successful, then the handful of nodes in the middle layer must somehow represent most of the information contained in the input vector, but in a much more compressed form. Such systems, in which the output attempts to match the input, are called “autoencoders.”

Autoencoding compensates for sparsity, but to handle scarcity, the researchers trained their network on not only recipes for producing particular materials, but also on recipes for producing very similar materials. They used three measures of similarity, one of which seeks to minimize the number of differences between materials — substituting, say, just one atom for another — while preserving crystal structure.

During training, the weight that the network gives example recipes varies according to their similarity scores.

Playing the odds

In fact, the researchers’ network is not just an autoencoder, but what’s called a variational autoencoder. That means that during training, the network is evaluated not only on how well its outputs match its inputs, but also on how well the values taken on by the middle layer accord with some statistical model — say, the familiar bell curve, or normal distribution. That is, across the whole training set, the values taken on by the middle layer should cluster around a central value and then taper off at a regular rate in all directions.
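
A compact sketch of such a network is shown below in PyTorch: a two-node middle layer, an output trained to match the input, and a penalty that pushes the middle layer’s values toward a standard normal distribution. The layer sizes, names, and equal loss weighting are assumptions made for illustration; this is the generic variational-autoencoder pattern, not the group’s published model.

    import torch
    import torch.nn as nn

    class RecipeVAE(nn.Module):
        """Generic variational autoencoder with a two-node bottleneck (illustrative)."""
        def __init__(self, n_features, latent_dim=2, hidden=64):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
            self.to_mu = nn.Linear(hidden, latent_dim)
            self.to_logvar = nn.Linear(hidden, latent_dim)
            self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, n_features))

        def forward(self, x):
            h = self.encoder(x)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # sample the middle layer
            return self.decoder(z), mu, logvar

    def vae_loss(x, x_hat, mu, logvar):
        reconstruction = ((x - x_hat) ** 2).sum(dim=1).mean()                  # output should match input
        kl = (-0.5 * (1 + logvar - mu ** 2 - logvar.exp())).sum(dim=1).mean()  # keep the latent near N(0, I)
        return reconstruction + kl

    # Sampling a latent point at random and running it through the trained decoder
    # is what lets the system propose plausible new recipes.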

After training a variational autoencoder with a two-node middle layer on recipes for manganese dioxide and related compounds, the researchers constructed a two-dimensional map depicting the values that the two middle nodes took on for each example in the training set.

Remarkably, training examples that used the same precursor chemicals stuck to the same regions of the map, with sharp boundaries between regions. The same was true of training examples that yielded four of manganese dioxide’s common “polymorphs,” or crystal structures. And combining those two mappings indicated correlations between particular precursors and particular crystal structures.

“We thought it was cool that the regions were continuous,” Olivetti says, “because there’s no reason that that should necessarily be true.”

Variational autoencoding is also what enables the researchers’ system to generate new recipes. Because the values taken on by the middle layer adhere to a probability distribution, picking a value from that distribution at random is likely to yield a plausible recipe.

“This actually touches upon various topics that are currently of great interest in machine learning,” Jegelka says. “Learning with structured objects, allowing interpretability by and interaction with experts, and generating structured complex data — we integrate all of these.”

“‘Synthesizability’ is an example of a concept that is central to materials science yet lacks a good physics-based description,” says Bryce Meredig, founder and chief scientist at Citrine Informatics, a company that brings big-data and artificial-intelligence techniques to bear on materials science research. “As a result, computational screens for new materials have been hamstrung for many years by synthetic inaccessibility of the predicted materials. Olivetti and colleagues have taken a novel, data-driven approach to mapping materials syntheses and made an important contribution toward enabling us to computationally identify materials that not only have exciting properties but also can be made practically in the laboratory.”

The research was supported by the National Science Foundation, the Natural Sciences and Engineering Research Council of Canada, the U.S. Office of Naval Research, the MIT Energy Initiative, and the U.S. Department of Energy’s Basic Energy Science Program.



from MIT News http://ift.tt/2CT55Ce

Wednesday, December 20, 2017

Letter regarding the new federal tax bill and its impact on MIT

The following email was sent today to the MIT community by President L. Rafael Reif.

To the members of the MIT community:

As Congress was shaping its tax legislation, I wrote to you about possible provisions that would have damaging effects on the Institute and members of our community. Now that the bill is final, I offer an update.

In our community, as elsewhere, people hold a range of views about the tax bill as a whole. It is my responsibility to let you know the bill’s immediate consequences for MIT.

  • No new tax on tuition remission for graduate students

First, some very good news. The final bill does not include the provision that would have had the harshest and most immediate impact on thousands of members of our community: the proposal to treat tuition remission for graduate students as taxable income.

While our students enjoy the relief of this moment, I hope they will also accept our admiring congratulations; this good outcome resulted in no small part from the persistent advocacy of our graduate students and their peers across the country. They organized phone banks to Congress. They wrote op-eds. They did interviews. They blogged, they tweeted, they coordinated with students at other schools — and together they made clear that the proposed tax could derail not only thousands of individual careers, but the nation’s strength in science and innovation. I am grateful for both their success and their example.

Further credit is due to Chancellor Cynthia Barnhart and other senior MIT leaders, who worked to keep our students informed, and to MIT’s Washington office as well as many alumni and MIT Corporation members who also strove to make sure this proposal did not appear in the final bill.

  • New tax on MIT's net investment income

Unfortunately — despite extensive efforts by MIT, on our own and in concert with peer schools — the tax bill does include a tax on the net income from the endowment and other investments of about 30 schools that meet a certain endowment threshold, including MIT.

According to our current analysis, this new tax will cost MIT at least $10 million a year. I have asked Provost Marty Schmidt and Executive Vice President and Treasurer Israel Ruiz to develop options to meet this new tax obligation while minimizing its impact on MIT’s capacity to serve its mission.

This is the first time that Congress has taxed university endowments, and the first time it has targeted a tax at specific universities. As an offset for a bill that will cut $1.5 trillion in taxes over ten years, this tax is expected to bring in less than $200 million in revenue annually.

However, it will reduce MIT’s ability to undertake exactly the kind of activities that Congress wants us to pursue — extensive financial aid for students, innovative education, pioneering research — the same activities that MIT’s alumni and friends have so strongly endorsed for many decades through contributions to our endowment.

It is clear that we must do a better job of convincing lawmakers that the work of MIT and other research universities — maintaining America’s scientific leadership, fueling innovation, creating prosperity and educating leaders — is vital to the national interest, and that, from need-blind admissions to life-saving research, we are focused on advancing the public good.

I look forward to working with you to convey this important message, while MIT continues to carry out its timeless mission, to the enduring benefit of the students we teach and the nation we serve.

You can find more detail about the impact of this tax and other provisions of the bill in this interview with Vice President for Research Maria Zuber and Executive Vice President and Treasurer Israel Ruiz.

Beyond the immediate consequences of the tax bill, I draw your attention to a possible issue on the horizon. If the federal budget deficit increases, over time this will likely trigger calls for federal spending cuts, including cuts to research. As I have argued many times — from the Wall Street Journal to Foreign Affairs — such cuts would be a matter of mission-critical concern for institutions like MIT, with harmful repercussions for our country.

On all these questions, we will closely monitor developments in Washington and keep you informed as appropriate.

I urge each of you to do all you can to help the public understand that research, education and innovation are the signature investments of a nation that believes in its future.

Sincerely,
L. Rafael Reif



from MIT News http://ift.tt/2ktp3fP

Q&A: Israel Ruiz and Maria Zuber on new tax law’s implications for MIT

Both houses of Congress have now passed the Tax Cuts and Jobs Act of 2017, and President Trump is expected to sign it into law soon. The bill will make significant changes to federal tax law. MIT officials have closely tracked the bill, and its likely impacts on Institute students, faculty, and staff, since it was first proposed last month — and have engaged with elected representatives to ensure they understand how the legislation will affect higher education and research in the United States.

Israel Ruiz, MIT’s executive vice president and treasurer, and Maria Zuber, vice president for research and the E.A. Griswold Professor of Geophysics, spoke with MIT News about the new law’s foreseeable impacts on members of the MIT community.

Q: Overall, what does this legislation mean for MIT?

Ruiz: Thanks to great efforts by MIT students, staff, alumni, and friends, the final bill is less damaging to the mission of MIT than the version passed last month by the House. Provisions that would have been particularly harmful to MIT and members of our community, such as new income taxes on many of our graduate students and on employees’ tuition assistance, are not in the bill just passed by Congress.

One of the most harmful aspects of the initial bill would have reclassified as taxable income graduate students’ tuition remission — essentially, contributions from MIT  and others to offset the cost of tuition for graduate students who work as teaching and research assistants. Our analysis found that this provision would have increased income taxes for approximately half of MIT’s 7,000 graduate students.

Other earlier provisions detrimental to MIT or members of our community have been struck from the final bill. These include a proposed elimination of the deduction for interest on student loans and the repeal or scaling back of certain education credits.

We are relieved that all of these harmful provisions were ultimately excluded from the bill that has now been passed by Congress. However, the legislation still includes provisions that will hurt MIT.

The tax increase will affect MIT’s ability to educate the next generation of society’s leaders, to support research breakthroughs, and to power economic growth through the many startups and far-reaching technological innovations that come out of MIT.

The law also represents a fundamental rethinking of the status of our nation’s educational institutions, by levying a new tax on a small number of private universities despite our tax-exempt status.

Q: How did MIT’s leaders, students, alumni, and friends engage in this process?

Zuber:  I’ve spent several recent days on Capitol Hill, informing elected officials on how university endowments are structured and used by institutions, and on how investments made to support education and research benefit the nation. In my meetings in Washington, I have emphasized that America’s top-notch universities, its best-in-the-world scientific enterprise, and its longstanding technological leadership have played, and continue to play, an essential role in keeping our nation’s economy growing and making our nation more prosperous and secure. Tax policies that hurt our universities hurt all Americans.

Of course, I’m not MIT’s only voice in Washington: President Reif was in contact with people both inside and outside the government who could influence the final shape of the bill. And the MIT Washington Office has been working on this issue virtually nonstop for weeks now.

The leaders of our Graduate Student Council [GSC] worked with Chancellor Cynthia Barnhart, Vice Chancellor Ian Waitz, Vice President and General Counsel Mark DiVincenzo, and MIT Washington Office Director David Goldston as our students mobilized and advocated against the provision — ultimately removed from the bill now passed by Congress — that would have added onerous new taxes for many of our graduate students. Many MIT students joined in conveying to Congress just how devastating and shortsighted this proposal was, by participating in GSC-sponsored phone banks, authoring op-eds, engaging in media interviews, and traveling to Washington to meet with elected officials. They made a compelling case for how, as researchers and teachers, MIT’s graduate students contribute to making a better world by driving innovation, discovery, and economic growth.

Several MIT offices — including not only the Washington Office and Chancellor’s office, but also the Office of the Vice President for Finance and the Office of the General Counsel — have been working together since early November to analyze the provisions of the two bills, assess the implications of the provisions, and express our concerns strategically in D.C. In addition, MIT is working closely with the Association of American Universities, with other university groups, and with our peer institutions — all of whom have been active in building Congressional opposition to these tax proposals.

Finally, we have engaged throughout this process with MIT’s alumni, friends, and members of the MIT Corporation to help communicate the impact of this legislation.

Q: How will the new tax law immediately affect MIT?

Ruiz: We are still studying the final bill. With the taxation of graduate students’ tuition remission gone, the most onerous provision for MIT will be an “excise tax” — a new type of income tax imposed on a very small number of colleges and universities — equal to 1.4 percent of MIT’s annual investment income. The new law will impose the tax only on private academic institutions with at least 500 students and an endowment valued at more than $500,000 per student — affecting, as we understand, about 32 colleges and universities nationwide, including MIT.
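To make the mechanics of the rule concrete, the short sketch below applies the rate and thresholds just described (1.4 percent of net investment income, at least 500 students, endowment above $500,000 per student). The school figures in the example are invented placeholders, not MIT's actual numbers.

```python
# Hypothetical illustration of the excise-tax rule described above: 1.4 percent
# of net investment income, applied only to private institutions with at least
# 500 students and an endowment above $500,000 per student.
# All figures below are made-up placeholders, not MIT's actual numbers.

def excise_tax(students, endowment, investment_income,
               rate=0.014, min_students=500, endowment_per_student=500_000):
    """Return the tax owed under the rule, or 0 if the school is below the thresholds."""
    subject = students >= min_students and endowment / students > endowment_per_student
    return rate * investment_income if subject else 0.0

# Example: a school with 11,000 students, a $15 billion endowment, and
# $800 million of net investment income in a given year owes 1.4% of $800M.
print(f"${excise_tax(11_000, 15e9, 800e6):,.0f}")   # $11,200,000
```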

It’s important to understand that MIT’s endowment is not a bank account that we can simply tap as we need money: Such an approach would be a bit like raiding your retirement savings to buy groceries or pay your rent. Instead, investment earnings from MIT’s endowment — built through the generosity of alumni and donors over the past 150-some years — support current and future generations of MIT scholars with the resources needed to advance knowledge, research, and innovation.

Indeed, income from MIT’s investments provided 31 percent of total MIT campus revenues last fiscal year: These funds underpin all campus activities, including education, research, renewal of our facilities, faculty work, and student financial aid. On this last point, our endowment allows MIT to be generous in providing money for student scholarships, financial aid, and fellowships. This year, 33 percent of our undergraduates are attending MIT tuition-free.

Our analysis suggests that this new excise tax will cost the Institute at least $10 million next year, based on our investments’ performance over the past five years.

Q: In what significant ways might the new tax law affect MIT in the longer term?

Ruiz: A less clear impact of the law, but one that could also be damaging to MIT over the long term, is the possibility of a reduction in charitable giving associated with its changes to the tax code. Under the law, the standard deduction for taxpayers has been doubled, decreasing the tax benefit of charitable contributions for donors who no longer itemize deductions. In addition, the new law will double the amount excluded from the estate tax through 2025, a change that will enable donors to pass substantially more money to their heirs tax-free, reducing incentives for charitable giving to nonprofit organizations like MIT.

Zuber: I’ll add that another long-term impact could be reductions in federal support for higher education, and especially in federal research support: If federal debt grows, there are likely to be calls for reductions in discretionary spending. Last fiscal year, nearly half of MIT’s revenues — 49 percent — came in the form of grants and other funding in support of MIT’s research enterprise (including Lincoln Lab). Close to 80 percent of this funding came from the federal government. So future belt-tightening by the federal government, if the nation’s debt continues to mount, could exert significant financial pressure on the Institute in the years ahead.



from MIT News http://ift.tt/2BImdeg

Advancing integrated photonics at Lincoln Laboratory

Researchers at Lincoln Laboratory have been using silicon and compound semiconductor substrates to build photonic integrated circuits, or PICs, that enable devices such as optical communication receivers, wideband ladar transmitters, interconnects for trapped-ion quantum computers, inertial sensors, and microwave signal processors. Now, a recently awarded state grant will fund a germanium deposition reactor that will allow the researchers to exploit germanium as a key optoelectronic material in the fabrication of PICs operating at nontraditional wavelengths and under harsh environmental conditions.

Photonic integrated circuits are in demand for routing the enormous volume of traffic passing through data centers today. In addition, high-speed, reliable photonic circuits that lessen systems' electrical power requirements can improve the performance of quantum and all-optical computing systems, as well as the throughput of the advanced microprocessors embedded in highly sensitive sensors and increasingly capable autonomous vehicles. 

"The addition of the reactor will allow Lincoln Laboratory to manufacture trusted silicon photonic integrated circuits," says Daniel Pulver, the manager of Lincoln Laboratory's Microelectronics Laboratory.

The semiconductor facility has already been certified as a Trusted Foundry by the Defense Microelectronics Activity Trusted Program, which was established by the Department of Defense (DoD) and the National Security Agency to assure the security of the fabrication process and supply chain behind microelectronics used in defense systems. Pulver said that this grant will enable Lincoln Laboratory's microelectronics facility to become the first DoD trusted silicon photonics foundry. 

The improved in-house capabilities that the reactor will bring to the Microelectronics Laboratory will also support innovations that Lincoln Laboratory is developing, such as small, lightweight, low-power inertial navigation sensors. Currently, the laboratory must outsource germanium deposition for the silicon PICs in some systems under development. The new reactor will bring that step in-house, reducing fabrication time for next-generation photonic integrated circuits.

"This tool will allow us to optimize germanium-based photodetectors for operation at cryogenic temperatures, develop new classes of optical modulators, and extend the operating wavelength range of photonic integrated circuits to the long-wavelength infrared region of the spectrum," says Paul Juodawlkis, assistant leader of the Quantum Information and Integrated Nanosystems Group and principal investigator for Lincoln Laboratory's integrated photonics activities.

Massachusetts Governor Charlie Baker and Secretary of Housing and Economic Development Jay Ash presented the $1.9 million grant award to Lincoln Laboratory to celebrate National Manufacturing Day earlier this fall. It was one of seven grants awarded this year under the Massachusetts Manufacturing Innovation Initiative (M2I2) to support infrastructure improvements that will enable research and development leading to the manufacturing of advanced functional fabrics, integrated photonics, robotics, and flexible hybrid electronics.

"These awards will ensure the Commonwealth remains a leader in advanced manufacturing to spur job growth and train students for valuable career opportunities," Baker said.

The M2I2 was established to facilitate Massachusetts' participation in the Manufacturing USA program, a network of regional institutes, each with a distinct technology focus, whose member organizations collaborate to leverage existing resources, to foster education, and to promote manufacturing innovation and commercialization. Currently, nine institutes with main offices in cities across the country are connecting member organizations from academia, industry, and the research community.

Lincoln Laboratory has also become a member of AIM Photonics (American Institute for Manufacturing Integrated Photonics), a Manufacturing USA institute headquartered in Rochester, New York. The M2I2 grant will accelerate Lincoln Laboratory's involvement with AIM Photonics through advanced process development, internship and apprenticeship opportunities, and engagement with companies in Massachusetts and throughout the nation.

"We look forward to partnering with AIM Photonics and the Commonwealth on these exciting activities," Juodawlkis says. "We plan to use these new resources as a springboard to provide educational outreach and internships to students at both universities and community colleges."



from MIT News http://ift.tt/2kufw8t

New technique allows rapid screening for new types of solar cells

The worldwide quest by researchers to find better, more efficient materials for tomorrow’s solar panels is usually slow and painstaking. Researchers typically must produce lab samples — which are often composed of multiple layers of different materials bonded together — for extensive testing.

Now, a team at MIT and other institutions has come up with a way to bypass such expensive and time-consuming fabrication and testing, allowing for a rapid screening of far more variations than would be practical through the traditional approach.

The new process could not only speed up the search for new formulations, but also do a more accurate job of predicting their performance, explains Rachel Kurchin, an MIT graduate student and co-author of a paper describing the new process that appears this week in the journal Joule. Traditional methods “often require you to make a specialized sample, but that differs from an actual cell and may not be fully representative” of a real solar cell’s performance, she says.

For example, typical testing methods show the behavior of the “majority carriers,” the predominant particles or vacancies whose movement produces an electric current through a material. But in the case of photovoltaic (PV) materials, Kurchin explains, it is actually the minority carriers — those that are far less abundant in the material — that are the limiting factor in a device’s overall efficiency, and those are much more difficult to measure. In addition, typical procedures only measure the flow of current in one set of directions — within the plane of a thin-film material — whereas it’s up-down flow that is  actually harnessed in a working solar cell. In many materials, that flow can be “drastically different,” making it critical to understand in order to properly characterize the material, she says.

“Historically, the rate of new materials development is slow — typically 10 to 25 years,” says Tonio Buonassisi, an associate professor of mechanical engineering at MIT and senior author of the paper. “One of the things that makes the process slow is the long time it takes to troubleshoot early-stage prototype devices,” he says. “Performing characterization takes time — sometimes weeks or months — and the measurements do not always have the necessary sensitivity to determine the root cause of any problems.”

So, Buonassisi says, “the bottom line is, if we want to accelerate the pace of new materials development, it is imperative that we figure out faster and more accurate ways to troubleshoot our early-stage materials and prototype devices.” And that’s what the team has now accomplished. They have developed a set of tools that can be used to make accurate, rapid assessments of proposed materials, using a series of relatively simple lab tests combined with computer modeling of the physical properties of the material itself, as well as additional modeling based on a statistical method known as Bayesian inference.

The system involves making a simple test device, then measuring its current output under different levels of illumination and different voltages, to quantify exactly how the performance varies under these changing conditions. These values are then used to refine the statistical model.

“After we acquire many current-voltage measurements [of the sample] at different temperatures and illumination intensities, we need to figure out what combination of materials and interface variables make the best fit with our set of measurements,” Buonassisi explains. “Representing each parameter as a probability distribution allows us to account for experimental uncertainty, and it also allows us to suss out which parameters are covarying.”

The Bayesian inference process allows the estimates of each parameter to be updated based on each new measurement, gradually refining the estimates and homing in ever closer to the precise answer, he says.
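As a rough illustration of the kind of Bayesian updating Buonassisi describes, the sketch below fits a toy one-diode model to synthetic current-voltage data by accumulating the likelihood of each measurement over a grid of two parameters. The model, parameter ranges, noise level, and data are our own assumptions for illustration; they are not the team's actual device model or code.

```python
import numpy as np

# Toy Bayesian fit of a one-diode solar-cell model, J(V) = J_L - J_0*(exp(V/Vt) - 1),
# to noisy current-voltage "measurements". J_L (photocurrent) and J_0 (saturation
# current) are the unknown parameters; everything here is invented for illustration.

V_t = 0.02585                                      # thermal voltage at ~300 K, in volts

def diode_current(V, J_L, J_0):
    """Ideal one-diode model current density (mA/cm^2)."""
    return J_L - J_0 * (np.exp(V / V_t) - 1.0)

# Synthetic measurements at several bias voltages under one illumination level.
rng = np.random.default_rng(0)
V_meas = np.linspace(0.0, 0.55, 12)
true_JL, true_J0, noise = 35.0, 2e-9, 0.3          # assumed ground truth and noise
J_meas = diode_current(V_meas, true_JL, true_J0) + rng.normal(0.0, noise, V_meas.size)

# Represent the parameters as a probability distribution over a grid (flat prior).
JL_grid = np.linspace(30.0, 40.0, 201)
J0_grid = np.logspace(-10, -7, 201)
log_post = np.zeros((JL_grid.size, J0_grid.size))

# Bayesian update: each new measurement adds its Gaussian log-likelihood.
for V, J in zip(V_meas, J_meas):
    pred = diode_current(V, JL_grid[:, None], J0_grid[None, :])
    log_post += -0.5 * ((J - pred) / noise) ** 2

post = np.exp(log_post - log_post.max())
post /= post.sum()                                 # normalized posterior over the grid

i, j = np.unravel_index(post.argmax(), post.shape)
print(f"MAP estimate: J_L ~ {JL_grid[i]:.2f} mA/cm^2, J_0 ~ {J0_grid[j]:.1e} mA/cm^2")
```

In the team's actual workflow, the same kind of updating is applied to measurements taken at different temperatures and illumination intensities, with a full device-physics model in place of this toy diode equation.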

In seeking a combination of materials for a particular kind of application, Kurchin says, “we put in all these materials properties and interface properties, and it will tell you what the output will look like.”

The system is simple enough that, even for materials that have been less well-characterized in the lab, “we’re still able to run this without tremendous computer overhead.” And, Kurchin says, making use of the computational tools to screen possible materials will be increasingly useful because “lab equipment has gotten more expensive, and computers have gotten cheaper. This method allows you to minimize your use of complicated lab equipment.”

The basic methodology, Buonassisi says, could be applied to a wide variety of different materials evaluations, not just solar cells — in fact, it may apply to any system that involves a computer model for the output of an experimental measurement. “For example, this approach excels in figuring out which material or interface property might be limiting performance, even for complex stacks of materials like batteries, thermoelectric devices, or composites used in tennis shoes or airplane wings.” And, he adds, “It is especially useful for early-stage research, where many things might be going wrong at once.”

Going forward, he says, “our vision is to link up this fast characterization method with the faster materials and device synthesis methods we’ve developed in our lab.” Ultimately, he says,  “I’m very hopeful the combination of high-throughput computing, automation, and machine learning will help us accelerate the rate of novel materials development by more than a factor of five. This could be transformative, bringing the timelines for new materials-science discoveries down from 20 years to about three to five years.”

The research team also included Riley Brandt '11, SM '13, PhD '16; former postdoc Vera Steinmann; MIT graduate student Daniil Kitchaev and visiting professor Gerbrand Ceder; Chris Roat at Google Inc.; and Sergiu Levcenco and Thomas Unold at Helmholtz Zentrum Berlin. The work was supported by a Google Faculty Research Award, the U.S. Department of Energy, and a Total research grant.



from MIT News http://ift.tt/2BN1pFb

Tuesday, December 19, 2017

Inventing the “Google” for predictive analytics

Companies often employ number-crunching data scientists to gather insights such as which customers want certain services or where to open new stores and stock products. Analyzing the data to answer one or two of those queries, however, can take weeks or even months.

Now MIT spinout Endor has developed a predictive-analytics platform that lets anyone, tech-savvy or not, upload raw data and input any business question into an interface — similar to using an online search engine — and receive accurate answers in just 15 minutes.

The platform is based on the science of “social physics,” co-developed at the MIT Media Lab by Endor co-founders Alex “Sandy” Pentland, the Toshiba Professor of Media Arts and Sciences, and Yaniv Altshuler, a former MIT postdoc. Social physics uses mathematical models and machine learning to understand and predict crowd behaviors.

Users of the new platform upload data about customers or other individuals, such as records of mobile phone calls, credit card purchases, or web activity. They use Endor’s “query-builder” wizard to ask questions, such as “Where should we open our next store?” or “Who is likely to try product X?” Using the questions, the platform identifies patterns of previous behavior among the data and uses social physics models to predict future behavior. The platform can also analyze fully encrypted data-streams, allowing customers such as banks or credit card operators to maintain data privacy.

“It’s just like Google. You don’t have to spend time thinking, ‘Am I going to spend time asking Google this question?’ You just Google it,” Altshuler says. “It’s as simple as that.”

Financially backed by Innovation Endeavors, the private venture capital firm of Eric Schmidt, executive chairman of Google parent company Alphabet, Inc., the startup has found big-name customers, such as Coca-Cola, Mastercard, and Walmart, among other major retail and banking firms.

Recently, Endor analyzed Twitter data for a defense agency seeking to detect potential terrorists. The startup was given 15 million data points that included 50 Twitter accounts of identified ISIS activists, flagged by identifiers in the metadata, and was asked to find 74 additional ISIS accounts whose identifiers were extremely well hidden. Someone at Endor completed the task on a laptop in 24 minutes, detecting 80 “lookalike” ISIS accounts, 45 of which came from the pool of 74 well-hidden accounts named by the agency. The number of false positives was also low (35 accounts), few enough that the agency could afford to have human analysts investigate each flagged account.
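In standard information-retrieval terms, those figures translate directly into precision and recall; the short calculation below simply restates the arithmetic (the variable names are ours, not Endor's).

```python
# Restating the ISIS-detection example above as precision and recall.
flagged = 80          # accounts the platform surfaced as lookalikes
true_positives = 45   # flagged accounts that were among the 74 hidden targets
hidden_targets = 74   # accounts the agency asked Endor to find

precision = true_positives / flagged         # ~0.56 of flags were real targets
recall = true_positives / hidden_targets     # ~0.61 of hidden targets recovered
false_positives = flagged - true_positives   # 35 accounts left for human review

print(f"precision={precision:.2f}, recall={recall:.2f}, false positives={false_positives}")
```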

Clusters of commonality

Machine learning is used for complex computational problems that are relatively static, such as image recognition and voice recognition. Written and spoken English, for instance, has been essentially unchanged for centuries.

Human behavior, on the other hand, is ever-changing. Predicting human behavior means analyzing a large number of small signals over a short period of time, perhaps days or weeks. Traditional machine-learning algorithms rely mainly on constructed models that analyze data over much longer periods.

“In general, you need a lot of data to build accurate models for human behavior, and that means you have to rely on the past. Because you rely on the past, you cannot detect things that recently happened, and you can’t predict human behavior,” Altshuler says.

Throughout the early and mid-2000s, Pentland and Altshuler developed “social physics” in the Human Dynamics Lab, with the aim of capturing and analyzing short-term data to understand and predict crowd dynamics. In their research, they found that all big data contain certain mathematical patterns that indicate how social interactions spread and converge, and that those patterns can help predict future behaviors.

Using those mathematical patterns, they built a platform — the core technology of Endor’s platform — that can extract “clusters” of behavioral commonalities from millions of raw data points, much more quickly and accurately than machine-learning algorithms. A cluster may represent families of four, people who buy similar foods, or individuals who visit the same locations. “Most of those data patterns would be indistinguishable from noise with any other technologies,” Altshuler says.

It isn’t immediately clear what the clusters represent, only that the behaviors within them are strongly correlated. Querying the data, however, provides context. With customer data, for instance, someone might query which customers are most likely to buy a specific product. Using keywords, the platform matches behavioral traits — such as location and spending habits — of customers who have bought that product with those who haven’t. This overlap yields a list of possible new customers who are apt to buy the product.

In short, uploading data and asking the right question presents the platform with a basic request: Here is an example X, find me more of X. “As long as you can phrase a question in that way, you’ll get an accurate response,” Altshuler says.
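A crude way to picture that “find me more of X” request is nearest-neighbor matching on behavioral features: average the known examples into a profile, then rank everyone else by how close they sit to it. The sketch below does exactly that on invented data; it is a toy stand-in, not Endor’s social-physics algorithm.

```python
import numpy as np

# Toy "here is an example X, find me more of X": rank customers by how close
# their behavioral features are to the centroid of a few known examples.
# The features and data are invented; Endor's actual clustering is far richer.

rng = np.random.default_rng(42)
# Each row: [avg weekly spend, share of online purchases, distinct stores visited]
customers = rng.normal(loc=[120, 0.4, 3], scale=[40, 0.2, 1.5], size=(1000, 3))
known_buyers = customers[:5]                       # pretend these 5 bought product X

# Normalize features so no single scale dominates the distance.
mu, sigma = customers.mean(axis=0), customers.std(axis=0)
z = (customers - mu) / sigma
profile = ((known_buyers - mu) / sigma).mean(axis=0)   # centroid of the examples

# Score everyone by distance to the profile; exclude the known buyers themselves.
scores = np.linalg.norm(z - profile, axis=1)
scores[:5] = np.inf
lookalikes = np.argsort(scores)[:20]               # 20 most similar other customers

print("Top look-alike customer indices:", lookalikes[:10])
```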

Endor and Endor-ish

To test the platform, the researchers worked early on with the U.S. Defense Advanced Research Projects Agency (DARPA) to analyze mobile data from certain cities during times of civil unrest, showing how emerging patterns can help predict future riots. Altshuler also spent a couple of months in Singapore analyzing taxi ride data to predict traffic jams in the city.

In 2014, Altshuler connected with Schmidt through Doron Alter, a friend and Stanford University graduate, who at that time was a partner in Innovation Endeavors. The investors asked if the technology could be wrapped “into a product that could be used by anyone,” Altshuler says.

That year, with Schmidt’s financial support, Altshuler and Pentland, a serial entrepreneur, co-founded Endor to transform the platform into commercial software. The team was joined by Alter and Stav Grinshpon, a tech-industry veteran and former leading technical expert at 8200, an Israeli Intelligence Corps unit.

The company soon earned an early partner in Mastercard through the credit card company’s StartPath program. Mastercard asked Altshuler to answer queries normally reserved for data scientists, such as who is going to fly abroad soon, take out a loan, or increase credit card activity.

On a single flight from Tel Aviv, Israel, to New York City, Altshuler crunched billions of data points on financial transactions of 1 million card-holders and received accurate answers to 10 questions. Traditionally, data scientists would need to spend weeks, or months, cleaning the data and designing machine-learning models to answer each question individually. “It would have taken the company, say, two months to develop models to answer those questions. I did 10 on one transatlantic flight,” Altshuler says.

Companies may employ their own analytics-savvy staff to use Endor. Others will set up brief weekly meetings with Endor representatives to determine the best phrasing for questions. “It takes about five minutes to translate their English to what we call ‘Endor-ish,’ meaning the way our system can understand questions,” Altshuler says.

The startup’s webpage offers an example of results and a comparison with traditional machine-learning engines. A marketing department for a bank asks, “Who is going to get a mortgage in the next six months?” Machine-learning engines may detect a pool of, say, 5,000 customers who have a bank credit card and a high credit score, and are married — many of whom may be false positives. Endor detects more specific clusters of, say, couples about to get married or going through a divorce, founders who recently sold their startups to Facebook, or customers who recently graduated from a local real-estate course. Results from Endor offer far fewer false positives and surface far more additional potential customers, according to the startup.

Importantly, Altshuler says, Endor isn’t aimed at replacing data scientists; it’s designed as a tool to empower them. Data scientists, he says, are most familiar with their organization’s business semantics and can incorporate Endor into their workflow. By opening a “bottleneck” — where data input comes in faster than anyone can produce an output — Endor aims to help data scientists improve their companies. “Data scientists understand we can make them heroes,” Altshuler says.

Endor was recently awarded “Cool Vendor” status by Gartner, a designation reserved for industry disrupters, and was recognized as a “Technology Pioneer” by the World Economic Forum. As word spreads, Endor is gaining customers across the U.S., with its first customers in Europe and Latin America as well. “It’s exciting times,” Altshuler says.



from MIT News http://ift.tt/2BkbKc2

Street signs

Day after day in early 2011, massive crowds gathered in Cairo’s Tahrir Square, calling for the ouster of Egyptian President Hosni Mubarak. Away from the square, the protests had another effect, as a study co-authored by an MIT professor shows. The demonstrations lowered the stock market valuations of politically connected firms — and showed how much people thought a full democratic revolution was possible.

“When there’s street mobilization, you expect that the future will be different,” says MIT economist Daron Acemoglu, co-author of a paper detailing the results.

The study opens a keyhole into the hopes and fears of Egyptians at a time of great political uncertainty. After weeks of protest, caused in part by perceptions of government corruption, Mubarak resigned in February 2011, replaced by an interim military government. The moment passed, however. In June 2012, the Islamist leader Mohamed Morsi was elected president, only to be replaced by another phase of military rule, starting in July 2013. Military leader Abdel Fattah el-Sisi was then elected president in May 2014 with 97 percent of the vote.

Still, in the first half of 2011, an open democracy seemed conceivable — indeed, a democratic revolution was taking place in Tunisia — and that was reflected in market sentiment. In the nine days of market activity after Mubarak left power, the valuations of the firms most politically connected to his National Democratic Party (NDP) fell by 13 percent relative to other firms.

Moreover, the support for NDP-linked stocks was not shifted to firms linked to other power centers in Egyptian life, including the military or Morsi’s Muslim Brotherhood. Investors were, in part, devaluing the worth of political connections in the country.

“It’s not just redistribution of a given amount of spoils, but perhaps street mobilization is reducing what the market thinks the available spoils are,” Acemoglu says of investor activity in early 2011.

More specifically, Acemoglu adds, some investors thought politically connected firms would be “less capable of capturing rents,” the revenues flowing from noncompetitive business activity, and would have “less room for engaging in these corrupt activities.”

The study also shows a connection to protest-crowd size; an estimated one-day turnout of 500,000 protestors in Tahrir Square would lower the valuation of NDP-connected firms by 0.8 percent relative to other listed firms.

Among its other findings, the study sheds light on the much-discussed relationship between social media and the Arab Spring uprisings of 2011. In this case, the scholars also found that Twitter activity forecast the amount of street protest that would ensue. By itself, social media activity did not immediately affect stock market valuations, but by encouraging public demonstrations, it had an indirect effect.

The paper, “The Power of the Street: Evidence from Egypt’s Arab Spring,” is forthcoming in print in the Review of Financial Studies and currently appears online in advance form. The authors are Acemoglu, the Elizabeth and James Killian Professor of Economics at MIT; Tarek A. Hassan, an associate professor of economics at Boston University; and Ahmed Tahoun, an assistant professor of accounting at London Business School.

Taking stock of protests

To conduct the study, the researchers used stock-market data concerning 177 firms listed on the Egyptian stock exchange in early 2011, and examined daily closing prices for those firms between 2005 and 2013, as well as total firm assets and leverage (the amount of debt as a fraction of total assets).

Looking at board members and principal shareholders, Acemoglu and his colleagues divided the firms into four main groups: those with connections to the NDP, those with military connections, those with Muslim Brotherhood connections, and those unconnected to any of these groups.

The scholars also used published estimates of crowd sizes from the Tahrir Square demonstrations, and, to derive the conclusions about Twitter, they examined 311 million tweets from more than 300,000 Egyptian accounts between Jan. 1, 2011, and July 31, 2013.
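The core of such an event-study design can be sketched in a few lines: regress daily firm returns on an interaction between a “politically connected” indicator and that day’s protest turnout. The code below does this on synthetic data with a deliberately bare-bones specification; the paper’s actual regressions include firm-level controls, sector effects, and other refinements.

```python
import numpy as np

# Illustrative event-study regression on synthetic data: does protest turnout
# predict lower returns for politically connected firms? The data and the
# stripped-down specification are ours, not the paper's.

rng = np.random.default_rng(1)
n_firms, n_days = 177, 250
connected = (rng.random(n_firms) < 0.2).astype(float)     # 1 if NDP-connected
turnout = rng.choice([0.0, 0.05, 0.1, 0.5], size=n_days)  # protesters, in millions

# Simulated returns: -1.6% per million protesters for connected firms, so a
# 500,000-person day gives roughly the 0.8 percent relative drop quoted above.
beta_true = -0.016
returns = (0.0005 * rng.standard_normal((n_days, n_firms))
           + beta_true * np.outer(turnout, connected))

# Pooled OLS: return ~ intercept + connected + turnout + connected*turnout
y = returns.ravel()
conn = np.tile(connected, n_days)
tout = np.repeat(turnout, n_firms)
X = np.column_stack([np.ones_like(y), conn, tout, conn * tout])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"estimated interaction coefficient: {coef[3]:.4f} (simulated truth {beta_true})")
```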

In the paper, the researchers consider but largely rule out a couple of alternate explanations for stock market behavior during this time. One would be that Mubarak’s fall simply created instability which affected firms in varying ways. But the study controls for firm-level qualities and industrial sectors, and the devaluation effect was specific to NDP-aligned companies.

A second possible alternative is that the stock market was still expecting top-down control over the Egyptian government, but investors were simply altering their bets and identifying the next group of firms they expected to benefit from useful political connections. Acemoglu says that “is definitely a possibility” in theory, but as the paper notes, “there is no evidence of such offsetting shifts” in market investments.

To be clear, the 13 percent drop experienced by NDP-connected firms shows that many investors were not fazed by the protests, or at least did not expect the protests to lead to massive political changes. On the other hand, a significant portion did think that a ground-up, populist uprising could succeed — even if that ultimately proved not to be the case.

“It’s whoever the marginal investor is, and obviously the marginal investor was wrong,” Acemoglu says. “If you had perfect foresight, the day Mubarak fell, you would just be selling all the NDP stocks but buying all the military stocks.”

The social media moment

The study’s Twitter data suggest a slightly more subtle picture than some commentators described during the eventful days of 2011. Twitter activity did not lead to immediate stock market effects. On the other hand, protest hashtags did predict the occurrence of large demonstrations, and those protests subsequently moved the market.

“You can scream and shout whatever you want on social media, and it doesn’t [directly] change anything, but if social media acts as a vehicle for people organizing, then it might have an effect,” Acemoglu says.

Acemoglu, who pursues research projects in multiple areas of economics, is perhaps best-known for his work on the relationship between democratic institutions and economic growth, which is summarized in his 2012 book “Why Nations Fail” but remains very much an ongoing project.

At the same time, Acemoglu has conducted a wide-ranging series of studies analyzing and modeling political change in many countries, often with political scientist James Robinson of the University of Chicago. The paper on Egypt flows, in part, from that vein of research. It also builds on academic studies of other countries, such as a 2001 paper demonstrating that links to the Indonesian government accounted for about a quarter of the value of well-connected firms in that country during the 1990s.

Acemoglu, for one, says he does not anticipate a sea change in Egypt’s current system of government any time soon. The current stasis in the country’s politics makes it all the more useful, however, to consider how fluid the political situation appeared in real time as recently as six years ago.

“Looking at it from the vantage point of 2011, none of that was obvious, that Tunisia would go one way, Egypt would go another way,” Acemoglu says.



from MIT News http://ift.tt/2BQ4SCI

Campus lights bring holiday cheer

Tech Twinkles, a celebration of holiday lights on MIT’s campus, drew hundreds of students and community members to the Stratton Student Center on December 6. In its fourth year, the event featured hot apple cider, cupcakes, and performances by the Logarhythms, the Chorallaries, and comedy improv troupe Roadkill Buffet.

Tech Twinkles was started in 2014 by Veronika Jedryka ’17, Teresa de Figueiredo ’17, and Jane He ’15. The idea came about when they realized how early it gets dark outside during the winter. “We thought it would be great to add some kind of brightness to MIT’s campus and lift people’s spirits,” Jedryka explains, “especially during a tough time with finals and final projects.”

In its first three years, the event was a partnership between the founders and the Division of Student Life. After Jedryka and de Figueiredo graduated, the Undergraduate Association’s (UA) Events Committee volunteered to continue the tradition. This year, the UA added some furry fun to the proceedings with therapy dogs from MIT’s Puppy Lab, sponsored in part by the MindHandHeart Initiative.

Sophomore Christine You, UA Events Committee co-chair, explains, “For me, it’s a little sad when I’m done with my classes around 4 p.m. but the sun has already set and it’s very dark.”

This also marks the first year that MIT Facilities and Grounds took on the challenge of stringing thousands of lights on trees across campus. Sogna Scott, administrative assistant in Grounds Services, explains that it was a positive experience for the team “because it takes us out of the realm of doing our everyday work and it kind of gives us something to be a little more proud of.”

Chancellor Cynthia Barnhart, who has attended the event each year, was pleased with this year’s outcome. “What is truly amazing to me — and I get to witness it every day, it seems — is the students here at MIT and how much energy they have,” she says. “The organization is phenomenal and the commitment and excitement that students bring to it is just amazing.”

Tech Twinkles has become a welcome addition to December, encouraging students to take a break from preparing for finals and enjoy a few stress-free hours together. “It’s beginning to feel like winter and that means, for many of us, darkness,” Barnhart explains. “When you walk by those lights … they just make you happy.”



from MIT News http://ift.tt/2yYBg0N