Friday, September 30, 2016

A better way to assay

Arrays of microparticles are common in many materials science and bioengineering applications, but they can be tedious to use because of their limited capacity. Large-scale microparticle arrays (LSMAs) can make analysis more efficient and precise, allowing many items to be placed and studied at once. Unfortunately, today’s techniques for moving to a large-scale platform cannot simultaneously meet the requirements of scalability, precision, specificity, and versatility that would make LSMAs practical.

Researchers from MIT and the Massachusetts General Hospital (MGH) have developed a new technique using porous microwells that pushes the precision and scalability of LSMAs to a new extreme. This new method, described in the Sept. 5 issue of Nature Materials, uses fluid flow to guide tens of thousands of microparticles at once, pushing them into microwells as the fluid moves through small open pores at the bottom of the porous well arrays. The new LSMA technique sorts and arrays particles on the basis of their size, shape, or modulus. This sequential particle assembly allows for contiguous and nested particle arrangements, as well as particle recollection and pattern transfer.

“Today’s applications are increasingly complex; this new technique creates the most precise arrangement of particles, allowing for a more detailed and accurate array,” says Patrick Doyle, the Robert T. Haslam (1911) Professor of Chemical Engineering and Singapore Research Professor at MIT. “This technique opens the way to new applications, including the study of diseased cells and anti-counterfeiting practices.”

Led by Doyle and Daniel Irimia, associate director of the BioMEMS Resource Center at MGH, the team developed a porous microwell platform in which guided microparticles are inserted into congruent microwells, while geometrically mismatched particles are removed in a washing step. “Microwells have been used as an assembly template in the past, but they were useful only for single-particle arrangement,” says Irimia. “Scaling-up efforts resulted in particle arrangements with some degree of randomness. In our technique, controllable driving forces allow for the positioning of tens of thousands of particles with high specificity.”

Bioengineering applications

The ability to generate large arrays of cells is important for cell-screening applications, which aid immunology and the fight against cancer. For example, arranging cells in 2-D arrays for the study of cellular processes that progress over time has significant advantages over serial approaches, such as flow cytometry. Cells in a 2-D array can be analyzed more than once, and several cells can be imaged simultaneously. This creates a higher yield and helps to avoid potential differences between the first and last cell analyzed.

The team tested the performance of LSMA techniques by generating arrays of more than 10,000 mammalian glioma cells, which can cause brain tumors. An acceptable yield for each array took approximately 60 seconds, significantly faster than the previous method using passive cell settling in microwells, which requires between five and 40 minutes. “Because of the speed with which the cells are arranged, there is little change in the cells’ state,” says Doyle. “This gives us more time to observe how cells respond to drugs and disease.”

This research began when a team at MGH, led by Irimia, attempted a new way to arrange cells for analysis. “Understanding how neutrophils [the most abundant type of white blood cells in mammals] react to stimuli helps us to understand how inflammation starts and evolves inside the body,” he says. “This technique allows a level of complexity we’ve not had before; we can analyze the same cells repeatedly and therefore gather more information about their function and interactions in less time.”

Anti-counterfeiting applications

The team also demonstrated that this new technique is compatible with particle recollection and pattern transfer. To demonstrate the encoding/decoding capacity of LSMAs, the researchers generated a 2-D arrangement of nanocrystal-laden microparticles for use in anti-counterfeiting. These nanocrystals, developed by Doyle and his team specifically for use in anti-counterfeiting applications, glow when exposed to near-infrared light. They can be altered to emit any color, allowing for the creation of unique barcodes invisible to the naked eye.

Conventional printing of the microparticle barcodes resulted in limited precision and resolution. By using a prealigned microwell array, this new approach generates a high-resolution, multicomponent pattern. The pattern is then transferred to a target object, like a poker chip. In tests, an image of the transferred pattern was taken with an iPhone under near-infrared exposure and was successfully decoded within 10 seconds.

“This development can impact future precision medicine since the platform can be effectively applied in many precision high-throughput molecular diagnostics, single cell analysis, and other innovative quantitative cell biological experiments,” says Luke Lee, the Arnold and Barbara Silverman Distinguished Professor of Bioengineering, Electrical Engineering and Computer Science, and Biophysics at the University of California at Berkeley, who was not involved in the research. “Since smart microparticle technology with barcodes has great potential in life sciences and clinical applications, this team’s new solution for scalability is a great accomplishment for a large-scale automated precision biology and medicine.”

Other authors on the paper were Jae Jung Kim, Ki Wan Bong, and Eduardo Reátegui. The research was sponsored by grants from the National Science Foundation and the National Institutes of Health.



from MIT News http://ift.tt/2dx01fl

Thursday, September 29, 2016

Professor Emeritus Ali Javan, inventor of the first gas laser, dies at 89

MIT Professor Emeritus Ali Javan, the Institute's first Francis Wright Davis Professor of Physics and a trailblazer in the fields of laser technology and quantum electronics, died of natural causes in Los Angeles on Sept. 12 at the age of 89. In 1960, while working at Bell Laboratories, Javan invented the world’s first gas laser. The technology would be applied to telecommunications, internet data transmission, holography, bar-code scanners, medical devices, and more.

Javan came to MIT as an associate professor of physics in 1961, and founded the nation’s first large-scale research center in laser technology. Javan also developed the first method for accurately measuring the speed of light and launched the field of high-resolution laser spectroscopy. 

“In the 1960s and 1970s, Professor Javan's laser group at MIT was a hotbed of innovation and advances in amazingly broad areas in laser physics,” said Irving P. Herman PhD '77, who studied with Javan and is currently the Edwin Howard Armstrong Professor of Applied Physics at Columbia University. “His group was key to understanding the fundamentals of the interactions of lasers with matter, and in implementing them. He will be remembered by his many students and colleagues as a brilliant man, a pioneer, an inspiring man, and a kind and dear man.”

From Tehran to New York City

Ali Javan was born in Tehran, Iran, in 1926, and came to the United States in 1949 to study and work at Columbia University with Nobel Prize-winning physicist Charles H. Townes. Despite never having received a bachelor’s or a master’s degree, Javan earned his PhD in physics at Columbia in 1954, with Townes serving as his thesis advisor.

While at Columbia, Javan also studied music, continuing a lifelong passion for the arts that he often connected to his groundbreaking scientific work. “Physics and music — you find the same spirit in both of them,” Javan once wrote. “It just manifests itself in different directions. There’s something immensely beautiful about physics, even though it’s very difficult. Take the atom — a single atom is absolutely gorgeous. Ask anybody in physics.”

Making history: The first gas laser

In 1958, Javan developed the working principle of the first gas-discharge helium-neon laser. Over the following two years, he worked at Bell Laboratories to build it, along with colleague William Bennett.

“The first laser, the ruby laser by Ted Maiman, used optical pumping to create the population inversion necessary to achieve lasing,” Herman notes. “At the time this was difficult and not applicable to all systems. Javan was able to see how a population inversion can be created in a gas discharge by selective, resonant energy. This was key to his invention of the first gas laser, the He-Ne laser, which was also the first continuous wave laser.”

Javan’s breakthrough came on Dec. 12, 1960, after a snowstorm had forced an early closure of the Murray Hill, New Jersey-based Bell Labs. At 4:20 p.m. that day (Javan checked his watch), for the first time in history, a continuous laser light beam emanated from a gas laser apparatus. As Javan later described it, he “drove the design into its self-sustained oscillation mode. Emanating at its output, for this very first-time ever, a continuous-wave (CW), collimated light beam, at a color purity as it proved to the limits that the law of nature will permit.”

On Dec. 13, 1960, Javan and his Bell Labs colleagues used the laser light beam to place a telephone call, the first time in history that a laser beam had been used to transmit a telephone conversation.

Joining the MIT community

Javan was already an internationally acclaimed scientist when he came to MIT in 1961. He would spend the next four decades working to drive advances in atomic, molecular, and optical physics. From 1978 to 1996, he was the first Francis Wright Davis Professor of Physics, and he was emeritus professor of physics from 1996 until his death.

Javan sought to be at the scientific forefront, making the next important advance. He once told an interviewer why he worked so tirelessly to answer difficult and diverse scientific questions: “There is something very beautiful at the end of the line that you're looking for. There's an aesthetic element.” Lila Javan, his daughter, says: “He always wanted to break new ground. For example, he was working very hard on nanotechnology at the end of his career.”

Javan was the recipient of numerous awards. In 1993, he was presented the Albert Einstein World Medal of Science in recognition of “his more than 30 years of research into the physics of lasers.” In 2006, he was inducted into the National Inventors Hall of Fame. Javan’s original 1960 helium-neon laser device is currently on display at the Smithsonian Institution’s National Museum of American History.

A passionate, inspiring teacher

Javan was a passionate teacher who developed lifelong bonds with generations of students, sharing his passion not only for science but also for music and the arts. He wanted students to be well-rounded individuals conversant in more than just physics. As Javan’s former student and colleague Said Nazemi PhD '81, who helped him found Laser Science, Inc., recalls, “I spent a lot of personal time with him and his family, and knew him not just as a great teacher but as someone with a big sense of humor who also loved classical music and gourmet cooking.”

Another of Javan’s associates was Ramachandra Dasari, the associate director of MIT’s George R. Harrison Spectroscopy Laboratory, who first came to MIT in 1966. Javan helped shape his entire career, says Dasari: “I became a new person in science because Javan taught me about lasers.” Dasari fondly remembers Javan’s enthusiastic, hands-on approach to pedagogy: “He used to come into the lab often and see what his students were doing with their experiments. He liked to prod and touch things, which made students so nervous, but he just couldn’t help himself.”

Javan helped Dasari gain the financial resources to bring his Indian family to the United States in the late 1960s. Dasari notes that he’d been working at MIT for $8 per day (paid by the U.S. Agency for International Development) when Javan helped him obtain a visiting scientist position at MIT that paid $8,000 per year.

Dasari recalls one memorable, late-night interaction with Javan. They were seeking to measure a laser’s frequency, something that hadn’t been done before, and they’d been working at the lab for about a week. “I was doing the experiment, and finally it succeeded around midnight,” says Dasari, noting that Javan was resting at home. “I thought to myself, well it’s midnight and I shouldn’t call him at home, but I called him anyway. Lo and behold, he came to the lab at 3 a.m. because he was so excited and wanted to see it for himself.”

Javan’s daughter Maia recalls, “He found MIT, and the community of students around him, to be the perfect place for him to grow and flourish. He loved teaching his postdocs, and treated them like part of our family. They’d often have dinner over our house, and then go back to work at the lab. Sharing food and laughter, and enjoying life, that was so important to him.”

A doting father

Javan’s two daughters, Maia and Lila, remember their father as someone with a wide-ranging passion for life, someone brimming with enthusiasms, including science, music, museums, the outdoors, fine food, and more. Javan loved to ride around Cambridge on his bicycle, his daughter Lila Javan recalls, often stopping to buy flowers or chocolate to bring back to his family. “He was a supportive, fun father who was also a great teacher,” she says. “He loved to bring us to his lab, to ‘turn the knobs’ as he liked to say, having us there among his students, and sharing in the fun.”

He could be “extraordinarily absent-minded” at times, explains Javan’s daughter Maia: “His mind was always engaged — he loved to think expansively. On many occasions, dad would drive the family car to work at MIT in the morning and then, lost in thought, walk home in the evening, which would take him about 45 minutes, forgetting that he’d left the car parked at work. So we’d remind him, and send him back to MIT on his bicycle to bring the car back home.”

Final days: Family, music, and physics

During his final days, Javan was surrounded by family and friends in Los Angeles, spending his time “very peacefully,” says daughter Lila. “He was listening to Mahler and Mozart, two of his favorite composers, and having family members read to him from physics journals,” she says.

Ali Javan is survived by his daughters Maia and Lila, his grandchildren Valerik and Riva Perelman, and the mother of his children, Marjorie Javan.



from MIT News http://ift.tt/2dIarHL

Making smarter decisions about classroom technologies

In the 21st century, the proliferation of digital media and technology has fundamentally changed the way we learn. More than ever, children carry computers in their pockets, and ever-expanding internet connectivity promises to reach even the most remote classrooms, putting a wealth of information at students’ fingertips. And there are growing demands from parents, educators, governments, and donors to incorporate educational technologies into children’s core curricula.

But how does a teacher or administrator decide which technology is a good fit for their classroom? And especially in a global development context, how does a donor know that an investment in technology is the right approach to ensure learning outcomes — that a donation of tablets won’t end up in the corner after a year, collecting dust?

To answer these kinds of common, but challenging questions, MIT researchers have just launched a new decision-making tool for teachers, administrators, governments, global development practitioners, and other stakeholders trying to make smart decisions about incorporating technology in the classroom.

The tool, “A Framework for Evaluating Appropriateness of Educational Technology Use in Global Development Programs,” is an initiative of the Comprehensive Initiative on Technology Evaluation (CITE), a program supported by the U.S. Agency for International Development (USAID) and led by a multidisciplinary team of faculty, staff, and students at MIT. Launched at MIT in 2012, CITE is a pioneering program dedicated to developing methods for product evaluation in global development.

The framework seeks to help stakeholders explore how well a particular technology may fit their educational context by posing straightforward questions such as: “Does the technology create a burden of extra management for the teacher?” and “Is there evidence that use of this technology aids learning? Is this evidence generalizable to your context?” Questions fall into eight categories: teachers; students; culture; sustainability; community, social, and political; learning; infrastructure; and scalability and market impact.

The framework was developed following an extensive literature review by MIT researchers, and then tested in India by CITE’s partners at the Indian Institute of Management Ahmedabad, who looked at the deployment of English-language learning technologies by NGO and government initiatives.

Why educational technologies? 

Despite the enthusiasm and promise of emerging technologies for education, there are myriad reasons technologies can fail that have little to do with the technology itself. Variables such as school funding, teacher preparedness, educational philosophy, and technical infrastructure play a major role in determining whether or not, for example, English-language learning software actually helps children learn English.

“In doing this research, you realize how often adoption of educational technology is done without much forethought,” says Scot Osterweil, CITE Educational Technology Evaluation lead and creative director of the MIT Education Arcade. “Frequently, the decision to use a particular technology is based on who can make the most appealing sales pitch to the buyer, who will not be the user. Then, [the technology] ends up in a classroom where people haven’t prepared for it. There’s always a need to improve the process, even more so in developing countries where these technologies are new.”

MIT research assistant and PhD student Jennifer Groff adds that it's easy to see the positives about a new technology without thinking through potential challenges.

“Naturally, we get excited by the opportunity of something new,” Jennifer explains. “On a deeper level, we have to pause and ask ourselves what the challenges are to implementing something like this meaningfully. The framework is a tool that we hope helps people think through all of the facets of incorporating something new.”

Testing the framework

To test the framework, CITE’s research partners at the Indian Institute of Management Ahmedabad conducted a pilot study that entailed semi-structured interviews and group discussions with various stakeholders at sites where the technologies under study were being deployed. The schools involved in the study varied from the most basic to the most modern, ranging from a rural village school in Uttar Pradesh without a building or dedicated classroom to a series of computer labs in public schools and community centers in Mumbai run by an influential NGO, the Pratham Education Foundation.

“One of the most interesting things that became apparent during our fieldwork was the role of continual intermediation by the developers at the sites of implementation,” said local research lead Ankur Sarin, a professor at the Indian Institute of Management Ahmedabad. “Technology still remains an external intervention, and facilitators play a critical role in determining the efficacy with which the education technology comes to be used. Based on what we learned, the framework was then adapted to account for the role, background, and motivations of the facilitators.”

Putting the framework to use

The tool is designed to be useful for many different kinds of stakeholders who work with educational technologies, including developers, adopters, and funders of new technologies. It can be used before the adoption of an intervention, or as an assessment of an intervention as it is being deployed.

The next phase of work for this framework will be turning it into an online, interactive version that would guide the user step-by-step through a series of questions to identify the potential challenges around deploying a certain technology in the classroom.
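
As a sketch of how such a step-by-step walkthrough might be organized, the Python snippet below groups questions by the framework's eight categories and prompts the user through them. The category names and the two sample questions come from the framework as described above; the code structure, prompts, and summary output are invented for illustration.

    # Hypothetical sketch of an interactive walkthrough of the framework.
    # Category names and the two sample questions follow the description above;
    # everything else (structure, prompts, summary) is illustrative only.

    FRAMEWORK = {
        "Teachers": [
            "Does the technology create a burden of extra management for the teacher?",
        ],
        "Learning": [
            "Is there evidence that use of this technology aids learning?",
            "Is this evidence generalizable to your context?",
        ],
        # Remaining categories from the framework (sample questions omitted here):
        "Students": [], "Culture": [], "Sustainability": [],
        "Community, social, and political": [], "Infrastructure": [],
        "Scalability and market impact": [],
    }

    def walk_through(framework):
        """Prompt the user category by category and record free-text answers."""
        notes = {}
        for category, questions in framework.items():
            notes[category] = [(q, input(f"[{category}] {q}\n> ")) for q in questions]
        return notes

    if __name__ == "__main__":
        for category, answered in walk_through(FRAMEWORK).items():
            for question, answer in answered:
                print(f"{category}: {question}\n  -> {answer or '(no answer)'}")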

“We would also like to create a knowledge network of people in different developing countries interested in using the framework who could help us refine it,” Scot says. “We tested our framework in India, but could learn other things in Latin America, Africa, or other parts of Asia if we identify partners elsewhere who could play a role in supporting or advancing the framework.”

In addition, the researchers hope illuminating the complexities of deploying an educational technology in a developing country setting will add to the existing literature in a meaningful way.

“There is a significant body of literature on educational technology and factors for change in developed contexts,” Jennifer explains. “And sometimes in developing countries, there’s an assumption that a technology is just better than what already exists, so a school is pressured to adopt it without question. But many of the same barriers to change exist in these classrooms. It’s still change management, which isn’t just about bits and bytes; it’s about the way of delivering teaching and learning being much, much different.”

“As someone who’s worked in this area for 25 years, I know that a class could be successful without any technology,” Scot says. “You should only be using technology when you’ve identified a goal it can help you achieve. And we hope to help stakeholders make smart decisions from the very beginning with this new tool.”  

CITE’s research is funded by the USAID U.S. Global Development Lab. CITE is led by principal investigator Bishwapriya Sanyal of MIT’s Department of Urban Studies and Planning, and supported by MIT faculty and staff from D-Lab, Priscilla King Gray Public Service Center, Sociotechnical Systems Research Center, the Center for Transportation and Logistics, School of Engineering, and Sloan School of Management.

In addition to Osterweil and Groff, co-authors on the report include Eric Klopfer, Ankur Sarin, Prateek Shah, Stacey Allen, Sai Priya Kodidala, and Ilana Schoenfeld. CITE conducted its research in partnership with the Indian Institute of Management Ahmedabad.



from MIT News http://ift.tt/2dhwjrb

Engaging industry in addressing climate change

The mission of the Oil and Gas Climate Initiative (OGCI), a two-year-old organization made up of 10 major oil and gas companies, is one not commonly associated with the industry: to catalyze practical action to reduce greenhouse gas emissions. Intent on confronting the challenge of climate change head-on, the OGCI committed last October to supporting the Paris climate agreement’s target of capping the rise in mean global surface temperature since preindustrial times at 2 degrees Celsius by 2100, and has been working ever since to develop a plan for the industry to help advance this objective.

To that end, the OGCI held its second Low Emission Roadmap Roundtable on Sept. 23 at the World Economic Forum in New York City. During the three-hour event, timed to coincide with the city’s annual Climate Week, the OGCI sought input from stakeholders as it develops practical steps for reducing the industry’s emissions. While the formal membership of the OGCI represents about 20 percent of global oil and gas production, invitations were open to a broader group of industry and environmental non-governmental organizations. To bring academic rigor to the discussion of how to help reduce greenhouse gas emissions in alignment with the 2 C goal, the OGCI partnered in the roundtable with the MIT Joint Program on the Science and Policy of Global Change.

Early in the event, Joint Program Co-Director John Reilly previewed a set of scenarios illustrating the significant transformation of the global energy system that’s needed, and various technology paths that could be pursued, to achieve the 2 C goal. These scenarios were released on Sept. 28 in the Joint Program’s 2016 Food, Water, Energy and Climate Outlook.

“A key to understanding steps that industry can take to reduce greenhouse gas emissions are estimates of emissions pathways consistent with stabilization of greenhouse gases in the atmosphere,” said Reilly, who is also a senior lecturer at the Sloan School of Management. “Our contribution to the discussion was to develop illustrative pathways and to suggest the potential role of a variety of low-carbon technologies in enabling deep emissions cuts, emphasizing that uncertainty in climate response and in technology development call for a risk-based planning process.”

Joint Program Co-Director Emeritus Henry D. Jacoby, the William F. Pounds Professor of Management (Emeritus) at the MIT Sloan School of Management, who moderated the roundtable, added, “There is a growing effort in the international negotiations to involve non-state actors in managing climate risk, and this OGCI effort is a timely response.” To that end, the roundtable included opening presentations by Vidar Helgesen, Norway’s minister of climate and environment; two executives of OGCI companies, Bjorn Otto Sverdrup of Statoil and Valérie Quiniou-Ramus of Total; and Granville Martin of JPMorgan Chase. The event covered both the intentions and plans of the OGCI member companies and active discussion of ways they could make their most effective contribution to the goals of the Paris Agreement.

The OGCI chose to partner with the MIT Joint Program because it is “one of the key research centers that’s driving the thinking around the energy transition, with strong technical capabilities,” says OGCI Executive Board Chair Gerard Moutet. “The MIT Joint Program’s input at the Low Emission Roadmap Roundtable, along with that of OGCI stakeholders, helps us to determine what oil and gas companies should focus on to enable an efficient energy transition to zero net emissions,” said Moutet. 

Informed by the Joint Program’s analysis that a sharp turn in the current direction of the energy system is needed, the discussion centered on how fast the system could respond; whether a technological revolution was already underway that would sweep aside the fossil energy industry; the need to protect and expand forest carbon sinks; and prospects for global carbon pricing, which many in attendance deemed essential to providing enough incentive for investment in low- and zero-carbon sources of energy. Practical steps the OGCI is considering to reduce greenhouse gas emissions include improving the energy efficiency of its members’ operations and products, developing carbon capture and storage, reducing carbon dioxide and methane emissions by utilizing gas instead of flaring it, and developing and deploying new low-carbon energy technologies.

Viewing private sector engagement as essential to solving the climate problem, the Joint Program has for 25 years engaged with industry, including OGCI member companies, on comprehensive studies of the impact of the changing climate, and the need to transform the energy system.

“Through its comprehensive modeling and analysis, the MIT Joint Program provided us with useful insights into the nature and timeline of changes that will be needed to transition to a lower-carbon energy system,” said Charlotte Wolff-Bye, host and organizer of the roundtable; co-chair of the OGCI Low Emission Roadmap work stream; and vice president for sustainability at Statoil.



from MIT News http://ift.tt/2do2lUl

Scientists identify neurons devoted to social memory

Mice have brain cells that are dedicated to storing memories of other mice, according to a new study from MIT neuroscientists. These cells, found in a region of the hippocampus known as the ventral CA1, store “social memories” that help shape the mice’s behavior toward each other.

The researchers also showed that they can suppress or stimulate these memories by using a technique known as optogenetics to manipulate the cells that carry these memory traces, or engrams.

“You can change the perception and the behavior of the test mouse by either inhibiting or activating the ventral CA1 cells,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience and director of the RIKEN-MIT Center for Neural Circuit Genetics at the Picower Institute for Learning and Memory.

Tonegawa is the senior author of the study, which appears in the Sept. 29 online edition of Science. MIT postdoc Teruhiro Okuyama is the paper’s lead author.

Tracking social memory

In a well-known study published in 2005, researchers at Caltech identified neurons in the human brain that respond specifically to images of celebrities such as Halle Berry or Brad Pitt, leading them to conclude that the brain has cells devoted to storing memories of people who are familiar.

Many of these cells were found in and around the hippocampus, which is also where the brain stores memories of events, known as episodic memories. The MIT team suspected that in mice, social memories may be stored in the hippocampus’ ventral CA1, in part because previous studies have suggested that this region is not involved in storing episodic memories.

The researchers set out to test this hypothesis using optogenetics: By engineering neurons of the ventral CA1 to express light-sensitive proteins, they could artificially activate or inhibit these cells by shining light on them as the mice interacted with each other.

First, the researchers allowed one mouse, known as the “test mouse,” to spend time with another mouse for two hours, letting the mice become familiar with each other. Soon after, the test mouse was placed in a cage with the familiar mouse and a new mouse.

Under normal circumstances, mice prefer to interact with mice they haven’t seen before. However, when the researchers used light to shut off a circuit that connects the ventral CA1 to another part of the brain called the nucleus accumbens, the test mouse interacted with both of the other mice equally, because its memory of the familiar mouse was blocked.

“The inhibition of ventral CA1 leads to impairment of the social memory,” Okuyama says. “They cannot show any preference for the novel mouse. They approach both mice equally.”

On the other hand, when the researchers stimulated ventral CA1 cells while the test mouse was interacting with a novel mouse, the test mouse began to treat the novel mouse as if they were already acquainted.

This effect was specific to social interactions: Interfering with the ventral CA1 did not have any effect on the mice’s ability to recognize objects or locations that they had previously seen.

Re-awakening memories

When the researchers monitored activity of neurons in the ventral CA1, they found that after a mouse was familiarized with another mouse, a certain population of these neurons began to respond specifically to the familiar mouse.

These patterns could be seen even after the mice appeared to “forget” the once-familiar mice. After about 24 hours of separation, the test mice began to treat their former acquaintances as strangers, but the neurons that had been tuned to the familiar mice still fired, although not as frequently. This suggests that the memories are still being stored even though the test mice no longer appear to remember the mice they once knew.

Furthermore, the researchers were able to “re-awaken” these memories using optogenetics. In one experiment, when the test mouse first interacted with another mouse, the researchers used a light-sensitive protein called channelrhodopsin to tag only the ventral CA1 cells that were turned on by the familiarization treatment. When these neurons were re-activated with light 24 hours later, the memory of the once-familiar mouse returned. The researchers were also able to artificially link the memory of the familiar mouse with a positive or negative emotion.  

Tonegawa’s lab has previously used this technique to identify hippocampal cells that store engrams representing episodic memories. The new study offers strong evidence that memory traces for specific individuals are being stored in the neurons of the ventral CA1, Tonegawa says. “There is some kind of persistent change that takes place in those cells as long as memory is still detectable,” he says.

Larry Young, a professor of psychiatry and director of the Center for Translational Social Neuroscience at Emory University, described the study as “one of the most fascinating papers related to social neuroscience I’ve ever seen.”

“In this paper, they identified a subset of cells in a particular brain region that is the engram — a set of cells that through its connections in the nucleus accumbens, actually holds the memory of another individual,” says Young, who was not involved in the study. “They showed that the same group of neurons fired repeatedly in response to the same animal, which is absolutely incredible. Then to go in and control those specific cells is really on the cutting edge.”

The MIT researchers are now investigating a possible link between social memory and autism. Some people with autism have a mutation of the receptor for a hormone called oxytocin, which is abundant on the surface of ventral CA1 cells. Tonegawa’s lab hopes to uncover whether these mutations might impair social interactions.

The research was funded by the RIKEN Brain Science Institute, the Howard Hughes Medical Institute, the JPB Foundation, and the Japan Society for the Promotion of Science.



from MIT News http://ift.tt/2dnYd6P

Collaborating with community colleges to innovate educational technology

Nearly one in every two undergraduates in the United States attends community college. Serving a large and diverse pool of students, community colleges are critical in bridging the job-skills gap, in empowering students to transition to four-year institutions, and in enabling opportunities for non-traditional pathways. But community colleges face an important hurdle: How can they scale so as to offer high-quality, affordable education to growing student numbers?

Now, researchers from MIT’s Department of Aeronautics and Astronautics, Office of Digital Learning, and Teaching and Learning Lab are collaborating with community colleges to develop innovative educational technology that tackles this issue.

A need for on-demand resources

The question of scaling up is one faced by Arapahoe Community College (ACC), located in the urban-metro area of Denver, Colorado, with an enrollment of more than 10,000 students. The average ACC student juggles taking classes with working at least one part-time job, and many ACC students are considered to be at higher risk of failing or dropping out of school.

Jose Albareda is such a student.* He graduated from high school six years ago and, after years of working different jobs, now hopes to earn a four-year degree. To do so, he must pass College Algebra, a course that is standardized across the state of Colorado and guaranteed for transfer to a four-year institution. This semester, he is enrolled in a section of College Algebra. Five weeks into the term, he is feeling anxious. He feels rusty on math, and with every assignment he slips a little further behind. He tries to attend office hours when they do not conflict with his part-time job, but wishes he had more immediate help during his homework time in the evenings.

This case highlights key issues of access and student completion. Even though the community college provides out-of-class resources, it cannot provide on-demand resources to meet the diverse needs of every student. And failing algebra is often the biggest hurdle for students, regardless of their ultimate degree goals.

Heidi Barrett and Danielle Staples are instructors of the College Algebra course at ACC. Typically in a semester, Staples teaches four classes, each with 25-30 students, totaling up to 120 students. Barrett, echoing faculty across community colleges, says: “It is very difficult to keep tabs on every individual student, to give the student the individualized attention they need, when every week you have a few hundred assignments to look at, to punch in grades for, to keep track of.” 

Fine-tuning educational needs in real-time

Enter Fly-by-Wire.

A blended-learning technology developed at MIT, Fly-by-Wire draws inspiration from aerospace engineering and artificial intelligence. There are two components: the Fly-by-Wire Student App adapts to a student’s needs in real time, serving up dynamic formative assessments that scaffold the student toward mastery, while the Fly-by-Wire Instructor App analyzes the resulting data and makes recommendations that let the instructor fine-tune instruction in real time.

Project lead Karen Willcox, professor of aeronautics and astronautics at MIT, explains: “An instructor using Fly-by-Wire in-class is like a pilot using a computer to help fly a plane. Just as a pilot cannot keep track of hundreds of flashing sensors, an instructor cannot keep tabs on hundreds of students learning the material in different ways and at different rates. This is a big challenge for all faculty members — not just in community colleges, but also for us at MIT. But just as digital fly-by-wire systems have revolutionized how human pilots fly modern aircraft, we believe that digital technologies open the door to personalized instruction at scale in the modern classroom.”

How it works

At the start of a recent College Algebra class, Barrett tapped on the Fly-by-Wire Instructor App and saw which questions students got wrong on the last homework, and why they got them wrong. She reviewed the underlying misunderstanding and tapped her screen to call up another question to pose to the class. At the end of class, she assigned six questions as homework on the Fly-by-Wire Student App.

In the evening after work, Albareda sat down to complete his homework on the student app. He got stuck on the first problem: solving a quadratic equation. But instead of giving up or hopelessly flipping through his notes, he got specific feedback from the Fly-by-Wire app on why his answer was wrong. The app then gave him an easier formative assessment question targeting his misunderstanding. After six such adaptive questions, he correctly answered the original question.
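
The back-and-forth Albareda describes follows a simple pattern: when an answer is wrong, serve an easier question that targets the specific misconception, then return to the original question once that gap is closed. The Python sketch below illustrates only that general idea; it is not the Fly-by-Wire implementation, and the Question structure, the example items, and the answer checks are all invented for illustration.

    # Hypothetical sketch of an adaptive scaffolding loop like the one described
    # above. Not the Fly-by-Wire implementation: the Question structure, the
    # prerequisite link, and the answer checking are invented for illustration.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Question:
        prompt: str
        check: Callable[[str], bool]           # returns True if an answer is correct
        scaffold: Optional["Question"] = None  # easier question targeting the likely misconception

    def ask(question: Question, answer_fn: Callable[[str], str], max_attempts: int = 5) -> bool:
        """Ask a question; after a wrong answer, remediate with its scaffold
        question, then re-ask the original. Returns True once answered correctly."""
        for _ in range(max_attempts):
            if question.check(answer_fn(question.prompt)):
                return True
            if question.scaffold is not None:
                ask(question.scaffold, answer_fn, max_attempts)  # drop down to the easier skill
        return False

    # Example: a quadratic-equation item scaffolded by a factoring item.
    factoring = Question("Factor x^2 - 5x + 6.",
                         lambda a: a.replace(" ", "") in {"(x-2)(x-3)", "(x-3)(x-2)"})
    quadratic = Question("Solve x^2 - 5x + 6 = 0. Give both roots, separated by a comma.",
                         lambda a: set(a.replace(" ", "").split(",")) == {"2", "3"},
                         scaffold=factoring)

    if __name__ == "__main__":
        ask(quadratic, answer_fn=input)  # interactive demo at the console

In a classroom system, the record of which scaffolds each student needed is the kind of data an instructor-facing app could then summarize for the teacher.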

The next day, he approached Barrett and showed her the path he had taken through the Fly-by-Wire app.

“This is really cool. I really like that it tells me why I got this wrong and what I need to work on. It’s like the teacher is next to me,” Albareda said enthusiastically during a user experience interview. “But I don’t like that it doesn’t give me anything for going through this [scaffold].”

The Fly-by-Wire team is eager for such comments. Willcox emphasizes, “This is a collaborative, iterative, and agile design effort with the community colleges; user needs drive the design of Fly-by-Wire technology.”

According to Sanjay Sarma, MIT’s vice president for open learning, the collaboration between MIT and Arapahoe Community College “allows us to work closely with teachers to develop digitally-enabled education technologies that increase the likelihood of improved learning outcomes for all. We may be able to provide new ways to personalize learning to fit students’ individual needs.”

The Fly-by-Wire project is progressing rapidly thanks to funding from the U.S. Department of Education's Fund for the Improvement of Postsecondary Education First in the World program. In October, the team plans to launch a pre-pilot, during which students in a section of College Algebra at Arapahoe Community College use the student app to complete a single assignment. In November, faculty from Arapahoe and Quinsigamond Community Colleges will visit MIT to participate in a Fly-by-Wire tech workshop. Next spring, the team plans to pilot in four class sections, and in fall 2017 will conduct a randomized control trial.

*Real name and specific details have been changed.



from MIT News http://ift.tt/2dau8cS

Algorithm could enable visible-light-based imaging for medical devices, autonomous vehicles

MIT researchers have developed a technique for recovering visual information from light that has scattered because of interactions with the environment — such as passing through human tissue.

The technique could lead to medical-imaging systems that use visible light, which carries much more information than X-rays or ultrasound waves, or to computer vision systems that work in fog or drizzle. The development of such vision systems has been a major obstacle to self-driving cars.

In experiments, the researchers fired a laser beam through a “mask” — a thick sheet of plastic with slits cut through it in a certain configuration, such as the letter A — and then through a 1.5-centimeter “tissue phantom,” a slab of material designed to mimic the optical properties of human tissue for purposes of calibrating imaging systems. Light scattered by the tissue phantom was then collected by a high-speed camera, which could measure the light’s time of arrival.

From that information, the researchers’ algorithms were able to reconstruct an accurate image of the pattern cut into the mask.

“The reason our eyes are sensitive only in this narrow part of the spectrum is because this is where light and matter interact most,” says Guy Satat, a graduate student at the MIT Media Lab and first author on the new paper. “This is why X-ray is able to go inside the body, because there is very little interaction. That’s why it can’t distinguish between different types of tissue, or see bleeding, or see oxygenated or deoxygenated blood.”

The imaging technique’s potential applications in automotive sensing may be even more compelling than those in medical imaging, however. Many experimental algorithms for guiding autonomous vehicles are highly reliable under good illumination, but they fall apart completely in fog or drizzle; computer vision systems misinterpret the scattered light as having reflected off of objects that don’t exist. The new technique could address that problem.

Satat’s coauthors on the new paper, published today in Scientific Reports, are three other members of the Media Lab’s Camera Culture group: Ramesh Raskar, the group’s leader, Satat’s thesis advisor, and an associate professor of media arts and sciences; Barmak Heshmat, a research scientist; and Dan Raviv, a postdoc.

Expanding circles

Like many of the Camera Culture group’s projects, the new system relies on a pulsed laser that emits ultrashort bursts of light, and a high-speed camera that can distinguish the arrival times of different groups of photons, or light particles. When a light burst reaches a scattering medium, such as a tissue phantom, some photons pass through unmolested; some are only slightly deflected from a straight path; and some bounce around inside the medium for a comparatively long time. The first photons to arrive at the sensor have thus undergone the least scattering; the last to arrive have undergone the most.

Where previous techniques have attempted to reconstruct images using only those first, unscattered photons, the MIT researchers’ technique uses the entire optical signal. Hence its name: all-photons imaging.

The data captured by the camera can be thought of as a movie — a two-dimensional image that changes over time. To get a sense of how all-photons imaging works, suppose that light arrives at the camera from only one point in the visual field. The first photons to reach the camera pass through the scattering medium unimpeded: They show up as just a single illuminated pixel in the first frame of the movie.

The next photons to arrive have undergone slightly more scattering, so in the second frame of the video, they show up as a small circle centered on the single pixel from the first frame. With each successive frame, the circle expands in diameter, until the final frame just shows a general, hazy light.

The problem, of course, is that in practice the camera is registering light from many points in the visual field, whose expanding circles overlap. The job of the researchers’ algorithm is to sort out which photons illuminating which pixels of the image originated where.

Cascading probabilities

The first step is to determine how the overall intensity of the image changes in time. This provides an estimate of how much scattering the light has undergone: If the intensity spikes quickly and tails off quickly, the light hasn’t been scattered much. If the intensity increases slowly and tails off slowly, it has.

On the basis of that estimate, the algorithm considers each pixel of each successive frame and calculates the probability that it corresponds to any given point in the visual field. Then it goes back to the first frame of video and, using the probabilistic model it has just constructed, predicts what the next frame of video will look like. With each successive frame, it compares its prediction to the actual camera measurement and adjusts its model accordingly. Finally, using the final version of the model, it deduces the pattern of light most likely to have produced the sequence of measurements the camera made.
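
To make that loop concrete, the Python sketch below treats scattering as a Gaussian blur whose width grows with photon arrival time and applies a simple multiplicative (Richardson-Lucy-style) correction on each pass. This is an assumed stand-in for illustration only, not the researchers' forward model or reconstruction algorithm.

    # Illustrative sketch of the predict/compare/update structure described above.
    # A Gaussian blur that widens with arrival time stands in for the scattering
    # model; the update is a simple multiplicative correction. Not the paper's method.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def reconstruct(frames, widths, n_iter=50, eps=1e-9):
        """
        frames: (T, H, W) array of time-resolved intensity measurements.
        widths: length-T sequence of blur widths, one per frame, estimated from
                how the overall intensity rises and decays over time.
        Returns an (H, W) estimate of the unscattered light pattern.
        """
        estimate = np.full(frames.shape[1:], frames.mean())
        for _ in range(n_iter):
            correction = np.zeros_like(estimate)
            for frame, sigma in zip(frames, widths):
                predicted = gaussian_filter(estimate, sigma)  # predict this frame
                ratio = frame / (predicted + eps)             # compare with the measurement
                correction += gaussian_filter(ratio, sigma)   # spread the mismatch back
            estimate *= correction / len(frames)              # adjust the model
        return estimate

Because later frames are blurred more heavily, every photon contributes to the estimate rather than only the earliest, unscattered arrivals, which is the idea behind all-photons imaging.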

One limitation of the current version of the system is that the light emitter and the camera are on opposite sides of the scattering medium. That limits its applicability for medical imaging, although Satat believes that it should be possible to use fluorescent particles known as fluorophores, which can be injected into the bloodstream and are already used in medical imaging, as a light source. And fog scatters light much less than human tissue does, so reflected light from laser pulses fired into the environment could be good enough for automotive sensing.

“People have been using what is known as time gating, the idea that photons not only have intensity but also time-of-arrival information and that if you gate for a particular time of arrival you get photons with certain specific path lengths and therefore [come] from a certain specific depth in the object,” says Ashok Veeraraghavan, an assistant professor of electrical and computer engineering at Rice University. “This paper is taking that concept one level further and saying that even the photons that arrive at slightly different times contribute some spatial information.”

“Looking through scattering media is a problem that’s of large consequence,” he adds. But he cautions that the new paper does not entirely solve it. “There’s maybe one barrier that’s been crossed, but there are maybe three more barriers that need to be crossed before this becomes practical,” he says.



from MIT News http://ift.tt/2cYAXc4

Wednesday, September 28, 2016

Nanosensors could help determine tumors’ ability to remodel tissue

MIT researchers have designed nanosensors that can profile tumors and may yield insight into how they will respond to certain therapies. The system is based on levels of enzymes called proteases, which cancer cells use to remodel their surroundings.

Once adapted for humans, this type of sensor could be used to determine how aggressive a tumor is and help doctors choose the best treatment, says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science and a member of MIT’s Koch Institute for Integrative Cancer Research.

“This approach is exciting because people are developing therapies that are protease-activated,” Bhatia says. “Ideally you’d like to be able to stratify patients based on their protease activity and identify which ones would be good candidates for these therapies.”

Once injected into the tumor site, the nanosensors are activated by a magnetic field that is harmless to healthy tissue. After interacting with, and being modified by, the target tumor proteins, the sensors are excreted in the urine, where they can be easily detected in less than an hour.

Bhatia and Polina Anikeeva, the Class of 1942 Associate Professor of Materials Science and Engineering, are the senior authors of the paper, which appears in the journal Nano Letters. The paper’s lead authors are Koch Institute postdoc Simone Schurle and graduate student Jaideep Dudani.

Heat and release

Tumors, especially aggressive ones, often have elevated protease levels. These enzymes help tumors spread by cleaving proteins that compose the extracellular matrix, which normally surrounds cells and holds them in place.

In 2014, Bhatia and colleagues reported using nanoparticles that interact with a type of protease known as matrix metalloproteinases (MMPs) to diagnose cancer. In that study, the researchers delivered nanoparticles carrying peptides, or short protein fragments, designed to be cleaved by the MMPs. If MMPs were present, hundreds of cleaved peptides would be excreted in the urine, where they could be detected with a simple paper test similar to a pregnancy test.

In the new study, the researchers wanted to adapt the sensors so that they could report on the traits of tumors in a known location. To do that, they needed to ensure that the sensors were only producing a signal from the target organ, unaffected by background signals that might be produced in the bloodstream. They first designed sensors that could be activated with light once they reached their target. That required the use of ultraviolet light, however, which doesn’t penetrate very far into tissue.

“We started thinking about what kinds of energy we might use that could penetrate further into the body,” says Bhatia, who is also a member of MIT’s Institute for Medical Engineering and Science.

To achieve that, Bhatia teamed up with Anikeeva, who specializes in using magnetic fields to remotely activate materials. The researchers decided to encapsulate Bhatia’s protease-sensing nanoparticles along with magnetic particles that heat up when exposed to an alternating magnetic field. The field is produced by a small magnetic coil that changes polarity some half million times per second.

The heat-sensitive material that encapsulates the particles disintegrates as the magnetic particles heat up, allowing the protease sensors to be released. However, the particles do not produce enough heat to damage nearby tissue.

“It has been challenging to examine tumor-specific protease activities from patients’ biofluids because these proteases are also present in blood and other organs,” says Ji Ho (Joe) Park, an associate professor of bio and brain engineering at the Korea Advanced Institute of Science and Technology.

“The strength of this work is the magnetothermally responsive protease nanosensors with spatiotemporal controllability,” says Park, who was not involved in the research. “With these nanosensors, the MIT researchers could assay protease activities involved more in tumor progression by reducing off-target activation significantly.”

Choosing treatments

In a study of mice, the researchers showed that they could use these particles to correctly profile different types of colon tumors based on how much protease they produce.

Cancer treatments based on proteases, now in clinical trials, consist of antibodies that target a tumor protein but have “veils” that prevent them from being activated before reaching the tumor. The veils are cleaved by proteases, so this therapy would be most effective for patients with high protease levels.

The MIT team is also exploring the use of this type of sensor to image cancerous lesions that spread to the liver from other organs. Surgically removing such lesions works best if there are fewer than four, so counting and measuring them could help doctors choose the best treatment.

Bhatia says this type of sensor could be adapted to other tumors as well, because the magnetic field can penetrate deep into the body. This approach could also be expanded to make diagnoses based on detecting other kinds of enzymes, including those that cut sugar chains or lipids.

The study was funded in part by the Ludwig Center for Molecular Oncology, a Koch Institute Support Grant from the National Cancer Institute, and a Core Center Grant from the National Institute of Environmental Health Sciences.



from MIT News http://ift.tt/2dbzq3s

Innovation for everyone

Four rising firms snagged top honors in MIT’s Inclusive Innovation Competition (IIC) on Tuesday evening, as part of a new $1 million contest rewarding companies whose technologies can improve economic opportunity for people from a full range of income levels and social circumstances. 

Among 20 companies receiving monetary awards, four were named as grand-prize champions: the work-force training firm Year Up, software job-training firm Laboratoria, apparel maker 99Degrees Custom, and health care delivery service Iora Health.

The winners, chosen from a field of 243 applicants worldwide, were honored at a reception on Tuesday evening at the MIT Media Lab, following an afternoon showcase where leaders from nominated firms made on-stage presentations about their work.

Speaking at the afternoon event, MIT Provost Martin Schmidt said it was “really exciting today to see the finalists here to present their pitches,” and noted that the IIC was a natural outgrowth of the now-annual MIT Solve conference. Founded in 2015, Solve highlights the role of innovative technologies in addressing “the world’s most challenging problems,” as Schmidt put it. The IIC event, developed by the MIT Initiative on the Digital Economy in collaboration with MIT Solve, is part of the “Make” pillar of Solve, one of the conference’s four thematic categories.

MIT President L. Rafael Reif has stated that Solve’s purpose is to “accelerate positive change” in the world. Solve is aligned with Boston’s HUBWeek, a celebration of innovation and creativity in Greater Boston.

Tuesday’s IIC pitches featured a broad array of startup firms working globally to advance social progress in areas including access to health care, job training, and advanced manufacturing jobs. Some firms that were IIC finalists also focus on outreach to underrepresented social groups.

“All the studies show that when we have diversity at the table in a meeting, everybody performs better,” said Maria Contreras-Sweet, the administrator of the U.S. Small Business Administration (SBA), in an on-stage discussion at the IIC event on Tuesday afternoon.

In a related vein, Contreras-Sweet noted, the SBA has developed tools to encourage further access to capital for small business founders regardless of gender or ethnicity; currently, she noted, only about 4 percent of venture capital goes to firms led by women and only about 1 percent is invested in firms led by African-Americans.

The four grand prize champions received $125,000 each, while 16 other firms received $25,000 apiece. The competition received support from the Rockefeller Foundation, the Joyce Foundation, the NASDAQ Foundation, Joseph Eastin, and Eric and Wendy Schmidt.

Entries to the IIC fell into four main categories: “Skills,” for companies centered on job training; “Matching,” for firms finding new ways of linking workers to jobs; “Human + Machines,” for companies using technology to augment human labor; and “New Models,” novel business practices or business models creating new labor-market opportunities.

The four grand prize champions include the Boston-based nonprofit firm Year Up, which won in the “Skills” category; it provides market-driven job training to low-income young adults.

Laboratoria, a firm that provides job-training focused on software, and helps women enter the information technology profession, won in the “Matching” category. Laboratoria is based in Peru but has expanded to Chile and Mexico.

99Degrees Custom, an apparel manufacturer based in Lawrence, Massachusetts, was the winner of the “Human + Machines” category. The firm uses technology to automate some aspects of the clothing-production process, while paying a living wage and benefits to its 50 employees. The company would like to expand nationally.

In the “New Models” category, Iora Health took the top honors. The company, headquartered in Boston but operating in 10 cities, offers health care services using “health coaches” and aims to reduce clients’ medical costs by keeping them healthy.

The categories represent marketplace trends, said Devin Wardell Cook, executive producer of the IIC, and those trends suggest “that we truly can create an economy that works for people … for the many, and not just for the few.”

The full list of honorees is available at the MIT Inclusive Innovation website.



from MIT News http://ift.tt/2dtfxZn

Tuesday, September 27, 2016

Researchers find explanation for interacting giant, hidden ocean waves

In certain parts of the ocean, towering, slow-motion rollercoasters called internal tides trundle along for miles, rising and falling for hundreds of feet in the ocean’s interior while making barely a ripple at the surface. These giant, hidden swells are responsible for alternately drawing warm surface waters down to the deep ocean and pulling marine nutrients up from the abyss.

Internal tides are generated in part by differences in water density, and they arise along continental shelf breaks, where a shallow seafloor suddenly drops off like a cliff and lighter water meets denser seas. In such regions, tides on the surface produce oscillating vertical currents, which in turn generate waves below the surface, at the interface between warmer, shallower water and colder, deeper water. These subsurface waves are called “internal tides,” as they are “internal” to the ocean and travel at the same frequency as surface tides. Internal tides are largely calm in some regions but can become chaotic near shelf breaks, where scientists have been unable to predict their paths.

Now for the first time, ocean engineers and scientists from MIT, the University of Minnesota at Duluth (UMD), and the Woods Hole Oceanographic Institution (WHOI) have accurately simulated the motion of internal tides along a shelf break called the Middle Atlantic Bight — a region off the coast of the eastern U.S. that stretches from Cape Cod in Massachusetts to Cape Hatteras in North Carolina. They found that the tides’ chaotic patterns there could be explained by two oceanic “structures”: the ocean front at the shelf break itself, and the Gulf Stream — a powerful Atlantic current that flows some 250 miles south of the shelf break.  

From the simulations, the team observed that both the shelf break and the Gulf Stream can act as massive oceanic walls, between which internal tides ricochet at angles and speeds that the scientists can now predict.

The researchers have published their findings in the Journal of Geophysical Research: Oceans and the Journal of Physical Oceanography. The team includes Samuel Kelly, an assistant professor at UMD who was a postdoc at MIT for this research; Pierre Lermusiaux, an associate professor of mechanical engineering and ocean science and engineering at MIT; Tim Duda, a senior scientist at WHOI; and Patrick Haley, a research scientist at MIT.

Lermusiaux says the team’s simulations of internal tides could help to improve sonar communications and predict ecosystems and fishery populations, as well as protect offshore oil rigs and provide a better understanding of the ocean’s role in a changing climate.

“Internal tides are a big chunk of energy that’s input to the ocean’s interior from the common [surface] tides,” he explains. “If you know how that energy is dissipated and where it goes, you can provide better predictions and better understand the ocean and climate in general.”

“Dead calm”

The effects of internal waves were first reported in the late 1800s, when Norwegian sailors, attempting to navigate a fjord, experienced a strange phenomenon: Even though the water’s surface appeared calm, their ship seemed to strongly resist sailing forward — a phenomenon later dubbed “dead water.”

“It would be dead calm in the water, and you’d turn your ship on but it wouldn’t move,” Lermusiaux says. “Why? Because the ship is generating internal waves because of the density difference between the light water on top and the salty water on the bottom in the fjord, that keep you in place.”

Since then, scientists have found that surface tides, just like internal tides, are generated by the cyclical gravitational pull of the sun and the moon, and travel between media of differing density. Surface waves travel at the boundary between the ocean and the air, while internal waves and internal tides flow between water layers of varying density.

“What people didn’t really know was, why can those internal tides be so variable and intermittent?” Duda says.

Following the tide

In the summer of 2006, oceanographers embarked on a large-scale scientific cruise, named “Shallow Water ’06,” to generate a detailed picture of how sound waves travel through complex coastal waters, specifically along part of the Middle Atlantic Bight region. The experiment confirmed that internal tides stemmed from the region’s shelf break at predictable intervals. Puzzlingly, the experiment also showed that internal tides arrived back at the shelf break at unpredictable times and locations.

“One would think if they were all generated at the shelf break, they would be more or less uniform, in and out,” Lermusiaux says.

To solve this puzzle, Lermusiaux, Haley, and their colleagues incorporated data from the 2006 cruise into hydrodynamic simulations to represent tides in a realistic ocean environment. These data-driven simulations included not only tides but also “background structures,” such as density gradients, eddies, and currents such as the Gulf Stream, with which tides might interact.

After completing more than 2,500 simulations of the Middle Atlantic Bight region, they observed that internal tides generated close to the shelf break seemed to flow out toward the ocean, only to bounce back once they reached the Gulf Stream. As the Gulf Stream meandered, the exact direction and location of the internal tides became more variable.

"Looking at the initial plots from the simulations, it was obvious that some type of interaction was happening between the internal tide and Gulf Stream,” Kelly says. “But the simulations could produce a huge number of complicated interactions and there are lots of theories for different types of interactions. So we started testing different theories.”

Terms of agreement

The researchers sought to find mathematical equations that would describe the underlying fluid dynamics that they observed in their simulations. To do this, they started with an existing equation that characterizes the behavior of internal tides but involves an idealized scenario, with limited interactions with other features. The team added new “interaction terms,” or factors, into the equations that described the dynamics of the Gulf Stream and the shelf break front, which they derived from their data-driven simulations.
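
As a rough illustration of what such a term does (a minimal sketch under simplifying assumptions, not the coupled-mode equations used in the papers), consider the amplitude A of a single internal-tide mode with phase speed c. In the idealized case the mode obeys a plain wave equation; advection by a slowly varying background current \mathbf{U}, standing in here for the Gulf Stream or the shelf break front, enters as an interaction term that Doppler-shifts the tide:

    \frac{\partial^2 A}{\partial t^2} - c^2 \nabla^2 A = 0
    \qquad \longrightarrow \qquad
    \left( \frac{\partial}{\partial t} + \mathbf{U} \cdot \nabla \right)^{2} A - c^2 \nabla^2 A = 0,

so that a plane wave A \propto e^{i(\mathbf{k} \cdot \mathbf{x} - \omega t)} satisfies \omega = \mathbf{U} \cdot \mathbf{k} \pm c\,|\mathbf{k}|. In this toy picture, a change in the strength or position of \mathbf{U} shifts the speed and direction of the reflected tide, which is qualitatively how a meandering Gulf Stream can scramble arrival times at the shelf break.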

“It was really exciting when we wrote down a set of slightly idealized equations and saw that the internal tides extracted from the complex simulations were obeying almost the exact same equations,” Kelly says.

The match between their simulations and equations indicated to the researchers that the Gulf Stream and the shelf break front were indeed influencing the behavior of the internal tides. With this knowledge, they were able to accurately predict the speed and arrival times of internal waves at the shelf break, by first predicting the strength and position of the Gulf Stream over time. They also showed that the strength of the shelf break front alters the speed and arrival times of internal tides.

The team is currently applying their simulations to oceanic regions around Martha’s Vineyard, the Pacific Islands, and Australia, where internal tides are highly variable and their behavior can have a large role in shaping marine ecosystems and mediating the effects of climate change.

“Our work shows that, with data-driven simulations, you can find and add missing terms, and really explain the ocean’s interactions,” Lermusiaux says. “If you look at ocean or atmospheric sciences today, understanding interactions of features is where big questions are.”

This research was funded in part by the Office of Naval Research and the National Science Foundation.



from MIT News http://ift.tt/2d8AcOS

Barnhart, Stuopis establish medical leave and hospitalization policy review committee

Responding to one of the key recommendations from last spring’s Committee on Academic Performance’s (CAP) report on undergraduate withdrawal and readmissions practices, Chancellor Cynthia Barnhart and Medical Director Cecilia Stuopis have established a committee to review MIT’s medical leave and hospitalization practices and policies for undergraduate and graduate students. This action is one in a series of recent steps the administration has taken to implement the CAP’s recommendations to strengthen the leave and return system for students.

In the charge to the committee, Barnhart and Stuopis wrote, “While MIT has policies and procedures for involuntary medical leave for both undergraduate and graduate students, they are intended to be used only as a last resort. The CAP learned, however, that the specter of an involuntary medical leave creates anxiety in many students.”

Calling this a “clear and pressing issue for the community at large,” Barnhart and Stuopis have tapped Department of Brain and Cognitive Sciences professors Rebecca Saxe and Laura Schulz to co-chair the committee of faculty, students, and staff. The committee will consider whether current policies adequately express the approach MIT should follow for students who present a danger to themselves or to the community, or who otherwise are not able to participate in campus life due to mental or physical health issues, and whether the implementation of those policies is adequate and clear; if necessary, the committee will make recommendations for improvements.

Barnhart and Stuopis’ charge also requests that the committee consider what practices the Institute should implement to “minimize uncertainty, fear, and distrust” surrounding the hospitalization process and “what procedures are in place, or should be in place, to ensure that students who are hospitalized are supported by MIT.”

The committee’s report is expected in the upcoming spring semester. The full membership list is available here.

“When we were approached about chairing this committee, we were obviously interested because this is such an important topic,” Saxe and Schulz said. “However, we did not accept until we had spoken with a number of administrators around MIT to ensure that there was a true openness to change. What we heard was that people wanted an honest assessment of MIT’s hospitalization and medical leave policies, and that there was political will to revise our practices. We are excited to have such a great committee, and welcome everyone's input into how to tackle this hard problem.”

Saxe and Schulz convened the committee for the first time last week, and have a series of fall meetings scheduled. Additionally, they will be holding two community conversations — one for undergraduates, the other for graduate students — in order for students to share their insights on the current policies.

Undergraduates are invited to join committee members Tamar Weseley, Taylor Sutton — both MIT seniors — and Saxe on Monday, Oct. 3 from 7 to 8:30 p.m. in Room 4-370.

Graduate students are encouraged to join graduate student committee members Joy Louveau and Kyle Kotowick and Professor Tamar Schapiro on Tuesday, Oct. 4 from 7 to 8:30 p.m. in Room 4-370.

Saxe and Schulz have also set up two other platforms for community feedback to inform the committee’s final report. The first is a survey open to all MIT community members who have thoughts to add, especially those who have had experience with current medical leave and hospitalization practices; it will remain open through Oct. 31. The second is an email channel: the committee will accept public comments via hospitalizationfeedback@mit.edu throughout the fall semester.

Barnhart and Stuopis’ work to set up this new committee represents just one of the action steps the Institute has taken in recent months to respond to the CAP report’s recommendations. The following progress has been made:

  • A new flexible "leave of absence" category with fewer administrative requirements has been established so that students can depart for a variety of educational, professional, or wellness reasons. Thirty-four students have requested this new type of leave for the current semester.
  • Terminology changes from “withdrawal and readmission” to “leave and return” are complete, and leave letters from Student Support Services (S3) are now more supportive in tone and include a concrete action plan, a list of key supportive contacts, and clear expectations for what is required in order for students to return from a leave. Expectations for coursework while on leave are only set by S3 after consultation with the CAP.
  • CAP is now the sole decision-maker for return requests, ensuring that S3 can fulfill its core mission of providing support, advice, and guidance to students. And, in an early indication that the CAP is adhering to a key report principle that MIT should help all students who wish to return from a leave and earn a degree to do so, a record high percentage (98 percent) of return requests were granted during the most recent review process.
  • All students who returned to MIT from a leave this fall and requested on-campus housing were given offers for on-campus room assignments.


from MIT News http://ift.tt/2dANE0v

Engineer, explain thyself

A graduate student doing research on materials for circuit design might not share lab space with someone working on machine learning, but they still have a shared need: to explain what they’re working on to other people. Whether it’s to their advisor, a room full of their peers, a startup accelerator, a project’s funders, or Uncle Frank — at some point everyone who does research is faced with an audience.

And so here comes CommKit, a new online resource poised to make the lives of engineering graduate students easier. Launched on Sept. 22, the CommKit is a website that provides discipline-specific aid to those seeking writing, speaking, and visual design support on a tight deadline. Designed for engineers by engineers — specifically, 50 graduate students who have been working as department- and area-based peer coaches at the MIT Communication Lab — the CommKit aims to demystify effective scientific communication.

Like most good ideas at MIT, the one for the CommKit started with students. While doing peer coaching sessions, graduate students Diana Chien and Scott Olesen found themselves answering many of the same questions, and providing many of the same examples and templates, over and over again. “I remember how overwhelmed I felt as an early graduate student writing abstracts and fellowships,” says Chien. “I googled frantically for support, but the advice I found was scattered randomly and rarely relevant to my field.”

Chien and Olesen approached Jaime Goldstein, who directs the Communication Lab (or Comm Lab), about developing a solution to the need for a consistent go-to resource. The resulting website is organized first by department, and then by communication task — such as poster, grant application, manuscript, or oral presentation. Each task page includes quick tips, structural diagrams, and annotated field-specific examples. “We’re scientists and engineers,” Olesen says. “We thought like scientists and engineers when making the CommKit.”

Enthusiasm about the CommKit, and the Comm Lab’s peer coaching model, is already getting attention beyond campus. “Effectively communicating technical information is a key determinant of the success of students and postdocs at Caltech,” says Trity Pourbahrami, from the Caltech Engineering and Applied Science Division. “I’ve been delighted to connect with the MIT Communication Lab community and look forward to sharing the CommKit with the Caltech community.”

Goldstein, who has directed the Comm Lab since its inception in 2012, says the development of an online resource marks an innovative step forward. The site will continue to evolve, she adds, just as the Comm Lab’s programming has changed and broadened over time so that it could serve more students and be more effective.

Started initially as a cocurricular support program within the Department of Biological Engineering, the Comm Lab has since expanded to include the Department of Nuclear Science and Engineering, Department of Electrical Engineering and Computer Science, and the Broad Institute. This semester, the program is working with the Sandbox Innovation Program to develop expertise, and coaches, to support students with an interest in innovation.

Each time the program expands to a new department or field, Goldstein adds coaches and content so the program is always immediately relevant for the students it’s there to serve. “Communications can be a very broad topic, but students don’t experience it that way,” Goldstein says. “For them — and for the Comm Lab coaches — it’s about specific tasks, projects, and skills. Students crave help improving things like fellowship applications, grant proposals, and posters, so that’s where we focus our efforts.”  



from MIT News http://ift.tt/2dAzEnI

Study: Low-emissions vehicles are less expensive overall

You might think cars with low carbon emissions are expensive. Think again. A newly published study by MIT researchers shows that when operating and maintenance costs are included in a vehicle’s price, autos emitting less carbon are among the market’s least expensive options, on a per-mile basis.

“If you look in aggregate at the most popular vehicles on the market today, one doesn’t have to pay more for a lower carbon-emitting vehicle,” says Jessika Trancik, the Atlantic Richfield Associate Professor in Energy Studies at the Institute for Data, Systems, and Society (IDSS) at MIT, and the study’s senior author. “In fact, the group of vehicles at the lower end of costs are also at the lowest end of emissions, even across a diverse set of alternative and conventional engines.”

The study also evaluates the U.S. automotive fleet — as represented by the 125 popular model types the researchers analyzed — against emissions-reduction targets the U.S. has set for the years from 2030 to 2050. Overall, the research finds, the average carbon intensity of vehicles that consumers bought in 2014 is more than 50 percent higher than the level it must meet to help reach the 2030 target. However, the lowest-emissions autos have surpassed the 2030 target.

“Most hybrids and electric vehicles on the road today meet the 2030 target, even with today’s electricity supply mix,” Trancik observes.

The new paper, “Personal Vehicles Evaluated against Climate Change Mitigation Targets,” is being published in the latest issue of the journal Environmental Science and Technology. The research group is also releasing the results in the form of an app that consumers can use to evaluate any or all of the 125 vehicle types.

“Private citizens are the investors that will ultimately decide whether a clean-energy transition occurs in personal transportation. It’s important to consider the problem from the viewpoint of consumers on the ground,” Trancik says. “The goal here is to bring this information on the performance of cars to people’s fingertips, to empower them with the information needed to make emission- and energy-saving choices.”

Along with Trancik, the authors of the study are Marco Miotti, a doctoral student in IDSS; Geoffrey Supran PhD '16, a recent graduate of MIT's Department of Materials Science and Engineering and now a postdoc in IDSS; and Ella Kim, a doctoral student in MIT’s Department of Urban Studies and Planning.

The 60-percent solution

Transportation accounts for about 28 percent of greenhouse gas emissions in the driving-intensive U.S., and about 13 percent of emissions worldwide. Within the transportation sector, light-duty vehicles (LDVs) — passenger cars and trucks that seat 12 or fewer passengers and fall under certain weight limits — account for about 60 percent of emissions.

In order to estimate the cost of the vehicles, the researchers accounted for both the sticker price and the operating costs over the vehicle's lifetime. When estimating emissions, the calculations included emissions from each vehicle's operation as well as the emissions from manufacturing the vehicle and producing its fuel.
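
The per-mile framing combines the up-front price with fuel and maintenance spending over an assumed lifetime mileage, and spreads lifecycle emissions over that same mileage. The short Python sketch below illustrates that bookkeeping; the lifetime, price, and emissions figures are hypothetical placeholders, not numbers taken from the study or its app.

    # Illustrative per-mile cost and emissions bookkeeping (not the study's model).
    # All numeric values are hypothetical placeholders.

    LIFETIME_MILES = 150_000   # assumed lifetime mileage
    LIFETIME_YEARS = 15        # assumed years of ownership

    def cost_per_mile(sticker_price, fuel_cost_per_mile, maintenance_per_year):
        """Purchase price plus fuel and maintenance, spread over lifetime miles."""
        total = (sticker_price
                 + fuel_cost_per_mile * LIFETIME_MILES
                 + maintenance_per_year * LIFETIME_YEARS)
        return total / LIFETIME_MILES

    def emissions_per_mile(manufacturing_kg, fuel_supply_kg, operation_g_per_mile):
        """Grams of CO2-equivalent per mile: manufacturing, fuel production, and use."""
        embodied_g = (manufacturing_kg + fuel_supply_kg) * 1000 / LIFETIME_MILES
        return embodied_g + operation_g_per_mile

    # Two made-up vehicles for comparison
    examples = {
        "small hybrid": (cost_per_mile(25_000, 0.06, 500),
                         emissions_per_mile(8_000, 4_000, 200)),
        "large SUV":    (cost_per_mile(50_000, 0.15, 700),
                         emissions_per_mile(12_000, 9_000, 450)),
    }
    for name, (dollars, grams) in examples.items():
        print(f"{name:12s}  ${dollars:.2f}/mile  {grams:.0f} gCO2e/mile")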

The study takes a further step in calculating how far along today’s passenger-vehicle fleet is in relation to the pledged U.S. emissions-reduction goals. To do so, the researchers looked at the overall amount of reduction needed in the 2030-2050 time period and the fraction of it likely to come from LDVs, and incorporated the total distance these vehicles are estimated to travel in those years.

“To enable a fair comparison between cars of all technologies, we include all emissions coming from the fuel, electricity, and vehicle production supply chains,” Miotti says.

The researchers’ chart yields some clear trends among the vehicles. Smaller hybrids and electric vehicles such as the Toyota Prius and Nissan Leaf fare very well and are among the cheapest per mile driven. Small combustion-engine cars are also low in cost, but emit up to 40 percent more greenhouse gases than their hybrid and electric counterparts. The Chevrolet Suburban, by contrast, is among the most expensive and highest-emitting popular vehicles. Luxury sedans such as the Mercedes E350, the study found, are the only popular vehicles that cost more per mile than the Suburban, though they emit about two-thirds as much carbon.

“Our results show that popular alternative-technology cars such as the Nissan Leaf can already save a considerable amount of emissions today, while being quite affordable when operating costs are considered,” Miotti observes. “Notably, the benefit of the efficient electric powertrain far outweighs the added emissions of manufacturing a battery.”

Supran says that “there are a lot of myths floating around about hybrid and electric cars,” for example concerning the manufacture of those vehicles, or their reliance on conventional electricity sources. This often leads people to claim that “they’re no better than your average gasoline vehicle. Our study shows that’s just not true.”

The enduring popularity of trucks and SUVs, Supran adds, shows that “[w]e’ve got a long way to go. Obviously the best option is to use public transport and, when possible, to not drive at all. But for those who have to, hopefully our work can help inform a generation of more climate-conscious car buyers.”

Opportunities for decarbonization

To be sure, as Trancik notes, larger vehicles such as the Suburban may also be transporting more people around, and thus may fare better when measured on a per-passenger-mile basis. But a central aim of the study, she adds, is to let consumers access more data, from which they can make their own additional assessments about their vehicle needs.

“There are a lot of opportunities for decarbonization in the transportation sector,” Trancik says. “It’s fairly easy to buy a lower-emissions vehicle if you have easy access to this information.”

To reach a wider audience, the team developed an app with which people can look up their current car, or a car they are considering buying or leasing, and see how it performs in terms of costs and carbon emissions.

Vehicle costs and emissions also vary regionally, as the study notes. For instance: Western states draw from renewable energy sources (mostly solar and wind) to a greater extent than, say, states in the Midwest. In aggregate, therefore, plug-in electric vehicles will draw upon cleaner sources of electricity in the West than in the Midwest, and produce lower emissions overall.
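
As a minimal sketch of that regional effect (the efficiency and grid-intensity numbers below are assumptions, not values from the study), a plug-in car's use-phase emissions are simply its electricity use per mile multiplied by the carbon intensity of the local grid:

    # Hypothetical illustration: EV use-phase emissions vs. regional grid mix.
    # Both the efficiency and the grid intensities are assumed placeholder values.
    EV_KWH_PER_MILE = 0.30

    grid_intensity_g_per_kwh = {
        "cleaner Western grid": 300,
        "coal-heavier Midwestern grid": 700,
    }

    for region, intensity in grid_intensity_g_per_kwh.items():
        print(f"{region}: {EV_KWH_PER_MILE * intensity:.0f} gCO2 per mile from electricity")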

On a national basis, though, the study reinforces the need to continue modernizing the country’s vehicle fleet and decarbonizing it in the next few decades. 

“To meet mid-century climate policy targets, what we would likely need to see is a near-complete electrification of vehicles within a few decades, alongside a decarbonization of electricity,” Trancik says.

Funding for the study was provided by the New England University Transportation Center at MIT, under a Department of Transportation grant; the Singapore National Research Foundation, through the Singapore-MIT Alliance for Research and Technology Centre; the Reed Foundation; and the MIT Leading Technology and Policy Initiative.



from MIT News http://ift.tt/2doMbvl

Free webinar on gearbox maintenance and repair



from MOS INGENIEROS - BLOG DE INGENIERÍA http://ift.tt/2d5Vpuw

Monday, September 26, 2016

Pinpointing a brain circuit that can keep fears at bay

People who are too frightened of flying to board an airplane, or too scared of spiders to venture into the basement, can seek a kind of treatment called exposure therapy. In a safe environment, they repeatedly face cues such as photos of planes or black widows, as a way to stamp out their fearful response — a process known as extinction.

Unfortunately, the effects of exposure therapy are not permanent, and many people experience a relapse. MIT scientists have now identified a way to enhance the long-term benefit of extinction in rats, offering a way to improve the therapy in people suffering from phobias and more complicated conditions such as post-traumatic stress disorder (PTSD).

Work conducted in the laboratory of Ki Goosens, an assistant professor in MIT’s Department of Brain and Cognitive Sciences and a member of the McGovern Institute for Brain Research, has pinpointed a neural circuit that becomes active during exposure therapy in the rats. In a study published Sept. 27 in eLife, the researchers showed that they could stretch the therapy’s benefits for at least two months by boosting the circuit’s activity during treatment.

“When you give extinction training to humans or rats, and you wait long enough, you observe a phenomenon called spontaneous recovery, in which the fear that was originally learned comes back,” Goosens explains. “It’s one of the barriers to this type of therapy. You spend all this time going through it, but then it’s not a permanent fix for your problem.”

According to statistics from the National Institute of Mental Health, 18 percent of U.S. adults are diagnosed with a fear or anxiety disorder each year, with 22 percent of those patients experiencing severe symptoms.

How to quench a fear

The neural circuit identified by the scientists connects a part of the brain involved in fear memory, called the basolateral amygdala (BLA), with another region called the nucleus accumbens (NAc), which helps the brain process rewarding events. Goosens and her colleagues call it the BLA-NAc circuit.

Researchers have been considering a link between fear and reward for some time, Goosens says. “The amygdala is a part of the brain that is tightly linked with fear memory but it’s also been linked to positive reward learning as well, and the accumbens is a key reward area in the brain,” she explains. “What we’ve been thinking about is whether extinction is rewarding. When you’re expecting something bad and you don’t get it, does your brain treat that like it’s a good thing?”

To find out if there was a specific brain circuit involved, the researchers first trained rats to fear a certain noise by pairing it with foot shock. They later gave the rats extinction training, during which the noise was presented in the absence of foot shock, and they looked at markers of neural activity in the brain. The results revealed the BLA-NAc reward circuit was recruited by the brain during exposure therapy, as the rats gave up their fear of the bad noise.

Once Goosens and her colleagues had identified the circuit, they looked for ways to boost its activity. First, they paired a sugary drink with the fear-related sound during extinction training, hoping to associate the sound with a reward. This type of training, called counterconditioning, associates fear-eliciting cues with rewarding events or memories, instead of with neutral events as in most extinction training.

Rats that received the counterconditioning were significantly less likely to spontaneously revert to their fearful states for up to 55 days afterward, compared with rats that received regular extinction training, the scientists found.

They also found that the benefits of extinction could be prolonged with optogenetic stimulation, in which the circuit was genetically modified so that it could be stimulated directly with tiny bursts of light from an optical fiber.

The ongoing benefit that came from stimulating the circuit was one of the most surprising — and welcome — findings from the study, Goosens says. “The effect that we saw was one that really emerged months later, and we want to know what’s happening over those two months. What is the circuit doing to suppress the recovery of fear over that period of time? We still don’t understand what that is.”

Another interesting finding from the study was that the circuit was active during both fear learning and fear extinction, says lead author Susana Correia, a research scientist in the Goosens lab. “Understanding if these are molecularly different subcircuits within this projection could allow the development of a pharmaceutical approach to target the fear extinction pathway and to improve cognitive therapy,” Correia says.

Immediate and future impacts on therapy

Some therapists are already using counterconditioning in treating PTSD, and Goosens suggests that the rat study might encourage further exploration of this technique in human therapy.

And while it isn’t likely that humans will receive direct optogenetic therapy any time soon, Goosens says there is a benefit to knowing exactly which circuits are involved in extinction.

In neurofeedback studies, for instance, brain scan technologies such as fMRI or EEG could be used to help a patient learn to activate specific parts of their brain, including the BLA-NAc reward circuit, during exposure therapy.

Studies like this one, Goosens says, offer a “target for a personalized medicine approach where feedback is used during therapy to enhance the effectiveness of that therapy.”

Other MIT authors on the paper include technical assistant Anna McGrath, undergraduate Allison Lee, and McGovern principal investigator and Institute Professor Ann Graybiel.

The study was funded by the U.S. Army Research Office, the Defense Advanced Research Projects Agency (DARPA), and the National Institute of Mental Health.



from MIT News http://ift.tt/2d5upte

How data can help change the world

The vast amount of data generated daily across society is widely touted as a game-changer for research, technological innovation, and even policy making. But “big data will not change the world unless it’s collected and synthesized into tools that have a public benefit,” said Sarah Williams, an assistant professor of urban planning at MIT, in a panel discussion on the future of cities, at a conference convened last week by the Institute for Data, Systems and Society (IDSS).

How data can be used to produce change was a common theme among speakers at the IDSS celebration, which focused on how the deluge of data being gathered in the big data era can be used to tackle society’s most pressing challenges. The two-day event brought together experts from a variety of fields, including energy, health care, finance, urban planning, engineering, computer science, and political science. The lineup even featured one speaker who, MIT President L. Rafael Reif joked, “knows who will win the election in November.” That would be Nate Silver, founder and editor-in-chief of the political poll analysis website FiveThirtyEight.

The event participants had much to celebrate. Launched in July 2015, IDSS accomplished a number of milestones in its first year, including the introduction of a new undergraduate minor in statistics and data science, a new doctoral program in social engineering and systems, a professional education course in data science, and a center focused on statistics and data sciences.

The all-star speaker lineup at the event was a testament to IDSS’s ability to bring together “data scientists and systems engineers with experts in economics, finance, urban planning, energy, public health, political science, social networks, and more,” Reif said. He added that IDSS is “a unit that can magnify individual talents through collaborations, a unit that aspires to generate groundbreaking ways to understand society’s most difficult problems and lead us to badly needed solutions.”

At IDSS, researchers are focused on taking “an analytical, data-driven approach to problems,” said Munther Dahleh, director of IDSS and the William A. Coolidge Professor of Electrical Engineering and Computer Science. “We collect the data, we develop the models, and from these models we develop insights, policies, and decisions.”

Data in the political process

The event opened with a panel discussion focused on the future of voting and elections. Charles Stewart, the Kenan Sahin Distinguished Professor in the MIT Department of Political Science, set the stage by noting the increasing role of data in the political process. Stewart, who co-directs the Caltech/MIT Voting Technology Project, described how data is collected from voter registration files, campaigns and politicians, public opinion polls, campaign contribution records, and more. He added that many citizens might be surprised to learn that the identity of anyone who has registered to vote is public record, while the data and computer code in voting machines is not always available to the public or election officials.

“Interest in election data is not simply about choosing the best candidates or policies,” Stewart explained. “It’s also about who controls the data and how it is used.”

MIT alumna Kassia DeVorsey ’04, who worked for the Obama campaign and is now the chief analytics officer at the Messina Group and founder of Minerva Insights, explained that while previously only presidential campaigns invested in gathering and analyzing data, nowadays, “if you’re running for mayor in a small town, you’re thinking strategically about ‘how can I use data to best run my campaign.’” She noted that the voter-information data compiled by the Obama campaign was the team’s most valuable resource in trying to address and influence the electorate.

During his talk, Silver explained that FiveThirtyEight is empirically minded and draws from publicly available information to generate probabilistic election forecasts. As for the 2016 presidential election, the high number of undecided voters has introduced more volatility, according to Silver. “This year, even relatively minor events have produced a shift. Therefore the debates … can matter quite a bit,” he said.

Silver said that while the polls show Democratic presidential nominee Hillary Clinton is the favored candidate, the race has tightened and Republican nominee Donald Trump does have a chance to win. Based on the high level of uncertainty surrounding this year’s election, Silver said he and his colleagues are “urging caution. … You can build models and you can do the data science, but sometimes the conclusion can be: Be careful.”

Regarding the role of gender in the presidential election, DeVorsey described how during the 2008 election, the Obama campaign asked voters oblique questions about race, to try to gauge whether polling was capturing how racism might impact the election outcome. The Clinton campaign is probably trying a similar tactic, she suggested. Meanwhile, Silver questioned whether Clinton’s high unfavorability rating can be explained without reference to her gender, adding that he thinks “the sexism question is, frankly, badly understudied.”

Data-driven policy and financial risk

Beyond the use of data in elections, Alberto Abadie, a professor of economics at MIT, and Enrico Giovannini, a professor at the University of Rome Tor Vergata, explored how data can be used to drive policy. Abadie questioned whether automatic policymaking might be possible in the future, thanks to insights from data collection.

Giovannini urged the audience to use data to help transform policies, in order to improve people’s well-being and encourage sustainable development. “We produce statistics because we believe facts can improve decision-making on many levels,” he explained. Giovannini also cautioned against potential pitfalls of relying too heavily on data, adding that policymakers need to use data not just to understand problems but also to develop solutions.

Another difficulty of data collection, raised by Bengt Holmstrom, the Paul A. Samuelson Professor of Economics, lies in financial risk, particularly in money markets. While there have been calls for increased transparency following the 2008 financial crisis, Holmstrom argued that in money markets, more transparency can lead to less liquidity. Unlike the stock market, “money markets are fundamentally information-sparse and opaque,” Holmstrom explained. In terms of managing systemic risk in money markets, he said “transparency is not likely to be the way unless you think that maybe [we] will regulate the markets to be less liquid.”

Urban planning

One area where speakers called for greater transparency in the use of data is urban planning. A panel moderated by Williams examined how data can be used to make cities better places for people to live.

Panelists described how data can be used to alleviate congestion and noise, and also examined the ethical and privacy implications for residents in places where governments are collecting and analyzing data. 

During her talk, Williams displayed data visualizations her group created to illustrate the cost of incarceration in Brownsville, Brooklyn. The images exposed systemic issues in the neighborhood, including areas lacking services that could alleviate mass incarceration. The goal of her research, Williams explained, is to transform data sets “into visualizations that I hope expose urban policy issues.”

In addition to a panel discussion on social networks, the event also featured a panel discussion on the future of the electric grid, moderated by Robert Armstrong, director of the MIT Energy Initiative and the Chevron Professor of Chemical Engineering; and a session on how data can be used to analyze our health, moderated by professor of computer science and engineering Peter Szolovits.



from MIT News http://ift.tt/2dezGNF