Wednesday, January 31, 2018

Modeling the universe

A supercomputer simulation of the universe has produced new insights into how black holes influence the distribution of dark matter, how heavy elements are produced and distributed throughout the cosmos, and where magnetic fields originate. 

Astrophysicists from MIT, Harvard University, the Heidelberg Institute for Theoretical Studies, the Max-Planck Institutes for Astrophysics and for Astronomy, and the Center for Computational Astrophysics gained new insights into the formation and evolution of galaxies by developing and programming a new simulation model for the universe — “Illustris: The Next Generation,” or IllustrisTNG.

Mark Vogelsberger, an assistant professor of physics at MIT and the MIT Kavli Institute for Astrophysics and Space Research, has been working to develop, test, and analyze the new IllustrisTNG simulations. Along with postdocs Federico Marinacci and Paul Torrey, Vogelsberger has been using IllustrisTNG to study the observable signatures from large-scale magnetic fields that pervade the universe. 

Vogelsberger used the IllustrisTNG model to show that the turbulent motions of hot, dilute gases drive small-scale magnetic dynamos that can exponentially amplify the magnetic fields in the cores of galaxies — and that the model accurately predicts the observed strength of these magnetic fields.
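
As a rough illustration of what exponential dynamo amplification means (this is a toy model with made-up numbers, not the magnetohydrodynamics actually solved in IllustrisTNG):

```python
import math

# Toy model of small-scale dynamo amplification: the field grows
# exponentially at rate gamma until it saturates near an assumed
# equipartition strength. All values here are illustrative.
def amplify(b_seed, gamma, b_sat, t_end, dt):
    """Integrate dB/dt = gamma * B, capped at b_sat."""
    b, t = b_seed, 0.0
    while t < t_end:
        b = min(b * math.exp(gamma * dt), b_sat)
        t += dt
    return b

# A tiny seed field reaches the saturation strength after enough
# e-folding times, which is why the initial seed strength matters
# far less than the growth rate.
final_b = amplify(b_seed=1e-9, gamma=2.0, b_sat=1.0, t_end=20.0, dt=0.1)
print(final_b)
```

The point of the sketch is that growth proportional to the field itself makes even a vanishingly weak seed field reach its saturated strength quickly, which is what lets the simulation reproduce observed field strengths in galaxy cores.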

“The high resolution of IllustrisTNG combined with its sophisticated galaxy formation model allowed us to explore these questions of magnetic fields in more detail than with any previous cosmological simulation,” says Vogelsberger, an author on the three papers reporting the new work, published today in the Monthly Notices of the Royal Astronomical Society.

Modeling a (more) realistic universe 

The IllustrisTNG project is a successor model to the original Illustris simulation developed by this same research team but has been updated to include some of the physical processes that play crucial roles in the formation and evolution of galaxies. 

Like Illustris, the project models a cube-shaped piece of the universe. This time, the project followed the formation of millions of galaxies in a representative region of the universe nearly 1 billion light-years on a side (up from 350 million light-years on a side just four years ago). IllustrisTNG is the largest hydrodynamic simulation project to date for the emergence of cosmic structures, says Volker Springel, principal investigator of IllustrisTNG and a researcher at the Heidelberg Institute for Theoretical Studies, Heidelberg University, and the Max-Planck Institute for Astrophysics.

The cosmic web of gas and stars predicted by IllustrisTNG produces galaxies quite similar in shape and size to real galaxies. For the first time, hydrodynamical simulations could directly compute the detailed clustering pattern of galaxies in space. In comparison with observational data — including the newest large galaxy surveys such as the Sloan Digital Sky Survey — IllustrisTNG demonstrates a high degree of realism, says Springel.

In addition, the simulations predict how the cosmic web changes over time, in particular in relation to the underlying backbone of the dark matter cosmos. “It is particularly fascinating that we can accurately predict the influence of supermassive black holes on the distribution of matter out to large scales,” says Springel. “This is crucial for reliably interpreting forthcoming cosmological measurements.” 

Astrophysics via code and supercomputers 

For the project, the researchers developed a particularly powerful version of their highly parallel moving-mesh code AREPO and ran it on the “Hazel Hen” machine at the High-Performance Computing Center Stuttgart, Germany's fastest supercomputer.

To compute one of the two main simulation runs, more than 24,000 processors were used over the course of more than two months.
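
For scale, a rough back-of-envelope estimate of the compute involved (assuming, hypothetically, that all 24,000 processors ran continuously for two 30-day months with no idle time):

```python
# Rough core-hour estimate for one main simulation run.
# Assumption (not from the article): full 24,000-processor
# utilization for two 30-day months.
processors = 24_000
hours = 2 * 30 * 24          # two 30-day months, in hours
core_hours = processors * hours
print(core_hours)            # 34,560,000 core-hours, about 3.5e7
```

Even under these simplified assumptions, the run works out to tens of millions of core-hours, which helps explain why such simulations are attempted only a handful of times.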

“The new simulations produced more than 500 terabytes of simulation data,” says Springel. “Analyzing this huge mountain of data will keep us busy for years to come, and it promises many exciting new insights into different astrophysical processes." 

Supermassive black holes squelch star formation

In another study, Dylan Nelson, researcher at the Max-Planck Institute for Astrophysics, was able to demonstrate the important impact of black holes on galaxies.

Star-forming galaxies shine brightly in the blue light of their young stars until a sudden evolutionary shift quenches the star formation, such that the galaxy becomes dominated by old, red stars and joins a graveyard full of old and dead galaxies.

“The only physical entities capable of extinguishing the star formation in our large elliptical galaxies are the supermassive black holes at their centers,” explains Nelson. “The ultrafast outflows of these gravity traps reach velocities up to 10 percent of the speed of light and affect giant stellar systems that are billions of times larger than the comparably small black hole itself.”

New findings for galaxy structure

IllustrisTNG also improves researchers' understanding of the hierarchical structure formation of galaxies. Theorists argue that small galaxies should form first, and then merge into ever-larger objects, driven by the relentless pull of gravity. The numerous galaxy collisions literally tear some galaxies apart and scatter their stars onto wide orbits around the newly created large galaxies, which should give them a faint background glow of stellar light.

These predicted pale stellar halos are very difficult to observe due to their low surface brightness, but IllustrisTNG was able to simulate exactly what astronomers should be looking for. 

“Our predictions can now be systematically checked by observers,” says Annalisa Pillepich, a researcher at the Max-Planck Institute for Astronomy, who led a further IllustrisTNG study. “This yields a critical test for the theoretical model of hierarchical galaxy formation.”



from MIT News http://ift.tt/2DTGJcp

Reading and writing DNA

Thanks to the invention of genome sequencing technology more than three decades ago, we can now read the genetic blueprint of virtually any organism. After the ability to read came the ability to edit — adding, subtracting, and eventually altering DNA wherever we saw fit. And yet, for George Church, a professor at Harvard Medical School, associate member of the Broad Institute, and founding core faculty and lead for synthetic biology at the Wyss Institute — who co-pioneered direct genome sequencing in 1984 — the ultimate goal is not just to read and edit, but also to write.

What if you could engineer a cell resistant to all viruses, even the ones it hadn’t yet encountered? What if you could grow your own liver in a pig to replace the faulty one you were born with? What if you could grow an entire brain in a dish? In his lecture on Jan. 24 — which opened the Department of Biology’s Independent Activities Period (IAP) seminar series, Biology at Transformative Frontiers — Church promised all this and more.

“We began by dividing the Biology IAP events into two tracks: one related to careers in academia and another equivalent track for industry,” says Jing-Ke Weng, assistant professor and IAP faculty coordinator for the department. “But then it became clear that George Church, Patrick Brown, and other speakers we hoped to invite blurred the boundaries between those two tracks. The Biology at Transformative Frontiers seminar series became about the interface of these trajectories, and how transferring technologies from lab bench to market is altering society as we know it.”

The seminar series is a staple in the Department of Biology’s IAP program, but during the past several years it has been oriented more toward quantitative biology. Weng recalls these talks as being relegated to the academic sphere, and wanted to show students that the lines between academia, industry, and scientific communication are actually quite porous.

“We chose George Church to kick off the series because he’s been in synthetic biology for a long time, and continues to have a successful academic career even while starting so many companies,” says Weng.

Church’s genomic sequencing methods inspired the Human Genome Project in 1984 and resulted in the first commercial genome sequence (the bacterium Helicobacter pylori) 10 years later. He also serves as the director of the Personal Genome Project, the “Wikipedia” of open-access human genomic data. Beyond these ventures, he’s known for his work on barcoding, DNA assembly from chips, genome editing, and stem cell engineering.

He’s also the same George Church who converted the book he co-authored with Ed Regis, "Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves," into a four-letter code based on the four DNA nucleotides (A, T, C, and G), subsisted on nutrient broth from a lab vendor for an entire year, and dreams of eventually resurrecting woolly mammoths. He’s being featured in an upcoming Netflix Original documentary, so when he arrived at the Stata Center to give his lecture last week he was trailed by a camera crew.

According to Church, the transformative technologies that initially allowed us to read and edit DNA have grown exponentially in recent years with the invention of molecular multiplexing and CRISPR-Cas9 (think Moore’s Law but even more exaggerated). But there’s always room for improvement.
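
The “Moore’s Law but even more exaggerated” comparison comes down to doubling periods. As a toy illustration (the periods below are assumptions chosen for illustration, not measured figures):

```python
# Toy comparison of exponential growth at different doubling periods:
# ~2 years for Moore's-law transistor counts, 1 year for a
# faster-than-Moore technology such as sequencing throughput.
def fold_improvement(years: float, doubling_period: float) -> float:
    """Return the multiplicative improvement after `years`."""
    return 2 ** (years / doubling_period)

moore_decade = fold_improvement(10, 2.0)   # 2**5 = 32x in a decade
faster_decade = fold_improvement(10, 1.0)  # 2**10 = 1024x in a decade
print(moore_decade, faster_decade)
```

Halving the doubling period does not double the decade-long gain; it squares it, which is why a modestly faster exponential so quickly dwarfs Moore's-law pacing.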

“There’s been a little obsession with CRISPR-Cas9s and other CRISPRs,” said Church. “Everybody is saying how great it is, but it’s important to say what’s wrong with it as well, because that tells us where we’re going next and how to improve on it.”

He outlined several of his own collaborations, including those aimed at devising more precise methods of genome editing, one resulting in 321 changes to the Escherichia coli genome — the largest change in any genome yet — rendering the bacterium resistant to all viruses, even those it had not yet come into contact with. The next step? Making similarly widespread changes in plants, animals, and eventually perhaps even human tissue. In fact, Church and his team have set their sights on combatting the global transplantation crisis with humanlike organs grown in animals.

“Since the dawn of transplantation as a medical practice, we’ve had to use either identical twins or rare matches that are very compatible immunologically, because we couldn’t engineer the donor or the recipient,” said Church.

Since it’s clearly unethical to engineer human donors, Church reasoned, why not engineer animals with compatible organs instead? Pigs, to be exact, since most of their organs are comparable in size and function to our own.

“This is an old dream; I didn’t originate it,” said Church. “It started about 20 years ago, and the pioneers of this field worked on it for a while, but dropped it largely because the number of changes to the genome were daunting, and there was a concern that the viruses all pigs make — retroviruses — would be released and infect the immunocompromised organ recipient.”

Church and his team successfully disrupted 62 of these retroviruses in pig cells back in 2015, and in 2017 they used these cells to generate living, healthy pigs. Today, the pigs are thriving and rearing piglets of their own. Church is also considering the prospect of growing augmented organs in pigs for human transplantation, perhaps designing pathogen-, cancer-, and age-resistant organs suitable for cryopreservation.

“Hopefully we’ll be doing nonhuman primate trials within a couple of years, and then almost immediately after that human trials,” he said.

Another possibility, rather than cultivating organs in animals for transplant, is to generate them in a dish. A subset of Church’s team is working on growing from scratch what is arguably the most complicated organ of all, the brain.

This requires differentiating multiple types of cells in the same dish so they can interact with each other to form the complex systems of communication characteristic of the human brain.

Early attempts at fashioning brain organoids often lacked capillaries to distribute oxygen and nutrients (roughly one capillary for each of the 86 billion neurons in the human brain). However, thanks to their new human transcription factor library, Church and colleagues have begun to generate the cell types necessary to create such capillaries, plus the scaffolding needed to promote the three-dimensional organization of these and additional brain structures. Church and his team have not only successfully integrated the structures with one another, but have also created an algorithm that spits out the list of molecular ingredients required to generate each cell type.

Church noted these de novo organoids are extremely useful in determining which genetic variants are responsible for certain diseases. For instance, you could sequence a patient’s genome and then create an entire organoid with the mutation in question to test whether it was the root cause of the condition.

“I’m still stunned by the breadth of projects and approaches that he’s running simultaneously,” says Emma Kowal, a second-year graduate student, member of Weng’s planning committee, and a former researcher in Church’s lab. “The seminar series is called Biology at Transformative Frontiers, and George is very much a visionary, so we thought it would be a great way to start things off.”

The four-part series also features Melissa Moore, chief scientific officer of the Moderna Therapeutics mRNA Research Platform, Jay Bradner, president of the Novartis Institutes for BioMedical Research, and Patrick Brown, CEO and founder of Impossible Foods. 



from MIT News http://ift.tt/2GycTvN

China's new engagement with the developing world

Beijing’s strategy toward developing countries is the focus of “China Steps Out: Beijing’s Major Power Engagement with the Developing World,” a new book co-edited by Eric Heginbotham PhD '04, principal research scientist at the MIT Center for International Studies, and Joshua Eisenman, an assistant professor at the LBJ School of Public Affairs at the University of Texas at Austin.

The book features contributions by a diverse group of experts who independently analyze and explain China’s engagement practices in Southeast Asia, Central Asia, South Asia, Africa, the Middle East, and Latin America, and evaluate their effectiveness. In addition to writing the chapters on China in Southeast Asia and Africa, respectively, Heginbotham and Eisenman co-authored the volume’s introduction and concluding chapters.

The book unpacks and summarizes how China pursues its objectives and how other countries perceive and respond to China’s growing influence. Through the application of a comparative politics research design, they differentiate China’s approach based on each region’s economic, political, military, and social characteristics. In this way, Heginbotham and Eisenman identify the unique features of Chinese engagement in each region in addition to the developing world as a whole.

“‘China Steps Out’ tracks the important shifts in China’s diplomacy, including the increased weight Beijing places on political and economic relationships in the developing world; a more differentiated approach to those areas, including a distinction between ‘newly emerging powers’ and others; and its new interest in fostering strategic military relationships in parts of the developing world,” Heginbotham says. “These are all parts of Xi Jinping’s ‘major power relations.’”

Heginbotham was the lead author of the RAND Corporation’s “China’s Evolving Nuclear Deterrent” (2017) and “US–China Military Scorecard: Forces, Geography, and the Evolving Balance of Power” (2015), as well as co-author of “Chinese and Indian Strategic Behavior: Growing Power and Alarm” (2012).

Ambassador Paula Dobriansky, a former U.S. under secretary of state for global affairs, calls the book a “brilliant guide for policymakers and academics alike.”

“The authors masterfully detail China's strategic goals and expansive relations with the developing world through comparative regional analyses and unique insights,” she says.

Eisenman, who last week provided testimony to the U.S.-China Economic and Security Review Commission on themes central to the book, says the book is “suited to anyone trying to gain a better understanding of China’s strategic intentions in the developing world, particularly the Belt and Road Initiative and what it means for the United States and the world.”

Eisenman is the co-author of “China and Africa: A Century of Engagement” (2012). His next book, “Red China’s Green Revolution: Technological Innovation, Institutional Change, and Economic Development Under the Commune,” will be released in April 2018.

Book launch events for “​China Steps Out” are scheduled for the Royal United Services Institute in London on March 13, and the Center for Strategic and International Studies in Washington on March 22. Additional events are also being scheduled.



from MIT News http://ift.tt/2rSIDbN

Tuesday, January 30, 2018

Is Massachusetts ready for carbon pricing?

Many economists across the political spectrum agree that carbon pricing could provide a cost-effective strategy to accelerate a transition to a low-carbon economy and reduce carbon emissions that play a key role in global climate change. Drawing on their research, legislators in several states are now working to enact bills that impose a per-ton fee on carbon emitters, but it’s no easy task to win political support for such measures.

On Jan. 25, a panel at MIT explored the benefits, costs, and political challenges involved in translating carbon pricing from concept into law in Massachusetts and beyond. Hosted by the student-led MIT Climate Action Team and held at the MIT Stata Center, the panel discussion included Massachusetts state Sen. Michael Barrett and state Rep. Jennifer Benson, authors of two different carbon-pricing bills; Marc Breslow, research and policy director of the carbon-pricing research and advocacy group Climate XChange; and three experts on the topic who are affiliated with the MIT Joint Program on the Science and Policy of Global Change — Department of Urban Studies and Planning Associate Professor Janelle Knox-Hayes, Joint Program Co-director and Sloan School of Management Senior Lecturer John Reilly, and Center for Energy and Environmental Policy Research Director and MIT Sloan Professor Christopher Knittel. The panelists weighed advantages and disadvantages of carbon pricing as a climate-change solution, clarified differences between the two pending bills, and discussed political challenges faced by these bills.

Both bills would ultimately impose a $40 per-ton fee on carbon dioxide-equivalent emissions. Barrett’s bill is revenue neutral, returning 100 percent of revenue to state taxpayers and businesses; Benson’s bill, which is revenue positive, would return 80 percent of revenue to these constituents while applying the other 20 percent to clean energy projects. The intent of these rebates would be to compensate consumers for the higher prices they would pay under either bill for carbon-intensive products.
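
The difference between the two bills can be made concrete with a little arithmetic. In this sketch, the $40-per-ton fee and the 100/80-20 splits come from the bills as described above, but the emissions total is a hypothetical placeholder, not a Massachusetts figure:

```python
# Illustrative split of carbon-fee revenue under the two bills.
# The emissions total below is a hypothetical placeholder.
FEE_PER_TON = 40            # dollars per ton of CO2-equivalent
emissions_tons = 1_000_000  # hypothetical annual covered emissions

revenue = FEE_PER_TON * emissions_tons  # $40,000,000 in this example

# Barrett's bill: revenue neutral, 100% rebated to taxpayers/businesses
barrett_rebate = revenue

# Benson's bill: revenue positive, 80% rebated, 20% to clean energy
benson_rebate = 0.80 * revenue
benson_clean_energy = 0.20 * revenue

print(barrett_rebate, benson_rebate, benson_clean_energy)
```

Under either design the fee raises the same gross revenue per ton; the bills differ only in how much of it flows back to constituents versus into dedicated energy projects.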

Barrett predicted that putting a price on carbon would lead to lower consumption of such products while reducing the “social cost of carbon” — what society must pay to meet the added health care, water infrastructure, emergency management, and other expenses associated with carbon emissions. Benson maintained that it’s critical to divert a portion of revenue raised by statewide carbon pricing to fund energy efficiency, renewable energy, and climate adaptation infrastructure.

Preferring the passage of either bill to the status quo, MIT panelists viewed carbon pricing as an optimal way to lower carbon emissions.

“At this point, any carbon-pricing bill is a positive move, and both of yours sound strong,” said Reilly, who nonetheless noted some challenges that carbon pricing cannot solve on its own. “We also face a big challenge in adapting infrastructure to climate change; one way or another we’re going to have to come up with funds to do that. Then there’s the issue of public infrastructure that affects fossil emissions. If you raise the price of gasoline, you can buy a more efficient vehicle, but if there’s not an effective subway system near you, you can’t use that. So I think some of those sorts of actions [would make] a stronger case.”

Knittel favored subsidizing solar panels and electric vehicles through a progressive income tax rather than via a carbon pricing scheme. “But at the end of the day, we’d be more than happy to have either one of these bills, and a price on carbon is by far the most efficient way to reduce [carbon dioxide] emissions,” said Knittel.

One key concern among members of the audience was the potential adverse effects of a carbon-pricing bill on low-income citizens.

“The most progressive thing to do if you care about working people is to have absolute revenue neutrality,” said Barrett, who, like Knittel, argued that solar and other renewable energy programs could best be funded through a progressive income tax. “I want to make sure that 100 percent of a carbon fee goes back to working people.” Concerned that a revenue-positive carbon pricing bill would be framed by opponents as a tax, he cautioned that such a bill would be politically unviable for fellow legislators.

Benson countered that anyone opposed to a carbon-pricing bill would still call it a tax, and that Massachusetts state polling shows that over 70 percent of people polled say they are willing to pay more for energy if they know the money is going toward environmental protection or improvement. “We don’t currently have such a revenue source to go toward these areas,” said Benson, noting that less than 1 percent of the state budget addresses environmental concerns. “If we really care about the environment, we have to be willing to put money into it at the state level.”

To overcome political resistance to carbon pricing, Knox-Hayes urged legislators to consider the cultural framing of proposed bills. “What’s really important when putting together policies is to connect the language of the policy to what the local polity cares about,” said Knox-Hayes, suggesting that the carbon-pricing bills avoid the use of the word “tax,” focus on benefits, and show how these bills can generate positive outcomes that address local concerns.

The panel also explored how Massachusetts could serve as a pilot project for additional statewide, national, and international carbon-pricing measures.

“In my optimal scenario, a state like Massachusetts passes a carbon tax, which shows the rest of the country that the economy hasn’t gone into the tank,” said Knittel. “British Columbia has served as a demonstration project for Senator Barrett and Representative Benson. In 2020, the [U.S.] Congress can use either bill as a poster child to show that carbon pricing actually works.”

Reilly pointed out that mechanisms will be needed to enable carbon pricing to work across state and national borders, but ruled out the possibility of any international carbon fee set by the United Nations replacing those set by member nations.

“From an economic standpoint, we’d like to have the same price across the whole world, so we want to think about how we work toward that,” said Reilly. “The challenge there is that poorer countries would bear the same costs as richer countries, so that’s one of the reasons to think about ways we could set up transfers to assist them. It will be an issue to see how we balance things out and get closer to the ideal.”

Addressing another audience question of what MIT can do to help support passage of carbon-pricing bills in Massachusetts, Barrett acknowledged Knittel for providing technical and economic advice to him and Benson over the past four years, and MIT for convening forums such as this one.

“Over the last year, I’ve been at MIT a lot for conferences like this, and these exchanges really help,” said Barrett. “Just maintaining the dialogue is critically important.” Looking ahead, he added, “I think the Massachusetts State Senate is likely to enact carbon pricing this year. ... We need the critical involvement of university people, regardless of what school you’re from, all around Greater Boston, because we’re actually on the cusp of doing something.”



from MIT News http://ift.tt/2no4QJ8

Jing Li: Applying economics to energy technology

For the past four years, Jing Li ’11 has been studying energy technologies that could help the world move to a low-carbon future. Her expertise is technology diffusion and adoption. Fresh out of an economics PhD program at Harvard, Li says she “loves thinking about how technological progress comes about, how technology is adopted.”

She’s returning to MIT to do that and more — first as a postdoc for a year and then as an assistant professor of applied economics at the MIT Sloan School of Management. 

Her research focuses on the race to introduce better batteries into the marketplace. The availability of low-cost, high-energy-density, scalable, and safe batteries is critical in both transportation and power generation, which are two of the most polluting sectors in the energy ecosystem, Li points out. Better batteries could mean higher efficiency and lower emissions.

“We’re not quite there yet in terms of battery technology that checks all the boxes, but why not? There are many patents out there, but when do we expect to see them on the market?” she says.

Li’s training in economics allows her to examine each step as a technology progresses from the lab to the marketplace. She hopes her studies will help speed up that process.

“Energy is critical to everyday life, and low-carbon energy is critical to addressing climate change concerns,” she says. “At some point, I just started thinking about that, and I couldn’t let go.”

Li organizes her research on technology adoption around three core questions. First: Why aren’t adoption rates as high as we’d like or expect for a promising technology? Cost and pricing are sometimes the impediment, but not always. Sometimes it’s a question of infrastructure, as in the example of electric cars, which Li focused on in her dissertation. Electric cars need a reliable network of charging stations before widespread adoption is possible.

Li’s second question deals with the mysteries of technological innovation. She asks: “Is technological innovation a black box, and all we need to do is wait? Or is there scope for government policy to accelerate innovation by addressing inefficiencies?” She studies instances in which more funding for basic research could make a difference, or in which the inventions are ready but firms or consumers need a push in the form of measures such as government subsidies for the product to achieve higher levels of adoption.

The final question driving her research is: How can we meet growing energy demand in developing countries while protecting human health and the environment? Over the course of her education and the beginning of her research career, Li has explored fields from development economics to environmental economics and industrial organization.

“If we’re going to improve the lives of people in developing countries, energy consumption is going to play a big role,” she says. “But at the same time, how do we make things better for human health by alleviating pollution, improving air quality?”

With her fast-approaching professorship very much on her mind, Li has plans to take a close look at the economics curriculum at the Institute to see if there are any gaps in what’s being offered.

“There’s a history of high-quality energy economics classes at MIT,” she says. “I want to learn more about the classes that are being taught currently and bring back some of the really important parts of classes that are no longer around.”

She plans to meet with a wide range of students — from Sloan MBAs to undergraduates in engineering, science, and the humanities — to formulate a sense of which energy and economics issues they feel are most important. She’s keeping learning outside the classroom in mind, too. As an undergrad, she says she benefited immensely from the Undergraduate Research Opportunities Program (UROP), “learning a lot about the grunt work of research.” And if the right research opportunity presents itself, she says she plans to create a UROP for undergrads working in energy economics.

Li says she looks forward to the chance to give back to her alma mater.

“MIT just feels special to me in a way that I cannot even articulate,” she says. “To me, it’s nerds — in the best sense of the word — coming together to celebrate learning and knowledge.”

This article appeared in the Autumn 2017 issue of Energy Futures, the magazine of the MIT Energy Initiative. 



from MIT News http://ift.tt/2BDEEPS

Monday, January 29, 2018

Out of the lab and onto the page

When it comes to graduate student life, what happens in the lab often stays in the lab.

The MIT Graduate Student Admissions blog is trying to change that narrative, post by post. The blog grew out of an Independent Activities Period (IAP) writing workshop held in 2017, which was so successful it was offered again this year.

Lauren Stopfer, a third-year graduate student in biological engineering who attended the inaugural workshop and now serves on the blog’s editorial board, says “the blog was started to provide a window into the reality of MIT grad life,” like the “grind” of research, moments of epiphany, and even surviving winter in New England.

This inside look has attracted a strong readership of around 5,000 views per month. The blog is written for people considering graduate school, current students at MIT and elsewhere, and anyone seeking a view beyond the Infinite Corridor. Since its inception, the site has steadily evolved in terms of its range of topics, the number of posts, and even its purpose.

Stopfer says the blog became a lifeline over the past year. Having a platform was a way for her to be “a little freer” and to find her voice on topics ranging from cheap vacations to the proposed tax on graduate tuition.

Two of her fellow bloggers and editorial board members echo that sentiment. Jared Kehe, a third-year student in biological engineering, wrote a recent post expressing his love of coffee: “For our generation, coffee is practically a deity. We worship it, ritualize it, love it. We believe in coffee.” Leigh Ann Kesler, who is in her sixth year in nuclear science, posted a poignant piece on a longtime lab mate moving on. “I can see the blessings of my friendships, and how my life has been enriched by the diversity of my relationships without dwelling on the sadness that comes from parting ways,” she wrote.

A quick survey of posts reveals several common themes among MIT graduate student bloggers: food, managing failure, and figuring out the maze that is MIT (especially for those outside of the U.S.).

Diana Chien, program director of the MIT School of Engineering Communication Lab and one of the blog’s champions, says it’s been “amazing to watch the blog take hold.”

“Even better, for the workshop this year, those who started off learning how to blog are now running things and teaching tips and tricks to their fellow graduate students,” Chien says.

The IAP writing workshop is practical at heart. Much of the advice to future bloggers is broadly applicable, ranging from “just write (and then revise, revise, revise),” to understanding narrative patterns, to convincing grad students to get outside their customary academic writing comfort zones and be more conversational.

In addition to the seasoned bloggers, staff communicators from around MIT served as teachers, hands-on mentors, and editors. Anne Stuart, a communications officer in the electrical engineering and computer science department and a teacher, writer, and editor, shared her 13 go-to tips. Many of them are no-brainers, she says, but they trip up even professional writers, such as “be concise” and “aim not to be misunderstood.”

Martha Eddison, who writes extensively in her role as the special assistant to President L. Rafael Reif, shared her own trade secret: The most important word in each sentence goes at the end.

The poster advertising this year’s course promised, tongue-in-cheek, that those who participated might find fame and fortune, in addition to polishing up their writing acumen. In fact, a few of the posts have been picked up by outside media such as Times Higher Education.

Other fans of the blog are closer to home. Vice Chancellor Ian Waitz, who kicked off the idea for the graduate blog about two years ago, said that MIT’s administrative leaders often discuss particular posts.

“It helps us, as we might read about issues that are concerning to students, and ones that we might not have otherwise been aware of,” he says. “The blogs paint a portrait of the whole graduate student — beyond research, their families, their day to day.”



from MIT News http://ift.tt/2BDiFbz

Integrating the promise of photonics

How can driverless cars detect obstructions when it’s foggy outside? What new forms of light communications can supercharge the internal housekeeping of data centers to enable ever-faster cloud computing? Can we detect a gas leak along a 1,000-mile pipeline remotely, and at an ultralow cost? These were some of the questions participants investigated at an AIM Photonics Academy training session.

More than 60 people gathered at MIT on Jan. 16 for three days of lectures and design labs on integrated photonics. The program was organized by AIM Photonics Academy, which is part of AIM Photonics Institute, one of 14 Manufacturing USA institutes jointly funded with the federal government to accelerate advanced manufacturing in the United States. Attendees, mostly from industry, came from the U.S. and abroad.

Integrated photonics uses complex optical circuits to process and transmit signals of light, similar to the routing of electrical signals in a computer microchip. Students learned how to design device components and lay out photonic integrated circuits (PICs), for submission to AIM’s multiproject wafer facility in Albany, New York. They also learned about different applications for PICs, including datacom, sensors, and LIDAR for driverless cars.

Critical partnership

The technology is still emerging, and companies are looking for outside training to fill in the gaps they are unable to fill by themselves. “This partnership is critical for accelerating the adoption of photonic integrated chip technology across our enterprise,” says Nick Rhenwrick, Lockheed Martin’s AIM program manager.

The three-day AIM Winter Academy is part of a suite of AIM Academy education and training offerings. AIM Photonics Academy will post teaching packages and roll out online self-paced courses in integrated photonics that will be available for free on its website. In the spring it will begin rolling out edX courses to give students critical hands-on experience designing photonic integrated circuits.

These initiatives are geared for higher-skilled learners. Concurrently, AIM Photonics Academy is committed to introducing younger students to integrated photonics, and is working with TED-Ed to create three videos for K-12 students.

Sharing know-how widely

Education director Sajan Saini spoke about the feedback he received from students in the AIM Winter Academy. “They’re excited about the new technology, want to figure out how to deploy it, and are committed to the time and effort needed to master fabless photonics tools,” says Saini. “The time is ripe to disseminate our online and onsite teaching content as broadly as possible.”

Photonic integrated circuits have the potential to offer blockbuster solutions for driverless cars, data centers, gas sensors, and microwave communications in the coming years.

The emergence of an expert manufacturing platform and multiple applications-driven demands are the hallmarks of an extended period of industrial innovation, and integrated photonics is primed to offer high-performance and efficient solutions.



from MIT News http://ift.tt/2Emmvcm

Graduate Student Council launches inclusion initiative

The Graduate Student Council (GSC) has undertaken important diversity and inclusion efforts in recent months, with a particular focus on improving the student experience in the Institute’s academic departments.

The GSC’s new Diversity and Inclusion Subcommittee (DIS), which is led by SMArchS computation student Ty Austin, is at the center of this work. In the short time since its formation at the end of the 2016-17 academic year, the DIS has already made headway thanks in large part to its inaugural Department and Classroom Inclusion initiative (DCI). The peer-to-peer initiative’s mission is to establish student diversity representation in all of MIT’s graduate academic departments and programs through student diversity representatives called “conduits.”

The conduits are a cohort of over 30 graduate students serving 25 academic departments, and they recently came together in an assembly to discuss how to best implement diversity and inclusion programs campus-wide. The assembly gave them a platform to discuss how to share best practices among departments and to establish a permanent diversity and inclusion standard for the Institute.

“While it is not explicitly stated in MIT’s mission that the Institute is to provide a more diverse and inclusive environment, it does state we must advance technology and science that will best serve the nation and the world in the 21st century,” Austin says. “That’s impossible to do without being more equitable.”

Led by biological engineering graduate student Claire Duvallet, aeronautics and astronautics graduate student Arthur Brown, and chemical engineering graduate student German Parada, DCI’s signature program — the Conduit Assembly — took place on Nov. 15 and is slated to convene twice again in the spring semester.

Conduits from 25 graduate departments came together to talk about the current state of diversity and inclusion in their programs, and what they would like to see improved. Findings from the 2017 Student Quality of Life survey helped guide the conversation.

"I think what was really amazing was seeing 30 plus people there sharing what is happening in their departments and being so energized about coming together and all working toward the same goal,” Duvallet says. “A lot of people had a lot to say."

Several ideas came out of the assembly, including:

  • Appointing a conduit chair, or convener, for each school, thus creating five student-only diversity committees;
  • Examining ways to create more opportunities for student involvement in faculty hiring and to increase faculty-to-student mentorship;
  • Expanding the ICEO Office to include five full-time diversity managers, each responsible for undergraduate and graduate diversity affairs for their school;
  • Eventually adding more staff diversity representatives per department; and
  • Mandating diversity and inclusion workshops and departmental diversity and inclusion plans, including an accountability chart.

Civil and environmental engineering conduit and ACME member Tiziana Brown said the chart could clearly state goals for each department, progress toward meeting these goals, and the end result.

Satish Gupta, the DIS treasurer, says it is “brilliant ideas like Tiziana’s that give me a bunch of optimism about the longevity of this initiative.”

When the Conduit Assembly convenes in the spring, its members will discuss a number of issues, including departmental diversity surveys for the five departments hosting Visiting Committee meetings in the fall of 2018. DIS’s work in this area aligns with the goals of the MindHandHeart initiative, a coalition of students, faculty, and staff working to make the MIT community more healthy, welcoming, and inclusive.

Sponsored by the Office of the Chancellor and MIT Medical, MindHandHeart’s Department Support Project (MHH-DSP) is bringing together department leaders, data analysts, students, and key campus experts to share best practices and strengthen MIT’s academic climates. This spring, DIS and MindHandHeart will partner in support of advancing diversity and inclusion efforts in MIT’s academic departments.



from MIT News http://ift.tt/2FrQ0Zn

Sunday, January 28, 2018

Changing the color of 3-D printed objects

3-D printing has come a long way since the first rapid prototyping patent was rejected in 1980. The technology has evolved from basic designs to a wide range of highly customizable objects. Still, there’s a big issue: Once objects are printed, they’re final. If you need a change, you’ll need a reprint.

But imagine if that weren’t the case — if, for example, you could change the color of your smartphone case or earrings on demand.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have gotten closer to making that a reality. In a new paper, they present ColorFab, a method for repeatedly changing the colors of 3-D printed objects after fabrication.

Using their own 3-D printable ink that changes color when exposed to ultraviolet light, the team can recolor a multicolored object in just over 20 minutes — and they say they expect that number to decrease significantly with future improvements.

While the project is currently focused on plastics and other common 3-D printing materials, the researchers say that eventually people could instantly change the color of their clothes and other items.

“Largely speaking, people are consuming a lot more now than 20 years ago, and they’re creating a lot of waste,” says Stefanie Mueller, the X-Consortium Career Development Assistant Professor in the departments of Electrical Engineering and Computer Science and Mechanical Engineering. “By changing an object’s color, you don’t have to create a whole new object every time.”

Mueller co-authored the paper with postdoc Parinya Punpongsanon, undergraduate Xin Wen, and researcher David Kim. It has been accepted to the ACM CHI Conference on Human Factors in Computing Systems, which takes place in April in Montreal.

How it works

Previous color-changing systems have been somewhat limited in their capabilities, using single colors and 2-D designs, for example.

To move beyond single-color systems, the team developed a simple hardware/software workflow. First, using the ColorFab interface, users upload their 3-D model, pick their desired color patterns, and then print their fully colored object.  

After printing, changing the multicolored objects involves using ultraviolet light to activate desired colors and visible light to deactivate others. Specifically, the team uses an ultraviolet light to change the pixels on an object from transparent to colored, and a regular office projector to turn them from colored to transparent.
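The recoloring step described above amounts to a per-pixel decision: compare each pixel's current state with its target state and choose which light source to apply. The sketch below illustrates that logic only; the data model and function names are my own, not ColorFab's actual representation.

```python
# Illustrative sketch of the recoloring decision: UV light activates a pixel
# (transparent -> colored), projector light deactivates it (colored -> transparent).
# A simple boolean list stands in for the object's colored/transparent pixels.

def recolor_plan(current: list, target: list) -> list:
    """Return the light source to apply at each pixel position."""
    plan = []
    for cur, tgt in zip(current, target):
        if not cur and tgt:
            plan.append("uv")          # transparent -> colored
        elif cur and not tgt:
            plan.append("projector")   # colored -> transparent
        else:
            plan.append("none")        # already in the desired state
    return plan

print(recolor_plan([True, False, True], [True, True, False]))
# ['none', 'uv', 'projector']
```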

The team’s custom ink is made of a base dye, a photoinitiator, and light-adaptable dyes. The light-adaptable (photochromic) dyes bring out the color in the base dye, and the photoinitiator lets the base dye harden during 3-D printing.

“Appearance adaptivity in general is always a superior feature to have, and we’ve seen many other kinds of adaptivity enabled with manufactured objects,” says Changxi Zheng, an associate professor at Columbia University who co-directs Columbia’s Computer Graphics Group. “This work is a true breakthrough in being able to change the color of objects without repainting them.”

The team tested ColorFab on three criteria: recoloring time, precision, and how quickly the color decayed. A full recoloring process took 23 minutes. However, the researchers note that they could speed up the process by using a more powerful light or adding more light-adaptable dye to the ink.

They also found the colors to be a bit grainy, which they hope to improve on by activating colors closer together on an object. For example, activating blue and red might show purple, while activating red and green would show yellow.

Mueller says that the goal is for people to be able to rapidly match their accessories to their outfits in an efficient, less wasteful way. Another idea is for retail stores to be able to customize products in real-time, if, for example, a shopper wants to try on an article of clothing or accessory in a different color.

“This is the first 3-D-printable photochromic system that has a complete printing and recoloring process that’s relatively easy for users,” Punpongsanon says. “It’s a big step for 3-D printing to be able to dynamically update the printed object after fabrication in a cost-effective manner.”



from MIT News http://ift.tt/2niKEcp

Friday, January 26, 2018

Coding, thinking, sharing, building

Sharon Kipruto knew giving birth was a precarious endeavor. In her home country of Kenya, the maternal death rate is much higher than in the United States — 510 versus 23 deaths per 100,000 live births. In part, that’s because there aren’t enough doctors to meet patient demand. And without visits, women aren’t getting prenatal information that could potentially save their lives.

Kipruto realized this was a problem ripe for intervention. Instead of relying on doctor visits to disseminate information, she thought: “Why not send the information directly to the women?”

Now she’s working on a project that runs with this idea: sending informative, automated text messages. About 88 percent of people in Kenya have mobile phones, so that could be an effective way to give pregnant women information they need, when they need it, says Kipruto, a senior in the Department of Electrical Engineering and Computer Science (EECS).
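The core of such a system is a scheduler that picks the right message for a woman's stage of pregnancy. The sketch below shows one minimal way to do that; the message texts and week ranges are illustrative assumptions, not details from Kipruto's project.

```python
# Hypothetical sketch: select a prenatal health message by gestational week.
# Messages and schedule boundaries are illustrative, not from the actual project.

PRENATAL_MESSAGES = {
    range(1, 13): "First trimester: start folic acid and schedule a clinic visit.",
    range(13, 28): "Second trimester: watch for swelling and high blood pressure.",
    range(28, 41): "Third trimester: know the signs of labor and plan transport.",
}

def message_for_week(week: int) -> str:
    """Return the message matching the given gestational week."""
    for weeks, text in PRENATAL_MESSAGES.items():
        if week in weeks:
            return text
    return "Please contact your clinic for guidance."

print(message_for_week(30))
```

A real deployment would hand the selected text to an SMS gateway on a timer; the selection logic stays this simple.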

Kipruto is among 135 students participating in the 2017-2018 Advanced Undergraduate Research Opportunities Program, better known as SuperUROP. Launched by EECS in 2012, the program was later expanded to all departments in the School of Engineering. This year, for the first time, the program was open to students from the School of Humanities, Arts, and Social Sciences as well.

The SuperUROP scholars’ diverse projects include investigations to improve health, keep people better informed, and make technology more attuned to people's feelings.

“It is remarkable in how many fields the students are contributing,” says Dirk Englund, an associate professor of EECS and instructor of 6.UAR, the 12-unit seminar course that all SuperUROP students take.

Focusing on health

Many student projects focus on approaches to better treat disease. Claire Goul, a junior in EECS, for example, is investigating a tiny biomedical delivery system: DNA nanoparticles. Made of single-stranded DNA, the nanoparticles fold themselves into biological containers, which can transport therapeutic molecules into cells.

Part of maintaining human health is the ability to access and share detailed medical histories. But right now, the process isn’t very streamlined, says Kevin Liu, a senior in mathematics and EECS.

“Health care data is not really in the hands of patients. It's in the hands of doctors, hospitals, and health care insurance companies,” Liu says. “We want to be able to move this data back to patients, and let patients decide whom to share it with.”

To do that, Liu is working with blockchain technology, the system that underlies the celebrated digital currency Bitcoin. What makes the blockchain so useful is that it keeps track of transactions, and when applied to medical records, patients would be able to know who sees their data. An innovative add-on to blockchain code, a feature called smart contracts, would also allow patients to determine whom they want to share data with, as well as who has the ability to update that data. Liu is hoping to build a web interface that makes this technology easy and intuitive to use, even for people who’ve never coded before.
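The mechanism described above, an append-only ledger where grant and revoke transactions determine who may view a record, can be sketched in a few lines. Everything here (class and method names, the hash-linking scheme) is an illustrative toy, not Liu's actual code or a real blockchain platform.

```python
# Minimal sketch of patient-controlled record access on an append-only,
# hash-linked ledger. Replaying the ledger answers "who can see my data?":
# the most recent grant/revoke transaction for a (patient, viewer) pair wins.
import hashlib
import json

class RecordLedger:
    def __init__(self):
        self.chain = []  # each entry links to the previous via its hash

    def _append(self, tx):
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        payload = json.dumps(tx, sort_keys=True) + prev
        self.chain.append({"tx": tx, "prev": prev,
                           "hash": hashlib.sha256(payload.encode()).hexdigest()})

    def grant_access(self, patient, viewer):
        self._append({"op": "grant", "patient": patient, "viewer": viewer})

    def revoke_access(self, patient, viewer):
        self._append({"op": "revoke", "patient": patient, "viewer": viewer})

    def can_view(self, patient, viewer):
        allowed = False
        for block in self.chain:  # replay history in order
            tx = block["tx"]
            if tx["patient"] == patient and tx["viewer"] == viewer:
                allowed = (tx["op"] == "grant")
        return allowed

ledger = RecordLedger()
ledger.grant_access("alice", "dr_smith")
print(ledger.can_view("alice", "dr_smith"))  # True
ledger.revoke_access("alice", "dr_smith")
print(ledger.can_view("alice", "dr_smith"))  # False
```

A smart contract generalizes the `can_view` replay into code stored on the chain itself, which is roughly what makes the patient-controlled sharing Liu describes possible.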

Making information visible

Other students are looking into ways to harness information to benefit society.

Mikayla Murphy, a senior in civil and environmental engineering, is using information to hold people accountable. She’s visualizing data collected by an MIT GOV/LAB-developed machine learning pipeline, which analyzes city government websites to determine whether those governments are being transparent.

There’s reason to look. In 2010, the Los Angeles Times published an exposé on the exorbitant salaries of city administrators of Bell, California (population 38,000). Bell’s city manager was paid a whopping $800,000 per year — the nation’s highest salary for someone in that role, according to the investigation. Murphy says that practices such as publishing city budgets and meeting minutes online can help citizens keep their representatives, and their payrolls, in check.
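The intuition behind the pipeline, checking whether a city's website surfaces accountability material like budgets and meeting minutes, can be caricatured as a keyword score. This is a drastically simplified stand-in for the GOV/LAB machine-learning system, with made-up signal terms.

```python
# Toy sketch: score a city webpage for transparency signals by keyword match.
# The real GOV/LAB pipeline uses machine learning; this only conveys the idea.

TRANSPARENCY_SIGNALS = ["budget", "meeting minutes", "salaries", "public records"]

def transparency_score(page_text: str) -> float:
    """Fraction of signal terms found in the page text (0.0 to 1.0)."""
    text = page_text.lower()
    hits = sum(term in text for term in TRANSPARENCY_SIGNALS)
    return hits / len(TRANSPARENCY_SIGNALS)

sample = "City of Example: 2018 Budget, Council Meeting Minutes, Staff Salaries"
print(transparency_score(sample))  # 0.75
```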

“I've been really happy working on this project because it's something I've been interested in this entire time here at MIT: how to apply data science skills for social good,” Murphy says.

Jeremy Stroming, a senior in aeronautics and astronautics, is also working toward visualizing a better world — literally. Stroming is building a platform for visually illustrating trends in Earth's subsystems, such as oxygen levels in the oceans, melting sea ice, or changes in average surface temperature.

Stroming’s project aims to find ways to better communicate what’s happening to the Earth so users can, as he says, “have a conversation” with the planet. Not only could people better understand the planet and its systems, especially those going awry, but they could also find out about actions they can take using the platform, Stroming says. These might include recommendations for how to adjust diet, support sustainable businesses, or contact government representatives to advocate for change.

Stroming recognizes that learning about the Earth’s ills can be intimidating. He hopes to make it inviting and empowering. He has been planning a hackathon to make the portal as irresistible as possible, “so that it sucks you in, like Facebook.”

Setting moods with music

With its versatility, technology can also improve our leisure. Patrick Egbuchulam, an EECS senior, wants to enhance video game play by making the music responsive to what a player is experiencing.

Most of the time, video game music is precomposed, fixed, Egbuchulam says. Yet a person could have a totally different experience of the game, with different attendant emotions, from the first time playing to the 10th. Egbuchulam’s project is to make the soundtrack match player experience in real-time. This could include making the music slower and darker for tense, serious moments, or brighter and faster, for exciting, hopeful ones, by changing musical traits such as the melody’s tempo, mode, and key (major or minor key, for example). With this approach, he says, “the music is as unique as a game play.”
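The mapping Egbuchulam describes, from the player's moment-to-moment experience to tempo, mode, and register, can be sketched as a small function of a single "tension" value. The parameter names and thresholds below are hypothetical illustrations, not his system.

```python
# Illustrative sketch: map a game's "tension" level to musical parameters,
# per the idea above (slower/darker for tense moments, faster/brighter for
# hopeful ones). Thresholds and ranges are made up for demonstration.

def music_params(tension: float) -> dict:
    """tension in [0, 1]: 0 = calm/hopeful, 1 = tense/serious."""
    tension = max(0.0, min(1.0, tension))
    return {
        # Tense moments slow the tempo; hopeful ones speed it up.
        "tempo_bpm": round(140 - 60 * tension),
        # Shift toward the minor mode as tension rises.
        "mode": "minor" if tension > 0.5 else "major",
        # Drop the register for darker moods.
        "octave_shift": -1 if tension > 0.75 else 0,
    }

print(music_params(0.9))  # slow, minor, lowered register
print(music_params(0.1))  # fast, major
```

A game engine would recompute this every few beats from its internal state, so each playthrough gets its own score.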

As the fall term closed, SuperUROP scholars showcased their work at Proposal Pitch, a poster session, followed by the annual SuperUROP community dinner. There, they heard guest speaker Katie Rae, CEO and managing partner of The Engine, describe the challenges facing startup founders who are developing “tough technologies” — that is, breakthrough concepts that require extensive time and funding to bring to market.

“Tough-tech companies have historically been underserved and underfunded, leaving many breakthrough inventions stuck in the lab,” Rae told the students. The Engine, an MIT-backed startup incubator and accelerator launched in 2016, provides long-term capital, equipment, lab space, and other support for such companies.

SuperUROP participants are only halfway through the year-long program, but organizers say they’ve already come a long way.

“I am deeply impressed about their progress in their research projects and their ability to communicate them,” Englund says.

The scholars return to their labs and classrooms in February.



from MIT News http://ift.tt/2nf6o85

Prototypes for the new space age

What happens when 20 researchers conduct 14 projects from entirely disparate fields of research over the course of 90 minutes — while floating in zero gravity? Thrills, learning, magic — and results.

This past November, the Media Lab Space Exploration Initiative chartered a flight with the Zero Gravity Corporation to conduct experiments that relied on the unique affordances of microgravity. Projects ranged across disciplines: design, architecture, engineering, biology, music, robotics, and beyond — manifesting the Initiative’s goal of democratizing access to space. On Jan. 23, the group reassembled to share the results of their projects and celebrate the success of the first flight, at a symposium for the MIT community.

“Space used to be for a very small number of people who had to study in a particular field and train for years. But space will soon be for everyone,” says Maria Zuber, MIT’s vice president for research and one of the initiative’s principal investigators. “The Media Lab students bring a creative view, and a lot of out-of-the-box thinking. If we expose those minds to space, we’ll have the benefit of their thinking about facilitating the opening up of the space frontier.”

Rapid prototypes, rapid results

The January symposium demonstrated the remarkable diversity of research areas represented on the flight, and also underscored the far-reaching ideas behind the projects. Even by Media Lab standards it was an unusual assortment, running the gamut from peer-reviewed publications, to architectural modeling, to futuristic fashion. These researchers are imagining and prototyping for humanity's future in space, beyond the basic concerns of survival.

The researchers had only a few weeks to submit project proposals, and between two and five months to design their experiments and get them flight-ready and approved. Every proposal had to meet strict research criteria as well as stringent safety and operational standards. Each experiment had to be designed to run in only 20-30 seconds of zero gravity at a time, over the course of 90 minutes.

The results of the 14 research projects that flew are as varied as their fields of inquiry. A few highlights:

Scratch in Space: Eric Schilling, of the Media Lab’s Scratch team, spent his time in microgravity playing games designed by members of the Scratch community, ages 8-15. He recorded his efforts and compiled them into a video.

TESSERAE: The self-assembling architecture project of Ariel Ekblaw, of the Responsive Environments group, is aimed at a future need for low-cost orbiting space infrastructure. She published the results from the flight as part of a technical paper with AIAA (American Institute of Aeronautics and Astronautics) and presented the paper at their 2018 SciTech conference.

Search for Extra-Terrestrial Genomes (SETG): A project from MIT EAPS, headed by Maria Zuber, SETG is the first experiment to sequence DNA at lunar and Martian gravity. The team published a paper on their results and is now developing a life-detection device that they hope to send to Mars one day.

Orbit Weaver: Fluid Interfaces group alumna Xin Liu’s Orbit Weaver is a hand-mounted device that shoots out a line and attaches to a surface with a magnet, theoretically allowing her to move with greater control in 3-D space. It’s paired with the Orbit Weaver Suit, a custom flight suit made of reflective material that enhances the performance-art aspect of Liu’s work. The project has been featured in Vice China’s Creators Project.

“I’m incredibly proud of all of the Media Lab projects and Lab students that have contributed,” says Ariel Ekblaw, the initiative’s founder and leader. “A typical zero gravity research flight is about a year in planning. That our participants were able to design and execute their experiments in just a few months really speaks to the culture of the Media Lab, both in terms of rapid prototyping and deployment, and the student-led, grassroots enthusiasm.”

Next steps

At the symposium, Ekblaw also outlined what’s ahead for the Space Exploration Initiative:

  • Annual zero-gravity flight. The flight this past November will be the first of many; the goal is to get as many different projects from as many different research groups and areas of interest up and into zero gravity as possible.

  • Blue Origin flight, summer 2018. Six projects will be selected as payloads for a suborbital flight, allowing for more extended periods of microgravity.

  • International Space Station, winter 2019. One to three payloads will board the ISS, allowing for consistent zero gravity conditions.

Beyond the Cradle

In just a few weeks, on March 10, the Space Exploration Initiative will host its second Beyond the Cradle event, a gathering of students, scholars, and luminaries including astronauts, industry leaders, science fiction visionaries, and researchers. The event will be livestreamed; all are invited to watch and engage in imagining our space future.



from MIT News http://ift.tt/2ng4G7k

After 16 years as heads of house, Anne and Bill McCants to step down from Burton Conner

Following a 16-year head-of-house career that spanned three decades and two residence halls, Professor Anne E. C. McCants and her husband Bill have announced that they will step down from their post in Burton Conner House (BC) at the end of this academic year.

In an email to all heads of house earlier this month, Professor McCants shared that she is “starting a three-year term as the president of the International Economic History Association, a position which I realize is going to require a lot more travel of me than is feasible while serving as head of a residence hall as large and complex as BC.”

Vice President and Dean for Student Life Suzy Nelson offered praise for the McCantses’ work as heads of house. “Their experience and perspective have been a great support to me and the entire head of house community, and their commitment to the students of BC will serve as an example for future heads of house to emulate,” says Nelson.

McCants is director of the Concourse program for first-year students and a professor of history in the School of Humanities, Arts, and Social Sciences (SHASS). Her research and teaching focus on the social and economic history of Europe in the Middle Ages and early modern period. She was named a MacVicar Faculty Fellow in 2004 and is a recipient of numerous honors including the Levitan Prize to support innovative and creative scholarship in SHASS. Also, she has twice won the Arthur C. Smith Award for exemplary service to undergraduate life and learning.

Anne and Bill McCants first became heads of house in Green Hall (W5) when it was a residence for women graduate students. They supported the community from 1992 to 2002, a period they recall as “wonderful.” After Professor McCants served consecutive terms as head of History at MIT, the couple found themselves missing the direct student interaction they enjoyed in Green Hall and were inspired to join BC in 2012.

Since BC is a cook-for-yourself community, food is a consistent theme in their reflections on the last six years. In an email, the McCantses said, “Every year, we have invited each of the nine floors to the head of house apartment for a home-cooked dinner. Attendance has been high and enthusiastic over all six years.” They particularly remember BC’s annual apple bake event as a showcase for the community’s “creativity, collaboration, and generosity at its best. Great food, fun, and art.”

“Anne and Bill McCants cared deeply about student well-being,” says junior Katie Fisher, the BC president. “Burton Conner residents will especially remember them for hosting floor dinners and a finals study break in their apartment, as well as their brownie recipe. This dorm will not be the same without them.”

Burton Conner House (W51) is located at 410 Memorial Drive in Cambridge, Massachusetts. It was opened in 1939 and houses more than 350 undergraduates. According to its website, BC “consists of nine floors — five on the Burton side, four on the Conner side — each of which has its own unique personality.” The floors are made up of suites — mostly coed with between four and nine residents each — that contain a bathroom and a kitchen. House amenities include lounges and conference rooms, music rooms, a snack bar, recreational table games, a weight room, barbecue pits, and elevators.

But for the McCantses, BC is about much more than the building and its contents. “This role has afforded us opportunities for one-to-one interactions with students in times of both joy and crisis, challenge and repose, that are truly unforgettable,” they wrote.

Those interested in becoming a head of house should email Judy Robinson, senior associate dean for residential education, for more information. The search process will kick off with an informal reception on Monday, Feb. 12, at 7 p.m. in BC for interested tenured faculty. Potential candidates will be able to meet current heads of house and staff to discuss this singular opportunity. A search committee of current heads of house, staff, and students will review candidate qualifications, vet potential finalists with BC residents, and make recommendations to Chancellor Cynthia Barnhart and Dean Nelson. The final selection will be made by Barnhart in time for the appointees to relocate to their new home before the fall term.

Please email Kaye Gaskins to RSVP for the reception by March 9. Those who cannot attend but would still like to apply should email a current CV and cover letter to Robinson explaining why they would like to be BC’s head of house.



from MIT News http://ift.tt/2DClYBS

New study reveals how brain waves control working memory

MIT neuroscientists have found evidence that the brain’s ability to control what it’s thinking about relies on low-frequency brain waves known as beta rhythms.

In a memory task requiring information to be held in working memory for short periods of time, the MIT team found that the brain uses beta waves to consciously switch between different pieces of information. The findings support the researchers’ hypothesis that beta rhythms act as a gate that determines when information held in working memory is either read out or cleared out so we can think about something else.  

“The beta rhythm acts like a brake, controlling when to express information held in working memory and allow it to influence behavior,” says Mikael Lundqvist, a postdoc at MIT’s Picower Institute for Learning and Memory and the lead author of the study.

Earl Miller, the Picower Professor of Neuroscience at the Picower Institute and in the Department of Brain and Cognitive Sciences, is the senior author of the study, which appears in the Jan. 26 issue of Nature Communications.

Working in rhythm

There are millions of neurons in the brain, and each neuron produces its own electrical signals. These combined signals generate oscillations known as brain waves, which vary in frequency. In a 2016 study, Miller and Lundqvist found that gamma rhythms are associated with encoding and retrieving sensory information.

They also found that when gamma rhythms went up, beta rhythms went down, and vice versa. Previous work in their lab had shown that beta rhythms are associated with “top-down” information such as what the current goal is, how to achieve it, and what the rules of the task are.

All of this evidence led them to theorize that beta rhythms act as a control mechanism that determines what pieces of information are allowed to be read out from working memory — the brain function that allows control over conscious thought, Miller says.

“Working memory is the sketchpad of consciousness, and it is under our control. We choose what to think about,” he says. “You choose when to clear out working memory and choose when to forget about things. You can hold things in mind and wait to make a decision until you have more information.”

To test this hypothesis, the researchers recorded brain activity from the prefrontal cortex, which is the seat of working memory, in animals trained to perform a working memory task. The animals first saw one pair of objects, for example, A followed by B. Then they were shown a different pair and had to determine if it matched the first pair. A followed by B would be a match, but not B followed by A, or A followed by C. After this entire sequence, the animals released a bar if they determined that the two sequences matched.

The researchers found that brain activity varied depending on whether the two pairs matched or not. As an animal anticipated the beginning of the second sequence, it held the memory of object A, represented by gamma waves. If the next object seen was indeed A, beta waves then went up, which the researchers believe clears object A from working memory. Gamma waves then went up again, but this time the brain switched to holding information about object B, as this was now the relevant information to determine if the sequence matched.

However, if the first object shown was not a match for A, beta waves went way up, completely clearing out working memory, because the animal already knew that the sequence as a whole could not be a match.

“The interplay between beta and gamma acts exactly as you would expect a volitional control mechanism to act,” Miller says. “Beta is acting like a signal that gates access to working memory. It clears out working memory, and can act as a switch from one thought or item to another.”

A new model

Previous models of working memory proposed that information is held in mind by steady neuronal firing. The new study, in combination with their earlier work, supports the researchers’ new hypothesis that working memory is supported by brief episodes of spiking, which are controlled by beta rhythms.

“When we hold things in working memory (i.e. hold something ‘in mind’), we have the feeling that they are stable, like a light bulb that we’ve turned on to represent some thought. For a long time, neuroscientists have thought that this must mean that the way the brain represents these thoughts is through constant activity. This study shows that this isn’t the case — rather, our memories are blinking in and out of existence. Furthermore, each time a memory blinks on, it is riding on top of a wave of activity in the brain,” says Tim Buschman, an assistant professor of psychology at Princeton University who was not involved in the study.

Two other recent papers from Miller’s lab offer additional evidence for beta as a cognitive control mechanism.

In a study that recently appeared in the journal Neuron, they found similar patterns of interaction between beta and gamma rhythms in a different task involving assigning patterns of dots into categories. In cases where two patterns were easy to distinguish, gamma rhythms, carrying visual information, predominated during the identification. If the distinction task was more difficult, beta rhythms, carrying information about past experience with the categories, predominated.

In a recent paper published in the Proceedings of the National Academy of Sciences, Miller’s lab found that beta waves are produced by deep layers of the prefrontal cortex, and gamma rhythms are produced by superficial layers, which process sensory information. They also found that the beta waves were controlling the interaction of the two types of rhythms.

“When you find that kind of anatomical segregation and it’s in the infrastructure where you expect it to be, that adds a lot of weight to our hypothesis,” Miller says.

The researchers are now studying whether these types of rhythms control other brain functions such as attention. They also hope to study whether the interaction of beta and gamma rhythms explains why it is so difficult to hold more than a few pieces of information in mind at once.

“Eventually we’d like to see how these rhythms explain the limited capacity of working memory, why we can only hold a few thoughts in mind simultaneously, and what happens when you exceed capacity,” Miller says. “You have to have a mechanism that compensates for the fact that you overload your working memory and make decisions on which things are more important than others.”

The research was funded by the National Institute of Mental Health, the Office of Naval Research, and the Picower JFDP Fellowship.



from MIT News http://ift.tt/2GiwDTW

Thursday, January 25, 2018

Startup makes labs smarter

Although Internet-connected “smart” devices have in recent years penetrated numerous industries and private homes, the technological phenomenon has left the research lab largely untouched. Spreadsheets, individual software programs, and even pens and paper remain standard tools for recording and sharing data in academic and industry labs.

TetraScience, co-founded by Spin Wang SM ’15, a graduate of electrical engineering and computer science, has developed a data-integration platform that connects disparate types of lab equipment and software systems, in-house and at outsourced drug developers and manufacturers. It then unites the data from all these sources in the cloud for speedier and more accurate research, cost savings, and other benefits.

“Software and hardware systems [in labs] cannot communicate with each other in a consistent way,” says Wang, TetraScience’s chief technology officer, who co-founded the startup with former Harvard University postdocs Salvatore Savo and Alok Tayi. “Data flows through systems in a very fragmented manner and there are a lot of siloed data sets [created] in the life sciences. Humans must manually copy and paste information or write it down on paper, [which] is a lengthy manual process that’s error prone.”

TetraScience has developed an Internet of Things (IoT) hub that plugs into most lab equipment, including freezers, ovens, incubators, scales, pH meters, syringe pumps, and autoclaves. The hub can also continuously collect relevant data — such as humidity, temperature, gas concentration and oxygen levels, vibration, light intensity, and mass air flow — and shoot it to TetraScience’s centralized data-integration platform in the cloud. TetraScience also has custom integration methods for more complicated instruments and software.

In the cloud dashboard, researchers can monitor equipment in real time and set alerts if any equipment deviates from ideal conditions. Data appears as charts, graphs, percentages, and numbers — somewhat resembling the easily readable Google Analytics dashboard. Equipment can be tracked for usage and efficiency over time to determine if, say, a freezer is slowly warming and compromising samples. Researchers can also comb through scores of archived data, all located in one place.
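The deviation alerts described above boil down to range checks on incoming sensor readings. The sketch below shows the general idea; the threshold values, field names, and function are illustrative assumptions, not TetraScience's actual API:

```python
# Hypothetical deviation check for a lab freezer: flag any reading outside
# its allowed range. Limits here are made-up examples.

FREEZER_LIMITS = {"temperature_c": (-85.0, -70.0), "humidity_pct": (0.0, 60.0)}

def check_reading(reading, limits=FREEZER_LIMITS):
    """Return a list of (field, value) pairs that fall outside their range."""
    alerts = []
    for field, (low, high) in limits.items():
        value = reading.get(field)
        if value is not None and not (low <= value <= high):
            alerts.append((field, value))
    return alerts

# A freezer slowly warming past its -70 C ceiling triggers an alert.
print(check_reading({"temperature_c": -68.2, "humidity_pct": 41.0}))
# [('temperature_c', -68.2)]
```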

“Our technology is establishing a ‘data highway’ system between different entities, software and hardware, within life sciences labs. We make facilitating data seamless, faster, more accurate, and more efficient,” says Wang, who was named to this year’s Forbes 30 Under 30 list of innovators for his work with TetraScience.

More than 70 major pharmaceutical and biotech firms, including many in Cambridge, Massachusetts, use the platform. Numerous labs at MIT and Harvard are users, as well.

Pain in the lab

For Wang and his TetraScience co-founders, building their smart solution was personal.

As a Cornell University undergraduate, Wang worked in the Cornell Semiconducting RF Lab on high-energy physics research. Frustrated by the time and effort required to manually record data, he developed his own system that connected and controlled more than 10 instruments, such as a signal generator, power meter, frequency counter, and power amplifier.

Years later, as an MIT master’s student studying microelectromechanical systems, Wang worked on sensing technologies and processing of radio frequency signals under the guidance of Professor Dana Weinstein, now at Purdue University. During his final year, he wound up at the MIT Media Lab, working on a 3-D printing project with Tayi, who had spent his academic career toiling away in materials science, chemistry, and other labs. Tayi and Savo were already conducting market research around potential opportunities for IoT in labs.

All three bonded over a shared dislike for data-collecting tools that have remained relatively unchanged in labs for a half-century. “We felt the pain of manually tracking data and not having a consistent interface for all our equipment,” Wang says.

This is especially troublesome at scale. Large pharmaceutical or biotechnology firms, for instance, can have hundreds or thousands of instruments, all with different hardware running on different software. Humans must record data and input it manually into dozens of separate recording systems, which leads to errors. People also must be physically in a lab to control experiments. Smart labs were the new frontier, Wang, Savo, and Tayi agreed.

In 2014, the three launched TetraScience to build a platform that connected equipment and pooled data into a single place in the cloud — similar to the one Wang created at Cornell, but more advanced. Back then, they used a slightly modified Raspberry Pi as their “hub,” while they refined their software and hardware.

For early-stage startup advice, the startup turned to the Industrial Liaison Program and MIT’s Venture Mentoring Service, and leveraged MIT’s vast alumni network for feedback on their technology and business plan. “We definitely benefited from MIT,” Wang says.

Saving time and money

An early trial for the platform was with the Media Lab, where researchers used it to monitor not equipment, but beehives. The researchers were studying how hives could be integrated into building infrastructure and how design and materials could promote bee health. As bees are sensitive to changes in environment, the researchers needed to constantly monitor temperature and humidity around hives over several months, which would be challenging if done manually.

Using TetraScience’s platform, the researchers captured all the necessary data for their project without suiting up and approaching the hives daily — saving “hundreds of hours … and 686 bee stings,” according to the startup. Testing at MIT, Wang says, “helped us gain an understanding of the industry and value proposition.”

From there, the TetraScience platform found its way into more biotech companies and into more than 60 percent of the world’s top 20 pharmaceutical companies, according to the startup. Benefits of today’s TetraScience platform include speeding up research, improving compliance, producing better-quality data and, ultimately, saving millions of dollars and countless hours of work, Wang says.

Numerous case studies, listed on the startup’s webpage, showcase the platform’s efficacy and value at major pharmaceutical firms and cancer research centers, and at Harvard and MIT.

For example, in the final stages of approval of a multibillion-dollar drug, a large pharmaceutical firm conducted an accelerated lifetime test, where any prolonged deviation from preset conditions would require restarting the experiment, at the cost of millions of dollars, weeks of unusable data, and delayed commercialization. Within a few weeks of the test’s conclusion, a major deviation in one experiment occurred late at night. Within seconds, according to the study, TetraScience’s platform detected the deviation and alerted scientists, who caught it immediately, stopping any significant damage.

The platform also offers benefits for determining equipment efficiency and usage. In a 2017 case study with another pharmaceutical firm, TetraScience monitored 70 pieces of equipment. The startup flagged 23 instruments as “heavily underused.” The firm used that data to reduce service contracts for 14 instruments and sell nine instruments, leading to improved efficiency and hundreds of thousands of dollars in savings that could be put toward more research and development. 

Although the startup’s focus is on pharmaceutical and biotechnology industries, the platform could also be used in oil and gas, brewing, and food and chemistry industries to see similar benefits. “Those industries all use similar instruments [as life science labs] and produce the same kind of data, such as monitoring the pH of beer, so we will get into those industries in the future,” Wang says.



from MIT News http://ift.tt/2rJjwrU

Study: Distinct brain rhythms and regions help us reason about categories

We categorize pretty much everything we see, and remarkably, we often achieve that feat whether the items look patently similar — such as Fuji and McIntosh apples — or they share a more abstract similarity — such as a screwdriver and a drill. A new study at MIT’s Picower Institute for Learning and Memory explains how.

“Categorization is a fundamental cognitive mechanism,” says Earl Miller, the Picower Professor in MIT’s Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences. “It’s the way the brain learns to generalize. If your brain didn’t have this ability, you’d be overwhelmed by details of the sensory world. Every time you experienced something, if it was in different lighting or at a different angle, your brain would treat it as a brand new thing.”

In the new paper in Neuron, Miller’s lab, led by postdoc Andreas Wutz and graduate student Roman Loonis, shows that the ability to categorize based on straightforward resemblance or on abstract similarity arises from the brain’s use of distinct rhythms, at distinct times, in distinct parts of the prefrontal cortex (PFC). Specifically, when animals needed to match images that bore close resemblance, an increase in the power of high-frequency gamma rhythms in the ventral lateral PFC did the trick. When they had to match images based on a more abstract similarity, that depended on a later surge of lower-frequency beta rhythms in the dorsal lateral PFC.

Miller says those findings suggest a model of how the brain achieves category abstractions. It shows that meeting the challenge of abstraction is not merely a matter of thinking the same way but harder. Instead, a different mechanism in a different part of the brain takes over when simple, sensory comparison is not enough for us to judge whether two things belong to the same category.

By precisely describing the frequencies, locations, and the timing of rhythms that govern categorization, the findings, if replicated in humans, could prove helpful in research to understand an aspect of some autism spectrum disorders (ASD), says Miller. In ASD, categorization can be challenging for patients, especially when objects or faces appear atypical. Potentially, clinicians could measure rhythms to determine whether patients who struggle to recognize abstract similarities are employing the mechanisms differently.

Connecting the dots

To conduct the study, Wutz, Loonis, Miller, and their co-authors measured brain rhythms in key areas of the PFC associated with categorization as animals played some on-screen games. In each round, animals would see a pattern of dots — a sample from one of two different categories of configurations. Then the sample would disappear and after a delay, two choices of dot designs would appear. The subject’s task was to fix its gaze on whichever one belonged to the same category as the sample. Sometimes the right answer was evident by sheer visual resemblance, but sometimes the similarity was based on a more abstract criterion the animal could infer over successive trials. The experimenters precisely quantified the degree of abstraction based on geometric calculations of the distortion of the dot pattern compared to a category archetype.
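The article says distortion was quantified geometrically against a category archetype. One simple metric of that kind (a hypothetical stand-in, not the authors' actual calculation) is the root-mean-square displacement of each dot from its archetype position:

```python
# Hypothetical distortion metric: RMS distance between corresponding dots
# in a test pattern and its category archetype. Larger values mean the
# pattern is a more abstract (more distorted) member of the category.
import math

def rms_distortion(pattern, archetype):
    """RMS distance between corresponding dots in two equal-length patterns."""
    assert len(pattern) == len(archetype)
    sq = [(px - ax) ** 2 + (py - ay) ** 2
          for (px, py), (ax, ay) in zip(pattern, archetype)]
    return math.sqrt(sum(sq) / len(sq))

archetype = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
slightly_distorted = [(0.1, 0.0), (1.0, 0.1), (0.0, 0.9)]
print(round(rms_distortion(slightly_distorted, archetype), 3))  # 0.1
```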

“This study was very well-defined,” says Wutz. “It provided a mathematically correct way to distinguish something so vague as abstraction. It’s a judgment call very often, but not with the paradigm that we used.”

Gamma in the ventral PFC always peaked in power when the sample appeared, as if the animals were making a “Does this sample look like category A or not?” assessment as soon as they were shown it. Beta power in the dorsal PFC peaked during the subsequent delay period when abstraction was required, as if the animals realized that there wasn’t enough visual resemblance and deeper thought would be necessary to make the upcoming choice.

Notably, the data was rich enough to reveal several nuances about what was going on. Category information and rhythm power were so closely associated, for example, that the researchers measured greater rhythm power in advance of correct category judgments than in advance of incorrect ones. They also found that the role of beta power was not based on the difficulty of choosing a category (i.e., how similar the choices were) but specifically on whether the correct answer had a more abstract or literal similarity to the sample.

By analyzing the rhythm measurements, the researchers could even determine how the animals were approaching the categorization task. They weren’t judging whether a sample belonged to one category or the other, says Wutz. Instead, they were judging whether it belonged to a preferred category or not.

“That preference was reflected in the brain rhythms,” says Wutz. “We saw the strongest effects for each animal’s preferred category.”

Tim Buschman, assistant professor in the Princeton Neuroscience Institute and Department of Psychology at Princeton University, says the study helps to explain a crucial aspect of the brain’s ability to generalize: flexibility.

“Once we see one dog bark, we instantly know that all dogs bark. However, there is a right amount to generalize; we don’t want to learn that all four-legged mammals bark,” says Buschman. “The current manuscript provides insight into how the brain flexibly modulates how much we should generalize — a little (all dogs bark) or a lot (all mammals have hair). The study provides new insight into how the brain flexibly switches between two different modes — there is a ‘bottom-up’ mode that is rooted in the more concrete representations of our senses, allowing for a little generalization; and a ‘top-down’ mode that uses higher-order brain regions to generalize more broadly.

“This study is an important first step in understanding how the brain generalizes knowledge and lays the groundwork for understanding cognitive conditions, such as autism, that impair one’s ability to generalize,” says Buschman. 

The National Institute of Mental Health funded the study, which was co-authored by graduate student Jacob Donoghue and research scientist Jefferson Roy.



from MIT News http://ift.tt/2E8rP2X

Wednesday, January 24, 2018

Play Labs startup accelerator announces second annual open call for submissions

Play Labs and the MIT Game Lab have announced that applications are now open for the second batch of startups within the playful technology accelerator, which will run from June through August 2018 on campus at MIT in Cambridge, Massachusetts. Startups that are accepted into Play Labs will each receive an initial investment of $20,000 in either cash or Bitcoin in return for common stock. Startups that graduate from the program and meet certain criteria will be eligible for up to $80,000 in additional funding from the Play Labs Fund and its investment partners.

Applications are due March 15, 2018, after which finalists will be selected and a subset of those finalists will be given offers to participate in the program. Applications are open both to MIT-affiliated startups and to startups with no MIT affiliation that wish to come to MIT for the summer to participate.

Play Labs provides mentoring, facilities, and funding for early-stage startups that utilize “playful technology.” The areas of technology for this second batch of incubated startups include:

  • Digital Currency/Blockchain: The explosion of digital currencies like Bitcoin and the underlying technology, blockchain, has created a new virtual economy and opportunities for decentralizing many industries.
  • E-sports/Video Games: Video games have moved into the competitive era, and e-sports is seen as one of the biggest opportunities for expansion.
  • Virtual Reality (VR)/Augmented Reality (AR): A big focus of Play Labs’ first batch of incubated startups, VR and AR continue to evolve and will revolutionize many industries.
  • Artificial Intelligence/Machine Learning: Artificial intelligence and machine learning software and hardware (e.g., robotics) have advanced to the point of many practical applications.

Candidate startups may apply these technologies to any industry, including video games, e-sports, finance, healthcare, manufacturing, and more.

As before, the program will be run by Bayview Labs and its executive director, Rizwan Virk ’92, a prolific Silicon Valley angel investor, advisor, and mentor. Virk and Bayview have been early investors in Bitcoin and blockchain startups, as well as a long list of successful gaming-related tech startups including Tapjoy, Discord, Funzio, Pocket Gems, Telltale Games, and Sliver.tv.

“MIT has been the starting point for many successful startups over the years,” says Virk. “We had a successful first batch and we are excited to see what exciting technology projects MIT students, alumni, and the greater community will come up with for this second batch. We started the accelerator because a lot of focus for these areas has been on the West Coast, but I believe that the ecosystem around MIT and Boston has great talent and startup ideas in these areas.”

“When I graduated from MIT and thought of doing my first startup, I wish I had this kind of accelerator program, with support from both MIT staff and industry entrepreneurs and mentors,” says Virk. “That’s why I designed the program in this way.”

Bayview will run Play Labs in conjunction with the Seraph Group, a seed-stage venture capital investment firm founded by Tuff Yen. The teams will be supported by a group of successful mentors and partners, including Rajeev Surati PhD ’99, co-founder of Flash Communications, Photo.net, and Scalable Display Technologies, the last of which is based on his PhD research at MIT. Also participating is VR@MIT, a student organization on campus dedicated to fostering VR and AR entrepreneurship at MIT.

The MIT Game Lab, a research group in MIT's Comparative Media Studies/Writing program, and Ludus, the MIT Center for Games, Learning, and Playful Media, will host and conduct the educational program for Play Labs. Teams will be given workspace on the MIT campus for the duration of the program.

“MIT students thrive on innovation and creative exploration,” says Scot Osterweil, managing director for Ludus. “We are pleased that through Play Labs we will help them move their most imaginative ideas into the realm of the possible.”

“We see tremendous opportunity to invest, support, and partner with the MIT community of outstanding people, which is why we are supporting Play Labs’ second batch,” says Tuff Yen, president of Seraph Group. “Our network of successful investors will bring valuable experience, access and resources to startups.”

Full information on the program, eligibility, and benefits can be found on the Play Labs website.



from MIT News http://ift.tt/2E5wqmo

Novel methods of synthesizing quantum dot materials

For quantum dot (QD) materials to perform well in devices such as solar cells, the nanoscale crystals in them need to pack together tightly so that electrons can hop easily from one dot to the next and flow out as current. MIT researchers have now made QD films in which the dots vary by just one atom in diameter and are organized into solid lattices with unprecedented order. Subsequent processing pulls the QDs in the film closer together, further easing the electrons’ pathway. Tests using an ultrafast laser confirm that the energy levels of vacancies in adjacent QDs are so similar that hopping electrons don’t get stuck in low-energy dots along the way.

Taken together, the results suggest a new direction for ongoing efforts to develop these promising materials for high performance in electronic and optical devices.

In recent decades, much research attention has focused on electronic materials made of quantum dots, which are tiny crystals of semiconducting materials a few nanometers in diameter. After three decades of research, QDs are now being used in TV displays, where they emit bright light in vivid colors that can be fine-tuned by changing the sizes of the nanoparticles. But many opportunities remain for taking advantage of these remarkable materials.

“QDs are a really promising underlying materials technology for energy applications,” says William Tisdale, the ARCO Career Development Professor in Energy Studies and an associate professor of chemical engineering.

QD materials pique his interest for several reasons. QDs are easily synthesized in a solvent at low temperatures using standard procedures. The QD-bearing solvent can then be deposited on a surface — small or large, rigid or flexible — and as it dries, the QDs are left behind as a solid. Best of all, the electronic and optical properties of that solid can be controlled by tuning the QDs.

“With QDs, you have all these degrees of freedom,” says Tisdale. “You can change their composition, size, shape, and surface chemistry to fabricate a material that’s tailored for your application.”

The ability to adjust electron behavior to suit specific devices is of particular interest. For example, in solar photovoltaics (PVs), electrons should pick up energy from sunlight and then move rapidly through the material and out as current before they lose their excess energy. In light-emitting diodes (LEDs), high-energy “excited” electrons should relax on cue, emitting their extra energy as light.

With thermoelectric (TE) devices, QD materials could be a game-changer. When TE materials are hotter on one side than the other, they generate electricity. So TE devices could turn waste heat in car engines, industrial equipment, and other sources into power — without combustion or moving parts. The TE effect has been known for a century, but devices using TE materials have remained inefficient. The problem: While those materials conduct electricity well, they also conduct heat well, so the temperatures of the two ends of a device quickly equalize. In most materials, measures to decrease heat flow also decrease electron flow.

“With QDs, we can control those two properties separately,” says Tisdale. “So we can simultaneously engineer our material so it’s good at transferring electrical charge but bad at transporting heat.”
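The trade-off Tisdale describes is conventionally summarized by the thermoelectric figure of merit, ZT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity, and T the temperature. The formula is standard, though it does not appear in the article; the numbers below are illustrative only:

```python
# Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa (dimensionless).
# Example values are illustrative, not measured QD properties.

def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_mk, temp_k):
    """Dimensionless thermoelectric figure of merit ZT."""
    return seebeck_v_per_k ** 2 * sigma_s_per_m * temp_k / kappa_w_per_mk

# Halving the thermal conductivity doubles ZT while leaving charge
# transport untouched -- the decoupling Tisdale describes.
base = figure_of_merit(200e-6, 1e5, 2.0, 300)       # ZT = 0.6
low_kappa = figure_of_merit(200e-6, 1e5, 1.0, 300)  # ZT = 1.2
print(base, low_kappa)
```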

Making good arrays

One challenge in working with QDs has been to make particles that are all the same size and shape. During QD synthesis, quadrillions of nanocrystals are deposited onto a surface, where they self-assemble in an orderly fashion as they dry. If the individual QDs aren’t all exactly the same, they can’t pack together tightly, and electrons won’t move easily from one nanocrystal to the next.

Three years ago, a team in Tisdale’s lab led by Mark Weidman PhD ’16 demonstrated a way to reduce that structural disorder. In a series of experiments with lead-sulfide QDs, team members found that carefully selecting the ratio between the lead and sulfur in the starting materials would produce QDs of uniform size.

“As those nanocrystals dry, they self-assemble into a beautifully ordered arrangement we call a superlattice,” Tisdale says.

Scanning electron microscope images of those superlattices, taken from several angles, show lined-up, 5-nanometer-diameter nanocrystals throughout the samples and confirm the long-range ordering of the QDs.

For a closer examination of their materials, Weidman performed a series of X-ray scattering experiments at the National Synchrotron Light Source at Brookhaven National Laboratory. Data from those experiments showed both how the QDs are positioned relative to one another and how they’re oriented, that is, whether they’re all facing the same way. The results confirmed that QDs in the superlattices are well ordered and essentially all the same.

“On average, the difference in diameter between one nanocrystal and another was less than the size of one more atom added to the surface,” says Tisdale. “So these QDs have unprecedented monodispersity, and they exhibit structural behavior that we hadn’t seen previously because no one could make QDs this monodisperse.”

Controlling electron hopping

The researchers next focused on how to tailor their monodisperse QD materials for efficient transfer of electrical current. “In a PV or TE device made of QDs, the electrons need to be able to hop effortlessly from one dot to the next and then do that many thousands of times as they make their way to the metal electrode,” Tisdale explains.

One way to influence hopping is by controlling the spacing from one QD to the next. A single QD consists of a core of semiconducting material — in this work, lead sulfide — with chemically bound arms, or ligands, made of organic (carbon-containing) molecules radiating outward. The ligands play a critical role — without them, as the QDs form in solution, they’d stick together and drop out as a solid clump. Once the QD layer is dry, the ligands end up as solid spacers that determine how far apart the nanocrystals are.

A standard ligand material used in QD synthesis is oleic acid. Given the length of an oleic acid ligand, the QDs in the dry superlattice end up about 2.6 nanometers apart — and that’s a problem.

“That may sound like a small distance, but it’s not,” says Tisdale. “It’s way too big for a hopping electron to get across.”

Using shorter ligands in the starting solution would reduce that distance, but they wouldn’t keep the QDs from sticking together when they’re in solution. “So we needed to swap out the long oleic acid ligands in our solid materials for something shorter” after the film formed, Tisdale says.

To achieve that replacement, the researchers use a process called ligand exchange. First, they prepare a mixture of a shorter ligand and an organic solvent that will dissolve oleic acid but not the lead sulfide QDs. They then submerge the QD film in that mixture for 24 hours. During that time, the oleic acid ligands dissolve, and the new, shorter ligands take their place, pulling the QDs closer together. The solvent and oleic acid are then rinsed off.

Tests with various ligands confirmed their impact on interparticle spacing. Depending on the length of the selected ligand, the researchers could reduce that spacing from the original 2.6 nanometers with oleic acid all the way down to 0.4 nanometers. However, while the resulting films have beautifully ordered regions — perfect for fundamental studies — inserting the shorter ligands tends to generate cracks as the overall volume of the QD sample shrinks.

Energetic alignment of nanocrystals

One result of that work came as a surprise: Ligands known to yield high performance in lead-sulfide-based solar cells didn’t produce the shortest interparticle spacing in their tests.

“Reducing that spacing to get good conductivity is necessary,” says Tisdale. “But there may be other aspects of our QD material that we need to optimize to facilitate electron transfer.”

One possibility is a mismatch between the energy levels of the electrons in adjacent QDs. In any material, electrons exist at only two energy levels — a low ground state and a high excited state. If an electron in a QD film receives extra energy — say, from incoming sunlight — it can jump up to its excited state and move through the material until it finds a low-energy opening left behind by another traveling electron. It then drops down to its ground state, releasing its excess energy as heat or light.

In solid crystals, those two energy levels are a fixed characteristic of the material itself. But in QDs, they vary with particle size. Make a QD smaller and the energy level of its excited electrons increases. Again, variability in QD size can create problems. Once excited, a high-energy electron in a small QD will hop from dot to dot — until it comes to a large, low-energy QD.

“Excited electrons like going downhill more than they like going uphill, so they tend to hang out on the low-energy dots,” says Tisdale. “If there’s then a high-energy dot in the way, it takes them a long time to get past that bottleneck.”
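The size dependence behind this "downhill" behavior can be illustrated with a simple particle-in-a-box model, in which confinement energy scales as 1/d². This is a textbook approximation chosen here for illustration (the article states only the qualitative trend), and it uses the free-electron mass where a real QD calculation would use an effective mass:

```python
# Back-of-the-envelope particle-in-a-box estimate of why smaller dots sit
# at higher energies. Free-electron mass is a simplifying assumption.
import math

HBAR = 1.054571817e-34   # J*s, reduced Planck constant
M_E = 9.1093837015e-31   # kg, free-electron mass
EV = 1.602176634e-19     # J per eV

def confinement_energy_ev(diameter_nm):
    """Ground-state energy of a particle in a 3D cubic box of side d, in eV."""
    d = diameter_nm * 1e-9
    return 3 * (HBAR * math.pi) ** 2 / (2 * M_E * d ** 2) / EV

# A 4 nm dot sits at a higher energy than a 6 nm dot, so excited electrons
# tend to "roll downhill" into the larger dots and get stuck there.
print(confinement_energy_ev(4.0) > confinement_energy_ev(6.0))  # True
```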

So the greater the mismatch between energy levels — called energetic disorder — the worse the electron mobility. To measure the impact of energetic disorder on electron flow in their samples, Rachel Gilmore PhD ’17 and her collaborators used a technique called pump-probe spectroscopy — as far as they know, the first time this method has been used to study electron hopping in QDs.

QDs in an excited state absorb light differently than do those in the ground state, so shining light through a material and taking an absorption spectrum provides a measure of the electronic states in it. But in QD materials, electron hopping events can occur within picoseconds — 10⁻¹² of a second — which is faster than any electrical detector can measure.

The researchers therefore set up a special experiment using an ultrafast laser, whose beam is made up of quick pulses arriving at a rate of 100,000 per second. Their setup subdivides the laser beam such that a single pulse is split into a pump pulse that excites a sample and — after a delay measured in femtoseconds (10⁻¹⁵ seconds) — a corresponding probe pulse that measures the sample’s energy state after the delay. By gradually increasing the delay between the pump and probe pulses, they gather absorption spectra that show how much electron transfer has occurred and how quickly the excited electrons drop back to their ground state.
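The logic of such a delay scan can be sketched with a toy model, assuming the excited-state population decays as a single exponential with lifetime tau. The lifetime and delay values below are made-up illustrations, not the researchers' measurements:

```python
# Toy pump-probe scan: the pump excites the sample at delay 0, and the
# probe reads out how much of the excited population survives at each delay.
# A single-exponential decay with a hypothetical 300 ps lifetime is assumed.
import math

def excited_fraction(delay_ps, tau_ps):
    """Fraction of QDs still excited at a given pump-probe delay."""
    return math.exp(-delay_ps / tau_ps)

tau_ps = 300.0  # hypothetical excited-state lifetime
for delay_fs in (0, 100_000, 500_000, 3_000_000):  # 0 fs up to 3 ns
    delay_ps = delay_fs / 1000
    print(f"delay {delay_ps:8.0f} ps -> excited fraction {excited_fraction(delay_ps, tau_ps):.4f}")
```

Sweeping the delay from femtoseconds out to nanoseconds traces the whole decay curve, which is how an optical measurement sidesteps the speed limit of electrical detectors.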

Using this technique, they measured electron energy in a QD sample with standard dot-to-dot variability and in one of the monodisperse samples. In the sample with standard variability, the excited electrons lose much of their excess energy within 3 nanoseconds. In the monodisperse sample, little energy is lost in the same time period — an indication that the energy levels of the QDs are all about the same.

By combining their spectroscopy results with computer simulations of the electron transport process, the researchers extracted electron hopping times ranging from 80 picoseconds for their smallest quantum dots to over 1 nanosecond for the largest ones. And they concluded that their QD materials are at the theoretical limit of how little energetic disorder is possible. Indeed, any remaining difference in energy between neighboring QDs isn’t a problem. At room temperature, energy levels are always vibrating a bit, and those fluctuations are larger than the small differences from one QD to the next.

“So at some instant, random kicks in energy from the environment will cause the energy levels of the QDs to line up, and the electron will do a quick hop,” says Tisdale.
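That thermally assisted hop can be sketched with a Miller-Abrahams-style rate expression, in which downhill hops proceed freely while uphill hops are suppressed by a Boltzmann factor. This is a generic textbook model, not the team's actual simulation, and the mismatch values are hypothetical:

```python
# Minimal Miller-Abrahams-style sketch of thermally assisted hopping:
# a hop up in energy by dE is penalized by exp(-dE / kT); downhill hops are not.
import math

KT_300K_MEV = 25.85  # thermal energy at ~300 K, in meV

def uphill_suppression(delta_e_mev, kt_mev=KT_300K_MEV):
    """Relative rate of a hop that climbs delta_e_mev in energy."""
    return math.exp(-max(delta_e_mev, 0.0) / kt_mev)

for de in (0, 10, 25, 100):
    print(f"mismatch {de:3d} meV -> relative uphill hop rate {uphill_suppression(de):.3f}")
```

When the dot-to-dot mismatch is smaller than the roughly 26 meV of thermal energy at room temperature, the suppression factor stays close to 1 — consistent with the finding that the residual disorder in the monodisperse samples no longer bottlenecks transport.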

The way forward

With energetic disorder no longer a concern, Tisdale concludes that further progress in making commercially viable QD materials will require better ways of dealing with structural disorder. He and his team tested several methods of performing ligand exchange in solid samples, and none produced films with consistent QD size and spacing over large areas without cracks. As a result, he now believes that efforts to optimize that process “may not take us where we need to go.”

What’s needed instead is a way to put short ligands on the QDs when they’re in solution and then let them self-assemble into the desired structure.

“There are some emerging strategies for solution-phase ligand exchange,” he says. “If they’re successfully developed and combined with monodisperse QDs, we should be able to produce beautifully ordered, large-area structures well suited for devices such as solar cells, LEDs, and thermoelectric systems.”

QD synthesis and spectroscopy were supported by the US Department of Energy, Office of Basic Energy Sciences. Structural studies of QD solids were supported by the MIT Energy Initiative Seed Fund Program. Mark Weidman and Rachel Gilmore were partially supported by a National Science Foundation Graduate Research Fellowship. Measurements were performed at the Center for Functional Nanomaterials at Brookhaven National Laboratory, the Cornell High Energy Synchrotron Source, and the MRSEC Shared Experimental Facilities at MIT. 

This article appeared in the Autumn 2017 issue of Energy Futures, the magazine of the MIT Energy Initiative.



from MIT News http://ift.tt/2rBxUlJ