Sunday, June 30, 2019

Teaching artificial intelligence to create visuals with more common sense

Today’s smartphones often use artificial intelligence (AI) to help make the photos we take crisper and clearer. But what if these AI tools could be used to create entire scenes from scratch?

A team from MIT and IBM has now done exactly that with “GANpaint Studio,” a system that can automatically generate realistic photographic images and edit objects inside them. In addition to helping artists and designers make quick adjustments to visuals, the researchers say the work may help computer scientists identify “fake” images.

David Bau, a PhD student at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), describes the project as one of the first times computer scientists have been able to actually “paint with the neurons” of a neural network — specifically, a popular type of network called a generative adversarial network (GAN).

Available online as an interactive demo, GANpaint Studio allows a user to upload an image of their choosing and modify multiple aspects of its appearance, from changing the size of objects to adding completely new items like trees and buildings.

Boon for designers

Spearheaded by MIT professor Antonio Torralba as part of the MIT-IBM Watson AI Lab he directs, the project has vast potential applications. Designers and artists could use it to make quicker tweaks to their visuals. Adapting the system to video clips would enable computer-graphics editors to quickly compose specific arrangements of objects needed for a particular shot. (Imagine, for example, if a director filmed a full scene with actors but forgot to include an object in the background that’s important to the plot.)

GANpaint Studio could also be used to improve and debug other GANs that are being developed, by analyzing them for “artifact” units that need to be removed. In a world where opaque AI tools have made image manipulation easier than ever, it could help researchers better understand neural networks and their underlying structures.

“Right now, machine learning systems are these black boxes that we don’t always know how to improve, kind of like those old TV sets that you have to fix by hitting them on the side,” says Bau, lead author on a related paper about the system with a team overseen by Torralba. “This research suggests that, while it might be scary to open up the TV and take a look at all the wires, there’s going to be a lot of meaningful information in there.”

One unexpected discovery is that the system actually seems to have learned some simple rules about the relationships between objects. It somehow knows not to put something somewhere it doesn’t belong, like a window in the sky, and it also creates different visuals in different contexts. For example, if there are two different buildings in an image and the system is asked to add doors to both, it doesn’t simply add identical doors — they may ultimately look quite different from each other. 

“All drawing apps will follow user instructions, but ours might decide not to draw anything if the user commands to put an object in an impossible location,” says Torralba. “It’s a drawing tool with a strong personality, and it opens a window that allows us to understand how GANs learn to represent the visual world.”

GANs are sets of neural networks developed to compete against each other. In this case, one network is a generator focused on creating realistic images, and the second is a discriminator whose goal is to not be fooled by the generator. Every time the discriminator ‘catches’ the generator, it has to expose the internal reasoning for the decision, which allows the generator to continuously get better.
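
To make the adversarial setup concrete, here is a minimal, self-contained sketch of a GAN training loop in PyTorch — a toy generator learning to mimic a one-dimensional Gaussian, not the large image GAN behind GANpaint Studio; the network sizes and training settings are illustrative assumptions.

```python
# Minimal GAN sketch (illustrative only, not the GANpaint Studio model).
# The generator learns to mimic samples from N(4, 1.25); the discriminator
# learns to tell real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0        # "real" data
    fake = G(torch.randn(64, 8))                  # generated data

    # Discriminator tries not to be fooled: real -> 1, fake -> 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator tries to fool the discriminator: fake -> 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should approach 4.0
```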

“It’s truly mind-blowing to see how this work enables us to directly see that GANs actually learn something that’s beginning to look a bit like common sense,” says Jaakko Lehtinen, an associate professor at Finland’s Aalto University who was not involved in the project. “I see this ability as a crucial steppingstone to having autonomous systems that can actually function in the human world, which is infinite, complex and ever-changing.”

Stamping out unwanted “fake” images

The team’s goal has been to give people more control over GANs. But they recognize that with increased power comes the potential for abuse, like using such technologies to doctor photos. Co-author Jun-Yan Zhu believes that better understanding GANs — and the kinds of mistakes they make — will help researchers better stamp out fakery.

“You need to know your opponent before you can defend against it,” says Zhu, a postdoc at CSAIL. “This understanding may potentially help us detect fake images more easily.”

To develop the system, the team first identified units inside the GAN that correlate with particular types of objects, like trees. It then tested these units individually to see if getting rid of them would cause certain objects to disappear or appear. Importantly, they also identified the units that cause visual errors (artifacts) and worked to remove them to increase the overall quality of the image.
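
The “silencing” test can be pictured with a small sketch: register a hook that zeroes a chosen set of channels at one layer of a generator and compare outputs before and after. The toy generator and the unit indices below are placeholders, not the networks or units from the study.

```python
# Illustrative unit-ablation sketch (assumed toy generator; the actual work
# dissects a large scene-generation GAN).
import torch
import torch.nn as nn

gen = nn.Sequential(
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
)
units_to_silence = [3, 7, 12]  # hypothetical channels correlated with "trees"

def silence(module, inputs, output):
    output[:, units_to_silence] = 0.0   # zero the selected feature channels
    return output

z = torch.randn(1, 64, 8, 8)
with torch.no_grad():
    baseline = gen(z)
    handle = gen[1].register_forward_hook(silence)   # hook after the first ReLU
    ablated = gen(z)
    handle.remove()

# Large pixel differences indicate regions those units were responsible for.
print((baseline - ablated).abs().mean().item())
```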

“Whenever GANs generate terribly unrealistic images, the cause of these mistakes has previously been a mystery,” says co-author Hendrik Strobelt, a research scientist at IBM. “We found that these mistakes are triggered by specific sets of neurons that we can silence to improve the quality of the image.”

Bau, Strobelt, Torralba and Zhu co-wrote the paper with former CSAIL PhD student Bolei Zhou, postdoctoral associate Jonas Wulff, and undergraduate student William Peebles. They will present it next month at the SIGGRAPH conference in Los Angeles. “This system opens a door into a better understanding of GAN models, and that’s going to help us do whatever kind of research we need to do with GANs,” says Lehtinen.



from MIT News https://ift.tt/2ZUdFvQ

Thursday, June 27, 2019

Bridging the gap between research and the classroom

In a moment more reminiscent of a Comic-Con event than a typical MIT symposium, Shawn Robinson, senior research associate at the University of Wisconsin at Madison, helped kick off the first-ever MIT Science of Reading event dressed in full superhero attire as Doctor Dyslexia Dude — the star of a graphic novel series he co-created to engage and encourage young readers, rooted in his own experiences as a student with dyslexia. 

The event, co-sponsored by the MIT Integrated Learning Initiative (MITili) and the McGovern Institute for Brain Research at MIT, took place earlier this month and brought together researchers, educators, administrators, parents, and students to explore how scientific research can better inform educational practices and policies — equipping teachers with scientifically based strategies that may lead to better outcomes for students.

Professor John Gabrieli, MITili director, explained the great need to focus the collective efforts of educators and researchers on literacy.

“Reading is critical to all learning and all areas of knowledge. It is the first great educational experience for all children, and can shape a child’s first sense of self,” he said. “If reading is a challenge or a burden, it affects children’s social and emotional core.”

A great divide

Reading is also a particularly important area to address because so many American students struggle with this fundamental skill. More than six out of every 10 fourth graders in the United States are not proficient readers, and reading scores for fourth and eighth graders have improved only slightly since 1992, according to the National Assessment of Educational Progress.

Gabrieli explained that education seems to have the same kind of “valley of death” that biomedical research has between basic research and clinical application. Although substantial current research aims to better understand why students might have difficulty reading in the ways they are currently taught, that research does not necessarily shape the practices of teachers — or how the teachers themselves are trained to teach.

This divide between the research and practical applications in the classroom might stem from a variety of factors. One issue is the inaccessibility of research publications, many of which are not available for free to all — as well as the general need for scientific findings to be communicated in a clear, accessible, engaging way that can lead to actual implementation. Another challenge is the stark difference in pacing between scientific research and classroom teaching. While research can take years to complete and publish, teachers have classrooms full of students — all with different strengths and challenges — who urgently need to learn in real time.

Natalie Wexler, author of "The Knowledge Gap," described some of the obstacles to getting the findings of cognitive science integrated into the classroom as matters of “head, heart, and habit.” Teacher education programs tend to focus more on some of the outdated psychological models, like Piaget’s theory of cognitive development, and less on recent cognitive science research. Teachers also have to face the emotional realities of working with their students, and might be concerned that a new approach would cause students to feel bored or frustrated. In terms of habit, some new, evidence-based approaches may be, in a practical sense, difficult for teachers to incorporate into the classroom.

“Teaching is an incredibly complex activity,” noted Wexler.

From labs to classrooms

Throughout the day, speakers and panelists highlighted some key insights gained from literacy research, along with some of the implications these might have on education.

Mark Seidenberg, professor of psychology at the University of Wisconsin at Madison and author of "Language at the Speed of Sight," discussed studies indicating the strong connection between spoken and printed language. 

“Reading depends on speech,” said Seidenberg. “Writing systems are codes for expressing spoken language … Spoken language deficits have an enormous impact on children’s reading.”

The integration of speech and reading in the brain increases with reading skill. For skilled readers, the patterns of brain activity (measured using functional magnetic resonance imaging) while comprehending spoken and written language are very similar. Becoming literate affects the neural representation of speech, and knowledge of speech affects the representation of print — thus the two become deeply intertwined. 

In addition, researchers have found that the language of books, even books for young children, includes words and expressions that are rarely encountered in speech to children. Reading aloud to children therefore exposes them to a broader range of linguistic expressions — including more complex ones that are usually only taught much later. Thus, reading to children can be especially important, as research indicates that better knowledge of spoken language facilitates learning to read.

Although behavior and performance on tests are often used as indicators of how well a student can read, neuroscience data can now provide additional information. Neuroimaging of children and young adults identifies brain regions that are critical for integrating speech and print, and can spot differences in the brain activity of a child who might be especially at-risk for reading difficulties. Brain imaging can also show how readers’ brains respond to certain reading and comprehension tasks, and how they adapt to different circumstances and challenges.

“Brain measures can be more sensitive than behavioral measures in identifying true risk,” said Ola Ozernov-Palchik, a postdoc at the McGovern Institute. 

Ozernov-Palchik hopes to apply what her team is learning in their current studies to predict reading outcomes for other children, as well as continue to investigate individual differences in dyslexia and dyslexia-risk using behavior and neuroimaging methods.

Identifying certain differences early on can be tremendously helpful in providing much-needed early interventions and tailored solutions. Many speakers noted the problem with the current “wait-to-fail” model of noticing that a child has a difficult time reading in second or third grade, and then intervening. Research suggests that earlier intervention could help the child succeed much more than later intervention.

Speakers and panelists spoke about current efforts, including Reach Every Reader (a collaboration between MITili, the Harvard Graduate School of Education, and the Florida Center for Reading Research), that seek to provide support to students by bringing together education practitioners and scientists. 

“We have a lot of information, but we have the challenge of how to enact it in the real world,” said Gabrieli, noting that he is optimistic about the potential for the additional conversations and collaborations that might grow out of the discussions of the Science of Reading event. “We know a lot of things can be better and will require partnerships, but there is a path forward.”



from MIT News https://ift.tt/2xj459H

Wednesday, June 26, 2019

A new way to make droplets bounce away

In many situations, engineers want to minimize the contact of droplets of water or other liquids with surfaces they fall onto. Whether the goal is keeping ice from building up on an airplane wing or a wind turbine blade, or preventing heat loss from a surface during rainfall, or preventing salt buildup on surfaces exposed to ocean spray, making droplets bounce away as fast as possible and minimizing the amount of contact with the surface can be key to keeping systems functioning properly.

Now, a study by researchers at MIT demonstrates a new approach to minimizing the contact between droplets and surfaces. While previous attempts, including by members of the same team, have focused on minimizing the amount of time the droplet spends in contact with the surface, the new method instead focuses on the spatial extent of the contact, trying to minimize how far a droplet spreads out before bouncing away.

The new findings are described in the journal ACS Nano in a paper by MIT graduate student Henri-Louis Girard, postdoc Dan Soto, and professor of mechanical engineering Kripa Varanasi. The key to the process, they explain, is creating a series of raised ring shapes on the material’s surface, which cause the falling droplet to splash upward in a bowl-shaped pattern instead of flowing out flat across the surface.

The work is a follow-up to an earlier project by Varanasi and his team, in which they reduced the contact time of droplets on a surface by creating raised ridges that disrupted the spreading pattern of impacting droplets. But the new work takes this further, achieving a much greater reduction in the combination of contact time and contact area of a droplet.

In order to prevent icing on an airplane wing, for example, it is essential to get the droplets of impacting water to bounce away in less time than it takes for the water to freeze. The earlier ridged surface did succeed in reducing the contact time, but Varanasi says “since then, we found there’s another thing at play here,” which is how far the drop spreads out before rebounding and bouncing off. “Reducing the contact area of the impacting droplet should also have a dramatic impact on transfer properties of the interaction,” Varanasi says.

The team initiated a series of experiments that demonstrated that raised rings of just the right size, covering the surface, would cause the water spreading out from an impacting droplet to splash upward instead, forming a bowl-shaped splash, and that the angle of that upward splash could be controlled by adjusting the height and profile of those rings. If the rings are too large or too small compared to the size of the droplets, the system becomes less effective or doesn’t work at all, but when the size is right, the effect is dramatic.

It turns out that reducing the contact time alone is not sufficient to achieve the greatest reduction in contact; it’s the combination of the time and area of contact that’s critical. In a graph of the time of contact on one axis, and the area of contact on the other axis, what really matters is the total area under the curve — that is, the product of the time and the extent of contact. The area of the spreading “was another axis that no one has touched” in previous research, Girard says. “When we started doing so, we saw a drastic reaction,” reducing the total time-and-area contact of the droplet by 90 percent. “The idea of reducing contact area by forming ‘waterbowls’ has far greater effect on reducing the overall interaction than by reducing contact time alone,” Varanasi says.
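
In symbols (our notation, not the paper’s): if A(t) is the wetted contact area at time t and t_c is the total contact time, the quantity to minimize is the area under that curve,

```latex
\text{interaction} \;=\; \int_{0}^{t_c} A(t)\,\mathrm{d}t \;\approx\; A_{\max}\, t_c ,
```

so shrinking either factor helps, but shrinking both — as the ring structures do — compounds the benefit.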

As the droplet starts to spread out within the raised circle, as soon as it hits the circle’s edge it begins to deflect. “Its momentum is redirected upward,” Girard says, and although it ends up spreading outward about as far as it would have otherwise, it is no longer on the surface, and therefore not cooling the surface off, or leading to icing, or blocking the pores on a “waterproof” fabric.

Credit: Henri-Louis Girard, Dan Soto, and Kripa Varanasi

The rings themselves can be made in different ways and from different materials, the researchers say — it’s just the size and spacing that matter. For some tests, they used rings 3-D printed on a substrate, and for others they used a surface patterned through an etching process similar to that used in microchip manufacturing. Other rings were made through computer-controlled milling of plastic.

While higher-velocity droplet impacts generally can be more damaging to a surface, with this system the higher velocities actually improve the effectiveness of the redirection, clearing even more of the liquid than at slower speeds. That’s good news for practical applications, for example in dealing with rain, which has relatively high velocity, Girard says. “It actually works better the faster you go,” he says.

In addition to keeping ice off airplane wings, the new system could have a wide variety of applications, the researchers say. For example, “waterproof” fabrics can become saturated and begin to leak when water fills up the spaces between the fibers, but when treated with the surface rings, fabrics kept their ability to shed water for longer, and performed better overall, Girard says. “There was a 50 percent improvement by using the ring structures,” he says.

The research was supported by MIT’s Deshpande Center for Technological Innovation.



from MIT News https://ift.tt/31OyDhs

Drag-and-drop data analytics

In the Iron Man movies, Tony Stark uses a holographic computer to project 3-D data into thin air, manipulate them with his hands, and find fixes to his superhero troubles. In the same vein, researchers from MIT and Brown University have now developed a system for interactive data analytics that runs on touchscreens and lets everyone — not just billionaire tech geniuses — tackle real-world issues.

For years, the researchers have been developing an interactive data-science system called Northstar, which runs in the cloud but has an interface that supports any touchscreen device, including smartphones and large interactive whiteboards. Users feed the system datasets, and manipulate, combine, and extract features on a user-friendly interface, using their fingers or a digital pen, to uncover trends and patterns.

In a paper being presented at the ACM SIGMOD conference, the researchers detail a new component of Northstar, called VDS for “virtual data scientist,” that instantly generates machine-learning models to run prediction tasks on users’ datasets. Doctors, for instance, can use the system to help predict which patients are more likely to have certain diseases, while business owners might want to forecast sales. If using an interactive whiteboard, everyone can also collaborate in real time.

The aim is to democratize data science by making it easy to do complex analytics, quickly and accurately.

“Even a coffee shop owner who doesn’t know data science should be able to predict their sales over the next few weeks to figure out how much coffee to buy,” says co-author and long-time Northstar project lead Tim Kraska, an associate professor of electrical engineering and computer science at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and founding co-director of the new Data System and AI Lab (DSAIL). “In companies that have data scientists, there’s a lot of back and forth between data scientists and nonexperts, so we can also bring them into one room to do analytics together.”

VDS is based on an increasingly popular technique in artificial intelligence called automated machine-learning (AutoML), which lets people with limited data-science know-how train AI models to make predictions based on their datasets. Currently, the tool leads the DARPA D3M Automatic Machine Learning competition, which every six months decides on the best-performing AutoML tool.    

Joining Kraska on the paper are: first author Zeyuan Shang, a graduate student, and Emanuel Zgraggen, a postdoc and main contributor of Northstar, both of EECS, CSAIL, and DSAIL; Benedetto Buratti, Yeounoh Chung, Philipp Eichmann, and Eli Upfal, all of Brown; and Carsten Binnig, who recently moved from Brown to the Technical University of Darmstadt in Germany.

An “unbounded canvas” for analytics

The new work builds on years of collaboration on Northstar between researchers at MIT and Brown. Over four years, the researchers have published numerous papers detailing components of Northstar, including the interactive interface, operations on multiple platforms, accelerating results, and studies on user behavior.

Northstar starts as a blank, white interface. Users upload datasets into the system, which appear in a “datasets” box on the left. Any data labels will automatically populate a separate “attributes” box below. There’s also an “operators” box that contains various algorithms, as well as the new AutoML tool. All data are stored and analyzed in the cloud.

The researchers like to demonstrate the system on a public dataset that contains information on intensive care unit patients. Consider medical researchers who want to examine co-occurrences of certain diseases in certain age groups. They drag and drop into the middle of the interface a pattern-checking algorithm, which at first appears as a blank box. As input, they move into the box disease features labeled, say, “blood,” “infectious,” and “metabolic.” Percentages of those diseases in the dataset appear in the box. Then, they drag the “age” feature into the interface, which displays a bar chart of the patients’ age distribution. Drawing a line between the two boxes links them together. By circling age ranges, they prompt the algorithm to immediately compute the co-occurrence of the three diseases within those ranges.

“It’s like a big, unbounded canvas where you can lay out how you want everything,” says Zgraggen, who is the key inventor of Northstar’s interactive interface. “Then, you can link things together to create more complex questions about your data.”

Approximating AutoML

With VDS, users can now also run predictive analytics on that data by getting models custom-fit to their tasks, such as data prediction, image classification, or analyzing complex graph structures.

Using the above example, say the medical researchers want to predict which patients may have blood disease based on all features in the dataset. They drag and drop “AutoML” from the list of algorithms. It’ll first produce a blank box, but with a “target” tab, under which they’d drop the “blood” feature. The system will automatically find the best-performing machine-learning pipelines, presented as tabs with constantly updated accuracy percentages. Users can stop the process at any time, refine the search, and examine each model’s error rates, structure, computations, and other characteristics.

According to the researchers, VDS is the fastest interactive AutoML tool to date, thanks, in part, to their custom “estimation engine.” The engine sits between the interface and the cloud storage, and automatically creates several representative samples of a dataset that can be progressively processed to produce high-quality results in seconds.

“Together with my co-authors I spent two years designing VDS to mimic how a data scientist thinks,” Shang says, meaning it instantly identifies which models and preprocessing steps it should or shouldn’t run on certain tasks, based on various encoded rules. It first chooses from a large list of those possible machine-learning pipelines and runs simulations on the sample set. In doing so, it remembers results and refines its selection. After delivering fast approximated results, the system refines the results in the back end. But the final numbers are usually very close to the first approximation.
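
As a rough illustration of that progressive-sampling strategy — not Northstar’s actual engine — the sketch below scores two candidate models on growing slices of a dataset using scikit-learn, so a coarse leaderboard appears quickly and sharpens as more data is processed; the candidate pipelines and sample schedule are assumptions.

```python
# Progressive evaluation sketch (illustrative; not Northstar's engine).
# Score candidate pipelines on growing samples so a rough leaderboard
# appears in seconds and sharpens as more data is processed.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=50, random_state=0),
}

for n in (500, 2000, 8000, len(X_train)):          # progressively larger samples
    for name, model in candidates.items():
        model.fit(X_train[:n], y_train[:n])        # cheap fit on the sample
        score = model.score(X_test, y_test)        # estimate on held-out data
        print(f"n={n:5d}  {name}: {score:.3f}")    # running leaderboard entry
```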

“For using a predictor, you don’t want to wait four hours to get your first results back. You want to already see what’s going on and, if you detect a mistake, you can immediately correct it. That’s normally not possible in any other system,” Kraska says. The researchers’ previous user studies, in fact, show that “the moment you delay giving users results, they start to lose engagement with the system.”

The researchers evaluated the tool on 300 real-world datasets. Compared to other state-of-the-art AutoML systems, VDS’ approximations were just as accurate but were generated within seconds — much faster than other tools, which operate in minutes to hours.

Next, the researchers are looking to add a feature that alerts users to potential data bias or errors. For instance, to protect patient privacy, sometimes researchers will label medical datasets with patients aged 0 (if they do not know the age) and 200 (if a patient is over 95 years old). But novices may not recognize such errors, which could completely throw off their analytics.  

“If you’re a new user, you may get results and think they’re great,” Kraska says. “But we can warn people that there, in fact, may be some outliers in the dataset that may indicate a problem.”



from MIT News https://ift.tt/2YmbGQQ

Confining cell-killing treatments to tumors

Cytokines, small proteins released by immune cells to communicate with each other, have for some time been investigated as a potential cancer treatment.

However, despite their known potency and potential for use alongside other immunotherapies, cytokines have yet to be successfully developed into an effective cancer therapy.

That is because the proteins are highly toxic to healthy tissue and tumors alike, making them unsuitable for use in treatments administered to the entire body.

Injecting the cytokine treatment directly into the tumor itself could provide a method of confining its benefits to the tumor and sparing healthy tissue, but previous attempts to do this have resulted in the proteins leaking out of the cancerous tissue and into the body’s circulation within minutes.

Now researchers at the Koch Institute for Integrative Cancer Research at MIT have developed a technique to prevent cytokines escaping once they have been injected into the tumor, by adding a Velcro-like protein that attaches itself to the tissue.

In this way the researchers, led by Dane Wittrup, the Carbon P. Dubbs Professor in Chemical Engineering and Biological Engineering and a member of the Koch Institute, hope to limit the harm caused to healthy tissue, while prolonging the treatment’s ability to attack the tumor.

To develop their technique, which they describe in a paper published today in the journal Science Translational Medicine, the researchers first investigated the different proteins found in tumors, to find one that could be used as a target for the cytokine treatment. They chose collagen, which is expressed abundantly in solid tumors.

They then undertook an extensive literature search to find proteins that bind effectively to collagen. They discovered a collagen-binding protein called lumican, which they then attached to the cytokines.

“When we inject (a collagen-anchoring cytokine treatment) intratumorally, we don’t have to worry about collagen found elsewhere in the body; we just have to make sure we have a protein that binds to collagen very tightly,” says lead author Noor Momin, a graduate student in the Wittrup Lab at MIT.

To test the treatment, the researchers used two cytokines known to stimulate and expand immune cell responses. The cytokines, interleukin-2 (IL-2) and interleukin-12 (IL-12), are also known to combine well with other immunotherapies.

Although IL-2 already has FDA approval, its severe side effects have so far prevented its clinical use. Meanwhile, IL-12 therapies have not yet reached phase 3 clinical trials due to their severe toxicity.

The researchers tested the treatment by injecting the two different cytokines into tumors in mice. To make the test more challenging, they chose a type of melanoma that contains relatively low amounts of collagen, compared to other tumor types.

They then compared the effects of administering the cytokines alone and of injecting cytokines attached to the collagen-binding lumican.

“In addition, all of the cytokine therapies were given alongside a form of systemic therapy, such as a tumor-targeting antibody, a vaccine, a checkpoint blockade, or chimeric antigen receptor (CAR)-T cell therapy, as we wanted to show the potential of combining cytokines with many different immunotherapy modalities,” Momin says.

They found that when any of the treatments were administered individually, the mice did not survive. Combining the treatments improved survival rates slightly, but when the cytokine was administered with the lumican to bind to the collagen, the researchers found that over 90 percent of the mice survived with some combinations.

“So we were able to show that these combinations are synergistic, they work really well together, and that cytokines attached to lumican really helped reap the full benefits of the combination,” Momin says.

What’s more, attaching the lumican eliminated the problem of toxicity associated with cytokine treatments alone.

The paper attempts to address a major obstacle in the oncology field, that of how to target potent therapeutics to the tumor microenvironment to enable their local action, according to Shannon Turley, a staff scientist and specialist in cancer immunology at Genentech, who was not involved in the research.

“This is important because many of the most promising cancer drugs can have unwanted side effects in tissues beyond the tumor,” Turley says. “The team’s approach relies on two principles that together make for a novel approach: injection of the drug directly into the tumor site, and engineering of the drug to contain a ‘Velcro’ that attaches the drug to the tumor to keep it from leaking into circulation and acting all over the body.”

The researchers now plan to carry out further work to improve the technique, and to explore other treatments that could benefit from being combined with collagen-binding lumican, Momin says.

Ultimately, they hope the work will encourage other researchers to consider the use of collagen binding for cancer treatments, Momin says.

“We’re hoping the paper seeds the idea that collagen anchoring could be really advantageous for a lot of different therapies across all solid tumors.”



from MIT News https://ift.tt/2J7WMXX

Record-breaking DNA comparisons drive fast forensics

Forensic investigators arrive at the scene of a crime to search for clues. There are no known suspects, and every second that passes means more time for the trail to run cold. A DNA sample is discovered, collected, and then sent to a nearby forensics laboratory. There, it is sequenced and fed into a program that compares its genetic contents to DNA profiles stored in the FBI’s National DNA Index System (NDIS) — a database containing profiles of 18 million people who have passed through the criminal justice system. The hope is that the crime scene sample will match a profile from the database, pointing the way to a suspect. The sample can also be used for kinship analysis through which the sample is linked to blood relatives, as was done last April to catch the infamous Golden State Killer.

DNA forensics is a powerful tool, yet it presents a computational scaling problem when it is improved and expanded for complex samples (those containing DNA from more than one individual) and kinship analysis. Consider the volume of data that the FBI must handle for the nation. “If you think of all the police stations across the country, all operating each week, it’s a lot of data to keep track of and organize,” says Darrell Ricke from the Bioengineering Systems and Technologies Group. To put this into perspective, if each state compares 2,000 crime scene samples weekly, that’s 100,000 samples to compare against 18 million profiles per week.

Ricke is part of a team at the laboratory that developed an integrated web-based platform called IdPrism that provides expanded comparison capabilities without compromising speed or functionality. IdPrism allows identification of more than 10 individuals in a complex DNA sample, along with extended kinship results. At its heart are two algorithms that Ricke developed, FastID and TachysSTR, which encode genetic markers as bits (0 or 1) and operate quickly and smoothly. These algorithms recently won a 2018 R&D 100 Award, which is given annually by R&D Magazine to the 100 most significant inventions of the year.

These markers are two types of variations in DNA called short tandem repeats (STR) and single nucleotide polymorphisms (SNP). They are considered to be a kind of DNA fingerprint that can be used to identify individuals as well as their relatives. Each person has a unique combination of SNP or STR variations — one person’s combination presents in a specific pattern, while another person’s presents in a different pattern. When analysts run a crime-scene DNA sample against a profile in the NDIS database, finding a matching combination of these STRs shows a high chance that the DNA belongs to the same person.

The FBI currently uses software algorithms that must pass through a complex set of calculations to reveal if a sample matches a profile. Ricke’s algorithms assign a bit value to normal (0) or rare (1) versions of SNPs, or a bit for each different STR marker. The normal label indicates that the SNP or STR is common in many people and is thus not a unique marker that can be used to identify an individual. With this digital DNA encoding for both identity comparisons and complex mixtures, analysis can be done with just three hardware bit instructions: exclusive OR, logical AND, and population count.

An exclusive OR instruction allows for a comparison of whether two DNA profiles are the same or different. For the forensic comparisons, this instruction will output a 0 when an SNP or STR in a sample matches that in a profile, and it will output a 1 when they don’t match. This technique works well when the crime scene sample contains DNA from only one individual, but if there are more contributors, a matching result could be hidden among mismatches from the other people in the same sample. This issue is addressed by adding a logical AND with the database profile to the results of the exclusive OR. This step, in a sense, gets rid of the mismatch noise to reveal whether the database profile has matched against an individual in the sample. The final step is population count, which sums up all of the 1s. In the end, a match is represented by mostly 0s and a mismatch will have a high number of 1s.
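
The three-instruction scheme can be sketched in a few lines of Python, using integers as bit vectors; the profiles and two-person mixture below are invented for illustration, and this is not the FastID code itself.

```python
# Bit-vector matching sketch (illustrative; not the FastID implementation).
# Each profile is an integer whose bits mark rare SNP variants (1 = rare).
import random

random.seed(0)
N_SNPS = 5000

def random_profile():
    return random.getrandbits(N_SNPS)

suspect = random_profile()
bystander = random_profile()
mixture = suspect | bystander          # crime-scene sample with two contributors

def mismatches(reference, sample):
    diff = reference ^ sample          # exclusive OR: 1 where profiles differ
    masked = diff & reference          # logical AND: ignore other contributors' bits
    return bin(masked).count("1")      # population count: sum the mismatch bits

print(mismatches(suspect, mixture))           # ~0: suspect is in the mixture
print(mismatches(random_profile(), mixture))  # large: unrelated profile
```

In the end, as the article notes, a match is a count near zero and a mismatch a high count, so a simple threshold on the population count separates the two.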

Using these three hardware bit instructions, the FastID algorithm can compare 5,000 SNPs in a crime scene DNA sample against 20 million reference profiles in under 12 seconds. Alternative methods would take hours to do so on this scale. Similarly, TachysSTR can compare STRs in 1 million samples in 1.8 seconds, whereas current algorithms take 10 minutes to do the same.

The results are displayed inside the IdPrism system in which investigators can run, view, query, and store their DNA comparison data. In addition to being fast and convenient, the system has improved the accuracy of forensics by including a panel of 2,650 SNP markers that are used for complex sample and kinship analysis.

Last November, the system was transitioned to users outside of the laboratory. "Although getting IdPrism to a transition-ready product was challenging, it is awesome to think that our technology is being used," says Philip Fremont-Smith, who is also from the Bioengineering Systems and Technologies Group and was involved in the bioinformatics side of the project.

“When Hollywood finds out about this, they’re going to change their scripts,” Ricke says. “The capabilities are so different from what’s out there.”



from MIT News https://ift.tt/2Xyq5Ms

New AI programming language goes beyond deep learning

A team of MIT researchers is making it easier for novices to get their feet wet with artificial intelligence, while also helping experts advance the field.

In a paper presented at the Programming Language Design and Implementation conference this week, the researchers describe a novel probabilistic-programming system named “Gen.” Users write models and algorithms from multiple fields where AI techniques are applied — such as computer vision, robotics, and statistics — without having to deal with equations or manually write high-performance code. Gen also lets expert researchers write sophisticated models and inference algorithms — used for prediction tasks — that were previously infeasible.

In their paper, for instance, the researchers demonstrate that a short Gen program can infer 3-D body poses, a difficult computer-vision inference task that has applications in autonomous systems, human-machine interactions, and augmented reality. Behind the scenes, this program includes components that perform graphics rendering, deep learning, and types of probability simulations. The combination of these diverse techniques leads to better accuracy and speed on this task than earlier systems developed by some of the researchers.

Due to its simplicity — and, in some use cases, automation — the researchers say Gen can be used easily by anyone, from novices to experts. “One motivation of this work is to make automated AI more accessible to people with less expertise in computer science or math,” says first author Marco Cusumano-Towner, a PhD student in the Department of Electrical Engineering and Computer Science. “We also want to increase productivity, which means making it easier for experts to rapidly iterate and prototype their AI systems.”

The researchers also demonstrated Gen’s ability to simplify data analytics with another Gen program that automatically generates sophisticated statistical models typically used by experts to analyze, interpret, and predict underlying patterns in data. That builds on the researchers’ previous work that let users write a few lines of code to uncover insights into financial trends, air travel, voting patterns, and the spread of disease, among other areas. This is different from earlier systems, which required a lot of hand coding for accurate predictions.

“Gen is the first system that’s flexible, automated, and efficient enough to cover those very different types of examples in computer vision and data science and give state-of-the-art performance,” says Vikash K. Mansinghka ’05, MEng ’09, PhD ’09, a researcher in the Department of Brain and Cognitive Sciences who runs the Probabilistic Computing Project.

Joining Cusumano-Towner and Mansinghka on the paper are Feras Saad and Alexander K. Lew, both CSAIL graduate students and members of the Probabilistic Computing Project.

Best of all worlds

In 2015, Google released TensorFlow, an open-source library of application programming interfaces (APIs) that helps beginners and experts automatically generate machine-learning systems without doing much math. Now widely used, the platform is helping democratize some aspects of AI. But, although it’s automated and efficient, it’s narrowly focused on deep-learning models, which are both costly and limited compared to the broader promise of AI in general.

But there are plenty of other AI techniques available today, such as statistical and probabilistic models, and simulation engines. Some other probabilistic programming systems are flexible enough to cover several kinds of AI techniques, but they run inefficiently.

The researchers sought to combine the best of all worlds — automation, flexibility, and speed — into one. “If we do that, maybe we can help democratize this much broader collection of modeling and inference algorithms, like TensorFlow did for deep learning,” Mansinghka says.

In probabilistic AI, inference algorithms perform operations on data and continuously readjust probabilities based on new data to make predictions. Doing so eventually produces a model that describes how to make predictions on new data.
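
As a generic illustration of that readjustment — deliberately far simpler than anything Gen supports, and not Gen code — the sketch below performs Bayesian updating of a coin’s bias one observation at a time; the prior and the data are made up.

```python
# Toy illustration of probabilistic inference (generic, not Gen itself):
# Bayesian updating of a coin's bias as data arrives, one flip at a time.
alpha, beta = 1.0, 1.0                 # uniform Beta(1, 1) prior over the bias

observations = [1, 1, 0, 1, 1, 1, 0, 1]   # 1 = heads, 0 = tails
for flip in observations:
    alpha += flip                      # each heads raises alpha
    beta += 1 - flip                   # each tails raises beta
    mean = alpha / (alpha + beta)      # posterior mean: current best prediction
    print(f"after flip {flip}: P(heads) = {mean:.3f}")
```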

Building off concepts used in their earlier probabilistic-programming system, Church, the researchers incorporate several custom modeling languages into Julia, a general-purpose programming language that was also developed at MIT. Each modeling language is optimized for a different type of AI modeling approach, making it more all-purpose. Gen also provides high-level infrastructure for inference tasks, using diverse approaches such as optimization, variational inference, certain probabilistic methods, and deep learning. On top of that, the researchers added some tweaks to make the implementations run efficiently.

Beyond the lab

External users are already finding ways to leverage Gen for their AI research. For example, Intel is collaborating with MIT to use Gen for 3-D pose estimation from its depth-sense cameras used in robotics and augmented-reality systems. MIT Lincoln Laboratory is also collaborating on applications for Gen in aerial robotics for humanitarian relief and disaster response.


Gen is beginning to be used on ambitious AI projects under the MIT Quest for Intelligence. For example, Gen is central to an MIT-IBM Watson AI Lab project, along with the U.S. Department of Defense’s Defense Advanced Research Projects Agency’s ongoing Machine Common Sense project, which aims to model human common sense at the level of an 18-month-old child. Mansinghka is one of the principal investigators on this project.

“With Gen, for the first time, it is easy for a researcher to integrate a bunch of different AI techniques. It’s going to be interesting to see what people discover is possible now,” Mansinghka says.

Zoubin Ghahramani, chief scientist and vice president of AI at Uber and a professor at Cambridge University, who was not involved in the research, says, “Probabilistic programming is one of the most promising areas at the frontier of AI since the advent of deep learning. Gen represents a significant advance in this field and will contribute to scalable and practical implementations of AI systems based on probabilistic reasoning.”

Peter Norvig, director of research at Google, who also was not involved in this research, praised the work as well. “[Gen] allows a problem-solver to use probabilistic programming, and thus have a more principled approach to the problem, but not be limited by the choices made by the designers of the probabilistic programming system,” he says. “General-purpose programming languages … have been successful because they … make the task easier for a programmer, but also make it possible for a programmer to create something brand new to efficiently solve a new problem. Gen does the same for probabilistic programming.”

Gen’s source code is publicly available and is being presented at upcoming open-source developer conferences, including Strange Loop and JuliaCon. The work is supported, in part, by DARPA.



from MIT News https://ift.tt/2LjnjnO

Translating proteins into music, and back

Want to create a brand new type of protein that might have useful properties? No problem. Just hum a few bars.

In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature.

Although it’s not quite as simple as humming a new protein into existence, the new system comes close. It provides a systematic way of translating a protein’s sequence of amino acids into a musical sequence, using the physical properties of the molecules to determine the sounds. Although the sounds are transposed in order to bring them within the audible range for humans, the tones and their relationships are based on the actual vibrational frequencies of each amino acid molecule itself, computed using theories from quantum chemistry.

The system was developed by Markus Buehler, the McAfee Professor of Engineering and head of the Department of Civil and Environmental Engineering at MIT, along with postdoc Chi Hua Yu and two others. As described today in the journal ACS Nano, the system translates the 20 types of amino acids, the building blocks that join together in chains to form all proteins, into a 20-tone scale. Any protein’s long sequence of amino acids then becomes a sequence of notes.
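
A toy sketch of that mapping appears below, with the caveat that the real system derives each tone from quantum-chemistry calculations of the amino acid’s vibrational spectrum, which the equal-division scale assumed here only stands in for; the peptide sequence is invented.

```python
# Toy protein-to-melody mapping (illustrative; the real system derives tones
# from each amino acid's vibrational frequencies, not an equal division).
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"          # the 20 standard one-letter codes

def note_frequency(aa, base_hz=220.0):
    """Map an amino acid to one step of a 20-tone equal-division octave."""
    step = AMINO_ACIDS.index(aa)
    return base_hz * 2 ** (step / 20)          # 20 equal divisions of the octave

sequence = "MKTAYIAKQR"                        # a made-up peptide sequence
melody = [round(note_frequency(aa), 1) for aa in sequence]
print(melody)                                  # one pitch (Hz) per residue
```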

While such a scale sounds unfamiliar to people accustomed to Western musical traditions, listeners can readily recognize the relationships and differences after familiarizing themselves with the sounds. Buehler says that after listening to the resulting melodies, he is now able to distinguish certain amino acid sequences that correspond to proteins with specific structural functions. “That’s a beta sheet,” he might say, or “that’s an alpha helix.”

Learning the language of proteins

The whole concept, Buehler explains, is to get a better handle on understanding proteins and their vast array of variations. Proteins make up the structural material of skin, bone, and muscle, but are also enzymes, signaling chemicals, molecular switches, and a host of other functional materials that make up the machinery of all living things. But their structures, including the way they fold themselves into the shapes that often determine their functions, are exceedingly complicated. “They have their own language, and we don’t know how it works,” he says. “We don’t know what makes a silk protein a silk protein or what patterns reflect the functions found in an enzyme. We don’t know the code.”

By translating that language into a different form that humans are particularly well-attuned to, and that allows different aspects of the information to be encoded in different dimensions — pitch, volume, and duration — Buehler and his team hope to glean new insights into the relationships and differences between different families of proteins and their variations, and use this as a way of exploring the many possible tweaks and modifications of their structure and function. As with music, the structure of proteins is hierarchical, with different levels of structure at different scales of length or time.

The new method translates an amino acid sequence of proteins into this sequence of percussive and rhythmic sounds. Courtesy of Markus Buehler.

The team then used an artificial intelligence system to study the catalog of melodies produced by a wide variety of different proteins. They had the AI system introduce slight changes in the musical sequence or create completely new sequences, and then translated the sounds back into proteins that correspond to the modified or newly designed versions. With this process they were able to create variations of existing proteins — for example of one found in spider silk, one of nature’s strongest materials — thus making new proteins unlike any produced by evolution.

The percussive, rhythmic, and musical sounds heard here are generated entirely from amino acid sequences. Courtesy of Markus Buehler.

Although the researchers themselves may not know the underlying rules, “the AI has learned the language of how proteins are designed,” and it can encode it to create variations of existing versions, or completely new protein designs, Buehler says. Given that there are “trillions and trillions” of potential combinations, he says, when it comes to creating new proteins “you wouldn’t be able to do it from scratch, but that’s what the AI can do.”

“Composing” new proteins

Using such a system, he says, training the AI with a set of data for a particular class of proteins might take a few days, but it can then produce a design for a new variant within microseconds. “No other method comes close,” he says. “The shortcoming is the model doesn’t tell us what’s really going on inside. We just know it works.”

This way of encoding structure into music does reflect a deeper reality. “When you look at a molecule in a textbook, it’s static,” Buehler says. “But it’s not static at all. It’s moving and vibrating. Every bit of matter is a set of vibrations. And we can use this concept as a way of describing matter.”

The method does not yet allow for any kind of directed modifications — any changes in properties such as mechanical strength, elasticity, or chemical reactivity will be essentially random. “You still need to do the experiment,” he says. When a new protein variant is produced, “there’s no way to predict what it will do.”

The team also created musical compositions developed from the sounds of amino acids, which define this new 20-tone musical scale. The art pieces they constructed consist entirely of the sounds generated from amino acids. “There are no synthetic or natural instruments used, showing how this new source of sounds can be utilized as a creative platform,” Buehler says. Musical motifs derived from both naturally existing proteins and AI-generated proteins are used throughout the examples, and all the sounds, including some that resemble bass or snare drums, are also generated from the sounds of amino acids.

The researchers have created a free Android smartphone app, called Amino Acid Synthesizer, to play the sounds of amino acids and record protein sequences as musical compositions.

“Markus Buehler has been gifted with a most creative soul, and his explorations into the inner workings of biomolecules are advancing our understanding of the mechanical response of biological materials in a most significant manner,” says Marc Meyers, a professor of materials science at the University of California at San Diego, who was not involved in this work.

Meyers adds, “The focusing of this imagination to music is a novel and intriguing direction. This is experimental music at its best. The rhythms of life, including the pulsations of our heart, were the initial sources of repetitive sounds that engendered the marvelous world of music. Markus has descended into the nanospace to extract the rhythms of the amino acids, the building blocks of life.”

The team also included research scientist Zhao Qin and Francisco Martin-Martinez at MIT. The work was supported by the U.S. Office of Naval Research and the National Institutes of Health.



from MIT News https://ift.tt/2KDmWoE

Tuesday, June 25, 2019

Study: Social robots can benefit hospitalized children

A new study demonstrates, for the first time, that “social robots” used in support sessions held in pediatric units at hospitals can lead to more positive emotions in sick children.

Many hospitals host interventions in pediatric units, where child life specialists provide clinical interventions to hospitalized children for developmental and coping support. This involves play, preparation, education, and behavioral distraction for routine medical care, as well as before, during, and after difficult procedures. Traditional interventions include therapeutic medical play and normalizing the environment through activities such as arts and crafts, games, and celebrations.

For the study, published today in the journal Pediatrics, researchers from the MIT Media Lab, Boston Children’s Hospital, and Northeastern University deployed a robotic teddy bear, “Huggable,” across several pediatric units at Boston Children’s Hospital. More than 50 hospitalized children were randomly split into three groups of interventions that involved Huggable, a tablet-based virtual Huggable, or a traditional plush teddy bear. In general, Huggable improved various patient outcomes over those other two options.  

The study primarily demonstrated the feasibility of integrating Huggable into the interventions. But results also indicated that children playing with Huggable experienced more positive emotions overall. They also got out of bed and moved around more, and emotionally connected with the robot, asking it personal questions and inviting it to come back later to meet their families. “Such improved emotional, physical, and verbal outcomes are all positive factors that could contribute to better and faster recovery in hospitalized children,” the researchers write in their study.

Although it is a small study, it is the first to explore social robotics in a real-world inpatient pediatric setting with ill children, the researchers say. Other studies have been conducted in labs, have studied very few children, or were conducted in public settings without any patient identification.

But Huggable is designed only to assist health care specialists — not replace them, the researchers stress. “It’s a companion,” says co-author Cynthia Breazeal, an associate professor of media arts and sciences and founding director of the Personal Robots group. “Our group designs technologies with the mindset that they’re teammates. We don’t just look at the child-robot interaction. It’s about [helping] specialists and parents, because we want technology to support everyone who’s invested in the quality care of a child.”

“Child life staff provide a lot of human interaction to help normalize the hospital experience, but they can’t be with every kid, all the time. Social robots create a more consistent presence throughout the day,” adds first author Deirdre Logan, a pediatric psychologist at Boston Children’s Hospital. “There may also be kids who don’t always want to talk to people, and respond better to having a robotic stuffed animal with them. It’s exciting knowing what types of support we can provide kids who may feel isolated or scared about what they’re going through.”

Joining Breazeal and Logan on the paper are: Sooyeon Jeong, a PhD student in the Personal Robots group; Brianna O’Connell, Duncan Smith-Freedman, and Peter Weinstock, all of Boston Children’s Hospital; and Matthew Goodwin and James Heathers, both of Northeastern University.

Boosting mood

First prototyped in 2006, Huggable is a plush teddy bear with a screen depicting animated eyes. While the eventual goal is to make the robot fully autonomous, it is currently operated remotely by a specialist in the hall outside a child’s room. Through custom software, a specialist can control the robot’s facial expressions and body actions, and direct its gaze. The specialists could also talk through a speaker — with their voice automatically shifted to a higher pitch to sound more childlike — and monitor the participants via camera feed. The tablet-based avatar of the bear had identical gestures and was also remotely operated.

During the interventions involving Huggable — with kids ages 3 to 10 years — a specialist would sing nursery rhymes to younger children through the robot and move its arms during the song. Older kids would play the I Spy game, in which they have to guess an object in the room described by the specialist through Huggable.

Through self-reports and questionnaires, the researchers recorded how much the patients and families liked interacting with Huggable. Additional questionnaires assessed patients’ positive moods, as well as anxiety and perceived pain levels. The researchers also used cameras mounted in the child’s room to capture and analyze speech patterns, using software to characterize them as joyful or sad.

A greater percentage of children and their parents reported that the children enjoyed playing with Huggable more than with the avatar or traditional teddy bear. Speech analysis backed up that result, detecting significantly more joyful expressions among the children during robotic interventions. Additionally, parents noted lower levels of perceived pain among their children.

The researchers noted that 93 percent of patients completed the Huggable-based interventions, and found few barriers to practical implementation, as determined by comments from the specialists.

A previous paper based on the same study found that the robot also seemed to facilitate greater family involvement in the interventions, compared to the other two methods, which improved the intervention overall. “Those are findings we didn’t necessarily expect in the beginning,” says Jeong, also a co-author on the previous paper. “We didn’t tell family to join any of the play sessions — it just happened naturally. When the robot came in, the child and robot and parents all interacted more, playing games or introducing the robot.”

An automated, take-home bot

The study also generated valuable insights for developing a fully autonomous Huggable robot, which is the researchers’ ultimate goal. They were able to determine which physical gestures are used most and least often, and which features specialists may want for future iterations. Huggable, for instance, could introduce doctors before they enter a child’s room or learn a child’s interests and share that information with specialists. The researchers may also equip the robot with computer vision, so it can detect certain objects in a room and talk about them with children.

“In these early studies, we capture data … to wrap our heads around an authentic use-case scenario where, if the bear was automated, what does it need to do to provide high-quality standard of care,” Breazeal says.

In the future, that automated robot could be used to improve continuity of care. A child would take home a robot after a hospital visit to further support engagement, adherence to care regimens, and monitoring well-being.

“We want to continue thinking about how robots can become part of the whole clinical team and help everyone,” Jeong says. “When the robot goes home, we want to see the robot monitor a child’s progress. … If there’s something clinicians need to know earlier, the robot can let the clinicians know, so [they’re not] surprised at the next appointment that the child hasn’t been doing well.”

Next, the researchers are hoping to zero in on which specific patient populations may benefit the most from the Huggable interventions. “We want to find the sweet spot for the children who need this type of extra support,” Logan says.



from MIT News https://ift.tt/2xbNwfL

For Catherine Drennan, teaching and research are complementary passions

Catherine Drennan says nothing in her job thrills her more than the process of discovery. But Drennan, a professor of biology and chemistry, is not referring to her landmark research on protein structures that could play a major role in reducing the world’s waste carbons. 

“Really the most exciting thing for me is watching my students ask good questions, problem-solve, and then do something spectacular with what they’ve learned,” she says. 

For Drennan, research and teaching are complementary passions, both flowing from a deep sense of “moral responsibility.” Everyone, she says, “should do something, based on their skill set, to make some kind of contribution.” 

Drennan’s own research portfolio attests to this sense of mission. Since her arrival at MIT 20 years ago, she has focused on characterizing and harnessing metal-containing enzymes that catalyze complex chemical reactions, including those that break down carbon compounds. 

She got her start in the field as a graduate student at the University of Michigan, where she became captivated by vitamin B12. This very large vitamin contains cobalt and is vital for amino acid metabolism, the proper formation of the spinal cord, and prevention of certain kinds of anemia. Bound to proteins in food, B12 is released during digestion. 

“Back then, people were suggesting how B12-dependent enzymatic reactions worked, and I wondered how they could be right if they didn’t know what B12-dependent enzymes looked like,” she recalls. “I realized I needed to figure out how B12 is bound to protein to really understand what was going on.” 

Drennan seized on X-ray crystallography as a way to visualize molecular structures. Using this technique, which involves bouncing X-ray beams off a crystallized sample of a protein of interest, she figured out how vitamin B12 is bound to a protein molecule. 

“No one had previously been successful using this method to obtain a B12-bound protein structure, which turned out to be gorgeous, with a protein fold surrounding a novel configuration of the cofactor,” says Drennan. 

Carbon-loving microbes show the way 

These studies of B12 led directly to Drennan’s one-carbon work. “Metallocofactors such as B12 are important not just medically, but in environmental processes,” she says. “Many microbes that live on carbon monoxide, carbon dioxide, or methane — eating carbon waste or transforming carbon — use metal-containing enzymes in their metabolic pathways, and it seemed like a natural extension to investigate them.” 

Some of Drennan’s earliest work in this area, dating from the early 2000s, revealed a cluster of iron, nickel, and sulfur atoms at the center of the enzyme carbon monoxide dehydrogenase (CODH). This so-called C-cluster serves hungry microbes, allowing them to “eat” carbon monoxide and carbon dioxide. 

In recent experiments, Drennan showed that the structure of the C-cluster-containing enzyme CODH changes configuration in response to oxygen, with sulfur, iron, and nickel atoms cartwheeling into different positions. Scientists looking for new avenues to reduce greenhouse gases took note of this discovery. CODH, suggested Drennan, might prove an effective tool for converting waste carbon dioxide into a less environmentally destructive compound, such as acetate, which might also be used for industrial purposes.

Drennan has also been investigating the biochemical pathways by which microbes break down hydrocarbon byproducts of crude oil production, such as toluene, an environmental pollutant. 

“It’s really hard chemistry, but we’d like to put together a family of enzymes to work on all kinds of hydrocarbons, which would give us a lot of potential for cleaning up a range of oil spills,” she says. 

The threat of climate change has increasingly galvanized Drennan’s research, propelling her toward new targets. A 2017 study she co-authored in Science detailed a previously unknown enzyme pathway in ocean microbes that leads to the production of methane, a formidable greenhouse gas: “I’m worried the ocean will make a lot more methane as the world warms,” she says. 

Drennan hopes her work may soon help to reduce the planet’s greenhouse gas burden. Commercial firms have begun using the enzyme pathways that she studies, in one instance employing a proprietary microbe to capture carbon dioxide produced during steel production — before it is released into the atmosphere — and convert it into ethanol. 

“Reengineering microbes so that enzymes take not just a little, but a lot of carbon dioxide out of the environment — this is an area I’m very excited about,” says Drennan. 

Creating a meaningful life in the sciences 

At MIT, she has found an increasingly warm welcome for her efforts to address the climate challenge.  

“There’s been a shift in the past decade or so, with more students focused on research that allows us to fuel the planet without destroying it,” she says. 

In Drennan’s lab, a postdoc, Mary Andorfer, and a rising junior, Phoebe Li, are currently working to inhibit an enzyme present in an oil-consuming microbe whose unfortunate residence in refinery pipes leads to corrosion and spills. “They are really excited about this research from the environmental perspective and even made a video about their microorganism,” says Drennan.

Drennan delights in this kind of enthusiasm for science. In high school, she thought chemistry was dry and dull, with no relevance to real-world problems. It wasn’t until college that she “saw chemistry as cool.” 

The deeper she delved into the properties and processes of biological organisms, the more possibilities she found. X-ray crystallography offered a perfect platform for exploration. “Oh, what fun to tell the story about a three-dimensional structure — why it is interesting, what it does based on its form,” says Drennan. 

The elements that excite Drennan about research in structural biology — capturing stunning images, discerning connections among biological systems, and telling stories — come into play in her teaching. In 2006, she received a $1 million grant from the Howard Hughes Medical Institute (HHMI) for her educational initiatives that use inventive visual tools to engage undergraduates in chemistry and biology. She is both an HHMI investigator and an HHMI professor, recognition of her parallel accomplishments in research and teaching, as well as a 2015 MacVicar Faculty Fellow for her sustained contribution to the education of undergraduates at MIT. 

Drennan attempts to reach MIT students early. She taught introductory chemistry classes from 1999 to 2014, and in fall 2018 taught her first introductory biology class. 

“I see a lot of undergraduates majoring in computer science, and I want to convince them of the value of these disciplines,” she says. “I tell them they will need chemistry and biology fundamentals to solve important problems someday.” 

Drennan happily migrates among many disciplines, learning as she goes. It’s a lesson she hopes her students will absorb. “I want them to visualize the world of science and show what they can do,” she says. “Research takes you in different directions, and we need to bring the way we teach more in line with our research.” 

She has high expectations for her students. “They’ll go out in the world as great teachers and researchers,” Drennan says. “But it’s most important that they be good human beings, taking care of other people, asking what they can do to make the world a better place.” 

This article appears in the Spring 2019 issue of Energy Futures, the magazine of the MIT Energy Initiative. 



from MIT News https://ift.tt/2Nokbtg

Smart workout apparel, “Vegetable Assassins,” and inspiration from medieval music

It doesn’t get any better than this — at least not at MIT. There’s the roar of raucous laughter as students play games or test products that they themselves have designed and built. There’s the chatter of questions asked and answered, all to the effect of “How did you do that?” and “Here’s what I did.”  

To top it off, there’s the welcoming smell of pizza, slices being pulled from rapidly cooling boxes by a group of students and teaching assistants from the four sections of 6.08 (Introduction to EECS via Interconnected Embedded Systems). They have gathered for a special occasion during the last week of spring term: to show off their class final projects.

“This is the best class I've taken here,” says Mussie Demisse, a sophomore in EECS, dressed in a hoodie with a square contraption on his back that could have fallen off Iron Man. He and his team have designed a “Smart Suit” that analyzes and assesses a user’s pushup form.

“The class has given me the opportunity to do research on my own,” Demisse says. “It’s introduced us to many things and it now falls on us to pursue the things we like.”

The course introduces students to working with multiple platforms, servers, databases, and microcontrollers. For the final project, four-person teams design, program, build, and demonstrate their own cloud-connected, handheld, or wearable Internet of Things systems. The result: about 85 projects ranging from a Frisbee that analyzes velocity and acceleration to a “better” GPS system for tracking the location of the MIT shuttle.

“Don’t hit the red apple! Noooo,” yells first-year student Bradley Albright as Joe Steinmeyer, EECS lecturer and 6.08 instructor, hits the wrong target while playing “Vegetable Assassins.” The object of the game is to slice the vegetables scrolling by on a computer screen, but Steinmeyer, using an internet-connected foam sword, has managed to hit an apple instead.  

Albright had the idea for a “Fruit Ninja”-style game during his first days at MIT, when he envisioned the visceral experience of slicing the air with a katana, or Japanese sword, and hitting a virtual target. Then, he and his team of Johnny Bui and Eesam Hourani, both sophomores in EECS, and Tingyu Li, a junior in management, were able to, as they put it, “take on the true villains of the food pyramid: vegetables.” They built a server-client model in which data from the sword is sent to a browser via a server connection. The server facilitates communication between all components through multiple WebSocket connections.
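
The students’ code is not published, but the server-client pattern they describe is standard. Here is a minimal sketch using Python’s websockets library, with the port and message format as assumptions: every connected client (sword or browser) sends messages, and the server rebroadcasts each one to all the other clients.

```python
# Minimal relay sketch: sword and browser both connect as WebSocket clients,
# and the server forwards each message to everyone else. Port and message
# format are assumptions; the team's actual server is not published.
import asyncio
import websockets

CLIENTS = set()

async def relay(websocket, path=""):
    """Register a client and rebroadcast its messages to the other clients."""
    CLIENTS.add(websocket)
    try:
        async for message in websocket:  # e.g. '{"swing": [ax, ay, az]}'
            others = CLIENTS - {websocket}
            if others:
                await asyncio.gather(*(c.send(message) for c in others))
    finally:
        CLIENTS.discard(websocket)

async def main():
    async with websockets.serve(relay, "0.0.0.0", 8765):
        await asyncio.Future()  # serve until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```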

“It took a lot of work. Coming down to the last night, we had some problems that we had to spend a whole night finishing but I think we are all incredibly happy with the work we put into it,” Albright says.

Steinmeyer teaches 6.08 with two EECS colleagues: Max Shulaker, the Emmanuel E. Landsman (1958) Career Development Assistant Professor, and Stefanie Mueller, the X-Window Consortium Career Development Assistant Professor. The course was co-created by Steinmeyer and Joel Voldman, an EECS professor and associate department head.

Mueller, for one, is impressed with the students’ collaborative efforts as they developed their projects in just four weeks: “They really had to pull together to work,” she says. 

Even projects that don’t quite work as expected are learning experiences, Steinmeyer notes. “I’m a big fan of having people do work early on and then go and do it again later. That’s how I learned the best. I always had to learn a dumb way first.”

Demisse and his team — Amadou Bah and Stephanie Yoon, both sophomores in EECS, and Sneha Ramachandran, a junior in EECS — confronted a few setbacks in developing their Smart Suit. “We wanted something to force ourselves to play around with electronics and hardware,” he explains. “During our brainstorming session, we thought of things that would monitor your heart rate.”

Initially, they considered something that runners might use to track their form. “But running’s pretty hard. [We thought,] ‘Let’s take a step back,’” Demisse recalls. “It was a natural evolution from that to pushups.”

They designed a zip-up hoodie with inertial measurement unit sensors on an elbow, the upper back, and the lower back to measure the acceleration of each body part as the user does pushups for 10 seconds. Those data are then analyzed and compared to the measurements of what is considered the “ideal” pushup form. 

A particular challenge: getting the data from various sources analyzed in a reasonable amount of time. The system uses a multiplex approach, but just “listens” to one input at a time. “That makes it easier to record data at a faster rate,” Demisse says.
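
Neither the firmware nor the scoring method is published, but the two ideas Demisse describes, round-robin polling of the sensors and comparison against an ideal trace, can be sketched as follows; the sensor-reading function and the ideal data are hypothetical placeholders.

```python
# Sketch of multiplexed IMU sampling plus a simple form score.
# read_imu() and the "ideal" trace are hypothetical stand-ins.
import numpy as np

SENSORS = ["elbow", "upper_back", "lower_back"]

def read_imu(name: str) -> float:
    """Placeholder: query the multiplexer for one sensor's acceleration."""
    return 0.0

def sample_round_robin(n_samples: int) -> dict:
    """Listen to one input at a time, cycling through the sensors."""
    traces = {name: [] for name in SENSORS}
    for i in range(n_samples):
        name = SENSORS[i % len(SENSORS)]
        traces[name].append(read_imu(name))
    return traces

def form_score(user: np.ndarray, ideal: np.ndarray) -> float:
    """Root-mean-square deviation from the ideal trace (lower is better)."""
    n = min(len(user), len(ideal))
    return float(np.sqrt(np.mean((user[:n] - ideal[:n]) ** 2)))
```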

Another team developed a fishing game in which users cast a handheld pole and pick up “fish” viewed on a nearby screen. First-year Rafael Olivera-Cintron demonstrates by casting; a soft noise accompanies the movement. “Do you hear that ambient sound? That’s lake sounds, the sounds of water and mosquitos,” he says. He casts again and waits. And waits. “Yes, it’s a lot like fishing. A lot of waiting,” he says. “That’s my favorite part.” His teammates included EECS juniors Mohamadou Bella Bah and Chad Wood and EECS sophomores Julian Espada and Veronica Muriga.

Several teams’ projects involve music. Diana Voronin, Julia Moseyko, and Terryn Brunelle, all first-year students, are happy to show off  “DJam,” an interconnected spin on Guitar Hero. Rather than pushing buttons that correspond to imaginary guitar chords, users spin a turntable to different positions — all to the beat of a song playing in the background. 

“We just knew we wanted to do something with music because it would be fun,” Moseyko says. “We also wanted to work with something that turned. From a technical point of view, it was interesting to use that kind of sensor.”

Music from the Middle Ages inspired the team of Shahir Rahman and Patrick Kao, both sophomores in EECS, and Adam Potter and Lilia Luong, both first-years. Using a plywood version of a medieval instrument called a hurdy-gurdy, they created “Hurdy-Gurdy Hero,” which uses a built-in microphone to capture and save favorite songs to a database that processes the audio into a playable game.

“The idea is to give joy, to be able to play an actual instrument but not necessarily just for those who [already] know [how] to play,” Rahman says. He cranks the machine and slightly squeaky but oddly harmonic notes emerge. Other students are clearly impressed by what they’re hearing. Olivera-Cintron sums up in just three words: “That is awesome.”



from MIT News http://bit.ly/2KzN3wN

Want to learn how to train an artificial intelligence model? Ask a friend.

The MIT Machine Intelligence Community (MIC) began with a few friends meeting over pizza to discuss landmark papers in machine learning. Three years later, the undergraduate club boasts 500 members, an active Slack channel, and an impressive lineup of student-led reading groups and workshops meant to demystify machine learning and artificial intelligence (AI) generally. This year, MIC and the MIT Quest for Intelligence joined forces to advance their common cause of making AI tools accessible to all.

Starting last fall, the MIT Quest opened its offices to MIC members and extended access to IBM- and Google-donated cloud credits, providing a boost of computing power to students previously limited to running their AI models on desktop machines loaded with extra graphics processors. The MIT Quest and MIC are now collaborating on a host of projects, independently and through MIT’s Undergraduate Research Opportunities Program (UROP).

“We heard about their mission to spread machine learning to all undergrads and thought, ‘That’s what we’re trying to do — let’s do it together!’” says Joshua Joseph, chief software engineer with the MIT Quest Bridge.

A makerspace for AI

U.S. Army ROTC students Ian Miller and Rishi Shah came to MIC for the free cloud credits, but stayed for the workshop on neural computing sticks. A compute stick allows mobile devices to do image processing on the fly, and when the cadets learned what one could do, they knew their idea for a portable computer vision system would work. 

“Without that, we’d have to send images to a central place to do all this computing,” says Miller, a rising junior. “It would have been a logistical headache.”

Built in two months, for $200, their wallet-sized device is designed to plug into a tablet strapped to an Army soldier’s chest and scan the surrounding area for cars and people. With more training, they say, it could learn to spot cellphones and guns. In May, the cadets demo'd their device at MIT’s Soldier Design Competition and were invited by an Army sergeant to visit Fort Devens to continue working on it. 
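
The cadets’ model and software stack are not described, but the kind of on-device detection they demonstrate can be sketched with OpenCV’s DNN module and a pretrained MobileNet-SSD network; the model files here are assumptions, and on a neural compute stick the inference target would be set to the stick rather than the CPU.

```python
# Hedged sketch of person/car detection with OpenCV DNN and MobileNet-SSD.
# Model files are assumed to be downloaded separately; the cadets' actual
# pipeline is not published.
import cv2

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)  # offload to a compute stick
CLASSES = {7: "car", 15: "person"}  # MobileNet-SSD class ids of interest

def detect(frame, conf_threshold=0.5):
    """Return (label, confidence, box) for each detected person or car."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()
    hits = []
    for i in range(detections.shape[2]):
        conf = float(detections[0, 0, i, 2])
        cls = int(detections[0, 0, i, 1])
        if conf >= conf_threshold and cls in CLASSES:
            box = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
            hits.append((CLASSES[cls], conf, box))
    return hits
```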

Rose Wang, a rising senior majoring in computer science, was also drawn to MIC by the free cloud credits, and a chance to work on projects with the Quest and other students. This spring, she used IBM cloud credits to run a reinforcement learning model that’s part of her research with MIT Professor Jonathan How, training robot agents to cooperate on tasks that involve limited communication and information. She recently presented her results at a workshop at the International Conference on Machine Learning.

“It helped me try out different techniques without worrying about the compute bottleneck and running out of resources,” she says. 

Improving AI access at MIT

The MIC has launched several AI projects of its own. The most ambitious is Monkey, a container-based, cloud-native service that would allow MIT undergraduates to log in and train an AI model from anywhere, tracking the training as it progresses and managing the credits allotted to each student. On a Friday afternoon in April, the team gathered in a Quest conference room as Michael Silver, a rising senior, sketched out the modules Monkey would need.

As Silver scrawled the words "Docker Image Build Service" on the board, the student assigned to research the module apologized. “I didn’t make much progress on it because I had three midterms!” he said. 

The planning continued, with Steven Shriver, a software engineer with the Quest Bridge, interjecting bits of advice. The students had assumed the container service they planned to use, Docker, would be secure. It isn’t. 

“Well, I guess we have another task here,” said Silver, adding the word “security” to the white board. 

Later, the sketch would be turned into a design document and shared with the two UROP students helping to execute Monkey. The team hopes to launch sometime next year. 

“The coding isn’t the difficult part,” says UROP student Amanda Li, a member of MIC Dev-Ops. “It’s exploring the server side of machine learning — Docker, Google Cloud, and the API. The most important thing I’ve learned is how to efficiently design and pipeline a project as big as this.”
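
Monkey had not launched at the time of writing, so the sketch below only illustrates the core loop the team describes, starting a containerized training job and tracking it while enforcing a credit budget, using the Docker SDK for Python; the image name, command, and credit check are hypothetical.

```python
# Sketch of Monkey's described workflow: run one training job in a container,
# stream its logs, and gate on remaining credits. Details are hypothetical.
import docker

client = docker.from_env()

def launch_training(user: str, image: str, command: str, credits_left: int):
    """Run a containerized training job if the student still has credits."""
    if credits_left <= 0:
        raise RuntimeError(f"{user} has no training credits remaining")
    container = client.containers.run(
        image,                       # e.g. a student-built training image
        command,                     # e.g. "python train.py --epochs 10"
        detach=True,
        labels={"monkey.user": user},
    )
    for line in container.logs(stream=True):  # track training as it progresses
        print(line.decode().rstrip())
    return container.wait()

# launch_training("student", "monkey/mnist:latest", "python train.py", 5)
```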

Silver knew he wanted to be an AI engineer in 2016, when the computer program AlphaGo defeated the world’s reigning Go champion. As a senior at Boston University Academy, Silver worked on natural language processing in the lab of MIT Professor Boris Katz, and has continued to work with Katz since coming to MIT. Seeking more coding experience, he left HackMIT, where he had been co-director, to join MIC Dev-Ops.

“A lot of students read about machine learning models, but have no idea how to train one,” he says. “Even if you know how to train one, you’d need to save up a few thousand dollars to buy the GPUs to do it. MIC lets students interested in machine learning reach that next level.” 

Conceived by MIC members, a second project is focused on making AI research papers posted on arXiv easier to explore. Nearly 14,000 academic papers are uploaded each month to the site, and although papers are tagged by field, drilling into subtopics can be overwhelming.

Wang, for one, grew frustrated while doing a basic literature search on reinforcement learning. “You have a ton of data and no effective way of representing it to the user,” she says. “It would have been useful to see the papers in a larger context, and to explore by number of citations or their relevance to each other.”
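
The explorer itself is still in progress, but its first step, pulling paper metadata, can use arXiv’s public query API. A minimal sketch with the standard library and feedparser, with the query string as an assumption:

```python
# Fetch recent papers from the public arXiv API as raw material for a
# citation/relevance explorer. The query and fields shown are illustrative.
import urllib.parse
import urllib.request
import feedparser

def fetch_arxiv(query: str = "cat:cs.LG", max_results: int = 10):
    """Return (published, title, id) tuples for recent matching papers."""
    url = "http://export.arxiv.org/api/query?" + urllib.parse.urlencode({
        "search_query": query,
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    })
    with urllib.request.urlopen(url) as resp:
        feed = feedparser.parse(resp.read())
    return [(e.published[:10], e.title, e.id) for e in feed.entries]

for date, title, link in fetch_arxiv("all:reinforcement learning"):
    print(date, title, link)
```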

A third MIC project focuses on crawling MIT’s hundreds of listservs for AI-related talks and events to populate a Google calendar. The tool will be closely patterned after an app Silver helped build during MIT’s Independent Activities Period in January. Called Dormsp.am, the app classifies listserv emails sent to MIT undergraduates and plugs them into a calendar-email client. Students can then search for events by day or by a color-coded topic, such as tech, food, or jobs. Once Dormsp.am launches, Silver will adapt it to search for and post AI-related events at MIT to an MIC calendar.
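
Dormsp.am’s classifier is not described in detail, but the basic step, sorting listserv emails into color-coded topics, can be sketched with an off-the-shelf text pipeline; the training emails and topic labels below are placeholders.

```python
# Toy topic classifier for listserv emails: TF-IDF features plus naive Bayes.
# Training data and labels are hypothetical; Dormsp.am's model isn't public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Free pizza in the lobby at 6pm tonight",
    "Recruiting: software internship, apply by Friday",
    "Tech talk: deep learning for robotics, room 32-123",
]
topics = ["food", "jobs", "tech"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, topics)

print(model.predict(["Hiring UROP students for an AI project"]))  # predicted topic
```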

Silver says the team spent extra time on the user interface, taking a page from MIT Professor Daniel Jackson’s Software Studio class. “This is an app that can live or die on its usability, so the front end is really important,” he says.  

Wang is now collaborating with Moin Nadeem, MIC’s outgoing president, to build the visualization tool. It’s exactly the kind of hands-on experience MIC was intended to provide, says Nadeem, a rising senior. “Students learn fundamental concepts in class but don’t know how to implement them,” he says. “I’m trying to build what freshman me would have liked to have had: a community of people excited to do interesting stuff with machine learning.” 



from MIT News http://bit.ly/2YlXmHG

Projects advance naval ship design and capabilities

For the past 20 years, officials from the U.S. Navy and leaders in the shipbuilding industry have convened on MIT’s campus each spring for the MIT Ship Design and Technology Symposium. The daylong event is a platform to update industry and military leaders on the latest groundbreaking research in naval construction and engineering being conducted at MIT.

The main event of the symposium was the design project presentations given by Course 2N (Naval Construction and Engineering) graduate students. These projects serve as a capstone of their three-year curriculum.

This year, recent graduate Andrew Freeman MEng '19, SM '19, who was advised by Dick K. P. Yue, the Philip J. Solondz Professor of Engineering, and William Taft MEng '19, SM '19, who works with James Kirtley, professor of electrical engineering and computer science, presented their current research. Rear Admiral Ronald A. Boxall, director of surface warfare at the U.S. Navy, served as keynote speaker at the event, which took place in May.

“The Ship Design and Technology Symposium gives students in the 2N program the opportunity to present ship and submarine design and conversions, as well as thesis research, to the leaders of the U.S. Navy and design teams from industry,” explains Joe Harbour, professor of the practice of naval construction at MIT. “Through the formal presentation and poster sessions, the naval and industrial leaders can better understand opportunities to improve designs and design processes.”

Since 1901, the Course 2N program has been educating active-duty officers in the Navy and U.S. Coast Guard, in addition to foreign naval officers. This year, eight groups of 2N students presented design or conversion project briefs to an audience of experts in the Samberg Conference Center.

The following three projects exemplify the ways in which these students are adapting existing naval designs and creating novel designs that can help increase the capabilities and efficiency of naval vessels.

The next generation of hospital ships

The Navy has a fleet of hospital ships ready for any major combat situations that might arise. These floating hospitals allow doctors to care for large numbers of casualties, perform operations, stabilize patients, and help transfer patients to other medical facilities.

Lately, these ships have been instrumental in response efforts during major disasters — such as the recent hurricanes in the Caribbean. The ships also provide an opportunity for doctors to train local medical professionals in developing countries.

The Navy's current fleet of hospital ships is aging. Designed in the 1980s, these ships require an update to complement the way naval operations are conducted in modern times. As such, the U.S. Navy is looking to launch the next fleet of hospital ships in 2035.

A team of Course 2N students including Aaron Sponseller, Travis Rapp, and Robert Carelli was tasked with assessing current hospital ship designs and proposing a design for the next generation of hospital ships.

“We looked at several different hull form sizes that could achieve the goals of our sponsors, and assigned scores to rank their attributes and determine which one could best achieve their intended mission,” explains Carelli.

In addition to visiting the USNS Mercy, one of the Navy’s current hospital ships, the team toured nearby Tufts Medical Center to get a sense of what a state-of-the-art medical facility looked like. One thing that immediately struck the team was how different the electrical needs of a modern-day medical facility are from the needs nearly 40 years ago, when the medical ships were first being designed.

“Part of the problem with the current ships is they scaled their electrical capacity with older equipment from the 1980s in mind,” adds Rapp. This capacity doesn’t account for the increased electrical burden of digital CT scans, high-tech medical devices, and communication suites.

The current ships have a separate propulsion plant and electrical generation plant. The team found that combining the two would increase the ship’s electrical capacity, especially while "on station" — a term used when a ship maintains its position in the water.

“These ships spend a lot of time on station while doctors operate on patients,” explains Carelli. “By using the same system for propelling and electrical generation, you have a lot more capacity for these medical operations when it’s on station and for speed when the ship is moving.”

The team also recommended that the ship be downsized and tailored to treat intensive care cases, rather than devoting so much space to wards for patients in stable condition. “We trimmed the fat, so to speak, and are moving the ship toward what really delivers value — intensive care capability for combat operations,” says Rapp.

The team hopes their project will inform the decisions the Navy makes when they do replace large hospital ships in 2035. “The Navy goes through multiple iterations of defining how they want their next ship to be designed and we are one small step in that process,” adds Sponseller.

Autonomous fishing vessels

Over the past few decades, advances in artificial intelligence and sensory hardware have led to increasingly sophisticated unmanned vehicles in the water. Sleek autonomous underwater vehicles operate below the water’s surface. Rather than work on these complex and often expensive machines, Course 2N students Jason Barker, David Baxter, and Brian Stanfield assessed the possibility of using something far more commonplace for their design project: fishing vessels.

“We were charged with looking at the possibility of going into a port, acquiring a low-end vessel like a fishing boat, and making that boat an autonomous machine for various missions,” explains Barker.

With such a broad scope, Barker and his teammates set some parameters to guide their research. They homed in on one fishing boat in particular: a 44-foot four-drum seiner.

The next step was determining how such a vessel could be outfitted with sensors to carry out a range of missions including measuring marine life, monitoring marine traffic in a given area, carrying out intelligence, surveillance and reconnaissance (ISR) missions, and, perhaps most importantly, conducting search and rescue operations.

The team estimated that the cost of transforming an everyday fishing boat into an autonomous vehicle would be roughly $2 million — substantially lower than building a new autonomous vehicle. The relatively low cost could make this an appealing exercise in areas where piracy is a potential concern. “Because the price of entry is so low, it’s not as risky as using a capital asset in these areas,” Barker explains.

The low price could also lead to a number of such autonomous vehicles in a given area. “You could put out a lot of these vessels,” adds Barker. “With the advances of swarm technologies you could create a network or grid of autonomous boats.”

Increasing endurance and efficiency in Freedom-class ships

For Course 2N student Charles Hasenbank, working on a conversion project for the engineering plant of Freedom-class ships was a natural fit. As a lieutenant in the U.S. Navy, Hasenbank served on the USS Freedom.

Freedom-class ships can reach upwards of 40 knots, 10 knots faster than most combat ships. “To get those extra knots requires a substantial amount of power,” explains Hasenbank. This power is generated by two diesel engines and two gas turbines that are also used to power large aircraft like the Dreamliner.

For their new frigate program, the Navy is looking to achieve a maximum speed of 30 knots, making the extra power provided by these engines unnecessary. The endurance range of these new frigates, however, would be higher than what the current Freedom-class ships allow. As such, Hasenbank and his fellow students Tikhon Ruggles and Cody White were tasked with exploring alternate forms of propulsion.

The team had five driving criteria in determining how to best convert the ships’ power system — minimize weight changes, increase efficiency, maintain or decrease acquisition costs, increase simplicity, and improve fleet commonality.

“The current design is a very capable platform, but the efficiencies aren’t there because speed was a driving factor,” explains Hasenbank.

When redesigning the engineering plant, the team landed on the use of four propellers, which would maintain the amount of draft currently experienced by these ships. To accommodate this change, the structure of the stern would need to be altered.

By removing a step currently in the stern design, the team made an unexpected discovery. Above 12 knots, their stern design would decrease hull resistance. “Something we didn’t initially expect was we improved efficiency and gained endurance through decreasing the hull resistance,” adds Hasenbank. “That was a nice surprise along the way.”

The team’s new design would meet the new frigate program’s 30-knot speed requirement while adding between 500 and 1,000 nautical miles of endurance to the ship.

Along with the other design projects presented at the MIT Ship Design and Technology Symposium, the work conducted by Hasenbank and his team could inform important decisions the U.S. Navy has to make in the coming years as it looks to update and modernize its fleet.



from MIT News http://bit.ly/2NcUpZf

Letter to the MIT community: Immigration is a kind of oxygen

The following email was sent today to the MIT community by President L. Rafael Reif.

To the members of the MIT community,

MIT has flourished, like the United States itself, because it has been a magnet for the world’s finest talent, a global laboratory where people from every culture and background inspire each other and invent the future, together.

Today, I feel compelled to share my dismay about some circumstances painfully relevant to our fellow MIT community members of Chinese descent. And I believe that because we treasure them as friends and colleagues, their situation and its larger national context should concern us all.

The situation

As the US and China have struggled with rising tensions, the US government has raised serious concerns about incidents of alleged academic espionage conducted by individuals through what is widely understood as a systematic effort of the Chinese government to acquire high-tech IP.

As head of an institute that includes MIT Lincoln Laboratory, I could not take national security more seriously. I am well aware of the risks of academic espionage, and MIT has established prudent policies to protect against such breaches.

But in managing these risks, we must take great care not to create a toxic atmosphere of unfounded suspicion and fear. Looking at cases across the nation, small numbers of researchers of Chinese background may indeed have acted in bad faith, but they are the exception and very far from the rule. Yet faculty members, post-docs, research staff and students tell me that, in their dealings with government agencies, they now feel unfairly scrutinized, stigmatized and on edge – because of their Chinese ethnicity alone. 

Nothing could be further from – or more corrosive to – our community’s collaborative strength and open-hearted ideals. To hear such reports from Chinese and Chinese-American colleagues is heartbreaking. As scholars, teachers, mentors, inventors and entrepreneurs, they have been not only exemplary members of our community but exceptional contributors to American society. I am deeply troubled that they feel themselves repaid with generalized mistrust and disrespect.

The signal to the world

For those of us who know firsthand the immense value of MIT’s global community and of the free flow of scientific ideas, it is important to understand the distress of these colleagues as part of an increasingly loud signal the US is sending to the world.

Protracted visa delays. Harsh rhetoric against most immigrants and a range of other groups, because of religion, race, ethnicity or national origin. Together, such actions and policies have turned the volume all the way up on the message that the US is closing the door – that we no longer seek to be a magnet for the world’s most driven and creative individuals. I believe this message is not consistent with how America has succeeded. I am certain it is not how the Institute has succeeded. And we should expect it to have serious long-term costs for the nation and for MIT.

For the record, let me say with warmth and enthusiasm to every member of MIT’s intensely global community: We are glad, proud and fortunate to have you with us! To our alumni around the world: We remain one community, united by our shared values and ideals! And to all the rising talent out there: If you are passionate about making a better world, and if you dream of joining our community, we welcome your creativity, we welcome your unstoppable energy and aspiration – and we hope you can find a way to join us. 

* * *

In May, the world lost a brilliant creative force: architect I.M. Pei, MIT Class of 1940. Raised in Shanghai and Hong Kong, he came to the United States at 17 to seek an education. He left a legacy of iconic buildings from Boston to Paris and China to Washington, DC, as well as on our own campus. By his own account, he consciously stayed alive to his Chinese roots all his life. Yet, when he died at the age of 102, the Boston Globe described him as “the most prominent American architect of his generation.”

Thanks to the inspired American system that also made room for me as an immigrant, all of those facts can be true at the same time.

As I have discovered through 40 years in academia, the hidden strength of a university is that every fall, it is refreshed by a new tide of students. I am equally convinced that part of the genius of America is that it is continually refreshed by immigration – by the passionate energy, audacity, ingenuity and drive of people hungry for a better life.

There is certainly room for a wide range of serious positions on the actions necessary to ensure our national security and to manage and improve our nation’s immigration system. But above the noise of the current moment, the signal I believe we should be sending, loud and clear, is that the story of American immigration is essential to understanding how the US became, and remains, optimistic, open-minded, innovative and prosperous – a story of never-ending renewal.

In a nation like ours, immigration is a kind of oxygen, each fresh wave reenergizing the body as a whole. As a society, when we offer immigrants the gift of opportunity, we receive in return vital fuel for our shared future. I trust that this wisdom will always guide us in the life and work of MIT. And I hope it can continue to guide our nation.

Sincerely,

L. Rafael Reif



from MIT News http://bit.ly/2IIdTk6