Thursday, June 30, 2022

Better living through multicellular life cycles

Cooperation is a core part of life for many organisms, ranging from microbes to complex multicellular life. It emerges when individuals share resources or partition a task in such a way that each derives a greater benefit when acting together than they could on their own. For example, birds and fish flock to evade predators, slime mold swarms to hunt for food and reproduce, and bacteria form biofilms to resist stress.

Individuals must live in the same “neighborhood” to cooperate. For bacteria, this neighborhood can be as small as tens of microns. But in environments like the ocean, it’s rare for cells with the same genetic makeup to co-occur in the same neighborhood on their own. And this necessity poses a puzzle to scientists: In environments where survival hinges on cooperation, how do bacteria build their neighborhood?

To study this problem, MIT professor Otto X. Cordero and colleagues took inspiration from nature: They developed a model system around a common coastal seawater bacterium that requires cooperation to eat sugars from brown algae. In the system, single cells were initially suspended in seawater too far away from other cells to cooperate. To share resources and grow, the cells had to find a mechanism of creating a neighborhood. “Surprisingly, each cell was able to divide and create its own neighborhood of clones by forming tightly packed clusters,” says Cordero, associate professor in the Department of Civil and Environmental Engineering.

A new paper, published today in Current Biology, demonstrates how an algae-eating bacterium solves the engineering challenge of creating local cell density starting from a single-celled state.

“A key discovery was the importance of phenotypic heterogeneity in supporting this surprising mechanism of clonal cooperation,” says Cordero, lead author of the new paper.

Using a combination of microscopy, transcriptomics, and labeling experiments to profile a cellular metabolic state, the researchers found that cells phenotypically differentiate into a sticky “shell” population and a motile, carbon-storing “core.” The researchers propose that shell cells create the cellular neighborhood needed to sustain cooperation while core cells accumulate stores of carbon that support further clonal reproduction when the multicellular structure ruptures.

This work addresses a key piece in the bigger challenge of understanding the bacterial processes that shape our earth, such as the cycling of carbon from dead organic matter back into food webs and the atmosphere. “Bacteria are fundamentally single cells, but often what they accomplish in nature is done through cooperation. We have much to uncover about what bacteria can accomplish together and how that differs from their capacity as individuals,” adds Cordero.

Co-authors include Julia Schwartzman and Ali Ebrahimi, former postdocs in the Cordero Lab. Other co-authors are Gray Chadwick, a former graduate student at Caltech; Yuya Sato, a senior researcher at Japan’s National Institute of Advanced Industrial Science and Technology; Benjamin Roller, a current postdoc at the University of Vienna; and Victoria Orphan of Caltech.

Funding was provided by the Simons Foundation. Individual authors received support from the Swiss National Science Foundation, Japan Society for the Promotion of Science, the U.S. National Science Foundation, the Kavli Institute of Theoretical Physics, and the National Institutes of Health.



from MIT News https://ift.tt/rtHBcA2

Wednesday, June 29, 2022

Building explainability into the components of machine-learning models

Explanation methods that help users understand and trust machine-learning models often describe how much certain features used in the model contribute to its prediction. For example, if a model predicts a patient’s risk of developing cardiac disease, a physician might want to know how strongly the patient’s heart rate data influences that prediction.

But if those features are so complex or convoluted that the user can’t understand them, does the explanation method do any good?

MIT researchers are striving to improve the interpretability of features so decision makers will be more comfortable using the outputs of machine-learning models. Drawing on years of field work, they developed a taxonomy to help developers craft features that will be easier for their target audience to understand.

“We found that out in the real world, even though we were using state-of-the-art ways of explaining machine-learning models, there is still a lot of confusion stemming from the features, not from the model itself,” says Alexandra Zytek, an electrical engineering and computer science PhD student and lead author of a paper introducing the taxonomy.

To build the taxonomy, the researchers defined properties that make features interpretable for five types of users, from artificial intelligence experts to the people affected by a machine-learning model’s prediction. They also offer instructions for how model creators can transform features into formats that will be easier for a layperson to comprehend.

They hope their work will inspire model builders to consider using interpretable features from the beginning of the development process, rather than trying to work backward and focus on explainability after the fact.

MIT co-authors include Dongyu Liu, a postdoc; visiting professor Laure Berti-Équille, research director at IRD; and senior author Kalyan Veeramachaneni, principal research scientist in the Laboratory for Information and Decision Systems (LIDS) and leader of the Data to AI group. They are joined by Ignacio Arnaldo, a principal data scientist at Corelight. The research is published in the June edition of the Association for Computing Machinery Special Interest Group on Knowledge Discovery and Data Mining’s peer-reviewed Explorations Newsletter.

Real-world lessons

Features are input variables that are fed to machine-learning models; they are usually drawn from the columns in a dataset. Data scientists typically select and handcraft features for the model, and they mainly focus on ensuring features are developed to improve model accuracy, not on whether a decision-maker can understand them, Veeramachaneni explains.

For several years, he and his team have worked with decision makers to identify machine-learning usability challenges. These domain experts, most of whom lack machine-learning knowledge, often don’t trust models because they don’t understand the features that influence predictions.

For one project, they partnered with clinicians in a hospital ICU who used machine learning to predict the risk a patient will face complications after cardiac surgery. Some features were presented as aggregated values, like the trend of a patient’s heart rate over time. While features coded this way were “model ready” (the model could process the data), clinicians didn’t understand how they were computed. They would rather see how these aggregated features relate to original values, so they could identify anomalies in a patient’s heart rate, Liu says.

By contrast, a group of learning scientists preferred features that were aggregated. Instead of having a feature like “number of posts a student made on discussion forums” they would rather have related features grouped together and labeled with terms they understood, like “participation.”

“With interpretability, one size doesn’t fit all. When you go from area to area, there are different needs. And interpretability itself has many levels,” Veeramachaneni says.

The idea that one size doesn’t fit all is key to the researchers’ taxonomy. They define properties that can make features more or less interpretable for different decision makers and outline which properties are likely most important to specific users.

For instance, machine-learning developers might focus on having features that are compatible with the model and predictive, meaning they are expected to improve the model’s performance.

On the other hand, decision makers with no machine-learning experience might be better served by features that are human-worded, meaning they are described in a way that is natural for users, and understandable, meaning they refer to real-world metrics users can reason about.

“The taxonomy says, if you are making interpretable features, to what level are they interpretable? You may not need all levels, depending on the type of domain experts you are working with,” Zytek says.

Putting interpretability first

The researchers also outline feature engineering techniques a developer can employ to make features more interpretable for a specific audience.

Feature engineering is a process in which data scientists transform data into a format machine-learning models can process, using techniques like aggregating data or normalizing values. Most models also can’t process categorical data unless they are converted to a numerical code. These transformations are often nearly impossible for laypeople to unpack.

Creating interpretable features might involve undoing some of that encoding, Zytek says. For instance, a common feature engineering technique organizes spans of data so they all contain the same number of years. To make these features more interpretable, one could group age ranges using human terms, like infant, toddler, child, and teen. Or rather than using a transformed feature like average pulse rate, an interpretable feature might simply be the actual pulse rate data, Liu adds.
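A minimal pandas sketch of that contrast, using a hypothetical patient table (the column names, bins, and labels below are illustrative assumptions, not features from the projects described here):

import pandas as pd

# Hypothetical patient table; the column names and values are illustrative only.
df = pd.DataFrame({
    "age_years": [0.5, 2.0, 7.0, 14.0],
    "pulse_readings": [[120, 118, 125], [110, 112], [95, 98, 96], [78, 80, 82]],
})

# "Model ready" encoding: a normalized aggregate that is hard for a clinician to unpack.
avg_pulse = df["pulse_readings"].apply(lambda xs: sum(xs) / len(xs))
df["avg_pulse_z"] = (avg_pulse - avg_pulse.mean()) / avg_pulse.std()

# More interpretable alternatives: human-worded age groups and the raw readings themselves.
df["age_group"] = pd.cut(
    df["age_years"],
    bins=[0, 1, 3, 13, 20],
    labels=["infant", "toddler", "child", "teen"],
)
df["latest_pulse_bpm"] = df["pulse_readings"].apply(lambda xs: xs[-1])

print(df[["age_years", "age_group", "avg_pulse_z", "latest_pulse_bpm"]])

The aggregated z-score is the kind of feature a model might prefer, while the labeled age groups and raw pulse values are the kind of features a clinician or learning scientist could reason about directly.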

“In a lot of domains, the tradeoff between interpretable features and model accuracy is actually very small. When we were working with child welfare screeners, for example, we retrained the model using only features that met our definitions for interpretability, and the performance decrease was almost negligible,” Zytek says.

Building off this work, the researchers are developing a system that enables a model developer to handle complicated feature transformations in a more efficient manner, to create human-centered explanations for machine-learning models. This new system will also convert algorithms designed to explain model-ready datasets into formats that can be understood by decision makers.



from MIT News https://ift.tt/jSHdTER

3 Questions: Marking the 10th anniversary of the Higgs boson discovery

This July 4 marks 10 years since the discovery of the Higgs boson, the long-sought particle that imparts mass to all elementary particles. The elusive particle was the last missing piece in the Standard Model of particle physics, which is our most complete model of the universe.

In early summer of 2012, signs of the Higgs particle were detected in the Large Hadron Collider (LHC), the world’s largest particle accelerator, which is operated by CERN, the European Organization for Nuclear Research. The LHC is engineered to smash together billions upon billions of protons for the chance at producing the Higgs boson and other particles that are predicted to have been created in the early universe.

In analyzing the products of countless proton-on-proton collisions, scientists registered a Higgs-like signal in the accelerator’s two independent detectors, ATLAS and CMS (the Compact Muon Solenoid). Specifically, the teams observed signs that a new particle had been created and then decayed to two photons, two Z bosons or two W bosons, and that this new particle was likely the Higgs boson.

The discovery was revealed within the CMS collaboration, which includes over 3,000 scientists, on June 15, and ATLAS and CMS announced their respective observations to the world on July 4. More than 50 MIT physicists and students contributed to the CMS experiment, including Christoph Paus, professor of physics, who was one of the experiment’s two lead investigators to organize the search for the Higgs boson.

As the LHC prepares to start back up on July 5 with “Run 3,” MIT News spoke with Paus about what physicists have learned about the Higgs particle in the last 10 years, and what they hope to discover with this next deluge of particle data.

Q: Looking back, what do you remember as the key moments leading up to the Higgs boson’s discovery?

A: I remember that by the end of 2011, we had taken a significant amount of data, and there were some first hints that there could be something, but nothing that was conclusive enough. It was clear to everybody that we were entering the critical phase of a potential discovery. We still wanted to improve our searches, and so we decided, which I felt was one of the most important decisions we took, that we had to remove the bias — that is, remove our knowledge about where the signal could appear. Because it’s dangerous as a scientist to say, “I know the solution,” which can influence the result unconsciously. So, we made that decision together in the coordination group and said, we are going to get rid of this bias by doing what people refer to as a “blind” analysis. This allowed the analyzers to focus on the technical aspects, making sure everything was correct without having to worry about being influenced by what they saw.

Then, of course, there had to be the moment where we unblind the data and really look to see, is the Higgs there or not. And about two weeks before the scheduled presentations on July 4 where we eventually announced the discovery, there was a meeting on June 15 to show the analysis with its results to the collaboration. The most significant analysis turned out to be the two-photon analysis. One of my students, Joshua Bendavid PhD ’13, was leading that analysis, and the night before the meeting, only he and another person on the team were allowed to unblind the data. They were working until 2 in the morning, when they finally pushed a button to see what it looks like. And they were the first in CMS to have that moment of seeing that [the Higgs boson] was there. Another student of mine who was working on this analysis, Mingming Yang PhD ’15, presented the results of that search to the Collaboration at CERN that following afternoon. It was a very exciting moment for all of us. The room was hot and filled with electricity.

The scientific process of the discovery was very well-designed and executed, and I think it can serve as a blueprint for how people should do such searches.

Q: What more have scientists learned of the Higgs boson since the particle’s detection?

A: At the time of the discovery, something interesting happened I did not really expect. While we were always talking about the Higgs boson before, we became very careful once we saw that “narrow peak.” How could we be sure that it was the Higgs boson and not something else? It certainly looked like the Higgs boson, but our vision was quite blurry. It could have turned out in the following years that it was not the Higgs boson. But as we now know, with so much more data, everything is completely consistent with what the Higgs boson is predicted to look like, so we became comfortable with calling the narrow resonance not just a Higgs-like particle but rather simply the Higgs boson. And there were a few milestones that made sure this is really the Higgs as we know it.

The initial discovery was based on Higgs bosons decaying to two photons, two Z bosons or two W bosons. That was only a small fraction of decays that the Higgs could undergo. There are many more. The rate at which the Higgs boson decays into a particular set of particles depends critically on their masses. This characteristic feature is essential to confirm that we are really dealing with the Higgs boson.
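For reference, the textbook tree-level Standard Model expression for the partial width of a Higgs decay to a fermion-antifermion pair makes that mass dependence explicit (this is standard background, not a formula from the interview):

\Gamma(H \to f\bar{f}) = \frac{N_c \, G_F \, m_H \, m_f^{2}}{4\sqrt{2}\,\pi}\left(1 - \frac{4 m_f^{2}}{m_H^{2}}\right)^{3/2}

where N_c is 3 for quarks and 1 for leptons, so the rate grows with the square of the fermion mass and heavier fermions such as b quarks and tau leptons dominate over lighter ones like muons.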

What we found since then is that the Higgs boson does not only decay to bosons, but also to fermions, which is not obvious because bosons are force carrier particles while fermions are matter particles. The first new decay was the decay to tau leptons, the heavier sibling of the electron. The next step was the observation of the Higgs boson decaying to b quarks, the heaviest quark that the Higgs can decay to. The b quark is the heaviest sibling of the down quark, which is a building block of protons and neutrons and thus all atomic nuclei around us. These two fermions are part of the heaviest generation of fermions in the standard model. Only recently the Higgs boson was observed to decay to muons, the charged lepton of the second and thus lighter generation, at the expected rate. Also, the direct coupling to the heaviest top quark was established; together with the muon, these measurements span four orders of magnitude in mass, and the Higgs coupling behaves as expected over this wide range.

Q: As the Large Hadron Collider gears up for its new “Run 3,” what do you hope to discover next?

A: One very interesting question that Run 3 might give us some first hints on is the self-coupling of the Higgs boson. As the Higgs couples to any massive particle, it can also couple to itself. It is unlikely that there is enough data to make a discovery, but first hints of this coupling would be very exciting to see, and this constitutes a fundamentally different test than what has been done so far.

Another interesting aspect that more data will help to elucidate is the question of whether the Higgs boson might be a portal and decay to invisible particles that could be candidates for explaining the mystery of dark matter in the universe. This is not predicted in our standard model and thus would unveil the Higgs boson as an imposter.

Of course, we want to double down on all the measurements we have made so far and see whether they continue to line up with our expectations.

This is true also for the upcoming major upgrade of the LHC (runs starting in 2029) for what we refer to as the High Luminosity LHC (HL-LHC). Another factor of 10 more events will be accumulated during this program, which for the Higgs boson means we will be able to observe its self-coupling. For the far future, there are plans for a Future Circular Collider, which could ultimately measure the total decay width of the Higgs boson independent of its decay mode, which would be another important and very precise test of whether the Higgs boson is an imposter.

Like any other good physicist, though, I hope that we can find a crack in the armor of the Standard Model, which is so far holding up all too well. There are a number of very important observations, for example the nature of dark matter, that cannot be explained using the Standard Model. All of our future studies, from Run 3 starting on July 5 to the FCC in the far future, will give us access to entirely uncharted territory. New phenomena can pop up, and I like to be optimistic.



from MIT News https://ift.tt/36Nd1KE

Kerry Emanuel: A climate scientist and meteorologist in the eye of the storm

Kerry Emanuel once joked that whenever he retired, he would start a “hurricane safari” so other people could experience what it’s like to fly into the eye of a hurricane.

“All of a sudden, the turbulence stops, the sun comes out, bright sunshine, and it's amazingly calm. And you're in this grand stadium [of clouds miles high],” he says. “It’s quite an experience.”

While the hurricane safari is unlikely to come to fruition — “You can’t just conjure up a hurricane,” he explains — Emanuel, a world-leading expert on links between hurricanes and climate change, is retiring from teaching in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) at MIT after a more than 40-year career.

Best known for his foundational contributions to the science of tropical cyclones, climate, and links between them, Emanuel has also been a prominent voice in public debates on climate change, and what we should do about it.

“Kerry has had an enormous effect on the world through the students and junior scientists he has trained,” says William Boos PhD ’08, an atmospheric scientist at the University of California at Berkeley. “He’s a brilliant enough scientist and theoretician that he didn’t need any of us to accomplish what he has, but he genuinely cares about educating new generations of scientists and helping to launch their careers.”

In recognition of Emanuel’s teaching career and contributions to science, a symposium was held in his honor at MIT on June 21 and 22, organized by several of his former students and collaborators, including Boos. Research presented at the symposium focused on the many fields influenced by Emanuel’s more than 200 published research papers — on everything from forecasting the risks posed by tropical cyclones to understanding how rainfall is produced by continent-sized patterns of atmospheric circulation.

Emanuel’s career observing perturbations of Earth’s atmosphere started earlier than he can remember. “According to my older brother, from the age of 2, I would crawl to the window whenever there was a thunderstorm,” he says. At first, those were the rolling thunderheads of the Midwest where he grew up, then it was the edges of hurricanes during a few teenage years in Florida. Eventually, he would find himself watching from the very eye of the storm, both physically and mathematically.

Emanuel attended MIT both as an undergraduate studying Earth and planetary sciences, and for his PhD in meteorology, writing a dissertation on thunderstorms that form ahead of cold fronts. Within the department, he worked with some of the central figures of modern meteorology such as Jule Charney, Fred Sanders, and Edward Lorenz — the founder of chaos theory.

After receiving his PhD in 1978, Emanuel joined the faculty of the University of California at Los Angeles. During this period, he also took a semester sabbatical to film the wind speeds of tornadoes in Texas and Oklahoma. After three years, he returned to MIT and joined the Department of Meteorology in 1981. Two years later, the department merged with Earth and Planetary Sciences to form EAPS as it is known today, and where Emanuel has remained ever since.

At MIT, he shifted scales. The thunderstorms and tornadoes that had been the focus of Emanuel’s research up to then were local atmospheric phenomena, or “mesoscale” in the language of meteorologists. The larger “synoptic scale” storms that are hurricanes blew into Emanuel’s research when, as a young faculty member, he was asked to teach a class in tropical meteorology; in prepping for the class, Emanuel found his notes on hurricanes from graduate school no longer made sense.

“I realized I didn’t understand them because they couldn’t have been correct,” he says. “And so I set out to try to find a much better theoretical formulation for hurricanes.”

He soon made two important contributions. In 1986, his paper “An Air-Sea Interaction Theory for Tropical Cyclones. Part 1: Steady-State Maintenance” developed a new theory for upper limits of hurricane intensity given atmospheric conditions. This work in turn led to even larger-scale questions to address. “That upper bound had to be dependent on climate, and it was likely to go up if we were to warm the climate,” Emanuel says — a phenomenon he explored in another paper, “The Dependence of Hurricane Intensity on Climate,” which showed how warming sea surface temperatures and changing atmospheric conditions from a warming climate would make hurricanes more destructive.

“In my view, this is among the most remarkable achievements in theoretical geophysics,” says Adam Sobel PhD ’98, an atmospheric scientist at Columbia University who got to know Emanuel after he graduated and became interested in tropical meteorology. “From first principles, using only pencil-and-paper analysis and physical reasoning, he derives a quantitative bound on hurricane intensity that has held up well over decades of comparison to observations” and underpins current methods of predicting hurricane intensity and how it changes with climate.

This and diverse subsequent work led to numerous honors, including membership in the American Philosophical Society, the National Academy of Sciences, and the American Academy of Arts and Sciences.

Emanuel’s research was never confined to academic circles, however; when politicians and industry leaders voiced loud opposition to the idea that human-caused climate change posed a threat, he spoke up.

“I felt kind of a duty to try to counter that,” says Emanuel. “I thought it was an interesting challenge to see if you could go out and convince what some people call climate deniers, skeptics, that this was a serious risk and we had to treat it as such.”

In addition to many public lectures and media appearances discussing climate change, Emanuel penned a book for general audiences titled “What We Know About Climate Change,” as well as a widely read primer on climate change and risk assessment designed to influence business leaders.

“Kerry has an unmatched physical understanding of tropical climate phenomena,” says Emanuel’s colleague, Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at EAPS. “But he’s also a great communicator and has generously given his time to public outreach. His book ‘What We Know About Climate Change’ is a beautiful piece of work that is readily understandable and has captivated many a non-expert reader.”

Along with a number of other prominent climate scientists, Emanuel also began advocating for expanding nuclear power as the most rapid path to decarbonizing the world’s energy systems.

“I think the impediment to nuclear is largely irrational in the United States,” he says. “So, I've been trying to fight that just like I’ve been trying to fight climate denial.”

One lesson Emanuel has taken from his public work on climate change is that skeptical audiences often respond better to issues framed in positive terms than to doom and gloom; he’s found emphasizing the potential benefits rather than the sacrifices involved in the energy transition can engage otherwise wary audiences.

“It's really not opposition to science, per se,” he says. “It’s fear of the societal changes they think are required to do something about it.”

He has also worked to raise awareness about how insurance companies significantly underestimate climate risks in their policies, in particular by basing hurricane risk on unreliable historical data. One recent practical result has been a project by the First Street Foundation to assess the true flood risk of every property in the United States using hurricane models Emanuel developed.

“I think it's transformative,” Emanuel says of the project with First Street. “That may prove to be the most substantive research I've done.”

Though Emanuel is retiring from teaching, he has no plans to stop working. “When I say ‘retire’ it’s in quotes,” he says. In 2011, Emanuel and Professor of Geophysics Daniel Rothman founded the Lorenz Center, a climate research center at MIT in honor of Emanuel’s mentor and friend Edward Lorenz. Emanuel will continue to participate in work at the center, which aims to counter what Emanuel describes as a trend away from “curiosity-driven” work in climate science.

“Even if there were no such thing as global warming, [climate science] would still be a really, really exciting field,” says Emanuel. “There's so much to understand about climate, about the climates of the past, about the climates of other planets.”

In addition to work with the Lorenz Center, he’s become interested once again in tornadoes and severe local storms, and understanding whether climate also controls such local phenomena. He’s also involved in two of MIT’s Climate Grand Challenges projects focused on translating climate hazards to explicit financial and health risks — what will bring the dangers of climate change home to people, he says, is for the public to understand more concrete risks, like agricultural failure, water shortages, electricity shortages, and severe weather events. Capturing that will drive the next few years of his work.

“I’m going to be stepping up research in some respects,” he says, now living full-time at his home in Maine.

Of course, “retiring” does mean a bit more free time for new pursuits, like learning a language or an instrument, and “rediscovering the art of sailing,” says Emanuel. He’s looking forward to those days on the water, whatever storms are to come.



from MIT News https://ift.tt/yD4cRzH

Could carbon monoxide foam help fight inflammation?

Carbon monoxide is best known as a potentially deadly gas. However, in small doses it also has beneficial qualities: It has been shown to reduce inflammation and can help stimulate tissue regeneration.

A team of researchers led by MIT, Brigham and Women’s Hospital, the University of Iowa, and Beth Israel Deaconess Medical Center has now devised a novel way to deliver carbon monoxide to the body while bypassing its potentially hazardous effects. Inspired by techniques used in molecular gastronomy, they were able to incorporate carbon monoxide into stable foams that can be delivered to the digestive tract.

In a study of mice, the researchers showed that these foams reduced inflammation of the colon and helped to reverse acute liver failure caused by acetaminophen overdose. The new technique, described today in a Science Translational Medicine paper, could also be used to deliver other therapeutic gases, the researchers say.

“The ability to deliver a gas opens up whole new opportunities of how we think of therapeutics. We generally don’t think of a gas as a therapeutic that you would take orally (or that could be administered rectally), so this offers an exciting new way to think about how we can help patients,” says Giovanni Traverso, the Karl van Tassel Career Development Assistant Professor of Mechanical Engineering at MIT and a gastroenterologist at Brigham and Women’s Hospital.

Traverso and Leo Otterbein, a professor of surgery at Harvard Medical School and Beth Israel Deaconess Medical Center, are the senior authors of the paper. The lead authors are James Byrne, a physician-scientist and radiation oncologist at the University of Iowa (formerly a resident in the Mass General Brigham/Dana Farber Radiation Oncology Program), and a research affiliate at MIT’s Koch Institute for Integrative Cancer Research; David Gallo, a researcher at Beth Israel Deaconess; and Hannah Boyce, a research engineer at Brigham and Women’s.

Delivery by foam

Since the late 1990s, Otterbein has been studying the therapeutic effects of low doses of carbon monoxide. The gas has been shown to impart beneficial effects in preventing rejection of transplanted organs,  reducing tumor growth, and modulating inflammation and acute tissue injury. 

When inhaled at high concentrations, carbon monoxide binds to hemoglobin in the blood and prevents the body from obtaining enough oxygen, which can lead to serious health effects and even death. However, at lower doses, it has beneficial effects such as reducing inflammation and promoting tissue regeneration, Otterbein says.

“We’ve known for years that carbon monoxide can impart beneficial effects in all sorts of disease pathologies, when given as an inhaled gas,” he says. “However, it’s been a challenge to use it in the clinic, for a number of reasons related to safe and reproducible administration, and health care workers’ concerns, which has led to people wanting to find other ways to administer it.”

A few years ago, Traverso and Otterbein were introduced by Christoph Steiger, a former MIT postdoc and an author of the new study. Traverso’s lab specializes in developing novel methods for delivering drugs to the gastrointestinal tract. To tackle the challenge of delivering a gas, they came up with the idea of incorporating the gas into a foam, much the way that chefs use carbon dioxide to create foams infused with fruits, vegetables, or other flavors.

Culinary foams are usually created by adding a thickening or gelling agent to a liquid or a solid that has been pureed, and then either whipping it to incorporate air or using a specialized siphon that injects gases such as carbon dioxide or compressed air.

The MIT team created a modified siphon that could be attached to any kind of gas canister, allowing them to incorporate carbon monoxide into their foam. To create the foams, they used food additives such as alginate, methyl cellulose, and maltodextrin. Xanthan gum was also added to stabilize the foams. By varying the amount of xanthan gum, the researchers could control how long it would take for the gas to be released once the foams were administered.

After showing that they could control the timing of the gas release in the body, the researchers decided to test the foams for a few different applications. First, they studied two types of topical applications, analogous to applying a cream to soothe itchy or inflamed areas. In a study of mice, they found that delivering the foam rectally reduced inflammation caused by colitis or radiation-induced proctitis (inflammation of the rectum that can be caused by radiation treatment for cervical or prostate cancer).

Current treatments for colitis and other inflammatory conditions such as Crohn’s disease usually involve drugs that suppress the immune system, which can make patients more susceptible to infections. Treating those conditions with a foam that can be applied directly to inflamed tissue offers a potential alternative, or complementary approach, to those immunosuppressive treatments, the researchers say. While the foams were given rectally in this study, it could also be possible to deliver them orally, the researchers say.

“The foams are so easy to use, which will help with the translation to patient care,” Byrne says.  

Controlling the dose

The researchers then set out to investigate possible systemic applications, in which carbon monoxide could be delivered to remote organs, such as the liver, because of its ability to diffuse from the GI tract elsewhere in the body. For this study, they used a mouse model of acetaminophen overdose, which causes severe liver damage. They found that gas delivered to the lower GI tract was able to reach the liver and greatly reduce the amount of inflammation and tissue damage seen there.

In these experiments, the researchers did not find any adverse effects after the carbon monoxide administration. Previous studies in humans have shown that small amounts of carbon monoxide can be safely inhaled. A healthy individual has a carbon monoxide concentration of about 1 percent in the bloodstream, and studies of human volunteers have shown that levels as high as 14 percent can be tolerated without adverse effects.

“We think that with the foam used in this study, we’re not even coming close to the levels that we would be concerned about,” Otterbein says. “What we have learned from the inhaled gas trials has paved a path to say it’s safe, as long as you know and can control how much you’re giving, much like any medication. That’s another nice aspect of this approach — we can control the exact dose.”

In this study, the researchers also created carbon monoxide-containing gels, as well as gas-filled solids, using techniques similar to those used to make Pop Rocks, the hard candies that contain pressurized carbon dioxide bubbles. They plan to test those in further studies, in addition to developing the foams for possible tests in human patients.

The research was funded, in part, by a Prostate Cancer Foundation Young Investigator Award, a Department of Defense Prostate Cancer Program Early Investigator Award, a Hope Funds for Cancer Research fellowship, the National Football League Players Association, the Department of Defense, and MIT’s Department of Mechanical Engineering.



from MIT News https://ift.tt/tXEJ1Vs

Researchers pioneer a new way to detect microbial contamination in cell cultures

Researchers from the Critical Analytics For Manufacturing Personalized-Medicine (CAMP) interdisciplinary research group at the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, have developed a new method of detecting adventitious microbial contamination in mesenchymal stromal cell (MSC) cultures, ensuring the rapid and accurate testing of cell therapy products intended for use in patients. Utilizing machine learning to predict whether a culture is clean or contaminated in near-real time, this breakthrough method can be used during the cell manufacturing process, unlike less efficient end-point testing.

Cell therapy has, in recent years, become a vital treatment option for a variety of diseases, injuries, and illnesses. By transferring healthy human cells into a patient’s body to heal or replace damaged cells, cell therapy has shown increasing promise in effectively treating cancers, autoimmune diseases, spinal cord injuries, and neurological conditions, among others. As cell therapies advance and hold the potential to save more lives, researchers continue to refine cell culture manufacturing methods and processes to ensure the safety, efficiency, and sterility of these products for patient use.

The anomaly-detection model developed by CAMP is a rapid, label-free process analytical technology for detecting microbial contamination in cell cultures. The team's research is explained in an oral abstract "Process Development and Manufacturing: Anomaly Detection for Microbial Contamination In Mesenchymal Stromal Cell Culture," published recently in the journal Cytotherapy.

The machine learning model was developed by first collecting sterile cell culture media samples from a range of MSC cultures of different culture conditions. Some of the collected samples were spiked with different bacteria strains at different colony-forming units, a measurement of the estimated concentration of microorganisms in a test sample. The absorbance spectra of the sterile, unspiked and bacteria-spiked samples were obtained with ultraviolet-visible spectrometry, and the spectra of the sterile samples were used to train the machine learning model. Testing the model with a mixture of sterile and bacteria-spiked samples demonstrated the model's performance in accurately predicting sterility.
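The article does not spell out the specific model, but the workflow it describes, training only on sterile spectra and then flagging departures, maps onto standard novelty detection. A minimal sketch of that setup, with synthetic spectra standing in for real UV-Vis measurements and PCA plus a one-class SVM as an assumed model choice:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Hypothetical data: rows are UV-Vis absorbance spectra (one value per wavelength).
# sterile_spectra is the training set; mixed_spectra contains clean and spiked samples.
rng = np.random.default_rng(0)
wavelengths = 300
sterile_spectra = rng.normal(0.2, 0.01, size=(100, wavelengths))
mixed_spectra = np.vstack([
    rng.normal(0.2, 0.01, size=(10, wavelengths)),          # clean
    rng.normal(0.2, 0.01, size=(10, wavelengths)) + 0.05,   # synthetic "contaminated" shift
])

# One-class model fit only on sterile spectra, as in the anomaly-detection setup above.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),
    OneClassSVM(nu=0.05, kernel="rbf", gamma="scale"),
)
model.fit(sterile_spectra)

# +1 = consistent with the sterile training data, -1 = flagged as contaminated.
print(model.predict(mixed_spectra))

In practice, the decision threshold (here the nu parameter) would be tuned against spiked validation samples like the colony-forming-unit series described above.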

"The practical application of this discovery is vast. When combined with at-line technologies, the model can be used to continuously monitor cultures grown in bioreactors at Good Manufacturing Practice (GMP) facilities in-process," says Shruthi Pandi Chelvam, lead author and research engineer at SMART CAMP who worked with Derrick Yong and Stacy Springs, SMART CAMP principal investigators, on the development of this method. "Consequently, GMP facilities can conduct sterility tests for bacteria in spent culture media more quickly with less manpower under closed-loop operations. Lastly, patients receiving cell therapy as part of their treatment can be assured that products have been thoroughly evaluated for safety and sterility."

During the process of cell therapy manufacturing, this anomaly-detection model can be used to detect the presence of adventitious microbial contamination in cell cultures within a few minutes. This in-process method can help to save time and resources, as contaminated cultures can immediately be discarded and reconstructed. This method provides a rapid alternative to conventional sterility tests and other microbiological bacteria detection methods, often taking a few days and almost always performed on finished products.

"Our increased adoption of machine learning in microbial anomaly detection has enabled us to develop a unique test which quickly performs in-process contamination monitoring, marking a huge step forward in further streamlining the cell therapy manufacturing process. Besides ensuring the safety and sterility of cell products prior to infusion in patients, this method also offers cost and resource effectiveness for manufacturers, as it allows for decisive batch restarting and stoppage should the culture be contaminated,” adds Yie Hou Lee, scientific director of SMART CAMP.

Moving forward, CAMP aims to develop an in-process monitoring pipeline in which this anomaly detection model can be integrated with some of the in-house at-line technologies that are being developed, which would allow for periodic culture analysis using a bioreactor. This would open the possibilities for further, long-term experimental studies in continuous culture monitoring.

Lead author Shruthi Pandi Chelvam also won the Early Stage Professionals Abstract Award, which is presented to three outstanding scholars whose abstracts are scored through a blinded peer-review process. The research was also accepted for an oral presentation at the 2022 International Society for Cell and Gene Therapy (ISCT) conference, a prestigious event in cell and gene therapies.

“This team-based, interdisciplinary approach to technology development that addresses critical bottlenecks in cell therapy manufacturing, including rapid safety assessment that allows for intermittent or at-line monitoring of plausible adventitious agent contamination, is a hallmark of SMART CAMP’s research goals,” adds MIT's Krystyn Van Vliet, who is associate vice president for research, associate provost, a professor of materials science and engineering, and co-lead of SMART CAMP with Hanry Yu, professor at the National University of Singapore.

The research is carried out by SMART and supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) program. For the study, the team collaborated with researchers from the Integrated Manufacturing Program for Autologous Cell Therapy, one of the sister programs in the Singapore Cell Therapy Advanced Manufacturing Program, of which CAMP is a part, to help develop an automated sampling system that would integrate with the anomaly detection model.

CAMP is a SMART interdisciplinary research group launched in June 2019. It focuses on better ways to produce living cells as medicine, or cellular therapies, to provide more patients access to promising and approved therapies. The investigators at CAMP address two key bottlenecks facing the production of a range of potential cell therapies: critical quality attributes (CQA) and process analytic technologies (PAT). Leveraging deep collaborations within Singapore and MIT in the United States, CAMP invents and demonstrates CQA/PAT capabilities from stem to immune cells. Its work addresses ailments ranging from cancer to tissue degeneration, targeting adherent and suspended cells, with and without genetic engineering.

CAMP is the R&D core of a comprehensive national effort on cell therapy manufacturing in Singapore.

SMART was established by MIT in partnership with the NRF in 2007. SMART is the first entity in CREATE developed by NRF. SMART serves as an intellectual and innovation hub for cutting-edge research interactions of interest to both MIT and Singapore. SMART currently comprises an Innovation Center and five IRGs: Antimicrobial Resistance (AMR), CAMP, Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP), Future Urban Mobility (FM), and Low Energy Electronic Systems (LEES).



from MIT News https://ift.tt/m5VhHcM

Tuesday, June 28, 2022

Robot overcomes uncertainty to retrieve buried objects

For humans, finding a lost wallet buried under a pile of items is pretty straightforward — we simply remove things from the pile until we find the wallet. But for a robot, this task involves complex reasoning about the pile and objects in it, which presents a steep challenge.

MIT researchers previously demonstrated a robotic arm that combines visual information and radio frequency (RF) signals to find hidden objects that were tagged with RFID tags (which reflect signals sent by an antenna). Building off that work, they have now developed a new system that can efficiently retrieve any object buried in a pile. As long as some items in the pile have RFID tags, the target item does not need to be tagged for the system to recover it.

The algorithms behind the system, known as FuseBot, reason about the probable location and orientation of objects under the pile. Then FuseBot finds the most efficient way to remove obstructing objects and extract the target item. This reasoning enabled FuseBot to find more hidden items than a state-of-the-art robotics system, in half the time.

This speed could be especially useful in an e-commerce warehouse. A robot tasked with processing returns could find items in an unsorted pile more efficiently with the FuseBot system, says senior author Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the Media Lab.

“What this paper shows, for the first time, is that the mere presence of an RFID-tagged item in the environment makes it much easier for you to achieve other tasks in a more efficient manner. We were able to do this because we added multimodal reasoning to the system — FuseBot can reason about both vision and RF to understand a pile of items,” adds Adib.

Joining Adib on the paper are research assistants Tara Boroushaki, who is the lead author; Laura Dodds; and Nazish Naeem. The research will be presented at the Robotics: Science and Systems conference.

Targeting tags

A recent market report indicates that more than 90 percent of U.S. retailers now use RFID tags, but the technology is not universal, leading to situations in which only some objects within piles are tagged.

This problem inspired the group’s research.

With FuseBot, a robotic arm uses an attached video camera and RF antenna to retrieve an untagged target item from a mixed pile. The system scans the pile with its camera to create a 3D model of the environment. Simultaneously, it sends signals from its antenna to locate RFID tags. These radio waves can pass through most solid surfaces, so the robot can “see” deep into the pile. Since the target item is not tagged, FuseBot knows the item cannot be located at the exact same spot as an RFID tag.

Algorithms fuse this information to update the 3D model of the environment and highlight potential locations of the target item; the robot knows its size and shape. Then the system reasons about the objects in the pile and RFID tag locations to determine which item to remove, with the goal of finding the target item with the fewest moves.

It was challenging to incorporate this reasoning into the system, says Boroushaki.

The robot is unsure how objects are oriented under the pile, or how a squishy item might be deformed by heavier items pressing on it. It overcomes this challenge with probabilistic reasoning, using what it knows about the size and shape of an object and its RFID tag location to model the 3D space that object is likely to occupy.

As it removes items, it also uses reasoning to decide which item would be “best” to remove next.

“If I give a human a pile of items to search, they will most likely remove the biggest item first to see what is underneath it. What the robot does is similar, but it also incorporates RFID information to make a more informed decision. It asks, ‘How much more will it understand about this pile if it removes this item from the surface?’” Boroushaki says.
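That item-selection logic can be pictured with a small sketch (the grid, item names, and the particular expected-information-gain score below are illustrative assumptions, not FuseBot's actual algorithm): the belief over where the untagged target sits excludes cells where RFID tags were localized, and the robot removes the surface item whose removal most reduces the expected uncertainty.

import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (ignoring zero entries)."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy belief over where the untagged target could be: 12 candidate cells in the pile.
# Cells coinciding with detected RFID tags get probability 0, since the target is untagged.
belief = np.ones(12)
belief[[3, 7]] = 0.0            # cells occupied by RFID-tagged items
belief /= belief.sum()

# Each removable surface item covers a subset of cells.
items = {
    "stuffed_animal": [0, 1, 2],
    "notebook":       [4, 5],
    "t_shirt":        [6, 8, 9, 10, 11],
}

def expected_information_gain(covered_cells):
    """Expected entropy reduction from uncovering the given cells."""
    p_under = belief[covered_cells].sum()
    remaining = belief.copy()
    remaining[covered_cells] = 0.0
    if remaining.sum() == 0:
        posterior_entropy = 0.0
    else:
        posterior_entropy = entropy(remaining / remaining.sum())
    # With probability p_under the target is revealed (entropy drops to zero); otherwise
    # we are left with the renormalized belief over the still-hidden cells.
    return entropy(belief) - (1.0 - p_under) * posterior_entropy

best_item = max(items, key=lambda name: expected_information_gain(items[name]))
print(best_item, {k: round(expected_information_gain(v), 3) for k, v in items.items()})

In this toy version, the belief would be renormalized and the scores recomputed after each removal, mirroring the rescanning step described next.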

After it removes an object, the robot scans the pile again and uses new information to optimize its strategy.

Retrieval results

This reasoning, as well as its use of RF signals, gave FuseBot an edge over a state-of-the-art system that used only vision. The team ran more than 180 experimental trials using real robotic arms and piles with household items, like office supplies, stuffed animals, and clothing. They varied the sizes of piles and number of RFID-tagged items in each pile.

FuseBot extracted the target item successfully 95 percent of the time, compared to 84 percent for the other robotic system. It accomplished this using 40 percent fewer moves, and was able to locate and retrieve targeted items more than twice as fast.

“We see a big improvement in the success rate by incorporating this RF information. It was also exciting to see that we were able to match the performance of our previous system, and exceed it in scenarios where the target item didn’t have an RFID tag,” Dodds says.

FuseBot could be applied in a variety of settings because the software that performs its complex reasoning can be implemented on any computer — it just needs to communicate with a robotic arm that has a camera and antenna, Boroushaki adds.

In the near future, the researchers are planning to incorporate more complex models into FuseBot so it performs better on deformable objects. Beyond that, they are interested in exploring different manipulations, such as a robotic arm that pushes items out of the way. Future iterations of the system could also be used with a mobile robot that searches multiple piles for lost objects.

This work was funded, in part, by the National Science Foundation, a Sloan Research Fellowship, NTT DATA, Toppan, Toppan Forms, and the MIT Media Lab.



from MIT News https://ift.tt/rqeupaY

Exploring emerging topics in artificial intelligence policy

Members of the public sector, private sector, and academia convened for the second AI Policy Forum Symposium last month to explore critical directions and questions posed by artificial intelligence in our economies and societies.

The virtual event, hosted by the AI Policy Forum (AIPF) — an undertaking by the MIT Schwarzman College of Computing to bridge high-level principles of AI policy with the practices and trade-offs of governing — brought together an array of distinguished panelists to delve into four cross-cutting topics: law, auditing, health care, and mobility.

In the last year there have been substantial changes in the regulatory and policy landscape around AI in several countries — most notably in Europe with the development of the European Union Artificial Intelligence Act, the first attempt by a major regulator to propose a law on artificial intelligence. In the United States, the National AI Initiative Act of 2020, which became law in January 2021, is providing a coordinated program across federal government to accelerate AI research and application for economic prosperity and security gains. Finally, China recently advanced several new regulations of its own.

Each of these developments represents a different approach to legislating AI, but what makes a good AI law? And when should AI legislation be based on binding rules with penalties versus establishing voluntary guidelines?

Jonathan Zittrain, professor of international law at Harvard Law School and director of the Berkman Klein Center for Internet and Society, says the self-regulatory approach taken during the expansion of the internet had its limitations with companies struggling to balance their interests with those of their industry and the public.

“One lesson might be that actually having representative government take an active role early on is a good idea,” he says. “It’s just that they’re challenged by the fact that there appears to be two phases in this environment of regulation. One, too early to tell, and two, too late to do anything about it. In AI I think a lot of people would say we’re still in the ‘too early to tell’ stage but given that there’s no middle zone before it’s too late, it might still call for some regulation.”

A theme that came up repeatedly throughout the first panel on AI laws — a conversation moderated by Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and chair of the AI Policy Forum — was the notion of trust. “If you told me the truth consistently, I would say you are an honest person. If AI could provide something similar, something that I can say is consistent and is the same, then I would say it's trusted AI,” says Bitange Ndemo, professor of entrepreneurship at the University of Nairobi and the former permanent secretary of Kenya’s Ministry of Information and Communication.

Eva Kaili, vice president of the European Parliament, adds that “In Europe, whenever you use something, like any medication, you know that it has been checked. You know you can trust it. You know the controls are there. We have to achieve the same with AI.” Kaili further stresses that building trust in AI systems will not only lead to people using more applications in a safe manner, but that AI itself will reap benefits as greater amounts of data will be generated as a result.

The rapidly increasing applicability of AI across fields has prompted the need to address both the opportunities and challenges of emerging technologies and the impact they have on social and ethical issues such as privacy, fairness, bias, transparency, and accountability. In health care, for example, new techniques in machine learning have shown enormous promise for improving quality and efficiency, but questions of equity, data access and privacy, safety and reliability, and immunology and global health surveillance remain at large.

MIT’s Marzyeh Ghassemi, an assistant professor in the Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science, and David Sontag, an associate professor of electrical engineering and computer science, collaborated with Ziad Obermeyer, an associate professor of health policy and management at the University of California Berkeley School of Public Health, to organize AIPF Health Wide Reach, a series of sessions to discuss issues of data sharing and privacy in clinical AI. The organizers assembled experts devoted to AI, policy, and health from around the world with the goal of understanding what can be done to decrease barriers to access to high-quality health data to advance more innovative, robust, and inclusive research results while being respectful of patient privacy.

Over the course of the series, members of the group presented on a topic of expertise and were tasked with proposing concrete policy approaches to the challenge discussed. Drawing on these wide-ranging conversations, participants unveiled their findings during the symposium, covering nonprofit and government success stories and limited access models; upside demonstrations; legal frameworks, regulation, and funding; technical approaches to privacy; and infrastructure and data sharing. The group then discussed some of their recommendations that are summarized in a report that will be released soon.

One of the findings calls for making more data available for research use. Recommendations that stem from this finding include updating regulations to promote data sharing and to enable easier access to safe harbors, such as the one the Health Insurance Portability and Accountability Act (HIPAA) provides for de-identification, as well as expanding funding for private health institutions to curate datasets, amongst others. Another finding, to remove barriers to data for researchers, supports a recommendation to decrease obstacles to research and development on federally created health data. “If this is data that should be accessible because it's funded by some federal entity, we should easily establish the steps that are going to be part of gaining access to that so that it's a more inclusive and equitable set of research opportunities for all,” says Ghassemi. The group also recommends taking a careful look at the ethical principles that govern data sharing. While there are already many principles proposed around this, Ghassemi says that “obviously you can't satisfy all levers or buttons at once, but we think that this is a trade-off that's very important to think through intelligently.”

In addition to law and health care, other facets of AI policy explored during the event included auditing and monitoring AI systems at scale, and the role AI plays in mobility and the range of technical, business, and policy challenges for autonomous vehicles in particular.

The AI Policy Forum Symposium was an effort to bring together communities of practice with the shared aim of designing the next chapter of AI. In his closing remarks, Aleksander Madry, the Cadence Design Systems Professor of Computing at MIT and faculty co-lead of the AI Policy Forum, emphasized the importance of collaboration and the need for different communities to communicate with each other in order to truly make an impact in the AI policy space.

“The dream here is that we all can meet together — researchers, industry, policymakers, and other stakeholders — and really talk to each other, understand each other's concerns, and think together about solutions,” Madry said. “This is the mission of the AI Policy Forum and this is what we want to enable.”



from MIT News https://ift.tt/8YFqMbc

Monday, June 27, 2022

Tissue model reveals key players in liver regeneration

The human liver has amazing regeneration capabilities: Even if up to 70 percent of it is removed, the remaining tissue can regrow a full-sized liver within months.

Taking advantage of this regenerative capability could give doctors many more options for treating chronic liver disease. MIT engineers have now taken a step toward that goal, by creating a new liver tissue model that allows them to trace the steps involved in liver regeneration more precisely than has been possible before.

The new model can yield information that couldn’t be gleaned from studies of mice or other animals, whose biology is not identical to that of humans, says Sangeeta Bhatia, the leader of the research team.

“For years, people have been identifying different genes that seem to be involved in mouse liver regeneration, and some of them seem to be important in humans, but they have never managed to figure out all of the cues to make human liver cells proliferate,” says Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science.

The new study, which appears this week in the Proceedings of the National Academy of Sciences, has identified one molecule that appears to play a key role, and also yielded several other candidates that the researchers plan to explore further.

The lead author of the paper is Arnav Chhabra, a former MIT graduate student and postdoc.

Regeneration on a chip

Most of the patients who need liver transplants suffer from chronic illnesses such as viral hepatitis, fatty liver disease, or cancer. However, if researchers had a reliable way to stimulate the liver to regenerate on its own, some transplants could be avoided, Bhatia says. Or, such stimulation might be used to help a donated liver grow after being transplanted.

From studies in mice, researchers have learned a great deal about some of the regeneration pathways that are activated after liver injury or illness. One key factor is the reciprocal relationship between hepatocytes (the main type of cell found in the liver) and endothelial cells, which line the blood vessels. Hepatocytes produce factors that help blood vessels develop, and endothelial cells generate growth factors that help hepatocytes proliferate.

Another contributor that researchers have identified is fluid flow in the blood vessels. In mice, an increase in blood flow can stimulate the endothelial cells to produce signals that promote regeneration.

To model all of these interactions, Bhatia’s lab teamed up with Christopher Chen, the William F. Warren Distinguished Professor of Biomedical Engineering at Boston University, who designs microfluidic devices with channels that mimic blood vessels. To create these models of “regeneration on a chip,” the researchers grew blood vessels along one of these microfluidic channels and then added multicellular spheroid aggregates derived from liver cells from human organ donors.

The chip is designed so that molecules such as growth factors can flow between the blood vessels and the liver spheroids. This setup also allows the researchers to easily knock out genes of interest in a specific cell type and then see how it affects the overall system.

Using this system, the researchers showed that increased fluid flow on its own did not stimulate hepatocytes to enter the cell division cycle. However, if they also delivered an inflammatory signal (the cytokine IL-1-beta), hepatocytes did enter the cell cycle.

When that happened, the researchers were able to measure what other factors were being produced. Some were expected based on earlier mouse studies, but others had not been seen before in human cells, including a molecule called prostaglandin E2 (PGE2).

The MIT team found high levels of this molecule, which is also involved in zebrafish regeneration, in their liver regeneration system. By knocking out the gene for PGE2 biosynthesis in endothelial cells, the researchers were able to show that those cells are the source of PGE2, and they also demonstrated that this molecule stimulates human liver cells to enter the cell cycle.

Human-specific pathways

The researchers now plan to further explore some of the other growth factors and molecules that are produced on their chip during liver regeneration.

“We can look at the proteins that are being produced and ask, what else on this list has the same pattern as the other molecules that stimulate cell division, but is novel?” Bhatia says. “We think we can use this to discover new human-specific pathways.”

In this study, the researchers focused on molecules that stimulate cells to enter cell division, but they now hope to follow the process further along and identify molecules needed to complete the cell cycle. They also hope to discover the signals that tell the liver when to stop regenerating.

Bhatia hopes that eventually researchers will be able to harness these molecules to help treat patients with liver failure. Another possibility is that doctors could use such factors as biomarkers to determine how likely it is that a patient’s liver will regrow on its own.

“Right now when patients come in with liver failure, you have to transplant them because you don’t know if they’re going to recover on their own. But if we knew who had a robust regenerative response, and if we just needed to stabilize them for a little while, we could spare those patients from transplant,” Bhatia says.

The research was funded in part by the National Institutes of Health, the National Science Foundation Graduate Research Fellowship Program, Wellcome Leap, and the Paul and Daisy Soros Fellowship Program.



de MIT News https://ift.tt/cO7iD8k

Making hydrogen power a reality

For decades, government and industry have looked to hydrogen as a potentially game-changing tool in the quest for clean energy. As far back as the early days of the Clinton administration, energy sector observers and public policy experts have extolled the virtues of hydrogen — to the point that some people have joked that hydrogen is the energy of the future, “and always will be.”

Even as wind and solar power have become commonplace in recent years, hydrogen has been held back by high costs and other challenges. But the fuel may finally be poised to have its moment. At the MIT Energy Initiative Spring Symposium — entitled “Hydrogen’s role in a decarbonized energy system” — experts discussed hydrogen production routes, hydrogen consumption markets, the path to a robust hydrogen infrastructure, and policy changes needed to achieve a “hydrogen future.”

During one panel, “Options for producing low-carbon hydrogen at scale,” four experts laid out existing and planned efforts to leverage hydrogen for decarbonization. 

“The race is on”

Huyen N. Dinh, a senior scientist and group manager at the National Renewable Energy Laboratory (NREL), is the director of HydroGEN, a consortium of several U.S. Department of Energy (DOE) national laboratories that accelerates research and development of innovative and advanced water splitting materials and technologies for clean, sustainable, and low-cost hydrogen production.

For the past 14 years, Dinh has worked on fuel cells and hydrogen production for NREL. “We think that the 2020s is the decade of hydrogen,” she said. Dinh believes that the energy carrier is poised to come into its own over the next few years, pointing to several domestic and international activities surrounding the fuel and citing a Hydrogen Council report that projected the future impacts of hydrogen — including 30 million jobs and $2.5 trillion in global revenue by 2050.

“Now is the time for hydrogen, and the global race is on,” she said.

Dinh also explained the parameters of the Hydrogen Shot — the first of the DOE’s “Energy Earthshots” aimed at accelerating breakthroughs for affordable and reliable clean energy solutions. Hydrogen fuel currently costs around $5 per kilogram to produce, and the Hydrogen Shot’s stated goal is to bring that down by 80 percent to $1 per kilogram within a decade.

The Hydrogen Shot will be facilitated by $9.5 billion in funding for at least four clean hydrogen hubs located in different parts of the United States, as well as extensive research and development, manufacturing, and recycling from last year’s bipartisan infrastructure law. Still, Dinh noted that it took more than 40 years for solar and wind power to become cost competitive, and now industry, government, national lab, and academic leaders are hoping to achieve similar reductions in hydrogen fuel costs over a much shorter time frame. In the near term, she said, stakeholders will need to improve the efficiency, durability, and affordability of hydrogen production through electrolysis (using electricity to split water) using today’s renewable and nuclear power sources. Over the long term, the focus may shift to splitting water more directly through heat or solar energy, she said.
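
For a rough sense of why the $1-per-kilogram target depends so heavily on cheap power and efficient electrolysis, the back-of-the-envelope sketch below computes the electricity cost of electrolysis alone. The heating value, efficiency, and power prices are illustrative assumptions, not figures presented at the symposium.

```python
# Back-of-the-envelope electrolysis cost sketch (illustrative assumptions,
# not figures from the symposium).

H2_LHV_KWH_PER_KG = 33.3  # lower heating value of hydrogen, kWh per kg (assumed)

def electricity_cost_per_kg(power_price_usd_per_kwh: float,
                            electrolyzer_efficiency: float = 0.65) -> float:
    """Electricity cost to produce 1 kg of hydrogen via electrolysis."""
    kwh_needed = H2_LHV_KWH_PER_KG / electrolyzer_efficiency
    return kwh_needed * power_price_usd_per_kwh

if __name__ == "__main__":
    for price in (0.05, 0.03, 0.02):
        cost = electricity_cost_per_kg(price)
        print(f"At ${price:.2f}/kWh: ~${cost:.2f}/kg for electricity alone")
    # Even before capital and operating costs, cheap power is essential
    # to approach the Hydrogen Shot's $1/kg goal.
```

Under these assumed numbers, electricity alone costs about a dollar per kilogram only when power is around two cents per kilowatt-hour, which is why production cost is so sensitive to both electricity prices and electrolyzer efficiency.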

“The time frame is short, the competition is intense, and a coordinated effort is critical for domestic competitiveness,” Dinh said.

Hydrogen across continents

Wambui Mutoru, principal engineer for international commercial development, exploration, and production at the Norwegian global energy company Equinor, said that hydrogen is an important component in the company’s ambitions to be carbon-neutral by 2050. The company, in collaboration with partners, has several hydrogen projects in the works, and Mutoru laid out the company’s Hydrogen to Humber project in Northern England. Currently, the Humber region emits more carbon dioxide than any other industrial cluster in the United Kingdom — 50 percent more, in fact, than the next-largest carbon emitter.

“The ambition here is for us to deploy the world’s first at-scale hydrogen value chain to decarbonize the Humber industrial cluster,” Mutoru said.

The project consists of three components: a clean hydrogen production facility, an onshore hydrogen and carbon dioxide transmission network, and offshore carbon dioxide transportation and storage operations. Mutoru highlighted the importance of carbon capture and storage in hydrogen production. Equinor, she said, has captured and sequestered carbon offshore for more than 25 years, storing more than 25 million tons of carbon dioxide during that time.

Mutoru also touched on Equinor’s efforts to build a decarbonized energy hub in the Appalachian region of the United States, covering territory in Ohio, West Virginia, and Pennsylvania. By 2040, she said, the company's ambition is to produce about 1.5 million tons of clean hydrogen per year in the region — roughly equivalent to 6.8 gigawatts of electricity — while also storing 30 million tons of carbon dioxide.

Mutoru acknowledged that the biggest challenge facing potential hydrogen producers is the current lack of viable business models. “Resolving that challenge requires cross-industry collaboration, and supportive policy frameworks so that the market for hydrogen can be built and sustained over the long term,” she said.

Confronting barriers

Gretchen Baier, executive external strategy and communications leader for Dow, noted that the company already produces hydrogen in multiple ways. For one, Dow operates the world’s largest ethane cracker, in Texas. An ethane cracker heats ethane to break apart molecular bonds to form ethylene, with hydrogen one of the byproducts of the process. Baier also showed a slide of the 1891 patent for the electrolysis of brine, which likewise produces hydrogen. The company still engages in this practice, but Dow does not have an effective way of utilizing the resulting hydrogen as fuel for its own operations.

“Just take a moment to think about that,” Baier said. “We’ve been talking about hydrogen production and the cost of it, and this is basically free hydrogen. And it’s still too much of a barrier to somewhat recycle that and use it for ourselves. The environment is clearly changing, and we do have plans for that, but I think that kind of sets some of the challenges that face industry here.”

However, Baier said, hydrogen is expected to play a significant role in Dow’s future as the company attempts to decarbonize by 2050. The company, she said, plans to optimize hydrogen allocation and production, retrofit turbines for hydrogen fueling, and purchase clean hydrogen. By 2040, Dow expects more than 60 percent of its sites to be hydrogen-ready.

Baier noted that hydrogen fuel is not a “panacea,” but rather one among many potential contributors as industry attempts to reduce or eliminate carbon emissions in the coming decades. “Hydrogen has an important role, but it’s not the only answer,” she said.

“This is real”

Colleen Wright is vice president of corporate strategy for Constellation, which recently separated from Exelon Corporation. (Exelon now owns the former company’s regulated utilities, such as Commonwealth Edison and Baltimore Gas and Electric, while Constellation owns the competitive generation and supply portions of the business.) Wright stressed the advantages of nuclear power in hydrogen production, which she said include superior economics, low barriers to implementation, and scalability.

“A quarter of emissions in the world are currently from hard-to-decarbonize sectors — the industrial sector, steel making, heavy-duty transportation, aviation,” she said. “These are really challenging decarbonization sectors, and as we continue to expand and electrify, we’re going to need more supply. We’re also going to need to produce clean hydrogen using emissions-free power.”

“The scale of nuclear power plants is uniquely suited to be able to scale hydrogen production,” Wright added. She mentioned Constellation’s Nine Mile Point site in the State of New York, which received a DOE grant for a pilot program that will see a proton exchange membrane electrolyzer installed at the site.

“We’re very excited to see hydrogen go from a [research and development] conversation to a commercial conversation,” she said. “We’ve been calling it a little bit of a ‘middle-school dance.’ Everybody is standing around the circle, waiting to see who’s willing to put something at stake. But this is real. We’re not dancing around the edges. There are a lot of people who are big players, who are willing to put skin in the game today.”



de MIT News https://ift.tt/CVnIpDl

MIT-WHOI Joint Program announces new leadership

After 13 years as director of the MIT-Woods Hole Oceanographic Institution (WHOI) Joint Program in Oceanography/Applied Ocean Science and Engineering, Ed Boyle, professor of ocean geochemistry in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS), is stepping down at the end of June. Professor Mick Follows, who holds joint appointments in EAPS and the Department of Civil and Environmental Engineering, will take on the directorship beginning July 1.

The leadership succession was announced by MIT Vice President for Research Maria Zuber in an email to the MIT-WHOI Joint Program community.

In her letter, Zuber noted that, “under Ed’s leadership, the Joint Program has continued to be recognized as one of the world’s premier graduate programs in oceanography, a national and global asset to education and research in ocean science. Ed’s positive impact on the program will benefit students, faculty, and staff for years to come.”

Boyle received his PhD in oceanography from the MIT-WHOI Joint Program in 1976 and joined the MIT faculty the following year. As a marine geochemist, his research focuses on the oceanic dispersal of anthropogenic emissions and the evolution of the Earth’s climate. Boyle is a member of the National Academy of Sciences and a recipient of the Urey Medal of the European Association of Geochemistry. He assumed the role of director of the Joint Program in 2009.

Follows, who joined the MIT faculty in 2013, has been closely involved with the MIT-WHOI Joint Program for many years, advising students and contributing to program development. In addition to his new position as the program’s director, Follows is lead investigator for both the MIT Darwin Project and the Simons Collaboration on Computational Biogeochemical Modeling of Marine Ecosystems, where he studies the biogeochemical cycles of carbon and nutrients in the ocean.

Follows “is fully invested in the program’s ongoing success, and will make an excellent director,” Zuber wrote in her email.



de MIT News https://ift.tt/oaQVNdw

sábado, 25 de junio de 2022

Making art through computation

Chelsi Cocking is an interdisciplinary artist who explores the human body with the help of computers. For her work, she develops sophisticated software to use as her artistic tools, including facial detection techniques, body tracking software, and machine learning algorithms.

Cocking’s interest in the human body stems from her childhood training in modern dance. Growing up in Kingston, Jamaica, she equally loved the arts and sciences, refusing to pick one over the other. For college, “I really wanted to find a way to do both, but it was hard,” she says. “Luckily, through my older brother, I found [the field of] computational media at Georgia Tech.” There, she learned to develop technology for computer-based media, such as animation and graphics.

In her final year of undergrad, Cocking took a studio class where she worked with two other students on a dance performance piece. Together, they tracked the movements of three local dancers and projected visualizations of these movements in real-time. Cocking quickly fell in love with this medium of computational art. But before she could really explore it, she graduated and left to start a full-time job in product design that she had already lined up. 

Cocking worked in product design for four years, first at a startup, then at Dropbox. “In the back of my mind, I always wanted to go back to grad school” to continue exploring computational art, she says. “But I didn’t really have the courage to do so.” When the pandemic hit and everything moved online, she saw an opportunity to chase her dreams. With encouragement from her family, she sought out online courses at the School for Poetic Computation, while still keeping her day job. As soon as she started, everything clicked: “This is what I want to do,” she says.

Through the school, Cocking heard that her current advisor, Zach Lieberman, an adjunct associate professor in the Media Lab, had an opening in his research group, the Future Sketches group. Now, she spends each day exploring new ideas for making art through computation. “Fun is enough justification for my research,” she says.

A long-awaited return to computational art

When Cocking first joined the Future Sketches group last fall, she was filled with ideas and armed with strong design skills, which she had developed as a product designer. But she had also been on a four-year hiatus from full-time coding and needed to get back in shape. After consulting with Lieberman, she set out on a project where she could ramp up her coding skills while still exploring her interests in the human body.

For this project, Cocking delved into a new medium: photography. In a series of images entitled Photorythms, she took photographic portraits of people and manipulated them using techniques from facial detection. “Within facial detection, you get 68 points of your face,” she says. “Using those points, you can manipulate how the image looks to create more expressive portrait photography.” Many of her images slice portraits using a particular shape, such as concentric rings or vertical stripes, and reassemble them in different configurations, reminiscent of cubism.
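
For readers curious about the mechanics, here is a minimal sketch of the slicing-and-reassembly idea, assuming the portrait has already been loaded as a NumPy array. The stripe width and reordering rule are hypothetical choices for illustration, not Cocking's actual code; a fuller version would use the 68 detected facial landmarks to guide where the cuts fall.

```python
import numpy as np
from PIL import Image

def restripe(portrait: np.ndarray, stripe_width: int = 20) -> np.ndarray:
    """Cut a portrait into vertical stripes and reassemble them in a
    rearranged order (here: even-indexed stripes first, then odd)."""
    w = portrait.shape[1]
    stripes = [portrait[:, x:x + stripe_width] for x in range(0, w, stripe_width)]
    reordered = stripes[::2] + stripes[1::2]   # hypothetical reassembly rule
    return np.concatenate(reordered, axis=1)

if __name__ == "__main__":
    img = np.asarray(Image.open("portrait.jpg"))   # placeholder file name
    Image.fromarray(restripe(img)).save("portrait_restriped.png")
```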

Through Photorythms, Cocking also adopted a practice of “daily sketching” from her advisor, where she develops new code every day to generate a new piece of art. If the resulting work turns out to be something she’s proud of, she shares it with the world, sometimes through Instagram. But “even if the code doesn’t amount to anything, [I’m] sharpening [my] coding skills every day,” she says.

Now that she’s reacclimated to intensive coding, “I really want to dive into body tracking this summer,” Cocking says. She’s currently in the ideation phase, brainstorming different ways to interactively combine body tracking and live performance. “I am half-scared and half-excited,” she says.

To help generate ideas, she’s participating in an intensive five-day workshop in early July that will bring together artists interested in computational art for dance. Cocking plans to attend the workshop with her best friend from college, Raianna Brown, who’s a dancer. “We’re going to be there for a week in Chatham [UK], just playing around with choreography and code,” she says. “Hopefully that can spark new ideas and new relationships” for future collaborations.

Spreading love for coding and design

Throughout her circuitous, hard-won path to computational art, “I’ve never taken the position that I was in for granted,” Cocking says. From her own experience she recognizes the value of access to opportunity, with access in one place opening doors in another. But “there’s so many people that I’m surrounded by who are intelligent and talented but don’t have access to opportunities,” especially in computer science and design, she says. Because of this, since college, Cocking has devoted some of her time to opening these fields to children and young professionals from underrepresented backgrounds.

This past spring, Cocking worked with fellow Media Lab student Cecilé Sadler to develop a workshop for introducing kids to coding concepts in a fun way. The two partners taught the workshop in parallel at different places in May and June: Sadler taught a series in Cambridge in collaboration with blackyard, a grassroots organization centering Black, Indigenous, and POC youth, while Cocking returned to her home country of Jamaica and taught at the Freedom Skatepark youth center near Kingston.

To get the workshop curriculum to Jamaica, Cocking reached out to her friend Rica G., who teaches computer science at the Freedom Skatepark youth center. Together, they co-taught the curriculum over several weeks. “I was so nervous [the kids] would just walk out,” Cocking says. “But they actually liked it!”

Cocking hopes to use this workshop as a stepping stone to someday establish “a core center for kids in Jamaica to explore creative coding or computational art,” she says. “Hopefully people will see coding as a tool for creation and expression without feeling intimidated, and use it to make the world a little weirder.”



de MIT News https://ift.tt/LAmuCr7

viernes, 24 de junio de 2022

Q&A: Neil Thompson on computing power and innovation

Moore’s Law is the famous prognostication by Intel co-founder Gordon Moore that the number of transistors on a microchip would double every year or two. This prediction has mostly been met or exceeded since the 1970s — computing power doubles about every two years, while better and faster microchips become less expensive.

This rapid growth in computing power has fueled innovation for decades, yet in the early 21st century researchers began to sound alarm bells that Moore’s Law was slowing down. With standard silicon technology, there are physical limits to how small transistors can get and how many can be squeezed onto an affordable microchip.

Neil Thompson, an MIT research scientist at the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Sloan School of Management, and his research team set out to quantify the importance of more powerful computers for improving outcomes across society. In a new working paper, they analyzed five areas where computation is critical, including weather forecasting, oil exploration, and protein folding (important for drug discovery). The working paper is co-authored by research assistants Gabriel F. Manso and Shuning Ge.

They found that between 49 and 94 percent of improvements in these areas can be explained by computing power. For instance, in weather forecasting, increasing computer power by a factor of 10 improves three-day-ahead predictions by one-third of a degree.

But computer progress is slowing, which could have far-reaching impacts across the economy and society. Thompson spoke with MIT News about this research and the implications of the end of Moore’s Law.

Q: How did you approach this analysis and quantify the impact computing has had on different domains?

A: Quantifying the impact of computing on real outcomes is tricky. The most common way to look at computing power, and IT progress more generally, is to study how much companies are spending on it, and look at how that correlates to outcomes. But spending is a tough measure to use because it only partially reflects the value of the computing power being purchased. For example, today’s computer chip may cost the same amount as last year’s, but it is also much more powerful. Economists do try to adjust for that quality change, but it is hard to get your hands around exactly what that number should be. For our project, we measured the computing power more directly — for instance, by looking at capabilities of the systems used when protein folding was done for the first time using deep learning. By looking directly at capabilities, we are able to get more precise measurements and thus get better estimates of how computing power influences performance.
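
As an illustration of this kind of estimate, one could regress an outcome measure on the logarithm of measured computing power: the slope is the gain per tenfold increase in compute, and the R^2 indicates how much of the improvement the compute trend explains. The sketch below uses synthetic numbers and is not the paper's code or data.

```python
import numpy as np

# Illustrative sketch (synthetic numbers, not the paper's data): estimate how
# much an outcome improves per 10x increase in measured computing power.
compute_flops = np.array([1e12, 1e13, 1e14, 1e15, 1e16])    # hypothetical
forecast_skill = np.array([0.50, 0.58, 0.66, 0.75, 0.83])   # hypothetical

slope, intercept = np.polyfit(np.log10(compute_flops), forecast_skill, deg=1)
print(f"Estimated gain per 10x compute: {slope:.3f} skill points")

# R^2 of the fit shows how much of the improvement the compute trend explains.
pred = slope * np.log10(compute_flops) + intercept
ss_res = np.sum((forecast_skill - pred) ** 2)
ss_tot = np.sum((forecast_skill - forecast_skill.mean()) ** 2)
print(f"R^2: {1 - ss_res / ss_tot:.2f}")
```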

Q: How are more powerful computers enabling improvements in weather forecasting, oil exploration, and protein folding?

A: The short answer is that increases in computing power have had an enormous effect on these areas. With weather prediction, we found that there has been a trillionfold increase in the amount of computing power used for these models. That puts into perspective how much computing power has increased, and also how we have harnessed it. This is not someone just taking an old program and putting it on a faster computer; instead users must constantly redesign their algorithms to take advantage of 10 or 100 times more computer power. There is still a lot of human ingenuity that has to go into improving performance, but what our results show is that much of that ingenuity is focused on how to harness ever-more-powerful computing engines.

Oil exploration is an interesting case because it gets harder over time as the easy wells are drilled, so what is left is more difficult. Oil companies fight that trend with some of the biggest supercomputers in the world, using them to interpret seismic data and map the subsurface geology. This helps them to do a better job of drilling in exactly the right place.

Using computing to do better protein folding has been a longstanding goal because it is crucial for understanding the three-dimensional shapes of these molecules, which in turn determines how they interact with other molecules. In recent years, the AlphaFold systems have made remarkable breakthroughs in this area. What our analysis shows is that these improvements are well-predicted by the massive increases in computing power they use.

Q: What were some of the biggest challenges of conducting this analysis?

A: When one is looking at two trends that are growing over time, in this case performance and computing power, one of the most important challenges is disentangling how much of the relationship between them is causation and how much is just correlation. We can answer that question, partially, because in the areas we studied companies are investing huge amounts of money, so they are doing a lot of testing. In weather modeling, for instance, they are not just spending tens of millions of dollars on new machines and then hoping they work. They do an evaluation and find that running a model for twice as long does improve performance. Then they buy a system that is powerful enough to do that calculation in a shorter time so they can use it operationally. That gives us a lot of confidence. But there are also other ways that we can see the causality. For example, we see that there were a number of big jumps in the computing power used by NOAA (the National Oceanic and Atmospheric Administration) for weather prediction. And when they purchased a bigger computer and it was installed all at once, performance really jumped.

Q: Would these advancements have been possible without exponential increases in computing power?

A: That is a tricky question because there are a lot of different inputs: human capital, traditional capital, and also computing power. All three are changing over time. One might say, if you have a trillionfold increase in computing power, surely that has the biggest effect. And that’s a good intuition, but you also have to account for diminishing marginal returns. For example, if you go from not having a computer to having one computer, that is a huge change. But if you go from having 100 computers to having 101, that extra one doesn’t provide nearly as much gain. So there are two competing forces — big increases in computing on one side but decreasing marginal benefits on the other side. Our research shows that, even though we already have tons of computing power, it is getting bigger so fast that it explains a lot of the performance improvement in these areas.

Q: What are the implications that come from Moore’s Law slowing down?

A: The implications are quite worrisome. As computing improves, it powers better weather prediction and the other areas we studied, but it also improves countless other areas we didn’t measure but that are nevertheless critical parts of our economy and society. If that engine of improvement slows down, it means that all those follow-on effects also slow down.

Some might disagree, arguing that there are lots of ways of innovating — if one pathway slows down, other ones will compensate. At some level that is true. For example, we are already seeing increased interest in designing specialized computer chips as a way to compensate for the end of Moore’s Law. But the problem is the magnitude of these effects. The gains from Moore’s Law were so large that, in many application areas, other sources of innovation will not be able to compensate.



de MIT News https://ift.tt/BWNotI1

Building a decentralized bank for micro businesses in Latin America

In Barranquilla, Colombia, Edinson Flores has run a small, family-owned fast-food business for many years. But when his family became sick with Covid-19, he had to stop working and pay for medical care. When his family recovered, Flores still had customers and equipment, but he couldn’t afford the supplies he needed to get his business running again. He applied for a loan at a local bank but was denied because of his low credit score.

The situation was not unique: Small businesses like Flores’s make up the bulk of the economy in Latin America. Most operate in the informal economy and are not fully recognized by the government, making it difficult to get loans to grow or survive volatility.

That’s what Quipu Market is trying to solve. The company, founded by two MIT alumni, is using data from the informal economy to offer a series of small loans to entrepreneurs. Quipu has developed an online marketplace that helps entrepreneurs publish product catalogs, record transactions, and increase their sales. By digitizing business activity, Quipu is able to assess credit worthiness in a new way and provide loans at rates comparable to traditional banks.

“It's all about using new data and networks to help entrepreneurs not only access financial products, but create wealth, because if you don't create wealth, then the money is not ultimately improving the economy,” Quipu co-founder and CEO Mercedes Bidart SM ’19 says.

For now, Quipu is the one providing the loans, which gradually increase as entrepreneurs demonstrate the ability to pay them back. By the end of this year, the company plans to open up its blockchain-based lending system to allow anyone to buy interest-bearing tokens with money that is then allocated to entrepreneurs using Quipu’s algorithms assessing credit worthiness.

“We see ourselves as a digital, decentralized bank,” says Bidart, who co-founded the company with Juan Constain SM ’18 and Viviana Siless. “On top of the microloans, we want to add financial services and become a decentralized bank tailored for the informal economy.”

Quipu already has more than 10,000 users on its platform across Colombia, including Flores, who was able to access Quipu’s loans based on his strong customer base and not only get back up and running, but grow his business.

Finding a new path

Before coming to MIT, Bidart had worked for a think tank in her home country of Argentina to craft economic development policies. She also helped a grassroots organization that worked with families in informal settlements. But she began questioning if the grassroots work could scale, while also seeing that the top-down government approach was limited by a lack of data on the informal economy. She came to MIT to learn how to overcome those problems.

Bidart joined the Department of Urban Studies and Planning (DUSP) in 2017. While studying under associate professors J. Phillip Thompson and Gabriella Carolini, she was introduced to new financing models that centered around social banks, community currencies, and the blockchain. She also worked with Katrin Kaeufer, a research fellow at DUSP’s Community Innovators Lab.

“I started wondering how we could implement those models in places where there's scarcity and economic urgency all the time,” Bidart says.

Although she came to MIT without a finance background and with no knowledge of startups, she started taking entrepreneurship classes at the MIT Sloan School of Management and eventually received support through the PKG Center and the MIT Innovation Initiative to explore her ideas further.

“When I got into MIT, I knew there was a problem and I had been thinking about solutions,” Bidart says. “But I had no idea there was this other way of doing things — not through grassroots work or public policy or big companies — but actually starting something myself that could scale using technology.”

Bidart spent the summer of 2018 designing a prototype financing system with a group of entrepreneurs living in a public housing complex in Barranquilla, Colombia. When she returned to MIT, she continued developing the platform and partnered with Siless and Constain.

In 2019, the co-founders got into the School of Architecture and Planning’s MITdesignX accelerator, an experience Bidart calls “game changing” because the program helped them realize they could build a profitable business around the new data they were collecting.

Today when entrepreneurs make a profile on Quipu’s platform, the company digitizes information like the location of the business, the goods or services offered, and who their customers are.

“We turn that social capital into economic capital with an artificial intelligence-based algorithm that assesses credit worthiness in an alternative way,” Bidart says. “We create a credit score that serves as a digital financial ID. With that ID, they can access rotative loans that start at around $25 and increase in value as users repay.”
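
To make that progression concrete, here is a hypothetical sketch of how a rotative loan limit might grow with on-time repayments. The growth factor and cap are invented for illustration; only the roughly $25 starting amount comes from the article, and Quipu's actual scoring model and terms may differ entirely.

```python
# Hypothetical sketch of a rotative-loan progression, loosely inspired by the
# description above. The growth factor and cap are invented for illustration.

from dataclasses import dataclass

@dataclass
class Borrower:
    loan_limit: float = 25.0        # loans "start at around $25"
    on_time_repayments: int = 0

def update_after_repayment(b: Borrower, repaid_on_time: bool,
                           growth: float = 1.5, cap: float = 500.0) -> Borrower:
    """Grow the available loan amount as the borrower builds a repayment history."""
    if repaid_on_time:
        b.on_time_repayments += 1
        b.loan_limit = min(b.loan_limit * growth, cap)
    return b

if __name__ == "__main__":
    b = Borrower()
    for _ in range(4):
        update_after_repayment(b, repaid_on_time=True)
    print(f"After 4 on-time repayments, limit is about ${b.loan_limit:.2f}")
```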

Many of Quipu’s entrepreneurs have poor credit scores and cannot access traditional loans from banks. Private financiers are available, but they charge high interest rates and can use violent collection practices. Bidart says other microfinance solutions are slow to disburse money because they rely on people to travel to businesses and analyze operations, while Quipu can disburse money within three days of a request.

Quipu is building a blockchain-based system so that the tokens tied to its loans will increase in value as users repay the loans and pay back interest. The system will allow anyone in the world to loan money to entrepreneurs on Quipu’s platform.
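
One common way such a system could work is a pool-share model, in which interest payments accrue to a shared pool and raise the value of every outstanding token. The sketch below is an assumption-laden illustration of that general pattern, not a description of Quipu's actual token design.

```python
# Hypothetical pool-share sketch: tokens gain value as interest accrues.
# The mechanism and numbers are assumptions for illustration only.

class LendingPool:
    def __init__(self):
        self.pool_usd = 0.0      # capital attributed to the pool
        self.tokens = 0.0        # total tokens outstanding

    def deposit(self, usd: float) -> float:
        """Mint tokens at the current price; returns tokens issued."""
        price = self.token_price()
        minted = usd / price
        self.pool_usd += usd
        self.tokens += minted
        return minted

    def record_interest(self, interest: float) -> None:
        """Interest paid by borrowers accrues to the pool, raising each token's value."""
        self.pool_usd += interest

    def token_price(self) -> float:
        return 1.0 if self.tokens == 0 else self.pool_usd / self.tokens

if __name__ == "__main__":
    pool = LendingPool()
    pool.deposit(1000.0)           # a backer funds the pool
    pool.record_interest(5.0)      # a borrower repays with interest
    print(f"Token price after interest: ${pool.token_price():.4f}")  # ~1.005
```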

Transforming economies

Quipu is currently operating across Colombia and planning to expand to Mexico by June of next year. Bidart sees Quipu as an engine of economic growth for lower-income neighborhoods that have been overlooked by traditional institutions.

“The problem is people don't have access to financial products that are designed for their economy, so then there's no economic development,” Bidart says. “Supporting people with loans and providing them with other ways to sell more can improve how their business works, and they can start using the data they already have to access not just our loans but other financial services at better rates.”

Better borrowing rates will help level the playing field for families that have historically had to pay more for a number of financial services.

“We want to shift the reality that being poor is very expensive,” Bidart says. “Being born poor forces you to accept higher rates from banks, and it forces you to access supplies at higher rates because you are far away from the city or there’s a lot of middlemen. And what we want to do is say, ‘It doesn’t matter where you were born. We all have data around what we are doing — non-financial data that we can use to assess credit worthiness — and that will give all of us the ability to grow financially at the same rates.’”



de MIT News https://ift.tt/doqexAC

jueves, 23 de junio de 2022

Taking the guesswork out of dental care with artificial intelligence

When you picture a hospital radiologist, you might think of a specialist who sits in a dark room and spends hours poring over X-rays to make diagnoses. Contrast that with your dentist, who in addition to interpreting X-rays must also perform surgery, manage staff, communicate with patients, and run their business. When dentists analyze X-rays, they do so in bright rooms and on computers that aren’t specialized for radiology, often with the patient sitting right next to them.

Is it any wonder, then, that dentists given the same X-ray might propose different treatments?

“Dentists are doing a great job given all the things they have to deal with,” says Wardah Inam SM ’13, PhD ’16.

Inam is the co-founder of Overjet, a company using artificial intelligence to analyze and annotate X-rays for dentists and insurance providers. Overjet seeks to take the subjectivity out of X-ray interpretations to improve patient care.

“It’s about moving toward more precision medicine, where we have the right treatments at the right time,” says Inam, who co-founded the company with Alexander Jelicich ’13. “That’s where technology can help. Once we quantify the disease, we can make it very easy to recommend the right treatment.”

Overjet has been cleared by the Food and Drug Administration to detect and outline cavities and to quantify bone levels to aid in the diagnosis of periodontal disease, a common but preventable gum infection that causes the jawbone and other tissues supporting the teeth to deteriorate.

In addition to helping dentists detect and treat diseases, Overjet’s software is also designed to help dentists show patients the problems they’re seeing and explain why they’re recommending certain treatments.

The company has already analyzed tens of millions of X-rays, is used by dental practices nationwide, and is currently working with insurance companies that represent more than 75 million patients in the U.S. Inam is hoping the data Overjet is analyzing can be used to further streamline operations while improving care for patients.

“Our mission at Overjet is to improve oral health by creating a future that is clinically precise, efficient, and patient-centric,” says Inam.

It’s been a whirlwind journey for Inam, who knew nothing about the dental industry until a bad experience piqued her interest in 2018.

Getting to the root of the problem

Inam came to MIT in 2010, first for her master’s and then her PhD in electrical engineering and computer science, and says she caught the bug for entrepreneurship early on.

“For me, MIT was a sandbox where you could learn different things and find out what you like and what you don't like,” Inam says. “Plus, if you are curious about a problem, you can really dive into it.”

While taking entrepreneurship classes at the Sloan School of Management, Inam eventually started a number of new ventures with classmates.

“I didn't know I wanted to start a company when I came to MIT,” Inam says. “I knew I wanted to solve important problems. I went through this journey of deciding between academia and industry, but I like to see things happen faster and I like to make an impact in my lifetime, and that's what drew me to entrepreneurship.”

During her postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL), Inam and a group of researchers applied machine learning to wireless signals to create biomedical sensors that could track a person’s movements, detect falls, and monitor respiratory rate.

She didn’t get interested in dentistry until after leaving MIT, when she changed dentists and received an entirely new treatment plan. Confused by the change, she requested her X-rays and asked other dentists to take a look, only to receive yet another variation in diagnosis and treatment recommendations.

At that point, Inam decided to dive into dentistry for herself, reading books on the subject, watching YouTube videos, and eventually interviewing dentists. Before she knew it, she was spending more time learning about dentistry than she was at her job.

The same week Inam quit her job, she learned about MIT’s Hacking Medicine competition and decided to participate. That’s where she started building her team and getting connections. Overjet’s first funding came from the Media Lab-affiliated investment group the E14 Fund.

“The E14 Fund wrote the first check, and I don't think we would've existed if it wasn't for them taking a chance on us,” she says.

Inam learned that a big reason for variation in treatment recommendations among dentists is the sheer number of potential treatment options for each disease. A cavity, for instance, can be treated with a filling, a crown, a root canal, a bridge, and more.

When it comes to periodontal disease, dentists must make millimeter-level assessments to determine disease severity and progression. The extent and progression of the disease determines the best treatment.

“I felt technology could play a big role in not only enhancing the diagnosis but also to communicate with the patients more effectively so they understand and don't have to go through the confusing process I did of wondering who's right,” Inam says.

Overjet began as a tool to help insurance companies streamline dental claims before the company began integrating its tool directly into dentists’ offices. Every day, some of the largest dental organizations nationwide are using Overjet, including Guardian Insurance, Delta Dental, Dental Care Alliance, and Jefferson Dental and Orthodontics.

Today, as a dental X-ray is imported into a computer, Overjet’s software analyzes and annotates the images automatically. By the time the image appears on the computer screen, it has information on the type of X-ray taken, how a tooth may be impacted, the exact level of bone loss with color overlays, the location and severity of cavities, and more.

The analysis gives dentists more information to talk to patients about treatment options.

“Now the dentist or hygienist just has to synthesize that information, and they use the software to communicate with you,” Inam says. “So, they'll show you the X-rays with Overjet's annotations and say, ‘You have 4 millimeters of bone loss, it's in red, that's higher than the 3 millimeters you had last time you came, so I'm recommending this treatment.’”

Overjet also incorporates historical information about each patient, tracking bone loss on every tooth and helping dentists detect cases where disease is progressing more quickly.
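
As a rough illustration of that kind of longitudinal tracking, the hypothetical sketch below annualizes per-tooth bone-loss readings between visits and flags rapid progression. The readings, threshold, and data structure are invented for illustration; this is not Overjet's actual pipeline.

```python
# Hypothetical sketch: flag rapid bone-loss progression from per-visit readings.

from datetime import date

# bone loss in millimeters per tooth per visit (hypothetical readings)
history = {
    "tooth_14": [(date(2021, 12, 1), 3.0), (date(2022, 6, 1), 4.0)],
    "tooth_19": [(date(2021, 12, 1), 2.0), (date(2022, 6, 1), 2.1)],
}

def mm_per_year(readings):
    """Annualized change between the first and last measurement."""
    (d0, v0), (d1, v1) = readings[0], readings[-1]
    years = (d1 - d0).days / 365.25
    return (v1 - v0) / years

FLAG_THRESHOLD_MM_PER_YEAR = 1.0   # assumed cutoff for "progressing quickly"

for tooth, readings in history.items():
    rate = mm_per_year(readings)
    flag = "FLAG" if rate >= FLAG_THRESHOLD_MM_PER_YEAR else "ok"
    print(f"{tooth}: {rate:.2f} mm/yr [{flag}]")
```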

“We’ve seen cases where a cancer patient with dry mouth goes from nothing to something extremely bad in six months between visits, so those patients should probably come to the dentist more often,” Inam says. “It’s all about using data to change how we practice care, think about plans, and offer services to different types of patients.”

The operating system of dentistry

Overjet’s FDA clearances account for two highly prevalent diseases. They also put the company in a position to conduct industry-level analysis and help dental practices compare themselves to peers.

“We use the same tech to help practices understand clinical performance and improve operations,” Inam says. “We can look at every patient at every practice and identify how practices can use the software to improve the care they're providing.”

Moving forward, Inam sees Overjet playing an integral role in virtually every aspect of dental operations.

“These radiographs have been digitized for a while, but they've never been utilized because the computers couldn't read them,” Inam says. “Overjet is turning unstructured data into data that we can analyze. Right now, we're building the basic infrastructure. Eventually we want to grow the platform to improve any service the practice can provide, basically becoming the operating system of the practice to help providers do their job more effectively.”



de MIT News https://ift.tt/6n2BTWw