Friday, September 29, 2023

J-PAL North America and Results for America announce 18 collaborations with state and local governments

J-PAL North America and Results for America have announced 18 new partnerships with state and local governments across the country through their Leveraging Evidence and Evaluation for Equitable Recovery (LEVER) programming, which launched in April of this year.

As state and local leaders leverage federal relief funding to invest in their communities, J-PAL North America and Results for America are providing in-depth support to agencies in using data, evaluation, and evidence to advance effective and equitable government programming for generations to come. The 18 new collaborators span the contiguous United States and represent a wide range of pressing and innovative uses of federal Covid-19 recovery funding.

These partnerships are a key component of the LEVER program, run by J-PAL North America — a regional office of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) — and Results for America — a nonprofit organization that helps government agencies harness the power of evidence and data. Through 2024, LEVER will continue to provide a suite of resources, training, and evaluation design services to prepare state and local government agencies to rigorously evaluate their own programs and to harness existing evidence in developing programs and policies using federal recovery dollars.

J-PAL North America is working with four leading government agencies following a call for proposals to the LEVER Evaluation Incubator in June. These agencies will work with J-PAL staff to design randomized evaluations to understand the causal impact of important programs that contribute to their jurisdictions’ recovery from Covid-19.

Connecticut’s Medicaid office, operating out of the state’s Department of Social Services, is working to improve vaccine access and awareness among youth. “Connecticut Medicaid is thrilled to work with J-PAL North America. The technical expertise and training that we receive will expand our knowledge during ‘testing and learning’ interventions that improve the health of our members,” says Gui Woolston, director of Medicaid and the Division of Health Services.

Athens-Clarke County Unified Government is invested in evaluating programming for youth development and violence prevention implemented by the Boys and Girls Club of Athens. Their goal is “to measure and transparently communicate program impact,” explains Paige Seago, the data and outcomes coordinator for the American Rescue Plan Act. “The ability to continually iterate and tailor programs to better meet community goals is crucial to long-term success.”

The County of San Diego’s newly formed Office of Evaluation, Performance, and Analytics is evaluating a pilot program providing rental subsidies for older adults. “Randomized evaluation can help us understand if rent subsidies will help prevent seniors from becoming homeless and will give us useful information about how to move forward,” says Chief Evaluation Officer Ricardo Basurto-Dávila. 

In King County, Washington, the Executive Climate Office is planning to evaluate efforts to increase equitable access to household energy efficiency programs. “Because of J-PAL's support, we have confidence that we can reduce climate impacts and extend home electrification benefits to lower-income homeowners in King County — homeowners who otherwise may not have the ability to participate in the clean energy transition,” says King County Climate Director Marissa Aho.

Fourteen additional state and local agencies are working with Results for America as part of the LEVER Training Sprint. Together, they will develop policies that catalyze sustainable evidence building within government. 

Jurisdictions selected for the Training Sprint represent government leaders at the city, county, and state levels — all of whom are committed to creating an evaluation framework for policy that will prioritize evidence-based decision-making across the country. Over the course of 10 weeks, with access to tools and coaching, each team will develop an internal implementation policy by embedding key evaluation and evidence practices into their jurisdiction’s decision-making processes. Participants will finish the Training Sprint with a robust decision-making framework that translates their LEVER implementation policies into actionable planning guidance. 

Government leaders will utilize the LEVER Training Sprint to build a culture of data and evidence, using evaluation policies to deliver tangible results for their residents. About their participation in the LEVER Training Sprint, Dana Williams from Denver, Colorado, says, “Impact evaluation is such an integral piece to understanding the past, present, and future. I'm excited to participate in the LEVER Training Sprint to better inform and drive evidence-based programming in Denver.”

The Training Sprint is a part of a growing movement to ground government innovation in data and evidence. Kermina Hanna from the State of New Jersey notes, “It’s vital that we cement a data-driven commitment to equity in government operations, and I’m really excited for this opportunity to develop a national network of colleagues in government who share this passion and dedication to responsive public service.”

Jurisdictions selected for the Training Sprint are: 

  • Boston, Massachusetts;
  • Carlsbad, California;
  • Connecticut;
  • Dallas, Texas;
  • Denver City/County, Colorado;
  • Fort Collins, Colorado;
  • Guilford County, North Carolina;
  • King County, Washington;
  • Long Beach, California;
  • Los Angeles, California;
  • New Jersey;
  • New Mexico;
  • Pittsburgh, Pennsylvania; and
  • Washington County, Oregon.

Those interested in learning more can fill out the LEVER intake form. Please direct any questions about the Evaluation Incubator to Louise Geraghty and questions about the Training Sprint to Chelsea Powell.



from MIT News https://ift.tt/CviBVJ5

Studying cancer in context to stop its growth

Proteins called transcription factors are like molecular traffic cops that tell genes when to stop and go. If they malfunction — what scientists refer to as dysregulation — transcription factors stop orchestrating healthy gene expression and instead become a driving force for diseases like cancer.

Unsurprisingly, dysregulated transcription factors have garnered a lot of attention from researchers hoping to create new treatments for disease. But transcription factors have proven hard to drug, in part because they work in the context of various interdependent signaling molecules in the cell.

The MIT spinout Kronos Bio is studying those larger signaling networks to find new ways to disrupt transcription factor activity. By viewing transcription factors in their natural cellular context, the company believes it can develop more effective treatments for the many diseases that are driven by out-of-control transcription.

A key enabling technology for Kronos is a screening tool that allows scientists to study how transcription factors interact with other molecules. Kronos founder and MIT associate professor of biological engineering Angela Koehler has made important contributions to the tool over nearly two decades, and she continues to use it to study transcription factors in her lab today.

“Transcription factors never work in isolation,” Kronos Bio CEO Norbert Bischofberger says. “They work through multiple complex protein complexes. Angela spearheaded screening compounds in the cellular context they work in, and we’re building on that work.”

Kronos is already targeting the mother of all disease-associated transcription factors, known as MYC, in clinical trials. MYC is in every cell, but certain tumor cells overexpress MYC dramatically, relying on its constant transcription to drive cancer growth. Kronos is currently running a phase 1/2 study with patients who have relapsed or resistant MYC-dependent tumors, including patients with ovarian cancer and triple-negative breast cancer. The company’s other drug in clinical trials targets a molecule associated with dysregulated transcription in acute myeloid leukemia.

If the trials are successful, Kronos believes its approach will allow it to develop treatments for a number of other cancers associated with transcription dysfunction.

“If you look at the Tumor Genome Atlas, roughly half of all tumors have amplified MYC, and if you look at triple negative breast and ovarian cancer, it’s 80 percent,” Bischofberger explains. “If you could find drugs that essentially reduce amplified MYC levels, you could take out a broad swath of human tumors for which MYC is a driver of the malignant phenotype. It’s a huge opportunity to improve patient lives.”

From platform to product

Koehler’s interest in transcription factors dates back to the early 2000s. As an investigator at the Broad Institute of MIT and Harvard, where she is still a member, she was part of a group that developed a low-cost way to screen molecules for different binding properties. The approach could be used to find molecules that modulate transcription factors, and it garnered interest from pharmaceutical companies.

“What industry really liked was we didn’t need to purify a protein to run a screen,” Koehler explains. “We could come in with large protein complexes from cells, or potentially even patient cells, and look for our target of interest in a protein complex, which reflected a more native state to evaluate molecules.”

When Koehler started her lab at MIT, she used the approach to find molecules that bind to MYC. Many attempts to target MYC have failed over decades of drug development because it’s a difficult protein for molecules to latch on to.

“The problem is MYC is in this bucket of targets many call undruggable,” Koehler says. “It’s a transcription factor and it’s super floppy. It lacks shape and it’s highly disordered, so it’s difficult for molecules to find a binding pocket.”

Koehler and her collaborators presented their early work on the MYC-binding molecule at a conference, sparking interest from investors.

“The next two or three months, my office was like a revolving door for venture capitalists wanting to talk not just about the molecule, but to understand the platform we used to discover the molecule — that’s actually where there was more interest,” Koehler recalls.

She started Kronos Bio later that year, working with MIT’s Technology Licensing Office to license the screening platform and a few specific molecules for the company. The Deshpande Center for Technological Innovation funded some of Koehler’s early work, and Koehler, who became faculty director of the center this summer, also says it helped connect her to investors and others in the biotech industry.

Two members of Koehler’s lab became the first two employees of the company. Then Koehler met Bischofberger, who had spent 27 years as the head of research and development at Gilead Sciences and was looking to move into a startup.

Since then, Kronos has taken a winding path to developing the final molecules currently being studied in its clinical trials. (That initial molecule she presented at the conference didn’t pan out.) Some of Kronos’ preclinical work was done in conjunction with the Broad Institute, where Koehler is an Institute Member. Koehler, who is also the associate director of the Koch Institute for Integrative Cancer Research and the founder of the MIT Center for Precision Cancer Medicine, sits on the Kronos Bio scientific advisory board and says she’s following along with the company’s clinical progress like everyone else.

“What you’re looking for as a founder is the right group of people who you trust to make the right decisions,” Koehler says. “I’m a mom of four, and I often say it’s like you’re looking for the right college to send your kids to, but then you’ve got to step back and let them live their own life. That’s how I view it.”

Drugging the undruggable

Kronos Bio’s drug candidates are taken orally once a day, which allows patients to skip frequent trips to hospitals for IV infusions. In addition to targeting MYC-dependent tumors, Kronos’ drug is also being tested in humans to address other transcriptionally addicted cancers like sarcomas.

“Sarcomas are not widely mutated like other tumors; it’s often just a simple transcription factor fusion,” Bischofberger says. “The best example is Ewing’s sarcoma. That exists with two transcription factors fused together. Those are driven by aberrant transcription, and that’s something we’re excited to be going after.”

The company plans to present safety data from its trials by the end of this year, and by the middle of 2024 to present data showing whether its lead candidate can shrink MYC-dependent tumors.

“What you want to see is tumor shrinkage because none of these tumors shrink by themselves,” Bischofberger says.

Regardless of those drug candidates’ success, Bischofberger believes Kronos is making important contributions to an understudied area of therapeutics.

“There are about 1,500 transcription factors, and about 200 of those are known to be involved in cancers, but very few have been drugged,” Bischofberger says. “The transcription factors that have been drugged have been widely successful — in multiple myeloma, for instance. This is a huge, open field to be working in.”



from MIT News https://ift.tt/L6QpOVB

Who will benefit from AI?

What if we’ve been thinking about artificial intelligence the wrong way?

After all, AI is often discussed as something that could replicate human intelligence and replace human work. But there is an alternate future: one in which AI provides “machine usefulness” for human workers, augmenting but not usurping jobs, while helping to create productivity gains and spread prosperity.

That would be a fairly rosy scenario. However, as MIT economist Daron Acemoglu emphasized in a public campus lecture on Tuesday night, society has started to move in a different direction — one in which AI replaces jobs and ratchets up societal surveillance, and in the process reinforces economic inequality while concentrating political power further in the hands of the ultra-wealthy.

“There are transformative and very consequential choices ahead of us,” warned Acemoglu, Institute Professor at MIT, who has spent years studying the impact of automation on jobs and society.

Major innovations, Acemoglu suggested, are almost always bound up with matters of societal power and control, especially those involving automation. Technology generally helps society increase productivity; the question is how narrowly or widely those economic benefits are shared. When it comes to AI, he observed, these questions matter acutely “because there are so many different directions in which these technologies can be developed. It is quite possible they could bring broad-based benefits — or they might actually enrich and empower a very narrow elite.”

But when innovations augment rather than replace workers’ tasks, he noted, they create conditions in which prosperity can spread to the work force itself.

“The objective is not to make machines intelligent in and of themselves, but more and more useful to humans,” said Acemoglu, speaking to a near-capacity audience of almost 300 people in Wong Auditorium.

The Productivity Bandwagon

The Starr Forum is a public event series held by MIT’s Center for International Studies (CIS) that focuses on leading issues of global interest. Tuesday’s event was hosted by Evan Lieberman, director of CIS and the Total Professor of Political Science and Contemporary Africa.

Acemoglu’s talk drew on themes detailed in his book “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity,” which was co-written with Simon Johnson and published in May by PublicAffairs. Johnson is the Ronald A. Kurtz Professor of Entrepreneurship at the MIT Sloan School of Management.

In Tuesday’s talk, as in his book, Acemoglu discussed some famous historical examples to make the point that the widespread benefits of new technology cannot be assumed, but are conditional on how technology is implemented.

It took at least 100 years after the 18th-century onset of the Industrial Revolution, Acemoglu noted, for the productivity gains of industrialization to be widely shared. At first, real earnings did not rise, working hours increased by 20 percent, and labor conditions worsened as factory textile workers lost much of the autonomy they had held as independent weavers.

Similarly, Acemoglu observed, Eli Whitney’s invention of the cotton gin made the conditions of slavery in the U.S. even worse. That overall dynamic, in which innovation can potentially enrich a few at the expense of the many, Acemoglu said, has not vanished.

“We’re not saying that this time is different,” Acemoglu said. “This time is very similar to what went on in the past. There has always been this tension about who controls technology and whether the gains from technology are going to be widely shared.”

To be sure, he noted, there are many, many ways society has ultimately benefitted from technologies. But it’s not something we can take for granted.

“Yes indeed, we are immeasurably more prosperous, healthier, and more comfortable today than people were 300 years ago,” Acemoglu said. “But again, there was nothing automatic about it, and the path to that improvement was circuitous.”

Ultimately what society must aim for, Acemoglu said, is what he and Johnson term “The Productivity Bandwagon” in their book. That is the condition in which technological innovation is adapted to help workers, not replace them, spreading economic growth more widely. In this way, productivity growth is accompanied by shared prosperity.

“The Productivity Bandwagon is not a force of nature that applies under all circumstances automatically, and with great force, but it is something that’s conditional on the nature of technology and how production is organized and the gains are shared,” Acemoglu said.

Crucially, he added, this “double process” of innovation involves one more thing: a significant amount of worker power, something which has eroded in recent decades in many places, including the U.S.

That erosion of worker power, he acknowledged, has made it less likely that multifaceted technologies will be used in ways that help the labor force. Still, Acemoglu noted, there is a healthy tradition within the ranks of technologists, including innovators such as Norbert Wiener and Douglas Engelbart, to “make machines more usable, or more useful to humans, and AI could pursue that path.”

Conversely, Acemoglu noted, “There is every danger that overemphasizing automation is not going to get you many productivity gains either,” since some technologies may be merely cheaper than human workers, not more productive.

Icarus and us

The event included a commentary from Fotini Christia, the Ford International Professor of the Social Sciences and director of the MIT Sociotechnical Systems Research Center. Christia offered that “Power and Progress” was “a tremendous book about the forces of technology and how to channel them for the greater good.” She also noted “how prevalent these themes have been even going back to ancient times,” referring to Greek myths involving Daedalus, Icarus, and Prometheus.

Additionally, Christia raised a series of pressing questions about the themes of Acemoglu’s talk, including whether the advent of AI represented a more concerning set of problems than previous episodes of technological advancement, many of which ultimately helped many people; which people in society have the most ability and responsibility to help produce changes; and whether AI might have a different impact on developing countries in the Global South.

In an extensive audience question-and-answer session, Acemoglu fielded over a dozen questions, many of them about the distribution of earnings, global inequality, and how workers might organize themselves to have a say in the implementation of AI.

Broadly, Acemoglu suggested it is still to be determined how greater worker power can be obtained, and noted that workers themselves should help suggest productive uses for AI. At multiple points, he emphasized that workers cannot just protest circumstances but must also pursue policy changes where possible.

“There is some degree of optimism in saying we can actually redirect technology and that it’s a social choice,” Acemoglu acknowledged.

Acemoglu also suggested that countries in the Global South are vulnerable to the potential effects of AI in a few ways. For one thing, he noted, as the work of MIT economist Martin Beraja shows, China has been exporting AI surveillance technologies to governments in many developing countries. For another, countries that have made overall economic progress by employing more of their citizens in low-wage industries might find labor force participation being undercut by AI developments.

Separately, Acemoglu warned, if private companies or central governments anywhere in the world amass more and more information about people, it is likely to have negative consequences for most of the population.

“As long as that information can be used without any constraints, it’s going to be antidemocratic and it’s going to be inequality-inducing,” he said. “There is every danger that AI, if it goes down the automation path, could be a highly unequalizing technology around the world.”



from MIT News https://ift.tt/kJajNbX

Thursday, September 28, 2023

Giving students the computational chops to tackle 21st-century challenges

Graduate student Nikasha Patel ’22 is using artificial intelligence to build a computational model of how infants learn to walk, which could help robots acquire motor skills in a similar fashion.

Her research, which sits at the intersection of reinforcement learning and motor learning, uses tools and techniques from computer science to study the brain and human cognition.

It’s an area of research she wasn’t aware of before she arrived at MIT in the fall of 2018, and one Patel likely wouldn’t have considered if she hadn’t enrolled in a newly launched blended major, Course 6-9: Computation and Cognition, the following spring.

Patel was drawn to the flexibility offered by Course 6-9, which enabled her to take a variety of courses from the brain and cognitive sciences major (Course 9) and the computer science major (Course 6). For instance, she took a class on neural computation and a class on algorithms at the same time, which helped her better understand some of the computational approaches to brain science she is currently using in her research.

After earning her undergraduate degree last spring, Patel enrolled in the 6-9 master’s program and is now pursuing a PhD in computation and cognition. While a PhD wasn’t initially on her radar, the blended major opened her eyes to unique opportunities in cross-disciplinary research. In the future, she hopes to study motor control and the computational building blocks that our brains use for movement.

“Looking back on my experience at MIT, being in Course 6-9 really led me up to this moment. You can’t just think of the world through one lens. You need to have both perspectives so you can tackle these complex problems together,” she says.

Blending disciplines

The Department of Brain and Cognitive Sciences’ Course 6-9 is one of four blended majors available through the MIT Schwarzman College of Computing. Each of the majors is offered jointly by the Department of Electrical Engineering and Computer Science and a different MIT department. Course 6-7, Computer Science and Molecular Biology, is offered with the Department of Biology; Course 6-14, Computer Science, Economics, and Data Science, is offered with the Department of Economics; and Course 11-6, Urban Science and Planning with Computer Science, is offered with the Department of Urban Studies and Planning.

Each major is designed to give students a solid grounding in computational fundamentals, such as coding, algorithms, and ethical AI, while equipping them to tackle hard problems in different fields like neurobiology, economics, or urban design, using tools and insights from the realm of computer science.

The four majors, all launched between 2017 and 2019, have grown rapidly and now encompass about 360 undergraduates, or roughly 8 percent of MIT’s total undergraduate enrollment.

With so much focus on generative AI and machine learning in many disciplines, even those not traditionally associated with computer science, it is no surprise to associate professor Mehrdad Jazayeri that blended majors, and Course 6-9 in particular, have grown so rapidly. Course 6-9 launched with 40 students and has since quadrupled its enrollment.

Many students who come to MIT are enamored with machine-learning tools and techniques, so the opportunity to utilize those skills in a field like neurobiology is a great opportunity for students with varied interests, says Jazayeri, who is also director of education for the Department of Brain and Cognitive Sciences and an investigator at the McGovern Institute for Brain Research.

“It is pretty clear that new developments and insights in industry and technology will be heavily dependent on computational power. Fields related to the human mind are no different from that, from the study of neurodegenerative diseases, to research into child development, to understanding how marketing affects the human psyche,” he says.

Computation to improve medicine

Using the power of computer science to make an impact in biological research inspired senior Charvi Sharma to major in Course 6-7.

Though she was interested in medicine from a young age, it wasn’t until she came to MIT that she began to explore the role computation could play in medical care.

Coming to college with interests in both computer science and biology, Sharma considered a double major; however, she soon realized that what really interested her was the intersection of the two disciplines, and Course 6-7 was a perfect fit.

Sharma, who is planning to attend medical school, sees computer science and medicine dovetail through her work as an undergraduate researcher at MIT’s Koch Institute for Integrative Cancer Research. She and her fellow researchers seek to understand how signaling pathways contribute to a cell’s ability to escape from cell cycle arrest, or the inability of a cell to continue dividing, after DNA damage. Their work could ultimately lead to improved cancer treatments.

The data science and analysis skills she has honed through computer science courses help her understand and interpret the results of her research. She expects those same skills will prove useful in her future career as a physician.

“A lot of the tools used in medicine do require some knowledge of technology. But more so than the technical skills that I’ve learned through my computer science foundation, I think the computational mindset — the problem solving and pattern recognition — will be incredibly helpful in treatment and diagnosis as a physician,” she says.

AI for better cities

While biology and medicine are areas where machine learning is playing an increasing role, urban planning is another field that is rapidly becoming dependent on big data and the use of AI.

Interested in learning how computation could enhance urban planning, senior Kwesi Afrifa decided to apply to MIT after reading about the blended major Course 11-6, Urban Science and Planning with Computer Science.

His experiences growing up in the Ghanaian capital of Accra, situated in the midst of a rapidly growing and sprawling metro area of about 5.5 million people, convinced Afrifa that data can be used to shape urban environments in a way that would make them more livable for residents.

The combination of fundamentals from Course 6, like software engineering and data science, with important concepts from urban planning, such as equity and environmental management, has helped him understand the importance of working with communities to create AI-driven software tools in an ethical manner for responsible development.

“We can’t just be the smart engineers from MIT who come in and tell people what to do. Instead, we need to understand that communities have knowledge about the issues they face, and tools from tech and planning are a way to enhance their development in their own way,” he says.

As an undergraduate researcher, Afrifa has been working on tools for pedestrian impact analysis, which has shown him how ideas from planning, such as spatial analysis and mapping, and software engineering techniques from computer science can build off one another.

Ultimately, he hopes the software tools he creates enable planners, policymakers, and community members to make faster progress at reshaping neighborhoods, towns, and cities so they meet the needs of the people who live and work there.



from MIT News https://ift.tt/fVYyArz

Decoding the complexity of Alzheimer’s disease

Alzheimer’s disease affects more than 6 million people in the United States, and there are very few FDA-approved treatments that can slow the progression of the disease.

In hopes of discovering new targets for potential Alzheimer’s treatments, MIT researchers have performed the broadest analysis yet of the genomic, epigenomic, and transcriptomic changes that occur in every cell type in the brains of Alzheimer’s patients.

Using more than 2 million cells from more than 400 postmortem brain samples, the researchers analyzed how gene expression is disrupted as Alzheimer’s progresses. They also tracked changes in cells’ epigenomic modifications, which help to determine which genes are turned on or off in a particular cell. Together, these approaches offer the most detailed picture yet of the genetic and molecular underpinnings of Alzheimer’s.

The researchers report their findings in a set of four papers appearing today in Cell. The studies were led by Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, and Manolis Kellis, a professor of computer science in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Broad Institute of MIT and Harvard.

“What we set out to do was blend together our computational and our biological expertise and take an unbiased look at Alzheimer’s at an unprecedented scale across hundreds of individuals — something that has just never been undertaken before,” Kellis says.

The findings suggest that genetic and epigenetic changes feed on each other in an interplay that drives the pathological manifestations of the disease.

“It’s a multifactorial process,” Tsai says. “These papers together use different approaches that point to a converging picture of Alzheimer’s disease where the affected neurons have defects in their 3D genome, and that is causal to a lot of the disease phenotypes we see.”

A complex interplay

Many efforts to develop drugs for Alzheimer’s disease have focused on the amyloid plaques that develop in patients’ brains. In their new set of studies, the MIT team sought to uncover other possible approaches by analyzing the molecular drivers of the disease, the cell types that are the most vulnerable, and the underlying biological pathways that drive neurodegeneration.

To that end, the researchers performed transcriptomic and epigenomic analyses on 427 brain samples from the Religious Orders Study/Memory and Aging Project (ROSMAP), a longitudinal study that has tracked memory, motor, and other age-related changes in older people since 1994. These samples included 146 people with no cognitive impairment, 102 with mild cognitive impairment, and 144 diagnosed with Alzheimer’s-linked dementia.

In the first Cell paper, focused on gene expression changes, the researchers used single-cell RNA-sequencing to analyze the gene expression patterns of 54 types of brain cells from these samples, and identified cellular functions that were most affected in Alzheimer’s patients. Among the most prominent, they found impairments in the expression of genes involved in mitochondrial function, synaptic signaling, and protein complexes needed to maintain the structural integrity of the genome.

This gene expression study, which was led by former MIT postdoc Hansruedi Mathys, graduate student Zhuyu (Verna) Peng, and former graduate student Carles Boix, also found that genetic pathways related to lipid metabolism were highly disrupted. In work published in Nature last year, the Tsai and Kellis labs showed that the strongest genetic risk factor for Alzheimer’s, called APOE4, interferes with normal lipid metabolism, which can then lead to defects in many other cell processes.

In the study led by Mathys, the researchers also compared gene expression patterns in people who showed cognitive impairments and those who did not, including some who remained sharp despite having some degree of amyloid buildup in the brain, a phenomenon known as cognitive resilience. That analysis revealed that cognitively resilient people had larger populations of two subsets of inhibitory neurons in the prefrontal cortex. In people with Alzheimer’s-linked dementia, those cells appear to be more vulnerable to neurodegeneration and cell death.

“This revelation suggests that specific inhibitory neuron populations might hold the key to maintaining cognitive function even in the presence of Alzheimer’s pathology,” Mathys says. “Our study pinpoints these specific inhibitory neuron subtypes as a crucial target for future research and has the potential to facilitate the development of therapeutic interventions aimed at preserving cognitive abilities in aging populations.”

Epigenomics

In the second Cell paper, led by former MIT postdoc Xushen Xiong, graduate student Benjamin James, and former graduate student Carles Boix PhD ’22, the researchers examined some of the epigenomic changes that occurred in 92 people, including 48 healthy individuals and 44 with early or late-stage Alzheimer’s. Epigenomic changes are alterations in the chemical modifications or packaging of DNA that affect the usage of a particular gene within a given cell.

To measure those changes, the researchers used a technique called ATAC-Seq, which measures the accessibility of sites across the genome at single-cell resolution. By combining this data with single-cell RNA-sequencing data, the researchers were able to link information about how much a gene is expressed with data on how accessible that gene is. They could also start to group genes into regulatory circuits that control specific cell functions such as synaptic communication — the primary way that neurons transmit messages throughout the brain.
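As a rough illustration of what linking those two data types can look like, the sketch below correlates per-gene chromatin accessibility with expression across a few cell types. It is a toy Python example with invented numbers, not the study’s pipeline; the gene and cell-type names are placeholders.

```python
# Toy sketch: relate per-gene accessibility (ATAC-seq-like) to expression
# (RNA-seq-like) across matched cell types. All names and values are invented.
import numpy as np

genes = ["GENE_A", "GENE_B", "GENE_C"]                 # placeholder gene names
cell_types = ["excitatory_neuron", "microglia", "astrocyte"]

# Rows are genes; columns correspond to the cell types listed above.
accessibility = np.array([[0.9, 0.2, 0.4],             # normalized chromatin accessibility
                          [0.1, 0.8, 0.3],
                          [0.5, 0.5, 0.6]])
expression = np.array([[0.8, 0.1, 0.5],                # normalized expression
                       [0.2, 0.9, 0.2],
                       [0.4, 0.6, 0.5]])

# For each gene, ask whether it is expressed where its regulatory DNA is open.
for gene, acc, expr in zip(genes, accessibility, expression):
    r = np.corrcoef(acc, expr)[0, 1]
    print(f"{gene}: accessibility vs. expression across {len(cell_types)} cell types, r = {r:+.2f}")
```

A gene whose accessibility closely tracks its expression across cell types is the kind of candidate that linked analyses like this can flag for a regulatory circuit.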

Using this approach, the researchers were able to track changes in gene expression and epigenomic accessibility that occur in genes that have previously been linked with Alzheimer’s. They also identified the types of cells that were most likely to express these disease-linked genes, and found that many of them occur most often in microglia, the immune cells responsible for clearing debris from the brain.

This study also revealed that every type of cell in the brain undergoes a phenomenon known as epigenomic erosion as Alzheimer’s disease progresses, meaning that the cells’ normal pattern of accessible genomic sites is lost, which contributes to loss of cell identity.

The role of microglia

In a third Cell paper, led by MIT graduate student Na Sun and research scientist Matheus Victor, the researchers focused primarily on microglia, which make up 5 to 10 percent of the cells in the brain. In addition to clearing debris from the brain, these immune cells also respond to injury or infection and help neurons communicate with each other.

This study builds on a 2015 paper from Tsai and Kellis in which they found that many of the genome-wide association study (GWAS) variants associated with Alzheimer’s disease are predominantly active in immune cells like microglia, much more than in neurons or other types of brain cells.

In the new study, the researchers used RNA sequencing to classify microglia into 12 different states, based on hundreds of genes that are expressed at different levels during each state. They also showed that as Alzheimer’s disease progresses, more microglia enter inflammatory states. The Tsai lab has also previously shown that as more inflammation occurs in the brain, the blood-brain barrier begins to degrade and neurons begin to have difficulty communicating with each other.

At the same time, fewer microglia in the Alzheimer’s brain exist in a state that promotes homeostasis and helps the brain function normally. The researchers identified transcription factors that turn on the genes that keep microglia in that homeostatic state, and the Tsai lab is now exploring ways to activate those factors, in hopes of treating Alzheimer’s disease by programming inflammation-inducing microglia to switch back to a homeostatic state.
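At bottom, defining cell “states” like these means grouping cells whose expression profiles look similar. The sketch below, in Python with synthetic data, shows one simple way such a grouping could be done using off-the-shelf dimensionality reduction and clustering; it is an illustrative stand-in rather than the study’s actual method, which worked from real single-cell data and resolved 12 microglial states.

```python
# Toy sketch: cluster cells into candidate transcriptional "states" from a
# synthetic cells-by-genes matrix (requires numpy and scikit-learn).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_cells, n_genes, n_states = 500, 200, 4               # toy sizes; the study describes 12 states

# Stand-in for log-normalized single-cell expression counts.
expression = np.log1p(rng.poisson(lam=2.0, size=(n_cells, n_genes)).astype(float))

# Reduce dimensionality, then group cells with similar profiles.
pcs = PCA(n_components=20, random_state=0).fit_transform(expression)
states = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit_predict(pcs)

print("cells assigned to each candidate state:", np.bincount(states))
```

Real single-cell pipelines layer normalization, batch correction, and marker-gene annotation on top of this basic recipe before a cluster is interpreted as a biological state.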

DNA damage

In the fourth Cell study, led by MIT research scientist Vishnu Dileep and Boix, the researchers examined how DNA damage contributes to the development of Alzheimer’s disease. Previous work from Tsai’s lab has shown that DNA damage can appear in neurons long before Alzheimer’s symptoms appear. This damage is partly a consequence of the fact that during memory formation, neurons create many double-stranded DNA breaks. These breaks are promptly repaired, but the repair process can become faulty as neurons age.

This fourth study found that as more DNA damage accumulates in neurons, it becomes more difficult for them to repair the damage, leading to genome rearrangements and 3D folding defects.

“When you have a lot of DNA damage in neurons, the cells, in their attempt to put the genome back together, make mistakes that cause rearrangements,” Dileep says. “The analogy that I like to use is if you have one crack in an image, you can easily put it back together, but if you shatter an image and try to piece it back together, you’re going to make mistakes.”

These repair mistakes also lead to a phenomenon known as gene fusion, which occurs when rearrangements take place between genes, leading to dysregulation of genes. Alongside defects in genome folding, these changes appear to predominantly impact genes related to synaptic activity, likely contributing to the cognitive decline seen in Alzheimer’s disease.

The findings raise the possibility of seeking ways to enhance neurons’ DNA repair capabilities as a way to slow down the progression of Alzheimer’s disease, the researchers say.

In addition, Kellis’ lab now hopes to use artificial intelligence algorithms such as protein language models, graph neural networks, and large language models to discover drugs that might target some of the key genes that the researchers identified in these studies.

The researchers also hope that other scientists will make use of their genomic and epigenomic data. “We want the world to use this data,” Kellis says. “We've created online repositories where people can interact with the data, can access it, visualize it, and conduct analyses on the fly.”

The research was funded, in part, by the National Institutes of Health and the Cure Alzheimer’s Foundation CIRCUITS consortium.



from MIT News https://ift.tt/V8ab1QY

Wednesday, September 27, 2023

Re-imagining the opera of the future

In the mid-1980s, composer Tod Machover came across a copy of Philip K. Dick’s science fiction novel “VALIS” in a Parisian bookstore. Based on a mystical vision Dick called his “pink light experience,” “VALIS” was an acronym for “vast active living intelligence system.” The metaphysical novel would become the basis for Machover’s opera of the same name, which first premiered at the Pompidou Center in 1987, and was recently re-staged at MIT for a new generation.

At the time, Machover was in his 20s and the director of musical research at the renowned French Institute IRCAM, a hotbed of the avant-garde known for its pioneering research in music technology. The Pompidou, Machover says, had given him carte blanche to create a new piece for its 10th anniversary. So, throughout the summer and fall, the composer had gone about constructing an elaborate theater inside the center’s cavernous entrance hall, installing speakers and hundreds of video monitors.

Creating the first computer opera

Machover, who is now Muriel R. Cooper Professor of Music and Media and director of the MIT Media Lab’s Opera of the Future research group, had originally wanted to use IRCAM founder Pierre Boulez’s Ensemble Intercontemporain, but was turned down when he asked to rehearse with them for a full two months. “Like a rock band,” he says. “I went back and thought, ‘Well, what’s the smallest number of players that can make and generate the richness and layered complexity of music that I was thinking about?’”

He decided his orchestra would consist of only two musicians: a keyboardist and a percussionist. With tools like personal computers, MIDI, and the DX7 newly available, the possibilities of digital sound and intelligent interaction were beginning to expand. Soon, Machover took a position as a founding faculty member of MIT’s Media Lab, shuttling back and forth between Cambridge, Massachusetts, and Paris. “That’s when we invented hyperinstruments,” says Machover. The hyperinstruments, developed at the Media Lab in collaboration with Machover’s very first graduate student RA Joe Chung, allowed the musician to control a much fuller range of sound. At the time, he says, “no serious composers were using real-time computer instruments for concert music.”

Word spread at IRCAM that Machover’s opera was, to say the least, unusual. Over the course of December 1987, “VALIS” opened to packed houses in Paris, eliciting both cheers and groans of horror. “It was really controversial,” Machover says. “It really stirred people up. It was like, ‘Wow, we’ve never heard anything like this. It has melody and harmonies and driving rhythms in a way that new music isn’t supposed to.’” “VALIS” existed somewhere between an orchestra and a rock band, the purely acoustic dissolving into the electric as the opera progressed. In today’s era of the remix, audiences might be accustomed to a mélange of musical styles, but at the time this hybrid approach was new. Machover — who trained as a cellist in addition to playing bass in rock bands — has always borrowed freely from high and low, classical and rock, human and synthetic, acoustic and hi-tech, combining parts to create new wholes.

The story of Dick’s philosophical novel is itself a study of fragments, of the divided self, as the main character, Phil, confronts his fictional double, Horselover Fat, while embarking on a hallucinatory spiritual quest after the suicide of a friend. At the time of Dick’s writing, the term artificial intelligence had yet to achieve widespread use. And yet, in “VALIS,” he combines ideas about AI and mysticism to explore questions of existence. In Dick’s vision, “VALIS” was the grand unifying theory that connected a vast array of seemingly disparate ideas. “For him, that’s what God was: this complex technological system,” Machover says. “His big question was: Is it possible for technology to be the answer? Is it possible for anything to be the answer, or am I just lost? He was looking for what could possibly reconnect him to the world and reconnect the parts of his personality, and envisioned a technology to do that.”

A performance for the contemporary era

A full production of “VALIS” hasn’t been mounted in over 30 years, but it’s a fitting moment to re-stage the opera as Dick’s original vision of the living artificial intelligence system — as well as hopes for its promise and fears for its pitfalls — seems increasingly prophetic. The new performance was developed at MIT over the course of the last few years with funding from the MIT Center for Art, Science and Technology, among other sources. Performed at MIT Theater Building W97, the production stars baritone Davóne Tines and mezzo-soprano Anaïs Reno. Joining them also were vocalists Timur Bekbosunov, David Cushing, Maggie Finnegan, Rose Hegele, and Kristin Young, as well as pianist/keyboardist Julia Carey and multi-percussionist Maria Finkelmeier. New AI-enhanced technologies, created and performed by Max Addae, Emil Droga, Nina Masuelli, Manaswi Mishra, and Ana Schon, were developed in the MIT Media Lab’s Opera of the Future group, which Machover directs.

At MIT, Machover collaborated with theater director Jay Scheib, Class of 1949 Professor of Music and Theater Arts, whose augmented reality theater productions have long probed the confused border between the simulacra and the real. “We took camera feeds of live action, process the signal and then project it back, like a strange film, on a variety of surfaces, both TV- and screen-like but also diaphanous and translucent,” says Scheib. “It’s lots and lots of images accumulating at a really high speed, and a mix of choreography and styles of film acting, operatic acting.” Against an innovative set designed by Oana Botez, lighting by Yuki Link, and media by Peter A. Torpey PhD ’13, actors played multiple characters as time splinters and refracts. “Reality is constantly shifting,” says Scheib.

As the opera sped toward the hallucinatory finale, becoming progressively disorienting, a computer music composer named Mini appeared, originally played by Machover, conjuring the angelic hologram Sophia who delivers Phil/Fat to a state of wholeness. In the opera’s libretto, Mini is described as “sculpting sound” instead of simply playing the keyboard, “setting off musical structures with the flick of his hand — he seemed to be playing the orchestra of the future.” Machover composed Mini’s section beforehand in the original production, but the contemporary performance used a custom-built AI model, fed with Machover’s own compositions, to create new music in real time. “It’s not an instrument, exactly. It’s a living system that gets explored during the performance,” says Machover. “It’s like a system that Mini might actually have built.”

As they were developing the project this past spring, the Opera of the Future group wrestled with the question: How would Mini “perform” the system? “Because this is live, this is real, we wanted it to feel fresh and new, and not just be someone waving hands in the air,” says Machover. One day, Nina Masuelli ’23, who had recently completed her undergraduate degree at MIT, brought a large clear plastic jar into the lab. The group experimented with applying sensors to the jar, and then connected it to the AI system. As Mini manipulates the jar, the machine’s music responds in turn. “It’s incredibly magical,” says Machover. “It’s this new kind of object that allows a living system to be explored and to form right in front of you. It’s different every time, and every time it makes me smile with delight as something unexpected is revealed.”

As the performance neared, and Machover watched Masuelli continue to sculpt sound with the hollow jug, a string of Christmas lights coiled inside, something occurred to him: “Why don’t you be Mini?”

In some ways, in the age of ChatGPT and DALL-E, Mini’s exchange with the AI system is symbolic of humanity’s larger dance with machine intelligence, as we experiment with ways to exist and create alongside it: an ongoing venture that will eventually be for the next generation to explore. Writing thousands of sprawling pages in what he called his “exegesis,” Philip K. Dick spent the rest of his life after his “pink light experience” trying to make sense of a universe “transformed by information.” Though the many questions raised by “VALIS” — Is technology the answer? — might never be fully explained, says Machover, “you can feel them through music.”

Audiences apparently felt the same way. As one reviewer wrote, “'VALIS' is an operatic tour-de-force.” The three shows were filled to capacity, with long waiting lists, and response was wildly enthusiastic.

“It has been deeply gratifying to see that ‘VALIS’ has captured the imagination of a new group of creative collaborators and astonishing performers, of brilliant student inventors and artists, and of the public, wonderfully diverse in age and background,” says Machover. “This is partially due to the visionary nature of Philip K. Dick’s novel (much of which is even more relevant today than when the book and opera first appeared). I hope it also reflects something of the musical vitality and richness of the score, which feels as fresh to me as when I composed it over 35 years ago. I am truly delighted that ‘VALIS’ is back, and hope very much that it is here to stay!”



from MIT News https://ift.tt/Hsu2a56

MIT welcomes nine MLK Visiting Professors and Scholars for 2023-24

Established in 1990, the MLK Visiting Professors and Scholars Program at MIT welcomes outstanding scholars to the Institute for visiting appointments. MIT aspires to attract candidates who are, in the words of Martin Luther King Jr., “trailblazers in human, academic, scientific and religious freedom.” The program honors King’s life and legacy by expanding and extending the reach of our community. 

The MLK Scholars Program has welcomed more than 140 professors, practitioners, and professionals at the forefront of their respective fields to MIT. They contribute to the growth and enrichment of the community through their interactions with students, staff, and faculty. They pay tribute to Martin Luther King Jr.’s life and legacy of service and social justice, and they embody MIT’s values: excellence and curiosity, openness and respect, and belonging and community.  

Each new cohort of scholars actively participates in community engagement and supports MIT’s mission of “advancing knowledge and educating students in science, technology, and other areas of scholarship that will best serve the nation and the world in the 21st century.”

The 2023-2024 MLK Scholars:

Tawanna Dillahunt is an associate professor at the University of Michigan’s School of Information with a joint appointment in their electrical engineering and computer science department. She is joining MIT at the end of a one-year visiting appointment as a Harvard Radcliffe Fellow. Her faculty hosts at the Institute are Catherine D’Ignazio in the Department of Urban Studies and Planning and Fotini Christia in the Institute for Data, Systems, and Society (IDSS). Dillahunt’s research focuses on equitable and inclusive computing. During her appointment, she will host a podcast to explore ethical and socially responsible ways to engage with communities, with a special emphasis on technology. 

Kwabena Donkor is an assistant professor of marketing at Stanford Graduate School of Business; he is hosted by Dean Eckles, an associate professor of marketing at MIT Sloan School of Management. Donkor’s work bridges economics, psychology, and marketing. His scholarship combines insights from behavioral economics with data and field experiments to study social norms, identity, and how these constructs interact with policy in the marketplace.

Denise Frazier joins MIT from Tulane University, where she is an assistant director in the New Orleans Center for the Gulf South. She is a researcher and performer and brings a unique interdisciplinary approach to her work at the intersection of cultural studies, environmental justice, and music. Frazier is hosted by Christine Ortiz, the Morris Cohen Professor in the Department of Materials Science and Engineering. 

Wasalu Jaco, an accomplished performer and artist, is renewing his appointment at MIT for a second year; he is hosted jointly by Nick Montfort, a professor of digital media in the Comparative Media Studies Program/Writing, and Mary Fuller, a professor in the Literature Section and the current chair of the MIT faculty. In his second year, Jaco will work on Cyber/Cypher Rapper, a research project to develop a computational system that participates in responsive and improvisational rap.

Morgane Konig first joined the Center for Theoretical Physics at MIT in December 2021 as a postdoc. Now a member of the 2023–24 MLK Visiting Scholars Program cohort, she will deepen her ties with scholars and research groups working in cosmology, primarily on early-universe inflation and late-universe signatures that could enable the scientific community to learn more about the mysterious nature of dark matter and dark energy. Her faculty hosts are David Kaiser, the Germeshausen Professor of the History of Science and professor of physics, and Alan Guth, the Victor F. Weisskopf Professor of Physics, both from the Department of Physics.

The former minister of culture for Colombia and a transformational leader dedicated to environmental protection, Angelica Mayolo-Obregon joins MIT from Buenaventura, Colombia. During her time at MIT, she will serve as an advisor and guest speaker, and help MIT facilitate gatherings of environmental leaders committed to addressing climate action and conserving biodiversity across the Americas, with a special emphasis on Afro-descendant communities. Mayolo-Obregon is hosted by John Fernandez, a professor of building technology in the Department of Architecture and director of MIT's Environmental Solutions Initiative, and by J. Phillip Thompson, an associate professor in the Department of Urban Studies and Planning (and a former MLK Scholar).

Jean-Luc Pierite is a member of the Tunica-Biloxi Tribe of Louisiana and the president of the board of directors of the North American Indian Center of Boston. While at MIT, Pierite will build connections between MIT and the local Indigenous communities. His research focuses on enhancing climate resilience planning by infusing Indigenous knowledge and ecological practices into scientific and other disciplines. His faculty host is Janelle Knox-Hayes, the Lister Brothers Professor of Economic Geography and Planning in the Department of Urban Studies and Planning.

Christine Taylor-Butler ’81 is a children’s book author who has written over 90 books; she is hosted by Graham Jones, an associate professor of anthropology. An advocate for literacy and STEAM education in underserved urban and rural schools, Taylor-Butler will partner with community organizations in the Boston area. She is also completing the fourth installment of her middle-grade series, "The Lost Tribe." These books follow a team of five kids as they use science and technology to crack codes and solve mysteries.

Angelino Viceisza, a professor of economics at Spelman College, joins MIT Sloan as an MLK Visiting Professor and the Phyllis Wallace Visiting Professor; he is hosted by Robert Gibbons, Sloan Distinguished Professor of Management, and Ray Reagans, Alfred P. Sloan Professor of Management, professor of organization studies, and associate dean for diversity, equity, and inclusion at MIT Sloan. Viceisza has strong, ongoing connections with MIT. His research focuses on remittances, retirement, and household finance in low-income countries and is relevant to public finance and financial economics, as well as the development and organizational economics communities at MIT. 

Javit Drake, Moriba Jah, and Louis Massiah, members of last year’s cohort of MLK Scholars, will remain at MIT through the end of 2023.

There are multiple opportunities throughout the year to meet our MLK Visiting Scholars and learn more about their research projects and their social impact. 

For more information about the MLK Visiting Professors and Scholars Program and upcoming events, visit the website.



from MIT News https://ift.tt/IbREiCW

Improving US air quality, equitably

Decarbonization of national economies will be key to achieving global net-zero emissions by 2050, a major stepping stone to the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius (and ideally 1.5 C), and thereby averting the worst consequences of climate change. Toward that end, the United States has pledged to reduce its greenhouse gas emissions by 50-52 percent from 2005 levels by 2030, backed by its implementation of the 2022 Inflation Reduction Act. This strategy is consistent with a 50-percent reduction in carbon dioxide (CO2) by the end of the decade.

If U.S. federal carbon policy is successful, the nation’s overall air quality will also improve. Cutting CO2 emissions reduces atmospheric concentrations of air pollutants that lead to the formation of fine particulate matter (PM2.5), which causes more than 200,000 premature deaths in the United States each year. But an average nationwide improvement in air quality will not be felt equally; air pollution exposure disproportionately harms people of color and lower-income populations.

How effective are current federal decarbonization policies in reducing U.S. racial and economic disparities in PM2.5 exposure, and what changes will be needed to improve their performance? To answer that question, researchers at MIT and Stanford University recently evaluated a range of policies which, like current U.S. federal carbon policies, reduce economy-wide CO2 emissions by 40-60 percent from 2005 levels by 2030. Their findings appear in an open-access article in the journal Nature Communications.

First, they show that a carbon-pricing policy, while effective in reducing PM2.5 exposure for all racial and ethnic groups, does not significantly mitigate relative disparities in exposure. On average, the white population experiences far lower exposure than the Black, Hispanic, and Asian populations. The policy does little to narrow these disparities because the CO2 emissions reductions it achieves occur primarily in the coal-fired electricity sector; other sectors, such as industry and heavy-duty diesel transportation, contribute far more of the emissions that form PM2.5.
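To make the idea of an exposure disparity concrete, here is a minimal sketch of the kind of population-weighted metric involved; the population and exposure figures below are hypothetical placeholders chosen for illustration, not numbers from the study.

```python
# Hypothetical, illustrative numbers only -- not data from the study.
# Each entry: group -> (population in millions, mean PM2.5 exposure in micrograms/m^3)
groups = {
    "White":    (192.0, 7.0),
    "Black":    ( 41.0, 8.0),
    "Hispanic": ( 62.0, 7.9),
    "Asian":    ( 20.0, 7.8),
}

total_pop = sum(pop for pop, _ in groups.values())
overall = sum(pop * pm for pop, pm in groups.values()) / total_pop  # population-weighted mean

for name, (pop, pm) in groups.items():
    # Relative disparity: how far a group's mean exposure sits above or below the overall mean.
    print(f"{name:9s} mean exposure {pm:.1f}, {pm / overall - 1:+.1%} relative to the population average")
```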

The researchers then examine thousands of different reduction options through an optimization approach to identify whether any possible combination of carbon dioxide reductions in the range of 40-60 percent can mitigate disparities. They find that no policy scenario aligned with current U.S. carbon dioxide emissions targets is likely to significantly reduce current PM2.5 exposure disparities.

“Policies that address only about 50 percent of CO2 emissions leave many polluting sources in place, and those that prioritize reductions for minorities tend to benefit the entire population,” says Noelle Selin, supervising author of the study and a professor at MIT’s Institute for Data, Systems and Society and Department of Earth, Atmospheric and Planetary Sciences. “This means that a large range of policies that reduce CO2 can improve air quality overall, but can’t address long-standing inequities in air pollution exposure.”

So if climate policy alone cannot adequately achieve equitable air quality results, what viable options remain? The researchers suggest that more ambitious carbon policies could narrow racial and economic PM2.5 exposure disparities in the long term, but not within the next decade. To make a near-term difference, they recommend interventions designed to reduce PM2.5 emissions resulting from non-CO2 sources, ideally at the economic sector or community level.

“Achieving improved PM2.5 exposure for populations that are disproportionately exposed across the United States will require thinking that goes beyond current CO2 policy strategies, most likely involving large-scale structural changes,” says Selin. “This could involve changes in local and regional transportation and housing planning, together with accelerated efforts towards decarbonization.”



from MIT News https://ift.tt/cMIivuX

From physics to generative AI: An AI model for advanced pattern generation

Generative AI, which is currently riding a crest of popular discourse, promises a world where the simple transforms into the complex — where a simple distribution evolves into intricate patterns of images, sounds, or text, rendering the artificial startlingly real. 

The realms of imagination no longer remain as mere abstractions, as researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have brought an innovative AI model to life. Their new technology integrates two seemingly unrelated physical laws that underpin the best-performing generative models to date: diffusion, which typically illustrates the random motion of elements, like heat permeating a room or a gas expanding into space, and Poisson Flow, which draws on the principles governing the activity of electric charges.

This harmonious blend has resulted in superior performance in generating new images, outpacing existing state-of-the-art models. Since its inception, the “Poisson Flow Generative Model ++ (PFGM++)” has found potential applications in various fields, from antibody and RNA sequence generation to audio production and graph generation.

The model can generate complex patterns, like creating realistic images or mimicking real-world processes. PFGM++ builds on PFGM, the team's work from the prior year. PFGM takes inspiration from the mathematics behind the Poisson equation and applies it to the data the model tries to learn from. To do this, the team used a clever trick: They added an extra dimension to their model's "space," kind of like going from a 2D sketch to a 3D model. This extra dimension gives more room to maneuver, places the data in a larger context, and helps the model approach the data from all directions when generating new samples.

“PFGM++ is an example of the kinds of AI advances that can be driven through interdisciplinary collaborations between physicists and computer scientists,” says Jesse Thaler, theoretical particle physicist in MIT’s Laboratory for Nuclear Science's Center for Theoretical Physics and director of the National Science Foundation's AI Institute for Artificial Intelligence and Fundamental Interactions (NSF AI IAIFI), who was not involved in the work. “In recent years, AI-based generative models have yielded numerous eye-popping results, from photorealistic images to lucid streams of text. Remarkably, some of the most powerful generative models are grounded in time-tested concepts from physics, such as symmetries and thermodynamics. PFGM++ takes a century-old idea from fundamental physics — that there might be extra dimensions of space-time — and turns it into a powerful and robust tool to generate synthetic but realistic datasets. I'm thrilled to see the myriad of ways ‘physics intelligence’ is transforming the field of artificial intelligence.”

The underlying mechanism of PFGM isn't as complex as it might sound. The researchers compared the data points to tiny electric charges placed on a flat plane in a dimensionally expanded world. These charges produce an “electric field,” with the charges looking to move upwards along the field lines into an extra dimension and consequently forming a uniform distribution on a vast imaginary hemisphere. The generation process is like rewinding a videotape: starting with a uniformly distributed set of charges on the hemisphere and tracking their journey back to the flat plane along the electric lines, they align to match the original data distribution. This intriguing process allows the neural model to learn the electric field, and generate new data that mirrors the original. 
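As a rough illustration of that description (a toy sketch written for this article, not code from the paper), the snippet below treats a few hundred 2-D points as charges on the z = 0 plane, computes the resulting Poisson field directly instead of learning it with a neural network, and integrates the backward ODE from a faraway starting point down to the plane to "generate" a new sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 2-D points on the unit circle. PFGM augments each point with an
# extra coordinate z, placing it on the z = 0 plane of a 3-D space as a small "charge."
theta = rng.uniform(0.0, 2.0 * np.pi, size=512)
charges = np.column_stack([np.cos(theta), np.sin(theta), np.zeros(512)])

def field(x):
    """Poisson field at augmented point x: each charge contributes (x - x_i) / |x - x_i|^3."""
    diff = x - charges
    dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-9
    return (diff / dist**3).mean(axis=0)

def generate(z_start=40.0, n_steps=4000):
    """Euler-integrate the backward ODE dx/dz = E_x / E_z from far away down to z ~ 0."""
    x = rng.normal(size=3)
    x *= z_start / np.linalg.norm(x)   # start far from the data, roughly uniform over directions
    x[2] = abs(x[2])                   # stay in the upper half-space (z > 0)
    dz = x[2] / n_steps
    for _ in range(n_steps):
        e = field(x)
        x[:2] -= (e[:2] / e[2]) * dz   # follow the field line backward in x and y ...
        x[2] -= dz                     # ... while walking z back down toward the data plane
    return x[:2]

sample = generate()
print(sample, np.linalg.norm(sample))  # the landing point sits near the unit circle
```

In the actual method, a neural network learns an estimate of this field from training data rather than computing it exactly, which is what lets the model generate new samples instead of landing back on memorized points.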

The PFGM++ model extends the electric field in PFGM to an intricate, higher-dimensional framework. When you keep expanding these dimensions, something unexpected happens — the model starts resembling another important class of models, the diffusion models. This work is all about finding the right balance. The PFGM and diffusion models sit at opposite ends of a spectrum: one is robust but complex to handle, the other simpler but less sturdy. The PFGM++ model offers a sweet spot, striking a balance between robustness and ease of use. This innovation paves the way for more efficient image and pattern generation, marking a significant step forward in technology. Along with adjustable dimensions, the researchers proposed a new training method that enables more efficient learning of the electric field. 

To bring this theory to life, the team solved a pair of differential equations detailing the charges' motion within the electric field. They evaluated performance using the Frechet Inception Distance (FID) score, a widely accepted metric that assesses the quality of images generated by the model against real ones. PFGM++ also showed greater resistance to errors and greater robustness to the step size used in solving the differential equations.
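For readers curious about the metric, here is a minimal, self-contained sketch of the Frechet distance underlying FID: fit a Gaussian to each set of feature vectors and compare the two fits. (In a real FID pipeline the features come from an InceptionV3 network, which is omitted here.)

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_gen):
    """FID-style distance: ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^(1/2))."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    S_r = np.cov(feats_real, rowvar=False)
    S_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(S_r @ S_g)
    if np.iscomplexobj(covmean):       # sqrtm can return tiny imaginary parts; drop them
        covmean = covmean.real
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(S_r + S_g - 2.0 * covmean))

# Two feature clouds drawn from slightly different Gaussians give a small, nonzero distance.
rng = np.random.default_rng(0)
real_feats = rng.normal(0.0, 1.0, size=(5000, 64))
fake_feats = rng.normal(0.1, 1.0, size=(5000, 64))
print(frechet_distance(real_feats, fake_feats))
```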

Looking ahead, the researchers aim to refine certain aspects of the model, particularly systematic ways to identify the "sweet spot" value of D (the number of augmented dimensions) best suited to specific data, architectures, and tasks, by analyzing the behavior of the neural networks' estimation errors. They also plan to apply PFGM++ to modern large-scale text-to-image and text-to-video generation.

“Diffusion models have become a critical driving force behind the revolution in generative AI,” says Yang Song, research scientist at OpenAI. “PFGM++ presents a powerful generalization of diffusion models, allowing users to generate higher-quality images by improving the robustness of image generation against perturbations and learning errors. Furthermore, PFGM++ uncovers a surprising connection between electrostatics and diffusion models, providing new theoretical insights into diffusion model research.”

“Poisson Flow Generative Models do not only rely on an elegant physics-inspired formulation based on electrostatics, but they also offer state-of-the-art generative modeling performance in practice,” says NVIDIA senior research scientist Karsten Kreis, who was not involved in the work. “They even outperform the popular diffusion models, which currently dominate the literature. This makes them a very powerful generative modeling tool, and I envision their application in diverse areas, ranging from digital content creation to generative drug discovery. More generally, I believe that the exploration of further physics-inspired generative modeling frameworks holds great promise for the future and that Poisson Flow Generative Models are only the beginning.”

The paper’s authors include three MIT graduate students: Yilun Xu of the Department of Electrical Engineering and Computer Science (EECS) and CSAIL, Ziming Liu of the Department of Physics and the NSF AI IAIFI, and Shangyuan Tong of EECS and CSAIL, as well as Google Senior Research Scientist Yonglong Tian PhD '23. MIT professors Max Tegmark and Tommi Jaakkola advised the research.

The team was supported by the MIT-DSTA Singapore collaboration, the MIT-IBM Grand Challenge project, National Science Foundation grants, The Casey and Family Foundation, the Foundational Questions Institute, the Rothberg Family Fund for Cognitive Science, and the ML for Pharmaceutical Discovery and Synthesis Consortium. Their work was presented at the International Conference on Machine Learning this summer.



from MIT News https://ift.tt/XxM8lF0

Tuesday, September 26, 2023

Have you heard about the “whom of which” trend?

Back in the spring of 2022, professor of linguistics David Pesetsky was talking to an undergraduate class about relative clauses, which add information to sentences. For instance: “The senator, with whom we were speaking, is a policy expert.” Relative clauses often feature “who,” “which,” “that,” and so on.

Before long a student, Kanoe Evile ’23, raised her hand.

“How does this account for the ‘whom of which’ construction?” Evile asked.

Pesetsky, who has been teaching linguistics at MIT since 1988, had never encountered the phrase “whom of which” before.

“I thought, ‘What?’” Pesetsky recalls.

But to Evile, “whom of which” seems normal, as in, “Our striker, whom of which is our best player, scores a lot of goals.” After the class she talked to Pesetsky. He suggested Evile write a paper about it for the course, 24.902 (Introduction to Syntax).

“He said, ‘I’ve never heard of that, but it might make an interesting topic,’” Evile says.  She started hunting for online examples that evening. Some of the material she ultimately found came from social media; one example was in a Connecticut state government document. Among her finds: “Dave, Carter, Stefan, LeRoi, Boyd, and Tim are special people whom of which make special music together.”

And: “Our 7th figure in the set is one of the show’s main reoccurring [sic] characters, whom of which we all love to hate.”

And: “Oh, that’s me whom which you’re looking for.” (Sometimes “of” is dropped.)

Evile, a biological engineering major, wrote the paper and went back to studying cells. But Pesetsky, after querying colleagues and others, found that “whom of which” was a largely overlooked phenomenon; virtually no scholars had heard of it. He thought the subject merited further scrutiny. In early 2023, he and Evile set up an independent study project: How does “whom of which” work?

As Evile and Pesetsky show in a newly published paper, “whom of which” obeys very specific rules, whose nature contributes to a larger discussion about sentence construction. The paper, “Wh-which relatives and the existence of pied piping,” appears this month in the journal Glossa.

“It seems to be brand new, and it’s very colloquial, but it’s extremely law-governed,” says Pesetsky, the Ferrari P. Ward Professor of Modern Languages and Linguistics at MIT.

Diversity and unity

When Evile and Pesetsky formally analyzed their “whom of which” examples, they found that, in terms of its semantics, people consistently use “whom of which” the same way they use “whom.” The expression is not random gibberish.

“With things like this, people are not being silly or uneducated,” Pesetsky says.

Evile and Pesetsky then dug into syntax matters. As many MIT linguists emphasize, human language is both diversified and unified. Languages seem wildly different from each other, but scholars have identified many universal features. These often involve syntax, the organization of sentences.

On this front, “whom of which” relates to “wh-movement,” the way certain sentences are reordered. Suppose we say, “Anna bought something.” To turn that into a question with wh-words, we might say, “What did Anna buy?” In the process, wh-words are placed to the left, and others get reshuffled.

But sometimes multiple words move left, a phenomenon the linguist John Robert Ross PhD ’67 identified and called “pied piping” in his MIT dissertation. In the question, “Which kind of wine did Anna buy?” not only the wh-word, “which,” but also “kind of wine” moved to the front of the sentence — the wh-word acting like the pied piper of legend, with other words following it.

“The question has always been, why does pied piping exist?” Pesetsky says. “If the left edge of a question or relative clause wants to have a wh-word in it, why doesn’t it just take the wh-word? Why does it take a bunch of other words along with it?”

This is partly why Pesetsky was excited to examine the “whom of which” issue: It provides a new multiword construction for studying wh-movement. What Evile and Pesetsky found, though, was unexpected.

Pied piper chat

In the first place, Evile and Pesetsky found that “whom of which” may provide evidence for a controversial theory about the pied piping phenomenon, developed by linguist Seth Cable PhD ’07. What is puzzling about pied piping is its randomness, the wh-word pulling other words along with it for no apparent reason. Cable noticed that in the Alaskan language Tlingit, questions that look like they feature pied piping always have a particle, “sá,” following the moved words. In the sentence “Aadóo yaagú sá ysiteen?” — “Whose boat did you see?” — what looks like pied piping is just the normal reordering of “sá” plus the words that depend on it. To linguists, what is moving is not random, but the phrase “headed” by the particle.  

Evile and Pesetsky think “of” has much of the same function as “sá” in the “whom of which” sentences.

“Our idea is that the ‘of’ is really this sá-like element,” Pesetsky says. “The ‘of’ is really the head of the relativizing phrase, just as Tlingit “sá” is the head of the question phrase.”

Normally, linguists would expect that word to appear on the far left-hand side of the relative clause because that is where heads of phrases appear in English. (Tlingit is the opposite.) But in “whom of which” sentences, “of” is not on the far left. “Whom” is. However, as the paper notes, a similar puzzle occurs in the Mayan language Ch’ol, for which the linguist Jessica Coon PhD ’10 has provided a solution that Evile and Pesetsky easily adapted to the “whom of which” construction.

“In ‘whom of which,’ the whom moves to the left of a relative marker,” Pesetsky notes. 

But that is not the end of the story. The “whom of which” construction also appears to tolerate quite complex examples with “recursive” instances of movement not predicted by the proposals of Cable and Coon. Here, however, Evile and Pesetsky find a parallel in the way pied piping works in Finnish, and conjecturally advance a unified proposal for all these languages together.

So, it remains a somewhat open question how much the “whom of which” evidence undercuts or supports pied piping. And another, seemingly less technical question lingers on. As Evile and Pesetsky note in the paper, “the surprising preference for whom over who remains unexplained.”

Why say it?

Whatever the best interpretation is, the whole issue reinforces how syntax systems shape language.

“All this stuff is law-governed,” Pesetsky says. “Websites say, ‘You shouldn’t talk like this.’ That could be a stylistic decision. People who are condemning ‘whom of which’ call it wordy, and if you’re an editor, maybe you want to red-pencil it for that reason. But it’s never sloppy or non-lawlike. It’s always following, when you poke at it, rules of its own.”

Still, in everyday life, why do people say “whom of which,” and not just “who” or “whom”?

“My first explanation was, it felt more formal to me,” Evile says. “Also, if I’m trying to explain something slowly, by taking the extra time to use the phrase, it helps people process it a bit easier. But it’s interesting that we’ve found opposite cases, where people use it casually.”

Evile and Pesetsky think “whom of which” use may be a generational thing. But it is not, as online searches show, a regional phenomenon. “It’s spontaneously popping up around the world,” Evile says. “I’m from Hawai’i and people I know in Hawai’i use it, but my only friend at MIT who would accept the phrase was from Chicago.”

Evile, now a medical student at Columbia University, says studying the issue helped broaden her intellectual horizons.

“It pushed me to do my minor in linguistics,” Evile says. “This was a really fun experience. I am grateful to David and the linguistics department. It’s good to have this scientific mindset, but instead of only applying it in the lab, applying it to language.”

Pesetsky also posted a draft of the paper online for other linguists, some of whom of which quickly responded.

“Innovative nonstandard forms with ‘whom,’ what a time to be alive,” quipped New York University linguist Gary Thoms.

That amused Pesetsky, for whom which the paper’s unresolved issues remain a subject of keen curiosity.  

“We discovered some interesting things,” Pesetsky says. “There are also some important questions we end up without answers to. But we’ve launched a discussion.”



from MIT News https://ift.tt/D5paRGE

Five MIT faculty members named 2023 Simons Investigators

Five MIT professors have been selected to receive the 2023 Simons Investigators awards from the Simons Foundation. Virginia Vassilevska Williams and Vinod Vaikuntanathan are both professors in MIT’s Department of Electrical Engineering and Computer Science (EECS) and principal investigators in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Aram Harrow and Leonid Mirny are professors in the Department of Physics, and Davesh Maulik is a professor in the Department of Mathematics.

The Simons Investigator program supports “outstanding theoretical scientists who receive a stable base of research support from the foundation, enabling them to undertake the long-term study of fundamental questions.”

Aram Harrow '01, PhD '05, professor of physics, studies theoretical quantum information science in order to understand the capabilities of quantum computers and quantum communication devices. Harrow has developed quantum algorithms for solving large systems of linear equations and hybrid classical-quantum algorithms for machine learning, and has also contributed to the intersection of quantum information and many-body physics, with work on thermalization, random quantum dynamics, and the “monogamy” property of quantum entanglement. He was a lecturer at the University of Bristol and a research assistant professor at the University of Washington until joining MIT in 2013. His awards include the NSF CAREER award, several best paper awards, an APS Outstanding Referee Award, and the APS Rolf Landauer and Charles H Bennett Award in Quantum Computing.

Davesh Maulik joined the Department of Mathematics at MIT in 2015. He works in algebraic geometry, with an emphasis on the geometry of moduli spaces. In many cases, this involves using ideas from neighboring fields such as representation theory, symplectic geometry, and number theory. His most recent work has focused on moduli spaces of Higgs bundles and various conjectures regarding their structure. In the past, he has received a Clay Mathematics Research Fellowship and the Compositio Mathematica Prize with coauthors for an outstanding research publication.

Leonid Mirny, the Richard J. Cohen (1976) Professor in Medicine and Biomedical Physics, is a core faculty member at the Institute for Medical Engineering and Science (IMES) and a faculty member in the Department of Physics. His work combines biophysical modeling with analysis of large genomics data to address fundamental problems in biology. Mirny aims to understand how exceedingly long molecules of DNA are folded in 3D, and how this 3D folding of the genome influences gene expression and the execution of genetic programs in health and disease. His prediction that the genome is folded by a new class of motors that act by “loop extrusion” was experimentally confirmed, leading to a paradigm shift in chromosome biology. Broadly, Mirny is interested in unraveling the physical mechanisms that underlie the reading, writing, and transmission of genetic and epigenetic information. He was awarded the 2019 Blaise Pascal International Chair of Excellence and was named a Fellow of the American Physical Society. He received his MS in chemistry from the Weizmann Institute of Science and his PhD in biophysics from Harvard University, where he also served as a junior fellow in the Harvard Society of Fellows.

Vinod Vaikuntanathan is a professor of computer science at MIT. The co-inventor of modern fully homomorphic encryption systems and many other lattice-based (and post-quantum secure) cryptographic primitives, Vaikuntanathan has had his work recognized with a George M. Sprowls PhD thesis award, an IBM Josef Raviv Fellowship, a Sloan Faculty Fellowship, a Microsoft Faculty Fellowship, an NSF CAREER Award, a DARPA Young Faculty Award, a Harold E. Edgerton Faculty Award, Test of Time awards from the IEEE FOCS and CRYPTO conferences, and the Gödel Prize. Vaikuntanathan earned his SM and PhD degrees from MIT and a BTech degree from the Indian Institute of Technology Madras.

Virginia Vassilevska Williams is a professor of computer science at MIT EECS. Williams’s research focuses on algorithm design and the analysis of fundamental problems involving graphs, matrices, and more, seeking to determine the precise (asymptotic) time complexity of these problems. She has designed the fastest known algorithm for matrix multiplication and is widely regarded as the leading expert on fine-grained complexity. Among her many honors, she has received an NSF CAREER award, a Sloan Research Fellowship, a Google Faculty Research Award, and a Thornton Family Faculty Research Innovation Fellowship (FRIF), and she was an invited speaker at the International Congress of Mathematicians in 2018. Williams earned her MS and PhD degrees at Carnegie Mellon University and her BS degree at Caltech.



from MIT News https://ift.tt/zphM25L

Monday, September 25, 2023

Professor Emerita Evelyn Fox Keller, influential philosopher and historian of science, dies at 87

MIT Professor Emerita Evelyn Fox Keller, a distinguished and groundbreaking philosopher and historian of science, has died at age 87.

Keller gained acclaim for her powerful critique of the scientific establishment’s conception of objectivity, which she found lacking in its own terms and heavily laden with gendered assumptions. Her work drove many scholars toward a more nuanced and sophisticated understanding of the subjective factors and socially driven modes of thought that can shape scientific theories and hypotheses.

A trained physicist who conducted academic research in biology and then focused on the scientific enterprise and the self-understanding of scientists, Keller joined MIT in 1992, serving in the Program in Science, Technology, and Society.

Having faced outright hostility and discouragement as a female graduate student in the sciences in the late 1950s and early 1960s, Keller by the 1980s had become a prominent academic thinker and public intellectual, ready and willing to bring her ideas to a larger general audience.

“There is no magic lens that will enable us to look at, to see nature unclouded … uncolored by any values, hopes, fears, anxieties, desires, goals that we bring to it,” Keller told journalist Bill Moyers in 1990 for his “World of Ideas” show on PBS.

By that time, Keller had become well-known for two high-profile books. In “A Feeling for the Organism: The Life and Work of Barbara McClintock,” published in 1983, Keller examined the work of the biologist whose close studies of corn showed that genetic elements could move around on a chromosome over time, affecting gene expression. McClintock’s findings were initially ignored, but she won the Nobel Prize within a year of the book’s publication, and her distinctive, well-developed sense of her own research methods meshed with, and fed into, Keller’s ideas about the complexity of discovery.

In “Reflections on Gender and Science,” published in 1985, Keller looked broadly at how the 17th-century institutionalization of science both demarcated it strictly as an activity for men and, relatedly, generated a notion of purely objective inquiry that stood in contrast to the purportedly more emotional and less linear thinking of women. Those foundational works helped other scholars question the idea of unmediated scientific discovery and better recognize the immense gender imbalances in the sciences.

Overcoming hurdles

Keller, born Evelyn Fox, grew up in New York City, a child of Russian Jewish immigrant parents, and first attended Queens College as an undergraduate, before transferring to Brandeis University, where she received her BA in physics in 1957. She received an MA from Radcliffe College in 1959 and earned her PhD in physics from Harvard University in 1963.

The social environment Keller encountered while working toward her PhD, however, showed her firsthand how much science could be a closed shop to women.

“I was leered at by some,” Keller later wrote, recounting “open and unbelievably rude laughter with which I was often received.” As the journalist Beth Horning wrote in a 1993 profile of Keller published in MIT Technology Review, Keller’s “seriousness and ambition were publicly derided by both her peers and her elders.”

As much as Keller was taken aback, she kept moving forward, earning her doctorate while turning her academic focus toward molecular biology. After briefly returning to physics early in her research career, Keller took a faculty position in mathematical biology at Northeastern University. Among other appointments, Keller served on the faculty at the State University of New York at Purchase, where she began expanding her teaching toward subjects such as women’s studies, and writing about the institutional difficulties she had faced in science.

By the late 1970s, Keller had met McClintock and started writing about McClintock’s work — a kind of case study in the complicated issues Keller wanted to explore. The book’s title came from a McClintock phrase about having “a feeling for the organism” one studies; McClintock emphasized the importance of being closely attuned to her corn plants, which ultimately helped her detect some of the unexpected genomic behavior she identified.

However, as Keller would often emphasize later on, this approach did not mean that McClintock was pursuing science in a distinctively feminine way, either. Instead, as Horning notes, Keller’s aim, stated in “Reflections on Gender and Science,” was the “reclamation, from within science, of science as a human instead of a masculine project.” McClintock’s methods may have been considered unusual and her findings unexpected, but that reflected a narrowness on the part of the scientific establishment.

At the Institute

Starting in 1979, Keller had multiple appointments at MIT as a visiting fellow, visiting scholar, and visiting professor. In 1988, Keller joined the faculty at the University of California at Berkeley, before moving to MIT as a tenured faculty member four years later.

At MIT, Keller joined her older brother, Maurice Fox, in the Institute faculty ranks. Fox was an accomplished biologist who taught at MIT from 1962 through 1996, served as head of the Department of Biology from 1985 through 1989, and was an expert in mutation and recombination, among other subjects; he died in 2020. Keller’s sister is the prominent scholar and social activist Frances Fox Piven, whose wide-ranging work has examined social welfare, working class movements, and democratic practices in the U.S., and influenced the expansion of voting access.

In 1992 Keller received a MacArthur Foundation “genius” award for her scholarship. The foundation called her “a scholar whose interdisciplinary work raises important questions about the interrelationships among language, gender, and science,” while also noting that she had “stimulated thought about alternative styles of scientific research” through her book on McClintock.

In all, Keller wrote 11 books on science and co-edited three other volumes; her individually authored books include “The Century of the Gene” (2000, Harvard University Press), “Making Sense of Life” (2002, Harvard University Press), and  “The Mirage of a Space between Nature and Nurture” (2010, Duke University Press).

That third book examined the history and implications of nature-nurture debates. Keller found the purported distinction between nature and nurture to be a relatively recent one historically, promoted heavily in the late 19th century by the statistician (and eugenicist) Francis Galton, but not one that had much currency before then.

“We’re stuck with our DNA, but lots of things affect the way DNA is deployed,” Keller told MIT News in 2010, in an interview about the book. “It’s not enough to know what your DNA sequence is to understand about disease, behavior, and physiology.”

Most recently, in early 2023, Keller also published an autobiography, “Making Sense of My Life in Science: A Memoir,” issued by Modern Memoirs.

An intrepid scholar, Keller helped make clear that, although nature exists apart from humans, our understanding of it is always mediated by our own ideas and values.

As Keller told Moyers in 1990, “it is a fantasy that any human product could be free of human values. And science is a human product. It’s a wonderful, glorious human product.”

Among other career honors, Keller was elected to the American Academy of Arts and Sciences, and to the American Philosophical Society; received a Guggenheim Fellowship; was granted the 2018 Dan David Prize; and also received honorary degrees from Dartmouth College, Lulea University of Technology, Mount Holyoke College, Rensselaer Polytechnic Institute, Simmons College, the University of Amsterdam, and Wesleyan University.

Keller is survived by her son, Jeffrey Keller; her daughter, Sarah Keller; her sister, Frances Fox Piven; her granddaughters, Chloe Marschall and Cale Marschall; her nephews, Jonathan Fox, Gregory Fox, and Michael Fox; and her niece, Sarah Piven.



from MIT News https://ift.tt/lg57GfZ

3 Questions: The first asteroid sample returned to Earth

On Sunday morning, a capsule the size of a mini-fridge dropped from the skies over western Utah, carrying a first-of-its-kind package: about 250 grams of dirt and dust plucked from the surface of an asteroid. As a candy-striped parachute billowed open to slow its freefall, the capsule plummeted down to the sand, slightly ahead of schedule.

The special delivery came courtesy of OSIRIS-REx, the first NASA mission to travel to an asteroid and return a sample of its contents to Earth. Launched in 2016, the mission’s target was Bennu, a “near-Earth” asteroid that is thought to have formed during the solar system’s first 10 million years. The asteroid is made mostly of carbon and minerals, and has not been altered much since it formed. Samples from its surface could therefore offer valuable clues about the kinds of minerals and materials that first came together to shape the early solar system.

OSIRIS-REx journeyed for over two years to reach Bennu, where it then spent another two years circling and measuring its surface, looking for a spot to pick a sample. Among the suite of instruments aboard the spacecraft was an MIT-student-designed experiment, REXIS (the Regolith X-ray Imaging Spectrometer). The shoebox-sized instrument was the work of more than 100 MIT students, who designed the instrument to map the asteroid’s surface material in X-rays, to help determine where the spacecraft should take a sample.

On Sunday, OSIRIS-REx released the capsule to fall through the Earth’s atmosphere, as the spacecraft itself set off on a new course to the asteroid Apophis. The capsule has been transported to Houston’s Johnson Space Center, where Bennu’s dust will be examined and distributed to researchers around the world for further study.

The asteroid sample’s successful return is a huge milestone for the mission’s members, including MIT’s Richard Binzel, a leading expert in the study of asteroids, and a professor post-tenure in MIT’s Department of Earth, Atmospheric and Planetary Sciences. As an OSIRIS-REx co-investigator, Binzel helped lead the development of REXIS and its integration with the spacecraft. MIT News checked in with Binzel for his first reactions following the capsule’s landing and recovery, and what he hopes we might learn from the asteroid’s dust.

Q: First off: What a landing! As someone who’s studied asteroids in depth, and from afar, what was it like for you to see a sample of this asteroid, returned to Earth?

A: I was holding my breath just like everyone else! The parachute opening was a huge exhale, and the soft landing was a release of joy on behalf of the entire team. You work with these people for so long, you become like family, so you feel everything together. Kind of like watching your kid finishing off their balance beam routine and sticking the landing. While I wasn’t at the landing site, many of us were “together” online watching the timeline and all the procedures. What a journey it has been, more than two decades in the making, starting with our telescopic identification of Bennu as a scientifically rich and easily accessible sampling target, and then with the many evolving designs of the mission. MIT student involvement with the REXIS instrument began in 2010. It took six years to reach the launch pad and now, finally, we are seeing the mission literally come full circle in returning the sample to the Earth.

Q: The instruments aboard OSIRIS-REx made measurements of the asteroid while in orbit. What did those measurements in space reveal about the asteroid? And what more do you hope scientists can uncover, now that a sample is back on Earth?

A: Spacecraft instruments, no matter how technologically advanced, cannot accomplish nearly as much as the power of laboratories on Earth. Our instruments aboard OSIRIS-REx told us that Bennu is carbon-rich, likely containing some of the earliest chemical records of the ingredients that made the Earth and even life itself. But how do we know that the spacecraft instruments making measurements while flying above the surface are fully correct in what they reveal and how we interpret the data? We can only be sure by securing the “ground truth” provided through actual samples being brought into Earth’s laboratories. The laboratory analysis of these samples, confirming our preliminary findings, will verify our ability to interpret data about asteroids from both telescopes and orbiting spacecraft. Then the laboratory analysis will take us to even greater depths about the chemistry, conditions, and processes for how our own planetary system came to be.

Q: Let’s give a shoutout to all the students who helped to put an instrument aboard the mission. Going forward, how might this asteroid sample — and the spacecraft’s continued trajectory — relate to the work at MIT?

A: It’s a reminder that the sky is no limit for what we do at MIT. MIT’s REXIS instrument represents MIT’s motto, “mens et manus” [“mind and hand”], extended hundreds of millions of miles out into space, with actual hardware the students both designed and built that was flown farther into space than any other MIT student project has gone before. I feel it is simply a privilege to have engaged so many students in learning and experiencing the depth of hard work, teamwork, and dedication that it takes to be successful in space exploration.



from MIT News https://ift.tt/XecrRpE

New qubit circuit enables quantum operations with higher accuracy

In the future, quantum computers may be able to solve problems that are far too complex for today’s most powerful supercomputers. To realize this promise, quantum versions of error correction codes must be able to account for computational errors faster than they occur.

However, today’s quantum computers are not yet robust enough to realize such error correction at commercially relevant scales.

On the way to overcoming this roadblock, MIT researchers demonstrated a novel superconducting qubit architecture that can perform operations between qubits — the building blocks of a quantum computer — with much greater accuracy than scientists have previously been able to achieve.

They utilize a relatively new type of superconducting qubit, known as fluxonium, which can have a lifespan much longer than that of more commonly used superconducting qubits.

Their architecture involves a special coupling element between two fluxonium qubits that enables them to perform logical operations, known as gates, in a highly accurate manner. It suppresses a type of unwanted background interaction that can introduce errors into quantum operations.

This approach enabled two-qubit gates that exceeded 99.9 percent accuracy and single-qubit gates with 99.99 percent accuracy. In addition, the researchers implemented this architecture on a chip using an extensible fabrication process.  

“Building a large-scale quantum computer starts with robust qubits and gates. We showed a highly promising two-qubit system and laid out its many advantages for scaling. Our next step is to increase the number of qubits,” says Leon Ding PhD ’23, who was a physics graduate student in the Engineering Quantum Systems (EQuS) group and is the lead author of a paper on this architecture.

Ding wrote the paper with Max Hays, an EQuS postdoc; Youngkyu Sung PhD ’22; Bharath Kannan PhD ’22, who is now CEO of Atlantic Quantum; Kyle Serniak, a staff scientist and team lead at MIT Lincoln Laboratory; and senior author William D. Oliver, the Henry Ellis Warren professor of electrical engineering and computer science and of physics, director of the Center for Quantum Engineering, leader of EQuS, and associate director of the Research Laboratory of Electronics; as well as others at MIT and MIT Lincoln Laboratory. The research appears today in Physical Review X.

A new take on the fluxonium qubit

In a classical computer, gates are logical operations performed on bits (a series of 1s and 0s) that enable computation. Gates in quantum computing can be thought of in the same way: A single qubit gate is a logical operation on one qubit, while a two-qubit gate is an operation that depends on the states of two connected qubits.
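As a quick refresher (a generic textbook example, not the paper's fluxonium gates), the snippet below applies a standard two-qubit CNOT gate, whose action on the target qubit depends on the state of the control qubit: applied to |10>, it returns |11>.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
state = np.kron(ket1, ket0)          # two-qubit state |10>: control = 1, target = 0

CNOT = np.array([[1, 0, 0, 0],       # a two-qubit gate: flip the target iff the control is 1
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

print(CNOT @ state)                  # -> [0, 0, 0, 1], i.e. the state |11>
```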

Fidelity measures how accurately these gate operations are performed. Gates with the highest possible fidelities are essential because quantum errors accumulate exponentially. With billions of quantum operations occurring in a large-scale system, a seemingly small amount of error can quickly cause the entire system to fail.
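A back-of-the-envelope calculation shows why: if each gate succeeds independently with fidelity F, a circuit of n gates succeeds with probability of roughly F^n, so tiny per-gate errors compound quickly. (The numbers below are illustrative, not from the paper.)

```python
# Illustrative only: overall success probability of an n-gate circuit at per-gate fidelity F.
for fidelity in (0.999, 0.9999):
    for n_gates in (1_000, 10_000, 100_000):
        print(f"F = {fidelity}: after {n_gates:>7,} gates, success ~ {fidelity ** n_gates:.1e}")
```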

In practice, one would use error-correcting codes to achieve such low error rates. However, there is a “fidelity threshold” the operations must surpass to implement these codes. Furthermore, pushing the fidelities far beyond this threshold reduces the overhead needed to implement error correcting codes.

For more than a decade, researchers have primarily used transmon qubits in their efforts to build quantum computers. Another type of superconducting qubit, known as a fluxonium qubit, originated more recently. Fluxonium qubits have been shown to have longer lifespans, or coherence times, than transmon qubits.

Coherence time is a measure of how long a qubit can perform operations or run algorithms before all the information in the qubit is lost.

“The longer a qubit lives, the higher fidelity the operations it tends to promote. These two numbers are tied together. But it has been unclear, even when fluxonium qubits themselves perform quite well, if you can perform good gates on them,” Ding says.

For the first time, Ding and his collaborators found a way to use these longer-lived qubits in an architecture that can support extremely robust, high-fidelity gates. In their architecture, the fluxonium qubits were able to achieve coherence times of more than a millisecond, about 10 times longer than traditional transmon qubits.

“Over the last couple of years, there have been several demonstrations of fluxonium outperforming transmons on the single-qubit level,” says Hays. “Our work shows that this performance boost can be extended to interactions between qubits as well.”

The fluxonium qubits were developed in a close collaboration with MIT Lincoln Laboratory (MIT-LL), which has expertise in the design and fabrication of extensible superconducting qubit technologies.

“This experiment was exemplary of what we call the ‘one-team model’: the close collaboration between the EQuS group and the superconducting qubit team at MIT-LL,” says Serniak. “It’s worth highlighting here specifically the contribution of the fabrication team at MIT-LL — they developed the capability to construct dense arrays of more than 100 Josephson junctions specifically for fluxoniums and other new qubit circuits.”

A stronger connection

Their novel architecture involves a circuit that has two fluxonium qubits on either end, with a tunable transmon coupler in the middle to join them together. This fluxonium-transmon-fluxonium (FTF) architecture enables a stronger coupling than methods that directly connect two fluxonium qubits.

FTF also minimizes unwanted interactions that occur in the background during quantum operations. Typically, stronger couplings between qubits can lead to more of this persistent background noise, known as static ZZ interactions. But the FTF architecture remedies this problem.

The ability to suppress these unwanted interactions and the longer coherence times of fluxonium qubits are two factors that enabled the researchers to demonstrate single-qubit gate fidelity of 99.99 percent and two-qubit gate fidelity of 99.9 percent.

These gate fidelities are well above the threshold needed for certain common error correcting codes, and should enable error detection in larger-scale systems.

“Quantum error correction builds system resilience through redundancy. By adding more qubits, we can improve overall system performance, provided the qubits are individually ‘good enough.’ Think of trying to perform a task with a room full of kindergartners. That’s a lot of chaos, and adding more kindergartners won’t make it better,” Oliver explains. “However, several mature graduate students working together leads to performance that exceeds any one of the individuals — that’s the threshold concept. While there is still much to do to build an extensible quantum computer, it starts with having high-quality quantum operations that are well above threshold.”

Building off these results, Ding, Sung, Kannan, Oliver, and others recently founded a quantum computing startup, Atlantic Quantum. The company seeks to use fluxonium qubits to build a viable quantum computer for commercial and industrial applications.

“These results are immediately applicable and could change the state of the entire field. This shows the community that there is an alternate path forward. We strongly believe that this architecture, or something like this using fluxonium qubits, shows great promise in terms of actually building a useful, fault-tolerant quantum computer,” Kannan says.

While such a computer is still probably 10 years away, this research is an important step in the right direction, he adds. Next, the researchers plan to demonstrate the advantages of the FTF architecture in systems with more than two connected qubits.

“This work pioneers a new architecture for coupling two fluxonium qubits. The achieved gate fidelities are not only the best on record for fluxonium, but also on par with those of transmons, the currently dominating qubit. More importantly, the architecture also offers a high degree of flexibility in parameter selection, a feature essential for scaling up to a multi-qubit fluxonium processor,” says Chunqing Deng, head of the experimental quantum team at the Quantum Laboratory of DAMO Academy, Alibaba’s global research institution, who was not involved with this work. “For those of us who believe that fluxonium is a fundamentally better qubit than transmon, this work is an exciting and affirming milestone. It will galvanize not just the development of fluxonium processors but also more generally that for qubits alternative to transmons.”

This work was funded, in part, by the U.S. Army Research Office, the U.S. Undersecretary of Defense for Research and Engineering, an IBM PhD fellowship, the Korea Foundation for Advanced Studies, and the U.S. National Defense Science and Engineering Graduate Fellowship Program.



from MIT News https://ift.tt/HbMUkop