Thursday, May 2, 2024

Three from MIT named 2024-25 Goldwater Scholars

MIT students Ben Lou, Srinath Mahankali, and Kenta Suzuki have been selected to receive Barry Goldwater Scholarships for the 2024-25 academic year. They are among just 438 recipients from across the country selected based on academic merit from an estimated pool of more than 5,000 college sophomores and juniors, approximately 1,350 of whom were nominated by their academic institution to compete for the scholarship.

Since 1989, the Barry Goldwater Scholarship and Excellence in Education Foundation has awarded nearly 11,000 Goldwater scholarships to support undergraduates who intend to pursue research careers in the natural sciences, mathematics, and engineering and have the potential to become leaders in their respective fields. Past scholars have gone on to win an impressive array of prestigious postgraduate fellowships. Almost all, including the three MIT recipients, intend to obtain doctorates in their area of research.

Ben Lou

Ben Lou is a third-year student originally from San Diego, California, majoring in physics and math with a minor in philosophy.

“My research interests are scattered across different disciplines,” says Lou. “I want to draw from a wide range of topics in math and physics, finding novel connections between them, to push forward the frontier of knowledge.”

Since January 2022, he has worked with Nergis Mavalvala, dean of the School of Science, and Hudson Loughlin, a graduate student in the LIGO group, which studies the detection of gravitational waves. Lou is working with them to advance the field of quantum measurement and better understand quantum gravity.

“Ben has enormous intellectual horsepower and works with remarkable independence,” writes Mavalvala in her recommendation letter. “I have no doubt he has an outstanding career in physics ahead of him.”

Lou, for his part, is grateful to Mavalvala and Loughlin, as well as all of the scientific mentors who have supported him along his research path. That includes MIT professors Alan Guth and Barton Zwiebach, who introduced him to quantum physics, as well as his first-year advisor, Richard Price; his current advisor, Janet Conrad; Elijah Bodish and Roman Bezrukavnikov in the Department of Mathematics; and David W. Brown of the San Diego Math Circle.

In terms of his future career goals, Lou wants to be a professor of theoretical physics and study, as he says, the “fundamental aspects of reality” while also inspiring students to love math and physics.

In addition to his research, Lou is currently the vice president of the Assistive Technology Club at MIT and actively engaged in raising money for Spinal Muscular Atrophy research. In the future, he’d like to continue his philanthropy work and use his personal experience to advise an assistive technology company.

Srinath Mahankali

Srinath Mahankali is a third-year student from New York City majoring in computer science.

Since June 2022, Mahankali has been an undergraduate researcher in the MIT Computer Science and Artificial Intelligence Laboratory. Working with Pulkit Agrawal, assistant professor of electrical engineering and computer science and head of the Improbable AI Lab, Mahankali conducts research on training robots. Currently, his focus is on training quadruped robots to move in an energy-efficient manner and training agents to interact in environments with minimal feedback. But in the future, he’d like to develop robots that can complete athletic tasks like gymnastics.

“The experience of discussing research with Srinath is similar to discussions with the best PhD students in my group,” writes Agrawal in his recommendation letter. “He is fearless, willing to take risks, persistent, creative, and gets things done.”

Before coming to MIT, Mahankali was a 2021 Regeneron Science Talent Search (STS) scholar; the STS is one of the oldest and most prestigious awards for math and science students. In 2020, he was also a participant in the MIT PRIMES program, studying objective functions in optimization problems with Yunan Yang, an assistant professor of math at Cornell University.

“I’m deeply grateful to all my research advisors for their invaluable mentorship and guidance,” says Mahankali, extending his thanks to PhD students Zhang-Wei Hong and Gabe Margolis, as well as Promit Ghosal, assistant professor of math at Brandeis, and all of the organizers of the PRIMES program. “I’m also very grateful to all the members of the Improbable AI Lab for their support, encouragement, and willingness to help and discuss any questions I have.”

In the future, Mahankali wants to obtain a PhD and one day lead his own lab in robotics and artificial intelligence.

Kenta Suzuki

Kenta Suzuki is a third-year student majoring in mathematics from Bloomfield Hills, Michigan, and Tokyo, Japan.

Currently, Suzuki works with professor of mathematics Roman Bezrukavnikov on research at the intersection of number theory and representation theory, using geometric methods to study representations of p-adic groups. Suzuki has also previously worked with math professors Wei Zhang and Zhiwei Yun, crediting the latter with inspiring him to pursue research in representation theory.

In his recommendation letter, Yun writes, “Kenta is the best undergraduate student that I have worked with in terms of the combination of raw talent, mathematical maturity, and research abilities.”

Before coming to MIT, Suzuki was a Yau Science Award USA finalist in 2020, receiving a gold medal in math, and he received an honorable mention from the Davidson Institute Fellows program in 2021. He also participated in the MIT PRIMES program in 2020. Suzuki credits his PRIMES mentor, Michael Zieve of the University of Michigan, with giving him his first taste of mathematical research. In addition, he extends his thanks to all of his math mentors, including the organizers of the MIT Summer Program in Undergraduate Research.

After MIT, Suzuki intends to obtain a PhD in pure math, continuing his research in representation theory and number theory and, one day, teaching at a research-oriented institution.

The Barry Goldwater Scholarship and Excellence in Education Program was established by the U.S. Congress in 1986 to honor Senator Barry Goldwater, a soldier and national leader who served the country for 56 years. Awardees receive scholarships of up to $7,500 a year to cover costs related to tuition, room and board, fees, and books.



from MIT News https://ift.tt/UFT9cty

Physicists arrange atoms in extremely close proximity

Proximity is key for many quantum phenomena, as interactions between atoms are stronger when the particles are close. In many quantum simulators, scientists arrange atoms as close together as possible to explore exotic states of matter and build new quantum materials.

They typically do this by cooling the atoms to a standstill, then using laser light to position the particles as close as 500 nanometers apart — a limit that is set by the wavelength of light. Now, MIT physicists have developed a technique that allows them to arrange atoms in much closer proximity, down to a mere 50 nanometers. For context, a red blood cell is about 1,000 nanometers wide.

The physicists demonstrated the new approach in experiments with dysprosium, which is the most magnetic atom in nature. They used the new approach to manipulate two layers of dysprosium atoms, and positioned the layers precisely 50 nanometers apart. At this extreme proximity, the magnetic interactions were 1,000 times stronger than if the layers were separated by 500 nanometers.
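
That factor of 1,000 is consistent with the standard distance scaling of magnetic dipole-dipole interactions, which fall off as the inverse cube of the separation. A back-of-the-envelope check, using only that scaling rather than the experiment’s detailed parameters:

$$
U_{\mathrm{dd}}(r) \propto \frac{1}{r^{3}}
\qquad\Longrightarrow\qquad
\frac{U_{\mathrm{dd}}(50\,\mathrm{nm})}{U_{\mathrm{dd}}(500\,\mathrm{nm})}
= \left(\frac{500\,\mathrm{nm}}{50\,\mathrm{nm}}\right)^{3} = 10^{3} = 1000.
$$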

What’s more, the scientists were able to measure two new effects caused by the atoms’ proximity. Their enhanced magnetic forces caused “thermalization,” or the transfer of heat from one layer to another, as well as synchronized oscillations between layers. These effects petered out as the layers were spaced farther apart.

“We have gone from positioning atoms from 500 nanometers to 50 nanometers apart, and there is a lot you can do with this,” says Wolfgang Ketterle, the John D. MacArthur Professor of Physics at MIT. “At 50 nanometers, the behavior of atoms is so much different that we’re really entering a new regime here.”

Ketterle and his colleagues say the new approach can be applied to many other atoms to study quantum phenomena. For their part, the group plans to use the technique to manipulate atoms into configurations that could generate the first purely magnetic quantum gate — a key building block for a new type of quantum computer.

The team has published their results today in the journal Science. The study’s co-authors include lead author and physics graduate student Li Du, along with Pierre Barral, Michael Cantara, Julius de Hond, and Yu-Kun Lu — all members of the MIT-Harvard Center for Ultracold Atoms, the Department of Physics, and the Research Laboratory of Electronics at MIT.

Peaks and valleys

To manipulate and arrange atoms, physicists typically first cool a cloud of atoms to temperatures approaching absolute zero, then use a system of laser beams to corral the atoms into an optical trap.

Laser light is an electromagnetic wave with a specific wavelength (the distance between maxima of the electric field) and frequency. The wavelength limits the smallest pattern into which light can be shaped, typically to about 500 nanometers, the so-called optical resolution limit. Since atoms are attracted by laser light of certain frequencies, they will be positioned at the points of peak laser intensity. For this reason, existing techniques have been limited in how close they can position atomic particles, and could not be used to explore phenomena that happen at much shorter distances.
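
As a rough rule of thumb (and assuming trapping light with a wavelength of roughly 1 micrometer, a value consistent with the 500-nanometer standing-wave period described below), the interference pattern of a laser beam repeats every half wavelength, so neighboring trapping sites cannot be brought much closer than

$$
d_{\min} \approx \frac{\lambda}{2} \approx \frac{1000\,\mathrm{nm}}{2} = 500\,\mathrm{nm}.
$$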

“Conventional techniques stop at 500 nanometers, limited not by the atoms but by the wavelength of light,” Ketterle explains. “We have found now a new trick with light where we can break through that limit.”

The team’s new approach, like current techniques, starts by cooling a cloud of atoms — in this case, to about 1 microkelvin, just a hair above absolute zero — at which point the atoms come to a near-standstill. Physicists can then use lasers to move the frozen particles into desired configurations.

Then, Du and his collaborators worked with two laser beams, each with a different frequency, or color, and a different circular polarization, or rotation direction of the laser’s electric field. When the two beams travel through a super-cooled cloud of atoms, the atoms can orient their spins in opposite directions, following one or the other laser’s polarization. The result is that the beams produce two groups of the same atoms, only with opposite spins.

Each laser beam formed a standing wave, a periodic pattern of electric field intensity with a spatial period of 500 nanometers. Due to their different polarizations, each standing wave attracted and corralled one of two groups of atoms, depending on their spin. The lasers could be overlaid and tuned such that the distance between their respective peaks is as small as 50 nanometers, meaning that the atoms gravitating to each respective laser’s peaks would be separated by the same 50 nanometers.
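
As a toy numerical sketch of that geometry (assuming, for simplicity, two standing waves with identical 500-nanometer periods and a plain 50-nanometer shift between them, rather than the experiment’s two-color, two-polarization optics), one can check that atoms collecting at the peaks of each pattern form layers 50 nanometers apart:

# Toy picture of the trap geometry described above: two standing-wave intensity
# patterns with the same 500 nm spatial period, offset so their peaks sit 50 nm
# apart. Spin-up atoms collect at the peaks of one pattern and spin-down atoms
# at the peaks of the other. Illustrative only; not the experiment's real optics.
import numpy as np

period_nm = 500.0   # standing-wave period quoted in the article
offset_nm = 50.0    # peak-to-peak offset quoted in the article
z = np.linspace(0.0, 1500.0, 3001)  # position along the lattice axis, in nm

intensity_up = np.cos(np.pi * z / period_nm) ** 2                  # traps one spin state
intensity_down = np.cos(np.pi * (z - offset_nm) / period_nm) ** 2  # traps the other

def peak_positions(intensity, z):
    """Positions of local intensity maxima, where the trapped atoms collect."""
    interior = (intensity[1:-1] > intensity[:-2]) & (intensity[1:-1] > intensity[2:])
    return z[1:-1][interior]

peaks_up = peak_positions(intensity_up, z)      # -> [500., 1000.]
peaks_down = peak_positions(intensity_down, z)  # -> [50., 550., 1050.]
separation = min(abs(pd - pu) for pd in peaks_down for pu in peaks_up)
print("layer separation (nm):", separation)     # -> 50.0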

But in order for this to happen, the lasers would have to be extremely stable and immune to all external noise, such as from shaking or even breathing on the experiment. The team realized they could stabilize both lasers by directing them through an optical fiber, which served to lock the light beams in place in relation to each other.

“The idea of sending both beams through the optical fiber meant the whole machine could shake violently, but the two laser beams stayed absolutely stable with respect to each other,” Du says.

Magnetic forces at close range

As a first test of their new technique, the team used atoms of dysprosium — a rare-earth metal that is one of the strongest magnetic elements in the periodic table, particularly at ultracold temperatures. However, at the scale of atoms, the element’s magnetic interactions are relatively weak at distances of even 500 nanometers. As with common refrigerator magnets, the magnetic attraction between atoms increases with proximity, and the scientists suspected that if their new technique could space dysprosium atoms as close as 50 nanometers apart, they might observe the emergence of otherwise weak interactions between the magnetic atoms.

“We could suddenly have magnetic interactions, which used to be almost negligible but now are really strong,” Ketterle says.

The team applied their technique to dysprosium, first super-cooling the atoms, then passing two lasers through to split the atoms into two spin groups, or layers. They then directed the lasers through an optical fiber to stabilize them, and found that indeed, the two layers of dysprosium atoms gravitated to their respective laser peaks, which in effect separated the layers of atoms by 50 nanometers — the closest distance that any ultracold atom experiment has been able to achieve.

At this extremely close proximity, the atoms’ natural magnetic interactions were significantly enhanced, and were 1,000 times stronger than if they were positioned 500 nanometers apart. The team observed that these interactions resulted in two novel quantum phenomena: collective oscillation, in which one layer’s vibrations caused the other layer to vibrate in sync; and thermalization, in which one layer transferred heat to the other, purely through magnetic fluctuations in the atoms.

“Until now, heat between atoms could only be exchanged when they were in the same physical space and could collide,” Du notes. “Now we have seen atomic layers, separated by vacuum, and they exchange heat via fluctuating magnetic fields.”

The team’s results introduce a new technique that can be used to position many types of atoms in close proximity. They also show that atoms, placed close enough together, can exhibit interesting quantum phenomena that could be harnessed to build new quantum materials, and potentially, magnetically driven atomic systems for quantum computers.

“We are really bringing super-resolution methods to the field, and it will become a general tool for doing quantum simulations,” Ketterle says. “There are many variants possible, which we are working on.”

This research was funded, in part, by the National Science Foundation and the Department of Defense.



from MIT News https://ift.tt/k1a3VAc

Epigenomic analysis sheds light on risk factors for ALS

For most patients, it’s unknown exactly what causes amyotrophic lateral sclerosis (ALS), a disease characterized by degeneration of motor neurons that impairs muscle control and eventually leads to death.

Studies have identified certain genes that confer a higher risk of the disease, but scientists believe there are many more genetic risk factors that have yet to be discovered. One reason these drivers have been hard to find is that some occur in very few patients, making them difficult to detect without a very large patient sample. Additionally, some of the risk may be driven by epigenomic factors, rather than mutations in protein-coding genes.

Working with the Answer ALS consortium, a team of MIT researchers has analyzed epigenetic modifications — tags that determine which genes are turned on in a cell — in motor neurons derived from induced pluripotent stem (IPS) cells from 380 ALS patients.

This analysis revealed a strong differential signal associated with a known subtype of ALS, and about 30 locations with modifications that appear to be linked to rates of disease progression in ALS patients. The findings may help scientists develop new treatments that are targeted to patients with certain genetic risk factors.

“If the root causes are different for all these different versions of the disease, the drugs will be very different and the signals in IPS cells will be very different,” says Ernest Fraenkel, the Grover M. Hermann Professor in Health Sciences and Technology in MIT’s Department of Biological Engineering and the senior author of the study. “We may get to a point in a decade or so where we don’t even think of ALS as one disease, where there are drugs that are treating specific types of ALS that only work for one group of patients and not for another.”

MIT postdoc Stanislav Tsitkov is the lead author of the paper, which appears today in Nature Communications.

Finding risk factors

ALS is a rare disease that is estimated to affect about 30,000 people in the United States. One of the challenges in studying the disease is that while genetic variants are believed to account for about 50 percent of ALS risk (with environmental factors making up the rest), most of the variants that contribute to that risk have not been identified.

Similar to Alzheimer’s disease, there may be a large number of genetic variants that can confer risk, but each individual patient may carry only a small number of those. This makes it difficult to identify the risk factors unless scientists have a very large population of patients to analyze.

“Because we expect the disease to be heterogeneous, you need to have large numbers of patients before you can pick up on signals like this. To really be able to classify the subtypes of disease, we’re going to need to look at a lot of people,” Fraenkel says.

About 10 years ago, the Answer ALS consortium began to collect large numbers of patient samples, which could allow for larger-scale studies that might reveal some of the genetic drivers of the disease. From blood samples, researchers can create induced pluripotent stem cells and then induce them to differentiate into motor neurons, the cells most affected by ALS.

“We don’t think all ALS patients are going to be the same, just like all cancers are not the same. And the goal is being able to find drivers of the disease that could be therapeutic targets,” Fraenkel says.

In this study, Fraenkel and his colleagues wanted to see if patient-derived cells could offer any information about molecular differences that are relevant to ALS. They focused on epigenomic modifications, using a method called ATAC-seq to measure chromatin density across the genome of each cell. Chromatin is a complex of DNA and proteins that determines which genes are accessible to be transcribed by the cell, depending on how densely packed the chromatin is.

In data that were collected and analyzed over several years, the researchers did not find any global signal that clearly differentiated the 380 ALS patients in their study from 80 healthy control subjects. However, they did find a strong differential signal associated with a subtype of ALS, characterized by a genetic mutation in the C9orf72 gene.

Additionally, they identified about 30 regions that were associated with slower rates of disease progression in ALS patients. Many of these regions are located near genes related to the cellular inflammatory response; interestingly, several of the identified genes have also been implicated in other neurodegenerative diseases, such as Parkinson’s disease.

“You can use a small number of these epigenomic regions and look at the intensity of the signal there, and predict how quickly someone’s disease will progress. That really validates the hypothesis that the epigenomics can be used as a filter to better understand the contribution of the person’s genome,” Fraenkel says.
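
A minimal sketch of the kind of analysis that quote describes, in which the signal intensity at a small set of epigenomic regions is used to model progression rate, might look like the following. The synthetic data, the 30-region feature matrix, and the choice of scikit-learn’s cross-validated ridge regression are illustrative assumptions, not the study’s actual pipeline.

# Sketch: predict disease progression rate from per-region epigenomic signal.
# All data here are synthetic stand-ins; shapes and model choice are assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_regions = 380, 30                # ~30 progression-linked regions (from the article)
X = rng.normal(size=(n_patients, n_regions))   # stand-in for ATAC-seq signal per region
y = X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n_patients)  # stand-in progression rate

model = RidgeCV(alphas=np.logspace(-2, 2, 20))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")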

“By harnessing the very large number of participant samples and extensive data collected by the Answer ALS Consortium, these studies were able to rigorously test whether the observed changes might be artifacts related to the techniques of sample collection, storage, processing, and analysis, or truly reflective of important biology,” says Lyle Ostrow, an associate professor of neurology at the Lewis Katz School of Medicine at Temple University, who was not involved in the study. “They developed standard ways to control for these variables, to make sure the results can be accurately compared. Such studies are incredibly important for accelerating ALS therapy development, as they will enable data and samples collected from different studies to be analyzed together.”

Targeted drugs

The researchers now hope to further investigate these genomic regions and see how they might drive different aspects of ALS progression in different subsets of patients. This could help scientists develop drugs that might work in different groups of patients, and help them identify which patients should be chosen for clinical trials of those drugs, based on genetic or epigenetic markers.

Last year, the U.S. Food and Drug Administration approved a drug called tofersen, which can be used in ALS patients with a mutation in a gene called SOD1. This drug is very effective for those patients, who make up about 1 percent of the total population of people with ALS. Fraenkel’s hope is that more drugs can be developed for, and tested in, people with other genetic drivers of ALS.

“If you had a drug like tofersen that works for 1 percent of patients and you just gave it to a typical phase two clinical trial, you probably wouldn’t have anybody with that mutation in the trial, and it would’ve failed. And so that drug, which is a lifesaver for people, would never have gotten through,” Fraenkel says.

The MIT team is now using an approach called quantitative trait locus (QTL) analysis to try to identify subgroups of ALS patients whose disease is driven by specific genomic variants.
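
For readers unfamiliar with the term, a QTL-style scan associates a quantitative trait with genotype one variant at a time and then corrects for multiple testing. A minimal sketch on synthetic data (the variant count, the use of scipy’s linregress, and the Bonferroni cutoff are illustrative choices, not the team’s actual analysis):

# Sketch of a QTL-style association scan on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_samples, n_variants = 380, 200
genotypes = rng.integers(0, 3, size=(n_samples, n_variants)).astype(float)  # 0/1/2 allele dosages
trait = 0.8 * genotypes[:, 0] + rng.normal(size=n_samples)  # trait driven by variant 0

pvalues = np.array([stats.linregress(genotypes[:, j], trait).pvalue
                    for j in range(n_variants)])
hits = np.flatnonzero(pvalues < 0.05 / n_variants)  # Bonferroni threshold
print("variants passing correction:", hits)         # expect variant 0, the simulated signal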

“We can integrate the genomics, the transcriptomics, and the epigenomics, as a way to find subgroups of ALS patients who have distinct phenotypic signatures from other ALS patients and healthy controls,” Tsitkov says. “We have already found a few potential hits in that direction.”

The research was funded by the Answer ALS program, which is supported by the Robert Packard Center for ALS Research at Johns Hopkins University, Travelers Insurance, ALS Finding a Cure Foundation, Stay Strong Vs. ALS, Answer ALS Foundation, Microsoft, Caterpillar Foundation, American Airlines, Team Gleason, the U.S. National Institutes of Health, Fishman Family Foundation, Aviators Against ALS, AbbVie Foundation, Chan Zuckerberg Initiative, ALS Association, National Football League, F. Prime, M. Armstrong, Bruce Edwards Foundation, the Judith and Jean Pape Adams Charitable Foundation, Muscular Dystrophy Association, Les Turner ALS Foundation, PGA Tour, Gates Ventures, and Bari Lipp Foundation. This work was also supported, in part, by grants from the National Institutes of Health and the MIT-GSK Gertrude B. Elion Research Fellowship Program for Drug Discovery and Disease.



from MIT News https://ift.tt/UgHlRLc

Wednesday, May 1, 2024

Fostering research, careers, and community in materials science

Gabrielle Wood, a junior at Howard University majoring in chemical engineering, is on a mission to improve the sustainability and life cycles of natural resources and materials. Her work in the Materials Initiative for Comprehensive Research Opportunity (MICRO) program has given her hands-on experience with many different aspects of research, including MATLAB programming, experimental design, data analysis, figure-making, and scientific writing.

Wood is also one of 10 undergraduates from 10 universities around the United States to participate in the first MICRO Summit earlier this year. The internship program, developed by the MIT Department of Materials Science and Engineering (DMSE), first launched in fall 2021. Now in its third year, the program continues to grow, providing even more opportunities for non-MIT undergraduate students — including the MICRO Summit and the program’s expansion to include Northwestern University.

“I think one of the most valuable aspects of the MICRO program is the ability to do research long term with an experienced professor in materials science and engineering,” says Wood. “My school has limited opportunities for undergraduate research in sustainable polymers, so the MICRO program allowed me to gain valuable experience in this field, which I would not otherwise have.”

Like Wood, Griheydi Garcia, a senior chemistry major at Manhattan College, values the exposure to materials science, especially since she is not able to learn as much about it at her home institution.

“I learned a lot about crystallography and defects in materials through the MICRO curriculum, especially through videos,” says Garcia. “The research itself is very valuable, as well, because we get to apply what we’ve learned through the videos in the research we do remotely.”

Expanding research opportunities

From the beginning, the MICRO program was designed as a fully remote, rigorous education and mentoring program targeted toward students from underserved backgrounds interested in pursuing graduate school in materials science or related fields. Interns are matched with faculty to work on their specific research interests.

Jessica Sandland ’99, PhD ’05, principal lecturer in DMSE and co-founder of MICRO, says that research projects for the interns are designed to be work that they can do remotely, such as developing a machine-learning algorithm or a data analysis approach.

“It’s important to note that it’s not just about what the program and faculty are bringing to the student interns,” says Sandland, a member of the MIT Digital Learning Lab, a joint program between MIT Open Learning and the Institute’s academic departments. “The students are doing real research and work, and creating things of real value. It’s very much an exchange.”

Cécile Chazot PhD ’22, now an assistant professor of materials science and engineering at Northwestern University, had helped to establish MICRO at MIT from the very beginning. Once at Northwestern, she quickly realized that expanding MICRO to Northwestern would offer even more research opportunities to interns than relying on MIT alone — leveraging the university’s strong materials science and engineering department, as well as offering resources for biomaterials research through Northwestern’s medical school. The program received funding from 3M and officially launched at Northwestern in fall 2023. Approximately half of the MICRO interns are now in the program with MIT and half are with Northwestern. Wood and Garcia both participate in the program via Northwestern.

“By expanding to another school, we’ve been able to have interns work with a much broader range of research projects,” says Chazot. “It has become easier for us to place students with faculty and research that match their interests.”

Building community

The MICRO program received a Higher Education Innovation grant from the Abdul Latif Jameel World Education Lab, part of MIT Open Learning, to develop an in-person summit. In January 2024, interns visited MIT for three days of presentations, workshops, and campus tours — including a tour of the MIT.nano building — as well as various community-building activities.

“A big part of MICRO is the community,” says Chazot. “A highlight of the summit was just seeing the students come together.”

The summit also included panel discussions that allowed interns to gain insights and advice from graduate students and professionals. The graduate panel discussion included MIT graduate students Sam Figueroa (mechanical engineering), Isabella Caruso (DMSE), and Eliana Feygin (DMSE). The career panel was led by Chazot and included Jatin Patil PhD ’23, head of product at SiTration; Maureen Reitman ’90, ScD ’93, group vice president and principal engineer at Exponent; Lucas Caretta PhD ’19, assistant professor of engineering at Brown University; Raquel D’Oyen ’90, who holds a PhD from Northwestern University and is a senior engineer at Raytheon; and Ashley Kaiser MS ’19, PhD ’21, senior process engineer at 6K.

Students also had an opportunity to share their work with each other through research presentations. Their presentations covered a wide range of topics, including: developing a computer program to calculate solubility parameters for polymers used in textile manufacturing; performing a life-cycle analysis of a photonic chip and evaluating its environmental impact in comparison to a standard silicon microchip; and applying machine learning algorithms to scanning transmission electron microscopy images of CrSBr, a two-dimensional magnetic material. 

“The summit was wonderful and the best academic experience I have had as a first-year college student,” says MICRO intern Gabriella La Cour, who is pursuing a major in chemistry and a dual degree in biomedical engineering at Spelman College and participates in MICRO through MIT. “I got to meet so many students who were all in grades above me … and I learned a little about how to navigate college as an upperclassman.”

“I actually have an extremely close friendship with one of the students, and we keep in touch regularly,” adds La Cour. “Professor Chazot gave valuable advice about applications and recommendation letters that will be useful when I apply to REUs [Research Experiences for Undergraduates] and graduate schools.”

Looking to the future, MICRO organizers hope to continue to grow the program’s reach.

“We would love to see other schools taking on this model,” says Sandland. “There are a lot of opportunities out there. The more departments, research groups, and mentors that get involved with this program, the more impact it can have.”



from MIT News https://ift.tt/wmkb0ce

Natural language boosts LLM performance in coding, planning, and robotics

Large language models (LLMs) are becoming increasingly useful for programming and robotics tasks, but for more complicated reasoning problems, the gap between these systems and humans looms large. Without the ability to learn new concepts like humans do, these systems fail to form good abstractions — essentially, high-level representations of complex concepts that skip less-important details — and thus sputter when asked to do more sophisticated tasks.

Luckily, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have found a treasure trove of abstractions within natural language. In three papers to be presented at the International Conference on Learning Representations this month, the group shows how our everyday words are a rich source of context for language models, helping them build better overarching representations for code synthesis, AI planning, and robotic navigation and manipulation.

The three separate frameworks build libraries of abstractions for their given task: LILO (library induction from language observations) can synthesize, compress, and document code; Ada (action domain acquisition) explores sequential decision-making for artificial intelligence agents; and LGA (language-guided abstraction) helps robots better understand their environments to develop more feasible plans. Each system is a neurosymbolic method, a type of AI that blends human-like neural networks and program-like logical components.

LILO: A neurosymbolic framework that codes

Large language models can be used to quickly write solutions to small-scale coding tasks, but cannot yet architect entire software libraries like the ones written by human software engineers. To take their software development capabilities further, AI models need to refactor (cut down and combine) code into libraries of succinct, readable, and reusable programs.

Refactoring tools like the previously developed MIT-led Stitch algorithm can automatically identify abstractions, so, in a nod to the Disney movie “Lilo & Stitch,” CSAIL researchers combined these algorithmic refactoring approaches with LLMs. Their neurosymbolic method LILO uses a standard LLM to write code, then pairs it with Stitch to find abstractions that are comprehensively documented in a library.

LILO’s unique emphasis on natural language allows the system to do tasks that require human-like commonsense knowledge, such as identifying and removing all vowels from a string of code and drawing a snowflake. In both cases, the CSAIL system outperformed standalone LLMs, as well as a previous library learning algorithm from MIT called DreamCoder, indicating its ability to build a deeper understanding of the words within prompts. These encouraging results point to how LILO could assist with things like writing programs to manipulate documents like Excel spreadsheets, helping AI answer questions about visuals, and drawing 2D graphics.

“Language models prefer to work with functions that are named in natural language,” says Gabe Grand SM '23, an MIT PhD student in electrical engineering and computer science, CSAIL affiliate, and lead author on the research. “Our work creates more straightforward abstractions for language models and assigns natural language names and documentation to each one, leading to more interpretable code for programmers and improved system performance.”

When prompted on a programming task, LILO first uses an LLM to quickly propose solutions based on data it was trained on, and then the system slowly searches more exhaustively for outside solutions. Next, Stitch efficiently identifies common structures within the code and pulls out useful abstractions. These are then automatically named and documented by LILO, resulting in simplified programs that can be used by the system to solve more complex tasks.
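
A schematic sketch of that loop is below. The helper functions are hypothetical stand-ins for the LLM calls and the Stitch compressor, meant only to show the flow of propose, compress, name, and document; they are not LILO’s real API.

# Schematic of a LILO-style iteration: an LLM proposes candidate programs, a
# Stitch-like compressor pulls out shared structure, and each abstraction is
# auto-named and documented before joining a growing library. All helpers are
# hypothetical placeholders, not LILO's actual interfaces.
from dataclasses import dataclass, field

@dataclass
class Library:
    abstractions: dict = field(default_factory=dict)  # name -> (code, docstring)

def llm_propose_programs(task: str, library: Library) -> list[str]:
    """Stand-in for prompting an LLM with the task plus the current library."""
    return [f"solve({task!r})"]  # placeholder candidate programs

def compress_like_stitch(programs: list[str]) -> list[str]:
    """Stand-in for Stitch-style refactoring that extracts repeated structure."""
    return sorted(set(programs))  # placeholder "abstractions"

def llm_name_and_document(abstraction: str) -> tuple[str, str]:
    """Stand-in for the auto-naming and documentation step."""
    return f"helper_{abs(hash(abstraction)) % 1000}", f"Reusable routine derived from: {abstraction}"

def lilo_iteration(tasks: list[str], library: Library) -> Library:
    candidates = [p for t in tasks for p in llm_propose_programs(t, library)]
    for abstraction in compress_like_stitch(candidates):
        name, doc = llm_name_and_document(abstraction)
        library.abstractions[name] = (abstraction, doc)  # the library grows each iteration
    return library

library = lilo_iteration(["remove all vowels from a string", "draw a snowflake"], Library())
print(library.abstractions)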

The MIT framework writes programs in domain-specific programming languages, like Logo, a language developed at MIT in the 1970s to teach children about programming. Scaling up automated refactoring algorithms to handle more general programming languages like Python will be a focus for future research. Still, their work represents a step forward for how language models can facilitate increasingly elaborate coding activities.

Ada: Natural language guides AI task planning

Just like in programming, AI models that automate multi-step tasks in households and command-based video games lack abstractions. Imagine you’re cooking breakfast and ask your roommate to bring a hot egg to the table — they’ll intuitively abstract their background knowledge about cooking in your kitchen into a sequence of actions. In contrast, an LLM trained on similar information will still struggle to reason about what it needs to build a flexible plan.

Named after the famed mathematician Ada Lovelace, who many consider the world’s first programmer, the CSAIL-led “Ada” framework makes headway on this issue by developing libraries of useful plans for virtual kitchen chores and gaming. The method trains on potential tasks and their natural language descriptions, then a language model proposes action abstractions from this dataset. A human operator scores and filters the best plans into a library, so that the best possible actions can be implemented into hierarchical plans for different tasks.
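
A schematic sketch of that pipeline follows; the helpers (an LLM proposer, a human scoring step, and a toy planner) are hypothetical stand-ins, not Ada’s actual interfaces.

# Schematic of an Ada-style pipeline: propose action abstractions from task
# descriptions, let a human filter them into a library, then compose plans from
# the library. All functions are hypothetical placeholders for illustration.
def llm_propose_action_abstractions(task_descriptions: list[str]) -> list[str]:
    """Stand-in for prompting a model such as GPT-4 with the task dataset."""
    return [f"skill_for({d!r})" for d in task_descriptions]

def human_score(abstraction: str) -> float:
    """Stand-in for the human operator who rates each proposed abstraction."""
    return 1.0  # pretend every proposal is rated highly

def build_library(task_descriptions: list[str], threshold: float = 0.5) -> list[str]:
    proposals = llm_propose_action_abstractions(task_descriptions)
    return [a for a in proposals if human_score(a) >= threshold]  # keep only well-rated skills

def hierarchical_plan(goal: str, library: list[str]) -> list[str]:
    """Stand-in planner: sequence a couple of library skills toward the goal."""
    return [f"apply {skill} toward {goal!r}" for skill in library[:2]]

library = build_library(["place the chilled wine in the cabinet", "craft a bed"])
print(hierarchical_plan("place the chilled wine in the cabinet", library))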

“Traditionally, large language models have struggled with more complex tasks because of problems like reasoning about abstractions,” says Ada lead researcher Lio Wong, an MIT graduate student in brain and cognitive sciences, CSAIL affiliate, and LILO coauthor. “But we can combine the tools that software engineers and roboticists use with LLMs to solve hard problems, such as decision-making in virtual environments.”

When the researchers incorporated the widely used large language model GPT-4 into Ada, the system completed more tasks in a kitchen simulator and Mini Minecraft than the AI decision-making baseline “Code as Policies.” Ada used the background information hidden within natural language to understand how to place chilled wine in a cabinet and craft a bed. The results indicated a staggering 59 and 89 percent task accuracy improvement, respectively.

With this success, the researchers hope to generalize their work to real-world homes, with the hopes that Ada could assist with other household tasks and aid multiple robots in a kitchen. For now, its key limitation is that it uses a generic LLM, so the CSAIL team wants to apply a more powerful, fine-tuned language model that could assist with more extensive planning. Wong and her colleagues are also considering combining Ada with a robotic manipulation framework fresh out of CSAIL: LGA (language-guided abstraction).

Language-guided abstraction: Representations for robotic tasks

Andi Peng SM ’23, an MIT graduate student in electrical engineering and computer science and CSAIL affiliate, and her coauthors designed a method to help machines interpret their surroundings more like humans, cutting out unnecessary details in a complex environment like a factory or kitchen. Just like LILO and Ada, LGA has a novel focus on how natural language leads us to those better abstractions.

In these more unstructured environments, a robot will need some common sense about what it’s tasked with, even with basic training beforehand. Ask a robot to hand you a bowl, for instance, and the machine will need a general understanding of which features are important within its surroundings. From there, it can reason about how to give you the item you want. 

In LGA’s case, humans first provide a pre-trained language model with a general task description using natural language, like “bring me my hat.” Then, the model translates this information into abstractions about the essential elements needed to perform this task. Finally, an imitation policy trained on a few demonstrations can implement these abstractions to guide a robot to grab the desired item.
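
A schematic sketch of that flow follows. The functions are hypothetical stand-ins for the language-model abstraction step and the imitation-learned policy, not LGA’s actual code.

# Schematic of an LGA-style flow: a language model reduces the scene to the
# task-relevant elements, and a policy trained from a few demonstrations acts on
# that abstraction. All functions are hypothetical placeholders for illustration.
def language_model_abstraction(task: str, scene_objects: list[str]) -> list[str]:
    """Stand-in: keep only the scene elements the model deems relevant to the task."""
    return [obj for obj in scene_objects if obj in task]

def imitation_policy(abstraction: list[str]) -> list[str]:
    """Stand-in for a policy trained on a handful of demonstrations."""
    return [f"navigate_to({obj})" for obj in abstraction] + ["grasp()", "hand_over()"]

scene = ["hat", "mug", "laptop", "chair"]
abstraction = language_model_abstraction("bring me my hat", scene)  # -> ["hat"]
print(imitation_policy(abstraction))  # -> ['navigate_to(hat)', 'grasp()', 'hand_over()']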

Previous work required a person to take extensive notes on different manipulation tasks to pre-train a robot, which can be expensive. Remarkably, LGA guides language models to produce abstractions similar to those of a human annotator, but in less time. To illustrate this, the researchers used LGA to develop robotic policies that helped Boston Dynamics’ Spot quadruped pick up fruits and throw drinks in a recycling bin. These experiments show how the MIT-developed method can scan the world and develop effective plans in unstructured environments, potentially guiding autonomous vehicles on the road and robots working in factories and kitchens.

“In robotics, a truth we often disregard is how much we need to refine our data to make a robot useful in the real world,” says Peng. “Beyond simply memorizing what’s in an image for training robots to perform tasks, we wanted to leverage computer vision and captioning models in conjunction with language. By producing text captions from what a robot sees, we show that language models can essentially build important world knowledge for a robot.”

The challenge for LGA is that some behaviors can’t be explained in language, making certain tasks underspecified. To expand how they represent features in an environment, Peng and her colleagues are considering incorporating multimodal visualization interfaces into their work. In the meantime, LGA provides a way for robots to gain a better feel for their surroundings when giving humans a helping hand. 

An “exciting frontier” in AI

“Library learning represents one of the most exciting frontiers in artificial intelligence, offering a path towards discovering and reasoning over compositional abstractions,” says Robert Hawkins, an assistant professor at the University of Wisconsin-Madison who was not involved with the papers. Hawkins notes that previous techniques exploring this subject have been “too computationally expensive to use at scale” and have an issue with the lambdas, or anonymous functions used to define new operations in many programming languages, that they generate. “They tend to produce opaque ‘lambda salads,’ big piles of hard-to-interpret functions. These recent papers demonstrate a compelling way forward by placing large language models in an interactive loop with symbolic search, compression, and planning algorithms. This work enables the rapid acquisition of more interpretable and adaptive libraries for the task at hand.”

By building libraries of high-quality code abstractions using natural language, the three neurosymbolic methods make it easier for language models to tackle more elaborate problems and environments in the future. This deeper understanding of the precise keywords within a prompt presents a path forward in developing more human-like AI models.

MIT CSAIL members are senior authors for each paper: Joshua Tenenbaum, a professor of brain and cognitive sciences, for both LILO and Ada; Julie Shah, head of the Department of Aeronautics and Astronautics, for LGA; and Jacob Andreas, associate professor of electrical engineering and computer science, for all three. The additional MIT authors are all PhD students: Maddy Bowers and Theo X. Olausson for LILO, Jiayuan Mao and Pratyusha Sharma for Ada, and Belinda Z. Li for LGA. Muxin Liu of Harvey Mudd College was a coauthor on LILO; Zachary Siegel of Princeton University, Jaihai Feng of the University of California at Berkeley, and Noa Korneev of Microsoft were coauthors on Ada; and Ilia Sucholutsky, Theodore R. Sumers, and Thomas L. Griffiths of Princeton were coauthors on LGA. 

LILO and Ada were supported, in part, by MIT Quest for Intelligence, the MIT-IBM Watson AI Lab, Intel, U.S. Air Force Office of Scientific Research, the U.S. Defense Advanced Research Projects Agency, and the U.S. Office of Naval Research, with the latter project also receiving funding from the Center for Brains, Minds and Machines. LGA received funding from the U.S. National Science Foundation, Open Philanthropy, the Natural Sciences and Engineering Research Council of Canada, and the U.S. Department of Defense.



from MIT News https://ift.tt/85Jwmzb

Nuno Loureiro named director of MIT’s Plasma Science and Fusion Center

Nuno Loureiro, professor of nuclear science and engineering and of physics, has been appointed the new director of the MIT Plasma Science and Fusion Center, effective May 1.

Loureiro is taking the helm of one of MIT’s largest labs: more than 250 full-time researchers, staff members, and students work and study in seven buildings with 250,000 square feet of lab space. A theoretical physicist and fusion scientist, Loureiro joined MIT as a faculty member in 2016, and was appointed deputy director of the Plasma Science and Fusion Center (PSFC) in 2022. Loureiro succeeds Dennis Whyte, who stepped down at the end of 2023 to return to teaching and research.

Stepping into his new role as director, Loureiro says, “The PSFC has an impressive tradition of discovery and leadership in plasma and fusion science and engineering. Becoming director of the PSFC is an incredible opportunity to shape the future of these fields. We have a world-class team, and it’s an honor to be chosen as its leader.”

Loureiro’s own research ranges widely. He is recognized for advancing the understanding of multiple aspects of plasma behavior, particularly turbulence and the physics underpinning solar flares and other astronomical phenomena. In the fusion domain, his work enables the design of fusion devices that can more efficiently control and harness the energy of fusing plasmas, bringing the dream of clean, near-limitless fusion power that much closer. 

Plasma physics is foundational to advancing fusion science, a fact Loureiro has embraced and one that is relevant as he considers the direction of the PSFC’s multidisciplinary research. “But plasma physics is only one aspect of our focus,” he says. “Building a scientific agenda that continues and expands on the PSFC’s history of innovation in all aspects of fusion science and engineering is vital, and a key facet of that work is facilitating our researchers’ efforts to produce the breakthroughs that are necessary for the realization of fusion energy.”

As the climate crisis accelerates, fusion power continues to grow in appeal: It produces no carbon emissions, its fuel is plentiful, and dangerous “meltdowns” are impossible. The sooner that fusion power is commercially available, the greater impact it can have on reducing greenhouse gas emissions and meeting global climate goals. While technical challenges remain, “the PSFC is well poised to meet them, and continue to show leadership. We are a mission-driven lab, and our students and staff are incredibly motivated,” Loureiro comments.

“As MIT continues to lead the way toward the delivery of clean fusion power onto the grid, I have no doubt that Nuno is the right person to step into this key position at this critical time,” says Maria T. Zuber, MIT’s presidential advisor for science and technology policy. “I look forward to the steady advance of plasma physics and fusion science at MIT under Nuno’s leadership.”

Over the last decade, there have been massive leaps forward in the field of fusion energy, driven in part by innovations like high-temperature superconducting magnets developed at the PSFC. Further progress is guaranteed: Loureiro believes that “The next few years are certain to be an exciting time for us, and for fusion as a whole. It’s the dawn of a new era with burning plasma experiments” — a reference to the collaboration between the PSFC and Commonwealth Fusion Systems, a startup company spun out of the PSFC, to build SPARC, a fusion device that is slated to turn on in 2026 and produce a burning plasma that yields more energy than it consumes. “It’s going to be a watershed moment,” says Loureiro.

He continues, “In addition, we have strong connections to inertial confinement fusion experiments, including those at Lawrence Livermore National Lab, and we’re looking forward to expanding our research into stellarators, which are another kind of magnetic fusion device.” Over recent years, the PSFC has significantly increased its collaboration with industrial partners such as Eni, IBM, and others. Loureiro sees great value in this: “These collaborations are mutually beneficial: they allow us to grow our research portfolio while advancing companies’ R&D efforts. It’s very dynamic and exciting.”

Loureiro’s directorship begins as the PSFC is launching key tech development projects like LIBRA, a “blanket” of molten salt that can be wrapped around fusion vessels and perform double duty as a neutron energy absorber and a breeder for tritium (the fuel for fusion). Researchers at the PSFC have also developed a way to rapidly test the durability of materials being considered for use in a fusion power plant environment, and are now creating an experiment that will utilize a gyrotron, a source of powerful microwave radiation, to irradiate candidate materials.

Interest in fusion is at an all-time high; the demand for researchers and engineers, particularly in the nascent commercial fusion industry, is reflected by the record number of graduate students that are studying at the PSFC — more than 90 across seven affiliated MIT departments. The PSFC’s classrooms are full, and Loureiro notes a palpable sense of excitement. “Students are our greatest strength,” says Loureiro. “They come here to do world-class research but also to grow as individuals, and I want to give them a great place to do that. Supporting those experiences, making sure they can be as successful as possible is one of my top priorities.” Loureiro plans to continue teaching and advising students after his appointment begins.

MIT President Sally Kornbluth’s recently announced Climate Project is a clarion call for Loureiro: “It’s not hyperbole to say MIT is where you go to find solutions to humanity’s biggest problems,” he says. “Fusion is a hard problem, but it can be solved with resolve and ingenuity — characteristics that define MIT. Fusion energy will change the course of human history. It’s both humbling and exciting to be leading a research center that will play a key role in enabling that change.” 



from MIT News https://ift.tt/GASv2Fc

Tuesday, April 30, 2024

Studies in empathy and analytics

Upon the advice of one of his soccer teammates, James Simon enrolled in 14.73 (The Challenge of World Poverty) as a first-year student to fulfill a humanities requirement. He went from knowing nothing about economics to learning about the subject from Nobel laureates.

The lessons created by professors Esther Duflo and Abhijit Banerjee revealed to Simon an entirely new way to use science to help humanity. One of the projects Simon learned about in this class assessed an area of India with a low vaccination rate and created a randomized, controlled trial to figure out the best way to fix this problem.

“What was really cool about the class was that it talked about huge problems in the world, like poverty, hunger, and lack of vaccinations, and it talked about how you could break them down using experiments and quantify the best way to solve them,” he says.

Galvanized by this experience, Simon joined a research project in the economics department and committed to a blended major in computer science, economics, and data science. He began working on a research project with Senior Lecturer Sara Ellison in 2021 and has since contributed to multiple research papers published by the group, many concerning issues in development economics. One of his most memorable projects explored the question of whether internet access helps bridge the gap between poor and wealthy countries. Simon collected data, conducted interviews, and did statistical analysis to develop answers to the group’s questions. Their paper was published in Competition Policy International in 2021.

Further bridging his economics studies with real-world efforts, Simon has become involved with the Guatemalan charity Project Somos, which is dedicated to challenging poverty through access to food and education. Through MIT’s Global Research and Consulting Group, he led a team of seven students to analyze the program’s data, measure its impact in the community, and provide the organization with easy-to-use data analytics tools. He has continued working with Project Somos through his undergraduate years and has joined its board of directors.

Simon hopes to quantify the most effective approaches to solutions for the people and groups he works with. “The charity I work for says ‘Use your head and your heart.’ If you can approach the problems in the world with empathy and analytics, I think that is a really important way to help a lot of people,” he says.

Simon’s desire to positively impact his community is threaded through other areas of his life at MIT. He is a member of the varsity soccer team and the Phi Beta Epsilon fraternity, and has volunteered for the MIT Little Beavers Special Needs Running Club.

On the field, court, and trail

Athletics are a major part of Simon’s life, year-round. Soccer has long been his main sport; he joined the varsity soccer team as a first-year and has played ever since. In his second year with the team, Simon was recognized as an Academic All-American. He also earned the honor of NEWMAC First Team All-Conference in 2021.

Despite the long hours of practice, Simon says he is most relaxed when it’s game season. “It’s a nice, competitive outlet to have every day. You’re working with people that you like spending time with, to win games and have fun and practice to get better. Everything going on kind of fades away, and you’re just focused on playing your sport,” he explains.

Simon has also used his time at MIT to try new sports. In winter 2023, he joined the wrestling club. “I thought, ‘I’ve never done anything like this before. But maybe I’ll try it out,’” he says. “And so I tried it out knowing nothing. They were super welcoming and there were people with all experience levels, and I just really fell in love with it.” Simon also joined the MIT basketball team as a walk-on his senior year.

When not competing, Simon enjoys hiking. One of his favorite memories from the past four years is a trip to Yosemite National Park that he took with friends while interning in San Francisco. There, he hiked upward of 20 miles each day. Simon also embarks on hiking trips with friends closer to campus, in New Hampshire and Acadia National Park.

Social impact

Simon believes his philanthropic work has been pivotal to his experience at MIT. Through the MIT Global Research and Consulting Group, for which he has served as a case leader, he has connected with charity groups around the world, including in Guatemala and South Africa.

On campus, Simon has worked to build social connections within both his school and city-wide community. During his sophomore year, he spent his Sundays with the Little Beavers Running Team, a program that pairs children from the Boston area who are on the autism spectrum with an MIT student to practice running and other sports activities. “Throughout the course of a semester when you’re working with a kid, you’re able to see their confidence and social skills improve. That’s really rewarding to me,” Simon says.

Simon is also a member of the Phi Beta Epsilon fraternity. He joined the group in his first year at MIT and has lived with the other members of the fraternity since his sophomore year. He appreciates the group’s strong focus on supporting the social and professional skills of its members. Simon served as the chapter’s president for one semester and describes his experience as “very impactful.”

“There’s something really cool about having 40 of your friends all live in a house together,” he says. “A lot of my good memories from college are of sitting around in our common rooms late at night and just talking about random stuff.”

Technical projects and helping others

Next fall, Simon will continue his studies at MIT, pursuing a master’s degree in economics. Following this, he plans to move to New York to work in finance. In the summer of 2023 he interned at BlackRock, a large finance company, where he worked on a team that invested on behalf of people looking to grow their retirement funds. Simon says, “I thought it was cool that I was able to apply things I learned in school to have an impact on a ton of different people around the country by helping them prepare for retirement.”

Simon has done similar work in past internships. In the summer after his first year at MIT, he worked for Surge Employment Solutions, a startup that connected formerly incarcerated people to jobs. His responsibility was to quantify the social impacts of the startup, which was shown to reduce unemployment among formerly incarcerated individuals and to help high-turnover businesses save money by retaining employees.

On his community work, Simon says, “There’s always a lot more similarities between people than differences. So, I think getting to know people and being able to use what I learned to help people make their lives even a little bit better is cool. You think maybe as a college student, you wouldn’t be able to do a lot to make an impact around the world. But I think even with just the computer science and economics skills that I’ve learned in college, it’s always kind of surprising to me how much of an impact you can make on people if you just put in the effort to seek out opportunities.”



from MIT News https://ift.tt/YUFyDT0

Alison Badgett named director of the Priscilla King Gray Public Service Center

Vice Chancellor for Undergraduate and Graduate Education Ian A. Waitz announced recently that Alison Badgett has been appointed the new associate dean and director of the Priscilla King Gray (PKG) Public Service Center. She succeeds Jill Bassett, who left that role to become chief of staff to Chancellor Melissa Nobles.

“Alison is a thought leader on how to integrate community-engaged learning with systematic change, making her ideally suited to actualize MIT’s mission of educating transformative leaders,” Waitz says. “I have no doubt she will make the PKG Center a model for all of higher ed, given her wealth of experience, finely honed skills, and commitment to social change.”

“I’m excited to help the PKG Center, and broader MIT community, develop a collective vision for public service education that builds on the PKG Center’s strength in social innovation programming, and leverages the Institute’s unique culture of innovation,” Badgett says. “MIT’s institutional commitment to tackling complex societal and environmental challenges, taking responsibility for outcomes and not just inputs, is exceedingly rare. I’m also especially excited to engage STEM majors, who may be less likely to enter the nonprofit or public sector, but who can have a tremendous impact on social and environmental outcomes within the systems they work.”

Badgett has over 20 years of experience leading public policy and nonprofit organizations, particularly those addressing challenging issues like affordable housing and homelessness, criminal justice, and public education. She is the founding principal of a consulting firm, From Charity to Change, which works with nonprofit leaders, educators, and philanthropists to apply systems-change strategies that target the root causes of complex social problems.

Prior to her consulting role, Badgett was executive director of the Petey Greene Program, which recruits and trains 1,000 volunteers annually from 30 universities to tutor justice-impacted students in 50 prisons and reentry programs. In addition, the program educates volunteers on the injustice of our prison system and encourages both volunteers and students to advocate for reforms.

She also served as executive director of Raise Your Hand Texas, an organization that aims to improve education by piloting innovative learning practices. During her tenure, the organization launched a five-year, $10 million initiative to showcase and scale blended learning, and a 10-year, $50 million initiative to improve teacher preparation and the status of teaching.

Before leading Raise Your Hand Texas, Badgett was executive director of several organizations related to housing and homelessness in New York and New Jersey. During that time, she developed a $3.6 million demonstration program to permanently house the chronically homeless, which served as a model for state and national replication. She also served as senior policy advisor to the governor of New Jersey, providing counsel on land use, redevelopment, and housing. 

Badgett holds a global executive EdD from the University of Southern California, an MA in philosophy and education from Columbia University Teachers College, and a BA in politics from Princeton University.

Her appointment at the PKG Center is especially timely. Student demand for social impact experiential learning opportunities has increased significantly at MIT in recent years, and the center is expected to play a sizable role in increasing student engagement in social impact work and in helping to integrate social innovation into teaching and research.

At the same time, the Institute has made a commitment to help address complex issues with global impacts, such as climate change, economic inequality, and artificial intelligence. As part of that effort, the Office of Experiential Learning launched the Social Impact Experiential Learning Opportunity initiative last year, which has awarded nearly $1 million to fund hundreds of student opportunities. Projects cater to a broad range of interests and take place around the world — from using new computational methods to understand the role of special-interest-group funding in U.S. public policy to designing and testing a solar-powered, water-vapor condensing chamber in Madagascar.

Badgett, who is currently writing a book on re-imagining civic education at elite private schools, will begin her new role at the PKG Center in July. In the meantime, she is looking forward to bringing her experience to bear at MIT. “While leading public interest organizations was highly rewarding, I recognized that I could have a far greater impact educating future public interest leaders, and that higher education was the place to do it,” she says.



from MIT News https://ift.tt/mvEga4t

Monday, April 29, 2024

Offering clean energy around the clock

As remarkable as the rise of solar and wind farms has been over the last 20 years, achieving complete decarbonization is going to require a host of complementary technologies. That’s because renewables offer only intermittent power. They also can’t directly provide the high temperatures necessary for many industrial processes.

Now, 247Solar is building high-temperature concentrated solar power systems that use overnight thermal energy storage to provide round-the-clock power and industrial-grade heat.

The company’s modular systems can be used as standalone microgrids for communities or to provide power in remote places like mines and farms. They can also be used in conjunction with wind and conventional solar farms, giving customers 24/7 power from renewables and allowing them to offset use of the grid.

“One of my motivations for working on this system was trying to solve the problem of intermittency,” says 247Solar CEO Bruce Anderson ’69, SM ’73. “I just couldn’t see how we could get to zero emissions with solar photovoltaics (PV) and wind. Even with PV, wind, and batteries, we can’t get there, because there’s always bad weather, and current batteries aren’t economical over long periods. You have to have a solution that operates 24 hours a day.”

The company’s system is inspired by the design of a high-temperature heat exchanger by the late MIT Professor Emeritus David Gordon Wilson, who co-founded the company with Anderson. The company integrates that heat exchanger into what Anderson describes as a conventional, jet-engine-like turbine, enabling the turbine to produce power by circulating ambient pressure hot air with no combustion or emissions — what the company calls a first in the industry.

Here’s how the system works: Each 247Solar system uses a field of sun-tracking mirrors called heliostats to reflect sunlight to the top of a central tower. The tower features a proprietary solar receiver that heats air to around 1,000 Celsius at atmospheric pressure. The air is then used to drive 247Solar’s turbines and generate 400 kilowatts of electricity and 600 kilowatts of heat. Some of the hot air is also routed through a long-duration thermal energy storage system, where it heats solid materials that retain the heat. The stored heat is then used to drive the turbines when the sun stops shining.
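
Those figures allow a rough back-of-envelope check on the scale of the overnight storage. The short Python sketch below uses only the 400-kilowatt electric output quoted above; the hours of darkness and the turbine's heat-to-electricity efficiency are illustrative assumptions, not 247Solar specifications.

```python
# Back-of-envelope sizing of the overnight thermal store.
# The 400 kW electric output comes from the article; the storage window
# and turbine efficiency below are illustrative assumptions only.

ELECTRIC_OUTPUT_KW = 400             # turbine electric output (from the article)
HOURS_WITHOUT_SUN = 14               # assumed overnight window
ASSUMED_HEAT_TO_ELECTRIC_EFF = 0.35  # assumed turbine conversion efficiency

electric_kwh = ELECTRIC_OUTPUT_KW * HOURS_WITHOUT_SUN
thermal_kwh = electric_kwh / ASSUMED_HEAT_TO_ELECTRIC_EFF

print(f"Electricity delivered overnight: {electric_kwh:,.0f} kWh")
print(f"Thermal energy the store must hold: {thermal_kwh:,.0f} kWh (thermal)")
# With these assumptions: 5,600 kWh electric, roughly 16,000 kWh thermal.
```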

“We offer round-the-clock electricity, but we also offer a combined heat and power option, with the ability to take heat up to 970 Celsius for use in industrial processes,” Anderson says. “It’s a very flexible system.”

The company’s first deployment will be with a large utility in India. If that goes well, 247Solar hopes to scale up rapidly with other utilities, corporations, and communities around the globe.

A new approach to concentrated solar

Anderson kept in touch with his MIT network after graduating in 1973. He served as the director of MIT’s Industrial Liaison Program (ILP) between 1996 and 2000 and was elected as an alumni member of the MIT Corporation in 2013. The ILP connects companies with MIT’s network of students, faculty, and alumni to facilitate innovation, and the experience changed the course of Anderson’s career.

“That was an extremely fascinating job, and from it two things happened,” Anderson says. “One is that I realized I was really an entrepreneur and was not well-suited to the university environment, and the other is that I was reminded of the countless amazing innovations coming out of MIT.”

After leaving as director, Anderson began a startup incubator where he worked with MIT professors to start companies. Eventually, one of those professors was Wilson, who had invented the new heat exchanger and a ceramic turbine. Anderson and Wilson ended up putting together a small team to commercialize the technology in the early 2000s.

Anderson had done his MIT master’s thesis on solar energy in the 1970s, and the team realized the heat exchanger made possible a novel approach to concentrated solar power. In 2010, they received a $6 million development grant from the U.S. Department of Energy. But their first solar receiver was damaged during shipping to a national laboratory for testing, and the company ran out of money.

It wasn’t until 2015 that Anderson was able to raise money to get the company back off the ground. By that time, a new high-temperature metal alloy had been developed, which Anderson used in place of Wilson’s ceramic heat exchanger.

The Covid-19 pandemic further slowed 247’s plans to build a demonstration facility at its test site in Arizona, but strong customer interest has kept the company busy. Concentrated solar power doesn’t work everywhere — Arizona’s clear sunshine is a better fit than Florida’s hazy skies, for example — but Anderson is currently in talks with communities in parts of the U.S., India, Africa, and Australia where the technology would be a good fit.

These days, the company is increasingly proposing combining its systems with traditional solar PV, which lets customers reap the benefits of low-cost solar electricity during the day while using 247’s energy at night.

“That way we can get at least 24, if not more, hours of energy from a sunny day,” Anderson says. “We’re really moving toward these hybrid systems, which work like a Prius: Sometimes you’re using one source of energy, sometimes you’re using the other.”

The company also sells its HeatStorE thermal batteries as standalone systems. Instead of being heated by the solar system, the thermal storage is charged by circulating air through an electric coil powered by the grid, standalone PV, or wind. The heat can be stored for nine hours or more on a single charge and then dispatched as electricity plus industrial process heat at 250 Celsius, or as heat only, up to 970 Celsius.

Anderson says 247’s thermal battery is about one-seventh the cost of lithium-ion batteries per kilowatt hour produced.

Scaling a new model

The company is keeping its system flexible for whatever path customers want to take to complete decarbonization.

In addition to 247’s India project, the company is in advanced talks with off-grid communities in the United States and Egypt, mining operators around the world, and the government of a small country in Africa. Anderson says the company’s next customer will likely be an off-grid community in the U.S. that currently relies on diesel generators for power.

The company has also partnered with a financial company that will allow it to access capital to fund its own projects and sell clean energy directly to customers, which Anderson says will help 247 grow faster than relying solely on selling entire systems to each customer.

As it works to scale up its deployments, Anderson believes 247 offers a solution to help customers respond to increasing pressure from governments as well as community members.

“Emerging economies in places like Africa don’t have any alternative to fossil fuels if they want 24/7 electricity,” Anderson says. “Our owning and operating costs are less than half that of diesel gen-sets. Customers today really want to stop producing emissions if they can, so you’ve got villages, mines, industries, and entire countries where the people inside are saying, ‘We can’t burn diesel anymore.’”



from MIT News https://ift.tt/25490Nz

An AI dataset carves new paths to tornado detection

The return of spring in the Northern Hemisphere touches off tornado season. A tornado's twisting funnel of dust and debris seems an unmistakable sight. But that sight can be obscured to radar, the tool of meteorologists. It's hard to know exactly when a tornado has formed, or even why.

A new dataset could hold answers. It contains radar returns from thousands of tornadoes that have hit the United States in the past 10 years. In the dataset, storms that spawned tornadoes sit alongside other severe storms, some with nearly identical conditions, that never did. MIT Lincoln Laboratory researchers who curated the dataset, called TorNet, have now released it open source. They hope to enable breakthroughs in detecting one of nature's most mysterious and violent phenomena.

“A lot of progress is driven by easily available, benchmark datasets. We hope TorNet will lay a foundation for machine learning algorithms to both detect and predict tornadoes,” says Mark Veillette, the project's co-principal investigator with James Kurdzo. Both researchers work in the Air Traffic Control Systems Group. 

Along with the dataset, the team is releasing models trained on it. The models show promise for machine learning's ability to spot a twister. Building on this work could open new frontiers for forecasters, helping them provide more accurate warnings that might save lives. 

Swirling uncertainty

About 1,200 tornadoes occur in the United States every year, causing millions to billions of dollars in economic damage and claiming 71 lives on average. Last year, one unusually long-lasting tornado killed 17 people and injured at least 165 others along a 59-mile path in Mississippi.  

Yet tornadoes are notoriously difficult to forecast because scientists don't have a clear picture of why they form. “We can see two storms that look identical, and one will produce a tornado and one won't. We don't fully understand it,” Kurdzo says.

A tornado’s basic ingredients are thunderstorms with instability caused by rapidly rising warm air and wind shear that causes rotation. Weather radar is the primary tool used to monitor these conditions. But tornadoes lie too low to be detected, even when moderately close to the radar. As the radar beam with a given tilt angle travels farther from the antenna, it gets higher above the ground, mostly seeing reflections from rain and hail carried in the “mesocyclone,” the storm's broad, rotating updraft. A mesocyclone doesn't always produce a tornado.

With this limited view, forecasters must decide whether or not to issue a tornado warning. They often err on the side of caution. As a result, the rate of false alarms for tornado warnings is more than 70 percent. “That can lead to boy-who-cried-wolf syndrome,” Kurdzo says.  

In recent years, researchers have turned to machine learning to better detect and predict tornadoes. However, raw datasets and models have not always been accessible to the broader community, stifling progress. TorNet is filling this gap.

The dataset contains more than 200,000 radar images, 13,587 of which depict tornadoes. The rest of the images are non-tornadic, taken from storms in one of two categories: randomly selected severe storms or false-alarm storms (those that led a forecaster to issue a warning but that didn’t produce a tornado).

Each sample of a storm or tornado comprises two sets of six radar images. The two sets correspond to different radar sweep angles. The six images portray different radar data products, such as reflectivity (showing precipitation intensity) or radial velocity (indicating if winds are moving toward or away from the radar).
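
As a rough illustration of that layout, the sketch below (in Python with NumPy) represents one hypothetical sample as a stack of 2 x 6 image channels plus a label. The pixel dimensions and the exact list of product names are assumptions for illustration only; the released TorNet files define their own format and variable names.

```python
import numpy as np

# Hypothetical layout of one TorNet-style sample: 2 radar sweep angles
# x 6 radar data products, each a 2D image. The 120 x 240 pixel size and
# the product list below are illustrative assumptions, not the dataset's
# actual specification.

N_SWEEPS = 2
PRODUCTS = ["reflectivity", "radial_velocity", "spectrum_width",
            "differential_reflectivity", "specific_differential_phase",
            "correlation_coefficient"]
HEIGHT, WIDTH = 120, 240

sample = np.zeros((N_SWEEPS, len(PRODUCTS), HEIGHT, WIDTH), dtype=np.float32)
label = 1  # 1 = tornadic; 0 = random severe storm or false-alarm storm

print(sample.shape)  # (2, 6, 120, 240)
```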

A challenge in curating the dataset was first finding tornadoes. Within the corpus of weather radar data, tornadoes are extremely rare events. The team then had to balance those tornado samples with difficult non-tornado samples. If the dataset were too easy, say by comparing tornadoes to snowstorms, an algorithm trained on the data would likely over-classify storms as tornadic.

“What's beautiful about a true benchmark dataset is that we're all working with the same data, with the same level of difficulty, and can compare results,” Veillette says. “It also makes meteorology more accessible to data scientists, and vice versa. It becomes easier for these two parties to work on a common problem.”

Both researchers represent the progress that can come from cross-collaboration. Veillette is a mathematician and algorithm developer who has long been fascinated by tornadoes. Kurdzo is a meteorologist by training and a signal processing expert. In grad school, he chased tornadoes with custom-built mobile radars, collecting data to analyze in new ways.

“This dataset also means that a grad student doesn't have to spend a year or two building a dataset. They can jump right into their research,” Kurdzo says.

This project was funded by Lincoln Laboratory's Climate Change Initiative, which aims to leverage the laboratory's diverse technical strengths to help address climate problems threatening human health and global security.

Chasing answers with deep learning

Using the dataset, the researchers developed baseline artificial intelligence (AI) models. They were particularly eager to apply deep learning, a form of machine learning that excels at processing visual data. On its own, deep learning can extract features (key observations that an algorithm uses to make a decision) from images across a dataset. Other machine learning approaches require humans to first manually label features. 

“We wanted to see if deep learning could rediscover what people normally look for in tornadoes and even identify new things that typically aren't searched for by forecasters,” Veillette says.

The results are promising. Their deep learning model performed similarly to, or better than, all tornado-detecting algorithms known in the literature. The trained algorithm correctly classified 50 percent of weaker EF-1 tornadoes and over 85 percent of tornadoes rated EF-2 or higher, which make up the most devastating and costly occurrences of these storms.
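
For readers curious what such a detector can look like in code, here is a minimal PyTorch sketch of a small convolutional classifier that takes the 12 stacked radar channels (2 sweeps x 6 products) and outputs a tornado probability. It is a toy baseline for illustration only, not the model the laboratory released; the layer sizes and input dimensions are placeholders.

```python
import torch
import torch.nn as nn

# Toy convolutional tornado classifier, for illustration only.
# Input: (batch, 12, H, W) -- 2 sweeps x 6 radar products stacked as channels.
# Output: probability that the sample is tornadic.

class TinyTornadoNet(nn.Module):
    def __init__(self, in_channels: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global average pooling
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(z))

model = TinyTornadoNet()
fake_batch = torch.randn(4, 12, 120, 240)   # four fake samples
print(model(fake_batch).squeeze(1))          # four probability-like scores
```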

They also evaluated two other types of machine-learning models, and one traditional model to compare against. The source code and parameters of all these models are freely available. The models and dataset are also described in a paper submitted to a journal of the American Meteorological Society (AMS). Veillette presented this work at the AMS Annual Meeting in January.

“The biggest reason for putting our models out there is for the community to improve upon them and do other great things,” Kurdzo says. “The best solution could be a deep learning model, or someone might find that a non-deep learning model is actually better.”

TorNet could be useful to the weather community for other purposes too, such as conducting large-scale case studies on storms. It could also be augmented with other data sources, like satellite imagery or lightning maps. Fusing multiple types of data could improve the accuracy of machine learning models.

Taking steps toward operations

On top of detecting tornadoes, Kurdzo hopes that models might help unravel the science of why they form.

“As scientists, we see all these precursors to tornadoes — an increase in low-level rotation, a hook echo in reflectivity data, specific differential phase (KDP) foot and differential reflectivity (ZDR) arcs. But how do they all go together? And are there physical manifestations we don't know about?” he asks.

Teasing out those answers might be possible with explainable AI. Explainable AI refers to methods that allow a model to provide its reasoning, in a format understandable to humans, of why it came to a certain decision. In this case, these explanations might reveal physical processes that happen before tornadoes. This knowledge could help train forecasters, and models, to recognize the signs sooner. 
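
One widely used explainable-AI probe, occlusion sensitivity, gives a flavor of how such explanations can be produced: mask one region of the radar image at a time and see how much the predicted probability changes. The Python sketch below is generic and illustrative; the stand-in model and the patch size are placeholders, not anything tied to TorNet.

```python
import torch
import torch.nn as nn

# Occlusion sensitivity: mask one patch of the input at a time and record
# how much the predicted tornado probability drops. Patches whose removal
# hurts the score most are the ones the model relied on. The stand-in model
# and 30-pixel patch size are placeholders for illustration.

model = nn.Sequential(
    nn.Conv2d(12, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1), nn.Sigmoid(),
)

@torch.no_grad()
def occlusion_map(model, x, patch=30):
    base = model(x).item()                      # unmasked prediction
    _, _, h, w = x.shape
    heat = torch.zeros(h // patch, w // patch)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = x.clone()
            masked[:, :, i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - model(masked).item()
    return heat                                  # larger = more influential patch

x = torch.randn(1, 12, 120, 240)                 # one fake 2-sweep x 6-product sample
print(occlusion_map(model, x).shape)             # torch.Size([4, 8])
```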

“None of this technology is ever meant to replace a forecaster. But perhaps someday it could guide forecasters' eyes in complex situations, and give a visual warning to an area predicted to have tornadic activity,” Kurdzo says.

Such assistance could be especially useful as radar technology improves and future networks potentially grow denser. Data refresh rates in a next-generation radar network are expected to increase from every five minutes to approximately one minute, perhaps faster than forecasters can interpret the new information. Because deep learning can process huge amounts of data quickly, it could be well-suited for monitoring radar returns in real time, alongside humans. Tornadoes can form and disappear in minutes.

But the path to an operational algorithm is a long road, especially in safety-critical situations, Veillette says. “I think the forecaster community is still, understandably, skeptical of machine learning. One way to establish trust and transparency is to have public benchmark datasets like this one. It's a first step.”

The next steps, the team hopes, will be taken by researchers across the world who are inspired by the dataset and energized to build their own algorithms. Those algorithms will in turn go into test beds, where they'll eventually be shown to forecasters, to start a process of transitioning into operations.

In the end, the path could circle back to trust.

“We may never get more than a 10- to 15-minute tornado warning using these tools. But if we could lower the false-alarm rate, we could start to make headway with public perception,” Kurdzo says. “People are going to use those warnings to take the action they need to save their lives.”



from MIT News https://ift.tt/oJQ159x

MIT faculty, instructors, students experiment with generative AI in teaching and learning

How can MIT’s community leverage generative AI to support learning and work on campus and beyond?

At MIT’s Festival of Learning 2024, faculty and instructors, students, staff, and alumni exchanged perspectives about the digital tools and innovations they’re experimenting with in the classroom. Panelists agreed that generative AI should be used to scaffold — not replace — learning experiences.

This annual event, co-sponsored by MIT Open Learning and the Office of the Vice Chancellor, celebrates teaching and learning innovations. When introducing new teaching and learning technologies, panelists stressed the importance of iteration and teaching students how to develop critical thinking skills while leveraging technologies like generative AI.

“The Festival of Learning brings the MIT community together to explore and celebrate what we do every day in the classroom,” said Christopher Capozzola, senior associate dean for open learning. “This year's deep dive into generative AI was reflective and practical — yet another remarkable instance of ‘mind and hand’ here at the Institute.”  

Incorporating generative AI into learning experiences 

MIT faculty and instructors aren’t just willing to experiment with generative AI — some believe it’s a necessary tool to prepare students to be competitive in the workforce. “In a future state, we will know how to teach skills with generative AI, but we need to be making iterative steps to get there instead of waiting around,” said Melissa Webster, lecturer in managerial communication at MIT Sloan School of Management. 

Some educators are revisiting their courses’ learning goals and redesigning assignments so students can achieve the desired outcomes in a world with AI. Webster, for example, previously paired written and oral assignments so students would develop ways of thinking. But, she saw an opportunity for teaching experimentation with generative AI. If students are using tools such as ChatGPT to help produce writing, Webster asked, “how do we still get the thinking part in there?”

One of the new assignments Webster developed asked students to generate cover letters through ChatGPT and critique the results from the perspective of future hiring managers. Beyond learning how to refine generative AI prompts to produce better outputs, Webster shared that “students are thinking more about their thinking.” Reviewing their ChatGPT-generated cover letter helped students determine what to say and how to say it, supporting their development of higher-level strategic skills like persuasion and understanding audiences.

Takako Aikawa, senior lecturer at the MIT Global Studies and Languages Section, redesigned a vocabulary exercise to ensure students developed a deeper understanding of the Japanese language, rather than just right or wrong answers. Students compared short sentences written by themselves and by ChatGPT and developed broader vocabulary and grammar patterns beyond the textbook. “This type of activity enhances not only their linguistic skills but stimulates their metacognitive or analytical thinking,” said Aikawa. “They have to think in Japanese for these exercises.”

While these panelists and other Institute faculty and instructors are redesigning their assignments, many MIT undergraduate and graduate students across different academic departments are leveraging generative AI for efficiency: creating presentations, summarizing notes, and quickly retrieving specific ideas from long documents. But this technology can also creatively personalize learning experiences. Its ability to communicate information in different ways allows students with different backgrounds and abilities to adapt course material in a way that’s specific to their particular context. 

Generative AI, for example, can help with student-centered learning at the K-12 level. Joe Diaz, program manager and STEAM educator for MIT pK-12 at Open Learning, encouraged educators to foster learning experiences where the student can take ownership. “Take something that kids care about and they’re passionate about, and they can discern where [generative AI] might not be correct or trustworthy,” said Diaz.

Panelists encouraged educators to think about generative AI in ways that move beyond a course policy statement. When incorporating generative AI into assignments, the key is to be clear about learning goals and open to sharing examples of how generative AI could be used in ways that align with those goals. 

The importance of critical thinking

Although generative AI can have positive impacts on educational experiences, users need to understand why large language models might produce incorrect or biased results. Faculty, instructors, and student panelists emphasized that it’s critical to contextualize how generative AI works. “[Instructors] try to explain what goes on in the back end and that really does help my understanding when reading the answers that I’m getting from ChatGPT or Copilot,” said Joyce Yuan, a senior in computer science. 

Jesse Thaler, professor of physics and director of the National Science Foundation Institute for Artificial Intelligence and Fundamental Interactions, warned about trusting a probabilistic tool to give definitive answers without uncertainty bands. “The interface and the output needs to be of a form that there are these pieces that you can verify or things that you can cross-check,” Thaler said.

When introducing tools like calculators or generative AI, the faculty and instructors on the panel said it’s essential for students to develop critical thinking skills in those particular academic and professional contexts. Computer science courses, for example, could permit students to use ChatGPT for help with their homework if the problem sets are broad enough that generative AI tools wouldn’t capture the full answer. However, introductory students who haven’t developed the understanding of programming concepts need to be able to discern whether the information ChatGPT generated was accurate or not.

Ana Bell, senior lecturer in the Department of Electrical Engineering and Computer Science and MITx digital learning scientist, dedicated one class toward the end of the semester of Course 6.100L (Introduction to Computer Science and Programming Using Python) to teaching students how to use ChatGPT for programming questions. She wanted students to understand that setting up generative AI tools with the context of a programming problem, and inputting as many details as possible, helps achieve the best possible results. “Even after it gives you a response back, you have to be critical about that response,” said Bell. By waiting to introduce ChatGPT until this stage, students were able to look at generative AI’s answers critically because they had spent the semester developing the skills to identify whether a solution was incorrect or might not work for every case.

A scaffold for learning experiences

The bottom line from the panelists during the Festival of Learning was that generative AI should provide scaffolding for engaging learning experiences where students can still achieve desired learning goals. The MIT undergraduate and graduate student panelists found it invaluable when educators set expectations for the course about when and how it’s appropriate to use AI tools. Informing students of the learning goals allows them to understand whether generative AI will help or hinder their learning. Student panelists asked for trust that they would use generative AI as a starting point, or treat it like a brainstorming session with a friend for a group project. Faculty and instructor panelists said they will continue iterating their lesson plans to best support student learning and critical thinking. 

Panelists from both sides of the classroom discussed the importance of generative AI users being responsible for the content they produce and avoiding automation bias — trusting the technology’s response implicitly without thinking critically about why it produced that answer and whether it’s accurate. But since generative AI is built by people making design decisions, Thaler told students, “You have power to change the behavior of those tools.”



from MIT News https://ift.tt/HhGqd2t

Julie Shah named head of the Department of Aeronautics and Astronautics

Julie Shah ’04, SM ’06, PhD ’11, the H.N. Slater Professor in Aeronautics and Astronautics, has been named the new head of the Department of Aeronautics and Astronautics (AeroAstro), effective May 1.

“Julie brings an exceptional record of visionary and interdisciplinary leadership to this role. She has made substantial technical contributions in the field of robotics and AI, particularly as it relates to the future of work, and has bridged important gaps in the social, ethical, and economic implications of AI and computing,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

In addition to her role as a faculty member in AeroAstro, Shah served as associate dean of Social and Ethical Responsibilities of Computing in the MIT Schwarzman College of Computing from 2019 to 2022, helping launch a coordinated curriculum that engages more than 2,000 students a year at the Institute. She currently directs the Interactive Robotics Group in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) as well as MIT’s Industrial Performance Center.

Shah and her team at the Interactive Robotics Group conduct research that aims to imagine the future of work by designing collaborative robot teammates that enhance human capability. She is expanding the use of human cognitive models for artificial intelligence and has translated her work to manufacturing assembly lines, health-care applications, transportation, and defense. In 2020, Shah co-authored the popular book “What to Expect When You’re Expecting Robots,” which explores the future of human-robot collaboration.

As an expert on how humans and robots interact in the workforce, Shah was named co-director of the Work of the Future Initiative, a successor group of MIT’s Task Force on the Work of the Future, alongside Ben Armstrong, executive director and research scientist at MIT’s Industrial Performance Center. In March of this year, Shah was named a co-leader of the Working Group on Generative AI and the Work of the Future, alongside Armstrong and Kate Kellogg, the David J. McGrath Jr. Professor of Management and Innovation. The group is examining how generative AI tools can contribute to higher-quality jobs and inclusive access to the latest technologies across sectors.

Shah’s contributions as both a researcher and educator have been recognized with many awards and honors throughout her career. She was named an associate fellow of the American Institute of Aeronautics and Astronautics (AIAA) in 2017, and in 2018 she was the recipient of the IEEE Robotics and Automation Society Academic Early Career Award. Shah was also named a Bisplinghoff Faculty Fellow, was named to MIT Technology Review’s TR35 List, and received an NSF Faculty Early Career Development Award. In 2013, her work on human-robot collaboration was included on MIT Technology Review’s list of 10 Breakthrough Technologies.

In January 2024, she was appointed to the first-ever AIAA Aerospace Artificial Intelligence Advisory Group, which was founded “to advance the appropriate use of AI technology particularly in aeronautics, aerospace R&D, and space.” Shah currently serves as editor-in-chief of Foundations and Trends in Robotics, as an editorial board member of the AIAA Progress Series, and as an executive council member of the Association for the Advancement of Artificial Intelligence.

A dedicated educator, Shah has been recognized for her collaborative and supportive approach as a mentor. She was honored by graduate students as “Committed to Caring” (C2C) in 2019. For the past 10 years, she has served as an advocate, community steward, and mentor for students in her role as head of house of the Sidney Pacific Graduate Community.

Shah received her bachelor’s and master’s degrees in aeronautical and astronautical engineering, and her PhD in autonomous systems, all from MIT. After receiving her doctoral degree, she joined Boeing as a postdoc, before returning to MIT in 2011 as a faculty member.

Shah succeeds Professor Steven Barrett, who has led AeroAstro as both interim department head and then department head since May 2023.



from MIT News https://ift.tt/MCrcyVP

Remembering Chasity Nunez, a shining star at MIT Health

On March 5, the MIT community lost one of its shining stars when Chasity Nunez passed away. She was 27.

“Chas,” as her friends and colleagues called her, served as the patient safety and clinical quality program coordinator at MIT Health. In her role, Nunez helped MIT Health maintain its high safety standards, working to train staff on reporting procedures and best practices for patient safety.

Director of Clinical Collaborations and Partnerships Elene Scheff was Nunez’s hiring manager and remembers her as a “perpetual learner.” Nunez put herself through both college and graduate school and was working on a graduate degree in informatics — her second master’s degree. “She loved to be challenged … She also loved collaborating with everybody,” Scheff remembers.

“Chas was passionate about the health and well-being of the MIT community,” adds MIT Chief Health Officer Cecilia Stuopis. “She was beloved by the colleagues who worked closely with her, and her dedication to our patients was powerful and impactful.”

Nunez’s dedication to helping patients within the MIT community was only matched by her desire to give back and be of service to her country. She was an active member of the U.S. Army National Guard, where she was stationed in Connecticut and served as an IT support specialist.

“[Chas] was always looking to improve upon herself,” says Janis Puibello, Nunez’s manager and MIT Health’s associate chief of nursing and clinical quality. “[She] was hungry for what we had to offer.”

Michele David, chief of clinical quality and patient safety, agrees. David recalls Nunez’s can-do spirit: “If she didn’t know how to do something, she would tell you, ‘I don’t know how to do it, but I will find out!’”

“She brought a lot to MIT Health and will always be with us,” says Puibello.

Nunez is survived by her mother and a daughter. To honor Nunez, MIT Health set up a GoFundMe campaign to help raise funds for her surviving daughter. The $5,000 campaign exceeded its goal by more than $3,000. All proceeds collected were donated to Nunez’s family to be used toward her daughter’s future education.



from MIT News https://ift.tt/tcAFDqS

Saturday, April 27, 2024

Exploring the history of data-driven arguments in public life

Political debates today may not always be exceptionally rational, but they are often infused with numbers. If people are discussing the economy or health care or climate change, sooner or later they will invoke statistics.

It was not always thus. Our habit of using numbers to make political arguments has a history, and William Deringer is a leading historian of it. Indeed, in recent years Deringer, an associate professor in MIT’s Program in Science, Technology, and Society (STS), has carved out a distinctive niche through his scholarship showing how quantitative reasoning has become part of public life.

In his prize-winning 2018 book “Calculated Values” (Harvard University Press), Deringer identified a time in British public life from the 1680s to the 1720s as a key moment when the practice of making numerical arguments took hold — a trend deeply connected with the rise of parliamentary power and political parties. Crucially, freedom of the press also expanded, allowing greater scope for politicians and the public to have frank discussions about the world as it was, backed by empirical evidence.

Deringer’s second book project, in progress and under contract to Yale University Press, digs further into a concept from the first book — the idea of financial discounting. This is a calculation to estimate what money (or other things) in the future is worth today, to assign those future objects a “present value.” Some skilled mathematicians understood discounting in medieval times; its use expanded in the 1600s; today it is very common in finance and is the subject of debate in relation to climate change, as experts try to estimate ideal spending levels on climate matters.
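
The mechanics are simple compound interest run in reverse: a sum FV arriving t years from now, discounted at annual rate r, has a present value PV = FV / (1 + r)^t. The short Python sketch below shows why the choice of rate matters so much in climate debates; the damage figure and the two rates are purely illustrative and not drawn from Deringer's work.

```python
# Financial discounting: present value of a future amount.
# PV = FV / (1 + r)**t  (compound interest run in reverse).
# The damage figure and discount rates below are illustrative only.

def present_value(future_value: float, rate: float, years: int) -> float:
    return future_value / (1 + rate) ** years

future_damage = 1_000_000   # hypothetical climate damage 100 years from now
for rate in (0.015, 0.05):
    pv = present_value(future_damage, rate, 100)
    print(f"rate {rate:.1%}: present value = ${pv:,.0f}")

# rate 1.5%: present value is roughly $226,000
# rate 5.0%: present value is roughly $7,600
# A seemingly small change in the rate shifts the answer by a factor of about 30,
# which is why the discount rate sits at the center of climate cost-benefit debates.
```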

“The book is about how this particular technique came to have the power to weigh in on profound social questions,” Deringer says. “It’s basically about compound interest, and it’s at the center of the most important global question we have to confront.”

Numbers alone do not make a debate rational or informative; they can be false, misleading, used to entrench interests, and so on. Indeed, a key theme in Deringer’s work is that when quantitative reasoning gains more ground, the question is why, and to whose benefit. In this sense his work aligns with the long-running and always-relevant approach of the Institute’s STS faculty, in thinking carefully about how technology and knowledge are applied to the world.

“The broader culture has become more attuned to STS, whether it’s conversations about AI or algorithmic fairness or climate change or energy, these are simultaneously technical and social issues,” Deringer says. “Teaching undergraduates, I’ve found the awareness of that at MIT has only increased.” For both his research and teaching, Deringer received tenure from MIT earlier this year.

Dig in, work outward

Deringer has been focused on these topics since he was an undergraduate at Harvard University.

“I found myself becoming really interested in the history of economics, the history of practical mathematics, data, statistics, and how it came to be that so much of our world is organized quantitatively,” he says.

Deringer wrote a college thesis about how England measured the land it was seizing from Ireland in the 1600s, and then, after graduating, went to work in the finance sector, which gave him a further chance to think about the application of quantification to modern life.

“That was not what I wanted to do forever, but for some of the conceptual questions I was interested in, the societal life of calculations, I found it to be a really interesting space,” Deringer says.

He returned to academia by pursuing his PhD in the history of science at Princeton University. There, in his first year of graduate school, Deringer found in the archives 18th-century pamphlets about financial calculations concerning the value of stock involved in the infamous episode of speculation known as the South Sea Bubble. That became part of his dissertation; skeptics of the South Sea Bubble were among the prominent early voices bringing data into public debates. It has also helped inform his second book.

First, though, Deringer earned his doctorate from Princeton in 2012, then spent three years as a Mellon Postdoctoral Research Fellow at Columbia University. He joined the MIT faculty in 2015. At the Institute, he finished turning his dissertation into the “Calculated Values” book — which won the 2019 Oscar Kenshur Prize for the best book from the Center for Eighteenth-Century Studies at Indiana University, and was co-winner of the 2021 Joseph J. Spengler Prize for best book from the History of Economics Society.

“My method as a scholar is to dig into the technical details, then work outward historically from them,” Deringer says.

A long historical chain

Even as Deringer was writing his first book, the idea for the second one was taking root in his mind. Those South Sea Bubble pamphlets he had found while at Princeton incorporated discounting, which was intermittently present in “Calculated Values.” Deringer was intrigued by how adept 18th-century figures were at discounting.

“Something that I thought of as a very modern technique seemed to be really well-known by a lot of people in the 1720s,” he says.

At the same time, a conversation with an academic colleague in philosophy made it clear to Deringer how hotly contested discounting had become in climate change policy. He soon resolved to write the “biography of a calculation” about financial discounting.

“I knew my next book had to be about this,” Deringer says. “I was very interested in the deep historical roots of discounting, and it has a lot of present urgency.”

Deringer says the book will incorporate material about the financing of English cathedrals, the heavy use of discounting in the mining industry during the Industrial Revolution, a revival of discounting in 1960s policy circles, and climate change, among other things. In each case, he is carefully looking at the interests and historical dynamics behind the use of discounting.

“For people who use discounting regularly, it’s like gravity: It’s very obvious that to be rational is to discount the future according to this formula,” Deringer says. “But if you look at history, what is thought of as rational is part of a very long historical chain of people applying this calculation in various ways, and over time that’s just how things are done. I’m really interested in pulling apart that idea that this is a sort of timeless rational calculation, as opposed to a product of this interesting history.”

Working in STS, Deringer notes, has helped encourage him to link together numerous historical time periods into one book about the numerous ways discounting has been used.

“I’m not sure that pursuing a book that stretches from the 17th century to the 21st century is something I would have done in other contexts,” Deringer says. He is also quick to credit his colleagues in STS and in other programs for helping create the scholarly environment in which he is thriving.

“I came in with a really amazing cohort of other scholars in SHASS,” Deringer notes, referring to the MIT School of Humanities, Arts, and Social Sciences. He cites others receiving tenure in the last year such as his STS colleague Robin Scheffler, historian Megan Black, and historian Caley Horan, with whom Deringer has taught graduate classes on the concept of risk in history. In all, Deringer says, the Institute has been an excellent place for him to pursue interdisciplinary work on technical thought in history.

“I work on very old things and very technical things,” Deringer says. “But I’ve found a wonderful welcoming at MIT from people in different fields who light up when they hear what I’m interested in.”



from MIT News https://ift.tt/irgCXVY