Thursday, March 31, 2022

“Diverse people lead to diverse ideas”

Smells of steak, vegetables, and onions filled the air, the sizzle complementing sounds of laughter and music. Students from a variety of Black student groups on campus came together to mingle and relax, enjoying the nice spring weather and community.

Surveying the scene with satisfaction was Devin Johnson, an aeronautical and astronautical engineering major and an executive board member of the Black Students’ Union. He had helped organize the event and was proud to have created a space where Black students were comfortable and having fun together.

Dubbed “Black People Outside,” the 2019 barbecue event would catalyze a series of outside community gatherings between Black student organizations on campus, some planned and others spontaneous. Johnson, now a senior, remains dedicated to serving his community.

“I care a lot about the community that I'm in and the people that I'm around. I'm very willing to give back in terms of supporting and encouraging those around me,” he says.

Johnson grew up in Brooklyn, New York, where he was constantly surrounded by his family, which is one of his biggest support systems. Both of his parents had jobs focused on caring for others, which made Johnson curious about the world and eager to make a difference in it.

The summer before coming to MIT, Johnson participated in the MIT Online Science, Technology, and Engineering Community (MOSTEC), a six-month online science and engineering program for high school seniors. He stayed at home during this time and took an astrophysics class, learning about the properties of light and color, the Doppler effect, and galaxy clusters, among other things. Excited and inspired, he decided to pursue aerospace at MIT to learn more about the mechanical and mathematical elements of space.

Upon arriving on campus, Johnson quickly focused on finding community. He found it in Chocolate City, a living group made up primarily of Black men. Johnson initially met the members while visiting MIT during his senior year of high school. He recalls feeling instantly at home: he had found a space he could branch out from to meet new people, but always come back to.

Within the organization, Johnson has taken on many leadership roles. In his sophomore year, he became the co-chair, overseeing all of the organization’s events and fundraisers. He currently serves as the resident peer mentor, giving incoming first-year students advice on how to navigate both MIT and Boston. Johnson is also a member of Phi Beta Sigma Fraternity, Inc., one of the “Divine 9” historically Black fraternities dedicated to giving back to the community. Its motto, “Culture for service, and service for humanity,” also inspires him in his work for Chocolate City and MIT’s Black Students’ Union.

Johnson’s participation in the BSU has offered him another way to build and support his community — and to be encouraged by others in return. He remembers a frightening encounter with the MIT Police, who had responded to a call that turned out to be a false allegation about violent activity. Johnson was immediately surrounded and supported by his fellow students, which he greatly appreciated.

“It was very scary. And the people who were there for me to come back from that and deal with that were Chocolate City and the members of the BSU,” he recalls.

As the BSU’s attorney general, Johnson was responsible for building and maintaining relationships between the BSU and other organizations on campus. This involved attending different clubs’ events and even collaborating on activities, such as the annual cookout and Black Homecoming, two new events that Johnson helped coordinate under the BSU.

Johnson has continued to explore his fascination with aerospace while at MIT. In the spring of his junior year, he worked on a research project with the Aerospace Plasma Group, where he learned about plasma-assisted combustion and designed equipment to measure how the efficiency of a combustion cycle could be increased to produce more power. While the experience was online because of the pandemic, Johnson was able to learn new skills in a variety of areas — not only manufacturing equipment, but also the science behind the combustion.

Despite working remotely, Johnson built physical models in his home to better understand the data and research he was doing virtually. He hopes to continue this type of hands-on learning and to use it as an asset in future endeavors.

“It all goes back to curiosity and wanting to satisfy the pursuit of knowledge,” he says.

This past summer, Johnson worked as a systems engineering intern at NASA’s Jet Propulsion Laboratory (JPL). While this experience was also held remotely, he found that the digital platform allowed him to interface with more people in more departments. He joined a team overseeing and balancing the different projects involved in sending a spacecraft to Europa, one of Jupiter’s moons. Johnson was involved in building the spacecraft and its various models, testing the durability of the design, and sending and operating it in space. He gained as much knowledge as he could, reaching out to people from different teams in different departments.

“It was really amazing that the curiosity that I have could be satisfied at any point by any person in that organization,” he says.

Johnson’s mentor at JPL was Kristen Virkler, a Black software engineer who engaged with him in many conversations about being a Black employee at an aerospace company. The two were even able to talk about working as a young Black individual in aerospace during a takeover of the company’s Instagram account. For Johnson, this experience was an exciting step toward combining his passions by building community in the aerospace field.

After graduating from MIT, Johnson plans to work for JPL full time, where he aims to help promote diversity, accessibility, and inclusion while also learning all he can about engineering.

“A lot of people don’t really know that aerospace engineering or space exploration is a field because of the fact that there are not a lot of people that look like them in the field. Diverse people lead to diverse ideas,” he says.



from MIT News https://ift.tt/MhB4sK8

Featured video: L. Rafael Reif on the power of education

MIT President L. Rafael Reif recently joined Raúl Rodríguez, associate vice president of internationalization at Tecnológico de Monterrey, for a wide-ranging fireside chat about the power of education and its impact in addressing global issues, even more so in a post-pandemic world.

“When I was younger, my parents used to always tell me and my brothers that we had to have an education because your education is the only thing you can bring with you, if you have to leave in a hurry,” recalled Reif, who was visiting with students and researchers on the Monterrey Tec campus at the invitation of José Antonio Fernández, chair of the board at the Tec and a member of the MIT Corporation.

Reif recounted his own experiences both academic and personal and shared his hope for a better future, emphasizing the role students will play in shaping it. 

“Many think that the purpose of university is to educate and advance knowledge — education and research — and that it should stop there… but students want to do something good. They want to make an impact and help,” said Reif. “So, I think that the purpose of university is not only to educate and advance knowledge, but to help students use that knowledge to solve problems — problems facing their cities, their states, their country, their world.”

Conecta, a news site of Monterrey Tec, has additional coverage and photos from the MIT president’s visit. 

Video by: Monterrey Tec | 52 min, 46 sec



from MIT News https://ift.tt/dMFKY9L

How to weigh your options

The Black-Scholes-Merton model, the world’s most famous method of pricing stock options, emerged from MIT in the early 1970s. But as Robert C. Merton, one of its co-creators, explained in an annual Institute lecture on Monday, the real value of the method does not simply lie in understanding the value of stocks. It lies in understanding the components of almost any decision you might make.

“Once you’ve learned about options, [you] look at the world differently,” said Merton, while accepting MIT’s 50th annual James R. Killian, Jr. Faculty Achievement Award. “Options are everywhere.”

Indeed, Merton listed a whole array of business issues that can be modeled along the lines of the famous stock-pricing model, including decisions about using fuel and energy, investing in drug discovery and R&D, adding manufacturing capacity, making bankruptcy decisions, and even financing movie sequels.

Time and again, we are faced with tradeoffs between flexibility, certainty, and cost. The question is whether people — investors, business leaders, or the rest of us, in our daily lives — are making those decisions systematically.

“You go into a bank, you get a bank deposit, you get deposit insurance,” said Merton, who is the School of Management Distinguished Professor of Finance at the MIT Sloan School of Management. “What is that? Well, if the bank doesn’t pay you, you can give your deposit to the government, and they’ll give you what they owe you. You get made whole. That’s a put option.”
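To make the analogy concrete, here is a tiny illustrative sketch of the put-option payoff Merton is describing; the numbers are invented for illustration and are not from the lecture.

```python
# Toy illustration of the deposit-insurance-as-put-option analogy.
# The figures below are invented for illustration only.

def put_payoff(strike: float, underlying_value: float) -> float:
    """Payoff of a put option at expiration: max(K - S, 0)."""
    return max(strike - underlying_value, 0.0)

deposit_owed = 1000.0   # what the bank owes the depositor (the strike K)
bank_can_pay = 600.0    # what a failing bank can actually return (the underlying S)

# Deposit insurance makes up the shortfall, just like exercising a put.
print(put_payoff(deposit_owed, bank_can_pay))  # 400.0
```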

Learning about options theory, in this vein, offers a broader tool for making many choices more thoughtfully.

“What [understanding] options gets you to do, if you think about it, is really all about decision-making under uncertainty,” said Merton — who, along with Myron Scholes, won the 1997 Nobel Prize in economic sciences for his work on options modeling.

Grasping the “production process” of finance

Merton delivered his lecture, “Emergence of Financial Engineering from Financial Science — An MIT Story,” before a large audience in the Institute’s Huntington Hall, Room 10-250.

Prior to Merton’s remarks, MIT faculty chair Lily Tsai introduced him, saying it was a “privilege to represent the MIT faculty in honoring you.” Tsai added: “We recognize your role in the founding of modern finance theory and your skill in developing and applying innovative techniques to resolve areas of tremendous public interest and impact.”

Merton, for his part, called the award a “great and singular honor,” adding, “It’s humbling when I consider the past honorees who I now join.”

The Killian Award was established in 1971; Killian served as MIT’s 10th president from 1948 to 1959 and as chair of the MIT Corporation from 1959 to 1971. It is the highest such honor granted to faculty at MIT.

Merton’s Killian Award citation, announced in 2021, lauded him as “one of the founding architects of modern finance theory,” whose research has “become an integral part of the global financial system.” The citation emphasized Merton’s “commitment to innovation through scientific research and to advancing pedagogy in financial economics, as well as to serving as a highly valued mentor to graduate students and junior colleagues.”

As its name suggests, the Black-Scholes-Merton theory was developed by economist Fischer Black, who joined MIT in the mid-1970s; Scholes, a professor at MIT in the early 1970s; and Merton, an MIT-trained economist who was also a faculty member at the time. Black died in 1995; Nobel Prizes are awarded only to living people.

Options are contracts used to buy or sell assets at set prices, and are often used to diversify or hedge a portfolio’s holdings. The Black-Scholes-Merton theory was quickly recognized as a breakthrough and remains widely deployed to determine valuations and risks regarding many financial instruments, including corporate debt and other liabilities, mortgages, and deposit, pension, and other financial insurance.

Much of Merton’s lecture on Monday linked the intellectual development of the Black-Scholes-Merton model in the early 1970s to the larger realization that the approach could be used to evaluate many kinds of transactions.

In Merton’s account, a key contribution of his own work on the model was thinking through the larger set of elements affecting options prices, such as risk-free access to capital. Due to the law of one price — the convergence of prices for similar assets — such factors would have to influence options pricing as well.

The famous model, Merton said, is actually describing “a production process. In fact, that’s how I would interpret this price. … It’s the production cost for … somebody who can do this trade, who can manage it very well. Just like you’re building cars, you have a production cost, and then you have a production process, the assembly line. That’s what you’ve got here. This formula gives you your production cost.” In this sense the model’s output will ultimately yield a price at which investors can afford to make the transaction.
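For readers who want to see what the formula computes, here is a minimal Python sketch of the standard Black-Scholes-Merton price for a European call option on a non-dividend-paying stock, using only the standard library. The input values are illustrative, not figures from the talk.

```python
# Minimal sketch of the Black-Scholes-Merton price for a European call
# on a non-dividend-paying stock. Input values are illustrative only.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call_price(spot: float, strike: float, rate: float,
                   vol: float, t_years: float) -> float:
    """Black-Scholes-Merton value of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t_years) / (vol * sqrt(t_years))
    d2 = d1 - vol * sqrt(t_years)
    return spot * norm_cdf(d1) - strike * exp(-rate * t_years) * norm_cdf(d2)

# Example: a one-year call struck at 100 on a 100-dollar stock,
# with a 2 percent risk-free rate and 20 percent volatility.
print(round(bsm_call_price(100.0, 100.0, 0.02, 0.20, 1.0), 2))
```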

Serendipity in the ’70s

Merton also observed that the model’s popularity involved some “serendipity.” At the beginning of the 1970s, there was no large-scale market for options. Then, in 1973, the Chicago Board Options Exchange opened. By 2021, Merton noted, there were about 10 billion options traded in global markets.

That created a situation in which the model was being deployed in markets that, from the mid-1970s on, were generating data that could be studied to refine the theoretical side of finance.

“This work, Black, Scholes, was all done at MIT, nowhere else,” Merton recalled. “This was an exciting situation for the students and for the faculty, because not only did the science create practice, but practice fed back into the science, and there were just more problems to work on than you just knew what to do with. … This was a really, really, exciting period.”

As Merton noted in his talk, many people quickly realized the generalizability of the Black-Scholes-Merton model as well — including academics, investors, and the Nobel committee.

“When Myron and I shared the [Nobel] prize — and Fischer would have too if he hadn’t passed away — the citation was not for options,” Merton said. “It was for a new method to value derivatives. It recognized that this was a generalized approach.”

Never Say Never Again

After receiving his BS in engineering mathematics from Columbia University and an MS in mathematics from Caltech, Merton earned his doctorate from MIT’s Department of Economics in 1970, where his principal adviser was the legendary economist Paul A. Samuelson.

In his remarks, Merton dedicated his lecture to Samuelson, whom he called a “mentor, co-researcher, and friend,” adding: “I would not be standing here if it weren’t for him.”

After earning his PhD, Merton joined the finance faculty at MIT Sloan, where he became a full professor and served until 1988 as the J.C. Penney Professor of Management. Merton taught at the Harvard Business School from 1988 through 2010, before rejoining the MIT faculty.

Merton is a member of the National Academy of Sciences, a fellow of the American Academy of Arts and Sciences, and a past president of the American Finance Association. He has received over two dozen honorary degrees from universities around the world.

Merton has devoted extensive time in recent years to working on retirement-finance issues, and he outlined some of his thoughts about those issues in Monday’s talk. Still, the bulk of his remarks focused on the foundations of options modeling and its wide applicability to the world.

Consider, Merton said, “Movie sequels. James Bond or something. Do you shoot it at the time when everybody’s there? Obviously that would be cheaper and more efficient. But then, if it’s a turkey and it doesn’t work, you’ve spent a lot of money shooting a lot of sequel that doesn’t have any value.”

In all, Merton added, “This goes to the core of making decisions under uncertainty. That’s what I want you to think [about]. And once you see it that way, the world is always going to look like options [in] everything.”



from MIT News https://ift.tt/HeS5Wj2

Wednesday, March 30, 2022

With new industry, a new era for cities

Kista Science City, just north of Stockholm, is Sweden’s version of Silicon Valley. Anchored by a few big firms and a university, it has become northern Europe’s main high-tech center, with housing mixed in so that people live and work in the same general area.

Around the globe, a similar pattern is visible in many urban locales. Near MIT, Kendall Square, once home to manufacturing, has become a biotechnology and information technology hub while growing as a residential destination. Hamburg, Germany, has redeveloped part of its famous port with new business, recreation, and housing. The industrial area of Jurong, in Singapore, now features commerce, residential construction, parks, and universities. Even Brooklyn’s once-declining Navy Yard has become a mixed-use waterfront area.

In place after place, cities have developed key neighborhoods by locating 21st-century firms near residential dwellings and street-level commerce. Instead of heavy industry pushing residents out of cities, advanced manufacturing and other smaller-scale forms of business are drawing people back in, and “re-shaping the relationships between cities, people, and industry,” as MIT Professor Eran Ben-Joseph puts it in a new book co-authored with Tali Hatuka.

The book, “New Industrial Urbanism: Designing Places for Production,” was published this week by Routledge, providing a broad overview of a major trend in city form, from two experts in the field. Ben-Joseph is the Class of 1922 Professor of Landscape Architecture and Urban Planning at MIT; Hatuka is a planner, architect, and professor of urban planning and head of the Laboratory of Contemporary Urban Design at Tel Aviv University.

“New Industrial Urbanism is a socio-spatial concept which calls for reassessing and re-shaping the relationships between cities, people, and industry,” the authors write in the book. “It suggests shaping cities with a renewed understanding that an urban location and setting give industry a competitive advantage,” stemming from access to a skilled labor force, universities, and the effects of clustering industry firms together. 

As such, they add, “This concept calls for a [new] paradigm shift in the way we understand and address production in cities and regions.”

An opportunity to regenerate

In the book, Ben-Joseph and Hatuka place “new industrial urbanism” in contrast to earlier phases of city development. From about 1770 to 1880, in their outline, cities saw the emergence of heavy industry and smoke-spewing factories without much regard to planning.

Then, from about 1880 to 1970, some planners and architects began creating idealized forms for industrial cities and sometimes developed entirely planned industrial communities in exurban areas. By about 1970, though, a third phase took hold: deindustrialization, as residents started leaving older industrial cities en masse, while industry globalized and set up factories in some previously nonindustrial countries. Between 1979 and 2010, as Ben-Joseph and Hatuka note, the U.S. lost 41 percent of its manufacturing jobs.

In response to all this, the authors see new industrial urbanism as a fourth phase, in which city form and industry interact. The current moment, as they write, is characterized by “hybridity.” Because some forms of current industry feature cleaner and more sustainable production, formerly undesirable industrial zones can now contain a more appealing mix of advanced industry, commerce, residential units, educational and other research institutions, and recreation.

As punishing as the loss of manufacturing has been in the U.S. and other places, the emergence of higher-tech production represents “an opportunity to regenerate urban areas and redefine the role of industry in the city,” Ben-Joseph and Hatuka write.

As the authors detail, city leaders take differing approaches to the issue of revitalization. Some places feature clustering, building strength in one particular industry. This is true of Kendall Square with biotechnology, or even Wageningen, the “Food Valley” of the Netherlands, where scores of agribusiness firms have located within a compact region.

Other cities must more thoroughly reinvent a deindustrialized spot, as in Brooklyn, Hamburg, and Jurong, keeping some historic structures intact. And some places, including Barcelona and Portland, Oregon, have taken a hybrid approach within their own city limits, encouraging new businesses at many scales, and many forms of land use.  

As “New Industrial Urbanism” emphasizes, there is not one royal road toward rebuilding cities. In Munich, the headquarters of BMW rise up in a four-cylinder tower from the 1970s, a reference to the company’s vehicles. Next to the tower is a massive BMW assembly plant, sprawled out over many acres. Over time, residential development has “gradually grown around the area,” as Ben-Joseph and Hatuka put it. Because the plant is geared toward assembly alone, not materials production, it is more environmentally feasible to see residential growth nearby. The outcome is viable industry juxtaposed with living areas.

“Our book is trying to show the various ways by which cities can address the changing contemporary relationships between city and industry,” Ben-Joseph says. “The cases that we describe and the concepts that we put forward represent the growing recognition of the role industry plays in the world’s total economic activity. They teach us that industrial development is always contextual and culturally dependent, and it is these variants that contribute to the evolution of different types and forms of industrial ecosystems.”

Wearing it well

As Ben-Joseph and Hatuka also emphasize, the pursuit of industry to help rebuild cities does not have to focus strictly on high-tech firms. In Los Angeles’ Garment District, as the book also details, changes in zoning threatened to disperse a thriving, century-old cluster of manufacturers.

Those manufacturers soon banded into a productive business improvement district; policymakers saw the wisdom of a hybrid approach to zoning that let manufacturers stay in place. (“Like farmland, industrial land is hard to reclaim once replaced by other functions,” Ben-Joseph and Hatuka write.) As a result, about 4,000 garment manufacturers remain in Los Angeles, providing crucial income to communities that have long depended on it.

“Just as we often do with housing policies, it is essential that we design strategic land-use mechanisms that protect and enhance existing industrial uses within our cities,” Ben-Joseph adds. “Cases like downtown Los Angeles show that cities are beginning to recognize the value of centrally located industrial land and the need to address pressures to convert these areas to upscale housing and displace existing manufacturers.”

As a book, “New Industrial Urbanism” has been almost a decade in the making. Along the way, the authors hosted a symposium on the topic at MIT in 2014 and helped curate an exhibit on the subject at MIT’s Wolk Gallery the same year. Through support from MIT and Tel Aviv University, the book is also available as an open-access publication.

Experts in the field have praised “New Industrial Urbanism.” Karen Chapple, a professor and director of the University of Toronto’s School of Cities, has noted that while some people have “embraced the notion of advanced manufacturing locating in cities, the literature has lacked a compelling and detailed vision of what a new industrial urbanism would actually encompass. This comprehensive volume fills that gap, with a powerful visual analysis thoroughly grounded in economic theory and historical context.”

For his part, Ben-Joseph is pleased by the trends toward a new industrial urbanism in many parts of the globe.

“We have seen a lot of progress in most countries,” Ben-Joseph says.

Still, he observes, much more is possible, in the U.S. and beyond. As the authors write, “re-evaluating manufacturing should be a primary goal of planners, urban designers, and architects. Awareness of this goal is critical to the future development of cities worldwide.”



from MIT News https://ift.tt/yxwZYKu

Solving the challenges of robotic pizza-making

Imagine a pizza maker working with a ball of dough. She might use a spatula to lift the dough onto a cutting board then use a rolling pin to flatten it into a circle. Easy, right? Not if this pizza maker is a robot.

For a robot, working with a deformable object like dough is tricky because the shape of dough can change in many ways, which are difficult to represent with an equation. Plus, creating a new shape out of that dough requires multiple steps and the use of different tools. It is especially difficult for a robot to learn a manipulation task with a long sequence of steps — where there are many possible choices — since learning often occurs through trial and error.

Researchers at MIT, Carnegie Mellon University, and the University of California at San Diego have come up with a better way. They created a framework for a robotic manipulation system that uses a two-stage learning process, which could enable a robot to perform complex dough-manipulation tasks over a long timeframe. A “teacher” algorithm solves each step the robot must take to complete the task. Then, it trains a “student” machine-learning model that learns abstract ideas about when and how to execute each skill it needs during the task, like using a rolling pin. With this knowledge, the system reasons about how to execute the skills to complete the entire task.

The researchers show that this method, which they call DiffSkill, can perform complex manipulation tasks in simulations, like cutting and spreading dough, or gathering pieces of dough from around a cutting board, while outperforming other machine-learning methods.

Beyond pizza-making, this method could be applied in other settings where a robot needs to manipulate deformable objects, such as a caregiving robot that feeds, bathes, or dresses someone who is elderly or has motor impairments.

“This method is closer to how we as humans plan our actions. When a human does a long-horizon task, we are not writing down all the details. We have a higher-level planner that roughly tells us what the stages are and some of the intermediate goals we need to achieve along the way, and then we execute them,” says Yunzhu Li, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL), and author of a paper presenting DiffSkill.

Li’s co-authors include lead author Xingyu Lin, a graduate student at Carnegie Mellon University (CMU); Zhiao Huang, a graduate student at the University of California at San Diego; Joshua B. Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences at MIT and a member of CSAIL; David Held, an assistant professor at CMU; and senior author Chuang Gan, a research scientist at the MIT-IBM Watson AI Lab. The research will be presented at the International Conference on Learning Representations.

Student and teacher

 The “teacher” in the DiffSkill framework is a trajectory optimization algorithm that can solve short-horizon tasks, where an object’s initial state and target location are close together. The trajectory optimizer works in a simulator that models the physics of the real world (known as a differentiable physics simulator, which puts the “Diff” in “DiffSkill”). The “teacher” algorithm uses the information in the simulator to learn how the dough must move at each stage, one at a time, and then outputs those trajectories.

Then the “student” neural network learns to imitate the actions of the teacher. As inputs, it uses two camera images, one showing the dough in its current state and another showing the dough at the end of the task. The neural network generates a high-level plan to determine how to link different skills to reach the goal. It then generates specific, short-horizon trajectories for each skill and sends commands directly to the tools.
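As a rough illustration of this teacher-student structure, here is a hypothetical, heavily simplified Python sketch. It is not the authors’ code: the class names, the placeholder “trajectory optimizer,” and the nearest-demo “imitation” step are stand-ins meant only to show how per-skill teacher demonstrations can feed a student that chains skills toward a goal.

```python
# Hypothetical, heavily simplified teacher-student pipeline in the spirit of
# DiffSkill. All names and the placeholder logic are assumptions, not the
# authors' implementation.
import random
from dataclasses import dataclass, field
from typing import List

@dataclass
class Demo:
    start_obs: tuple   # observation at the start of one skill segment
    goal_obs: tuple    # observation at the end of that segment
    actions: list      # tool actions found by the "teacher" optimizer

def teacher_solve(skill: str, start_obs: tuple, goal_obs: tuple) -> Demo:
    """Stand-in for the trajectory optimizer that runs in a differentiable
    physics simulator and solves one short-horizon skill."""
    actions = [f"{skill}_action_{t}" for t in range(5)]  # placeholder actions
    return Demo(start_obs, goal_obs, actions)

@dataclass
class StudentPolicy:
    """Stand-in for the neural policy that imitates the teacher for one skill."""
    skill: str
    memory: List[Demo] = field(default_factory=list)

    def fit(self, demos: List[Demo]) -> None:
        self.memory.extend(demos)          # imitation learning placeholder

    def act(self, obs: tuple, goal: tuple) -> list:
        demo = random.choice(self.memory)  # nearest-demo lookup placeholder
        return demo.actions

def plan_and_execute(skills: List[str], start: tuple, goal: tuple, policies) -> None:
    """High-level plan: chain the skills, executing each with its own policy."""
    obs = start
    for skill in skills:                   # e.g. lift with spatula, then roll flat
        actions = policies[skill].act(obs, goal)
        print(f"execute {skill}: {actions}")
        obs = goal                         # pretend the skill reached its subgoal

if __name__ == "__main__":
    skills = ["lift_with_spatula", "flatten_with_rolling_pin"]
    policies = {s: StudentPolicy(s) for s in skills}
    for s in skills:
        policies[s].fit([teacher_solve(s, ("dough", "counter"), ("dough", "board"))])
    plan_and_execute(skills, ("dough", "counter"), ("flat dough", "board"), policies)
```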

The researchers used this technique to experiment with three different simulated dough-manipulation tasks. In one task, the robot uses a spatula to lift dough onto a cutting board then uses a rolling pin to flatten it. In another, the robot uses a gripper to gather dough from all over the counter, places it on a spatula, and transfers it to a cutting board. In the third task, the robot cuts a pile of dough in half using a knife and then uses a gripper to transport each piece to different locations.


A cut above the rest

DiffSkill was able to outperform popular techniques that rely on reinforcement learning, where a robot learns a task through trial and error. In fact, DiffSkill was the only method that was able to successfully complete all three dough manipulation tasks. Interestingly, the researchers found that the “student” neural network was even able to outperform the “teacher” algorithm, Lin says.

“Our framework provides a novel way for robots to acquire new skills. These skills can then be chained to solve more complex tasks which are beyond the capability of previous robot systems,” says Lin.

Because their method focuses on controlling the tools (spatula, knife, rolling pin, etc.), it could be applied to different robots, but only if they use the specific tools the researchers defined. In the future, they plan to integrate the shape of a tool into the reasoning of the “student” network so it could be applied to other equipment.

The researchers intend to improve the performance of DiffSkill by using 3D data as inputs, instead of images that can be difficult to transfer from simulation to the real world. They also want to make the neural network planning process more efficient and collect more diverse training data to enhance DiffSkill’s ability to generalize to new situations. In the long run, they hope to apply DiffSkill to more diverse tasks, including cloth manipulation.

This work is supported, in part, by the National Science Foundation, LG Electronics, the MIT-IBM Watson AI Lab, the Office of Naval Research, and the Defense Advanced Research Projects Agency Machine Common Sense program.



from MIT News https://ift.tt/Vdnwi3R

“Yulia’s Dream” to support young, at-risk Ukrainian students of mathematics

Millions have fled the Russian invasion of Ukraine, and for those who are staying, schools are closed. While refugee-supporting programs focus on immediate needs, the Department of Mathematics’ MIT PRIMES program plans to use its resources to support the mathematics education of Ukrainian high school students.

In honor of Yulia Zdanovska, a 21-year-old Ukrainian mathematician killed by a Russian-fired missile in her home city of Kharkiv, PRIMES has launched “Yulia’s Dream,” a free math enrichment and research program for Ukrainian high school students and refugees in grades 9 to 11.

“The refugees are mostly women and children, including schoolchildren, whose education has been severely disrupted, as they must adapt to a new language and an unfamiliar environment,” says Slava Gerovitch, PRIMES’ director and a mathematics lecturer.

In just four days after the application site was launched, more than 40 Ukrainian students located in Ukraine and in other countries sent inquiries about the program. The deadline to apply is April 12. Part of the application includes providing solutions to the 2022 entrance problem set.

“In their inquiries, some students thank MIT for organizing this program,” says Gerovitch. “One potential applicant writes, as translated from Russian, ‘I saw MIT’s program Yulia’s Dream in support of Ukrainians, and first of all I wish to express my gratitude to those who show a caring attitude at such difficult times.’”

Numerous Department of Mathematics graduate students and MIT math majors have already expressed an interest in working as mentors for the program. The goal is to enroll up to 30 students in the program who will meet online in small groups to study advanced math topics or will work on math research projects under the guidance of academic mentors, with instruction available in Ukrainian, English, and Russian. Weekly meetings will begin by the end of April, and will continue through the fall, with a possible extension through spring 2023. Starting this as a pilot program seed-funded by an anonymous donor, the department hopes to raise funds to make it a regular annual program.

Yulia’s Dream is dedicated to the memory of Zdanovska, a graduate of the National University of Kyiv, a silver medalist at the 2017 European Girls' Mathematical Olympiad, and an instructor for the “Teach for Ukraine” program. According to a report by the International Mathematical Union Committee for Women in Mathematics, she remained in Ukraine when the war broke out, and was working as a volunteer in a residential area of Kharkiv when she died in a fire caused by a Russian missile.

“I saw reports of her tragic death, and she immediately reminded me of our typical PRIMES students — passionate about math, successful in competitions, choosing a math major in college, and willing to teach others,” says Gerovitch. “I felt an emotional connection, and I think others feel it too.”

He and mathematics professor Pavel Etingof, the chief research advisor of PRIMES, were moved to honor her with their PRIMES program.

“I was born in Moscow, but both of my parents come from the Vinnytsa region in West-Central Ukraine,” says Gerovitch. “I have often visited Ukraine, and I love that wonderful country. The horrible, totally unprovoked aggression of the Russian government against the people of Ukraine and the terrible destruction and loss of human life upset me deeply. Pavel felt the same way, and he was born in Ukraine, so his connection is even deeper. We thought of ways we could use our skills and resources to help the most vulnerable part of the population who are suffering from this war — children. Since MIT has a significant number of Ukrainian- and Russian-speaking students, we thought we could use their knowledge to help talented Ukrainian students whose education was violently interrupted by the war to pursue their dreams — something that Yulia Zdanovska was deprived of.”

Etingof is a graduate of the top specialized math school #145 in Kyiv, Ukraine; he also studied at Kyiv State University as a math major. He has extensive contacts among the Ukrainian mathematical community and speaks Ukrainian. Etingof also is active with Opportunities in North America for Ukrainian Mathematicians, a group of North American mathematicians helping Ukrainian mathematicians at all stages in their careers.

In a recent interview for WBUR, Etingof talked about the longer-range goal of Yulia’s Dream. “After the war, Ukraine needs to be rebuilt, and we want it to become an advanced European country. This requires a lot of young people who know math, who are good with science, especially researchers, and we want to help build that pipeline.”

Gerovitch PhD ’99 and Etingof, who both received degrees from the Oil and Gas Institute in Moscow, launched MIT PRIMES in 2010 as a free outreach program for high school students, with a focus on increasing the representation of women and underserved populations in mathematics research.

The working groups at Yulia's Dream will operate similarly to MIT’s PRIMES-USA program, the online-only section of PRIMES. PRIMES also runs CrowdMath, a joint initiative with the Art of Problem Solving, a massive online collaborative year-long research project open to all high schoolers around the world, but only conducted in English. In contrast, Yulia’s Dream will offer online-based math instruction in small groups, and in the native languages of Ukrainian students.

Yulia’s Dream is coordinated by Dmytro Matvieievskyi, a math graduate student at Northeastern University who graduated from School #27 of Kharkiv, Ukraine, and won a bronze medal at the 2012 International Mathematical Olympiad (IMO) as part of the Ukrainian team. He speaks Ukrainian and Russian, and has an extensive network of contacts in the mathematical competition community of Ukraine.

To help with promotion and with recruiting top students for the program, PRIMES has support from many of Ukraine’s top math teachers and from organizers of math competitions and camps in Ukraine, including leaders of the Ukraine IMO team.

“I’m glad we can contribute a little bit to the worldwide effort to help Ukrainian people,” says Gerovitch. “I hope that the students we will teach will help rebuild Ukraine and make it a thriving, free, friendly nation.”



from MIT News https://ift.tt/hvxsyIB

Tuesday, March 29, 2022

Fighting discrimination in mortgage lending

Although the U.S. Equal Credit Opportunity Act prohibits discrimination in mortgage lending, biases still impact many borrowers. One 2021 Journal of Financial Economics study found that borrowers from minority groups were charged interest rates that were nearly 8 percent higher and were rejected for loans 14 percent more often than those from privileged groups.

When these biases bleed into machine-learning models that lenders use to streamline decision-making, they can have far-reaching consequences for housing fairness and even contribute to widening the racial wealth gap.

If a model is trained on an unfair dataset, such as one in which a higher proportion of Black borrowers were denied loans versus white borrowers with the same income, credit score, etc., those biases will affect the model’s predictions when it is applied to real situations. To stem the spread of mortgage lending discrimination, MIT researchers created a process that removes bias in data that are used to train these machine-learning models.

While other methods try to tackle this bias, the researchers’ technique is new in the mortgage lending domain because it can remove bias from a dataset that has multiple sensitive attributes, such as race and ethnicity, as well as several “sensitive” options for each attribute, such as Black or white, and Hispanic or Latino or non-Hispanic or Latino. Sensitive attributes and options are features that distinguish a privileged group from an underprivileged group.

The researchers used their technique, which they call DualFair, to train a machine-learning classifier that makes fair predictions of whether borrowers will receive a mortgage loan. When they applied it to mortgage lending data from several U.S. states, their method significantly reduced the discrimination in the predictions while maintaining high accuracy.

“As Sikh Americans, we deal with bias on a frequent basis and we think it is unacceptable to see that transform to algorithms in real-world applications. For things like mortgage lending and financial systems, it is very important that bias not infiltrate these systems because it can emphasize the gaps that are already in place against certain groups,” says Jashandeep Singh, a senior at Floyd Buchanan High School and co-lead author of the paper with his twin brother, Arashdeep. The Singh brothers were recently accepted into MIT.

Joining Arashdeep and Jashandeep Singh on the paper are MIT sophomore Ariba Khan and senior author Amar Gupta, a researcher in the Computer Science and Artificial Intelligence Laboratory at MIT, who studies the use of evolving technology to address inequity and other societal issues. The research was recently published online and will appear in a special issue of Machine Learning and Knowledge Extraction.

Double take

DualFair tackles two types of bias in a mortgage lending dataset — label bias and selection bias. Label bias occurs when the balance of favorable or unfavorable outcomes for a particular group is unfair. (Black applicants are denied loans more frequently than they should be.) Selection bias is created when data are not representative of the larger population. (The dataset only includes individuals from one neighborhood where incomes are historically low.)

The DualFair process eliminates label bias by subdividing a dataset into the largest number of subgroups based on combinations of sensitive attributes and options, such as white men who are not Hispanic or Latino, Black women who are Hispanic or Latino, etc.

By breaking down the dataset into as many subgroups as possible, DualFair can simultaneously address discrimination based on multiple attributes.

“Researchers have mostly tried to classify biased cases as binary so far. There are multiple parameters to bias, and these multiple parameters have their own impact in different cases. They are not equally weighed. Our method is able to calibrate it much better,” says Gupta.

After the subgroups have been generated, DualFair evens out the number of borrowers in each subgroup by duplicating individuals from minority groups and deleting individuals from the majority group. DualFair then balances the proportion of loan acceptances and rejections in each subgroup so they match the median in the original dataset before recombining the subgroups.
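A hypothetical sketch of what this subdividing-and-rebalancing step could look like appears below. The field names, the use of simple duplication and random deletion, and the single target acceptance rate are assumptions for illustration, not the authors’ implementation.

```python
# Illustrative sketch of label-bias removal via subgroup rebalancing.
# Row format, the balancing rule, and parameter names are assumptions.
import random
from collections import defaultdict

def rebalance(rows, sensitive_keys, label_key="approved", target_rate=0.5, seed=0):
    """Even out subgroup sizes, then match each subgroup's acceptance rate."""
    rng = random.Random(seed)

    # 1. Subdivide the dataset by every combination of sensitive attributes.
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[k] for k in sensitive_keys)].append(row)

    size = max(len(g) for g in groups.values())
    balanced = []
    for members in groups.values():
        # 2. Duplicate members of small subgroups, delete members of large ones.
        if len(members) < size:
            resized = members + [rng.choice(members) for _ in range(size - len(members))]
        else:
            resized = rng.sample(members, size)

        # 3. Adjust accepted/rejected counts toward the target acceptance rate
        #    (the paper matches the median rate of the original dataset).
        accepted = [r for r in resized if r[label_key] == 1]
        rejected = [r for r in resized if r[label_key] == 0]
        n_accept = int(round(target_rate * size))
        new_accepted = [rng.choice(accepted) for _ in range(n_accept)] if accepted else []
        new_rejected = [rng.choice(rejected) for _ in range(size - n_accept)] if rejected else []
        balanced.extend(new_accepted + new_rejected)
    return balanced
```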

DualFair then eliminates selection bias by iterating on each data point to see if discrimination is present. For instance, if an individual is a non-Hispanic or Latino Black woman who was rejected for a loan, the system will adjust her race, ethnicity, and gender one at a time to see if the outcome changes. If this borrower is granted a loan when her race is changed to white, DualFair considers that data point biased and removes it from the dataset.
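Similarly, here is a hypothetical sketch of the counterfactual check described above: flip one sensitive attribute at a time and drop any data point whose predicted outcome changes. The model object and the attribute names are stand-ins, not the paper’s actual interface.

```python
# Illustrative sketch of selection-bias removal via one-attribute counterfactuals.
# The model (anything with a predict(row) method) and the attribute names are
# assumptions for illustration.

def drop_biased_points(rows, model, alternatives):
    """alternatives: dict mapping each sensitive attribute to its possible values."""
    kept = []
    for row in rows:
        baseline = model.predict(row)
        biased = False
        for attr, values in alternatives.items():
            for value in values:
                if value == row[attr]:
                    continue                          # change one attribute at a time
                counterfactual = dict(row, **{attr: value})
                if model.predict(counterfactual) != baseline:
                    biased = True                     # outcome flipped: likely bias
                    break
            if biased:
                break
        if not biased:
            kept.append(row)                          # keep only unbiased points
    return kept

# Example call (hypothetical):
# clean = drop_biased_points(rows, trained_classifier,
#                            {"race": ["Black", "White"], "sex": ["Female", "Male"]})
```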

Fairness vs. accuracy

To test DualFair, the researchers used the publicly available Home Mortgage Disclosure Act dataset, which spans 88 percent of all mortgage loans in the U.S. in 2019, and includes 21 features, including race, sex, and ethnicity. They used DualFair to “de-bias” the entire dataset and smaller datasets for six states, and then trained a machine-learning model to predict loan acceptances and rejections.

After applying DualFair, the fairness of predictions increased while the accuracy level remained high across all states. They used an existing fairness metric known as average odds difference, but it can only measure fairness in one sensitive attribute at a time.
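For context, average odds difference is conventionally defined (for example, in common fairness toolkits) as the mean of the gaps in false positive rate and true positive rate between the unprivileged and privileged groups; whether the paper uses exactly this formulation is an assumption. A minimal sketch:

```python
# Conventional definition of average odds difference: the mean of the
# false-positive-rate gap and the true-positive-rate gap between the
# unprivileged and privileged groups. Whether the paper uses exactly this
# formulation is an assumption.

def average_odds_difference(y_true, y_pred, unprivileged):
    """unprivileged[i] is True for the unprivileged group, False otherwise."""
    def rates(flag):
        idx = [i for i, u in enumerate(unprivileged) if u == flag]
        pos = [i for i in idx if y_true[i] == 1]
        neg = [i for i in idx if y_true[i] == 0]
        tpr = sum(y_pred[i] for i in pos) / len(pos) if pos else 0.0
        fpr = sum(y_pred[i] for i in neg) / len(neg) if neg else 0.0
        return tpr, fpr
    tpr_u, fpr_u = rates(True)
    tpr_p, fpr_p = rates(False)
    return 0.5 * ((fpr_u - fpr_p) + (tpr_u - tpr_p))
```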

So, they created their own fairness metric, called alternate world index, that considers bias from multiple sensitive attributes and options as a whole. Using this metric, they found that DualFair increased fairness in predictions for four of the six states while maintaining high accuracy.

“It is the common belief that if you want to be accurate, you have to give up on fairness, or if you want to be fair, you have to give up on accuracy. We show that we can make strides toward lessening that gap,” Khan says.

The researchers now want to apply their method to de-bias different types of datasets, such as those that capture health care outcomes, car insurance rates, or job applications. They also plan to address limitations of DualFair, including its instability when there are small amounts of data with multiple sensitive attributes and options.

While this is only a first step, the researchers are hopeful their work can someday have an impact on mitigating bias in lending and beyond.

“Technology, very bluntly, works only for a certain group of people. In the mortgage loan domain in particular, African American women have been historically discriminated against. We feel passionate about making sure that systemic racism does not extend to algorithmic models. There is no point in making an algorithm that can automate a process if it doesn’t work for everyone equally,” says Khan.

This research is supported, in part, by the FinTech@CSAIL initiative.



from MIT News https://ift.tt/zXaHqQn

New program bolsters innovation in next-generation artificial intelligence hardware

The MIT AI Hardware Program is a new academia-industry collaboration aimed at defining and developing translational technologies in hardware and software for the AI and quantum age. A collaboration between the MIT School of Engineering and the MIT Schwarzman College of Computing, involving the Microsystems Technology Laboratories and programs and units in the college, the cross-disciplinary effort aims to innovate technologies that will deliver enhanced energy efficiency systems for cloud and edge computing.

“A sharp focus on AI hardware manufacturing, research, and design is critical to meet the demands of the world’s evolving devices, architectures, and systems,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. “Knowledge-sharing between industry and academia is imperative to the future of high-performance computing.”

Based on use-inspired research involving materials, devices, circuits, algorithms, and software, the MIT AI Hardware Program convenes researchers from MIT and industry to facilitate the transition of fundamental knowledge to real-world technological solutions. The program spans materials and devices, as well as architecture and algorithms enabling energy-efficient and sustainable high-performance computing.

“As AI systems become more sophisticated, new solutions are sorely needed to enable more advanced applications and deliver greater performance,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “Our aim is to devise real-world technological solutions and lead the development of technologies for AI in hardware and software.”

The inaugural members of the program are companies from a wide range of industries including chip-making, semiconductor manufacturing equipment, AI and computing services, and information systems R&D organizations. The companies represent a diverse ecosystem, both nationally and internationally, and will work with MIT faculty and students to help shape a vibrant future for our planet through cutting-edge AI hardware research.

The five inaugural members of the MIT AI Hardware Program are:  

  • Amazon, a global technology company whose hardware inventions include the Kindle, Amazon Echo, Fire TV, and Astro;
     
  • Analog Devices, a global leader in the design and manufacturing of analog, mixed signal, and DSP integrated circuits;
     
  • ASML, an innovation leader in the semiconductor industry, providing chipmakers with hardware, software, and services to mass produce patterns on silicon through lithography;
     
  • NTT Research, a subsidiary of NTT that conducts fundamental research to upgrade reality in game-changing ways that improve lives and brighten our global future; and
     
  • TSMC, the world’s leading dedicated semiconductor foundry.

The MIT AI Hardware Program will create a roadmap of transformative AI hardware technologies. Leveraging MIT.nano, the most advanced university nanofabrication facility anywhere, the program will foster a unique environment for AI hardware research.  

“We are all in awe at the seemingly superhuman capabilities of today’s AI systems. But this comes at a rapidly increasing and unsustainable energy cost,” says Jesús del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science. “Continued progress in AI will require new and vastly more energy-efficient systems. This, in turn, will demand innovations across the entire abstraction stack, from materials and devices to systems and software. The program is in a unique position to contribute to this quest.”

The program will prioritize the following topics:

  • analog neural networks;
  • new roadmap CMOS designs;
  • heterogeneous integration for AI systems;
  • monolithic-3D AI systems;
  • analog nonvolatile memory devices;
  • software-hardware co-design;
  • intelligence at the edge;
  • intelligent sensors;
  • energy-efficient AI;
  • intelligent internet of things (IIoT);
  • neuromorphic computing;
  • AI edge security;
  • quantum AI;
  • wireless technologies;
  • hybrid-cloud computing; and
  • high-performance computation.

“We live in an era where paradigm-shifting discoveries in hardware, systems communications, and computing have become mandatory to find sustainable solutions — solutions that we are proud to give to the world and generations to come,” says Aude Oliva, senior research scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and director of strategic industry engagement in the MIT Schwarzman College of Computing.

The new program is co-led by Jesús del Alamo and Aude Oliva, and Anantha Chandrakasan serves as chair.



from MIT News https://ift.tt/U9LaRyG

Monday, March 28, 2022

MIT graduate engineering, business, science programs ranked highly by U.S. News for 2023

MIT’s graduate program in engineering has again topped the list of U.S. News and World Report’s annual rankings, released today. The program has held the No. 1 spot since 1990, when the magazine first published these rankings.

The MIT Sloan School of Management also placed highly, landing in the No. 5 spot for the best graduate business programs along with Harvard University.

Among individual engineering disciplines, MIT placed first in six areas: aerospace/aeronautical/astronautical engineering, chemical engineering, computer engineering, electrical/electronic/communications engineering (tied with Stanford University and the University of California at Berkeley), materials engineering, and mechanical engineering. It placed second in nuclear engineering and in biomedical engineering, a spot shared with Emory University-Georgia Tech.

In the rankings of individual MBA specialties, MIT placed first in four areas: business analytics, information systems, production/operations, and project management. It placed second in supply chain/logistics.

U.S. News does not issue annual rankings for all doctoral programs but revisits many every few years. For 2023, MIT’s economics program earned a No. 1 ranking overall, shared with Harvard and Stanford. The magazine also revisited its science rankings for the first time since 2019. MIT’s programs in chemistry, computer science, and mathematics (with Princeton University) earned No. 1 spots. Its programs in Earth sciences (with UC Berkeley and Stanford) and physics (with Caltech, Harvard, Princeton, and UC Berkeley) earned No. 2 spots. The Institute’s biology program ranked third (along with UC Berkeley and Caltech).

The magazine bases its rankings of graduate schools of engineering and business on two types of data: reputational surveys of deans and other academic officials, and statistical indicators that measure the quality of a school’s faculty, research, and students. The magazine’s less-frequent rankings of programs in the sciences, social sciences, and humanities are based solely on reputational surveys.



from MIT News https://ift.tt/8n6w4Bf

Reversing hearing loss with regenerative therapy

Most of us know someone affected by hearing loss, but we may not fully appreciate the hardships that lack of hearing can bring. Hearing loss can lead to isolation, frustration, and a debilitating ringing in the ears known as tinnitus. It is also closely correlated with dementia.

The biotechnology company Frequency Therapeutics is seeking to reverse hearing loss — not with hearing aids or implants, but with a new kind of regenerative therapy. The company uses small molecules to program progenitor cells, a descendant of stem cells in the inner ear, to create the tiny hair cells that allow us to hear.

Hair cells die off when exposed to loud noises or drugs including certain chemotherapies and antibiotics. Frequency’s drug candidate is designed to be injected into the ear to regenerate these cells within the cochlea. In clinical trials, the company has already improved people’s hearing as measured by tests of speech perception — the ability to understand speech and recognize words.

“Speech perception is the No. 1 goal for improving hearing and the No. 1 need we hear from patients,” says Frequency co-founder and Chief Scientific Officer Chris Loose PhD ’07.

In Frequency’s first clinical study, the company saw statistically significant improvements in speech perception in some participants after a single injection, with some responses lasting nearly two years.

The company has dosed more than 200 patients to date and has seen clinically meaningful improvements in speech perception in three separate clinical studies, with some improvements lasting nearly two years after a single injection. Another study failed to show improvements in hearing compared to the placebo group, but the company attributes that result to flaws in the design of the trial.

Now Frequency is recruiting for a 124-person trial from which preliminary results should be available early next year.

The company’s founders, including Loose, MIT Institute Professor Robert Langer, CEO David Lucchino MBA ’06, Senior Vice President Will McLean PhD ’14, and Harvard-MIT Health Sciences and Technology affiliate faculty member Jeff Karp, are already gratified to have been able to help people improve their hearing through the clinical trials. They also believe they’re making important contributions toward solving a problem that impacts more than 40 million people in the U.S. and hundreds of millions more around the world.

“Hearing is such an important sense; it connects people to their community and cultivates a sense of identity,” says Karp, who is also a professor of anesthesia at Brigham and Women’s Hospital. “I think the potential to restore hearing will have enormous impact on society.”

From the lab to patients

In 2005, Lucchino was an MBA student in the MIT Sloan School of Management and Loose was a PhD candidate in chemical engineering at MIT. Langer introduced the two aspiring entrepreneurs, and they started working on what would become Semprus BioSciences, a medical device company that won the MIT $100K Entrepreneurship Competition and later sold at a deal valued at up to $80 million.

“MIT has such a wonderful environment of people interested in new ventures that come from different backgrounds, so we’re able to assemble teams of people with diverse skills quickly,” Loose says.

Eight years after playing matchmaker for Lucchino and Loose, Langer began working with Karp to study the lining of the human gut, which regenerates itself almost every day.

With MIT postdoc Xiaolei Yin, who is now a scientific advisor to Frequency, the researchers discovered that the same molecules that control the gut’s stem cells are also used by a close descendant of stem cells called progenitor cells. Like stem cells, progenitor cells can turn into more specialized cells in the body.

“Every time we make an advance, we take a step back and ask how this could be even bigger,” Karp says. “It’s easy to be incremental, but how do we take what we learned and make a massive difference?”

Progenitor cells reside in the inner ear and generate hair cells when humans are in utero, but they become dormant before birth and never again turn into more specialized cells such as the hair cells of the cochlea. Humans are born with about 15,000 hair cells in each cochlea. Such cells die over time and never regenerate.

In 2012, the research team was able to use small molecules to turn progenitor cells into thousands of hair cells in the lab. Karp says no one had ever produced such a large number of hair cells before. He still remembers looking at the results while visiting his family, including his father, who wears a hearing aid.

“I looked at them and said, ‘I think we have a breakthrough,’” Karp says. “That’s the first and only time I’ve used that phrase.”

The advance was enough for Langer to play matchmaker again and bring Loose and Lucchino into the fold to start Frequency Therapeutics.

The founders believe their approach — injecting small molecules into the inner ear to turn progenitor cells into more specialized cells — offers advantages over gene therapies, which may rely on extracting a patient’s cells, programming them in a lab, and then delivering them to the right area.

“Tissues throughout your body contain progenitor cells, so we see a huge range of applications,” Loose says. “We believe this is the future of regenerative medicine.”

Advancing regenerative medicine

Frequency’s founders have been thrilled to watch their lab work mature into an impactful drug candidate in clinical trials.

“Some of these people [in the trials] couldn’t hear for 30 years, and for the first time they said they could go into a crowded restaurant and hear what their children were saying,” Langer says. “It’s so meaningful to them. Obviously more needs to be done, but just the fact that you can help a small group of people is really impressive to me.”

Karp believes Frequency’s work will advance researchers’ ability to manipulate progenitor cells and lead to new treatments down the line.

“I wouldn't be surprised if in 10 or 15 years, because of the resources being put into this space and the incredible science being done, we can get to the point where [reversing hearing loss] would be similar to Lasik surgery, where you're in and out in an hour or two and you can completely restore your vision,” Karp says. “I think we'll see the same thing for hearing loss.”

The company is also developing a drug for multiple sclerosis (MS), a disease in which the immune system attacks the myelin in the brain and central nervous system. Progenitor cells already turn into the myelin-producing cells in the brain, but not fast enough to keep up with losses sustained by MS patients. Most MS therapies focus on suppressing the immune system rather than generating myelin.

Early versions of that drug candidate have shown dramatic increases in myelin in mouse studies. The company expects to file an investigational new drug application for MS with the FDA next year.

“When we were conceiving of this project, we meant for it to be a platform that could be broadly applicable to multiple tissues. Now we’re moving into the remyelination work, and to me it’s the tip of the iceberg in terms of what can be done by taking small molecules and controlling local biology,” Karp says.

For now, Karp is already thrilled with Frequency’s progress, which hit home the last time he was in Frequency’s office and met a speaker who shared her experience with hearing loss.

“You always hope your work will have an impact, but it can take a long time for that to happen,” Karp says. “It’s been an incredible experience working with the team to bring this forward. There are already people in the trials whose hearing has been dramatically improved and their lives have been changed. That impacts interactions with family and friends. It’s wonderful to be a part of.”



from MIT News https://ift.tt/TRcXLm7

Q&A: Stuart Schmill on MIT’s decision to reinstate the SAT/ACT requirement

MIT Admissions announced today that it will reinstate its requirement that applicants submit scores from an SAT or ACT exam.

The Institute suspended its longstanding requirement in 2020 and 2021 because the Covid-19 pandemic prevented most high schoolers from safely taking the exams. However, with the advent of safe, effective pediatric vaccination, the expansion of the free in-school SAT (where most students now take the test), and the introduction of the digital SAT, most prospective students can take the exams again.

Research conducted by the admissions office shows that the standardized tests are an important factor in assessing the academic preparation of applicants from all backgrounds, according to Dean of Admissions and Student Financial Services Stuart Schmill. He says the standardized exams are most helpful for assisting the admissions office in identifying socioeconomically disadvantaged students who are well-prepared for MIT’s challenging education, but who don’t have the opportunity to take advanced coursework, participate in expensive enrichment programs, or otherwise enhance their college applications.

MIT News spoke with Schmill about how his team arrived at its decision, which he also wrote about today on the MIT Admissions blog.

Q: Why is MIT reinstating its SAT/ACT requirement?

A: First, let me talk a bit about why we have an SAT/ACT requirement in the first place. We have a dedicated research and analysis team that regularly studies our process and decisions. One thing they look at is what we need to predict student success at MIT. We want to be confident an applicant has the academic preparation and noncognitive skills (like resilience, conscientiousness, time-management, and so on) to do well in our challenging, fast-paced academic environment.

In short: Our research has shown that, in most cases, we cannot reliably predict students will do well at MIT unless we consider standardized test results alongside grades, coursework, and other factors. These findings are statistically robust and stable over time, and hold when you control for socioeconomic factors and look across demographic groups. And the math component of the testing turns out to be most important.

One reason we think this is true is the unusually quantitative orientation of our education, as I explain in more detail in my post. An MIT education combines deeply analytic thinking with creative hands-on problem-solving to prepare students to solve the toughest problems in the world. Our General Institute Requirements demand that all first-years take (or place out of, through Advanced Standing Examination) two semesters of calculus and two semesters of calculus-based physics, no matter what field they intend to major in; students who do not place out of physics also take a math diagnostic. In other words, there is no pathway through MIT that does not include a rigorous foundation in mathematics, mediated by many quantitative exams along the way. So, in a way, it is not surprising that the SAT/ACT math exams are predictive of success at MIT; it would be more surprising if they weren't.

I should emphasize here that we don’t focus only on the tests. In fact, we don’t care about the tests at all beyond the point where they — alongside other factors — help demonstrate preparation for MIT. We don’t prefer perfect scores, and a perfect score isn’t sufficient to say you’ll succeed at MIT, either. However, the tests are something we’ve found we usually need in addition to these other factors in order to demonstrate preparation.  

We are reinstating our requirement in order to be transparent and equitable in our expectations. Our concern is that, without the compelling clarity of a requirement, some well-prepared applicants won’t take the tests, and we won’t have enough information to be confident in their academic readiness when they apply. We believe it will be more equitable — and less anxiety-inducing — if we require all applicants who take the tests to disclose their scores, rather than ask each student to strategically guess whether or not to send them to us.

Of course, we know that some students won’t be able to safely take the tests due to their own specific health conditions or various disasters and disruptions, as was the case before the pandemic. In these cases, we will allow students to explain on their application why they were unable to safely take the exam, and we will not hold the lack of exam against them. We will instead use other factors in their application to assess preparation as best we can, but with one less tool in our kit in their case.  

Q: What do you say to those who argue the tests create structural barriers for socioeconomically disadvantaged and/or underrepresented students?

A: I appreciate this question, which we have kept foremost in our minds as we reviewed our research and policies. MIT Admissions has a strong commitment to diversity, and it is important to us that we minimize unfair barriers to our applicants wherever possible.

However, what we have found is that the way we use the SAT/ACT increases access to MIT for students from these groups relative to other things we can consider. The reason for this is that educational inequality impacts all aspects of a prospective student’s preparation and application, not just test-taking. As I wrote, low-income students, underrepresented students of color, and other disadvantaged populations often do not attend schools that offer advanced coursework (and if they do, they are less likely to be able to take it). They often cannot afford expensive enrichment opportunities, cannot expect lengthy letters of recommendation from their overburdened teachers, or cannot otherwise benefit from this kind of educational capital. Meanwhile, we know that the pandemic was most disruptive to our least-resourced students, who may have had no consistent coursework or grading for nearly two years now. 

I realize this argument may sound counterintuitive to some who have heard that the SAT/ACT exams raise barriers for access, and I don’t want to ignore the challenges with, or limits of, the tests. They are just one tool among many that we use. However, what I think many people outside our profession don’t understand is how unfortunately unequal all aspects of secondary education are in this country. And unlike some other inequalities — like access to fancy internships or expensive extracurriculars — our empirical research shows the SAT/ACT actually do help us figure out if someone will do well at MIT.

It turns out the shortest path for many students to demonstrate sufficient preparation — particularly for students with less access to educational capital — is through the SAT/ACT, because most students can study for these exams using free tools at Khan Academy, but they (usually) can’t force their high school to offer advanced calculus courses, for example. So, the SAT/ACT can actually open the door to MIT for these students, too.

The key thing I hope people understand is that we are using the tests as a crucial tool in the service of our mission, and not for the sake of the tests themselves. If and when we can find better, more equitable tools than the SAT/ACT, we will make changes to our policies and processes, as we did a few years ago when we stopped considering the SAT subject tests. Our creative and dedicated research and analysis team will continue to work hard in this area.  

Q: What do you think the impact of this reinstatement will be on your office and on MIT?

A: My hope is that it will help us recruit, select, and enroll a robustly diverse undergraduate student body that is well-prepared to succeed in our challenging curriculum. At least, when we presented our data and proposal to the Committee on Undergraduate Admissions and Financial Aid (CUAFA) — the student/faculty/staff policy committee that oversees our work — that is how we defined our goal, and CUAFA unanimously approved our plan on those terms. 

Before the pandemic, considering testing (alongside other factors) helped us expand access to MIT, and we are very proud of the diversity and talent of the undergraduate student body. There is currently no majority race or ethnicity among MIT’s undergraduates. If you look at research published in The New York Times a few years ago, there is more economic diversity and intergenerational mobility at MIT than at comparable institutions; nearly 20 percent of our students are the first generation in their families to attend college, as I was. We think that if testing helped us do this before the pandemic, it can help us continue to do it now. So, that is how we will evaluate success in the years to come.



from MIT News https://ift.tt/fMYwXP6

Security tool guarantees privacy in surveillance footage

Surveillance cameras have an identity problem, fueled by an inherent tension between utility and privacy. As these powerful little devices have cropped up seemingly everywhere, the use of machine learning tools has automated video content analysis at a massive scale — but with increasing mass surveillance, there are currently no legally enforceable rules to limit privacy invasions.

Security cameras can do a lot — they’ve become smarter and supremely more competent than their ghosts of grainy pictures past, the ofttimes “hero tool” in crime media. (“See that little blurry blue blob in the right-hand corner of that densely populated corner — we got him!”) Now, video surveillance can help health officials measure the fraction of people wearing masks, enable transportation departments to monitor the density and flow of vehicles, bikes, and pedestrians, and provide businesses with a better understanding of shopping behaviors. But why has privacy remained a weak afterthought? 

The status quo is to retrofit video with blurred faces or black boxes. Not only does this prevent analysts from asking some genuine queries (e.g., Are people wearing masks?), it also doesn’t always work; the system may miss some faces and leave them unblurred for the world to see. Dissatisfied with this status quo, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), in collaboration with other institutions, came up with a system to better guarantee privacy in video footage from surveillance cameras. Called “Privid,” the system lets analysts submit video data queries, and adds a little bit of noise (extra data) to the end result to ensure that an individual can’t be identified. The system builds on a formal definition of privacy — “differential privacy” — which allows access to aggregate statistics about private data without revealing personally identifiable information.
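For reference, the formal guarantee behind differential privacy (a textbook statement, not anything specific to Privid) is that a randomized mechanism M is ε-differentially private if, for any two datasets D and D′ that differ only in one individual’s data and for any set of possible outputs S,

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S],

meaning the presence or absence of any single person changes the probability of any outcome by at most a factor of e^ε. Privid adapts this idea to video, where, roughly speaking, two recordings count as “neighbors” if they differ only in one person’s appearance, bounded by how long that person is on screen, as described below.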

Typically, analysts would just have access to the entire video to do whatever they want with it, but Privid makes sure the video isn’t a free buffet. Honest analysts can get access to the information they need, but that access is restrictive enough that malicious analysts can't do too much with it. To enable this, rather than running the code over the entire video in one shot, Privid breaks the video into small pieces and runs the processing code over each chunk. Rather than returning a result for each piece, the per-chunk outputs are aggregated, and noise is added to that aggregate. (The analyst also receives an error bound on the result — say, a 2 percent error margin — reflecting the extra noise that was added.) 

For example, the code might output the number of people observed in each video chunk, and the aggregation might be the “sum,” to count the total number of people wearing face coverings, or the “average” to estimate the density of crowds. 
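The chunk-then-aggregate pattern can be illustrated with a short sketch. This is not Privid’s code; the per-chunk counts, the duration bound, and the Laplace-noise calibration below are simplified assumptions meant only to show the shape of a duration-bounded private sum.

```python
import numpy as np

def private_sum(per_chunk_counts, max_duration_chunks, max_per_chunk, epsilon, rng=None):
    """Aggregate per-chunk query outputs with Laplace noise (a minimal sketch).

    per_chunk_counts: output of the analyst's model on each video chunk
        (e.g., number of people detected per chunk).
    max_duration_chunks: assumed upper bound on how many chunks any single
        person can appear in (the duration-based bound discussed in the text).
    max_per_chunk: assumed cap on how much one person can change a single
        chunk's output.
    epsilon: privacy budget; smaller values mean more noise.
    """
    rng = rng or np.random.default_rng()
    true_total = float(sum(per_chunk_counts))        # the aggregation (here: a sum)

    # One person can change the total by at most this much (the query's sensitivity).
    sensitivity = max_duration_chunks * max_per_chunk

    # Laplace mechanism: noise scale = sensitivity / epsilon.
    return true_total + rng.laplace(0.0, sensitivity / epsilon)

# Example: counts from 12 one-minute chunks, assuming no one stays in view
# for more than 3 chunks and contributes at most 1 to each chunk's count.
counts = [4, 7, 5, 6, 3, 8, 9, 2, 5, 6, 7, 4]
print(private_sum(counts, max_duration_chunks=3, max_per_chunk=1, epsilon=1.0))
```

Note that in this sketch the noise scale depends only on the assumed duration bound and the privacy budget, not on the length of the video, which is why aggregate queries over long videos can remain useful even after noise is added.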

Privid allows analysts to use their own deep neural networks that are commonplace for video analytics today. This gives analysts the flexibility to ask questions that the designers of Privid did not anticipate. Across a variety of videos and queries, Privid was accurate within 79 to 99 percent of a non-private system.

“We’re at a stage right now where cameras are practically ubiquitous. If there's a camera on every street corner, every place you go, and if someone could actually process all of those videos in aggregate, you can imagine that entity building a very precise timeline of when and where a person has gone,” says MIT CSAIL PhD student ​​Frank Cangialosi, the lead author on a paper about Privid. “People are already worried about location privacy with GPS — video data in aggregate could capture not only your location history, but also moods, behaviors, and more at each location.” 

Privid introduces a new notion of “duration-based privacy,” which decouples the definition of privacy from its enforcement — with obfuscation, if your privacy goal is to protect all people, the enforcement mechanism needs to do some work to find the people to protect, which it may or may not do perfectly. With this mechanism, you don’t need to fully specify everything, and you're not hiding more information than you need to. 

Let’s say we have a video overlooking a street. Two analysts, Alice and Bob, both claim they want to count the number of people that pass by each hour, so they submit a video processing module and ask for a sum aggregation.

The first analyst is the city planning department, which hopes to use this information to understand footfall patterns and plan sidewalks for the city. Their model counts people and outputs this count for each video chunk.

The other analyst is malicious. They hope to identify every time “Charlie” passes by the camera. Their model only looks for Charlie’s face, and outputs a large number if Charlie is present (i.e., the “signal” they’re trying to extract), or zero otherwise. Their hope is that the sum will be non-zero if Charlie was present. 

From Privid’s perspective, these two queries look identical. It’s hard to reliably determine what their models might be doing internally, or what the analyst hopes to use the data for. This is where the noise comes in. Privid executes both of the queries, and adds the same amount of noise for each. In the first case, because Alice was counting all people, this noise has only a small impact on the result and is unlikely to affect its usefulness. 

In the second case, since Bob was looking for a specific signal (Charlie was only visible for a few chunks), the noise is enough to prevent them from knowing whether Charlie was there. If they see a non-zero result, it might be because Charlie was actually there, or because the model output zero and the noise made it non-zero. Privid didn’t need to know anything about when or where Charlie appeared; the system just needed a rough upper bound on how long Charlie might appear for, which is easier to specify than the exact locations that prior methods rely on. 

The challenge is determining how much noise to add — Privid wants to add just enough to hide everyone, but not so much that it would be useless for analysts. Adding noise to the data and insisting on queries over time windows means that your result isn’t going to be as accurate as it could be, but the results are still useful while providing better privacy. 

Cangialosi wrote the paper with Princeton University PhD student Neil Agarwal; MIT CSAIL PhD student Venkat Arun; Junchen Jiang, assistant professor at the University of Chicago; Srinivas Narayana, assistant professor at Rutgers University and a former MIT CSAIL postdoc; Anand Sarwate, associate professor at Rutgers University; and Ravi Netravali SM '15, PhD '18, assistant professor at Princeton University. Cangialosi will present the paper at the USENIX Symposium on Networked Systems Design and Implementation (NSDI) in April in Renton, Washington. 

This work was partially supported by a Sloan Research Fellowship and National Science Foundation grants.



from MIT News https://ift.tt/3PL90oC

Sunday, March 27, 2022

Q&A: Climate Grand Challenges finalists on new pathways to decarbonizing industry

Note: This is the third article in a four-part interview series highlighting the work of the 27 MIT Climate Grand Challenges finalist teams, which received a total of $2.7 million in startup funding to advance their projects. In April, the Institute will name a subset of the finalists as multiyear flagship projects.

The industrial sector is the backbone of today’s global economy, yet its activities are among the most energy-intensive and the toughest to decarbonize. Efforts to reach net-zero targets and avert runaway climate change will not succeed without new solutions for replacing sources of carbon emissions with low-carbon alternatives and developing scalable nonemitting applications of hydrocarbons.

In conversations prepared for MIT News, faculty from three of the teams with projects in the competition’s “Decarbonizing complex industries and processes” category discuss strategies for achieving impact in hard-to-abate sectors, from long-distance transportation and building construction to textile manufacturing and chemical refining. The other Climate Grand Challenges research themes include using data and science to forecast climate-related risk, building equity and fairness into climate solutions, and removing, managing, and storing greenhouse gases. The following responses have been edited for length and clarity.

Moving toward an all-carbon material approach to building

Faced with the prospect of building stock doubling globally by 2050, there is a great need for sustainable alternatives to conventional mineral- and metal-based construction materials. Mark Goulthorpe, associate professor in the Department of Architecture, explains the methods behind Carbon>Building, an initiative to develop energy-efficient building materials by reorienting hydrocarbons from current use as fuels to environmentally benign products, creating an entirely new genre of lightweight, all-carbon buildings that could actually drive decarbonization.

Q: What are all-carbon buildings and how can they help mitigate climate change?

A: Instead of burning hydrocarbons as fuel, which releases carbon dioxide and other greenhouse gases that contribute to atmospheric pollution, we seek to pioneer a process that uses carbon materially to build at macro scale. New forms of carbon — carbon nanotubes, carbon foam, etc. — offer salient properties for building that might effectively displace the current material paradigm. Only hydrocarbons offer sufficient scale to displace the billion-ton mineral and metal markets and their perilous environmental impact. Carbon nanotubes from methane pyrolysis are of special interest, as the process also yields hydrogen as a byproduct.

Q: How will society benefit from the widespread use of all-carbon buildings?

A: We anticipate reducing costs and timelines in carbon composite buildings, while increasing quality, longevity, and performance, and diminishing environmental impact. Affordability of buildings is a growing problem in all global markets as the cost of labor and logistics in multimaterial assemblies creates a burden that is very detrimental to economic growth and results in overcrowding and urban blight.

Alleviating these challenges would have huge societal benefits, especially for those in lower income brackets who cannot afford housing, but the biggest benefit would be in drastically reducing the environmental footprint of typical buildings, which account for nearly 40 percent of global energy consumption.

An all-carbon building sector will not only reduce hydrocarbon extraction, but can produce higher value materials for building. We are looking to rethink the building industry by greatly streamlining global production and learning from the low-labor methods pioneered by composite manufacturing such as wind turbine blades, which are quick and cheap to produce. This technology can improve the sustainability and affordability of buildings — and holds the promise of faster, cheaper, greener, and more resilient modes of dwelling.

Emissions reduction through innovation in the textile industry

Collectively, the textile industry is responsible for over 4 billion metric tons of carbon dioxide equivalent per year, or 5 to 10 percent of global greenhouse gas emissions — more than aviation and maritime shipping combined. And the problem is only getting worse with the industry’s rapid growth. Under the current trajectory, consumption is projected to increase 30 percent by 2030, reaching 102 million tons. A diverse group of faculty and researchers led by Gregory Rutledge, the Lammot du Pont Professor in the Department of Chemical Engineering, and Yuly Fuentes-Medel, project manager for fiber technologies and research advisor to the MIT Innovation Initiative, is developing groundbreaking innovations to reshape how textiles are selected, sourced, designed, manufactured, and used, and to create the structural changes required for sustained reductions in emissions by this industry.

Q: Why has the textile industry been difficult to decarbonize?

A: The industry currently operates under a linear model that relies heavily on virgin feedstock, at roughly 97 percent, yet recycles or downcycles less than 15 percent. Furthermore, recent trends in “fast fashion” have led to massive underutilization of apparel, such that products are discarded on average after only seven to 10 uses. In an industry with high volume and low margins, replacement technologies must achieve emissions reduction at scale while maintaining performance and economic efficiency.

There are also technical barriers to adopting circular business models, from the challenge of dealing with products comprising fiber blends and chemical additives to the low maturity of recycling technologies. The environmental impacts of textiles and apparel have been estimated using life cycle analysis, and industry-standard indexes are under development to assess sustainability throughout the life cycle of a product, but information and tools are needed to model how new solutions will alter those impacts and include the consumer as an active player to keep our planet safe. This project seeks to deliver both the new solutions and the tools to evaluate their potential for impact.

Q: Describe the five components of your program. What is the anticipated timeline for implementing these solutions?

A: Our plan comprises five programmatic sections, which include (1) enabling a paradigm shift to sustainable materials using nontraditional, carbon-negative polymers derived from biomass and additives that facilitate recycling; (2) rethinking manufacturing with processes to structure fibers and fabrics for performance, waste reduction, and increased material efficiency; (3) designing textiles for value by developing products that are customized, adaptable, and multifunctional, and that interact with their environment to reduce energy consumption; (4) exploring consumer behavior change through human interventions that reduce emissions by encouraging the adoption of new technologies, increased utilization of products, and circularity; and (5) establishing carbon transparency with systems-level analyses that measure the impact of these strategies and guide decision making.

We have proposed a five-year timeline with annual targets for each project. Conservatively, we estimate our program could reduce greenhouse gas emissions in the industry by 25 percent by 2030, with further significant reductions to follow.

Tough-to-decarbonize transportation

Airplanes, transoceanic ships, and freight trucks are critical to transporting people and delivering goods, and they are the cornerstone of global commerce, manufacturing, and tourism. But these vehicles also emit 3.7 billion tons of carbon dioxide annually and, left unchecked, they could take up a quarter of the remaining carbon budget by 2050. William Green, the Hoyt C. Hottel Professor in the Department of Chemical Engineering, co-leads a multidisciplinary team with Steven Barrett, professor of aeronautics and astronautics and director of the MIT Laboratory for Aviation and the Environment, that is working to identify and advance economically viable technologies and policies for decarbonizing heavy-duty trucking, shipping, and aviation. The Tough to Decarbonize Transportation research program aims to design and optimize fuel chemistry and production, vehicles, operations, and policies to chart the course to net-zero emissions by midcentury.

Q: What are the highest priority focus areas of your research program?

A: Hydrocarbon fuels made from biomass are the least expensive option, but it seems impractical, and probably damaging to the environment, to harvest the huge amount of biomass that would be needed to meet the massive and growing energy demands from these sectors using today’s biomass-to-fuel technology. We are exploring strategies to increase the amount of useful fuel made per ton of biomass harvested, other methods to make low-climate-impact hydrocarbon fuels, such as from carbon dioxide, and ways to make fuels that do not contain carbon at all, such as with hydrogen, ammonia, and other hydrogen carriers.

These latter zero-carbon options free us from the need for biomass or to capture gigatons of carbon dioxide, so they could be a very good long-term solution, but they would require changing the vehicles significantly, and the construction of new refueling infrastructure, with high capital costs.

Q: What are the scientific, technological, and regulatory barriers to scaling and implementing potential solutions?

A: Reimagining an aviation, trucking, and shipping sector that connects the world and increases equity without creating more environmental damage is challenging because these vehicles must operate disconnected from the electrical grid and have energy requirements that cannot be met by batteries alone. Some of the concepts do not even exist in prototype yet, and none of the appealing options have been implemented at anywhere near the scale required.

In most cases, we do not know the best way to make the fuel, and for new fuels the vehicles and refueling systems all need to be developed. Also, new fuels, or large-scale use of biomass, will introduce new environmental problems that need to be carefully considered, to ensure that decarbonization solutions do not introduce big new problems.

Perhaps most difficult are the policy, economic, and equity issues. A new long-haul transportation system will be expensive, and everyone will be affected by the increased cost of shipping freight. To have the desired climate impact, the transport system must change in almost every country. During the transition period, we will need both the existing vehicle and fuel system to keep running smoothly, even as a new low-greenhouse-gas system is introduced. We will also examine what policies could make that work and how we can get countries around the world to agree to implement them.



from MIT News https://ift.tt/P6kNuiV

A tool for predicting the future

Whether someone is trying to predict tomorrow’s weather, forecast future stock prices, identify missed opportunities for sales in retail, or estimate a patient’s risk of developing a disease, they will likely need to interpret time-series data, which are a collection of observations recorded over time.

Making predictions using time-series data typically requires several data-processing steps and the use of complex machine-learning algorithms, which have such a steep learning curve they aren’t readily accessible to nonexperts.

To make these powerful tools more user-friendly, MIT researchers developed a system that directly integrates prediction functionality on top of an existing time-series database. Their simplified interface, which they call tspDB (time series predict database), does all the complex modeling behind the scenes so a nonexpert can easily generate a prediction in only a few seconds.
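To make the “prediction built into the database” idea concrete, here is a deliberately toy illustration of what that workflow could feel like to a nonexpert; the class and method names below are invented for this sketch and are not tspDB’s actual interface.

```python
# Hypothetical illustration only -- not the real tspDB API.
# The idea: once a predictive index is built over an existing table,
# asking for a forecast looks about as simple as an ordinary lookup.

class ToyPredictiveTable:
    """A stand-in for a database table with prediction attached."""

    def __init__(self, timestamps, values):
        self.timestamps = list(timestamps)
        self.values = list(values)

    def predict(self, timestamp):
        """Return a naive forecast (here, the last observed value) for any timestamp.
        A real system would run its time-series model behind this call."""
        return self.values[-1]

table = ToyPredictiveTable(range(10), [20.1, 20.3, 20.2, 20.6, 20.9, 21.0, 21.2, 21.1, 21.4, 21.5])
print(table.predict(timestamp=12))   # forecast a future point as easily as a lookup
```

The point of the design is that the heavy lifting — model selection, training, and updating — happens behind the predict call rather than in the analyst’s hands.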

The new system is more accurate and more efficient than state-of-the-art deep learning methods when performing two tasks: predicting future values and filling in missing data points.

One reason tspDB is so successful is that it incorporates a novel time-series-prediction algorithm, explains electrical engineering and computer science (EECS) graduate student Abdullah Alomar, an author of a recent research paper in which he and his co-authors describe the algorithm. This algorithm is especially effective at making predictions on multivariate time-series data, which are data that have more than one time-dependent variable. In a weather database, for instance, temperature, dew point, and cloud cover each depend on their past values.

The algorithm also estimates the volatility of a multivariate time series to provide the user with a confidence level for its predictions.
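As a rough illustration of how a volatility estimate turns into a confidence level, consider the standard Gaussian interval; the Gaussian error assumption here is made only for this example and is not a description of the paper’s exact construction.

```python
def confidence_interval(point_forecast, sigma_estimate, z=1.96):
    """Return an approximate 95 percent confidence interval for a forecast,
    assuming forecast errors are roughly Gaussian with standard deviation
    sigma_estimate (the estimated volatility)."""
    return point_forecast - z * sigma_estimate, point_forecast + z * sigma_estimate

# e.g., a temperature forecast of 21.4 degrees with estimated volatility 0.8
print(confidence_interval(21.4, 0.8))   # -> (19.832, 22.968)
```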

“Even as the time-series data becomes more and more complex, this algorithm can effectively capture any time-series structure out there. It feels like we have found the right lens to look at the model complexity of time-series data,” says senior author Devavrat Shah, the Andrew and Erna Viterbi Professor in EECS and a member of the Institute for Data, Systems, and Society and of the Laboratory for Information and Decision Systems.

Joining Alomar and Shah on the paper is lead author Anish Agrawal, a former EECS graduate student who is currently a postdoc at the Simons Institute at the University of California at Berkeley. The research will be presented at the ACM SIGMETRICS conference.

Adapting a new algorithm

Shah and his collaborators have been working on the problem of interpreting time-series data for years, adapting different algorithms and integrating them into tspDB as they built the interface.

About four years ago, they learned about a particularly powerful classical algorithm, called singular spectrum analysis (SSA), that imputes and forecasts single time series. Imputation is the process of replacing missing values or correcting past values. While this algorithm required manual parameter selection, the researchers suspected it could enable their interface to make effective predictions using time-series data. In earlier work, they eliminated the need for that manual intervention.

The single time-series algorithm transforms a series into a matrix and applies matrix estimation procedures. The key intellectual challenge was how to adapt it to multiple time series. After a few years of struggle, they realized the answer was something very simple: “stack” the matrices for each individual time series, treat them as one big matrix, and then apply the single time-series algorithm to it.

This utilizes information across multiple time series naturally — both across the time series and across time, which they describe in their new paper.
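The stacking recipe can be sketched in a few lines of code. The sketch below illustrates the general idea — build a matrix from each series, stack them side by side, take one low-rank approximation, and read imputed values back out — with the window length, rank, and plain truncated SVD chosen arbitrarily for illustration rather than taken from the paper.

```python
import numpy as np

def page_matrix(series, L):
    """Arrange a 1-D series into non-overlapping columns of length L (a "Page" matrix)."""
    n = (len(series) // L) * L                     # drop any leftover tail
    return series[:n].reshape(-1, L).T             # shape (L, n // L)

def stacked_low_rank_impute(series_list, L=10, rank=2):
    """Impute missing values (NaNs) in several related time series by stacking
    their Page matrices side by side and taking one low-rank approximation."""
    mats = [page_matrix(np.asarray(s, dtype=float), L) for s in series_list]
    big = np.hstack(mats)                          # "stack" into one big matrix

    mask = np.isnan(big)
    filled = np.where(mask, np.nanmean(big), big)  # crude initial fill for missing entries

    # Matrix estimation step: truncated SVD gives a low-rank approximation.
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]

    # Keep observed entries; use the low-rank estimate where data was missing.
    return np.where(mask, approx, big)

# Example: two correlated series with a few missing points.
t = np.arange(200, dtype=float)
a = np.sin(0.1 * t); a[[20, 75, 140]] = np.nan
b = np.sin(0.1 * t + 0.5); b[[33, 90]] = np.nan
completed = stacked_low_rank_impute([a, b], L=10, rank=2)
```

Forecasting works in the same spirit: once the stacked matrix has a good low-rank approximation, a simple linear model over that representation can be fit to predict future values.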

This recent publication also discusses interesting alternatives, where instead of transforming the multivariate time series into a big matrix, it is viewed as a three-dimensional tensor. A tensor is a multi-dimensional array, or grid, of numbers. This established a promising connection between the classical field of time series analysis and the growing field of tensor estimation, Alomar says.

“The variant of mSSA that we introduced actually captures all of that beautifully. So, not only does it provide the most likely estimation, but a time-varying confidence interval, as well,” Shah says.

The simpler, the better

They tested the adapted mSSA against other state-of-the-art algorithms, including deep-learning methods, on real-world time-series datasets with inputs drawn from the electricity grid, traffic patterns, and financial markets.

Their algorithm outperformed all the others on imputation and it outperformed all but one of the other algorithms when it came to forecasting future values. The researchers also demonstrated that their tweaked version of mSSA can be applied to any kind of time-series data.

“One reason I think this works so well is that the model captures a lot of time series dynamics, but at the end of the day, it is still a simple model. When you are working with something simple like this, instead of a neural network that can easily overfit the data, you can actually perform better,” Alomar says.

The impressive performance of mSSA is what makes tspDB so effective, Shah explains. Now, their goal is to make this algorithm accessible to everyone.

Once a user installs tspDB on top of an existing database, they can run a prediction query with just a few keystrokes in about 0.9 milliseconds, as compared to 0.5 milliseconds for a standard search query. The confidence intervals are also designed to help nonexperts make more informed decisions by conveying the degree of uncertainty in the predictions.

For instance, the system could enable a nonexpert to predict future stock prices with high accuracy in just a few minutes, even if the time-series dataset contains missing values.

Now that the researchers have shown why mSSA works so well, they are targeting new algorithms that can be incorporated into tspDB. One of these algorithms utilizes the same model to automatically enable change point detection, so if the user believes their time series will change its behavior at some point, the system will automatically detect that change and incorporate that into its predictions.

They also want to continue gathering feedback from current tspDB users to see how they can improve the system’s functionality and user-friendliness, Shah says.

“Our interest at the highest level is to make tspDB a success in the form of a broadly utilizable, open-source system. Time-series data are very important, and this is a beautiful concept of actually building prediction functionalities directly into the database. It has never been done before, and so we want to make sure the world uses it,” he says.

“This work is very interesting for a number of reasons. It provides a practical variant of mSSA which requires no hand tuning, they provide the first known analysis of mSSA, and the authors demonstrate the real-world value of their algorithm by being competitive with or out-performing several known algorithms for imputations and predictions in (multivariate) time series for several real-world data sets,” says Vishal Misra, a professor of computer science at Columbia University who was not involved with this research. “At the heart of it all is the beautiful modeling work where they cleverly exploit correlations across time (within a time series) and space (across time series) to create a low-rank spatiotemporal factor representation of a multivariate time series. Importantly this model connects the field of time series analysis to that of the rapidly evolving topic of tensor completion, and I expect a lot of follow-on research spurred by this paper.”



from MIT News https://ift.tt/2Kk5BXp