Friday, April 29, 2022

School of Engineering first quarter 2022 awards

Members of the MIT engineering faculty receive many awards in recognition of their scholarship, service, and overall excellence. The School of Engineering periodically recognizes their achievements by highlighting the honors, prizes, and medals won by faculty working in our academic departments, labs, and centers.

  • Saman Amarasinghe of the Department of Electrical Engineering and Computer Science received the Outstanding Paper Award at the Fourth Conference on Machine Learning and Systems on Jan. 27 and the Best Paper Award at the International Symposium on Code Generation and Optimization on Feb. 7.
  • Irmgard Bischofberger of the Department of Mechanical Engineering was named a 2022 AIMBE Fellow on Feb. 18.
  • Lydia Bourouiba of the Department of Civil and Environmental Engineering and the Institute for Medical Engineering and Science was named a 2022 AIMBE Fellow on Feb. 18.
  • Luca Daniel of the Department of Electrical Engineering and Computer Science was named a 2022 IEEE Fellow on Jan. 12.
  • Dirk Englund of the Department of Electrical Engineering and Computer Science was named a 2022 Optica Fellow on Nov. 23, 2021.
  • Devavrat Shah of the Department of Electrical Engineering and Computer Science was named a 2022 IEEE Fellow on Jan. 12.  
  • Peter So of the Department of Mechanical Engineering was named a 2022 AIMBE Fellow on Feb. 18.


from MIT News https://ift.tt/vxE1Zgr

A one-up on motion capture

From “Star Wars” to “Happy Feet,” many beloved films contain scenes that were made possible by motion capture technology, which records movement of objects or people through video. Further, applications for this tracking, which involve complicated interactions between physics, geometry, and perception, extend beyond Hollywood to the military, sports training, medical fields, and computer vision and robotics, allowing engineers to understand and simulate action happening within real-world environments.

As this can be a complex and costly process — often requiring markers placed on objects or people and recording the action sequence — researchers are working to shift the burden to neural networks, which could acquire this data from a simple video and reproduce it in a model. Work in physics simulations and rendering shows promise to make this more widely used, since it can characterize realistic, continuous, dynamic motion from images and transform back and forth between a 2D render and 3D scene in the world. However, to do so, current techniques require precise knowledge of the environmental conditions where the action is taking place, and the choice of renderer, both of which are often unavailable.

Now, a team of researchers from MIT and IBM has developed a trained neural network pipeline that avoids this issue, with the ability to infer the state of the environment and the actions happening, the physical characteristics of the object or person of interest (system), and its control parameters. When tested, the technique can outperform other methods in simulations of four physical systems of rigid and deformable bodies, which illustrate different types of dynamics and interactions, under various environmental conditions. Further, the methodology allows for imitation learning — predicting and reproducing the trajectory of a real-world, flying quadrotor from a video.

“The high-level research problem this paper deals with is how to reconstruct a digital twin from a video of a dynamic system,” says Tao Du PhD ’21, a postdoc in the Department of Electrical Engineering and Computer Science (EECS), a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and a member of the research team. In order to do this, Du says, “we need to ignore the rendering variances from the video clips and try to grasp the core information about the dynamic system or the dynamic motion.”

Du’s co-authors include lead author Pingchuan Ma, a graduate student in EECS and a member of CSAIL; Josh Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences and a member of CSAIL; Wojciech Matusik, professor of electrical engineering and computer science and CSAIL member; and MIT-IBM Watson AI Lab principal research staff member Chuang Gan. This work was presented this week at the International Conference on Learning Representations.

While capturing videos of characters, robots, or dynamic systems to infer dynamic movement makes this information more accessible, it also brings a new challenge. “The images or videos [and how they are rendered] depend largely on the lighting conditions, on the background info, on the texture information, on the material information of your environment, and these are not necessarily measurable in a real-world scenario,” says Du. Without this rendering configuration information or knowledge of which renderer is used, it’s presently difficult to glean dynamic information and predict behavior of the subject of the video. Even if the renderer is known, current neural network approaches still require large sets of training data. However, with their new approach, this can become a moot point. “If you take a video of a leopard running in the morning and in the evening, of course, you’ll get visually different video clips because the lighting conditions are quite different. But what you really care about is the dynamic motion: the joint angles of the leopard — not if they look light or dark,” Du says.

In order to take rendering domains and image differences out of the issue, the team developed a pipeline system containing a neural network, dubbed “rendering invariant state-prediction (RISP)” network. RISP transforms differences in images (pixels) to differences in states of the system — i.e., the environment of action — making their method generalizable and agnostic to rendering configurations. RISP is trained using random rendering parameters and states, which are fed into a differentiable renderer, a type of renderer that measures the sensitivity of pixels with respect to rendering configurations, e.g., lighting or material colors. This generates a set of varied images and video from known ground-truth parameters, which will later allow RISP to reverse that process, predicting the environment state from the input video. The team additionally minimized RISP’s rendering gradients, so that its predictions were less sensitive to changes in rendering configurations, allowing it to learn to forget about visual appearances and focus on learning dynamical states. This is made possible by a differentiable renderer.
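
To make that concrete, here is a minimal sketch, in PyTorch, of how a rendering-invariant state predictor of this kind could be trained. The toy renderer, the dimensions, the network, and the penalty weight are illustrative assumptions rather than the authors’ implementation; the two ingredients carried over from the description above are a differentiable renderer and a penalty on the predictor’s sensitivity to the rendering configuration.

```python
# A minimal sketch, in PyTorch, of a rendering-invariant state predictor.
# The toy renderer, dimensions, network, and penalty weight are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn

def render(state, cfg):
    # Hypothetical differentiable "renderer": maps a 4-D state and a 3-D
    # rendering configuration (stand-ins for lighting, material color, etc.)
    # to a 64-pixel "image".
    return torch.tanh(state @ torch.ones(4, 64) + cfg @ torch.ones(3, 64))

risp = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 4))
opt = torch.optim.Adam(risp.parameters(), lr=1e-3)

for step in range(1000):
    state = torch.randn(32, 4)                    # random ground-truth states
    cfg = torch.randn(32, 3, requires_grad=True)  # random rendering configs
    images = render(state, cfg)
    pred = risp(images)
    state_loss = (pred - state).pow(2).mean()
    # Penalize how strongly the prediction depends on the rendering
    # configuration; the differentiable renderer makes this gradient available.
    grad_cfg, = torch.autograd.grad(pred.sum(), cfg, create_graph=True)
    loss = state_loss + 0.1 * grad_cfg.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```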

The method then uses two similar pipelines, run in parallel. One is for the source domain, with known variables. Here, system parameters and actions are entered into a differentiable simulation. The generated simulation’s states are combined with different rendering configurations into a differentiable renderer to generate images, which are fed into RISP. RISP then outputs predictions about the environmental states. At the same time, a similar target domain pipeline is run with unknown variables. RISP in this pipeline is fed these output images, generating a predicted state. When the predicted states from the source and target domains are compared, a new loss is produced; this difference is used to adjust and optimize some of the parameters in the source domain pipeline. This process can then be iterated on, further reducing the loss between the pipelines.
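
Continuing the same toy setup (the sketch below reuses the render function and the trained risp network from above; the simulation and dimensions are again assumptions for illustration), the source/target loop amounts to rendering a differentiable simulation, letting RISP predict states for both the simulated frames and the target video frames, and backpropagating the mismatch to refine the unknown simulation parameters.

```python
# Continues the toy setup above (reuses `render` and the trained `risp`).
# The "simulation" and all dimensions are assumptions for illustration only.
import torch

def simulate(params, steps=8):
    # Hypothetical differentiable simulation: rolls a 4-D state forward
    # under toy dynamics controlled by unknown parameters.
    state = torch.zeros(4)
    trajectory = []
    for _ in range(steps):
        state = state + params
        trajectory.append(state)
    return torch.stack(trajectory)

for p in risp.parameters():
    p.requires_grad_(False)  # freeze RISP; only the simulation parameters move

target_images = torch.randn(8, 64)          # stand-in frames from the target video
params = torch.zeros(4, requires_grad=True)
opt = torch.optim.Adam([params], lr=1e-2)

for it in range(200):
    src_states = simulate(params)
    cfg = torch.randn(8, 3)                  # arbitrary rendering configuration
    src_images = render(src_states, cfg)     # toy renderer from the sketch above
    # Compare RISP's predicted states for the two domains and nudge the
    # simulation parameters to close the gap.
    loss = (risp(src_images) - risp(target_images)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```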

To determine the success of their method, the team tested it in four simulated systems: a quadrotor (a flying rigid body that doesn’t have any physical contact), a cube (a rigid body that interacts with its environment, like a die), an articulated hand, and a rod (a deformable body that can move like a snake). The tasks included estimating the state of a system from an image, identifying the system parameters and action control signals from a video, and discovering the control signals from a target image that direct the system to the desired state. Additionally, they created baselines and an oracle, comparing the novel RISP process in these systems to similar methods that, for example, lack the rendering gradient loss, don’t train a neural network with any loss, or lack the RISP neural network altogether. The team also looked at how the gradient loss impacted the state prediction model’s performance over time. Finally, the researchers deployed their RISP system to infer the motion of a real-world quadrotor, which has complex dynamics, from video. They compared its performance to other techniques that used pixel differences instead of a state-based loss, or that required manual tuning of the renderer’s configuration.

In nearly all of the experiments, the RISP procedure outperformed comparable and state-of-the-art methods, imitating or reproducing the desired parameters or motion, and proving to be a data-efficient and generalizable competitor to current motion capture approaches.

For this work, the researchers made two important assumptions: that information about the camera, such as its position and settings, is known, and that the geometry and physics governing the tracked object or person are known. Future work is planned to relax these assumptions.

“I think the biggest problem we’re solving here is to reconstruct the information from one domain to another, without very expensive equipment,” says Ma. Such an approach should be “useful for [applications such as the] metaverse, which aims to reconstruct the physical world in a virtual environment,” adds Gan. “It is basically an everyday, available solution, that’s neat and simple, to cross-domain reconstruction or the inverse dynamics problem,” says Ma.

This research was supported, in part, by the MIT-IBM Watson AI Lab, Nexplore, DARPA Machine Common Sense program, Office of Naval Research (ONR), ONR MURI, and Mitsubishi Electric.



from MIT News https://ift.tt/cWisJQV

Engineers use artificial intelligence to capture the complexity of breaking waves

Waves break once they swell to a critical height, before cresting and crashing into a spray of droplets and bubbles. These waves can be as large as a surfer’s point break and as small as a gentle ripple rolling to shore. For decades, the dynamics of how and when a wave breaks have been too complex to predict.

Now, MIT engineers have found a new way to model how waves break. The team used machine learning along with data from wave-tank experiments to tweak equations that have traditionally been used to predict wave behavior. Engineers typically rely on such equations to help them design resilient offshore platforms and structures. But until now, the equations have not been able to capture the complexity of breaking waves.

The updated model made more accurate predictions of how and when waves break, the researchers found. For instance, the model estimated a wave’s steepness just before breaking, and its energy and frequency after breaking, more accurately than the conventional wave equations.

Their results, published today in the journal Nature Communications, will help scientists understand how a breaking wave affects the water around it. Knowing precisely how these waves interact can help hone the design of offshore structures. It can also improve predictions for how the ocean interacts with the atmosphere. Having better estimates of how waves break can help scientists predict, for instance, how much carbon dioxide and other atmospheric gases the ocean can absorb.

“Wave breaking is what puts air into the ocean,” says study author Themis Sapsis, an associate professor of mechanical and ocean engineering and an affiliate of the Institute for Data, Systems, and Society at MIT. “It may sound like a detail, but if you multiply its effect over the area of the entire ocean, wave breaking starts becoming fundamentally important to climate prediction.”

The study’s co-authors include lead author and MIT postdoc Debbie Eeltink, Hubert Branger and Christopher Luneau of Aix-Marseille University, Amin Chabchoub of Kyoto University, Jerome Kasparian of the University of Geneva, and T.S. van den Bremer of Delft University of Technology.

Learning tank

To predict the dynamics of a breaking wave, scientists typically take one of two approaches: They either attempt to precisely simulate the wave at the scale of individual molecules of water and air, or they run experiments to characterize waves with actual measurements. The first approach is computationally expensive and difficult to simulate even over a small area; the second requires a huge amount of time to run enough experiments to yield statistically significant results.

The MIT team instead borrowed pieces from both approaches to develop a more efficient and accurate model using machine learning. The researchers started with a set of equations that is considered the standard description of wave behavior. They aimed to improve this description by “training” the model on data of breaking waves from actual experiments.

“We had a simple model that doesn’t capture wave breaking, and then we had the truth, meaning experiments that involve wave breaking,” Eeltink explains. “Then we wanted to use machine learning to learn the difference between the two.”

The researchers obtained wave breaking data by running experiments in a 40-meter-long tank. The tank was fitted at one end with a paddle which the team used to initiate each wave. The team set the paddle to produce a breaking wave in the middle of the tank. Gauges along the length of the tank measured the water’s height as waves propagated down the tank.

“It takes a lot of time to run these experiments,” Eeltink says. “Between each experiment you have to wait for the water to completely calm down before you launch the next experiment, otherwise they influence each other.”

Safe harbor

In all, the team ran about 250 experiments, the data from which they used to train a type of machine-learning algorithm known as a neural network. Specifically, the algorithm is trained to compare the real waves in experiments with the predicted waves in the simple model, and based on any differences between the two, the algorithm tunes the model to fit reality.
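
Here is a minimal sketch of that train-on-the-difference idea, with a toy function standing in for the standard wave description and made-up data standing in for the tank measurements; it is not the study’s actual model. A small network learns the residual between the simple prediction and the “measurements,” and its output is added back as a correction.

```python
# A minimal sketch of training a correction on the model/experiment gap.
# The "measurements" and the simple model below are toy stand-ins, not the
# study's wave equations or tank data.
import numpy as np
import torch
import torch.nn as nn

def simple_model(x):
    # Stand-in for an idealized wave prediction that misses breaking effects.
    return np.sin(x)

x = np.linspace(0, 10, 500, dtype=np.float32)
measured = np.sin(x) + 0.2 * np.sin(3 * x)   # pretend tank data with extra physics

inputs = torch.from_numpy(x).unsqueeze(1)
residual = torch.from_numpy(measured - simple_model(x)).unsqueeze(1)

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    loss = (net(inputs) - residual).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The corrected prediction is the simple model plus the learned residual.
corrected = simple_model(x) + net(inputs).detach().numpy().ravel()
print("max |error| before:", float(np.max(np.abs(measured - simple_model(x)))))
print("max |error| after: ", float(np.max(np.abs(measured - corrected))))
```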

After training the algorithm on their experimental data, the team introduced the model to entirely new data — in this case, measurements from two independent experiments, each run at separate wave tanks with different dimensions. In these tests, they found the updated model made more accurate predictions than the simple, untrained model, for instance making better estimates of a breaking wave’s steepness.

The new model also captured an essential property of breaking waves known as the “downshift,” in which the frequency of a wave is shifted to a lower value. The speed of a wave depends on its frequency. For ocean waves, lower frequencies move faster than higher frequencies. Therefore, after the downshift, the wave will move faster. The new model predicts the change in frequency, before and after each breaking wave, which could be especially relevant in preparing for coastal storms.
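
For a rough sense of why the downshift matters for timing, deep-water linear wave theory (used here only as an illustrative assumption; the study’s tank and coastal conditions differ) gives a phase speed of c = g / (2πf), so a drop in frequency translates directly into a faster-moving wave:

```python
# Illustrative numbers only: deep-water linear wave theory gives a phase
# speed c = g / (2*pi*f), so a downshift to lower frequency f means a
# faster-moving wave. Real coastal and tank conditions differ.
import math

g = 9.81  # gravitational acceleration, m/s^2
for f in (0.20, 0.15, 0.10):  # wave frequencies in Hz, before and after downshift
    c = g / (2 * math.pi * f)
    print(f"f = {f:.2f} Hz -> phase speed ~ {c:.1f} m/s")
```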

“When you want to forecast when high waves of a swell would reach a harbor, and you want to leave the harbor before those waves arrive, then if you get the wave frequency wrong, then the speed at which the waves are approaching is wrong,” Eeltink says.

The team’s updated wave model is in the form of an open-source code that others could potentially use, for instance in climate simulations of the ocean’s potential to absorb carbon dioxide and other atmospheric gases. The code can also be worked into simulated tests of offshore platforms and coastal structures.

“The number one purpose of this model is to predict what a wave will do,” Sapsis says. “If you don’t model wave breaking right, it would have tremendous implications for how structures behave. With this, you could simulate waves to help design structures better, more efficiently, and without huge safety factors.”

This research is supported, in part, by the Swiss National Science Foundation, and by the U.S. Office of Naval Research.



from MIT News https://ift.tt/o8FnRzk

Thursday, April 28, 2022

Affordable prosthetics and orthotics to rival the world’s best devices

In 2014, Arun Cherian returned to his home country of India to help his sister with her wedding. By that time Cherian had earned his master’s in mechanical engineering at Columbia University, spent four years as a researcher at the University of California at Berkeley, and was pursuing his PhD at Purdue University, where he was studying the biomechanics of human locomotion. He looked over his childhood home with the fresh perspective of someone who had spent the better part of a decade working on engineering problems.

One thing that caught his eye was the cane furniture made from rattan trees that are ubiquitous in southern India. The furniture had been in his house for many years, yet with its complex geometry, it remained flexible and stable, like a spring. Cherian began wondering if the material could serve as the basis for prosthetic legs.

The idea sparked a journey that led Cherian to quit his PhD, spend years refining his approach, and ultimately launch Rise Bionics. Today Rise Bionics offers customized prosthetics and orthotics, not only for people missing limbs but also for people suffering from conditions like cerebral palsy, epilepsy, and scoliosis.

Clinical trials have shown the company’s products are comparable in quality to other leading models, while they are sold for a fraction of the cost.

“Rise Bionics has grown organically from the initial idea of creating lightweight, flexible prosthetic legs made out of cane to now making high-quality prosthetics — or even bionics — affordable and accessible to all,” Cherian says.

During that evolution Cherian says he’s been adopted by the MIT community. Courses and initiatives run by MIT’s D-Lab have provided training, mentorship, funding, and more to help Rise Bionics get to where it is today.

Rise has built devices for more than 500 people to date, and Cherian says the company is beginning to accelerate its growth across the globe now that disruptions caused by the pandemic are receding.

The path has not always been an easy one for Cherian, but he says the company’s ability to transform lives makes it all worth it.

“What it comes down to is nobody plans to be an amputee,” Cherian says. “Using technology, we’re able to help these people get back to their lives as fast as possible. A little girl we’ve been helping was born without both legs above the knees. The other day, her mother was sharing pictures of her hanging out with her school friends. Looking at the photos, you couldn’t say she was walking with two prosthetic legs; she seems to be enjoying a full childhood. That’s what we want for our patients: to help them get back to their lives.”

Setting a path

Less than a year after Cherian first had the idea for using cane to make prosthetics, a friend told him about D-Lab’s annual International Development and Design Summit (IDDS), which was taking place in India that year.

In the three-week program, Cherian worked through the D-Lab design philosophy, spending about two weeks conducting interviews and defining the problem he was trying to solve before building his product.

“You have mentors in the class, they’re students and people who have attended another IDDS, and they swore by this methodology,” Cherian recalls. “I’d been around the block and I remember thinking, ‘Really?’ But, my Lord, has [D-Lab Founding Director] Amy Smith and her team figured something out.”

IDDS was the beginning of a long relationship between Cherian and MIT. He went on to pitch problems his company was facing to students in the D-Lab courses 2.729 (Design for Scale) and EC.722 (Prosthetics for the Developing World). Groups of students worked on the problems during the semester, and three ended up flying to India to intern for a summer and test their ideas.

In 2016, Cherian was selected for a D-Lab Scale Ups Fellowship, which provided financial support, mentorship, and networking to help Rise Bionics scale. Cherian had bootstrapped the company to that point and calls the fellowship “hugely instrumental in helping us get to where we are today.”

“I’m extremely thankful to D-Lab and MIT,” Cherian says. “I’m not an alum, so for them to be generous enough to extend their resources to me is a testament to the fantastic culture at MIT.”

Later in 2016, a team from Rise Bionics traveled to Switzerland to participate in the Cybathlon, in which athletes wearing the world’s best prosthetics compete in athletic events. Most companies make advanced devices especially for the competition — Cherian says the European prosthetic giant Ossur, for instance, came with a $100,000 prosthetic leg. Nevertheless, the runners wearing Rise’s $300 device won two of the three races and clocked the fastest time at the event.

Today Rise Bionics does much more than make prosthetic legs. In fact, the company has developed an entire workflow for fitting patients with custom devices in a matter of hours instead of weeks. First, Rise trains paramedical professionals to use its handheld scanner to take measurements of patients at their home or in the neighborhood hospital. Then Rise uses an algorithm to design the custom mesh that sits between the patient’s body and the device. Rise has a central manufacturing facility where it produces and ships its devices, which can be made using cane or more traditional prosthetic and orthotic materials. Cherian says Rise can produce well over 40 custom devices per week.

“It is unheard of to be that fast, and the fit is great,” Cherian says. “Patients that have been using other devices for decades are used to multiple fitting sessions spread over days, but our fitting session takes 10 to 15 minutes, and they say, ‘Is that it?’”

The majority of Rise’s patients need prosthetic legs, but Cherian says about 40 percent are people with epilepsy, cerebral palsy, paralysis, and other congenital conditions that benefit from orthotics (which correct biomechanical issues).

Continued growth

Cherian says the company’s products are typically 30 to 50 percent the cost of competitors’, which makes for some unique scenes at the hospitals that Rise Bionics partners with. They’ve seen a rickshaw driver come for a fitting followed by an affluent patient in a Mercedes.

“We are really proud that we serve patients from corporate hospitals, five-star-like hospitals, and community hospitals,” Cherian says. “And everyone is given the same portfolio of devices.”

After hearing about government hospitals struggling to provide prosthetics and orthotics to special needs children from low-income communities during the pandemic, Rise started working with its wealthier patients and donors to facilitate device sponsorships. The resulting Help A Child Walk program has helped more than 90 children get assistive devices at no cost.

Rise, which is currently only operating in India, was forced to delay plans to scale during the pandemic, but Cherian says it has treated more than 120 patients in the last two months alone, and the company is working to establish partnerships in the Middle East, Africa, Brazil, and North America.

Looking forward, Cherian plans to use Rise’s platform to move into injury prevention — think custom insoles and seats — as well as exoskeletal suits, the subject of his PhD research. He believes revenue from that work will help the company scale its assistive business.

“The goal is bionics for all, and we want to make it as affordable and accessible as possible,” Cherian says. “The last thing I want is to financially burden any of these people. We want to be a great company, where we make money that we can use to do more good.”



from MIT News https://ift.tt/vgdKlUm

Material designed to improve power plant efficiency wins 2022 Water Innovation Prize

The winner of this year’s Water Innovation Prize is a company commercializing a material that could dramatically improve the efficiency of power plants.

The company, Mesophase, is developing a more efficient power plant steam condenser that leverages a surface coating developed in the lab of Evelyn Wang, MIT’s Ford Professor of Engineering and the head of the Department of Mechanical Engineering. Such condensers, which convert steam into water, sit at the heart of the energy extraction process in most of the world’s power plants.

In the winning pitch, company founders said they believe their low-cost, durable coating will improve the heat transfer performance of such condensers.

“What makes us excited about this technology is that in the condenser field, this is the first time we’ve seen a coating that can last long enough for industrial applications and be made with a high potential to scale up,” said Yajing Zhao SM ’18, who is currently a PhD candidate in mechanical engineering at MIT. “When compared to what’s available in academia and industry, we believe you’ll see record performance in terms of both heat transfer and lifetime.”

In most power plants, condensers cool steam to turn it into water. The pressure change caused by that conversion creates a vacuum that pulls steam through a turbine. Mesophase’s patent-pending surface coating improves condensers’ ability to transfer heat, thus allowing operators to extract power more efficiently.

Based on lab tests, the company predicts it can increase power plant output by up to 7 percent using existing infrastructure. Because steam condensers are used around the world, this advance could help increase global electricity production by 500 terawatt hours per year, which is equivalent to the electricity supply for about 1 billion people.

The efficiency gains will also lead to less water use. Water sent from cooling towers is a common means of keeping condensers cool. The company estimates its system could reduce fresh water withdrawals by the equivalent of what is used by 50 million people per year.

After running pilots, the company believes the new material could be installed in power plants during the regularly scheduled maintenance that occurs every two to five years. The company is also planning to work with existing condenser manufacturers to get to market faster.

“This all works because a condenser with our technology in it has significantly more attractive economics than what you find in the market today,” says Mesophase’s Michael Gangemi, an MBA candidate at MIT’s Sloan School of Management.

The company plans to start in the U.S. geothermal space, where Mesophase estimates its technology is worth about $800 million a year.

“Much of the geothermal capacity in the U.S. was built in the ’50s and ’60s,” Gangemi said. “That means most of these plants are operating way below capacity, and they invest frequently in technology like ours just to maintain their power output.”

The company will use the prize money, in part, to begin testing in a real power plant environment.

“We are excited about these developments, but we know that they are only first steps as we move toward broader energy applications,” Gangemi said.

MIT’s Water Innovation Prize helps translate water-related research and ideas into businesses and impact. Each year, student-led finalist teams pitch their innovations to students, faculty, investors, and people working in various water-related industries.

This year’s event, held in a virtual hybrid format in MIT’s Media Lab, included five finalist teams. The second-place $15,000 award was given to Livingwater Systems, which provides portable rainwater collection and filtration systems to displaced and off-grid communities.

The company’s product consists of a low-cost mesh that goes on roofs to collect the water and a collapsible storage unit that incorporates a sediment filter. The water becomes drinkable after applying chlorine tablets to the storage unit.

“Perhaps the single greatest attraction of our units is their elegance and simplicity,” Livingwater CEO Joshua Kao said in the company’s pitch. “Anyone can take advantage of their easy, do-it-yourself setup without any preexisting knowhow.”

The company says the system works on the pitched roofs used in many off-grid settlements, refugee camps, and slums. The entire unit fits inside a backpack.

The team also notes existing collection systems cost thousands of dollars, require expert installation, and can’t be attached to surfaces like tents. Livingwater is aiming to partner with nongovernmental organizations and nonprofit entities to sell its systems for $60 each, which would represent significant cost savings when compared to alternatives like busing water into settlements.

The company will be running a paid pilot with the World Food Program this fall.

“Support from MIT will be crucial for building the core team on the ground,” said Livingwater’s Gabriela Saade, a master’s student in public policy at the University of Chicago. “Let’s begin to realize a new era of water security in Latin America and across the globe.”

The third-place $10,000 prize went to Algeon Materials, which is creating sustainable and environmentally friendly bioplastics from kelp. Algeon also won the $5,000 audience choice award for its system, which doesn’t require water, fertilizer, or land to produce.

The other finalists were:

  • Flowless, which uses artificial intelligence and an internet of things (IoT) platform to detect leaks and optimize water-related processes to reduce waste;
  • Hydrologistics Africa Ltd, a platform to help consumers and utilities manage their water consumption; and
  • Watabot, which is developing autonomous, artificial intelligence-powered systems to monitor harmful algae in real time and predict algae activity.

Each year the Water Innovation Prize, hosted by the MIT Water Club, awards up to $50,000 in grants to teams from around the world. This year’s program received over 50 applications. A group of 20 semifinalist teams spent one month working with mentors to refine their pitches and business plans, and the final field of finalists received another month of mentorship.

The Water Innovation Prize started in 2015 and has awarded more than $275,000 to 24 different teams to date.



from MIT News https://ift.tt/rQJgzkP

How can we reduce the carbon footprint of global computing?

The voracious appetite for energy from the world’s computers and communications technology presents a clear threat for the globe’s warming climate. That was the blunt assessment from presenters in the intensive two-day Climate Implications of Computing and Communications workshop held on March 3 and 4, hosted by MIT’s Climate and Sustainability Consortium (MCSC), MIT-IBM Watson AI Lab, and the Schwarzman College of Computing.

The virtual event featured rich discussions and highlighted opportunities for collaboration among an interdisciplinary group of MIT faculty and researchers and industry leaders across multiple sectors — underscoring the power of academia and industry coming together.

“If we continue with the existing trajectory of compute energy, by 2040, we are supposed to hit the world’s energy production capacity. The increase in compute energy and demand has been increasing at a much faster rate than the world energy production capacity increase,” said Bilge Yildiz, the Breene M. Kerr Professor in the MIT departments of Nuclear Science and Engineering and Materials Science and Engineering, one of the workshop’s 18 presenters. This computing energy projection draws from the Semiconductor Research Corporation’s decadal report.

To cite just one example: Information and communications technology already accounts for more than 2 percent of global energy demand, which is on a par with the aviation industry’s emissions from fuel.

“We are at the very beginning of this data-driven world. We really need to start thinking about this and act now,” said presenter Evgeni Gousev, senior director at Qualcomm.

Innovative energy-efficiency options

To that end, the workshop presentations explored a host of energy-efficiency options, including specialized chip design, data center architecture, better algorithms, hardware modifications, and changes in consumer behavior. Industry leaders from AMD, Ericsson, Google, IBM, iRobot, NVIDIA, Qualcomm, Tertill, Texas Instruments, and Verizon outlined their companies’ energy-saving programs, while experts from across MIT provided insight into current research that could yield more efficient computing.

Panel topics ranged from “Custom hardware for efficient computing” to “Hardware for new architectures” to “Algorithms for efficient computing,” among others.

The goal, said Yildiz, is to improve energy efficiency associated with computing by more than a million-fold.

“I think part of the answer of how we make computing much more sustainable has to do with specialized architectures that have very high level of utilization,” said Darío Gil, IBM senior vice president and director of research, who stressed that solutions should be as “elegant” as possible.  

For example, Gil illustrated an innovative chip design that uses vertical stacking to reduce the distance data has to travel, and thus reduces energy consumption. Surprisingly, more effective use of tape — a traditional medium for primary data storage — combined with specialized hard drives (HDD), can yield dramatic savings in carbon dioxide emissions.

Gil and presenters Bill Dally, chief scientist and senior vice president of research of NVIDIA; Ahmad Bahai, CTO of Texas Instruments; and others zeroed in on storage. Gil compared data to a floating iceberg in which we can have fast access to the “hot data” of the smaller visible part while the “cold data,” the large underwater mass, represents data that tolerates higher latency. Think about digital photo storage, Gil said. “Honestly, are you really retrieving all of those photographs on a continuous basis?” Storage systems should provide an optimized mix of HDD for hot data and tape for cold data based on data access patterns.
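
A minimal sketch of that tiering logic follows; the object names, access counts, and threshold are illustrative assumptions, not any particular vendor’s policy.

```python
# A minimal sketch of hot/cold tiering: frequently accessed data goes to HDD,
# rarely accessed data to tape. Names, counts, and the threshold are
# illustrative assumptions, not any particular vendor's policy.
from dataclasses import dataclass

@dataclass
class StoredObject:
    name: str
    accesses_last_90_days: int

def choose_tier(obj: StoredObject, hot_threshold: int = 10) -> str:
    # "Hot" data needs low latency; "cold" data tolerates slow retrieval.
    return "hdd" if obj.accesses_last_90_days >= hot_threshold else "tape"

library = [
    StoredObject("family_photos_2015", 0),
    StoredObject("active_project_data", 250),
]
for obj in library:
    print(obj.name, "->", choose_tier(obj))
```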

Bahai stressed the significant energy savings gained from segmenting standby and full processing. “We need to learn how to do nothing better,” he said. Dally spoke of mimicking the way our brain wakes up from a deep sleep: “We can wake [computers] up much faster, so we don’t need to keep them running at full speed.”

Several workshop presenters spoke of a focus on “sparsity,” meaning matrices in which most of the elements are zero, as a way to improve efficiency in neural networks. Or as Dally said, “Never put off till tomorrow, where you could put off forever,” explaining that efficiency is not “getting the most information with the fewest bits. It’s doing the most with the least energy.”
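
To see where the savings come from, consider a matrix-vector product: only the nonzero weights require multiply-accumulate operations, so a weight matrix that is mostly zeros does proportionally less arithmetic. The sketch below uses illustrative sizes and density.

```python
# A minimal sketch of why sparsity saves work: a matrix-vector product only
# needs multiply-accumulates for the nonzero weights. Sizes and density are
# illustrative.
import numpy as np
from scipy.sparse import random as sparse_random

dense_w = np.random.rand(1024, 1024)
sparse_w = sparse_random(1024, 1024, density=0.05, format="csr")  # 95% zeros
x = np.random.rand(1024)

dense_macs = dense_w.size    # every element contributes a multiply-accumulate
sparse_macs = sparse_w.nnz   # only the ~5 percent nonzero weights do
print(f"dense MACs: {dense_macs:,}  sparse MACs: {sparse_macs:,}")

y = sparse_w @ x             # scipy skips the zero entries automatically
```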

Holistic and multidisciplinary approaches

“We need both efficient algorithms and efficient hardware, and sometimes we need to co-design both the algorithm and the hardware for efficient computing,” said Song Han, a panel moderator and assistant professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT.

Some presenters were optimistic about innovations already underway. According to Ericsson’s research, as much as 15 percent of global carbon emissions could be reduced through the use of existing solutions, noted Mats Pellbäck Scharp, head of sustainability at Ericsson. For example, GPUs are more efficient than CPUs for AI, and the progression from 3G to 5G networks boosts energy savings.

“5G is the most energy efficient standard ever,” said Scharp. “We can build 5G without increasing energy consumption.”

Companies such as Google are optimizing energy use at their data centers through improved design, technology, and renewable energy. “Five of our data centers around the globe are operating near or above 90 percent carbon-free energy,” said Jeff Dean, Google’s senior fellow and senior vice president of Google Research.

Yet, pointing to the possible slowdown in the doubling of transistors in an integrated circuit — or Moore’s Law — “We need new approaches to meet this compute demand,” said Sam Naffziger, AMD senior vice president, corporate fellow, and product technology architect. Naffziger spoke of addressing performance “overkill.” For example, “we’re finding in the gaming and machine learning space we can make use of lower-precision math to deliver an image that looks just as good with 16-bit computations as with 32-bit computations, and instead of legacy 32b math to train AI networks, we can use lower-energy 8b or 16b computations.”
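
The memory side of that precision trade-off is easy to illustrate; the sketch below uses arbitrary matrix sizes, and the real energy savings depend on hardware support for low-precision arithmetic.

```python
# A minimal sketch of the precision trade-off: float16 halves memory per value
# and introduces a small error in a matrix product. Sizes are arbitrary, and
# real energy savings depend on hardware support for low-precision math.
import numpy as np

a32 = np.random.rand(512, 512).astype(np.float32)
a16 = a32.astype(np.float16)  # half the bytes per element

print("float32 MB:", a32.nbytes / 1e6, " float16 MB:", a16.nbytes / 1e6)

ref = a32 @ a32
low = (a16 @ a16).astype(np.float32)
rel_err = float(np.max(np.abs(ref - low) / np.abs(ref)))
print("max relative error from 16-bit math:", rel_err)
```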

Other presenters singled out compute at the edge as a prime energy hog.

“We also have to change the devices that are put in our customers’ hands,” said Heidi Hemmer, senior vice president of engineering at Verizon. As we think about how we use energy, it is common to jump to data centers — but it really starts at the device itself, and the energy that the devices use. Then, we can think about home web routers, distributed networks, the data centers, and the hubs. “The devices are actually the least energy-efficient out of that,” concluded Hemmer.

Some presenters had different perspectives. Several called for developing dedicated silicon chipsets for efficiency. However, panel moderator Muriel Medard, the Cecil H. Green Professor in EECS, described research at MIT, Boston University, and Maynooth University on the GRAND (Guessing Random Additive Noise Decoding) chip, saying, “rather than having obsolescence of chips as the new codes come in and in different standards, you can use one chip for all codes.”

Whatever the chip or new algorithm, Helen Greiner, CEO of Tertill (a weeding robot) and co-founder of iRobot, emphasized that to get products to market, “We have to learn to go away from wanting to get the absolute latest and greatest, the most advanced processor that usually is more expensive.” She added, “I like to say robot demos are a dime a dozen, but robot products are very infrequent.”

Greiner emphasized consumers can play a role in pushing for more energy-efficient products — just as drivers began to demand electric cars.

Dean also sees an environmental role for the end user.

“We have enabled our cloud customers to select which cloud region they want to run their computation in, and they can decide how important it is that they have a low carbon footprint,” he said, also citing other interfaces that might allow consumers to decide which air flights are more efficient or what impact installing a solar panel on their home would have.

However, Scharp said, “Prolonging the life of your smartphone or tablet is really the best climate action you can do if you want to reduce your digital carbon footprint.”

Facing increasing demands

Despite their optimism, the presenters acknowledged the world faces increasing compute demand from machine learning, AI, gaming, and especially, blockchain. Panel moderator Vivienne Sze, associate professor in EECS, noted the conundrum.

“We can do a great job in making computing and communication really efficient. But there is this tendency that once things are very efficient, people use more of it, and this might result in an overall increase in the usage of these technologies, which will then increase our overall carbon footprint,” Sze said.

Presenters saw great potential in academic/industry partnerships, particularly from research efforts on the academic side. “By combining these two forces together, you can really amplify the impact,” concluded Gousev.

Presenters at the Climate Implications of Computing and Communications workshop also included: Joel Emer, professor of the practice in EECS at MIT; David Perreault, the Joseph F. and Nancy P. Keithley Professor of EECS at MIT; Jesús del Alamo, MIT Donner Professor and professor of electrical engineering in EECS at MIT; Heike Riel, IBM Fellow and head of science and technology at IBM; and Takashi Ando, principal research staff member at IBM Research. The recorded workshop sessions are available on YouTube.



from MIT News https://ift.tt/6sRFcDI

Aging Brain Initiative awards fund five new ideas to study, fight neurodegeneration

Neurodegenerative diseases are defined by an increasingly widespread and debilitating death of nervous system cells, but they also share other grim characteristics: Their cause is rarely discernible and they have all eluded cures. To spur fresh, promising approaches and to encourage new experts and expertise to join the field, MIT’s Aging Brain Initiative (ABI) this month awarded five seed grants after a competition among labs across the Institute.

Founded in 2015 by nine MIT faculty members, the ABI promotes research, symposia, and related activities to advance fundamental insights that can lead to clinical progress against neurodegenerative conditions, such as Alzheimer’s disease, with an age-related onset. With an emphasis on spurring research at an early stage before it is established enough to earn more traditional funding, the ABI derives support from philanthropic gifts.

“Solving the mysteries of how health declines in the aging brain and turning that knowledge into effective tools, treatments, and technologies is of the utmost urgency given the millions of people around the world who suffer with no meaningful treatment options,” says ABI director and co-founder Li-Huei Tsai, the Picower Professor of Neuroscience in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences. “We were very pleased that many groups across MIT were eager to contribute their expertise and creativity to that goal. From here, five teams will be able to begin testing their innovative ideas and the impact they could have.”

To address the clinical challenge of accurately assessing cognitive decline during Alzheimer’s disease progression and healthy aging, a team led by Thomas Heldt, associate professor of electrical and biomedical engineering in the Department of Electrical Engineering and Computer Science (EECS) and the Institute for Medical Engineering and Science, proposes to use artificial intelligence tools to bring diagnostics based on eye movements during cognitive tasks to everyday consumer electronics such as smartphones and tablets. By moving these capabilities to common at-home platforms, the team, which also includes EECS Associate Professor Vivian Sze, hopes to increase monitoring beyond what can only be intermittently achieved with high-end specialized equipment and dedicated staffing in specialists’ offices. The team will pilot their technology in a small study at Boston Medical Center in collaboration with neurosurgeon James Holsapple.

Institute Professor Ann Graybiel’s lab in the Department of Brain and Cognitive Sciences (BCS) and the McGovern Institute for Brain Research will test the hypothesis that mutations on a specific gene may lead to the early emergence of Alzheimer’s disease (AD) pathology in the striatum. That’s a brain region crucial for motivation and movement that is directly and severely impacted by other neurodegenerative disorders including Parkinson’s and Huntington’s diseases, but that has largely been unstudied in Alzheimer’s. By editing the mutations into normal and AD-modeling mice, Research Scientist Ayano Matsushima and Graybiel hope to determine whether and how pathology, such as the accumulation of amyloid proteins, may result. Determining that could provide new insight into the progression of disease and introduce a new biomarker in a region that virtually all other studies have overlooked.

Numerous recent studies have highlighted a potential role for immune inflammation in Alzheimer’s disease. A team led by Gloria Choi, the Mark Hyman Jr. Associate Professor in BCS and The Picower Institute for Learning and Memory, will track one potential source of such activity by determining whether the meninges, the membranes that envelop the brain, become a conduit for immune cells activated by gut bacteria to circulate near the brain, where they may release signaling molecules that promote Alzheimer’s pathology. Working in mice, Choi’s lab will test whether such activity is prone to increase in Alzheimer’s and whether it contributes to disease.

A collaboration led by Peter Dedon, the Singapore Professor in MIT’s Department of Biological Engineering, will explore whether Alzheimer’s pathology is driven by dysregulation of transfer RNAs (tRNAs) and the dozens of natural tRNA modifications in the epitranscriptome, which play a key role in the process by which proteins are assembled based on genetic instructions. With Benjamin Wolozin of Boston University, Sherif Rashad of Tohoku University in Japan, and Thomas Begley of the State University of New York at Albany, Dedon will assess how the tRNA pool and epitranscriptome may differ in Alzheimer’s model mice and whether genetic instructions mistranslated because of tRNA dysregulation play a role in Alzheimer’s disease.

With her seed grant, Ritu Raman, the d’Arbeloff Assistant Professor of Mechanical Engineering, is launching an investigation of possible disruption of intercellular messages in amyotrophic lateral sclerosis (ALS), a terminal condition in which the degeneration of motor neurons causes loss of muscle control. Equipped with a new tool to finely sample interstitial fluid within tissues, Raman’s team will be able to monitor and compare cell-cell signaling in models of the junction between nerve and muscle. These models will be engineered from stem cells derived from patients with ALS. By studying biochemical signaling at the junction, the lab hopes to discover new targets that could be therapeutically modified.

Major support for the seed grants, which provide each lab with $100,000, came from generous gifts by David Emmes SM ’76; Kathleen SM ’77, PhD ’86 and Miguel Octavio; the Estate of Margaret A. Ridge-Pappis, wife of the late James Pappis ScD ’59; the Marc Haas Foundation; and the family of former MIT President Paul Gray ’54, SM ’55, ScD ’60, with additional funding from many annual fund donors to the Aging Brain Initiative Fund.



from MIT News https://ift.tt/Mx0k4NJ

Given what we know, how do we live now?

To truly engage the climate crisis, as so many at MIT are doing, can be daunting and draining. But it need not be lonely. Building collective insight and companionship for this undertaking is the aim of the Council on the Uncertain Human Future (CUHF), an international network launched at Clark University in 2014 and active at MIT since 2020.

Gathering together in council circles of 8-12 people, MIT community members make space to examine — and even to transform — their questions and concerns about climate change. Through a practice of intentional conversation in small groups, the council calls participants to reflect on our human interdependence with each other and the natural world, and on where we are in both social and planetary terms. It urges exploration of how we got here and what that means, and culminates by asking: Given what we know, how do we live now?

Origins

CUHF developed gradually in conversations between co-founders Sarah Buie and Diana Chapman Walsh, who met when they were, respectively, the director of Clark’s Higgins School of Humanities and the president of Wellesley College. Buie asked Walsh to keynote a Ford-funded Difficult Dialogues initiative in 2006. In the years and conversations that followed, they concluded that the most difficult dialogue wasn’t happening: an honest engagement with the realities and implications of a rapidly heating planet Earth.

With social scientist Susi Moser, they chose the practice of council, a blend of both modern and traditional dialogic forms, and began with a cohort of 12 environmental leaders willing to examine the gravest implications of climate change in a supportive setting — what Walsh calls “a kind of container for a deep dive into dark waters.” That original circle met in three long weekends over 2014 and continues today as the original CUHF Steady Council.

Taking root at MIT

Since then, the Council on the Uncertain Human Future has grown into an international network, with circles at universities, research centers, and other communities across the United States and in Scotland and Kathmandu. The practice took root at MIT (where Walsh is a life member emerita of the MIT Corporation) in 2020.

Leadership and communications teams in the MIT School of Humanities, Arts and Social Sciences (SHASS) Office of the Dean and the Environmental Solutions Initiative (ESI) recognized the need the council could meet on a campus buzzing with research and initiatives aimed at improving the health of the planet. Joining forces with the council leadership, the two MIT groups collaborated to launch the program at MIT, inviting participants from across the Institute, and sharing information on the MIT Climate Portal.
 
Intentional conversations

“The council gives the MIT community the kind of deep discourse that is so necessary to face climate change and a rapidly changing world,” says ESI director and professor of architecture John Fernández. “These conversations open an opportunity to create a new kind of breakthrough of mindsets. It's a rare chance to pause and ask: Are we doing the things we should be doing, given MIT’s mission to the nation and the world, and given the challenges facing us?”

As the CUHF practice spreads, agendas expand to acknowledge changing times; the group produces films and collections of readings, curates an online resource site, and convenes international Zoom events for members on a range of topics, many of which interact with climate, including racism and Covid-19. But its core activity remains the same: an intentional, probing conversation over time. There are no preconceived objectives, only a few simple guidelines: speak briefly, authentically, and spontaneously, moving around the circle; listen with attention and receptivity; observe confidentiality. “Through this process of honest speaking and listening, insight arises and trustworthy community is built,” says Buie.

While these meetings were held in person before 2020, the full council experience pivoted to Zoom at the start of the pandemic with two-hour discussions forming an arc over a period of five weeks. Sessions begin with a call for participants to slow down and breathe, grounding themselves for the conversation. The convener offers a series of questions that elicit spontaneous responses, concerns, and observations; later, they invite visioning of new possibilities.
 
Inviting emergent possibility

While the process may yield tangible outcomes — for example, a curriculum initiative at Clark called A New Earth Conversation — its greatest value, according to Buie, “is the collective listening, acknowledgment, and emergent possibility it invites. Given the profound cultural misunderstandings and misalignments behind it, climate breakdown defies normative approaches to ‘problem-solving.’ The Council enables us to live into the uncertainty with more awareness, humility, curiosity, and compassion. Participants feel the change; they return to their work and lives differently, and less alone.”

Roughly 60 faculty and staff from across MIT, all engaged in climate-related work, have participated so far in council circles. The 2021 edition of the Institute’s Climate Action Plan provides for the expansion of councils at MIT to deepen humanistic understanding of the climate crisis. The conversations are also a space for engaging with how the climate crisis is related to what the plan calls “the imperative of justice” and “the intertwined problems of equity and economic transition.”

Reflecting on the growth of the council's humanistic practice at MIT, Agustín Rayo, professor of philosophy and the Kenan Sahin Dean of MIT SHASS, says: “The council conversations about the future of our species and the planet are an invaluable contribution to MIT’s ‘whole-campus’ focus on the climate crisis.”

Growing the council at MIT means broadening participation. Postdocs will join a new circle this fall, with opportunities for student involvement soon to follow. More than a third of MIT’s prior council participants have continued with monthly Steady Council meetings, which sometimes reference recent events while deepening the council practice at MIT. The session in December 2021, for example, began with reports from MIT community members who had attended the COP26 UN climate change conference in Glasgow, then broke into council circles to engage the questions raised.

Cognitive leaps

The MIT Steady Council is organized by Curt Newton, director of MIT OpenCourseWare and an early contributor to the online platform that became the Institute’s Climate Portal. Newton sees a productive tension between MIT’s culture of problem-solving and the council’s call for participants to slow down and question the paradigms in which they operate. “It can feel wrong, or at least unfamiliar, to put ourselves in a mode where we’re not trying to create an agenda and an action plan,” he says. “To get us to step back from that and think together about the biggest picture before we allow ourselves to be pulled into that solution mindset  — it’s a necessary experiment for places like MIT.”

Over the past decade, Newton says, he has searched for ways to direct his energies toward environmental issues “with one foot firmly planted at MIT and one foot out in the world.” The silo-busting personal connections he’s made with colleagues through the council have empowered him “to show up with my full climate self at work.”

Walsh finds it especially promising to see CUHF taking root at MIT, “a place of intensity, collaboration, and high ideals, where the most stunning breakthroughs occur when someone takes a step back, stops the action, changes the trajectory for a time and begins asking new questions that challenge received wisdom.” She sees council as a communal practice that encourages those cognitive leaps. “If ever there were a moment in history that cried out for a paradigm shift,” she says, “surely this is it.”

Funding for the Council on the Uncertain Human Future comes from the Christopher Reynolds Foundation and the Kaiser Family Foundation.

Prepared by MIT SHASS Communications
Editorial team: Nicole Estvanik Taylor and Emily Hiestand



from MIT News https://ift.tt/JrB2m7I

Wednesday, April 27, 2022

Multiplying the MIT $100K’s impact

In two weeks, students will gather in Kresge Auditorium for the 26th annual MIT $100K Entrepreneurship Competition. The event has served as a springboard for a number of iconic companies over the years. But the full impact of the $100K competition has been far wider.

For more than 20 years, the $100K format — which includes mentorship, funding, and support services for teams before the final pitch competition — has also been replicated around the world.

Started by MIT students and alumni with $100K connections, these competitions have cumulatively helped entrepreneurs start thousands of companies that have gone on to raise billions of dollars. They have also helped build innovation ecosystems that have transformed local economies.

The student and alumni-led initiatives have been supported by local governments, other universities, and private organizations. MIT has also supported the replication of the $100K competition through programs such as the Global Startup Workshop (GSW) and the Regional Entrepreneurship Accelerator Program (REAP).

In some cases, the initiatives have taken on a life of their own after MIT community members got them started. Others have shuttered over time, although organizers say they led to positive changes in perceptions around entrepreneurship. All have been driven by a desire to bring MIT’s unique entrepreneurial mindset to other regions.

“Business competitions like the $100K can be very powerful, especially in regions that don’t have much of a startup culture, where you really want to galvanize young people to participate in entrepreneurship,” says Fiona Murray, associate dean of innovation and inclusion and the William Porter Professor of Entrepreneurship at MIT Sloan. “Seeing a bunch of young people with diverse backgrounds up on stage presenting new ideas gets people to say, ‘Someone like me could go and do that.’”

A student-run history

MIT students were also the driving force behind the original $100K competition. Students in MIT’s Entrepreneurship Club originally conceived of the business-plan competition in 1989, setting a goal of a $1,000 grand prize before earning enough support to expand it to $10,000 in the inaugural year.

The competition was an immediate hit, and within a few years participants began wondering if the model could spur entrepreneurial activity away from MIT’s campus.

In the mid 1990s, student organizers of what was by then the $50K competition started the Global Startup Workshop to support people in other regions interested in starting similar competitions. Today GSW is an independent, student-run conference and has held workshops focused on bolstering entrepreneurial ecosystems on six continents with participants from over 70 countries.

Around the time the GSW began, Juan Martinez-Barea MBA ’98 was working on the $50K competition organizing team.

“Thanks to that experience, I found my purpose in life,” Martinez-Barea says. “I came to MIT as an engineer, but I discovered a love for entrepreneurship.”

Martinez-Barea decided to bring the model to his hometown of Seville in Andalusia, Spain. He worked with Ken Morse, the former head of the Martin Trust Center for Entrepreneurship, and partnered with Sally Shepard MBA ’98 to launch the competition. Martinez-Barea was amazed at the reception he got when he pitched the idea to students, investors, universities, and companies.

Murray says collaboration between different stakeholders is one of the program’s biggest benefits.

“It’s a beacon,” she says of the $100K. “It attracts motivated people, gives them a timeline, helps them build a network and teams, provides mentorship, etc. It has all the elements you’d need to build a really effective innovation ecosystem.”

In its first year, in 1999, the Andalusia competition attracted 300 entrepreneurs with business ideas in areas ranging from microelectronics to biotechnology, artificial intelligence, and robotics. It also received a large amount of media attention — Martinez-Barea says the most prominent newspaper in Spain ran a picture of the competition on its front page with the headline “The Spanish Silicon Valley.” More than 100 startups were launched from the competition over the ensuing years.

Around that time, another group of $100K organizers at MIT, including Victor Mallet ’02, started the Ghana New Ventures competition. They received funding from MIT to host the first competition over MIT’s Independent Activities Period in 2001. The event taught university students how to pursue business ideas and connected them to mentors.

“Working on the competition at MIT was the most inspiring thing I did as an undergraduate,” Mallet says. “I wanted to see if it would also work in Ghana and inspire people there, and I think it did. [Entrepreneurial thinking] was a brand new thing in Ghana. People were really excited about it.”

Miguel Palacios MBA ’99 participated in the $50K competition (which would expand to $100K a year later) as a student at MIT. In 2003, after a few years in management consulting, he began working to establish an entrepreneurial ecosystem at the Technical University of Madrid. Deciding on one of his first initiatives was easy. The entrepreneurship competition he helped build, called actúaupm, is currently in its 19th year and has helped create more than 300 companies. Palacios says hundreds of teams participate and around 20 companies emerge from it each year.

“The key [to the competition] is the phases,” he says. “The initial phase is very low-risk and you can play around with your idea. With other models like incubators, people are deciding who gets in and who doesn’t. With the competition, you permit everybody to participate in the ecosystem, so you’re bringing in a much more diverse pool of people with different ideas and capabilities. You are also letting people see that entrepreneurship may be a career option that can generate progress and wealth.”

In 2004, Neil Ruiz PhD ’14 and other students started the Philippine Entrepreneurship Startups Open (PESO). The group received support from MIT’s PKG Public Service Center to travel to the Philippines to establish local partnerships.

“My Filipino classmates and I were asking what we could do to incentivize staying in the country,” Ruiz recalls.

The team was able to get some of the most prominent business leaders in the Philippines to judge the first year’s event, and the winners got to ring the bell at the Philippines Stock Exchange the day after winning.

“It was a way of helping the entrepreneurs set big goals for themselves,” Ruiz says. “There were some really good ideas right away. It was so inspiring.”

These types of entrepreneurship competitions can make a big impact in places where entrepreneurship isn’t as common as it is in the U.S., says Mallet.

“Other places can have cultural barriers to entrepreneurship, so it helps those places to have someone who’s been exposed to the MIT and American way of doing things take that approach back to those communities,” Mallet says.

Murray, who has helped set up $100K-like competitions in regions around the world as part of MIT REAP, agrees that the $100K format can bolster entrepreneurial thinking.

“One of the most powerful results of the $100K is inspiring culture change,” Murray says. “Even though only a small fraction of the things that get pitched move forward, it begins to show young people the art of the possible.”

Multiplying MIT’s impact

In 2007, the Global Startup Workshop spun out of the $100K to become an independent organization run by MIT students. One of GSW’s organizers at the time, John Harthorne MBA ’07, who was also part of the winning $100K team that year, went on to found MassChallenge, a global startup accelerator that to date has helped nearly 3,000 companies cumulatively raise $8.6 billion.

MassChallenge is one of many initiatives with direct ties to the $100K that are still operating today. In addition to Palacios’ competition, which was eventually taken over by his former university in Madrid, PESO was adopted by the Ayala Foundation to provide more stable funding.

The efforts show the crucial role of students in exporting MIT’s approach to entrepreneurship to the world. In that process, they’ve multiplied MIT’s impact in ways that are difficult to quantify.

Martinez-Barea, for instance, is still getting contacted by people interested in replicating his Andalusia competition more than 20 years later. He says many regional governments have replicated the format to spur entrepreneurship in their economies.

“It was a matter of social responsibility in my case,” Martinez-Barea explains. “I was interested in creating wealth and prosperity in Spain through this, and it became an engine of economic development. I think the reason others have replicated [the $100K model] is simple: Because it works.”



from MIT News https://ift.tt/PreX4wo

From seawater to drinking water, with the push of a button

MIT researchers have developed a portable desalination unit, weighing less than 10 kilograms, that can remove particles and salts to generate drinking water.

The suitcase-sized device, which requires less power to operate than a cell phone charger, can also be driven by a small, portable solar panel, which can be purchased online for around $50. It automatically generates drinking water that exceeds World Health Organization quality standards. The technology is packaged into a user-friendly device that runs with the push of one button.

Unlike other portable desalination units that require water to pass through filters, this device uses electrical power to remove particles from the water. Eliminating the need for replacement filters greatly reduces the long-term maintenance requirements.

This could enable the unit to be deployed in remote and severely resource-limited areas, such as communities on small islands or aboard seafaring cargo ships. It could also be used to aid refugees fleeing natural disasters or by soldiers carrying out long-term military operations.

“This is really the culmination of a 10-year journey that I and my group have been on. We worked for years on the physics behind individual desalination processes, but pushing all those advances into a box, building a system, and demonstrating it in the ocean, that was a really meaningful and rewarding experience for me,” says senior author Jongyoon Han, a professor of electrical engineering and computer science and of biological engineering, and a member of the Research Laboratory of Electronics (RLE).

Joining Han on the paper are first author Junghyo Yoon, a research scientist in RLE; Hyukjin J. Kwon, a former postdoc; SungKu Kang, a postdoc at Northeastern University; and Eric Brack of the U.S. Army Combat Capabilities Development Command (DEVCOM). The research has been published online in Environmental Science & Technology.

Filter-free technology

Commercially available portable desalination units typically require high-pressure pumps to push water through filters, which are very difficult to miniaturize without compromising the energy-efficiency of the device, explains Yoon.

Instead, their unit relies on a technique called ion concentration polarization (ICP), which was pioneered by Han’s group more than 10 years ago. Rather than filtering water, the ICP process applies an electrical field to membranes placed above and below a channel of water. The membranes repel positively or negatively charged particles — including salt molecules, bacteria, and viruses — as they flow past. The charged particles are funneled into a second stream of water that is eventually discharged.

The process removes both dissolved and suspended solids, allowing clean water to pass through the channel. Since it only requires a low-pressure pump, ICP uses less energy than other techniques.

But ICP does not always remove all the salts floating in the middle of the channel. So the researchers incorporated a second process, known as electrodialysis, to remove remaining salt ions.

Yoon and Kang used machine learning to find the ideal combination of ICP and electrodialysis modules. The optimal setup includes a two-stage ICP process, with water flowing through six modules in the first stage, then through three in the second stage, followed by a single electrodialysis process. This minimized energy usage while ensuring the process remains self-cleaning.
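
The team's optimization code is not included in the article, but the underlying idea, searching a space of module configurations for the lowest-energy design that still meets a drinkability target, can be sketched in a few lines. Everything numeric below (the energy and salt-removal models, the thresholds, the module counts) is a hypothetical placeholder, not data from the study:

```python
# Minimal sketch of a configuration search, standing in for the machine-learning
# optimization described above. All models and numbers are illustrative only.
from itertools import product

def energy_per_liter(icp_stage1, icp_stage2, ed_units):
    # Placeholder cost model: each extra module adds a fixed energy cost.
    return 1.5 * icp_stage1 + 1.0 * icp_stage2 + 2.5 * ed_units

def outlet_salinity(icp_stage1, icp_stage2, ed_units, feed_ppm=35_000):
    # Placeholder removal model: each module strips a fixed fraction of remaining salt.
    remaining = feed_ppm * (0.55 ** icp_stage1) * (0.60 ** icp_stage2)
    return remaining * (0.20 ** ed_units)

DRINKABLE_PPM = 500  # example salinity target for "drinkable" water

best = None
for s1, s2, ed in product(range(1, 9), range(1, 9), range(0, 3)):
    if outlet_salinity(s1, s2, ed) <= DRINKABLE_PPM:
        cost = energy_per_liter(s1, s2, ed)
        if best is None or cost < best[0]:
            best = (cost, s1, s2, ed)

print("cheapest feasible configuration (energy, ICP stage 1, ICP stage 2, ED units):", best)
```

In the actual work, a learned model of how the real modules behave would replace these toy formulas, but the search weighs the same trade-off: enough stages to reach drinkability, as few as possible to save energy.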

“While it is true that some charged particles could be captured on the ion exchange membrane, if they get trapped, we just reverse the polarity of the electric field and the charged particles can be easily removed,” Yoon explains.

They shrank and stacked the ICP and electrodialysis modules to improve their energy efficiency and enable them to fit inside a portable device. The researchers designed the device for nonexperts, with just one button to launch the automatic desalination and purification process. Once the salinity level and the number of particles decrease to specific thresholds, the device notifies the user that the water is drinkable.

The researchers also created a smartphone app that can control the unit wirelessly and report real-time data on power consumption and water salinity.

Beach tests

After running lab experiments using water with different salinity and turbidity (cloudiness) levels, they field-tested the device at Boston’s Carson Beach.

Yoon and Kwon set the box near the shore and tossed the feed tube into the water. In about half an hour, the device had filled a plastic drinking cup with clear, drinkable water.

“It was successful even in its first run, which was quite exciting and surprising. But I think the main reason we were successful is the accumulation of all these little advances that we made along the way,” Han says.

The resulting water exceeded World Health Organization quality guidelines, and the unit reduced the amount of suspended solids by at least a factor of 10. Their prototype generates drinking water at a rate of 0.3 liters per hour and requires only about 20 watt-hours of energy per liter.
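
As a rough consistency check (a back-of-the-envelope estimate, not a figure reported by the team), those two numbers imply an average power draw of only a few watts, which is why the unit can run on less power than a phone charger:

\[
P \;\approx\; 0.3\ \tfrac{\mathrm{L}}{\mathrm{h}} \times 20\ \tfrac{\mathrm{Wh}}{\mathrm{L}} \;=\; 6\ \mathrm{W}.
\]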

“Right now, we are pushing our research to scale up that production rate,” Yoon says.

One of the biggest challenges of designing the portable system was engineering an intuitive device that could be used by anyone, Han says.

Yoon hopes to make the device more user-friendly and improve its energy efficiency and production rate through a startup he plans to launch to commercialize the technology.

In the lab, Han wants to apply the lessons he’s learned over the past decade to water-quality issues that go beyond desalination, such as rapidly detecting contaminants in drinking water.

“This is definitely an exciting project, and I am proud of the progress we have made so far, but there is still a lot of work to do,” he says.

For example, while “development of portable systems using electro-membrane processes is an original and exciting direction in off-grid, small-scale desalination,” the effects of fouling, especially if the water has high turbidity, could significantly increase maintenance requirements and energy costs, notes Nidal Hilal, professor of engineering and director of the New York University Abu Dhabi Water research center, who was not involved with this research.

“Another limitation is the use of expensive materials,” he adds. “It would be interesting to see similar systems with low-cost materials in place.”

The research was funded, in part, by the DEVCOM Soldier Center, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), the Experimental AI Postdoc Fellowship Program of Northeastern University, and the Roux AI Institute.



from MIT News https://ift.tt/aUyKfjH

An early bird takes flight

“I’m in denial, you know?”

Bob Bright, MIT Medical’s director of facilities, usually loves spring on campus, but this year, the bright yellows and greens of the daffodils and budding trees are muted; Maria Bachini, facilities coordinator at MIT Medical and Bright's colleague of 20 years, is retiring on April 29.

Although Bachini has worked with Bright for two decades, that stretch represents only a fraction of her overall time at the Institute, which has spanned just shy of 57 years.

In these days of freelance work and the gig economy, spending even a few years in the same place might be hard to picture. Now imagine having a single job interview that unlocks a lifelong career.

When Bachini joined the Institute in May 1965, MIT was a very different place. “When I interviewed — which was basically my first job interview, by the way — I was given a typing test, and the hiring manager asked which kind of typewriter I wanted to use: a manual or an IBM Selectric,” Bachini recalls. “I requested the Selectric, of course … it was the latest thing.”

That first interview and typing test led to a career marked by adaptation and reinvention. She spent 16 years in the Department of Physics before moving to MIT Medical. Across the years, Bachini had a front-row seat to a juggernaut of technological innovation as the workplace moved from electric typewriters and room-sized computers to laptops, smartphones, and Zoom. How did she do it? Being the consummate go-getter certainly helped.

“Maria had this great quote from her father: ‘If you’re 15 minutes early, you’re late,’” recalls William Kettyle, MIT Medical’s former medical director. “Maria was my assistant for 20 years, and in all that time, I beat her to the office just once — because I had done the overnight shift at the clinic.”

Cheryl Baranauskas, who worked with her on MIT Medical’s administrative support team, fondly remembers those early mornings with Bachini. “My best days at MIT Medical were working with Maria, sharing a morning coffee and conversation,” she remembers. “Maria’s work ethic is like no other, but she’s also a great mentor and a great listener. … Maria just makes everyone better.”

When Kettyle retired in 2014, Bachini took on a new challenge, working alongside Bright as MIT Medical’s facilities coordinator. Though the work has been different — ranging from construction projects to elevator maintenance — Bachini has thrived. As Bright put it, “Maria can do anything and work with anyone — I always get a kick out of how people from across campus react to Maria — her positive energy and sense of humor always puts everyone in a good mood.”

Even the pandemic did little to slow Bachini down. When she was unable to come to campus, she was on the front lines remotely, working with MIT Medical’s housekeeping and facilities teams to ensure that the facility was safe for patients. That’s not to say it wasn’t a challenge. As Bright explains, “Maria is a doer, and she likes to be where the action is. … Her commitment to MIT is phenomenal, and it was hard for her to be away from campus.”

Medical Director Cecilia Stuopis puts it this way: “In many ways, Maria Bachini is MIT Medical — her dedication, positivity, and can-do attitude are an inspiration to all of us. We are going to miss her, but we’re also so fortunate to have learned from her.”

Over the years, a lot has changed at MIT, but according to Bachini, some things have remained constant: “It’s a welcoming community and a phenomenal institution. I just can’t imagine working anywhere else. If I had it to do over again, I would do the same thing.”

Soon she’ll be away from campus, and as the weather warms, Bachini is excited to use her newfound leisure time to take beach walks with her sister. As for what she’s looking forward to most on that first day of retired life? “Not waking up at 4:30 a.m. to get ready for work,” she laughs.



from MIT News https://ift.tt/1csMLm4

Machine learning, harnessed to extreme computing, aids fusion energy development

MIT research scientists Pablo Rodriguez-Fernandez and Nathan Howard have just completed one of the most demanding calculations in fusion science — predicting the temperature and density profiles of a magnetically confined plasma via first-principles simulation of plasma turbulence. Solving this problem by brute force is beyond the capabilities of even the most advanced supercomputers. Instead, the researchers used an optimization methodology developed for machine learning to dramatically reduce the CPU time required while maintaining the accuracy of the solution.

Fusion energy

Fusion offers the promise of unlimited, carbon-free energy through the same physical process that powers the sun and the stars. It requires heating the fuel to temperatures above 100 million degrees, well above the point where the electrons are stripped from their atoms, creating a form of matter called plasma. On Earth, researchers use strong magnetic fields to isolate and insulate the hot plasma from ordinary matter. The stronger the magnetic field, the better the quality of the insulation that it provides.

Rodriguez-Fernandez and Howard have focused on predicting the performance expected in the SPARC device, a compact, high-magnetic-field fusion experiment, currently under construction by the MIT spin-out company Commonwealth Fusion Systems (CFS) and researchers from MIT’s Plasma Science and Fusion Center. While the calculation required an extraordinary amount of computer time, over 8 million CPU-hours, what was remarkable was not how much time was used, but how little, given the daunting computational challenge.

The computational challenge of fusion energy

Turbulence, which is the mechanism for most of the heat loss in a confined plasma, is one of the science’s grand challenges and the greatest problem remaining in classical physics. The equations that govern fusion plasmas are well known, but analytic solutions are not possible in the regimes of interest, where nonlinearities are important and solutions encompass an enormous range of spatial and temporal scales. Scientists resort to solving the equations by numerical simulation on computers. It is no accident that fusion researchers have been pioneers in computational physics for the last 50 years.

One of the fundamental problems for researchers is reliably predicting plasma temperature and density given only the magnetic field configuration and the externally applied input power. In confinement devices like SPARC, the external power and the heat input from the fusion process are lost through turbulence in the plasma. The turbulence itself is driven by the difference in the extremely high temperature of the plasma core and the relatively cool temperatures of the plasma edge (merely a few million degrees). Predicting the performance of a self-heated fusion plasma therefore requires a calculation of the power balance between the fusion power input and the losses due to turbulence.

These calculations generally start by assuming plasma temperature and density profiles at a particular location, then computing the heat transported locally by turbulence. However, a useful prediction requires a self-consistent calculation of the profiles across the entire plasma, which includes both the heat input and turbulent losses. Directly solving this problem is beyond the capabilities of any existing computer, so researchers have developed an approach that stitches the profiles together from a series of demanding but tractable local calculations. This method works, but since the heat and particle fluxes depend on multiple parameters, the calculations can be very slow to converge.

However, techniques emerging from the field of machine learning are well suited to optimizing just such a calculation. Starting with a set of computationally intensive local calculations run with the full-physics, first-principles CGYRO code (provided by a team from General Atomics led by Jeff Candy), Rodriguez-Fernandez and Howard fit a surrogate mathematical model, which was then used to explore and optimize within the parameter space. The results of the optimization were compared to the exact calculations at each optimum point, and the system was iterated to a desired level of accuracy. The researchers estimate that the technique reduced the number of runs of the CGYRO code by a factor of four.
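
The team's workflow is not published in this article, but the general pattern it describes (fit a cheap surrogate to a handful of expensive simulations, let the surrogate propose the next point, verify with one more expensive run, and repeat until the answer stops moving) can be sketched as below. The Gaussian-process surrogate, the toy flux-residual function, and every number here are illustrative stand-ins, not the CGYRO-based pipeline itself:

```python
# Minimal sketch of surrogate-assisted iteration for a toy "power balance" problem.
# expensive_flux_residual stands in for a CGYRO-class turbulence calculation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def expensive_flux_residual(grad_T):
    # Placeholder: turbulent heat flux minus injected power as a function of the
    # local temperature gradient. In reality each evaluation is a huge simulation.
    return np.tanh(2.0 * (grad_T - 1.3)) + 0.05 * grad_T - 0.1

# A handful of expensive evaluations to seed the surrogate.
X = rng.uniform(0.5, 2.5, size=(5, 1))
y = np.array([expensive_flux_residual(x[0]) for x in X])

for _ in range(6):
    # Fit the cheap surrogate to everything computed so far.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True).fit(X, y)
    # Query the surrogate densely and pick the point closest to zero residual.
    candidates = np.linspace(0.5, 2.5, 501).reshape(-1, 1)
    x_next = candidates[np.argmin(np.abs(gp.predict(candidates)))]
    # Verify with one more expensive run, then fold the result back into the model.
    y_next = expensive_flux_residual(x_next[0])
    X, y = np.vstack([X, [x_next]]), np.append(y, y_next)
    if abs(y_next) < 1e-3:
        break

print(f"estimated balancing gradient: {X[-1, 0]:.3f} after {len(y)} expensive evaluations")
```

The payoff of this structure is the one reported above: most of the exploration happens on the inexpensive surrogate, so only a modest number of full-fidelity runs is ever needed.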

New approach increases confidence in predictions

This work, described in a recent publication in the journal Nuclear Fusion, is the highest fidelity calculation ever made of the core of a fusion plasma. It refines and confirms predictions made with less demanding models. Professor Jonathan Citrin, of the Eindhoven University of Technology and leader of the fusion modeling group for DIFFER, the Dutch Institute for Fundamental Energy Research, commented: "The work significantly accelerates our capabilities in more routinely performing ultra-high-fidelity tokamak scenario prediction. This algorithm can help provide the ultimate validation test of machine design or scenario optimization carried out with faster, more reduced modeling, greatly increasing our confidence in the outcomes." 

In addition to increasing confidence in the fusion performance of the SPARC experiment, this technique provides a roadmap to check and calibrate reduced physics models, which run with a small fraction of the computational power. Such models, cross-checked against the results generated from turbulence simulations, will provide a reliable prediction before each SPARC discharge, helping to guide experimental campaigns and improving the scientific exploitation of the device. It can also be used to tweak and improve even simple data-driven models, which run extremely quickly, allowing researchers to sift through enormous parameter ranges to narrow down possible experiments or possible future machines.

The research was funded by CFS, with computational support from the National Energy Research Scientific Computing Center, a U.S. Department of Energy Office of Science User Facility.



from MIT News https://ift.tt/R8ZavGh

Tuesday, April 26, 2022

The MIT Press and Harvard Law School Library launch new series offering high-quality, affordable law textbooks

Together, the MIT Press and Harvard Law School Library announce the launch of the “Open Casebook” series. Leveraging free and open texts created and updated by distinguished legal scholars, the series offers high-quality yet affordable printed textbooks for use in law teaching across the country, tied to online access to the works and legal opinions under open licenses.

“As the creator of some of the earliest open online books and communities, the MIT Press is committed to increasing the impact and accessibility of scholarship,” notes Amy Brand, director and publisher, the MIT Press. “We are proud to collaborate with Harvard Law School Library on the Open Casebook series and provide high-quality, low-cost books to law students throughout the United States.”

The first book in the series is “Torts!” by Jonathan Zittrain, the George Bemis Professor of International Law at Harvard Law School and the Harvard Kennedy School of Government, and Jordi Weinstock, lecturer on law at Harvard Law School. “Torts!” serves as primary text for a first-year law school torts course and maps the progression of the law of torts through the language and example of public judicial decisions in a range of cases. In the book, the authors present cases to students in a different way than in classic casebooks, providing significantly more original judicial opinion than is traditionally offered and featuring helpful reminders, questions, and illustrations to bring these original materials to life. Taken together, the cases within “Torts!” show differing approaches to the problems of defining legal harm and applying those definitions to a messy world. 

The Open Casebook series leverages free and open texts created by distinguished legal scholars on Harvard’s H2O platform. Created by Harvard Law School’s Library Innovation Lab, H2O facilitates the building, sharing, and remixing of open-access digital textbooks, with cases drawn from the lab’s companion Caselaw Access Project, which scanned American case law and made it freely available. Authors can create their own original books with H2O, finding and adapting existing texts to refine and build upon one another’s work.

The Open Casebook series will include textbooks for all standard first-year law school courses, including upcoming publications on the subjects of contracts and corporations. A digital version of each casebook can be found for free on opencasebook.org.



from MIT News https://ift.tt/DpKqsOA

Using excess heat to improve electrolyzers and fuel cells

Reducing the use of fossil fuels will have unintended consequences for the power-generation industry and beyond. For example, many industrial chemical processes use fossil-fuel byproducts as precursors to things like asphalt, glycerine, and other important chemicals. One solution to reduce the impact of the loss of fossil fuels on industrial chemical processes is to store and use the heat that nuclear fission produces. New MIT research has dramatically improved a way to put that heat toward generating chemicals through a process called electrolysis. 

Electrolyzers are devices that use electricity to split water (H2O) and generate molecules of hydrogen (H2) and oxygen (O2). Hydrogen is used in fuel cells to generate electricity for electric cars or drones, and in industrial operations like the production of steel, ammonia, and polymers. Electrolyzers can also take in water and carbon dioxide (CO2) and produce oxygen and ethylene (C2H4), a chemical used in polymers and elsewhere.
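
Written as overall reactions (standard stoichiometry, not equations taken from the paper), the two processes are:

\[
2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2}
\qquad\text{and}\qquad
2\,\mathrm{CO_2} + 2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{C_2H_4} + 3\,\mathrm{O_2}.
\]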

There are three main types of electrolyzers. One type works at room temperature but has downsides: these devices are inefficient and require rare metals, such as platinum. A second type is more efficient but runs at high temperatures, above 700 degrees Celsius; at those temperatures metals corrode, and the devices need expensive sealing and insulation. The third type would be a Goldilocks solution for nuclear heat if it were perfected, running at 300-600 degrees Celsius and requiring mostly cheap materials like stainless steel. Yet these cells have never operated as efficiently as theory says they should. The new work, published this month in Nature, both illuminates the problem and offers a solution.

A sandwich mystery

The intermediate-temperature devices use what are called protonic ceramic electrochemical cells. Each cell is a sandwich, with a dense electrolyte layered between two porous electrodes. Water vapor is pumped into the top electrode. A wire on the side connects the two electrodes, and externally generated electricity runs from the top to the bottom. The voltage pulls electrons out of the water, which splits the molecule, releasing oxygen. A hydrogen atom without an electron is just a proton. The protons get pulled through the electrolyte to rejoin with the electrons at the bottom electrode and form H2 molecules, which are then collected.
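
For water splitting, the textbook half-reactions in a proton-conducting cell (standard electrochemistry, not equations taken from the paper) make the division of labor explicit:

\[
\text{top electrode: } \mathrm{H_2O} \;\rightarrow\; \tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H^+} + 2\,e^-,
\qquad
\text{bottom electrode: } 2\,\mathrm{H^+} + 2\,e^- \;\rightarrow\; \mathrm{H_2}.
\]

The protons cross the ceramic electrolyte while the electrons take the external wire, and the two streams meet again at the bottom electrode to form hydrogen gas.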

On its own, the electrolyte in the middle, made mainly of barium, cerium, and zirconium, conducts protons very well. “But when we put the same material into this three-layer device, the proton conductivity of the full cell is pretty bad,” says Yanhao Dong, a postdoc in MIT’s Department of Nuclear Science and Engineering and a paper co-author. “Its conductivity is only about 50 percent of the bulk form’s. We wondered why there’s an inconsistency here.”

A couple of clues pointed them in the right direction. First, if they don’t prepare the cell very carefully, the top layer, only about 20 microns (0.02 millimeters) thick, doesn’t stay attached. “Sometimes if you use just Scotch tape, it will peel off,” Dong says. Second, when they looked at a cross section of a device using a scanning electron microscope, they saw that the top surface of the electrolyte layer was flat, whereas the bottom surface of the porous electrode sitting on it was bumpy, and the two came into contact in only a few places. They didn’t bond well. That precarious interface leads to both structural delamination and poor proton passage from the electrode to the electrolyte.

Acidic solution

The solution turned out to be simple: researchers roughed up the top of the electrolyte. Specifically, they applied acid for 10 minutes, which etched grooves into the surface. Ju Li, the Battelle Energy Alliance Professor in Nuclear Engineering and professor of materials science and engineering at MIT, and a paper co-author, likens it to sandblasting a surface before applying paint to increase adhesion. Their acid-treated cells produced about 200 percent more hydrogen per area at 1.5 volts at 600 C than did any previous cell of its type, and worked well down to 350 C with very little performance decay over extended operation. 

“The authors reported a surprisingly simple yet highly effective surface treatment to dramatically improve the interface,” says Liangbing Hu, the director of the Center for Materials Innovation at the Maryland Energy Innovation Institute, who was not involved in the work. He calls the cell performance “exceptional.”

“We are excited and surprised” by the results, Dong says. “The engineering solution seems quite simple. And that’s actually good, because it makes it very applicable to real applications.” In a practical product, many such cells would be stacked together to form a module. MIT’s partner in the project, Idaho National Laboratory, is very strong in engineering and prototyping, so Li expects to see electrolyzers based on this technology at scale before too long. “At the materials level, this is a breakthrough that shows that at a real-device scale you can work at this sweet spot of temperature of 350 to 600 degrees Celsius for nuclear fission and fusion reactors,” he says.

“Reduced operating temperature enables cheaper materials for the large-scale assembly, including the stack,” says Idaho National Laboratory researcher and paper co-author Dong Ding. “The technology operates within the same temperature range as several important, current industrial processes, including ammonia production and CO2 reduction. Matching these temperatures will expedite the technology’s adoption within the existing industry.”

“This is very significant for both Idaho National Lab and us,” Li adds, “because it bridges nuclear energy and renewable electricity.” He notes that the technology could also help fuel cells, which are basically electrolyzers run in reverse, using green hydrogen or hydrocarbons to generate electricity. According to Wei Wu, a materials scientist at Idaho National Laboratory and a paper co-author, “this technique is quite universal and compatible with other solid electrochemical devices.”

Dong says it’s rare for a paper to advance both science and engineering to such a degree. “We are happy to combine those together and get both very good scientific understanding and also very good real-world performance.”

This work, done in collaboration with Idaho National Laboratory, New Mexico State University, and the University of Nebraska–Lincoln, was funded, in part, by the U.S. Department of Energy.



from MIT News https://ift.tt/mihYtZF

Physicists embark on a hunt for a long-sought quantum glow

For “Star Wars” fans, the streaking stars seen from the cockpit of the Millennium Falcon as it jumps to hyperspace are a canonical image. But what would a pilot actually see if she could accelerate in an instant through the vacuum of space? According to a prediction known as the Unruh effect, she would more likely see a warm glow.

Since the 1970s when it was first proposed, the Unruh effect has eluded detection, mainly because the probability of seeing the effect is infinitesimally small, requiring either enormous accelerations or vast amounts of observation time. But researchers at MIT and the University of Waterloo believe they have found a way to significantly increase the probability of observing the Unruh effect, which they detail in a study appearing today in Physical Review Letters.

Rather than observe the effect spontaneously as others have attempted in the past, the team proposes stimulating the phenomenon, in a very particular way that enhances the Unruh effect while suppressing other competing effects. The researchers liken their idea to throwing an invisibility cloak over other conventional phenomena, which should then reveal the much less obvious Unruh effect.

If it can be realized in a practical experiment, this new stimulated approach, with an added layer of invisibility (or “acceleration-induced transparency,” as described in the paper) could vastly increase the probability of observing the Unruh effect. Instead of waiting longer than the age of the universe for an accelerating particle to produce a warm glow as the Unruh effect predicts, the team’s approach would shave that wait time down to a few hours.

“Now at least we know there is a chance in our lifetimes where we might actually see this effect,” says study co-author Vivishek Sudhir, assistant professor of mechanical engineering at MIT, who is designing an experiment to catch the effect based on the group’s theory. “It’s a hard experiment, and there’s no guarantee that we’d be able to do it, but this idea is our nearest hope.”

The study’s co-authors also include Barbara Šoda and Achim Kempf of the University of Waterloo.

Close connection

The Unruh effect is also known as the Fulling-Davies-Unruh effect, after the three physicists who initially proposed it. The prediction states that a body that is accelerating through a vacuum should in fact feel the presence of warm radiation purely as an effect of the body’s acceleration. This effect has to do with quantum interactions between accelerated matter and quantum fluctuations within the vacuum of empty space.

To produce a glow warm enough for detectors to measure, a body such as an atom would have to accelerate to the speed of light in less than a millionth of a second. Such an acceleration would be on the order of a quadrillion meters per second squared (for comparison, a fighter pilot in ordinary flight experiences an acceleration of about 10 meters per second squared).
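
As a back-of-the-envelope sketch (an illustration of scale, not a calculation from the study), reaching light speed in less than a microsecond implies

\[
a \;=\; \frac{c}{\Delta t} \;>\; \frac{3\times 10^{8}\ \mathrm{m/s}}{10^{-6}\ \mathrm{s}} \;=\; 3\times 10^{14}\ \mathrm{m/s^2},
\]

approaching the quadrillion-meters-per-second-squared scale quoted above. The standard Unruh formula, \(T = \hbar a / (2 \pi c k_B)\), ties the temperature of the predicted glow linearly to the acceleration, which is why anything less extreme yields an immeasurably faint signal.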

“To see this effect in a short amount of time, you’d have to have some incredible acceleration,” Sudhir says. “If you instead had some reasonable acceleration, you’d have to wait a ginormous amount of time — longer than the age of the universe — to see a measurable effect.”

What, then, would be the point? For one, he says that observing the Unruh effect would be a validation of fundamental quantum interactions between matter and light. And for another, the detection could represent a mirror of the Hawking effect — a proposal by the physicist Stephen Hawking that predicts a similar thermal glow, or “Hawking radiation,” from light and matter interactions in an extreme gravitational field, such as around a black hole.

“There’s a close connection between the Hawking effect and the Unruh effect — they’re exactly the complementary effect of each other,” says Sudhir, who adds that if one were to observe the Unruh effect, “one would have observed a mechanism that is common to both effects.”

A transparent trajectory

The Unruh effect is predicted to occur spontaneously in a vacuum. According to quantum field theory, a vacuum is not simply empty space, but rather a field of restless quantum fluctuations, with each frequency band of the field carrying, on average, the energy of about half a photon. Unruh predicted that a body accelerating through a vacuum should amplify these fluctuations, in a way that produces a warm, thermal glow of particles.

In their study, the researchers introduced a new approach to increase the probability of the Unruh effect, by adding light to the entire scenario — an approach known as stimulation.

“When you add photons into the field, you’re adding ‘n’ times more of those fluctuations than this half a photon that’s in the vacuum,” Sudhir explains. “So, if you accelerate through this new state of the field, you’d expect to see effects that also scale ‘n’ times what you would see from just the vacuum alone.”
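
The “half a photon” is the zero-point energy that quantum field theory assigns to each mode of the electromagnetic field; adding n real photons raises that occupation, and the response of an accelerated detector grows with it. Schematically (a restatement of the scaling Sudhir describes, not a result quoted from the paper):

\[
E_{\text{mode}} \;=\; \hbar\omega\left(n + \tfrac{1}{2}\right),
\qquad
\Gamma_{\text{stimulated}} \;\sim\; n\,\Gamma_{\text{vacuum}} \quad (n \gg 1).
\]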

However, in addition to the quantum Unruh effect, the additional photons would also amplify other effects in the vacuum — a major drawback that has kept other hunters of the Unruh effect from taking the stimulation approach.

Šoda, Sudhir, and Kempf, however, found a work-around, through “acceleration-induced transparency,” a concept they introduce in the paper. They showed theoretically that if a body such as an atom could be made to accelerate with a very specific trajectory through a field of photons, the atom would interact with the field in such a way that photons of a certain frequency would essentially appear invisible to the atom.

“When we stimulate the Unruh effect, at the same time we also stimulate the conventional, or resonant, effects, but we show that by engineering the trajectory of the particle, we can essentially turn off those effects,” Šoda says.

By making all other effects transparent, the researchers could then have a better chance of measuring the photons, or the thermal radiation coming from only the Unruh effect, as the physicists predicted.

The researchers already have some ideas for how to design an experiment based on their hypothesis. They plan to build a laboratory-sized particle accelerator capable of accelerating an electron to close to the speed of light, which they would then stimulate using a laser beam at microwave wavelengths. They are looking for ways to engineer the electron’s path to suppress classical effects, while amplifying the elusive Unruh effect.

“Now we have this mechanism that seems to statistically amplify this effect via stimulation,” Sudhir says. “Given the 40-year history of this problem, we’ve now in theory fixed the biggest bottleneck.”

This research was supported, in part, by the Natural Sciences and Engineering Research Council of Canada, the Australian Research Council, and a Google Faculty Research Award.



from MIT News https://ift.tt/96CvumX