Monday, July 31, 2023

MIT engineers create an energy-storing supercapacitor from ancient materials

Two of humanity's most ubiquitous historical materials, cement and carbon black (which resembles very fine charcoal), may form the basis for a novel, low-cost energy storage system, according to a new study. The technology could facilitate the use of renewable energy sources such as solar, wind, and tidal power by allowing energy networks to remain stable despite fluctuations in renewable energy supply.

The two materials, the researchers found, can be combined with water to make a supercapacitor — an alternative to batteries — that could store electrical energy. As an example, the MIT researchers who developed the system say that their supercapacitor could eventually be incorporated into the concrete foundation of a house, where it could store a full day’s worth of energy while adding little (or nothing) to the cost of the foundation and still providing the needed structural strength. The researchers also envision a concrete roadway that could provide contactless recharging for electric cars as they travel over that road.

The simple but innovative technology is described this week in the journal PNAS, in a paper by MIT professors Franz-Josef Ulm, Admir Masic, and Yang Shao-Horn, and four others at MIT and at the Wyss Institute for Biologically Inspired Engineering.

Capacitors are in principle very simple devices, consisting of two electrically conductive plates immersed in an electrolyte and separated by a membrane. When a voltage is applied across the capacitor, positively charged ions from the electrolyte accumulate on the negatively charged plate, while the positively charged plate accumulates negatively charged ions. Since the membrane in between the plates blocks charged ions from migrating across, this separation of charges creates an electric field between the plates, and the capacitor becomes charged. The two plates can maintain this pair of charges for a long time and then deliver them very quickly when needed. Supercapacitors are simply capacitors that can store exceptionally large charges.

The amount of energy a capacitor can store depends on the total surface area of its conductive plates. The key to the new supercapacitors developed by this team comes from a method of producing a cement-based material with an extremely high internal surface area due to a dense, interconnected network of conductive material within its bulk volume. The researchers achieved this by introducing carbon black — which is highly conductive — into a concrete mixture along with cement powder and water, and letting it cure. The water naturally forms a branching network of openings within the structure as it reacts with cement, and the carbon migrates into these spaces to make wire-like structures within the hardened cement. These structures are fractal-like, with larger branches sprouting smaller branches, and those sprouting even smaller branchlets, and so on, ending up with an extremely large surface area within the confines of a relatively small volume. The material is then soaked in a standard electrolyte, such as potassium chloride (a kind of salt), which provides the charged particles that accumulate on the carbon structures. Two electrodes made of this material, separated by a thin space or an insulating layer, form a very powerful supercapacitor, the researchers found.
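As a rough guide to why surface area matters, textbook capacitor relations (not formulas from the paper; a real electric double layer is more complicated, but the proportionality to area carries over) tie area to capacitance, and capacitance to stored energy:

$$C = \frac{\varepsilon A}{d}, \qquad E = \tfrac{1}{2}\,C\,V^{2}$$

Here $A$ is the electrode surface area, $d$ the charge-separation distance, $\varepsilon$ the permittivity of the electrolyte, and $V$ the applied voltage, so the fractal network's enormous $A$ translates directly into more stored energy at a given voltage.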

The two plates of the capacitor function just like the two poles of a rechargeable battery of equivalent voltage: When connected to a source of electricity, as with a battery, energy gets stored in the plates, and then when connected to a load, the electrical current flows back out to provide power.

“The material is fascinating,” Masic says, “because you have the most-used manmade material in the world, cement, that is combined with carbon black, that is a well-known historical material — the Dead Sea Scrolls were written with it. You have these at least two-millennia-old materials that when you combine them in a specific manner you come up with a conductive nanocomposite, and that’s when things get really interesting.”

As the mixture sets and cures, he says, “The water is systematically consumed through cement hydration reactions, and this hydration fundamentally affects nanoparticles of carbon because they are hydrophobic (water repelling).” As the mixture evolves, “the carbon black is self-assembling into a connected conductive wire,” he says. The process is easily reproducible, with materials that are inexpensive and readily available anywhere in the world. And the amount of carbon needed is very small — as little as 3 percent by volume of the mix — to achieve a percolated carbon network, Masic says.

Supercapacitors made of this material have great potential to aid in the world’s transition to renewable energy, Ulm says. The principal sources of emissions-free energy, wind, solar, and tidal power, all produce their output at variable times that often do not correspond to the peaks in electricity usage, so ways of storing that power are essential. “There is a huge need for big energy storage,” he says, and existing batteries are too expensive and mostly rely on materials such as lithium, whose supply is limited, so cheaper alternatives are badly needed. “That’s where our technology is extremely promising, because cement is ubiquitous,” Ulm says.

The team calculated that a block of nanocarbon-black-doped concrete that is 45 cubic meters in size — equivalent to a cube about 3.5 meters across — would have enough capacity to store about 10 kilowatt-hours of energy, which is considered the average daily electricity usage for a household. Since the concrete would retain its strength, a house with a foundation made of this material could store a day’s worth of energy produced by solar panels or windmills and allow it to be used whenever it’s needed. And supercapacitors can be charged and discharged much more rapidly than batteries.
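Those figures are easy to sanity-check; the energy-density arithmetic below is ours, using only the numbers quoted above:

```python
# Sanity check of the figures quoted above for the carbon-cement supercapacitor.
volume_m3 = 45.0    # block volume from the article, in cubic meters
energy_kwh = 10.0   # stored energy from the article, in kilowatt-hours

side_m = volume_m3 ** (1 / 3)
print(f"Cube side: {side_m:.2f} m")  # ~3.56 m, matching "about 3.5 meters across"

# Implied volumetric energy density of the doped concrete.
print(f"Implied density: {1000 * energy_kwh / volume_m3:.0f} Wh per cubic meter")  # ~222
```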

After a series of tests to determine the most effective ratios of cement, carbon black, and water, the team demonstrated the process by making small supercapacitors about the size of button-cell batteries, roughly 1 centimeter across and 1 millimeter thick, that could each be charged to 1 volt. They then connected three of these in series to demonstrate their ability to light up a 3-volt light-emitting diode (LED). Having proved the principle, they now plan to build a series of larger versions, starting with ones about the size of a typical 12-volt car battery, then working up to a 45-cubic-meter version to demonstrate its ability to store a house’s worth of power.
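The three-cell demonstration works because capacitor voltages add in series while the combined capacitance drops; a minimal sketch of that bookkeeping (the 1-farad figure is a hypothetical placeholder, not a measured value):

```python
# Series stack of identical supercapacitor cells: voltages add,
# capacitances combine reciprocally (like resistors in parallel).
cell_voltage = 1.0       # volts per button-cell-sized device, per the article
cell_capacitance = 1.0   # farads -- hypothetical placeholder, not from the study
n_cells = 3

stack_voltage = n_cells * cell_voltage          # 3.0 V, enough to light a 3-volt LED
stack_capacitance = cell_capacitance / n_cells  # one third of a single cell

print(f"{stack_voltage:.1f} V, {stack_capacitance:.2f} F")
```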

There is a tradeoff between the storage capacity of the material and its structural strength, they found. Adding more carbon black lets the resulting supercapacitor store more energy, but makes the concrete slightly weaker — a tradeoff that could be acceptable for applications where the concrete is not playing a structural role or where the full strength potential of concrete is not required. For applications such as a foundation, or structural elements of the base of a wind turbine, the “sweet spot” is around 10 percent carbon black in the mix, they found.

Another potential application for carbon-cement supercapacitors is for building concrete roadways that could store energy produced by solar panels alongside the road and then deliver that energy to electric vehicles traveling along the road using the same kind of technology used for wirelessly rechargeable phones. A related type of car-recharging system is already being developed by companies in Germany and the Netherlands, but using standard batteries for storage.

Initial uses of the technology might be for isolated homes or buildings or shelters far from grid power, which could be powered by solar panels attached to the cement supercapacitors, the researchers say.

Ulm says that the system is very scalable, as the energy-storage capacity is a direct function of the volume of the electrodes. “You can go from 1-millimeter-thick electrodes to 1-meter-thick electrodes, and by doing so basically you can scale the energy storage capacity from lighting an LED for a few seconds, to powering a whole house,” he says.

Depending on the properties desired for a given application, the system could be tuned by adjusting the mixture. For a vehicle-charging road, very fast charging and discharging rates would be needed, while for powering a home “you have the whole day to charge it up,” so slower-charging material could be used, Ulm says.

“So, it’s really a multifunctional material,” he adds. Besides its ability to store energy in the form of supercapacitors, the same kind of concrete mixture can be used as a heating system, by simply applying electricity to the carbon-laced concrete.

Ulm sees this as “a new way of looking toward the future of concrete as part of the energy transition.”

The research team also included postdocs Nicolas Chanut and Damian Stefaniuk at MIT’s Department of Civil and Environmental Engineering, James Weaver at the Wyss Institute, and Yunguang Zhu in MIT’s Department of Mechanical Engineering. The work was supported by the MIT Concrete Sustainability Hub, with sponsorship by the Concrete Advancement Foundation.



from MIT News https://ift.tt/6fYGsbi

Sunday, July 30, 2023

Using AI to protect against AI image manipulation

As we enter a new era where technologies powered by artificial intelligence can craft and manipulate images with a precision that blurs the line between reality and fabrication, the specter of misuse looms large. Recently, advanced generative models such as DALL-E and Midjourney, celebrated for their impressive precision and user-friendly interfaces, have made the production of hyper-realistic images relatively effortless. With the barriers to entry lowered, even inexperienced users can generate and manipulate high-quality images from simple text descriptions — ranging from innocent image alterations to malicious changes. Techniques like watermarking offer a promising solution, but preventing misuse requires a preemptive (as opposed to only post hoc) measure.

In the quest to create such a new measure, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) developed “PhotoGuard,” a technique that uses perturbations — minuscule alterations in pixel values invisible to the human eye but detectable by computer models — to disrupt an AI model’s ability to manipulate the image.

PhotoGuard uses two different “attack” methods to generate these perturbations. The more straightforward “encoder” attack targets the image’s latent representation in the AI model, causing the model to perceive the image as a random entity. The more sophisticated “diffusion” attack defines a target image and optimizes the perturbations to make the final image resemble that target as closely as possible.

“Consider the possibility of fraudulent propagation of fake catastrophic events, like an explosion at a significant landmark. This deception can manipulate market trends and public sentiment, but the risks are not limited to the public sphere. Personal images can be inappropriately altered and used for blackmail, resulting in significant financial implications when executed on a large scale,” says Hadi Salman, an MIT graduate student in electrical engineering and computer science (EECS), affiliate of MIT CSAIL, and lead author of a new paper about PhotoGuard.

“In more extreme scenarios, these models could simulate voices and images for staging false crimes, inflicting psychological distress and financial loss. The swift nature of these actions compounds the problem. Even when the deception is eventually uncovered, the damage — whether reputational, emotional, or financial — has often already happened. This is a reality for victims at all levels, from individuals bullied at school to society-wide manipulation.”

PhotoGuard in practice

An AI model views an image differently from how humans do. It sees the image as a complex set of mathematical data points that describe every pixel's color and position — this is the image's latent representation. The encoder attack introduces minor adjustments into this mathematical representation, causing the AI model to perceive the image as a random entity. As a result, any attempt to manipulate the image using the model becomes nearly impossible. The changes introduced are so minute that they are invisible to the human eye, thus preserving the image's visual integrity while ensuring its protection.

The second and decidedly more intricate “diffusion” attack strategically targets the entire diffusion model end-to-end. This involves determining a desired target image, and then initiating an optimization process with the intention of closely aligning the generated image with this preselected target.

In implementation, the team creates perturbations within the input space of the original image. These perturbations are then applied to images during the inference stage, offering a robust defense against unauthorized manipulation.
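The paper's exact optimization isn't reproduced in the article, but the encoder attack follows the standard projected-gradient pattern from the adversarial-examples literature. A minimal PyTorch sketch, assuming access to a differentiable `encoder` (for example, the image encoder of a latent diffusion model); the function name and default values here are illustrative, not taken from PhotoGuard's released code:

```python
import torch
import torch.nn.functional as F

def encoder_attack(x, encoder, target_latent, eps=8 / 255, step=1 / 255, iters=200):
    """PGD-style immunization sketch: find a small perturbation (L-inf norm <= eps)
    that pushes the image's latent representation toward an unrelated target,
    so downstream edits operate on a scrambled latent.

    x: image tensor in [0, 1], shape (1, 3, H, W)
    encoder: differentiable image -> latent map (assumed to be available)
    target_latent: latent code the protected image should masquerade as
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.mse_loss(encoder(x + delta), target_latent)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()   # move the latent toward the target
            delta.clamp_(-eps, eps)             # keep the perturbation imperceptible
            delta.add_(x).clamp_(0, 1).sub_(x)  # keep x + delta a valid image
        delta.grad.zero_()
    # The diffusion attack uses the same loop, but its loss compares the *edited
    # output* to a target image, backpropagating through the full editing pipeline.
    return (x + delta).detach()
```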

“The progress in AI that we are witnessing is truly breathtaking, but it enables beneficial and malicious uses of AI alike,” says MIT professor of EECS and CSAIL principal investigator Aleksander Madry, who is also an author on the paper. “It is thus urgent that we work towards identifying and mitigating the latter. I view PhotoGuard as our small contribution to that important effort.”

The diffusion attack is more computationally intensive than its simpler sibling, and requires significant GPU memory. The team says that approximating the diffusion process with fewer steps mitigates the issue, thus making the technique more practical.

To better illustrate the attack, consider an art project, for example. The original image is a drawing, and the target image is another drawing that’s completely different. The diffusion attack is like making tiny, invisible changes to the first drawing so that, to an AI model, it begins to resemble the second drawing. However, to the human eye, the original drawing remains unchanged.

By doing this, any AI model attempting to modify the original image will now inadvertently make changes as if dealing with the target image, thereby protecting the original image from intended manipulation. The result is a picture that remains visually unaltered for human observers, but protects against unauthorized edits by AI models.

For a real example with PhotoGuard, consider an image with multiple faces. You could mask any faces you don’t want modified, and then prompt the model with “two men attending a wedding.” Upon submission, the system adjusts the image accordingly, creating a plausible depiction of two men participating in a wedding ceremony.

Now consider safeguarding the image from such edits: adding perturbations to the image before uploading it immunizes it against modification. In this case, the final output of any attempted edit will lack realism compared with an edit of the original, non-immunized image.

All hands on deck

Key allies in the fight against image manipulation are the creators of the image-editing models, says the team. For PhotoGuard to be effective, an integrated response from all stakeholders is necessary. “Policymakers should consider implementing regulations that mandate companies to protect user data from such manipulations. Developers of these AI models could design APIs that automatically add perturbations to users’ images, providing an added layer of protection against unauthorized edits,” says Salman.

Despite PhotoGuard’s promise, it’s not a panacea. Once an image is online, individuals with malicious intent could attempt to reverse engineer the protective measures by applying noise, cropping, or rotating the image. However, there is plenty of previous work from the adversarial examples literature that can be utilized here to implement robust perturbations that resist common image manipulations.

“A collaborative approach involving model developers, social media platforms, and policymakers presents a robust defense against unauthorized image manipulation. Working on this pressing issue is of paramount importance today,” says Salman. “And while I am glad to contribute towards this solution, much work is needed to make this protection practical. Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools. As we tread into this new era of generative models, let’s strive for potential and protection in equal measures.”

“The prospect of using attacks on machine learning to protect us from abusive uses of this technology is very compelling,” says Florian Tramèr, an assistant professor at ETH Zürich. “The paper has a nice insight that the developers of generative AI models have strong incentives to provide such immunization protections to their users, which could even be a legal requirement in the future. However, designing image protections that effectively resist circumvention attempts is a challenging problem: Once the generative AI company commits to an immunization mechanism and people start applying it to their online images, we need to ensure that this protection will work against motivated adversaries who might even use better generative AI models developed in the near future. Designing such robust protections is a hard open problem, and this paper makes a compelling case that generative AI companies should be working on solving it.”

Salman wrote the paper alongside fellow lead authors Alaa Khaddaj and Guillaume Leclerc MS ’18, as well as Andrew Ilyas ’18, MEng ’18; all three are EECS graduate students and MIT CSAIL affiliates. The team’s work was partially done on the MIT Supercloud compute cluster, supported by U.S. National Science Foundation grants and Open Philanthropy, and based upon work supported by the U.S. Defense Advanced Research Projects Agency. It was presented at the International Conference on Machine Learning this July.



from MIT News https://ift.tt/r6ihTm7

Friday, July 28, 2023

A wearable ultrasound scanner could detect breast cancer earlier

When breast cancer is diagnosed in the earliest stages, the survival rate is nearly 100 percent. However, for tumors detected in later stages, that rate drops to around 25 percent.

In hopes of improving the overall survival rate for breast cancer patients, MIT researchers have designed a wearable ultrasound device that could allow people to detect tumors when they are still in early stages. In particular, it could be valuable for patients at high risk of developing breast cancer in between routine mammograms.

The device is a flexible patch that can be attached to a bra, allowing the wearer to move an ultrasound tracker along the patch and image the breast tissue from different angles. In the new study, the researchers showed that they could obtain ultrasound images with resolution comparable to that of the ultrasound probes used in medical imaging centers.

“We changed the form factor of the ultrasound technology so that it can be used in your home. It’s portable and easy to use, and provides real-time, user-friendly monitoring of breast tissue,” says Canan Dagdeviren, an associate professor in MIT’s Media Lab and the senior author of the study.

MIT graduate student Wenya Du, Research Scientist Lin Zhang, Emma Suh ’23, and Dabin Lin, a professor at Xi’an Technological University, are the lead authors of the paper, which appears today in Science Advances.

A wearable diagnostic

For this project, Dagdeviren drew inspiration from her late aunt, Fatma Caliskanoglu, who was diagnosed with late-stage breast cancer at age 49, despite having regular cancer screens, and passed away six months later. At her aunt’s bedside, Dagdeviren, then a postdoc at MIT, drew up a rough schematic of a diagnostic device that could be incorporated into a bra and would allow for more frequent screening of individuals at high risk for breast cancer. 

Breast tumors that develop in between regularly scheduled mammograms — known as interval cancers — account for 20 to 30 percent of all breast cancer cases, and these tumors tend to be more aggressive than those found during routine scans.

“My goal is to target the people who are most likely to develop interval cancer,” says Dagdeviren, whose research group specializes in developing wearable electronic devices that conform to the body. “With more frequent screening, our goal is to increase the survival rate to up to 98 percent.”

To make her vision of a diagnostic bra a reality, Dagdeviren designed a miniaturized ultrasound scanner that could allow the user to perform imaging at any time. The scanner is based on the same kind of ultrasound technology used in medical imaging centers, but incorporates a novel piezoelectric material that allowed the researchers to miniaturize it.

To make the device wearable, the researchers designed a flexible, 3D-printed patch, which has honeycomb-like openings. Using magnets, this patch can be attached to a bra that has openings that allow the ultrasound scanner to contact the skin. The ultrasound scanner fits inside a small tracker that can be moved to six different positions, allowing the entire breast to be imaged. The scanner can also be rotated to take images from different angles, and does not require any special expertise to operate.

“This technology provides a fundamental capability in the detection and early diagnosis of breast cancer, which is key to a positive outcome,” says Anantha Chandrakasan, dean of MIT’s School of Engineering, the Vannevar Bush Professor of Electrical Engineering and Computer Science, and one of the authors of the study. “This work will significantly advance ultrasound research and medical device designs, leveraging advances in materials, low-power circuits, AI algorithms, and biomedical systems.”

Early detection

Working with the MIT Center for Clinical and Translational Research, the researchers tested their device on one human subject, a 71-year-old woman with a history of breast cysts. Using the new device, the researchers were able to detect cysts as small as 0.3 centimeters in diameter — the size of early-stage tumors. They also showed that the device achieved resolution comparable to that of traditional ultrasound, and that tissue can be imaged at depths of up to 8 centimeters.

“Access to quality and affordable health care is essential for early detection and diagnosis. As a nurse I have witnessed the negative outcomes of a delayed diagnosis. This technology holds the promise of breaking down the many barriers for early breast cancer detection by providing a more reliable, comfortable, and less intimidating diagnostic,” says Catherine Ricciardi, nurse director at MIT’s Center for Clinical and Translational Research and an author of the study.

To see the ultrasound images, the researchers currently have to connect their scanner to the same kind of ultrasound machine used in imaging centers. However, they are now working on a miniaturized version of the imaging system that would be about the size of a smartphone.

The wearable ultrasound patch can be used over and over, and the researchers envision that it could be used at home by people who are at high risk for breast cancer and could benefit from frequent screening. It could also help diagnose cancer in people who don’t have regular access to screening.

“Breast cancer is the most common cancer among women, and it is treatable when detected early,” says Tolga Ozmen, a breast cancer surgeon at Massachusetts General Hospital who is also an author of the study. “One of the main obstacles in imaging and early detection is the commute that the women have to make to an imaging center. This conformable ultrasound patch is a highly promising technology as it eliminates the need for women to travel to an imaging center.”

The researchers hope to develop a workflow so that once data are gathered from a subject, artificial intelligence can be used to analyze how the images change over time, which could offer more accurate diagnostics than relying on the assessment of a radiologist comparing images taken years apart. They also plan to explore adapting the ultrasound technology to scan other parts of the body.

The research was funded, in part, by the National Science Foundation, a 3M Non-Tenured Faculty Award, the Sagol Weizmann-MIT Bridge Program, and MIT Media Lab Consortium Funding.



from MIT News https://ift.tt/sxoUPXC

Thursday, July 27, 2023

Changing attitudes about jobs and gender in India

As a high school student who loved math, Lisa Ho ’17 was drawn by MIT’s spirit of “mens et manus” (“mind and hand”) and the opportunities to study both a subject and its practical applications. Now a PhD candidate in economics, Ho also appreciates the lessons in perseverance gleaned from her time on her high school robotics team that have translated to her current studies.

“It was the first time I was heavily invested in a project where the challenge was open-ended with no correct answer, so you were never ‘done’ with the work,” says Ho. “Effort didn’t necessarily translate into external validation. But I think that helped me to develop some patience and appetite for ambiguous work that doesn’t always pay off quickly.”

Labor and gender

When she first arrived at MIT, Ho was looking for a way to apply her interest in math to tackling social issues, and she initially settled on computer science. But as a junior, she took a new class in the Department of Economics offered by professors Esther Duflo and Sara Ellison, 14.31 (Data Analysis for Social Scientists), which piqued her interest in an entirely new aspect of numbers.

“I had a sense that I wanted to apply statistics and coding to study social issues,” explains Ho. “What I didn’t anticipate was that taking the class would teach me about what economics could be. Before that class, I thought that economists studied a much narrower set of topics.”

One study that Ho remembers learning about in that class examined gender-related differences in whether candidates who lose elections continue their political careers. She was struck by how economic principles and data analysis could be used to address a huge variety of questions about society.

For her dissertation, Ho is studying the intersection of gender and labor-force participation in India, where there’s a particularly large gap between men and women. So far, Ho’s research has found that most available jobs are not compatible with the expectations of domestic responsibilities that many women face.

Given that finding, “Two strategies come to mind,” explains Ho. “You can either try to change people’s attitudes around gendered divisions of labor at home so that women can take the jobs that are available, or you could try to change the jobs to be better-suited to people’s attitudes.”

Ho focuses on the latter strategy, noting that attitudes and behavior mutually inform each other, and so short-term, part-time jobs that shift attitudes might serve as a stepping stone to more intensive labor market involvement. She spent much of 2021-2022 in West Bengal running a randomized controlled trial with over 1,500 households. With the help of her co-authors Anahita Karandikar and Suhani Jalota, along with a 25-person field team, Ho offered her study participants a set of jobs with different flexible arrangements, varying these attributes experimentally to test which ones made the job most appealing to women who would otherwise be outside the labor force.

Some factors investigated included the ability to multitask between work and childcare, flexibility to choose work hours, and the ability to work from home. To estimate the causal effect of women’s employment, jobs were offered to a randomly selected subset of survey participants. Then, Ho and her team evaluated whether having job experience influenced attitudes and made households more open to women’s work. They also studied how workplace flexibility impacted job performance. Coordinating the logistics for a large-scale study was difficult at times, she says, but connecting with the study participants made it all worth it.
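Because the job offers were randomly assigned, the causal effect can be read off as a simple difference in means between offered and non-offered households. A toy sketch of that estimator on simulated data (all numbers below are invented for illustration, not the study's results):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1500                              # roughly the study's household count

# Simulated outcome: 1 if the household reports support for women's work.
offered = rng.integers(0, 2, size=n)  # random assignment to a job offer
baseline = rng.random(n) < 0.40       # hypothetical baseline support rate
extra = rng.random(n) < 0.10          # hypothetical extra support induced by treatment
support = baseline | (offered.astype(bool) & extra)

treated, control = support[offered == 1], support[offered == 0]
ate = treated.mean() - control.mean()  # difference-in-means estimator
se = np.sqrt(treated.var() / len(treated) + control.var() / len(control))
print(f"Estimated effect: {ate:.3f} (SE {se:.3f})")
```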

“I love doing research and having my research be related to what’s on people’s minds day-to-day,” says Ho. “One of the most enjoyable parts of the project on women’s employment for me was accompanying my field team while they conducted surveys. Many of the questions we ask stir up passionate discussion, like the question about whether men should do an equal share of the housework.”

Part of Ho’s interest in this area is personal. When she was growing up, her grandmother told her how her own mother, who lived in the Bronx, had wanted to work but wasn’t allowed to.

“My great-grandfather wouldn’t allow it because he was worried about what other people would think,” says Ho, who was born in Singapore and spent most of her school years in the U.K. “He said that if his wife worked, other people would think he couldn’t support his family. Even looking within my own family, I can see there’s been a lot of progress in the last century, but there’s still a ton of work to be done with respect to women all over the world.”

Ultimately, Ho’s family and upbringing — one of her parents is Singaporean and the other is American — helped her to develop a broad perspective that she now utilizes when trying to answer her research questions. In addition to her global experiences as a child and an undergraduate, Ho spent a year as a Schwarzman Scholar at Tsinghua University in China before returning to MIT for graduate school.

“I’ve had a lot of exposure to different cultures and points of view, and that helps me in my research in terms of feeling at home among any group of people,” she says. “And it means I’m already used to a wide range of views on these topics, which keeps me open-minded when I listen to our study participants.”

A passion for teaching

Throughout her time at MIT, Ho has nurtured a passion for teaching and mentorship. As an undergraduate, she worked with the Educational Studies Program to organize outreach activities for middle and high school students. And she has enjoyed her experience as a teaching assistant during her graduate program.  

“Part of why I feel like TA’ing MIT undergrads is so rewarding is because I used to be them and now I’m on the other side,” she says. “They’re still trying to figure out what they’re interested in and what they want to do, which makes it feel very impactful to introduce them to new topics and to show them — just like I was shown as an undergrad! — that economics is a much broader field than most college students think when they graduate from high school.”

As she approaches the completion of her degree, Ho, too, is giving thought to what comes next. She wants to have the freedom to pursue the questions that move her the most. For now, she is making the most of her time at MIT and the opportunity to learn from the professors who inspired her to pursue economics in the first place. “I want to be an economist whose work is grounded in challenges that come up in people’s everyday lives — especially women’s and children’s,” she says. “To contribute to pragmatic policy solutions to those problems, being trained by professors in the MIT Development Economics group, such as Esther Duflo, Ben Olken, Frank Schilbach, David Atkin, and Abhijit Banerjee — it’s the ideal graduate school experience.”



from MIT News https://ift.tt/8GwyXED

Making sense of cell fate

Despite the proliferation of novel therapies such as immunotherapy or targeted therapies, radiation and chemotherapy remain the frontline treatment for cancer patients. About half of all patients still receive radiation and 60-80 percent receive chemotherapy.

Both radiation and chemotherapy work by damaging DNA, taking advantage of a vulnerability specific to cancer cells. Healthy cells are more likely to survive radiation and chemotherapy since their mechanisms for identifying and repairing DNA damage are intact. In cancer cells, these repair mechanisms are compromised by mutations. When cancer cells cannot adequately respond to the DNA damage caused by radiation and chemotherapy, ideally, they undergo apoptosis or die by other means.

However, there is another fate for cells after DNA damage: senescence — a state where cells survive, but stop dividing. Senescent cells’ DNA has not been damaged enough to induce apoptosis but is too damaged to support cell division. While senescent cancer cells themselves are unable to proliferate and spread, they are bad actors in the fight against cancer because they seem to enable other cancer cells to develop more aggressively. Although a cancer cell’s fate is not apparent until a few days after treatment, the decision to survive, die, or enter senescence is made much earlier. But, precisely when and how that decision is made has not been well understood.

In an open-access study of ovarian cancer and osteosarcoma cells appearing July 19 in Cell Systems, MIT researchers show that cell signaling proteins commonly associated with cell proliferation and apoptosis instead commit cancer cells to senescence within 12 hours of treatment with low doses of certain kinds of chemotherapy.

“When it comes to treating cancer, this study underscores that it’s important not to think too linearly about cell signaling,” says Michael Yaffe, who is a David H. Koch Professor of Science at MIT, the director of the MIT Center for Precision Cancer Medicine, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study. “If you assume that a particular treatment will always affect cancer cell signaling in the same way — you may be setting yourself up for many surprises, and treating cancers with the wrong combination of drugs.”

Using a combination of experiments with cancer cells and computational modeling, the team investigated the cell signaling mechanisms that prompt cancer cells to enter senescence after treatment with a commonly used anti-cancer agent. Their efforts singled out two protein kinases and a component of the AP-1 transcription factor complex as highly associated with the induction of senescence after DNA damage, despite the well-established roles for all of these molecules in promoting cell proliferation in cancer.

The researchers treated cancer cells with low and high doses of doxorubicin, a chemotherapy agent that interferes with the function of topoisomerase II, an enzyme that breaks and then repairs DNA strands during replication to fix tangles and other topological problems.

By measuring the effects of DNA damage on single cells at several time points ranging from six hours to four days after the initial exposure, the team created two datasets. In one dataset, the researchers tracked cell fate over time. For the second set, researchers measured relative cell signaling activity levels across a variety of proteins associated with responses to DNA damage or cellular stress, determination of cell fate, and progress through cell growth and division.

The two datasets were used to build a computational model that identifies correlations between time, dosage, signal, and cell fate. The model identified the activities of the MAP kinases Erk and JNK, and of the transcription factor c-Jun, a key component of the AP-1 complex, as likewise involved in the induction of senescence. The researchers then validated these computational findings by showing that inhibition of JNK and Erk after DNA damage successfully prevented cells from entering senescence.
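The article doesn't reproduce the team's model, but the basic idea (mapping measured signaling activities to an eventual fate label and inspecting which signals matter) can be illustrated with an off-the-shelf classifier. A toy sketch on simulated data, not the study's measurements:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_cells = 500

# Simulated single-cell features: stand-ins for measured Erk, JNK, and c-Jun
# activity levels; the "true" fate rule below is invented for illustration.
erk, jnk, cjun = rng.normal(size=(3, n_cells))
senescent = (0.8 * erk + 0.9 * jnk + 0.7 * cjun
             + rng.normal(scale=0.5, size=n_cells)) > 0

X = np.column_stack([erk, jnk, cjun])
clf = LogisticRegression().fit(X, senescent)

# Large positive coefficients flag signals associated with senescence.
print(dict(zip(["Erk", "JNK", "c-Jun"], clf.coef_[0].round(2))))
print("CV accuracy:", cross_val_score(LogisticRegression(), X, senescent).mean().round(2))
```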

The researchers leveraged JNK and Erk inhibition to pinpoint exactly when cells made the decision to enter senescence. Surprisingly, they found that the decision to enter senescence was made within 12 hours of DNA damage, even though it took days to actually see the senescent cells accumulate. The team also found that with the passage of more time, these MAP kinases took on a different function: promoting the secretion of proinflammatory proteins called cytokines that are responsible for making other cancer cells proliferate and develop resistance to chemotherapy.

“Proteins like cytokines encourage ‘bad behavior’ in neighboring tumor cells that lead to more aggressive cancer progression,” says Tatiana Netterfield, a graduate student in the Yaffe lab and the lead author of the study. “Because of this, it is thought that senescent cells that stay near the tumor for long periods of time are detrimental to treating cancer.”

This study’s findings apply to cancer cells treated with a commonly used type of chemotherapy that stalls DNA replication after repair. But more broadly, the study emphasizes that “when treating cancer, it’s extremely important to understand the molecular characteristics of cancer cells and the contextual factors such as time and dosing that determine cell fate,” explains Netterfield.

The study, however, has more immediate implications for treatments that are already in use. One class of Erk-pathway inhibitors, MEK inhibitors, is used in the clinic with the expectation that it will curb cancer growth.

“We must be cautious about administering MEK inhibitors together with chemotherapies,” says Yaffe. “The combination may have the unintended effect of driving cells into proliferation, rather than senescence.”

In future work, the team will perform studies to understand how and why individual cells choose to proliferate instead of enter senescence. Additionally, the team is employing next-generation sequencing to understand which genes c-Jun is regulating in order to push cells toward senescence.

This study was funded, in part, by the Charles and Marjorie Holloway Foundation and the MIT Center for Precision Cancer Medicine.



from MIT News https://ift.tt/gTiZWOE

How forests can cut carbon, restore ecosystems, and create jobs

To limit the frequency and severity of droughts, wildfires, flooding, and other adverse consequences of climate change, nearly 200 countries committed to the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius. According to the latest United Nations Intergovernmental Panel on Climate Change (IPCC) Report, achieving that goal will require both large-scale greenhouse gas (GHG) emissions reduction and removal of GHGs from the atmosphere.

At present, the most efficient and scalable GHG-removal strategy is the massive planting of trees through reforestation or afforestation — a “natural climate solution” (NCS) that extracts atmospheric carbon dioxide through photosynthesis and soil carbon sequestration.

Despite the potential of forestry-based NCS projects to address climate change, biodiversity loss, unemployment, and other societal needs — and their appeal to policymakers, funders, and citizens — they have yet to achieve critical mass, and often underperform due to a mix of interacting ecological, social, and financial constraints. To better understand these challenges and identify opportunities to overcome them, a team of researchers at Imperial College London and the MIT Joint Program on the Science and Policy of Global Change recently studied how environmental scientists, local stakeholders, and project funders perceive the risks and benefits of NCS projects, and how these perceptions impact project goals and performance. To that end, they surveyed and consulted with dozens of recognized experts and organizations spanning the fields of ecology, finance, climate policy, and social science.

The team’s analysis, which appears in the journal Frontiers in Climate, found two main factors that have hindered the success of forestry-based NCS projects.

First, the ambition — levels of carbon removal, ecosystem restoration, job creation, and other environmental and social targets — of selected NCS projects is limited by funders’ perceptions of their overall risk. Among other things, funders aim to minimize operational risk (e.g., will newly planted trees survive and grow?), political risk (e.g., just how secure is their access to the land where trees will be planted?), and reputational risk (e.g., will the project be perceived as an exercise in “greenwashing,” or fall far short of its promised environmental and social benefits?). Funders seeking a financial return on their initial investment are also concerned about the dependability of complex monitoring, reporting, and verification methods used to quantify atmospheric carbon removal, biodiversity gains, and other metrics of project performance.

Second, the environmental and social benefits of NCS projects are unlikely to be realized unless the local communities impacted by these projects are granted ownership over their implementation and outcomes. But while engaging with local communities is critical to project performance, it can be challenging both legally and financially to set up incentives (e.g., payment and other forms of compensation) to mobilize such engagement.

“Many carbon offset projects raise legitimate concerns about their effectiveness,” says study lead author Bonnie Waring, a senior lecturer at the Grantham Institute on Climate Change and the Environment, Imperial College London. “However, if natural climate solution projects are done properly, they can help with sustainable development and empower local communities.”

Drawing on surveys and consultations with NCS experts, stakeholders, and funders, the research team highlighted several recommendations on how to overcome key challenges faced by forestry-based NCS projects and boost their environmental and social performance.

These recommendations include encouraging funders to evaluate projects based on robust internal governance, support from regional and national governments, secure land tenure, material benefits for local communities, and full participation of community members from across a spectrum of socioeconomic groups; improving the credibility and verifiability of project emissions reductions and related co-benefits; and maintaining an open dialogue and shared costs and benefits among those who fund, implement, and benefit from these projects.

“Addressing climate change requires approaches that include emissions mitigation from economic activities paired with greenhouse gas reductions by natural ecosystems,” says Sergey Paltsev, a co-author of the study and deputy director of the MIT Joint Program. “Guided by these recommendations, we advocate for a proper scaling-up of NCS activities from project levels to help assure integrity of emissions reductions across entire countries.”



from MIT News https://ift.tt/J7n4G0p

Tuesday, July 25, 2023

3 Questions: What’s it like winning the MIT $100K Entrepreneurship Competition?

Solar power plays a major role in nearly every roadmap for global decarbonization, but solar panels are large, heavy, and expensive, which limits their deployment. What if solar panels looked more like a yoga mat?

Such a technology could be transported in a roll, carried to the top of a building, and rolled out across the roof in a matter of minutes, slashing installation costs and dramatically expanding the places where rooftop solar makes sense.

That was the vision laid out by the MIT spinout Active Surfaces as part of the winning pitch at this year’s MIT $100K Entrepreneurship Competition, which took place May 15. The company is leveraging materials science and manufacturing innovations from labs across MIT to make ultra-thin, lightweight, and durable solar a reality.

The $100K is one of MIT’s most visible entrepreneurship competitions, and past winners say the prize money is only part of the benefit that winning brings to a burgeoning new company. MIT News sat down with Active Surfaces founders Shiv Bhakta, a graduate student in MIT’s Leaders for Global Operations dual-degree program within the MIT Sloan School of Management and Department of Civil and Environmental Engineering, and Richard Swartwout SM ’18 PhD ’21, an electrical engineering and computer science graduate and former Research Laboratory of Electronics postdoc and MIT.nano innovation fellow, to learn what the last couple of months have been like since they won.

Q: What is Active Surfaces’ solution, and what is its potential?

Bhakta: We’re commercializing an ultrathin film, flexible solar technology. Solar is one of the most broadly distributed resources in the world, but access is limited today. It’s heavy — it weighs 50 to 60 pounds a panel — it requires large teams to move around, and the form factor can only be deployed in specific environments.

Our approach is to develop a solar technology for the built environment. In a nutshell, we can create flexible solar panels that are as thin as paper, just as efficient as traditional panels, and at unprecedented cost floors, all while being applied to any surface. Same area, same power. That’s our motto.

When I came to MIT, my north star was to dive deeper in my climate journey and help make the world a better, greener place. Now, as we build Active Surfaces, I'm excited to see that dream taking shape. The prospect of transforming any surface into an energy source, thereby expanding solar accessibility globally, holds the promise of significantly reducing CO2 emissions at a gigaton scale. That’s what gets me out of bed in the morning.

Swartwout: Solar and a lot of other renewables tend to be pretty land-inefficient. Solar 1.0 is using low-hanging fruit: cheap land next to easy interconnects and new buildings designed to handle the weight of current panels. But as we ramp up solar, those things will run out. We need to utilize spaces and assets better. That’s what I think solar 2.0 will be: urban PV deployments, solar that’s closer to demand, and integrated into the built environment. These next-generation use cases aren’t just a racking system in the middle of nowhere.

We’re going after commercial roofs, which would cover most [building] energy demand. Something like 80-90 percent of building electricity demands in the space can be met by rooftop solar.

The goal is to do the manufacturing in-house. We use roll-to-roll manufacturing, so we can buy tons of equipment off the shelf, but most roll-to-roll manufacturing is made for things like labeling and tape, and not a semiconductor, so our plan is to be the core of semiconductor roll-to-roll manufacturing. There’s never been roll-to-roll semiconductor manufacturing before.

Q: What have the last few months been like since you won the $100K competition?

Bhakta: After winning the $100K, we’ve gotten a lot of inbound contact from MIT alumni. I think that’s my favorite part about the MIT community — people stay connected. They’ve been congratulating us, asking to chat, looking to partner, deploy, and invest.

We’ve also gotten contacted by previous $100K competition winners and other startups that have spun out of MIT that are a year or two or three ahead of us in terms of development. There are a lot of startup scaling challenges that other startup founders are best equipped to answer, and it’s been huge to get guidance from them.

We’ve also gotten into top accelerators like Cleantech Open, Venture For Climatetech, and ACCEL at Greentown Labs. We also onboarded two rockstar MIT Sloan interns for the summer. Now we’re getting to the product-development phase, building relationships with potential pilot partners, and scaling up the area of our technology.      

Swartwout: Winning the $100K competition was a great point of validation for the company, because the judges themselves are well known in the venture capital community as well as people who have been in the startup ecosystem for a long time, so that has really propelled us forward. Ideally, we’ll be getting more MIT alumni to join us to fulfill this mission.

Q: What are your plans for the next year or so?

Swartwout: We’re planning on leveraging open-access facilities like those at MIT.nano and the University of Massachusetts Amherst. We’re pretty focused now on scaling size. Out of the lab, [the technology] is a 4-inch by 4-inch solar module, and the goal is to get up to something that’s relevant for the industry to offset electricity for building owners and generate electricity for the grid at a reasonable cost.

Bhakta: In the next year, through those open-access facilities, the goal is to go from 100-millimeter width to 300-millimeter width and a very long length using a roll-to-roll manufacturing process. That means getting through the engineering challenges of scaling the technology and fine-tuning the performance.

When we’re ready to deliver a pilotable product, it’s my job to have customers lined up ready to demonstrate this works on their buildings, sign longer-term contracts to get early revenue, and have the support we need to demonstrate this at scale. That’s the goal.



from MIT News https://ift.tt/4qkWKre

New quantum magnet unleashes electronics potential

Some of our most important everyday items, like computers, medical equipment, stereos, generators, and more, work because of magnets. We know what happens when computers become more powerful, but what might be possible if magnets became more versatile? What if one could change a physical property that defined their usability? What innovation might that catalyze?

It’s a question that MIT Plasma Science and Fusion Center (PSFC) scientists Hang Chi, Yunbo Ou, Jagadeesh Moodera, and their co-authors explore in a new open-access Nature Communications paper, “Strain-tunable Berry curvature in quasi-two-dimensional chromium telluride.”

Understanding the magnitude of the authors’ discovery requires a brief trip back in time: In 1879, a 23-year-old graduate student named Edwin Hall discovered that when he put a magnet at right angles to a strip of metal that had a current running through it, one side of the strip would have a greater charge than the other. The magnetic field was deflecting the current’s electrons toward the edge of the metal, a phenomenon that would be named the Hall effect in his honor.

In Hall’s time, the classical system of physics was the only kind, and forces like gravity and magnetism acted on matter in predictable and immutable ways: Just like dropping an apple would result in it falling, making a “T” with a strip of electrified metal and magnet resulted in the Hall effect, full stop. Except it wasn’t, really; now we know quantum mechanics plays a role, too.

Think of classical physics as a map of Arizona, and quantum mechanics as a car trip through the desert. The map provides a macro view and generalized information about the area, but it can’t prepare the driver for all the random events one might encounter, like an armadillo running across the road. Quantum spaces, like the journey the driver is on, are governed by a different set of local traffic rules. So, while the Hall effect is induced by an applied magnetic field in a classical system, in a quantum case the Hall effect may occur even without the external field, at which point it becomes the anomalous Hall effect.

When cruising in the quantum realm, one is equipped with the knowledge of the so-called “Berry phase,” named after British physicist Michael Berry. It serves as a GPS logger for the car: It’s as if the driver has recorded their entire trip from start to finish, and by analyzing the GPS history, one can better plot the ups and downs, or “curvature” of the space. This “Berry curvature” of the quantum landscape can naturally shift electrons to one side, inducing the Hall effect without a magnetic field, just as the hills and valleys dictate the path of the car.
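In textbook terms, the intrinsic anomalous Hall conductivity is the Berry curvature summed over the occupied states of the Brillouin zone (a standard relation, stated here up to sign convention; it is not quoted from the paper):

$$\sigma_{xy} = \frac{e^{2}}{\hbar} \int_{\mathrm{BZ}} \frac{d^{2}k}{(2\pi)^{2}}\, f(\mathbf{k})\, \Omega(\mathbf{k})$$

Here $f(\mathbf{k})$ is the occupation function and $\Omega(\mathbf{k})$ the Berry curvature, so any strain that reshapes $\Omega(\mathbf{k})$ — the hills and valleys of the analogy — shows up directly in the measured Hall signal.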

While many have observed the anomalous Hall effect in magnetic materials, none had been able to manipulate it by squeezing and/or stretching — until the paper’s authors developed a method to demonstrate the change in the anomalous Hall effect and Berry curvature in an unusual magnet.

First, they took half-millimeter-thick bases made of either aluminum oxide or strontium titanate, both of which are crystals, and grew an incredibly thin layer of chromium telluride, a magnetic compound, on top of the bases. On their own, these materials wouldn’t do much; however, when combined, the film’s magnetism and the interface it created with the bases onto which it was grown caused the layers to stretch or squeeze.

To deepen their understanding of how these materials were working together, the researchers partnered with the Spallation Neutron Source at Oak Ridge National Laboratory (ORNL) to perform neutron scattering experiments — essentially blasting the material with shots of particles and studying what bounced back — to learn more about the film’s chemical and magnetic properties. Neutrons were an ideal tool for the study because they are magnetic but have no electrical charge. The neutron experiments allowed the researchers to build a profile that revealed how the chemical elements and magnetic behaviors changed at different levels as they probed deeper into the material.

The researchers witnessed the anomalous Hall effect and Berry curvature responding to the degree of squeezing or stretching occurring on the base after the film was applied, an observation later verified by modeling and data simulations.

Though this breakthrough occurred at the tiniest molecular level, the scientists’ discovery has significant, real-world ramifications. For example, hard drives store data in tiny magnetic regions, and if they were built using “strain-tunable” materials like the film, they could store additional data in regions that have been stretched different ways. In robotics, strain-tunable materials could be used as sensors able to provide precise feedback on robots’ movements and positioning. Such materials would be especially useful for “soft robots,” which use soft and flexible components that better imitate biological organisms. Or, a magnetic device that changed its behavior when flexed or bent could be used to detect minute changes in the environment, or to make incredibly sensitive health monitoring equipment.

In addition to Chi, Ou, and Moodera, who is also an affiliate of the MIT Department of Physics, MIT contributors to the work include postdoc Alexandre C. Foucher and Professor Frances Ross of the Department of Materials Science and Engineering.

This study was supported, in part, by the U.S. Army Research Office, U.S. National Science Foundation (NSF), U.S. Office of Naval Research, U.S. Air Force Office of Scientific Research, and the MIT-IBM Watson AI Research Lab. Facilities access was provided by the MIT Materials Research Laboratory, MRSEC, MIT.nano, SNS and Center for Nanophase Materials Sciences, Department of Energy Office of Science User Facilities operated by ORNL, and Advanced Cyberinfrastructure Coordination Ecosystem: Services and Support supported by NSF.



from MIT News https://ift.tt/bK0QJq9

Monday, July 24, 2023

A new vision for U.S. health care

It’s not exactly what he’s best known for, but Alexander Hamilton helped develop the first national, compulsory health insurance policy in the world: a 1798 taxpayer-financed plan Congress approved to cover sick and disabled seamen.

“The interests of humanity are concerned in it,” Hamilton wrote.

And they still are, as MIT Professor Amy Finkelstein notes in a new book. The U.S. has repeatedly tried to provide medical care for those who need it and cannot afford it. These efforts may have started with Hamilton, but they have continued through modern times, with policies that have mandated emergency-room care for all, and have extended insurance to those with certain serious illnesses.  

Then again, no policy has fully addressed the needs of the U.S. population. About 30 million U.S. citizens lack health insurance. Even for the insured, costs routinely exceed a plan’s benefits. Americans have $140 billion in unpaid medical debt, more than all other personal debt combined, and three-fifths of it is incurred by people with health insurance.

That’s why Finkelstein is calling for a total overhaul of the U.S. health insurance system, in a new book with economist Liran Einav of Stanford University, “We’ve Got You Covered: Rebooting American Health Care,” published today by Portfolio. In it, the scholars envision an approach with one layer of free and automatic health insurance for everyone, and another layer of private insurance for those seeking additional care amenities.  

“In the U.S., we have always had a commitment to do something when people are ill, so we might as well do it effectively and efficiently,” says Finkelstein, the John and Jennie S. MacDonald Professor in MIT’s Department of Economics. “I don’t think anyone would argue we have a wonderful, well-functioning health care system.”

Patchwork programs

Finkelstein has won the John Bates Clark Medal and received a MacArthur fellowship for empirical studies of health insurance and health care — including work on Medicaid and Medicare, the financial impact of being hospitalized, geographic variation in medical costs, and more. Finkelstein and Einav are also co-authors, with Ray Fisman, of the 2023 book, “Risky Business,” about the insurance industry.

Yet through two decades of intensive research, Finkelstein and Einav had never advocated for specific health care policies — until now.

“We feel we do have something to say to the wider public about the problems, and also about the solution,” Finkelstein says. “We emphasize the problems of the insured, not only the uninsured.”

Indeed, around 150 million Americans rely on private employer-provided insurance. Yet they risk losing that insurance if they lose or change their job. Those with public health insurance, like Medicaid, face nearly the opposite problem. If a family member earns enough money to lift a household above the poverty line, they can lose eligibility. The net result: About one in four Americans under the age of 65 will be uninsured at some point in the next two years.

Many of them will actually be eligible for free or heavily discounted coverage. About 18 million Americans who are eligible for public health insurance remain unenrolled due to a lack of information and complicated signup procedures. And even Medicare, the workhorse public insurance program for many seniors, has out-of-pocket expenses with no cap. A quarter of people on Medicare spend a quarter of their income on health care.

Some reforms have brought better coverage to more people. As the scholars note, the Affordable Care Act of 2010 (which MIT economist Jonathan Gruber helped develop) has allowed 10 million formerly uninsured Americans to gain coverage. But it didn’t change the risk of losing insurance coverage or of incurring large medical debt due to highly incomplete coverage.

The book contends the U.S. has used a long series of piecemeal policies to try to fix problems with health coverage in the U.S. One long-standing approach has been to create disease-specific care subsidies, starting with a 1972 law extending Medicare to everyone with end-stage kidney disease. More recently, similar programs have been passed to cover patients with tuberculosis, breast and cervical cancer, sickle cell anemia, ALS, HIV/AIDS, and Covid-19.

Finkelstein and Einav are skeptical of this approach, however, due to its patchwork nature. Passing separate laws for different illnesses will always leave holes in coverage. Why not just automatically include everyone?

“When you think about covering all the gaps, that’s what universal basic coverage is,” Finkelstein says.

Land of the free

As “We’ve Got You Covered” notes, the current U.S. approach to health insurance is hardly etched in concrete: Employer-provided health care really only dates to the 1950s. And, the authors emphasize, the way the U.S. keeps instituting policies to make basic care available to anyone — open emergency rooms, subsidies for severe disease treatments — is telling us that the country has a bottom-line expectation of providing humane care when most needed.

“The reason why we have all these patches is that, hard as it is to believe, in the United States there is in fact a strong social norm, an unwritten social contract, that we don’t let people die in the streets,” Finkelstein says. “When people are in dire medical situations and don’t have resources, we inevitably as a society feel compelled to try to help them. The problems of the insured and the uninsured represent failures to achieve our commitments, not the lack of those commitments.”

To Finkelstein and Einav, then, the solution is to provide free, basic health care for everyone. No sign-up woes; enrollment would be automatic. No charges for basic care. No losing insurance if you leave your job. No falling out of the public-insurance ranks if you climb above the poverty line.

At the same time, they envision, the U.S. would have another layer of private health insurance, covering health care amenities — private hospital rooms, say, or other elective elements of medical care. “You can pay to upgrade,” Finkelstein says.

That would not lead to the system of absolutely equal, universal care that some envision, but Finkelstein still believes it would improve the status quo.

“We have inequality in all aspects of our lives, and this is another,” Finkelstein says. “The key is to provide essential basic coverage.”

Could the U.S. afford a system of free, basic, automatic-enrollment health care? The book’s surprising answer is: Yes, absolutely. The U.S. spends 18 percent of GDP on health care, with half of that going to public programs and half to private care. As it happens, 9 percent of GDP is what European countries spend on their publicly financed health systems.

“We’re already paying for universal coverage in the United States, even though we’re not getting it,” Finkelstein says. “We’re already spending 9 percent of GDP on publicly financed health care. We certainly could do it at the same price tag as all these other countries.”
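
The back-of-the-envelope arithmetic is easy to check. Here is a minimal sketch in Python, assuming a round U.S. GDP of $25 trillion (an illustrative figure, not one from the book):

    # Check of the book's spending arithmetic.
    # Assumption: a round U.S. GDP of $25 trillion, for illustration only.
    us_gdp_trillions = 25.0
    total_health_share = 0.18               # 18 percent of GDP goes to health care
    public_share = total_health_share / 2   # half of that is publicly financed

    print(f"Total health spending:  ${us_gdp_trillions * total_health_share:.1f} trillion")
    print(f"Public health spending: ${us_gdp_trillions * public_share:.1f} trillion "
          f"({public_share:.0%} of GDP, on par with European public systems)")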

“We’ve Got You Covered” even comes out against modest co-pays (despite studies showing they reduce visits to doctors), finding them “in conflict with the rationale for universal coverage, namely, access to essential medical care without regard to [financial] need,” as Finkelstein says.

Until the impossible becomes inevitable

If the Finkelstein-Einav health insurance system makes sense on the merits, though, does it have any chance of existing?

“One thing that makes me, if not optimistic, then at least not unduly pessimistic, is that this is an argument that will and does appeal to people across the political spectrum,” Finkelstein contends. Expanding health insurance is usually associated with progressive politicians, but the book points to a series of conservatives who, even into the 21st century, have supported universal coverage.

Certainly other experts have praised “We’ve Got You Covered.” Siddhartha Mukherjee, a physician and award-winning author, calls the book “the clearest diagnosis of the American health care system I have seen,” adding it “should and will reset the debate about how to fix health care.”

N. Gregory Mankiw PhD ’84, the Robert M. Beren Professor of Economics at Harvard University and chair of the President’s Council of Economic Advisers under former president George W. Bush, terms it a “smart, cogent, and eminently readable look at the U.S. health care system and what can be done to fix it.”

Even if a change to a free system of basic care is not immediately in the offing, Finkelstein and Einav suggest in the book that their role, in writing “We’ve Got You Covered,” is something economist Milton Friedman suggested: Develop ideas and keep them in the public sphere until “the politically impossible becomes the politically inevitable.”

And in the meantime, Finkelstein and Einav firmly suggest that people take more seriously the way U.S. health care policy implicitly assumes we should help everyone, and that we do so for the same reasons Hamilton wanted to help seamen, namely, “to protect from want and misery” in their lives.



from MIT News https://ift.tt/YKOxTQa

A new vision for ultrasound imaging

Nicole Henning did not foresee becoming an expert in ultrasound imaging.

Before joining the Koch Institute for Integrative Cancer Research at MIT as an ultrasound research specialist at the Preclinical Imaging and Testing (P&IT) Core Facility, she earned a degree in animal sciences and aspired to go to veterinary school. Instead, she found herself working in animal husbandry and, soon, in facility and project management. While she enjoyed her work, it wasn’t enough.

“I wanted to do more research, particularly helping with other people’s research,” Henning says. “I wanted to try something new. I wanted more freedom to learn to do new things and experiment in my own way.”

Virginia Spanoudaki, scientific director of the P&IT, had a vision for implementing a new ultrasound imaging method. The core facility had an ultrasound machine that could potentially be used to produce larger quantities of less-biased information about the effects of different therapies on mouse models of cancer than other standard imaging techniques. However, one key ingredient was missing: dedicated research staff who could operate and improve the system for research use.

Spanoudaki’s ideal candidate would be someone who could learn not just imaging, but someone who also cared about animals and could make the most of the ultrasound technology. The P&IT is part of the Robert A. Swanson (1969) Biotechnology Center (SBC), which not only provides access to highly specialized, cutting-edge technology for scientists and engineers at the Koch Institute, the MIT community, and beyond, but also offers expert consulting and training that help researchers run their experiments in ways that make the most of the technology, along with assistance in analyzing data, asking new questions, and planning the next experiments.

Henning’s high level of initiative and motivation, and willingness to experiment and educate herself, made her a perfect match. She would become an expert and teacher in ultrasound imaging very quickly.

“This turned out to be the perfect job. In retrospect, I feel lucky,” says Henning.

Getting a feeling for the technology

Immediately after joining the P&IT in 2019, Henning got to know the ultrasound imaging system and began tinkering with it. Though she had no background in the technology and faced a steep learning curve, she sought training from the Fujifilm specialists who supplied the machine to the core.

Ultrasound imaging uses sound waves to take real-time images of the body. It is often used in hospitals to track fetal development in pregnancies, or to diagnose diseases in various organs. In cancer research, ultrasound imaging may be used to study cancer development or to screen drugs for their effect on tumors or tissues.

Henning decided to take ultrasound’s capabilities one step further by developing ultrasound-guided injection (USGI), a technique that can be harnessed to initiate disease for modeling purposes or to administer drugs into deep tissues. Previously, delivering such payloads to hard-to-reach tissues inside the body required invasive surgeries. However, such surgeries can become a confounding factor in drug screenings and disease studies, as the immune responses involved in healing surgical wounds may hinder or boost the disease process or the efficacy of the drugs being tested. The key advance of USGI is that it is minimally invasive, using ultrasound imaging to view the inside of the body while making precisely targeted injections into tissues such as the lungs, liver, or pancreas.

Laura Maiorino, a postdoc in the Koch Institute laboratory of MIT Professor Darrell Irvine, was endeavoring to develop local therapies for early-stage lung cancer with the idea of circumventing toxicity while maximizing the anti-tumor response. She wondered, “Can we use ultrasound guidance to inject our therapies inside the lung, inside a tumor?” When she approached Henning with this question, the answer was immediate: “We should try.”

“And that has been Henning’s answer to many questions since,” says Maiorino.

After months of digging through scarce literature on the topic and hands-on refinements, Henning and Maiorino successfully developed a precision method to deliver payloads to a target region in the lung — a technique that allows scientists to test local treatments for lung cancer preclinically. Maiorino believes that “Henning’s direct impact on the development of a new therapeutic approach on its way to human testing so quickly is truly an exceptional accomplishment.”

Liang Hao, formerly a postdoc and now a visiting scientist in the lab of MIT Professor Sangeeta Bhatia at the Koch Institute, also worked with Henning to apply the USGI technique to cancer, in this case to develop colorectal cancer metastasis models for drug screenings. Before the use of Henning’s USGI technique, developing such models relied on invasive and time-consuming surgical methods. Together with Hao, Henning was able to cut down the time by over 90 percent.

“Nicole is beyond an ultrasound expert in our study,” Hao says. “She is not only the powerhouse of the experimentation, but also actively brings up brilliant ideas that significantly strengthen the science. We are lucky to have her in the core facility and contributing to our research project.”

A model of innovation

Once the USGI technique had been developed, it could have ended there, a job well done. Instead, through outreach to scientists working at the Koch Institute and a presentation at last year’s American Association for Laboratory Animal Science National Meeting, Henning is working to make USGI readily available to any scientist who can benefit from it. In the near future, she will also publish the technique’s protocols on an open-access website to make them freely available.

“Her can-do attitude and infectious enthusiasm have attracted several new users to this minimally invasive approach, which is set to become a gold standard for disease modeling,” Maiorino says.

Henning’s collaborative research endeavors were recognized on June 8 with the 2023 MIT Excellence Award, presented by Chancellor Melissa Nobles in the category of “Innovative Solutions: Collaborating for Results.” The award is given annually to recognize exceptional MIT staff who help create novel solutions to challenges and embrace change as an opportunity for growth.

“Henning is not just offering service, she is offering dedicated collaboration,” says Spanoudaki. “MIT’s signature ‘can-do’ attitude is well-embodied by Henning. Through active collaborations driven by curiosity and perseverance, Henning has developed a technology that has sparked a paradigm shift in the field of disease modeling and drug screening. Her enthusiasm to engage with others highlights the strength of academic environments, especially at the SBC, where collaboration and convergence create impactful science.”



from MIT News https://ift.tt/gTOhSUY

Brain networks encoding memory come together via electric fields, study finds

The “circuit” metaphor of the brain is as indisputable as it is familiar: Neurons forge direct physical connections to create functional networks, for instance to store memories or produce thoughts. But the metaphor is also incomplete. What drives these circuits and networks to come together? New evidence suggests that at least some of this coordination comes from electric fields.

A new open-access study in Cerebral Cortex shows that as animals played working memory games, the information about what they were remembering was coordinated across two key brain regions by the electric field that emerged from the underlying electrical activity of all participating neurons. The field, in turn, appeared to drive the neural activity, or the fluctuations of voltage apparent across the cells’ membranes.

If the neurons are musicians in an orchestra, the brain regions are their sections, and the memory is the music they produce, the study’s authors say, then the electric field is the conductor.

The physical mechanism by which this prevailing electric field influences the membrane voltage of constituent neurons is called “ephaptic coupling.” Those membrane voltages are fundamental to brain activity. When they cross a threshold, neurons “spike,” sending an electrical transmission that signals other neurons across connections called synapses. But any amount of electrical activity could contribute to a prevailing electric field that also influences the spiking, says study senior author Earl K. Miller, Picower Professor in the Department of Brain and Cognitive Sciences at MIT.

“Many cortical neurons spend a lot of time wavering on the verge of spiking,” Miller says. “Changes in their surrounding electric field can push them one way or another. It’s hard to imagine evolution not exploiting that.”
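
That intuition is easy to see in a toy simulation. The sketch below, a leaky integrate-and-fire neuron with an added field term, uses arbitrary parameters chosen only to show the effect; it is not the biophysical model used in the study. The cell's synaptic drive leaves it just below threshold, so a field nudge of about one millivolt is the difference between silence and spiking:

    # Toy leaky integrate-and-fire neuron. Illustrative parameters only;
    # this is NOT the biophysical model used in the study.
    dt, tau = 0.1, 10.0               # time step and membrane time constant (ms)
    v_rest, v_thresh = -70.0, -55.0   # resting and threshold voltages (mV)
    drive = 14.5                      # synaptic drive that stays just sub-threshold

    def count_spikes(field_mv, duration_ms=1000.0):
        """Count spikes over one second with a constant ephaptic field term."""
        v, spikes = v_rest, 0
        for _ in range(int(duration_ms / dt)):
            v += dt * (-(v - v_rest) + drive + field_mv) / tau
            if v >= v_thresh:
                spikes += 1
                v = v_rest            # reset after each spike
        return spikes

    print(count_spikes(0.0))   # no field: the drive alone never crosses threshold
    print(count_spikes(1.0))   # a small field pushes the neuron into repeated spiking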

In particular, the new study showed that the electric fields drove the electrical activity of networks of neurons to produce a shared representation of the information stored in working memory, says lead author Dimitris Pinotsis, associate professor at City, University of London and a research affiliate in The Picower Institute for Learning and Memory. He noted that the findings could improve the ability of scientists and engineers to read information from the brain, which could help in the design of brain-controlled prosthetics for people with paralysis.

“Using the theory of complex systems and mathematical pen-and-paper calculations, we predicted that the brain’s electric fields guide neurons to produce memories,” Pinotsis says. “Our experimental data and statistical analyses support this prediction. This is an example of how mathematics and physics shed light on the brain’s fields and how they can yield insights for building brain-computer interface (BCI) devices.”

Fields prevail

In a 2022 study, Miller and Pinotsis developed a biophysical model of the electric fields produced by neural electrical activity. They showed that the overall fields that emerged from groups of neurons in a brain region were more reliable and stable representations of the information animals used to play working memory games than the electrical activity of the individual neurons. Neurons are somewhat fickle devices whose vagaries produce an information inconsistency called “representational drift.” In an opinion article earlier this year, the scientists also posited that in addition to neurons, electric fields affected the brain’s molecular infrastructure and its tuning so that the brain processes information efficiently.

In the new study, Pinotsis and Miller extended their inquiry to asking whether ephaptic coupling spreads the governing electric field across multiple brain regions to form a memory network, or “engram.”

They therefore broadened their analyses to look at two regions in the brain: the frontal eye fields (FEF) and the supplementary eye fields (SEF). These two regions, which govern voluntary movement of the eyes, were relevant to the working memory game the animals were playing because in each round the animals would see an image on a screen positioned at some angle around the center (like the numbers on a clock). After a brief delay, they had to glance in the same direction that the object had just been in.

As the animals played, the scientists recorded the local field potentials (LFPs, a measure of local electrical activity) produced by scores of neurons in each region. The scientists fed this recorded LFP data into mathematical models that predicted individual neural activity and the overall electric fields.

The models allowed Pinotsis and Miller to then calculate whether changes in the fields predicted changes in the membrane voltages, or whether changes in that activity predicted changes in the fields. To do this analysis, they used a mathematical method called Granger causality. Unambiguously, this analysis showed that in each region, the fields had strong causal influence over the neural activity and not the other way around. Consistent with last year’s study, the analysis also showed that measures of the strength of influence remained much steadier for the fields than for the neural activity, indicating that fields were more reliable.
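
For readers curious what such a directional test looks like in code, here is a minimal sketch using the Granger-causality implementation in the Python statsmodels library, run on synthetic stand-in signals (the study, by contrast, used recorded LFPs and modeled fields):

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: an "activity" signal that lags a "field" signal by
    # two samples, so the field should Granger-cause the activity.
    n = 500
    field = rng.standard_normal(n)
    activity = 0.8 * np.roll(field, 2) + 0.2 * rng.standard_normal(n)

    # Column order matters: the test asks whether the second column helps
    # predict the first. Here: does `field` predict `activity`?
    data = np.column_stack([activity, field])
    grangercausalitytests(data, maxlag=4)
    # Small p-values at the tested lags indicate that past field values carry
    # predictive information about the activity signal, i.e., field -> activity.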

The researchers then checked causality between the two brain regions and found that electric fields, but not neural activity, reliably represented the transfer of information between FEF and SEF. More specifically, they found that the transfer typically flowed from FEF to SEF, which agrees with prior studies of how the two regions interact. FEF tends to lead the way in initiating an eye movement.

Finally, Pinotsis and Miller used another mathematical technique called representation similarity analysis to determine whether the two regions were, in fact, processing the same memory.  They found that the electric fields, but not the LFPs or neural activity, represented the same information across both regions, unifying them into an engram memory network.
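
The representational similarity step is also compact to sketch. In the toy example below (random placeholder arrays, not the study's data), each region's responses to a set of remembered conditions are reduced to a representational dissimilarity matrix (RDM), and the two RDMs are then correlated; a high correlation indicates the regions encode the same information structure:

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)

    # Placeholder data: rows = remembered conditions (e.g., cued angles),
    # columns = recorded features (e.g., per-channel field values).
    region_a = rng.standard_normal((8, 32))                    # stand-in for FEF
    region_b = region_a + 0.3 * rng.standard_normal((8, 32))   # stand-in for SEF

    # An RDM is the set of pairwise distances between condition patterns.
    rdm_a = pdist(region_a, metric="correlation")
    rdm_b = pdist(region_b, metric="correlation")

    rho, p = spearmanr(rdm_a, rdm_b)
    print(f"RDM agreement (Spearman rho): {rho:.2f}, p = {p:.3g}")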

Further clinical implications

Considering evidence that electric fields emerge from neural electrical activity but then come to drive neural activity to represent information, Miller speculated that perhaps a function of electrical activity in individual neurons is to produce the fields that then govern them.

“It’s a two-way street,” Miller says. “The spiking and synapses are very important. That’s the foundation. But then the fields turn around and influence the spiking.”

That could have important implications for mental health treatments, he says, because whether and when neurons spike influences the strength of their connections, and thereby the function of the circuits they form, a phenomenon called synaptic plasticity. 

Clinical technologies such as transcranial electrical stimulation (TES) alter brain electrical fields, Miller notes. If electrical fields not only reflect neural activity but actively shape it, then TES technologies could be used to alter circuits. Properly devised electrical field manipulations, he says, could one day help patients rewire faulty circuits.

Funding for the study came from U.K. Research and Innovation, the U.S. Office of Naval Research, The JPB Foundation, and the Picower Institute.



from MIT News https://ift.tt/DzjLqWV

Sunday, July 23, 2023

The “forgotten peace” of World War I

As negotiations for the Treaty of Lausanne began in late 1922, the aim was to hammer out one last international settlement about territories and rights following the first world war, this time between the victorious Allied powers and the Ankara government that had just abolished the Ottoman sultanate and started governing what would soon become the Republic of Turkey. Those watching the conference closely included Armenian representatives who had survived the genocide led by Ottoman rulers in 1915-16, when hundreds of thousands of Armenians were killed.

At the conference, the Armenian delegation had one major aim: that Armenians be granted an autonomous region within Turkey, either in what is now eastern Turkey or northern Syria. They called this an Armenian National Home (ANH), an autonomous, demilitarized area within Turkish territory where Armenians could practice self-rule and express their culture and religion safely.

The negotiations did not produce what the Armenians wanted, however. Turkish leaders rebuffed Armenian demands, while the Allies were not heavily invested in the matter. The Treaty of Lausanne became known as the “birth certificate” of modern Turkey, while Turkey’s Armenian population became a minority group with mostly equal rights, but often facing discrimination in practice.

“The Treaty of Lausanne doesn’t mention Armenians even once,” says MIT historian Lerna Ekmekcioglu.

Now, in a newly published research article, Ekmekcioglu contends that the Treaty of Lausanne is an often-overlooked event of great historical significance for Armenians. As she writes, “the Treaty of Lausanne rendered the Armenian Genocide politically inconsequential.” There was no redress for Armenians, in the form of autonomy or any kind of restorative justice, and no accountability for the perpetrators.

That article, “Debates over an Armenian National Home at the Lausanne Conference and the Limits of Post-Genocide Co-Existence,” uses new archival research to reconstruct the dynamics of the treaty negotiations. As such, the research illuminates both Armenians’ struggles and the international community’s struggle to deliver consistent support for multiethnic, multireligious states.

“The issue broadly is how states govern people whose identities don’t fit with the historically dominant group’s identity,” says Ekmekcioglu, who is the McMillan-Stewart Associate Professor of History at MIT and director of MIT’s Program in Women’s and Gender Studies. “It’s an ongoing question. This is a very good case study for contemplating these questions. It’s also very relevant to this day because the Lausanne Treaty did not collapse.”

The paper appears as a chapter in the edited volume, “They All Made Peace — What Is Peace?: The 1923 Lausanne Treaty and the New Imperial Order,” published this month by the University of Chicago Press. It is edited by Jonathan Conlin, a historian at the University of Southampton, and Ozan Ozavci, an assistant professor at Utrecht University. The volume marks the 100th anniversary of the treaty being signed, which occurred on July 24, 1923. The book is part of a collective scholarly effort about the treaty, the “Lausanne Project,” whose website suggests the pact may be the “forgotten peace” of World War I.

Ekmekcioglu’s past work largely focuses on the lives of Armenians in the modern Turkish state. In her 2016 book, “Recovering Armenia: The Limits of Belonging in Post-Genocide Turkey,” published by Stanford University Press, she notes that immediately after World War I, Armenians were optimistic about their political prospects; Ekmekcioglu calls the time from 1918 to 1922 an “exceptional period,” as Armenians hoped to gain full rights they did not have under the Ottoman Empire.

However, the Treaty of Lausanne negotiations — held in Lausanne, Switzerland — brought an end to Armenian optimism. Perhaps that should have been predicted: In the few years after World War I ended, Turkish military forces defeated Allied-backed troops in skirmishing for control over some Turkish territory. That made the Treaty of Lausanne discussions highly unusual: The putative victors, the Allies, had just lost military battles to the side they were negotiating against.

“They have so much negotiating power that they get most of what they want,” Ekmekcioglu says, speaking of the incipient Turkish government of the time.

In that sense, 1922 was probably already too late for negotiations to deliver success for the Armenians. But as Ekmekcioglu details in the article, the Allies lacked not just military leverage, but perhaps moral standing. The Turkish press ran many stories about colonial misdeeds by the British and French, and even stories about the Ku Klux Klan in the U.S., all aimed at showing that the Allied powers had mistreated minority groups. To whatever extent there may have been Ottoman backing for a new Armenian settlement, that kind of coverage helped squelch it.

“One of the reasons they [the Allied side] didn’t have much standing in the eyes of the Turkish public is that they confused humanitarianism with colonialism,” Ekmekcioglu says. “They claimed specifically to have never treated any minorities badly in the empire. But Turkish newspapers were writing about that double standard of imperialism.”

The Treaty of Lausanne is perhaps best known for having ratified a massive and compulsory population exchange in the 1920s between Orthodox Greeks in Asia Minor and surrounding areas, and Muslims in Greece. Perhaps 2 million people were relocated, about three-quarters of them Greek. That exchange, which homogenized area populations, has often been regarded as an antecedent to the partition of India and Pakistan in the late 1940s.

“This has important international legal consequences, because population transfer then becomes a potentially recognized solution to the existence of heterogeneity and population mixing,” Ekmekcioglu observes. “Other groups, in the future, will take this as an example. It is a self-fulfilling prophecy.”

So, while the Treaty of Lausanne did guarantee certain rights for all populations, its inability to deliver a more thorough pluralism in political bodies may be a lasting part of its legacy. To be sure, the Armenian representatives at the Lausanne conference wanted their own largely homogenized territory, too — although, as Ekmekcioglu notes in the paper, their extraordinary circumstances make that fairly understandable.

And so, after suffering at the hands of the Ottomans, the Armenians then felt let down by the international community, another blow in short succession. Perhaps there were no easy answers at the time, but, Ekmekcioglu observes, we can still think through what the best alternatives might have been. Especially, she notes, in a world often still struggling to achieve stability and pluralism at once.

“To understand minorities in Turkey to this day, you have to understand the Treaty of Lausanne, and how it came to be,” Ekmekcioglu says. “It’s a great laboratory for comparing, and ideally coming up with an answer to, the issue of difference.”



from MIT News https://ift.tt/hiN5Xr2

Friday, July 21, 2023

New sensor mimics cell membrane functions

Drawing inspiration from natural sensory systems, an MIT-led team has designed a novel sensor that could detect the same molecules that naturally occurring cell receptors can identify.

In work that combines several new technologies, the researchers created a prototype sensor that can detect an immune molecule called CXCL12, down to tens or hundreds of parts per billion. This is an important first step to developing a system that could be used to perform routine screens for hard-to-diagnose cancers or metastatic tumors, or as a highly biomimetic electronic “nose,” the researchers say.

“Our hope is to develop a simple device that lets you do at-home testing, with high specificity and sensitivity. The earlier you detect cancer, the better the treatment, so early diagnostics for cancer is one important area we want to go in,” says Shuguang Zhang, a principal research scientist in MIT’s Media Lab.

The device draws inspiration from the membrane that surrounds all cells. Within such membranes are thousands of receptor proteins that detect molecules in the environment. The MIT team modified some of these proteins so that they could survive outside the membrane, and anchored them in a layer of crystallized proteins atop an array of graphene transistors. When the target molecule is detected in a sample, these transistors relay the information to a computer or smartphone.

This type of sensor could potentially be adapted to analyze any bodily fluid, such as blood, tears, or saliva, the researchers say, and could screen for many different targets simultaneously, depending on the type of receptor proteins used.

“We identify critical receptors from biological systems and anchor them onto a bioelectronic interface, allowing us to harvest all those biological signals and then transduce them into electrical outputs that can be analyzed and interpreted by machine-learning algorithms,” says Rui Qing, a former MIT research scientist who is now an associate professor at Shanghai Jiao Tong University.

Qing and Mantian Xue PhD ’23 are the lead authors of the study, which appears today in Science Advances. Along with Zhang, Tomás Palacios, director of MIT’s Microsystems Technology Laboratories and a professor of electrical engineering and computer science, and Uwe Sleytr, an emeritus professor at the Institute of Synthetic Bioarchitectures at the University of Natural Resources and Life Sciences in Vienna, are senior authors of the paper.

Free from membranes

Most current diagnostic sensors are based on either antibodies or aptamers (short strands of DNA or RNA) that can capture a particular target molecule from a fluid such as blood. However, both of these approaches have limitations: Aptamers can be easily broken down by body fluids, and manufacturing antibodies so that every batch is identical can be difficult.

One alternative approach that scientists have explored is building sensors based on the receptor proteins found in cell membranes, which cells use to monitor and respond to their environment. The human genome encodes thousands of such receptors. However, these receptor proteins are difficult to work with because once removed from the cell membrane, they only maintain their structure if they are suspended in a detergent.

In 2018, Zhang, Qing, and others reported a novel way to transform hydrophobic proteins into water-soluble proteins, by swapping out a few hydrophobic amino acids for hydrophilic amino acids. This approach is called the QTY code, after the letters representing the three hydrophilic amino acids — glutamine, threonine, and tyrosine — that take the place of hydrophobic amino acids leucine, isoleucine, valine, and phenylalanine.  
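
At the sequence level, the substitution itself is simple to express. The sketch below uses the residue pairing reported in the published QTY papers (leucine to glutamine, isoleucine and valine to threonine, phenylalanine to tyrosine); treat it as illustrative, and note that real designs apply the code only within transmembrane segments, which this toy function does not attempt to identify:

    # Illustrative QTY-style substitution on a one-letter protein sequence.
    # Pairing (L->Q, I->T, V->T, F->Y) follows the published QTY papers; real
    # designs apply it only within transmembrane helices.
    QTY_MAP = str.maketrans({"L": "Q", "I": "T", "V": "T", "F": "Y"})

    def qty_variant(sequence: str) -> str:
        """Return the water-solubilized QTY variant of a protein sequence."""
        return sequence.upper().translate(QTY_MAP)

    # Hypothetical helix fragment, not an actual CXCR4 segment.
    print(qty_variant("LVIFLGLVLF"))  # -> QTTYQGQTQY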

“People have tried to use receptors for sensing for decades, but it is challenging for widespread use because receptors need detergent to keep them stable. The novelty of our approach is that we can make them water-soluble and can produce them in large quantities, inexpensively,” Zhang says.

Zhang and Sleytr, who are longtime collaborators, decided to team up to try to attach water-soluble versions of receptor proteins to a surface, using bacterial proteins that Sleytr has studied for many years. These proteins, known as S-layer proteins, are found as the outermost surface layer of the cell envelope in many types of bacteria and archaea.

When S-layer proteins are crystallized, they form coherent monomolecular arrays on a surface. Sleytr had previously shown that these proteins can be fused with other proteins such as antibodies or enzymes. For this study, the researchers, including senior scientist Andreas Breitwieser, also a co-author of the paper, used S-layer proteins to create a very dense, immobilized sheet of a water-soluble version of a receptor protein called CXCR4. This receptor binds to a target molecule called CXCL12, which plays important roles in several human diseases, including cancer, and to an HIV coat glycoprotein, which is responsible for the virus’s entry into human cells.

“We use these S-layer systems to allow all these functional molecules to attach to a surface in a monomolecular array, in a very well-defined distribution and orientation,” Sleytr says. “It’s like a chessboard where you can arrange different pieces in a very precise manner.”

The researchers named their sensing technology RESENSA (Receptor S-layer Electrical Nano Sensing Array).

Sensitivity with biomimicry

These crystallized S-layers can be deposited onto nearly any surface. For this application, the researchers attached the S-layer to a chip with graphene-based transistor arrays that Palacios’ lab had previously developed. The single-atomic thickness of the graphene transistors makes them ideal for the development of highly sensitive detectors.

Working in Palacios’ lab, Xue adapted the chip so that it could be coated with a dual layer of proteins — crystallized S-layer proteins attached to water-soluble receptor proteins. When a target molecule from the sample binds to a receptor protein, the charge of the target changes the electrical properties of the graphene in a way that can be easily quantified and transmitted to a computer or smartphone connected to the chip.

“We chose graphene as the transducer material because it has excellent electrical properties, meaning it can better translate those signals. It has the highest surface-to-volume ratio because it's a sheet of carbon atoms, so every change on the surface, caused by the protein binding events, translates directly to the whole bulk of the material,” Xue says.

The graphene transistor chip can be coated with S-layer-receptor proteins at a density of 1 trillion receptors per square centimeter, all oriented upward. This allows the chip to take advantage of the maximum sensitivity offered by the receptor proteins, within the clinically relevant range for target analytes in the human body. The array chip integrates more than 200 devices, providing a redundancy in signal detection that helps ensure reliable measurements even in the case of rare molecules, such as ones that could reveal the presence of an early-stage tumor or the onset of Alzheimer’s disease, the researchers say.
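
A rough sketch of why that redundancy helps (the article does not describe the actual readout statistics, and the numbers below are arbitrary): even when a binding event shifts each device's reading by only a fraction of its noise, averaging across roughly 200 devices shrinks the uncertainty of the mean by about a factor of 14, the square root of 200, lifting the shift well clear of the noise floor.

    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated readings from 200 redundant devices (arbitrary units).
    # Assumption for illustration: binding shifts each reading by 0.15
    # units against unit-variance per-device noise.
    n_devices = 200
    baseline = rng.normal(0.00, 1.0, n_devices)   # no analyte present
    bound = rng.normal(0.15, 1.0, n_devices)      # weak shift from binding

    for label, r in [("baseline", baseline), ("bound", bound)]:
        sem = r.std() / np.sqrt(n_devices)        # standard error of the mean
        print(f"{label:8s} mean = {r.mean():+.3f} +/- {sem:.3f}")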

Thanks to the QTY code, naturally occurring receptor proteins can be modified and then used, the researchers say, to generate an array of sensors on a single chip to screen for virtually any molecule that cells can detect. “What we are aiming to do is develop the basic technology to enable a future portable device that we can integrate with cell phones and computers, so that you can do a test at home and quickly find out whether you should go to the doctor,” Qing says.

“This new system is a combination of different research fields, such as molecular and synthetic biology, physics, and electrical engineering, which in the team’s approach are nicely integrated,” says Piero Baglioni, a professor of physical chemistry at the University of Florence, who was not involved in the study. “Moreover, I believe that it is a breakthrough that could be very useful in the diagnostics of many diseases.”

The research was funded by the National Science Foundation, MIT Institute for Soldier Nanotechnologies, and Wilson Chu of Defond Co. Ltd.



from MIT News https://ift.tt/kCxidFp

Moving days for MIT’s history

Gloria Martinez has a million-and-a-half items on her to-do list.

Quite literally: Give or take a few hundred thousand, that's the number of unique objects in the MIT Museum's astonishingly diverse collection. Martinez is supervising the collection's move to a new storage facility — the final step in delivering the new museum, whose headquarters opened last fall in the heart of Kendall Square.

Only a tiny fraction of the huge collection is on display at the popular new museum. Working under tight deadlines, Martinez and her team are painstakingly gathering details on the vast store, now held at a warehouse and at the previous site of the museum, and packing everything for transfer to another storage facility a few miles away.

Martinez, a master of the art and science of handling museum artifacts, faces a bewildering and ever-changing array of logistical challenges. “My day might look one way on my calendar in the morning, and turn into something completely different once I start,” she says.

Casting a wide net

Artifacts such as lab devices, ship models, and architectural plans are critical for understanding the Institute, past and present, says Deborah Douglas, the museum's director of collections and curator of science and technology.

“The museum was formed around its objects, and has been systematically accumulating those artifacts over the past 50 years,” Douglas says. “The aim is not simply to document the triumphs of MIT, but to tell the story of science, technology, architecture, and all of the endeavors that the people of the Institute have engaged with in the 19th, 20th, and 21st centuries.”

“It's an encyclopedic collection,” she adds. The treasure trove holds, for example, plans for America’s Cup-winning yachts, 100 years of architecture student thesis drawings (including those of Robert Taylor, the first trained Black architect in the United States), one of the first aluminum-framed bicycles, human-powered airplane parts, holograms (lots of holograms!), thoroughly offbeat lab instruments, folded-paper items, costumes, trophies, massive chunks of physics experiments and early computers, and remnants of MIT's famous hacks, such as a door from the firetruck that appeared on the Great Dome in 2006 to mark the fifth anniversary of 9/11.

Florencia Pierri, assistant curator for science and technology, heard about the vast size of the collection when she was hired. “But it's different,” she says, “to actually climb a ladder for the first time and survey the rows and rows and rows of 10-foot-tall shelves just chock full of stuff.”

Planning the process

Drawn by the scale of the challenge, Martinez joined the museum in 2019. Collaborating with museum registrar Katie Porter, she began quietly exploring the collection and sketching out plans for the move.

The next year, when the pandemic hit, MIT's lockdown policies drastically shrank the hours that anyone could work at the warehouse. “To be honest, Covid afforded me the opportunity to have that time exclusively just to plan and prepare for the projects,” she recalls.

As is common in museums, much of what MIT held was barely described in the museum's collection database. “Gloria insisted on a level of inventory review that will make the collection much more accessible online, so that it will become much easier for people on campus and around the world to understand what we have,” Douglas says.

By 2021, the museum had recruited three assistant curators who could bring their expertise to that job (and were starting to grasp just how much effort it would entail). Martinez also hired and trained a packing team of four people with a background in handling museum items.

She constantly revised plans and processes for the move. All of the museum's 20,000-or-so three-dimensional objects needed good photographs, so the team set up an ad hoc but highly effective photo station. Another issue was the sheer physicality of the objects, which might weigh 80 or even 150 pounds. Grappling with these all day might lead to injuries or other accidents, so she aimed to shorten the work sessions.

“Processing the endless stream of items can be draining, physically and intellectually,” says Jon Duval, assistant curator of architecture and design. “But it's fun,” he says. “It's a joy to be around people who get equally excited about museum work and about artifacts, and to learn things from everyone else who's there. Or to look at something that we have no idea about, and try to figure out what it might be.”

“It's so great to have a team,” says Elisabeth (“Libby”) Meier, assistant curator for the museum's Hart Nautical Collection. “Gloria is a wizard at scheduling and making sure materials are where they need to be. And working with the other curators on things that aren't in my collection is fascinating, because I generally don't know what's coming out of the box, and they generally do.”

Puzzling out objects

There's no lack of artifacts that call for quick detective work.

Among the surprises was something that the collection database described simply as a brick. “I noticed it because I tried to move it and it was a lot heavier than I thought it would be,” says Pierri. She discovered that the “brick” was part of a graphite rod created for the world's first human-made self-sustaining nuclear chain reaction, at the University of Chicago in 1942.

Another unexpected find was two empty leather trunks that were falling apart and seemed to be overdue for the trash. But the trunks belonged to Katharine McCormick, the first woman to earn an MIT science degree and the main source of funding for developing the first birth control pill. McCormick used the trunks to smuggle diaphragms from Europe to the United States in the 1920s, Pierri says.

An even stranger curiosity was a box of rusted cans — grocery store items employed in influential experiments with bar codes, says Duval.

Not every surprise during the inventory is so compelling, and not everything is kept. “Maybe we don't need quite so many toenail clippers from the contents of somebody's desk drawer,” Douglas says.

Connecting with the collection

With steady progress on an upgraded inventory, the packing team moved into high gear last November. Everything will be out of the current warehouse by December 2023, and out of the former museum site by July 2024.

Martinez keeps fine-tuning her plans as situations and timelines change. “It's been very fluid,” she says. “You can't be rigid in this profession. If you are, you really need to find another job.”

“For the last couple of years, we've been doing this nonstop, and it will require a lot more hard work and additional reinforcements,” says Martinez, who adds that she's just starting to see the light at the end of the tunnel.

“Gloria has done an extraordinary job,” says Douglas. “She's created a really intelligent approach to inventorying, but then also created a plan for packing all the material that lets us be much more efficient in our moving process.”

“Ultimately, the goal of all this moving stuff around is to make it even more accessible,” Douglas says. “This is material that you can use to teach with, do research with, stimulate academic and entrepreneurial endeavors, and to educate — whether yourself or the broader community. If we can help inspire people, if we can help educate people, or provide useful resources for the work that's going on today or in the future at MIT and beyond, then we will have accomplished our mission.”



from MIT News https://ift.tt/hr9WyT8