Technique could lead to cameras that capture all the gradations of color visible to the eye

Virtually any modern information-capture device — such as a camera, audio recorder, or telephone — has an analog-to-digital converter in it, a circuit that converts the fluctuating voltages of analog signals into strings of ones and zeroes.

Almost all commercial analog-to-digital converters (ADCs), however, have voltage limits. If an incoming signal exceeds that limit, the ADC either cuts it off or flatlines at the maximum voltage. This phenomenon is familiar as the pops and skips of a “clipped” audio signal or as “saturation” in digital images — when, for instance, a sky that looks blue to the naked eye shows up on-camera as a sheet of white.
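Here is a minimal sketch, in Python with illustrative values (not drawn from the paper), of what that clipping does: anything above the converter's voltage limit is flattened to the maximum value, and the information in the peaks is lost.

```python
# Minimal sketch of ADC clipping; all names and values are illustrative.
import numpy as np

def clipping_adc(signal, v_max=1.0, n_bits=12):
    """Quantize a signal, flatlining anything beyond the +/- v_max range."""
    clipped = np.clip(signal, -v_max, v_max)        # saturation at the limit
    step = 2 * v_max / (2 ** n_bits - 1)
    return np.round(clipped / step) * step          # uniform quantization

t = np.linspace(0, 1, 1000)
loud = 3.0 * np.sin(2 * np.pi * 5 * t)              # peaks far above v_max
digitized = clipping_adc(loud)
print(digitized.max())                              # stuck near 1.0: the peaks are lost
```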

Last week, at the International Conference on Sampling Theory and Applications, researchers from MIT and the Technical University of Munich presented a technique that they call unlimited sampling, which can accurately digitize signals whose voltage peaks are far beyond an ADC’s voltage limit.

The consequence could be cameras that capture all the gradations of color visible to the human eye, audio that doesn’t skip, and medical and environmental sensors that can handle both long periods of low activity and the sudden signal spikes that are often the events of interest.

The paper’s chief result, however, is theoretical: The researchers establish a lower bound on the rate at which an analog signal with wide voltage fluctuations should be measured, or “sampled,” in order to ensure that it can be accurately digitized. Their work thus extends one of the several seminal results from longtime MIT Professor Claude Shannon’s groundbreaking 1948 paper “A Mathematical Theory of Communication,” the so-called Nyquist-Shannon sampling theorem.
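For context, here is a small sketch of the classical result being extended: a band-limited signal sampled comfortably above twice its highest frequency can be recovered from its samples by sinc interpolation. This illustrates only the textbook Nyquist-Shannon theorem, not the new unlimited-sampling procedure, and the frequencies and sample counts below are arbitrary.

```python
# Whittaker-Shannon interpolation: reconstruct a band-limited tone from samples
# taken above the Nyquist rate. With finitely many samples the match is only
# approximate; in the infinite-sample limit it is exact.
import numpy as np

f_max = 10.0                      # highest frequency in the signal (Hz)
fs = 2.5 * f_max                  # sampling rate above 2 * f_max
T = 1.0 / fs

n = np.arange(50)                 # sample indices
samples = np.sin(2 * np.pi * 0.7 * f_max * n * T)   # a tone inside the band

def reconstruct(t, samples, T):
    k = np.arange(len(samples))
    return np.sum(samples * np.sinc((t - k * T) / T))

t0 = 17.3 * T                     # an off-grid time instant
print(reconstruct(t0, samples, T), np.sin(2 * np.pi * 0.7 * f_max * t0))
```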

Alternative to 3-D scanners that cost 200 times as much

Last year, a team of forensic dentists got authorization to perform a 3-D scan of the prized Tyrannosaurus rex skull at the Field Museum of Natural History in Chicago, in an effort to try to explain some strange holes in the jawbone.

Upon discovering that their high-resolution dental scanners couldn’t handle a jaw as big as a tyrannosaur’s, they contacted the Camera Culture group at MIT’s Media Lab, which had recently made headlines with a prototype system for producing high-resolution 3-D scans.

The prototype wasn’t ready for a job that big, however, so Camera Culture researchers used $150 in hardware and some free software to rig up a system that has since produced a 3-D scan of the entire five-foot-long T. rex skull, which a team of researchers — including dentists, anthropologists, veterinarians, and paleontologists — is using to analyze the holes.

The Media Lab researchers report their results in the latest issue of the journal PLOS ONE.

“A lot of people will be able to start using this,” says Anshuman Das, a research scientist at the Camera Culture group and first author on the paper. “That’s the message I want to send out to people who would generally be cut off from using technology — for example, paleontologists or museums that are on a very tight budget. There are so many other fields that could benefit from this.”

Das is joined on the paper by Ramesh Raskar, a professor of media arts and sciences at MIT, who directs the Camera Culture group, and by Denise Murmann and Kenneth Cohrn, the forensic dentists who launched the project.

Sleep studied nonintrusively at home using wireless signals

More than 50 million Americans suffer from sleep disorders, and diseases including Parkinson’s and Alzheimer’s can also disrupt sleep. Diagnosing and monitoring these conditions usually requires attaching electrodes and a variety of other sensors to patients, which can further disrupt their sleep.

To make it easier to diagnose and study sleep problems, researchers at MIT and Massachusetts General Hospital have devised a new way to monitor sleep stages without sensors attached to the body. Their device uses an advanced artificial intelligence algorithm to analyze the radio signals around the person and translate those measurements into sleep stages: light, deep, or rapid eye movement (REM).
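The article does not spell out the model, so the following is a hypothetical minimal sketch of the task's shape only: a tiny softmax classifier mapping a window of radio-reflection features to one of the three stages named above. The feature count, window length, and weights are placeholders, not the researchers' system.

```python
# Hypothetical sketch: map one window of RF-reflection features to a sleep stage.
import numpy as np

STAGES = ["light", "deep", "REM"]

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 64))          # placeholder weights; a real model is trained
b = np.zeros(3)

def classify_window(rf_features):
    """rf_features: 64 summary features from one 30-second window of radio signal."""
    logits = W @ rf_features + b
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return STAGES[int(np.argmax(probs))], probs

window = rng.normal(size=64)          # stand-in for real spectrogram features
stage, probs = classify_window(window)
print(stage, np.round(probs, 2))
```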

“Imagine if your Wi-Fi router knows when you are dreaming, and can monitor whether you are having enough deep sleep, which is necessary for memory consolidation,” says Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, who led the study. “Our vision is developing health sensors that will disappear into the background and capture physiological signals and important health metrics, without asking the user to change her behavior in any way.”

Katabi worked on the study with Matt Bianchi, chief of the Division of Sleep Medicine at MGH, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science and a member of the Institute for Data, Systems, and Society at MIT. Mingmin Zhao, an MIT graduate student, is the paper’s first author, and Shichao Yue, another MIT graduate student, is also a co-author.

Walking speed could predict health issues like cognitive decline and cardiac disease

We’ve long known that blood pressure, breathing, body temperature, and pulse provide an important window into the complexities of human health. But a growing body of research suggests that another vital sign – how fast you walk – could be a better predictor of health issues like cognitive decline, falls, and even certain cardiac or pulmonary diseases.

Unfortunately, it’s hard to accurately monitor walking speed in a way that’s both continuous and unobtrusive. Professor Dina Katabi’s group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has been working on the problem, and believes that the answer is to go wireless.

In a new paper, the team presents “WiGait,” a device that can measure the walking speed of multiple people with 95 to 99 percent accuracy using wireless signals.

About the size of a small painting, the device can be placed on the wall of a person’s house, and it emits roughly one-hundredth the radiation of a standard cellphone. It builds on Katabi’s previous work on WiTrack, which analyzes wireless signals reflected off people’s bodies to measure a range of behaviors, from breathing and falling to specific emotions.
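A rough sketch of the underlying measurement, assuming the radio system already yields per-frame position estimates (as WiTrack-style tracking does): gait speed is the distance covered during a walking bout divided by its duration. The frame rate and positions below are made up for illustration.

```python
# Estimate walking speed from a short sequence of (x, y) position estimates.
import numpy as np

frame_rate = 30.0                                   # frames per second (assumed)
positions = np.array([[0.00, 0.0], [0.04, 0.0],     # metres, one row per frame
                      [0.08, 0.0], [0.12, 0.0],
                      [0.16, 0.0], [0.20, 0.0]])

steps = np.diff(positions, axis=0)                  # per-frame displacement
distance = np.sum(np.linalg.norm(steps, axis=1))    # path length in metres
duration = (len(positions) - 1) / frame_rate        # seconds
print(f"walking speed: {distance / duration:.2f} m/s")   # ~1.2 m/s here
```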

“By using in-home sensors, we can see trends in how walking speed changes over longer periods of time,” says lead author and PhD student Chen-Yu Hsu. “This can provide insight into whether someone should adjust their health regimen, whether that’s doing physical therapy or altering their medications.”

WiGait is also 85 to 99 percent accurate at measuring a person’s stride length, which could allow researchers to better understand conditions like Parkinson’s disease that are characterized by reduced step size.

Hsu and Katabi developed WiGait with CSAIL PhD student Zachary Kabelac and master’s student Rumen Hristov, alongside undergraduate Yuchen Liu from the Hong Kong University of Science and Technology, and Assistant Professor Christine Liu from the Boston University School of Medicine. The team will present their paper in May at ACM’s CHI Conference on Human Factors in Computing Systems in Colorado.

How it works

Today, walking speed is measured by physical therapists or clinicians using a stopwatch. Wearables like Fitbit can only roughly estimate speed based on step count, and GPS-enabled smartphones are similarly inaccurate and can’t work indoors. Cameras are intrusive and can only monitor one room. VICON motion tracking is the only method that’s comparably accurate to WiGait, but it is not widely available enough to be practical for monitoring day-to-day health changes.

Peering inside neural networks trained on visual data

Neural networks, which learn to perform computational tasks by analyzing large sets of training data, are responsible for today’s best-performing artificial intelligence systems, from speech recognition systems, to automatic translators, to self-driving cars.

But neural nets are black boxes. Once they’ve been trained, even their designers rarely have any idea what they’re doing — what data elements they’re processing and how.

Two years ago, a team of computer-vision researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) described a method for peering into the black box of a neural net trained to identify visual scenes. The method provided some interesting insights, but it required data to be sent to human reviewers recruited through Amazon’s Mechanical Turk crowdsourcing service.

At this year’s Computer Vision and Pattern Recognition conference, CSAIL researchers will present a fully automated version of the same system. Where the previous paper reported the analysis of one type of neural network trained to perform one task, the new paper reports the analysis of four types of neural networks trained to perform more than 20 tasks, including recognizing scenes and objects, colorizing grayscale images, and solving puzzles. Some of the new networks are so large that analyzing any one of them would have been cost-prohibitive under the old method.

The researchers also conducted several sets of experiments on their networks that not only shed light on the nature of several computer-vision and computational-photography algorithms, but could also provide some evidence about the organization of the human brain.

Neural networks are so called because they loosely resemble the human nervous system, with large numbers of fairly simple but densely connected information-processing “nodes.” Like neurons, a neural net’s nodes receive information signals from their neighbors and then either “fire” — emitting their own signals — or don’t. And as with neurons, the strength of a node’s firing response can vary.

In both the new paper and the earlier one, the MIT researchers doctored neural networks trained to perform computer vision tasks so that they disclosed the strength with which individual nodes fired in response to different input images. Then they selected the 10 input images that provoked the strongest response from each node.
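A sketch of that general procedure, using a throwaway PyTorch network and random tensors in place of real images: a forward hook records how strongly each unit in a chosen layer fires for every input, and the 10 inputs that drive a unit hardest are kept. The researchers' actual models and instrumentation are not reproduced here.

```python
# Record per-unit activation strengths and pick the top-10 inputs for each unit.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
layer = model[2]                                    # the layer whose units we inspect

activations = []
def record(module, inputs, output):
    # average each unit's response over spatial positions -> (batch, units)
    activations.append(output.mean(dim=(2, 3)).detach())
handle = layer.register_forward_hook(record)

images = torch.randn(100, 3, 32, 32)                # stand-in for a real image set
with torch.no_grad():
    model(images)
handle.remove()

scores = torch.cat(activations)                     # (num_images, num_units)
top10 = scores.topk(10, dim=0).indices              # per unit: indices of its top images
print("inputs that most strongly activate unit 0:", top10[:, 0].tolist())
```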

A significant MIT investment in advanced manufacturing innovation

These are not your grandmother’s fibers and textiles. These are tomorrow’s functional fabrics — designed and prototyped in Cambridge, Massachusetts, and manufactured across a network of U.S. partners. This is the vision of the new headquarters for the Manufacturing USA institute called Advanced Functional Fabrics of America (AFFOA) that opened Monday at 12 Emily Street, steps away from the MIT campus.

AFFOA headquarters represents a significant MIT investment in advanced manufacturing innovation. This facility includes a Fabric Discovery Center that provides end-to-end prototyping from fiber design to system integration of new textile-based products, and will be used for education and workforce development in the Cambridge and greater Boston community. AFFOA headquarters also includes startup incubation space for companies spun out from MIT and other partners who are innovating advanced fabrics and fibers for applications ranging from apparel and consumer electronics to automotive and medical devices.

MIT was a founding member of the AFFOA team that partnered with the Department of Defense in April 2016 to launch this new institute as a public-private partnership through an independent nonprofit also founded by MIT. AFFOA’s chief executive officer is Yoel Fink. Prior to his current role, Fink led the AFFOA proposal last year as professor of materials science and engineering and director of the Research Laboratory for Electronics at MIT, with his vision to create a “fabric revolution.” That revolution under Fink’s leadership was grounded in new fiber materials and textile manufacturing processes for fabrics that see, hear, sense, communicate, store and convert energy, and monitor health.

From the perspectives of research, education, and entrepreneurship, MIT engagement in AFFOA draws from many strengths. These include the multifunctional drawn fibers developed by Fink and others, which embed electronic capabilities within fibers that combine multiple materials and function as devices. That fiber concept, developed at MIT, has been applied to key challenges in the defense sector through MIT’s Institute for Soldier Nanotechnology, commercialized through a startup called OmniGuide (now OmniGuide Surgical, which makes laser surgery devices), and extended to several new areas, including neural probes by Polina Anikeeva, MIT associate professor of materials science and engineering. Beyond these diverse uses of fiber devices, MIT faculty including Greg Rutledge, the Lammot du Pont Professor of Chemical Engineering, have also led innovation in predictive modeling and design of polymer nanofibers, fiber processing and characterization, and self-assembly of woven and nonwoven filters and textiles for diverse applications and industries.

Rutledge coordinates MIT campus engagement in the AFFOA Institute, and notes that “MIT has a range of research and teaching talent that impacts manufacturing of fiber and textile-based products, from designing the fiber to leading the factories of the future. Many of our faculty also have longstanding collaborations with partners in defense and industry on these projects, including with Lincoln Laboratory and the Army’s Natick Soldier Research Development and Engineering Center, so MIT membership in AFFOA is an opportunity to strengthen and grow those networks.”

Ransomware targeting the Microsoft Windows operating system

The ransomware program WannaCry, launched on May 12, targets the Microsoft Windows operating system. While this malware has infected over 200,000 computers worldwide, the attack affected only around 100 of the 50,000 devices on the MIT network.

This limited impact is due to the many security services provided to the community by MIT Information Systems and Technology (IS&T).

“MIT values an open network to foster research, innovation and collaborative learning,” says IS&T Associate Vice President Mark Silis. “We continuously strive to balance potential security risks with the benefits of our open network environment by offering a number of security services to our community, including Sophos anti-virus, CrowdStrike anti-malware, and CrashPlan backup.

“IS&T staff are working with faculty, staff, and students to secure their devices and address any remaining issues related to WannaCry. In the weeks ahead, our department will continue to educate and advise the MIT community.”

A post on the Cisco Talos blog provides in-depth technical details about the WannaCry ransomware attack.

Preventive measures

IS&T strongly recommends that community members take this opportunity to make sure their Windows machines are fully patched, especially with the MS17-010 Security Update. Microsoft has even released patches for Windows XP, Windows 8, and Windows Server 2003, which are no longer officially supported.

In addition, IS&T recommends installing Sophos and CrowdStrike. These programs successfully block the execution of WannaCry ransomware on machines where they have been installed. A third program, CrashPlan, is also recommended. This cloud-based offering, which runs continuously in the background, securely encrypts and backs up data on computers. Should files be lost due to ransomware or a computer breakdown, restoring data is straightforward.

A device that detects leaky pipes also won top prizes

The big winner at this year’s MIT $100K Entrepreneurship Competition aims to drastically accelerate artificial-intelligence computations — to light speed.

Devices such as Apple’s Siri and Amazon’s Alexa, as well as self-driving cars, all rely on artificial intelligence algorithms. But the chips powering these innovations, which use electrical signals to do computations, could be much faster and more efficient.

That’s according to MIT team Lightmatter, which took home the $100,000 Robert P. Goldberg grand prize from last night’s competition for developing fully optical chips that compute using light, meaning they work many times faster — using much less energy — than traditional electronics-based chips. These new chips could be used to power faster, more efficient, and more advanced artificial-intelligence devices.

“Artificial intelligence has affected or will affect all industries,” said Nick Harris, an MIT PhD student, during the team’s winning pitch to a capacity crowd in the Kresge Auditorium. “We’re bringing the next step of artificial intelligence to light.”

Two other winners took home cash prizes from the annual competition, now in its 28th year. Winning a $5,000 Audience Choice award was change:WATER Labs, a team of MIT researchers and others making toilets that can condense waste into smaller bulk for easier transport in areas where people live without indoor plumbing. PipeGuard, an MIT team developing a sensor that can be sent through water pipes to detect leaks, won a $10,000 Booz Allen Hamilton data prize.

The competition is run by MIT students and supported by the Martin Trust Center for MIT Entrepreneurship and the MIT Sloan School of Management.

Computing at light speed

Founded out of MIT, Lightmatter has developed a new optical chip architecture that could in principle speed up artificial-intelligence computations by orders of magnitude.

New generation of computers for coming superstorm of data

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature, by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place the two side by side, even as transistors keep shrinking (the miniaturization trend known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, yielding the most complex nanoelectronic system ever built with emerging nanotechnologies.

Center for advanced fibers and fabrics opens headquarters

Just over a year after its funding award, a new center for the development and commercialization of advanced fabrics is officially opening its headquarters today in Cambridge, Massachusetts, and will be unveiling the first two advanced fabric products to be commercialized from the center’s work.

Advanced Functional Fabrics of America (AFFOA) is a public-private partnership, part of Manufacturing USA, that is working to develop and introduce U.S.-made high-tech fabrics that provide services such as health monitoring, communications, and dynamic design. In the process, AFFOA aims to facilitate economic growth through U.S. fiber and fabric manufacturing.

AFFOA’s national headquarters will open today, with an event featuring Under Secretary of Defense for Acquisition, Technology, and Logistics James MacStravic, U.S. Senator Elizabeth Warren, U.S. Rep. Niki Tsongas, U.S. Rep. Joe Kennedy, Massachusetts Governor Charlie Baker, New Balance CEO Robert DeMartini, MIT President L. Rafael Reif, and AFFOA CEO Yoel Fink. Sample versions of one of the center’s new products, a programmable backpack made of advanced fabric produced in North and South Carolina, will be distributed to attendees at the opening.

AFFOA was created last year with over $300 million in funding from the U.S. and state governments and from academic and corporate partners, to help foster the creation of revolutionary new developments in fabric and fiber-based products. The institute seeks to create “fabrics that see, hear, sense, communicate, store and convert energy, regulate temperature, monitor health, and change color,” says Fink, a professor of materials science and engineering at MIT. In short, he says, AFFOA aims to catalyze the creation of a whole new industry that envisions “fabrics as the new software.”

Under Fink’s leadership, the independent, nonprofit organization has already created a network of more than 100 partners, including much of the fabric manufacturing base in the U.S. as well as startups and universities spread across 28 states.

“AFFOA’s promise reflects the very best of MIT: It’s bold, innovative, and daring,” says MIT President L. Rafael Reif. “It leverages and drives technology to solve complex problems, in service to society. And it draws its strength from a rich network of collaborators — across governments, universities, and industries. It has been inspiring to watch the partnership’s development this past year, and it will be exciting to witness the new frontiers and opportunities it will open.”

A “Moore’s Law” for fabrics

While products that attempt to incorporate electronic functions into fabrics have been conceptualized, most of these have involved attaching various types of patches to existing fabrics. The kinds of fabrics and fibers envisioned by — and already starting to emerge from — AFFOA will have these functions embedded within the fibers themselves.

Referring to the principle that describes the very rapid development of computer chip technology over the last few decades, Fink says AFFOA is dedicated to a “Moore’s Law for fibers” — that is, ensuring that there will be a recurring growth in fiber technology in this newly developing field.

A key element in the center’s approach is to develop the technology infrastructure for advanced, internet-connected fabric products that enable new business models for the fabric industry. With highly functional fabric systems, the ability to offer consumers “fabrics as a service” creates value in the textile industry — moving it from producing goods in a price-competitive market, to practicing recurring revenue models with rapid innovation cycles that are now characteristic of high-margin technology business sectors.

The extremely high resolution of 3-D printers

Today’s 3-D printers have a resolution of 600 dots per inch, which means that they could pack a billion tiny cubes of different materials into a cube measuring just 1.67 inches on a side.

Such precise control of printed objects’ microstructure gives designers commensurate control of the objects’ physical properties — such as their density or strength, or the way they deform when subjected to stresses. But evaluating the physical effects of every possible combination of even just two materials, for an object consisting of tens of billions of cubes, would be prohibitively time consuming.
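The arithmetic behind those figures is easy to check:

```python
# Quick check of the numbers above.
resolution = 600                      # printable dots per inch, per axis
side = 1.67                           # inches
voxels = (resolution * side) ** 3
print(f"{voxels:.2e}")                # ~1.0e+09 tiny cubes in a 1.67-inch cube

# With just two materials, an object of tens of billions of cubes already has
# 2 ** (tens of billions) possible material assignments, far too many to
# simulate one at a time.
```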

So researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new design system that catalogues the physical properties of a huge number of tiny cube clusters. These clusters can then serve as building blocks for larger printable objects. The system thus takes advantage of physical measurements at the microscopic scale, while enabling computationally efficient evaluation of macroscopic designs.

“Conventionally, people design 3-D prints manually,” says Bo Zhu, a postdoc at CSAIL and first author on the paper. “But when you want to have some higher-level goal — for example, you want to design a chair with maximum stiffness or design some functional soft [robotic] gripper — then intuition or experience is maybe not enough. Topology optimization, which is the focus of our paper, incorporates the physics and simulation in the design loop. The problem for current topology optimization is that there is a gap between the hardware capabilities and the software. Our algorithm fills that gap.”
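To make the building-block idea concrete, here is a toy illustration, not the paper's algorithm: cluster properties are catalogued once (the values below are invented), and the design search then chooses among whole clusters rather than individual voxels.

```python
# Toy stand-in for cluster-based design: pick catalogued clusters whose
# measured properties best match a per-cell target. Catalogue values are made up.
cluster_catalog = {
    "A": {"stiffness": 1.0, "density": 0.9},   # e.g. mostly rigid material
    "B": {"stiffness": 0.4, "density": 0.5},   # mixed
    "C": {"stiffness": 0.1, "density": 0.2},   # mostly soft material
}

def pick_clusters(target_stiffness, n_cells):
    """Greedy stand-in for topology optimization: fill every cell with the
    catalogued cluster whose stiffness is closest to the per-cell target."""
    best = min(cluster_catalog,
               key=lambda c: abs(cluster_catalog[c]["stiffness"] - target_stiffness))
    return [best] * n_cells

print(pick_clusters(target_stiffness=0.35, n_cells=8))   # -> ['B', 'B', ...]
```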

Reducing neural networks’ power consumption could help make the systems portable

In recent years, the best-performing artificial-intelligence systems — in areas such as autonomous driving, speech recognition, computer vision, and automatic translation — have come courtesy of software systems known as neural networks.

But neural networks take up a lot of memory and consume a lot of power, so they usually run on servers in the cloud, which receive data from desktop or mobile devices and then send back their analyses.

Last year, MIT associate professor of electrical engineering and computer science Vivienne Sze and colleagues unveiled a new, energy-efficient computer chip optimized for neural networks, which could enable powerful artificial-intelligence systems to run locally on mobile devices.

Now, Sze and her colleagues have approached the same problem from the opposite direction, with a battery of techniques for designing more energy-efficient neural networks. First, they developed an analytic method that can determine how much power a neural network will consume when run on a particular type of hardware. Then they used the method to evaluate new techniques for paring down neural networks so that they’ll run more efficiently on handheld devices.

The researchers describe the work in a paper they’re presenting next week at the Computer Vision and Pattern Recognition Conference. In the paper, they report that the methods offered as much as a 73 percent reduction in power consumption over the standard implementation of neural networks, and as much as a 43 percent reduction over the best previous method for paring the networks down.

Energy evaluator

Loosely based on the anatomy of the brain, neural networks consist of thousands or even millions of simple but densely interconnected information-processing nodes, usually organized into layers. Different types of networks vary according to their number of layers, the number of connections between the nodes, and the number of nodes in each layer.

The connections between nodes have “weights” associated with them, which determine how much a given node’s output will contribute to the next node’s computation. During training, in which the network is presented with examples of the computation it’s learning to perform, those weights are continually readjusted, until the output of the network’s last layer consistently corresponds with the result of the computation.
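A tiny concrete example of that weighted-connection picture: each node computes a weighted sum of its inputs and "fires" through a nonlinearity. The weights below are arbitrary; training is what would adjust them.

```python
# One small layer: each node weights its inputs, adds a bias, and "fires" (ReLU).
import numpy as np

def layer(inputs, weights, biases):
    return np.maximum(0.0, weights @ inputs + biases)

x = np.array([0.2, 0.8, 0.5])             # activations from the previous layer
W = np.array([[0.1, -0.4, 0.7],           # one row of weights per node
              [0.9,  0.3, -0.2]])
b = np.array([0.05, -0.1])

print(layer(x, W, b))                     # output of a two-node layer
```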

“The first thing we did was develop an energy-modeling tool that accounts for data movement, transactions, and data flow,” Sze says. “If you give it a network architecture and the value of its weights, it will tell you how much energy this neural network will take. One of the questions that people had is ‘Is it more energy efficient to have a shallow network and more weights or a deeper network with fewer weights?’ This tool gives us better intuition as to where the energy is going, so that an algorithm designer could have a better understanding and use this as feedback. The second thing we did is that, now that we know where the energy is actually going, we started to use this model to drive our design of energy-efficient neural networks.”
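The group's tool and measured hardware costs are not given in this article, but the kind of per-layer accounting Sze describes can be sketched as follows, with placeholder per-operation energy costs standing in for real numbers.

```python
# Sketch of per-layer energy accounting: tally arithmetic and data-movement
# energy given layer shapes and weight counts. Costs below are placeholders.
LAYERS = [
    {"name": "conv1", "macs": 1.2e8, "weights": 3.5e4, "activations": 8.0e5},
    {"name": "fc1",   "macs": 4.0e6, "weights": 4.0e6, "activations": 4.0e3},
]

E_MAC = 1.0          # energy units per multiply-accumulate (placeholder)
E_MOVE = 6.0         # energy units per value moved to/from memory (placeholder)

def estimate_energy(layers):
    total = 0.0
    for layer in layers:
        compute = layer["macs"] * E_MAC
        data = (layer["weights"] + layer["activations"]) * E_MOVE
        total += compute + data
        print(f'{layer["name"]}: compute={compute:.2e}  data movement={data:.2e}')
    return total

print(f"total: {estimate_energy(LAYERS):.2e}")
```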