
Ransomware targeting Microsoft Windows has limited impact at MIT

The ransomware program WannaCry, launched on May 12, targets the Microsoft Windows operating system. While this malware has infected over 200,000 computers worldwide, the attack affected around 100 computers across the 50,000 devices on the MIT network.

This limited impact is due to the many security services provided to the community by MIT Information Systems and Technology (IS&T).

“MIT values an open network to foster research, innovation and collaborative learning,” says IS&T Associate Vice President Mark Silis. “We continuously strive to balance potential security risks with the benefits of our open network environment by offering a number of security services to our community, including Sophos anti-virus, CrowdStrike anti-malware, and CrashPlan backup.

“IS&T staff are working with faculty, staff, and students to secure their devices and address any remaining issues related to WannaCry. In the weeks ahead, our department will continue to educate and advise the MIT community.”

A post on the Cisco Talos blog provides in-depth technical details about the WannaCry ransomware attack.

Preventive measures

IS&T strongly recommends that community members take this opportunity to make sure their Windows machines are fully patched, especially with the MS17-010 Security Update. Microsoft has even released patches for Windows XP, Windows 8, and Windows Server 2003, which are no longer officially supported.
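As a rough illustration only, an administrator could script a check for the relevant updates along these lines (a minimal sketch; the KB identifiers shown are examples and must be verified against Microsoft’s MS17-010 advisory for each Windows version):

```python
# Minimal sketch: list installed Windows hotfixes and look for ones associated
# with MS17-010. The KB numbers below are illustrative examples and must be
# checked against Microsoft's advisory for the specific Windows version in use.
import subprocess

MS17_010_KBS = {"KB4012212", "KB4012213", "KB4012214", "KB4012598"}  # assumed examples

def installed_hotfixes():
    # "wmic qfe get HotFixID" prints one installed KB identifier per line on Windows.
    out = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in out.splitlines() if line.strip().startswith("KB")}

if __name__ == "__main__":
    found = installed_hotfixes() & MS17_010_KBS
    if found:
        print("MS17-010-related update(s) present:", ", ".join(sorted(found)))
    else:
        print("No MS17-010-related update found; patch this machine.")
```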

In addition, IS&T recommends installing Sophos and CrowdStrike. These programs successfully block the execution of WannaCry ransomware on machines where they have been installed. A third program, CrashPlan, is also recommended. This cloud-based offering, which runs continuously in the background, securely encrypts and backs up data on computers. Should files be lost due to ransomware or a computer breakdown, restoring data is straightforward.

Startup developing light-powered chips for artificial intelligence takes the grand prize; a device that detects leaky pipes also wins an award

The big winner at this year’s MIT $100K Entrepreneurship Competition aims to drastically accelerate artificial-intelligence computations — to light speed.

Devices such as Apple’s Siri and Amazon’s Alexa, as well as self-driving cars, all rely on artificial intelligence algorithms. But the chips powering these innovations, which use electrical signals to do computations, could be much faster and more efficient.

That’s according to MIT team Lightmatter, which took home the $100,000 Robert P. Goldberg grand prize from last night’s competition for developing fully optical chips that compute using light, meaning they work many times faster — using much less energy — than traditional electronics-based chips. These new chips could be used to power faster, more efficient, and more advanced artificial-intelligence devices.

“Artificial intelligence has affected or will affect all industries,” said Nick Harris, an MIT PhD student, during the team’s winning pitch to a capacity crowd in the Kresge Auditorium. “We’re bringing the next step of artificial intelligence to light.”

Two other winners took home cash prizes from the annual competition, now in its 28th year. Winning a $5,000 Audience Choice award was change:WATER Labs, a team of MIT researchers and others making toilets that can condense waste into smaller bulk for easier transport in areas where people live without indoor plumbing. PipeGuard, an MIT team developing a sensor that can be sent through water pipes to detect leaks, won a $10,000 Booz Allen Hamilton data prize.

The competition is run by MIT students and supported by the Martin Trust Center for MIT Entrepreneurship and the MIT Sloan School of Management.

Computing at light speed

Founded out of MIT, Lightmatter has developed a new optical chip architecture that could in principle speed up artificial-intelligence computations by orders of magnitude.

New generation of computers for coming superstorm of data

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature, by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between chips creates a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place computing and memory side by side, even as transistors have continued to shrink (the trend known as Moore’s Law).
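A back-of-the-envelope calculation makes the imbalance concrete (all numbers below are assumed round figures for illustration, not measurements from the study):

```python
# Rough arithmetic sketch of the memory "bottleneck": time spent moving data
# between chips vs. time spent computing on it. All numbers are assumed for
# illustration, not figures from the Nature paper.
data_bytes = 1e9            # 1 GB of data to analyze
bus_bandwidth = 25e9        # assumed chip-to-chip bandwidth: 25 GB/s
compute_rate = 1e12         # assumed on-chip throughput: 1e12 ops/s
ops_per_byte = 2            # assumed work done per byte fetched

transfer_time = data_bytes / bus_bandwidth
compute_time = data_bytes * ops_per_byte / compute_rate

print(f"moving data: {transfer_time * 1e3:.1f} ms")   # ~40 ms
print(f"computing:   {compute_time * 1e3:.1f} ms")    # ~2 ms
# The processor sits idle most of the time waiting on memory, which is why
# integrating memory and logic on a single chip is attractive.
```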

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate they sustained for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene rolled into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making it the most complex nanoelectronic system yet built with emerging nanotechnologies.

Advanced fibers and fabrics institute opens headquarters

Just over a year after its funding award, a new center for the development and commercialization of advanced fabrics is officially opening its headquarters today in Cambridge, Massachusetts, and will be unveiling the first two advanced fabric products to be commercialized from the center’s work.

Advanced Functional Fabrics of America (AFFOA) is a public-private partnership, part of Manufacturing USA, that is working to develop and introduce U.S.-made high-tech fabrics that provide services such as health monitoring, communications, and dynamic design. In the process, AFFOA aims to facilitate economic growth through U.S. fiber and fabric manufacturing.

AFFOA’s national headquarters will open today, with an event featuring Under Secretary of Defense for Acquisition, Technology, and Logistics James MacStravic, U.S. Senator Elizabeth Warren, U.S. Rep. Niki Tsongas, U.S. Rep. Joe Kennedy, Massachusetts Governor Charlie Baker, New Balance CEO Robert DeMartini, MIT President L. Rafael Reif, and AFFOA CEO Yoel Fink. Sample versions of one of the center’s new products, a programmable backpack made of advanced fabric produced in North and South Carolina, will be distributed to attendees at the opening.

AFFOA was created last year with over $300 million in funding from the U.S. and state governments and from academic and corporate partners, to help foster the creation of revolutionary new developments in fabric and fiber-based products. The institute seeks to create “fabrics that see, hear, sense, communicate, store and convert energy, regulate temperature, monitor health, and change color,” says Fink, a professor of materials science and engineering at MIT. In short, he says, AFFOA aims to catalyze the creation of a whole new industry that envisions “fabrics as the new software.”

Under Fink’s leadership, the independent, nonprofit organization has already created a network of more than 100 partners, including much of the fabric manufacturing base in the U.S. as well as startups and universities spread across 28 states.

“AFFOA’s promise reflects the very best of MIT: It’s bold, innovative, and daring,” says MIT President L. Rafael Reif. “It leverages and drives technology to solve complex problems, in service to society. And it draws its strength from a rich network of collaborators — across governments, universities, and industries. It has been inspiring to watch the partnership’s development this past year, and it will be exciting to witness the new frontiers and opportunities it will open.”

A “Moore’s Law” for fabrics

While products that attempt to incorporate electronic functions into fabrics have been conceptualized, most of these have involved attaching various types of patches to existing fabrics. The kinds of fabrics and fibers envisioned by — and already starting to emerge from — AFFOA will have these functions embedded within the fibers themselves.

Referring to the principle that describes the very rapid development of computer chip technology over the last few decades, Fink says AFFOA is dedicated to a “Moore’s Law for fibers” — that is, ensuring that there will be a recurring growth in fiber technology in this newly developing field.

A key element in the center’s approach is to develop the technology infrastructure for advanced, internet-connected fabric products that enable new business models for the fabric industry. With highly functional fabric systems, the ability to offer consumers “fabrics as a service” creates value in the textile industry — moving it from producing goods in a price-competitive market, to practicing recurring revenue models with rapid innovation cycles that are now characteristic of high-margin technology business sectors.

Technique could lead to cameras that capture any light intensity and audio that doesn’t skip

Virtually any modern information-capture device — such as a camera, audio recorder, or telephone — has an analog-to-digital converter in it, a circuit that converts the fluctuating voltages of analog signals into strings of ones and zeroes.

Almost all commercial analog-to-digital converters (ADCs), however, have voltage limits. If an incoming signal exceeds that limit, the ADC either cuts it off or flatlines at the maximum voltage. This phenomenon is familiar as the pops and skips of a “clipped” audio signal or as “saturation” in digital images — when, for instance, a sky that looks blue to the naked eye shows up on-camera as a sheet of white.
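Clipping is simple to reproduce numerically (a minimal sketch, not the researchers’ code):

```python
import numpy as np

# Minimal sketch of ADC clipping: a sine wave whose peaks exceed the
# converter's voltage limit simply flatlines at that limit.
t = np.linspace(0, 1, 1000)
signal = 1.5 * np.sin(2 * np.pi * 5 * t)   # peaks at +/-1.5 V
adc_limit = 1.0                            # converter saturates at +/-1 V

clipped = np.clip(signal, -adc_limit, adc_limit)
print("true peak:", signal.max(), "recorded peak:", clipped.max())
# Everything above 1 V is lost, the digital analogue of a blown-out sky
# in a photo or a popping audio track.
```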

Last week, at the International Conference on Sampling Theory and Applications, researchers from MIT and the Technical University of Munich presented a technique that they call unlimited sampling, which can accurately digitize signals whose voltage peaks are far beyond an ADC’s voltage limit.

The consequence could be cameras that capture all the gradations of color visible to the human eye, audio that doesn’t skip, and medical and environmental sensors that can handle both long periods of low activity and the sudden signal spikes that are often the events of interest.

The paper’s chief result, however, is theoretical: The researchers establish a lower bound on the rate at which an analog signal with wide voltage fluctuations should be measured, or “sampled,” in order to ensure that it can be accurately digitized. Their work thus extends one of several seminal results from longtime MIT Professor Claude Shannon’s groundbreaking 1948 paper “A Mathematical Theory of Communication,” the so-called Nyquist-Shannon sampling theorem.
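The approach builds on converters that “fold” out-of-range voltages back into the operating window (so-called self-reset or modulo ADCs), with the folding undone in software. A simplified sketch of that unwrapping step, assuming the signal is sampled densely enough that true sample-to-sample changes stay within the folding range, is shown below; the paper’s actual reconstruction algorithm and guarantees are more general:

```python
import numpy as np

# Simplified sketch of the modulo ("folding") idea behind unlimited sampling:
# the ADC wraps any voltage back into [-lam, lam), and the true signal is
# recovered by unwrapping jumps between consecutive samples. This assumes true
# sample-to-sample changes stay below lam; the published algorithm is more general.
lam = 1.0                                   # ADC folding range: [-1, 1)
t = np.linspace(0, 1, 2000)
x = 4.0 * np.sin(2 * np.pi * 3 * t)         # peaks far beyond the ADC range

folded = np.mod(x + lam, 2 * lam) - lam     # what the modulo ADC records

# Unwrap: whenever a difference jumps by a multiple of 2*lam, a fold occurred.
d = np.diff(folded)
d -= 2 * lam * np.round(d / (2 * lam))      # remove the fold-induced jumps
recovered = folded[0] + np.concatenate(([0.0], np.cumsum(d)))

print("max reconstruction error:", np.max(np.abs(recovered - x)))
```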

Harnessing the extremely high resolution of 3-D printers

Today’s 3-D printers have a resolution of 600 dots per inch, which means that they could pack a billion tiny cubes of different materials into a cube measuring just 1.67 inches on a side.
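The arithmetic behind that figure is straightforward (a quick check using only the numbers above):

```python
# Quick arithmetic behind the 600 dpi figure: a billion cubes is 1,000 cubes
# on a side, and at 600 cubes per inch that is a cube about 1.67 inches wide.
dpi = 600
cubes = 1_000_000_000
side_cubes = round(cubes ** (1 / 3))        # 1,000 cubes per edge
side_inches = side_cubes / dpi              # ~1.67 inches per edge
print(f"{side_inches:.2f} inches on a side "
      f"({side_inches ** 3:.1f} cubic inches in total)")
```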

Such precise control of printed objects’ microstructure gives designers commensurate control of the objects’ physical properties — such as their density or strength, or the way they deform when subjected to stresses. But evaluating the physical effects of every possible combination of even just two materials, for an object consisting of tens of billions of cubes, would be prohibitively time consuming.

So researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new design system that catalogues the physical properties of a huge number of tiny cube clusters. These clusters can then serve as building blocks for larger printable objects. The system thus takes advantage of physical measurements at the microscopic scale, while enabling computationally efficient evaluation of macroscopic designs.

“Conventionally, people design 3-D prints manually,” says Bo Zhu, a postdoc at CSAIL and first author on the paper. “But when you want to have some higher-level goal — for example, you want to design a chair with maximum stiffness or design some functional soft [robotic] gripper — then intuition or experience is maybe not enough. Topology optimization, which is the focus of our paper, incorporates the physics and simulation in the design loop. The problem for current topology optimization is that there is a gap between the hardware capabilities and the software. Our algorithm fills that gap.”
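To make the building-block idea concrete, one can imagine a small catalogue that maps precomputed microstructure clusters to their measured bulk properties, plus a lookup that assigns each region of a design the closest match (a purely hypothetical sketch; the names and property values below are invented for illustration and are not CSAIL’s actual catalogue or optimizer):

```python
# Hypothetical sketch of the building-block idea: a tiny "catalogue" of
# precomputed microstructure clusters with their bulk properties, and a lookup
# that picks the cluster whose stiffness best matches a target. All names and
# values are invented for illustration.
CATALOGUE = {
    "cluster_A": {"youngs_modulus_mpa": 120.0, "density": 0.35},
    "cluster_B": {"youngs_modulus_mpa": 480.0, "density": 0.60},
    "cluster_C": {"youngs_modulus_mpa": 950.0, "density": 0.85},
}

def pick_cluster(target_modulus_mpa: float) -> str:
    """Return the catalogued cluster whose stiffness best matches the target."""
    return min(
        CATALOGUE,
        key=lambda name: abs(CATALOGUE[name]["youngs_modulus_mpa"] - target_modulus_mpa),
    )

# e.g. a stiff seat region vs. a compliant gripper region
print(pick_cluster(900.0))   # cluster_C
print(pick_cluster(200.0))   # cluster_A
```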

Reducing neural networks’ power consumption could help make the systems portable

In recent years, the best-performing artificial-intelligence systems — in areas such as autonomous driving, speech recognition, computer vision, and automatic translation — have come courtesy of software systems known as neural networks.

But neural networks take up a lot of memory and consume a lot of power, so they usually run on servers in the cloud, which receive data from desktop or mobile devices and then send back their analyses.

Last year, MIT associate professor of electrical engineering and computer science Vivienne Sze and colleagues unveiled a new, energy-efficient computer chip optimized for neural networks, which could enable powerful artificial-intelligence systems to run locally on mobile devices.

Now, Sze and her colleagues have approached the same problem from the opposite direction, with a battery of techniques for designing more energy-efficient neural networks. First, they developed an analytic method that can determine how much power a neural network will consume when run on a particular type of hardware. Then they used the method to evaluate new techniques for paring down neural networks so that they’ll run more efficiently on handheld devices.

The researchers describe the work in a paper they’re presenting next week at the Computer Vision and Pattern Recognition Conference. In the paper, they report that the methods offered as much as a 73 percent reduction in power consumption over the standard implementation of neural networks, and as much as a 43 percent reduction over the best previous method for paring the networks down.

Energy evaluator

Loosely based on the anatomy of the brain, neural networks consist of thousands or even millions of simple but densely interconnected information-processing nodes, usually organized into layers. Different types of networks vary according to their number of layers, the number of connections between the nodes, and the number of nodes in each layer.

The connections between nodes have “weights” associated with them, which determine how much a given node’s output will contribute to the next node’s computation. During training, in which the network is presented with examples of the computation it’s learning to perform, those weights are continually readjusted, until the output of the network’s last layer consistently corresponds with the result of the computation.
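In code, the weighted contribution each node makes to the next layer reduces to a matrix multiplication followed by a nonlinearity (a minimal sketch of one fully connected layer, not the specific networks studied in the paper):

```python
import numpy as np

# Minimal sketch of one fully connected layer: each output node sums the
# previous layer's outputs, scaled by the learned weights, plus a bias.
rng = np.random.default_rng(0)
inputs = rng.standard_normal(4)            # outputs of the previous layer
weights = rng.standard_normal((4, 3))      # one column of weights per output node
biases = np.zeros(3)

outputs = np.maximum(0.0, inputs @ weights + biases)   # ReLU activation
print(outputs)
```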

“The first thing we did was develop an energy-modeling tool that accounts for data movement, transactions, and data flow,” Sze says. “If you give it a network architecture and the value of its weights, it will tell you how much energy this neural network will take. One of the questions that people had is ‘Is it more energy efficient to have a shallow network and more weights or a deeper network with fewer weights?’ This tool gives us better intuition as to where the energy is going, so that an algorithm designer could have a better understanding and use this as feedback. The second thing we did is that, now that we know where the energy is actually going, we started to use this model to drive our design of energy-efficient neural networks.”
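A toy estimator in the spirit of such a tool might simply count multiply-accumulate operations and data movement per layer and weight them by per-operation energy costs (the cost constants below are placeholders, not values from the MIT model):

```python
# Toy energy estimator in the spirit of the tool described above: count
# multiply-accumulate (MAC) operations and data moved for each fully connected
# layer, then weight them by assumed per-operation energy costs. The constants
# are placeholders, not values from the MIT energy model.
LAYERS = [(784, 300), (300, 100), (100, 10)]   # (inputs, outputs) per layer

E_MAC = 1.0        # assumed energy per multiply-accumulate (arbitrary units)
E_MOVE = 5.0       # assumed energy per weight or activation fetched from memory

def estimate_energy(layers):
    total = 0.0
    for n_in, n_out in layers:
        macs = n_in * n_out                      # one MAC per weight
        moved = n_in * n_out + n_in + n_out      # weights plus activations
        total += macs * E_MAC + moved * E_MOVE
    return total

print(f"estimated energy: {estimate_energy(LAYERS):.0f} units")
```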

An alternative to 3-D scanners that cost 200 times as much

Last year, a team of forensic dentists got authorization to perform a 3-D scan of the prized Tyrannosaurus rex skull at the Field Museum of Natural History in Chicago, in an effort to try to explain some strange holes in the jawbone.

Upon discovering that their high-resolution dental scanners couldn’t handle a jaw as big as a tyrannosaur’s, they contacted the Camera Culture group at MIT’s Media Lab, which had recently made headlines with a prototype system for producing high-resolution 3-D scans.

The prototype wasn’t ready for a job that big, however, so Camera Culture researchers used $150 in hardware and some free software to rig up a system that has since produced a 3-D scan of the entire five-foot-long T. rex skull, which a team of researchers — including dentists, anthropologists, veterinarians, and paleontologists — is using to analyze the holes.

The Media Lab researchers report their results in the latest issue of the journal PLOS ONE.

“A lot of people will be able to start using this,” says Anshuman Das, a research scientist at the Camera Culture group and first author on the paper. “That’s the message I want to send out to people who would generally be cut off from using technology — for example, paleontologists or museums that are on a very tight budget. There are so many other fields that could benefit from this.”

Das is joined on the paper by Ramesh Raskar, a professor of media arts and sciences at MIT, who directs the Camera Culture group, and by Denise Murmann and Kenneth Cohrn, the forensic dentists who launched the project.

Why 3-D movies haven’t made the leap to our homes just yet

While 3-D movies continue to be popular in theaters, they haven’t made the leap to our homes just yet — and the reason rests largely on the ridge of your nose.

Ever wonder why we wear those pesky 3-D glasses? Theaters generally either use special polarized light or project a pair of images that create a simulated sense of depth. To actually get the 3-D effect, though, you have to wear glasses, which have proven too inconvenient to create much of a market for 3-D TVs.

But researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aim to change that with “Home3D,” a new system that allows users to watch 3-D movies at home without having to wear special glasses.

Home3D converts traditional 3-D movies from stereo into a format that’s compatible with so-called “automultiscopic displays.” According to postdoc Petr Kellnhofer, these displays are rapidly improving in resolution and show great potential for home theater systems.

“Automultiscopic displays aren’t as popular as they could be because they can’t actually play the stereo formats that traditional 3-D movies use in theaters,” says Kellnhofer, who was the lead author on a paper about Home3D that he will present at this month’s SIGGRAPH computer graphics conference in Los Angeles. “By converting existing 3-D movies to this format, our system helps open the door to bringing 3-D TVs into people’s homes.”
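Conceptually, the conversion amounts to synthesizing many slightly shifted viewpoints from the stereo pair. A crude way to picture this, though not the Home3D algorithm itself, is depth-image-based rendering that shifts each pixel in proportion to its disparity:

```python
import numpy as np

# Simplified sketch of generating extra viewpoints from one stereo view plus a
# per-pixel disparity map: shift each pixel horizontally in proportion to its
# disparity, scaled by the virtual camera position. Real systems (including
# Home3D) must also handle occlusions and holes; this sketch ignores both.
def synthesize_view(left_image: np.ndarray, disparity: np.ndarray, alpha: float) -> np.ndarray:
    """alpha=0 reproduces the left view; alpha=1 approximates the right view."""
    h, w = disparity.shape
    out = np.zeros_like(left_image)
    for y in range(h):
        for x in range(w):
            new_x = int(round(x - alpha * disparity[y, x]))
            if 0 <= new_x < w:
                out[y, new_x] = left_image[y, x]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.random((4, 6))
    disp = np.full((4, 6), 2.0)            # constant 2-pixel disparity
    # An automultiscopic display would be fed several such views,
    # e.g. one for each alpha in np.linspace(0, 1, 8).
    print(synthesize_view(left, disp, alpha=0.5))
```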