Monthly Archives: February 2017

Practical paper folding patterns to produce any 3D structure

In a 1999 paper, Erik Demaine — now an MIT professor of electrical engineering and computer science, but then an 18-year-old PhD student at the University of Waterloo, in Canada — described an algorithm that could determine how to fold a piece of paper into any conceivable 3-D shape.

It was a milestone paper in the field of computational origami, but the algorithm didn’t yield very practical folding patterns. Essentially, it took a very long strip of paper and wound it into the desired shape. The resulting structures tended to have lots of seams where the strip doubled back on itself, so they weren’t very sturdy.

At the Symposium on Computational Geometry in July, Demaine and Tomohiro Tachi of the University of Tokyo will announce the completion of a quest that began with that 1999 paper: a universal algorithm for folding origami shapes that guarantees a minimum number of seams.

“In 1999, we proved that you could fold any polyhedron, but the way that we showed how to do it was very inefficient,” Demaine says. “It’s efficient if your initial piece of paper is super-long and skinny. But if you were going to start with a square piece of paper, then that old method would basically fold the square paper down to a thin strip, wasting almost all the material. The new result promises to be much more efficient. It’s a totally different strategy for thinking about how to make a polyhedron.”

Demaine and Tachi are also working to implement the algorithm in a new version of Origamizer, free software for generating origami crease patterns, the first version of which Tachi released in 2008.

Maintaining boundaries

The researchers’ algorithm designs crease patterns for producing any polyhedron — that is, a 3-D surface made up of many flat facets. Computer graphics software, for instance, models 3-D objects as polyhedra consisting of many tiny triangles. “Any curved shape you could approximate with lots of little flat sides,” Demaine explains.
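To make the "lots of little flat sides" idea concrete, here is a small illustrative sketch (not the researchers' code) that represents a polyhedron as a list of triangular facets and approximates a sphere by repeatedly subdividing an octahedron, with each subdivision pushing the new vertices onto the sphere. The flat-facet surface area creeps up toward the true sphere area of 4π:

```python
import math

# A polyhedron as a list of triangular facets (each a triple of 3-D points).
# Start from an octahedron inscribed in the unit sphere.
V = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
octahedron = [(V[a], V[b], V[c]) for a, b, c in
              [(0,2,4),(2,1,4),(1,3,4),(3,0,4),
               (2,0,5),(1,2,5),(3,1,5),(0,3,5)]]

def normalize(p):
    n = math.sqrt(sum(x * x for x in p))
    return tuple(x / n for x in p)

def subdivide(tris):
    """Split each facet into 4 and project the new vertices onto the sphere."""
    out = []
    for a, b, c in tris:
        ab = normalize(tuple((a[i] + b[i]) / 2 for i in range(3)))
        bc = normalize(tuple((b[i] + c[i]) / 2 for i in range(3)))
        ca = normalize(tuple((c[i] + a[i]) / 2 for i in range(3)))
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

def area(tris):
    """Total area of the flat facets, via the cross-product formula."""
    total = 0.0
    for a, b, c in tris:
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        cx = (u[1]*v[2] - u[2]*v[1],
              u[2]*v[0] - u[0]*v[2],
              u[0]*v[1] - u[1]*v[0])
        total += 0.5 * math.sqrt(sum(x * x for x in cx))
    return total

mesh = octahedron
for _ in range(4):
    mesh = subdivide(mesh)   # 8 -> 32 -> 128 -> 512 -> 2048 facets
print(len(mesh), "facets, area =", round(area(mesh), 3),
      "vs sphere area", round(4 * math.pi, 3))
```

A crease-pattern algorithm like the one described here takes such a facet list as input; the finer the mesh, the closer the folded paper hugs the curved target.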

Technically speaking, the guarantee that the folding will involve the minimum number of seams means that it preserves the “boundaries” of the original piece of paper. Suppose, for instance, that you have a circular piece of paper and want to fold it into a cup. Leaving a smaller circle at the center of the piece of paper flat, you could bunch the sides together in a pleated pattern; in fact, some water-cooler cups are manufactured using exactly this design.
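A quick back-of-envelope calculation (illustrative numbers only) shows why pleats are needed: if a circle of radius R is folded so an inner disk of radius r stays flat as the base and the surrounding annulus turns up into the wall, the paper's outer edge is 2πR long but the finished rim is only 2πr around, so the pleats must absorb the difference, while that edge still ends up on the rim:

```python
import math

# Back-of-envelope geometry for the pleated water-cooler cup (numbers invented).
R = 6.0   # radius of the circular piece of paper, cm
r = 4.0   # radius of the flat base left at the center, cm

wall_height = R - r                           # annulus width becomes the wall
rim_circumference = 2 * math.pi * r           # circumference of the finished rim
paper_edge_length = 2 * math.pi * R           # length of the paper's outer edge
excess_folded_into_pleats = paper_edge_length - rim_circumference

print(f"wall height {wall_height:.1f} cm; "
      f"{excess_folded_into_pleats:.2f} cm of edge absorbed by pleats")
```

The paper's entire outer edge lands on the cup's rim, which is exactly the boundary-preserving behavior the new algorithm guarantees.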

In this case, the boundary of the cup — its rim — is the same as that of the unfolded circle — its outer edge. The same would not be true with the folding produced by Demaine and his colleagues’ earlier algorithm. There, the cup would consist of a thin strip of paper wrapped round and round in a coil — and it probably wouldn’t hold water.

“The new algorithm is supposed to give you much better, more practical foldings,” Demaine says. “We don’t know how to quantify that mathematically, exactly, other than it seems to work much better in practice. But we do have one mathematical property that nicely distinguishes the two methods. The new method keeps the boundary of the original piece of paper on the boundary of the surface you’re trying to make. We call this watertightness.”

Sleep studied nonintrusively at home using wireless signals

More than 50 million Americans suffer from sleep disorders, and diseases including Parkinson’s and Alzheimer’s can also disrupt sleep. Diagnosing and monitoring these conditions usually requires attaching electrodes and a variety of other sensors to patients, which can further disrupt their sleep.

To make it easier to diagnose and study sleep problems, researchers at MIT and Massachusetts General Hospital have devised a new way to monitor sleep stages without sensors attached to the body. Their device uses an advanced artificial intelligence algorithm to analyze the radio signals around the person and translate those measurements into sleep stages: light, deep, or rapid eye movement (REM).
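The actual system feeds raw radio measurements into a deep neural network; as a toy illustration of what "translating measurements into sleep stages" means, the sketch below maps hypothetical per-epoch features derived from the radio signal (breathing rate, breathing variability, body movement, all invented here) to one of the three stage labels with hand-written rules:

```python
# Toy illustration only -- the real system learns this mapping with a deep
# neural network rather than fixed thresholds. One 30-second epoch of
# RF-derived features in, one stage label out.

def classify_epoch(breath_rate, breath_variability, movement):
    """Assign a sleep stage to one 30-second epoch of RF-derived features."""
    if movement > 0.5:             # large body motion: likely light sleep
        return "light"
    if breath_variability > 0.3:   # irregular breathing is typical of REM
        return "REM"
    if breath_rate < 13:           # slow, steady breathing: deep sleep
        return "deep"
    return "light"

night = [
    (15.0, 0.1, 0.7),   # tossing and turning
    (12.0, 0.1, 0.0),   # slow, steady breathing
    (16.0, 0.5, 0.1),   # irregular breathing, little movement
]
print([classify_epoch(*epoch) for epoch in night])  # -> ['light', 'deep', 'REM']
```

The hard part, which the deep-learning model handles, is extracting reliable physiological features from noisy reflected radio signals in the first place.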

“Imagine if your Wi-Fi router knows when you are dreaming, and can monitor whether you are having enough deep sleep, which is necessary for memory consolidation,” says Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, who led the study. “Our vision is developing health sensors that will disappear into the background and capture physiological signals and important health metrics, without asking the user to change her behavior in any way.”

Katabi worked on the study with Matt Bianchi, chief of the Division of Sleep Medicine at MGH, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science and a member of the Institute for Data, Systems, and Society at MIT. Mingmin Zhao, an MIT graduate student, is the paper’s first author, and Shichao Yue, another MIT graduate student, is also a co-author.

Using electronic health records to predict outcomes in hospitals

Doctors are often deluged by signals from charts, test results, and other metrics to keep track of. It can be difficult to integrate and monitor all of these data for multiple patients while making real-time treatment decisions, especially when data is documented inconsistently across hospitals.

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) explore ways for computers to help doctors make better medical decisions.

One team created a machine-learning approach called “ICU Intervene” that takes large amounts of intensive-care-unit (ICU) data, from vitals and labs to notes and demographics, to determine what kinds of treatments are needed for different symptoms. The system uses “deep learning” to make real-time predictions, learning from past ICU cases to make suggestions for critical care, while also explaining the reasoning behind these decisions.
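The paper's model is a deep network trained on real ICU records; purely as a sketch of the "predict and explain" behavior described above, the following scores an hour of vitals and labs with a tiny logistic model and reports which feature drove the score. All feature names and weights here are invented:

```python
import math

# Illustrative sketch only, not ICU Intervene itself: a fixed logistic model
# over a window of vitals/labs, with per-feature contributions reported as a
# crude "explanation." Feature names, weights, and values are all invented.

FEATURES = ["heart_rate", "mean_bp", "spo2", "lactate"]
WEIGHTS = {"heart_rate": 0.03, "mean_bp": -0.05, "spo2": -0.08, "lactate": 0.9}
BIAS = 8.0

def predict_intervention(window):
    """Return (probability an intervention is needed, per-feature contributions)."""
    contributions = {f: WEIGHTS[f] * window[f] for f in FEATURES}
    score = BIAS + sum(contributions.values())
    prob = 1 / (1 + math.exp(-score))
    return prob, contributions

# One hour of measurements for a deteriorating (hypothetical) patient.
window = {"heart_rate": 112, "mean_bp": 58, "spo2": 91, "lactate": 4.2}
prob, contrib = predict_intervention(window)
top = max(contrib, key=lambda f: abs(contrib[f]))
print(f"intervention risk {prob:.2f}; largest contribution: {top}")
```

The real system replaces the fixed weights with representations learned from thousands of past ICU stays, but the interface is the same: a risk estimate plus an account of which inputs mattered.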

“The system could potentially be an aid for doctors in the ICU, which is a high-stress, high-demand environment,” says PhD student Harini Suresh, lead author on the paper about ICU Intervene. “The goal is to leverage data from medical records to improve health care and predict actionable interventions.”

Another team developed an approach called “EHR Model Transfer” that allows predictive models trained on data from one electronic health record (EHR) system to be applied in a different EHR system. Specifically, using this approach the team showed that predictive models for mortality and prolonged length of stay can be trained on one EHR system and used to make predictions in another.
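One way to picture the transfer problem (a simplification of the actual approach, which maps both systems' records into a shared clinical vocabulary): translate each record from the target hospital's field names and units into the schema the model was trained on, then reuse the trained model unchanged. Every field name, conversion, and weight below is invented for illustration:

```python
# Sketch of cross-EHR transfer via a schema adapter. All names are invented.

MODEL_SCHEMA = ["age_years", "temp_c", "creatinine_mg_dl"]

# Per-system adapters: source field -> (model field, unit conversion).
ADAPTERS = {
    "hospital_b": {
        "patient_age": ("age_years", lambda v: v),
        "temp_f": ("temp_c", lambda v: (v - 32) * 5 / 9),
        "creat": ("creatinine_mg_dl", lambda v: v),
    }
}

def to_model_schema(record, system):
    """Translate one record into the schema the model was trained on."""
    out = {}
    for src_field, value in record.items():
        dst_field, convert = ADAPTERS[system][src_field]
        out[dst_field] = convert(value)
    return out

def mortality_model(x):
    """Stand-in for a model trained on the source EHR system's schema."""
    return 0.01 * x["age_years"] + 0.02 * max(0, x["temp_c"] - 37)

record_b = {"patient_age": 70, "temp_f": 101.3, "creat": 1.1}
x = to_model_schema(record_b, "hospital_b")
print(round(mortality_model(x), 3))
```

The hard cases the researchers address are the ones this sketch glosses over: fields documented inconsistently, coded in different vocabularies, or missing entirely in one of the two systems.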

ICU Intervene was co-developed by Suresh, undergraduate student Nathan Hunt, postdoc Alistair Johnson, researcher Leo Anthony Celi, MIT Professor Peter Szolovits, and PhD student Marzyeh Ghassemi. It was presented this month at the Machine Learning for Healthcare Conference in Boston.