Category Archives: Computer

Alternative to 3D scanners that cost 200 times as much

Last year, a team of forensic dentists got authorization to perform a 3-D scan of the prized Tyrannosaurus rex skull at the Field Museum of Natural History in Chicago, in an effort to explain some strange holes in the jawbone.

Upon discovering that their high-resolution dental scanners couldn’t handle a jaw as big as a tyrannosaur’s, they contacted the Camera Culture group at MIT’s Media Lab, which had recently made headlines with a prototype system for producing high-resolution 3-D scans.

The prototype wasn’t ready for a job that big, however, so Camera Culture researchers used $150 in hardware and some free software to rig up a system that has since produced a 3-D scan of the entire five-foot-long T. rex skull, which a team of researchers — including dentists, anthropologists, veterinarians, and paleontologists — is using to analyze the holes.

The Media Lab researchers report their results in the latest issue of the journal PLOS ONE.

“A lot of people will be able to start using this,” says Anshuman Das, a research scientist at the Camera Culture group and first author on the paper. “That’s the message I want to send out to people who would generally be cut off from using technology — for example, paleontologists or museums that are on a very tight budget. There are so many other fields that could benefit from this.”

Das is joined on the paper by Ramesh Raskar, a professor of media arts and science at MIT, who directs the Camera Culture group, and by Denise Murmann and Kenneth Cohrn, the forensic dentists who launched the project.

Bringing 3-D movies into our homes

While 3-D movies continue to be popular in theaters, they haven’t made the leap to our homes just yet — and the reason rests largely on the ridge of your nose.

Ever wonder why we wear those pesky 3-D glasses? Theaters generally project a pair of offset images, using either polarized light or rapidly alternating frames, to create a simulated sense of depth. To actually get the 3-D effect, though, you have to wear glasses, which have proven too inconvenient to create much of a market for 3-D TVs.

But researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aim to change that with “Home3D,” a new system that allows users to watch 3-D movies at home without having to wear special glasses.

Home3D converts traditional 3-D movies from stereo into a format that’s compatible with so-called “automultiscopic displays.” According to postdoc Petr Kellnhofer, these displays are rapidly improving in resolution and show great potential for home theater systems.

“Automultiscopic displays aren’t as popular as they could be because they can’t actually play the stereo formats that traditional 3-D movies use in theaters,” says Kellnhofer, who was the lead author on a paper about Home3D that he will present at this month’s SIGGRAPH computer graphics conference in Los Angeles. “By converting existing 3-D movies to this format, our system helps open the door to bringing 3-D TVs into people’s homes.”
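The article doesn't spell out how Home3D performs the conversion, but the core idea behind turning a stereo pair into the many views an automultiscopic display needs is view synthesis: shifting pixels according to their disparity to render intermediate camera positions. Here is a deliberately minimal sketch of that idea (the function name and toy data are hypothetical, and real systems must also fill occlusion holes):

```python
import numpy as np

def warp_view(left, disparity, alpha):
    # Forward-warp the left image toward a virtual camera position:
    # alpha = 0 reproduces the left view, alpha = 1 approximates the right.
    h, w = left.shape
    out = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xn = x - int(round(alpha * disparity[y, x]))
            if 0 <= xn < w:
                out[y, xn] = left[y, x]
    return out

# Toy scene: a bright vertical bar at column 6 with uniform disparity 4.
left = np.zeros((4, 12))
left[:, 6] = 1.0
disparity = np.full((4, 12), 4.0)

views = [warp_view(left, disparity, a) for a in (0.0, 0.5, 1.0)]
# The bar sits at columns 6, 4, and 2 in the three synthesized views.
```

An automultiscopic display would interleave many such views so that each eye, from each seating position, sees a slightly different one without glasses.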

Practical paper folding patterns to produce any 3D structure

In a 1999 paper, Erik Demaine — now an MIT professor of electrical engineering and computer science, but then an 18-year-old PhD student at the University of Waterloo, in Canada — described an algorithm that could determine how to fold a piece of paper into any conceivable 3-D shape.

It was a milestone paper in the field of computational origami, but the algorithm didn’t yield very practical folding patterns. Essentially, it took a very long strip of paper and wound it into the desired shape. The resulting structures tended to have lots of seams where the strip doubled back on itself, so they weren’t very sturdy.

At the Symposium on Computational Geometry in July, Demaine and Tomohiro Tachi of the University of Tokyo will announce the completion of a quest that began with that 1999 paper: a universal algorithm for folding origami shapes that guarantees a minimum number of seams.

“In 1999, we proved that you could fold any polyhedron, but the way that we showed how to do it was very inefficient,” Demaine says. “It’s efficient if your initial piece of paper is super-long and skinny. But if you were going to start with a square piece of paper, then that old method would basically fold the square paper down to a thin strip, wasting almost all the material. The new result promises to be much more efficient. It’s a totally different strategy for thinking about how to make a polyhedron.”

Demaine and Tachi are also working to implement the algorithm in a new version of Origamizer, the free software for generating origami crease patterns whose first version Tachi released in 2008.

Maintaining boundaries

The researchers’ algorithm designs crease patterns for producing any polyhedron — that is, a 3-D surface made up of many flat facets. Computer graphics software, for instance, models 3-D objects as polyhedra consisting of many tiny triangles. “Any curved shape you could approximate with lots of little flat sides,” Demaine explains.
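The triangle-mesh representation Demaine alludes to is easy to make concrete. A small sketch (not from the paper): a polyhedron stored as vertices plus triangular faces, checked against Euler's formula V − E + F = 2, which holds for any closed polyhedron.

```python
# A tetrahedron as a triangle mesh: 4 vertices, 4 triangular faces.
vertices = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

# Collect each undirected edge exactly once.
edges = {frozenset(e) for f in faces
         for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0]))}

V, E, F = len(vertices), len(edges), len(faces)
# Euler's formula for a closed polyhedron: V - E + F = 2.
assert V - E + F == 2   # 4 - 6 + 4 == 2
```

Graphics software approximates curved surfaces by refining such meshes with ever-smaller triangles, exactly the "lots of little flat sides" Demaine describes.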

Technically speaking, the guarantee that the folding will involve the minimum number of seams means that it preserves the “boundaries” of the original piece of paper. Suppose, for instance, that you have a circular piece of paper and want to fold it into a cup. Leaving a smaller circle at the center of the piece of paper flat, you could bunch the sides together in a pleated pattern; in fact, some water-cooler cups are manufactured on this exact design.

In this case, the boundary of the cup — its rim — is the same as that of the unfolded circle — its outer edge. The same would not be true with the folding produced by Demaine and his colleagues’ earlier algorithm. There, the cup would consist of a thin strip of paper wrapped round and round in a coil — and it probably wouldn’t hold water.

“The new algorithm is supposed to give you much better, more practical foldings,” Demaine says. “We don’t know how to quantify that mathematically, exactly, other than it seems to work much better in practice. But we do have one mathematical property that nicely distinguishes the two methods. The new method keeps the boundary of the original piece of paper on the boundary of the surface you’re trying to make. We call this watertightness.”
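The boundary-preservation property has a simple mechanical analogue on triangle meshes: a boundary edge belongs to exactly one face, while interior edges are shared by two, so a closed ("watertight") surface has no boundary edges at all. A minimal sketch illustrating that check (this is an illustration of the boundary concept, not the paper's algorithm):

```python
from collections import Counter

def boundary_edges(faces):
    # An edge on the boundary belongs to exactly one face;
    # interior edges are shared by two.
    count = Counter(frozenset(e) for f in faces
                    for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])))
    return [e for e, n in count.items() if n == 1]

# Closed tetrahedron: every edge is shared by two faces -> no boundary.
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
assert boundary_edges(tetra) == []

# A single triangle (an open surface): all three edges are boundary.
assert len(boundary_edges([(0, 1, 2)])) == 3
```

In the cup example, the new algorithm's folding keeps the paper's outer edge on the cup's rim, just as the open surface above keeps its boundary loop intact.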

Sleep studied nonintrusively at home using wireless signals

More than 50 million Americans suffer from sleep disorders, and diseases including Parkinson’s and Alzheimer’s can also disrupt sleep. Diagnosing and monitoring these conditions usually requires attaching electrodes and a variety of other sensors to patients, which can further disrupt their sleep.

To make it easier to diagnose and study sleep problems, researchers at MIT and Massachusetts General Hospital have devised a new way to monitor sleep stages without sensors attached to the body. Their device uses an advanced artificial intelligence algorithm to analyze the radio signals around the person and translate those measurements into sleep stages: light, deep, or rapid eye movement (REM).
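The article doesn't describe the algorithm's internals, but the overall task — mapping features extracted from radio reflections, epoch by epoch, onto discrete sleep stages — can be illustrated with a toy classifier. Everything below (the feature choices, centroid values, and nearest-centroid rule) is invented for illustration and is far simpler than the MIT system:

```python
import math

# Toy illustration (not the MIT system): classify 30-second epochs of
# RF-derived features into sleep stages with a nearest-centroid rule.
# Feature vector: (breathing rate in breaths/min, movement level 0-1).
# The centroid values below are invented for illustration only.
centroids = {
    "light": (14.0, 0.30),
    "deep":  (11.0, 0.05),
    "rem":   (17.0, 0.10),
}

def classify_epoch(features):
    return min(centroids, key=lambda s: math.dist(features, centroids[s]))

epochs = [(11.2, 0.04), (16.8, 0.12), (13.9, 0.28)]
stages = [classify_epoch(e) for e in epochs]
# -> ["deep", "rem", "light"]
```

The real system's contribution is doing this reliably from noisy reflected radio signals, with no sensors on the body.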

“Imagine if your Wi-Fi router knows when you are dreaming, and can monitor whether you are having enough deep sleep, which is necessary for memory consolidation,” says Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, who led the study. “Our vision is developing health sensors that will disappear into the background and capture physiological signals and important health metrics, without asking the user to change her behavior in any way.”

Katabi worked on the study with Matt Bianchi, chief of the Division of Sleep Medicine at MGH, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science and a member of the Institute for Data, Systems, and Society at MIT. Mingmin Zhao, an MIT graduate student, is the paper’s first author, and Shichao Yue, another MIT graduate student, is also a co-author.

Using electronic health records to predict outcomes in hospitals

Doctors are often deluged with data from charts, test results, and other metrics. It can be difficult to integrate and monitor all of this information for multiple patients while making real-time treatment decisions, especially when data is documented inconsistently across hospitals.

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) explore ways for computers to help doctors make better medical decisions.

One team created a machine-learning approach called “ICU Intervene” that takes large amounts of intensive-care-unit (ICU) data, from vitals and labs to notes and demographics, to determine what kinds of treatments are needed for different symptoms. The system uses “deep learning” to make real-time predictions, learning from past ICU cases to make suggestions for critical care, while also explaining the reasoning behind these decisions.
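The shape of the prediction task can be seen in a drastically simplified stand-in. ICU Intervene itself uses deep learning over the full record; the sketch below instead scores a sliding window of two vitals with hand-set weights and a threshold, and every number in it is invented:

```python
# Simplified stand-in for the prediction task (ICU Intervene itself uses
# deep learning over much richer data): score a window of vitals with
# hand-set weights and flag windows that cross a threshold.

def intervention_score(window):
    # window: list of (heart_rate, mean_arterial_pressure) samples.
    hr = sum(s[0] for s in window) / len(window)
    bp = sum(s[1] for s in window) / len(window)
    # Higher heart rate and lower blood pressure raise the score.
    return 0.02 * (hr - 80) + 0.03 * (65 - bp)

def needs_intervention(window, threshold=0.5):
    return intervention_score(window) > threshold

stable        = [(78, 85), (80, 88), (82, 86)]
deteriorating = [(118, 52), (124, 48), (130, 45)]
assert not needs_intervention(stable)
assert needs_intervention(deteriorating)
```

A learned model replaces the hand-set weights with parameters fit to past ICU cases, and the explanation component described in the paper reports which inputs drove each prediction.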

“The system could potentially be an aid for doctors in the ICU, which is a high-stress, high-demand environment,” says PhD student Harini Suresh, lead author on the paper about ICU Intervene. “The goal is to leverage data from medical records to improve health care and predict actionable interventions.”

Another team developed an approach called “EHR Model Transfer” that can facilitate the application of predictive models on an electronic health record (EHR) system, despite being trained on data from a different EHR system. Specifically, using this approach the team showed that predictive models for mortality and prolonged length of stay can be trained on one EHR system and used to make predictions in another.
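One way to see why transfer across EHR systems is hard — and how a shared representation helps — is that different systems encode the same clinical facts under different local codes. The sketch below maps two invented code sets onto a shared concept vocabulary so one model can score both; the codes, mappings, and weights are all hypothetical, not the paper's:

```python
# Illustrative sketch (not the paper's method): map each EHR system's
# local codes onto a shared concept vocabulary, so a model trained
# against one system's records can score another's.

SHARED = ["creatinine_high", "on_ventilator", "age_over_75"]

to_shared = {
    "ehr_a": {"LAB:CR>1.5": "creatinine_high", "PROC:VENT": "on_ventilator",
              "DEMO:75+": "age_over_75"},
    "ehr_b": {"cr_elev": "creatinine_high", "mech_vent": "on_ventilator",
              "elderly": "age_over_75"},
}

def featurize(record, system):
    concepts = {to_shared[system].get(code) for code in record}
    return [1.0 if c in concepts else 0.0 for c in SHARED]

# A mortality-risk model "trained" on EHR A (weights invented):
weights = [0.8, 1.1, 0.6]
def risk(features):
    return sum(w * x for w, x in zip(weights, features))

# The same model scores an equivalent record from EHR B after mapping:
rec_b = ["cr_elev", "mech_vent"]
rec_a = ["LAB:CR>1.5", "PROC:VENT"]
assert risk(featurize(rec_b, "ehr_b")) == risk(featurize(rec_a, "ehr_a"))
```

Once both systems land in the shared feature space, the mortality or length-of-stay model never needs to know which hospital's coding conventions produced the record.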

ICU Intervene was co-developed by Suresh, undergraduate student Nathan Hunt, postdoc Alistair Johnson, researcher Leo Anthony Celi, MIT Professor Peter Szolovits, and PhD student Marzyeh Ghassemi. It was presented this month at the Machine Learning for Healthcare Conference in Boston.