Leave it to MIT to find a way to turn our obsession with food photos into a science project. Imagine seeing an especially tempting food pic on Instagram and wondering how to reproduce it at home. What ingredients and techniques went into making that dish?
Now researchers at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are working on an artificial neural network that will analyze food photos and turn them into recipes. Just from a photo.
Have a look at how Pic2Recipe works.
OK, it’s a work in progress. But remember life without the Internet or a smartphone? Technology moves quickly and does things we can only dream of now.
The researchers have been patiently feeding their computer 800,000 photos and matching recipes to create a large “dataset” to study. That’s right, artificial intelligence (AI) can analyze all the data, and learn patterns and connections between the recipes and the photos of the finished food.
That gives it something to compare your photo of braised osso buco on a bed of polenta to. The program is learning about recipes much the way we do.
“Just like a human, it can infer the presence of invisible, homogenized or obscured ingredients using context. For instance, if I see a green-colored soup, it probably contains peas — and most definitely salt!” says Nick Hynes, a researcher on the project. “When the model finds the best match, it’s really taking a holistic view of the entire image or the entire recipe. That’s part of why the model is interesting: It learns a lot about recipes in a very unstructured way.”
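To get a feel for what “finding the best match” means, here is a toy sketch (not the CSAIL team’s actual code): the idea behind this kind of system is that a trained network maps both photos and recipes into a shared embedding space, and the best-matching recipe is simply the nearest neighbor of the photo’s embedding. The embeddings and recipe names below are made up for illustration; in the real system they would come from the trained network.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_recipe(photo_embedding, recipe_embeddings, recipe_names):
    """Return the recipe whose embedding lies closest to the photo's."""
    scores = [cosine_similarity(photo_embedding, r) for r in recipe_embeddings]
    return recipe_names[scores.index(max(scores))]

# Hypothetical pre-computed recipe embeddings (illustrative values only).
recipes = ["osso buco with polenta", "pea soup", "chocolate cake"]
recipe_vecs = [
    [0.9, 0.1, 0.0],
    [0.1, 0.9, 0.1],
    [0.0, 0.1, 0.9],
]

# Hypothetical embedding of a photo of a green soup.
photo_vec = [0.2, 0.85, 0.05]
print(best_recipe(photo_vec, recipe_vecs, recipes))  # prints "pea soup"
```

The real model learns these embeddings from the 800,000 photo–recipe pairs, so that photos and recipes of the same dish land near each other; the retrieval step itself is conceptually as simple as the nearest-neighbor lookup above.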
To read more about this, head to NPR for the full article.