6 Ways that Machine Vision can Help Museums

AI-machine-vision.jpg

In my last post, we examined the topics of artificial intelligence and machine learning in museums. Today, I’d like to continue this thread and focus on machine vision. It couldn’t be more timely, as Google recently announced their public beta of Cloud Vision API and it has us all dreaming of interesting ways that machine vision can be used to help museums.

What is machine vision?

Machine vision is the ability of a computer to understand what it is seeing.

Back in 2014, the Museum of Arts and Design in New York hosted a panel examining the “Cultural Impact of Computer Vision” through the eyes of artists. Today, let’s take a look from the perspective of museums.

Machine vision can be used to inspect and analyze images. Imagine being able to classify all of your visual objects with the flip of a switch (actually, a few lines of code).

Let’s take a look at a few examples!

1. Identifying Subject Matter

Machine vision has become advanced enough to detect the subject matter and objects depicted in an image. What is depicted in this painting, photo, video, or sculpture?

We put Google Vision API to the test with Canaletto’s The Grand Canal in Venice from Palazzo Flangini to Campo San Marcuola located at the J. Paul Getty Museum in Los Angeles.

the-grand-canal.jpg

Maybe it was beginner’s luck, but the four terms returned (watercraft rowing, rowing, gondola, and painting) were all accurate descriptions of the subject matter and objects.

screen.gif

There is still a ways to go with object classification, but it’s worth noting that the more you “train” a machine vision engine, the more accurate it becomes.
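To make this concrete, here is a minimal sketch of how a label-detection request to the Cloud Vision REST API can be assembled. The endpoint and JSON shape follow Google’s documented format; the image URL is a placeholder and the helper function names are our own:

```python
import json

# Documented REST endpoint for the Cloud Vision API (requires an API key in practice).
VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def build_label_request(image_uri, max_results=10):
    """Assemble the JSON body for a LABEL_DETECTION request."""
    return {
        "requests": [
            {
                "image": {"source": {"imageUri": image_uri}},
                "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
            }
        ]
    }

def top_labels(response):
    """Pull (description, score) pairs out of an annotate response."""
    annotations = response["responses"][0].get("labelAnnotations", [])
    return [(a["description"], a["score"]) for a in annotations]

# Build a request for a painting image (placeholder URL):
body = build_label_request("https://example.org/the-grand-canal.jpg", max_results=4)
print(json.dumps(body, indent=2))
```

Posting that body to the endpoint (with valid credentials) returns label annotations that `top_labels` can then flatten for storage alongside your collection records.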

2. Extracting Color Composition

Color composition is one meta-tag that you are unlikely to find in most museum collections databases. Running an object’s image through a computer vision tool can extract data about its color clusters, partitions, and histograms.
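The core idea can be sketched in a few lines of plain Python: quantize each pixel into a coarse color bucket, then count the buckets. This is a toy version of what a vision API’s dominant-color feature does; the six-pixel “image” below is invented for illustration:

```python
from collections import Counter

def dominant_colors(pixels, levels=4, top=3):
    """Quantize each RGB pixel into a coarse bucket (`levels` steps per
    channel) and return the most common buckets with their share of the image."""
    step = 256 // levels
    counts = Counter(
        (r // step * step, g // step * step, b // step * step)
        for r, g, b in pixels
    )
    total = len(pixels)
    return [(rgb, count / total) for rgb, count in counts.most_common(top)]

# A toy 6-pixel "image": four blue-ish pixels, two red-ish ones.
pixels = [(10, 20, 200)] * 4 + [(220, 30, 30)] * 2
for rgb, share in dominant_colors(pixels):
    print(rgb, f"{share:.0%}")
# (0, 0, 192) 67%
# (192, 0, 0) 33%
```

In practice you would feed in the decoded pixels of a collection image and store the top buckets as searchable color meta-tags.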

3. Sentiment Analysis

If an image contains unobstructed human faces, machine vision can determine the emotional state of those portrayed by analyzing their facial characteristics.

To put this to the test, we ran a few portraits through the Emotion API of Microsoft Project Oxford:

Rembrandt (circle of), “Bust of a Laughing Young Man” (1629)

Rijksmuseum

bust-of-a-laughing-young-man.jpg

Nice to see you, too!

Pablo Picasso, “Femme aux Bras Croisés” (1901)

Private collection

Pablo_Picasso_Femme_aux_Bras_Croisés_Woman_with_Folded_Arms.jpg

In line with his Blue Period: sad people in gloomy settings.

Otto Dix, “Self-Portrait” (1912)

Detroit Institute of Arts

mr-dix.jpg

Why so angry, Mr. Dix?
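Emotion-detection APIs of this kind typically return one confidence score per emotion for each detected face; picking the dominant one is a one-liner. The response shape and the score values below are invented for illustration, loosely modeled on what such APIs return:

```python
def dominant_emotion(face_scores):
    """Return the (emotion, score) pair with the highest confidence."""
    return max(face_scores.items(), key=lambda kv: kv[1])

# Hypothetical per-face scores for a laughing portrait (values invented):
laughing_young_man = {
    "anger": 0.001, "contempt": 0.002, "disgust": 0.001, "fear": 0.0,
    "happiness": 0.97, "neutral": 0.02, "sadness": 0.003, "surprise": 0.003,
}
emotion, score = dominant_emotion(laughing_young_man)
print(f"{emotion}: {score:.0%}")  # happiness: 97%
```

Storing the dominant emotion per portrait would let a collection be browsed by mood as well as by artist or period.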

4. Text / Character Recognition

Want to easily extract text from every object in your collection? This has been possible for many years. The tool, commonly known as “optical character recognition” (OCR), has become more accessible and faster to use via cloud APIs.

While this might not be absolutely necessary for pieces by Ed Ruscha or Lawrence Weiner (as the title and text displayed in their works are usually the same), this function’s greatest value could come from extracting text from written documents (historical letters, etc.) so that it’s all searchable and easy to classify.
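Once OCR text has been extracted, making it searchable is straightforward: build an inverted index mapping each word to the documents that contain it. A minimal sketch (the letter IDs and text below are invented sample data):

```python
import re
from collections import defaultdict

def build_index(documents):
    """Map each lowercase word to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(doc_id)
    return index

# Hypothetical OCR output from two digitized letters:
ocr_texts = {
    "letter-001": "Dear Theo, the weather in Arles is splendid.",
    "letter-002": "The exhibition opens in Paris next week.",
}
index = build_index(ocr_texts)
print(sorted(index["the"]))    # ['letter-001', 'letter-002']
print(sorted(index["arles"]))  # ['letter-001']
```

A real system would add stemming and fuzzy matching to cope with OCR errors, but the principle, text in, searchable index out, is the same.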

5. Recognizing Similarity and Patterns

Are there other works in your collection that are very similar, not just in subject matter but in visual composition? A computer can see these relationships and quantify the differences and similarities.

These two Clyfford Still “replica” paintings are slightly different: 5.58%, to be exact!

replica.jpg

(L to R) PH-225, (1956). Oil on canvas. Collection of the Modern Art Museum of Fort Worth; PH-1074, (1956–9). Oil on canvas. Clyfford Still Museum © City and County of Denver

I was personally inspired to find this out after visiting the Clyfford Still Museum in October 2015 for “Repeat/Recreate,” a fascinating exhibition in its own right.

Walking into the Impressionism Gallery at the Museum of Fine Arts, Boston, you’ll find these two stunning paintings by Claude Monet, side by side. According to computer analysis, the two works are 96.81% similar.

monet.jpg

(L to R) Water Lilies, 1905. Oil on canvas; Water Lilies. 1907. Oil on canvas. Collection of Museum of Fine Arts, Boston
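One simple way a computer can quantify visual similarity is a perceptual “average hash”: reduce each image to a grid, mark which cells are brighter than the mean, and count matching bits. This is a toy sketch of the idea, not the tool used for the figures above; the 4×4 grayscale grids are invented:

```python
def average_hash(gray):
    """Compute a simple average hash: 1 where a pixel is above the image mean."""
    flat = [p for row in gray for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def similarity(hash_a, hash_b):
    """Percentage of matching hash bits (Hamming similarity)."""
    matches = sum(a == b for a, b in zip(hash_a, hash_b))
    return 100.0 * matches / len(hash_a)

# Two tiny 4x4 grayscale "paintings" that differ in a single corner pixel:
a = [[200, 200, 10, 10],
     [200, 200, 10, 10],
     [10, 10, 200, 200],
     [10, 10, 200, 200]]
b = [row[:] for row in a]
b[0][0] = 10  # alter one pixel

print(f"{similarity(average_hash(a), average_hash(b)):.2f}% similar")  # 93.75% similar
```

Production systems downsample real images (typically to 8×8 or 16×16) before hashing, which makes the comparison robust to resolution and minor color differences.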

6. Art Authentication

Back in 2008, PBS NOVA covered the case of computers helping distinguish forged art from original masterpieces. This project was conducted in cooperation with the Van Gogh Museum and challenged computer scientists to build tools to analyze brush strokes and identify forgeries.

So, where do we begin?

There are various tools available today to help you jumpstart your journey into machine vision.

As you can see, computer vision is a very powerful tool and is more accessible than ever before. In the hands of museums, it can lead to interesting discoveries, rich data, and new paths into your collection.

Does your museum have plans to use any aspect of machine vision this year? Let us know your take on the topic on Twitter at @Cuseum with the hashtag #musetech.

Related Reading