I’ve been following some cool work coming out of University of Rochester. I did a story for WXXI here, about how researchers in the computer science department teamed up with researchers in the medical department to create software that could help track your mood and monitor mental health.
I spoke with Jiebo Luo, an engineer, and Vince Silenzio, a psychiatrist. To most people, the most notable aspect of their research is the selfie-for-your-health-y video feature. Users take a vid of their faces with a front-facing camera and a computer tells them what they’re emoting. The software reads nonverbal cues like facial tics and micro-expressions. It can gauge your heart rate by analyzing your forehead. It can register the dilation of your pupils. And, of course, it can read whether or not you’re smiling. (Victims of RBF beware.)
The less-sexy part of this software, which Luo says could potentially be turned into a smartphone app, has nothing to do with videos. The program also analyzes its users’ social media use. Everything from how fast you’re scrolling to the kitten picture you reposted is judged. It also analyzes your network, and puts a value on every link that connects that network, from friendships between users, to likes, to posts or comments.
Right now, the program only registers emotions as positive, negative, or neutral. As far as algorithms are concerned, every interaction you have with your social media has a value of +, -, or n.
So let’s break this down:
Let’s say you’re strolling around Facebook town and you see a picture of a kitten that you just have to share on your wall. The computer would analyze the picture. (Fluffy, baby animal? Pinkish hues? Bright light? Value: Positive.) You added some text with the picture: AWWWWW! (Value: Positive) You get a ton of likes and a few smiley emoticons in the comments section (Positive, positive). Odds are, your day is going pretty sweet.
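If you want a feel for how that kind of tallying might work, here’s a toy sketch. To be clear: the keyword lists, weights, and interaction types below are all invented for illustration; the real software uses trained classifiers, not word lists.

```python
# Toy sketch of the positive/negative/neutral scoring described above.
# Keyword lists and scoring rules are made up for illustration.

POSITIVE_WORDS = {"aww", "awwww", "cute", "love", ":)", "yay"}
NEGATIVE_WORDS = {"ugh", "sad", "angry", ":("}

def score_text(text):
    """Return +1, -1, or 0 for a snippet of text."""
    words = [w.strip("!.,") for w in text.lower().split()]
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    if pos > neg:
        return +1
    if neg > pos:
        return -1
    return 0

def score_interactions(interactions):
    """Sum the valence of a list of (kind, payload) interactions."""
    total = 0
    for kind, payload in interactions:
        if kind == "like":
            total += 1  # a like always counts as positive here
        elif kind in ("post", "comment"):
            total += score_text(payload)
    return total

# The kitten-picture day from the walkthrough above:
day = [
    ("post", "AWWWW! cutest kitten ever"),
    ("like", None),
    ("like", None),
    ("comment", ":) love it"),
]
print(score_interactions(day))  # 4 -- a pretty sweet day
```

Every interaction lands on the +/-/n scale, and the running total is a crude proxy for how your day is going.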
For someone battling mental health issues like depression or bipolar disorder, this information could really help them get a handle on what’s going on inside their heads. It could also help bridge the gap between a patient and a clinician during a tele-health session, which is a story for another post.
A lot of work went into analyzing that kitten picture. Luo’s been working on sentiment analysis for some time. His algorithm, called Sentribute, does exactly what the name suggests. It uses attributes within an image to judge its sentiment. I won’t go into the minutiae of the operations (mostly because I tried really hard but I really don’t understand any of them, I’m so sorry), but you can read his paper on it here.
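The rough shape of the idea, though, is graspable: detectors score an image on mid-level attributes, and sentiment falls out of a weighted combination of those scores. Here’s a hand-wavy sketch in that spirit; the attribute names, weights, and threshold are all invented, not taken from the Sentribute paper.

```python
# Rough sketch of attribute-based image sentiment: separate detectors
# (faked here) rate an image 0..1 on mid-level attributes, and sentiment
# is a weighted vote over those ratings. All names/weights are invented.

ATTRIBUTE_WEIGHTS = {
    "bright_lighting": +0.6,
    "vivid_colors":    +0.4,
    "baby_animal":     +0.9,
    "cluttered_scene": -0.3,
    "dark_tones":      -0.7,
}

def image_sentiment(attribute_scores, threshold=0.2):
    """Combine detector outputs into +1 (positive), -1 (negative), or 0."""
    s = sum(ATTRIBUTE_WEIGHTS[a] * v for a, v in attribute_scores.items())
    if s > threshold:
        return +1
    if s < -threshold:
        return -1
    return 0

# Fluffy, baby animal? Pinkish hues? Bright light?
kitten = {"bright_lighting": 0.8, "vivid_colors": 0.6, "baby_animal": 0.95}
print(image_sentiment(kitten))  # 1, i.e. positive
```

The hard part, of course, is the detectors themselves; that’s the minutiae the paper covers.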
But Luo doesn’t stop there. Not only can you use sentiment analysis to determine the mood of an individual, you can use it to monitor the mood of the country!
I’m not being hyperbolic. Stay with me.
Luo took his fancy-pants algorithm to Flickr and went through images of 2008 presidential candidates. He wanted to see if social (multi)media could be used to predict elections. His theory was that people will post pictures of the candidates they like and will vote for, and therefore the candidate with the greatest online showing would be the victor.
In a sense, each time an image or video is uploaded or viewed, it constitutes an implicit vote for (or against) the subject of the image. This vote carries along with it a rich set of associated data including time and (often) location information. By aggregating such votes across millions of Internet users, we reveal the wisdom that is embedded in social multimedia sites for social science applications such as politics, economics, and marketing.
During this process, he had to correct for things like sarcasm and attack ads, which is where the sentiment analysis comes in. Not all pictures of Obama are counted as “votes” per se, but flattering pictures of him (smiling, head up, arms raised, otherwise stereotypically strong and powerful) would be.
The results? Flickr users correctly predicted the outcomes of the 2008 primary and general elections. Sometimes with an accuracy so uncanny I don’t think you could recreate it if you tried.
An interesting finding: Obama won the presidential election on 11/4/2008 by capturing 52.9% of the popular vote. The TUPD score for Obama and McCain on that same day was 642 (Obama) vs. 571 (McCain), giving Obama 642/(642 + 571) = 52.9%, exactly the same as the real voting result.
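That last bit of arithmetic is just a two-way share: each candidate’s score divided by the combined total. Here it is worked through with the numbers quoted above (TUPD is the paper’s daily score; I’m only reproducing the division, not how TUPD itself is computed).

```python
# Reproducing the arithmetic in the quoted result: turn raw daily
# photo-based scores for two candidates into a two-way vote share.

def two_way_share(score_a, score_b):
    """Candidate A's share of the combined score, as a percentage."""
    return 100 * score_a / (score_a + score_b)

obama, mccain = 642, 571  # the TUPD scores quoted above
print(round(two_way_share(obama, mccain), 1))  # 52.9
```

642 out of 1,213 total works out to 52.9%, matching Obama’s actual share of the popular vote.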
You may be asking yourself, Would this experiment have been as accurate if it didn’t feature Barack Obama, “The Social Media President?” But Luo’s research also determined that Flickr photos accurately predicted the Iowa caucuses, the New Hampshire Primary, and the Michigan primary wins for Huckabee, McCain, and Romney, respectively.
On this kind of macro-level, Luo thinks this research can lead to a greater understanding of the wisdom of crowds. He also thinks it can be used to understand economics and marketing in a new way. (His paper includes analysis of Apple product sales through Flickr shares.)
But on a smaller scale, Luo’s research can give us all kinds of insights about the pictures we choose to post and share. Maybe your decision to use a particular Instagram filter is a warning sign that you’re on the edge of a depressive episode, and could prompt you to get help before it becomes an issue. It could help us understand how we deal with contagious emotions within our personal networks. As Luo told me, the applications are endless!
Sentiment analysis is kind of blowing my mind right now, in part because recognizing emotions is so embedded in my understanding of the human experience. I’ll categorize this under “turning computers into sentient beings.”
Also, I got through this entire post without stating the obvious “A PICTURE IS WORTH A THOUSAND WORDS” joke (until just now).