Dec 08

Smarter Recommendations: Mood and Context

This post on TechCrunch, Clerk Dogs Takes a Curated Approach To Movie Recommendations, got me thinking about recommendation engines again — specifically, the Netflix Prize contest to optimize its recommendation algorithm.  The problem I’ve had with the Netflix approach is that it assumes it’s just a matter of being smarter at using inputs to predict outputs.  But what if your inputs are wrong?

An illustration will help.  I remember the first time, sometime in the ’90s, I used Amazon’s book recommendation engine.  I had fun rating all the books I’d read.  Book lovers enjoy remembering the vast history of books we’ve had affairs with.  This type of engine works relatively well for books: it looks at co-occurrence of books read (and ratings) across users.  Music and movies present a more difficult challenge.
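To make the co-occurrence idea concrete, here’s a toy sketch (with made-up data, not Amazon’s actual model): count how often pairs of books show up together across users’ libraries, then recommend the books that co-occur most often with a given title.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical toy data: user -> set of books they rated highly.
libraries = {
    "ann":  {"dune", "neuromancer", "hyperion"},
    "ben":  {"dune", "hyperion"},
    "cara": {"neuromancer", "snow crash"},
}

# Count how often each pair of books co-occurs across users.
cooccur = defaultdict(int)
for books in libraries.values():
    for a, b in combinations(sorted(books), 2):
        cooccur[(a, b)] += 1

def recommend(book):
    """Books most often read alongside `book`, best first."""
    scores = defaultdict(int)
    for (a, b), n in cooccur.items():
        if a == book:
            scores[b] += n
        elif b == book:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("dune"))  # "hyperion" ranks first (co-occurs twice)
```

Note what’s missing from this picture: nothing about who you were with or what mood you were in ever enters the counts — which is exactly the gap the rest of this post is about.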

My premise is that movie watching (more so than music listening) is often a social experience.  The movies you choose to see, order from Netflix, or buy are all products of who you plan to watch them with.  Further, there is an element of mood.  How many times has someone asked, like choosing where to eat out, “What are you in the mood for?”

Imagine a Netflix rating system that asked a few questions in addition to the stars:

  1. Who did you watch the movie with?  What would they rate the movie?
  2. What day of the week did you watch?  What time?
  3. Was this movie recommended to you?

These might not be the right three questions, but the way I’d approach the Netflix Prize challenge (though of course this strategy is not within the rules) would be to cycle through a few qualifying questions asked in an easy yet entertaining way.  Imagine the fun byproducts of some of these questions: the best date movie for a Saturday night at home, the most popular hump-day movie, and so on.
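Those byproducts fall out almost for free once each rating carries its context.  A minimal sketch, assuming a hypothetical record shape of (movie, stars, watched_with, day_of_week), that answers queries like “best date movie for a Saturday night”:

```python
from collections import defaultdict

# Hypothetical ratings enriched with the context questions above.
# Each record: (movie, stars, watched_with, day_of_week)
ratings = [
    ("casablanca",   5, "partner", "Saturday"),
    ("casablanca",   4, "partner", "Saturday"),
    ("die hard",     5, "alone",   "Wednesday"),
    ("the notebook", 5, "partner", "Saturday"),
    ("the notebook", 2, "alone",   "Tuesday"),
]

def best_in_context(ratings, watched_with, day):
    """Highest average stars per movie, restricted to one viewing context."""
    totals = defaultdict(lambda: [0, 0])  # movie -> [star sum, count]
    for movie, stars, who, d in ratings:
        if who == watched_with and d == day:
            totals[movie][0] += stars
            totals[movie][1] += 1
    return max(totals, key=lambda m: totals[m][0] / totals[m][1])

# The "best date movie for a Saturday night at home":
print(best_in_context(ratings, "partner", "Saturday"))
```

Notice that the same movie can score differently in different contexts — "the notebook" gets 5 stars on a partner Saturday and 2 stars alone on a Tuesday — which is the whole point: the star alone is an ambiguous signal.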

Just the beginning of a thought here.  I just think all 5-star movies (even my own) don’t always deserve 5 stars (though some might; I’ll use another post to talk about the likelihood that 1- and 5-star ratings are more mood- and context-dependent than 2- to 4-star ratings).  And if you could tease this out, you’d get vast improvements over the “optimization” approach.  A lesson that could be applied outside the world of recommendations.