This is a sort of worked example, to see what falls out of the kind of philosophy discussed in the previous post.
I mentioned the Lytro camera a couple posts back in this thread. But let's dig deeper.
It's Grandma's birthday, and we bring out the Lytro and shoot a bunch of stuff, drop it into some cloud service someplace. Our magical "make sense of my photos" software scoops up these pictures, and does a full-depth rendering of them to classify the objects. There's Grandma, her two granddaughters at her shoulders, blowing out the candles on her cake.
The software notes three people, and who they are. Maybe not names (privacy, natch) but that the two little girls are those two little girls, and the old lady is that old lady. The software notes the birthday cake idiom. This all goes in the metadata we're keeping around.
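Just to make this concrete, here's a sketch of the sort of metadata record such software might keep. Every field name and value below is invented for illustration; real software would presumably key people by identity clusters from face matching rather than by name, exactly for the privacy reason above.

```python
# Hypothetical metadata for the birthday photo. All field names and
# cluster IDs are made up; the point is the shape, not the specifics.
metadata = {
    "people": [
        {"cluster": "adult-f-1", "note": "the old lady"},
        {"cluster": "child-f-1", "note": "little girl"},
        {"cluster": "child-f-2", "note": "little girl"},
    ],
    "idioms": ["birthday cake"],  # recognized scene conventions
}

print(len(metadata["people"]))  # three people noted
```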
Now we do a search: Grandma's Birthday. We pull up the photo, obviously, and render it with Grandma and the cake in focus, because it's a birthday picture.
Now we do a search: Pictures of Grandma. We pull up the photo and render it with Grandma sharp and the cake soft, because it's a picture of Grandma.
Now we do a search: Pictures of Grandma with Susie and Ellie. We pull up the photo and render it with Grandma and the two girls sharp, and the cake soft, because it's a group picture of the three of them.
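The three searches above can be sketched as a tiny query-to-focus mapping. This is purely illustrative: the element names, depths, and idiom table are invented, and real software would match identity clusters and parse language properly rather than splitting strings.

```python
# Hypothetical sketch of query-driven rendering. Names like "grandma"
# stand in for whatever identity clusters the classifier produced;
# depths are made-up metres from the light-field depth map.
PHOTO = {
    "elements": {"grandma": 1.2, "susie": 1.0, "ellie": 1.0, "cake": 1.5},
    # an idiom groups elements that belong together in a scene convention
    "idioms": {"birthday": ["grandma", "cake"]},
}

def render_plan(query, photo):
    """Return which elements to render sharp for a given query."""
    terms = {t.replace("'s", "") for t in query.lower().split()}
    sharp = set()
    # idiom match: "birthday" pulls in its whole idiom group
    for idiom, members in photo["idioms"].items():
        if idiom in terms:
            sharp.update(members)
    # direct subject match: any named element in the query stays sharp
    sharp.update(e for e in photo["elements"] if e in terms)
    return sorted(sharp)

print(render_plan("Grandma's Birthday", PHOTO))
print(render_plan("Pictures of Grandma", PHOTO))
print(render_plan("Pictures of Grandma with Susie and Ellie", PHOTO))
```

Everything not in the returned set gets rendered soft, and the same captured light field yields three different photographs.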
Suddenly the Lytro begins to make sense. It's not a stupid gimmicky toy for nerds; it's a tool we can use to record things that get contextualized and rendered later.