Want a hardcore problem to work on? to fund? to stay-up-nights-only-to-see-Google-do-it? or maybe get-bought-by-Google?
We need a search engine that searches based on visual and contextual clues about the appearance of objects -- especially of signs and symbols -- rather than just based on words that (perhaps) describe them.
For example, if I see a bumper sticker around town, with a green star on a blue field, I might want to see if this represents some well known organization or cause. I could search for “green star” and “bumper sticker” or something similar. But I probably won’t find anything.
Moreover, even when the elements of a design do have names (“star,” “stripe,” “field”), properly describing a complex design in a single search gets difficult. Imagine you saw the new Pepsi logo:
You don’t know what it is; for the sake of the argument, imagine you don’t have any cultural Pepsi associations to work from either. What do you type into the search box? Circle? Red? Stripe?
How do we solve this problem?
I envision a search that consists of several stages. At the first stage, you can add descriptive words, or you can import a similar image, or even draw/sketch some cues right on the page. That may sound unlikely for less design-oriented folks, but many unknown visual designs consist largely of straight lines, simple geometry, etc. So it’s not unreasonable that I could sketch in a simple design, or even take a swing at the Pepsi logo above, with just a circle and three straight lines using an AJAX or Flash inline drawing tool.
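To make the first stage concrete, here is a toy sketch of how a drawn design might be matched against a catalog: the sketch is reduced to a bag of primitives (shape counts and dominant colors), and candidates are ranked by overlap. Everything here — the catalog entries, the feature scheme, the scoring — is invented for illustration; a real system would use proper shape descriptors, not hand-tagged primitives.

```python
# Hypothetical catalog: each known design is tagged with shape counts
# and a set of dominant colors. All entries are made up for this sketch.
CATALOG = {
    "pepsi-like logo": {"circle": 1, "line": 3, "colors": {"red", "white", "blue"}},
    "green-star sticker": {"star": 1, "colors": {"green", "blue"}},
    "plain roundel": {"circle": 2, "colors": {"red", "white"}},
}

def score(query, entry):
    """Overlap score: shared shape counts plus shared colors."""
    s = 0
    for feature, value in query.items():
        if feature == "colors":
            s += len(value & entry.get("colors", set()))
        elif feature in entry:
            s += min(value, entry[feature])
    return s

def search(query, catalog=CATALOG):
    """Rank catalog entries by descending overlap with the sketch."""
    return sorted(catalog, key=lambda name: score(query, catalog[name]), reverse=True)

# A circle and three lines in red, white, and blue -- the Pepsi-logo sketch:
query = {"circle": 1, "line": 3, "colors": {"red", "white", "blue"}}
results = search(query)
```

The point isn’t the scoring formula; it’s that a few crude primitives, drawn inline, already narrow the candidate space enough to be useful.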
From these inputs, the search engine produces a set of possible results – but it also generates a set of context-narrowing options that I can use.
It presents options to choose where I saw this design: web, billboard, TV, clothing, museum, public building (e.g., a capitol or courthouse), manhole cover, etc.
Perhaps knowing the material is useful: was this printed? embroidered? leather? denim? engraved metal?
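The context-narrowing stage above amounts to faceted filtering over the candidate set. A minimal sketch, with invented candidates and facet names, might look like this:

```python
# Hypothetical candidate results, each tagged with where the design is
# typically seen and what material it appears on. All data is illustrative.
CANDIDATES = [
    {"name": "state seal", "seen": {"public building"}, "material": "engraved metal"},
    {"name": "cause ribbon", "seen": {"bumper sticker", "clothing"}, "material": "printed"},
    {"name": "team crest", "seen": {"clothing"}, "material": "embroidered"},
]

def narrow(candidates, seen=None, material=None):
    """Keep only candidates consistent with the chosen context facets."""
    kept = []
    for c in candidates:
        if seen is not None and seen not in c["seen"]:
            continue
        if material is not None and material != c["material"]:
            continue
        kept.append(c)
    return kept

# "I saw it on clothing, and it was embroidered":
matches = narrow(CANDIDATES, seen="clothing", material="embroidered")
```

Each answered facet question cuts the result set further, which is exactly the role the “where did you see it?” and “what was it made of?” prompts play.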
This is a challenging but eminently creatable piece of software.
I’ve actually had a lot of instances where I would have liked to use something like this – but, if it’s never happened to you, consider: when computer vision progresses beyond working with the local environment, objects, and known patterns (such as people), the machine will need to take the next step. It will want to dereference symbols to find data and meanings in order to solve problems. And, in order to do this, it will need to benefit from this kind of visual-semiotic search heuristic, which starts with a visual-context search like the one we are discussing.