App Myopia

Moving beyond the desktop towards just-in-time interaction
So often, what passes for vision is nothing more than a tiny extension of what is already known and safe. Of course, this is only natural, as people tend to think within what is most comfortable. I call this “Default Thinking” and discussed it in my first post (the underlying idea goes back at least to Thomas Kuhn in 1962).

Default Thinking comes up frequently when discussing technology, but a particularly virulent form of it has taken hold in mobile: App Myopia. This is a paradigm that sees every possible mobile opportunity only as an exercise in creating an app. This is a rather useful myopia, to be sure, as some people are making lots of money selling apps, but it is beginning to feel like a local maximum and a paradigm that can only get us so far. As Thomas Kuhn might say, we are in need of a revolution.

This approach blinds us to other ways of looking at the deeper potential of mobile. As the number of apps we use grows, there will clearly come a tipping point where we simply drown in the task of finding, downloading, and managing applications. It is just not practical to have an app for every store we visit, every company we deal with, every entertainment outlet we patronize, and every product we own. The sheer weight of user responsibility will create a negative feedback loop, and people will simply refuse to be bothered. In some ways, it is similar to the shift from Yahoo’s original hierarchical list of web sites to Google’s search model: once the number of items gets high enough, a list of anything becomes overwhelming.

So far, I’ve just described the pain that App Myopia will eventually, inexorably, bring. In my previous post, I talked about an “opportunistic cluster” of hundreds of smart devices. App Myopia makes these devices nearly impossible to interact with, as each one would require its own unique app to discover and use it. This new world of smart devices represents a huge opportunity, but it requires new interaction patterns around information, functionality, and even distributed intelligence that can’t be addressed with the old-school application model alone.

Example 1: 10,000 bus stop apps
Right now, if I want to see when a bus will arrive nearby, I need to get the city bus app (native or web based, it doesn’t matter). Then I face the fairly complex task of finding not only the bus line I want to take (if I even know the correct one), but also the particular stop I’m at. Even the best-designed apps require significant effort and understanding to do this.

In my opportunistic cluster model, the bus stop I am standing in front of *is* the app. I open my phone and I’m looking at what *this* bus stop has to offer. It’s the purest form of progressive disclosure: it shows me the immediate, obvious information I need, with a small bit of functionality near the bottom for the full ‘city bus app’ experience. This is the complete opposite of today’s app experience.
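As a rough sketch of the idea (every name and value here is invented for illustration, not a real service), the stop’s “page” would lead with the immediate answer and tuck the heavyweight app behind a single link:

```typescript
// Hypothetical sketch of what one bus stop might publish: the obvious
// information first, with progressive disclosure to the full app.

interface Arrival {
  line: string;        // e.g., "38 Geary"
  minutesAway: number;
}

interface BusStopPage {
  stopName: string;
  arrivals: Arrival[]; // the immediate, obvious information
  fullAppUrl: string;  // the deep end of the pool, one tap away
}

const stop: BusStopPage = {
  stopName: "Market & 5th",
  arrivals: [
    { line: "38 Geary", minutesAway: 3 },
    { line: "5 Fulton", minutesAway: 11 },
  ],
  fullAppUrl: "https://example.city/transit", // invented URL
};
```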

Example 2: My personal bottle of ketchup
Today, through bar code scanning, photo recognition, and soon, RFID tags, my phone can recognize the brand of ketchup I’m holding, allowing me to do a web search on it. This is cool, but severely limited, as every single bottle registers exactly the same way. When I look up this bottle, I’m seeing the platonic ideal, not *my* bottle. A clever mix of history stored on my phone and cheap sensors in the bottle could create an experience for just this particular bottle: how many times it’s been used, when it’s been used, and even by whom, creating a data cloud that is unique to this particular bottle of ketchup. You may recognize this as a spime, a term coined by Bruce Sterling.
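A minimal sketch of what such a per-bottle data cloud might look like, with every name and value invented for illustration:

```typescript
// Hypothetical "data cloud" for one physical object (a spime).
// Nothing here is a real API; all identifiers are invented.

interface UsageEvent {
  timestamp: Date;  // when the bottle was used
  userId?: string;  // who used it, if that could be sensed
}

interface Spime {
  objectId: string;      // unique ID from this bottle's own tag or sensors
  productCode: string;   // the shared barcode: the "platonic ideal" of the product
  history: UsageEvent[]; // what makes *this* bottle unlike every other one
}

// The product lookup is identical for every bottle; the history is not.
const myKetchup: Spime = {
  objectId: "bottle-7f3a-0042",
  productCode: "000000000000", // invented stand-in for a real barcode
  history: [
    { timestamp: new Date("2013-05-01T08:12:00"), userId: "me" },
    { timestamp: new Date("2013-05-03T18:40:00") },
  ],
};
```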

Example 3: Trouble beeper
My last example is a simple device, a tiny disc that I attach to a wall where I put my car keys. I’ve configured it with my morning commute. It only has one job: to know when I’m passing by. When I leave the house for work, it notices my passing and agreeably glows green, indicating that my commute appears unencumbered. This will quickly become a routine part of my day and I’ll hardly even register this green glow. The disc will, however, glow red and beep if there are any potential problems, such as a traffic jam or a delayed train. It only grabs my attention when there is a problem.

Most of us are creatures of habit with very repetitive parts of our day, and we really don’t want surprises. Instead of building an app that I have to consult to determine if my commute is ok, I have a device that only talks to me when it is not. I’ll need to pull out my phone and figure something out when this occurs, but that will rarely happen. This completely turns the interaction model of an app on its head.
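A minimal sketch of that inverted model, where checkCommute() is a stand-in for whatever traffic or transit feed the disc was configured with (nothing here is a real API):

```typescript
// The disc's one job: stay quiet unless something is wrong.

type CommuteStatus = "clear" | "trouble";

async function checkCommute(): Promise<CommuteStatus> {
  // Placeholder: a real device would query a traffic or transit service
  // for the single route it was configured with.
  return "clear";
}

// Triggered when the disc senses its owner passing by.
async function onOwnerPassingBy(): Promise<void> {
  if ((await checkCommute()) === "clear") {
    glow("green"); // routine; barely registers
  } else {
    glow("red");   // the only time the device demands attention
    beep();
  }
}

function glow(color: "green" | "red"): void { console.log(`glow ${color}`); }
function beep(): void { console.log("beep!"); }

onOwnerPassingBy();
```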

Just-in-time interaction
Tying interaction to a specific device turns that device, in effect, into my personal gateway to precise, targeted functionality. There is something very powerful about the relationship between a specific object and myself: it contextually unlocks information and functionality that is normally too complex to reach through a classic application. Apps can still exist, of course, but they become the old-school heavy lifters I go to when I really want to roll up my sleeves and work. In a sea of smart devices, I will pass by hundreds of devices a day, ignoring the vast majority. But when I do choose to interact with one, I want just-in-time interaction: exactly what I need from that object, at that time.

This vision is not utopian. Primitive versions of these examples are being built today. What is holding us back right now is the mobile phone’s inability to act as a proper navigator through this opportunistic cluster of devices. We’re forced to use barcode scanners, cameras, or soon NFC chips to cobble together clunky approximations of these visions. Here are the types of advances in mobile phones that would unlock this potential:

Advance 1: A Discovery service
Phones today have lots of sensors built into them but no service to tie them all together. There needs to be a service between the phone and the cloud that offers a ranked list of devices and information sources nearby. This would include geo-tagged objects (such as every bus stop in a city, though there’s no reason this couldn’t extend to every tree in a park), nearby RFID tags, low-energy Bluetooth devices, and narrow-function Wi-Fi devices such as bathroom scales. This will be an ever-changing list of technologies, which is why it’s so critical that a service exists to act as a buffer. As new technologies appear, our phones just see more stuff.
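As a rough sketch (every name here is invented for illustration), the service’s job is to hand the phone a single ranked list, regardless of which radio or database found each entry:

```typescript
// Hypothetical shape of what a discovery service might hand the phone.
// All names are invented; nothing here is a real API.

type DiscoveryChannel = "geo" | "rfid" | "bluetooth-le" | "wifi";

interface NearbyDevice {
  name: string;              // e.g., "Bus stop #1742" or "Bathroom scale"
  channel: DiscoveryChannel; // how it was found; the user shouldn't have to care
  rank: number;              // blended score (see the ranking sketch below)
  interactionUrl: string;    // where the device's universal interface lives
}

// The phone asks one question and gets one answer, whatever the underlying tech:
declare function discoverNearby(): Promise<NearbyDevice[]>;
```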

This vision isn’t nearly as difficult to achieve as you might think. First, there doesn’t need to be a monolithic phone or web service run by a single company; several can exist at once and compete. Second, any company that wanted to put its geo-tagged data into a service would just need to publish a feed on the web once, in a standardized format. Any cloud service that wanted to play could then crawl the data and offer it up. The RFID, Bluetooth, and Wi-Fi ranking would most likely reside on the phone itself and could be cleverly augmented by cloud services. Signal strength should also play into the ranking, so the device near me would be ranked higher than the identical one 40 feet away (a common problem in Bluetooth pairing today).
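To make the signal-strength point concrete, here is one possible (entirely invented) way to blend signal, distance, and a cloud-side boost into a single rank; a real service would tune these weights:

```typescript
// Hypothetical ranking sketch: closer (stronger-signal) devices float to the
// top, with a small boost from the cloud (e.g., things used before).

interface RankingInput {
  signalDbm?: number;      // radio devices: roughly -40 (near) to -90 (far)
  distanceMeters?: number; // geo-tagged objects from a crawled feed
  cloudBoost: number;      // 0..1, from history or popularity
}

function rankScore(d: RankingInput): number {
  // Normalize signal strength to 0..1 (-40 dBm or better counts as "right here").
  const signal = d.signalDbm !== undefined
    ? Math.max(0, Math.min(1, (d.signalDbm + 90) / 50))
    : 0;
  // Normalize distance to 0..1 over a 100 m horizon.
  const proximity = d.distanceMeters !== undefined
    ? Math.max(0, 1 - d.distanceMeters / 100)
    : 0;
  return 0.6 * Math.max(signal, proximity) + 0.4 * d.cloudBoost;
}

// The device next to me outranks the identical one 40 feet away:
console.log(rankScore({ signalDbm: -45, cloudBoost: 0 }) >
            rankScore({ signalDbm: -80, cloudBoost: 0 })); // true
```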

Keep in mind that only devices that want to be found, such as bus stops and posters, would be ranked. This service would not be used for personal devices such as mobile phones.

Advance 2: A Common Interaction language
Each of these devices would need to have its local information and functionality displayed in a universal form. HTML5 is the obvious candidate here; a native app can’t work, as one would have to be written for every platform on the planet. As these information services are, at least initially, a bit simpler than classic applications, their requirements should also be less demanding. A tremendous amount of functionality could be unlocked using just what’s available in HTML5 today, let alone what is coming over the next few years.
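Because every device would publish plain HTML5, the phone side needs no per-device code. A minimal sketch, assuming the invented NearbyDevice shape from the discovery sketch above and a generic embedded web view:

```typescript
// One HTML5 renderer handles bus stops, ketchup bottles, and beepers alike.
// renderInWebView() stands in for whatever embedded browser the OS provides;
// it is not a real API.

interface NearbyDevice {
  name: string;
  interactionUrl: string; // pared down from the discovery sketch above
}

async function openDevice(device: NearbyDevice): Promise<void> {
  const response = await fetch(device.interactionUrl);
  const html = await response.text();
  renderInWebView(html);
}

declare function renderInWebView(html: string): void;
```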

A huge side benefit to using HTML5 is how easy it would be to embed this functionality in existing desktop web services. Any web mapping service could show these geo-tagged services (e.g., both fixed bus stops and moving buses), so you could watch the approaching bus from your office computer just as easily as from your phone.

Advance 3: OS level access to these nearby devices
If I need to launch an app just to browse and interact with what’s nearby, I’m not much better off. There needs to be a type of radar built into the phone’s home screen experience. Speed and simplicity are key: nearby devices should be surfaced so I can see and choose them with as little trouble as possible.

Of course, there will likely be a range of newer, more focused devices that could offer a similar discovery service (watches and digital-projection eyeglasses come to mind). Mobile phones are just the tools we’re comfortable with today and the best place to experiment.

Conclusion
The history of mobile phones has been a long, slow process of copying what works on the desktop and then sheepishly realizing that it just doesn’t quite work right. App Myopia is one of the final bad habits we need to abandon. Just-in-time interaction is a new opportunity, a style of interaction that rethinks how we engage with the devices around us. The ideas in this post are a tiny start in this direction. My purpose isn’t to predict a full list of future products so much as to tear down old models so we can start to build these breakthrough products. We can’t see the future if we can’t let go of the past.
