The level of hype around the “Internet of Things” (or IoT) is getting a bit out of control. It may be the technology that crashes into Gartner’s trough of disillusionment faster than any other. But that doesn’t mean we can’t figure things out. Quite the contrary: as the trade press collectively loses its mind over the IoT, I’m spurred on to delve deeper. In my mind, the biggest barrier to making the IoT work is us. We are being naive: our overly simplistic understanding of how we will control the IoT is likely to fail and generate a huge consumer backlash.
But let’s back up just a bit. The Internet of Things is a vast, sprawling concept. Most people refer only to the consumer side of things: smart devices for your home and office. This is more precisely called Home Automation, but to most folks that sounds just a bit boring. Nevertheless, when some writer trots out that tired old chestnut, “My alarm clock turns on my coffee machine!”, that is home automation.
But of course, it’s much more than just coffee machines. Door locks are turning on music, moisture sensors are turning on yard sprinklers, and motion sensors are turning on lights. The entire house will flower into responsive activities, making our lives easier, more secure and even more fun.
However, I am deeply concerned that these Home Automation scenarios are too simplistic. As a UX designer, I know how quixotic and downright goofy humans can be. The simple, rule-based “if this then that” scenarios being trotted out are doomed to fail. Well, maybe fail is too strong a word. They won’t fail as in a “face plant into burning lava” fail. In fact, I’ll admit they might even work 90% of the time. To many people that may seem fine, but just try using a voice recognition system with a 10% failure rate. It’s the small mistakes that will drive you crazy.
I’m reminded of one of the key lessons of the artificial intelligence (or AI) community, known as Moravec’s Paradox:
It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.
Moravec’s paradox created two types of AI problems: HardEasy and EasyHard.
HardEasy problems were assumed to be very hard to accomplish, such as playing chess. The assumption was that you’d have to replicate human cunning and experience in order to play chess well. It turns out this was completely wrong as a simple brute force approach was able to do quite well. This was a hard problem that turned out to be (relatively) easy.
The EasyHard problem is exactly the opposite: a problem that everyone expects to be simple but turns out to be quite hard indeed. The classic example here is language translation. The engineers at the time expected the hardest problem to be finding a big enough dictionary. All you had to do was look up the words and plop them down in perfect order. Obviously, that problem is something we’re still working on today. An EasyHard problem is one which seems simple but never….quite….works…..the….way….you…..want.
I claim that home automation is an EasyHard problem. The engineer in all of us assumes it is going to be simple: walk into a room, the lights turn on. What’s the big deal? Now, I’ll admit this rule does indeed work most of the time, but here is a series of exceptions where it breaks down:
Problem: I walk into the room while my wife is sleeping; turning on the lights wakes her up.
Solution: More sensors: detect someone on the bed.
Problem: I walk into the room and my dog is sleeping on the bed, so my room lights don’t turn on.
Solution: Better sensors: detect humans vs. pets.
Problem: I walk into the room, and my wife is watching TV on the bed. She wants me to hand her a book, but as the room is dark I can’t see it.
Solution: Read my mind.
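The pattern behind these exceptions can be made concrete. Here is a minimal Python sketch (the sensor names are invented for illustration) of the naive motion-triggers-lights rule, next to the patched rule the first two exceptions force you to write. Notice that the third scenario gets no branch at all: it needs intent, not another sensor.

```python
def naive_rule(sensors):
    # The demo rule: walk in, lights go on.
    return "full" if sensors["motion"] else "off"

def patched_rule(sensors):
    # Every exception bolts on another branch and another sensor.
    if not sensors["motion"]:
        return "off"
    if sensors.get("bed_occupied") and sensors.get("occupant_is_human"):
        return "off"   # don't wake my wife
    return "full"      # the book scenario still has no branch at all
```

Each fix works, but only by widening the rule's view of the room; the logic never gets simpler, only longer.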
Don’t misunderstand my intentions here. I’m no Luddite! I strongly believe that we will eventually get to home automation. My point is that because it is an EasyHard problem, we don’t treat home automation with the respect it deserves. Just because we can automate our homes doesn’t mean we’ll automate them correctly. The real work in home automation isn’t the IoT connectivity; it’s the control system that will make everything do the right thing at the right time.
Let’s take a look at my three scenarios above and discuss how they will impact our eventual solutions to home automation.
1. MORE SENSORS
Almost every scenario today is built on a very fault-intolerant structure. A single sensor controls the lights. A single door knob alerts the house that I’m coming in. The obvious error condition is that if that sensor fails, the entire action breaks down. But the second, more likely, case is that it infers the wrong conclusion. A single motion sensor in my room assumes that I am the only thing that matters; my sleeping wife is a comfort casualty. I can guarantee that as smart homes roll out, saying ‘sorry dear, that shouldn’t have happened’ is going to wear very thin.
The solution, of course, is to have more sensors that can reason about how many people are in a room. This isn’t exactly hard, but it will take a lot more work, as you need to build up a model of the house and populate it with proxies, creating, in effect, a simulation of your home. This will surely come, but it will take a little time to become robust and tolerant of our oh-so-human capability to act in unexpected ways.
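One way to picture that simulation is as a small world model that sensors update and rules interrogate. This is only a sketch under my own assumptions (the class names and the single query method are invented):

```python
from dataclasses import dataclass, field

@dataclass
class Occupant:
    kind: str            # "human" or "pet", as classified by the sensors
    asleep: bool = False

@dataclass
class Room:
    occupants: list = field(default_factory=list)

    def safe_to_brighten(self):
        # The lighting rule no longer reads a raw motion pin;
        # it asks the model whether any human in the room is asleep.
        return not any(o.kind == "human" and o.asleep for o in self.occupants)
```

The point isn’t these few lines of code; it’s that every sensor now feeds a shared model of the home, and the lighting rule queries the model instead of a single wire on a motion detector.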
2. BETTER SENSORS
This too should be coming soon. There are already sensors that can tell the difference between humans and pets; they just aren’t widely used. They will feed into the software simulation of my house, which knows where people, pets and things are throughout the space. This is starting to sound a bit like an AI system, modeling my life and making decisions based on what it thinks is needed at the time. Again, not exactly impossible, but tricky stuff that will, over time, get better and better.
3. READ MY MIND
But at some point we reach a limit. When do you turn on the lights so I can find the book, and when do I just muddle through because I don’t want to bother my wife? This is where the software has to have the ‘humility’ to stop and just ask. I discussed this a bit in my UX grid of IoT post: background swarms of smart devices will do as much of the ‘easy stuff’ as they can, but will eventually need me to signal intent so they can cascade a complex set of actions that fulfill my goal.
Take the book example again. I walk into the room, and the AI detects my wife on the bed. It could even detect that the TV is on and know she is not sleeping. But because it’s not clearly reasonable to turn on the lights at full brightness, it just turns on the low baseboard lighting so I can navigate. So far so good: the automatic system is being helpful but conservative. When I walk up to my wife and she asks for the book, I just have to say “lights” and the system turns the lights on, which could be a complex set of commands turning on 5 different lights at different intensities.
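In code, that two-stage behavior might look like the hypothetical sketch below: a conservative automatic default on entry, plus a scene table that a one-word intent unlocks. The device names and brightness levels are made up.

```python
# Hypothetical scene table: one signaled intent fans out into many commands.
SCENES = {
    "navigate": [("baseboard", 10)],
    "lights":   [("baseboard", 10), ("ceiling", 70), ("bedside_left", 30),
                 ("bedside_right", 0), ("closet", 40)],
}

def on_entry(wife_on_bed):
    # Helpful but conservative: without clear intent, stay dim.
    return SCENES["navigate"] if wife_on_bed else SCENES["lights"]

def on_intent(word):
    # A single word ("lights") cascades into the full five-light scene.
    return SCENES.get(word)
```

The automatic half handles the easy stuff; the explicit half waits at the ‘intent cliff’ for a human signal before doing anything bold.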
Or it may not be voice commands; they too have issues. A classic button or even a gesture would also work. These ‘intent cliffs’ are needed because human interaction is too subtle to be fully encapsulated by an AI. Humans can’t always do it; what makes us think computers can?
My point here is to emphatically support the idea of home automation. However, the UX designer in me is all too painfully aware that humans are messy, illogical beasts, and simplistic if/then rules are going to create a backlash against this technology. It isn’t until we take the coordinated control of these IoT devices seriously that we’ll start building more nuanced and error-tolerant systems. They will certainly be simplistic at first, but at least we’ll be on the right path. We must create systems that expect us to be human, not punish us when we are.