My conversion from Mac to Windows last fall had an unexpected bonus: by shifting from Mac to Windows conventions, I was forced to rethink many aspects of the desktop experience. As a result, I’m now inspired more by desktop UX concepts than mobile ones. I wouldn’t be surprised if some think this quaint, or worse, accuse me of being an old Luddite unwilling to embrace the new order. It’s so easy to say “Desktop is dead,” ignore it, and move on.
However, the world really isn’t so convenient. As an industry, we aren’t great at thinking about the future. We jump to simple, satisfying binary opinions that quickly turn out to be wrong, yet we never seem to notice, cheering for black-and-white answers in a technicolor world. Here are three reasons why “Desktop is dead” is so misleading.
- Historical pattern
We’ve seen this before: TV killed radio, Internet killed TV, Apps killed the Internet, and Facebook killed Apps. Yet all of these dead things are still very much alive today. We confuse dominance with existence. We are so in love with the winner that we assume second place is meaningless.
We are just a small technological or business model flip from being contradicted. Radio programming was a dead end until the advent of podcasting reinvigorated it. We extrapolate our current linear view too aggressively, oblivious to how quickly the world can change.
- Incorrect Heroes
We follow our heroes, assuming they are infallible. This is most clearly seen with Apple. They see iOS as the dominant model going forward. Their near abandonment of desktop and laptop Macs sends a blindingly clear message: they don’t care about the desktop. So why should we?
Let’s be clear that mobile is ascendant and desktop usage is shrinking. I don’t want anyone to claim I don’t appreciate that. But there’s more to the story. I’ve already written about the under-appreciation of the Desktop UX, explaining that there are corners of the Desktop UX approach that are still superior to mobile. My concern with mobile is that it has designed itself into a corner. I’m not trying to say mobile is bad! However, its very simplicity has costs. As the technology world continues to expand, we’ll need something more flexible that can grow to meet these demands.
It’s far too childish to say mobile or desktop will win. Both concepts have baggage. My point is that we’ll need something that goes beyond either. I’m just taking inspiration from what the desktop can do.
My current inspiration is coming from the somewhat chaotic approach Microsoft Surface is taking. It’s trying to have it all: a keyboard/pen/mouse device that works on tablets, laptops, and desktop systems. To be honest, it’s a bit of a hot mess right now, and I love it. They are rebuilding the airplane in mid-flight and the inconsistencies are rampant. Of course, Surface could solve many of these issues with just a new OS release. I’d even go so far as to say that if Windows 10 simply shipped a revamped touch-friendly File Explorer with a few integrated graphics tools, the designer world would likely swoon. (Note to Microsoft: Paint3D isn’t one of those tools.)
But even given its inconsistencies, there is something energetic about the Surface approach. Using touch gestures and a keyboard while web browsing feels amazing and weird at the same time. It raises the question: can these approaches work together? It’s inspiring because instead of the simplistic ‘one thing wins’ approach, there is this new hope that a fusion between them could be greater than the sum.
In my previous post, I called out 3 ‘boring’ aspects of the Desktop UX that are deeply missed in Mobile UX: windows, mouse input, and files. I’ll take each of these admittedly dusty old concepts and show how they actually inspire me to think differently about the new coming technologies.
We’ve already got 100″ TVs, double-wide curved panoramic computer monitors, and wall-sized touch-sensitive displays. As these fall in price, their usage will only increase, and we’re going to start exploring in earnest how these can be of use to both individuals and cooperative teams.
As an experiment, I hooked up a 50″ 4K HD TV to my desktop. The sheer scope of the space in front of me was intoxicating. What people often forget is that windows were originally designed to make the most of limited screen space; their primary purpose was to overlap. (This likely came from the same school of thought that considered folders within folders a good physical metaphor as well.) With all of my new extra screen space, the need to overlap was gone, allowing all of my windows to be in full view.
But this created new problems. Just arranging my windows so they didn’t overlap was cumbersome. In addition, that much space requires extra focus and movement. Cmd-tab switching isn’t enough; your head has to move too much. Not all pixels are created equal: the corners of the display aren’t as useful as the center. There is some exciting work ahead, likely using a mix of pen and touch gestures to manage windows in a large space, zooming one into central focus yet keeping the others in relative position so you can still interact with their contents.
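That zoom-to-focus idea can be sketched in a few lines. This is purely illustrative: `zoom_to_focus` and all its parameters are invented names, and the geometry (center the focused window, shrink the others in place) is one plausible reading of the behavior described above, not a real window-manager API.

```python
# Hypothetical sketch: zoom one window into central focus on a large
# display while the others shrink in place, preserving the overall
# spatial arrangement. Windows are (x, y, width, height) rectangles.

def zoom_to_focus(windows, focus_id, screen=(3840, 2160),
                  focus_scale=0.7, other_scale=0.4):
    sw, sh = screen
    out = {}
    for wid, (x, y, w, h) in windows.items():
        if wid == focus_id:
            # Center the focused window at focus_scale of the screen.
            fw, fh = sw * focus_scale, sh * focus_scale
            out[wid] = ((sw - fw) / 2, (sh - fh) / 2, fw, fh)
        else:
            # Shrink unfocused windows around their own centers so
            # their relative positions survive the zoom.
            cx, cy = x + w / 2, y + h / 2
            nw, nh = w * other_scale, h * other_scale
            out[wid] = (cx - nw / 2, cy - nh / 2, nw, nh)
    return out
```

The point of the sketch is the constraint, not the numbers: the unfocused windows stay where your spatial memory expects them, which is what makes interacting with their contents still possible.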
If you’ve made it this far, I can see you shaking your head: “Who cares!?” I’m talking about the inspiration that comes from crazy experiments. If we are headed for a world with large displays, my little experiment exposed some of the issues we may need to address.
And even if you don’t believe we’ll all have huge displays, VR will likely explode this issue further. With VR it’s possible to have hundreds of objects surrounding you in 3D space. Of course using 2D windows in a VR world is the very definition of anachronistic. However, windows feel like a gateway technology allowing our existing UX models to transition into 3D. We likely won’t be using windows in VR space forever, but they’ll likely be very handy in the short term. There is something very inspiring about creating a new UX model to deal with large 2D and 3D spaces.
Mouse gestures are far more precise than finger tapping: click, shift-click, click-drag, double-click, and others provide a finer degree of control. Again, I don’t want to make a direct comparison: just using your finger on mobile displays has been a huge step forward over the original mouse. My point is that if you pay attention, especially while editing text, you’ll notice how complex ‘just a finger’ really is on mobile. Even simple actions like spelling correction and copy/paste are so crammed into the tap gesture that you’re usually given one when you want the other.
My desktop inspiration isn’t to look backward, asking for a return to shift-clicking, but to step back a bit and realize how much expressive power lies in the mouse/button/keyboard approach. Sure, it came at a cost and could easily be abused, but how might we take just a fraction of that precision into a pen-based world?
Apple is experimenting with force touch and Microsoft Surface is exploring a button on the pen. Both are interesting. I’d go even further and suggest that machine learning and voice recognition could augment these interactions. Tapping could return to simply inserting the cursor, tapping and holding could select the word, and machine learning could select more based on context. Dragging would extend the selection intelligently, offering alternatives, and vocal commands could act on the choices offered.
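The gesture layer of that idea is easy to make concrete. The sketch below is a toy gesture dispatcher, assuming three invented gesture names (`tap`, `hold`, `drag`); the machine-learning and voice pieces are deliberately left out, since only the basic tap/hold/drag split is mechanical enough to sketch.

```python
# Toy sketch of the selection model described above: a plain tap places
# the cursor, tap-and-hold selects the word, a drag extends the
# selection by whole words. All names are hypothetical.

def word_bounds(text, pos):
    """Return (start, end) of the word containing index `pos`."""
    start = pos
    while start > 0 and text[start - 1].isalnum():
        start -= 1
    end = pos
    while end < len(text) and text[end].isalnum():
        end += 1
    return start, end

def interpret(text, gesture, pos, drag_to=None):
    if gesture == "tap":
        return ("cursor", pos)                 # just insert the cursor
    if gesture == "hold":
        return ("select", word_bounds(text, pos))
    if gesture == "drag":
        start, _ = word_bounds(text, pos)
        _, end = word_bounds(text, drag_to)
        return ("select", (start, end))        # snap to word boundaries
    raise ValueError(f"unknown gesture: {gesture}")
```

A context-aware model would replace `word_bounds` with something smarter (selecting a phrase, a sentence, or a code symbol), which is exactly where the machine-learning augmentation would slot in.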
To be clear: I’m not a big fan of file systems. They are vague, sprawling nesting hierarchies far too complex for most users. However, I’m a big fan of files. They are little blobs of data that exist outside of apps. The issue I have with most mobile UX is that it locks app data away from easy access, unable to be used by cloud services or other apps. For simple things like to-do managers this is likely a reasonable assumption, but as we start to explore more complex objects and flows, files start to look interesting. Keep in mind I use ‘files’ in the very broadest sense: I’m really just talking about data or behaviors that exist outside of the app that created them. These may not feel like classic files; they’re more like little iconic blobs with new behaviors. There could be:
- Internet blobs that retrieve information
- Machine learning blobs that manipulate what is dropped on top of them
- Copy/paste blobs that hang around the edge of your window
- Assistant blobs that notify you when something important is happening
The world of apps seems too closed off and monolithic. Not for basic consumer apps, but for the kind of power-user apps that a designer or developer could use to create. I’m inspired by the idea of taking old-school ideas like files and Unix pipes and applying them to machine learning and assistive technologies. Yes, it’s a stretch, but there is something really interesting hiding in there.
Ghost in the Shell
This meandering post is a partial reaction to the all-too-common belief that mobile is the only game in town and that the desktop UX is long past any value. Inspiration can come from anywhere. I strongly believe the world of machine learning is going to usher in an amazing augmentation of our abilities, and that building on what we’ve learned from the desktop UX is a great place to start.