I don’t know who this Dustin Curtis guy is (I’m sure he returns the favor), but he’s absolutely right. The ever-larger screens on (non-Apple) smartphones are ever-less ergonomic and ever-more ridiculous.
The bigger the screen, the more impossible one-handed operation becomes.
It’s another case of companies getting into an arms race over “who can make the biggest x” without any regard to why. Where have we seen this before?
If they’re going to make phones with ever-bigger screens, they should take care to design the UI to improve one-handed use, as suggested by Itai Vonshak’s awesome Emblaze UI design:
Itai works at Palm now, btw 🙂
H/T to my friend Jon Tzou for sending this my way
And the best tweet ever on the subject comes from Sebastiaan de With:
I will now take advantage of my 15 minutes of fame to squeeze every bit of self-promotion out of it that I can.
My name is John Kneeland. I am originally from Philadelphia and graduated from the University of Pennsylvania in 2008 with a degree in nothing useful. Now I live in San Francisco.
I currently work with the awesome webOS team (née Palm) at HP.
Much to my surprise, the visitors to my blog providing thoughtful commentary far outnumbered the spambots. It was great to see your ideas on how to solve the problem of interacting with tablet web browsers.
Now I’ll take you through mine.
In my humble opinion, the best implementation of basic browser functions we have seen on mobile devices to date is the “gesture area” as implemented on all webOS phones since the debut of the Palm Pre in 2009. For those of you who are unfamiliar with the Pre (you’re missing out, btw), the gesture area is a capacitive touch-sensitive strip below the display where one can simply swipe a finger in a certain direction to trigger an action. In the browser, swiping left was the equivalent of hitting the “back” button, while swiping right was the equivalent of hitting the “forward” button.
I am particularly enamored with gestures as opposed to onscreen buttons in mobile devices, for reasons I will expound upon in a future post. For now I will just say this is my preferred way to interact with the browser. webOS nailed it with the gesture area in their phones.
And so of course HP/Palm went and ditched the gesture area in their first tablet.
In their defense, there are many good reasons the gesture area that works so brilliantly on phones simply doesn’t work well on a tablet, but that’s a topic for another post altogether.
But back to the point at hand: If we do not have a hardware gesture area, what can we do?
Make the whole screen your gesture area
Anyway, since that’s obviously silly, let’s refine the idea some more. Clearly we don’t want the whole screen looking or acting like a gesture area all the time, or we can’t actually use the device. Rather, we need a way for the touchscreen to interpret when you want to be using it as a gesture area.
I have two distinct ideas on how to do this:
1. Multitouch gestures
I like this, but the main problem that I see is that Apple has probably patented it. That wouldn’t stop the Android team of course, but it’s enough to give me pause.
2. Tap-and-hold gestures

Right now lots of touchscreen interfaces have a means of bringing up a contextual menu: hold your finger down in a single spot for a second and the contextual menu pops up next to your finger.
I think this is an area of opportunity for solving the browser nav issue. And so I propose the Magic Gesture Area (note to self: check if Apple has trademarked the word “magic”).
First, I am going to reduce the number of steps required. Right now the contextual menu requires you to tap, hold, release, move your finger to the desired menu item, and tap again. Why not just position the popup options so that you can swipe right over to them? I will change it to tap, hold, swipe. Once you trigger the menu, just swipe your finger to the left to go back, swipe it to the right to go forward, or swipe it up to reload. Whichever way you swipe will, of course, have a visual cue to confirm it, just as the current webOS gesture area’s light pulses in the direction of the swipe.
But wait, John, you say. What about the functions that are currently in use by tap-holding, like “open in new card” or text editing features?
Well, I’m glad you asked. The contextual menu’s original features still have a home here: on the bottom. Only now, instead of tapping on them, you drag your finger on top of them until the one you want is highlighted (kind of like the System Menus behavior in Mac OS 1 through 7.x).
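To make the tap-hold-swipe flow concrete, here is a rough sketch of the classification logic a Magic Gesture Area might use. Everything in it is my own illustrative assumption, not webOS code: the names (`classifyGesture`, `GestureAction`), the 500 ms hold threshold, the 40 px swipe threshold, and the decision to route a downward swipe to the contextual menu.

```typescript
// Hypothetical sketch of the Magic Gesture Area logic described above.
// All names and thresholds are illustrative assumptions, not a real API.

type GestureAction = "back" | "forward" | "reload" | "menu" | "none";

interface TouchSample {
  x: number; // pixels
  y: number; // pixels
  t: number; // milliseconds
}

const HOLD_MS = 500;  // how long the finger must rest to arm the gesture
const SWIPE_PX = 40;  // minimum travel to count as a swipe rather than a tap

function classifyGesture(start: TouchSample, end: TouchSample): GestureAction {
  const held = end.t - start.t;
  if (held < HOLD_MS) return "none"; // too quick: ordinary tap or scroll

  const dx = end.x - start.x;
  const dy = end.y - start.y;
  if (Math.abs(dx) < SWIPE_PX && Math.abs(dy) < SWIPE_PX) {
    return "menu"; // held in place: show the contextual menu
  }
  // The dominant axis decides the action, mirroring the webOS gesture area:
  // left = back, right = forward, up = reload.
  if (Math.abs(dx) >= Math.abs(dy)) {
    return dx < 0 ? "back" : "forward";
  }
  // Down isn't assigned in the post; here it falls through to the menu.
  return dy < 0 ? "reload" : "menu";
}
```

A real implementation would hang this off `touchstart`/`touchend` events and draw the directional visual cue while the finger moves, but the routing decision itself is just this handful of comparisons.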
I have an iPad and a TouchPad. I also had a Samsung Galaxy Tab for a week but didn’t find Honeycomb good enough to warrant a permanent spot in my collection.
One thing that strikes me is how awkward tablet browser interfaces seem to be.
What the tablet makers have done is take the exact same browser paradigm popular on desktops (with Honeycomb even going so far as to bring over desktop browser tabs) and shoehorn it into a tablet screen. Since the buttons are not within reach of where the fingers naturally rest, the user has to move their hand from its natural position and extend it up to hit them. Instinctively, this just doesn’t feel right. I’ve tried to break this down into more definable reasons:
- Economy of movement is good, and the tablet browser as is does not minimize movement well. The less a user has to move to do something, the better. All the more so on a touchscreen, which requires more movement to get from point A to point B (it’s a 1:1 ratio of movement in life to movement on the screen, whereas a mouse/trackpad amplifies the movements you make in a few inches of space to cover a much larger screen). While it may require an inch or so of movement to flick a PC’s cursor 6 inches up to the back button, it requires the full 6 inches of movement for your finger on a tablet. Bad.
- Unlike desktops (or even laptops), the hands are not just used for interacting with a tablet; they’re also used for holding it and supporting its weight. If the user has to move their hand to do something, they have to shift how they are holding the tablet every time they need to use one of the browser’s buttons. Bad.
- Accuracy suffers. Pick a key on your keyboard and try hitting it with your wrists resting on the hand rests. Now lift your entire arm and try swooping in on it with your finger. It’s not difficult (unless you’ve been hanging out with Jack Daniels), but it does take somewhat more effort than the former. I think this is because when your wrist is stable, you only have to move your fingers, whereas the latter uses your entire upper arm, which involves more “moving parts” and takes more effort to achieve the same level of accuracy. Bad.
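The economy-of-movement point can be made concrete with a toy calculation. The 4x pointer gain is an illustrative assumption (real mouse/trackpad acceleration varies), but it shows why roughly an inch of hand travel can cover six inches of screen on a desktop while a touchscreen always demands the full distance:

```typescript
// Toy calculation for the economy-of-movement argument above.
// pointerGain = 4 is an assumed amplification factor, not a measured value.
const targetDistanceInches = 6; // on-screen distance from content to the back button
const pointerGain = 4;          // hypothetical mouse/trackpad amplification

const mouseTravelInches = targetDistanceInches / pointerGain; // hand moves ~1.5"
const fingerTravelInches = targetDistanceInches;              // touch is 1:1, full 6"

console.log(mouseTravelInches, fingerTravelInches); // 1.5 6
```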
I’m going to look at some existing UXes and see how to make them better, at least for humans such as myself.