YAPAHUUAD: Yet Another Post About
How UI and UX Are Different
Ok, only kinda. Have you seen the new user interface of the iPhone 6? Err, UI? Err…
you know, the rainbow of colors? No, that was iOS 7… you know what I mean. But
ignoring the looks, it works well. Right?
Wrong…
or is it?
It isn’t that the UI doesn’t work well; it’s that a UI is how something looks, not how it works. The experience of using a UI is the UX, or user experience. If you have somehow stumbled onto any graphic designer’s blog, you have probably seen something like “You are all idiots for thinking UI and UX are interchangeable,” or at least something more creative than my attempt at a failed high school student council speech, where the candidate makes it look like they wrote a speech but decided to improvise at the last moment.
I’m rambling. Here’s my shtick: tons of people will mix up UI and UX, and tons of
people will tell you UI is different from UX. My take is that they are
different, but rely on each other. You can’t have one without the other. If
Apple had simply updated their UI with a dizzying array of colors, people would
still be disoriented. But instead they added some new ways that you worked with
the UI, such as hiding away the information in the background when a popup
comes up. With the introduction of the iPhone 6 and iPhone 6 Plus, they decided to try to keep one-handed usage 100% possible… by introducing a feature I’ve rarely seen done, done well, or without an “oh god, what just happened?” moment: “Reachability.” Double-tap the Home button and the UI magically slides halfway down the screen so you can reach those amazingly hard-to-reach buttons at the top… the ones I’ve had difficulty with since the iPhone 2G (1, 2G, Original iPhone, whatever you want to call it).
But what about other areas of UX? While nearly every phone out there tries to
change the picture that defines a button, and pass it off as a brand new UI
with a radically new UX, the reality is: it’s still a button. You treat it the
same on every phone in existence as you treat the button on your microwave. But
people are saying “we can do better… let’s get rid of buttons…” Hold on a
second. How do you plan to do that? *Slips in the evil word Skeuomorphic* Ah, there it is… the word that every UI designer loves to hate, unless they design physical devices. It’s viewed poorly, not because it’s bad, but because designers always feel/believe/think they can do better. Heck, the designs that skeuomorphism mimics had to be designed in real life at some point. It’s also viewed poorly now because Apple decided to disregard or minimize its use in iOS 7 and up.
But there is a challenge with this idea that the death of skeuomorphism is upon
us.
I call it the future.
Why?
When you hold anything in your hand, if it doesn’t mold to your hand or isn’t graspable by a physical being, it is discarded to the wastebins of society. Try to come up with something that isn’t like anything you’ve held/touched/used before. Now spend thousands to millions of theoretical years simplifying that object, because, as many designers will tell you, less is more. Done that?
Congrats, you’ve made a door knob, a door, maybe a bottle; most people who did this probably came up with a ball of some sort. Maybe you made a twisted spaghetti thing, or you’re going to tell me “it’s virtual, so it has no form,” or maybe it really is just hot air. Did you see what I did? I put you in the role of many of these same designers, the ones who have won countless awards for their virtual designs, and told you to do what the award winners have been saying we should strive for.
Wait, I still didn’t talk about the future. Let’s go through some examples of why the way you experience a UI is likely to return to skeuomorphism, even if its look is anything but skeuomorphic. We’ll start with something of a juggernaut:
3D printing. Take that same idea you came up with and will it into existence… you now have, oh wait, it fell over and is now a mess of melted plastic. If by some amazing chance it is still standing, you probably made something with a flat bottom or a low center of gravity. For something to exist in the real world, it needs a physical basis, and it probably will need to be handled by a person.
So it falls into the same mockery from the last paragraph. It’s a physical item, and guess what, it probably is based on, or looks like, some item that you have come in contact with or seen somewhere before. Future: 1, every individual who thinks skeuomorphism should die in a fire: 0.
I have an Oculus Rift. Or, as I like to tell people, the precursor to the Matrix. If
you read or watch anything about virtual reality, you will get descriptions or demos of how it brought about incredible changes… but the real mind-blowing part is when you listen to or read from its makers HOW they do it in real life. They are
trying to trick the mental layer of your brain below the Id. The part that says
“electrical impulse… this means weird rubbing motion… this means blockage… this
means I need an electrical impulse to the diaphragm…” congrats, you sneezed. You
can put on an Oculus Rift, look over a virtual cliff, take it off and look down in real life, and repeat this many times, and you will still FEEL like you’re on the edge of a cliff. How is that not skeuomorphic? Mimicking a real environment in a virtual space? Get tapped on the back and you’ll think you’re going to fall, and may
yell at whoever tapped you. Well, one thing these designers and inventors found
is that UIs suck. A UI can make you sick and make you mentally go “what is
this? Why is this here?” The solution that has been found to work best is to
build a physical UI. My favorite example: instead of playing Batman with an inventory UI, you look down at your virtually impeccable abs… then at your Bat Utility Belt, and see what you actually have on you in your Bat Inventory. It acts, feels, and works perfectly, and you will wonder why it was never done before… maybe the UI got in the way of the physical representation of the objects it was representing.
Some quick additional ones: Google Glass, a modern case of augmented reality, doesn’t seem to have any discernible physical representation of what you’re looking at on the small screen. Except Google calls the screens “cards,” and you’ll now never be able to shake the imagery of swapping cards, even if the UI doesn’t do any fancy card-swap animation. You will also probably just look at the small screen for its content and nothing more, just like a card. People will always find some way to describe a thing in physical terms, and that ends up being how they picture and interact with it. If the small screens were called portals, people might try to move around to glimpse additional information out of the corners of the screen. Changing the physical term used to describe something changes how people use it.
Another one: Myo. It’s best thought of as Minority Report without the screens. It takes muscle impulses and converts them into actions that some programmer can handle and do something with. Without being told what it’s supposed to do, just being told they can control things with their hand and arm, people instantly pretend to grab something like a ball. They treat the air in front of them as a
physical object, and use it as if they really held it in their hand. How about
Leap Motion Controllers? You’d probably end up doing the same thing as with a Myo, but only toward your computer screen instead of in any available area (the benefit of the Myo over the Leap Motion, and I don’t say that because I own a Myo but not a Leap Motion…). Once you learn gestures, you’ll use those with your Leap Motion. But many of those motions are designed to mimic real-world interactions: to select a group of objects, you point at them (as if they were actually there in front of you) and make a circle, as if you had a pen or pencil and were circling them. I’ll word it this way: Future with skeuomorphic design:
5, UI designers and “visionaries” who say skeuomorphic will die (with or
without Apple): 0.
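To make the Myo’s “muscle impulses become actions a programmer can handle” idea concrete, here is a minimal sketch of that event-callback pattern. All names here are hypothetical illustrations, not the real Myo SDK: the point is just that the hardware classifies impulses into named gestures, and application code registers what to do when each one fires.

```python
from typing import Callable, Dict, List

class GestureBand:
    """Toy dispatcher: maps recognized gesture names to app callbacks.

    A real armband would drive recognize() from EMG classification;
    here we call it by hand to simulate the user's arm.
    """

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[], None]]] = {}

    def on(self, gesture: str, handler: Callable[[], None]) -> None:
        # Register a callback for a named gesture (e.g. "fist").
        self._handlers.setdefault(gesture, []).append(handler)

    def recognize(self, gesture: str) -> None:
        # Fire every handler registered for this gesture; unknown
        # gestures are silently ignored.
        for handler in self._handlers.get(gesture, []):
            handler()

events: List[str] = []
band = GestureBand()
band.on("fist", lambda: events.append("grab object"))
band.on("spread", lambda: events.append("release object"))

band.recognize("fist")    # user pretends to grab a ball
band.recognize("spread")  # user lets go
```

Notice the skeuomorphism lives in the gesture vocabulary, not the code: “fist” maps to “grab” because that is what a fist does to a physical ball.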
I’ll close with a final far-out idea: cloud computing. If anyone found this blog from the far reaches of the internet and is having a good time mocking my poor seer abilities, they probably just did a spit take and are on the floor laughing right now. Everything I just mentioned, plus visions of the future à la Blade Runner, the Matrix, and many others, offers an idea of what will come next: what the future will be like and how people, or other beings, will interact with it. They almost
always have one thing in common: a person interacting with something. Speak the request and HAL will open the pod bay doors for you. Bend reality like in Inception. But they never, ever remove the physicality of the objects a person uses. Even Tron, which was quite sparse in its decoration, has many physical objects, or makes them appear out of light matter when needed. But there is also an underlying theme that elements are interconnected somehow. That interconnection is cloud computing, and if the future is in development now, its builders will be making a physical internet of things out of everything and naming things as they are. A skeuomorphic
development that offers amazingly realistic UX of a UI that is as real as we perceive
and interact with the physical world around us, all connected and interacting seamlessly
with each other. Just imagine if you tried to replace the floppy disk used for the save icon… are you going to replace it with a Blu-ray disc, a flash drive, the word “Save,” a label saying “Say Save,” or will things save implicitly so no save button exists at all? The death of skeuomorphism may be upon us, but the future indicates it will never die the way UI designers say it will in their attempts to improve UX… a UX that already reached its pinnacle under skeuomorphic design.
UI and UX are different. But good UX needs proper UI. Less is more, and the physical world around you has already reached that state, whether you think so or not. So maybe a physically mimicking UI can evoke the best UX… *anti-climactic cliffhanger*