The advent of Personal Interaction Systems

Cripto Troffei · Published in Happy computing · 7 min read · Sep 16, 2014

Ubiquitous, context-aware, interconnected. Today’s computers and devices are incredible machines, growing new functionalities and applications every day. We can decide which programs to install, where to place our files, which services to use; we can tweak a few preferences. And yet, they feel more like a digital plaza than a mirror of the binary portion of our lives.

We are free to decide the nature of our actions and information flows, but the interfaces to them are never really optimized for you. Your data and functions are scattered (or, even worse, boxed) in apps and online services, plugins, programs and social profiles.

It’s their world we are inhabiting. Their methods we are adopting. We are just users, who only matter when we represent the statistical majority.

We need a new layer between them and our systems in order to improve both the effectiveness and pleasure (yes, pleasure) of computing. We need something smart and transitional, able to adjust to our own contexts and necessities.

At this point you could say: “Well, isn’t the OS in charge of doing that?”
Apparently not. And we still need a better input device than a mouse and a keyboard. Now close your eyes and imagine your day in front of the computer, filtering out everything except the movements you perform to give commands and enable things. Setting aside the real shortcut ninjas, anyone would see themselves falling into an infinite loop of digital onanism. That can’t be the best way.

So we need a constantly adapting tool, not only to make more sense to us, but to give us the best means to get to our results.

Functions and data we need from every device and peer should be quickly accessible from a single hub, always at our fingertips. We should make the least possible effort to change context and access our most used functionalities. The concept of interface has to include intelligence, so that it resembles an actual workflow assistant more and more.

We could call it a “smart personal workflow controller”, but that would sound a little lame, wouldn’t it? Plus, it would still be an incomplete definition. So let’s just see, in more detail, what abilities this proposed new paradigm should present. (Spoiler alert: we are building it.)

Connect

The core feature of this system is the ability to interconnect. It must provide a better link between you and your devices, applications and tasks. It should also facilitate cross-application workflows, letting you combine commands from separate programs and run them on your behalf.

Moreover, many of your applications nowadays reside on the web, and server-based automation is just a part of the solution.

But through connections we can get much more, especially if we think of two connected devices as a new whole (which is, as we know, something more complex than the sum of its parts). Different interconnected devices can each become an extension of the other, exchanging and presenting information and interactions in whichever context best suits each case.

Do you remember when Transformers unite to create a big badass super Transformer? Well, something like that, but with a lot less violence. ☺

Adapt

Any philosopher or spiritual person will tell you that at our core we are all the same. But although I love to ramble on such noble matters, it’s the “real world” we are talking about here, and here our personality is in control. It is the monster that needs to be fed, reluctant to change and glued to its origins.

With machine learning, and other simpler computing techniques, we finally have the means to make our tools adapt to our mindsets. If you think about it, you can see that culture, not accessories, should be the thing that shapes our mental models.

So this new control tool must shine in the sweet spot between automation and curation, learning the peculiarities of our workflows. Since the context for interaction can also come from environmental factors and aleatory predilections, it has to be “open-minded” about accepting various methods of activation (manual, gestural, vocal) and able to shift its looks in order to easily link abstraction to action, with visual aids that make sense or better satisfy your personality.

And even when your mind is at ease, you still need different modes of input for different kinds of tasks. While touch screens evolve, our desktops are still “interaction flatlands”.

It also has to adjust to your level of expertise. One concept, well articulated by Allan Grinshtein in a recent LayerVault blog post and already implemented in Actions since 1.0 for onboard tutoring, is Progressive Reduction. This is the design strategy of evolving the interface along with the user’s expertise: for instance, providing full icons and text labels on buttons at one level, then removing the labels at a higher one (head to the article for an insightful overview of the concept of experience decay, crucial if you mean to implement this method in your software).
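As a minimal sketch of how Progressive Reduction could work (all thresholds, level numbers, and rendering rules here are hypothetical illustrations, not the actual Actions implementation):

```python
from dataclasses import dataclass

# Hypothetical thresholds: total uses needed to reach each expertise level
# (0 = beginner, 1 = intermediate, 2 = expert).
LEVEL_THRESHOLDS = [0, 10, 50]

@dataclass
class FeatureUsage:
    uses: int = 0
    idle_days: int = 0

    def record_use(self) -> None:
        self.uses += 1
        self.idle_days = 0

    def decay(self, days: int) -> None:
        # Experience decay: after a month of inactivity, the user
        # slides back toward a more explicit interface.
        self.idle_days += days
        if self.idle_days >= 30:
            self.uses = max(0, self.uses - 10)
            self.idle_days = 0

    @property
    def level(self) -> int:
        return max(i for i, t in enumerate(LEVEL_THRESHOLDS) if self.uses >= t)

def render_button(name: str, usage: FeatureUsage) -> str:
    # Level 0: icon + text label; level 1: icon only;
    # level 2: the command is reachable by shortcut alone.
    if usage.level == 0:
        return f"[icon] {name}"
    if usage.level == 1:
        return "[icon]"
    return ""
```

The key design choice is that reduction is reversible: decay pushes the user back down a level rather than assuming expertise is permanent.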

Filter

Possibilities… It’s great to have them, but it’s too easy to be overwhelmed. People use a small spectrum of the functions made available by their installed software, yet every command is available and visually flattened into a pool along with all the others. I’m talking about menu bars and shortcuts here, because they are your interface to action right now. The occasional UI palettes and icons still miss the point and are barely customizable.

The interface should assist you in finding your path to action, like a GPS for productivity. It has to know what is available, what is useful and how it is useful, and filter out all the other features, contexts and data.
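A naive sketch of such filtering, ranking commands by how often they have been used in the current context (the data shapes and the frequency heuristic are my own illustration, not a description of any shipping product):

```python
from collections import Counter

def filter_commands(all_commands, usage_log, context, top_n=5):
    """Surface only the few commands most used in this context.

    usage_log is a list of (context, command) pairs; commands never
    used in the given context sink to the bottom and get cut off.
    """
    counts = Counter(cmd for ctx, cmd in usage_log if ctx == context)
    ranked = sorted(all_commands, key=lambda c: counts[c], reverse=True)
    return ranked[:top_n]
```

A real system would blend in recency, environmental signals, and learned preferences, but even a plain frequency count already beats a flat menu.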

Vision is our first true interface to the world. We can spot a Waldo in incredible situations, so we could rely on those abilities if we just laid things out in a more expressive way than a menu.

Ubiquity

After a while, this personal user interface will become the new core of your interactions, so it has to be with you at all times, at least in some form. The first step is to have an instance of this “controller application” on all of your portable devices. But you can already see why an interface like this on a smartphone would be… a little constricted.

Your tablet is at home, but you have another one within reach. As with any web service, you should be able to log in (with the same app, of course) and get your personal outlook on your digital world. From any device, not just any kind of device.

Help

Now we have an instrument that knows a little about us, can recognize context and what we want to do, and sees what we are doing and how we are doing it. What more could it do?

For instance, it could provide precious tricks for the app you are using, tell you about new features, or show relevant information. There is still a lot of data in our infinite virtual boxes that should really be where the action is. But we keep jumping here and there like frantic bunnies. Not nice.

But the greatest help is the one that arises from the intersection of all these functions: it keeps you adherent to your flow. In a time when multitasking is as trendy as a guillotine in the late 1700s, there is still a lot to improve when it comes to our working life. Our heads will keep rolling, but we could at least stop being the ones forging the blades.

Fun

Wait, say what now? Aren’t we talking about productivity here? Of course; that’s why it’s our inner child who needs soothing, and what’s better than making work feel like play? Engagement is a weapon of mass creation when placed in the right context. Gamification is on the rise not just to lure your money and keep you glued to services, but to make you more productive. There are already many examples of productive psychological teasing, such as Khan Academy and Lift, just to name a couple. But there is still a lot to do in that direction.

It’s time to bring the “fun” back to “function”.

The future

There are still capabilities we should include but can’t, as we are limited by the devices this application resides on, such as being “always on” and having haptic responses. To work around the latter, sound is where you should look, since as a mechanical phenomenon it is the closest experience to touch we have for now. And, sorry about that, there are some other sides I cannot share yet. But I will.

Mark Weiser’s intuition of the transparent interface won’t see the light of day for a long, long time. Humans take pleasure in different types of interaction, and the idea that every kind of maker will be satisfied just by instructing a fictional character on what to do on their behalf… well, that is just not going to happen anytime soon.

We started with Actions, which is, alas, very far from what is proposed here. It’s a first hint at what future control tools could be, although a very limited one. So we’ve started from scratch with Quadro.

“Then my data becomes my interface, and I am then whole, digitally and in reality.” — Jarno Mikael Koponen (via PandoDaily)

At this point you’ll probably be wondering if I’m… Yes, I’m crazy enough to think that we are truly bringing on, or at least starting, a paradigm shift to computer interaction. Are you crazy enough to want to help us out?


Written by Cripto Troffei

Product/UX/UI/Sound designer & CEO @ Quadro
