Meet the Author

  • Jodie Keily
  • UX Designer
After completing an Industrial Design and Technology degree at university, Jodie decided to move away from product and into digital. Now as the UX Designer at Lab, Jodie gets to use her research and prototyping skills to produce wireframes, and her design knowledge to develop these into finalised designs.

Takeaways

  • Could gestural interaction be the next big thing?

Gestural Interaction: Is it the future? 


28 March 2017
Thought Leadership
2 mins

Have you ever wondered what it would be like to use a computer without a mouse or keyboard? If I'd asked you that question five years ago you would probably have thought I was mad – and maybe you still do. But considering how quickly technology is advancing, I believe it won't be long before this happens. We already use facial recognition to log into our laptops, so is it really so hard to believe that we'll soon be combining gestures with our voice to control the things around us?

 

Image 1) An example of 2D gestures that could be used to interact with products (Digital Trends).

 

Siri and Amazon Echo are perfect examples of how far technology has advanced in a short space of time – we can now control products without touching them at all, simply by speaking to them. These smart products give us the answers we're after and carry out the tasks we're too lazy to do ourselves. The technology has been designed around an intuitive user experience that lets tasks be carried out seamlessly, with minimal effort – why move when you can just speak?

 

Image 2) Amazon Echo Dot

 

Image 3) iPhone 6 with Siri activated

Considering how quickly voice-activated products have entered our world, I believe that gesture-controlled, smart(er) devices will be in our homes, workplaces and lives sooner than we think. For this additional ‘input’ to become part of our everyday lives, though, we will need a new way of designing user experiences around it.

For example, if you were put in front of a large gesture-controlled screen today, the gestures you would make would probably be the same ones you use on your mobile (minus the touch). However, how you use your mobile and how the person sitting next to you uses theirs could be completely different (e.g. Android vs. Apple users).

Looking at Image 4 below, most people use one hand to zoom in on a mobile, but as the screen size increases, wouldn't it make sense to use two? If so, does that mean two hands would also make sense on a gesture-controlled screen? Or would one hand make more sense, because it lets us control the screen with minimal effort? There is no right or wrong answer yet, but if gesture-controlled devices are to take off, there will need to be one.

 



Image 4) Mobile vs large screen (Yanna Vogiazou blog post)

Yanna Vogiazou (UX Director at Exippl) put it this way in her blog post for InVision: “We’ll need to design truly multimodal experiences, combining various inputs in a seamless flow. We should add ‘gesture-friendly’ to our vocabulary.”
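
To make that ‘gesture-friendly’ idea a little more concrete, here is a rough sketch (in TypeScript) of how a multimodal interface might normalise different gestures into a single intent – whether the user pinches with one hand or spreads two hands apart, the interface only ever sees “zoom by this much”. The names and numbers below are entirely made up for illustration; they don’t come from any real gesture SDK.

// A hypothetical gesture event: how many hands were detected, and how far apart
// the fingertips (one hand) or the hands (two hands) were at the start and end.
interface GestureEvent {
  hands: 1 | 2;
  startDistance: number; // millimetres
  endDistance: number;   // millimetres
}

// The interface layer only ever receives an intent, never the raw gesture.
type ZoomIntent = { kind: "zoom"; scale: number };

// Whichever gesture the user chooses, it is normalised into the same intent.
function toZoomIntent(event: GestureEvent): ZoomIntent {
  const scale = event.endDistance / Math.max(event.startDistance, 1);
  return { kind: "zoom", scale };
}

// A one-handed pinch and a two-handed spread both produce "zoom to twice the size".
console.log(toZoomIntent({ hands: 1, startDistance: 40, endDistance: 80 }));
console.log(toZoomIntent({ hands: 2, startDistance: 300, endDistance: 600 }));

The point of the sketch is the design choice rather than the code: if both gestures map onto the same intent, the question posed by Image 4 – one hand or two – stops being a technical constraint and becomes a pure user-experience decision.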

Being designers in this decade gives us the chance to be truly innovative: merging every available user input and designing for each appropriately to create the best possible experience for our customers. “People are seeking out experiences – not technologies” (Bobby Gill, 2016).

The team behind Google’s Project Soli, a new sensing technology, has already been working on gesture-controlled interactions; however, that is not to say their products will be the most intuitive. We are currently in a period of fluidity and excitement, where new interactions that break what we perceive as normal behaviour can be introduced into our lives. The opportunity to be innovative and blue-sky is now. Responding to user needs, and to the inputs users give us, is what is driving innovation in our digital and ‘smart’ world.

Who knows – maybe gesture-controlled devices won’t take off the way voice-activated ones did. But if they do, they will only succeed if new user experiences are explored, tested and built.


Want to stay up to date?

Get the low down on the latest news within the industry on Twitter @LabDigitalUK.