The gaming world has been revolutionized by games that track a player's natural body movements and translate them into the virtual environment. Using gesture and voice recognition, gaming systems such as the Xbox Kinect let players kick a ball, shoot an arrow, and actively participate in the game simply by moving their bodies, no controller required.
Watch out, manufacturing: the revolution is coming. Very soon, factory floors may start seeing gesture and voice recognition systems, combined with biometrics, that let workers control factory operations with natural body movements and spoken commands. A simple example, according to a MachineDesign.com article, involves logging into workstations.
Currently, many automated factories run on graphical user interfaces (GUIs), where a worker logs in by clicking an icon and entering a username and password. In the future, the same worker could simply step up to the workstation, which would scan his retina and log him in automatically. With a simple gesture the worker could command the computer to start operations, and by holding up a hand in a "stop" gesture, halt them. The machine could be programmed to ask for confirmation of these gestures, requiring a vocal "yes" from the operator.
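As a rough illustration of that confirm-before-acting flow, here is a minimal sketch in Python. The gesture names, the `ask_confirmation` callback, and the `machine` object are all hypothetical stand-ins, not part of any real Kinect or factory API:

```python
# Hypothetical sketch: route a recognized gesture to a machine command,
# requiring a vocal "yes" before anything critical actually runs.

CRITICAL = {"start", "stop"}  # gestures that need spoken confirmation

def handle_gesture(gesture, ask_confirmation, machine):
    """Map a recognized gesture to a machine command.

    ask_confirmation(prompt) stands in for the speech-recognition step
    and returns True only if the operator answers "yes".
    """
    if gesture not in CRITICAL:
        return "ignored"          # unknown or incidental movement
    if not ask_confirmation(f"Confirm '{gesture}'?"):
        return "cancelled"        # operator did not say "yes"
    if gesture == "start":
        machine.start()
    else:
        machine.stop()
    return "executed"
```

The point of the confirmation step is that a stray arm movement alone can never start or stop equipment; it takes a gesture plus a spoken acknowledgment.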
So how does this technology work? A color video camera works with a depth sensor, which provides a 3D perspective, and an array of microphones that isolates individual speakers' voices. Advanced software tracks the layout of the room and each person's movements, monitoring them and responding accordingly.
A biometric natural user interface (NUI) would recognize only the person logged into that particular machine, responding solely to that person's gestures and movements while ignoring all other workers. Should the worker leave the workstation, it would respond to no one else and could even be programmed to shut down after a specified idle period.
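That single-user session with an idle timeout can be sketched in a few lines of Python. Everything here is an assumption for illustration: the worker ID would come from the retina scan at login, and the class name, timeout value, and `accept` method are invented, not a real NUI API:

```python
import time

class WorkstationSession:
    """Hypothetical session lock: respond only to the logged-in worker,
    and shut down after a period of inactivity."""

    def __init__(self, worker_id, timeout_s=300.0, clock=time.monotonic):
        self.worker_id = worker_id   # set by the biometric scan at login
        self.timeout_s = timeout_s   # idle period before auto-shutdown
        self.clock = clock           # injectable for testing
        self.last_seen = clock()
        self.active = True

    def accept(self, worker_id):
        """Return True only for the logged-in worker within the timeout."""
        if not self.active:
            return False
        if self.clock() - self.last_seen > self.timeout_s:
            self.active = False      # idle too long: session shuts down
            return False
        if worker_id != self.worker_id:
            return False             # ignore all other workers
        self.last_seen = self.clock()
        return True
```

Passing the clock in as a parameter is just a convenience that makes the timeout behavior easy to test without waiting in real time.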
A few clear advantages of gesture-based interfaces include:
Eliminates reliance on touch screens in greasy, dusty, or otherwise less-than-ideal environments, where screens can become unreadable and hard to use.
Increases worker safety – workers can keep on gloves and protective glasses that previously had to come off to use keyboards or read touch screens. It also makes for a cleaner work environment by eliminating the need to touch screens, keyboards, or a mouse.
Reduces maintenance – gesture-based interfaces eliminate keyboards, mice, and other input devices, which often wear out and need to be replaced.
Requires less training – gesturing comes naturally, and many workers already use this type of technology in consumer applications such as games and smartphones, so adopting it in an industrial setting will be easy for them.
Eliminates language barriers – because the gestures are the same no matter what language you speak, this "universal language" would work in factories all over the world. It would also further reduce training by eliminating keyboard and language instruction.
Reduces costs – cuts spending on training and maintenance and avoids costly halts in production.
Machine Design predicts this technology will first show up in factories with heavy equipment or applications with extreme conditions, such as cold rooms, where processes are more dangerous, more contaminants can clog input devices, and it's harder for workers to maneuver a touchscreen or mouse.
Gesture-based technology is just the tip of the iceberg when it comes to NUIs. Check out the rest of the Machine Design article to see where this technology is headed.