Human–computer interaction (HCI) is research into the design and use of computer technology, focused on the interfaces between people (users) and computers. HCI researchers observe the ways humans interact with computers and design technologies that let humans interact with computers in novel ways. A device that allows interaction between a human being and a computer is known as a "human–computer interface (HCI)".
As a field of research, human–computer interaction is situated at the intersection of computer science, behavioral sciences, design, media studies, and several other fields of study. The term was popularized by Stuart K. Card, Allen Newell, and Thomas P. Moran in their 1983 book, The Psychology of Human–Computer Interaction. The first known use was in 1975 by Carlisle. The term is intended to convey that, unlike other tools with specific and limited uses, computers have many uses which often involve an open-ended dialogue between the user and the computer. The notion of dialogue likens human–computer interaction to human-to-human interaction: an analogy that is crucial to theoretical considerations in the field. (Full article...)
Video game rehabilitation is the process of using common video game consoles and methodology to target and improve physical and mental weaknesses through therapeutic processes. Video games are becoming an integral part of occupational therapy practice in acute, rehabilitation, and community settings. The design of video games for rehabilitation focuses on a number of fundamental principles, such as reward, goals, challenge, and meaningful play. Meaningful play emerges from the relationship between player action and system outcome, made apparent to the player through visual, physical, and aural feedback. Platforms that feature motion control, notably the Nintendo Wii, Microsoft's Xbox Kinect, Sony's EyeToy, and virtual reality, have all proved effective in this field of research. These methodologies have been applied to all age groups, from toddlers to the elderly, and to a variety of conditions, ranging from stroke, cerebral palsy, and other neurological impairments to tendinitis and multiple sclerosis. Researchers have promoted such technology because gaming systems can be personalized to individual patients, allowing for greater engagement and interaction. Additionally, gaming consoles can capture real-time data and provide instant feedback to the patients using the systems. Several researchers have performed case studies demonstrating the benefits of this technology, and repeated trials and experiments have shown that the outcomes are easily replicated among various groups worldwide. These outcomes have increased interest in the field, expanding research beyond simple case studies to experiments with larger participant bases. (Full article...)
Image 8: The user interacts directly with hardware for human input and output, such as displays, e.g. through a graphical user interface. The user interacts with the computer over this software interface using the given input and output (I/O) hardware. Software and hardware are matched so that the processing of user input is fast enough and the latency of the computer output is not disruptive to the workflow. (from Human–computer interaction)
Image 9: Virtual Fixtures, an immersive AR system developed in 1992. The picture shows Dr. Louis Rosenberg interacting freely in 3D with overlaid virtual objects called 'fixtures'. (from Virtual reality)
Image 10: Some alternative methods of tracking and analyzing gestures, and their respective relationships (from Gesture recognition)
Image 11: Robinson R22 Virtual Reality Training Device developed by VRM Switzerland (from Virtual reality)
Image 12: A VPL Research DataSuit, a full-body outfit with sensors for measuring the movement of the arms, legs, and trunk. Developed c. 1989. Displayed at the Nissho Iwai showroom in Tokyo. (from Virtual reality)
Image 13: A computer monitor provides a visual interface between the machine and the user. (from Human–computer interaction)
Image 14: In theory, VR represents a participant's field of view (yellow area). (from Virtual reality)
Image 15: A real hand (left) is interpreted as a collection of vertices and lines in the 3D mesh version (right); the software uses their relative position and interaction to infer the gesture. (from Gesture recognition)
Image 16: A high frame rate and low latency are paramount for the sensation of immersion in virtual reality.
Image 17: The skeletal version (right) effectively models the hand (left). It has fewer parameters than the volumetric version and is easier to compute, making it suitable for real-time gesture-analysis systems. (from Gesture recognition)
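The appeal of the skeletal model is that a gesture can be inferred from a handful of joint coordinates rather than a full volumetric reconstruction. A minimal sketch of the idea, assuming joints are given as 2D coordinates; the joint names, the extension threshold, and the gesture labels here are illustrative assumptions, not part of any particular gesture-recognition library:

```python
import math

# Hypothetical skeletal hand: a wrist point plus one knuckle and one
# fingertip per finger, each an (x, y) coordinate. All names and the
# 1.5x extension threshold are illustrative assumptions.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def finger_extended(wrist, knuckle, tip, threshold=1.5):
    """A finger counts as extended when its tip is noticeably
    farther from the wrist than its knuckle is."""
    return dist(wrist, tip) > threshold * dist(wrist, knuckle)

def classify(hand):
    """Infer a coarse gesture from relative joint positions alone."""
    wrist = hand["wrist"]
    extended = sum(
        finger_extended(wrist, k, t)
        for k, t in zip(hand["knuckles"], hand["tips"])
    )
    if extended >= 4:
        return "open palm"
    if extended == 0:
        return "fist"
    return "partial"

open_hand = {
    "wrist": (0.0, 0.0),
    "knuckles": [(0.0, 1.0)] * 5,
    "tips": [(0.0, 2.0)] * 5,  # tips far from the wrist: extended
}
fist = {
    "wrist": (0.0, 0.0),
    "knuckles": [(0.0, 1.0)] * 5,
    "tips": [(0.0, 1.1)] * 5,  # tips curled near the knuckles
}
print(classify(open_hand))  # open palm
print(classify(fist))       # fist
```

With only a wrist, knuckles, and fingertips the whole pose reduces to a few distance comparisons, which is why such models suit real-time analysis.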
Image 21: These binary silhouette (left) or contour (right) images represent typical input for appearance-based algorithms. They are compared with different hand templates, and if they match, the corresponding gesture is inferred. (from Gesture recognition)
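The template-comparison step the caption describes can be sketched in a few lines: score the observed binary silhouette against each stored template and accept the best match only if it is close enough. The tiny 3x3 grids, labels, and 0.8 acceptance threshold below are toy assumptions for illustration:

```python
# Toy appearance-based matcher: a binary silhouette grid is compared
# against stored hand templates; the best-scoring template's gesture
# label is inferred. Templates and threshold are illustrative only.

def similarity(image, template):
    """Fraction of pixels that agree between two equal-sized binary grids."""
    rows, cols = len(image), len(image[0])
    agree = sum(
        image[r][c] == template[r][c]
        for r in range(rows)
        for c in range(cols)
    )
    return agree / (rows * cols)

TEMPLATES = {
    "open hand": [[1, 1, 1],
                  [1, 1, 1],
                  [0, 1, 0]],
    "fist":      [[0, 0, 0],
                  [1, 1, 1],
                  [1, 1, 1]],
}

def infer_gesture(image, threshold=0.8):
    """Return the best-matching template's label, or None if no
    template is similar enough to the observed silhouette."""
    label, score = max(
        ((name, similarity(image, t)) for name, t in TEMPLATES.items()),
        key=lambda pair: pair[1],
    )
    return label if score >= threshold else None

print(infer_gesture([[1, 1, 1],
                     [1, 1, 1],
                     [0, 1, 0]]))  # open hand
```

Real systems use far larger images and more robust similarity measures, but the match-against-templates structure is the same.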