Extended Reality (XR) is a broad umbrella term for the virtual and combined physical-virtual environments, and the human-computer interactions, generated by wearable and other computing technologies. It extends the Human-Computer Interaction (HCI) paradigm to user interfaces designed for a human who is wearing a computer.
In this paradigm the user can manipulate what is shown on screen by moving their eyes or head: changing the position or orientation of the eyes or head in space updates the visuals in real time. The same movements also drive interaction, letting the user select and manipulate graphical objects and controls directly with gaze or head motion.
Users can look around the virtual scene in three dimensions with their eyes, much as they would scan a television or monitor, and can reorient their view of the environment by moving their head and eyes through three-dimensional space. Head movements double as input for manipulating visual objects and controls. For locomotion, a joystick lets the user travel through the virtual world and reach the objects they want to interact with.
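The two input channels described above can be sketched in a few lines: a head-tracking function that turns yaw/pitch angles into a view direction, and a joystick step that moves the viewer along that direction. The coordinate convention (yaw about the vertical axis, pitch about the lateral axis, looking down -z at rest) is an assumption for this sketch, not a standard mandated by any particular XR runtime.

```python
import math

def head_forward(yaw_deg: float, pitch_deg: float) -> tuple:
    """Convert head yaw/pitch (degrees) into a unit view-direction vector.

    Assumed convention: yaw rotates about the vertical y-axis, pitch tilts
    about the x-axis; yaw = 0, pitch = 0 looks down the -z axis.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = math.sin(yaw) * math.cos(pitch)
    y = math.sin(pitch)
    z = -math.cos(yaw) * math.cos(pitch)
    return (x, y, z)

def joystick_step(pos: tuple, forward: tuple, stick_y: float, speed: float = 1.0) -> tuple:
    """Move the viewer along the current view direction by the joystick's
    forward/back axis (stick_y in [-1, 1])."""
    return tuple(p + f * stick_y * speed for p, f in zip(pos, forward))
```

In a real headset these values would come from the device's tracking API each frame; here they are plain parameters so the mapping itself is easy to follow.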
The user can also change how objects are presented on screen: objects are viewed in three-dimensional space, and their appearance can depend on what the user is currently looking at. Moving the eyes around the virtual environment manipulates objects directly, and the changes are rendered in real time.
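Selecting an object with the eyes usually comes down to a hit test: cast a ray from the viewer along the gaze direction and check whether it passes close enough to an object. A minimal sketch, modelling each selectable object as a sphere (an illustrative simplification; real engines use more detailed colliders):

```python
def gaze_hits_sphere(origin: tuple, direction: tuple, center: tuple, radius: float) -> bool:
    """Return True if the gaze ray (origin + t * direction, t >= 0)
    passes within `radius` of `center`. `direction` is assumed to be
    a unit vector."""
    # Vector from the ray origin to the sphere's center.
    oc = [c - o for c, o in zip(center, origin)]
    # Project oc onto the gaze direction to find the closest approach.
    t = sum(a * b for a, b in zip(oc, direction))
    if t < 0:  # the object is behind the viewer
        return False
    closest = [o + d * t for o, d in zip(origin, direction)]
    dist_sq = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist_sq <= radius ** 2
```

Once the ray reports a hit, the application knows which object the user is looking at and can highlight it or apply the requested change, which is what makes gaze-driven manipulation feel immediate.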
Such interfaces also support control through visual cues. For example, the user may look at an on-screen button to trigger a keystroke or press; the gaze input is converted into data for the computer to interpret and, where needed, transmitted over the network. The user can likewise interact through hand gestures or voice commands, typing into a text field that the computer then interprets. Voice commands can be used in a conversational mode to interact with and control the computer, and in many cases the user can even dictate text, such as the body of an email.
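A common way to turn "looking at a button" into a press, without an explicit click, is dwell-time activation: the button fires once the gaze has rested on it for a fixed interval. The sketch below feeds in timestamps and gaze state explicitly so the logic is easy to test; a real interface would read these from a clock and an eye tracker (the class name and threshold are invented for illustration).

```python
class DwellButton:
    """Gaze-dwell activation: 'clicks' once the user's gaze has rested
    on the button for `dwell_s` seconds, then re-arms."""

    def __init__(self, dwell_s: float = 0.8):
        self.dwell_s = dwell_s
        self._gaze_start = None  # time the current fixation began

    def update(self, gazed_at: bool, now: float) -> bool:
        """Feed one eye-tracker sample; return True on activation."""
        if not gazed_at:
            self._gaze_start = None  # gaze left the button: reset
            return False
        if self._gaze_start is None:
            self._gaze_start = now   # fixation just started
        if now - self._gaze_start >= self.dwell_s:
            self._gaze_start = None  # fire once, then re-arm
            return True
        return False
```

Looking away at any point resets the timer, which is what keeps casual glances from triggering accidental presses.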
Finally, the user can create a virtual world of their own. For example, a virtual environment can be configured from the information entered into input fields, populating the scene with virtual objects that the user then interacts with just as they would in a virtual environment on a computer screen.
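Building a scene from input-field values can be as simple as mapping the entered strings onto a list of object descriptions that a renderer would then draw. The field names and object schema below are invented for this sketch, assuming a form with a count and a size field:

```python
def build_scene(fields: dict) -> list:
    """Turn user-entered field values (strings, as a form would supply
    them) into a list of virtual-object descriptions. The field names
    'cube_count' and 'cube_size' are hypothetical examples."""
    count = int(fields.get("cube_count", 0))
    size = float(fields.get("cube_size", 1.0))
    # Lay the cubes out in a row, spaced two widths apart.
    return [
        {"type": "cube", "size": size, "position": (i * 2.0 * size, 0.0, 0.0)}
        for i in range(count)
    ]
```

The same pattern scales up: richer forms produce richer object descriptions, and the interaction techniques described earlier (gaze selection, dwell activation, joystick locomotion) then operate on the resulting scene.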