Chapter 6 Interfaces


1. Command-Based

1.1. Commands, often abbreviations (e.g. ls), typed at the prompt, to which the system responds (e.g. by listing the current files)

1.1.1. Some are hard-wired at the keyboard; others can be assigned to keys. Efficient, precise, and fast, but with a large overhead in learning the set of commands.

1.2. Research and design issues

1.2.1. Form, names, types, and structure are key research questions. Consistency is the most important design principle, e.g. always use the first letter of the command. Command interfaces remain popular for web scripting.
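The consistency principle above can be sketched in a few lines: abbreviations are derived uniformly (here, the first letter of each command), so once users learn the rule they can predict every abbreviation. All command names and handlers below are illustrative, not from the chapter.

```python
# Minimal sketch of a command-based interface. Abbreviations follow one
# consistent rule (first letter of the command), as the consistency
# principle suggests. All names here are illustrative.

def list_files(files):
    return "\n".join(files)

def remove_file(files, name):
    return [f for f in files if f != name]

# abbreviation -> full command name, derived by one uniform rule
COMMANDS = {"l": "list", "r": "remove"}

def run(command, files):
    """Expand an abbreviation (if any) and dispatch to the action."""
    parts = command.split()
    name = COMMANDS.get(parts[0], parts[0])
    if name == "list":
        return list_files(files)
    if name == "remove":
        return remove_file(files, parts[1])
    raise ValueError(f"unknown command: {command}")

files = ["notes.txt", "report.doc"]
print(run("l", files))            # responds by listing current files
files = run("r notes.txt", files)
print(run("list", files))         # full names work too
```

Because the abbreviation rule is uniform, `l` and `list` (or `r` and `remove`) are interchangeable, which is exactly the predictability the design principle is after.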

2. WIMP and GUI

2.1. Xerox Star: the first WIMP, which gave rise to GUIs

2.1.1. GUIs have the same basic building blocks as WIMPs but are more varied: color, 3D, sound, animation, and many types of menus, icons, and windows

2.1.2. New graphical elements, e.g. toolbars, docks, rollovers

2.1.3. Challenge now is to design GUIs that are best suited for tablet, smartphone and smartwatch interfaces

2.2. Windows could be scrolled, stretched, overlapped, opened, closed, and moved around the screen using the mouse

2.2.1. Windows were invented to overcome the physical constraints of a computer display, enabling more information to be viewed and more tasks to be performed

2.2.2. Scroll bars within windows also enable more information to be viewed

2.2.3. Multiple windows can make it difficult to find the desired one; listing, iconising, and shrinking are techniques that help

2.3. Icons represented applications, objects, commands, and tools that were opened when clicked on

2.4. Menus offering lists of options that could be scrolled through and selected

2.5. Pointing device: a mouse controlling the cursor as the point of entry to the windows, menus, and icons on the screen

3. Multimedia

3.1. Combines different media within a single interface with various forms of interactivity

3.1.1. graphics, text, video, sound, and animations

3.2. Users click on links in an image or text

3.2.1. Another part of the program, an animation, or a video clip is then played; users can return to where they were or move on to another place

3.3. Pros and Cons

3.3.1. Facilitates rapid access to multiple representations of information

3.3.2. Can enable easier learning, better understanding, more engagement, and more pleasure

3.3.3. Can encourage users to explore different parts of a game or story

3.3.4. Tendency to play video clips and animations, while skimming through accompanying text or diagrams

3.4. Research and design issues

3.4.1. How to design multimedia to help users explore, keep track of, and integrate the multiple representations. Provide hands-on interactivities and simulations that the user has to complete to solve a task. Use ‘dynalinking,’ where information depicted in one window explicitly changes in relation to what happens in another (Scaife and Rogers, 1996).
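Dynalinking can be sketched as a simple observer pattern: a change made through one representation is pushed to every linked view. This is a hedged illustration of the idea, not the original Scaife and Rogers implementation; all class and method names are hypothetical.

```python
# Toy sketch of 'dynalinking': two linked representations of the same
# underlying model, where an interaction in one view is explicitly
# reflected in the other. All names are illustrative.

class LinkedModel:
    def __init__(self):
        self.views = []

    def register(self, view):
        self.views.append(view)

    def update(self, value):
        # Propagate the change to every linked representation
        for view in self.views:
            view.refresh(value)

class DiagramView:
    def refresh(self, value):
        self.shown = f"diagram: {value}"

class TextView:
    def refresh(self, value):
        self.shown = f"text: {value}"

model = LinkedModel()
diagram, text = DiagramView(), TextView()
model.register(diagram)
model.register(text)

model.update("3 fish added")   # change made in one place...
print(diagram.shown)           # ...appears in the diagram view
print(text.shown)              # ...and in the linked text view
```

The point of the pattern is that the user never has to integrate the representations manually: manipulating one window visibly drives the other.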

3.4.2. Several guidelines that recommend how to combine multiple media for different kinds of task

4. Virtual reality

4.1. Computer-generated graphical simulations providing:

4.1.1. “the illusion of participation in a synthetic environment rather than external observation of such an environment” (Gigante, 1993)

4.2. Provide new kinds of experience, enabling users to interact with objects and navigate in 3D space

4.3. Create highly engaging user experiences

4.4. Pros and cons

4.4.1. Can have a higher level of fidelity with objects they represent compared to multimedia

4.4.2. Induces a sense of presence where someone is totally engrossed by the experience “a state of consciousness, the (psychological) sense of being in the virtual environment” (Slater and Wilbur, 1999)

4.4.3. Provides different viewpoints: 1st and 3rd person

4.4.4. Head-mounted displays are uncomfortable to wear, and can cause motion sickness and disorientation

4.5. Research and design issues

4.5.1. Much research on how to design safe and realistic VRs to facilitate training, e.g. flight simulators, and to help people overcome phobias (e.g. of spiders or speaking in public)

4.5.2. Design issues: how best to navigate through them (e.g. first versus third person); how to control interactions and movements (e.g. use of head and body movements); how best to interact with information (e.g. use of keypads, pointing, joystick buttons); and what level of realism to aim for to engender a sense of presence

5. Web

5.1. Early websites were largely text-based, providing hyperlinks

5.2. Concern was with how best to structure information to enable users to navigate and access it easily and quickly

5.3. Nowadays, more emphasis on making pages distinctive, striking, and pleasurable

5.4. Need to think about how to design information for multiple platforms: keyboard or touch?

5.4.1. e.g. smartphones, tablets, PCs

6. Consumer electronics and appliances

6.1. Everyday devices in home, public place, or car

6.1.1. e.g. washing machines, remotes, photocopiers, printers, and navigation systems

6.2. And personal devices

6.2.1. e.g. MP3 players, digital clocks, and digital cameras

6.3. Used for short periods

6.3.1. e.g. putting the washing on, watching a program, buying a ticket, changing the time, taking a snapshot

6.4. Need to be usable with minimal, if any, learning

7. Mobile

7.1. Handheld devices intended to be used while on the move

7.2. Have become pervasive, increasingly used in all aspects of everyday and working life

7.3. Apps running on mobiles have greatly expanded, e.g.:

7.3.1. used in restaurants to take orders

7.3.2. at car rentals to check in returned cars

7.3.3. supermarkets for checking stock

7.3.4. in the streets for multi-user gaming

7.3.5. in education to support life-long learning

8. Speech

8.1. Where a person talks with a system that has a spoken language application, e.g. timetable, travel planner

8.2. Used mostly for inquiring about very specific information, e.g. flight times, or to perform a transaction, e.g. buying a ticket

8.3. Also used by people with disabilities

8.3.1. e.g. speech recognition word processors, page scanners, web readers, home control systems
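Behind a flight-times inquiry like the one above, a spoken language application typically transcribes the audio and then fills slots (destination, day) from the transcript. Here is a hedged, toy sketch of just the slot-filling step; the patterns and slot names are illustrative, and a real system would use far more robust language understanding.

```python
import re

# Toy slot-filling for a transcribed flight-times inquiry.
# A real spoken-language system would first run speech recognition;
# here the transcript is assumed, and the patterns are illustrative.

def extract_slots(utterance):
    """Pull a destination and a day out of a 'flight times' request."""
    slots = {}
    dest = re.search(r"\bto (\w+)", utterance)
    day = re.search(r"\bon (\w+)", utterance)
    if dest:
        slots["destination"] = dest.group(1)
    if day:
        slots["day"] = day.group(1)
    return slots

print(extract_slots("what are the flight times to Paris on Monday"))
```

When a slot is missing (e.g. no day was mentioned), a dialog system would prompt the user for it, which is why these interfaces work best for very specific, narrow tasks.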

9. Pen

9.1. Enable people to write, draw, select, and move objects at an interface using lightpens or styluses

9.1.1. capitalize on the well-honed drawing skills developed from childhood

9.2. Digital pens, e.g. Anoto, use a combination of ordinary ink pen with digital camera that digitally records everything written with the pen on special paper

10. Touch

10.1. Touch screens, such as walk-up kiosks, detect the presence and location of a person’s touch on the display

10.2. Multi-touch displays support a range of more dynamic fingertip actions, e.g. swiping, flicking, pinching, pushing, and tapping
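One way to see what pinching means computationally (a minimal sketch, not any platform's actual touch API): compare the distance between two finger contacts across frames. If the distance grows the fingers are spreading (zoom in); if it shrinks they are pinching together (zoom out). Coordinates and thresholds below are illustrative.

```python
import math

# Sketch of multi-touch pinch detection: track the distance between two
# finger contacts across frames. All values here are illustrative.

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_scale(prev_touches, curr_touches):
    """Ratio > 1: fingers spread apart (zoom in); < 1: pinched together."""
    return distance(*curr_touches) / distance(*prev_touches)

prev = [(100, 100), (200, 100)]   # fingers 100 px apart
curr = [(80, 100), (220, 100)]    # fingers now 140 px apart
scale = pinch_scale(prev, curr)
print(f"scale = {scale:.2f}")     # > 1, so this frame is a spread gesture
```

Swipes and flicks are detected analogously from the velocity of a single contact rather than the distance between two.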

10.3. Now used for many kinds of displays, such as smartphones, iPods, tablets, and tabletops

11. Air-based gestures

11.1. Use camera-based recognition, sensors, and computer-vision techniques

11.1.1. can recognize people’s body, arm and hand gestures in a room

11.1.2. systems include Kinect

11.2. Movements are mapped onto a variety of gaming motions, such as swinging, bowling, hitting and punching

11.3. Players are represented on the screen as avatars performing the same actions

12. Haptic

12.1. Tactile feedback

12.1.1. applying vibration and forces to a person’s body, using actuators that are embedded in their clothing or a device they are carrying, such as a smartphone

12.2. Can enrich the user experience or nudge users to correct an error

12.3. Can also be used to simulate the sense of touch between remote people who want to communicate

13. Multi-modal

13.1. Meant to provide enriched and complex user experiences

13.1.1. multiplying how information is experienced and detected using different modalities, i.e. touch, sight, sound, speech

13.1.2. support more flexible, efficient, and expressive means of human–computer interaction

13.1.3. The most common combination is speech and vision

14. Shareable

14.1. Shareable interfaces are designed for more than one person to use

14.1.1. provide multiple inputs and sometimes allow simultaneous input by co-located groups

14.1.2. large wall displays where people use their own pens or gestures

14.1.3. interactive tabletops where small groups interact with information using their fingertips

14.1.4. e.g. DiamondTouch, Smart Table and Surface

15. Tangible

15.1. Type of sensor-based interaction, where physical objects, e.g., bricks, are coupled with digital representations

15.2. When a person manipulates the physical object/s it causes a digital effect to occur, e.g. an animation

15.3. Digital effects can take place in a number of media and places or can be embedded in the physical object

16. Augmented and mixed reality

16.1. Augmented reality - virtual representations are superimposed on physical devices and objects

16.2. Mixed reality - views of the real world are combined with views of a virtual environment

16.3. Many applications including medicine, games, flying, and everyday exploring

17. Wearables

17.1. First developments were head- and eyewear-mounted cameras that enabled the user to record what was seen and to access digital information

17.2. Since then, jewellery, head-mounted caps, smart fabrics, glasses, shoes, and jackets have all been used

17.2.1. provide the user with a means of interacting with digital information while on the move

17.3. Applications include automatic diaries, tour guides, cycle indicators and fashion clothing

18. Robots and drones

18.1. Four types of robot

18.1.1. remote robots used in hazardous settings

18.1.2. domestic robots helping around the house

18.1.3. pet robots as human companions

18.1.4. sociable robots that work collaboratively with humans, and communicate and socialize with them – as if they were our peers

19. Brain-computer interfaces

19.1. Brain–computer interfaces (BCI) provide a communication pathway between a person’s brain waves and an external device, such as a cursor on a screen

19.2. Person is trained to concentrate on the task, e.g. moving the cursor

19.3. BCIs work through detecting changes in the neural functioning in the brain

19.4. BCI applications:

19.4.1. Games

19.4.2. enable people who are paralysed to control robots

20. Which interface?

20.1. Will depend on task, users, context, cost, robustness, etc.

20.2. Mobile platforms taking over from PCs

20.3. Speech interfaces also being used much more for a variety of commercial services

20.4. Appliance and vehicle interfaces becoming more important