Chapter 6 Interfaces

1. Multimedia

1.1. Combines different media within a single interface with various forms of interactivity

1.1.1. graphics, text, video, sound, and animations

1.2. Users click on links in an image or text

1.2.1. this takes them to another part of the program, or an animation or video clip is played; they can then return to where they were or move on to another place

1.3. Pros and Cons

1.3.1. Facilitates rapid access to multiple representations of information

1.3.2. Can enable easier learning, better understanding, more engagement, and more pleasure

1.3.3. Can encourage users to explore different parts of a game or story

1.3.4. Tendency to play video clips and animations, while skimming through accompanying text or diagrams

1.4. Research and design issues

1.4.1. How to design multimedia to help users explore, keep track of, and integrate the multiple representations

1.4.1.1. provide hands-on interactive activities and simulations that the user has to complete to solve a task

1.4.1.2. Use ‘dynalinking,’ where information depicted in one window explicitly changes in relation to what happens in another (Scaife and Rogers, 1996); a code sketch follows this list

1.4.2. Several guidelines that recommend how to combine multiple media for different kinds of task
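
To make the dynalinking idea in 1.4.1.2 concrete, here is a minimal sketch in Python: two views subscribe to one shared model, so a change made through either representation is reflected in the other. The class names and the food-web example are illustrative, not from the chapter.

```python
# Minimal sketch of dynalinking: two representations observe one shared model,
# so a change made in either view is immediately reflected in the other.
# Names (FoodWebModel, DiagramView, SimulationView) are illustrative only.

class FoodWebModel:
    def __init__(self):
        self.observers = []
        self.population = {"fish": 100, "heron": 10}

    def subscribe(self, observer):
        self.observers.append(observer)

    def set_population(self, species, count):
        self.population[species] = count
        for observer in self.observers:          # notify every linked view
            observer.update(species, count)

class DiagramView:
    def update(self, species, count):
        print(f"[diagram] redraw node '{species}' with count {count}")

class SimulationView:
    def update(self, species, count):
        print(f"[simulation] animate '{species}' population -> {count}")

model = FoodWebModel()
model.subscribe(DiagramView())
model.subscribe(SimulationView())
model.set_population("fish", 80)   # both windows change together
```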

2. Virtual reality

2.1. Computer-generated graphical simulations providing:

2.1.1. “the illusion of participation in a synthetic environment rather than external observation of such an environment” (Gigante, 1993)

2.2. Provide new kinds of experience, enabling users to interact with objects and navigate in 3D space

2.3. Create highly engaging user experiences

2.4. Pros and cons

2.4.1. Can have a higher level of fidelity to the objects they represent than multimedia

2.4.2. Induces a sense of presence where someone is totally engrossed by the experience

2.4.2.1. “a state of consciousness, the (psychological) sense of being in the virtual environment” (Slater and Wilbur, 1999)

2.4.3. Provides different viewpoints: 1st and 3rd person

2.4.4. Head-mounted displays are uncomfortable to wear, and can cause motion sickness and disorientation

2.5. Research and design issues

2.5.1. Much research on how to design safe and realistic VRs to facilitate training

2.5.1.1. e.g. flight simulators

2.5.1.2. help people overcome phobias (e.g. spiders, talking in public)

2.5.2. Design issues

2.5.2.1. how best to navigate through them (e.g. first versus third person; see the camera sketch after this list)

2.5.2.2. how to control interactions and movements (e.g. use of head and body movements)

2.5.2.3. how best to interact with information (e.g. use of keypads, pointing, joystick buttons)

2.5.2.4. level of realism to aim for to engender a sense of presence
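
The first- versus third-person choice in 2.5.2.1 comes down to where the virtual camera sits relative to the user's avatar. A minimal sketch, with illustrative offsets rather than values from any real VR system:

```python
# Sketch of the first- vs third-person viewpoint choice (2.5.2.1):
# the only difference is where the camera sits relative to the avatar.
# All names and offsets here are illustrative.

def camera_position(avatar_pos, avatar_forward, mode):
    x, y, z = avatar_pos
    fx, fy, fz = avatar_forward            # unit vector the avatar faces
    if mode == "first_person":
        return (x, y + 1.7, z)             # roughly eye height, at the avatar
    elif mode == "third_person":
        # a few metres behind and above the avatar, looking over its shoulder
        return (x - 4.0 * fx, y + 2.5, z - 4.0 * fz)
    raise ValueError(f"unknown mode: {mode}")

print(camera_position((0, 0, 0), (0, 0, 1), "first_person"))   # (0, 1.7, 0)
print(camera_position((0, 0, 0), (0, 0, 1), "third_person"))   # (0.0, 2.5, -4.0)
```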

3. Web

3.1. Early websites were largely text-based, providing hyperlinks

3.2. Concern was with how best to structure information to enable users to navigate and access it easily and quickly

3.3. Nowadays, more emphasis on making pages distinctive, striking, and pleasurable

3.4. Need to think about how to design information for multiple platforms - keyboard or touch input?

3.4.1. e.g. smartphones, tablets, PCs
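
One hedged way to read 3.4 in code: the same information gets a different presentation depending on the input and screen capabilities of the platform. The device categories, breakpoint, and layout descriptions below are hypothetical.

```python
# Sketch of choosing a layout for the same content on different platforms.
# The capability check, breakpoint, and layout names are hypothetical.

def choose_layout(has_touch, screen_width_px):
    if has_touch and screen_width_px < 600:
        return "single-column layout, large tap targets, collapsed menu"
    if has_touch:
        return "tablet layout, touch-sized controls, two columns"
    return "desktop layout, hover menus, keyboard shortcuts enabled"

print(choose_layout(has_touch=True, screen_width_px=390))    # smartphone
print(choose_layout(has_touch=False, screen_width_px=1920))  # PC
```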

4. Mobile

4.1. Handheld devices intended to be used while on the move

4.2. Have become pervasive, increasingly used in all aspects of everyday and working life

4.3. The range of apps running on mobiles has greatly expanded, e.g.

4.3.1. used in restaurants to take orders

4.3.2. at car rentals to check in returned cars

4.3.3. in supermarkets for checking stock

4.3.4. in the streets for multi-user gaming

4.3.5. in education to support life-long learning

5. Pen

5.1. Enable people to write, draw, select, and move objects at an interface using lightpens or styluses

5.1.1. capitalize on the well-honed drawing skills developed from childhood

5.2. Digital pens, e.g. Anoto, combine an ordinary ink pen with a digital camera that records everything written with the pen on special paper

6. Air-based gestures

6.1. Uses cameras, sensors, and computer vision techniques

6.1.1. can recognize people’s body, arm and hand gestures in a room

6.1.2. systems include Kinect

6.2. Movements are mapped onto a variety of gaming motions, such as swinging, bowling, hitting and punching

6.3. Players are represented on the screen as avatars performing the same actions
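
A rough sketch of the mapping in 6.2: one tracked hand position per camera frame, with a gesture reported when the hand moves fast enough. The positions, frame rate, and threshold are made up; a real system such as Kinect exposes a much richer skeleton stream.

```python
# Rough sketch of mapping tracked hand positions to a 'swing' gesture:
# if the hand moves fast enough between frames, count it as a swing.
# Positions, frame rate, and threshold are made up for illustration.

import math

def detect_swing(hand_positions, frame_dt=1 / 30, speed_threshold=2.0):
    """hand_positions: list of (x, y, z) in metres, one per camera frame."""
    for prev, curr in zip(hand_positions, hand_positions[1:]):
        speed = math.dist(prev, curr) / frame_dt   # metres per second
        if speed > speed_threshold:
            return True                            # fast enough -> swing
    return False

frames = [(0.0, 1.0, 2.0), (0.02, 1.0, 2.0), (0.30, 1.05, 2.0)]
print(detect_swing(frames))   # True: the hand jumped ~0.28 m in one frame
```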

7. Multi-modal

7.1. Meant to provide enriched and complex user experiences

7.1.1. multiplying how information is experienced and detected using different modalities, i.e. touch, sight, sound, speech

7.1.2. support more flexible, efficient, and expressive means of human–computer interaction

7.1.3. The most common combination is speech and vision
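
A toy example of combining the two most common modalities (7.1.3): a spoken command containing deictic words such as 'that' and 'there' is resolved using what the user is pointing at, in the spirit of 'put that there' systems. The recognised utterance and pointing targets are hypothetical inputs.

```python
# Sketch of speech + gesture fusion: a spoken command with deictic words
# ("that", "there") is resolved using where the user is pointing.
# The recognised utterance and pointing targets are hypothetical inputs.

def fuse(utterance, pointed_objects):
    """pointed_objects: objects under the pointing gesture, in time order."""
    words = utterance.lower().split()
    if "move" in words and "that" in words and "there" in words:
        target, destination = pointed_objects[0], pointed_objects[1]
        return f"move {target} to {destination}"
    return "command not understood"

print(fuse("move that there", ["red block", "top shelf"]))
# -> move red block to top shelf
```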

8. Tangible

8.1. Type of sensor-based interaction, where physical objects, e.g., bricks, are coupled with digital representations

8.2. When a person manipulates the physical object(s), it causes a digital effect to occur, e.g. an animation

8.3. Digital effects can take place in a number of media and places or can be embedded in the physical object
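
A minimal sketch of the coupling in 8.1 and 8.2: each physical object carries a tag, and placing it on a sensing surface triggers a digital effect. The tag IDs and effects are made up.

```python
# Minimal sketch of a tangible interface: placing a tagged physical brick
# on a sensing surface triggers a digital effect. Tag IDs and effects are
# made up for illustration.

EFFECTS = {
    "brick-01": "play rain animation on the tabletop",
    "brick-02": "show temperature graph next to the brick",
}

def on_object_placed(tag_id, position):
    effect = EFFECTS.get(tag_id, "no effect registered for this object")
    print(f"object {tag_id} placed at {position}: {effect}")

on_object_placed("brick-01", (120, 45))   # physical action -> digital effect
```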

9. Wearables

9.1. First developments were head- and eyewear-mounted cameras that enabled the user to record what was seen and to access digital information

9.2. Since then, jewellery, head-mounted caps, smart fabrics, glasses, shoes, and jackets have all been used

9.2.1. provide the user with a means of interacting with digital information while on the move

9.3. Applications include automatic diaries, tour guides, cycle indicators and fashion clothing

10. Brain-computer interfaces

10.1. Brain–computer interfaces (BCI) provide a communication pathway between a person’s brain waves and an external device, such as a cursor on a screen

10.2. Person is trained to concentrate on the task, e.g. moving the cursor

10.3. BCIs work by detecting changes in neural functioning in the brain (a toy control loop is sketched after this list)

10.4. BCI applications:

10.4.1. Games

10.4.2. enable people who are paralysed to control robots
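
The toy control loop promised in 10.3: a single feature extracted from brain activity (here an invented 'band power' number per time window) is compared with a calibrated threshold and mapped onto cursor movement. The values are illustrative, not real EEG data.

```python
# Toy sketch of a BCI control loop: a feature extracted from brain activity
# (one 'band power' number per time window) is compared with a calibrated
# threshold and mapped onto cursor movement.
# The feature values and threshold are illustrative, not real EEG data.

def cursor_step(band_power, threshold=5.0, step_px=10):
    if band_power > threshold:
        return +step_px     # e.g. user concentrates on moving right
    return -step_px         # otherwise drift the cursor back

cursor_x = 0
for power in [3.1, 6.4, 7.0, 2.2]:      # one value per time window
    cursor_x += cursor_step(power)
print(cursor_x)                          # 0: two steps right, two steps left
```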

11. Command-Based

11.1. Commands, such as abbreviations (e.g. ls), are typed in at the prompt, to which the system responds (e.g. by listing the current files)

11.1.1. Some are hard-wired to the keyboard; others can be assigned to particular keys

11.1.2. Efficient, precise, and fast

11.1.3. Large overhead to learning a set of commands

11.2. Research and design issues

11.2.1. Form, name types, and structure are key research questions

11.2.2. Consistency is the most important design principle, e.g. always use the first letter of the command

11.2.3. Command interfaces remain popular for web scripting
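
A minimal command interface in the spirit of 11.1, built on Python's standard cmd module, with command names following the first-letter consistency rule. The command set is illustrative.

```python
# Minimal command interface: short typed commands at a prompt, built on
# Python's standard cmd module. Command names follow the 'first letter'
# consistency rule; the command set is illustrative.

import cmd
import os

class MiniShell(cmd.Cmd):
    prompt = "> "

    def do_l(self, arg):
        """l : list files in the current directory (like 'ls')."""
        for name in sorted(os.listdir(arg or ".")):
            print(name)

    def do_q(self, arg):
        """q : quit the shell."""
        return True          # returning True ends cmdloop()

if __name__ == "__main__":
    MiniShell().cmdloop("Type l to list files, q to quit, help for help.")
```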

12. WIMP and GUI

12.1. The Xerox Star was the first WIMP and gave rise to GUIs

12.1.1. Same basic building blocks as WIMPs but more varied

12.1.1.1. Color, 3D, sound, animation; many types of menus, icons, and windows

12.1.2. New graphical elements, e.g.

12.1.2.1. toolbars, docks, rollovers

12.1.3. Challenge now is to design GUIs that are best suited for tablet, smartphone and smartwatch interfaces

12.2. Windows could be scrolled, stretched, overlapped, opened, closed, and moved around the screen using the mouse

12.2.1. Windows were invented to overcome physical constraints of a computer display

12.2.1.1. enable more information to be viewed and tasks to be performed

12.2.2. Scroll bars within windows also enable more information to be viewed

12.2.3. Multiple windows can make it difficult to find desired one

12.2.3.1. listing, iconising, shrinking are techniques that help

12.3. Icons represented applications, objects, commands, and tools that were opened when clicked on

12.4. Menus offering lists of options that could be scrolled through and selected

12.5. Pointing device: a mouse controlling the cursor as the point of entry to the windows, menus, and icons on the screen
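
The four WIMP elements above map directly onto the widgets of any GUI toolkit. A small sketch using Python's standard tkinter: a window, a menu, a button standing in for an icon, and the mouse pointer handled by the toolkit. The labels are illustrative.

```python
# Small WIMP sketch with Python's standard tkinter: a Window, a Menu,
# a Button standing in for an Icon, and the mouse Pointer handled by
# the toolkit. The labels are illustrative.

import tkinter as tk

root = tk.Tk()                      # the window: movable, resizable, closable
root.title("WIMP sketch")

menubar = tk.Menu(root)             # the menu bar
filemenu = tk.Menu(menubar, tearoff=0)
filemenu.add_command(label="Quit", command=root.destroy)
menubar.add_cascade(label="File", menu=filemenu)
root.config(menu=menubar)

# an icon-like element the pointer can click on
tk.Button(root, text="Open document",
          command=lambda: print("document opened")).pack(padx=40, pady=40)

root.mainloop()
```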

13. Consumer electronics and appliances

13.1. Everyday devices in home, public place, or car

13.1.1. e.g. washing machines, remotes, photocopiers, printers, and navigation systems

13.2. And personal devices

13.2.1. e.g. MP3 player, digital clock and digital camera

13.3. Used for short periods

13.3.1. e.g. putting the washing on, watching a program, buying a ticket, changing the time, taking a snapshot

13.4. Need to be usable with minimal, if any, learning

14. Speech

14.1. Where a person talks with a system that has a spoken language application, e.g. timetable, travel planner

14.2. Used mostly for inquiring about very specific information, e.g. flight times, or to perform a transaction, e.g. buying a ticket

14.3. Also used by people with disabilities

14.3.1. e.g. speech recognition word processors, page scanners, web readers, home control systems
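
The 'very specific information' pattern in 14.2 usually amounts to a small slot-filling dialog. This toy sketch skips the actual speech recognition and works on text a recogniser would have produced; the timetable entries are made up.

```python
# Toy sketch of a spoken timetable query (14.1/14.2), working on text that a
# speech recogniser would have produced. The timetable entries are made up.

TIMETABLE = {("london", "paris"): ["07:01", "09:24", "12:24"]}

def answer(utterance):
    words = utterance.lower().replace("?", "").split()
    if "from" in words and "to" in words:
        origin = words[words.index("from") + 1]
        destination = words[words.index("to") + 1]
        times = TIMETABLE.get((origin, destination))
        if times:
            return f"Trains from {origin} to {destination}: {', '.join(times)}"
    return "Sorry, I didn't catch that. Which route do you want?"

print(answer("When is the next train from London to Paris?"))
```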

15. Touch

15.1. Touch screens, such as walk-up kiosks, detect the presence and location of a person’s touch on the display

15.2. Multi-touch screens support a range of more dynamic fingertip actions, e.g. swiping, flicking, pinching, pushing, and tapping

15.3. Now used for many kinds of displays, such as Smartphones, iPods, tablets and tabletops
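
Of the fingertip actions in 15.2, pinching is the simplest to express: compare the distance between two touch points at the start and end of the gesture. The coordinates below are made up.

```python
# Sketch of pinch detection on a multi-touch screen (15.2): if the distance
# between two touch points shrinks, it is a pinch-in (zoom out); if it grows,
# a pinch-out (zoom in). Coordinates are in pixels and made up.

import math

def classify_pinch(start_touches, end_touches, min_change_px=20):
    (a1, a2), (b1, b2) = start_touches, end_touches
    d_start = math.dist(a1, a2)
    d_end = math.dist(b1, b2)
    if d_end < d_start - min_change_px:
        return "pinch in (zoom out)"
    if d_end > d_start + min_change_px:
        return "pinch out (zoom in)"
    return "no pinch"

print(classify_pinch([(100, 100), (200, 100)],    # fingers 100 px apart
                     [(140, 100), (160, 100)]))   # now 20 px apart -> pinch in
```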

16. Haptic

16.1. Tactile feedback

16.1.1. applying vibration and forces to a person’s body, using actuators that are embedded in their clothing or a device they are carrying, such as a smartphone

16.2. Can enrich the user experience or nudge users to correct an error

16.3. Can also be used to simulate the sense of touch between remote people who want to communicate
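
A haptic nudge (16.2) is typically just a short, timed actuator pulse. This sketch uses a hypothetical Actuator class; it does not assume any real device API.

```python
# Sketch of a haptic 'nudge to correct an error' (16.2). The Actuator class
# is hypothetical; a real device would expose its own vibration API.

import time

class Actuator:
    def vibrate(self, milliseconds):
        print(f"vibrating for {milliseconds} ms")
        time.sleep(milliseconds / 1000)

def nudge_on_error(actuator, error_detected):
    if error_detected:
        for _ in range(2):            # two short pulses feel like a warning
            actuator.vibrate(100)
            time.sleep(0.1)

nudge_on_error(Actuator(), error_detected=True)
```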

17. Shareable

17.1. Shareable interfaces are designed for more than one person to use

17.1.1. provide multiple inputs and sometimes allow simultaneous input by co-located groups

17.1.2. large wall displays where people use their own pens or gestures

17.1.3. interactive tabletops where small groups interact with information using their fingertips

17.1.4. e.g. DiamondTouch, Smart Table and Surface
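
One simple way a shareable tabletop can attribute simultaneous touches (17.1.1) to the people sitting around it is by the edge each touch is nearest to. The table size and touch points below are made up.

```python
# Sketch of attributing simultaneous touches on a shared tabletop (17.1.1)
# to the people sitting around it, using which edge the touch is nearest.
# Table size and touch points are made up.

TABLE_W, TABLE_H = 1200, 800   # pixels

def nearest_side(x, y):
    distances = {"left": x, "right": TABLE_W - x,
                 "top": y, "bottom": TABLE_H - y}
    return min(distances, key=distances.get)

touches = [(60, 400), (1150, 390), (600, 30)]   # three simultaneous touches
for x, y in touches:
    print(f"touch at ({x}, {y}) -> person on the {nearest_side(x, y)}")
```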

18. Augmented and mixed reality

18.1. Augmented reality - virtual representations are superimposed on physical devices and objects

18.2. Mixed reality - views of the real world are combined with views of a virtual environment

18.3. Many applications including medicine, games, flying, and everyday exploring
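
Superimposing a virtual label on a physical object (18.1) ultimately requires projecting the object's 3D position into 2D screen coordinates. A simplified pinhole-projection sketch; the focal length, principal point, and object position are made up.

```python
# Simplified sketch of AR overlay (18.1): project an object's 3D position
# (in camera coordinates, metres) to 2D screen pixels, then draw the label
# there. Pinhole model; focal length and principal point are made up.

def project(point_3d, focal_px=800, cx=640, cy=360):
    x, y, z = point_3d                  # z is the distance in front of camera
    u = cx + focal_px * x / z
    v = cy + focal_px * y / z
    return int(u), int(v)

def draw_label(text, point_3d):
    u, v = project(point_3d)
    print(f"draw '{text}' at screen pixel ({u}, {v})")

draw_label("coffee machine", (0.5, -0.2, 2.0))   # half a metre right, 2 m away
```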

19. Robots and drones

19.1. Four types of robot

19.1.1. remote robots used in hazardous settings

19.1.2. domestic robots helping around the house

19.1.3. pet robots as human companions

19.1.4. sociable robots that work collaboratively with humans, and communicate and socialize with them – as if they were our peers

20. Which interface?

20.1. Will depend on task, users, context, cost, robustness, etc.

20.2. Mobile platforms taking over from PCs

20.3. Speech interfaces also being used much more for a variety of commercial services

20.4. Appliance and vehicle interfaces becoming more important