Tuesday, July 29, 2008
Thursday, February 28, 2008
Signing off for the time being...
I'm teaching graphic design at Hagerstown Community College, so this blog is in suspension for now.
-BT
Tuesday, November 28, 2006
The Six Disciplines of User Experience
User Experience (UE) is not a single discipline. At least six skills are needed, and they are almost never found in the same person. There are few places that provide education in this area. In any event, the six skill sets cut across traditional academic disciplines. Here is what it takes:
1. Field studies People to observe potential users in their normal settings, the better to determine real user needs. Training for this discipline is most apt to come from anthropology and sociology, where the skills of careful, systematic observation are taught.
2. Behavioral designers Those who can create a cohesive conceptual model for the product, a model that is consistent, is easy to learn and understand, and will form the basis for engineering design. The behavioral designers work from a detailed task analysis of typical action sequences that are required for the tasks to be supported. They must ascertain that the solution provides support for the work flow, not just for each isolated action. Behavioral design has to mesh the task requirements with the skills, knowledge, and capabilities of the intended users. Skills in behavioral design are most apt to come from cognitive science and experimental psychology, especially from programs in human-computer interaction.
3. Model builders and rapid prototypers Those who can rapidly build product mock-ups, pretend systems that can be tested immediately, even before the real technology is ready. It often takes three people to cover the capabilities required by this task: programming, designing electrical circuits, and building mechanical models. Here the skills typically come from computer programming, electrical and mechanical engineering, and model building of the sort usually taught in schools of architecture and industrial design.
4. User-testers People who understand the pitfalls of experimental tests and who can do feasibility and usability studies quickly and efficiently, with one-day turnaround time. These rapid user-testing studies of the prototypes allow for rapid iteration of designs, the better to meet the real needs of the users. The results will be approximate rather than exact, which is usually sufficient, since in industry we are looking for big effects, not the small phenomena of interest to the scientist. These are the skills of experimental psychology, although what is needed in practice has to be much faster and much less labor-intensive than traditional laboratory experiments.
5. Graphical and industrial designers Those who possess the design skills that combine science and a rich body of experience with art and intuition. Here is where "joy" and "pleasure" come into the equation: joy of ownership, joy of use. This part of the design must satisfy many constraints. It must merge the conceptual model and behavioral aspects of the product with the various size, power, heat dissipation, and other requirements of the technology, yet produce a device that is aesthetically pleasing ("a joy to own"), cost-efficient, and consistent with the demands of manufacturing. These skills are most frequently taught in schools of art, design, and architecture.
6. Technical writers People whose goal should be to show the technologists how to build things that do not require manuals.
Monday, November 27, 2006
Results from World Usability Day

November 14 was World Usability Day. To get a good review of what happened, go here:
http://www.worldusabilityday.org.
Wednesday, August 03, 2005
Making Forms Usable
I recently came across a problem in my role as a contractor at the IRS: reviewing a form that agency users file to get a Section 508 determination on proposed purchases. Don't get me wrong, this item is not about 508. It is about form layout.
My major problem with this form was in the flow of cells which had questions to be answered or items to be filled out.
They started with a horizontal row: Cell a, Cell b, then, unexpectedly, Cell e2.
Cells c and d were in the vertical column under a, followed by Cell e1. Then it went across the row to Cell f under e2. Finally it went to Cell g under f.
The signature cells went across the bottom in another row.
Not only did this format not follow the eye, it switched its reading direction back and forth from horizontal to vertical at will. There was plenty of white space on the form to rearrange it either all vertical in two columns, or all horizontal in one big column. The goal of getting the whole form onto a single page would have been met either way.
Rule: When you design a form, put yourself in the position of the most confused (or confusable) user.
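The rule can even be checked mechanically. Here is a small sketch (the cell names and grid positions are my reconstruction from the description above, not the actual form) that models each cell's position and tests whether the fill-out sequence follows a consistent reading direction:

```python
# A sketch of checking a form's fill-out order against natural reading
# order. Cell positions are hypothetical, reconstructed from the post.

def is_row_major(cells):
    """True if cells are visited left-to-right, top-to-bottom."""
    return cells == sorted(cells, key=lambda c: (c[0], c[1]))

def is_column_major(cells):
    """True if cells are visited top-to-bottom, column by column."""
    return cells == sorted(cells, key=lambda c: (c[1], c[0]))

# (row, col) positions in the order the form asks them to be filled:
# a, b, e2 across the top; then c, d, e1 down the left; then f, g.
form_order = [(0, 0), (0, 1), (0, 2),   # a, b, e2
              (1, 0), (2, 0), (3, 0),   # c, d, e1
              (1, 2), (2, 2)]           # f, g

print(is_row_major(form_order))     # False: the order fights the eye
print(is_column_major(form_order))  # False either way
```

A form that passes either check reads in one direction throughout; this one fails both, which is exactly the complaint above.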
Friday, June 10, 2005
Small Businesses Need Usability Testing, Too...
One of the biggest mistakes a small business makes in setting up a web site is failure to get outside feedback on usability.
"Usability" is now more than a buzzword. It has emerged as a significant metric for how Web sites are viewed today. Usability surveys, usability tests, usability scores and usability focus groups are all part of the research and development of most large Web sites.
Brent Melson, of Philadelphia's National Software Testing Labs, finds that many smaller e-business operators don't get usability feedback from anyone beyond those on their development team. But those developers and others are too close to the process and biased toward the chosen design and infrastructure:
"You get used to your site and used to any foibles. You need to hear from people who aren't working on it."
For small businesses, organizing a focus group to evaluate your Web site is beyond your time and resources. But getting some sort of outside perspective, be it employees not involved in the design, or your spouses or friends, is crucial to the site's development and performance.
Outside Usability testing is not as expensive as you may think. For more info, contact Bill T. at btchakir@mac.com.
Tuesday, May 31, 2005
California State University, Northridge (CSUN) Conference set
CSUN's 21st Annual International Conference
"Technology and Persons with Disabilities"
March 20-25, 2006 ~ Los Angeles, CA
CSUN's 21st Annual International Conference, "Technology and Persons with Disabilities" will be held at the Hilton Los Angeles Airport and Los Angeles Airport Marriott Hotels, March 20-25, 2006. A preregistration brochure with complete information about the conference will be available in early January 2006. Check our website regularly for conference information updates at: http://www.csun.edu/cod
Questions:
Center on Disabilities
California State University, Northridge
18111 Nordhoff Street
Northridge, CA 91330-8340
Phone: 818/677-2578
Fax: 818/677-4929
Email: ctrdis@csun.edu
Website: http://www.csun.edu/cod
Thursday, January 06, 2005
Accessibility Testing - Overview
What is Accessibility Testing?
Accessibility testing is the mechanism of ensuring that electronic information technology is accessible to people with disabilities. In 1998, Congress amended the Rehabilitation Act to add Section 508, eliminating barriers in information technology, making available new opportunities for people with disabilities, and encouraging development of technologies that help achieve these goals. The law applies to all Federal agencies when they develop, procure, maintain, or use electronic and information technology. Under Section 508 (29 U.S.C. 794d), agencies must give disabled employees and members of the public access to information that is comparable to the access available to others.
Test Objectives:
Accessibility (or 508) testing must address the following items mandated by the law:
(a) All functions can be run from a keyboard.
When software is designed to run on a system that has a keyboard, product functions shall be executable from a keyboard where the function itself or the result of performing a function can be discerned textually.
(b) Built-in and documented accessibility features shall not be disrupted or disabled.
Applications shall not disrupt or disable activated features of other products that are identified as accessibility features, where those features are developed and documented according to industry standards. Applications also shall not disrupt or disable activated features of any operating system that are identified as accessibility features where the application programming interface for those accessibility features has been documented by the manufacturer of the operating system and is available to the product developer.
(c) Assistive technology can track focus and changes.
A well-defined on-screen indication of the current focus shall be provided that moves among interactive interface elements as the input focus changes. The focus shall be programmatically exposed so that assistive technology can track focus and focus changes.
(d) Text equivalents provide identity, operation and the state of the user interface to assistive technology.
Sufficient information about a user interface element including the identity, operation and state of the element shall be available to assistive technology. When an image represents a program element, the information conveyed by the image must also be available in text.
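For web content, the text-equivalent half of this provision can be checked automatically. Below is a minimal sketch using Python's standard-library HTML parser to flag images with no alt text; the sample markup and file names are invented for illustration:

```python
# A minimal sketch of checking provision (d) for web content: every
# <img> should carry a text equivalent (an alt attribute). The sample
# markup below is invented for illustration.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # src of each image lacking alt text

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "<unknown>"))

checker = AltTextChecker()
checker.feed('<p><img src="logo.gif" alt="Agency logo">'
             '<img src="submit.gif"></p>')
print(checker.missing)  # ['submit.gif']
```

Tools like Bobby perform this kind of scan (among many others); the point of the sketch is that the check itself is simple and lends itself to scripting.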
(e) All bitmap images must be consistent.
When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application’s performance.
(f) Operating system must provide text display functions.
Textual information shall be provided through operating system functions for displaying text. The minimum information that shall be made available is text content, text input caret location, and text attributes.
(g) No overriding of user selected display attributes.
Applications shall not override user selected contrast and color selections and other individual display attributes.
(h) User must be able to opt for non-animated presentation mode.
When animation is displayed, the information shall be displayable in at least one non-animated presentation mode at the option of the user.
(i) Information may not be conveyed by color coding alone.
Color coding shall not be used as the only means of conveying information, indicating an action, prompting a response, or distinguishing a visual element.
(j) A range of contrast levels must be provided.
When a product permits a user to adjust color and contrast settings, a variety of color selections capable of producing a range of contrast levels shall be provided.
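The provision itself gives no formula for "a range of contrast levels," but contrast between two colors can be quantified with the relative-luminance ratio later formalized in WCAG. The sketch below is illustrative, not normative for 508:

```python
# Contrast ratio between two sRGB colors, per the WCAG relative-
# luminance definition. Offered as one way to quantify "a range of
# contrast levels"; the 508 provision itself names no formula.

def luminance(rgb):
    """Relative luminance of an sRGB color, channels 0-255."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Ratio from 1:1 (identical) up to 21:1 (black on white)."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

A product that lets users pick color pairs spanning ratios from near 1:1 up toward 21:1 clearly offers "a range of contrast levels."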
(k) Avoid particular frequencies of flashing elements.
Software shall not use flashing or blinking text, objects, or other elements having a flash or blink frequency greater than 2 Hz and lower than 55 Hz.
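Given timestamps at which a blinking element toggles state, the prohibited band is easy to test for. A sketch (the timing data is invented):

```python
# A sketch of the provision (k) check: estimate an element's blink
# frequency from the times (in seconds) at which it toggles state,
# and flag frequencies in the prohibited 2-55 Hz band.

def blink_frequency(toggle_times):
    """Blinks per second; one full blink is two toggles (on + off)."""
    if len(toggle_times) < 3:
        return 0.0
    span = toggle_times[-1] - toggle_times[0]
    return (len(toggle_times) - 1) / 2.0 / span

def violates_508(freq_hz):
    return 2.0 < freq_hz < 55.0

# An element toggling every 0.1 s blinks at 5 Hz -- prohibited:
times = [i * 0.1 for i in range(11)]
print(violates_508(blink_frequency(times)))  # True
```

A cursor blinking about once per second (1 Hz) passes; a 5 Hz flasher fails.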
(l) Electronic forms must be made accessible through assistive technology.
When electronic forms are used, the form shall allow people using assistive technology to access the information, field elements, and functionality required for completion and submission of the form, including all directions and cues.
Test Environment Planning
Specialized tools (such as Bobby, WinScreamer, etc.) are required for accessibility testing. These tools are needed both to test applications directly as written in original source code (for instance, SAP or Citrix applications) and to test applications written as web-accessible products (for instance, HTML and XML). These tools must be installed on applicable testing workstations prior to testing.
Test Design Strategy
Minimal effort is required to develop accessibility test scripts because the tests are tool driven. Test scripts are built into the tool to address all paragraphs in the law.
Test Design for accessibility:
• Develop test cases to describe the conditions, data and expected results for a particular test object
• Identify web accessible products for which multiple scripts will be required.
• Identify stakeholders to participate in the analysis of the accessibility deficiencies identified in the reports returned by the tool.
Wednesday, January 05, 2005
Another white paper on Usability Testing...
ABSTRACT
Usability testing is a dynamic process that can be used throughout the process of developing interactive multimedia software. The purpose of usability testing is to find problems and make recommendations to improve the utility of a product during its design and development. For developing effective interactive multimedia software, dimensions of usability testing were classified into the general categories of: learnability; performance effectiveness; flexibility; error tolerance and system integrity; and user satisfaction. In the process of usability testing, evaluation experts consider the nature of users and tasks, tradeoffs supported by the iterative design paradigm, and real world constraints to effectively evaluate and improve interactive multimedia software. Different methods address different purposes and involve a combination of user and usability testing; however, usability practitioners follow the seven general procedures of usability testing for effective multimedia development. As the knowledge about usability testing grows, evaluation experts will be able to choose more effective and efficient methods and techniques that are appropriate to their goals.
Usability Testing
Usability can be defined as "a measure of the ease with which a system can be learned or used, its safety, effectiveness and efficiency, and attitude of its users towards it" (Preece et al., 1994, p. 722). Based upon this definition, the usability of multimedia software can be measured by how easily and effectively a specific user can use the multimedia program, given particular kinds of support, to carry out a fixed set of tasks, in a defined set of environments (Chapanis, 1991).
Usability testing determines whether a system meets a pre-defined, quantifiable level of usability for specific types of users carrying out specific tasks. Traditionally, software products including information materials and multimedia software have been evaluated by means of marketplace reviews, magazine reviews, and beta tests, but these approaches leave too little time for major modifications and improvement of products (Reed, 1992; Skelton, 1992). As the process of observing and collecting data from users while they interact with multimedia prototypes, usability testing can be used to address and solve a system's usability problems before it goes into production.
The aim of usability testing is not to solve problems, or to enable a quantitative assessment of usability (Patterson, 1994). It provides a means of identifying problem areas and extracting information concerning problems, difficulties, weaknesses, and areas for improvement. Even if usability testing should reveal difficulties or faults that cannot be corrected in the model under development, the information is still important for the designers in planning for the future release of a product (Chapanis, 1991; Dieli, 1989).
Usability testing may serve a number of different purposes: to improve an existing product; to compare two or more products; to measure a system against a standard or a set of guidelines (Lindgaard, 1994). It can also be used as a comparison test, where the usability of a product is compared against competitors' products, and as a verification tool: a way to check user reaction to new features (Reed, 1992).
Usability testing is concerned with ‘fitness for use of a system,’ and as such it can be a powerful instructional systems development (ISD) tool for identifying problems with multimedia interface as defined by the specific user rather than the interface as designed by the instructional systems designers (Davies, 1995). With usability testing, rapid prototyping in the multimedia production process is beginning to emerge as a way to test design approaches and user interfaces, and will reduce the software development cycle while at the same time increasing effectiveness (Henson & Knezek, 1991; Northrup, 1995).
Reed (1992) indicates maxims of usability for software developers: (a) design for the software end user, not for the designers/clients; (b) test the multimedia software, not the user; (c) test usability with real users early and often; (d) don’t test everything at once; (e) measure performance of real-world tasks with software, not functionality of the program; and (f) test usability problems that software designers never imagined.
Assessment Testing
At the intermediate point of the web site development cycle, real time trials may be conducted to evaluate user performance rather than user thought processes. Assessment tests allow designers to focus on specific usability deficiencies while gathering other task-pertinent data. Although hints may be provided to a user under performance duress, there is little interaction between the user and the test administrator. This affords a greater opportunity to collect quantitative data that may bolster the results gathered from an exploratory test. The number of hints provided by the test administrator, errors committed by all users, successful task completions among all users, and the elapsed time for task completion are examples of documented critical appraisals. Given any specific goal of a web site, an error free sales transaction for instance, the assessment test may be used to measure an initial benchmark for user performance within a web site.
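Those documented appraisals (hints, errors, completions, elapsed time) roll up into a benchmark with very little machinery. A sketch, using invented session records:

```python
# A sketch of the quantitative roll-up an assessment test produces:
# per-participant task records (sample data invented for illustration)
# summarized into completion rate, mean time, and total hints.

sessions = [
    {"completed": True,  "seconds": 95,  "hints": 0, "errors": 1},
    {"completed": True,  "seconds": 140, "hints": 2, "errors": 3},
    {"completed": False, "seconds": 300, "hints": 3, "errors": 5},
]

completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
done = [s["seconds"] for s in sessions if s["completed"]]
mean_time = sum(done) / len(done)  # mean time of successful completions
total_hints = sum(s["hints"] for s in sessions)

print(f"{completion_rate:.0%} completed")  # 67% completed
print(f"{mean_time:.1f} s mean time")      # 117.5 s mean time
print(f"{total_hints} hints given")        # 5 hints given
```

Numbers like these become the initial benchmark that later validation tests are measured against.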
Validation Testing
Validation tests gauge how a web site compares to a competitor, a company historical standard, or to a project specific objective. This test measures consistency among all users against a predetermined benchmark and is therefore quantitative in nature. Because the validation test is used to accredit the site's ability to meet its designers' goals, it is conducted near the end of a web site's development cycle. The test is capable of determining which criteria are being met and reasons for those not met. For validation testing, the parameters of performance among targeted users typically exceed speed and accuracy. They can include ranking preferences both within the site itself and amongst similar competitor sites. In addition, all site components are tested as a complete package. For example, evaluating a search engine's efficiency works in conjunction with the amount of information available within the site. If there are deficiencies, the validation test affords management the opportunity to delay going live in order to fine-tune the site, prepare a response for public relations purposes, or effectively train a support team for predicted user difficulty.
A usability test is a formal evaluation process that has as its goal improvement of the usability of the product being tested. It differs from a quality assurance or quality test, which has as its goal assessing whether the product works according to specifications. It differs from a customer assurance test, a pilot test, and a beta test because the usability test ensures the collection of systematic, recorded, quantifiable data and observation of behaviors.
A usability test has these five characteristics:
Each test has specific goals and concerns that are tested
The participants represent real users (6 to 12 participants are typical)
The participants do real tasks
The participants are observed and recorded
The data is analyzed, problems diagnosed, and recommendations made
A usability test consists of these activities:
• Planning the test, developing participants profiles, identifying participants from user pool, creating test materials, writing task scenarios, determining usability criteria and measures
• Preparing the test location, pilot testing materials and procedures
• Introducing the participant to the situation, the product, and the procedure
• Running of the task-based test, where participants are asked to complete a series of tasks that address the specific goals and concerns being tested.
• Participants are asked to "think aloud" (articulate their thoughts, feelings, and actions). This data, along with the recorded video images, helps target areas that are confusing, unclear, or misleading during the analysis stage.
• Debriefing the participant to get final thoughts, subjective feelings about the product, and suggestions for improvement.
• Analyzing the data, making recommendations, and documenting findings
The deliverable from a usability test is a report that details the problems encountered by the participants and recommendations for change based on known human factors, cognitive, and behavioral principles, and recognized best practices.
© 2003, Bill Tchakirides
Usability testing is a dynamic process that can be used throughout the process of developing interactive multimedia software. The purpose of usability testing is to find problems and make recommendations to improve the utility of a product during its design and development. For developing effective interactive multimedia software, dimensions of usability testing were classified into the general categories of: learnability; performance effectiveness; flexibility; error tolerance and system integrity; and user satisfaction. In the process of usability testing, evaluation experts consider the nature of users and tasks, tradeoffs supported by the iterative design paradigm, and real world constraints to effectively evaluate and improve interactive multimedia software. Different methods address different purposes and involve a combination of user and usability testing, however, usability practitioners follow the seven general procedures of usability testing for effective multimedia development. As the knowledge about usability testing grows, evaluation experts will be able to choose more effective and efficient methods and techniques that are appropriate to their goals.
Usability Testing
Usability can be defined as "a measure of the ease with which a system can be learned or used, its safety, effectiveness and efficiency, and attitude of its users towards it" (Preece et al., 1994, p. 722). Based upon this definition, the usability of multimedia software can be measured by how easily and effectively a specific user can use the program, given particular kinds of support, to carry out a fixed set of tasks in a defined set of environments (Chapanis, 1991).
Usability testing determines whether a system meets a pre-defined, quantifiable level of usability for specific types of users carrying out specific tasks. Traditionally, software products, including information materials and multimedia software, have been evaluated by means of marketplace reviews, magazine reviews, and beta tests, but these approaches leave too little time for major modifications and improvement of products (Reed, 1992; Skelton, 1992). As the process of observing and collecting data from users while they interact with multimedia prototypes, usability testing can be used to address and solve a system’s usability problems before it goes into production.
The aim of usability testing is not to solve problems or to produce a quantitative assessment of usability (Patterson, 1994); rather, it provides a means of identifying problem areas and extracting information about problems, difficulties, weaknesses, and areas for improvement. Even if usability testing reveals difficulties or faults that cannot be corrected in the model under development, the information is still important to designers planning a future release of the product (Chapanis, 1991; Dieli, 1989).
Usability testing may serve a number of different purposes: to improve an existing product, to compare two or more products, or to measure a system against a standard or a set of guidelines (Lindgaard, 1994). It can also be used as a comparison test, in which the usability of a product is measured against competitors’ products, or as a verification tool, a way to check user reaction to new features (Reed, 1992).
Usability testing is concerned with ‘fitness for use of a system,’ and as such it can be a powerful instructional systems development (ISD) tool for identifying problems with a multimedia interface as experienced by the specific user, rather than the interface as designed by the instructional systems designers (Davies, 1995). With usability testing, rapid prototyping in the multimedia production process is beginning to emerge as a way to test design approaches and user interfaces, reducing the software development cycle while at the same time increasing effectiveness (Henson & Knezek, 1991; Northrup, 1995).
Reed (1992) offers these maxims of usability for software developers: (a) design for the software end user, not for the designers/clients; (b) test the multimedia software, not the user; (c) test usability with real users early and often; (d) don’t test everything at once; (e) measure performance of real-world tasks with the software, not the functionality of the program; and (f) test for usability problems that the software designers never imagined.
Assessment Testing
At the intermediate point of the web site development cycle, real-time trials may be conducted to evaluate user performance rather than user thought processes. Assessment tests allow designers to focus on specific usability deficiencies while gathering other task-pertinent data. Although hints may be provided to a user under performance duress, there is little interaction between the user and the test administrator. This affords a greater opportunity to collect quantitative data that may bolster the results gathered from an exploratory test. The number of hints provided by the test administrator, the errors committed by users, successful task completions, and the elapsed time for task completion are examples of the measures documented. Given any specific goal of a web site, an error-free sales transaction for instance, the assessment test may be used to establish an initial benchmark for user performance within the site.
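As a rough sketch of the bookkeeping involved, the quantitative measures named above (hints, errors, completions, elapsed time) could be aggregated into an initial benchmark like this. The record fields, participant IDs, and task name are illustrative assumptions, not values from this text:

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    """One participant's attempt at one task in an assessment test."""
    participant: str
    task: str
    completed: bool   # did the participant finish the task?
    errors: int       # errors committed during the attempt
    hints: int        # hints the administrator had to provide
    seconds: float    # elapsed time for the attempt

def benchmark(records: list[TaskRecord]) -> dict:
    """Summarize the documented measures into an initial performance benchmark."""
    n = len(records)
    return {
        "completion_rate": sum(r.completed for r in records) / n,
        "mean_errors": sum(r.errors for r in records) / n,
        "mean_hints": sum(r.hints for r in records) / n,
        "mean_seconds": sum(r.seconds for r in records) / n,
    }

# Hypothetical session data for an "error free sales transaction" style task.
session = [
    TaskRecord("P1", "checkout", True, 1, 0, 140.0),
    TaskRecord("P2", "checkout", False, 3, 2, 300.0),
    TaskRecord("P3", "checkout", True, 0, 1, 120.0),
]
print(benchmark(session))
```

A later validation test can then be compared against the numbers this produces.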
Validation Testing
Validation tests gauge how a web site compares to a competitor, a company historical standard, or a project-specific objective. This test measures consistency among all users against a predetermined benchmark and is therefore quantitative in nature. Because the validation test is used to accredit the site's ability to meet its designers' goals, it is conducted near the end of a web site's development cycle. The test can determine which criteria are being met and the reasons for those that are not. For validation testing, the parameters of performance among targeted users typically extend beyond speed and accuracy; they can include ranking preferences both within the site itself and among similar competitor sites. In addition, all site components are tested as a complete package. For example, evaluating a search engine's efficiency works in conjunction with the amount of information available within the site. If there are deficiencies, the validation test affords management the opportunity to delay going live in order to fine-tune the site, prepare a response for public relations purposes, or effectively train a support team for predicted user difficulty.
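The pass/fail logic of a validation test, measuring results against a predetermined benchmark and reporting which criteria are met, can be sketched in a few lines. The criterion names and thresholds below are illustrative assumptions, not values from this text:

```python
# Hypothetical benchmark criteria; each maps a measure to a pass/fail check.
criteria = {
    "completion_rate": lambda v: v >= 0.90,  # at least 90% of tasks completed
    "mean_errors":     lambda v: v <= 1.0,   # no more than one error per task
    "mean_seconds":    lambda v: v <= 180.0, # tasks finish within three minutes
}

def validate(measured: dict) -> dict:
    """Return, per criterion, whether the measured value meets the benchmark."""
    return {name: check(measured[name]) for name, check in criteria.items()}

results = validate({"completion_rate": 0.95, "mean_errors": 1.4, "mean_seconds": 150.0})
unmet = [name for name, ok in results.items() if not ok]
print(results, unmet)
```

Criteria left in `unmet` are exactly the ones management would weigh when deciding whether to delay going live.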
A usability test is a formal evaluation process that has as its goal improvement of the usability of the product being tested. It differs from a quality assurance or quality test, which has as its goal assessing whether the product works according to specifications. It differs from a customer assurance test, a pilot test, and a beta test because the usability test ensures the collection of systematic, recorded, quantifiable data and observation of behaviors.
A usability test has these five characteristics:
• Each test has specific goals and concerns that are tested
• The participants represent real users (6 to 12 participants are typical)
• The participants do real tasks
• The participants are observed and recorded
• The data is analyzed, problems diagnosed, and recommendations made
A usability test consists of these activities:
• Planning the test, developing participants profiles, identifying participants from user pool, creating test materials, writing task scenarios, determining usability criteria and measures
• Preparing the test location, pilot testing materials and procedures
• Introducing the participant to the situation, the product, and the procedure
• Running of the task-based test, where participants are asked to complete a series of tasks that address the specific goals and concerns being tested.
• Participants are asked to "think aloud" (articulate their thoughts, feelings, and actions). This data, along with the recorded video, helps target areas that are confusing, unclear, or misleading during the analysis stage.
• Debriefing the participant to get final thoughts, subjective feelings about the product, and suggestions for improvement.
• Analyzing the data, making recommendations, and documenting findings
The deliverable from a usability test is a report that details the problems encountered by the participants and recommendations for change based on known human factors, cognitive, and behavioral principles, and recognized best practices.
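In the analysis activity, problems observed across participants are typically tallied so the report can lead with the most widespread ones. A minimal sketch of that tally, with invented participant IDs and problem labels:

```python
from collections import Counter

# (participant, problem) observations logged during think-aloud sessions;
# the labels here are hypothetical examples, not findings from this text.
observations = [
    ("P1", "could not find search box"),
    ("P2", "could not find search box"),
    ("P3", "could not find search box"),
    ("P1", "error message unclear"),
    ("P3", "misread toolbar icon"),
]

participants = {p for p, _ in observations}
frequency = Counter(problem for _, problem in observations)

# Problems hit by more participants come first in the report's recommendations.
for problem, count in frequency.most_common():
    print(f"{count} of {len(participants)} participants: {problem}")
```

Pairing each tallied problem with a severity rating is a common refinement when prioritizing recommendations.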
© 2003, Bill Tchakirides
What Is Usability Testing?
This is part of a white paper I wrote in 2003:
Usability Testing has as its goal the analysis of the user experience by direct evaluation of actual users. A usability test ensures the collection of systematic, recorded, quantifiable data and observation of user behaviors. At its simplest level, usability testing determines whether a system works well from the user’s perspective or whether it requires rework.
Test Objectives:
Usability testing covers all user interfaces required to carry out the functions of the web site. Each project or system that is meant to interact with an actual user, whether an internal employee carrying out a process, a third-party professional accessing specialized system areas, or an individual user interacting with the web site, is subjected to usability testing.
Usability is characterized by the following attributes:
Easy to learn. The user should be able to learn basic functions of the system quickly and the more advanced functions gradually. Example: the provision of menus, wizards, help files, and accessible tutorials can make a system easier for a less experienced user to learn.
Easy to remember. A user should not have to relearn the system after being away from it for a while.
Efficient to use. Once a user has learned the system, a high level of productivity should be possible. Example: toolbar buttons and keyboard shortcuts for common activities are available and can be customized by the advanced user.
Few errors and easy recovery (Forgiveability). The system should help the user avoid errors and make recovery from errors quick and easy. Example: confirmation windows on high impact commands, spell checkers, etc.
Subjectively pleasing. The system should be satisfying to use so that users are comfortable with it. These are primarily design considerations.
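The first four attributes lend themselves to direct measurement; subjective satisfaction is usually captured with a post-test questionnaire. One widely used instrument (not named in this paper, so an assumption here) is Brooke's System Usability Scale, which maps ten 1-to-5 Likert responses to a 0-100 score:

```python
def sus_score(responses: list[int]) -> float:
    """System Usability Scale: ten 1-5 Likert responses -> a 0-100 score.
    Odd-numbered statements are positively worded (contribute score - 1);
    even-numbered statements are negatively worded (contribute 5 - score)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(r - 1 if i % 2 == 0 else 5 - r for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical questionnaire from one participant's debriefing.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

Averaging such scores across the 6 to 12 participants gives a single satisfaction number that can sit alongside the performance metrics.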
Test Design Strategy:
The Usability Lifecycle covers each phase of the creation and implementation of the web site (Architecture, Development, Integration, Deployment, and Operations). Each phase has specific activities to be carried out and work products created by those activities.
Characteristics of Usability tests:
Usability tests conducted after development and prior to release are primarily Validation tests. The activities listed below are designed to obtain user feedback through Usability testing, thereby producing elements for the Usability Review. They meet the following criteria:
Each participant completes an interview/questionnaire.
Each participant will receive a short, verbal orientation to the project site, an explanation of the test purpose, and a statement on how they will be observed and taped. They must also sign a video release.
Each participant performs their assigned task or tasks.
A final interview is held with the participant.
The data is analyzed (both qualitatively, through observer comments, and quantitatively, through metrics such as time to complete tasks and number of errors), problems diagnosed, and recommendations made.
Usability Review is finalized.
Test Environment Planning:
Usability testing requires specific lab configurations and testing equipment. In most cases this is a fully equipped laboratory environment consisting of: an enclosed room; video cameras and recorders for both picture and sound; one-way mirrors (or closed-circuit video) for observers to monitor tests; and user workstation(s) serving as a model of the user’s actual work area.
Saturday, January 01, 2005
This is a new Blog for Usability / Accessibility Professionals
It is January 1, 2005, New Year's Day. This is the launching of a blog for my colleagues in the web usability and Section 508 business.
A bit of background:
My name is Bill Tchakirides, and I just completed a three-year stint at the IRS Modernization Program being run by CSC in Lanham, Maryland. During that time I was the Usability and Section 508 Subject Matter Expert for the IT&D department. Prior to coming to Lanham, I was a usability and e-commerce consultant for other departments at CSC.
There is so much going on in this area, especially in Section 508-related information, that I felt a blog could be helpful. Too much of the information becoming available has a price tag on it (not that Jakob Nielsen isn't worth the bucks... but for the lone practitioner, his info is somewhat cost-prohibitive).
Over the next few postings I'll be putting up links that I use, discussing software solutions and passing out trade news.
PLEASE FEEL FREE TO ADD YOUR COMMENTS, LINKS, JOB LISTINGS, NEWS ETC. This blog is wide open. And tell your associates in the trade about it. We can get a great forum going here.
Regards and best wishes for the New Year,
Bill T.


