
A contrast of guideline recommendations and Tullis's prediction model for computer displays: Should text be left-justified?

Two experiments investigated the effect of layout complexity on performance, at varying levels of practice, on four types of information extraction tasks. Layout complexity is defined as the number of unique horizontal and vertical starting positions of items in the display (Tullis, 1984). Tullis investigated the ability of display-formatting variables (overall density, local density, number of groups, size of groups, number of items, and layout complexity) to predict human performance and preference. Although layout complexity was the best predictor of subjective ratings, it did not improve the prediction of search time beyond what overall density, local density, number of groups, and size of groups could predict. This finding is particularly interesting because vertically aligning lists and left-justifying items, practices strongly recommended in guidelines for display formatting (e.g., Engel & Granda, 1976; Smith & Mosier, 1986), are important factors in Tullis's definition of layout complexity. The guidelines and Tullis's model therefore make conflicting predictions about the effect of left-justifying text on user search time.

In the first study, layout complexity was manipulated by either left-justifying the text or not; in the non-left-justified condition, the starting positions of items were ordered rather than random. In the second study, subjects also viewed a third experimental screen in which the starting positions of items followed a completely unpredictable pattern. Subjects performed all four tasks (find label, scan data, compare labels, and compare data) in four one-hour sessions. Moderate violations of the typical guideline recommendations did not increase user search time on any of the four tasks in either study. However, when subjects compared multiple data values, the random format did increase search time.

Although performance with the three experimental screens was comparable across the four tasks with only this one exception, subjective ratings differed between the three formats: subjects disliked the random format and reported that it degraded their performance on the random screens.
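The abstract's definition of layout complexity, as stated above, can be illustrated with a minimal sketch. This is a hypothetical illustration of the counting rule described in the abstract (unique horizontal and vertical starting positions), not Tullis's actual implementation; the item coordinates and function name are assumptions for the example.

```python
def layout_complexity(items):
    """Count unique row and column starting positions of display items.

    items: iterable of (row, col) starting positions.
    Illustrative sketch of the definition quoted in the abstract;
    not Tullis's (1984) original formulation or code.
    """
    rows = {r for r, _ in items}
    cols = {c for _, c in items}
    return len(rows) + len(cols)

# A left-justified list: every item starts in column 0.
aligned = [(0, 0), (1, 0), (2, 0), (3, 0)]
# The same items with ragged (non-left-justified) starting columns.
ragged = [(0, 0), (1, 2), (2, 5), (3, 7)]

print(layout_complexity(aligned))  # 5 (4 unique rows + 1 unique column)
print(layout_complexity(ragged))   # 8 (4 unique rows + 4 unique columns)
```

Under this definition, left-justifying items collapses the set of column starting positions to one, which is why the guideline practice lowers layout complexity in Tullis's model.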

Identifier oai:union.ndltd.org:RICE/oai:scholarship.rice.edu:1911/16140
Date January 1988
Creators Fontenelle, Gail Ann
Contributors Howell, William C.
Source Sets Rice University
Language English
Detected Language English
Type Thesis, Text
Format 127 p., application/pdf
