A Framework for Automated Evaluation of Hypertext Search Interfaces
Abstract

An evaluation framework and simulator of interactive information retrieval systems (SIIIRS) is introduced. SIIIRS allows researchers to conduct exploratory studies that narrow the focus of future human-subject studies by identifying which differences in information exploration style and functionality are likely to produce significant effects. An experiment was carried out to demonstrate how SIIIRS can be used to predict performance under different search strategies in a dynamic hypertext environment. Analysis of the performance and behavioural measures obtained in the experiment showed significant differences in how the agents (search strategies) performed under different combinations of query difficulty, newness, and query tail size (as defined in this paper). The agents differed both in their behaviours relative to one another and in their interactions with the simulator parameter of newness and the dynamic hypertext control parameter of query tail size. The behavioural measures showed the same pattern as the performance measures, with query tail size (an indicator of how easily the topic can be modified during the search) strongly influencing performance. The implications of these results for future automated evaluation of hypertext search interfaces are discussed.