The Effects of Interactive Latency on Exploratory Visual Analysis
Zhicheng Liu, Jeffrey Heer 
The goal of interactive visualization is to enable data analysis at rates resonant with the pace of human thought.
This study investigates the effects of interface latency in visualization software being used for exploratory analysis. The subjects, data analysts from several San Francisco tech companies, were asked to follow a think-aloud protocol while performing exploratory analysis tasks on two different data sets visualized with the imMens software. In one condition, subjects used a normal copy of the software running with minimal latency; in the other, they used a modified copy that introduced varying amounts of latency into interactions such as zooming, brushing, color modification, and filtering.
The authors present an in-depth method for coding, categorizing, and analyzing user statements, classifying each statement by type (for example, as an observation, a generalization, or a hypothesis).
The authors describe visual exploratory analysis as a novel area of study because of its open-ended nature, variable structure, and undefined end-goal state. User interaction may be triggered by salient visual cues, driven by a priori hypotheses, or carried out through exploratory browsing. The process is unconstrained and spontaneous.
The strongest conclusion of this study rests on the following observation: significant interface latency decreased user activity, data set coverage, and rates of observation, generalization, and hypothesis formation. This can seem obvious on its face; however, it points to a very interesting conclusion:
Faster visualization systems, which allow researchers to work at a pace resonant with their own thought, do not take researchers to the same conclusions faster; they take them to a better and more thorough understanding.
An Insight-Based Methodology and Longitudinal Study for Evaluating Bioinformatics Visualizations
Purvi Saraiya, Chris North, Vy Lam, and Karen Duca [2005, 2006]
The primary purpose of visualization is to generate insight. The main consideration for any scientist is discovery. Arriving at an insight often sparks the critical breakthrough that leads to discovery: suddenly seeing something that previously passed unnoticed or seeing something familiar in a new light.
This paired study comprises two assessments of visualization software used to understand large-volume gene-expression microarray data sets. For the purposes of my work, the scientific details of what the authors are trying to learn are less important. What matters is that they present two excellent methodologies for assessing the exploratory research capabilities of visualization software. They also provide the above domain-independent definition of insight, and note that their methodologies are designed to be generalized and applied to other fields of scientific research, which is precisely what I intend to do.
The first study was an in-person pilot that followed a modified think-aloud protocol to assess the amount and value of insight gained from each of six visualization tools. The authors discuss their investigation methodology, coding, categorization, and analysis procedures, and note the need for a longitudinal study to account for the extended time frames over which scientific data analysis is performed.
The second study was a modified diary study aiming to assess the amount and character of insight gained from the same six tools over a three-month period divided into two-week sessions. The authors discuss their investigation methodology, coding, categorization, and analysis procedures, and describe a series of recommendations for improving the software.
Their conclusion notes, “this longitudinal study is just the beginning of this line of work, and there is much more research to be done. More studies need to be conducted with different subjects and tools in diverse domains, in order to extract broader abstractions and patterns of the visual analytics process.”
My own study is an attempt to generalize the domain-independent components of their work to the field of atmosphere and climate science, using the MERLIN software we built at NASA JPL this summer. The aim is to develop further recommendations for improving the software, and hopefully to contribute to the general knowledge of visual analytics.
A Field Study of Exploratory Learning Strategies
John Rieman 
This is an older study, referenced by the three papers above, which presents a methodology for using “Eureka Reports” to capture, in situ, key moments of learning that are hard for researchers to access in a controlled environment. It is a modified diary study in which office workers record what they do on a daily basis; when they learn something new about their environment, they record that moment in a eureka report. The topic is quite different from my work: Rieman studies office workers learning to operate computational devices in the workplace. However, the methodology provides a key step forward for me in constructing my longitudinal, insight-based workflow assessment.
Research and Analysis Logbook
I created two documents, a logbook and an insight report. This pairing is based on Rieman’s Activity Log and Eureka Report. The logbook is designed to allow the scientist to explain what research and analysis activities they have been doing and to log which screens from the software they engaged with. When they learn something new, they fill out an insight report. This is intended to allow me to capture moments of learning as they really happen, in the scientist’s natural environment and workflow.
The questions and statements in the insight report document are based on the metrics and categorizations that Saraiya, North, Lam, and Duca described in their paired insight-based assessment methodologies. I additionally implemented a numeric rating system for each insight, the idea for which is drawn from Liu and Heer, but I have extended the categories based on recommendations from my science partners.
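As a rough illustration of how the resulting data could later be tabulated, an insight report entry can be modeled as a small record with a category and a numeric rating, and the reports aggregated per category. This is a minimal sketch, not the actual report instrument: the field names, category labels, and 1–5 rating scale here are my own placeholders, not the exact wording of the booklet.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class InsightReport:
    """One filled-out insight report; all field names are illustrative."""
    summary: str               # the insight in the scientist's own words
    category: str              # hypothetical labels, e.g. "observation", "hypothesis"
    rating: int                # assumed numeric value rating, 1 (minor) to 5 (major)
    screens: list[str] = field(default_factory=list)  # software screens involved
    logged_on: str = ""        # date string from the logbook entry

def mean_rating_by_category(reports: list[InsightReport]) -> dict[str, float]:
    """Average insight rating per category, for comparing sessions or tools."""
    by_cat: dict[str, list[int]] = {}
    for r in reports:
        by_cat.setdefault(r.category, []).append(r.rating)
    return {cat: mean(vals) for cat, vals in by_cat.items()}
```

For example, two observations rated 2 and 4 plus one hypothesis rated 5 would average to 3 for observations and 5 for hypotheses, giving a simple per-category profile of a session.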
Booklet Competitive Analysis
I examined two booklets which I find to be particularly effective: the FIELD NOTES notebook and CMU’s orange degree notebook. From these I built type and style standards to apply to my own notebook and found an appropriate size for the booklet.
Copies of the Discovery notebook have been sent to two researchers at NASA JPL and one researcher at the School of the Art Institute of Chicago. One tactical lesson was that the booklet should be made to support infrequent or inconsistent use patterns. The original notebook prototype was designed to be used continuously for two weeks; however, researcher feedback indicated that this was too invasive a protocol, so the current version supports a flexible work schedule, allowing the researcher to simply fill out the notebook on days that they work with the software.
Probably the biggest takeaway from the project for me was learning the value of capturing insight in situ, in the everyday lives of people. Many of the research and assessment methods we use as designers are limited by their controlled testing environments. Especially when thinking about cognitive tools for scientists and researchers, getting access to the real ways that people think, learn, and explore can be critical.
I also found a wealth of knowledge concerning research and assessment protocols in the field of HCI and visualization. Being able to read about the minute details of each study, and empirical assessments of what each of those studies produced gave me a very strong foundation to start from.