Conference Report, Part One: Assessing Library Use

Though information literacy involves more than finding information, we do need to find information before we can evaluate or use it. At a recent NorthEast Regional Computing Program conference (Providence, RI, March 8-10, 2010), two sessions on usability testing addressed this very issue.

The first session was “Common sense and technology: A library usability experience” with Michael Davidson, Robert Fitzpatrick, Jennifer Green (Plymouth State University), and Gabrielle Reed (Massachusetts College of Art and Design). The second session was “Undergraduate assessment 360: Putting the library in context” with Susanna Cowan, Michael Howser, and Kathleen Labadorf (University of Connecticut). Three themes came up in both sessions:

1. Check Assumptions
What makes sense to experienced library users may not make sense to a novice. The first set of speakers, for instance, discovered that functional information (hours, etc.) on their library website was far more hidden than they had assumed. And as the second set of speakers pointed out, findings from a study conducted at one institution may not apply to our own students.

2. Check Them Continually
The first group of presenters described assessment as a cycle: we assess, make changes based on the results, and then assess those changes. Both sessions advocated keeping that cycle going continually.

3. Check Them in Context
Both sessions emphasized the value of using multiple means of assessment. The speakers in the second session noted that we can make a stronger case for a change when multiple data points converge. Interviews, for example, can put survey numbers in perspective.

Both groups also recommended further reading: the first, Steve Krug’s Don’t Make Me Think: A Common Sense Approach to Web Usability; the second, Char Booth’s Informing Innovation: Tracking Student Interest in Emerging Library Technologies at Ohio University.