...evaluate programs and services on specified criteria
Meaning and Importance of Competency
The ability to evaluate programs and services is an essential skill for librarians and information professionals. Because so much of our professional work consists of providing programs and services for our users, we must evaluate that work regularly to ensure that we continue to meet the evolving needs of those we serve efficiently and effectively. In his book chapter “Learning and Using Evaluation: A Practical Introduction” (2008), Charles R. McClure states that, in the context of libraries, “evaluation is the process of determining the success, impact, results, costs, outcomes, or other factors related to a library activity, program, service, or resource use” and that, “at its best, evaluation provides input for planning and decision-making, provides high quality qualitative and quantitative data describing library activities, and serves as a basis to constantly assess and improve library services” (p. 179). Evaluation takes place on an ongoing basis amid constant, purposeful change and is a critical component of managing libraries and other information organizations.
Evaluation is closely connected with another critical management activity: planning. Planning is the formal, forward-looking process by which an organization defines its mission, values, goals, and objectives in order to guide decisions about which programs and services to offer. McClure writes that “at its best, planning identifies current trends affecting the organization, considers possible goals and strategies, integrates information technology planning with services planning, takes a user perspective on what services are needed and why, and considers future implications of present decisions” (2008, p. 180). The effective satisfaction of user needs is at the heart of library evaluation activities. Evaluation is a critical step in determining “the degree to which the goals and objectives were accomplished” (p. 180), and the results of previous evaluations can in turn inform the planning process. “In short,” McClure writes, “planning and evaluation are the two sides of the same coin—each contributing to a successfully managed library” (p. 181).
Evaluation benefits libraries in various ways. Casey and Savastinuk (2007) point out that evaluating older services can reveal previously unconsidered viewpoints and that “by evaluating the service, you may discover a way to revitalize it—or may determine that these resources could be better used elsewhere” (p. 51). Evaluation can thus infuse older services with new energy or expose areas from which resources can be reallocated to better meet users’ needs. In addition, evaluation can help libraries reach new users by providing insight into which programs and services will be most accessible and relevant for different populations. McClure describes a number of benefits of evaluation and reasons it matters to different library stakeholder groups, including its role in “[determining] the degree to which service objectives are accomplished and the degree to which these objectives were appropriate for the library” and in “[assisting] the organization to determine the types of staff training that might be most needed and appropriate” (2008, p. 181).
Evaluation of programs and services must be carried out on the basis of specific criteria, and those criteria vary with the context. For example, reference services may be evaluated against guidelines issued by a professional association, such as the Reference and User Services Association (RUSA), whereas the usability of online learning objects associated with a library’s information literacy instruction program may be evaluated against principles or heuristics from the fields of user experience or interaction design. Criteria that McClure highlights for assessing library programs and services include extensiveness, efficiency, effectiveness, service quality, impact, and usefulness (2008, pp. 182-183). Although the particular criteria can vary from evaluation to evaluation, what matters is that the evaluation is grounded in explicitly specified criteria.
Evaluation is a complex process, and evaluators face various challenges in completing their charge. In order to ensure that evaluation efforts are successful, it helps to follow a structured plan. Some decisions that McClure (2008) suggests making ahead of time include who will do the work, how they will do it, what each involved individual’s role will be, what methods and measures will be used, how data will be collected and analyzed, and how results will be reported (pp. 184-189). In addition, McClure recommends that evaluators keep in mind the oftentimes politically charged environment within which evaluation takes place and “understand clearly who are the stakeholders and what are their interests and objectives related to the evaluation” (p. 190).
Preparation and Evidence
To demonstrate my ability to evaluate programs and services on specified criteria, I present coursework from two courses I took in the online School of Library and Information Science (SLIS) at San Jose State University (SJSU). My first piece of evidence is a usability evaluation of SJSU’s Library Online Tutorial for the School of Library & Information Science Students (LOTSS) that I completed for LIBR 251, Web Usability. My second piece of evidence is an analysis of face-to-face and chat reference services at Washington State’s X Library System (XLS)* that I completed for LIBR 210, Reference and Information Services, in which I evaluate XLS’s reference services on specified criteria.
First Piece of Evidence: LOTSS Usability Evaluation, LIBR 251
In Summer 2010, I took the course “Web Usability” with Dr. Jeremy Kemp. The course was designed to offer students “a framework for developing user-friendly interfaces for use in information systems” and to teach us “the principles of user-centric design and style for particular circumstances and populations” (SJSU SLIS, n.d.). Dr. Kemp’s course was divided into three “seminars”: the Evaluation Seminar, the Design Seminar, and the Implementation Seminar. For the Evaluation Seminar, students were assigned to evaluate the SJSU library’s LOTSS tutorial (2006) against usability criteria, including Tognazzini’s “First Principles of Interaction Design” (n.d.) and Nielsen’s “10 Heuristics for User Interface Design” (2005). Because LOTSS is a learning object—a tutorial—designed to improve the information literacy of SLIS students by teaching us how to use various research databases, it falls within the category of library programs and services. And because my evaluation of LOTSS is based on specified usability criteria, this assignment demonstrates my ability to evaluate programs and services on specified criteria; I therefore submit it as evidence of my mastery of this competency.
For the LOTSS Usability Evaluation assignment, students were assigned to complete LOTSS and write a discussion forum post, with screenshots, documenting three areas of concern with regard to the tutorial’s usability. We were instructed to “go through the interface several times and inspect the various interface elements while comparing them to the list of usability principles,” looking “at visual cues and especially for interactive items that don’t behave in familiar ways” (Kemp, 2010). In my evaluation, the three usability concerns that I discuss are all drawn from Nielsen’s “10 Heuristics for User Interface Design” (2005); they include 1) user control and freedom, 2) consistency and standards, and 3) help and documentation. These are the criteria on which I evaluate LOTSS in my assignment.
Nielsen’s “user control and freedom” heuristic states that “users often choose system functions by mistake and will need a clearly marked ‘emergency exit’ to leave the unwanted state without having to go through an extended dialogue” (2005). In my evaluation, I describe how, as I progressed through the tutorial, I was often asked to answer multiple choice questions by selecting the radio button next to my chosen answer, but “rather than being able to select a radio button and then click ‘submit,’ as soon as I selected any radio button I would receive a submission alert that required me to choose ‘OK’ to acknowledge the alert” (p. 1). This is problematic because “if I inadvertently clicked the wrong radio button, I did not have a chance to change my answer before receiving the alert, which I would then have to acknowledge” (p. 2). In my assignment, I argue that “the designer could have allowed more user control and freedom by adding a ‘submit’ button, which would allow the user to change answers as many times as needed before receiving an alert” (p. 2).
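To make the suggested fix concrete, the sketch below shows the two interaction patterns in TypeScript. This is a minimal, hypothetical illustration, not LOTSS’s actual implementation (whose source code was not available to me); the form id, field names, and messages are invented for the example.

```typescript
// Hypothetical quiz wiring illustrating the "user control and freedom" fix.
// Assumes markup along these lines:
//   <form id="quiz">
//     <label><input type="radio" name="answer" value="a"> Answer A</label>
//     <label><input type="radio" name="answer" value="b"> Answer B</label>
//     <button type="submit">Submit</button>
//   </form>

const quizForm = document.querySelector<HTMLFormElement>("#quiz")!;

// Problematic pattern (what I observed in LOTSS): feedback fires the moment
// any radio button changes, so a mis-click cannot be corrected.
//   quizForm.addEventListener("change", () => alert("Answer submitted!"));

// Suggested pattern: let the user change the selection freely and only
// evaluate the answer on an explicit submit action.
quizForm.addEventListener("submit", (event) => {
  event.preventDefault(); // keep the tutorial page from reloading
  const answer = new FormData(quizForm).get("answer");
  if (answer === null) {
    alert("Please select an answer before submitting.");
    return;
  }
  alert(`You chose: ${answer}`); // feedback only after a deliberate action
});
```

The design point is simply that feedback should follow a deliberate user action, leaving the user free to correct a mistaken selection beforehand.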
Nielsen’s “consistency and standards” heuristic states that “users should not have to wonder whether different words, situations, or actions mean the same thing” in different contexts (2005). In my evaluation, I describe how “the instructions in the left window did not always match what was displayed in the right window” (p. 2). I provide an example where the icon I was told to look for looked different from the icon as it was displayed in the tutorial window. I offer a simple solution: “simply using the same icon in the instructions as the one that appears in the entry in question” (p. 3).
Finally, Nielsen’s “help and documentation” heuristic states that help and documentation information “should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large” (2005). I point out that “by its nature as a tutorial, one of LOTSS’s most vital components is its instructions, for they guide the user through the tutorial” (p. 3). I describe confusion that I experienced when LOTSS encouraged me to take time to explore different database elements and how “I wasn’t sure if I was supposed to do the exploration right then and there or after finishing the tutorial, and I was unsure whether I had reached the end of the module” (p. 3). I suggest that this type of instruction, encouraging further exploration, might be more helpful if placed at the end of the module, after the user has completed all essential tasks (p. 3).
In my LOTSS Usability Evaluation, I used specified criteria—Nielsen’s “10 Heuristics for User Interface Design” (2005)—to evaluate the usability of a learning object, LOTSS, that is one component of King Library’s information literacy instruction program for SJSU students. The assignment therefore demonstrates my ability to evaluate programs and services on specified criteria, and I submit it here as evidence of my mastery of this competency.
Second Piece of Evidence: Observation Analysis, LIBR 210
In my evidence for Competency I, I describe the Observation Analysis assignment I completed for LIBR 210, Reference and Information Services, with Dr. Michelle Holschuh Simmons in Spring 2011. For that assignment, students were assigned to observe and analyze both face-to-face and virtual reference services at a library of our choice; I chose the X Library System (XLS)* in Washington State. Competency I deals with the ability to use service concepts, principles, and techniques that facilitate information access, relevance, and accuracy for individuals or groups of users. In demonstrating my mastery of that competency, I used specified criteria, including both the “Guidelines for Behavioral Performance of Reference and Information Service Providers” (2004) issued by RUSA (hereafter the “RUSA Guidelines”) and the digital reference criteria developed by David Ward and described by Wikoff (2008), to analyze XLS’s reference services.
As I discuss in my evidence for Competency I, for the face-to-face portion of the Observation Analysis assignment, students were instructed to “focus on interpersonal interactions between users and librarians, the physical environment, the absence or presence of the reference interview, behaviors and attitudes of both librarians and users, and evidence of policies. We were also instructed to analyze noteworthy interactions using established criteria, such as the RUSA Guidelines (2004), and to make references to the professional literature” (Competency I). In my Observation Analysis, I analyze XLS’s in-person reference services using each of the five main criteria outlined in the RUSA Guidelines (2004): approachability, interest, listening/inquiring, searching, and follow-up, along with many of the sub-points listed under each criterion (pp. 4-12). For the virtual reference portion of the assignment (pp. 12-16), I draw not only on the RUSA Guidelines (2004) but also on Ward’s digital reference criteria as described by Wikoff (2008).
In my analysis of XLS’s reference services, I identify areas of both strength and weakness. I find that the staff excels in many areas, including approachability, interest, and searching, and I recommend that they strengthen their services by developing their reference interview skills and by periodically reviewing, discussing, and brainstorming ways of further implementing service guidelines such as those issued by RUSA (pp. 11-12). Because I used specified criteria such as the RUSA Guidelines and Ward’s digital reference criteria to conduct my evaluation and to make recommendations for improvement, this assignment demonstrates my ability to evaluate programs and services on specified criteria, and I therefore submit it as evidence of my competency in this area.
Conclusion
Evaluation is a critical task for the 21st-century librarian. As libraries strive to redefine themselves in the era of ebooks, Google, and Wikipedia, planning and evaluation take on a central role in what we do. Change is inevitable; by embracing planning and evaluation, we can ensure that we are prepared for the changes taking place all around us. Casey and Savastinuk (2007) argue that “constant, smooth change—evolutionary, not revolutionary—better allows an organization to move forward without the seismic fits and starts so commonly associated with the major upheavals of discontinuous change” (p. 44). To integrate change into a library’s organizational structure, they propose a three-step cycle: 1) brainstorming for new and modified services, 2) planning for services and success, and 3) evaluating services on a regular basis (pp. 44-45). They argue that by embracing planning and evaluation, libraries can continuously improve services to existing users while reaching out to potential library users (p. 5). I am excited about using this approach in my future career. In my past and current roles at The Seattle Public Library (SPL), I have had the opportunity to serve on project teams such as the Public Services Staffing and Scheduling Analysis Work Group (2008-2009), the Strategic Plan Preparing Team (2011), and, currently, the Library Innovation Team. In each of these contexts, evaluation has played an important role in our work. My experiences evaluating programs and services on specified criteria, both as a SLIS student and as an SPL employee, have helped me develop strong skills in this area, and I look forward to using them to help my organization advance its mission.
References
Casey, M. E., & Savastinuk, L. C. (2007). Library 2.0: A guide to participatory library service. Medford, NJ: Information Today, Inc.
Kemp, J. W. (2010). Homework 1. Unpublished assignment sheet, San Jose State University.
McClure, C. R. (2008). Learning and using evaluation: A practical introduction. In K. Haycock & B. E. Sheldon (Eds.), The portable MLIS: Insights from the experts (pp. 179-192). Westport, CT: Libraries Unlimited.
Nielsen, J. (2005). Ten heuristics for user interface design. Retrieved from http://www.useit.com/papers/heuristic/heuristic_list.html
Reference and User Services Association. (2004). Guidelines for behavioral performance of reference and information service providers. Retrieved from http://www.ala.org/rusa/resources/guidelines/guidelinesbehavioral
San Jose State University Library. (2006). Library online tutorial for the School of Library & Information Science students. Retrieved from http://tutorials.sjlibrary.org/tutorial/slis/index.htm
San Jose State University School of Library & Information Science. (n.d.). Course descriptions. Retrieved from http://slisweb.sjsu.edu/classes/coursedesc.htm
Tognazzini, B. (n.d.). First principles of interaction design. Retrieved from http://www.asktog.com/basics/firstPrinciples.html
Wikoff, N. (2008). Reference transaction handoffs: Factors affecting the transition from chat to email. Reference & User Services Quarterly, 47(3), 230-241.
*NOTE: Out of respect for the organizations and individuals evaluated in my Observation Analysis, I have removed, changed, or obscured names and other identifying information.