Data, Evidence, and Outcomes – What Does it All Mean? (ALA 2014)

There is a very cool statement associated with Joe Matthews, who presented this session.  I heard it years ago and it goes something like this:

Strategy is about accomplishing more with less, and that requires focus!

Matthews has done a lot of work with libraries on thinking strategically and translating strategic plans into performance.  It’s great to have a vision, but how do you operationalize it?  And how do you know you are operationalizing it well, committing resources in alignment with the strategic goals you identified?

Matthews’ name is associated with a balanced scorecard approach for libraries, and the development of measurable performance targets.  Metrics or indicators should not only show where you have been (what you got done) but also help you figure out, during the implementation phase of your strategic plan, how to adjust for greater success.

When you attempt to be data-driven, you can run into problems, such as:

  • Too many measures and no focus
  • Entrenched or no measurement systems
  • Unjustified trust in informal feedback systems
  • Fuzzy objectives

Another problem is being satisfied with simply knowing what’s going on and what you are doing. Of course, having lots of data like that doesn’t tell you whether or not you are having an impact.  But wait, there’s more! Even if your data does begin to demonstrate your impact, that’s still not the point. The point is to use your data to continuously improve your impact.  The phrase Matthews used is “change the target.”

He suggested we look at our own units and ask this question:

How do the library services or resources enhance or expedite what people need to do?

It starts with understanding the work our users are doing, and how the parent institution values that work.  One cannot demonstrate the value of a library until one has defined outcomes that matter to the parent institution.

There are many kinds of things to measure:

  • satisfaction / user experience
  • operations (a resource perspective: how are we allocating resources, and how much of various things are we doing?)
  • impact (how are we affecting outcomes?)

Matthews reviewed some models for establishing metrics.

The Logic Model uses if … then statements.

If the Library does ____, we can produce these _______, so that our users will be able to _____, which will result in this kind of impact.

Example:  If the Library builds ample teaching rooms, develops lesson plans, and trains staff (inputs), we can offer 10 undergrad workshops per semester on data management (outputs), which will enable 50-60 students per year to produce better quality senior theses with fewer obstacles and failures (outcomes), which will contribute to the University’s ability to provide a strong and valuable research experience for undergrads (impact).

Orr’s Model

input –> process –> output –> outcome –> impact

  • input — resource perspective
    (space, equipment, funding, staff)
  • output — operational perspective
    (workshop, program, report, # attendees, etc.)
  • outcomes — user perspective
    (increased skills; know how or know that; behavior change; status change)
  • impact  — stakeholder perspective
    (faster completion; better employment, etc)
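
To make the chain concrete, here is a minimal sketch (my own illustration, not something Matthews presented) that restates the data-management workshop example from the Logic Model above in Orr’s terms; the OrrChain class and its field names are hypothetical, just one way to keep the four perspectives separate when describing a program.

```python
from dataclasses import dataclass

@dataclass
class OrrChain:
    """One program described along Orr's input -> process -> output -> outcome -> impact chain."""
    inputs: list[str]     # resource perspective: space, equipment, funding, staff
    process: str          # what the library actually does with those resources
    outputs: list[str]    # operational perspective: workshops held, attendees, reports
    outcomes: list[str]   # user perspective: skills gained, behavior or status change
    impact: str           # stakeholder perspective: what the parent institution cares about

# The data-management workshop example, restated in Orr's terms
workshops = OrrChain(
    inputs=["teaching rooms", "lesson plans", "trained staff"],
    process="run data-management instruction for undergraduates",
    outputs=["10 undergrad workshops per semester"],
    outcomes=["50-60 students per year produce better senior theses with fewer obstacles"],
    impact="a stronger, more valuable undergraduate research experience for the University",
)

print(workshops.impact)
```

Writing a program out this way makes it harder to stop at the outputs column, which is exactly the trap Matthews warned about.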

Matthews also briefly mentioned the Gates Common Impact Measurement System, which is a model for evaluating the impact of social programs that have been funded philanthropically.

The big takeaway here is the one about alignment with the goals of the university in order to have an impact.  The example of instruction program evaluation is compelling:  is the focus of assessment really just trying to figure out, after X number of library interventions, whether students can tell the difference between a catalog and an article database, or other procedural kinds of things?  Again, not that these aren’t important.  But the university is trying to turn out critical thinkers in various disciplines, practitioners who can go out and present their knowledge coherently and appropriately in various media, experts who efficiently use information sources and tools to maintain their expertise and stay up to date.  How is library instruction contributing to that?  No amount of “happy sheets” (on a scale of 1 to 10….) or even pre- and post-tests will tell the impact story if your instruction goals and your assessment are not focused on impact from the start.

Matthews was big on being very clear about your goals in order to assess your impact.  Once you start talking about impact goals, it helps you make some difficult choices about your programs: if we want to have an impact on outcomes the university cares about, we will need to prioritize the kinds of instruction that have the potential to yield those outcomes.  It also helps you see that perfecting one-shot instruction sessions is never going to deliver impact in that way, which in turn helps you decide how to resource that activity more efficiently if you are going to continue doing it.

From this guy to Chris Argyris to so many, many other thought leaders in the area of organizational effectiveness — they all keep urging us to articulate up front why we are doing things.  It seems so obvious, and yet . . .
