Data-Driven Decision Making – LRRT Forum (ALA 2014)

My former colleague, Jim Church, co-presented a study of what graduate students at Berkeley are citing.  The team analyzed about 45,000 citations from dissertations in four disciplines (poli sci, business, econ, and history) completed between 2008 and 2012.  They showed a lot of interesting stats about the sources being cited: who was citing monographs and foreign-language material, what the median publication date was for different disciplines, how many citations the dissertations averaged, and so on.  It was a useful way to look at dissertations and graduate-level research, and it stirred up a lot of other kinds of questions. For example, in one of the disciplines, the median age of cited sources was much older than expected.  This kind of research is not to be undertaken lightly — they got a library grant and were able to hire students to do some of the number crunching, which was labor intensive and took time.  They finished the initial analysis too recently to say how all the data will be used.  Still, it was a wonderful example of how to do top-quality research that could help overcome some of the “unjustified trust in anecdotal evidence” mentioned by another ALA speaker . . .
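For a sense of what that number crunching looks like in practice, here is a hypothetical sketch (my own, with invented column names and toy data, not the Berkeley team's actual workflow) of computing the kinds of per-discipline figures they reported:

```python
# Hypothetical sketch of a per-discipline citation summary; the column
# names and rows are invented, not the actual Berkeley dataset.
import pandas as pd

# Each row represents one citation pulled from a dissertation bibliography.
citations = pd.DataFrame({
    "discipline": ["history", "history", "econ", "poli sci", "business"],
    "source_type": ["monograph", "journal", "journal", "monograph", "journal"],
    "language": ["en", "de", "en", "en", "en"],
    "pub_year": [1987, 2004, 2010, 1999, 2011],
    "dissertation_year": [2010, 2010, 2011, 2009, 2012],
})

summary = citations.groupby("discipline").agg(
    n_citations=("pub_year", "size"),
    median_pub_year=("pub_year", "median"),
    pct_monographs=("source_type", lambda s: (s == "monograph").mean() * 100),
    pct_foreign_lang=("language", lambda s: (s != "en").mean() * 100),
)

# Median age of cited sources at the time the dissertation was filed.
summary["median_citation_age"] = (
    citations.assign(age=citations.dissertation_year - citations.pub_year)
    .groupby("discipline")["age"].median()
)
print(summary)
```

Even this toy version hints at why the real thing took a grant and student labor: the hard part is getting 45,000 citations cleaned and coded consistently, not the final groupby.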



Data, Evidence, and Outcomes – What Does it All Mean? (ALA 2014)

There is a very cool statement associated with Joe Matthews, who presented this session.  I heard it years ago and it goes something like this:

Strategy is about accomplishing more with less, and that requires focus!

Matthews has done a lot of work with libraries on thinking strategically, and translating strategic plans into performance.  It’s great to have a vision, but how do you operationalize it?  And how do you know you are operationalizing it well–committing resources in alignment with what you identified as strategic goals?

Matthews’ name is associated with a balanced scorecard approach for libraries, and the development of measurable performance targets.  Metrics or indicators should not only show where you have been (what you got done) but also help you figure out, during the implementation phase of your strategic plan, how to adjust for greater success.

When you attempt to be data-driven, you can run into problems, such as:

  • Too many measures and no focus
  • Entrenched or no measurement systems
  • Unjustified trust in informal feedback systems
  • Fuzzy objectives

Another problem is being satisfied just to know what’s going on, what you are doing. Of course, having lots of data like that doesn’t tell you whether or not you are having an impact.  But wait, there’s more! Even if your data does begin to demonstrate your impact, that’s still not the point – the point is to use your data to continuously improve your impact.  The phrase Matthews used is “change the target.”

He suggested we look at our own units and ask this question:

How do the library services or resources enhance or expedite what people need to do?

It starts with understanding the work our users are doing, and how the parent institution values that work.  One cannot demonstrate the value of a library until one has defined outcomes that are of importance to the parent institution.

There are many kinds of things to measure:  satisfaction/user experience; operations (a resource perspective: how are we allocating resources, and how much of various things are we doing?); and impact (how are we affecting outcomes?).

Matthews reviewed some models for establishing metrics.

The Logic Model uses if … then statements.

If the Library does ____, we can produce these _______, so that our users will be able to _____, which will result in this kind of impact.

Example:  If the Library builds ample teaching rooms, develops lesson plans, and trains staff (inputs), we can offer 10 undergrad workshops per semester on data management (outputs), which will enable 50-60 students per year to produce better quality senior theses with fewer obstacles and failures (outcomes), which will contribute to the University’s ability to provide a strong and valuable research experience for undergrads (impact).
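Out of curiosity, here is a minimal sketch of how a statement like that could be captured as structured data, so the inputs, outputs, outcomes, and impact stay explicit when you revisit the plan; the class and field names are my own invention, not anything Matthews presented.

```python
# A minimal sketch of a logic-model statement as structured data, so the
# if...then chain stays explicit. Field names are my own, not a standard schema.
from dataclasses import dataclass

@dataclass
class LogicModel:
    inputs: list[str]       # what the library commits
    outputs: list[str]      # what it produces
    outcomes: list[str]     # what users can then do
    impact: str             # what the institution gains

data_mgmt_workshops = LogicModel(
    inputs=["teaching rooms", "lesson plans", "trained staff"],
    outputs=["10 undergrad data-management workshops per semester"],
    outcomes=["50-60 students/year produce better quality senior theses"],
    impact="a stronger undergraduate research experience at the University",
)
print(data_mgmt_workshops.impact)
```

The same structure maps cleanly onto Orr's Model below, with the operational "process" step sitting between inputs and outputs.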

Orr’s Model

input –> process –> output –> outcome –> impact

  • input — resource perspective
    (space, equipment, funding, staff)
  • output — operational perspective
    (workshop, program, report, # attendees, etc.)
  • outcome — user perspective
    (increased skills; know-how or know-that; behavior change; status change)
  • impact — stakeholder perspective
    (faster completion, better employment, etc.)

Matthews also briefly mentioned the Gates Common Impact Measurement System, which is a model for evaluating the impact of social programs that have been funded philanthropically.

The big takeaway here is the one about alignment with the goals of the university in order to have an impact.  The example of instruction program evaluation is compelling:  is the focus of assessment really only trying to figure out, after X number of library interventions, whether students can tell the difference between a catalog and an article database?  Or other procedural kinds of things?  Again, not that these aren’t important.  But the university is trying to turn out critical thinkers in various disciplines, practitioners who can go out and present their knowledge coherently and appropriately in various media, experts who efficiently use information sources and tools to maintain their expertise and stay up to date – how is library instruction contributing to that?  No amount of “happy sheets” (on a scale of 1 to 10….) or even pre- and post-tests is going to tell the impact story if your instruction goals and your assessment are not focused on impact from the start.

Matthews was big on being very clear about your goals in order to assess your impact.  Once you start talking about impact goals, it helps you make some difficult choices about your programs — if we want to have an impact on outcomes the university cares about, we will need to prioritize the kind of instruction that has the potential to yield those kinds of outcomes.  It helps you see that perfecting one-shot instruction sessions is never going to be about impact in that way, which helps you better understand how to more efficiently resource that activity if you are going to continue to do it.

From this guy to Chris Argyris to so many, many other thought leaders in the area of organizational effectiveness — they all keep urging us to articulate up front why we are doing things.  It seems so obvious, and yet . . .


Electronic Lab Notebooks (ALA 2014)

Speakers from Cornell and Yale talked about the product LabArchives and how they are supporting researchers on their campuses with electronic lab notebooks.  It was apparent how deeply they understood the kinds of things that researchers do with lab notebooks, and the day to day issues of data management.

The online environment of LabArchives has great bells and whistles – the ability to upload almost anything, link out to other info, share among groups via communication tools and access permissions, etc.  It’s flexible and lets the researcher design it to be organized the way they want.  One caveat – it’s not a great file mgmt system for lots and lots of files; in that situation, better to manage them elsewhere and link out to the files.

I had to leave early so I missed the discussion about how these folks were using their product to engage with their community….nothing I say next reflects on them in any way!

Getting a tool as part of your service menu, and then teaching that tool, seems like a good way to branch out in new areas — it’s a concrete way to market your services and generate requests for curricular support. At the same time, without human thinking and effort, it doesn’t get you embedded into the curriculum.  I feel like this often happens to the library – faculty see us as the folks to demo or teach a tool, rather than as partners who can help students learn threshold concepts that will transform their understanding of research, or whatever information skill or practice is the focus.  So, we can teach RefWorks “how to” sessions that focus on mechanics, but we aren’t necessarily invited to help craft assignments that will get students more focused on reproducible research and the development of connected knowledge.  Our marketing tells people that we teach/demo tools!  I’m not against mechanics, those are certainly important, but those kinds of training sessions can be handled by trainers or even lynda.com.  If teaching the tool can help you get a more in-depth understanding of the research happening on your campus, and the ways students need to be supported (and this definitely seemed to be the case with the presenters), then it is a useful and important first step.  But if you find yourself just teaching the tool in the same way year after year, and not building better instruction by leveraging your relationships with faculty, then I think you might as well point people at online tutorials.

Short of pointing all students to this kind of tool, especially given its somewhat intense learning curve, I wonder about a lightweight approach — could we or should we craft some kind of PennBox template that students can adopt for projects?  Little or no learning curve to use it . . .  It could be presented in a suite of helpful resources, such as tips for labeling files as part of version control, or how to log actions coherently when different members of a research team are working together asynchronously.  Measuring the use of such a website would be one way to show how the library supports undergraduate research, and to get some continuous data back about what students are drawn to and seem to like.
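For instance, the file-labeling tip could be as lightweight as a suggested naming convention; the helper below is just my own sketch of one possible convention, not an established standard.

```python
# A lightweight sketch of a consistent, sortable file-naming convention that
# students could adopt with no new tool to learn. The convention itself is
# my own suggestion, not an established standard.
from datetime import date

def project_filename(project: str, description: str, author_initials: str,
                     version: int, ext: str = "csv") -> str:
    """e.g. 2014-07-01_soil-samples_cleaned-data_v02_ab.csv"""
    stamp = date.today().isoformat()              # sortable ISO date
    desc = description.lower().replace(" ", "-")  # no spaces in filenames
    return f"{stamp}_{project}_{desc}_v{version:02d}_{author_initials}.{ext}"

print(project_filename("soil-samples", "cleaned data", "ab", 2))
```

The point is not the code, but that the "suite of helpful resources" could include concrete, copyable patterns like this rather than abstract advice.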


Some afternoon sessions (ALA 2014)

Lots of walking between south and north halls on Saturday afternoon . . .

Libraries in the Course Mgmt System: Best Practices and New Directions

The presenters started projects in order to get beyond the reactive, one-by-one approach of asking faculty (or waiting for faculty to ask) about getting library content into the course website.  Speakers from the Univ of South Florida and Minnesota have both completed projects for automatically putting relevant library resources into course websites.  At USF, they created a table (which they update every semester, and claim it’s not that hard) that they use with the CMS, so that each and every course website gets the best option among: the course-specific research guide, a subject-specific research guide, or a general library web page (which I think they said they made in WordPress).  The approach at MN was for engineering/science courses, but followed the same basic idea — resources, chat-with-a-librarian, etc.  The speakers made the point that once you get the process going, it’s not hard to maintain, semester after semester.  I think this is a beginning, but my thoughts currently revolve around how to be more integrated into the curriculum.  Although we can count clicks (who used the stuff we made available thru courses) in this approach, it seems like we ought to be thinking of a more “learning-centered” approach, using the course mgmt system to help students learn how to get work done.
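As a thought experiment, here is roughly how I imagine the "best option" fallback working; the function and table names are my guesses at the idea, not USF's actual implementation.

```python
# Sketch of the USF-style fallback logic: for each course, link the most
# specific library resource available. Names and URLs are illustrative only.
GENERAL_LIBRARY_PAGE = "https://library.example.edu/start-here"

def guide_for_course(course_id: str, subject: str,
                     course_guides: dict[str, str],
                     subject_guides: dict[str, str]) -> str:
    """Return the best available guide URL for a course website."""
    if course_id in course_guides:       # course-specific research guide
        return course_guides[course_id]
    if subject in subject_guides:        # subject research guide
        return subject_guides[subject]
    return GENERAL_LIBRARY_PAGE          # fall back to a general page

# Example: the course -> guide table would be regenerated each semester.
course_guides = {"HIST-2010": "https://guides.example.edu/hist2010"}
subject_guides = {"history": "https://guides.example.edu/history"}
print(guide_for_course("HIST-2010", "history", course_guides, subject_guides))
print(guide_for_course("BIOL-1001", "biology", course_guides, subject_guides))
```

Once a table like this exists, pushing the chosen link into every course site via the CMS is the easy part, which matches the speakers' point that maintenance is light after the initial setup.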

The strategic planning session I headed for next was cancelled but it was good exercise walking the entire length of the LV convention center.

New Directions for Data Visualization in Library Public Services

First speaker was Angela Zoss, Duke, who has established a pretty strong program in helping her community learn about and use data visualization tools.  Given that there is no centralized clientele or disciplinary cluster with “obvious” customers, Zoss figured out who the players on campus were and what they were about. Her representation of the campus folks who were playing in this area was so clear that you could visualize the kinds of services she would want to offer, the role she could play.  Her services were in the expected areas – workshops, consultations, research guides, and help via a lab.  She showed some of the software applications, including Tableau, that students can use.

The second speaker was from NCSU, and showed several examples of members of the campus community using their data visualization room.

Discussion Group for Heads of Public Services – Major issues

The first topic of discussion was whether library structures have adapted in the ways they need to, given changes in the outside world.  The facilitator quoted Chris Argyris’ theory of double-loop learning, which I first encountered last year when we had our own Penn professor, Alan Barstow, talk to Dept Heads about learning organizations.  Hierarchy and command-and-control issues were discussed.

Along with Argyris, these readings came up:

  • Megan Hodge, When Library Workers Expand their Horizons, So Do Libraries. Am Lib, 3/10/14
  • Cheryl LaGuardia, Organizations, See How They Run, LJ 5/15/14
  • John Lubans’ book – Leading From the Middle and other Contrarian Essays on Library Leadership,  June 2010

The second discussion topic was about innovative ways of providing reference services.  Quite a few people mentioned the increased use of undergraduates and why this was a good idea (e.g., many questions do not require the expertise of a subject librarian, and we want the subject experts to spend their time on work with greater impact).  Depending on the physical set-up, there were various models.  One setup described in the discussion is one I’ve seen used a couple of different ways – the librarians work in highly visible consultation areas (glass-walled small offices) in the reference area, with the student assistants out front handling the triage.  Librarians aren’t so much shifted hour by hour; those are their offices, or they are the spots where they park for the day to work.  Shifts and engaged liaisons are a problematic combination — obviously, shifts make librarians less flexible.

Not surprisingly, a model that seemed to be very popular is that of the information commons which includes the full suite of student support (writing center, tech help, etc.), with the bulk of help being provided by research consultants who are themselves students.  In one instance of this model, students are able to make short appointments with librarians via an online scheduler integrated with librarian calendars (busy times are automatically blocked off!  no maintaining multiple calendars!).  The student research consultants sound like the “learning community model” we have been trying to develop.  (They had about 200 applicants and hired 30 student research consultants.  They did not involve the liaisons in hiring; it was handled by the head of instruction and the head of reference.  These folks were very committed to this model and had lots of good things to say about how interactive and energetic their reference commons had become.)  Several other librarians mentioned similar efforts in various flavors, including a few which were mature enough that the students themselves were doing the training of the new student workers.  Again, like our program, one person mentioned that staff were learning how to do instruction better by working with students, and particularly by watching how students wrote their own training materials….

My takeaway from this discussion was not that there are libraries trying really amazingly different things, but that the more successful efforts resulted from a good mix of culture/leadership support/space configuration/hearts & minds.

Threshold Concepts

OK, honestly I don’t remember when I attended this workshop, but it was one of the hallelujah moments of the conference for me.  It’s not so much any particular point that was made — it’s the strong and swelling chorus of voices saying that we ought to be teaching for critical thinking and not for checklists.  Evaluating resources is not a superficial process of looking at author, date, etc., etc.  We ought to be teaching the kind of underlying concepts that help people develop the intuitions they need to figure out each new tool/resource/interface, rather than teaching the mechanics of interfaces.  This was great affirmation for me, and it’s a trend that is mirrored, I believe, in the astonishing number of job postings across academic libraries for instructional designers.  Why?  Because we will need instructional design skills to intertwine teaching info lit threshold concepts with the learning goals that faculty have already fleshed out.

The threshold concept folks have a critique of the old ACRL info literacy standards that I find very compelling, and it has influenced the new draft framework, while recognizing where threshold concepts don’t reach all aspects of library skills teaching and learning.  Threshold concepts for information literacy will be emerging through research and discussion for some time — remember, this is just a model that’s being constructed.  But they are likely to include concepts such as “scholarship is a conversation” and “Format as a Process.”  In the second one, the idea is not to train students to think peer-reviewed = scholarly = safe-to-believe, but to understand that there are processes, business models, social conventions, and practices behind the production of an information source, and that the context in which you might use it matters to the evaluation.

They say that one function of a threshold concept is to make tacit disciplinary or professional knowledge explicit.  I have said this a different way, coming from some coursework in learning science:  novices approach research differently than experts do, and we want to help novices work more like experts.  A good researcher is not one who knows all the latest interfaces – you could be a great researcher without that.  But you have to know how the landscape is put together, how scholarly knowledge is created and accumulates, in order to begin doing research like an expert.  This is the reason my colleagues hear me yacking away all the time about metacognition — new knowledge is often an explicit understanding that you can verbalize, and over time it becomes second nature.  By the time you’ve become an expert, you often aren’t as good at explaining what you do as someone who is newer to the task.  We need to help students surface the wrong ideas they have about the landscape (everything is a “website”) so they can be replaced by threshold concepts that will help them decode what they are seeing.

How will we actually accomplish this kind of teaching and learning?  Well, we won’t do it by tweaking the one-shot session.  That is geared toward mechanics and procedural training, and allows for little else, in my opinion.  I also hear the old saw “Well, we can only do what the faculty want.”  Unless and until we develop instructional design capacity and are able to have discourse with faculty that demonstrates to them the value we can contribute to their learning outcomes, yes, they will relegate us to teaching the latest interface.

If you want to delve into the original learning science theorists of threshold concepts as a model, the key names are Jan Meyer and Ray Land.


Exhibits

Most interesting conversation – Archive-It.  It helps you easily grab and organize web sites as something to archive.  Talked to the rep (who said they have had conversations with Annenberg researchers) about using it in a researcher context.  A tradeoff – the ease of capturing and managing the “raw data” of websites versus the long-term issues with maintaining it: Archive-It is licensed, the cost seems to be proportional to storage, and the content is stored in what are called WARC files, an open standards format (I think), but you need to license the reader….


ALA – Friday June 27 – Taiga

I attended a Taiga session about candidate recruitment for senior positions.  Early in my career, I gained a healthy respect for the importance of recruiting and selecting staff, from serving on both well-run and less systematic search teams.  So hearing the rep from Isaacson Miller talk about what they can bring to a search, and how they recruit, was impressive.  One of the first things mentioned is that senior leaders in libraries do not move around as much as senior leaders in other parts of higher ed – they may stay in place for 15 or 20 years.  By that time, the organization has “forgotten” what a good search process for such a senior leader needs to include.  Isaacson Miller staff generally talk to scores of people and search through lists of leaders from various professional organizations to generate candidate pools.  One of my takeaways from this session was that they develop a comprehensive picture of the kind of candidate they are looking for by interviewing appropriate people and groups across campus, putting together a clear picture of what the incoming person will need to do and what challenges they will face, and then using that in planning how to identify and interview candidates.  This was an interesting approach — typically, I have been on committees that focus immediately on the job description (often basing a posting on the old job description, or on what other institutions are putting into job descriptions), and there was not a lot of formal discussion of the specific local challenges that the incoming person will face as they go about handling the responsibilities of the job.

Another takeaway was understanding how having faculty as part of the search process can influence it.  The rep spoke about how faculty are used to looking for certain kinds of things in a CV, and might not think as highly of a candidate who had fewer publications, not understanding that publishing is not an expected or even necessarily valued professional activity at some institutions, especially where librarians are not faculty and/or don’t have a tenure process.  The rep explained how a search firm can help the search team avoid misinterpreting data about a candidate — if something seems like a red flag, to ask about it, rather than assume they know what it means.  Having worked in a place where search teams were instructed to put forward the list of interviewed candidates with strengths and weaknesses (rather than a recommendation), it was also interesting to hear that many provosts will ask for an unranked list of finalists, so if they really prefer one candidate over others they aren’t put into the position of seeming to go against the search team’s recommendation.  It strikes me that it’s an advantage for the candidate as well as for the institution when a search firm is used, since they are looking for fit just as much as the institution, and they can ask the search firm rep the kind of questions it would be hard to put forward diplomatically to members of the hiring organization without having it count against them in the long run.

The last takeaway came from a discussion about how it can be difficult for AULs at ARL libraries to move up the last rung without taking a detour to a non-ARL institution and serving as director there for a while — the rep’s explanation was that thinking holistically about the library’s problems, and being the “face” of the library, is not something that AULs typically do on a daily basis, and it takes a lot of reflection, as well as experience handling the kinds of problems that directors handle, to come across credibly in an interview.  Another good session, with lots of takeaways.


ALA – Friday June 27 – NISO

The NISO standards update program was all new information for me.  In general, I understand the value of standards, but I don’t have a good way to stay up-to-date in all the areas where standards are of interest to me, partly because standards documentation very quickly becomes too technical and detailed for the amount of time I can devote to “non-day-job” reading.  That’s why programs like the NISO update are so useful – lots of high-level conceptual explanation.

The area of discovery was covered first — NISO has just participated in releasing a new set of recommendations in this area via the paper Open Discovery Initiative: Promoting Transparency in Discovery (NISO RP-19-2014).  Moving from library catalogs (what a library owns) to web-scale discovery systems raised a lot of issues.  If a vendor is offering a web-scale discovery system, what content is included?  What metadata is made available for searches to run against?  (For example, article “metadata” exists at several levels from multiple sources — some available from the publishing journal, some from a separate A&I service, even the full text of the article itself.)  Are there practices in place that affect how the results are presented?  The ODI guidelines attempt to make the whole process more transparent, so that users and librarians can better understand and use the discovery service, but the guidelines avoid the territory in which discovery services gain competitive advantage.  Interface, performance, and relevance ranking were named as areas in which these products legitimately compete against each other in the marketplace.  It’s interesting to think about when it becomes useful to establish guidelines like this — you could imagine that acting too late would be bad, because huge amounts of time/money would be wasted trying to achieve interoperability if everything about discovery were secret and proprietary and required making commercial arrangements individually with all players.  Presumably acting too soon might also be a problem, if the issues that arise from new kinds of discovery weren’t fully apparent.  Although “it’s never too soon” seems like the best answer…

Another area covered was digital rights management in a global context.  The Linked Content Coalition formed in March 2014 as an umbrella organization to gather together stakeholders dealing with metadata and identifiers about the access rights associated with content.  This seems like a really interesting problem space too, in which the goal is to have machines be able to understand when and how to allow access to content, rather than having to manage it manually.  By supplying the right metadata, for example, it would become simpler for a faculty person to know immediately whether or not they could put the full text of a paper in their MOOC without having to consult an IP expert for guidance.  Think about the work of this group as also simplifying things with regard to rights management in an international context.  LCC has already produced a lot of work.  They have a reference model which spells out all the entities that have to be addressed (people/organizations; places; creations; and then all the kinds of entities associated with rights, such as the right itself, the assignment of a right, assertions made about rights, and conflicts).  All these entities then have to have identifiers, which are linked in standard ways and managed by the appropriate authorities (registries, etc.), and this bit of work is laid out in the LCC’s Ten Targets document.  It quickly gets very complicated, which is why you begin to see that machine-readable metadata is essential — any system of manually figuring out who can do what with a creation leaves us with people doing nothing because they don’t know what’s legal, or doing whatever they want because it’s too complicated to figure out, and life’s too short.
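To make the "machines understand rights" idea concrete, here is a toy illustration (my own, not LCC's actual reference model or vocabulary) of how a rights assignment expressed as data lets software answer the MOOC question:

```python
# Toy illustration of machine-readable rights metadata. The structure and
# field names are invented for this sketch, not LCC's reference model.
from datetime import date

rights_statement = {
    "creation": "doi:10.9999/example.12345",          # hypothetical identifier
    "rights_holder": "Example University Press",
    "assignments": [
        {"use": "display_in_course", "territory": "worldwide",
         "allowed": True, "valid_until": date(2026, 12, 31)},
        {"use": "redistribute_full_text", "territory": "worldwide",
         "allowed": False, "valid_until": None},
    ],
}

def may(statement: dict, use: str, on: date | None = None) -> bool:
    """Check whether a given use is currently permitted by the statement."""
    on = on or date.today()
    for a in statement["assignments"]:
        if a["use"] == use and a["allowed"]:
            if a["valid_until"] is None or on <= a["valid_until"]:
                return True
    return False

print(may(rights_statement, "display_in_course"))       # True
print(may(rights_statement, "redistribute_full_text"))  # False
```

The real problem, of course, is the part the sketch skips: agreeing on shared identifiers and vocabularies so that every publisher, registry, and platform means the same thing by "display_in_course."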

The last part of the NISO workshop I saw was also concerned with global interoperability.  I had never heard of BISAC — this is a North American standard maintained by the Book Industry Study Group (BISG) for applying subject headings to published works.  It’s the way, for example, that Amazon and Barnes & Noble “know” what a book is about.  Evidently, every country seems to have its own scheme like this, and the craziness happens when all these schemes have to be mapped to each other (every time the schemes are updated) so that books can be sold internationally.  Very recently, work has been done to produce Thema, an international schema for subject headings.  Thema development is managed by EDItEUR, the international group that oversees a lot of standards used in the book publishing industry.  There is a lot of legacy infrastructure based on the nationally produced subject schemata, so Thema is not the replacement for BISAC (yet), but it may eventually develop to the point where it supplants national schemes.  It has some interesting features, including “expected audience.”  However, it was not clear to me why, other than the typical historical legacy of efforts being siloed, we have BISAC in addition to LoC subject headings.
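A toy sketch of why the mapping chore is so unloved; the codes and mapping below are placeholders, not real BISAC or Thema values.

```python
# Toy illustration of cross-walking one national subject scheme to another.
# The codes here are placeholders, not real BISAC or Thema values.
bisac_to_thema = {
    "FIC022000": "FF",    # placeholder: crime/mystery fiction
    "HIS036060": "NHK",   # placeholder: US history
}

def translate(code: str, mapping: dict[str, str]) -> str:
    # Anything without an equivalent falls into a lossy "unmapped" bucket,
    # and the whole mapping has to be revisited each time either scheme changes.
    return mapping.get(code, "UNMAPPED")

print(translate("FIC022000", bisac_to_thema))  # FF
print(translate("JNF012345", bisac_to_thema))  # UNMAPPED
```

Multiply that by every pair of national schemes and every annual update, and the appeal of a single shared scheme like Thema becomes obvious.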

The program got me thinking a lot about how sketchy my knowledge of standards is, and how over-simplified problems can seem until you begin to specify in detail what it will take to achieve seamless interoperability.  It also got me thinking about “information literacy” and how instruction librarians often say that students need to understand how scholarly knowledge is generated and disseminated, and sometimes they even include “business models” as important conceptual understanding for successfully navigating the information landscape.  Should we begin to include some understanding of the way standards affect the information landscape?  Clearly not all the technical details, but at least some familiarity?


Government information and the shutdown

I remember when….
Years ago, as a government documents librarian, I was involved with the federal depository program, in which libraries received government information – at that time, as printed materials – which they made available in their various locations across the nation.  The idea is evident — to ensure that all citizens had access to information about the activities of the U.S. government.

As a relatively new documents librarian, I had come on the scene about the time the Government Printing Office (GPO) was articulating a model for moving government information to the web — with obvious advantages in terms of timely access for many citizens.

A number of my colleagues and I raised concerns about a significant change to the model for access.  In the older model, with distribution of physical items, government information that was meant to be shared was not under government control.  In those days, we prided ourselves on that principle, as anyone who had been a docs librarian for long had experienced instances in which agencies in the govt had tried to retract (demand return of) publications for political reasons.  Once the physical items had been distributed to libraries, though, it was nearly impossible for such shenanigans to succeed.

Now we have yet another reason for making govt information available on non-govt servers — perhaps via libraries whose mission is to help keep the citizenry informed.

Due to the shutdown, I have already heard stories at my school of students who had to get extensions on their assignments because of the unavailability of government information.  Had libraries and the GPO been able to figure out a shared access model by which government information was securely offered from non-government servers, people would still be able to read reports, access census information, and so on.

It’s probably not too late to fix this picture, if public access to government information is deemed crucial enough as a public good that various stakeholders would commit resources to developing a more distributed model.
