ALA 2014 Las Vegas!

I can’t get that Jimmy Buffett song outta my head, and that seems like a harsh punishment for walking past Margaritaville to the CVS for some yogurt and bottled water…

Yep, there I was with my vendor tote bag and my “fit over” sunglasses, rubbing elbows with all the glitzy people who came to Las Vegas for a really different experience!

[photo]

But ALA has always been a great place to run into former colleagues and catch up.  (These guys had never heard of the Library of Congress… my dinner pal was astonished, but hey, they were centuries too early, no?)


Wish I was reading . . .

all the interesting-looking books I saw in the ALA Bookstore, such as:

[photos of book covers]


…some ALA 2014 sessions, once over lightly

ARL Liaison Programs Discussion

One of the prompts for me to attend ALA this year was the opportunity to participate in an ARL-sponsored discussion about liaison programs.  The discussion started at ALA Midwinter in Philly and continued in Las Vegas.

The facilitators asked the participants–all liaison supervisors–to discuss two questions.

  1. What can I do to improve liaison work at my institution?
  2. What can ARL do to support the development of liaison programs?

We got put into the right frame of mind for the discussion by a presentation from three subject liaisons, who discussed some of the challenges they face.  The liaisons didn’t agree on everything, but they made the point that collections work (especially tasks that should be automated) can trump liaison engagement work in the eyes of supervisors or library leadership.  They also mentioned how often incentives for engagement work are lacking, especially at institutions, or in the professional marketplace, where published articles still earn the most reputation for librarians trying to build their careers.  Another theme was how many low-impact or administrative tasks are still part of the job (or have become part of the job) of public service librarians, fragmenting their days into an hour here or half an hour there.  Outreach has a positive connotation, but outreach that amounts to low-level “infotainment,” or participating in this and that to show how friendly the library is, was mentioned as a low-impact but time-consuming expectation that managers place on liaison librarians.  These were vivacious, thoughtful, creative, and hard-working folks – they wanted structural change from their institutions so they could do their liaison work better.  It was the perfect kind of thought-provoking event, the kind where you return home and more fully answer some questions for yourself.  In the next two weeks, I will be thinking about these questions:

  • What is the vision, what is the preferred mission statement, for the liaison program we want?  How do we arrive at that vision institutionally?
  • What are the challenges in achieving that vision?  Although some of the info we need has to do with challenges the liaisons face now, that’s only part of the picture.
  • What are the propitious conditions that allow great liaison work to flourish? (Positive inquiry.)
  • What are ways to move the dial, to push the organization in the direction of liaison work that has the kind of impact we defined in our vision?

It was broadly agreed that, given the way liaisons generally work, not enough is known institutionally about how effort is being expended and how resources (time, attention, expertise) are allocated.  If we want to move the dial in a certain direction, we need to understand our starting place.  In my breakout group, I talked about some kind of dashboard, so we can see what is happening and adjust (as opposed to an end-of-year report, which becomes the occasion for praise or criticism, after the fact, in the annual review).  In fact, my suggestion was more along the lines of a heat map, because all disciplines are not the same.  It may be appropriate for the history librarian to be doing lots and lots of instruction while the philosophy liaison does very different things.  Each liaison might generate “hot spots” in different areas of their activity dashboards.
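To make the heat map idea concrete – this is purely my own sketch, with invented disciplines, categories, and hours, not anything presented at the discussion – imagine each liaison logging hours against a few activity categories.  A few lines of Python would turn that log into a grid where the “hot spots” jump out at a glance, well before the end-of-year report:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical activity log: hours per quarter, per liaison, per
# activity category. All names and numbers are invented.
hours = pd.DataFrame(
    {
        "instruction": [40, 5, 12],
        "collections": [10, 25, 8],
        "consultations": [15, 20, 30],
        "outreach": [8, 6, 4],
    },
    index=["history", "philosophy", "economics"],
)

fig, ax = plt.subplots()
im = ax.imshow(hours.values, cmap="YlOrRd")  # darker cell = more hours
ax.set_xticks(range(len(hours.columns)), hours.columns, rotation=45, ha="right")
ax.set_yticks(range(len(hours.index)), hours.index)
fig.colorbar(im, ax=ax, label="hours this quarter")
plt.tight_layout()
plt.show()
```

The point of a visual like this, rather than a spreadsheet, is that a dense row isn’t automatically good or bad – the history row *should* look different from the philosophy row.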

Supporting Globalization at Your Institution – Discussion Group – heads of public services

This was too rapid-fire for me to take notes.  The speakers were from NYU and UIUC.  Globalizing the university seems to be a priority for universities everywhere, and these libraries were committing attention and thought to supporting that university priority.  The takeaway for me was that they had both approached this kind of support systematically and proactively.  These campuses had very different landscapes in terms of globalization issues, and in neither case was there a neatly tied-up, unified approach at the university level.  Rather, there were a lot of stakeholders and a lot of different experiments and programs being launched.  (Very similar to the data visualization situation; see that post below.)  So the first step was getting a good picture of the landscape.  Who’s playing, what are their goals, what are they doing?  All those questions help the library see where best to contribute.  My happiness was that not once did they mention making a research guide or a list of useful resources or anything like that.  They weren’t trying to bolt something onto the outside of the effort; they were looking for ways to facilitate the actual globalizing work of the campus community.


Role of Libraries in Data Management & Curation (ALA 2014)

Nicole Vasilevsky (Oregon Health & Science University) is a great speaker.  By the third day of a conference, when you’ve gone madly from session to session, her presentation was a beautiful moment of intellectual repose, because you never had to struggle to understand her point.   Ahhhh.

My notes will not do her justice, but plenty of what she said resonated with things I learned about at the 2013 Research Data Alliance gathering.

She and her group had some research questions:

  1. How to make science more reproducible?
  2. How can we educate researchers so that their data will be more reusable and reproducible?
  3. How can we use data to generate new hypotheses and make new connections?

Q. #1

Nicole explained that reproducible science involves providing good metadata about the resources used in your lab experiments.  Her analogy was cooking – you might copy the recipe of a famous chef, but if your ingredients aren’t of the same quality as the chef’s, your results may vary…  Verifying results in science means using the exact same resources (antibodies, model organisms, etc.).

Another issue is the methodology for an experiment.   In many scientific journals there are length restrictions on this part of an article, so even if a researcher intends to fully describe the methodology, they may be prevented by publishing practices.

She suggested we take a look at the comments at this Twitter hashtag:

#overlyhonestmethods

In their study, they took journal articles from the biomedical literature, across several domains, looking at 200 papers from journals with a range of impact factors.  Across all those articles, only about 50% of resources (antibodies, cell lines, organisms, knockdown reagents, etc.) were identifiable, even when the journal guidelines had stringent requirements for including this information – which suggests the guidelines weren’t being enforced.

Evidently they looked at lab notebooks too, which are often meticulous.  Where labs are doing a good job tracking the info (vendors, catalog numbers, stable unique identifiers, etc.), they aren’t getting that info into the publications.

Tools to help researchers are emerging.  Unique identifiers for resources are available in some places – e.g., biosharing.org.  But in experiments, resources can also be software and tools.  There needs to be more registry-like oversight of the identifiers and controlled vocabularies that are needed.

So, one of the projects her group is now working on is the Resource Identification Initiative, promoting unique RRIDs (Research Resource IDs).  Along the lines of FORCE11 recommendations, RRIDs should be machine readable, free to generate and access, and used consistently across publishers and journals.  To aid in discovery, RRIDs should be used in methods sections and as keywords in published articles.  Even though this is a very recent project, RRIDs are getting used, and where they are being used, they are used correctly about 90% of the time.
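As a toy illustration of why “machine readable” matters – my own sketch, with invented but format-valid identifiers, not code from the initiative – once RRIDs follow a regular pattern, a few lines of code can pull every cited resource out of a methods section:

```python
import re

# Sample methods text with invented (but format-valid) RRIDs.
methods_text = """
Sections were stained with an anti-GFAP antibody (RRID:AB_000001)
and analyzed in our imaging software (RRID:SCR_000001).
"""

# RRIDs follow a "RRID:<prefix>_<number>" pattern, which is what
# makes them harvestable by scripts and indexers.
rrid_pattern = re.compile(r"RRID:\s*([A-Za-z]+_\w+)")
for match in rrid_pattern.finditer(methods_text):
    print("cited resource:", match.group(1))
```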

Q. #2

In their effort to help educate researchers, her boss entered a contest called the one-page challenge:  What would you do with $1000 in order to…

They won, and used the money to fund a Data Management Happy Hour to advertise their workshops, consultations, and other services, and to talk with researchers about their data.  It seems like part of the reason it was a success (besides the wine) had to do with being very open about how everyone is learning to do this better.  They had a giveaway where people shared badly managed data sets or visualizations; it got people laughing, and they used the mistakes to make points about better practices and to establish themselves as useful consultants with relevant library services.

They also had a data wrangling open house for grad students, who are less immediately concerned with the use and re-use of data or reproducible science – they are really focused on getting through school and graduating.  In order to do that, they need to be efficient and avoid mistakes in their data management practices, so Nicole and her colleagues involved grad students in organizing and promoting a data wrangling workshop.

Q. #3

Making new connections via data was the third part of the presentation.  I learned, finally, the difference between an ontology and a controlled vocabulary – it’s not complicated, it just requires a clear explainer.   CTSAconnect is the project Nicole reviewed as making the connections alluded to in her third question.  CTSAconnect is explained on its own website, and it sounds like VIVO:

CTSAconnect aims to integrate information about research activities, clinical activities, and scientific resources by creating a semantic framework that will facilitate the production and consumption of Linked Open Data about investigators, physicians, biomedical research resources, services, and clinical activities. The goal is to enable software to consume data from multiple sources and allow the broadest possible representation of researchers’ and clinicians’ activities and research products. Current research tracking and networking systems rely largely on publications, but clinical encounters, reagents, techniques, specimens, model organisms, etc., are equally valuable for representing expertise. http://www.ctsaconnect.org/

Nicole and others have been working on the VIVO Integrated Semantic Framework (VIVO-ISF) ontology suite.  The general idea, as I understand it, is to have a semantic framework for describing relationships among all the entities that are interesting to researchers trying to stay up to date in their fields.  So there needs to be an ontology for resources as well as an ontology for people – a framework for revealing the relationships that are important about these kinds of entities.
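To give a flavor of what that kind of Linked Open Data looks like – a minimal sketch using the rdflib library and an invented example.org namespace, not the actual VIVO-ISF classes – here is a researcher linked to a lab resource as RDF triples:

```python
from rdflib import Graph, Literal, Namespace, RDFS

# Invented namespace for illustration -- not the real VIVO-ISF vocabulary.
EX = Namespace("http://example.org/research/")

g = Graph()
researcher = EX["researcher/jane-doe"]
antibody = EX["resource/antibody-42"]

# Triples relating a researcher to a resource she uses in the lab.
g.add((researcher, EX.uses, antibody))
g.add((antibody, RDFS.label, Literal("anti-GFAP antibody")))

print(g.serialize(format="turtle"))
```

Once people, activities, and resources are all described this way, software can traverse the links in either direction – who uses this antibody? what does this researcher work with? – which is exactly the “broadest possible representation” the project description talks about.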

The website for the ontology group at OHSU is here:

http://www.ohsu.edu/xd/education/library/about/departments/ontology


Data-Driven Decision Making – LRRT Forum (ALA 2014)

My former colleague, Jim Church, co-presented a study of what graduate students at Berkeley are citing: about 45,000 citations from dissertations in four disciplines (poli sci, business, econ, and history) completed between 2008 and 2012.  They showed a lot of interesting stats about the sources being cited – who was citing monographs or foreign language material, what the median publication date was for different disciplines, how many citations the dissertations averaged, etc.  It was a useful way to look at dissertations and graduate-level research, and it stirred up a lot of other kinds of questions.  For example, in one of the disciplines, the median age of cited sources was much higher than expected.  This kind of research is not to be undertaken lightly – they got a library grant and were able to hire students to do some of the number crunching, which was labor intensive and took time.  They were too recently finished with the initial analysis to say how all the data would be used.  Still, it was a wonderful example of how to do top-quality research that could help overcome some of the “unjustified trust in anecdotal evidence” mentioned by another ALA speaker . . .
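For anyone wondering how numbers like a median citation age get crunched – this is my own sketch with invented sample rows, not the Berkeley team’s actual pipeline – once the citations are coded into a table, the per-discipline stats fall out of a simple groupby:

```python
import pandas as pd

# Invented sample rows standing in for a coded citation dataset:
# one row per citation, recording the citing dissertation's
# discipline and year, and the cited item's publication year.
citations = pd.DataFrame(
    {
        "discipline": ["history", "history", "econ", "econ", "poli sci"],
        "cited_pub_year": [1971, 1995, 2004, 2009, 2001],
        "diss_year": [2010, 2010, 2011, 2011, 2009],
    }
)

# Age of each cited item at the time the dissertation was filed.
citations["citation_age"] = citations["diss_year"] - citations["cited_pub_year"]
print(citations.groupby("discipline")["citation_age"].median())
```

The labor-intensive part, of course, is the coding of 45,000 citations that has to happen before a table like this exists – which is where the grant-funded students came in.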


Data, Evidence, and Outcomes – What Does it All Mean? (ALA 2014)

There is a very cool statement associated with Joe Matthews, who presented this session.  I heard it years ago and it goes something like this:

Strategy is about accomplishing more with less, and that requires focus!

Matthews has done a lot of work with libraries on thinking strategically, and translating strategic plans into performance.  It’s great to have a vision, but how do you operationalize it?  And how do you know you are operationalizing it well–committing resources in alignment with what you identified as strategic goals?

Matthews’ name is associated with a balanced scorecard approach for libraries, and the development of measurable performance targets.  Metrics or indicators should not only show where you have been (what you got done) but also help you figure out, during the implementation phase of your strategic plan, how to adjust for greater success.

When you attempt to be data-driven, you can run into problems, such as:

  • Too many measures and no focus
  • Entrenched or no measurement systems
  • Unjustified trust in informal feedback systems
  • Fuzzy objectives

Another problem is being satisfied to just know what’s going on, what you are doing. Of course, having lots of data like that doesn’t tell you whether or not you are having an impact.  But wait, there’s more! Even if your data does begin to demonstrate your impact, that’s still not the point – the point is to use your data to continuously improve your impact.  The phrase they use is “change the target.”

He suggested we look at our own units and ask this question:

How do the library services or resources enhance or expedite what people need to do?

It starts with understanding the work our users are doing, and how the parent institution values that work.  One cannot demonstrate the value of a library until one has defined outcomes that are of importance to the parent institution.

There are many kinds of things to measure:

  • satisfaction / user experience
  • operations (a resource perspective; how are we allocating resources and how much of various things are we doing?)
  • impact (how are we affecting outcomes?)

Matthews reviewed some models for establishing metrics.

The Logic Model uses if … then statements.

If the Library does ____, we can produce these _______, so that our users will be able to _____, which will result in this kind of impact.

Example:  If the Library builds ample teaching rooms, develops lesson plans, and trains staff (inputs), we can offer 10 undergrad workshops per semester on data management (outputs), which will enable 50-60 students per year to produce better-quality senior theses with fewer obstacles and failures (outcomes), which will contribute to the University’s capacity to provide a strong and valuable research experience for undergrads (impact).

Orr’s Model

input → process → output → outcome → impact

  • input – resource perspective
    (space, equipment, funding, staff)
  • output – operational perspective
    (workshop, program, report, # attendees, etc.)
  • outcome – user perspective
    (increased skills; know-how or know-that; behavior change; status change)
  • impact – stakeholder perspective
    (faster completion, better employment, etc.)
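Seen side by side, Orr’s chain is basically the Logic Model with its blanks given names.  A trivial sketch (restating the hypothetical workshop example above, not real data):

```python
# Orr's chain applied to the hypothetical data-management workshop
# example from the Logic Model section above.
workshop = {
    "input": "teaching rooms, lesson plans, trained staff",
    "process": "design and run undergrad data-management workshops",
    "output": "10 workshops per semester",
    "outcome": "50-60 students/year with stronger data skills",
    "impact": "better senior theses; stronger undergrad research experience",
}

for stage in ("input", "process", "output", "outcome", "impact"):
    print(f"{stage:>8}: {workshop[stage]}")
```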

Matthews also briefly mentioned the Gates Common Impact Measurement System, which is a model for evaluating the impact of social programs that have been funded philanthropically.

The big takeaway here is the one about alignment with the goals of the university in order to have an impact.  The example of instruction program evaluation is compelling:  is the focus of assessment really only trying to figure out, after X number of library interventions, whether students can tell the difference between a catalog and an article database, or other procedural kinds of things?  Again, not that these aren’t important.  But the university is trying to turn out critical thinkers in various disciplines, practitioners who can go out and present their knowledge coherently and appropriately in various media, experts who efficiently use information sources and tools to maintain their expertise and stay up to date – how is library instruction contributing to that?  No amount of “happy sheets” (on a scale of 1 to 10…) or even pre- and post-tests is going to tell the impact story if your instruction goals and your assessment are not focused on impact from the start.

Matthews was big on being very clear about your goals in order to assess your impact.  Once you start talking about impact goals, it helps you make some difficult choices about your programs – if we want to have an impact on outcomes the university cares about, we will need to prioritize the kinds of instruction that have the potential to yield those outcomes.  It also helps you see that perfecting one-shot instruction sessions is never going to produce impact in that way, which helps you better understand how to resource that activity more efficiently if you are going to continue to do it.

From this guy to Chris Argyris to so many, many other thought leaders in the area of organizational effectiveness — they all keep urging us to articulate up front why we are doing things.  It seems so obvious, and yet . . .


Electronic Lab Notebooks (ALA 2014)

Speakers from Cornell and Yale talked about the product LabArchives and how they are supporting researchers on their campuses with electronic lab notebooks.  It was apparent how deeply they understood the kinds of things that researchers do with lab notebooks, and the day-to-day issues of data management.

The online environment of LabArchives has great bells and whistles – the ability to upload almost anything, link out to other info, share among groups via communication tools and access permissions, etc.  It’s flexible and lets researchers organize it the way they want.  One caveat – it’s not a great file management system for lots and lots of files; in that situation, it’s better to manage the files elsewhere and link out to them.

I had to leave early, so I missed the discussion about how these folks were using their product to engage with their community… nothing I say next reflects on them in any way!

Getting a tool as part of your service menu, and then teaching that tool, seems like a good way to branch out into new areas – it’s a concrete way to market your services and generate requests for curricular support.  At the same time, without human thinking and effort, it doesn’t get you embedded into the curriculum.  I feel like this often happens to the library – faculty see us as the folks to demo or teach a tool, rather than as partners who can help students learn threshold concepts that will transform their understanding of research, or whatever information skill or practice is the focus.  So, we can teach RefWorks “how to” sessions that focus on mechanics, but we aren’t necessarily invited to help craft assignments that will get students more focused on reproducible research and the development of connected knowledge.  Our marketing tells people that we teach/demo tools!  I’m not against mechanics – those are certainly important – but those kinds of training sessions can be handled by trainers or even lynda.com.  If teaching the tool can help you get a more in-depth understanding of the research happening on your campus, and the ways students need to be supported (and this definitely seemed to be the case with the presenters), then it is a useful and important first step.  But if you find yourself just teaching the tool in the same way year after year, and not building better instruction by leveraging your relationships with faculty, then I think you might as well point people at online tutorials.

Short of pointing all students to this kind of tool, especially given its somewhat intense learning curve, I wonder about a lightweight approach – could we, or should we, craft some kind of PennBox template that students can adopt for projects?  Little or no learning curve to use it . . .  It could be presented in a suite of helpful resources, such as tips for labeling files as part of version control, or how to log actions coherently when different members of a research team are working together asynchronously.  Measuring the use of such a website would be one way to show how the library supports undergraduate research, and to get some continuous data back about what students are drawn to and seem to like.
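Just to make the lightweight idea concrete – a hypothetical sketch, with invented folder names and naming conventions; PennBox itself would presumably just host the resulting template – a few lines of Python can stamp out the kind of project scaffold such a template might contain:

```python
from datetime import date
from pathlib import Path

# Invented folder layout for a hypothetical student project template.
FOLDERS = ["raw_data", "working_data", "analysis", "figures", "notes"]

def versioned_name(stem: str, version: int, ext: str) -> str:
    """Build a filename like 'survey-results_v02_2014-07-01.csv'."""
    return f"{stem}_v{version:02d}_{date.today().isoformat()}.{ext}"

def scaffold(project_root: str) -> None:
    """Create the folder layout plus a README describing the conventions."""
    root = Path(project_root)
    for folder in FOLDERS:
        (root / folder).mkdir(parents=True, exist_ok=True)
    (root / "README.txt").write_text(
        "File naming: <topic>_v<NN>_<YYYY-MM-DD>.<ext>\n"
        "Log every change in notes/changelog.txt so asynchronous\n"
        "team members can see what happened and when.\n"
    )

if __name__ == "__main__":
    scaffold("my_research_project")
    print(versioned_name("survey-results", 2, "csv"))
```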
