Government information and the shutdown

I remember when….
Years ago, as a government documents librarian, I was involved with the federal depository program, in which libraries received government information (at that time, as printed materials) and made it available in their various locations across the nation. The idea was evident: to ensure that all citizens had access to information about the activities of the U.S. government.

As a relatively new documents librarian, I had come on the scene about the time the Government Printing Office (GPO) was articulating a model for moving government information to the web — with obvious advantages in terms of timely access for many citizens.

A number of my colleagues and I raised concerns about a significant change to the model for access. In the older model, with distribution of physical items, government information that was meant to be shared was not under government control. In those days, we prided ourselves on that principle; anyone who had been a documents librarian for long had experienced instances in which government agencies tried to retract (demand return of) publications for political reasons. Once the physical items had been distributed to libraries, though, it was nearly impossible for such shenanigans to succeed.

Now we have yet another reason for making government information available on non-government servers — perhaps via libraries whose mission is to help keep the citizenry informed.

Due to the shutdown, I have already heard stories at my school of students needing extensions on their assignments because government information was unavailable. Had libraries and the GPO been able to figure out a shared-access model by which government information was securely offered from non-government servers, people would still be able to read reports, access census information, and so on.

It’s probably not too late to fix this picture, if public access to government information is deemed crucial enough as a public good that various stakeholders would commit resources to developing a more distributed model.

Posted in Uncategorized

Digital Humanities – service design

In early June, I attended PhillyDH@Penn, a gathering of DH practitioners, scholars, and other people whose work involves DH. Throughout the day, I connected with several librarians whose institutions are all exploring service design for the evolving area of digital humanities scholarship. This led to a follow-up meeting of librarians to have a practical conversation about how to provide library services for DH researchers.

Our questions about service design included:

  • How can we present a non-fragmented service interface to our constituents?
  • How can we determine what levels of support we should offer to a widely diverse field of researchers using many unique-to-one-project workflows and technologies? Which folks in the library should be involved in providing services, and what should they do?
  • How do we build skills?

Those questions are lifted from the blog post I wrote about our meeting for the PhillyDH blog, where you can read about the model we came up with.

Posted in Uncategorized

Digital Literacy Journey

One area of life competency that interests me is how to function effectively in a world where information and communication technologies (ICT) are rapidly changing, converging, and diverging, and everything seems to be in perpetual beta.

Some people seem to thrive in this world, but others really don’t. So, what are the life skills by which people flourish in this environment, remaining excited about new possibilities but not overwhelmed or frustrated? Some of those skills were identified in the tremendously important paper by Henry Jenkins et al., Confronting the Challenges of Participatory Culture (2009).

When I think about my own experiences, I know that I am not thriving when I hope for clearly written instructions, don’t know how to approach a new tool efficiently, have no clue about troubleshooting a problem, don’t persist in figuring things out, can’t separate out noise (e.g., outdated instructions relating to an old interface), don’t use inexact information productively, and haven’t planned enough time to confront and solve tech issues.

I am a person who used the same telephone – not just the same type of phone, but the same exact phone – for the first 20 years of my life. But still, I have adapted, and it is my adaptations that interest me. I know that I have much better persistence, that I’m more exploratory, that I have some better instincts about troubleshooting, and so on. I have opinions now about approaching new tools and what I want to know about them. Where did my instincts come from, and can that process of learning-to-thrive be accelerated by intentional teaching and learning? Or is it inevitably the kind of learning you just “pick up” by (often painful) trial and error?

One way that I like to think about my question is that people who thrive, who have less pain in picking up useful skills in this area, are highly social learners.  They use their networks to learn things in an efficient and timely way.

This question of whether there can be intentional instruction, or if the best response is showing people how to build networks and use them effectively for learning, feels central to my work as a librarian and to the way we think about approaching information literacies.  My pedagogy depends on understanding the kinds of adaptive skills people need and how they develop them.

Anyway, I blogged about one of my “digital literacy” experiences in Apps on Tap at this web site:

Posted in Collaboration, Disruptive Technologies, Learning, Organizational Effectiveness, Shifting Education Paradigms, Social Media

NFAIS 2013 – Altmetrics, ImpactStory, Jason Priem

Slowly I’m summarizing a few of the great presentations at NFAIS 2013.

Jason Priem

Altmetrics are alternative metrics for scholarly output — ways to measure impact, recognizing that scholarship now happens across media and that impact does not result solely from peer-reviewed journal articles. Scholarly conversations happen via blogs, via shared software, via repurposed data, through public “peer review” of shared preprints, even on Twitter. So how can we think we are effectively measuring scholarly impact by counting citations alone?

Jason Priem, a graduate student in information science at UNC, was an electrifying and engaging speaker (although he does, really, talk too fast!).

He started by talking about the first scholarly revolution, where there was effort to standardize the scholarly publication — the “letter” became the basis for the journal article, and very sensibly the format was standardized to clearly present the bits that should be systematically shared (lit review, methodology, findings, etc.).

But now we can use the networked web as the platform for the scholarly record – or, as Priem calls it “web-native science.”  Several things follow from this recognition. First, we can embrace a diversity of scholarly output rather than forcing everyone through a journal article publication tunnel.  And that is already happening – data is being shared, grey literature abounds, significant discussion of results doesn’t wait for publication but happens within networks of people working in the same area. Second, if we see all that as scholarly output, we want to count it, we want to recognize it as worthy output for promotion and tenure.  That’s where altmetrics comes in – how do you measure it, and how do you gather all that disparate data together to tell the story of a scholar’s impact?

A very interesting question is how to ascribe “impact” when discovery happens via collective discussion. One example is MathOverflow, a website where people pose problems and everyone who follows the site can suggest solutions or contribute to the analysis. As with StackOverflow, a community has formed around a common need for a platform like this.

That knowledge results from community effort is one of the most fascinating aspects of networked culture, and we have to realize that our image of a lone scholar, solely responsible for her/his own work and able to specifically credit the discrete contributions of others, is often inaccurate and in some ways pointless. Isn’t there significant impact in just asking the right questions in the right places and coordinating input that achieves a solution? I sometimes speculate about what the negative and unintended consequences have been for academia with the promulgation of the lone-creator model or the sometimes rigid ways mandated for acknowledging the contributions of others. I know I say things or think things as a result of all that I have learned in the last decade, but I couldn’t begin to untangle what I owe to whom. Is there a useful way to handle acknowledgement, credit, and authorship that strikes a better balance? I know that I think about “identifying patterns” as part of learning and knowing, and that I owe some of what I believe about patterns to Jonah Lehrer (How We Decide) — but what, exactly? I read it a long time ago and I’ve been thinking ever since…. (I recently watched a documentary about the James Bond books & movies, and there was this endless lawsuit about who “owned” the movie character of James Bond, who was allowed to make the movies. It seemed so sad to me, the premise that there could only be “one James Bond” in the movies, one owner in a winner-takes-all model.)

Priem uses the phrase “make public” for sharing scholarly output – in direct comparison with the slow and sometimes unfair peer-review model of “publishing.” In Priem’s view, “making public” more accurately captures how scholars have impact on others. You can put something up on a blog and have a far greater impact on the creation of new knowledge than your article – which will only appear 3 years down the road – can hope to have. In fact, “informal communication” has been recognized as an important aspect of research for at least 60 years.

Ahh, think of every scholar now sharing so much more of what they are doing via the web — tweeting, blogging, grey lit, white papers, group discussions of all kinds. How will we manage this huge amount of information and emerging knowledge?

Priem showed us briefly how he manages the flow – he organizes his own network via something like Tweetdeck, essentially creating his own “current awareness” journal.

Priem then got into the nuts and bolts of altmetrics, which I’ll just briefly describe. Once you wade into the open web, how do you measure impact? He made it look obvious — impact is derived from other people’s behaviors. If you have “made public” your work, then either scholars or members of the general public can do something with it. They can view it, read it, comment on it, discuss it, share it, save it for future use, cite it, or recommend it. Clearly some of these activities seem to indicate a deeper engagement than others. ImpactStory is one response – how to capture the uses made of stuff, weight them as more or less impactful, and create some kind of easily digestible story about the impact of a scholar’s work, even in some cases being able to benchmark it against other scholarly products (e.g., more “clicked on” than 76% of other stuff in this bucket). The idea is that a story is more useful than a single numeric indicator of impact, that the story can help reflect the complexity of the ecosystem in which scholarly impact happens.
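A toy sketch in Python of the kind of weighting behind an impact story. The event types, weights, and percentile benchmark here are my own illustrative assumptions, not ImpactStory’s actual algorithm:

```python
# Hypothetical engagement weights: deeper engagement counts for more.
# These categories and numbers are illustrative, not ImpactStory's real model.
ENGAGEMENT_WEIGHTS = {"view": 1, "save": 3, "share": 5, "discuss": 8, "cite": 13}

def impact_score(events):
    """Collapse a list of (event_type, count) pairs into one weighted score."""
    return sum(ENGAGEMENT_WEIGHTS.get(kind, 0) * n for kind, n in events)

def percentile(score, reference_scores):
    """Rough benchmark: the share of comparable products this score beats."""
    if not reference_scores:
        return 0.0
    beaten = sum(1 for s in reference_scores if s < score)
    return 100.0 * beaten / len(reference_scores)

my_events = [("view", 500), ("save", 40), ("cite", 3)]
score = impact_score(my_events)                 # 500*1 + 40*3 + 3*13 = 659
print(percentile(score, [120, 300, 659, 900]))  # beats 2 of 4 -> 50.0
```

The point of the sketch is only that the “story” comes from many behaviors aggregated and weighted, not from a single citation count.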

The non-profit ImpactStory is here:


Posted in Shifting Education Paradigms, Uncategorized

Knowledge as a Network; Libraries as platforms

David Weinberger

NFAIS 2013 opened with a glorious bit of philosophizing about knowledge in a keynote from David Weinberger, author of Everything is Miscellaneous and Too Big To Know.

Some people might not care about “what knowledge is,” but surely anyone in or around the field of education gets the importance.  Teensy example – last year I encountered various theories of economic development which my Penn professor at the Graduate School of Education (the very smart Dr. Ghaffar Kucher) related to epistemology, to different paradigms for knowledge.  So I was poised to love Weinberger’s keynote talk, and I did.

Here’s my selective interpretation of Weinberger’s remarks.

The first startling thing is that, Marshall McLuhan-like, what we think knowledge is gets mixed up with the medium by which knowledge is delivered. When knowledge came in physical containers (books, journals, etc.), you couldn’t physically access it all. So it was important to condense it, to get value out of it by making it smaller.

We thought of knowledge as being part of a pyramid.  By condensing the raw data into information, people could sift through that and create knowledge.  You don’t have wisdom by just plowing through data, you distill at each layer.  We managed to get to wisdom by “reducing the amount of stuff we have to deal with.”   We filtered stuff out.

Another thing the pyramid shows us about our up-til-now view of knowledge — knowledge has traditionally had the property of being that which is settled.  If we’re still arguing about something, we say we don’t know.  We tend to think of knowledge as something we agree on; in a true state of knowledge there isn’t any more reasonable disagreement.

Theorists who first used the knowledge pyramid explained each layer as built on the other – information is structured data, and knowledge is information-you-can-use. The model is somewhat limiting in that knowledge, in Western culture going way back, has always been thought of as something more than just a way to get things done. Knowledge was being truly human, understanding our place in the universe, etc.

Other things from the world of knowledge-presented-in-tangible-media: in the physical world you can only have one organizational scheme – pick one. So the taxonomy (where we place an organism in the tree of life, for example) almost becomes the truth. We forget that it’s a representation, a model, and instead we take it for reality.

And that leads us to seeing knowledge as a series of stopping points – we get the answer, boom, move on.

Knowledge in the West shares the properties of its medium (print).

Enter the Internet.  Our new medium for disseminating knowledge is a NETWORK which gives knowledge the properties of networks.  Networks don’t “end.”  They are infinitely extensible (I think).

Knowledge starts to look more like a network.  (I can see our culture in transition here–many disagreements seem to me based on what counts as knowledge.)

Now compare properties of knowledge from the print world to knowledge in the networked world. Something sparks a response throughout the network – it’s all relevant and it’s all linked. Guess what – no one agrees! (Cf. the print world, where getting published meant a lot of settled agreement about what you have to say before people would undertake to print your work.) So, if I understand Weinberger right, we begin to have a deeper knowledge because when everyone is saying something (slightly or even dramatically) different, we benefit from a whole lot of available multiple perspectives trained on any particular issue or problem. And we’re not saying everyone is equal — we’re not saying crazy or unfounded opinions help. We’re saying these multiple perspectives come from thousands of people with expertise and reasons to weigh in. We would actually learn less or understand less if all these smart people were saying the same thing. The value comes from multiple perspectives ad infinitum rather than from filtering for less.

It’s almost paradoxical sounding to hear Weinberger say “Disagreement is how you scale knowledge – it’s how knowledge can get really big.”

He then showed us a diagram with a picture of an ordinary robin in the center, surrounded by labels in all directions to show that when the world at large talks about the robin, it can have many “meanings,” including a robin, a songbird, a symbol of Spring, a disease vector, a work of art, and so on. This helps me understand that knowledge is not a finite set of statements about a robin, or one placement of the robin in some ontological scheme. The knowledge is the infinite number of semantic relationships a robin can be part of. If the knowledge I’m seeking is the kinds of birds I can see in Medford, Mass. in mid-March, then somewhere there may be a data set with geo-tagged, dated photos of robins. And that’s a pretty mundane example — a robin could be included in climate change knowledge, a study of birds as symbols in literature, or even the “livability” of a city.
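One way to make that concrete is to treat each meaning as a subject–predicate–object link, so that “what we know about the robin” is just the open-ended set of relationships it participates in. The triples below are my own examples, sketched in plain Python rather than any real linked-data toolkit:

```python
# Each fact is an open-ended (subject, predicate, object) link;
# "knowledge about the robin" is simply every triple it appears in.
triples = {
    ("robin", "is_a", "songbird"),
    ("robin", "symbol_of", "Spring"),
    ("robin", "seen_in", "Medford, Mass."),
    ("robin", "indicator_for", "climate change"),
    ("poem", "mentions", "robin"),
}

def relationships(node, triples):
    """All links a node participates in, as subject or object."""
    return [t for t in triples if node in (t[0], t[2])]

print(len(relationships("robin", triples)))  # 5 so far; new triples extend it, nothing closes it
```

Nothing in the structure caps how many relationships the robin can enter; adding a triple never invalidates the others, which is the network property Weinberger was pointing at.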

If we can cope with the messiness of a network as compared to a pyramid,  knowledge can get really, really big.  Which I take to mean, I could conceivably “know” a lot more because with the right tools I can produce knowledge precisely because I have access to sooooo much information.

Then Weinberger remarked on something that has troubled me ever since I started to work in libraries: curation. In the world of tangible media (where space is an issue), but also in the world of licensed electronic information, you can’t have everything, so what do you acquire, manage, keep? Anybody’s “principles” (e.g., buy what people want) are in conflict with someone else’s (e.g., buy what they’ll need but don’t know it yet). It’s so obvious, there is no right answer. (In my public library days, it just depended on who was boss.) What a relief to have someone talk about how problematic curation is. You simply cannot predict what will be of interest or need, any more than you can limit the ways a robin might have meaning.

I do think some libraries got over this, realizing that what their constituents could “access” was just as important as what the library might own.  Still, we spend a lot of time selecting stuff for individual libraries, and as Weinberger pointed out in his earlier book, it’s cheaper to just have everything and filter on the way out.  Selecting, or curating, is a very expensive use of staff time.  Since libraries can’t buy everything (even electronic stuff costs money) they are forced into selecting.  But the illogic is increasingly clear, as libraries struggle with what they should stop doing in order to provide impactful services for users with new needs, and perhaps this will be one of the drivers to radical new business models.

What Weinberger calls filtering on the way out I think I would call discovery — it’s sort of the same thing when you can go to a place where “everything” is, and then drill down to what you want.  You don’t filter by putting only a few things into the bucket; you filter by going into the Bucket of Everything and using discovery tools to bring what you want to the surface.
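The contrast is easy to sketch: instead of curating a small bucket up front, keep everything and let a discovery query do the filtering at request time. This is a deliberately minimal illustration with made-up items, not a real discovery system:

```python
# The Bucket of Everything: nothing is filtered on the way in.
everything = [
    {"title": "Robin migration dataset", "tags": ["birds", "data"]},
    {"title": "Moby Dick", "tags": ["fiction", "whales"]},
    {"title": "Birds as symbols in literature", "tags": ["birds", "criticism"]},
]

def discover(bucket, query):
    """Filter on the way out: surface matches at request time."""
    q = query.lower()
    return [item for item in bucket
            if q in item["title"].lower() or q in item["tags"]]

print([i["title"] for i in discover(everything, "birds")])
# ['Robin migration dataset', 'Birds as symbols in literature']
```

The selection decision moves from the acquisition moment to the query moment, which is exactly the shift from curating in to discovering out.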

So, in this new world, we’re no longer going to get value by filtering on the way in, by weeding through and letting only some information become knowledge (by selecting, by only publishing some info, by only providing access to a subset, etc).

Instead, thinking about knowledge by thinking about the properties of networks, Weinberger talked about 3 ways to squeeze more value from information.

#1 Iteration

Iterate forward into knowledge (rather than seeing it as something settled that you obtain). His example was the StackOverflow site. The likelihood that you are the first person to have a problem is minimal to vanishing. People can help you iterate forward. Iteration at webscale is an efficient way to produce new knowledge when information is superabundant.

Weinberger alluded to some unfolding cultural effects. We’re still not used to what happens when millions of people can be brought to work together. According to traditional ideas of knowledge production, Wikipedia shouldn’t work, but it does.

#2 Platforms

Platforms continually increase value. I can’t remember what example Weinberger used, but the one I have heard is Facebook. Facebook didn’t succeed by putting content out on the web and attracting people to it; it succeeded by being a platform on which people could share content, and by designing the platform so that people’s ordinary interactions could make the content more and more valuable. If my circle of friends are all recommending a particular book or a data visualization tool, no centralized source is bringing that to my attention; it is the result of many individual behaviors.

Weinberger sees the library as a platform: a knowledge network where people discover, build, and share. The top layer is where the value is created (people using stuff and connecting with other people). Everything they are doing in this layer should be feeding back into your system to keep adding value. This would be a terrific discussion topic back at the ranch! (It’s the sort of thing that makes me want to talk about why “tagging” in academic libraries never seemed to catch on.)

Again, the top layer is users doing whatever they feel like and adding value by their behaviors.  The middle layer is tools, services that allow users to do “whatever.”  The bottom layer is the data, metadata.

This led to another interesting philosophical point — the only difference between metadata and data in the digital world is functional.  Anything that is data can be metadata for something else. The example is “Melville, Moby Dick, Call me Ishmael.”  Any one of those terms could be the metadata for any of the others, depending on what a user is trying to do.
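A tiny sketch of that point: index the same record under every one of its values, and whichever value the user already has acts as the metadata for retrieving the rest. The record and lookup function are illustrative, not any particular catalog’s design:

```python
# One record; which field counts as "metadata" depends on what the user has in hand.
record = ("Melville", "Moby Dick", "Call me Ishmael")

# Index every value: anything that is data can be metadata for something else.
index = {value: record for value in record}

def lookup(known):
    """Use whatever the user knows as metadata; return the other fields as data."""
    rec = index[known]
    return [v for v in rec if v != known]

print(lookup("Call me Ishmael"))  # ['Melville', 'Moby Dick']
```

The distinction is purely functional: the famous first line is data when you retrieve it by author, and metadata when you retrieve the author by it.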

Just as curation is problematic, Weinberger commented that plans don’t scale. People who try to anticipate what will happen are going to be wrong. The pain of being wrong becomes greater as we invest more in our plans. Seeing the library as a platform allows the development of value without recourse to planning, which will be mostly wrong.

#3 Linked data

What is happening to knowledge (it’s getting massive and interlinked) is also happening to data.

Weinberger applauded some experiments with “filtering on the way out”: repositories full of big, messy stuff. Just chuck it all in there, and let people develop tools and services around it.

Because data can be meaningfully linked to and used, we don’t need knowledge to be a settled, agreed-on thing. Knowledge is now useful arguments between people…at scale. The usefulness comes out of linked data. We can have evidence at our fingertips. At this point it almost feels better to say that we’re always going to be in the process of knowing rather than to talk of knowledge as if it’s some disembodied thing. (Although complicated ways of talking like that usually make me impatient….) But it’s really a point to ponder. What can possibly be seen as a “right answer” when complexity theory demonstrates how mind-blowingly complex problems are, with googobs of interrelated and interacting sub-parts, and we need to think about optimizing in a constantly changing situation rather than getting to one right answer?

So, if we need to have computational ways of making all that data useful to us, we will need semantic search.

Weinberger closed with several points.

These things don’t scale:

  • Agreement
  • Order
  • Control

The web is a good metaphor. We built this without having control (a project plan, named managers, etc.). There’s no agreement on the web, there’s no real order, no one is in control.

And this syllogism:

  1. Interests are unpredictable
  2. Value arises from interest
  3. Linked openness enables value to scale

By the way, Weinberger says he works in a basement (at the Law Library at Harvard).  Now that seems too bad.  Let’s hope it’s a nice office down there.


Posted in Disruptive Technologies, Uncategorized

Courseware, students, participatory design . . .

I sometimes hear educators say that students want only one courseware platform to contend with, that students don’t want to deal with multiple platforms for the courses they take each semester.

I haven’t heard evidence for this claim, although one can see how it almost appears to be common wisdom. But have we gotten at the truth here? Untested ideas such as this are the very kinds of potentially wrong heuristics that ethnographic research is designed to correct. If you ask students what they want, they will be happy to tell you, but they might provide the wrong solution. Asking students for their “solutions” is the design-charrette approach. Do you want soft chairs or hard chairs? Do you want this pink or blue? Do you want all your courses to have the exact same layout?

I think the more successful approach is participatory service design.

The first step in participatory design is ethnographic research – close observation in a systematic and rigorous way to find out how students interact with courseware.

The second step is rigorously interpreting the data you gather, and applying it to the design problem in order to arrive at an optimal solution. Interpreting the data and evaluating potential solutions is done by people whose profession involves service design.

There are many stakeholders in the selection and support of courseware, and one or two platforms on campus will often be the right choice in balancing across constituencies.  And even though that solution may quiet student complaints, it doesn’t address what I’m guessing (and what ethnographic research could determine) is really the source of student complaints — poor design of online learning environments.

I take a lot of classes so I am frequently in the role of student.  But I am not suggesting my perspective leads directly to “the solutions”  for design of online courses.  Being a student just helps me understand why participatory design succeeds. (IT people sometimes call this “eating your own dog food,” meaning that putting yourself in a user’s shoes helps you design better.)

As a student, I find online courses can annoy me in three ways. There are many better solutions to these annoyances, in my opinion, than herding everyone into the same courseware corral. (I’m only looking at a student point of view here; I’ll briefly mention faculty-as-users in a bit.)

My first annoyance is that it is often hard to figure out the navigation of online courses — where has the usual “stuff” been put? Not everyone takes the same meaning from labels like “syllabus” or “course policies” or “assignments.” An institution or program could, of course, “design once” and make everyone fit into the same straitjacket — same courseware platform, same course design, same labels, etc. Then students just need to learn that one pattern. It isn’t hard to see that there will be a downside to this approach, even if it might smooth things over with students in the short run. Personally, I’d prefer good designs customized to course objectives over sameness. Some people get course navigation right — it’s like walking into a room where the furniture is arranged nicely versus walking into a room where things are awkwardly placed. It’s essentially a design question, not a platform or technology problem. So, how costly is it to raise institutional knowledge of course design? Or, how costly is it to sidestep it?

Another issue for me, wearing my “student hat,” is understanding the affordances of a platform. Can I please be told how the tool works in some plain, simple way? If the lecture videos can be slowed down, for example, that’s important to know. Can I run my writing through a check that will help me improve it prior to turning it in? Again, research can tell us how best to present explanations so that students take advantage of features and avoid pitfalls.

Annoyance number three involves knowing what I have to do each week.

This is presented in wildly different ways in each course I take – it is not dependent on the platform, it reflects the instructors’ preferences. In the worst cases, weekly lectures are in one place, assignments for submission are described in another, and the requirement to post to discussion forums is listed somewhere else. And as these things inevitably change during a course, instructors sometimes update deadlines in one place but not in all the places where the information is duplicated. The result is significant administrative overhead for students madly trying to complete weekly requirements. The problem is not complicated — it’s just this: the design does not efficiently allow the student to “know” when they are done for the week. The corollary is inefficiency for the student trying to plan ahead for what they need to do.

I suspect some of these design flaws are actually features for faculty — they minimize the instructor’s administrative overhead either during the course or in migrating the course forward for the next semester. This is why participatory service design needs to study all users of a system: How do faculty use courseware? What about other stakeholders?

My overall point: yes, one or two courseware platforms on campus may well be necessary for a host of reasons, but I’m not ready to interpret student complaints as buttressing a “one-size-fits-all” decision about instructional design. For me, student complaints may lead in a different direction.

One of the IT speakers I heard at ELI a few years back spoke to the problem of balancing standardization and innovation, questioning whether short-term gains from standardization are outweighed by creating inflexible people in a rapidly evolving climate. It could be that the really strong institutions of the future will have devoted as much or more time to raising institutional knowledge about the design of learning environments as to standardizing on pre-selected platforms.


Posted in Learning Spaces, Shifting Education Paradigms

Tools for organizational effectiveness

I believe quite a lot of people–even with no background in operations or manufacturing–could set up an assembly line that delivered a reasonable degree of efficiency.

I believe the skill required to do that is based on knowledge that has become part of our culture.  Through our ordinary, everyday living and working experiences, we have somehow learned how to organize a productive process that is sequential, linear, modular and cumulative.  And part of our cultural knowledge is knowing when an assembly-line approach is appropriate.  We know almost instinctively how to take advantage of the particular affordances of that kind of process for volume, efficiency and quality control.  We seem to quite naturally understand the difference in affordances between the home-based workshop or skilled artisan, and the collective enterprise that uses the assembly line approach.

What does the future hold regarding our cultural knowledge of productive processes?

Assembly lines are useful in some situations, but we’re increasingly working in a world that needs knowledge workers–handling big, messy, unstructured problems. Knowing how to put together an assembly line is of limited usefulness.

I think the tools of production we need now, and which should seem as easy and natural to us as the organizational tools of the manufacturing era, are tools like agile project management (or even standard project management), change management, establishment and leadership of high-performing teams, knowledge management, fostering learning organizations, understanding and nourishing innovation, service design, program assessment, and so on.

We all probably spend quite a lot of time and effort acquiring skills in these areas. Even so, it’s sometimes hard to get much traction in your workplace for putting newly acquired skills into common practice. Legacy “best practices” and “common wisdom” carry cultural weight even when they no longer yield the results they once did.

Just as the successful enterprises of the manufacturing era made an asset of their ability to use processes appropriate to their sphere of work, I think organizations today will succeed or stumble based on their ability to make use of the appropriate process, design and management tools.

And that our grandchildren will find it a simple matter to establish and launch high-performing teams…

Karrie

Posted in Collaboration, Organizational Effectiveness