Slowly I’m summarizing a few of the great presentations at NFAIS 2013.
Altmetrics are alternative metrics for scholarly output — ways to measure impact that recognize that scholarship now happens across media, and that impact does not result solely from peer-reviewed journal articles. Scholarly conversations happen via blogs, via shared software, via re-purposed data, through public “peer review” of shared pre-prints, even on Twitter. So how can we think we are effectively measuring scholarly impact by counting citations alone?
Jason Priem, a graduate student in information science at UNC, was an electrifying and engaging speaker (although he does, really, talk too fast!).
He started by talking about the first scholarly revolution, where there was effort to standardize the scholarly publication — the “letter” became the basis for the journal article, and very sensibly the format was standardized to clearly present the bits that should be systematically shared (lit review, methodology, findings, etc.).
But now we can use the networked web as the platform for the scholarly record – or, as Priem calls it, “web-native science.” Several things follow from this recognition. First, we can embrace a diversity of scholarly output rather than forcing everyone through a journal article publication tunnel. And that is already happening – data is being shared, grey literature abounds, and significant discussion of results doesn’t wait for publication but happens within networks of people working in the same area. Second, if we see all that as scholarly output, we want to count it; we want to recognize it as worthy output for promotion and tenure. That’s where altmetrics comes in – how do you measure it, and how do you gather all that disparate data together to tell the story of a scholar’s impact?
A very interesting question is how to ascribe “impact” when discovery happens through collective discussion. One example is “MathOverflow,” a website where people pose problems and everyone who follows the site can suggest solutions or contribute to the analysis. As with StackOverflow, a community has formed around a common need for a platform like this.
That knowledge results from community effort is one of the most fascinating aspects of networked culture, and we have to realize that our image of the lone scholar, solely responsible for her/his own work and able to specifically credit the discrete contributions of others, is often inaccurate and in some ways pointless. Isn’t there significant impact in just asking the right questions in the right places and coordinating the input that achieves a solution? I sometimes speculate about the negative and unintended consequences for academia of the lone-creator model, or of the sometimes rigid ways mandated for acknowledging the contributions of others. I know I say and think things as a result of all that I have learned in the last decade, but I couldn’t begin to untangle what I owe to whom. Is there a useful way to handle acknowledgement, credit, and authorship that strikes a better balance? I know that I think about “identifying patterns” as part of learning and knowing, and that I owe some of what I believe about patterns to Jonah Lehrer (How We Decide), but what, exactly? I read it a long time ago and I’ve been thinking ever since…. (I recently watched a documentary about the James Bond books and movies, and there was an endless lawsuit over who “owned” the movie character of James Bond and who was allowed to make the movies. It seemed so sad to me, the premise that there could be only “one James Bond” in the movies, one owner in a winner-takes-all model.)
Priem uses the phrase “make public” for sharing scholarly output – in direct comparison with the slow and sometimes unfair peer review model of “publishing.” In Priem’s view “making public” more accurately captures how scholars have impact on others. You can put something up on a blog and have a far greater impact on the creation of new knowledge than your article — which will only appear 3 years down the road – can hope to have. In fact, “informal communication” has been recognized as an important aspect of research for at least 60 years.
Ahh, think of every scholar now sharing so much more of what they are doing via the web — tweeting, blogging, grey lit, white papers, group discussions of all kinds. How will we manage this huge amount of information/emerging knowledge?
Priem showed us briefly how he manages the flow – he organizes his own network via something like TweetDeck, essentially creating his own “current awareness” journal.
Priem then got into the nuts and bolts of altmetrics, which I’ll just briefly describe. Once you wade into the open web, how do you measure impact? He made it look obvious — impact is derived from other people’s behaviors. If you have “made public” your work, then either scholars or members of the general public can do something with it. They can view it, read it, comment on it, discuss it, share it, save it for future use, cite it, or recommend it. Clearly some of these activities indicate deeper engagement than others. ImpactStory is one response – a way to capture the uses made of stuff, weight them as more or less impactful, and create some kind of easily digestible story about the impact of a scholar’s work, in some cases even benchmarking it against other scholarly products (e.g., more “clicked on” than 76% of other stuff in this bucket). The idea is that a story is more useful than a single numeric indicator of impact, and that the story can help reflect the complexity of the ecosystem in which scholarly impact happens.
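To make the idea concrete, here is a minimal sketch of the pattern described above: count the different things people do with a “made public” work, weight deeper engagement (citing, recommending) more heavily than lighter engagement (viewing), and benchmark the result as a percentile against peer products. The event names and weights are purely illustrative assumptions of mine, not ImpactStory’s actual model.

```python
# Illustrative engagement weights: deeper engagement counts more.
# These values are assumptions for the sketch, not a real altmetrics model.
ENGAGEMENT_WEIGHTS = {
    "view": 1,
    "save": 3,
    "discuss": 5,
    "recommend": 8,
    "cite": 10,
}

def impact_score(events):
    """Sum weighted engagement events for one scholarly product.

    `events` maps an event kind (e.g. "view") to how many times it occurred.
    Unknown event kinds are ignored.
    """
    return sum(ENGAGEMENT_WEIGHTS.get(kind, 0) * count
               for kind, count in events.items())

def percentile_rank(score, peer_scores):
    """Percentage of peer products this score beats (the '76%' style claim)."""
    if not peer_scores:
        return 0.0
    beaten = sum(1 for s in peer_scores if score > s)
    return 100.0 * beaten / len(peer_scores)

if __name__ == "__main__":
    # One product's (made-up) engagement counts.
    my_product = {"view": 500, "save": 40, "discuss": 12, "cite": 3}
    score = impact_score(my_product)  # 500*1 + 40*3 + 12*5 + 3*10 = 710
    peers = [200, 450, 600, 800, 150, 900, 310]
    print(f"score={score}, beats {percentile_rank(score, peers):.0f}% of peers")
```

The per-kind weighting is the judgment call: whether a citation is “worth” ten views is exactly the kind of question an altmetrics story, rather than a single number, is meant to surface.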
The non-profit ImpactStory is here: http://impactstory.org/