The NISO standards update program was all new information for me. In general, I understand the value of standards, but I don’t have a good way to stay up-to-date in all the areas where standards are of interest to me. That’s partly because standards documentation very quickly becomes too technical and detailed for the amount of time I can devote to “non-day-job” reading. That’s why programs like the NISO update are so useful – lots of high-level conceptual explanation.
The area of discovery was covered first — NISO has just participated in releasing a new set of recommendations in this area via the paper Open Discovery Initiative: Promoting Transparency in Discovery (NISO RP-19-2014). Moving from library catalogs (what a library owns) to web-scale discovery systems raised a lot of issues. If a vendor is offering a web-scale discovery system, what content is included? What metadata is made available for searches to run against? (For example, article “metadata” exists at several levels from multiple sources — some available from the publishing journal, some from a separate A&I service, even the full text of the article itself.) Are there practices in place that affect how the results are presented? The ODI guidelines attempt to make the whole process more transparent, so that users and librarians can better understand and use the discovery service, but the guidelines avoid the territory in which discovery services gain competitive advantage. Interface, performance, and relevance ranking were named as areas in which these products legitimately compete against each other in the marketplace. It’s interesting to think about when it becomes useful to establish guidelines like this — you could imagine that acting too late would be bad, because huge amounts of time/money would be wasted trying to achieve interoperability if everything about discovery was secret and proprietary and required making commercial arrangements individually with all players. Presumably acting too soon might also be a problem, if the issues that arise from new kinds of discovery weren’t fully apparent. Although “it’s never too soon” seems like the best answer…
Another area covered was digital rights management in a global context. The Linked Content Coalition formed in March 2014 as an umbrella organization to gather together stakeholders dealing with metadata and indicators about the access rights associated with content. This seems like a really interesting problem space too, in which the goal is to have machines be able to understand when and how to allow access to content rather than having to manage it manually. By supplying the right metadata, for example, it would become simpler for a faculty member to know immediately whether or not they could put the full text of a paper in their MOOC without having to consult an IP expert for guidance. The work of this group can also be seen as simplifying rights management in an international context. LCC has already produced a lot of work. They have a reference model which spells out all the entities that have to be addressed (people/organizations; place; creation; and then all the kinds of entities associated with rights such as the right itself, the assignment of a right, assertions made about rights, and conflicts). All these entities then have to have identifiers, which are linked in standard ways, and managed by the appropriate authorities (registries, etc.), and this bit of work is laid out in the LCC’s Ten Targets document. It quickly gets very complicated, which is why you begin to see that machine-readable metadata is essential — any system of manually figuring out who can do what with a creation leaves us with the situation of people doing nothing because they don’t know what’s legal or doing whatever they want because it’s too complicated to figure out, and life’s too short.
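To make the idea of machine-actionable rights metadata a little more concrete, here is a toy sketch in Python. The entity names and fields are purely illustrative — loosely inspired by the LCC reference model’s notions of creations, rights, and assertions, but not taken from any actual LCC specification:

```python
# Toy rights assertions: each one links a creation (by identifier) to a use.
# Identifiers, field names, and uses below are all made up for illustration.
rights_assertions = [
    {"creation_id": "doi:10.0000/example.123",
     "use": "display_in_course",            # e.g. posting full text in a MOOC
     "permitted": True},
    {"creation_id": "doi:10.0000/example.123",
     "use": "commercial_redistribution",
     "permitted": False},
]

def may_use(creation_id, use):
    """Return True/False if an assertion covers this use, or None if unknown."""
    for assertion in rights_assertions:
        if assertion["creation_id"] == creation_id and assertion["use"] == use:
            return assertion["permitted"]
    # No assertion found: a human (or an IP expert) still has to decide.
    return None

print(may_use("doi:10.0000/example.123", "display_in_course"))  # True
```

The point of the sketch is the last branch: wherever the metadata is missing, we fall back to exactly the manual, expert-dependent process the LCC is trying to make unnecessary.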
The last part of the NISO workshop I saw was also concerned with global interoperability. I had never heard of BISAC — this is a North American standard maintained by the Book Industry Study Group (BISG) for applying subject headings to published works. It’s the way, for example, that Amazon and Barnes & Noble “know” what a book is about. Evidently, every country seems to have its own scheme like this, and the craziness happens when all these schemes have to be mapped to each other (every time the schemes are updated) so that books can be sold internationally. Very recently, work has been done to produce Thema, which is intended as an international scheme for subject headings. Thema development is managed by EDItEUR, the international group that oversees a lot of standards used in the book publishing industry. There is a lot of legacy infrastructure based on the nationally-produced subject schemata, so Thema is not the replacement for BISAC (yet) but it may eventually develop to the point where it supplants national schemes. It has some interesting features, including “expected audience.” However, it was not clear to me why, other than simply the typical historical legacy of efforts being siloed, we have BISAC in addition to LoC subject headings.
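The arithmetic behind the “craziness” of pairwise mapping is worth spelling out. A quick back-of-the-envelope sketch in Python (the scheme names are examples of the general situation, not a claim about which schemes actually get cross-walked today):

```python
# Why a shared international scheme helps: with n national schemes, mapping
# every scheme directly to every other requires n*(n-1) directed mappings,
# each of which must be redone whenever either scheme is updated. With one
# shared hub scheme (the role Thema aims to play), each national scheme only
# needs a mapping to and from the hub: 2*n mappings in total.

national_schemes = ["BISAC", "CLIL", "WGS", "C-Code"]  # four example schemes

def pairwise_mappings(schemes):
    """Directed mappings needed if every scheme maps to every other."""
    n = len(schemes)
    return n * (n - 1)

def hub_mappings(schemes):
    """Directed mappings needed if every scheme maps to/from one shared hub."""
    return 2 * len(schemes)

print(pairwise_mappings(national_schemes))  # 12
print(hub_mappings(national_schemes))       # 8
```

With only four schemes the savings look modest, but the pairwise count grows quadratically while the hub count grows linearly — which is the usual argument for a shared interchange scheme.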
The program got me thinking a lot about how sketchy my knowledge of standards is, and how over-simplified problems can seem until you begin to specify in detail what it will take to achieve seamless interoperability. It also got me thinking about “information literacy” and how instruction librarians often say that students need to understand how scholarly knowledge is generated and disseminated, and sometimes they even include “business models” as important conceptual understanding for successfully navigating the information landscape. Should we begin to include some understanding of the way standards affect the information landscape? Clearly not all the technical details, but at least some familiarity?