The First Step Towards a System of Open Digital Scholarly Communication Infrastructure

 A guest post by David W. Lewis, Mike Roy, and Katherine Skinner

We are working on a project to map the infrastructure required to support digital scholarly communications.  This project is an outgrowth of David W. Lewis’ “2.5% Commitment” proposal.

Even in the early stages of this effort we have had to confront several uncomfortable truths. 

First Uncomfortable Truth: In the main, there are two sets of actors developing systems and services to support digital scholarly communication.  The first of these are the large commercial publishers, most notably Elsevier, Wiley, and Springer Nature.  Alejandro Posada and George Chen have documented their efforts.  A forthcoming SPARC report previewed in a DuraSpace webinar by Heather Joseph confirms these findings.  The second set may currently be more accurately described as a ragtag band: open source projects of various sizes and capacities.  Some are housed in universities, like the Public Knowledge Project (PKP); some are free-standing 501(c)(3)s; and others are part of an umbrella organization like DuraSpace or the Collaborative Knowledge Foundation (Coko).  Some have large installed bases and worldwide developer communities, like DSpace.  Others have yet to establish themselves and do not yet have a fully functional, robust product.  Some are funded only with start-up grants and have no model for sustainability, while others have solid funding based on memberships or the sale of services.  This feels to us a bit like the Rebel Alliance versus the Empire and the Death Star.

Weighing the Costs of Offsetting Agreements

A guest post by Ana Enriquez, Scholarly Communications Outreach Librarian in the Penn State University Libraries.

Along with others from the Big Ten Academic Alliance, I had the pleasure of participating in the Choosing Pathways to Open Access forum hosted by the University of California Libraries in Berkeley last month. The forum was very well orchestrated, and it was valuable to see pluralism in libraries’ approaches to open access. (The UC Libraries’ Pathways to Open Access toolkit also illustrates this.) The forum rightly focused on identifying actions that the participants could take at their own institutions to further the cause of open access, particularly with their collections budgets, and it recognized that these actions will necessarily be tailored to particular university contexts.

Collections spending is a huge part of research library budgets and thus — as the organizers of the forum recognized — of their power. (At ARL institutions, the average share of the overall budget devoted to materials was 47% in 2015-2016.) Offsetting agreements were a major theme. These agreements bundle a subscription to toll access content with payments that make scholarship by the institution’s researchers available on an open access basis. The idea behind offsetting agreements is that if multiple large institutions pay to make their researchers’ materials open access, then not only will a large majority of research be available openly but subscription prices for all libraries should come down as the percentage of toll access content in traditional journals decreases. The downside is that offsetting agreements tie up library spending power with traditional vendors; they redirect funds to open access, but the funds go to commercial publishers and their shareholders instead of supporting the creation of a new scholarly ecosystem.

Experiments with offsetting are underway in Europe, and MIT and the Royal Society of Chemistry have recently provided us a U.S. example. I look forward to seeing the results of these agreements and seeing whether they make a positive difference for open access. However, I am concerned that some see offsetting as a complete solution to the problems of toll access scholarship, when it can be at best a transitional step. I am concerned that it will be perceived, especially outside libraries, as a cost-containing solution, when it is unlikely to contain costs, at least in the near term. And I am also concerned that libraries and universities will commit too many resources to offsetting, jeopardizing their ability to pursue other open access strategies.

Offsetting agreements must be transitional, if they are used at all. They are inappropriate as a long-term solution because they perpetuate hybrid journals. Within a particular hybrid journal, or even a particular issue, articles from researchers at institutions with a relevant offsetting agreement are open access, as are some other articles where authors have paid an article processing charge (APC). However, other articles within that same journal or issue are not open access. An institution that wants access to all the journal’s content must still pay for a subscription. In contrast, if the library that made the offsetting agreement had instead directed those funds into a fully open investment (e.g., open infrastructure or library open access publishing), the fruits of that investment would be available to all.

Controlling the costs of the scholarly publishing system has long been a goal of the open access movement. It is not the only goal — for many institutions, promoting equity of access to scholarship, especially scholarship by their own researchers, is at least as important. However, with library and university budgets under perpetual scrutiny, and with the imperative to keep costs low for students, it is important to be transparent about the costs of offsetting. In the near term, offsetting agreements will cost the academy more, not less, than the status quo. Publishers will demand a premium before acceding to this experimental approach, as they did in the deal between MIT and the Royal Society of Chemistry. The UC Davis Pay it Forward study likewise estimated that the “break-even” point for APCs at institutions with high research output was significantly below what the big five publishers charge in APCs. In other words, shifting to a wholly APC-funded system would increase costs at such institutions.

The authors of the Pay it Forward study and others have written about structuring an APC payment model to foster APC price competition between journals. Institutions pursuing offsetting agreements should build this into their systems and take care not to insulate authors further from these costs. They will then have some hope of decreasing, or at least stabilizing, costs in the long term. Barring this, libraries’ payments to traditional publishers would continue to escalate under an offsetting regime. That would be disastrous.

Whether or not offsetting agreements stabilize costs, libraries will have to be cautious not to take on costs currently borne by other university units (i.e., APCs) without being compensated in the university’s budgetary scheme. What’s more, because offsetting agreements reinforce pressure to maintain deals with the largest publishers, they undermine libraries’ abilities to acquire materials from smaller publishers, to develop community-owned open infrastructure, to invest more heavily in library publishing, to support our university presses in their open access efforts, and to invest in crowdfunding schemes that support fully open access journals and monographs.

To maintain this pluralistic approach to open access, either within a single research library or across the community, libraries signing offsetting agreements should be cautious on several points. To inform their negotiations, they should gather data about current APC outlays across their institutions. They should structure the APC payment system to make costs transparent to authors, enabling the possibility of publishers undercutting each other’s APCs. They should safeguard flexibility in their collections budgets and invest in other “pathways” alongside offsetting. And they should, if at all possible, make the terms of their offsetting agreement public, in the spirit of experimentation and of openness, to enable others to learn from their experience with full information and to enable themselves to speak, write, and study publicly on the impact of the agreement.

The GSU Copyright Case: Lather, Rinse, Repeat

On Friday, a panel of the 11th Circuit Court of Appeals issued its decision in the publishers’ appeal from the second trial court ruling in their lawsuit against Georgia State University, challenging GSU’s practices regarding library electronic reserves.  The decision came 449 days after the appeal was heard, which is an astonishingly long time for such a ruling.  I wish I could say that the wait was worth it, and that the ruling adds to our stock of knowledge about fair use.  Unfortunately, that is not what happened, and the case continues to devolve into insignificance.

The judges on the appellate panel seem to realize how trivial the case has become.  After working on it for one year, two months, and three weeks, the court produced a decision of only 25 pages, which sends the case back, yet again, for new proceedings in the district court.  The short opinion simply reviews their earlier instructions and cites ways in which the panel believes that Judge Orinda Evans misapplied those instructions when she held that second trial.  What it does not do is probably more significant than what it does.  The ruling does not fundamentally alter the way the fair use analysis has been done throughout this case.  The publishers have wanted something more sweeping and categorical, but they lost that battle a long time ago.  The 11th Circuit also affirms Judge Evans’ decision not to reopen the record, thus preventing the publishers, and the Copyright Clearance Center that is pulling their strings, from introducing new evidence of licensing options that did not exist when they brought the case in 2008.  Although it seems like a mere technicality, this ruling, another loss for the publishers, really points out how silly and out-of-date the lawsuit now is.

This time around, the circuit court seems to say more explicitly that they expect more of the excerpts that are at the center of this dispute to be found to be infringing.  They clearly do not like the fact that, after the first appeal, and with their instructions to be less mathematical in her analysis and to weigh the fourth factor more heavily, Judge Evans found fewer infringements (by one) than she had in the first trial.  So if there is a third trial, maybe the outcome will be six infringements, or even ten.  But the big principles that the publishers were trying to gain are all lost.  There will be no sweeping injunction, nor any broad assertion that e-reserves always require a license. The library community will still have learned that non-profit educational use is favored under the first fair use factor even when that use is not transformative.  The best the publisher plaintiffs can hope for is a split decision, and maybe the chance to avoid paying GSU’s costs, but the real victories, for fair use and for libraries, have already been won.

The saddest thing about this case is that, after ten years, it continues to chew over issues that seem less and less relevant.  Library practices have evolved during that time, and publishing models have changed.  Open access and the movement toward OERs have had a profound impact on the way course materials are provided to students.  So the impact of this case, and of any final decision, if one ever comes, will be negligible.  The plaintiff publishers actually lost a long time ago; they simply lack the wisdom to recognize that fact.

Cambridge University Press, Oxford University Press and Sage Publishing v. J.L. Albert should have settled years ago.  Instead it has devolved into a kind of punchline, much like Jarndyce v. Jarndyce from Dickens’s Bleak House; the mere mention of it causes people to roll their eyes and giggle.  The final resolution of this dispute may yet be a long way off, but at this point the takeaway from the case is clear: carry on with your daily work, teachers and librarians, there is nothing to see here.

What does Icelandic fishing have to do with commercial publishing?

Siglufjordur is a small fishing village in the north of Iceland that my wife and I had the pleasure of visiting this past summer.  It nestles between the mountains of the Icelandic highlands and the sea in a way characteristic of towns on the northern coast.

What is unusual about Siglufjordur is its economic history.  It was a boom town in the 1940s and 50s, the center of the North Atlantic herring trade.  In addition to fishing, a great deal of processing and packing was done in Siglufjordur, and the town was triple its current size.  In the early 1960s, however, the herring industry in Siglufjordur collapsed quite suddenly, because the fishing grounds had been overfished.  Now the town is a shadow of its former self, surviving on sport fishing and tourism (the Herring Museum, perhaps surprisingly, is very much worth a visit).

We often refer to scholarly communications as a kind of eco-system, and I think the problem of overfishing has an important place in that analogy.  The proliferation of new journal titles, whose sole function seems to be padding out the “big deals” that publishers sell, with the growing number of titles used to justify the ever-increasing cost, strikes me as a kind of overfishing.  It is an example of pushing the market too far.  In Siglufjordur, however, it was the product that dried up; in commercial publishing it is the customer base, which is being systematically priced out of the market.

One sign that monopoly pricing is slowly pushing customers out of this market is the growing gap between bundle pricing, which publishers now like to call a “database model” in order to distance themselves from the unpopular phrase “big deal,” and the list prices of journals.  I was recently part of a conversation where a rep for one of the large commercial academic publishers told us, inadvertently, I think, that while the bundle she was selling cost $800,000, the list price for all those journals would be about $9 million.  If she intended to tell us what a great deal the bundle was, her comment had the opposite effect; it emphasized how absurd the list prices are.  They are punitive, and obviously unrelated to the cost of production; when list prices are more than 11 times the price most customers actually pay, I think they qualify as pure fiction.  This pricing practice is equivalent to throwing an explosive into the water to drive the fish into the nets.  It represents a blatant effort by these publishers to force customers to buy the bundled packages, so they can profit off junk titles they could not sell on their own merits.
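To make the gap concrete, here is a quick back-of-the-envelope calculation using the approximate figures quoted in that conversation; the exact numbers will of course vary by institution and package:

```python
# Approximate figures from the sales conversation described above
bundle_price = 800_000        # negotiated "database model" bundle, in dollars
list_price_total = 9_000_000  # approximate sum of list prices for the same titles

# How many times the bundle price a title-by-title buyer would pay
markup = list_price_total / bundle_price
print(f"List prices total roughly {markup:.2f}x the bundle price")
```

At these figures the markup is over eleven-fold, which is the gap between the price a bundle customer pays and the fictional price the same titles carry individually.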

There was a time when similar practices were called illegal tying under the U.S.’s anti-trust laws.  Movie companies, for example, were found to be illegally using their intellectual property monopoly to force theaters to rent unwanted titles in order to get the movies they really wanted to show; the Supreme Court forbade such “block booking” in 1948.  But anti-trust enforcement has changed dramatically over the years, and this kind of tying is now tolerated in the cable TV industry, as well as in scholarly publishing.  (For the record,  a U.S. court has held that bundling channels on cable is not illegal tying, but there are ongoing antitrust lawsuits over related practices.)  Publishing, in my opinion, has pushed the practice even farther than cable TV has, as the bundle prices spiral upward, the list prices become more and more penal, and customers are forced to consider a draconian loss of access, the academic equivalent of “cutting the cable.”

The problem with this kind of “overfishing” is that it is unsustainable; the commercial academic publishers are pushing the market so far that their customers simply can no longer afford the resources they need and, incidentally, create in the first place.  The profits generated by these companies are still extremely high, in the range of 35% and rising, but, as happened in Siglufjordur, the bottom can drop out quite suddenly.  In recent months we have seen whole nations, not to mention individual universities, start to reconsider not only whether the value offered by these publishers is worth the price, but even whether the price itself is simply out of reach.  And back in June, the Financial Times reported that RELX, the parent of Elsevier, had suffered its biggest decline in value in 18 months, and the financial services firm UBS was advising investors to sell, to take their profits and get out, due to “structural risks.”  Structural risk is a very accurate description of the problem you create when you push your market well beyond its capacity.

Why just 2.5%?

Sustainability planning is certainly a tricky business. Over the last several months I have been working with teams grappling with sustainability and other long-term plans for four projects: the Big Ten Academic Alliance’s Geoportal, Mapping Prejudice, the Data Curation Network, and AgEcon Search.  These are all cross-unit collaborative projects, and multi-institutional in most cases, but their common element is that my library serves as administrative and/or infrastructural home and/or lead institution. This planning has led to an interesting thought experiment, spurred by the AgEcon Search planning.

First, brief background: AgEcon Search is a subject repository serving the fields of applied and agricultural economics. The University of Minnesota (UMN) has been operating it since 1995, back in the pre-web days; the first iteration was Gopher-based. It is still jointly sponsored by the UMN Libraries and the UMN Department of Applied Economics, but is now also a partnership that includes financial and other support from the USDA, private foundations, both the national and the international agricultural economics scholarly associations, and others (full list and other info here). There is tremendous support within its scholarly community for AgEcon Search and, increasingly, very strong international use and participation, especially in Africa.

The two UMN host units have started long-term sustainability planning. Right now, a leading strategy is a joint fundraising program with a goal of building an endowment.

Here’s the thought experiment. Roughly speaking, a $2 million endowment would generate sufficient revenue to pay most of AgEcon Search’s annual staffing, infrastructure and other costs. $2 million is about 11% of what the University of Minnesota Libraries spends annually on collections. So if we were able to take just 10% of what we spend in just one year on collections, we would be most of the way towards ensuring a long-term financial future for one project. And if Minnesota could double that, or even go to 25%, then in one year we would be able to do this for two similarly-sized, community-controlled open projects. And if we did it for two years, we probably would have funded all four of these (Minnesota-homed) projects. And if we kept going, in the next and following years we would be able to use that money to do the same for other projects, at other institutions. And if all academic libraries did the same, how many years would it take to put a really huge dent in our collective open funding problem?
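The arithmetic of the thought experiment can be sketched out directly; the dollar figures below are the rough ones given above, not actual budget lines:

```python
endowment_goal = 2_000_000   # endowment covering most of AgEcon Search's annual costs
share_of_collections = 0.11  # that goal as a share of one year's collections spend

# Implied annual collections budget (~$18.2 million)
annual_collections = endowment_goal / share_of_collections
print(f"Implied annual collections budget: ~${annual_collections:,.0f}")

# Redirecting 25% of a single year's collections spend
redirected = 0.25 * annual_collections
projects_endowed = redirected / endowment_goal
print(f"25% of one year's spend endows ~{projects_endowed:.1f} projects of this size")
```

At 25%, a single year's redirection endows a bit more than two such projects, which is the "two similarly-sized projects" figure in the paragraph above.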

Obviously, there are many, many practical, political, logistical and other challenges to actually doing this with our collections funding, but I’m leaving those aside for the moment, though they are far from trivial. This thought experiment has helped bring focus to my thinking about David Lewis’s 2.5% solution (see also his post on this blog and his later writings with other colleagues), and Cameron Neylon’s response in ‘Against the 2.5% Solution.’ Which, spoiler alert, is not strictly speaking against the solution, but in favor of a number of things, including an investment quality index that can guide these investments, a variety of different strategies, and much bigger investments than the 2.5%.

Which is where I think we absolutely need to be — more aggressively and more deeply investing in open. 2.5% per year is not enough. 25% might be getting warmer. Would I love for that money to come from our universities instead of from our collections budgets? Sure. But will it happen and how long will it take? Speed and agility will be increasingly important. To underscore that point: the Data Curation Network got its Sloan pilot grant funding and was well underway planning and (openly) sharing rich information about what it takes and how to do data curation when Springer announced it would offer for-fee data management and curation services. Wellcome Trust is now in a pilot to fund its investigators to use Springer’s service (I’m not linking, use your favorite search tool). The Data Curation Network, like many collective projects, has been starting to make the case for community support, with the usual mixed responses. How many more projects will teeter on the brink of survival while publishers with a long history of enclosure and extortionate pricing gobble them up, or out-market us, or out-innovate us?  What’s your favorite open project or workflow tool? Has it been asking for support?

I am, personally, decidedly lukewarm on the all-APC flip that started the OA2020 conversation, but don’t think we have the luxury of ruling out many strategies at this point. More, smarter, and faster are the words that guide my thinking and my hopes for a more open, community-owned scholarly communications ecosystem. I’m very much looking forward to the ‘Choosing Pathways to OA’ workshop at Berkeley in October, and grateful to colleagues at the University of California, including the faculty, who have injected recent energy and inspiration, and who have invested in space to bring us together to talk about advancing practical strategies. See other posts on this blog about offsetting at MIT and the publishing layers (formerly known as RedOA) project.

Copyright in government works, technical standards, and fair use

It is one of the simplest, yet most frequently misunderstood, provisions of the U.S. copyright law.  Section 105 of Title 17 says that “Copyright protection under this title is not available for any work of the United States government, but the United States government is not precluded from receiving and holding copyrights transferred to it by assignment, bequest or otherwise.”  A single sentence, but lots of nuance, both because of what it says and what it does not say.  Last week, an important decision from the DC Circuit Court of Appeals again highlighted some of the scope for confusion.

The clear point of this provision is that works created by the federal government – created, that is, by federal employees working within the scope of their employment — do not receive any copyright protection.  Such federal works are immediately in the public domain.  But there are some important points that have to be made regarding what this provision does not say:

  • Government works from other countries can be, and usually are, protected by copyright.  Section 105 explicitly rejects such “Crown copyright” for the U.S., but such copyrights are the rule, literally, in most other nations.
  • Government employees can and do hold copyright in works they create that are not part of their employment.  An EPA scientist has no rights in a report she writes as part of her job, even if it includes photographs she took to illustrate that report.  But she is entitled to copyright in her own private photographs, in works of poetry she composes, and in the video she makes of her son’s soccer game.
  • Independent contractors may have created many things that appear to be U.S. government works; such works likely do have copyright and the federal government may hold that copyright.  That glossy brochure you can pick up when you visit the Grand Canyon, for example, might have been created by National Park Rangers (no copyright) or by contractors working for the government (a copyright was likely transferred to the federal government “by assignment… or otherwise”).
  • Works by state and local governments are not precluded from copyright protection.  Judicial opinions from state courts are pretty much universally recognized to be public domain, but the issue of local laws and regulations is very much up in the air.  The State of Oregon, for example, has tried to assert a copyright in, at least, the collection of the Oregon Revised Statutes.  With local regulations, and the thornier issue of privately developed standards that are made part of laws, which is the subject of the case I want to discuss, the situation is even muddier.

    The Case for Fixing the Monograph by Breaking It Apart

    Earlier this month the University of North Carolina Press (where I am director) received a nearly $1 million grant from The Andrew W. Mellon Foundation to lead an OA pilot among multiple university presses (UPs). During the three-year experiment we will utilize web-based digital workflows to publish up to 150 new monographs. We intend to transform how university presses might publish their most specialized books while validating the legitimacy of high quality scholarship delivered in digital-first formats.

    I have argued that there are two interlocking reasons why OA hasn’t taken hold in university press monograph publishing. UPs are optimized for the creation of high quality print and so we are unprepared to publish in ways that maximize digital dissemination and use. And without a viable digital model with clear deliverables and accountability, there is no durable funding source for OA monographs.

    But despite these obstacles, UPs must find a new way to publish their most specialized books. The current cost-recovery model, which works pretty well when sales are in the thousands, falls apart when net sales number in the low hundreds. We are incurring significant debt to create bespoke products available to a privileged few. And more and more books fit this latter sales pattern. But these are vital volumes, critical to the advancement of scholarship.

    What we have proposed is a solution that requires a dramatic uncoupling and resequencing of our workflows. We need to take the book publishing process apart in order to ensure we’re focusing primarily on creating high quality digital editions that will be widely disseminated and used. A switch away from high-quality-print-with-trailing-digital and toward digital-first will have some disruptions, but it should also lead to lower costs. It requires that our surrounding ecosystem embrace digital—where the digital editions of record will be the ones getting reviewed, winning awards, and being considered in promotion and tenure packages. It is pay-walled print that will be a secondary format, available to those that require (and can afford) it.

    Breaking the publishing process apart helps clarify what parts of publishing should be subsidized versus the parts where cost recovery can provide funding. I operate in a state university system where “accountability” is almost always required to secure new funding. Our new paradigm looks to do just that. With streamlined costs, high levels of access, and robust analytics, it aims to ensure the long-term viability of humanities monographs as well as the university presses that are key to their creation and dissemination.

    There’s a lengthy post here with more detail about the rationale and details of the pilot.

    Offsetting as a path to full Open Access: MIT and the Royal Society of Chemistry sign first North American ‘read and publish’ agreement

    Over the past few years the MIT Libraries – like many US research libraries – have been watching with interest the development of “offsetting” agreements in Europe and the UK.  In offsetting agreements, a single license incorporates costs associated with access to paywalled articles and costs associated with open access publication.  These agreements have been the source of both new deals and broken deals.

    In the MIT Libraries, we have been following this offsetting approach closely, as it seems to have the potential to transition subscription-based journals to a fully open access model.  We have felt, though, that there was one major contingency we would want met in order for us to go down this path: the agreement would need to establish that over the long term, the publisher plans to use the hybrid approach to enable a transition to full OA.  Our concern is that – if perpetuated indefinitely – a hybrid model will not realize the full potential of open access to make research available to all, worldwide, rather than only to those with the capacity to purchase access.

    Given this context, we are pleased to report that we have just completed a license with the Royal Society of Chemistry – the first “Read and Publish” license agreement among North American institutions – that contains language acknowledging that the read and publish model is a step on a path to full OA. The language reads:

    Publisher represents that the Read & Publish model, with its foundation in “hybrid” open access – where some articles are paywalled and others published open access – is a temporary and transitional business model whose aim is to provide a mechanism to shift over time to full open access. The Publisher commits to informing Customer of progress towards this longer-term aim on an annual basis, and to adjusting Read & Publish terms based on its progress towards full open access.

    The agreement will run for two years, through 2019; articles published by MIT authors during that period, when the MIT author is the corresponding author, will be made openly available at the time of publication.  Costs are calculated through a formula based on the volume of MIT authorship and the volume of paywalled articles.  The idea is that over time, as more universities adopt this kind of contract, the proportion of paywalled articles will decline, and funding will shift from paying for access to closed content, to supporting open access to research produced by authors on one’s campus.  In this way, the read and publish model provides a mechanism for a staged transition from hybrid to full OA.
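    The actual cost formula in the license is not reproduced here, but its general shape can be illustrated with a toy model. The function, rates, and article counts below are hypothetical illustrations, not terms from the MIT–RSC agreement:

```python
def read_and_publish_cost(n_oa_articles, n_paywalled_read, publish_rate, read_rate):
    """Toy read-and-publish fee: a per-article charge for campus-authored OA
    articles, plus a read fee tied to the volume of still-paywalled content."""
    return n_oa_articles * publish_rate + n_paywalled_read * read_rate

# As more institutions sign such contracts, the paywalled share shrinks and
# spending shifts from reading closed content to publishing open content.
early = read_and_publish_cost(n_oa_articles=40, n_paywalled_read=900,
                              publish_rate=2_000, read_rate=100)
later = read_and_publish_cost(n_oa_articles=40, n_paywalled_read=400,
                              publish_rate=2_000, read_rate=100)
print(early, later)  # the read component falls as paywalled volume declines
```

    Under this toy structure, the total fee declines as the paywalled volume declines, which is the staged-transition mechanism the paragraph above describes.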

    For the MIT Libraries, this contract represents an important experiment with a nonprofit scholarly society, in which we use library collection funds to accomplish open access to MIT research through a business model that aims to transition journal publishing more broadly to open access.   This experiment builds on the idea that there is enough money in the system to support a move to open access, if there is a collective will to make that move, and it is accompanied by transparent, sustainable mechanisms (possibly, as some have called for, incorporating author participation) to shift subscription dollars towards open access models.

    We take seriously our effort to ‘vote with our dollars’ by supporting publishers whose values and aims align with ours and whose business models have the potential to make science and scholarship more openly available.  That effort includes assessing whether costs are reasonable and justifiable.  We carefully considered whether the increased library-based payments to the Royal Society of Chemistry, necessary in order to adopt the read and publish approach, were viable and justifiable for us.  We concluded that it was a worthy experiment, particularly as the added costs are directly making MIT-authored articles openly accessible, and because the Royal Society of Chemistry was willing to work with us to contain the cost.

    These kinds of judgements and strategic decisions – within a complex and evolving market – are difficult. We recognize and are engaging with important questions about how a move to a publishing-based fee structure for journals could impact universities and authors around the world.   For now, we believe this experiment promises to be a productive one on the path to finding a workable model for scholarly societies to transition their highly-valued, high-quality subscription journals to open access, and for universities like MIT – whose mission is to disseminate research as widely as possible – to expand open access to articles authored on their campuses.

    In order for the transition to full open access to take place, however, it will take more than the actions of one campus.   We look forward to engaging with others throughout this experiment, and to thinking together about how we can collaborate to advance open access to science and scholarship in productive ways.







    The Impact of Big Deals on the Research Ecosystem

    Earlier this month I read this article by Kenneth Frazier from D-Lib Magazine which argues that academic libraries should reconsider the value of so-called “big deals” from publishers. The core of the argument is that the downsides of these journal packages outweigh the benefits of convenience and an arguably lower cost per title. I say “arguably” about cost per title because, if one excludes the titles in a bundle that are rarely or never used when calculating per title cost, the value proposition is significantly different.

    The simple fact is that publisher bundling “deals” are larded with what, from the point of view of usage, is simply junk – obscure titles of little value that can only be sold by tying them to more desirable resources. If I want “Cell Biology” for my researchers, I also must buy “Dancing Times,” even if no one on my campus uses the latter.* At my institution, to give just one example, over 30% of the titles in our journal package from Wiley are “zero-use,” but it is still less expensive to buy the package than to subscribe, at list price, only to the titles that would get substantial use.  This tying of titles, and enforcing the bulk purchase by charging grossly inflated “list prices” for title-by-title purchases, is highly coercive, as Frazier points out, but it also creates some perverse incentives for the publishers themselves, which led me to think about the potential consequences of big deals for things like peer review.

    Publishers make more money using these big deals, of course. They justify the price tag of a package by highlighting how many titles we are getting.  They claim that the annual price increases, which far outstrip any growth in our collection budgets, are justified because of the growth in the number of papers published. These sales techniques give the publishers a strong motive to encourage the proliferation of titles in order to increase the perceived value of their products and continue to raise prices for each package. In short, there is an incentive to publish more journals, even if they do not meet basic academic standards of quality or appeal only to a tiny niche of research that is unneeded on many campuses.

    It is ironic that we hear a lot about the incentive to publish without attention to quality in the open access world, where the unfortunate phrase “predatory publishing” has become almost a cliché, but we often fail to notice the commercial incentives that encourage similar practices in the subscription market, thanks to these “big deals.” More is better, regardless of quality, and it justifies ever increasing prices.

    The impact on peer review is inevitable. As more and more articles are submitted, the bandwidth for review is stretched thin. We hear about this a lot: faculty members complain about how many requests to review they get, and also about the declining quality of the feedback they receive on their own articles. Yet we seldom make the obvious connection. Big deals, with the pressure for more and more articles to publish, encourage the trend to require more articles in our tenure reviews, and to ask graduate students to have multiple publications even before they complete their degrees. These packages also have the effect of reducing the quality of peer review. This is simple logic – the more widgets you make, the less able you are to assure the quality of each widget while still remaining cost effective. As publishers turn out articles like so many widgets, the decline in peer review, attested to in so many stories of its failures, is as logical as it is damaging. It becomes no surprise at all when we hear faculty say, as I have heard twice in recent weeks, that the peer review process at non-commercial open access publications is the best and most helpful feedback process they have experienced in years.

    Prestigious publishers keep their impact factors high by rejecting lots of articles. In the era of the digital big deal, however, those articles still get published; they just slide down to lower-ranked journals, and the standard of review decreases. Big deals do not just harm the sustainability of the library subscription market, although they certainly do that; they also undermine the very activity they were born to support. The scholarly publishing industry, which after initially trying to ignore the digital environment has now turned to ruthless exploitation of it, has become actively detrimental to the scholarly enterprise itself.


    *Author’s note: This example simply uses a very highly ranked title from Web of Science and a very low-ranked one; it is illustrative but does not necessarily reflect the actual subscription at my institution or any other.

    Saying it doesn’t make it so

    [Author’s note — this post was drafted back in January, so although the Scholarly Kitchen post that inspired it is a little old, the general themes are still relevant]

    Joseph Esposito was being intentionally provocative, perhaps even tongue-in-cheek in places, in his post back in January, Why Elsevier is a Library’s Best Friend. There are some good exchanges with commenters, many of whom had the same thoughts I did as I read. Here are a few additional responses both to Esposito and to fellow SK’er David Crotty about the post and the back-and-forth in the comments.

    Esposito on economies of scale:

    We often hear that the monopoly in copyrights is what makes companies so profitable, but even publishers that lose money have full control of their copyrights. It is scale, not monopoly copyrights, that drives a high level of profitability. … With a thousand [web sites] you can hire skilled professionals and negotiate with suppliers for better pricing. This is what RELX does. On a per-unit basis RELX is probably paying less than the average publisher for the materials and services it needs. Only its peers in scale (Springer Nature, John Wiley, and Taylor & Francis) have the same purchasing clout in the marketplace. Read more

    Demanding More

    At the American Library Association (ALA) Midwinter Meeting earlier this month, I attended the Association of College and Research Libraries (ACRL) and the Scholarly Publishing and Academic Resources Coalition (SPARC) Forum on “Shaping the Landscape of Open Access Publishing: Individually, Locally, and Collectively.” One of the speakers was my friend Chealsye Bowley, Community Manager for Ubiquity Press, a U.K.-based open access publisher. Bowley also happens to be a featured “Woman Working In the Open.”

    During her talk, Bowley discussed what it means to be a “Community Aligned Service Provider.” Ubiquity, while a for-profit press, works hard to make sure its work, business model, and methods align with the values of its community, namely the scholars and libraries that use its content. Bowley noted, “Many in the academic community are concerned that commercial interests may be fundamentally misaligned with those of academic researchers.”

    Slide reads "How can we be a better partner? How can we reflect community values?"

    Screen shot of C. Bowley’s slide

    To combat that misalignment, Ubiquity has created a Library Advisory Board (LAB) that provides guidance and feedback to ensure services align with library and scholar values. However, unlike many other publisher advisory boards, this one aims to actually and meaningfully incorporate feedback into Ubiquity’s practices. Thus far, based on feedback from the LAB, Ubiquity has initiated plans to make all its underlying code and platforms fully open source by the end of the spring, and it is piloting the creation of an open source repository. Ubiquity is finding ways to shift its business model to meet the values of its users while still, well, keeping itself in business. Their thinking seems to be that if they make these important efforts, they can ensure long-lasting, and yes even profitable, relationships with their customers for years to come.

    Slide reads "Service providers work for libraries, and should do the bulk of the work to align themselves with the communities they serve."

    Screen shot of C. Bowley’s slide

    Going beyond the specific Ubiquity model, Bowley, herself a librarian and scholar, went on to discuss the ways that libraries can demand more from their content providers: “Service providers work for libraries, and should do the bulk of the work to align themselves with the communities they serve. . . . [Libraries can and should] be critical of every vendor [and] push back.”

    Slide reads "Be critical of every vendor. Push back. Get contracts that reflect values."

    Screen shot of C. Bowley’s slide

    For this librarian and scholar working in the open, Bowley’s presentation was a breath of fresh air and a source of hope for the future. It is possible for vendors and librarians to work together in mutually beneficial, value-directed partnerships. Ubiquity Press provides just one example of how we can get it right.

    What’s even better, Bowley’s entire slide deck is available on the new LIS Scholarship Archive (LISSA) (of which Bowley is an advisory board member), an open access platform for sharing library and information science scholarship housed on the Open Science Framework (more about LISSA in a future post, stay tuned!).

    Can University Presses Lead the Way in OA?

    Last July, MIT Press issued a press release that should have caught the eye of any reader of this blog. MIT Press announced the creation of a new leadership position called Director of Journals and Open Access and the appointment of Nick Lindsay to the role. To my knowledge, Nick is the only person in the North American university press world who has OA in his title. Last month, I sent him a few questions about this unique initiative.


    JS: What was the backstory on the creation of this position?

    NL: The MIT Press has a long-standing commitment to open access across both our books and journals programs and has taken a leading position in OA publishing within the university press community. Over the last decade in particular, the Press has grown its OA output considerably and we now publish seven OA journals using a few different financial models, along with over seventy books that have an OA edition available, with more on the way. This openness has led to some very successful projects such as Peter Suber’s Essential Knowledge series book, Open Access, and flipping our journal Computational Linguistics from subscription based to OA. Along with this growth has come a new set of challenges, some associated with the increased size of our program and others with the changing landscape of OA. In previous years the Press was able to publish OA books, primarily monographs, without significant concern that there would be a major decline in print sales for that title. In fact, there was some evidence that the presence of an OA version of a book actually enhanced print sales, which was the case with our book City of Bits in 1995.

    But today it’s clear that the pressures on OA publishing are heavier and that acceptance of digital reading is much greater, which can imperil print sales revenues.  The calculations the Press needs to make when considering OA options have therefore become much more complex and we have moved to a situation where subventions, embargoes, and royalties all need to be considered when publishing an open access book.  Couple this with the development of new OA publishing platforms such as PubPub from the MIT Media Lab and the increased possibilities for experimentation with new dissemination models, and it’s become apparent there is a great deal of new work to be done. Press Director Amy Brand saw the need to bring stronger direction and more structure to our OA efforts so as to ensure that we don’t miss out on these new opportunities and that we address future challenges effectively. That’s what triggered the creation of the role.


    JS: Are there models at other publishers you’re looking at to duplicate or build from?

    NL: I think just about all university presses are trying to figure out where they sit in relation to OA. At MIT Press it’s perhaps slightly more urgent given the amount of scientific publishing we do but I know from talking about it with journals directors, press directors, and others at university presses that it’s an issue that’s top of mind for everyone. We’ve looked at the models that have been created at other Presses and there is much to be learned from them, but the Press is determined to chart its own course when it comes to OA. We’ve had extensive internal discussions across multiple departments at The MIT Press and have heard from many voices: acquisitions, financial, marketing and sales, and others, and we are constructing a flexible open access model that considers each book on its own merits and continues to uphold the quality standards readers have come to expect from The MIT Press. Up to now we have treated all of our OA books with the same level of consideration that we do for any title from The MIT Press: high quality design and production services, extensive marketing support and the same level of rigorous peer review and editing that we bring to all of our books and journals. We plan to ensure that this remains the case regardless of the business model used to support a title. One long term goal we have is to develop an open access fund that will allow us to not be constrained by available funding sources that authors may have for OA books. If we can build a sufficiently large fund we should be able to broaden our OA publishing opportunities.


    JS: How are you managing the differences between OA models in journals and books?

    NL: On the journals side we’ve become comfortable with different financial models for supporting open access and this has allowed us to go down a few pathways when it comes to how we structure our OA agreements. Societies and individual scholars with a journal idea have gravitated towards the MIT Press as a home for their journal since we started publishing serials in 1972. The combination of quality services and relatively low pricing has been appealing and with the acceleration of interest in open access we’ve seen an uptick in proposals with an OA component. Currently, we have a couple of titles where the sponsoring society is willing to cover all costs for publication including paying for a portion of the Press’ overhead. It’s a low-risk, low-reward approach but it works for us. We’ve also started three new journals in the brain sciences in 2017, Computational Psychiatry, Network Neuroscience, and Open Mind: Discoveries in Cognitive Science. All three are based on an APC-model and we’re happy with the results so far. Adapting internal processes to account for new workflows such as APC payments presents some challenges, but these are smoothing out over time.

    With books, author preferences strongly shape the decision to publish books openly, and these vary from field to field. In computer science, for example, the benefits of open models, particularly for software development, are well known and appreciated. In other fields, the decision might be made with the recognition that an open version may increase the dissemination of important scholarship and the global audience for books which might otherwise be limited to readers (mostly in developed countries) with access to adequately funded academic libraries.

    Authors often select The MIT Press as a place to publish their work in part because of our reasonable pricing and we’re pleased to be able to offer OA alternatives as well.


    JS: What has been the reaction to the announcement?

    NL: Very positive! From colleagues at MIT Press and MIT to others in the university press community it’s been great, and I suspect I’m going to have plenty of company when it comes to OA positions at other university presses in the near future.


    JS: Is MIT uniquely positioned to do this or is this something that you think other university presses can also do?

    NL: For journals in particular we undeniably have one big advantage in that we publish in STM. The financial models and customs are already in place and the idea of an OA neuroscience journal like Network Neuroscience coming from MIT Press is easy for the academy to accept. We know where the money is going to come from, and we’re already well-known by this audience and are in frequent contact with them at conferences and via social media and other outlets. Like many others we are waiting for a breakthrough in humanities OA journal publishing. This may come, but as long as publishing grants are non-existent in the humanities and publishing operations require market-based revenues to offset costs, it’s going to be a difficult proposition. But I don’t see the need to develop a full-blown science-publishing program to at least begin a pilot in OA publishing. The investment to make this happen, if you already have a journals program and a platform, is quite reasonable given that with the APC model much of the publication cost is covered up front.

    On the books side, we’re open to open access discussions for books from all fields in which we publish, and there are plenty of non-STM OA books on our list, including John Palfrey’s recent Safe Spaces, Brave Spaces. We’re also keen to develop OA books out of non-traditional sources such as pre-print servers. For example, the Press will be publishing a physics book that first appeared as a long article on the arXiv, “Holographic Quantum Matter” by Sean A. Hartnoll, Andrew Lucas, and Subir Sachdev.

    In both cases, books and journals, the Press does have advantages that come from being one of the first movers in OA, as they provide a strong base of knowledge and expertise on which to continue to expand our OA program. It’s encouraging to see others in the UP community embrace OA and I look forward to seeing OA become a more regular part of UP activities moving forward.



    What does a journal brand mean?

    Brands and branding are an important part of a consumer society, and they are largely about goodwill.  Trademarks, which are, roughly speaking, the legal protection given to brands, are premised on the idea that consumers should have some assurance about the continuity of the source of the goods and services they purchase.  A brand name is supposed to provide that continuity; whether you are buying from McDonald’s or Land’s End, the brand helps you know what you are going to get.  This is why trademarks protect against any use that might cause consumers to be confused about whether the goods or services they are buying are really from the same source.  The sense of continuity is what we call goodwill.

    Branding is extraordinarily important in scholarly publishing.  As a scholar quoted in a recent WIRED article put it, academic publishers hold a great deal of power in scholarly communications (his phrase was more colorful) because “we are addicted to prestige.”  This addiction depends on journal brands and, I want to suggest, journal branding is a pretty slippery thing.

    Most scholars believe that a journal’s reputation is primarily based on its editorial board.  If that board is populated by respected scholars, potential authors are inclined to believe that the review of their submissions will be useful and that others will be more inclined to see their work and view it positively.  A solid editorial board is at the core of academic journal goodwill.  So what happens when a board rebels against the journal?

    Consider the International Journal of Occupational and Environmental Health, which is published by Taylor & Francis.  A couple of weeks ago, the entire editorial board of this journal resigned in protest of decisions made by the editor-in-chief.  According to the reports, the disagreements that led to the resignations were about fundamental matters that impact the quality of the journal, such as its approach to corporate-sponsored research.  The question, then, is what is left of a journal brand when the editorial board that forms the core of its goodwill not only leaves — editorial boards turn over regularly, of course, usually in an orderly process that preserves continuity — but leaves because they no longer trust the integrity of the journal.

    Retraction Watch reports that, essentially, the publisher does not plan to change the direction of the journal and intends to appoint a new editorial board.  At the moment, the “editorial board” page on the journal website only lists the new editor-in-chief, whose appointment was part of what prompted the mass resignation. So what remains of the brand for this journal, if “consumers” cannot trust the continued quality of the articles, as evidenced by the resignation of twenty-two board members?

    Brands are an important part of the promotion and tenure process, and concern over branding, essentially, is sometimes raised to challenge publications in newer, often open access journals.  Scholars sometimes worry that their open publications won’t get the same respect as papers published in more “traditional” journals.  But if these traditional journals can change their editorial direction and their entire editorial board overnight, what is the brand that promotion and tenure committees are really relying on?  Will newer scholars now worry that publishing in the International Journal of Occupational and Environmental Health is not sufficiently respectable, that P&T committees might look askance at such papers?

    Branding is a delicate exercise, and brands can shift and change as consumer perceptions change.  Think New Coke.  If the scholarly community is going to rely on brands, we ought to pay more attention and not accept that a journal that was respected and respectable a few years ago is still an acceptable outlet today.

    It is sometimes said that the fundamental scholarly transaction is to trade copyright for trademark.  We give away a valuable asset — the right to control the reproduction, distribution and use of the fruits of our academic labor — in exchange for a brand that will mean something to P&T committees.  In an ideal world, I wish we depended less on journal reputation, not only because it is slippery and subject to the kind of editorial revolution described above, but because even at its best it does not actually reflect the thing that really matters in the P&T process, the quality of individual articles.  This disconnect is most stark when an entire editorial board resigns and is replaced.  But if we are going to make this dubious bargain, it is fundamental to our responsibility to the scholarly community to know what the key brands are and to be aware of events that compromise those brands.

    As a dean responsible for the tenure processes of a large number of librarians, I meet every year with the University Promotion and Tenure Committee.  The purpose of that meeting is to give me a chance to communicate important information that the committee needs as it evaluates candidates from the libraries — what is different about our field and what changes in our expectations and procedures they should be aware of.  I wonder how many deans in the health professions who are going into such meetings will have the awareness and attention to remind P&T committees that the brand of this particular journal has been fundamentally damaged — it simply no longer represents what it did a few months ago — and that that damage should be considered as they evaluate publications?  Will they talk about these events in faculty meetings and help their younger professors consider whether this new perception of the journal brand is potentially harmful to their academic careers?  That, I believe, is our responsibility.

    Accelerating academy-owned publishing

    (Note: This post was collaboratively written by several members of the ARL project group described below.)

    How can libraries develop more robust mechanisms for supporting services and platforms that accelerate research sharing and increase participation in scholarship? What kind of funding and partnerships do scholarly communities, public goods technology platforms, and open repositories need to transform into true, academy-owned open access publication systems? In an initiative formerly known as “Red OA,” these are the questions a group of ARL deans and directors have recently committed to address through engagement with scholarly communities and open source platform developers.

    The current system of scholarly journal publishing is too expensive, too slow, too restrictive, and dominated by entities using lock-in business practices. Fortunately, there are a growing number of scholarly communities embracing services and platforms that accelerate research sharing and increase participation in scholarship. Some of these communities are keenly interested in integrating journal publishing and peer review services into repository platforms, and bringing greater transparency and efficiency to the peer review process itself. At the same time, there is a global movement to build value-added services on top of distributed, library-based repositories, in order to “establish [open] repositories as a central place for the daily research and dissemination activities of researchers,” rather than the commercial sphere.

    As stewards of the scholarly record at research institutions, research libraries have a fundamental interest in maximizing access to and preservation of that record to further research, teaching, and learning. Research libraries have a long history of supporting open access scholarship, in the form of policy advocacy, open access funds to support authors willing to pay article processing charges (APCs) to make their scholarship open immediately upon publication, and through investments in open infrastructure to disseminate scholarship—including institutional repositories and established disciplinary preprint services like arXiv. But with more than 70% of a typical academic library collections budget consumed by paywalled electronic resources (databases and journals), it’s not surprising that the open alternative is under-resourced. (See ARL Statistical Trends: Cumulative Change in Total Annual Ongoing Resource Expenditures by Library Type (Public and Private))

    “We want to honor the tremendous labor and advocacy that scholarly communications librarians in our organizations have put into advancing open access,” says Chris Bourg, MIT Libraries Director, and a member of this collective as well as the SocArXiv Steering Committee, “while also recognizing that the next big breakthrough requires that deans and directors step up to the plate and put our combined resources and influence behind collaborative, high-impact initiatives.”

    For journal articles, one pathway toward the goal of open academic scholarship lies in developing platforms for publishing open, peer-reviewed journals that are managed by the academy in conjunction with scholarly societies and communities. The ARL project group is particularly interested in models that are both free to read and free to publish, without APCs. There may be a place, particularly in a transitionary time, for low-cost APCs in academically-grounded publishing models, but the group’s long-term focus is on retaining control of the content and keeping the costs of sharing and participating in scholarship as low as possible.

    While green and APC-funded gold OA have made some progress in making research articles more accessible to readers, these approaches do not by themselves transform the underlying economics of scholarly journals, and they allow commercial publishers to retain control of the scholarly record, a situation that does not serve the current and long-term interests of the academy. This group of library leaders offers an alternative vision in which the academy assumes greater responsibility for publishing research articles, contributing to a more sustainable, inclusive and innovative alternative to the existing system.

    Through direct library investment in developing publishing layers on preprint servers and open repositories, these ARL libraries aim to influence the scholarly marketplace. The proposed strategy is designed to provide authors with high-quality, peer-reviewed publishing options; for example, through overlay journals, to accelerate the publishing capabilities and services available through open academic infrastructure. As these new journal forms gain traction with authors, research libraries will be positioned to redirect subscription and APC funds toward support for the development and maintenance of these new journals and the infrastructure that supports them. The project group also envisions helping subscription journals flip to a sustainable open model by investing in the migration of those journals to open platforms and services sustained by universities and libraries.

    “If we want to improve the academic publishing system, we must invest in the development of trusted and sustainable alternatives,” says Kathleen Shearer, Executive Director of the Confederation of Open Access Repositories. “Our work aims to help nurture a diversity of new models that will strengthen the open ecosystem, as well as contribute to defining a significant role for research libraries in the future.”

    This group, which is already engaging with scholarly communities in sociology, psychology, mathematics, and anthropology, will be meeting over the next two months to map out its 2018 plans and investments.

    ARL project group: Ivy Anderson, California Digital Library; Austin Booth, University of Buffalo; Chris Bourg, MIT; Greg Eow, MIT; Ellen Finnie, MIT; Declan Fleming, UC San Diego; Martha Hruska, UC San Diego; Damon Jaggars, The Ohio State University; Charles Lyons, University of Buffalo; Brian Schottlaender, UC San Diego; Kathleen Shearer, Confederation of Open Access Repositories; Elliott Shore, Association of Research Libraries; Kevin Smith, University of Kansas; Jeffrey Spies, Center for Open Science; Ginny Steel, UCLA; Shan Sutton, University of Arizona; Kornelia Tancheva, University of Pittsburgh; Günter Waibel, California Digital Library

    ARL staff liaison: Judy Ruttenberg

    A new contract

    When I complained, in a blog post written several weeks ago, about the contract I had signed, and regretted, for a book to be published by the American Library Association, I really did not expect the kind of reaction I got.  Quite a few readers made comments about the unequal position of authors in publishing negotiations, and especially about the need for the library world to do a better job of modeling good behavior in this area; that was to be expected.  A few people took me to task for agreeing to a contract I disliked so much, which was no more than I deserved.  But I truly was surprised by the number of folks from the ALA, including current ALA president Jim Neal, who reached out to me and expressed a determination to fix the problem I had described.

    Readers will recall that my principal objection to the contract was an indemnification clause that, in my opinion, made me and my co-authors responsible for problems that we could not control.  In short, the original contract allocated too much risk to us and tried to protect the ALA from the potential consequences of its own actions by shifting those consequences to us as authors.  Although other parts of the contract were negotiable, and I felt good about some of the things we had accomplished in discussions about it, I was told that this indemnification clause was not negotiable.

    A couple of weeks after I wrote that post, I had a phone conversation with Mary Mackay, the new Director of Publications for ALA.  She told me three things I thought were significant.  First, that revising the publication contract used by ALA was on her “to do” list from the time she started.  Second, that all aspects of their agreements are negotiable, and I should not have been told otherwise.  Third, that I would get a new, revised contract.

    Last week I received the revised contract, which my co-authors and I plan to sign and which will replace our current agreement. The difference in the indemnification clause is like night and day, and I want to present the clause contained in this new agreement as a model for appropriate risk allocation in such contracts.

    The new clause is three short paragraphs.  In the first, we authors warrant that we have the authority to enter into the agreement, that the work is original, that it has not been previously published, that it does not infringe anyone else’s copyright and that it is not libelous, obscene or defamatory.  We also promise that it does not violate anyone’s right to privacy and “is not otherwise unlawful.” In the next paragraph, we promise to detail any materials incorporated in the work for which we are not the rights holders, and to provide documentation of any necessary permissions.  Then comes the third paragraph, where the big difference from our previous agreement is most notable:

    Author shall indemnify and hold harmless against claims, liability, loss or expense (including reasonable attorneys’ fees) arising from a breach of the above warranties. (emphasis added)

    In short, we agree to indemnify the publisher from liability for things we do, and for breaking promises we make.  Unlike the first version, we do not indemnify for any liability “arising from publication of the work.”  This is, to me, a huge and significant difference, and I believe that this second contract does what contracts are supposed to do; it apportions risk fairly between the parties.

    I take three lessons away from this experience.

    First, it is important to negotiate publishing agreements.  Most publishers will negotiate, and when you are told that something is non-negotiable, it is still sometimes important to push back on that claim.

    Second, the library community should, and sometimes will, take the lead in producing fair agreements for authors.  This is a moving target, and we should continue to push our own profession to live up to the values we articulate.

    Third, the American Library Association did something here that is all too rare; they admitted a mistake and took steps to correct it.  For that, especially, I am grateful to them.

    Join the Movement: The 2.5% Commitment

    NB: This is a guest post from David Lewis, Dean of the IUPUI University Library.  David and the regular IO authors hope that this post will generate discussion, and we invite you to comment.

    The 2.5% Commitment: Every academic library should commit to contribute 2.5% of its total budget to support the common infrastructure needed to create the open scholarly commons.

    A number of things came at me in late summer.

  • In July, Kalev Leetaru concluded an article on Sci-Hub in Forbes (who would have thought?) by saying in reference to academic research, “In the Internet era information will be free, the only question remaining is who pays for that freedom.” I started using the quote as the tag line on my e-mail signature.
  • Elsevier bought Bepress for $115 million.
  • I am a member of the DSpace Steering Group and we were trying to raise the last 15% of our $250K membership goal. We were doing OK, but it was a tough slog.
  • A colleague gave me a College & Research Libraries article by John Wenzler that argues that academic libraries face a collective action dilemma. Wenzler claims this dilemma is why journal prices are so high, and by extension it will make creating the open scholarly commons difficult at best, and maybe impossible.

    Foibles and Follies, part three

    The final foible I wanted to write about in this series of posts involves a distressingly common situation – a copyright holder who does not understand what the rights they hold actually are.

    This is not the first blog post to point out that Human Synergistics International is pretty clueless about copyright.  Almost five years ago, the TechDirt blog made an effort to school Human Synergistics about fair use.  Apparently it did not work; they seem to continue to misunderstand the copyright law.

    Human Synergistics is a leadership and organizational development company that offers, among other things, simulations to help people experience real management situations.  At some point, at least, it was possible to purchase materials from Human Synergistics, and there are libraries out there that own materials the company marketed.  One such library got in touch with me recently because they were having a contentious exchange with Human Synergistics and wanted to inquire what I thought.

    According to the librarian I spoke to, a representative of Human Synergistics was contending that the library could not lend, nor even display on public shelving, a DVD that they held rights to.  The representative of the company repeatedly cited section 113 of the copyright law.  The librarian later sent me a copy of an “Agreement to Comply with Copyright Regulations” that she had been given by Human Synergistics and that confirmed to me their fundamental misunderstanding of copyright law.  As she had told me, it would have bound the library not to do anything “in violation of section 113 of the U.S. Copyright Act, Title 17 of the United States Code.”

    I told the librarian that this agreement was fundamentally meaningless.  But the library decided that they would not sign it and would, instead, withdraw the DVD from their collection, not because they have to, but in order to avoid a conflict.

    The problems with this exchange are numerous, so let me try to enumerate them.

    First, section 113 is about the “scope of exclusive rights in pictorial, graphic or sculptural works.”  A quick look at the definitions in section 101 of Title 17 will tell one that a film falls into the category of motion picture, so section 113 actually has nothing to do with the DVD that the library owns.  That is why I told the librarian that the agreement was meaningless.  And if one reads section 113, it is very obvious that it is unrelated to motion pictures, since it deals with copyrighted works incorporated into useful articles and buildings.  It is almost unbelievable that an organization claiming authority over copyright would so badly misread the law.

    Second, section 113 does not actually confer any rights on a copyright holder, although it does make mention of some of the rights conferred in section 106.  In fact, section 113 is a limitation; it prescribes limits on the scope of exclusive rights in a particular category of copyrighted subject matter.  So the demand in that copyright compliance agreement not to “violate” section 113 is gobbledygook.

    Finally, and most obviously, nothing in section 113 could possibly prevent a lawful owner of a DVD from displaying it or lending it.  Those rights are conferred on owners of particular copies of copyrighted material by section 109, the doctrine of first sale.  The company representative who demanded that the DVD be removed from the shelves, and who believed that the presented agreement would enforce that prohibition, was seriously misinformed.

    I profoundly hope that Human Synergistics is not getting these interpretations from a lawyer; as unfortunate as it would be, I have to assume that some company official with no training, but who thinks he or she understands the law, is behind their policies.  It is true that many lawyers graduate from law school without knowing much about copyright.  This issue, however, is more fundamental.  Someone is reading the law both selectively and flat-out wrongly.  Words do have specific meanings, and while there is lots of room for varying interpretations of the law, there is also outright ignorance.

    This foible could be comic, since its major impact is to defeat the company’s own desire to have their materials used.  But it is indicative of a very unfortunate situation, since Human Synergistics is not the only rights holder who does not understand what they own and makes absurd demands in the name of copyright.


    Foibles and Follies, Part 2

    The second folly I want to talk about is somewhat embarrassing, since it is my own.  Publication contracts are always an adventure for academic authors, of course; we are routinely taken advantage of by publishers who know that publication is a job requirement and believe they have us in a stranglehold.  I once read a comment by a lawyer who works with authors that signing an agreement with one of the major publishers was akin to getting into a car with a clearly intoxicated driver – no sensible person should do it.  So in this story I have no one but myself to blame.  Nevertheless, I want to tell folks about it because it was not one of the big publishers that treated me badly; it was my own professional organization, the American Library Association.

    The publishing arm of the ALA asked me last spring if I was interested in writing or editing a book on a particular topic that they identified.  I was interested, and after talking with some colleagues I devised a plan that would combine some long essays with shorter case studies, capturing, I hoped, the best of both monographs and edited volumes.  I agreed.

    Once our proposal was accepted by ALA, we got to the point of negotiating a publication agreement.  Our editor was quite accommodating about most of the issues we raised, but on one point he was inflexible — the indemnification clause.  As most authors know, these clauses are used by publishers to shift most of the risk of running their business to the authors, the people who have the least control over the process and are least able to actually defend any lawsuit.  Sometimes, of course, these clauses are moderate and really try to balance the risks.  But not the ALA’s.  Here is the clause I agreed to:

    “The Author shall indemnify and hold the Publisher harmless from any claim, demand, suit, action, proceeding, or prosecution (and any liability, loss, expense, or demand in consequence thereof) asserted or instituted by reason of publication or sale of the Work or the Publisher’s exercise or enjoyment of any of its rights under this agreement, or by reason of any warranty or indemnity made, assumed or incurred by the Publisher in connection with any of its rights under this agreement.”

    Early Career Researchers as key partners for dislodging legacy publishing models

    It’s been a busy summer for OA in Europe. On one hand, nationally coordinated efforts in places like Finland and Germany have sought (unsuccessfully so far) to pressure Elsevier into better subscription pricing and OA options. On the other hand, a group of early career researchers (ECRs) at the University of Cambridge are looking to mobilize fellow ECRs to embrace open models that are not controlled by commercial entities. In my view, these divergent approaches illustrate why we should focus our collective energies away from strategies in which commercial interests retain control under new economic conditions (see also, proposals to flip subscription payments to APCs), and towards working with ECRs and others who envision a return of scholarly dissemination responsibility to the academy.

    One aspect of the Finnish No Deal, No Review boycott that seems especially telling is that signees refuse to serve as reviewers or editors for Elsevier journals, but make no such commitment in terms of ceasing to submit articles to those same journals for publication. That is probably a bridge too far for many who feel compelled to meet traditional promotion and tenure expectations of publishing in prestigious journals that are often controlled by publishers such as Elsevier. While the Finnish position is admirable in a general sense, even if the demands for better economic terms are met, Elsevier would remain a profit-driven conduit through which dissemination occurs, though with slightly less robust profit margins.

    Conversely, the ECRs involved with the Bullied into Bad Science statement urge their colleagues to “Positively value a commitment to open research and publishing practices that keep profits inside academia when considering candidates for positions and promotions (in alignment with DORA).” (Bold added.) In a related article, Corina J. Logan describes this as the “ethical route to publication,” which she contrasts with the “exploitive route to publication” that typically involves subscription, OA, or hybrid journals from commercial publishers who extract from the academy the funds, labor, and content required to maintain a journal. Of course, a root problem is that so many scholars willingly give up their intellectual property and labor in support of commercial publishers who are logically driven to maximize profits via funding from the academy through either subscriptions or APCs.

    It’s encouraging to see some ECRs challenge these publishing models, even while they must navigate promotion and tenure criteria that may temper their OA practices. My interactions with junior faculty repeatedly reveal that many naturally gravitate to open models, an observation that is confirmed by ECR initiatives such as Bullied into Bad Science and OpenCon. I suspect the “ethical” vs. “exploitive” dichotomy presented by Logan will find increasing traction among ECRs. I hope it will also galvanize their support for OA models that are owned and managed by the academy to dislodge commercial control of scholarly dissemination. Proactive outreach to ECRs through local means like new faculty orientations and graduate student/postdoc organizations, as well as through ECR-focused OA initiatives, should be an important element of shaping an OA future in which they will thrive. This includes soliciting ECR input and buy-in while designing OA publishing platforms to fully enable their interest in “ethical” options.