“Subscribe to Open” as a model for voting with our dollars

Elsewhere in this blog I’ve made the case that academic libraries should “vote with their dollars” to encourage and enable the kind of changes in the scholarly communication system that we’d like to see — those that move us towards a more open and equitable ecosystem for sharing science and scholarship.  Most recently, Jeff Kosokoff and I explored the importance of offering financial support for transitional models that help publishers down a path to full open access, and we called out Annual Reviews’ emerging “Subscribe to Open” model as a good example of one that provides a pathway to OA for content that is not suited to an article processing charge (APC) approach.  Here, Richard Gallagher, President and Editor-in-Chief of Annual Reviews (AR), and Kamran Naim, Director of Partnerships and Initiatives, explore with us the rationale for AR’s pursuit of an open access business model that is not based on APCs, explain how “Subscribe to Open” is intended to work, and describe its transformative potential.
Read more


Supporting the Transition: Revisiting organic food and scholarly communication


This is a joint post by guest Jeff Kosokoff, Assistant University Librarian for Collection Strategy, Duke University, and IO author Ellen Finnie, MIT Libraries.

From the perspective of academic libraries and many researchers, we have an unhealthy scholarly information ecosystem. Prices consistently rise at a rate higher than inflation and much faster than library budgets; increasingly, only researchers in the most well-financed organizations have legitimate access to key publications; too few businesses control too much of the publishing workflow and content data; energy toward realizing the promise of open access has been (as Guédon describes it) “apprehended” to protect publisher business models; and artificial scarcity rules the day. Massive workarounds like Sci-Hub and #icanhazpdf have emerged to help with access, but such rogue solutions seem unsustainable, may violate the law, and most certainly represent potential breaches of license agreements. While many of the largest commercial publishers are clinging to their highly profitable and unhealthy business models, other more enlightened and mission-driven publishers — especially scholarly societies — recognize that the system is broken and are considering new approaches. Often, publishers are held back by the difficulty and risk of moving from the current model to a new one. They face legitimate questions about whether a new funding structure would work, and how a publisher can control the risks of transition. Read more


Transitioning Society Publications to Open Access

This is a guest post written by Kamran Naim, Director of Partnerships & Initiatives, Annual Reviews; Rachael Samberg, Scholarly Communications Officer, UC Berkeley Library; and Curtis Brundy, AUL for Scholarly Communications and Collections, Iowa State University.

The Story of Society Publications

Scientific societies have provided the foundations upon which the global system of scholarly communication was built, dating back to the 17th century and the birth of the scholarly journal. More recent developments in scholarly communication (corporate enclosure, financial uncertainty, and open-access policies from funders and universities) have shaken these foundations. Recognizing the crucial role that societies have played, and must continue to play, in advancing scientific research and scholarship, a group of OA advocates, library stakeholders, and information strategists has organized to provide concrete assistance to society journals. The aim is to allow scholarly societies to step confidently towards OA, enabling them to renew, reclaim, and reestablish their role as a vital and thriving part of the future open science ecosystem. Read more


Lessons from the ReDigi decision

The decision announced last month in the ReDigi case, more properly known as Capitol Records v. ReDigi, Inc., was, in one sense at least, not a big surprise.  It was never very likely, given the trajectory of recent copyright jurisprudence, that the Second Circuit would uphold a digital first sale right, which is fundamentally what the case is about.  The Court of Appeals upheld a lower court ruling that the doctrine of first sale is only an exception to the distribution right and, therefore, does not protect digital resale because, in that process, new copies of a work are always made.  The court’s reasoning pretty much closes the door on any form of digital first sale right, even of the “send and delete” variety that tries to protect against multiple copies of the work being transferred. Read more


What really happens on Public Domain Day, 2019

The first of January, as many of you know, is the day on which works whose copyright term expired the previous year officially rise into the public domain. For many years now, however, no published works have entered the PD because of the way the 1976 Copyright Act restructured the term of copyright protection. 2018 was the first year in decades that the term of protection for some works – those published in 1923 – began to expire, so on January 1, 2019, many such published works will, finally, become public property. Lots of research libraries will celebrate by making digital versions of these newly PD works available to all, and the Association of Research Libraries plans a portal page, so folks can find all these newly freed works.

I want to take a moment to try to explain the complicated math that brought us to this situation, and to spell out what is, and is not, so important about this New Year’s Day.

Published works that were still protected by copyright on Jan. 1, 1978, when the “new” copyright act of 1976 went into effect, received a 95-year term of protection from the date of first publication. For works that were in their first term of copyright when the law took effect, this was styled (in section 304(a)) as a renewal for a term of 67 years, so that 28 years from when the copyright was originally secured plus the 67-year renewal term equaled 95 years. If a work was in a second, renewed term on Jan. 1, 1978, section 304(b) simply altered the full term of protection to 95 years from the date of first publication (or, as the law phrases it, when “copyright was first secured”).

So what is special about 1923? Prior to the 1976 act, copyright lasted for 28 years with a potential renewal term of another 28 years, for a maximum of 56 years. A copyright secured in 1923 would thus still have been in force when the new act took effect (its 56-year maximum would not have run out until 1979), so it received the benefit of the new, extended term. Everything published from 1923 until 1977 enjoyed an extension, first to 75 years and then, thanks to the Sonny Bono Copyright Term Extension Act of 1998, to 95 years, which ultimately resulted in 39 more years of protection than an author publishing her work in 1923 could have expected. Works published in 1922 fared differently: their extended 75-year terms ran out at the end of 1997, months before the Sonny Bono Act passed, so they had already entered the public domain and stayed there. Since 2018 is 95 years after 1923, it is those works published in 1923 whose terms expired during 2018, so they officially rise into the public domain on Jan. 1, 2019.

All this math does not mean, however, that everything published in 1923 has been protected up until now. Notice that, in the description above, we distinguish between those works in their first (28-year) term of protection and those in their second term. That is because, under the older law, a book, photograph, song, or whatever had to be renewed in order to continue to have protection past that first 28 years. Many works were not renewed, and the 1976 act only applies the extended 95-year term to those older works that were in their second term of protection when it took effect. So if a work was not renewed, and its copyright had already lapsed, the extended term did not apply. A basic principle is that the 1976 copyright law did not take anything out of the public domain that had already risen into it (although a later amendment did exactly that for certain works that were originally published in other countries).
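For those who like to see the arithmetic laid out, here is a minimal sketch of the term calculation described above, written in Python. It is deliberately simplified (it ignores the exact date within the year, and it is certainly not legal advice), and it treats the renewal status as something you already know:

```python
def year_entering_public_domain(pub_year, renewed):
    """Simplified sketch of the term arithmetic for U.S. works published 1923-1963."""
    if not renewed:
        # Copyright lapsed at the end of the initial 28-year term, so the
        # work entered the public domain during (or just after) this year.
        return pub_year + 28
    # A renewed work still in its second term on Jan. 1, 1978 received
    # 75 years of protection from first publication, later stretched to
    # 95 years by the Sonny Bono Act; terms run through Dec. 31, so the
    # work enters the public domain on Jan. 1 of the following year.
    return pub_year + 95 + 1

print(year_entering_public_domain(1923, renewed=True))   # 2019
print(year_entering_public_domain(1923, renewed=False))  # 1951, i.e., the early 1950s
```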

What is really happening, then, is that some 1923 works – those whose copyright term was renewed after 28 years (in 1951) – really do become public domain for the first time. But for a great many works, which were already PD due to a failure to renew, what really happens this week is that we gain certainty about their status. Research suggests that a sizable percentage of works for which renewal was necessary were not, in fact, renewed; estimates range from 45% to 80%. So many of the works we will be celebrating were almost certainly already in the public domain; after January 1 we simply know that for sure. Finding out if a work was renewed is not easy, given the state of the records. The HathiTrust’s copyright review program has been working hard at this task for a decade, and they have been able to open over 300,000 works. But it is painstaking, labor-intensive work that mostly establishes a high probability that a work is PD. On Public Domain Day, however, we get certainty, which is the real cause for celebration.

Let me illustrate this situation by considering one of the works that the KU Libraries plan to digitize and make openly accessible in celebration of Public Domain Day, 2019. Seventeen Nights with the Irish Story Tellers, by Edmund Murphy, is an interesting collection of poems that is part of our Irish literature collection, one of the real strengths of the Spencer Research Library at KU. It was published in Baltimore in 1923, and OCLC lists holdings for only 15 libraries. It is apparently not held by the HathiTrust, likely because no libraries whose holdings Google scanned owned a copy. But my guess is that it is already in the public domain and has been since the early 1950s: the database of copyright renewal records for books maintained by the library at Stanford University contains no indication that Seventeen Nights was ever renewed for a second term of protection. So the chances are good that this work, like so many others, has been in the public domain for many years.

There are a great many works published between 1923 and 1963 that simply exist in a state of limbo, probably in the public domain but not (yet) subject to the effort needed to determine whether there has ever been a renewal of the copyright. On Public Domain Day, 2019, we should certainly be delighted by the “new” 1923 works, such as Robert Frost’s Stopping by Woods on a Snowy Evening, that will become PD for the first time. But we also need to recall that many published works from the mid-20th century are already in the public domain. If we want to make the effort to do the needed research, there is lots of opportunity to free up some of these works without waiting for another January 1.

Happy Public Domain Day to all!


The First Step Towards a System of Open Digital Scholarly Communication Infrastructure

 A guest post by David W. Lewis, Mike Roy, and Katherine Skinner

We are working on a project to map the infrastructure required to support digital scholarly communications.  This project is an outgrowth of David W. Lewis’ “2.5% Commitment” proposal.

Even in the early stages of this effort we have had to confront several uncomfortable truths. 

First Uncomfortable Truth: In the main, there are two sets of actors developing systems and services to support digital scholarly communication.  The first is the large commercial publishers, most notably Elsevier, Wiley, and Springer Nature.  Alejandro Posada and George Chen have documented their efforts.  A forthcoming SPARC report previewed in a DuraSpace webinar by Heather Joseph confirms these findings.  The second set may currently be more accurately described as a ragtag band: open source projects of various sizes and capacities.  Some, like the Public Knowledge Project (PKP), are housed in universities; some are freestanding 501(c)(3)s; and others are part of an umbrella organization like DuraSpace or the Collaborative Knowledge Foundation (COKO). Some, like DSpace, have large installed bases and worldwide developer communities.  Others have yet to establish themselves and do not yet have a fully functional, robust product.  Some are funded only by start-up grants, with no model for sustainability, while others have solid funding based on memberships or the sale of services.  This feels to us a bit like the Rebel Alliance versus the Empire and the Death Star. Read more


Weighing the Costs of Offsetting Agreements

A guest post by Ana Enriquez, Scholarly Communications Outreach Librarian in the Penn State University Libraries.

Along with others from the Big Ten Academic Alliance, I had the pleasure of participating in the Choosing Pathways to Open Access forum hosted by the University of California Libraries in Berkeley last month. The forum was very well orchestrated, and it was valuable to see pluralism in libraries’ approaches to open access. (The UC Libraries’ Pathways to Open Access toolkit also illustrates this.) The forum rightly focused on identifying actions that the participants could take at their own institutions to further the cause of open access, particularly with their collections budgets, and it recognized that these actions will necessarily be tailored to particular university contexts.

Collections spending is a huge part of research library budgets and thus — as the organizers of the forum recognized — of their power. (At ARL institutions, the average share of the overall budget devoted to materials was 47% in 2015-2016.) Offsetting agreements were a major theme. These agreements bundle a subscription to toll access content with payments that make scholarship by the institution’s researchers available on an open access basis. The idea behind offsetting agreements is that if multiple large institutions pay to make their researchers’ materials open access, then not only will a large majority of research be available openly but subscription prices for all libraries should come down as the percentage of toll access content in traditional journals decreases. The downside is that offsetting agreements tie up library spending power with traditional vendors; they redirect funds to open access, but the funds go to commercial publishers and their shareholders instead of supporting the creation of a new scholarly ecosystem.

Experiments with offsetting are underway in Europe, and MIT and the Royal Society of Chemistry have recently provided us a U.S. example. I look forward to seeing the results of these agreements and seeing whether they make a positive difference for open access. However, I am concerned that some see offsetting as a complete solution to the problems of toll access scholarship, when it can be at best a transitional step. I am concerned that it will be perceived, especially outside libraries, as a cost-containing solution, when it is unlikely to contain costs, at least in the near term. And I am also concerned that libraries and universities will commit too many resources to offsetting, jeopardizing their ability to pursue other open access strategies.

Offsetting agreements must be transitional, if they are used at all. They are inappropriate as a long-term solution because they perpetuate hybrid journals. Within a particular hybrid journal, or even a particular issue, articles from researchers at institutions with a relevant offsetting agreement are open access, as are some other articles where authors have paid an article processing charge (APC). However, other articles within that same journal or issue are not open access. An institution that wants access to all the journal’s content must still pay for a subscription. In contrast, if the library that made the offsetting agreement had instead directed those funds into a fully open investment (e.g., open infrastructure or library open access publishing), the fruits of that investment would be available to all.

Controlling the costs of the scholarly publishing system has long been a goal of the open access movement. It is not the only goal — for many institutions, promoting equity of access to scholarship, especially scholarship by their own researchers, is at least as important. However, with library and university budgets under perpetual scrutiny, and with the imperative to keep costs low for students, it is important to be transparent about the costs of offsetting. In the near term, offsetting agreements will cost the academy more, not less, than the status quo. Publishers will demand a premium before acceding to this experimental approach, as they did in the deal between MIT and the Royal Society of Chemistry. The UC Davis Pay it Forward study likewise estimated that the “break-even” point for APCs at institutions with high research output was significantly below what the big five publishers charge in APCs. In other words, shifting to a wholly APC-funded system would increase costs at such institutions.
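To make the break-even idea concrete, here is a small, purely hypothetical calculation in Python; the figures are invented for illustration and are not taken from the Pay it Forward study:

```python
# Hypothetical illustration of the "break-even APC" concept discussed above.
# Both figures are invented; they are not from the Pay it Forward study.
current_subscription_spend = 9_000_000   # hypothetical annual subscription spend
articles_published_per_year = 5_000      # hypothetical output by the institution's authors

break_even_apc = current_subscription_spend / articles_published_per_year
print(f"Break-even APC: ${break_even_apc:,.0f}")  # $1,800 in this example

# If the average APC actually charged exceeds this break-even figure, a
# wholly APC-funded system costs this (high-output) institution more than
# its current subscriptions do.
```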

The authors of the Pay it Forward study and others have written about structuring an APC payment model to foster APC price competition between journals. Institutions pursuing offsetting agreements should build this into their systems and take care not to insulate authors further from these costs. They will then have some hope of decreasing, or at least stabilizing, costs in the long term. Barring this, libraries’ payments to traditional publishers would continue to escalate under an offsetting regime. That would be disastrous.

Whether or not offsetting agreements stabilize costs, libraries will have to be cautious not to take on costs currently borne by other university units (i.e., APCs) without being compensated in the university’s budgetary scheme. What’s more, because offsetting agreements reinforce pressure to maintain deals with the largest publishers, they undermine libraries’ abilities to acquire materials from smaller publishers, to develop community-owned open infrastructure, to invest more heavily in library publishing, to support our university presses in their open access efforts, and to invest in crowdfunding schemes that support fully open access journals and monographs.

To maintain this pluralistic approach to open access, either within a single research library or across the community, libraries signing offsetting agreements should be cautious on several points. To inform their negotiations, they should gather data about current APC outlays across their institutions. They should structure the APC payment system to make costs transparent to authors, enabling the possibility of publishers undercutting each other’s APCs. They should safeguard flexibility in their collections budgets and invest in other “pathways” alongside offsetting. And they should, if at all possible, make the terms of their offsetting agreement public, in the spirit of experimentation and of openness, to enable others to learn from their experience with full information and to enable themselves to speak, write, and study publicly on the impact of the agreement.


The GSU Copyright Case: Lather, Rinse, Repeat

On Friday, a panel of the 11th Circuit Court of Appeals issued its decision in the publishers’ appeal from the second trial court ruling in their lawsuit against Georgia State University, challenging GSU’s practices regarding library electronic reserves.  The decision came 449 days after the appeal was heard, which is an astonishingly long time for such a ruling.  I wish I could say that the wait was worth it, and that the ruling adds to our stock of knowledge about fair use.  Unfortunately, that is not what happened, and the case continues to devolve into insignificance.

The judges on the appellate panel seem to realize how trivial the case has become.  After working on it for one year, two months, and three weeks, the court produced a decision of only 25 pages, which sends the case back, yet again, for new proceedings in the district court.  The short opinion simply reviews the panel’s earlier instructions and cites ways in which the panel believes that Judge Orinda Evans misapplied those instructions when she held the second trial.  What it does not do is probably more significant than what it does.  The ruling does not fundamentally alter the way the fair use analysis has been done throughout this case.  The publishers have wanted something more sweeping and categorical, but they lost that battle a long time ago. The 11th Circuit also affirms Judge Evans’ decision not to reopen the record, thus preventing the publishers, and the Copyright Clearance Center that is pulling their strings, from introducing new evidence of licensing options that did not exist when they brought the case in 2008.  Although it seems like a mere technicality, this ruling, another loss for the publishers, really points out how silly and out-of-date the lawsuit now is.

This time around, the circuit court seems to say more explicitly that it expects more of the excerpts that are at the center of this dispute to be found infringing.  The judges clearly do not like the fact that, after the first appeal, and even with their instructions to be less mathematical in her analysis and to weigh the fourth factor more heavily, Judge Evans found one fewer infringement than she had in the first trial.  So if there is a third trial, maybe the outcome will be six infringements, or even ten.  But the big principles that the publishers were trying to gain are all lost.  There will be no sweeping injunction, nor any broad assertion that e-reserves always require a license. The library community will still have learned that non-profit educational use is favored under the first fair use factor even when that use is not transformative.  The best the publisher plaintiffs can hope for is a split decision, and maybe the chance to avoid paying GSU’s costs, but the real victories, for fair use and for libraries, have already been won.

The saddest thing about this case is that, after ten years, it continues to chew over issues that seem less and less relevant.  Library practices have evolved during that time, and publishing models have changed.  Open access and the movement toward OERs have had a profound impact on the way course materials are provided to students.  So the impact of this case, and of any final decision, if one ever comes, will be negligible.  The plaintiff publishers actually lost a long time ago; they simply lack the wisdom to recognize that fact.

Cambridge University Press, Oxford University Press and Sage Publishing v. J.L. Albert should have settled years ago.  Instead it has devolved into a kind of punchline, much like Jarndyce v. Jarndyce from Dickens’s Bleak House; the mere mention of it causes people to roll their eyes and giggle.  The final resolution of this dispute may yet be a long way off, but at this point the takeaway from the case is clear: carry on with your daily work, teachers and librarians, there is nothing to see here.


What does Icelandic fishing have to do with commercial publishing?

Siglufjordur is a small fishing village in the north of Iceland that my wife and I had the pleasure of visiting this past summer.  It nestles between the mountains of the Icelandic highlands and the sea in a way characteristic of towns on the northern coast.

What is unusual about Siglufjordur is its economic history.  It was a boom town in the 1940s and 50s, the center of the North Atlantic herring trade.  In addition to fishing, a great deal of processing and packing was done in Siglufjordur, and the town was triple its current size.  In the early 1960s, however, the herring industry in Siglufjordur collapsed quite suddenly, because the fishing grounds had been overfished.  Now the town is a shadow of its former self, surviving on sport fishing and tourism (the Herring Museum, perhaps surprisingly, is very much worth a visit).

We often refer to scholarly communications as a kind of ecosystem, and I think the problem of overfishing has an important place in that analogy.  The proliferation of new journal titles, whose sole function seems to be padding out the “big deals” that publishers sell, using the growing number of titles to justify the ever-increasing cost, strikes me as a kind of overfishing.  It is an example of pushing the market too far.  In Siglufjordur, however, it was the product that dried up; in commercial publishing it is the customer base, which is being systematically priced out of the market.

A sign that they are creating a market where monopoly pricing is slowly pushing customers out is the growing gap between bundle pricing, which publishers now like to call a “database model” in order to distance themselves from the unpopular phrase “big deal,” and the list prices of journals.  I was recently part of a conversation where a rep for one of the large commercial academic publishers told us, inadvertently, I think, that while the bundle she was selling cost $800,000, the list price for all those journals would be about $9 million.  If she intended to tell us what a great deal the bundle was, her comment had the opposite effect; it emphasized how absurd the list prices are.  They are punitive, and obviously unrelated to the cost of production; when list prices are more than 11 times the price most customers actually pay, I think they qualify as pure fiction.  This pricing practice is equivalent to throwing an explosive into the water to drive the fish into the nets.  It represents a blatant effort by these publishers to force customers to buy the bundled packages, so they can profit off junk titles they could not sell on their own merits.

There was a time when similar practices were called illegal tying under U.S. antitrust law.  Movie companies, for example, were found to be illegally using their intellectual property monopoly to force theaters to rent unwanted titles in order to get the movies they really wanted to show; the Supreme Court forbade such “block booking” in 1948.  But antitrust enforcement has changed dramatically over the years, and this kind of tying is now tolerated in the cable TV industry, as well as in scholarly publishing.  (For the record, a U.S. court has held that bundling channels on cable is not illegal tying, but there are ongoing antitrust lawsuits over related practices.)  Publishing, in my opinion, has pushed the practice even farther than cable TV has, as the bundle prices spiral upward, the list prices become more and more penal, and customers are forced to consider a draconian loss of access, the academic equivalent of “cutting the cable.”

The problem with this kind of “overfishing” is that it is unsustainable; the commercial academic publishers are pushing the market so far that their customers simply can no longer afford the resources they need and, incidentally, create in the first place.  The profit margins of these companies are still extremely high, in the range of 35% and rising, but, as happened in Siglufjordur, the bottom can drop out quite suddenly.  In recent months we have seen whole nations, not to mention individual universities, start to reconsider not only whether the value offered by these publishers is worth the price, but even whether the price itself is simply out of reach.  And back in June, the Financial Times reported that RELX, the parent of Elsevier, had suffered its biggest decline in value in 18 months, and the financial services firm UBS was advising investors to sell, to take their profits and get out, due to “structural risks.”  Structural risk is a very accurate description of the problem you create when you push your market well beyond its capacity.


Why just 2.5%?

Sustainability planning is certainly a tricky business. Over the last several months I have been working with teams grappling with sustainability and other long-term plans for four projects: the Big Ten Academic Alliance’s Geoportal, Mapping Prejudice, the Data Curation Network, and AgEcon Search.  These are all cross-unit collaborative projects, and multi-institutional in most cases, but their common element is that my library serves as administrative and/or infrastructural home and/or lead institution. This planning has led to an interesting thought experiment, spurred by the AgEcon Search planning.

First, some brief background: AgEcon Search is a subject repository serving the fields of applied and agricultural economics. The University of Minnesota (UMN) has been operating it since 1995, back in the pre-web days; the first iteration was Gopher-based. It is still jointly sponsored by the UMN Libraries and the UMN Department of Applied Economics, but is now also a partnership that includes financial and other support from the USDA, private foundations, both the national and the international agricultural economics scholarly associations, and others (full list and other info here). There is tremendous support within its scholarly community for AgEcon Search and, increasingly, very strong international use and participation, especially in Africa.

The two UMN host units have started long-term sustainability planning. Right now, a leading strategy is a joint fundraising program with a goal of building an endowment.

Here’s the thought experiment. Roughly speaking, a $2 million endowment would generate sufficient revenue to pay most of AgEcon Search’s annual staffing, infrastructure, and other costs. $2 million is about 11% of what the University of Minnesota Libraries spends annually on collections. So if we were able to take just 10% of what we spend in just one year on collections, we would be most of the way towards ensuring a long-term financial future for one project. And if Minnesota could double that, or even go to 25%, then in one year we would be able to do this for two similarly sized, community-controlled open projects. And if we did it for two years, we probably would have funded all four of these (Minnesota-homed) projects. And if we kept going in the following years, we would be able to use that money to do the same for other projects, at other institutions. And if all academic libraries did the same, how many years would it take to put a really huge dent in our collective open funding problem?
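As a rough sketch of the numbers behind this thought experiment: the payout rate below is my assumption (endowment draws commonly run in the 4-5% range), while the $2 million figure and the “about 11% of annual collections spending” relationship come from the paragraph above.

```python
# Back-of-the-envelope version of the endowment thought experiment.
# payout_rate is an assumption; the endowment size and the 11% share
# of collections spending are taken from the post itself.

endowment = 2_000_000                  # target endowment for one project
payout_rate = 0.045                    # assumed annual draw (4-5% is typical)
collections_budget = endowment / 0.11  # ~$18.2M, implied by the 11% figure

annual_project_revenue = endowment * payout_rate
print(f"Annual revenue from the endowment: ${annual_project_revenue:,.0f}")
print(f"One-time cost as a share of one year's collections budget: "
      f"{endowment / collections_budget:.0%}")
```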

Obviously, there are many, many practical, political, logistical, and other challenges to actually doing this with our collections funding, but I’m leaving those aside for the moment, though they are far from trivial. This thought experiment has helped bring focus to my thinking about David Lewis’s 2.5% solution (see also his post on this blog and his later writings with other colleagues) and Cameron Neylon’s response in ‘Against the 2.5% Solution,’ which, spoiler alert, is not strictly speaking against the solution, but in favor of a number of things: an investment quality index to guide these investments, a variety of different strategies, and much bigger investments than the 2.5%.

Which is where I think we absolutely need to be — more aggressively and more deeply investing in open. 2.5% per year is not enough. 25% might be getting warmer. Would I love for that money to come from our universities instead of from our collections budgets? Sure. But will it happen, and how long will it take? Speed and agility will be increasingly important. To underscore that point: the Data Curation Network got its Sloan pilot grant funding and was well under way, planning and (openly) sharing rich information about what data curation takes and how to do it, when Springer announced it would offer for-fee data management and curation services. Wellcome Trust is now in a pilot to fund its investigators to use Springer’s service (I’m not linking; use your favorite search tool). The Data Curation Network, like many collective projects, has been starting to make the case for community support, with the usual mixed responses. How many more projects will teeter on the brink of survival while publishers with a long history of enclosure and extortionate pricing gobble them up, or out-market us, or out-innovate us?  What’s your favorite open project or workflow tool? Has it been asking for support?

I am, personally, decidedly lukewarm on the all-APC flip that started the OA2020 conversation, but don’t think we have the luxury of ruling out many strategies at this point. More, smarter, and faster are the words that guide my thinking and my hopes for a more open, community-owned scholarly communications ecosystem. I’m very much looking forward to the ‘Choosing Pathways to OA’ workshop at Berkeley in October, and grateful to colleagues at the University of California, including the faculty, who have injected recent energy and inspiration, and who have invested in space to bring us together to talk about advancing practical strategies. See other posts on this blog about offsetting at MIT and the publishing layers (formerly known as RedOA) project. Read more