Introducing the Feminist Framework for Radical Knowledge Collaboration

This is an abridged version of a longer blog post. You can read a more complete description of our work in my post on At the Intersection.

  1. How has the patriarchy affected you?
  2. How has the patriarchy impacted your work?
  3. How have you been complicit in perpetuating the patriarchy?

These were the three questions we started with when beginning our reflection on what has become the Femifesto: Feminist Framework for Radical Knowledge Collaboration.

My colleagues Sandra Enimil, Charlotte Roh, Ivonne Lujano, Sharon Farb, Gimena del Rio Riande, and Lingyu Wang began working on this idea several months ago as a proposal for the Triangle Scholarly Communication Institute in Chapel Hill, NC in the U.S., situated on the unceded lands of the Eno, Shakori, and Catawba nations and on land worked by countless enslaved people of the African diaspora. What initially began as a possible toolkit, quickly, through our individual and collective reflection work, evolved into a framework for thinking through equitable collaboration in knowledge work.

Publishing, technology, and lawsuits

When Arizona State’s University Librarian, Jim O’Donnell, posted a link to an article about the status and strategy of four lawsuits brought in the past few years by commercial publishers on the LibLicense list, it started me on a series of rather disparate reflections about the state of scholarly communications.

Jim’s post was quite innocuous, of the “folks might be interested in this” variety, but he did note that some people might encounter a paywall. The article, “On the limitations of recent lawsuits against Sci-Hub, OMICS, ResearchGate, and Georgia State University,” by Stewart Manley, was published this summer in Learned Publishing, which is available, only with a subscription, through Wiley. My institution ended its Wiley “big deal” a year ago because we could no longer afford it, so I did encounter a paywall — $42 for this single, seven-page article (I ultimately obtained the article using inter-library loan, and am not providing a link to the pay-walled version). I commented, in response to Jim’s post, on this high cost of access, which leads to my first observation about the state of academic publishing.

Two posts in one

Farewell to Justice Stevens

As it did for so many people, the passing last week of Justice John Paul Stevens saddened me, and caused me to reflect on his remarkable tenure.  It is curious to realize that, at his confirmation hearing, his health (he had recently had bypass surgery) and his ability to serve a “full” term on the Supreme Court were at issue.  He went on to serve for nearly 35 years and was just short of 91 years old when he retired.

For me, Justice Stevens provided my first acquaintance with Supreme Court jurisprudence, since his ruling in Sony v. Universal Studios, 464 U.S. 417 (1984), was the second copyright decision I ever read.  The first was the 1996 opinion of the 6th Circuit in Princeton University Press v. Michigan Document Service, and it was my gut feeling that that case was wrongly decided that sent me back to Sony and Justice Stevens, then on through a series of explorations of copyright issues, and finally to law school.  So while, like most Americans, I have Justice Stevens to thank for my TV watching habits, I also think of him as at the beginning of what has been a marvelous journey for me.

Many of the memorial articles to Justice Stevens do not mention the Sony decision, so I want to recommend this Washington Post piece which, while it is a little bit flippant, does pay attention to what may be John Paul Stevens’ most lasting gift to America.  It is worth noting, I think, that while the impact of many court decisions wanes over time, Sony has grown more important over the years, because it provides a pathway for copyright to adapt to changing technologies.

Free Lunch for Trolls

Earlier this week, I started writing a post about Senate bill S. 1273, the Copyright Alternative in Small-claims Enforcement (or CASE) Act of 2019. The Senate Judiciary Committee was about to mark up the bill, which includes voting to report it out to the full Senate, and I wanted to explain why I think the bill is a bad idea. Before I could finish my post, however, Stan Adams wrote this important piece for TechDirt that makes many of the points I had intended to make. So instead of repeating much of the same arguments, I decided that my most important task was just to make sure that readers of In the Open were aware of Adams’ excellent post.

Adams does a nice job of explaining where the legislation stands, and why it should not be enacted as currently written.  I really encourage folks to read his post, and will add just these three summary points about the potential negative effect of the CASE Act:

  • First, the CASE Act would disconnect statutory damages from the mechanism of copyright registration.  That is, one could file, get a judgment, and collect statutory damages for a claim in the new “small-claims” copyright court without having to register.  Rights holders in unregistered works, if the tribunal found they had been infringed, would be able to collect up to $15,000 in statutory damages.  So the incentive to register, which can help prevent infringement by making it easier to find a rights holder from whom to seek permission, would be undermined.
  • Second, the CASE Act would increase nuisance claims.  Because statutory damages would not be dependent any longer on timely registration, and because the barriers to bringing an infringement suit would be lowered, lots of people fishing for settlements — both real copyright trolls and rights-holders just “trying their luck” on weak claims — would be emboldened to send demand letters.  Such letters are common for libraries and universities; they are time-consuming and expensive to deal with, even though most come to nothing in the end.
  • Which brings me to my final point, the chilling effect on fair use that the CASE Act is likely to have.  Fair use is the proper response to many of those nuisance letters, and if they increase, the burden of exercising fair use will also go up.  And more librarians and teachers will likely be discouraged from even considering fair use, if statutory damages are more easily available through this streamlined “small”-claims system, since $15,000 is not a small amount at all to  many of them.

    “Subscribe to Open” as a model for voting with our dollars

    Elsewhere in this blog I’ve made the case that academic libraries should “vote with their dollars” to encourage and enable the kind of changes in the scholarly communication system that we’d like to see — those that move us towards a more open and equitable ecosystem for sharing science and scholarship.  Most recently, Jeff Kosokoff and I explored the importance of offering financial support for transitional models that help publishers down a path to full open access, and we called out Annual Reviews’ emerging “Subscribe to Open” model as a good example of one that provides a pathway to OA for content that is not suited to an article processing charge (APC) approach.  Here, Richard Gallagher, President and Editor-in-Chief of Annual Reviews (AR), and Kamran Naim, Director of Partnerships and Initiatives, explore with us the rationale for AR’s pursuit of an open access business model that is not based on APCs, provide details about how “Subscribe to Open” is intended to work, and describe its transformative potential.

    Supporting the Transition: Revisiting organic food and scholarly communication

    This is a joint post by guest Jeff Kosokoff, Assistant University Librarian for Collection Strategy, Duke University, and IO author Ellen Finnie, MIT Libraries.

    From the perspective of academic libraries and many researchers, we have an unhealthy scholarly information ecosystem. Prices consistently rise at a rate higher than inflation and much faster than library budgets; increasingly, only researchers at the most well-financed organizations have legitimate access to key publications; too few businesses control too much of the publishing workflow and content data; energy toward realizing the promise of open access has been (as Guédon describes it) “apprehended” to protect publisher business models; and artificial scarcity rules the day. Massive workarounds like Sci-Hub and #icanhazpdf have emerged to help with access, but such rogue solutions seem unsustainable, may violate the law, and almost certainly represent breaches of license agreements. While many of the largest commercial publishers are clinging to their highly profitable and unhealthy business models, other more enlightened and mission-driven publishers — especially scholarly societies — recognize that the system is broken and are beginning to consider new approaches. Often publishers are held back by the difficulty and risk of moving from the current model to a new one. They face legitimate questions about whether a new funding structure would work and about how a publisher can control the risks of transition.

    Transitioning Society Publications to Open Access

    This is a guest post written by Kamran Naim, Director of Partnerships & Initiatives, Annual Reviews; Rachael Samberg, Scholarly Communications Officer, UC Berkeley Library; and Curtis Brundy, AUL for Scholarly Communications and Collections, Iowa State University.

    The Story of Society Publications

    Scientific societies have provided the foundations upon which the global system of scholarly communication was built, dating back to the 17th century and the birth of the scholarly journal. More recent developments in scholarly communication (corporate enclosure, financial uncertainty, and open-access policies from funders and universities) have shaken these foundations. Recognizing the crucial role that societies have played, and must continue to play, in advancing scientific research and scholarship, a group of OA advocates, library stakeholders, and information strategists has organized to provide concrete assistance to society journals. The aim is to allow scholarly societies to step confidently towards OA, enabling them to renew, reclaim, and reestablish their role as a vital and thriving part of the future open science ecosystem.

    Lessons from the ReDigi decision

    The decision announced last month in the ReDigi case, more properly known as Capitol Records v. ReDigi, Inc., was, in one sense at least, not a big surprise.  It was never very likely, given the trajectory of recent copyright jurisprudence, that the Second Circuit would uphold a digital first sale right, which is fundamentally what the case is about.  The Court of Appeals upheld a lower court ruling that the doctrine of first sale is only an exception to the distribution right and, therefore, does not protect digital resale because, in that process, new copies of a work are always made.  The court’s reasoning pretty much closes the door on any form of digital first sale right, even of the “send and delete” variety that tries to protect against multiple copies of the work being transferred.

    What really happens on Public Domain Day, 2019

    The first of January, as many of you know, is the day on which works whose copyright term expired the previous year officially rise into the public domain. For many years now, however, no published works have entered the PD because of the way the 1976 Copyright Act restructured the term of copyright protection. 2018 was the first year in decades that the term of protection for some works – those published in 1923 – began to expire, so on January 1, 2019, many such published works will, finally, become public property. Lots of research libraries will celebrate by making digital versions of these newly PD works available to all, and the Association of Research Libraries plans a portal page, so folks can find all these newly freed works.

    I want to take a moment to try to explain the complicated math that brought us to this situation, and to spell out what is, and is not, so important about this New Year’s Day.

    Published works that were still protected by copyright on Jan.1, 1978, when the “new” copyright act of 1976 went into effect, received a 95-year term of protection from the date of first publication. For works that were in their first term of copyright when the law took effect, this was styled (in section 304(a)) as a renewal for a term of 67 years, so that 28 years from when the copyright was originally secured plus the 67 years renewal term equaled 95 years. If a work was in a second, renewed term on Jan 1, 1978, section 304(b) simply altered the full term of protection to 95 years from date of first publication (or, as the law phrases it, when “copyright was first secured”).

    So what is special about 1923? Prior to the 1976 act, copyright lasted for 28 years with a potential renewal term of another 28 years, for a maximum of 56 years. Thus, works published in 1923 would be the first batch of copyrighted works to receive this 95-year term, because they were the oldest works still in protection when the new act took effect (a 1923 copyright’s 56-year maximum term would not have run out until 1979, after the act’s effective date). A work published in 1922 would be entering the public domain just as the new act took effect, and those works stayed in the public domain. But works published a year later were still protected in 1978, and they got the benefit of the new, extended term, which ultimately resulted in 39 more years of protection than an author publishing her work in 1923 could have expected. Therefore, everything published from 1923 through 1977 enjoyed an extension, first to 75 years and then, thanks to the Sonny Bono Copyright Term Extension Act, to 95 years. Since 2018 is 95 years after 1923, it is the works published in 1923 whose terms expired during 2018, so they officially rise into the public domain on Jan. 1, 2019.
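For readers who like to see the arithmetic spelled out, here is a toy sketch of the term rules just described. The function name is my own invention, and the logic is deliberately simplified: it ignores special cases such as restored foreign copyrights, unpublished works, and the fact that renewal became automatic for works first published from 1964 onward. It is an illustration of the math, not legal advice.

```python
def last_year_of_protection(pub_year, renewed):
    """Last calendar year of U.S. protection for a work first published
    1923-1977, under the simplified rules described in the text."""
    if not (1923 <= pub_year <= 1977):
        raise ValueError("this sketch only covers works published 1923-1977")
    if not renewed:
        # Under the 1909 Act, an unrenewed copyright lapsed after its
        # first 28-year term.
        return pub_year + 28
    # Renewed works still in copyright on Jan. 1, 1978 were ultimately
    # extended to 95 years from first publication (28 + 67).
    return pub_year + 95

# A renewed 1923 work: protected through 2018, public domain on Jan. 1, 2019.
print(last_year_of_protection(1923, renewed=True))   # 2018
# A 1923 work that was never renewed lapsed back in 1951.
print(last_year_of_protection(1923, renewed=False))  # 1951
```

A work rises into the public domain on the January 1 following its last year of protection, which is why renewed 1923 works all become free on New Year’s Day 2019.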

    All this math does not mean, however, that everything published in 1923 has been protected up until now. Notice that, in the description above, we distinguish between those works in their first (28-year) term of protection and those in their second term. That is because, under the older law, a book, photograph, song or whatever, had to be renewed in order to continue to have protection past that first 28 years. Many works were not renewed, and the 1976 act only applies the extended 95-year term to those older works that were in their second term of protection when it took effect. So if a work was not renewed, and its copyright had already lapsed, the extended term did not apply. A basic principle is that the 1976 copyright law did not take anything out of the public domain that had already risen into it (although a later amendment did exactly that for certain works that were originally published in other countries).

    What is really happening then, is that some 1923 works – those whose copyright term was renewed after 28 years (in 1951) – really do become public domain for the first time. But for a great many works, which were already PD due to a failure to renew, what really happens this week is that we gain certainty about their status. Research suggests that a sizable percentage of works for which renewal was necessary were not, in fact, renewed; estimates range from 45% to 80%. So many of the works we will be celebrating were certainly already public domain; after January 1 we just have certainty about that fact. Finding out if a work was renewed is not easy, given the state of the records. The HathiTrust’s Copyright review program has been working hard at this task for a decade, and they have been able to open over 300,000 works. But it is painstaking, labor-intensive work that mostly establishes a high probability that a work is PD. On Public Domain day, however, we get certainty, which is the real cause for celebration.

    Let me illustrate this situation by considering one of the works that the KU Libraries plan to digitize and make openly accessible in celebration of Public Domain Day, 2019. Seventeen Nights with the Irish Story Tellers, by Edmund Murphy, is an interesting collection of poems that is part of our Irish literature collection, one of the real strengths of the Spencer Research Library at KU. It was published in Baltimore in 1923, and OCLC lists holdings for only 15 libraries. It is apparently not held by the HathiTrust, likely because no library whose holdings Google scanned owned a copy. My guess, though, is that it is already in the public domain and has been since the early 1950s: there is no record of any renewal of its copyright in the database of renewal records maintained by the library at Stanford University. That database holds renewal records only for books, and it contains no indication that Seventeen Nights ever received a second term of protection. So the chances are good that this work, like so many others, has been in the public domain for many years.

    There are a great many works published between 1923 and 1963 that simply exist in a state of limbo, probably in the public domain but not (yet) subject to the effort needed to determine whether there has ever been a renewal of the copyright. On Public Domain Day, 2019, we should certainly be delighted by the “new” 1923 works, such as Robert Frost’s Stopping by Woods on a Snowy Evening, that will become PD for the first time. But we also need to recall that many published works from the mid-20th century are already in the public domain. If we want to make the effort to do the needed research, there is lots of opportunity to free up some of these works without waiting for another January 1.

    Happy Public Domain Day to all!

    The First Step Towards a System of Open Digital Scholarly Communication Infrastructure

     A guest post by David W. Lewis, Mike Roy, and Katherine Skinner

    We are working on a project to map the infrastructure required to support digital scholarly communications.  This project is an outgrowth of David W. Lewis’ “2.5% Commitment” proposal.

    Even in the early stages of this effort we have had to confront several uncomfortable truths. 

    First Uncomfortable Truth: In the main, there are two sets of actors developing systems and services to support digital scholarly communication.  The first is the large commercial publishers, most notably Elsevier, Wiley, and Springer Nature.  Alejandro Posada and George Chen have documented their efforts.  A forthcoming SPARC report, previewed in a DuraSpace webinar by Heather Joseph, confirms these findings.  The second set may currently be described more accurately as a ragtag band: open source projects of various sizes and capacities.  Some are housed in universities, like the Public Knowledge Project (PKP); some are free-standing 501(c)(3) organizations; and others are part of an umbrella organization like DuraSpace or the Collaborative Knowledge Foundation (COKO). Some, like DSpace, have large installed bases and worldwide developer communities.  Others have yet to establish themselves and do not yet have a fully functional, robust product.  Some are funded only by start-up grants, with no model for sustainability; others have solid funding based on memberships or the sale of services.  This feels to us a bit like the Rebel Alliance versus the Empire and its Death Star.

    Weighing the Costs of Offsetting Agreements

    A guest post by Ana Enriquez, Scholarly Communications Outreach Librarian in the Penn State University Libraries.

    Along with others from the Big Ten Academic Alliance, I had the pleasure of participating in the Choosing Pathways to Open Access forum hosted by the University of California Libraries in Berkeley last month. The forum was very well orchestrated, and it was valuable to see pluralism in libraries’ approaches to open access. (The UC Libraries’ Pathways to Open Access toolkit also illustrates this.) The forum rightly focused on identifying actions that the participants could take at their own institutions to further the cause of open access, particularly with their collections budgets, and it recognized that these actions will necessarily be tailored to particular university contexts.

    Collections spending is a huge part of research library budgets and thus — as the organizers of the forum recognized — of their power. (At ARL institutions, the average share of the overall budget devoted to materials was 47% in 2015-2016.) Offsetting agreements were a major theme. These agreements bundle a subscription to toll access content with payments that make scholarship by the institution’s researchers available on an open access basis. The idea behind offsetting agreements is that if multiple large institutions pay to make their researchers’ materials open access, then not only will a large majority of research be available openly but subscription prices for all libraries should come down as the percentage of toll access content in traditional journals decreases. The downside is that offsetting agreements tie up library spending power with traditional vendors; they redirect funds to open access, but the funds go to commercial publishers and their shareholders instead of supporting the creation of a new scholarly ecosystem.

    Experiments with offsetting are underway in Europe, and MIT and the Royal Society of Chemistry have recently provided us a U.S. example. I look forward to seeing the results of these agreements and seeing whether they make a positive difference for open access. However, I am concerned that some see offsetting as a complete solution to the problems of toll access scholarship, when it can be at best a transitional step. I am concerned that it will be perceived, especially outside libraries, as a cost-containing solution, when it is unlikely to contain costs, at least in the near term. And I am also concerned that libraries and universities will commit too many resources to offsetting, jeopardizing their ability to pursue other open access strategies.

    Offsetting agreements must be transitional, if they are used at all. They are inappropriate as a long-term solution because they perpetuate hybrid journals. Within a particular hybrid journal, or even a particular issue, articles from researchers at institutions with a relevant offsetting agreement are open access, as are some other articles where authors have paid an article processing charge (APC). However, other articles within that same journal or issue are not open access. An institution that wants access to all the journal’s content must still pay for a subscription. In contrast, if the library that made the offsetting agreement had instead directed those funds into a fully open investment (e.g., open infrastructure or library open access publishing), the fruits of that investment would be available to all.

    Controlling the costs of the scholarly publishing system has long been a goal of the open access movement. It is not the only goal — for many institutions, promoting equity of access to scholarship, especially scholarship by their own researchers, is at least as important. However, with library and university budgets under perpetual scrutiny, and with the imperative to keep costs low for students, it is important to be transparent about the costs of offsetting. In the near term, offsetting agreements will cost the academy more, not less, than the status quo. Publishers will demand a premium before acceding to this experimental approach, as they did in the deal between MIT and the Royal Society of Chemistry. The UC Davis Pay it Forward study likewise estimated that the “break-even” point for APCs at institutions with high research output was significantly below what the big five publishers charge in APCs. In other words, shifting to a wholly APC-funded system would increase costs at such institutions.

    The authors of the Pay it Forward study and others have written about structuring an APC payment model to foster APC price competition between journals. Institutions pursuing offsetting agreements should build this into their systems and take care not to insulate authors further from these costs. They will then have some hope of decreasing, or at least stabilizing, costs in the long term. Barring this, libraries’ payments to traditional publishers would continue to escalate under an offsetting regime. That would be disastrous.

    Whether or not offsetting agreements stabilize costs, libraries will have to be cautious not to take on costs currently borne by other university units (i.e., APCs) without being compensated in the university’s budgetary scheme. What’s more, because offsetting agreements reinforce pressure to maintain deals with the largest publishers, they undermine libraries’ abilities to acquire materials from smaller publishers, to develop community-owned open infrastructure, to invest more heavily in library publishing, to support our university presses in their open access efforts, and to invest in crowdfunding schemes that support fully open access journals and monographs.

    To maintain this pluralistic approach to open access, either within a single research library or across the community, libraries signing offsetting agreements should be cautious on several points. To inform their negotiations, they should gather data about current APC outlays across their institutions. They should structure the APC payment system to make costs transparent to authors, enabling the possibility of publishers undercutting each other’s APCs. They should safeguard flexibility in their collections budgets and invest in other “pathways” alongside offsetting. And they should, if at all possible, make the terms of their offsetting agreement public, in the spirit of experimentation and of openness, to enable others to learn from their experience with full information and to enable themselves to speak, write, and study publicly on the impact of the agreement.