Accessibility for Publishers: Practical Tips That Demonstrate It's Well Within Your Reach

a free report from Riverwinds Consulting and Cenveo Publisher Services

Accessibility is an approach to publishing and design that makes content available to all, including readers with disabilities who use assistive technologies. The aim of accessible publishing is to make reading easier for users who have difficulties or disabilities, including people who are blind, partially sighted, or have learning disabilities. Making content accessible lets readers experience content in the format that works best for them and absorb the information more effectively. The term “accessibility” covers issues of content structure, format, and presentation.

The question “why make the effort to make content accessible to readers with disabilities?” still lingers. Accessibility does come with a cost, but publishers benefit from embracing this essential initiative: when accessibility is well executed, it can expand readership and provide a higher-quality user experience for everyone.

Let's look at an example comparing accessible alt text with alt text captured from a figure legend. Visual items such as images that are important to the content should include alternate-text descriptions (alt text), which allow users to understand the visual information. Alt text should capture information that is not included in the caption or surrounding text and convey the meaning of the visual item to the user. Descriptive alt text is critical for a visually impaired reader to understand the full meaning of an image. The following image illustrates accessible alt text that provides a more useful description for a visually impaired reader than alt text that simply repeats a figure legend.
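As a rough illustration (the figure, numbers, and file name below are invented for this sketch, not taken from the report), the difference in HTML might look like this:

<!-- Alt text that merely repeats the figure legend -->
<img src="figure1.jpg" alt="Figure 1. Mean response times by condition.">

<!-- Descriptive alt text written for a reader who cannot see the image -->
<img src="figure1.jpg" alt="Bar chart comparing mean response times: the treatment group averaged about 320 milliseconds, roughly 80 milliseconds faster than the control group.">

The first version tells a screen-reader user nothing the caption does not already say; the second conveys what the image actually shows.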

In our latest report "Accessibility for Publishers: Practical Tips That Demonstrate It's Well Within Your Reach," we provide business cases that can be brought to leadership and stakeholders in a publishing organization. Download this free report and understand

  •    how you can build the business case for accessibility in your publishing organization
  •    emerging and compelling reasons for making content accessible
  •    the key principles of accessibility
 

Happy Birthday Adobe PDF!

Adobe Acrobat turned 25 this month. For those of us who remember the pre-PDF days and what it was like sending that floppy disk to a colleague only to find out later it was gibberish when opened, we also might believe that the PDF is "sheer elegance in its simplicity."

Elegant? Yes!

Dr. John Warnock recognized that looks do matter and effective communication happens when an author's intended design, formatting, and images all combine to present an idea as originally intended. In 1990, Dr. Warnock launched his idea, The Camelot Project, in which anyone could capture documents from any application, send those documents anywhere, and even print those documents from any machine without compromising the integrity of the content. "Take that Apple IIc Plus!" Sincerely, Tandy 1000.

In August 1990, Dr. Warnock published a six-page white paper to support his Camelot idea and thus work commenced on the radical idea of a "portable document format."

PDF has been around for 25 years -- but what does it stand for? Here's what a few people had to say on the streets of Salt Lake City.

Simple? No.

Take a moment and think about how we take for granted all the complexity that exists behind the three clicks "Save As PDF." The following excerpt from Dr. Warnock's paper explains the inception of the PDF (née "Interchange PostScript"):

 

By redefining “moveto” and “lineto” very different things can happen. For example, if these operators are defined as follows:

/moveto
{exch writenumber writenumber (moveto) writestring}def
/lineto
{exch writenumber writenumber (lineto) writestring}def

then when the “poly” procedure is executed a file is written that has the following contents:
1.0 0.0 moveto
0.809 0.588 lineto
0.309 0.951 lineto
-0.309 0.951 lineto
-0.809 0.588 lineto
-1.0 0.0 lineto
-0.809 -0.588 lineto
-0.309 -0.951 lineto
0.309 -0.951 lineto
0.809 -0.588 lineto
1.0 0.0 lineto

In this example the new redefined “moveto” and “lineto” definitions don’t build a path. Instead they write out the coordinates they have been given and then write out the names of their own operations. The resulting file that is written by these new definitions draws the same polygon as the original file but only uses the “moveto” and “lineto” operators. Here, the execution of the PostScript file has allowed a derivative file to be generated. In some sense this derivative file is simpler and uses fewer operators than the original PostScript file but has the same net effect. We will call this operation of processing one PostScript file into another form of PostScript file “rebinding."

---The Camelot Project, J. Warnock

 

It took fewer than 3 years for Dr. Warnock's vision and diligent work by a brilliant production team to solve the problem and release the first iteration of Adobe Acrobat's Portable Document Format.

Creating PDFs in the early days was nowhere near as simple as it is today. I recall diligently writing down in my notebook all the steps required. I don't recall every step but I do remember the IT request to install three pieces of hefty and pricey software on my machine: Acrobat Exchange, Acrobat Distiller, and Acrobat Reader. Yes, in the early days Acrobat Reader had a price tag associated with it.

Software that changed the world.

In today's mobile responsive world, the PDF can cause frustration on an iPhone (I'm guilty). Yet I would argue that no other document technology has as much ubiquitous influence across markets and demographics as the beautiful PDF (more to come).

 

Videos in Your Journal Publishing Program?

Integrating video into a journal publishing program is not new but it's also not ubiquitous across the market. Videos can be a useful component to support an individual article while also helping authors to promote their research and publications.

The New England Journal of Medicine surveyed its authors and readers on the effectiveness of its Quick Take Videos (QTs). The survey had a 51% response rate among the 95 authors and 411 readers who were contacted to share their views.

Of those authors who responded, 75% were very satisfied with their role in helping to create QTs, while 17% were very dissatisfied.

98% of authors somewhat or strongly agreed that the QT accurately summarized their article and presented it in an engaging way.

Authors shared the following reasons when asked why they use QTs.

Readers shared the following reasons as relevant for why they view QTs.

When asked “Do you believe that videos represent the abstract of the future," 84% responded yes. The answer to this question is where the real value lies for journal publishers: at a time when journal publishing strives to provide greater benefits to authors, offering video shorts of articles is most certainly beneficial.


Are you currently integrating videos in your journal publishing program? Video abstracts? Training? Share your ideas in the comments section below.

 

Video Services



Marianne Calilhanna

Marianne is director of marketing for Cenveo Publisher Services. She started her career in editorial and production, working on STM primary and review journals. During her 28+ year career she's worked as a book editor, SGML (remember that?!) editor, and managing editor in addition to marketing-related positions. Technology, production, and people---these are just a few of her favorite things.

Smart Suite 2.0 Released - A New Approach to Pre-editing, Copyediting, Production, and Content Delivery

Smart Suite Version 2.0 is a cloud-based ecosystem of publishing tools that streamlines the production of high-quality content. The system features a complete user interface (UI) redesign and tighter integration with high-speed production engines to solve the challenges of multi-channel publishing.

Smart Suite 2.0 is the next generation publishing engine that focuses on a combination of artificial intelligence, including NLP, and system intelligence that eliminates human intervention and achieves the goal of high-speed publishing with editorial excellence. Smart Suite auto generates multiple outputs, including PDF, XML, HTML, EPUB, and MOBI from a manuscript in record-setting time.
— Francis Xavier, VP of Operations at Cenveo Publisher Services

Offering a fresh approach to streamline production, the unified toolset comprises four modules that seamlessly advance content through publishing workflows while validating and maintaining mark-up language behind the scenes.

  • Smart Edit is a pre-edit, copyedit, and conversion tool that incorporates natural language processing (NLP) and artificial intelligence (AI) to benefit publishers not only in terms of editorial quality but also better, faster markup and delivery to output channels.
  • Smart Compose is a fully automated production engine that ingests structured output from Smart Edit and generates page proofs. The engine works with both 3B2 and InDesign, and built-in styles based on publisher specifications guarantee consistent, high-quality layouts.
  • Smart Proof provides authors and editors with a browser-based correction tool that captures changes and allows for valid round tripping of XML.
  • Smart Track brings everything together in one easy UI that logs content transactions. The kanban-styled UI presents a familiar workflow overview with drill-down capabilities that track issues and improve both system and individual performance.

Smart Suite is fully configurable for specific publisher requirements and content types. Customized data such as taxonomic dictionaries, and industry integrations such as FundRef, GenBank, and ORCID, enhance the system based on publisher requirements.

 

Download Brochure

Taylor & Francis Group Awards Full-Service Production for Global Journal Content to Cenveo

Cenveo’s Technological Innovation Aligns With Taylor & Francis’ Journal Publishing Vision

Cenveo announces a major increase in full-service content production for Taylor & Francis’ global journal production program. Taylor & Francis selected Cenveo as a core content service provider to support its continued growth.


As a world-leading academic and professional publisher, Taylor & Francis cultivates knowledge through its commitment to quality. Taylor & Francis identified in Cenveo a shared vision to develop production workflows designed to improve the velocity of research dissemination. This planned strategic initiative enhances customer experience for Taylor & Francis' contributor base, particularly newer generations of researchers and scientists, without alienating its traditional market.

“The critical piece that convinced us Cenveo was the right partner was their technology stack supports our publishing model and provides real-world, expedited publication turnaround times using AI and natural language processing technology,” explains Stewart Gardiner, Global Production Director of Journals at Taylor & Francis Group. “The organizational and operational innovations Cenveo proposed to support a rapid scale-up in production volumes were something we haven’t seen from other providers and were clearly based on lessons learned in previous ramp-ups.”

In February 2018, Cenveo announced a financial restructure and reorganization to strengthen its fiscal health. Mr. Gardiner remarks, “Given the company is currently reorganizing following a Chapter 11 process, our legal and financial people looked at Cenveo closely and came to the view that this is a relatively straightforward debt for equity restructure. Refinancing of this sort is not out of line with what one might expect for a company in Cenveo’s market position, scale, and acquisition history.”

Cenveo and Taylor & Francis have shared a long work history prior to this fivefold increase in volume. The transition process has already begun and onboarding the additional Taylor & Francis work is scheduled to take place in structured phases throughout the remainder of 2018.


“This major win is a result of considerable work and effort that we have put into the next generation of Smart Suite combined with a focus on operational excellence,” explains Atul Goel, EVP Global Content Operations and President and COO of India Operations at Cenveo. “We are grateful for the trust placed in Cenveo by Taylor & Francis and heartened that Cenveo’s long-term vision of innovative publishing workflows aligns with a global leader in publishing.”

Cenveo is consistently rated as one of the highest performing content service providers by its customers. Cenveo’s ongoing commitment to publishers and extensive experience with volume ramp-up is further demonstrated by its significant investments in technology and staff.

The Center for Open Science | Preregistration Challenge

Some of the world's leading journals are taking steps to maximize the transparency and reproducibility of science by promoting the preregistration of research. Those journals include

  • Frontiers in Human Neuroscience
  • Journal of Experimental Social Psychology
  • Journal of Memory and Language
  • Memory & Cognition
  • Nature & Nature Research Journals
  • Ecology
  • Proceedings of the NAS
  • Brain and Behavior
  • Cognition & Emotion
  • Cortex
  • Learning & Behavior
  • PLOS Biology
  • Psychological Science
  • Science

Why Should Research be Preregistered?

When research is preregistered, there is an advance commitment before data are gathered. Preregistration separates hypothesis-generating (exploratory) from hypothesis-testing (confirmatory) research. Both are important, but the same data cannot be used both to generate and to test a hypothesis, which can happen unintentionally and reduce the clarity and quality of results. Removing potential conflicts through planning improves the quality and transparency of research, helping others who may wish to build on it.

The Center for Open Science (COS) is promoting preregistration through its Preregistration Challenge. The COS is giving away $1,000 each to 1,000 researchers who preregister their projects before they publish them!

Publishers can support this initiative by reaching out to authors and promoting the challenge. Following is an introductory video that explains the challenge; you can learn more by clicking here.

 

Publishing Defined: What is Open Peer Review?

 

This short video by John Bond of Riverwinds Consulting talks about the different types of Open Peer Review. John recently published a new book titled "Scholarly Publishing: A Primer." 

 

Learn About our Peer Review Services for Publishers



Open Practice Badges: A Primer and How to Get Started

The Center for Open Science (COS) provides tools, training, support, and advocacy that help researchers and scholars manage, share, and discover scientific research. The COS’ mission is to “increase the openness, integrity, and reproducibility of scholarly research.” Acceleration of scientific progress can be a primary motivator for scholarship and a powerful driver of real solutions.

The COS develops software tools, workflows, data storage solutions, and more based on its free Open Science Framework (OSF). The OSF is an ecosystem of solutions, partnering companies, technologies, and ideas that support researchers across the entire research life cycle.  One initiative that is gaining momentum is the use of Open Practice Badges in the publishing workflow.

Openness is a core value of scientific practice.
 

The scholarly publishing community agrees on the relevance and importance of open communication for scientific research and progress. In 2009 there were approximately 4,800 OA journals publishing approximately 190,000 articles. In January 2017, the estimate was that there were around 9,500 active OA journals. At Cenveo Publisher Services, we work with a large number of society and commercial publishers who have launched or are preparing to add OA publication models to their workflows.

Open Practice Badges awarded on published content acknowledge authors' use of open practices during the research life cycle.

Incorporating Open Practice Badges Into Publishing Workflows

By acknowledging open practices in scientific research, journal publishers can use badges in their publications to certify that a particular research practice was followed. Badges can be awarded to the published content as part of the peer review process or they can be awarded post-publication. As long as processes and practices are transparent, any organization can issue badges. Most publishers are awarding the badges during peer review. Publishing platforms and review services are likely to use the badges post publication.

For publishers, the journal awards the badge and links it to the specific article. Each publisher tends to have its own method for incorporating badges into the published article. However, it is critical that the badge is machine-discoverable and machine-readable.
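As a purely hypothetical sketch of what machine-readable badge markup could look like in an article's HTML (the class names, file name, and OSF link are illustrative only, not a COS or publisher specification):

<!-- Hypothetical example only: identifiers and link are illustrative -->
<a class="open-practice-badge open-data" href="https://osf.io/xxxxx/">
  <img src="open-data-badge.svg" alt="Open Data badge: the data underlying this article are publicly available">
</a>

The essential properties are that the badge type can be identified programmatically and that it resolves to the open resource it certifies.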

Detailed information on incorporating Open Practice Badges into your publication workflow can be found at the OSF Wiki page here.

Badge Overview

There are three badges currently used:

  1. Open Data
  2. Open Materials
  3. Preregistered

Following is an overview of the three badges and corresponding criteria. Detailed information is available on the OSF Wiki page, including corresponding links.

Open Data

The Open Data badge is earned for making publicly available the digitally-shareable data necessary to reproduce the reported results.

Criteria

Digitally-shareable data are publicly available on an open-access repository. The data must have a persistent identifier and be provided in a format that is time-stamped, immutable, and permanent (e.g., university repository, a registration on the Open Science Framework, or an independent repository at www.re3data.org).

A data dictionary (e.g., a codebook or metadata describing the data) is included with sufficient description for an independent researcher to reproduce the reported analyses and results. Data from the same project that are not needed to reproduce the reported results can be kept private without losing eligibility for the Open Data Badge.

An open license allows others to copy, distribute, and make use of the data while allowing the licensor to retain credit and copyright as applicable. Creative Commons has defined several licenses for this purpose, which are described at www.creativecommons.org/licenses. CC0 or CC-BY is strongly recommended.

Open Materials

The Open Materials badge is earned by making publicly available the components of the research methodology needed to reproduce the reported procedure and analysis.

Criteria

Digitally-shareable materials are publicly available on an open-access repository. The materials must have a persistent identifier and be provided in a format that is time-stamped, immutable, and permanent (e.g., university repository, a registration on the Open Science Framework, or an independent repository at www.re3data.org).

Infrastructure, equipment, biological materials, or other components that cannot be shared digitally are described in sufficient detail for an independent researcher to understand how to reproduce the procedure.

Sufficient explanation is provided for an independent researcher to understand how the materials relate to the reported methodology.

Preregistered/Preregistered+Analysis Plan badges 

The Preregistered/Preregistered+Analysis Plan badges are earned for preregistering research.

Preregistered

The Preregistered badge is earned for having a preregistered design. A preregistered design includes: (1) Description of the research design and study materials including planned sample size, (2) Description of motivating research question or hypothesis, (3) Description of the outcome variable(s), and (4) Description of the predictor variables including controls, covariates, independent variables (conditions). When possible, the study materials themselves are included in the preregistration.

Criteria for earning the preregistered badge on a report of research are:

  1. A public date-time stamped registration is in an institutional registration system (e.g., ClinicalTrials.gov, Open Science Framework, AEA Registry, EGAP).
  2. Registration pre-dates the intervention.
  3. Registered design and analysis plan corresponds directly to reported design and analysis.
  4. Full disclosure of results in accordance with registered plan.

Badge eligibility does not restrict authors from reporting results of additional analyses. Results from preregistered analyses must be distinguished explicitly from additional results in the report. Notations may be added to badges. Notations qualify badge meaning: TC, or Transparent Changes, means that the design was altered but the changes and rationale for changes are provided. DE, or Data Exist, means that (2) is replaced with “registration postdates realization of the outcomes, but the authors have yet to inspect or analyze the outcomes.”

Preregistered+Analysis Plan

The Preregistered+Analysis Plan badge is earned for having a preregistered research design (described above) and an analysis plan for the research and reporting results according to that plan. An analysis plan includes specification of the variables and the analyses that will be conducted. Guidance on construction of an analysis plan is below.

Criteria for earning the preregistered+analysis plan badge on a report of research are:

  1. A public date-time stamped registration is in an institutional registration system (e.g., ClinicalTrials.gov, Open Science Framework, AEA registry, EGAP).
  2. Registration pre-dates the intervention.
  3. Registered design and analysis plan corresponds directly to reported design and analysis.
  4. Full disclosure of results in accordance with the registered plan.

Notations may be added to badges. Notations qualify badge meaning: TC, or Transparent Changes, means that the design or analysis plan was altered but the changes are described and a rationale for the changes is provided. Where possible, analyses following the original specification should also be provided. DE, or Data Exist, means that (2) is replaced with “registration postdates realization of the outcomes, but the authors have yet to inspect or analyze the outcomes.”

What Journals Are Using Open Badges?

A list of journals currently using Open Practice Badges can be found here. The list continues to grow as more publishers understand the benefits of providing this acknowledgement to researchers and readers.


Cenveo Publisher Services is an advocate of Open Practice Badges. If your publishing organization would like to learn how we can support open badges in your workflow, feel free to reach out to us directly.

Are you currently using Open Practice Badges? Please share your findings or observations in the comments section below.

 

 

 

 

Innovative Research and Creative Output: From Ideas to Impact

Society for Scholarly Publishing - Philadelphia Regional Event

This post is a collaboration between SSP members, including Nicola Hill, Emma Sanders, and Adrian Stanley.

Left to right: Kathi Martin, Drexel Digital Museum; Jen Grayburn, CLIR Postdoc; Alex Humphreys, JSTOR Labs

On October 30th, the Society for Scholarly Publishing (SSP) hosted a regional event at the University of Pennsylvania, Van Pelt Library. The topic, "Innovative Research and Creative Outputs: From Ideas to Impact," brought together Philly-area publishers, librarians, and content professionals for a panel discussion on new and innovative methods of producing scholarship.

Jen Grayburn, CLIR Postdoctoral Fellow

Jen spoke about her use of Google Scholar, SketchFab and Unity in her work, which centers around the intersection of architecture and text. Using GIS (Geographic Information Systems) mapping software, Jen examines locations of historic sites. She shared an example of a mapping she did of St. Magnus Cathedral in the islands off the north coast of Scotland. In this particular example, Jen generated a binary map that  indicated what would and wouldn’t be visible on the ground from a certain height.

She uses GeoTIFFs (TIFF files encoded with geographical coordinates) to create a 3D topographic map to illustrate what is visible and why. Eventually, these mappings were confirmed with on-site visits she conducted. In her work, Jen uses Sketchfab to store the large 3D modeling files.

Currently, there is a lack of standards around 3D scholarly outputs—how they’re reviewed, stored, and made accessible. 3D collections are siloed by institution—there is really no central repository. The only exception Jen cites is Duke University’s MorphoSource. For these reasons, evaluating and citing digital work is still a challenge.

Content in Studies in Digital Heritage is inextricably linked to the 3D models created in the course of those studies. There is a real need for community standards for 3D data presentation. Academic departments are generally slow to reward digital projects or to develop a process for incorporating these scholarly outputs into formal evaluations.

Archeologists with an interest in Jen’s work, for example, always want the original 3D model she created, not the version on Sketchfab. But these models haven’t been peer-reviewed, and for that reason, Jen is reluctant to provide them. In the near future, further development of community standards for 3D and VR creation and curation in higher education is certainly warranted.

Kathi Martin, the Drexel Digital Museum Project

Kathi Martin presented her work with The Drexel Digital Museum Project: Historic Costume Collection (digimuse)---a searchable image database comprising select fashion from historic costume collections. Initially, the fashion images were highly protected: only low-resolution, watermarked images appeared on the website. Kathi explained that Polish hacktivists demonstrated to her how easy it is to remove the watermark and improve the resolution.

The museum has always been driven by open access and open source to share information and further usage and research. Interoperability is key to the museum’s mission—this allows the data on the museum’s website to be easily harvested across browsers.

The museum has widened beyond Drexel’s collection; for example, Iris Barrel Apfel’s Geoffrey Beene collection was displayed, and that exhibit is archived on the museum site. QuickTime VR was used to film the collection and provide high-resolution captures of the fashion collections.

The DigiMuse technology used in the Drexel project provides a new level of engagement with the collections Kathi is preserving. Drexel's Digital Museum project website allows a site visitor to interact personally and actively with a distributed, collected narrative. The site includes rich metadata descriptions for every picture. The variety of contributions on the site, Kathi feels, stimulates varying and often deeply personal reactions.

She believes the site is very powerful due to its “baked-in connectedness.” Kathi closed with Grace Kelly’s gown, made by Givenchy in part out of actual coral (gasp!). The site complements the high-res images of the gown itself with media of Grace Kelly in the gown.

Alex Humphreys, JSTOR Labs

Alex discussed how JSTOR Labs applies methods and tools from digital scholarship to create tools for researchers, teachers, and students "that are immediately useful – and a little bit magical." JSTOR is a member of ITHAKA, a non-profit devoted to digital sustainability.

Alex Humphreys, director at JSTOR Labs

Alex works with a team of five on innovative projects that benefit humanities scholars. He demonstrated JSTOR Labs’ Understanding Shakespeare tool, which uses the Folger Shakespeare Library’s digital version of Shakespeare plays to hyperlink each line of the play to a search showing all JSTOR articles that contain a particular line of prose. 

JSTOR Labs works from a philosophy of play—Alex sees what resources other organizations (like the Folger Shakespeare Library) bring, what Labs brings, and what kind of sandbox they might build in collaboration. Part of JSTOR Labs’ philosophy values what Alex calls “multi-disciplinarity.” For example, JSTOR Labs’ partnership with Eigenfactor (which measures influential and highly cited articles) has resulted in a tool that helps scholars discover the most influential articles in a given field or topic area.

JSTOR Labs also believes in hypothesis-driven development. Alex explained the key is ITERATING, ITERATING, ITERATING! Alex also presented topic modeling examples, including Reimagining the Monograph, which started from JSTOR Labs asking, "Can we improve the experience and value of long-form scholarship?"

The “topicgraph” provides a fingerprint of a monograph. Each term has a set of associated keywords, and the presence of those keywords in the text makes it more likely that the term is being discussed.

Last but certainly not least, Alex unveiled an amazing, brand-new tool with the working name “Text Analyzer.” This tool is essentially a multi-language analyzer—text can be pulled from, say, a Russian Wikipedia entry. The tool will translate the text and list in English the topics included in the entry.

Alex notes that so much of digital humanities is about probabilities, not known data. Label modeling (as opposed to cluster topic modeling) is the approach JSTOR Labs most frequently uses.


The Philadelphia SSP Regional Meetings are an excellent venue to engage with the scholarly and scholarly publishing community. All are welcome. To learn more, click here!

 

Rights & Permissions Service for Publishers

Copyright is far more than just a necessary evil to protect intellectual property from theft. Copyright furthers all creative interests by making the rich marketplace of ideas available to a wider audience. Resourceful rights and permissions management supports author content while maximizing the publisher’s budget.

Hiring one person to perform all the rights and permissions functions requires finding a pretty special person: an editorial specialist with enough copyright expertise to be an IP strategist who is also a skilled, digital-image-savvy photo researcher and database manager. That's why we offer R&P as a service for publishers.

Cenveo Publisher Services manages all aspects of text, image, and rich media content R&P. We assemble a team of project managers, assessment specialists, data entry staff, photo researchers, and permissions experts to support the management of R&P in your organization.

By identifying a rights strategy early, authors can stay on budget. Research and permissions runs alongside production cycles with clearly defined milestones. Targeted international expertise also allows a spectrum of pricing options. Contact us to learn how we can support R&P for your journals or books program.

 

Download Brochure


Accessibility for Journal Publishers

The terms “access” and “scholarly journals” are often linked to Open Access publishing. Less often discussed—but still very important—are issues and challenges of making journal content accessible to the visually, cognitively, or otherwise impaired.

Guest blog by John Parsons



Peer-reviewed, scholarly journals are a specialized slice of the publishing universe. Worldwide, it is a $25 billion market. Unlike consumer and trade magazines, journals are not supported by advertising revenue, but rely on subscriptions, institutional funding, and/or open access funding mechanisms. Readership varies widely in size and scope, and includes students, journalists, and government employees as well as researchers themselves. Journals are also delivered by a wide array of specialized digital platforms and websites.

What they do share with other publications is the assumption that their audience can read words and images on a page or screen. For the majority of journal readers, this poses few problems. However, for readers with visual or other impairments, content accessibility is a major concern.

Justifying Journal Content Accessibility

Some might argue, without foundation, that scholars qualified to consume peer-reviewed content are less likely to be impaired in the first place, making the number of affected users too low to justify the added costs. (If cost were the only issue, one Stephen Hawking in a journal’s potential audience would more than justify the cost of making scholarly exchange possible for disabled readers. Also, as was mentioned, scholars and researchers are not the only readers in the equation.)

In other words, one justification for accessibility is a moral argument. It’s simply the right thing to do. However, for most journals, this argument is moot. Government-funded research typically carries minimum accessibility requirements, such as those spelled out in Section 508 of the U.S. Rehabilitation Act.

Building content accessibility into a journal workflow need not be a daunting financial question at all. Well-structured XML content and metadata have many benefits, of which accessibility is only one. (This will be the subject of another blog.)

Regardless of the reason, most journal publishers understand the why aspect of content accessibility. So, let’s focus on how best to do it.

Identifying the Pieces---WCAG 2.0, Section 508, and VPAT

To understand the scope of journal article accessibility, we need to know that it has two basic versions—a document (PDF or EPUB) and a webpage. These are similar in many ways, especially to a sighted person, but they have different accessibility requirements.

Each of these formats has the following requirements in common:

  • accessibility metadata
  • meaningful alt text for images (including math formulas and charts)
  • a logical reading order
  • audible screen reading
  • alternative access to media content

Only two (EPUB and webpages) have potentially resizable text and a clear separation of presentation and content. (PDF’s fixed page and text size can often be problematic. But in areas where PDF is a commonly used format, notably healthcare, service providers offer workflow mechanisms to remediate PDFs for Section 508 compliance.)
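As for the accessibility metadata mentioned above: in EPUB 3 it is commonly expressed with schema.org properties in the package document. A minimal sketch (the property values shown describe one hypothetical article, not a complete or required set) might look like:

<metadata>
  <!-- Illustrative schema.org accessibility metadata in an EPUB 3 package file (excerpt) -->
  <meta property="schema:accessMode">textual</meta>
  <meta property="schema:accessMode">visual</meta>
  <meta property="schema:accessModeSufficient">textual</meta>
  <meta property="schema:accessibilityFeature">alternativeText</meta>
  <meta property="schema:accessibilityFeature">MathML</meta>
  <meta property="schema:accessibilityHazard">none</meta>
  <meta property="schema:accessibilitySummary">Figures include descriptive alternative text; equations are encoded as MathML.</meta>
</metadata>

Metadata like this lets reading systems and library platforms report, up front, what an accessible edition actually supports.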

Webpages have the added requirements of color contrast, keyboard access, options to stop, pause, or hide moving content, and alternatives to audio, video, and interactive content. Most of these are covered in detail in the W3C Web Content Accessibility Guidelines (WCAG) 2.0, many of which are federally mandated. Service provider solutions in this area include a Voluntary Product Accessibility Template (VPAT) for journal content. This template applies to all “Electronic and Information Technology” products and services. It helps government contracting officials and other buyers evaluate how accessible a particular product is according to Section 508 or WCAG 2.0 standards.

There are several “degrees of difficulty” when it comes to making journal articles accessible. Research that is predominantly text is the easiest, but still requires careful thought and planning. With proper tagging of text elements, clearly denoting reading order and the placement of section headings and other cues, a text article can be accessibility-enhanced by several methods, including large print and audio.

More difficult by far are the complex tables, charts, math formulas, and photographic images that are prevalent in STM journals. Here, extra attention must be paid to type size and logical element order (for tables). In the case of charts, formulas, and pictures, the answer is alternative or “alt” text descriptions.
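On the table side, markup that ties each data cell to its headers gives a screen reader that logical element order. A minimal sketch (the table contents are invented for illustration):

<table>
  <caption>Table 1. Enrollment by study arm (illustrative data)</caption>
  <tr>
    <th scope="col">Study arm</th>
    <th scope="col">Participants (n)</th>
  </tr>
  <tr>
    <th scope="row">Treatment</th>
    <td>52</td>
  </tr>
  <tr>
    <th scope="row">Control</th>
    <td>48</td>
  </tr>
</table>

With the scope attributes in place, a screen reader can announce "Treatment, Participants, 52" rather than reading the cells as disconnected numbers. For charts, formulas, and photographs, though, markup alone is not enough; that is where alt text comes in.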

Think of it as explaining a visual scene to someone who is blindfolded. Rudimentary alt text, like “child, doll, hammer,” would probably not convey the full meaning of a photograph depicting Bandura’s famous Bobo Doll experiment. Rather, the best alt text would be a more nuanced text explanation of what the images depict—preferably by a subject matter expert.

Automation in Workflow is Key

When Braille or even large print were the only solutions, journal content accessibility was not an option for most. All that changed, for the better, with the advent of well-structured digital content. Again, publishing service providers have done much to advance this process, and in many cases, automate it.

Not every issue can be automated, however. Making content accessible may involve redesign. For example, footnotes may need to be placed at the end of an article—similar to a reference list—to ensure continuity of reading. Other steps include supporting the logical flow of content and reading order, semantic structuring for discoverability, inclusion of alt text descriptions for images, simplified presentation and tagging of complex tabular data, and the rendering of math equations as MathML.
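For instance, an equation such as E = mc² encoded as presentation MathML (a minimal sketch) is exposed to assistive technology as structured content rather than as an opaque image:

<math xmlns="http://www.w3.org/1998/Math/MathML" alttext="E equals m c squared">
  <mi>E</mi>
  <mo>=</mo>
  <mi>m</mi>
  <msup>
    <mi>c</mi>
    <mn>2</mn>
  </msup>
</math>

A screen reader with MathML support can speak the structure ("E equals m c superscript 2"), and the same markup drives consistent rendering across print and online outputs.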

Journal publishers can facilitate this in part by selecting formats that are more accessible by nature. Articles published online or available as EPUB are largely accessible by default, although they need to be enhanced to meet all the requirements of WCAG 2.0. The gap is small and can be easily bridged by focusing on the shortcomings and addressing them in design, content structuring, and web hosting.

Many of the basic, structural issues of making journal content accessible can be resolved, more or less automatically, if the publishing system or platform enforces standardized metadata rules. Titles, subheads, body copy, and other text elements will have a logical order, and can easily be presented in accessible ways. For elements where knowledgeable human input is required (as with alt text), a good system will facilitate such input.

Accessibility is not just the right thing to do, for the sake of science. It is also an obtainable goal—with the right service provider.

 




Publishing Defined: John Bond's STM Publishing Video Series

What is Crossmark?

John Bond of Riverwinds Consulting is creating a video library of useful shorts about topics and terms important to the STM publishing industry. For some people, his shorts may provide a great refresher or another take on subjects that impact our market. For those just starting their career in STM publishing, his video series should be required viewing!

The series is titled "Publishing Defined" and covers a broad range of topics, from defining specific terms to strategic advice regarding RFPs. Also helpful are the playlists he’s put together. You are sure to add a little something to your own knowledge base from this series!

The following video explains Crossmark and why it’s important for publishers and service providers:

The Crossmark playlist can be viewed here.


Crossmark and Crossref are explained in our white paper, "All Things Connected." Download your copy today by clicking on the cover in the right column.

 

Resources for Publishers



Publishers Keep Calm and Carry On

It was another busy year at London Book Fair last week, with reports of registration numbers up by a double-digit percentage.

 
 

The following captured a brief quiet moment at the Cenveo Publisher Services Stand. The global team met with publishers, production managers, archivists, technology executives, and many others to discuss all things related to the creation and management of content.

 
 

Accessibility

Indeed, the hot topic for LBF17 at the Cenveo Stand was content accessibility. Long a champion of digital equality, we're helping publishers create and architect content that is "born accessible." The same technologies and guidelines that improve access to materials for people with visual or hearing impairments, limited mobility, and perceptual and cognitive differences are also tremendously useful for all publishers' customers.

No longer is this limited to education publishers; journal publishers and others have a driving need to do more with content accessibility.

 

Google Books Decision

In an extremely packed room, Judge Pierre Leval, America’s foremost copyright jurist and a judge on the U.S. Court of Appeals for the Second Circuit, told attendees that Google’s program to scan tens of millions of library books to create an online index “conferred gigantic benefits to authors and the public equally,” and did not “offer a substitute or interfere with authors’ exclusive rights” to control distribution. READ MORE: Judge Pierre Leval Defends Google Books Decision, Fair Use

Scholarly Publishing and Academic Market

The Research and Scholarly Publishing Forum offered academic publishers and service providers a half-day program with lively debates from Elsevier, Wiley, and Taylor & Francis. Some of the highlights included

  • A discussion about the future of Open Access in the UK between Alicia Wise, Elsevier’s Director of Policy and Access, Liam Earney, Jisc Collections’ Head of Library Support Services, and Chris Banks, Assistant Provost (Space) & Director of Library Services, Central Library, Imperial College London
  • A panel presenting global research policy developments chaired by Wiley’s James Perham-Marchant, featuring speakers from Taylor & Francis, Berghahn Books and Research Consulting
  • A panel session on new innovations to watch, chaired by Tracey Armstrong, President and CEO of the Copyright Clearance Center, including speakers from Sparrho, Frontiers and Cold Spring Harbor Laboratory Press

Full Coverage via Publishers Weekly

Publishers Weekly covered a range of topics across the many markets represented at the Fair.

 

Resources for Publishers



How Open Access is Changing Scholarly Publishing

Guest blog by John Parsons

After almost two decades, the Open Access publishing model is still controversial, and misunderstood. Here’s where we stand today.

The beginnings of scholarly publishing correspond roughly to the Enlightenment period of the late 17th and early 18th centuries. The practice of publishing one’s discoveries was driven by a belief—championed by the Royal Society—in the transparent, open exchange of experiment-based ideas. Over the centuries, journals embraced a rigorous peer review process to maintain the integrity (and the subscription value) of their research content.

Transparency, openness, and integrity all come at a cost, however. For many years, that cost was met by charging journal subscription fees—usually borne by institutions who either produced the research, benefited from it, or both. So long as the publishing model was solely print-based, the subscription model worked well, especially for institutions with deep pockets. That all changed with the Internet. Not only did the scope and volume of research increase rapidly, so did the perception that all information should be easily findable via search engines.

The Internet expanded the audience for research outside traditional institutions—to literally anyone with a connected device. With this expansion, the disparity between the well-funded and those less fortunate became acute. As it did with other publishing workflows, this disruption drove a need for new economic models for scholarly publishing.

Open Access Basics

Advocacy for less fettered access to knowledge is nothing new. But the current Open Access (OA) movement began in earnest in the early 2000s, with the “Three Bs” (the Budapest Open Access Initiative, the Bethesda Statement, and the Berlin Declaration by the Max Planck Institute). Much of the impetus occurred in the Scientific, Technical, and Medical, or STM, publishing arena, and from research funding and policy entities like the European Commission and the U.S. National Institutes of Health. The latter’s full-text archive of free biomedical and life sciences articles, PubMed Central or PMC, is a leading example—backed by a mandate that the results of publicly funded research be freely available to the public.

In a nutshell, Open Access consists of two basic types—each with its own variations and exceptions. “Green” OA is the practice of self-archiving scholarly articles in a publicly-accessible data repository, such as PMC or one of many institutional repositories maintained by academic libraries. There is often a time lag between initial publication—especially by a subscription-based journal—and the availability of the archived version.

The alternative is the “Gold” OA model. It includes a growing number of journals, such as the Public Library of Science (PLOS), that do not charge subscription fees. Instead, they fund the cost of publishing through article processing charges (APCs) and other mechanisms. Although APCs are commonly thought of as being paid by the author, the real situation is more complex. Often, in cases where OA is mandated, APCs are built into the funding proposals, or otherwise factored into institutional and research budgets. PLOS and other journals can also waive APCs, or utilize voluntary funding “pools,” for researchers who cannot afford to pay them.

The appeal of Open Access is obvious to researchers and libraries of limited means. It also has the potential to accelerate research—by letting scientists more easily access and build upon others’ work. But for prestigious institutions, publishers, and their partners, the picture is more complicated.

Publishers in particular can be hard pressed to develop and enhance their brand—or offer a multitude of services that scholars may take for granted—when constrained by the APC funding model. (Those challenges will be addressed in a future blog.)

Misconceptions, Problems—and Solutions

Even today, researchers are not always clear about what Open Access means for scholarly publishing. Research librarians have their work cut out for them. They cite the common misconception that OA journals do not have an adequate peer review process, for example. This is caused by disreputable or “predatory” journals that continually spam researchers with publication offers. Librarians counter this with a growing arsenal of blacklist and whitelist sources, such as the Directory of Open Access Journals.

Perhaps a major contributor to the uncertainty surrounding OA is the practice of openly publishing “preprint” versions of articles prior to—or during the early stages of—the peer review process. Sometimes, this is part of the researcher’s strategy to secure further funding, but it can fuel the mistaken notion that peer review is not required in OA publishing workflow. Distinguishing preprints from final OA articles must be a goal for publishers and their partners.

Another problem is scholars’ unfamiliarity with the OA-driven changes in publishing workflows. Gold OA journals—particularly those involved in STM publishing—are usually quite adept at guiding authors through the publication process, just as their subscription-based counterparts and publishing service providers have been. For example, the practice of assigning Digital Object Identifiers (DOIs), ISSNs, and other metadata to scholarly publishing works is becoming increasingly efficient for both Gold OA and subscription journals.
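As a sketch of what that metadata looks like in practice (element names follow the JATS tag set commonly used for journal articles; the journal title, ISSN, and DOI below are invented):

<journal-meta>
  <journal-title-group>
    <journal-title>Example Journal of Open Research</journal-title>
  </journal-title-group>
  <issn pub-type="epub">1234-5678</issn>
</journal-meta>
<article-meta>
  <!-- The DOI travels with the article through production, hosting, and archiving -->
  <article-id pub-id-type="doi">10.1234/example.2018.001</article-id>
</article-meta>

Because identifiers like these are captured as structured metadata rather than free text, they can be deposited with Crossref and propagated to platforms and repositories with little manual effort.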

Green OA is a thornier problem for traditional publishing workflows. Each institutional repository is separate from the others—with its own funding sources, development path, and legacy issues. A common approach to article metadata, for example, has not happened overnight. Fortunately, organizations like Crossref are working with multiple partners and initiatives to make these workflows universal—and transparent to the researcher.

Perhaps the biggest issue posed by OA is the fate of traditional, subscription-based journals. Despite the push to “flip” journals from a subscription model to Open Access, there are cases where this is simply not feasible or even desirable. Many journals have a large subscriber base of professionals who, although they value the research, do not themselves publish peer reviewed articles. This is especially true for STM publishing. Some of these journals have adopted a “hybrid” approach, charging APCs for some articles (which are available immediately) while maintaining others for subscribers only. These are eventually made Open Access under the Green model, especially when Open Access is a funding requirement.

Scanning the Horizon

As we will discuss in future blogs, publishers and their service providers are exploring better ways to adapt their publishing workflows to the realities of OA and hybrid journals. In some cases, such as metadata tagging, XML generation, and output to print and online versions, these workflows can be highly automated. In others, publishers must find cost-effective ways to add value—while being as transparent as possible to the authors and users of journal content.

Despite these challenges, Open Access is changing the scholarly publishing landscape forever. There is a compelling need for researchers to find and build upon the research of others—each needle buried in a haystack of immense proportions—to advance the human condition. Publishers and their service partners are well positioned to make that open process accessible and fair to all.

 

Resources for Publishers


Peer Review Management Services: Ensuring the Integrity of the Scientific Publishing Process

Cenveo Publisher Services now offers peer review management as a service. Journal publishers depend on the peer review process to validate research and uphold the quality of published articles. With deep expertise in scholarly publishing, our staff is fluent in all peer review models as well as the nuances of major peer review systems.

Download Brochure

Click here to download brochure

Our mission is to support both commercial and scholarly journal publishers with services that ensure editorial excellence while demonstrating time and cost savings. Peer review management fits well in our service portfolio because we’ve been working with the STM publishing industry for more than 135 years and peer review is most certainly the cornerstone of scholarly publishing.
— McClanahan, Vice President of Publishing Services, Cenveo Publisher Services

Customized peer review management solutions are based on each publisher’s workflows and business requirements. Peer review management is offered as a stand-alone service or integrated with Cenveo’s full-service journal production model. Dedicated staff work exclusively on peer review---maintaining deadlines, communicating with reviewers, and streamlining responses to authors. The service is bundled with regular performance reports that detail submission numbers, processing times, decision rates, and more.

Click the link below to learn more about this new service offering.

 

Resources for Publishers




The Scholarly Publishing Process Plays a Critical Role in Combating Fake News

Time Reveals Truth by Giovanni Domenico Cerrini

"Time reveals truth."

As 2017 quickly approaches, we're sure to read, learn, and understand more about the role scholarly publishing will play in our post-truth world. Content validation, peer review, image forensics, traditional citation databases---these are long-established and critical components of the scholarly publishing process. While the demand for increased speed to publication became a critical measurement of a journal publisher's success, editorial integrity and quality remain the gold standard by which publications are judged.

Kalev Leetaru, a contributor to Forbes, recently wrote "How Academia, Google Scholar And Predatory Publishers Help Feed Academic Fake News." In this article he shares a number of his experiences and conversations that illustrate how content validation is not at the forefront or even a consideration in some people's minds:

 
  • "Not a day goes by that an academic paper doesn’t pass through my inbox that contains at least one claim that the authors attribute to a source it did not come from."
  • "I constantly see my own academic papers cited as a source of wildly inaccurate numbers about social or mainstream media where the number cited does not even appear anywhere in my paper."
  • "...many [graduate students] I’ve spoken with have never even heard of more traditional bibliographic search engines and prefer the ease-of-use and instant access of Google Scholar for quick citation searches."
  • "The Editor-in-Chief of one of the world’s most prestigious and storied scientific journals recently casually informed me that his journal now astoundingly accepts citations to non-peer-reviewed personal web pages and blog posts as primary citations supporting key arguments in papers published in that journal."
 

Within scholarly publishing the conversation around "Open" echoes louder all the time. The first SSP Focus Group meeting on January 31, 2017 is on the topic of "Open Data, Science, and Digital Scholarship." PSP's Annual Conference (February 1 to 3) will discuss "Adding Value in the Age of Open."

The concept of "open" is not a new one. Though the term Open Access publishing started to proliferate in the early 2000s, the idea has been around for some time. Computer scientists had been self-archiving in anonymous FTP archives since the 1970s and physicists had been self-archiving in arXiv since the 1990s. In 1994, Stevan Harnad proposed "The Subversive Proposal," calling on all authors of "esoteric" research writings to archive their articles for free for everyone online.

Leetaru's article suggests that the combination of academia, Google Scholar, and predatory publishing practices play a role in the proliferation of fake news. One could also maintain that the scholarly publishing process plays a pivotal role in combating fake news.

How is your publishing organization navigating the challenges of open in our internet-connected world? What are the consequences of our movement into a more open ecosystem in the scholarly publishing community? Can quality and peer-reviewed content override non-peer-reviewed personal web pages and blog posts?

Time will tell.

 

Peer Review Services



Web-First Production or Publish-Ahead-of-Print...That Which We Call a Workflow Should Publish Just as Fast

In the STM journal publishing world, it seems like every few years we have a new phrase to describe the dissemination of scholarly content. Each phrase describes a slightly different aspect of journal publishing and, depending on where you work in an organization, it may mean something slightly different. A collection of phrases I've encountered over the years includes

  • XML-early workflow
  • XML-first workflow
  • Publish ahead of print (PAP)
  • Cloud-based publishing
  • Web-first production
  • HTML-based publishing

I'm sure there are other terms that specific journals and specific publishing organizations use.

No matter the name, and without parsing every word, I believe the big takeaway is that now more than ever, it's critical to publish STM content quickly without compromising editorial quality. Speed matters for journal content and scholarly communication. Longevity is important as well: researchers need to go back to articles to understand corrections, errata, retractions, and updates. And whatever the label, markup is the driving force behind speed, accuracy, longevity, and discoverability.
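To make the point concrete, here is a minimal, purely illustrative sketch of what structured markup buys a publisher: the same tagged metadata that drives composition can be read by a machine for linking, indexing, and discovery. The element names follow the JATS tag set commonly used for STM journals, but the fragment and its values are invented for illustration and are not drawn from any Cenveo workflow.

```python
# Illustrative only: parse a tiny JATS-style fragment and pull out the metadata
# that downstream systems (indexes, link resolvers, archives) rely on.
import xml.etree.ElementTree as ET

article_xml = """
<article>
  <front>
    <article-meta>
      <article-id pub-id-type="doi">10.9999/example.2016.001</article-id>
      <title-group>
        <article-title>An Example Article</article-title>
      </title-group>
      <pub-date pub-type="epub"><year>2016</year></pub-date>
    </article-meta>
  </front>
</article>
"""

root = ET.fromstring(article_xml)
doi = root.findtext(".//article-id[@pub-id-type='doi']")
title = root.findtext(".//article-title")
year = root.findtext(".//pub-date[@pub-type='epub']/year")
print(doi, title, year)  # 10.9999/example.2016.001 An Example Article 2016
```

Because the DOI, title, and publication date are explicit elements rather than formatting, they survive every downstream transformation---print, HTML, syndication---without re-keying.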

To provide our publishers with automated production at record-setting speed, we use Cenveo Publisher Suite. The cloud-based ecosystem of tools is architected to ensure editorial consistency and quality.

Cenveo Publisher Suite | Features and Benefits

Smart Edit

Overview: Helps editors perform common tasks during the content creation process.

Advantages for authors/editors:

  • Automated clean-up process identifies more than 200 different actions.
  • Auto content identification quickly updates specific document and content types: author names, affiliations, footnotes, abstracts, keywords, etc.
  • Content normalization transforms the styled document to the publisher/journal-specific format.
  • Reference validation identifies any missing or duplicate references; all references are validated against CrossRef and PubMed (a rough sketch of this kind of lookup appears after this table).
  • Publisher-specific preferences are highlighted for the copyeditor to review.

Publishing workflow benefits:

  • Extensibility. New content items, specific content types, taxonomies, quality checks, and additional output deliverables are managed through a modular, customizable interface.
  • Authoritative sources. The Cenveo architecture uses industry-standard authority sources such as CrossRef and PubMed Central®, which provide content integrity.
  • Publisher-specific flexibility. Normalizations are based on title-specific style and content requirements.

Technical specs: Built on the latest version of Microsoft Office 2013 and Visual Studio 2012.

Support: The Smart Edit team comprises experienced analysts and developers with deep knowledge of STM content as well as publisher-specific requirements. A dedicated team makes changes or updates to normalization styles and output routines quickly, with fluency and expertise in content creation and output.

Smart Compose

Overview: Automated composition engine that ingests content output from the Smart Edit process and generates proofs based on publishers' styles.

Advantages for authors/editors:

  • Speed to publication. Automated content transformations enable the fastest turnaround times in the industry. Based on a publisher's requirements and the content itself, same-day turnaround is a true possibility.
  • Consistency. With built-in styles based on publisher specifications, consistent format is guaranteed across journal articles, multiple titles, references, and more.

Publishing workflow benefits:

  • Streamlined workflows. Transitioning from manuscript to proof while maintaining XML structure translates to effortless digital and print output. One straight-text article can be composed every 2 to 3 minutes.

Technical specs:

  • Dynamic server-based 3B2 composition with a core template built using XPath, XSLT, and Perl. Style sheets and layouts are stored as separate libraries.
  • Dynamic server-based InDesign composition, with templates built on Java and InDesign scripts, is the latest addition for high-speed composition of design-intensive content.

Support: Template engineers with 15 to 20 years of on-the-job experience are available around the clock for troubleshooting and any other technical demands.

Smart Proof

Overview: Online proofing and correction tool that presents composed pages via a web browser and offers an interface to update content and format.

Advantages for authors/editors:

  • Intuitive. Reminiscent of Microsoft Word but accessible via any browser, so authors and editors can easily make line edits and insert queries.
  • Behind-the-scenes XML. Focus stays on the content, not the structure; XML markup is captured behind the scenes, including change-history metadata.
  • Editorial integrity. Managing author corrections, editorial styles, and journal formats consistently translates to quality published content.

Publishing workflow benefits:

  • Streamlines the proofing process for authors and editors in a serial correction workflow. Multiple correction sources are integrated into a single PDF (no re-marking of corrections).

Technical specs: An XHTML-based tool. XML input is converted to XHTML for the correction cycle, then transformed back to XML.

Support: One-time authentication, troubleshooting, and customer support. Automatic alert messages to the technical support team help resolve any technical glitches.
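Smart Edit's reference validation is proprietary, but the underlying idea---checking a free-text reference against an authoritative source---can be sketched against Crossref's public REST API. The /works endpoint and the query.bibliographic parameter used below are part of Crossref's documented API; the helper name, sample reference string, and "flag for review" behavior are illustrative assumptions, not Cenveo's implementation.

```python
# Rough sketch: look up a free-text reference in Crossref and report the best
# candidate match. Real validation pipelines add scoring thresholds, PubMed
# checks, and duplicate detection on top of something like this.
import requests

def lookup_reference(ref_string):
    """Return the best-matching Crossref record for a free-text reference."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": ref_string, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0] if items else None

ref = "Harnad S. A subversive proposal. 1995."  # illustrative reference string
match = lookup_reference(ref)
if match:
    print("Candidate DOI:", match.get("DOI"))
    print("Title:", match.get("title", ["(none)"])[0])
else:
    print("No candidate found -- flag for copyeditor review.")
```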

As publishers' business drivers evolve with the journal landscape---author support, open science, and readership needs among them---we ensure our technology helps them along the way.


Want to see a demo of Cenveo Publisher Suite or consult with a publishing workflow specialist? Simply click the link below to get started!

 



SSP Fall Seminar Recap - Mentoring, RFPs, Metadata...These Were Just a Few of Our Favorite Things

This past Tuesday and Wednesday (October 4 to 5), SSP hosted its Fall Seminar at the American Geophysical Union office in DC. The event was organized around three themes with presentations from publishers', vendors', and consultants' perspectives:

  1. Develop Somebody---Even Yourself: Mentorship, Career Development, and Networking
  2. A How to Guide: Successfully Executing an RFP Process
  3. Bagged and Tagged: How the New Scholarly Infrastructure is Connecting People, Places, and Things

Unlike the large SSP Annual Meeting, the Fall Seminar is an intimate gathering of journal managers, publishers, editorial directors, content technology architects, developmental editors, graphic designers, and more. The focus throughout the 2 days was building networks, both professional and organizational, to strengthen yourself and your company. It was evident that the message was taken to heart as everyone involved was open to conversation and making new connections.

The RFP presentation was loaded with tips and best practices but also included thoughts on what NOT to include in an RFP. The participants and the audience shared many pet peeves that translated to a list of great tips related to RFP content and process.

Never miss an opportunity to hear Chuck Koscher from CrossRef speak about standards and metadata. His mission of creating a sustainable infrastructure for scholarly communication is always explained in detail and with passion.

Following is a small sample of information from the past 2 days:

 

Resources for Publishers

Seven Facts That Publishers Should Know About DOI

While some academic publishing metadata standards have yet to reach a “tipping point,” others are already well established. The Digital Object Identifier, or DOI, is one of these. 

  1. What is DOI? Administered by the nonprofit International DOI Foundation, these ISO-standard alphanumeric codes serve as “persistent identifiers” for digital content (including abstracts), related objects, and physical assets or files. 
  2. The benefit of a universal DOI: Nearly all journal articles are assigned a unique DOI, which facilitates more efficient management, tracking/searching, and automation by publishing and content management systems. It links to the object permanently, even if it is moved, modified, or updated. It also can contain associated metadata, although the data model requires only a limited set of “kernel” elements.
  3. I’m a publisher, how do I use DOI? Typically, publishers contact the agency, obtain a DOI to be used for all of the articles they publish, and work with the agency to register and use the DOIs created for individual articles. 
  4. Who allocates the DOI? Various registration agencies manage the DOI records, maintain the metadata databases, and participate in the overall DOI community. For academic publishing, the primary agency is the nonprofit Crossref (a minimal metadata lookup against its public API is sketched after this list).
  5. What should I know about Crossref? Crossref handles DOIs for preprints (unpublished drafts posted on preprint servers) as well as DOIs for articles accepted in the publication chain (from the initial manuscript submission through the final published article). These are in fact separate identifiers—to distinguish the state of the piece in the publishing process—but are also linked to one another. 
  6. Where will we see growth in DOI adoption? According to April Ondis, Crossref’s Strategic Marketing Manager, “The real growth in DOI adoption will be in the area of preprints and early content registration.”  Driven in part by the growth of Open Access, researchers are increasingly using preprint content to invite informal feedback before the article is formally accepted for peer review and publication. Ondis noted that the DOI for an accepted article is the primary, and permanent one, while the preprint’s DOI is separate but linked.
  7. Are there problems with DOIs? Authors, institutions, and research funders need to know about pending articles as soon as possible. “However, with a DOI there has to be a content URL. At article acceptance, the publisher often does not know where that content will be, so a DOI could not be registered,” said Crossref’s Director of Technology, Chuck Koscher.  The solution? Crossref will now host an ‘intent to publish’ landing page for these DOIs, based on an ‘intent to publish’ field in the metadata supplied by the publisher.
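For readers who want to see what a registered DOI actually carries, the sketch below retrieves a DOI's metadata record from Crossref's public REST API (the api.crossref.org/works/{doi} endpoint is documented by Crossref; the sample DOI, helper name, and printed fields are arbitrary illustrative choices).

```python
# Fetch the Crossref metadata record behind a DOI. Swap in a DOI from your own
# content; the one below is just a placeholder example.
import requests

def crossref_metadata(doi):
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    return resp.json()["message"]

record = crossref_metadata("10.1037/0003-066X.59.1.29")  # example DOI
print(record["title"][0])     # article title as registered
print(record["publisher"])    # registering publisher
print(record["URL"])          # link that resolves to the content's landing page
```

Because the identifier, not the URL, is what gets cited, the landing page can move and only the DOI record needs updating---the citation keeps working.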

Read more about DOI and other metadata standards in our white paper, "All Things Connected." [click here]

 

Related White Paper

Grab your copy of "All Things Connected" to learn more about DOIs and other metadata standards [click here]



Honoring Your Authors and the Scholarly Publishing Process

Retraction Watch recently discussed why PLOS ONE's correction rate is higher than average---authors do not review page proofs.

Everyone in scholarly publishing understands that mistakes are made along the publishing process, and the bright side of digital publishing is that it allows for quick corrections and updates to scholarly papers. However, when correction rates are higher than what's typically considered acceptable, which is about 1.5%, it's time to look into the workflow to determine what exactly is going on.

Mark Dingemanse, a researcher at the Max Planck Institute for Psycholinguistics, has been reviewing PLOS ONE's correction rate since May 2015. He updated his analysis in August 2016:

 
Here are the numbers for the whole of 2015: 30970 research articles across all PLOS journals, 1939 corrections (6.3% of publication output), of which 415 acknowledge publisher error (21.4% of corrections). And here’s 2016 so far: 15162 articles, 794 corrections (5.2%) of which 154 are publisher error (19.4% of corrections). So over the last 1.5 years, a full 6% of all PLOS publication output has received corrections, and at least one fifth of these are due to publisher errors beyond the control of authors. Keep in mind authors are essentially powerless and many don’t request corrections, so the problems are likely much worse.
— http://ideophone.org/why-plos-one-needs-page-proofs/
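The percentages in the quoted passage follow directly from the counts. As a quick back-of-the-envelope check (the figures are Dingemanse's; only the arithmetic below is ours):

```python
# Re-derive the correction-rate percentages from the counts quoted above.
articles_2015, corrections_2015, pub_err_2015 = 30970, 1939, 415
articles_2016, corrections_2016, pub_err_2016 = 15162, 794, 154

print(f"2015 correction rate: {corrections_2015 / articles_2015:.1%}")          # ~6.3%
print(f"2015 publisher-error share: {pub_err_2015 / corrections_2015:.1%}")     # ~21.4%
print(f"2016 (so far) correction rate: {corrections_2016 / articles_2016:.1%}") # ~5.2%
print(f"2016 publisher-error share: {pub_err_2016 / corrections_2016:.1%}")     # ~19.4%

combined = (corrections_2015 + corrections_2016) / (articles_2015 + articles_2016)
print(f"Combined correction rate over the period: {combined:.1%}")  # ~5.9%, i.e. roughly 6%
```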
 

PLOS ONE makes it very clear that it is against the journal's policy to provide authors with page proofs. Head over to Retraction Watch and read the full story along with the comments and associated links.

At Cenveo Publisher Services, our workflows are built on the trifecta of people-process-technology with the "people" part first. We end with people as well---in the form of author proofs!

Traversing a typical journal workflow process at Cenveo Publisher Services.

 

 

