AI and NLP for Publishers: How Artificial Intelligence & Natural Language Processing Are Transforming Scholarly Communications

A free report from Cenveo Publisher Services

You may have heard how artificial intelligence (AI) is being deployed within the information industry to combat fake news, detect plagiarism, and even recommend content to users. Until now, however, AI has had minimal impact on the content creation and editorial functions of the publishing ecosystem. For scholarly publishers in particular, AI capabilities have now advanced to the point where publishers can automate significant portions of their workflows, with massive implications for their businesses, their authors, and the research community.

AI is a method by which humans train machines to identify patterns and to learn new ones. It involves developing algorithms that enable machines to quickly process large volumes of data, recognize the patterns within that data, and make decisions or recommendations based on that analysis.

Natural language processing (NLP) incorporates grammar analysis into machine learning. A computer program is trained to recognize the noun, verb, and object in a sentence, and to understand the structure of words in order to discern their meaning.
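
To see what that analysis looks like in practice, here is a minimal sketch using the open-source spaCy library (my choice for illustration; the report does not name a specific toolkit):

import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("The reviewer rejected the manuscript.")

for token in doc:
    # Each token gets a part of speech, a grammatical role, and a head word.
    print(token.text, token.pos_, token.dep_, token.head.text)

# "reviewer" is tagged as the nominal subject of "rejected" and "manuscript"
# as its direct object -- the noun/verb/object analysis described above.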

With NLP technology, publishers can automate simple editing and formatting tasks and focus their energy on adding greater value to the content. They can also manage more journal submissions or speed up tedious peer review without significantly increasing staff or production costs.

Traditionally, all articles submitted to an academic journal undergo a similar process, with multiple rounds of corrections and changes before copyediting, formatting, composition, and proofing. All told, this process can take several weeks before an article is published.

AI and NLP technology, by contrast, can apply pre-set grammar and formatting rules to analyze content and score articles for quality. The technology automatically corrects minor errors in grammar and punctuation and flags more complex issues that may need an editor's attention. High-quality submissions can advance straight to the typesetting and composition stage.
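
A minimal sketch of this kind of rule-based triage might look like the following; the rules, weights, and threshold are invented for the example and this is not Cenveo's actual engine:

import re

# Invented rules for illustration; a production system would have thousands.
RULES = [
    (re.compile(r" {2,}"), "multiple consecutive spaces"),
    (re.compile(r"\bteh\b", re.IGNORECASE), "common typo: 'teh'"),
    (re.compile(r"\(\s*\)"), "empty parentheses"),
]

def triage(manuscript: str, threshold: float = 0.9):
    """Score a manuscript and route it to fast-track composition or an editor."""
    issues = [label for pattern, label in RULES if pattern.search(manuscript)]
    score = max(0.0, 1.0 - 0.1 * len(issues))  # each flagged issue costs 0.1
    route = "typesetting" if score >= threshold else "editor review"
    return score, route, issues

print(triage("The  results suggest teh effect is robust."))
# (0.8, 'editor review', ['multiple consecutive spaces', "common typo: 'teh'"])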

AI & NLP technology can flag content that requires an editor's review

Because editing is often the most time-consuming part of the production process, fast tracking high-quality articles to the composition stage can save a significant amount of time for publishers—while also improving the author experience.

In our latest report, AI and NLP for Publishers, we explore how AI and NLP are being used today in scholarly publishing and how they may impact the evolution of research. We also explore how the technology works and how publishers like Taylor & Francis are, with the help of Cenveo Publisher Services, realizing the benefits of intelligent automation.

Download the free report.

Smart Suite 2.0 is Cenveo’s integrated, cloud-based publishing engine combining AI and system intelligence to achieve accelerated workflows.

 

Happy Birthday Adobe PDF!

Adobe Acrobat turned 25 this month. Those of us who remember the pre-PDF days, and what it was like to send a floppy disk to a colleague only to learn later that it opened as gibberish, might well agree that the PDF is "sheer elegance in its simplicity."

Elegant? Yes!

Dr. John Warnock recognized that looks do matter, and that effective communication happens when an author's intended design, formatting, and images all combine to present an idea as originally intended. In 1991, Dr. Warnock launched his idea, The Camelot Project, in which anyone could capture documents from any application, send those documents anywhere, and even print those documents from any machine without compromising the integrity of the content. "Take that Apple IIc Plus!" Sincerely, Tandy 1000.

That year, Dr. Warnock published a six-page white paper describing his Camelot idea, and work commenced on the radical notion of a "portable document format."

PDF has been around for 25 years -- but what does it stand for? Here's what a few people had to say on the streets of Salt Lake City.

Simple? No.

Take a moment to consider how much complexity we take for granted behind the three clicks of "Save As PDF." The following excerpt from Dr. Warnock's paper explains the inception of the PDF (née "Interchange PostScript"):

 

By redefining “moveto” and “lineto” very different things can happen. For example, if these operators are defined as follows:

/moveto
{exch writenumber writenumber (moveto) writestring}def
/lineto
{exch writenumber writenumber (lineto) writestring}def

then when the “poly” procedure is executed a file is written that has the following contents:
1.0 0.0 moveto
0.809 0.588 lineto
0.309 0.951 lineto
-0.309 0.951 lineto
-0.809 0.588 lineto
-1.0 0.0 lineto
-0.809 -0.588 lineto
-0.309 -0.951 lineto
0.309 -0.951 lineto
0.809 -0.588 lineto
1.0 0.0 lineto

In this example the new redefined “moveto” and “lineto” definitions don’t build a path. Instead they write out the coordinates they have been given and then write out the names of their own operations. The resulting file that is written by these new definitions draws the same polygon as the original file but only uses the “moveto” and “lineto” operators. Here, the execution of the PostScript file has allowed a derivative file to be generated. In some sense this derivative file is simpler and uses fewer operators than the original PostScript file but has the same net effect. We will call this operation of processing one PostScript file into another form of PostScript file “rebinding.”

---The Camelot Project, J. Warnock
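
The same trick translates readily into other languages. Here is a loose Python analogy of "rebinding" (my illustration, not from Warnock's paper): the drawing program stays unchanged while the operators it calls are swapped for versions that serialize instead of draw.

# A drawing "program" written against two abstract operators.
def poly(moveto, lineto):
    moveto(1.0, 0.0)
    lineto(0.809, 0.588)
    lineto(-0.809, 0.588)
    lineto(1.0, 0.0)

# "Rebound" operators: instead of building a path, each writes out its
# coordinates followed by its own name, producing a derivative file.
def make_writing_ops(lines):
    def moveto(x, y): lines.append(f"{x} {y} moveto")
    def lineto(x, y): lines.append(f"{x} {y} lineto")
    return moveto, lineto

lines = []
poly(*make_writing_ops(lines))
print("\n".join(lines))
# 1.0 0.0 moveto
# 0.809 0.588 lineto
# ...the same polygon, re-expressed using only two operators.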

 

It took fewer than three years for Dr. Warnock's vision, and the diligent work of a brilliant production team, to solve the problem and release the first iteration of Adobe Acrobat's Portable Document Format.

Creating PDFs in the early days was nowhere near as simple as it is today. I recall diligently writing down in my notebook all the steps required. I don't recall every step but I do remember the IT request to install three pieces of hefty and pricey software on my machine: Acrobat Exchange, Acrobat Distiller, and Acrobat Reader. Yes, in the early days Acrobat Reader had a price tag associated with it.

Software that changed the world.

In today's mobile-responsive world, the PDF can cause frustration on an iPhone (I'm guilty). Yet I would argue that no other document technology has had as much ubiquitous influence across markets and demographics as the beautiful PDF (more to come).

 

Mike Groth

Michael Groth is Director of Marketing at Cenveo Publisher Services, where he oversees all aspects of marketing strategy and implementation across digital, social, conference, advertising and PR channels. Mike has spent over 20 years in marketing for scholarly publishing, previously at Emerald, Ingenta, Publishers Communication Group, the New England Journal of Medicine and Wolters Kluwer. He has made the rounds at information industry events, organized conference sessions, presented at SSP, ALA, ER&L and Charleston, and blogged on topics ranging from market trends, library budgets and research impact, to emerging markets and online communities. Twitter Handle: @mikegroth72

Videos in Your Journal Publishing Program?

Integrating video into a journal publishing program is not new but it's also not ubiquitous across the market. Videos can be a useful component to support an individual article while also helping authors to promote their research and publications.

The New England Journal of Medicine surveyed its authors and readers on the effectiveness of its Quick Take Videos (QTs). Of the 95 authors and 411 readers contacted to share their views, 51% responded.

Of those authors who responded, 75% said they were very satisfied with their role in helping to create QTs, while 17% said they were very dissatisfied.

98% of authors somewhat or strongly agreed that the QT accurately summarized their article and presented it in an engaging way.

The survey also collected authors' reasons for creating QTs and readers' reasons for viewing them.

When asked “Do you believe that videos represent the abstract of the future?” 84% responded yes. The answer to this question is where real value can be found for journal publishers: at a time when journal publishing strives to provide greater benefits to authors, offering video shorts of articles is most certainly beneficial.


Are you currently integrating videos in your journal publishing program? Video abstracts? Training? Share your ideas in the comments section below.

 

Video Services



Smart Suite 2.0 Released - A New Approach to Pre-editing, Copyediting, Production, and Content Delivery

Smart Suite Version 2.0 is a cloud-based ecosystem of publishing tools that streamlines the production of high-quality content. The system features a complete user interface (UI) redesign and tighter integration with high-speed production engines to solve the challenges of multi-channel publishing.

Smart Suite 2.0 is the next generation publishing engine that focuses on a combination of artificial intelligence, including NLP, and system intelligence that eliminates human intervention and achieves the goal of high-speed publishing with editorial excellence. Smart Suite auto generates multiple outputs, including PDF, XML, HTML, EPUB, and MOBI from a manuscript in record-setting time.
— Francis Xavier, VP of Operations at Cenveo Publisher Services

Offering a fresh approach to streamline production, the unified toolset comprises four modules that seamlessly advance content through publishing workflows while validating and maintaining mark-up language behind the scenes.

  • Smart Edit is a pre-edit, copyedit, and conversion tool that incorporates natural language processing (NLP) and artificial intelligence (AI), benefiting publishers not only in editorial quality but also in better, faster markup and delivery to output channels.
  • Smart Compose is a fully automated production engine that ingests structured output from Smart Edit and generates page proofs. Designed to work with both 3B2 and InDesign, it uses built-in styles based on publisher specifications to guarantee consistent, high-quality layouts.
  • Smart Proof provides authors and editors with a browser-based correction tool that captures changes and allows for valid round tripping of XML.
  • Smart Track brings everything together in one easy UI that logs content transactions. The kanban-styled UI presents a familiar workflow overview with drill-down capabilities that track issues and improve both system and individual performance.

Smart Suite is fully configurable for specific publisher requirements and content types. Customized data, such as taxonomic dictionaries, and industry integrations, such as FundRef, GenBank, and ORCID, extend the system to meet each publisher's needs.

 

Download Brochure


Taylor & Francis Group Awards Full-Service Production for Global Journal Content to Cenveo

Cenveo’s Technological Innovation Aligns With Taylor & Francis’ Journal Publishing Vision

Cenveo announces a major increase in full-service content production for Taylor & Francis' global journal production program. Taylor & Francis selected Cenveo as a core content service provider to support its continued growth.


As a world-leading academic and professional publisher, Taylor & Francis cultivates knowledge through its commitment to quality. Taylor & Francis identified in Cenveo a shared vision to develop production workflows designed to improve the velocity of research dissemination. This planned strategic initiative enhances customer experience for Taylor & Francis' contributor base, particularly newer generations of researchers and scientists, without alienating its traditional market.

“The critical piece that convinced us Cenveo was the right partner was that their technology stack supports our publishing model and provides real-world, expedited publication turnaround times using AI and natural language processing technology,” explains Stewart Gardiner, Global Production Director of Journals at Taylor & Francis Group. “The organizational and operational innovations Cenveo proposed to support a rapid scale-up in production volumes were something we haven’t seen from other providers and were clearly based on lessons learned in previous ramp-ups.”

In February 2018, Cenveo announced a financial restructure and reorganization to strengthen its fiscal health. Mr. Gardiner remarks, “Given the company is currently reorganizing following a Chapter 11 process, our legal and financial people looked at Cenveo closely and came to the view that this is a relatively straightforward debt for equity restructure. Refinancing of this sort is not out of line with what one might expect for a company in Cenveo’s market position, scale, and acquisition history.”

Cenveo and Taylor & Francis shared a long working history prior to this fivefold increase in volume. The transition process has already begun, and onboarding of the additional Taylor & Francis work is scheduled to take place in structured phases throughout the remainder of 2018.


“This major win is a result of considerable work and effort that we have put into the next generation of Smart Suite combined with a focus on operational excellence,” explains Atul Goel, EVP Global Content Operations and President and COO of India Operations at Cenveo. “We are grateful for the trust placed in Cenveo by Taylor & Francis and heartened that Cenveo’s long-term vision of innovative publishing workflows aligns with a global leader in publishing.”

Cenveo is consistently rated as one of the highest performing content service providers by its customers. Cenveo’s ongoing commitment to publishers and extensive experience with volume ramp-up is further demonstrated by its significant investments in technology and staff.


Society for Scholarly Publishing Turns 40

The Society for Scholarly Publishing celebrates its 40th anniversary this year. To mark the occasion, a number of special events are scheduled to take place at the Annual Meeting. You won't want to miss this year's event.

The 40th Anniversary Task Force has also launched a new microsite for 2018 to celebrate SSP’s 40th anniversary. As part of a year-long celebration, the website will feature photos, documents, and news from SSP’s archives, as well as interviews with long-time members of the SSP community.

The SSPat40 website is updated regularly, so you will want to check back often to browse the historical content we unveil: old photos and past topics of interest to the community. If you have any old pictures or ideas you would like to share on the website, please contact me.

Finally, stay apprised of ongoing developments and share news about SSP at 40 with the hashtag #SSPat40!


View From a Publishing Consultant: 2018 Trends in Scholarly Publishing

This short video by John Bond of Riverwinds Consulting lists some of the trends he foresees in scholarly publishing this year.

 
 

Publishing Defined: What is Open Peer Review?

 

This short video by John Bond of Riverwinds Consulting discusses the different types of open peer review. John recently published a new book titled "Scholarly Publishing: A Primer."

 

Learn About our Peer Review Services for Publishers




Open Practice Badges: A Primer and How to Get Started

The Center for Open Science (COS) provides tools, training, support, and advocacy that help researchers and scholars manage, share, and discover scientific research. The COS’ mission is to “increase the openness, integrity, and reproducibility of scholarly research.” Acceleration of scientific progress can be a primary motivator for scholarship and a powerful driver of real solutions.

The COS develops software tools, workflows, data storage solutions, and more based on its free Open Science Framework (OSF). The OSF is an ecosystem of solutions, partnering companies, technologies, and ideas that support researchers across the entire research life cycle. One initiative that is gaining momentum is the use of Open Practice Badges in the publishing workflow.

Openness is a core value of scientific practice.
 

The scholarly publishing community agrees on the relevance and importance of open communication for scientific research and progress. In 2009 there were approximately 4,800 OA journals publishing approximately 190,000 articles; by January 2017, there were an estimated 9,500 active OA journals. At Cenveo Publisher Services, we work with a large number of society and commercial publishers who have launched or are preparing to add OA publication models to their workflows.

Open Practice Badges, awarded on published content, acknowledge authors' use of open practices during the research life cycle.

Incorporating Open Practice Badges Into Publishing Workflows

By acknowledging open practices in scientific research, journal publishers can use badges in their publications to certify that a particular research practice was followed. Badges can be awarded to the published content as part of the peer review process or they can be awarded post-publication. As long as processes and practices are transparent, any organization can issue badges. Most publishers are awarding the badges during peer review. Publishing platforms and review services are likely to use the badges post publication.

For publishers, the journal awards the badge, and the badge is linked to the specific article. Each publisher tends to have its own method for incorporating badges into the published article. In every case, however, it is critical that the badge be machine-discoverable and machine-readable.

Detailed information on incorporating OA badges into your publication workflow can be found at the OSF Wiki page here.

Badge Overview

There are three badges currently used:

  1. Open Data
  2. Open Materials
  3. Preregistered

Following is an overview of the three badges and corresponding criteria. Detailed information is available on the OSF Wiki page, including corresponding links.

Open Data

The Open Data badge is earned for making publicly available the digitally-shareable data necessary to reproduce the reported results.

Criteria

Digitally-shareable data are publicly available on an open-access repository. The data must have a persistent identifier and be provided in a format that is time-stamped, immutable, and permanent (e.g., university repository, a registration on the Open Science Framework, or an independent repository at www.re3data.org).

A data dictionary (e.g., a codebook or metadata describing the data) is included with sufficient description for an independent researcher to reproduce the reported analyses and results. Data from the same project that are not needed to reproduce the reported results can be kept private without losing eligibility for the Open Data Badge.

An open license allows others to copy, distribute, and make use of the data while the licensor retains credit and copyright as applicable. Creative Commons has defined several licenses for this purpose, which are described at www.creativecommons.org/licenses. CC0 or CC-BY is strongly recommended.

Open Materials

The Open Materials badge is earned by making publicly available the components of the research methodology needed to reproduce the reported procedure and analysis.

Criteria

Digitally-shareable materials are publicly available on an open-access repository. The materials must have a persistent identifier and be provided in a format that is time-stamped, immutable, and permanent (e.g., university repository, a registration on the Open Science Framework, or an independent repository at www.re3data.org).

Infrastructure, equipment, biological materials, or other components that cannot be shared digitally are described in sufficient detail for an independent researcher to understand how to reproduce the procedure.

Sufficient explanation is provided for an independent researcher to understand how the materials relate to the reported methodology.

Preregistered/Preregistered+Analysis Plan badges 

The Preregistered/Preregistered+Analysis Plan badges are earned for preregistering research.

Preregistered

The Preregistered badge is earned for having a preregistered design. A preregistered design includes: (1) Description of the research design and study materials including planned sample size, (2) Description of motivating research question or hypothesis, (3) Description of the outcome variable(s), and (4) Description of the predictor variables including controls, covariates, independent variables (conditions). When possible, the study materials themselves are included in the preregistration.

Criteria for earning the preregistered badge on a report of research are:

  1. A public date-time stamped registration is in an institutional registration system (e.g., ClinicalTrials.gov, Open Science Framework, AEA Registry, EGAP).
  2. Registration pre-dates the intervention.
  3. Registered design and analysis plan corresponds directly to reported design and analysis.
  4. Full disclosure of results in accordance with registered plan.

Badge eligibility does not restrict authors from reporting results of additional analyses. Results from preregistered analyses must be distinguished explicitly from additional results in the report. Notations may be added to badges to qualify their meaning: TC, or Transparent Changes, means that the design was altered but the changes and the rationale for the changes are provided. DE, or Data Exist, means that criterion (2) is replaced with “registration postdates realization of the outcomes, but the authors have yet to inspect or analyze the outcomes.”

Preregistered+Analysis Plan

The Preregistered+Analysis Plan badge is earned for having a preregistered research design (described above) and an analysis plan for the research, and for reporting results according to that plan. An analysis plan includes specification of the variables and the analyses that will be conducted. Guidance on constructing an analysis plan is available on the OSF Wiki.

Criteria for earning the preregistered+analysis plan badge on a report of research are:

  1. A public date-time stamped registration is in an institutional registration system (e.g., ClinicalTrials.gov, Open Science Framework, AEA registry, EGAP).
  2. Registration pre-dates the intervention.
  3. Registered design and analysis plan corresponds directly to reported design and analysis.
  4. Full disclosure of results in accordance with the registered plan.

Notations may be added to badges to qualify their meaning: TC, or Transparent Changes, means that the design or analysis plan was altered but the changes are described and a rationale for the changes is provided. Where possible, analyses following the original specification should also be provided. DE, or Data Exist, means that criterion (2) is replaced with “registration postdates realization of the outcomes, but the authors have yet to inspect or analyze the outcomes.”

What Journals Are Using Open Badges?

A list of journals currently using Open Practice Badges can be found here. The list continues to grow as more publishers understand the benefits of providing this acknowledgement to researchers and readers.


Cenveo Publisher Services is an advocate of Open Practice Badges. If your publishing organization would like to learn how we can support open badges in your workflow, feel free to reach out to us directly.

Are you currently using Open Practice Badges? Please share your findings or observations in the comments section below.

 

 

 

 


Innovative Research and Creative Output: From Ideas to Impact

Society for Scholarly Publishing - Philadelphia Regional Event

This post is a collaboration between SSP members, including Nicola Hill, Emma Sanders, and Adrian Stanley.

Left to right: Kathi Martin, Drexel Digital Museum; Jen Grayburn, CLIR Postdoc; Alex Humphreys, JSTOR Labs

On October 30th, the Society for Scholarly Publishing (SSP) hosted a regional event at the University of Pennsylvania's Van Pelt Library. The topic, "Innovative Research and Creative Outputs: From Ideas to Impact," brought together Philly-area publishers, librarians, and content professionals for a panel discussion on new and innovative methods of producing scholarship.

Jen Grayburn, CLIR Postdoctoral Fellow

Jen spoke about her use of Google Scholar, Sketchfab, and Unity in her work, which centers on the intersection of architecture and text. Using GIS (Geographic Information Systems) mapping software, Jen examines the locations of historic sites. She shared an example of a mapping she did of St. Magnus Cathedral in the islands off the north coast of Scotland. In this particular example, Jen generated a binary map that indicated what would and wouldn’t be visible on the ground from a certain height.

She uses GeoTIFFs (TIFF files encoded with geographical coordinates) to create a 3D topographic map to illustrate what is visible and why. Eventually, these mappings were confirmed with on-site visits she conducted. In her work, Jen uses Sketchfab to store the large 3D modeling files.

Currently, there is a lack of standards around 3D scholarly outputs: how they’re reviewed, stored, and made accessible. 3D collections are siloed by institution; there is really no central repository. The only exception Jen cites is Duke University’s MorphoSource. For these reasons, evaluating and citing digital work is still a challenge.

Content in Studies in Digital Heritage is inextricably linked to the 3D models created in the course of those studies. There is a real need for community standards for 3D data presentation. Academic departments are generally slow to reward digital projects or to establish a process for incorporating these scholarly outputs in formal evaluations.

Archeologists with an interest in Jen’s work, for example, always want the original 3D model she created, not the version on Sketchfab. But these models haven’t been peer-reviewed, and for that reason, Jen is reluctant to provide them. In the near future, more development of community standards for 3D and VR creation and curation in higher education is certainly warranted.

Kathi Martin, the Drexel Digital Museum Project

Kathi Martin presented her work with The Drexel Digital Museum Project: Historic Costume Collection (digimuse), a searchable image database comprising select fashion from historic costume collections. Initially, the fashion images were highly protected through low-resolution files and watermarks on the website. Kathi explained that Polish hacktivists demonstrated to her how easy it is to remove a watermark and improve resolution.

The museum has always been driven by open access and open source to share information and further usage and research. Interoperability is key to the museum’s mission—this allows the data on the museum’s website to be easily harvested across browsers.

The museum has widened beyond Drexel’s collection; for example, Iris Barrel Apfel’s Geoffrey Beene collection was displayed, and that exhibit is archived on the museum site. QuickTime VR was used to film the collection and provide high-resolution captures of the fashion collections.

The DigiMuse technology used in the Drexel project provides a new level of engagement with the collections Kathi is preserving. Drexel's Digital Museum project website allows a site visitor to interact personally and actively with a distributed, collected narrative. The site includes rich metadata descriptions for every picture. The variety of contributions on the site, Kathi feels, stimulates varying and often deeply personal reactions.

She believes the site is very powerful due to its “baked-in connectedness.” Kathi closed with Grace Kelly’s gown, made by Givenchy in part out of actual coral (gasp!). The site complements the high-res images of the gown itself with media of Grace Kelly in the gown.

Alex Humphreys, JSTOR Labs

Alex discussed how JSTOR Labs applies methods and tools from digital scholarship to create tools for researchers, teachers, and students "that are immediately useful – and a little bit magical." JSTOR is part of ITHAKA, a non-profit devoted to digital sustainability.

Alex Humphreys, director at JSTOR Labs

Alex works with a team of five on innovative projects that benefit humanities scholars. He demonstrated JSTOR Labs’ Understanding Shakespeare tool, which uses the Folger Shakespeare Library’s digital editions of Shakespeare's plays to hyperlink each line of a play to a search showing all JSTOR articles that contain that line.

JSTOR Labs works from a philosophy of play—Alex sees what resources other organizations (like the Folger Shakespeare Library) bring, what Labs brings, and what kind of sandbox they might build in collaboration. Part of JSTOR Labs’ philosophy values what Alex calls “multi-disciplinarity.” For example, JSTOR Labs’ partnership with Eigenfactor (which measures influential and highly cited articles) has resulted in a tool that helps scholars discover the most influential articles in a given field or topic area.

JSTOR Labs also believes in hypothesis-driven development. Alex explained that the key is ITERATING, ITERATING, ITERATING! He also presented topic modeling examples, including Reimagining the Monograph, which started from JSTOR Labs asking, "Can we improve the experience and value of long-form scholarship?"

The “topicgraph” provides a fingerprint of a monograph. Each term has a set of associated keywords, whose presence in the text raises the probability that the term is being discussed.
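
As a rough illustration of that keyword-containment idea (a sketch of the general approach, not JSTOR's actual model), a term's score can rise with the density of its associated keywords in a passage:

def topic_score(text: str, keywords: set) -> float:
    """Crude per-word rate of topic keywords, as a proxy for topic probability."""
    words = [w.strip(".,;:").lower() for w in text.split()]
    hits = sum(1 for w in words if w in keywords)
    return hits / max(len(words), 1)

passage = "Open access mandates change how libraries budget for journal subscriptions."
print(topic_score(passage, {"access", "libraries", "subscriptions"}))  # 0.3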

Last but certainly not least, Alex unveiled an amazing, brand-new tool with the working name “Text Analyzer.” This tool is essentially a multi-language analyzer—text can be pulled from, say, a Russian Wikipedia entry, and the tool will translate the text and list, in English, the topics included in the entry.

Alex notes that so much of digital humanities is about probabilities, not known data; this is why label modeling (as opposed to cluster topic modeling) is the approach JSTOR Labs most frequently uses.


The Philadelphia SSP Regional Meetings are an excellent venue to engage with the scholarly and scholarly publishing community. All are welcome. To learn more, click here!

 

Rights & Permissions Service for Publishers

Copyright is far more than just a necessary evil to protect intellectual property from theft. Copyright furthers all creative interests by making the rich marketplace of ideas available to a wider audience. Resourceful rights and permissions management supports author content while maximizing the publisher’s budget.

Hiring one person to perform all the rights and permissions functions requires finding a pretty special person: an editorial specialist with enough copyright expertise to be an IP strategist who is also a skilled, digital-image-savvy photo researcher and database manager. That's why we offer R&P as a service for publishers.

Cenveo Publisher Services manages all aspects of text, image, and rich media content R&P. We assemble a team of project managers, assessment specialists, data entry staff, photo researchers, and permissions experts to support the management of R&P in your organization.

By identifying a rights strategy early, authors can stay on budget. Research and permissions work runs alongside production cycles with clearly defined milestones. Targeted international expertise also allows a spectrum of pricing options. Contact us to learn how we can support R&P for your journals or books program.

 

Download Brochure



Working With a Publishing Consultant

A short video by John Bond at Riverwinds Consulting. John's YouTube channel, Publishing Defined, is a great resource for scholarly and academic publishers.

 
 

Revenue Growth in Education, Scholarly, and Trade Book Publishing

The Association of American Publishers shared revenue figures in its StatShot report. Revenue was up 4.9% in Q1 2017 compared with Q1 2016.

Both education and scholarly publishers experienced slight revenue bumps during the first quarter of 2017, compared with the first quarter of 2016.

Higher Education course materials saw the greatest growth, with revenue up $92 million (24.3%) to $470.2 million in Q1 2017 compared with Q1 2016. Revenues for Professional Publishing (business, medical, law, scientific and technical books) were up $5 million (4.5%) to $119.5 million.

 

Accessibility for Journal Publishers

The terms “access” and “scholarly journals” are often linked to Open Access publishing. Less often discussed—but still very important—are issues and challenges of making journal content accessible to the visually, cognitively, or otherwise impaired.

Guest blog by John Parsons



Peer-reviewed, scholarly journals are a specialized slice of the publishing universe. Worldwide, it is a $25 billion market. Unlike consumer and trade magazines, journals are not supported by advertising revenue; they rely on subscriptions, institutional funding, and/or open access funding mechanisms. Readership varies widely in size and scope, and includes students, journalists, and government employees as well as researchers themselves. Journals are also delivered via a wide array of specialized digital platforms and websites.

What they do share with other publications is the assumption that their audience can read words and images on a page or screen. For the majority of journal readers, this poses few problems. However, for readers with visual or other impairments, content accessibility is a major concern.

Justifying Journal Content Accessibility

Some might argue, without foundation, that scholars qualified to consume peer-reviewed content are less likely to be impaired in the first place, making the number of affected users too low to justify the added costs. (If cost were the only issue, one Stephen Hawking in a journal’s potential audience would more than justify the cost of making scholarly exchange possible for disabled readers. Also, as was mentioned, scholars and researchers are not the only readers in the equation.)

In other words, one justification for accessibility is a moral argument. It’s simply the right thing to do. However, for most journals, this argument is moot. Government-funded research typically carries minimum accessibility requirements, such as those spelled out in U.S. Code Section 508.

Building content accessibility into a journal workflow need not be a daunting financial question at all. Well-structured XML content and metadata have many benefits, of which accessibility is only one. (This will be the subject of another blog.)

Regardless of the reason, most journal publishers understand the why aspect of content accessibility. So, let’s focus on how best to do it.

Identifying the Pieces---WCAG 2.0, Section 508, and VPAT

To understand the scope of journal article accessibility, we need to know that it has two basic versions—a document (PDF or EPUB) and a webpage. These are similar in many ways, especially to a sighted person, but they have different accessibility requirements.

What these formats have in common are:

  • accessibility metadata
  • meaningful alt text for images (including math formulas and charts)
  • a logical reading order
  • audible screen reading
  • alternative access to media content

Only two (EPUB and webpages) have potentially resizable text and a clear separation of presentation and content. (PDF’s fixed page and text size can often be problematic. But in areas where PDF is a commonly used format, notably healthcare, service providers can offer workflow mechanisms to remediate PDFs for Section 508 compliance.)

Webpages have the added requirements of color contrast, keyboard access, options to stop, pause, or hide moving content, and alternatives to audio, video, and interactive content. Most of these are covered in detail in the W3C Web Content Accessibility Guidelines (WCAG) 2.0 guidelines, many of which are federally mandated. Service provider solutions in this area include a Voluntary Product Accessibility Template (VPAT) for journal content. This template applies to all “Electronic and Information Technology” products and services. It helps government contracting officials and other buyers to evaluate how accessible a particular product is, according to Section 508 or WCAG 2.0 standards.

There are several “degrees of difficulty” when it comes to making journal articles accessible. Research that is predominantly text is the easiest, but still requires careful thought and planning. With proper tagging of text elements, clearly denoting reading order and the placement of section headings and other cues, a text article can be accessibility-enhanced by several methods, including large print and audio.

More difficult by far are the complex tables, charts, math formulas, and photographic images that are prevalent in STM journals. Here, extra attention must be paid to type size and logical element order (for tables). In the case of charts, formulas, and pictures, the answer is alternative or “alt” text descriptions.

Think of it as explaining a visual scene to someone who is blindfolded. Rudimentary alt text, like “child, doll, hammer,” would probably not convey the full meaning of a photograph depicting Bandura’s famous Bobo Doll experiment. Rather, the best alt text would be a more nuanced text explanation of what the images depict—preferably by a subject matter expert.

Automation in Workflow is Key

When Braille or even large print were the only solutions, journal content accessibility was not an option for most. All that changed, for the better, with the advent of well-structured digital content. Again, publishing service providers have done much to advance this process, and in many cases, automate it.

Not every issue can be automated, however. Making content accessible may involve redesign. For example, footnotes may need to be placed at the end of an article, similar to a reference list, to ensure continuity of reading. Other steps include supporting the logical flow of content and reading order, semantic structuring for discoverability, including alt text descriptions for images, simplifying the presentation and tagging of complex tabular data, and rendering math equations as MathML.

Journal publishers can facilitate this in part by selecting formats that are more accessible by nature. Articles published online or available as EPUB are accessible by default, although they need to be enhanced to meet all the requirements of WCAG 2.0. The gap is small and can easily be bridged by focusing on the shortcomings and addressing them in design, content structuring, and web hosting.

Many of the basic, structural issues of making journal content accessible can be resolved, more or less automatically, if the publishing system or platform enforces standardized metadata rules. Titles, subheads, body copy, and other text elements will have a logical order, and can easily be presented in accessible ways. For elements where knowledgeable human input is required (as with alt text), a good system will facilitate such input.

Accessibility is not just the right thing to do, for the sake of science. It is also an obtainable goal—with the right service provider.

 





Counting the Hidden Costs of Publishing

Guest blog by John Parsons

The rise of digital STM publishing, and the ongoing discussion about open access and subscription-based models, has led some to conclude that these changes inexorably lead to lower overall publication costs. Reality is more complex.

In my last blog, I discussed the open access (OA) publishing model for scholarly STM publishing. In a nutshell, OA allows peer-reviewed articles to be accessed and read without cost to the reader. Instead of relying on subscriptions, funding for such articles comes from a variety of sources, including article processing charges (APCs).

There are many misconceptions about OA, including the mistaken notion that OA journals are not peer reviewed (false) and that authors typically pay APCs out of pocket (also false). However, a more serious problem occurs when we fail to account for all the costs of scholarly publishing—not just the obvious ones.

Digital Doesn’t Mean Free


Part of the problem is the Internet itself. Search engines have given us the ability (in theory) to find the information we need. Many non-scholarly publishers, particularly newspapers, have published content for anyone to read—in the misbegotten hope of selling more online advertising. The more idealistic among us have given many TED Talks on the virtue of giving away content, trusting that those who receive it—or at least some of them—will reciprocate.

What may work for a rock band does not necessarily work in publishing, however. This is partly because publishing is a complex process, with many of its functions unknown to the average scholar or reader.

Behind the Screens

The obvious publication costs of scholarly publishing—peer review, editing, XML transformation, metadata management, image validation, and so on—are daunting for anyone starting a new journal. If they want to be considered seriously, publications using the “Gold” open access model have to be able to handle these production costs over the long term. They also have to invest in other ways—to enhance their brand, and provide many of the services that scholars and researchers may take for granted.

The first of these hidden costs is the handling of metadata. The OA publishing model—and digital publishing in general—resulted in an explosion of available content, including not only peer reviewed articles, but also the data on which they are based. Having consistent metadata is critical to finding any given needle in an increasing number of haystacks. Metadata is also the key that maintains updates to the research (think Crossref) and tracks errata.

The trouble is that metadata is easy to visualize, but it takes work and resources to implement well. Take, for example, the seemingly simple task of author name fields. The field for author surname (or family name, or last name) is typically text, but how does it accommodate non-Latin characters or accents? Does it easily handle the fact that surnames in countries like China are not the “last” name? The problem is usually not with the field itself, but with how it’s used in a given platform or workflow.
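
A sketch of what a more careful name record might look like, using a hypothetical schema for illustration only:

from dataclasses import dataclass

@dataclass
class AuthorName:
    family: str                  # family name, e.g. "Wang" or "García"
    given: str                   # given name(s)
    family_first: bool = False   # True for name orders such as Chinese or Hungarian

    def display(self) -> str:
        # Render in the culturally correct order rather than assuming "last name last".
        parts = (self.family, self.given) if self.family_first else (self.given, self.family)
        return " ".join(parts)

print(AuthorName("Wang", "Wei", family_first=True).display())  # Wang Wei
print(AuthorName("García", "María").display())                 # María García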

Another hidden metadata cost is the emergence of standards, and how well each publishing workflow handles them. More recently, the unique author identifier (ORCID) has gained in prominence, but researchers and contributors may not automatically use it. There are many such metadata conventions—each representing a cost to the publisher, in order to let scholars focus on their work without undue publishing distractions.
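
ORCID iDs do, however, carry a final check character computed with the ISO 7064 MOD 11-2 algorithm, so a workflow can at least validate that a submitted identifier is well formed. A minimal sketch:

def orcid_check_char(base15: str) -> str:
    """Compute the 16th character of an ORCID iD (ISO 7064 MOD 11-2)."""
    total = 0
    for digit in base15:
        total = (total + int(digit)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_plausible_orcid(orcid: str) -> bool:
    # Checks well-formedness only; it cannot confirm the iD belongs to the author.
    digits = orcid.replace("-", "")
    return len(digits) == 16 and orcid_check_char(digits[:15]) == digits[15].upper()

print(is_plausible_orcid("0000-0002-1825-0097"))  # True (ORCID's documented sample iD)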

Another hidden cost is presentation. From simple, easy-to-read typography to complex visual elements like math formulae, the publisher’s role (and the corresponding cost) has expanded. What was once a straightforward typesetting and design workflow for print has grown into a complex, rules-driven process for transforming Word documents and graphic elements into backend XML, which fuels distribution.
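
At its core, that rules-driven process maps manuscript styles to structured markup. Here is a toy sketch, assuming JATS-like element names; real converters handle vastly more cases:

import xml.etree.ElementTree as ET

# Hypothetical mapping from word-processor paragraph styles to JATS-like tags.
STYLE_MAP = {"Title": "article-title", "Heading 1": "title", "Normal": "p"}

def to_xml(paragraphs):
    """Transform (style, text) pairs into a minimal XML fragment."""
    root = ET.Element("article")
    for style, text in paragraphs:
        tag = STYLE_MAP.get(style, "p")  # unknown styles fall back to <p>
        ET.SubElement(root, tag).text = text
    return ET.tostring(root, encoding="unicode")

print(to_xml([("Title", "Counting the Hidden Costs"), ("Normal", "Reality is more complex.")]))
# <article><article-title>Counting the Hidden Costs</article-title><p>Reality is more complex.</p></article>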

The publishing model has drastically changed from a neatly packaged “issue publication model” to a continuous publication approach. This new model delivers preprints, issues, articles, or abstracts to very specific channels. The systems and workflows that support the new publication model require configuration and customization, which all have associated production costs.

Automation Is the Key

Very few publishers can maintain the production work required in house. Technology development, staffing, and innovation are costly to maintain. The solution is to rely on a trusted solutions provider that performs such tasks for multiple journals. Typically, this involves the development of automated workflows that simplify metadata handling and presentation issues, using a rules-based approach for all predictable scenarios. This, of course, relies on a robust IT presence, something a single publisher or group typically cannot afford alone. Ideally, automated workflows involve an initial setup cost but then improve editorial quality, shorten turnaround times, and speed time to publication.

By offloading the routine, data-intensive parts of publishing workflow to a competent service provider, publishers and scholars can spend more time on actual content and less time on the mechanics of making it accessible to and useable by other researchers.


What are some of the "hidden costs" your organization finds challenging?

 


Publishing Defined: John Bond's STM Publishing Video Series

What is Crossmark?

John Bond of Riverwinds Consulting is creating a video library of useful shorts about topics and terms important to the STM publishing industry. For some, his videos may provide a great refresher or another take on subjects that impact our market. For those just starting their careers in STM publishing, the series should be required viewing!

The series is titled "Publishing Defined" and covers a broad range of topics, from definitions of specific terms to strategic advice regarding RFPs. Also helpful are the playlists he’s put together. You are sure to add a little something to your own knowledge base from this series!

The following video explains Crossmark and why it’s important for publishers and service providers:

The Crossmark playlist can be viewed here.


Crossmark and Crossref are explained in our white paper, "All Things Connected." Download your copy today by clicking on the cover in the right column.

 



Publishers Keep Calm and Carry On

It was another busy year at London Book Fair last week, with reports that registration numbers were up by a double-digit percentage.

 
 

The following photo captures a brief quiet moment at the Cenveo Publisher Services stand. The global team met with publishers, production managers, archivists, technology executives, and many others to discuss all things related to the creation and management of content.

 
 

Accessibility

Indeed, the hot topic for LBF17 at the Cenveo stand was content accessibility. Long a champion of digital equality, we're helping publishers create and architect content that is "born accessible." The same technologies and guidelines that improve access to materials for people with visual or hearing impairments, limited mobility, and perceptual or cognitive differences are also tremendously useful for all publishers' customers.

Accessibility is no longer a concern for education publishers alone; journal publishers and others now have a driving need to do more with content accessibility.


Google Books Decision

In an extremely packed room, Judge Pierre Leval of the U.S. Court of Appeals for the Second Circuit, widely regarded as America's foremost copyright jurist, told attendees that Google's program to scan tens of millions of library books to create an online index "conferred gigantic benefits to authors and the public equally," and did not "offer a substitute or interfere with authors' exclusive rights" to control distribution. READ MORE: Judge Pierre Leval Defends Google Books Decision, Fair Use

Scholarly Publishing and Academic Market

The Research and Scholarly Publishing Forum offered academic publishers and service providers a half-day program with lively debates from Elsevier, Wiley, and Taylor & Francis. Highlights included:

  • A discussion about the future of Open Access in the UK between Alicia Wise, Elsevier’s Director of Policy and Access, Liam Earney, Jisc Collections’ Head of Library Support Services, and Chris Banks, Assistant Provost (Space) & Director of Library Services, Central Library, Imperial College London
  • A panel presenting global research policy developments chaired by Wiley’s James Perham-Marchant, featuring speakers from Taylor & Francis, Berghahn Books and Research Consulting
  • A panel session on new innovations to watch, chaired by Tracey Armstrong, President and CEO of the Copyright Clearance Center, including speakers from Sparrho, Frontiers and Cold Spring Harbor Laboratory Press

Full Coverage via Publishers Weekly

Publishers Weekly covered a range of topics across the many markets represented at the Fair.


How Open Access is Changing Scholarly Publishing

Guest blog by John Parsons

After almost two decades, the Open Access publishing model is still controversial and misunderstood. Here's where we stand today.

The beginnings of scholarly publishing correspond roughly to the Enlightenment period of the late 17th and early 18th centuries. The practice of publishing one's discoveries was driven by a belief, championed by the Royal Society, in the transparent, open exchange of experiment-based ideas. Over the centuries, journals embraced a rigorous peer review process to maintain the integrity (and the subscription value) of their research content.

Transparency, openness, and integrity all come at a cost, however. For many years, that cost was met by charging journal subscription fees, usually borne by the institutions that either produced the research, benefited from it, or both. So long as the publishing model was solely print-based, the subscription model worked well, especially for institutions with deep pockets. That all changed with the Internet. Not only did the scope and volume of research increase rapidly; so did the perception that all information should be easily findable via search engines.

The Internet expanded the audience for research outside traditional institutions—to literally anyone with a connected device. With this expansion, the disparity between the well-funded and those less fortunate became acute. As it did with other publishing workflows, this disruption drove a need for new economic models for scholarly publishing.

Open Access Basics

Advocacy for less fettered access to knowledge is nothing new. But the current Open Access (OA) movement began in earnest in the early 2000s, with the "Three Bs" (the Budapest Open Access Initiative, the Bethesda Statement, and the Berlin Declaration by the Max Planck Institute). Much of the impetus came from the Scientific, Technical, and Medical (STM) publishing arena, and from research funding and policy entities like the European Commission and the U.S. National Institutes of Health. The latter's full-text archive of free biomedical and life sciences articles, PubMed Central (PMC), is a leading example, backed by a mandate that the results of publicly funded research be freely available to the public.

In a nutshell, Open Access consists of two basic types—each with its own variations and exceptions. “Green” OA is the practice of self-archiving scholarly articles in a publicly-accessible data repository, such as PMC or one of many institutional repositories maintained by academic libraries. There is often a time lag between initial publication—especially by a subscription-based journal—and the availability of the archived version.


The alternative is the “Gold” OA model. It includes a growing number of journals, such as the Public Library of Science (PLOS), that do not charge subscription fees. Instead, they fund the cost of publishing through article processing charges (APCs) and other mechanisms. Although APCs are commonly thought of as being paid by the author, the real situation is more complex. Often, in cases where OA is mandated, APCs are built into the funding proposals, or otherwise factored into institutional and research budgets. PLOS and other journals can also waive APCs, or utilize voluntary funding “pools,” for researchers who cannot afford to pay them.

The appeal of Open Access is obvious to researchers and libraries of limited means. It also has the potential to accelerate research—by letting scientists more easily access and build upon others’ work. But for prestigious institutions, publishers, and their partners, the picture is more complicated.

Publishers in particular can be hard pressed to develop and enhance their brand—or offer a multitude of services that scholars may take for granted—when constrained by the APC funding model. (Those challenges will be addressed in a future blog.)

Misconceptions, Problems—and Solutions

Even today, researchers are not always clear about what Open Access means for scholarly publishing, and research librarians have their work cut out for them. They cite the common misconception that OA journals lack an adequate peer review process, for example, a perception fueled by disreputable or "predatory" journals that continually spam researchers with publication offers. Librarians counter this with a growing arsenal of blacklist and whitelist resources, such as the Directory of Open Access Journals.

Perhaps a major contributor to the uncertainty surrounding OA is the practice of openly publishing “preprint” versions of articles prior to—or during the early stages of—the peer review process. Sometimes, this is part of the researcher’s strategy to secure further funding, but it can fuel the mistaken notion that peer review is not required in OA publishing workflow. Distinguishing preprints from final OA articles must be a goal for publishers and their partners.

Another problem is scholars’ unfamiliarity with the OA-driven changes in publishing workflows. Gold OA journals—particularly those involved in STM publishing—are usually quite adept at guiding authors through the publication process, just as their subscription-based counterparts and publishing service providers have been. For example, the practice of assigning Digital Object Identifiers (DOIs), ISSNs, and other metadata to scholarly publishing works is becoming increasingly efficient for both Gold OA and subscription journals.
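As a small illustration of how standardized DOI metadata has become, any registered DOI can be resolved to machine-readable metadata through DOI content negotiation, a service supported by Crossref and other registration agencies. A minimal sketch follows; the DOI below is a placeholder.

    import requests

    def doi_metadata(doi):
        """Resolve a DOI to CSL JSON metadata via content negotiation."""
        resp = requests.get(
            f"https://doi.org/{doi}",
            headers={"Accept": "application/vnd.citationstyles.csl+json"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()  # title, container-title, ISSN, issued, ...

    meta = doi_metadata("10.1000/example-doi")  # placeholder DOI
    print(meta.get("title"), meta.get("ISSN"))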

Green OA is a thornier problem for traditional publishing workflows. Each institutional repository is separate from the others, with its own funding sources, development path, and legacy issues. A common approach to article metadata, for example, has been slow to emerge. Fortunately, organizations like Crossref are working with multiple partners and initiatives to make these workflows universal, and transparent to the researcher.

Perhaps the biggest issue posed by OA is the fate of traditional, subscription-based journals. Despite the push to “flip” journals from a subscription model to Open Access, there are cases where this is simply not feasible or even desirable. Many journals have a large subscriber base of professionals who, although they value the research, do not themselves publish peer reviewed articles. This is especially true for STM publishing. Some of these journals have adopted a “hybrid” approach, charging APCs for some articles (which are available immediately) while maintaining others for subscribers only. These are eventually made Open Access under the Green model, especially when Open Access is a funding requirement.

Scanning the Horizon

As we will discuss in future blogs, publishers and their service providers are exploring better ways to adapt their publishing workflows to the realities of OA and hybrid journals. In some cases, such as metadata tagging, XML generation, and output to print and online versions, these workflows can be highly automated. In others, publishers must find cost-effective ways to add value—while being as transparent as possible to the authors and users of journal content.
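To give a flavor of the XML-generation step, the sketch below emits a skeletal JATS-style metadata block using only the Python standard library. The element names follow the public JATS tag set, but the values are placeholders, and a real workflow would validate against the full schema.

    import xml.etree.ElementTree as ET

    def jats_front(doi, title, issn):
        """Build a minimal JATS-style <front> block with placeholder values."""
        front = ET.Element("front")
        jmeta = ET.SubElement(front, "journal-meta")
        ET.SubElement(jmeta, "issn", {"pub-type": "epub"}).text = issn
        ameta = ET.SubElement(front, "article-meta")
        ET.SubElement(ameta, "article-id", {"pub-id-type": "doi"}).text = doi
        tgroup = ET.SubElement(ameta, "title-group")
        ET.SubElement(tgroup, "article-title").text = title
        return front

    front = jats_front("10.1000/example-doi", "On Open Access", "1234-5678")
    print(ET.tostring(front, encoding="unicode"))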

Despite these challenges, Open Access is changing the scholarly publishing landscape forever. There is a compelling need for researchers to find and build upon the research of others—each needle buried in a haystack of immense proportions—to advance the human condition. Publishers and their service partners are well positioned to make that open process accessible and fair to all.



Peer Review Management Services: Ensuring the Integrity of the Scientific Publishing Process

Cenveo Publisher Services now offers peer review management as a service. Journal publishers depend on the peer review process to validate research and uphold the quality of published articles. With deep expertise in scholarly publishing, our staff is fluent in all peer review models as well as the nuances of major peer review systems.

Download Brochure

Click here to download brochure


Our mission is to support both commercial and scholarly journal publishers with services that ensure editorial excellence while demonstrating time and cost savings. Peer review management fits well in our service portfolio because we've been working with the STM publishing industry for more than 135 years, and peer review is most certainly the cornerstone of scholarly publishing.
— McClanahan, Vice President of Publishing Services, Cenveo Publisher Services

Customized peer review management solutions are based on each publisher's workflows and business requirements. Peer review management is offered as a stand-alone service or integrated with Cenveo's full-service journal production model. Dedicated staff work exclusively on peer review: maintaining deadlines, communicating with reviewers, and streamlining responses to authors. The service is bundled with regular performance reports that detail submission numbers, processing times, decision rates, and more.
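The arithmetic behind such reports is straightforward. Here is a toy sketch with invented records, just to show the kinds of figures a report might roll up; none of the data or field layout comes from an actual Cenveo system.

    from datetime import date
    from statistics import mean

    # Invented records: (submitted, decided, decision) -- illustration only.
    submissions = [
        (date(2018, 1, 4),  date(2018, 2, 20), "accept"),
        (date(2018, 1, 9),  date(2018, 3, 1),  "reject"),
        (date(2018, 1, 15), date(2018, 2, 28), "revise"),
    ]

    days_to_decision = [(decided - submitted).days
                        for submitted, decided, _ in submissions]
    accept_rate = (sum(1 for *_, d in submissions if d == "accept")
                   / len(submissions))

    print(f"submissions: {len(submissions)}")
    print(f"mean days to decision: {mean(days_to_decision):.1f}")
    print(f"acceptance rate: {accept_rate:.0%}")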

Click the link below to learn more about this new service offering.


A Simple Lesson From Walt Disney

Everyone Has a Story to Tell

Videos aid learning. Video and animation sit at the top of the elearning food chain: whether embedded in a traditional elearning course or delivered as an independent asset, animated videos help learners visualize and understand complex concepts.

Increasingly, across all the markets we serve (journal publishers, K-12 educational publishers, higher ed publishers, elearning providers, and magazine publishers), customers are interested in transforming complex content into animated video shorts.

Editorial credit: Alex Millauer / Shutterstock.com

Animation offers a medium of storytelling and visual entertainment, which can bring pleasure and information to people of all ages everywhere in the world.
— Walt Disney

Conceptualization and Production

Cenveo Publisher Services provides a blended team of creatives, editors, and technologists who transform a fuzzy vision into distinct products for use in digital publications, websites, and elearning courses. Our specialists include

  • instructional designers
  • subject matter experts
  • multimedia specialists
  • graphic visualizers

We work with our customers to provide the full range of animation services, or à la carte options, including

  1. conceptualization
  2. content creation
  3. visual storyboarding
  4. art creation
  5. photo/video research and procurement
  6. permissions management
  7. audio recording
  8. animation
  9. live action shoots
  10. video editing and packaging
  11. accessibility (WCAG and Section 508 compliance)

Animation Sample: SWOT Analysis

Have a look at an animated short we created to explain what a SWOT analysis is and why it's beneficial.