The Center for Open Science | Preregistration Challenge

Some of the world's leading journals are taking steps to maximize the transparency and reproducibility of science by promoting the preregistration of research. Those journals include

  • Frontiers in Human Neuroscience
  • Journal of Experimental Social Psychology
  • Journal of Memory and Language
  • Memory & Cognition
  • Nature & Nature Research Journals
  • Ecology
  • Proceedings of the National Academy of Sciences
  • Brain and Behavior
  • Cognition & Emotion
  • Cortex
  • Learning & Behavior
  • PLOS Biology
  • Psychological Science
  • Science

Why Should Research be Preregistered?

When research is preregistered, researchers make an advance commitment to their hypotheses and analysis plans before data are gathered. Preregistration separates hypothesis-generating (exploratory) from hypothesis-testing (confirmatory) research. Both are important, but the same data should not be used both to generate and to test a hypothesis; doing so, even unintentionally, reduces the clarity and quality of results. Removing these potential conflicts through planning improves the quality and transparency of research, helping others who may wish to build on it.

The Center for Open Science (COS) is promoting preregistration through its Preregistration Challenge. The COS is giving away $1,000 each to 1,000 researchers who preregister their projects before they publish them!

Publishers can support this initiative by reaching out to authors and promoting the challenge. Following is an introductory video that explains the challenge; you can learn more by clicking here.

 

Publishing Defined: What is Open Peer Review?

 

This short video by John Bond of Riverwinds Consulting talks about the different types of Open Peer Review. John recently published a new book titled "Scholarly Publishing: A Primer." 

 

Learn About our Peer Review Services for Publishers



W3C Publishing Summit 2017

Guest blog by Evan Owens

The first-ever W3C Publishing Summit took place in San Francisco, November 9 to 10, to discuss how web technologies are shaping publishing today, tomorrow, and beyond. Publishing and the web interact in innumerable ways. The Open Web Platform and its technologies have become essential to how content is created, developed, enhanced, discovered, disseminated, and consumed online and offline.

Background on IDPF and W3C

In February 2017, the IDPF (International Digital Publishing Forum) merged into the W3C. IDPF members are now joining the W3C, and new groups have been formed, including the W3C Publishing Working Group, the EPUB Community Group, and others.

Keynote: The Future of Content by Abhay Parasnis – CTO, Adobe

The internet is open to all of the world's communications, and “content publication” has expanded enormously as a result. Businesses are trying to reach audiences in a personalized fashion. Artificial Intelligence (AI) and Machine Learning (ML) are important for content discovery, delivery, and personalization. The W3C does important standards development, but with technology moving so fast, how do we coordinate successfully?

A major goal of the W3C is to define a new Portable Web Publication (PWP) content format that will merge HTML and EPUB and replace PDF. EPUB 4.0 is likely to become a subset of that new PWP standard.

Following are some of my observations from the various presentations and discussions from the conference. Feel free to add your thoughts and takeaways in the comments section!

Content Platforms and Publishers

  • The majority of eBook content is still in EPUB2
  • EPUB3 is big in Japan and China but not yet common in English-language publications
  • Most EPUB content that fails is from US publishers
  • Publishers tend to overuse fixed layout, especially for academic or instructional content
  • The future will be CSS, interactivity, and accessibility

Digital Publishing in Asia, Europe, and Latin America

  • The UK is the biggest eBook market, with 575K new eBooks per year
  • Amazon is the leading EU bookseller (90% of UK sales)
  • Japan produces approximately 500K eBooks
  • Japan has been using EPUB 3.0 since 2011; 100% of old files were migrated to the new format
  • The market is growing in Korea and China
  • In Latin America ebooks are primarily EPUB 2.0; 3.0 hasn’t been adopted yet
  • 55% of publishers in Latin America have not yet started digital content production

Accessibility in Publishing and W3C

  • Accessibility in digital publishing is a key issue and has been built into EPUB
  • W3C implementation goals include supporting EPUB3 accessibility and aligning with the W3C's WCAG guidelines
  • DAISY has built a checking tool called “ACE”; it is now in beta and available for testing
  • Cenveo Publisher Services provides accessibility services and testing

Educational Publishing

  • Personalized learning challenges include the learning platform and the metrics
  • There is now a major move from books to digital e-learning platforms
  • Learning is now informed by data-driven insights: analytics tools add value

Creating EPUB Content that Looks and Works Great Everywhere

  • Microsoft added an EPUB reader into Windows 10 MS Edge web browser
  • Almost 90% of existing ebooks are EPUB2; even among content produced in 2017, only 62% is EPUB3
  • Issues for EPUB content creation and rendition include
    • Many different screen sizes and orientations (e.g., phone, tablet, computer)
    • Reader requirements: mobility, classroom usage, accessibility
    • Pagination works differently in different reading systems
    • Tables and anything with a fixed width are risky
    • Captions can become separated from their images by page breaks
    • Background images break when flowing across pages
    • CSS layouts that rely on colored text can fail
    • Support audio reader software by supplying language metadata (see the sketch after this list)
    • Fixed layout is never 100% perfect
    • Don't use SVG for text layout
    • Test content on several EPUB reading systems and devices
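On the language-metadata point above, here is a minimal sketch (Python, assuming a conventionally packaged EPUB; the file name is a placeholder) that reports the dc:language values declared in an EPUB's package document, which read-aloud and screen-reader software uses to pick the correct voice:

# Minimal sketch: list the dc:language values declared in an EPUB package file.
# META-INF/container.xml points at the OPF package document, so we do not have
# to guess its location inside the archive.
import zipfile
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"
CONTAINER = "{urn:oasis:names:tc:opendocument:xmlns:container}"

def epub_languages(epub_path):
    with zipfile.ZipFile(epub_path) as epub:
        container = ET.fromstring(epub.read("META-INF/container.xml"))
        opf_path = container.find(f".//{CONTAINER}rootfile").get("full-path")
        opf = ET.fromstring(epub.read(opf_path))
    return [el.text for el in opf.iter(DC + "language") if el.text]

# Usage (file name is a placeholder):
# if not epub_languages("my-title.epub"):
#     print("No dc:language declared; audio readers may choose the wrong voice.")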

Publication Metadata

  • Consumer metadata versus academic metadata remains a key challenge
  • Standards are adopted only slowly; e.g., ONIX 3 was published in 2009, but by 2017 only about 50% of the market had adopted it
  • Autotagging versus human tagging: machines are more consistent
  • Some 105 metadata standards are in use across the industry

Cenveo Publisher Services is a proud member of the W3C Publishing Working Group. The issues discussed at the W3C Publishing Summit are ones we address every day with academic, scholarly, and education publishers. We look forward to working with you in 2018 on innovative publishing solutions that improve editorial quality and streamline production while continuously addressing costs. Let us know how we can help.

 





Open Practice Badges: A Primer and How to Get Started

The Center for Open Science (COS) provides tools, training, support, and advocacy that help researchers and scholars manage, share, and discover scientific research. The COS's mission is to “increase the openness, integrity, and reproducibility of scholarly research.” Acceleration of scientific progress can be a primary motivator for scholarship and a powerful driver of real solutions.

The COS develops software tools, workflows, data storage solutions, and more based on its free Open Science Framework (OSF). The OSF is an ecosystem of solutions, partnering companies, technologies, and ideas that support researchers across the entire research life cycle.  One initiative that is gaining momentum is the use of Open Practice Badges in the publishing workflow.

Openness is a core value of scientific practice.
 

The scholarly publishing community agrees on the relevance and importance of open communication for scientific research and progress. In 2009 there were approximately 4,800 OA journals publishing approximately 190,000 articles. As of January 2017, there were an estimated 9,500 active OA journals. At Cenveo Publisher Services, we work with a large number of society and commercial publishers who have launched or are preparing to add OA publication models to their workflows.

Open Practice Badges awarded on published content acknowledge authors' use of open practices during the research life cycle.

Incorporating Open Practice Badges Into Publishing Workflows

By acknowledging open practices in scientific research, journal publishers can use badges in their publications to certify that a particular research practice was followed. Badges can be awarded to the published content as part of the peer review process or they can be awarded post-publication. As long as processes and practices are transparent, any organization can issue badges. Most publishers are awarding the badges during peer review. Publishing platforms and review services are likely to use the badges post publication.

For publishers, the journal awards the badge, and the badge is linked to the specific article. Each publisher tends to have its own method for incorporating badges into the published article. However, it is critical that the badge is machine-discoverable and machine-readable.
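The exact markup varies by publisher and platform, and the OSF wiki is the authoritative reference. Purely as a hypothetical sketch of what "machine-readable" can mean, badge information might be carried in the article's XML so downstream systems can harvest it; the element and attribute names here are illustrative, not an OSF or JATS specification:

# Hypothetical sketch: attach badge metadata to an article record so that
# indexing and platform systems can discover it. The element and attribute
# names ("open-practice-badge", "awarded-by") are illustrative only.
import xml.etree.ElementTree as ET

article = ET.Element("article", {"doi": "10.1234/example.001"})  # placeholder DOI
badge = ET.SubElement(article, "open-practice-badge", {
    "type": "open-data",               # open-data | open-materials | preregistered
    "awarded-by": "Journal of Examples",
    "date": "2017-11-01",
})
badge.text = "https://osf.io/example"  # placeholder link to the open resource

print(ET.tostring(article, encoding="unicode"))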

Detailed information on incorporating Open Practice Badges into your publication workflow can be found on the OSF Wiki page here.

Badge Overview

There are three badges currently used:

  1. Open Data
  2. Open Materials
  3. Preregistered

Following is an overview of the three badges and corresponding criteria. Detailed information is available on the OSF Wiki page, including corresponding links.

Open Data

The Open Data badge is earned for making publicly available the digitally-shareable data necessary to reproduce the reported results.

Criteria

Digitally-shareable data are publicly available on an open-access repository. The data must have a persistent identifier and be provided in a format that is time-stamped, immutable, and permanent (e.g., university repository, a registration on the Open Science Framework, or an independent repository at www.re3data.org).

A data dictionary (e.g., a codebook or metadata describing the data) is included with sufficient description for an independent researcher to reproduce the reported analyses and results. Data from the same project that are not needed to reproduce the reported results can be kept private without losing eligibility for the Open Data Badge.

An open license allowing others to copy, distribute, and make use of the data while allowing the licensor to retain credit and copyright as applicable. Creative Commons has defined several licenses for this purpose, which are described at www.creativecommons.org/licenses. CC0 or CC-BY is strongly recommended.

Open Materials

The Open Materials badge is earned by making publicly available the components of the research methodology needed to reproduce the reported procedure and analysis.

Criteria

Digitally-shareable materials are publicly available on an open-access repository. The materials must have a persistent identifier and be provided in a format that is time-stamped, immutable, and permanent (e.g., university repository, a registration on the Open Science Framework, or an independent repository at www.re3data.org).

Infrastructure, equipment, biological materials, or other components that cannot be shared digitally are described in sufficient detail for an independent researcher to understand how to reproduce the procedure.

Sufficient explanation for an independent researcher to understand how the materials relate to the reported methodology.

Preregistered/Preregistered+Analysis Plan badges 

The Preregistered/Preregistered+Analysis Plan badges are earned for preregistering research.

Preregistered

The Preregistered badge is earned for having a preregistered design. A preregistered design includes: (1) Description of the research design and study materials including planned sample size, (2) Description of motivating research question or hypothesis, (3) Description of the outcome variable(s), and (4) Description of the predictor variables including controls, covariates, independent variables (conditions). When possible, the study materials themselves are included in the preregistration.

Criteria for earning the preregistered badge on a report of research are:

  1. A public date-time stamped registration is in an institutional registration system (e.g., ClinicalTrials.gov, Open Science Framework, AEA Registry, EGAP).
  2. Registration pre-dates the intervention.
  3. Registered design and analysis plan corresponds directly to reported design and analysis.
  4. Full disclosure of results in accordance with registered plan.

Badge eligibility does not restrict authors from reporting results of additional analyses. Results from preregistered analyses must be distinguished explicitly from additional results in the report. Notations may be added to badges. Notations qualify badge meaning: TC, or Transparent Changes, means that the design was altered but the changes and the rationale for the changes are provided. DE, or Data Exist, means that (2) is replaced with “registration postdates realization of the outcomes, but the authors have yet to inspect or analyze the outcomes.”

Preregistered+Analysis Plan

The Preregistered+Analysis Plan badge is earned for having a preregistered research design (described above) and an analysis plan for the research and reporting results according to that plan. An analysis plan includes specification of the variables and the analyses that will be conducted. Guidance on construction of an analysis plan is below.

Criteria for earning the preregistered+analysis plan badge on a report of research are:

  1. A public date-time stamped registration is in an institutional registration system (e.g., ClinicalTrials.gov, Open Science Framework, AEA registry, EGAP).
  2. Registration pre-dates the intervention.
  3. Registered design and analysis plan corresponds directly to reported design and analysis.
  4. Full disclosure of results in accordance with the registered plan.

Notations may be added to badges. Notations qualify badge meaning: TC, or Transparent Changes, means that the design or analysis plan was altered but the changes are described and a rationale for the changes is provided. Where possible, analyses following the original specification should also be provided. DE, or Data Exist, means that (2) is replaced with “registration postdates realization of the outcomes, but the authors have yet to inspect or analyze the outcomes.”

What Journals Are Using Open Badges?

A list of journals currently using Open Practice Badges can be found here. The list continues to grow as more publishers understand the benefits of providing this acknowledgement to researchers and readers.


Cenveo Publisher Services is an advocate of Open Practice Badges. If your publishing organization would like to learn how we can support open badges in your workflow, feel free to reach out to us directly.

Are you currently using Open Practice Badges? Please share your findings or observations in the comments section below.

Innovative Research and Creative Output: From Ideas to Impact

Society for Scholarly Publishing - Philadelphia Regional Event

This post is a collaboration between SSP members, including Nicola Hill, Emma Sanders, and Adrian Stanley.

Left to right: Kathi Martin, Drexel Digital Museum; Jen Grayburn, CLIR Postdoc; Alex Humphreys, JSTOR Labs

On October 30th, the Society for Scholarly Publishing (SSP) hosted a regional event at the University of Pennsylvania, Van Pelt Library. The topic, "Innovative Research and Creative Outputs: From Ideas to Impact," brought together Philly-area publishers, librarians, and content professionals for a panel discussion on new and innovative methods of producing scholarship.

Jen Grayburn, CLIR Postdoctoral Fellow

Jen spoke about her use of Google Scholar, SketchFab, and Unity in her work, which centers around the intersection of architecture and text. Using GIS (Geographic Information Systems) mapping software, Jen examines locations of historic sites. She shared an example of a mapping she did of St. Magnus Cathedral in the islands off the north coast of Scotland. In this particular example, Jen generated a binary map that indicated what would and wouldn't be visible on the ground from a certain height.

She uses geo-TIFFs (TIFF files encoded with geographical coordinates) to create a 3D topographic map that illustrates what is visible and why. These mappings were eventually confirmed by on-site visits she conducted. In her work, Jen uses Sketchfab to store the large 3D modeling files.

Currently, there is a lack of standards around 3D scholarly outputs—how they're reviewed, stored, and made accessible. 3D collections are siloed by institution—there is really no shared repository. The only exception Jen cites is Duke University's MorphoSource. For these reasons, evaluating and citing digital work is still a challenge.

Studies in Digital Heritage content is inextricably linked to the 3D models created in the course of those studies. There is a real need for community standards for 3D data presentation. Academic departments are generally slow to reward digital projects or to establish a process for incorporating these scholarly outputs in formal evaluations.

Archeologists with an interest in Jen's work, for example, always want the original 3D model she created, not the version on Sketchfab. But these models haven't been peer-reviewed, and for that reason, Jen is reluctant to provide them. In the near future, more standards development and community standards for 3D and VR creation and curation in higher education are certainly warranted.

Kathi Martin, the Drexel Digital Museum Project

Kathi Martin presented her work with The Drexel Digital Museum Project: Historic Costume Collection (digimuse)---a searchable image database comprising select fashion from historic costume collections. Initially, fashion images were heavily protected by posting only low-resolution, watermarked images on the website. Kathi explained that Polish hacktivists demonstrated to her how easy it is to remove the watermark and improve resolution.

The museum has always been driven by open access and open source to share information and further usage and research. Interoperability is key to the museum’s mission—this allows the data on the museum’s website to be easily harvested across browsers.

The museum has widened beyond Drexel's collection; for example, Iris Barrel Apfel's Geoffrey Beene collection was displayed, and that exhibit is archived on the museum site. QuickTime VR was used to film the collection and provide high-resolution captures of the fashion collections.

The DigiMuse technology used in the Drexel project provides a new level of engagement with the collections Kathi is preserving. Drexel's Digital Museum project website allows a site visitor to interact personally and actively with a distributed, collected narrative. The site includes rich metadata descriptions for every picture. The variety of contributions on the site, Kathi feels, stimulates varying and often deeply personal reactions.

She believes the site is very powerful due to its “baked-in connectedness.” Kathi closed with Grace Kelly’s gown, made by Givenchy in part out of actual coral (gasp!). The site complements the high-res images of the gown itself with media of Grace Kelly in the gown.

Alex Humphreys, JSTOR Labs

Alex discussed how JSTOR Labs applies methods and tools from digital scholarship to create tools for researchers, teachers, and students "that are immediately useful – and a little bit magical." JSTOR is a member of ITHAKA, a non-profit devoted to digital sustainability.

Alex Humphreys, director at JSTOR Labs

Alex works with a team of five on innovative projects that benefit humanities scholars. He demonstrated JSTOR Labs’ Understanding Shakespeare tool, which uses the Folger Shakespeare Library’s digital version of Shakespeare plays to hyperlink each line of the play to a search showing all JSTOR articles that contain a particular line of prose. 

JSTOR Labs works from a philosophy of play—Alex sees what resources other organizations (like the Folger Shakespeare Library) bring, what Labs brings, and what kind of sandbox they might build in collaboration. Part of JSTOR Labs' philosophy values what Alex calls “multi-disciplinarity.” For example, JSTOR Labs' partnership with Eigenfactor (which measures influential and highly cited articles) has resulted in a tool that helps scholars discover the most influential articles in a given field or topic area.

JSTOR Labs also believes in hypothesis-driven development. Alex explained the key is ITERATING, ITERATING, ITERATING! Alex also presented topic modeling examples, including Reimagining the Monograph, which started from JSTOR Labs asking, "Can we improve the experience and value of long-form scholarship?"

The “topicgraph” provides a fingerprint of a monograph. Each topic has a set of associated keywords; the more of those keywords the text contains, the higher the probability that the topic is being discussed.
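As a toy illustration of that idea only (not JSTOR Labs' actual model), a naive keyword-containment score might look like this, assuming each topic is defined by a hand-picked keyword list:

# Naive sketch of keyword-based topic scoring: the more of a topic's associated
# keywords a passage contains, the higher that topic's score. The topics and
# keywords below are made-up examples, not JSTOR Labs' data or algorithm.
import re

TOPICS = {
    "thermodynamics": {"entropy", "heat", "equilibrium", "temperature"},
    "epigraphy": {"inscription", "stele", "lettering", "carving"},
}

def topic_scores(text):
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {topic: len(keywords & words) / len(keywords)
            for topic, keywords in TOPICS.items()}

print(topic_scores("The inscription on the stele shows careful lettering."))
# {'thermodynamics': 0.0, 'epigraphy': 0.75}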

Last but certainly not least, Alex unveiled an amazing, brand-new tool with the working name “Text Analyzer.” This tool is essentially a multi-language analyzer—text can be pulled from, say, a Russian Wikipedia entry. The tool will translate the text and list in English the topics included in the entry.

Alex notes that so much of digital humanities is about probabilities, not known data. Labeled topic modeling is the approach JSTOR Labs most frequently uses (as opposed to cluster topic modeling).


The Philadelphia SSP Regional Meetings are an excellent venue to engage with the scholarly and scholarly publishing community. All are welcome. To learn more, click here!

 

Rights & Permissions Service for Publishers

Copyright is far more than just a necessary evil to protect intellectual property from theft. Copyright furthers all creative interests by making the rich marketplace of ideas available to a wider audience. Resourceful rights and permissions management supports author content while maximizing the publisher’s budget.

Hiring one person to perform all the rights and permissions functions requires finding a pretty special person: an editorial specialist with enough copyright expertise to be an IP strategist who is also a skilled, digital-image-savvy photo researcher and database manager. That's why we offer R&P as a service for publishers.

Cenveo Publisher Services manages all aspects of text, image, and rich media content R&P. We assemble a team of project managers, assessment specialists, data entry staff, photo researchers, and permissions experts to support the management of R&P in your organization.

By identifying a rights strategy early, authors can stay on budget. Research and permissions runs alongside production cycles with clearly defined milestones. Targeted international expertise also allows a spectrum of pricing options. Contact us to learn how we can support R&P for your journals or books program.

 

Download Brochure


Choosing a Journal or Book Printer

A great primer on finding a print partner by John Bond at Riverwinds Consulting. John's YouTube channel, Publishing Defined, is a great resource for scholarly and academic publishers.

Choosing a Journal or Book Printer: This short video by John Bond of Riverwinds Consulting discusses choosing a printer.

  • Find out more about John Bond and his publishing consulting practice at www.RiverwindsConsulting.com
  • More videos on choosing a printer: https://www.youtube.com/playlist?list=PLqkE49N6nq3hhpEzslKtzBHbxgWCmDvL4
  • John's new book is "The Request for Proposal in Publishing: Managing the RFP Process": https://www.riverwindsconsulting.com/rfps/ (buy it at Amazon: https://www.amazon.com/Request-Proposal-Publishing-Managing-Process-ebook/dp/B071W7MBLM/ref=sr_1_1?s=books&ie=UTF8&qid=1497619963&sr=1-1&keywords=john+bond+rfps/)
  • Send ideas for John to discuss on Publishing Defined.
 


Marianne Calilhanna

Marianne is director of marketing for Cenveo Publisher Services. She started her career in editorial and production, working on STM primary and review journals. During her 28+ year career she's worked as a book editor, SGML (remember that?!) editor, and managing editor in addition to marketing-related positions. Technology, production, and people---these are just a few of her favorite things.

Mail Delivery Update - Hurricane/Tropical Storm Harvey

As the nation continues to recover from Hurricane Harvey (now downgraded to a Tropical Rainstorm), postal operations have been significantly impacted in the region. The USPS provides updated information here.

Cenveo's Mailing Services

Interested customers must make their own decisions as to whether to include affected mail addresses within their unprocessed mailing files. The recovery process has just begun in a few areas, and the rain will continue in others for the remainder of the week. The Postal Service has not yet had ample time to assess its capability to serve flooded areas or even determine whether affected addresses can receive deliveries. It is interesting to note that the USPS is using Twitter to encourage displaced citizens to temporarily change their address as life-changing decisions are made.

During the Katrina tragedy, the USPS Address Management Center kept a separate file of undeliverable addresses, and mailers used the list to purge their mailings. The USPS seems to be trying to get ahead of the problem this time by encouraging changes of address.

All processed mail for the affected areas is likely being held back for a few days at USPS processing centers or being held aside at a regional USPS processing site.

Click the image below to keep apprised of service disruption alerts.

Contact us if you would like to speak with one of Cenveo's USPS distribution specialists.


Accessibility: Because the Internet is Blind

Like the visually impaired, the Internet cannot “see” content the way a sighted human being does. It can only discover relevant content via searchable text and metadata. When publishers take the right steps to make content accessible, they also make it more discoverable.

Guest blog by John Parsons

In the past four blogs, we’ve discussed how to make different types of published content accessible to visually and cognitively impaired users. Throughout the series, we’ve covered the reasons why publishers should do so, including the moral argument and its related compliance requirements, such as Section 508, NIMAS, and WCAG 2.0. While digital workflows and service providers have made such compliance affordable and practical, there is another argument for accessibility—one that is a compelling benefit in the age of digital content: discoverability.

The Nature of the Internet

We tend to think of the Internet in general—and Web content in particular—as a visual experience. We view the screen as we would a printed document, albeit with far greater capabilities for interactivity and connection to other information. The tools for searching and discovering content are all visual as well. Typing in a phrase, scanning the results, and choosing what we want, are all familiar, visually-dependent habits.

However, what we are seeing is not the content, but an on-screen rendering. We’re seeing the programmed user interface. It may be highly accurate and functional, but it’s a product of underlying data. The technology itself does not “see” or experience the content as we do. It only handles data and its related metadata.

Discoverability Is the Key

In order to be found on the Internet, a piece of published content must have a logical, keyword-prioritized structure. It must not only have text strings that a search engine can find, it must also have standardized and commonly used metadata that correspond to what human users expect to find. Well-structured XML serves that purpose for nearly all types of published content.
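As a rough sketch of what "well-structured content plus metadata" means in practice, the fragment below loosely follows JATS element naming (it is illustrative, not a complete or valid JATS document); the same fields a search index harvests are the ones assistive technology can rely on:

# Illustrative sketch: the structure that makes an article discoverable is the
# same structure that makes it accessible. Element names loosely follow JATS,
# but this is not a complete or valid JATS document.
import xml.etree.ElementTree as ET

article_xml = """
<article>
  <front>
    <article-meta>
      <title-group><article-title>Soil carbon under no-till maize</article-title></title-group>
      <abstract><p>We measure soil organic carbon across 40 no-till plots.</p></abstract>
      <kwd-group><kwd>soil carbon</kwd><kwd>no-till</kwd><kwd>maize</kwd></kwd-group>
    </article-meta>
  </front>
</article>
"""

meta = ET.fromstring(article_xml).find("front/article-meta")
index_record = {
    "title": meta.findtext("title-group/article-title"),
    "abstract": meta.findtext("abstract/p"),
    "keywords": [k.text for k in meta.findall("kwd-group/kwd")],
}
print(index_record)  # the fields a search index or screen reader can rely on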

The good news is that accessibility and discoverability have the same basic solution: well-structured content and metadata. Best practices for one solution are applicable to the other!


This changes the equation for publishers faced with accessibility compliance issues. If they apply a holistic approach to well-structured XML content, they will improve their overall discoverability, and lay the groundwork for systematic rendering of their content in multiple forms—including HTML and EPUB optimized for accessibility.

Multiple Benefits

Every area of publishing benefits from greater discoverability. For journal and educational publishers, well-structured content can be more easily indexed by institutions and services, leading to higher citation and usage levels. For trade book publishers, discoverability translates to better search results and potentially more sales. For digital products of any kind, it means a better overall user experience, not only for the visually impaired but also for all users.

This is especially the case when it comes to non-text elements of published content. The practice of adding alt text descriptions for images and videos benefits not only the visually impaired reader. It also makes such rich content discoverable to the world.

Best practices for structuring content do not happen automatically. They require forethought by authors, publishers, and service providers. More importantly, they require a robust, standards-based workflow that includes searchable metadata and XML tags—added automatically wherever possible, and easily in all other cases.

The issues of accessibility are really only problematic when viewed in isolation. When viewed as a subset of a more compelling use case—discoverability—they become a normal and positive part of the publishing ecosystem.

 


Working With a Publishing Consultant

A short video by John Bond at Riverwinds Consulting. John's YouTube channel, Publishing Defined, is a great resource for scholarly and academic publishers.

 
 

Revenue Growth in Education, Scholarly, and Trade Book Publishing

The Association of American Publishers shared revenue figures in its StatShot report. Revenue is up 4.9% for Q1 2017 compared with Q1 2016.

Both education and scholarly publishers experienced slight revenue bumps during the first quarter of 2017, compared with the first quarter of 2016.

Higher Education course materials wins the greatest-growth award, reporting a $92 million (24.3%) increase to $470.2 million in Q1 2017 compared with Q1 2016. Revenues for Professional Publishing (business, medical, law, scientific, and technical books) were up by $5 million (4.5%) to $119.5 million.

 

Accessibility for Trade Book Publishers

The venerable world of trade books has had accessibility options since the early 19th Century invention of Braille. However, only in the digital age has it been possible to make all books accessible to the visually impaired.

Guest blog by John Parsons

In the 1820s, Charles Barbier and Louis Braille adapted a Napoleonic military code to meet the reading needs of the blind. Today’s familiar system of raised dot characters substitutes touch for vision, and is used widely for signage and of course books and other written material. By the 20th Century, Braille was supplemented with large print books and records. For popular books these tools became synonymous with trade book publishers’ efforts to connect with visually impaired readers.

However, these tools—particularly Braille—have significant drawbacks. Before the advent of digital workflows, producing a Braille or even a large print book involved a separate design and manufacturing process, not to mention subsequent supply chain and distribution issues. But that has changed with the digital publishing revolution.

All Books Are “Born Digital”

With notable exceptions, trade books published since the 1980s started out as digital files on a personal computer. Word processors captured not only the author’s keystrokes but, increasingly, their formatting choices. (In the typewriter era, unless you count backspacing and typing the underline key, italics and boldface were the province of the typographer.)

On the PC, creating a larger size headline or subhead, or a distinct caption, evolved from a manual step in WordStar or MacWrite to a global stylesheet formatting command. When these word processing files made their way to a desktop publishing program, all the 12-point body copy for a regular book could become 18-point type for a large print version—at a single command.

Other benefits of digital-first content included a relatively easy conversion from Roman text characters to Braille, although that did not solve the actual book manufacturing process.

What really made the digital revolution a boon to accessibility was the rise of HTML—and its publishing offspring, eBooks. Web or EPUB text content can be re-sized or fed into screen readers for the visually impaired, but that’s only the start. It can also contain standardized metadata that a publishing workflow can use to create more accessible versions of the book.

Workflow Challenges

Trade books tend to be straightforward when it comes to accessibility challenges, but there are caveats that publishers and their service providers must address. The simplest of course is a book that is almost entirely text, with no illustrations, sidebars, or other visual elements. In those cases, the stylesheet formatting done by the author and/or publisher can be used to create accessibility-related tags for elements like headlines and subheads, as well as manage the correct reading order for Section 508 compliance.

Where things start to get tricky is when a book includes illustrations, or even special typographic elements like footnotes. To be accessible, the former must include descriptive alt text, which is usually best provided by an author, illustrator, or subject matter expert. Increasingly, just as writers became accustomed to adding their own typographic formatting, they may also include formatted captions containing this valuable, alt text-friendly information.
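Because missing alt text is one of the most common gaps, even a small audit script helps; here is a minimal sketch using Python's built-in HTML parser, with an inline snippet standing in for a real EPUB content file, that flags images still needing a description:

# Minimal sketch: flag <img> elements that lack a non-empty alt attribute so an
# author, illustrator, or subject matter expert can supply a description.
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):  # attribute absent or empty
                self.missing.append(attrs.get("src", "(unknown source)"))

audit = AltTextAudit()
audit.feed("<p><img src='fig1.png' alt='Bar chart of 2016 sales'/> "
           "<img src='fig2.png'/></p>")
print(audit.missing)  # ['fig2.png'] still needs descriptive alt text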

For other visual elements, service providers must fill in the accessibility gaps that authors cannot easily provide. This may include a certain amount of redesign, such as placement of footnotes at the end, to ensure continuity of reading, and defining the logical flow of content and reading order for page elements like sidebars. Service providers also add semantic structuring, alt text image descriptions not included by the author, and simplification of complex elements like tables.

It’s All About Format

Book publishers are already well ahead of the curve when it comes to accessibility. As mentioned in a previous blog, the page-centric PDF format is problematic. Fortunately, except for print workflows, trade publishers do not use it for their end product. In most cases, books are also produced in EPUB format, which is a derivative of HTML. These formats are accessible by default, although they need to be enhanced to meet the requirements of WCAG 2.0 standards. The gap is small, however, and can be easily bridged by focusing on design, content structuring, and web hosting.

Book reading for the visually impaired is no longer restricted to the popular titles, and compensatory technology of past centuries. With the advent of digital publishing, and the workflows that support and enhance it, accessibility for all books is an achievable goal.

 


HTML 5.2 - W3C Candidate Recommendation and The Publishing Working Group

Today the W3C announced that HTML 5.2 is a W3C Candidate Recommendation. Over the next four weeks, the Advisory Committee will review the spec and determine whether to endorse it as a W3C Recommendation.

About HTML 5.2

This specification defines the 5th major version, second minor revision of the core language of the World Wide Web: the Hypertext Markup Language (HTML). In this version, new features continue to be introduced to help Web application authors, new elements continue to be introduced based on research into prevailing authoring practices, and special attention continues to be given to defining clear conformance criteria for user agents in an effort to improve interoperability.

HTML in the Wayback Machine

What the W3C website looked like on January 14, 1998 via the Wayback Machine.


While reviewing HTML 5.2, it's interesting to remember its origin story. The W3C provides a full history of HTML here but following are a few points of particular interest to the publishing community:

  • Originally, HTML was primarily designed as a language for semantically describing scientific documents.
  • For its first 5 years (1990-1995), HTML went through a number of revisions and experienced a number of extensions, primarily hosted first at CERN, and then at the IETF.
  • In 1998 the W3C membership decided to stop evolving HTML and instead begin work on an XML-based equivalent, called XHTML.
  • In 2003, the publication of XForms, a technology positioned as the next generation of Web forms, sparked a renewed interest in evolving HTML itself.
  • The idea that HTML’s evolution should be reopened was tested at a W3C workshop in 2004.
  • In 2006, the W3C indicated an interest in participating in the development of HTML 5.0.

It's a fascinating story and, like all history, important to revisit and understand.

W3C Today and the Publishing Working Group

The W3C website today.


In June, the W3C launched the new Publishing Working Group. The first-ever W3C Publishing Summit will be held 9-10 November 2017 in San Francisco, California. Evan Owens, VP of Publishing Technologies at Cenveo Publisher Services, will be there.

If you'd like to meet with Evan at the W3C Publishing Summit, you can make an appointment by clicking the button below.

 

Accessibility for Education Publishers

K-12 and Higher Ed publishers provide complex content that is deeply intertwined with Learning Management Systems and other digital deliverables. That makes accessibility harder—and potentially more rewarding.

Guest blog by John Parsons



In our recent blog, we tackled the issues of accessibility—for visually and cognitively impaired readers—in the realm of scholarly journal publishing. The solutions are (fairly) straightforward for that industry, because you’re dealing mostly with documents, and lots of text. Other types of publishers deal with a broader range of issues and output channels, so for them accessibility is more complex. Near the top of this difficulty scale are education publishers.

Even before the rise of digital media, education textbooks—notably in the K-12 market—posed significant accessibility challenges. Complex, rich layouts, laden with color, illustrations, and sidebars, made textbooks a rich, visual experience. Such books can be a treat for sighted students, for whom publishers have invested much thought and design research. For those less fortunate, however, a rich visual layout is an impediment.

Going Beyond Print

For printed textbooks, traditional accessibility fixes like large print and Braille are usually not cost-effective. Recorded audio has been a stopgap solution, but still a costly one, unlikely to handle the ever-increasing volume of educational material. Fortunately, the advent of digital media has far greater potential for making textbooks accessible.

When textbooks are produced as HTML or EPUB (but not PDF), the potential for greater accessibility is obvious. Type size can be adjusted at will. Text-to-speech can provide basic audio content with relative ease. Illustrations can be described with alt text—although care must be taken to ensure its quality. Even reading order and other “roadmap” approaches to complex visual layouts can make digital textbooks more accessible than a printed version could ever be.

The real key is digital media’s inherent ability to separate presentation and content. Well-structured data and a rich set of metadata can be presented in multiple ways, including forms designed for the visually and cognitively impaired. Government mandates, including the NIMAS specifications, have accelerated this trend. Publishers themselves have developed platforms and service partnerships to make the structuring of data and metadata more cost-effective—even when the government mandate is outdated or insufficient. (The reasons for doing this will be the subject of a future blog.)

The LMS Factor

What makes accessibility for educational publishers far more difficult is not textbooks, however. Particularly in higher education but increasingly in K-12, textbooks are only part of a much larger content environment: the Learning Management System or LMS. Driven by the institutional need to track student progress, and provide many other learning benefits and related technologies, the LMS is typically a complex collection of text content, media, secure web portals, and databases. Although textbooks still form a large portion of LMS content, studies from the Book Industry Study Group (BISG) indicate that the field is undergoing a radical shift.

This has massive implications for accessibility. Not only must publishers provide reading assistance for text and descriptions for images, they also must deal with the interactive elements of a typical website. This includes color contrast, keyboard access, moving content control, and alternatives—probably alt text—for online video and other visually interactive elements. A sighted person might have no difficulty with an online quiz, but the process will be very different for the visually impaired.

Fortunately—at least for now—the online elements of most LMSs are deployed on standard desktop or laptop computers, not mobile devices. The BISG study indicates that this is because more students have access to a PC, but not all have a tablet or e-reader. This makes the publisher’s task “simpler”—with fewer variations in operating systems and interfaces—but that will change as mobile device use increases. LMS features on smartphones are the start of new accessibility headaches for publishers.

Workflow—Again

As I pointed out in the previous blog, service providers have a major role in making accessibility affordable. This is especially true for educational publishers. Automating and standardizing content and metadata are usually out of reach, even for the largest publishers. Even keeping up to date with government and industry mandates, like Section 508 and WCAG 2.0, is best handled by a common service provider.

As with journal publishing, the overall workflow will make accessibility cost-effective in the complex, LMS-focused world of educational publishing. Fortunately, given the size and scope of that industry’s audience, it also makes the goal of accessibility more rewarding.

 




Accessibility for Journal Publishers

The terms “access” and “scholarly journals” are often linked to Open Access publishing. Less often discussed—but still very important—are issues and challenges of making journal content accessible to the visually, cognitively, or otherwise impaired.

Guest blog by John Parsons



Peer-reviewed, scholarly journals are a specialized slice of the publishing universe. Worldwide, it is a $25 billion market. Unlike consumer and trade magazines, journals are not supported by advertising revenue but rely on subscriptions, institutional funding, and/or open access funding mechanisms. Readership varies widely in size and scope, and includes students, journalists, and government employees as well as researchers themselves. Journals are also delivered by a wide array of specialized digital platforms and websites.

What they do share with other publications is the assumption that their audience can read words and images on a page or screen. For the majority of journal readers, this poses few problems. However, for readers with visual or other impairments, content accessibility is a major concern.

Justifying Journal Content Accessibility

Some might argue, without foundation, that scholars qualified to consume peer-reviewed content are less likely to be impaired in the first place, making the number of affected users too low to justify the added costs. (If cost were the only issue, one Stephen Hawking in a journal’s potential audience would more than justify the cost of making scholarly exchange possible for disabled readers. Also, as was mentioned, scholars and researchers are not the only readers in the equation.)

In other words, one justification for accessibility is a moral argument. It’s simply the right thing to do. However, for most journals, this argument is moot. Government-funded research typically carries minimum accessibility requirements, such as those spelled out in U.S. Code Section 508.

Building content accessibility into a journal workflow need not be a daunting financial question at all. Well-structured XML content and metadata have many benefits, of which accessibility is only one. (This will be the subject of another blog.)

Regardless of the reason, most journal publishers understand the why aspect of content accessibility. So, let’s focus on how best to do it.

Identifying the Pieces---WCAG 2.0, Section 508, and VPAT

To understand the scope of journal article accessibility, we need to know that an article typically has two basic versions—a document (PDF or EPUB) and a webpage. These are similar in many ways, especially to a sighted person, but they have different accessibility requirements.

All of these formats have the following in common:

  • accessibility metadata
  • meaningful alt text for images (including math formulas and charts)
  • a logical reading order
  • audible screen reading
  • alternative access to media content

Only two (EPUB and webpages) have potentially resizable text and a clear separation of presentation and content. (PDF's fixed page and text size can often be problematic. But in areas where PDF is a commonly used format, notably healthcare, service providers can offer workflow mechanisms to remediate PDFs for Section 508 compliance.)

Webpages have the added requirements of color contrast, keyboard access, options to stop, pause, or hide moving content, and alternatives to audio, video, and interactive content. Most of these are covered in detail in the W3C Web Content Accessibility Guidelines (WCAG) 2.0, many of which are federally mandated. Service provider solutions in this area include a Voluntary Product Accessibility Template (VPAT) for journal content. This template applies to all “Electronic and Information Technology” products and services. It helps government contracting officials and other buyers evaluate how accessible a particular product is, according to Section 508 or WCAG 2.0 standards.
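Some of these webpage requirements are fully mechanical. Color contrast, for example, is defined in WCAG 2.0 as a ratio of relative luminances, with 4.5:1 the minimum for normal text at level AA; a short sketch of that calculation:

# WCAG 2.0 contrast ratio between two sRGB colors; level AA requires at least
# 4.5:1 for normal-size text.
def _linear(channel):
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color1, color2):
    lighter, darker = sorted((relative_luminance(color1), relative_luminance(color2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))        # 21.0, black on white
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))  # ~4.48, narrowly fails AA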

There are several “degrees of difficulty” when it comes to making journal articles accessible. Research that is predominantly text is the easiest, but still requires careful thought and planning. With proper tagging of text elements, clearly denoting reading order and the placement of section headings and other cues, a text article can be accessibility-enhanced by several methods, including large print and audio.

More difficult by far are the complex tables, charts, math formulas, and photographic images that are prevalent in STM journals. Here, extra attention must be paid to type size and logical element order (for tables). In the case of charts, formulas, and pictures, the answer is alternative or “alt” text descriptions.

Think of it as explaining a visual scene to someone who is blindfolded. Rudimentary alt text, like “child, doll, hammer,” would probably not convey the full meaning of a photograph depicting Bandura’s famous Bobo Doll experiment. Rather, the best alt text would be a more nuanced text explanation of what the images depict—preferably by a subject matter expert.

Automation in Workflow is Key

When Braille or even large print were the only solutions, journal content accessibility was not an option for most. All that changed, for the better, with the advent of well-structured digital content. Again, publishing service providers have done much to advance this process, and in many cases, automate it.

Not every issue can be automated, however. Making content accessible may involve redesign. For example, footnotes may need to be placed at the end of an article—similar to a reference list—to ensure continuity of reading. Other steps include supporting the logical flow of content and reading order, semantic structuring for discoverability, adding alt text descriptions for images, simplifying the presentation and tagging of complex tabular data, and rendering math equations as MathML.
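For the MathML step, the quadratic formula, for example, ends up as markup like the following; the sketch just parses the fragment to confirm it is well-formed (production workflows generally convert from LaTeX or Word equations rather than writing MathML by hand):

# Sketch: hand-written MathML for x = (-b ± sqrt(b² - 4ac)) / 2a, parsed only to
# confirm it is well-formed XML.
import xml.etree.ElementTree as ET

mathml = """
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>x</mi><mo>=</mo>
  <mfrac>
    <mrow>
      <mo>-</mo><mi>b</mi><mo>&#xB1;</mo>
      <msqrt><msup><mi>b</mi><mn>2</mn></msup><mo>-</mo><mn>4</mn><mi>a</mi><mi>c</mi></msqrt>
    </mrow>
    <mrow><mn>2</mn><mi>a</mi></mrow>
  </mfrac>
</math>
"""

ET.fromstring(mathml)  # raises xml.etree.ElementTree.ParseError if malformed
print("MathML fragment is well-formed")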

Journal publishers can facilitate this in part by selecting formats that are more accessible by nature. Articles published online or available as EPUB are accessible by default, although they need to be enhanced to meet all the requirements of WCAG 2.0. The gap is small and can be easily bridged by identifying the shortcomings and addressing them in design, content structuring, and web hosting.

Many of the basic, structural issues of making journal content accessible can be resolved, more or less automatically, if the publishing system or platform enforces standardized metadata rules. Titles, subheads, body copy, and other text elements will have a logical order, and can easily be presented in accessible ways. For elements where knowledgeable human input is required (as with alt text), a good system will facilitate such input.

Accessibility is not just the right thing to do, for the sake of science. It is also an obtainable goal—with the right service provider.

 




Digital Solutions in India 2017 | A Special Report From Publishers Weekly

The annual report from Publishers Weekly (PW) that details service providers in India and the depth of solutions they offer in the global publishing market is now available. We are proud to take part in this special report that also captures a short list of accomplishments that Cenveo has experienced over the past year.

Recent Customer Success Stories

Cenveo Publisher Services recently worked with a global education publisher to develop an HTML5-based flashcard engine that offers flip card-styled content. “The end product combines terms and definitions with all types of media support to enhance user interaction and engagement,” explains marketing director Marianne Calilhanna, adding that the engine also “has complex assessment content built into the application to test knowledge about those terms and definitions learned.”

The entire application, which is WCAG 2.0 AA-compatible, was tested on three different browsers on three operating systems (iOS, OSX, and Windows). “It was also tested by an accessibility certification authority to ensure that the product is easily accessible by differently-abled users. The WCAG 2.0 AA compliance guidelines were thoroughly applied to the engine, including the colors used, color contrast, and settings panel. Then there was the use of large and well-spaced interactive elements or virtual controls, and the reinforcement of texts and visuals to ensure that no essential information was conveyed by audio alone,” says Calilhanna.

The next project from a major educational publisher was about creating and developing core content and supporting materials without hiring authors. “At first glance, it sounded like a cost-saving approach but it was actually more complex than that. Anyone involved with publishing educational content understands the deep and often hidden costs related to publishing and production,” Calilhanna says. “Our client, by partnering with Cenveo to develop and author higher-ed curriculum content, effectively bypassed ongoing royalties and permissions. This has resulted in lower costs and a positive P&L for the publisher, with savings passed on to students.”

Check out the full report:

It is interesting to note the following observation from PW:

 
During PW’s trip to India to visit participants in this report early in the year, some digital solutions vendors—and their main U.S. clients in some cases—were already rethinking their business collaboration with plans of forming partnerships or joint ventures to sidestep the IT outsourcing/immigration issues. Some are looking into setting up branches in the U.S. to offer onshore and hybrid services, while a few more are checking out companies to take over and therefore have immediate U.S. representation.
— Publishers Weekly
 

At Cenveo Publisher Services, onshore and hybrid solutions have long been an option available from our portfolio of services. Whether it's full-service production management or peer review management services, we work with publishers to implement a workflow that best fits their content and their budget---offshore, onshore, hybrid.


The Technology & Trends Reshaping K-12 Education Publishing

Stream our recording of Publishing Executive's June 14th webinar featuring Lisa Carmona, SVP/Chief Product Officer, PreK-12 Portfolio at McGraw-Hill Education, and Brian O'Leary, Executive Director at the Book Industry Study Group (BISG).

Helping students learn remains the core objective of education publishers, but the tools and tactics are evolving quickly. The new expectations of digital native students require publishers to enable learning materials with technology that meets the needs of The Mobile Generation. Data analytics have become nearly as important as content, supporting adaptive learning platforms and helping teachers monitor progress.
 

Download our latest report



Accessibility 101: What Does "Accessibility" Mean for Publishers?

Cenveo Publisher Services is a champion of digital equality. Over the coming weeks, we'll dive into some details about what accessibility means for publishers and review how to get started (or continue) with "born accessible" publishing initiatives.

Let's begin.

 
 

Making content accessible involves a number of services, depending on the content type and markets your publishing program reaches. What is consistent across all content and markets is well-structured and tagged content.

Stay tuned as we dive into the details for

  • documents
  • EPUB
  • games
  • websites
  • elearning courses

Feel free to share your questions and thoughts in the comments box below.

 

Learn More

Champion Digital Equality

Click here to learn more


Counting the Hidden Costs of Publishing

Guest blog by John Parsons

The rise of digital STM publishing, and the ongoing discussion about open access and subscription-based models, has led some to conclude that these changes inexorably lead to lower overall publication costs. Reality is more complex.

In my last blog, I discussed the open access or OA publishing model for scholarly, STM publishing. In a nutshell, OA allows peer-reviewed articles to be accessed and read without cost to the reader. Instead of relying on subscriptions, funding for such articles comes from a variety of sources, including article processing charges or APCs.

There are many misconceptions about OA, including the mistaken notion that OA journals are not peer reviewed (false) and that authors typically pay APCs out of pocket (also false). However, a more serious problem occurs when we fail to account for all the costs of scholarly publishing—not just the obvious ones.

Digital Doesn’t Mean Free


Part of the problem is the Internet itself. Search engines have given us the ability (in theory) to find information we need. Many non-scholarly publishers, particularly newspapers, have published content for anyone to read—in the misbegotten hope of selling more online advertising. The more idealistic among us have given many TED Talks on the virtue of giving away content, trusting that those who receive it—or at least some of them—will reciprocate.

What may work for a rock band does not necessarily work in publishing, however. This is partly because publishing is a complex process, with many of its functions unknown to the average scholar or reader.

Behind the Screens

The obvious publication costs of scholarly publishing—peer review, editing, XML transformation, metadata management, image validation, and so on—are daunting for anyone starting a new journal. If they want to be considered seriously, publications using the “Gold” open access model have to be able to handle these production costs over the long term. They also have to invest in other ways—to enhance their brand, and provide many of the services that scholars and researchers may take for granted.

The first of these hidden costs is the handling of metadata. The OA publishing model—and digital publishing in general—resulted in an explosion of available content, including not only peer reviewed articles, but also the data on which they are based. Having consistent metadata is critical to finding any given needle in an increasing number of haystacks. Metadata is also the key that maintains updates to the research (think Crossref) and tracks errata.

The trouble is that metadata is easy to visualize but it takes work and resources to implement well. Take for example the seemingly simple task of author name fields. The field for author surname (or family name, or last name) is typically text, but how does it accommodate non-Latin characters or accents? Does it easily handle the fact that surnames in countries like China are not the “last” name? The problem is usually not with the field itself, but with how it’s used in a given platform or workflow.

Another hidden metadata cost is the emergence of standards, and how well each publishing workflow handles them. More recently, the unique author identifier (ORCID) has gained in prominence, but researchers and contributors may not automatically use them. There are many such metadata conventions—each representing a cost to the publisher, in order to let scholars focus on their work without undue publishing distractions.
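A small sketch of what getting those fields right can look like: given and family names stored separately, an explicit display-order flag instead of an assumed family-name-last convention, and a format check on the ORCID iD. The field names are illustrative, not a JATS or Crossref schema.

# Illustrative contributor record: separate given/family names, an explicit
# name-order flag, and a simple ORCID iD format check. Field names are
# illustrative only, not a JATS or Crossref schema.
import re
from dataclasses import dataclass
from typing import Optional

ORCID_PATTERN = re.compile(r"^\d{4}-\d{4}-\d{4}-\d{3}[\dX]$")

@dataclass
class Contributor:
    given: str
    family: str
    family_first: bool = False           # e.g., many Chinese and Korean names
    orcid: Optional[str] = None

    def display_name(self):
        parts = (self.family, self.given) if self.family_first else (self.given, self.family)
        return " ".join(parts)

    def orcid_looks_valid(self):
        return self.orcid is None or bool(ORCID_PATTERN.match(self.orcid))

author = Contributor(given="Wei", family="Zhang", family_first=True,
                     orcid="0000-0002-1825-0097")
print(author.display_name(), author.orcid_looks_valid())  # Zhang Wei True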

Another hidden cost is presentation. From simple, easy-to-read typography to complex visual elements like math formulae, the publisher’s role (and the corresponding cost) has expanded. What was once a straightforward typesetting and design workflow for print has expanded to a complex, rules-driven process for transforming Word documents and graphic elements into backend XML, which fuels distribution.

The publishing model has drastically changed from a neatly packaged “issue publication model” to a continuous publication approach. This new model delivers preprints, issues, articles, or abstracts to very specific channels. The systems and workflows that support the new publication model require configuration and customization, all of which have associated production costs.

Automation Is the Key

Very few publishers can maintain the production work required in house. Technology development, staffing, and innovation are costly to maintain. The solution is to rely on a trusted solutions provider, who performs such tasks for multiple journals. Typically, this involves the development of automated workflows—simplifying metadata handling and presentation issues, using a rules-based approach for all predictable scenarios. This of course relies on a robust IT presence—something a single publisher or group typically cannot afford alone. Ideally, automated workflows involve an initial setup cost, but will improve editorial quality, improve turnaround times, and speed up time to publication.

By offloading the routine, data-intensive parts of publishing workflow to a competent service provider, publishers and scholars can spend more time on actual content and less time on the mechanics of making it accessible to and useable by other researchers.


What are some of the "hidden costs" your organization finds challenging?

 

Resources for publishers

Publishing Defined: John Bond's STM Publishing Video Series

What is Crossmark?

John Bond of Riverwinds Consulting is creating a video library of useful shorts about topics and terms important to the STM publishing industry. For some people, his shorts may provide a great refresher or another take on subjects that impact our market. For those just starting their career in STM publishing, his video series should be required viewing!

The series is titled "Publishing Defined" and covers a broad range of topics from defining specific terms to strategic advice regarding RFPs. Also helpful are the playlists he’s put together. You are sure to add a little something to your own knowledgebase from this series!

The following video explains Crossmark and why it’s important for publishers and service providers:

The Crossmark playlist can be viewed here.


Crossmark and Crossref are explained in our white paper, "All Things Connected." Download your copy today by clicking on the cover in the right column.

 

