As the nation continues to recover from Hurricane Harvey (since downgraded to a tropical storm), postal operations have been significantly impacted in the region. The USPS provides updated information here.
Cenveo's Mailing Services
Interested customers must decide for themselves whether to include affected addresses in their unprocessed mailing files. The recovery process has just begun in a few areas, and the rain will continue in others for the remainder of the week. The Postal Service has not yet had ample time to assess its capability to serve flooded areas, or even to determine whether affected addresses can receive deliveries at all. Notably, the USPS is using Twitter to encourage displaced citizens to file temporary changes of address while life-changing decisions are made.
During the Katrina tragedy, the USPS Address Management Center kept a separate file of addresses which were undeliverable and mailers used the list to purge their mailings. The USPS seems to be trying to get ahead this time by encouraging changes of address.
All processed mail for the affected areas is likely being held for a few days at USPS processing centers or set aside at a regional USPS processing site.
Click the image below to keep apprised of service disruption alerts.
Contact us if you would like to speak with one of Cenveo's USPS distribution specialists.
Like the visually impaired, the Internet cannot “see” content the way a sighted human being does. It can only discover relevant content via searchable text and metadata. When publishers take the right steps to make content accessible, they also make it more discoverable.
In the past four blogs, we’ve discussed how to make different types of published content accessible to visually and cognitively impaired users. Throughout the series, we’ve covered the reasons why publishers should do so, including the moral argument and its related compliance requirements, such as Section 508, NIMAS, and WCAG 2.0. While digital workflows and service providers have made such compliance affordable and practical, there is another argument for accessibility—one that is a compelling benefit in the age of digital content: discoverability.
The Nature of the Internet
We tend to think of the Internet in general—and Web content in particular—as a visual experience. We view the screen as we would a printed document, albeit with far greater capabilities for interactivity and connection to other information. The tools for searching and discovering content are all visual as well. Typing in a phrase, scanning the results, and choosing what we want are all familiar, visually dependent habits.
However, what we are seeing is not the content, but an on-screen rendering. We’re seeing the programmed user interface. It may be highly accurate and functional, but it’s a product of underlying data. The technology itself does not “see” or experience the content as we do. It only handles data and its related metadata.
Discoverability Is the Key
In order to be found on the Internet, a piece of published content must have a logical, keyword-prioritized structure. It must not only have text strings that a search engine can find, it must also have standardized and commonly used metadata that corresponds to what human users expect to find. Well-structured XML serves that purpose for nearly all types of published content.
The good news is that accessibility and discoverability have the same basic solution: well-structured content and metadata. Best practices for one solution are applicable to the other!
This changes the equation for publishers faced with accessibility compliance issues. If they apply a holistic approach to well-structured XML content, they will improve their overall discoverability, and lay the groundwork for systematic rendering of their content in multiple forms—including HTML and EPUB optimized for accessibility.
Every area of publishing benefits from greater discoverability. For journal and educational publishers, well-structured content can be more easily indexed by institutions and services, leading to higher citation and usage levels. For trade book publishers, discoverability translates to better search results and potentially more sales. For digital products of any kind, it means a better overall user experience, not only for the visually impaired but also for all users.
This is especially the case when it comes to non-text elements of published content. The practice of adding alt text descriptions for images and videos benefits not only the visually impaired reader. It also makes such rich content discoverable to the world.
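As a sketch of the point (the filename, chart, and wording below are invented for illustration), a single well-written alt attribute serves both audiences: a screen reader speaks it aloud, and a search engine indexes it.

```html
<!-- One descriptive alt text serves screen readers and search engines alike;
     the filename and figure content here are hypothetical examples. -->
<figure>
  <img src="glacier-retreat-chart.png"
       alt="Line chart showing glacier surface area declining from 1980
            to 2015, with the steepest drop occurring after 2000." />
  <figcaption>Figure 2: Measured glacier retreat, 1980–2015.</figcaption>
</figure>
```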
Best practices for structuring content do not happen automatically. They require forethought by authors, publishers, and service providers. More importantly, they require a robust, standards-based workflow that includes searchable metadata and XML tags—applied automatically wherever possible, and easily in all other cases.
The issues of accessibility are really only problematic when viewed in isolation. When viewed as a subset of a more compelling use case—discoverability—they become a normal and positive part of the publishing ecosystem.
The Association of American Publishers shared revenue figures in its StatShot report. Revenue is up 4.9% for Q1 2017 compared with Q1 2016.
Both education and scholarly publishers experienced slight revenue bumps during the first quarter of 2017, compared with the first quarter of 2016.
Higher Education course materials posted the greatest growth, with revenue up $92 million (24.3%) to $470.2 million in Q1 2017 compared with Q1 2016. Revenues for Professional Publishing (business, medical, law, scientific, and technical books) were up $5 million (4.5%) to $119.5 million.
The venerable world of trade books has had accessibility options since the early 19th Century invention of Braille. However, only in the digital age has it been possible to make all books accessible to the visually impaired.
In the 1820s, Charles Barbier and Louis Braille adapted a Napoleonic military code to meet the reading needs of the blind. Today’s familiar system of raised dot characters substitutes touch for vision, and is used widely for signage and of course books and other written material. By the 20th Century, Braille was supplemented with large print books and records. For popular books these tools became synonymous with trade book publishers’ efforts to connect with visually impaired readers.
However, these tools—particularly Braille—have significant drawbacks. Before the advent of digital workflows, producing a Braille or even a large print book involved a separate design and manufacturing process, not to mention subsequent supply chain and distribution issues. But that has changed with the digital publishing revolution.
All Books Are “Born Digital”
With notable exceptions, trade books published since the 1980s started out as digital files on a personal computer. Word processors captured not only the author’s keystrokes but, increasingly, their formatting choices. (In the typewriter era, unless you count backspacing and typing the underline key, italics and boldface were the province of the typographer.)
On the PC, creating a larger size headline or subhead, or a distinct caption, evolved from a manual step in WordStar or MacWrite to a global stylesheet formatting command. When these word processing files made their way to a desktop publishing program, all the 12-point body copy for a regular book could become 18-point type for a large print version—at a single command.
Other benefits of digital-first content included a relatively easy conversion from Roman text characters to Braille, although that did not simplify the separate manufacturing process a Braille book still required.
What really made the digital revolution a boon to accessibility was the rise of HTML—and its publishing offspring, eBooks. Web or EPUB text content can be re-sized or fed into screen readers for the visually impaired, but that’s only the start. It can also contain standardized metadata that a publishing workflow can use to create more accessible versions of the book.
Trade books tend to be straightforward when it comes to accessibility challenges, but there are caveats that publishers and their service providers must address. The simplest of course is a book that is almost entirely text, with no illustrations, sidebars, or other visual elements. In those cases, the stylesheet formatting done by the author and/or publisher can be used to create accessibility-related tags for elements like headlines and subheads, as well as manage the correct reading order for Section 508 compliance.
Where things start to get tricky is when a book includes illustrations, or even special typographic elements like footnotes. To be accessible, the former must include descriptive alt text, which is usually best provided by an author, illustrator, or subject matter expert. Increasingly, just as writers became accustomed to adding their own typographic formatting, they may also include formatted captions containing this valuable, alt text-friendly information.
For other visual elements, service providers must fill in the accessibility gaps that authors cannot easily provide. This may include a certain amount of redesign, such as placement of footnotes at the end, to ensure continuity of reading, and defining the logical flow of content and reading order for page elements like sidebars. Service providers also add semantic structuring, alt text image descriptions not included by the author, and simplification of complex elements like tables.
It’s All About Format
Book publishers are already well ahead of the curve when it comes to accessibility. As mentioned in a previous blog, the page-centric PDF format is problematic. Fortunately, except for print workflows, trade publishers do not use it for their end product. In most cases, books are also produced in EPUB format, which is a derivative of HTML. These formats are accessible by default, although they need to be enhanced to meet the requirements of WCAG 2.0 standards. The gap is small, however, and can be easily bridged by focusing on design, content structuring, and web hosting.
Book reading for the visually impaired is no longer restricted to the popular titles, and compensatory technology of past centuries. With the advent of digital publishing, and the workflows that support and enhance it, accessibility for all books is an achievable goal.
Today the W3C announced that HTML 5.2 is a W3C Candidate Recommendation. Over the next 4 weeks, the Advisory Committee will review the spec and determine whether they will endorse it as a W3C Recommendation.
About HTML 5.2
This specification defines the 5th major version, second minor revision of the core language of the World Wide Web: the Hypertext Markup Language (HTML). In this version, new features continue to be introduced to help Web application authors, new elements continue to be introduced based on research into prevailing authoring practices, and special attention continues to be given to defining clear conformance criteria for user agents in an effort to improve interoperability.
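As one illustrative example (new elements and their browser support may vary by the time you read this), this revision of the spec standardizes the `<dialog>` element for native modal dialogs, which previously required custom scripting:

```html
<!-- <dialog> is among the elements standardized in the HTML 5.2 spec;
     this minimal sketch shows a dialog that is open by default. -->
<dialog id="notice" open>
  <p>HTML 5.2 is now a W3C Candidate Recommendation.</p>
  <button onclick="document.getElementById('notice').close()">Close</button>
</dialog>
```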
HTML in the Wayback Machine
While reviewing HTML 5.2, it's interesting to remember its origin story. The W3C provides a full history of HTML here, but the following are a few points of particular interest to the publishing community:
- Originally, HTML was primarily designed as a language for semantically describing scientific documents.
- For its first five years (1990-1995), HTML went through a number of revisions and extensions, hosted first at CERN and then at the IETF.
- In 1998 the W3C membership decided to stop evolving HTML and instead begin work on an XML-based equivalent, called XHTML.
- In 2003, the publication of XForms, a technology positioned as the next generation of Web forms, sparked a renewed interest in evolving HTML itself.
- The idea that HTML’s evolution should be reopened was tested at a W3C workshop in 2004.
- In 2006, the W3C indicated an interest in participating in the development of HTML 5.0.
It's a fascinating story and, like all history, important to revisit and understand.
W3C Today and the Publishing Working Group
In June, the W3C launched the new Publishing Working Group. The first ever W3C Publishing Summit will be held 9-10 November 2017 in San Francisco, California. Evan Owens, VP of Publishing Technologies at Cenveo Publisher Services will be there.
If you'd like to meet with Evan at the W3C Publishing Summit, you can make an appointment by clicking the button below.
K-12 and Higher Ed publishers provide complex content that is deeply intertwined with Learning Management Systems and other digital deliverables. That makes accessibility harder—and potentially more rewarding.
In our recent blog, we tackled the issues of accessibility—for visually and cognitively impaired readers—in the realm of scholarly journal publishing. The solutions are (fairly) straightforward for that industry, because you’re dealing mostly with documents, and lots of text. Other types of publishers deal with a broader range of issues and output channels, so for them accessibility is more complex. Near the top of this difficulty scale are education publishers.
Even before the rise of digital media, education textbooks—notably in the K-12 market—posed significant accessibility challenges. Complex, rich layouts, laden with color, illustrations, and sidebars, made textbooks a rich, visual experience. Such books can be a treat for sighted students, for whom publishers have invested much thought and design research. For those less fortunate, however, a rich visual layout is an impediment.
Going Beyond Print
For printed textbooks, traditional accessibility fixes like large print and Braille are usually not cost-effective. Recorded audio has been a stopgap solution, but still a costly one, unlikely to handle the ever-increasing volume of educational material. Fortunately, the advent of digital media has far greater potential for making textbooks accessible.
When textbooks are produced as HTML or EPUB (but not PDF), the potential for greater accessibility is obvious. Type size can be adjusted at will. Text-to-speech can provide basic audio content with relative ease. Illustrations can be described with alt text—although care must be taken to ensure its quality. Even reading order and other “roadmap” approaches to complex visual layouts can make digital textbooks more accessible than a printed version could ever be.
The real key is digital media’s inherent ability to separate presentation and content. Well-structured data and a rich set of metadata can be presented in multiple ways, including forms designed for the visually and cognitively impaired. Government mandates, including the NIMAS specifications, have accelerated this trend. Publishers themselves have developed platforms and service partnerships to make the structuring of data and metadata more cost-effective—even when the government mandate is outdated or insufficient. (The reasons for doing this will be the subject of a future blog.)
The LMS Factor
What makes accessibility for educational publishers far more difficult is not textbooks, however. Particularly in higher education but increasingly in K-12, textbooks are only part of a much larger content environment: the Learning Management System or LMS. Driven by the institutional need to track student progress, and provide many other learning benefits and related technologies, the LMS is typically a complex collection of text content, media, secure web portals, and databases. Although textbooks still form a large portion of LMS content, studies from the Book Industry Study Group (BISG) indicate that the field is undergoing a radical shift.
This has massive implications for accessibility. Not only must publishers provide reading assistance for text and descriptions for images, they also must deal with the interactive elements of a typical website. This includes color contrast, keyboard access, moving content control, and alternatives—probably alt text—for online video and other visually interactive elements. A sighted person might have no difficulty with an online quiz, but the process will be very different for the visually impaired.
Fortunately—at least for now—the online elements of most LMSs are deployed on standard desktop or laptop computers, not mobile devices. The BISG study indicates that this is because more students have access to a PC, but not all have a tablet or e-reader. This makes the publisher’s task “simpler”—with fewer variations in operating systems and interfaces—but that will change as mobile device use increases. LMS features on smartphones are the start of new accessibility headaches for publishers.
As I pointed out in the previous blog, service providers have a major role in making accessibility affordable. This is especially true for educational publishers. Automating and standardizing content and metadata are usually out of reach in-house, even for the largest publishers. Even keeping up to date with government and industry mandates, like Section 508 and WCAG 2.0, is best handled by a common service provider.
As with journal publishing, the overall workflow will make accessibility cost-effective in the complex, LMS-focused world of educational publishing. Fortunately, given the size and scope of that industry’s audience, it also makes the goal of accessibility more rewarding.
The terms “access” and “scholarly journals” are often linked to Open Access publishing. Less often discussed—but still very important—are issues and challenges of making journal content accessible to the visually, cognitively, or otherwise impaired.
Peer-reviewed, scholarly journals are a specialized slice of the publishing universe. Worldwide, it is a $25 billion market. Unlike consumer and trade magazines, journals are not supported by advertising revenue, but rely on subscriptions, institutional funding, and/or open access funding mechanisms. Readership varies widely in size and scope, and includes students, journalists, and government employees as well as researchers themselves. Journals are also delivered by a wide array of specialized digital platforms and websites.
What they do share with other publications is the assumption that their audience can read words and images on a page or screen. For the majority of journal readers, this poses few problems. However, for readers with visual or other impairments, content accessibility is a major concern.
Justifying Journal Content Accessibility
Some might argue, without foundation, that scholars qualified to consume peer-reviewed content are less likely to be impaired in the first place, making the number of affected users too low to justify the added costs. (If cost were the only issue, one Stephen Hawking in a journal’s potential audience would more than justify the cost of making scholarly exchange possible for disabled readers. Also, as was mentioned, scholars and researchers are not the only readers in the equation.)
In other words, one justification for accessibility is a moral argument. It’s simply the right thing to do. However, for most journals, this argument is moot. Government-funded research typically carries minimum accessibility requirements, such as those spelled out in U.S. Code Section 508.
Building content accessibility into a journal workflow need not even be a daunting financial question at all. Well-structured XML content and metadata has many benefits, of which accessibility is only one. (This will be the subject of another blog.)
Regardless of the reason, most journal publishers understand the why aspect of content accessibility. So, let’s focus on how best to do it.
Identifying the Pieces: WCAG 2.0, Section 508, and VPAT
To understand the scope of journal article accessibility, we need to know that an article typically exists in two basic versions—a document (PDF or EPUB) and a webpage. These are similar in many ways, especially to a sighted person, but they have different accessibility requirements.
What each of these formats has in common is the need for:
- accessibility metadata
- meaningful alt text for images (including math formulas and charts)
- a logical reading order
- audible screen reading
- alternative access to media content
Only two of these (EPUB and webpages) have potentially resizable text and a clear separation of presentation and content. (PDF’s fixed page and text size can often be problematic. But in areas where PDF is a commonly used format, notably healthcare, service providers can offer workflow mechanisms to remediate PDFs for Section 508 compliance.)
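In EPUB, much of the accessibility metadata listed above is expressed with schema.org properties in the package document. The following fragment is a hedged sketch (the title and summary text are invented, and real package files carry additional required metadata):

```xml
<!-- EPUB 3 package metadata fragment using schema.org accessibility
     properties; title and summary are hypothetical examples. -->
<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>A Sample Journal Article</dc:title>
  <meta property="schema:accessMode">textual</meta>
  <meta property="schema:accessMode">visual</meta>
  <meta property="schema:accessModeSufficient">textual</meta>
  <meta property="schema:accessibilityFeature">alternativeText</meta>
  <meta property="schema:accessibilitySummary">All images include
    descriptive alt text; reading order is fully specified.</meta>
</metadata>
```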
Webpages have the added requirements of color contrast, keyboard access, options to stop, pause, or hide moving content, and alternatives to audio, video, and interactive content. Most of these are covered in detail in the W3C Web Content Accessibility Guidelines (WCAG) 2.0 guidelines, many of which are federally mandated. Service provider solutions in this area include a Voluntary Product Accessibility Template (VPAT) for journal content. This template applies to all “Electronic and Information Technology” products and services. It helps government contracting officials and other buyers to evaluate how accessible a particular product is, according to Section 508 or WCAG 2.0 standards.
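To make the webpage requirements concrete, here is a minimal sketch of accessible embedded video in standard HTML (the file names are hypothetical): native controls give users the ability to stop or pause moving content, and a captions track provides an alternative to the audio.

```html
<!-- Native controls let users pause moving content; the captions track
     provides an alternative to audio. File names are hypothetical. -->
<video controls>
  <source src="lecture.mp4" type="video/mp4" />
  <track kind="captions" src="lecture-captions.vtt" srclang="en"
         label="English captions" default />
  <p>Your browser does not support embedded video;
     a transcript is available instead.</p>
</video>
```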
There are several “degrees of difficulty” when it comes to making journal articles accessible. Research that is predominantly text is the easiest, but still requires careful thought and planning. With proper tagging of text elements, clearly denoting reading order and the placement of section headings and other cues, a text article can be accessibility-enhanced by several methods, including large print and audio.
More difficult by far are the complex tables, charts, math formulas, and photographic images that are prevalent in STM journals. Here, extra attention must be paid to type size and logical element order (for tables). In the case of charts, formulas, and pictures, the answer is alternative or “alt” text descriptions.
Think of it as explaining a visual scene to someone who is blindfolded. Rudimentary alt text, like “child, doll, hammer,” would probably not convey the full meaning of a photograph depicting Bandura’s famous Bobo Doll experiment. Rather, the best alt text would be a more nuanced text explanation of what the images depict—preferably by a subject matter expert.
Automation in Workflow is Key
When Braille or even large print were the only solutions, journal content accessibility was not an option for most. All that changed, for the better, with the advent of well-structured digital content. Again, publishing service providers have done much to advance this process, and in many cases, automate it.
Not every issue can be automated, however. Making content accessible may involve redesign. For example, footnotes may need to be placed at the end of an article—similar to a reference list—to ensure continuity of reading. Other steps support the logical flow of content and reading order, semantic structuring for discoverability, inclusion of alt text descriptions for images, simplifying presentation and tagging of complex tabular data, and the rendering of math equations as MathML.
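Rendering equations as MathML, rather than as images, gives screen readers a structure they can actually parse. As a simple sketch, the quadratic formula in MathML looks like this:

```xml
<!-- The quadratic formula as structured MathML; a screen reader can
     traverse this markup, unlike a flat image of the equation. -->
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
  <mi>x</mi><mo>=</mo>
  <mfrac>
    <mrow>
      <mo>-</mo><mi>b</mi><mo>±</mo>
      <msqrt>
        <msup><mi>b</mi><mn>2</mn></msup>
        <mo>-</mo><mn>4</mn><mi>a</mi><mi>c</mi>
      </msqrt>
    </mrow>
    <mrow><mn>2</mn><mi>a</mi></mrow>
  </mfrac>
</math>
```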
Journal publishers can facilitate this in part by selecting formats that are more accessible by nature. Articles published online or available as EPUB are accessible by default, although they need to be enhanced to meet all the requirements of WCAG 2.0. The gap is small and can be easily bridged by identifying the shortcomings and addressing them in design, content structuring, and web hosting.
Many of the basic, structural issues of making journal content accessible can be resolved, more or less automatically, if the publishing system or platform enforces standardized metadata rules. Titles, subheads, body copy, and other text elements will have a logical order, and can easily be presented in accessible ways. For elements where knowledgeable human input is required (as with alt text), a good system will facilitate such input.
Accessibility is not just the right thing to do, for the sake of science. It is also an obtainable goal—with the right service provider.
The annual report from Publishers Weekly (PW) that details service providers in India and the depth of solutions they offer in the global publishing market is now available. We are proud to take part in this special report that also captures a short list of accomplishments that Cenveo has experienced over the past year.
Recent Customer Success Stories
Cenveo Publisher Services recently worked with a global education publisher to develop an HTML5-based flashcard engine that offers flip card-styled content. “The end product combines terms and definitions with all types of media support to enhance user interaction and engagement,” explains marketing director Marianne Calilhanna, adding that the engine also “has complex assessment content built into the application to test knowledge about those terms and definitions learned.”
The entire application, which is WCAG 2.0 AA-compatible, was tested on three different browsers on three operating systems (iOS, OSX, and Windows). “It was also tested by an accessibility certification authority to ensure that the product is easily accessible by differently-abled users. The WCAG 2.0 AA compliance guidelines were thoroughly applied to the engine, including the colors used, color contrast, and settings panel. Then there was the use of large and well-spaced interactive elements or virtual controls, and the reinforcement of texts and visuals to ensure that no essential information was conveyed by audio alone,” says Calilhanna.
The next project, from a major educational publisher, involved creating and developing core content and supporting materials without hiring authors. “At first glance, it sounded like a cost-saving approach but it was actually more complex than that. Anyone involved with publishing educational content understands the deep and often hidden costs related to publishing and production,” Calilhanna says. “Our client, by partnering with Cenveo to develop and author higher-ed curriculum content, effectively bypassed ongoing royalties and permissions. This has resulted in lower costs and a positive P&L for the publisher, with savings passed on to students.”
Check out the full report:
It's interesting to note the following observation from PW:
At Cenveo Publisher Services, onshore and hybrid solutions have long been an option available from our portfolio of services. Whether it's full-service production management or peer review management services, we work with publishers to implement a workflow that best fits their content and their budget: offshore, onshore, or hybrid.
Stream our recording of Publishing Executive's June 14th webinar featuring Lisa Carmona, SVP/Chief Product Officer, PreK-12 Portfolio at McGraw-Hill Education and Brian O'Leary, Executive Director at the Book Industry Study Group (BISG).
Download our latest report
Cenveo Publisher Services is a champion of digital equality. Over the coming weeks, we'll dive into some details about what accessibility means for publishers and review how to get started (or continue) with "born accessible" publishing initiatives.
Making content accessible involves a number of services, depending on the content type and the markets your publishing program reaches. What is consistent across all content and markets is well-structured, well-tagged content.
Stay tuned as we dive into the details for
- elearning courses
Feel free to share your questions and thoughts in the comments box below.
The rise of digital STM publishing, and the ongoing discussion about open access and subscription-based models, has led some to conclude that these changes inexorably lead to lower overall publication costs. Reality is more complex.
In my last blog, I discussed the open access or OA publishing model for scholarly, STM publishing. In a nutshell, OA allows peer-reviewed articles to be accessed and read without cost to the reader. Instead of relying on subscriptions, funding for such articles comes from a variety of sources, including article processing charges or APCs.
There are many misconceptions about OA, including the mistaken notion that OA journals are not peer reviewed (false) and that authors typically pay APCs out of pocket (also false). However, a more serious problem occurs when we fail to account for all the costs of scholarly publishing—not just the obvious ones.
Digital Doesn’t Mean Free
Part of the problem is the Internet itself. Search engines have given us the ability (in theory) to find the information we need. Many non-scholarly publishers, particularly newspapers, have published content for anyone to read—in the misbegotten hope of selling more online advertising. The more idealistic among us have given many TED Talks on the virtue of giving away content, trusting that those who receive it—or at least some of them—will reciprocate.
What may work for a rock band does not necessarily work in publishing, however. This is partly because publishing is a complex process, with many of its functions unknown to the average scholar or reader.
Behind the Screens
The obvious publication costs of scholarly publishing—peer review, editing, XML transformation, metadata management, image validation, and so on—are daunting for anyone starting a new journal. If they want to be considered seriously, publications using the “Gold” open access model have to be able to handle these production costs over the long term. They also have to invest in other ways—to enhance their brand, and provide many of the services that scholars and researchers may take for granted.
The first of these hidden costs is the handling of metadata. The OA publishing model—and digital publishing in general—resulted in an explosion of available content, including not only peer-reviewed articles but also the data on which they are based. Consistent metadata is critical to finding any given needle in an increasing number of haystacks. Metadata is also the key to maintaining updates to the research (think Crossref) and tracking errata.
The trouble is that metadata is easy to visualize but it takes work and resources to implement well. Take for example the seemingly simple task of author name fields. The field for author surname (or family name, or last name) is typically text, but how does it accommodate non-Latin characters or accents? Does it easily handle the fact that surnames in countries like China are not the “last” name? The problem is usually not with the field itself, but with how it’s used in a given platform or workflow.
Another hidden metadata cost is the emergence of standards, and how well each publishing workflow handles them. More recently, the unique author identifier (ORCID) has gained in prominence, but researchers and contributors may not automatically use them. There are many such metadata conventions—each representing a cost to the publisher, in order to let scholars focus on their work without undue publishing distractions.
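In JATS, the XML model widely used in journal production, these conventions show up concretely: an author record separates name parts explicitly and can carry an ORCID. The sketch below uses an invented author and an illustrative identifier:

```xml
<!-- JATS contributor record; the name and ORCID iD are illustrative.
     name-style="eastern" signals that the surname is written first. -->
<contrib contrib-type="author">
  <contrib-id contrib-id-type="orcid"
    >https://orcid.org/0000-0002-1825-0097</contrib-id>
  <name name-style="eastern">
    <surname>Zhang</surname>
    <given-names>Wei</given-names>
  </name>
</contrib>
```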
Another hidden cost is presentation. From simple, easy-to-read typography to complex visual elements like math formulae, the publisher’s role (and the corresponding cost) has expanded. What was once a straightforward typesetting and design workflow for print has expanded to a complex, rules-driven process for transforming Word documents and graphic elements into backend XML, which fuels distribution.
The publishing model has drastically changed from a neatly packaged “issue publication model” to a continuous publication approach. This new model delivers preprints, issues, articles, or abstracts to very specific channels. The systems and workflows that support the new publication model require configuration and customization, all of which carry associated production costs.
Automation Is the Key
Very few publishers can maintain the required production work in house. Technology development, staffing, and innovation are all costly to sustain. The solution is to rely on a trusted solutions provider who performs such tasks for multiple journals. Typically, this involves developing automated workflows that simplify metadata handling and presentation, using a rules-based approach for all predictable scenarios. This in turn relies on a robust IT presence—something a single publisher or group typically cannot afford alone. Automated workflows involve an initial setup cost, but they improve editorial quality, shorten turnaround times, and speed time to publication.
By offloading the routine, data-intensive parts of the publishing workflow to a competent service provider, publishers and scholars can spend more time on actual content and less time on the mechanics of making it accessible to, and usable by, other researchers.
What are some of the "hidden costs" your organization finds challenging?
Resources for publishers
What is Crossmark?
John Bond of Riverwinds Consulting is creating a video library of useful shorts about topics and terms important to the STM publishing industry. For some people, his shorts may provide a great refresher or another take on subjects that impact our market. For those just starting their career in STM publishing, his video series should be required viewing!
The series is titled "Publishing Defined" and covers a broad range of topics from defining specific terms to strategic advice regarding RFPs. Also helpful are the playlists he’s put together. You are sure to add a little something to your own knowledgebase from this series!
The following video explains Crossmark and why it’s important for publishers and service providers:
The Crossmark playlist can be viewed here.
Crossmark and Crossref are explained in our white paper, "All Things Connected." Download your copy today by clicking on the cover in the right column.
Resources for Publishers
It was another busy year at the London Book Fair last week, with registration reportedly up by a double-digit percentage.
The following captured a brief quiet moment at the Cenveo Publisher Services Stand. The global team met with publishers, production managers, archivists, technology executives, and many others to discuss all things related to the creation and management of content.
Indeed, the hot topic for LBF17 at the Cenveo Stand was content accessibility. Long a champion of digital equality, we're helping publishers create and architect content that is "born accessible." The same technologies and guidelines that improve access to materials for people with visual or hearing impairments, limited mobility, or perceptual and cognitive differences are also tremendously useful for all publishers' customers.
Content accessibility is no longer limited to education publishers: journal publishers and others now have a driving need to do more with it.
Google Books Decision
In an extremely packed room, Judge Pierre Leval of the U.S. Court of Appeals for the Second Circuit, America’s foremost copyright jurist, told attendees that Google’s program to scan tens of millions of library books to create an online index “conferred gigantic benefits to authors and the public equally,” and did not “offer a substitute or interfere with authors’ exclusive rights” to control distribution. READ MORE: Judge Pierre Leval Defends Google Books Decision, Fair Use
Scholarly Publishing and Academic Market
The Research and Scholarly Publishing Forum offered academic publishers and service providers a half-day program with lively debates from Elsevier, Wiley, and Taylor & Francis. Some of the highlights included
- A discussion about the future of Open Access in the UK between Alicia Wise, Elsevier’s Director of Policy and Access, Liam Earney, Jisc Collections’ Head of Library Support Services, and Chris Banks, Assistant Provost (Space) & Director of Library Services, Central Library, Imperial College London
- A panel presenting global research policy developments chaired by Wiley’s James Perham-Marchant, featuring speakers from Taylor & Francis, Berghahn Books and Research Consulting
- A panel session on new innovations to watch, chaired by Tracey Armstrong, President and CEO of the Copyright Clearance Center, including speakers from Sparrho, Frontiers and Cold Spring Harbor Laboratory Press
Full Coverage via Publishers Weekly
Publishers Weekly covered a range of topics across the many markets represented at the Fair.
Resources for Publishers
After almost two decades, the Open Access publishing model is still controversial and often misunderstood. Here’s where we stand today.
The beginnings of scholarly publishing correspond roughly to the Enlightenment period of the late 17th and early 18th centuries. The practice of publishing one’s discoveries was driven by a belief—championed by the Royal Society—in the transparent, open exchange of experiment-based ideas. Over the centuries, journals embraced a rigorous peer review process to maintain the integrity (and the subscription value) of their research content.
Transparency, openness, and integrity all come at a cost, however. For many years, that cost was met by charging journal subscription fees—usually borne by institutions who either produced the research, benefited from it, or both. So long as the publishing model was solely print-based, the subscription model worked well, especially for institutions with deep pockets. That all changed with the Internet. Not only did the scope and volume of research increase rapidly, so did the perception that all information should be easily findable via search engines.
The Internet expanded the audience for research outside traditional institutions—to literally anyone with a connected device. With this expansion, the disparity between the well-funded and those less fortunate became acute. As it did with other publishing workflows, this disruption drove a need for new economic models for scholarly publishing.
Open Access Basics
Advocacy for less fettered access to knowledge is nothing new. But the current Open Access (OA) movement began in earnest in the early 2000s, with the “Three Bs” (the Budapest Open Access Initiative, the Bethesda Statement, and the Berlin Declaration by the Max Planck Institute). Much of the impetus came from the Scientific, Technical, and Medical (STM) publishing arena, and from research funding and policy entities like the European Commission and the U.S. National Institutes of Health. The latter’s full-text archive of free biomedical and life sciences articles, PubMed Central (PMC), is a leading example—backed by a mandate that the results of publicly funded research be freely available to the public.
In a nutshell, Open Access consists of two basic types—each with its own variations and exceptions. “Green” OA is the practice of self-archiving scholarly articles in a publicly-accessible data repository, such as PMC or one of many institutional repositories maintained by academic libraries. There is often a time lag between initial publication—especially by a subscription-based journal—and the availability of the archived version.
The alternative is the “Gold” OA model. It includes a growing number of journals, such as the Public Library of Science (PLOS), that do not charge subscription fees. Instead, they fund the cost of publishing through article processing charges (APCs) and other mechanisms. Although APCs are commonly thought of as being paid by the author, the real situation is more complex. Often, in cases where OA is mandated, APCs are built into the funding proposals, or otherwise factored into institutional and research budgets. PLOS and other journals can also waive APCs, or utilize voluntary funding “pools,” for researchers who cannot afford to pay them.
The appeal of Open Access is obvious to researchers and libraries of limited means. It also has the potential to accelerate research—by letting scientists more easily access and build upon others’ work. But for prestigious institutions, publishers, and their partners, the picture is more complicated.
Publishers in particular can be hard pressed to develop and enhance their brand—or offer a multitude of services that scholars may take for granted—when constrained by the APC funding model. (Those challenges will be addressed in a future blog.)
Misconceptions, Problems—and Solutions
Even today, researchers are not always clear about what Open Access means for scholarly publishing, and research librarians have their work cut out for them. One common misconception they cite is that OA journals lack an adequate peer review process—a perception fueled by disreputable or “predatory” journals that continually spam researchers with publication offers. Librarians counter this with a growing arsenal of blacklist and whitelist sources, such as the Directory of Open Access Journals.
Perhaps a major contributor to the uncertainty surrounding OA is the practice of openly publishing “preprint” versions of articles prior to—or during the early stages of—the peer review process. Sometimes, this is part of the researcher’s strategy to secure further funding, but it can fuel the mistaken notion that peer review is not required in OA publishing workflow. Distinguishing preprints from final OA articles must be a goal for publishers and their partners.
Another problem is scholars’ unfamiliarity with the OA-driven changes in publishing workflows. Gold OA journals—particularly those involved in STM publishing—are usually quite adept at guiding authors through the publication process, just as their subscription-based counterparts and publishing service providers have been. For example, the practice of assigning Digital Object Identifiers (DOIs), ISSNs, and other metadata to scholarly publishing works is becoming increasingly efficient for both Gold OA and subscription journals.
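The DOI mentioned above has a deliberately simple structure: a registrant prefix (always beginning with "10.") and a registrant-chosen suffix, separated by the first slash. A minimal parsing sketch, using the DOI Foundation's own example identifier (10.1000/182, the DOI of the DOI Handbook):

```python
def split_doi(doi: str) -> tuple:
    """Split a DOI into (prefix, suffix). Per the DOI syntax, the prefix
    begins with '10.' and the first '/' separates it from the suffix."""
    doi = doi.strip()
    # Strip common resolver-URL forms, if present
    for base in ("https://doi.org/", "http://doi.org/", "doi:"):
        if doi.startswith(base):
            doi = doi[len(base):]
            break
    prefix, _, suffix = doi.partition("/")
    if not prefix.startswith("10.") or not suffix:
        raise ValueError("Not a valid DOI: %r" % doi)
    return (prefix, suffix)

print(split_doi("https://doi.org/10.1000/182"))  # ('10.1000', '182')
```

Normalizing identifiers this way at ingestion is one of the small, automatable steps that keeps Gold OA and subscription workflows equally efficient.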
Green OA is a thornier problem for traditional publishing workflows. Each institutional repository is separate from the others—with its own funding sources, development path, and legacy issues. A common approach to article metadata, for example, has not happened overnight. Fortunately, organizations like Crossref are working with multiple partners and initiatives to make these workflows universal—and transparent to the researcher.
Perhaps the biggest issue posed by OA is the fate of traditional, subscription-based journals. Despite the push to “flip” journals from a subscription model to Open Access, there are cases where this is simply not feasible or even desirable. Many journals have a large subscriber base of professionals who, although they value the research, do not themselves publish peer reviewed articles. This is especially true for STM publishing. Some of these journals have adopted a “hybrid” approach, charging APCs for some articles (which are available immediately) while maintaining others for subscribers only. These are eventually made Open Access under the Green model, especially when Open Access is a funding requirement.
Scanning the Horizon
As we will discuss in future blogs, publishers and their service providers are exploring better ways to adapt their publishing workflows to the realities of OA and hybrid journals. In some cases, such as metadata tagging, XML generation, and output to print and online versions, these workflows can be highly automated. In others, publishers must find cost-effective ways to add value—while being as transparent as possible to the authors and users of journal content.
Despite these challenges, Open Access is changing the scholarly publishing landscape forever. There is a compelling need for researchers to find and build upon the research of others—each needle buried in a haystack of immense proportions—to advance the human condition. Publishers and their service partners are well positioned to make that open process accessible and fair to all.
Resources for Publishers
Cenveo Publisher Services now offers peer review management as a service. Journal publishers depend on the peer review process to validate research and uphold the quality of published articles. With deep expertise in scholarly publishing, our staff is fluent in all peer review models as well as the nuances of major peer review systems.
Customized peer review management solutions are based on each publisher’s workflows and business requirements. Peer review management is offered as a stand-alone service or integrated with Cenveo’s full-service journal production model. Dedicated staff work exclusively on peer review---maintaining deadlines, communicating with reviewers, and streamlining responses to authors. The service is bundled with regular performance reports that detail submission numbers, processing times, decision rates, and more.
Click the link below to learn more about this new service offering.
Resources for Publishers
Everyone Has a Story to Tell
Videos aid learning. Videos and animation are at the top of the elearning food chain. Whether it's within a traditional elearning course or as an independent asset, animated videos help learners visualize and understand complex concepts.
Increasingly, across all the markets we serve---journal publishers, K12 educational publishers, higher ed publishers, elearning providers, magazine publishers---all are interested in transforming complex content into animated video shorts.
Conceptualization and Production
Cenveo Publisher Services provides a blended team of creatives, editors, and technologists who transform a fuzzy vision into distinct products for use in digital publications, websites, and elearning courses. Our specialists include
- instructional designers
- subject matter experts
- multimedia specialists
- graphic visualizers
We work with our customers to provide the full range of services around animation, or à la carte options, including
- content creation
- visual storyboarding
- art creation
- photo/video research and procurement
- permissions management
- audio recording
- live action shoots
- video editing and packaging
- accessibility--WCAG and Section 508 compliance
Animation Sample: SWOT Analysis
Have a look at an animated short we created to explain what a SWOT analysis is and why it's beneficial.
The IDPF and W3C are working to merge the two organizations. Working together, they will strive to foster the global adoption of an open, accessible, interoperable digital publishing ecosystem that enables innovation. The primary motivation for combining the IDPF with the W3C is to ensure that EPUB’s future will be well-integrated with, and in the mainstream of, the overall Open Web Platform.
The primary goal is to ensure that EPUB remains free for all to use by evolving future EPUB major version development to W3C's royalty-free patent policy.
A committee called "Save the IDPF. Save EPUB." has formed and is publicly stating its dissent against the merger. IDPF Executive Director Bill McCoy also responded eloquently to the group's concerns on the IDPF website:
Both of these pieces are required reading for anyone in the publishing industry and especially for book publishers. Cenveo Publisher Services is a member and supporter of the IDPF and believes that the EPUB community will be enhanced by the merger with the W3C.
What are your thoughts on the merger and the future of EPUB?
"Time reveals truth."
As 2017 quickly approaches, we're sure to read, learn, and understand more about the role scholarly publishing will play in our post-truth world. Content validation, peer review, image forensics, traditional citation databases---these are long-established and critical components of the scholarly publishing process. While the demand for increased speed to publication has become a critical measure of a journal publisher's success, editorial integrity and quality remain the gold standard by which publications are judged.
Kalev Leetaru, a contributor to Forbes, recently wrote "How Academia, Google Scholar And Predatory Publishers Help Feed Academic Fake News." In the article he shares a number of experiences and conversations that illustrate how content validation is not at the forefront of, or even a consideration in, some people's minds:
- "Not a day goes by that an academic paper doesn’t pass through my inbox that contains at least one claim that the authors attribute to a source it did not come from."
- "I constantly see my own academic papers cited as a source of wildly inaccurate numbers about social or mainstream media where the number cited does not even appear anywhere in my paper."
- "...many [graduate students] I’ve spoken with have never even heard of more traditional bibliographic search engines and prefer the ease-of-use and instant access of Google Scholar for quick citation searches."
- "The Editor-in-Chief of one of the world’s most prestigious and storied scientific journals recently casually informed me that his journal now astoundingly accepts citations to non-peer-reviewed personal web pages and blog posts as primary citations supporting key arguments in papers published in that journal."
Within scholarly publishing the conversation around "Open" echoes louder all the time. The first SSP Focus Group meeting on January 31, 2017 is on the topic of "Open Data, Science, and Digital Scholarship." PSP's Annual Conference (February 1 to 3) will discuss "Adding Value in the Age of Open."
The concept of "open" is not a new one. Though the term Open Access publishing started to proliferate in the early 2000s, the idea has been around for some time. Computer scientists had been self-archiving via anonymous FTP since the 1970s, and physicists had been self-archiving in arXiv since the 1990s. In 1994, Stevan Harnad issued "The Subversive Proposal," calling on all authors of "esoteric" research writings to archive their articles for free for everyone online.
Leetaru's article suggests that the combination of academia, Google Scholar, and predatory publishing practices play a role in the proliferation of fake news. One could also maintain that the scholarly publishing process plays a pivotal role in combating fake news.
How is your publishing organization navigating the challenges of open in our internet-connected world? What are the consequences of our movement into a more open ecosystem in the scholarly publishing community? Can quality and peer-reviewed content override non-peer-reviewed personal web pages and blog posts?
Time will tell.