Smart Suite 2.0 Released - A New Approach to Pre-editing, Copyediting, Production, and Content Delivery

Smart Suite Version 2.0 is a cloud-based ecosystem of publishing tools that streamlines the production of high-quality content. The system features a completely redesigned user interface (UI) and tighter integration with high-speed production engines to solve the challenges of multi-channel publishing.

Smart Suite 2.0 is the next-generation publishing engine that focuses on a combination of artificial intelligence, including NLP, and system intelligence that eliminates human intervention and achieves the goal of high-speed publishing with editorial excellence. Smart Suite auto-generates multiple outputs, including PDF, XML, HTML, EPUB, and MOBI, from a manuscript in record-setting time.
— Francis Xavier, VP of Operations at Cenveo Publisher Services

Offering a fresh approach to streamlining production, the unified toolset comprises four modules that seamlessly advance content through publishing workflows while validating and maintaining markup behind the scenes.

  • Smart Edit is a pre-edit, copyedit, and conversion tool that incorporates natural language processing (NLP) and artificial intelligence (AI) to give publishers not only better editorial quality but also better, faster markup and delivery to output channels.
  • Smart Compose is a fully automated production engine that ingests structured output from Smart Edit and generates page proofs. Designed to work with both 3B2 and InDesign, it uses built-in styles based on publisher specifications to ensure consistent, high-quality layouts.
  • Smart Proof provides authors and editors with a browser-based correction tool that captures changes and allows for valid round-tripping of XML.
  • Smart Track brings everything together in one easy UI that logs content transactions. The kanban-style UI presents a familiar workflow overview with drill-down capabilities that track issues and improve both system and individual performance.

Smart Suite is fully configurable for specific publisher requirements and content types. Customized data, such as taxonomic dictionaries, and industry integrations, such as FundRef, GenBank, and ORCID, enhance the system based on publisher requirements.

 

Download Brochure

Taylor & Francis Group Awards Full-Service Production for Global Journal Content to Cenveo

Cenveo’s Technological Innovation Aligns With Taylor & Francis’ Journal Publishing Vision

Cenveo announces a major increase in full-service content production for Taylor & Francis’ global journal production program. Taylor & Francis selected Cenveo as a core content service provider to support its continued growth.


As a world-leading academic and professional publisher, Taylor & Francis cultivates knowledge through its commitment to quality. Taylor & Francis identified in Cenveo a shared vision to develop production workflows designed to improve the velocity of research dissemination. This planned strategic initiative enhances the customer experience for Taylor & Francis’ contributor base, particularly newer generations of researchers and scientists, without alienating its traditional market.

“The critical piece that convinced us Cenveo was the right partner was that their technology stack supports our publishing model and provides real-world, expedited publication turnaround times using AI and natural language processing technology,” explains Stewart Gardiner, Global Production Director of Journals at Taylor & Francis Group. “The organizational and operational innovations Cenveo proposed to support a rapid scale-up in production volumes were something we hadn’t seen from other providers and were clearly based on lessons learned in previous ramp-ups.”

In February 2018, Cenveo announced a financial restructure and reorganization to strengthen its fiscal health. Mr. Gardiner remarks, “Given the company is currently reorganizing following a Chapter 11 process, our legal and financial people looked at Cenveo closely and came to the view that this is a relatively straightforward debt for equity restructure. Refinancing of this sort is not out of line with what one might expect for a company in Cenveo’s market position, scale, and acquisition history.”

Cenveo and Taylor & Francis have shared a long work history prior to this fivefold increase in volume. The transition process has already begun and onboarding the additional Taylor & Francis work is scheduled to take place in structured phases throughout the remainder of 2018.


“This major win is a result of considerable work and effort that we have put into the next generation of Smart Suite combined with a focus on operational excellence,” explains Atul Goel, EVP Global Content Operations and President and COO of India Operations at Cenveo. “We are grateful for the trust placed in Cenveo by Taylor & Francis and heartened that Cenveo’s long-term vision of innovative publishing workflows aligns with a global leader in publishing.”

Cenveo is consistently rated as one of the highest performing content service providers by its customers. Cenveo’s ongoing commitment to publishers and extensive experience with volume ramp-up is further demonstrated by its significant investments in technology and staff.

Accessibility FAQs

The topic of accessibility is a priority for all types of publishers in 2018, and we project that this is the year the majority of publishers will invest in making content accessible to all readers.

Cenveo Publisher Services recently hosted a webinar on accessibility: "Digital Equality - The Importance of Accessibility in Your Publishing Strategy." If you did not catch the live webinar, you can stream it here. We received many great questions during the webinar but ran out of time to answer every one!

Following is a list of FAQs about content accessibility:

For decorative images, can you use alt text that reads something like “decorative image, yellow tulips,” or is the null tag better?

A: Individuals who use read-aloud or screen reader software frequently experience what’s called ‘audio fatigue.’ To prevent it, you want to limit what they have to listen to. So if an image is purely decorative, it should be skipped completely.

If you are using HTML or PDF, use an empty string (“”) for null text.

In MS Word, you typically should leave the description field blank instead of entering “”, because Word will read the quotation marks aloud as “begin quote, end quote,” and the reader will have to listen to that. The meaning will still be understood, but it's unnecessary and distracting.
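For example, here is a minimal HTML sketch of the two cases (the image file names are hypothetical):

    <!-- Decorative image: empty alt text tells screen readers to skip it -->
    <img src="tulip-border.png" alt="" role="presentation">

    <!-- Informative image: concise alt text conveys the content -->
    <img src="fig-tulips.png" alt="Yellow tulips blooming in a spring garden">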

Should alt text be limited to 130 characters?

A: Best practice is to use 4 to 10 words for short alt text and not exceed 100 characters in total. However, the long description should be detailed and describe the image in a meaningful way.
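As an illustration, short alt text can be paired with a longer description elsewhere on the page; in HTML, one common approach is aria-describedby (the chart, file name, and IDs here are invented):

    <img src="rainfall-2017.png"
         alt="Bar chart of monthly rainfall in 2017"
         aria-describedby="rainfall-desc">
    <!-- The long description lives in regular page text -->
    <p id="rainfall-desc">Rainfall peaks in June at 110 mm, declines steadily
    through autumn, and reaches its minimum of 20 mm in December.</p>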

Who should write the alt text? Author, copyeditor, production editor? Our eproduction team is unsure whether we can just write alt text (especially when other people are reluctant to do so).

A: Writing alt text requires an understanding of alt-text writing parameters (accessibility) AND subject-matter knowledge, especially for complex images. The best practice is to work with a service provider fluent in the process and then have the author review.

As a beginner in this field, I'm interested in the basic technical details of what "semantic structure" means and which "metadata" should be accessible.

A: Semantic structuring provides meaningful tag names for key elements in the content (to facilitate search and discovery). Metadata comprises details about the book, such as the title, author, ISBN, and subject, as well as the accessible qualities the product possesses. Appropriate semantic structuring and metadata depend on how the content is published and the formats produced. There are specific guidelines for web content, eBooks (EPUB3), PDF, digital products (multimedia), etc.
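A minimal sketch of both ideas in an HTML5 page, using Dublin Core meta tags as one common metadata convention (the title, author, and ISBN are placeholders):

    <head>
      <!-- Metadata: details about the work -->
      <title>Introductory Botany</title>
      <meta name="DC.creator" content="A. Author">
      <meta name="DC.identifier" content="ISBN 978-0-000-00000-0">
    </head>
    <body>
      <!-- Semantic structure: meaningful tag names for key elements -->
      <article>
        <h1>Chapter 3: Photosynthesis</h1>
        <section>
          <h2>The Light Reactions</h2>
          <p>...</p>
        </section>
      </article>
    </body>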

If you would like more instruction and help, please click the Learn More link at the top of this page; we're happy to help.

Videos with audio should have captions, a transcript, and a video description. Is this a best practice recommendation or are all three required by law?

A: All three are part of the Section 508 requirements and WCAG 2.0. So yes, all three are required to make your video fully accessible to deaf, blind, and deaf-blind students.

What is the best way to make chemistry content accessible - in some cases thousands of molecular images? Considering ChemML is not broadly used or browser compatible, is it best to add alt text for each molecule?

A: Yes, capturing alt text for each molecule is the best approach, considering the lack of support from assistive technologies and screen readers. A library of the molecules with alt text can be created so the descriptions can be reused.

Is there a standard for accuracy of closed captioned transcription of recorded educational/technical content?

A: The FCC closed captioning quality standards went into effect in April 2014. They apply to televised programming in support of the hearing impaired, but many of the standards apply to educational videos as well. More information can be found here.

How do I find out more about building accessibility in Adobe InDesign that transfers to Adobe Acrobat PDF files?

A: Here is a good resource from Adobe: Creating accessible PDF documents with InDesign CS6. We can help create validated accessible files or test ones you've created. Click the Learn More button at the top of this page for more information.

On a math test, if we describe an image of a graph in alt text, we have technically answered the question. How would you make the image accessible to blind students without giving away the answer?

A: In that case, you would describe the visual appearance of the chart or the graph without interpreting the results. And you can find good examples of this at the Diagram Center website.

If you’re using a chart or a graph on a web page, you may want to provide an interpretation of the data so students will learn how to interpret it. But if it’s on a quiz or a homework assignment, you only want to describe the visual appearance of the chart or the graph so that the student can draw inferences themselves.

Can tables be accessible? Can you group a table and just give a summary? Or do you need to tag the table with header rows and table cells, etc.?

A: Tables can be made accessible. Tables should be tagged per the accessibility guidelines, and complex or large tables should be accompanied by a summary.

Tables are best for displaying data accessibly, but support depends on the technology or software you use to create them. MS Word, for example, does not allow you to designate column headers, so you should use only simple tables there.

You can create accessible tables using HTML. If you use a learning management system, it should have an HTML editor. In general, you should not use nested tables; break them up into several smaller, individual tables.
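For reference, a simple accessible HTML table tags its headers so a screen reader can announce them with each data cell (the data here are invented):

    <table>
      <caption>Monthly rainfall (mm)</caption>
      <thead>
        <tr><th scope="col">Month</th><th scope="col">Rainfall</th></tr>
      </thead>
      <tbody>
        <tr><th scope="row">June</th><td>110</td></tr>
        <tr><th scope="row">July</th><td>85</td></tr>
      </tbody>
    </table>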

Do you know if publishers have a department devoted to making their products accessible?

A: The degree to which publishers are producing accessible products varies greatly. However, as regulatory deadlines kick in, more educational publishers are discovering that they risk losing substantial market share if they cannot provide content in an accessible format.

What is the breakdown of different disabilities among students that constitutes one-third to one-half of students with disabilities?

A: Please refer to this report: The State of Learning Disabilities. Though it was published in 2014, the information it contains is still useful.

Can you make a separate page for something that can’t be made accessible (say, using a Flash element)?

A: Absolutely. As long as you make the equivalent content readily available.

Does WCAG 2.0 cover dyslexic-friendly fonts?

A: No, it does not. The one success criterion that mentions typeface design is at Level AAA, and even that only recommends sans serif typefaces rather than requiring them for compliance.

What about dynamic Content Management Systems, like WordPress? Or eLearning authoring applications? Any recipes for Articulate, Camtasia, Lectora, or Adobe eLearning Suite?

A: WordPress can be made 100% WCAG 2.0 compliant. So can many other CMSs. We have a course and learning guide that goes through all of WCAG 2.0, including recipes for special platforms such as Articulate.

I’ve seen the Section 508 checklist. However, is there a checklist of things we can/should check for in the documents that you spoke about?

A: Yes. Essentially you need to walk through all the applicable WCAG 2.0 success criteria through the lens of a document. Simple checklists are available on a number of accessibility websites.

What if my website contains content that cannot be made accessible?

A: Some content, by its very nature, cannot be made accessible. In such cases, the information must be made available to individuals with a disability in an equally effective manner. The Technical Guidelines provide suggestions for providing accessible descriptive content through which a person using accommodating technologies can understand what the inaccessible content is about. Note that using more established or more widely used technologies may be equally effective for all students and allow for full accessibility.

Can I just cut and paste an image caption into an alt text field?

A: No. Alternative text should not be redundant with adjacent or body text.

We make content accessible only when required; typically after publication. Would it be more expensive to integrate accessibility for all titles at the onset of production?

A: Integrating accessibility at the onset of production is the recommended approach. It not only helps control cost but also ensures that the multiple products generated at the end of the production cycle will inherit the accessible qualities, with no additional spending required to retrofit the product for accessibility. It is more expensive in the long run to retrofit accessibility after publication.

What content requires a text equivalent?

A: Anything that is not text must have a text equivalent: pictures, image maps, video, sound, form controls, scripts, and colors.

Do all images need a text equivalent?

A: Any image that conveys information should have a text alternative. However, images that do not convey any information (decoration) should have an empty text equivalent (in HTML, simply alt=""), so that people and assistive technologies know they can be ignored.

How is Cenveo Publisher Services working with higher education publishers to move them towards accessibility?

A: Accessibility is integrated in our workflows to produce products that are born accessible. We endeavor to ensure all products are accessible and educate customers on the importance and benefits of accessibility, as well as the legal compliance mandates. We have always recommended a born-accessible product rather than retrofitting content for accessibility, which typically involves additional costs.

How can I get started making my content accessible?

A: Easy! Just grab a copy of our accessibility RFQ form, fill it out, and return it to info.psg@cenveo.com, and we will get you started.

 

View Webinar



Society for Scholarly Publishing Turns 40

The Society for Scholarly Publishing celebrates its 40th Anniversary this year. To celebrate, a number of special events are scheduled to take place at the Annual Meeting. You won't want to miss this year's event.

The 40th Anniversary Task Force has also launched a new microsite for 2018 to celebrate SSP’s 40th anniversary. As part of a year-long celebration, the website will feature photos, documents, and news from SSP’s archives as well as interviews with long-time members of the SSP community.

The SSPat40 website is updated regularly, so you will want to check back often to browse the historical content we unveil: old photos, past topics of interest to the community, and more. If you have any old pictures or ideas you would like to share on the website, please contact me.

Finally, keep apprised of ongoing developments and share news about SSP at 40 with the hashtag #SSPat40!


Marianne Calilhanna

Marianne is director of marketing for Cenveo Publisher Services. She started her career in editorial and production, working on STM primary and review journals. During her 28+ year career she's worked as a book editor, SGML (remember that?!) editor, and managing editor in addition to marketing-related positions. Technology, production, and people---these are just a few of her favorite things.

2018 U.S. Postal Rate Increase

The 2018 USPS rate increase became effective Sunday, January 21, 2018. The overall rate increase across all classes is 1.92%, but its financial effect on each mailing will differ based on

  • mail class used
  • mailing piece count
  • mail piece characteristics.

There is also a Postal Regulatory Commission (PRC) proposal to modify the pricing authority of the USPS under the current 2006 Congressional framework (the Postal Accountability and Enhancement Act, or PAEA). Currently, the USPS is constrained from raising the rates for its market-dominant products and services beyond the average consumer price index (CPI) increase for the prior year. Competitive products can seek increases based on market elasticity.

The PRC has proposed an average increase ceiling of the CPI percentage plus 2% annually for the next 5 years. After that time, the USPS' financial standing will be reviewed to develop future pricing authority. The PRC also challenged the USPS to increase prices for products whose costs are not covered by their revenue. The major products impacted include

  • periodicals class
  • marketing mail flats
  • bound printed matter flats

Higher-than-normal increases are likely in these areas, but the average increase must stay within its ceiling.

The USPS still does not have a Board of Governors (BOG). The White House has recommended three candidates for Congressional approval. The industry is asking the White House to recommend a fourth governor because procedural requirements call for the vote of four non-postal BOG members. The USPS cannot offer 2018 promotional incentives without BOG approval.

We will continue to consult with our customers about the rate increase and work to provide the most economical postage solutions for publishers' mailings and direct mail. Have a question for Cenveo's VP of Postal Affairs and Distribution? Simply click the button below.

 


View From a Publishing Consultant: 2018 Trends in Scholarly Publishing

This short video by John Bond of Riverwinds Consulting lists some of the trends he foresees in scholarly publishing this year.

 
 

The Center for Open Science | Preregistration Challenge

Some of the world's leading journals are taking steps to maximize the transparency and reproducibility of science by promoting the preregistration of research. Those journals include

  • Frontiers in Human Neuroscience
  • Journal of Experimental Social Psychology
  • Journal of Memory and Language
  • Memory & Cognition
  • Nature & Nature Research Journals
  • Ecology
  • Proceedings of the NAS
  • Brain and Behavior
  • Cognition & Emotion
  • Cortex
  • Learning & Behavior
  • PLOS Biology
  • Psychological Science
  • Science

Why Should Research be Preregistered?

When research is preregistered, an advance commitment is made before data are gathered. Preregistration separates hypothesis-generating (exploratory) research from hypothesis-testing (confirmatory) research. Both are important, but the same data cannot be used both to generate and to test a hypothesis, which can happen unintentionally and reduce the clarity and quality of results. Removing potential conflicts through planning improves the quality and transparency of research, helping others who may wish to build on it.

The Center for Open Science (COS) is promoting preregistration through its Preregistration Challenge. The COS is giving away $1,000 each to 1,000 researchers who preregister their projects before they publish them!

Publishers can support this initiative by reaching out to authors and promoting the challenge. Following is an introductory video that explains the challenge; you can learn more by clicking here.

 

Publishing Defined: What is Open Peer Review?

 

This short video by John Bond of Riverwinds Consulting talks about the different types of Open Peer Review. John recently published a new book titled "Scholarly Publishing: A Primer." 

 

Learn About our Peer Review Services for Publishers



W3C Publishing Summit 2017

Guest blog by Evan Owens

The first-ever W3C Publishing Summit took place in San Francisco, November 9 to 10, to discuss how web technologies are shaping publishing today, tomorrow, and beyond. Publishing and the web interact in innumerable ways. The Open Web Platform and its technologies have become essential to how content is created, developed, enhanced, discovered, disseminated, and consumed online and offline.

Background on IDPF and W3C

In February 2017, the IDPF (International Digital Publishing Forum) merged into the W3C. IDPF members have since been joining the W3C, and new groups have been formed, including the W3C Publishing Working Group, the EPUB Community Group, and others.

Keynote: The Future of Content by Abhay Parasnis – CTO, Adobe

The internet is open to all of the world's communications, and “content publication” has expanded to a very broad level via the Internet. Businesses are trying to reach audiences in a personalized fashion. Artificial Intelligence (AI) and Machine Learning (ML) are important for content discovery, delivery, and personalization. The W3C does important standards development, but as technology moves fast, how should we coordinate successfully?

A major goal of the W3C is to define a new Portable Web Publication (PWP) content format that will merge HTML and EPUB and replace PDF. EPUB 4.0 is likely to become a subset of that new PWP standard.

Following are some of my observations from the various presentations and discussions from the conference. Feel free to add your thoughts and takeaways in the comments section!

Content Platforms and Publishers

  • The majority of eBook content is still in EPUB2
  • EPUB3 is big in Japan and China but not yet common in English-language publications
  • Most EPUB content that fails validation comes from US publishers
  • Publishers tend to overuse fixed layout, especially for academic or instructional content
  • The future will be CSS, interactivity, and accessibility

Digital Publishing in Asia, Europe, and Latin America

  • The UK is the biggest eBook market, with 575K new eBooks per year
  • Amazon is the leading EU bookseller (90% of UK sales)
  • Japan produces approximately 500K eBooks per year
  • Japan has been using EPUB 3.0 since 2011; 100% of old files were migrated to the new format
  • The market is growing in Korea and China
  • In Latin America, eBooks are primarily EPUB 2.0; 3.0 hasn’t been adopted yet
  • 55% of publishers in Latin America have not yet started digital content production

Accessibility in Publishing and W3C

  • Accessibility in digital publishing is a key issue and has been built into EPUB
  • W3C implementation goals include supporting EPUB3 accessibility and collaborating with the W3C WCAG
  • DAISY has built a checking tool called “ACE”; it is now in beta and available for testing
  • Cenveo Publisher Services provides accessibility services and testing

Educational Publishing

  • Personalized learning challenges include the learning platform and the metrics
  • There is now a major move from books to digital e-learning platforms
  • Learning is now informed by data-driven insights; analytics tools add value

Creating EPUB Content that Looks and Works Great Everywhere

  • Microsoft added an EPUB reader to the Windows 10 MS Edge web browser
  • Almost 90% of eBooks are EPUB2, and even content produced in 2017 is only 62% EPUB3
  • Issues for EPUB content creation and rendition include
    • Many different screen sizes and orientations (e.g., phone, tablet, computer)
    • Reader requirements: mobility, classroom usage, accessibility
    • Pagination works differently in different reading systems
    • Tables and anything with a fixed width are risky
    • Captions not staying with images due to page breaks
    • Background images breaking when flowing across pages
    • CSS layout failures for colored text
    • Supporting audio reader software through language metadata (see the sketch after this list)
    • Fixed layout is never 100% perfect
    • Don’t use SVG for text layout
    • Test content on several EPUB reader devices, etc.
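On the language-metadata point above, here is a sketch of how an EPUB can declare language at the package level and within a content document so read-aloud software knows which pronunciation rules to apply (the text and values are invented):

    <!-- In the OPF package metadata -->
    <dc:language>en-US</dc:language>

    <!-- In an XHTML content document; inline spans can switch language -->
    <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
      <body>
        <p>The French motto <span lang="fr" xml:lang="fr">liberté</span> appears throughout.</p>
      </body>
    </html>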

Publication Metadata

  • Consumer metadata versus academic metadata remains a key challenge
  • Standards are adopted only slowly; e.g., ONIX 3 was published in 2009, but by 2017 only about 50% of the industry had adopted it
  • Autotagging versus human tagging; machines are more consistent
  • There are some 105 metadata standards

Cenveo Publisher Services is a proud member of the W3C Publishing Working Group. The issues discussed at the W3C Publishing Summit are ones we address every day with academic, scholarly, and education publishers. We look forward to working with you in 2018 on innovative publishing solutions that improve editorial quality and streamline production while continuously addressing costs. Let us know how we can help.

 



Open Practice Badges: A Primer and How to Get Started

The Center for Open Science (COS) provides tools, training, support, and advocacy that help researchers and scholars manage, share, and discover scientific research. The COS’ mission is to “increase the openness, integrity, and reproducibility of scholarly research.” Acceleration of scientific progress can be a primary motivator for scholarship and a powerful driver of real solutions.

The COS develops software tools, workflows, data storage solutions, and more based on its free Open Science Framework (OSF). The OSF is an ecosystem of solutions, partnering companies, technologies, and ideas that support researchers across the entire research life cycle. One initiative that is gaining momentum is the use of Open Practice Badges in the publishing workflow.

Openness is a core value of scientific practice.
 

The scholarly publishing community agrees on the relevance and importance of open communication for scientific research and progress. In 2009 there were approximately 4,800 OA journals publishing approximately 190,000 articles; as of January 2017, there were an estimated 9,500 active OA journals. At Cenveo Publisher Services, we work with a large number of society and commercial publishers who have launched or are preparing to add OA publication models to their workflows.

Open Practice Badges on published content acknowledge authors’ use of open practices during the research life cycle and designate the published content accordingly.

Incorporating Open Practice Badges Into Publishing Workflows

By acknowledging open practices in scientific research, journal publishers can use badges in their publications to certify that a particular research practice was followed. Badges can be awarded to the published content as part of the peer review process or they can be awarded post-publication. As long as processes and practices are transparent, any organization can issue badges. Most publishers are awarding the badges during peer review. Publishing platforms and review services are likely to use the badges post publication.

For publishers, the journal awards the badge, and the badge is linked to the specific article. Each publisher tends to have its own method for incorporating badges into the published article. However, it is critical that the badge be machine discoverable and readable.
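As an illustration only—there is no single mandated markup—a badge might be embedded in the article HTML with alt text and a link back to the open artifact so that both humans and machines can discover it (the class name, file name, and URL pattern here are hypothetical):

    <a class="open-practice-badge" href="https://osf.io/registration-id">
      <img src="open-data-badge.svg"
           alt="Open Data badge: the data underlying this article are publicly available">
    </a>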

Detailed information on incorporating Open Practice Badges into your publication workflow can be found at the OSF Wiki page here.

Badge Overview

There are three badges currently used:

  1. Open Data
  2. Open Materials
  3. Preregistered

Following is an overview of the three badges and corresponding criteria. Detailed information is available on the OSF Wiki page, including corresponding links.

Open Data

The Open Data badge is earned for making publicly available the digitally-shareable data necessary to reproduce the reported results.

Criteria

Digitally-shareable data are publicly available on an open-access repository. The data must have a persistent identifier and be provided in a format that is time-stamped, immutable, and permanent (e.g., university repository, a registration on the Open Science Framework, or an independent repository at www.re3data.org).

A data dictionary (e.g., a codebook or metadata describing the data) is included with sufficient description for an independent researcher to reproduce the reported analyses and results. Data from the same project that are not needed to reproduce the reported results can be kept private without losing eligibility for the Open Data Badge.

An open license allows others to copy, distribute, and make use of the data while allowing the licensor to retain credit and copyright as applicable. Creative Commons has defined several licenses for this purpose, which are described at www.creativecommons.org/licenses. CC0 or CC-BY is strongly recommended.

Open Materials

The Open Materials badge is earned by making publicly available the components of the research methodology needed to reproduce the reported procedure and analysis.

Criteria

Digitally-shareable materials are publicly available on an open-access repository. The materials must have a persistent identifier and be provided in a format that is time-stamped, immutable, and permanent (e.g., university repository, a registration on the Open Science Framework, or an independent repository at www.re3data.org).

Infrastructure, equipment, biological materials, or other components that cannot be shared digitally are described in sufficient detail for an independent researcher to understand how to reproduce the procedure.

Sufficient explanation is provided for an independent researcher to understand how the materials relate to the reported methodology.

Preregistered/Preregistered+Analysis Plan badges 

The Preregistered/Preregistered+Analysis Plan badges are earned for preregistering research.

Preregistered

The Preregistered badge is earned for having a preregistered design. A preregistered design includes: (1) Description of the research design and study materials including planned sample size, (2) Description of motivating research question or hypothesis, (3) Description of the outcome variable(s), and (4) Description of the predictor variables including controls, covariates, independent variables (conditions). When possible, the study materials themselves are included in the preregistration.

Criteria for earning the preregistered badge on a report of research are:

  1. A public date-time stamped registration is in an institutional registration system (e.g., ClinicalTrials.gov, Open Science Framework, AEA Registry, EGAP).
  2. Registration pre-dates the intervention.
  3. Registered design and analysis plan corresponds directly to reported design and analysis.
  4. Full disclosure of results in accordance with registered plan.

Badge eligibility does not restrict authors from reporting results of additional analyses. Results from preregistered analyses must be distinguished explicitly from additional results in the report. Notations may be added to badges. Notations qualify badge meaning: TC, or Transparent Changes, means that the design was altered but the changes and rationale for changes are provided. DE, or Data Exist, means that (2) is replaced with “registration postdates realization of the outcomes, but the authors have yet to inspect or analyze the outcomes.”

Preregistered+Analysis Plan

The Preregistered+Analysis Plan badge is earned for having a preregistered research design (described above) and an analysis plan for the research and reporting results according to that plan. An analysis plan includes specification of the variables and the analyses that will be conducted. Guidance on construction of an analysis plan is below.

Criteria for earning the preregistered+analysis plan badge on a report of research are:

  1. A public date-time stamped registration is in an institutional registration system (e.g., ClinicalTrials.gov, Open Science Framework, AEA registry, EGAP).
  2. Registration pre-dates the intervention.
  3. Registered design and analysis plan corresponds directly to reported design and analysis.
  4. Full disclosure of results in accordance with the registered plan.

Notations may be added to badges. Notations qualify badge meaning: TC, or Transparent Changes, means that the design or analysis plan was altered but the changes are described and a rationale for the changes is provided. Where possible, analyses following the original specification should also be provided. DE, or Data Exist, means that (2) is replaced with “registration postdates realization of the outcomes, but the authors have yet to inspect or analyze the outcomes.”

What Journals Are Using Open Badges?

A list of journals currently using Open Practice Badges can be found here. The list continues to grow as more publishers understand the benefits of providing this acknowledgement to researchers and readers.


Cenveo Publisher Services is an advocate of Open Practice Badges. If your publishing organization would like to learn how we can support open badges in your workflow, feel free to reach out to us directly.

Are you currently using Open Practice Badges? Please share your findings or observations in the comments section below.


Innovative Research and Creative Output: From Ideas to Impact

Society for Scholarly Publishing - Philadelphia Regional Event

This post is a collaboration between SSP members, including Nicola Hill, Emma Sanders, and Adrian Stanley.

Left to right: Kathi Martin, Drexel Digital Museum; Jen Grayburn, CLIR Postdoc; Alex Humphreys, JSTOR Labs

On October 30th, the Society for Scholarly Publishing (SSP) hosted a regional event at the University of Pennsylvania, Van Pelt Library. The topic, "Innovative Research and Creative Outputs: From Ideas to Impact," brought together Philly-area publishers, librarians, and content professionals for a panel discussion on new and innovative methods of producing scholarship.

Jen Grayburn, CLIR Postdoctoral Fellow

Jen spoke about her use of Google Scholar, Sketchfab, and Unity in her work, which centers on the intersection of architecture and text. Using GIS (Geographic Information Systems) mapping software, Jen examines the locations of historic sites. She shared an example of a mapping she did of St. Magnus Cathedral in the islands off the north coast of Scotland. In this particular example, Jen generated a binary map that indicated what would and wouldn’t be visible on the ground from a certain height.

She uses GeoTIFFs (TIFF files encoded with geographical coordinates) to create a 3D topographic map illustrating what is visible and why. Eventually, these mappings were confirmed with on-site visits she conducted. In her work, Jen uses Sketchfab to store the large 3D modeling files.

Currently, there is a lack of standards around 3D scholarly outputs—how they’re reviewed, stored, and made accessible. 3D collections are siloed by institution—there is really no central repository. The only exception Jen cites is Duke University’s MorphoSource. For these reasons, evaluating and citing digital work is still a challenge.

Content in Studies in Digital Heritage is inextricably linked to the 3D models created in the course of those studies. There is a real need for community standards for 3D data presentation. Academic departments are generally slow to reward digital projects or to develop a process for incorporating these scholarly outputs in formal evaluations.

Archeologists with an interest in Jen’s work, for example, always want the original 3D model she created, not the version on Sketchfab. But these models haven’t been peer reviewed, and for that reason Jen is reluctant to provide them. In the near future, further development of community standards for 3D and VR creation and curation in higher education is certainly warranted.

Kathi Martin, the Drexel Digital Museum Project

Kathi Martin presented her work with The Drexel Digital Museum Project: Historic Costume Collection (digimuse)---a searchable image database comprising select fashion from historic costume collections. Initially, fashion images were highly protected through low-resolution, watermarked images on the website. Kathi explained that Polish hacktivists demonstrated to her how easy it is to remove the watermark and improve resolution.

The museum has always been driven by open access and open source to share information and further usage and research. Interoperability is key to the museum’s mission—this allows the data on the museum’s website to be easily harvested across browsers.

The museum has widened beyond Drexel’s collection; for example, Iris Barrel Apfel’s Geoffrey Beene collection was displayed, and that exhibit is archived on the museum site. QuickTime VR was used to film the collection and provide high-resolution captures of the fashion collections.

The DigiMuse technology used in the Drexel project provides a new level of engagement with the collections Kathi is preserving. Drexel's Digital Museum project website allows a site visitor to interact personally and actively with a distributed, collected narrative. The site includes rich metadata descriptions for every picture. The variety of contributions on the site, Kathi feels, stimulates varying and often deeply personal reactions.

She believes the site is very powerful due to its “baked-in connectedness.” Kathi closed with Grace Kelly’s gown, made by Givenchy in part out of actual coral (gasp!). The site complements the high-res images of the gown itself with media of Grace Kelly in the gown.

Alex Humphreys, JSTOR Labs

Alex discussed how JSTOR Labs applies methods and tools from digital scholarship to create tools for researchers, teachers, and students "that are immediately useful – and a little bit magical." JSTOR is part of ITHAKA, a non-profit devoted to digital sustainability.

Alex Humphreys, director at JSTOR Labs

Alex works with a team of five on innovative projects that benefit humanities scholars. He demonstrated JSTOR Labs’ Understanding Shakespeare tool, which uses the Folger Shakespeare Library’s digital versions of Shakespeare's plays to hyperlink each line of a play to a search showing all JSTOR articles that contain that line.

JSTOR Labs works from a philosophy of play—Alex sees what resources other organizations (like the Folger Shakespeare Library) bring, what Labs brings, and what kind of sandbox they might build in collaboration. Part of JSTOR Labs’ philosophy values what Alex calls “multi-disciplinarity.” For example, JSTOR Labs’ partnership with Eigenfactor (which measures influential and highly cited articles) has resulted in a tool that helps scholars discover the most influential articles in a given field or topic area.

JSTOR Labs also believes in hypothesis-driven development. Alex explained the key is ITERATING, ITERATING, ITERATING! Alex also presented topic modeling examples, including Reimagining the Monograph, which started from JSTOR Labs asking, "Can we improve the experience and value of long-form scholarship?"

The “topicgraph” provides a fingerprint of a monograph. Each term has a set of associated keywords; their presence in the text raises the probability that the term is being discussed.

Last but certainly not least, Alex unveiled an amazing, brand-new tool with the working name “Text Analyzer.” This tool is essentially a multi-language analyzer—text can be pulled from, say, a Russian Wikipedia entry. The tool will translate the text and list in English the topics included in the entry.

Alex notes that so much of digital humanities is about probabilities, not known data. That is why JSTOR Labs most frequently uses label modeling (as opposed to cluster topic modeling).


The Philadelphia SSP Regional Meetings are an excellent venue to engage with the scholarly and scholarly publishing community. All are welcome. To learn more, click here!

 

Rights & Permissions Service for Publishers

Copyright is far more than just a necessary evil to protect intellectual property from theft. Copyright furthers all creative interests by making the rich marketplace of ideas available to a wider audience. Resourceful rights and permissions management supports author content while maximizing the publisher’s budget.

Hiring one person to perform all the rights and permissions functions requires finding a pretty special person: an editorial specialist with enough copyright expertise to be an IP strategist who is also a skilled, digital-image-savvy photo researcher and database manager. That's why we offer R&P as a service for publishers.

Cenveo Publisher Services manages all aspects of text, image, and rich media content R&P. We assemble a team of project managers, assessment specialists, data entry staff, photo researchers, and permissions experts to support the management of R&P in your organization.

By identifying a rights strategy early, authors can stay on budget. Research and permissions run alongside production cycles with clearly defined milestones. Targeted international expertise also allows a spectrum of pricing options. Contact us to learn how we can support R&P for your journals or books program.

 

Download Brochure


Choosing a Journal or Book Printer

A great primer on finding a print partner by John Bond at Riverwinds Consulting. John's YouTube channel, Publishing Defined, is a great resource for scholarly and academic publishers.

Choosing a Journal or Book Printer: This short video by John Bond of Riverwinds Consulting discusses choosing a printer. Find out more about John Bond and his publishing consulting practice at www.RiverwindsConsulting.com. More videos on choosing a printer can be found at https://www.youtube.com/playlist?list=PLqkE49N6nq3hhpEzslKtzBHbxgWCmDvL4. John's new book is "The Request for Proposal in Publishing: Managing the RFP Process"; to find out more about the book, visit https://www.riverwindsconsulting.com/rfps/ or buy it at Amazon: https://www.amazon.com/Request-Proposal-Publishing-Managing-Process-ebook/dp/B071W7MBLM/ref=sr_1_1?s=books&ie=UTF8&qid=1497619963&sr=1-1&keywords=john+bond+rfps/. Send ideas for John to discuss on Publishing Defined.
 


Mail Delivery Update - Hurricane/Tropical Storm Harvey

As the nation continues to recover from Hurricane Harvey (now downgraded to a Tropical Rainstorm), postal operations have been significantly impacted in the region. The USPS provides updated information here.

Cenveo's Mailing Services

Interested customers must make their own decisions as to whether to include affected mail addresses within their unprocessed mailing files. The recovery process has just begun in a few areas, and the rain will continue in others for the remainder of the week. The Postal Service has not yet had ample time to assess its capability to serve flooded areas or even determine whether affected addresses can receive deliveries. It is interesting to note that the USPS is using Twitter to encourage displaced citizens to temporarily change their address as life-changing decisions are made.

During the Katrina tragedy, the USPS Address Management Center kept a separate file of undeliverable addresses, and mailers used the list to purge their mailings. The USPS seems to be trying to get ahead this time by encouraging changes of address.

All processed mail for the affected areas is likely being held back for a few days at USPS processing centers or set aside at a regional USPS processing site.

Click the image below to keep apprised of service disruption alerts.

Contact us if you would like to speak with one of Cenveo's USPS distribution specialists.


Accessibility: Because the Internet is Blind

Like the visually impaired, the Internet cannot “see” content the way a sighted human being does. It can only discover relevant content via searchable text and metadata. When publishers take the right steps to make content accessible, they also make it more discoverable.

Guest blog by John Parsons

In the past four blogs, we’ve discussed how to make different types of published content accessible to visually and cognitively impaired users. Throughout the series, we’ve covered the reasons why publishers should do so, including the moral argument and its related compliance requirements, such as Section 508, NIMAS, and WCAG 2.0. While digital workflows and service providers have made such compliance affordable and practical, there is another argument for accessibility—one that is a compelling benefit in the age of digital content: discoverability.

The Nature of the Internet

We tend to think of the Internet in general—and Web content in particular—as a visual experience. We view the screen as we would a printed document, albeit with far greater capabilities for interactivity and connection to other information. The tools for searching and discovering content are all visual as well. Typing in a phrase, scanning the results, and choosing what we want are all familiar, visually dependent habits.

However, what we are seeing is not the content, but an on-screen rendering. We’re seeing the programmed user interface. It may be highly accurate and functional, but it’s a product of underlying data. The technology itself does not “see” or experience the content as we do. It only handles data and its related metadata.

Discoverability Is the Key

In order to be found on the Internet, a piece of published content must have a logical, keyword-prioritized structure. It must not only have text strings that a search engine can find; it must also have standardized, commonly used metadata that corresponds to what human users expect to find. Well-structured XML serves that purpose for nearly all types of published content.

The good news is that accessibility and discoverability have the same basic solution: well-structured content and metadata. Best practices for one solution are applicable to the other!
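As a sketch of what “well-structured” means for a scholarly article, here is a fragment using JATS-style element names (the title and keywords are invented):

    <article>
      <front>
        <article-meta>
          <title-group>
            <article-title>Effects of Drought on Tulip Germination</article-title>
          </title-group>
          <!-- Keywords give search engines and indexers standardized hooks -->
          <kwd-group>
            <kwd>drought stress</kwd>
            <kwd>germination</kwd>
          </kwd-group>
        </article-meta>
      </front>
      <body>
        <sec><title>Methods</title><p>...</p></sec>
      </body>
    </article>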

This changes the equation for publishers faced with accessibility compliance issues. If they apply a holistic approach to well-structured XML content, they will improve their overall discoverability, and lay the groundwork for systematic rendering of their content in multiple forms—including HTML and EPUB optimized for accessibility.

Multiple Benefits

Every area of publishing benefits from greater discoverability. For journal and educational publishers, well-structured content can be more easily indexed by institutions and services, leading to higher citation and usage levels. For trade book publishers, discoverability translates to better search results and potentially more sales. For digital products of any kind, it means a better overall user experience, not only for the visually impaired but also for all users.

This is especially the case when it comes to non-text elements of published content. The practice of adding alt text descriptions for images and videos benefits not only the visually impaired reader. It also makes such rich content discoverable to the world.

Best practices for structuring content do not happen automatically. They require forethought by authors, publishers, and service providers. More importantly, they require a robust, standards-based workflow that includes searchable metadata and XML tags—applied automatically wherever possible and easily in all other cases.

The issues of accessibility are really only problematic when viewed in isolation. When viewed as a subset of a more compelling use case—discoverability—they become a normal and positive part of the publishing ecosystem.

 


Working With a Publishing Consultant

A short video by John Bond at Riverwinds Consulting. John's YouTube channel, Publishing Defined, is a great resource for scholarly and academic publishers.

 
 

Revenue Growth in Education, Scholarly, and Trade Book Publishing

The Association of American Publishers shared revenue figures in its StatShot report. Revenue is up 4.9% for Q1 2017 compared with Q1 2016.

Both education and scholarly publishers experienced slight revenue bumps during the first quarter of 2017, compared with the first quarter of 2016.

Higher Education course materials wins the greatest-growth award, reporting a $92 million (24.3%) increase, to $470.2 million, in Q1 2017 compared with Q1 2016. Revenues for Professional Publishing (business, medical, law, scientific, and technical books) were up $5 million (4.5%) to $119.5 million.

 

Accessibility for Trade Book Publishers

The venerable world of trade books has had accessibility options since the early 19th-century invention of Braille. However, only in the digital age has it become possible to make all books accessible to the visually impaired.

Guest blog by John Parsons

In the 1820s, Charles Barbier and Louis Braille adapted a Napoleonic military code to meet the reading needs of the blind. Today’s familiar system of raised-dot characters substitutes touch for vision and is used widely for signage and, of course, books and other written material. By the 20th century, Braille was supplemented with large-print books and records. For popular books, these tools became synonymous with trade book publishers’ efforts to connect with visually impaired readers.

However, these tools—particularly Braille—have significant drawbacks. Before the advent of digital workflows, producing a Braille or even a large-print book involved a separate design and manufacturing process, not to mention subsequent supply chain and distribution issues. But that has changed with the digital publishing revolution.

All Books Are “Born Digital”

With notable exceptions, trade books published since the 1980s started out as digital files on a personal computer. Word processors captured not only the author’s keystrokes but, increasingly, their formatting choices. (In the typewriter era, unless you count backspacing and typing the underline key, italics and boldface were the province of the typographer.)

On the PC, creating a larger headline or subhead, or a distinct caption, evolved from a manual step in WordStar or MacWrite to a global stylesheet formatting command. When these word processing files made their way to a desktop publishing program, all the 12-point body copy for a regular book could become 18-point type for a large-print version—with a single command.

Other benefits of digital-first content included a relatively easy conversion from Roman text characters to Braille, although that did not solve the actual book manufacturing process.

What really made the digital revolution a boon to accessibility was the rise of HTML—and its publishing offspring, eBooks. Web or EPUB text content can be re-sized or fed into screen readers for the visually impaired, but that’s only the start. It can also contain standardized metadata that a publishing workflow can use to create more accessible versions of the book.

Workflow Challenges

Trade books tend to be straightforward when it comes to accessibility challenges, but there are caveats that publishers and their service providers must address. The simplest case, of course, is a book that is almost entirely text, with no illustrations, sidebars, or other visual elements. In those cases, the stylesheet formatting done by the author and/or publisher can be used to create accessibility-related tags for elements like headlines and subheads, as well as to manage the correct reading order for Section 508 compliance.

Where things start to get tricky is when a book includes illustrations, or even special typographic elements like footnotes. To be accessible, the former must include descriptive alt text, which is usually best provided by an author, illustrator, or subject matter expert. Increasingly, just as writers became accustomed to adding their own typographic formatting, they may also include formatted captions containing this valuable, alt-text-friendly information.

For other visual elements, service providers must fill in the accessibility gaps that authors cannot easily provide. This may include a certain amount of redesign, such as placing footnotes at the end to ensure continuity of reading, and defining the logical flow of content and reading order for page elements like sidebars. Service providers also add semantic structuring, alt text image descriptions not included by the author, and simplification of complex elements like tables.
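A brief sketch of how those fixes can look in an EPUB chapter: alt text on the image, a caption kept with its figure, and the footnote marked up with EPUB semantics so it does not interrupt the reading order (the content here is invented):

    <section epub:type="chapter">
      <h1>Chapter 7</h1>
      <p>The harbor at dawn<a href="#fn1" epub:type="noteref">1</a> was silent.</p>
      <figure>
        <img src="harbor.jpg"
             alt="Fishing boats moored in a fog-covered harbor at sunrise">
        <figcaption>The harbor at dawn.</figcaption>
      </figure>
      <!-- Reading systems can render this as a pop-up rather than mid-text -->
      <aside id="fn1" epub:type="footnote">
        <p>1. The harbor was rebuilt in 1907.</p>
      </aside>
    </section>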

It’s All About Format

Book publishers are already well ahead of the curve when it comes to accessibility. As mentioned in a previous blog, the page-centric PDF format is problematic. Fortunately, except for print workflows, trade publishers do not use it for their end product. In most cases, books are also produced in EPUB format, which is a derivative of HTML. These formats are accessible by default, although they need to be enhanced to meet the requirements of WCAG 2.0 standards. The gap is small, however, and can be easily bridged by focusing on design, content structuring, and web hosting.

Book reading for the visually impaired is no longer restricted to the popular titles, and compensatory technology of past centuries. With the advent of digital publishing, and the workflows that support and enhance it, accessibility for all books is an achievable goal.

 


HTML 5.2 - W3C Candidate Recommendation and The Publishing Working Group

Today the W3C announced that HTML 5.2 is a W3C Candidate Recommendation. Over the next four weeks, the Advisory Committee will review the spec and determine whether to endorse it as a W3C Recommendation.

About HTML 5.2

This specification defines the 5th major version, second minor revision of the core language of the World Wide Web: the Hypertext Markup Language (HTML). In this version, new features continue to be introduced to help Web application authors, new elements continue to be introduced based on research into prevailing authoring practices, and special attention continues to be given to defining clear conformance criteria for user agents in an effort to improve interoperability.

HTML in the Wayback Machine

What the W3C website looked like on January 14, 1998 via the Wayback Machine.

While reviewing HTML 5.2, it's interesting to remember its origin story. The W3C provides a full history of HTML here, but the following are a few points of particular interest to the publishing community:

  • Originally, HTML was primarily designed as a language for semantically describing scientific documents.
  • For its first 5 years (1990-1995), HTML went through a number of revisions and experienced a number of extensions, primarily hosted first at CERN, and then at the IETF.
  • In 1998 the W3C membership decided to stop evolving HTML and instead begin work on an XML-based equivalent, called XHTML.
  • In 2003, the publication of XForms, a technology positioned as the next generation of Web forms, sparked a renewed interest in evolving HTML itself.
  • The idea that HTML’s evolution should be reopened was tested at a W3C workshop in 2004.
  • In 2006, the W3C indicated an interest in participating in the development of HTML 5.0.

It's a fascinating story and, like all history, important to revisit and understand.

W3C Today and the Publishing Working Group

The W3C website today.

In June, the W3C launched the new Publishing Working Group. The first-ever W3C Publishing Summit will be held 9-10 November 2017 in San Francisco, California. Evan Owens, VP of Publishing Technologies at Cenveo Publisher Services, will be there.

If you'd like to meet with Evan at the W3C Publishing Summit, you can make an appointment by clicking the button below.

 

Marianne Calilhanna

Marianne is director of marketing for Cenveo Publisher Services. She started her career in editorial and production, working on STM primary and review journals. During her 28+ year career she's worked as a book editor, SGML (remember that?!) editor, and managing editor, in addition to marketing-related positions. Technology, production, and people: these are just a few of her favorite things.

Accessibility for Education Publishers

K-12 and Higher Ed publishers provide complex content that is deeply intertwined with Learning Management Systems and other digital deliverables. That makes accessibility harder—and potentially more rewarding.

Guest blog by John Parsons



In our recent blog, we tackled the issues of accessibility—for visually and cognitively impaired readers—in the realm of scholarly journal publishing. The solutions are (fairly) straightforward for that industry, because you're dealing mostly with documents and lots of text. Other types of publishers deal with a broader range of issues and output channels, so for them accessibility is more complex. Near the top of this difficulty scale are education publishers.

Even before the rise of digital media, education textbooks—notably in the K-12 market—posed significant accessibility challenges. Complex layouts, laden with color, illustrations, and sidebars, made textbooks a rich visual experience. Such books can be a treat for sighted students, on whose behalf publishers have invested much thought and design research. For those less fortunate, however, a rich visual layout is an impediment.

Going Beyond Print

For printed textbooks, traditional accessibility fixes like large print and Braille are usually not cost-effective. Recorded audio has been a stopgap solution, but a costly one, unlikely to keep pace with the ever-increasing volume of educational material. Fortunately, the advent of digital media has far greater potential for making textbooks accessible.

When textbooks are produced as HTML or EPUB (but not PDF), the potential for greater accessibility is obvious. Type size can be adjusted at will. Text-to-speech can provide basic audio content with relative ease. Illustrations can be described with alt text—although care must be taken to ensure its quality. Even reading order and other “roadmap” approaches to complex visual layouts can make digital textbooks more accessible than a printed version could ever be.
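
The type-size point, for instance, comes down to how the stylesheet is written. A minimal sketch, assuming the content is styled with relative rather than fixed units:

    <style>
      /* Relative units (rem) scale with the user's or reading system's
         preferred text size; fixed pixel sizes would not. */
      body { font-size: 100%; }
      p    { font-size: 1rem; line-height: 1.5; }
      h1   { font-size: 2rem; }
    </style>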

The real key is digital media’s inherent ability to separate presentation and content. Well-structured data and a rich set of metadata can be presented in multiple ways, including forms designed for the visually and cognitively impaired. Government mandates, including the NIMAS specifications, have accelerated this trend. Publishers themselves have developed platforms and service partnerships to make the structuring of data and metadata more cost-effective—even when the government mandate is outdated or insufficient. (The reasons for doing this will be the subject of a future blog.)
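
That separation can be as simple as serving one set of structured markup with more than one stylesheet. A sketch, with illustrative file names (reader support for alternate stylesheets varies):

    <!-- The same well-structured content, offered with a default design
         and an alternate large-print presentation. -->
    <link rel="stylesheet" href="default.css" title="Standard" />
    <link rel="alternate stylesheet" href="large-print.css" title="Large print" />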

The LMS Factor

What makes accessibility for educational publishers far more difficult is not textbooks, however. Particularly in higher education, but increasingly in K-12, textbooks are only part of a much larger content environment: the Learning Management System, or LMS. Driven by the institutional need to track student progress and to deliver a range of other learning tools, the LMS is typically a complex collection of text content, media, secure web portals, and databases. Although textbooks still form a large portion of LMS content, studies from the Book Industry Study Group (BISG) indicate that the field is undergoing a radical shift.

This has massive implications for accessibility. Not only must publishers provide reading assistance for text and descriptions for images, they must also deal with the interactive elements of a typical website. This includes color contrast, keyboard access, control over moving content, and text alternatives—such as captions or transcripts—for online video and other visually interactive elements. A sighted person might have no difficulty with an online quiz, but the process will be very different for the visually impaired.
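
Two of those requirements map directly to standard markup. A hedged sketch, with illustrative file names: a captioned video, plus a native button, which is keyboard-accessible by default:

    <!-- A captions track provides the text alternative for the video;
         a native button element is reachable and operable from the
         keyboard without any extra scripting. -->
    <video controls>
      <source src="lesson-04.mp4" type="video/mp4" />
      <track kind="captions" src="lesson-04.vtt" srclang="en" label="English" />
    </video>
    <button type="button">Start quiz</button>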

Fortunately—at least for now—the online elements of most LMSs are deployed on standard desktop or laptop computers, not mobile devices. The BISG study indicates that this is because more students have access to a PC, but not all have a tablet or e-reader. This makes the publisher’s task “simpler”—with fewer variations in operating systems and interfaces—but that will change as mobile device use increases. LMS features on smartphones are the start of new accessibility headaches for publishers.

Workflow—Again

As I pointed out in the previous blog, service providers have a major role in making accessibility affordable. This is especially true for educational publishers. Automating and standardizing content and metadata in-house is usually out of reach, even for the largest publishers. Even keeping up to date with government and industry mandates, like Section 508 and WCAG 2.0, is best handled by a common service provider.

As with journal publishing, the overall workflow will make accessibility cost-effective in the complex, LMS-focused world of educational publishing. Fortunately, given the size and scope of that industry’s audience, it also makes the goal of accessibility more rewarding.

 


Related White Papers