• Being research data

    What motivates people to participate in clinical trials? What can we learn about our own work from the researchers who conduct those trials? In a guest post on the Scholarly Kitchen, Bruce Rosenblum explains why he became research data, what he’s learned from “researching the researchers,” and how others can become research data, too.

    Read “Being Research Data”

  • Why disability data capture is key to improving inclusion outcomes in scholarly publishing

    Together with co-authors Simon Holt (Elsevier), Erin Osborne-Martin (Wiley), and Stacy Scott (Taylor & Francis), Inera’s Sylvia Izzo Hunter examines the whys, hows, benefits, and complexities of capturing large-scale data on disability in the scholarly communications industry. Published in a DEIA-focused special issue of Learned Publishing.

    Read “Why disability data capture is key to improving inclusion outcomes in scholarly publishing”

  • An Incomplete Guide to Creating Accessible Content

    Inera’s Joni Dames sets out to answer some deceptively simple questions about accessibility—What are the obstacles preventing people from accessing your content? Are you creating content that people can interact with easily? Is your content more than just 508-compliant?—and explains how to stop worrying and incorporate some basic steps in your workflow to help make sure your content is accessible to everyone. This paper was presented at JATS-Con 2022.

    Read “An Incomplete Guide to Creating Accessible Content”

    Watch the video recording of “An Incomplete Guide to Creating Accessible Content”

  • What’s Wrong with Preprint Citations?

    The COVID-19 pandemic produced an explosion of postings on preprint servers to meet the critical need for rapid dissemination of new biomedical and clinical research findings—and citations to these preprints have exploded, too. In this guest post on the Scholarly Kitchen blog, Inera’s Sylvia Izzo Hunter, Igor Kleshchevich, and Bruce Rosenblum discuss what we learned about preprint citations through updating our software to handle them.

    Read “What’s Wrong with Preprint Citations?”

  • Disclosing Disability in the Workplace

    The decision to disclose a disability or serious health condition in the workplace—especially a hidden or invisible one—is a decision that thousands of people face every day. Whether in senior positions or just starting out, many of us struggle with what, how, and how much of ourselves to share with our colleagues, with our professional contacts, and with the industry at large. In this guest post on the Scholarly Kitchen blog, Inera’s Bruce Rosenblum shares a personal perspective on this question.

    Read “Disclosing Disability in the Workplace”

  • Keeping It Authentic: Reconciling ORCID iDs Gathered at Submission with the Author Manuscript

    In this Industry Update article for Learned Publishing, published in the July 2018 issue, Inera’s Robin Dunford and Bruce Rosenblum discuss current issues in the collection, authentication, and publication of authors’ ORCID iDs. The article describes how eXtyles ORCID Integration Suite allows automated reconciliation and synchronization between an article submission system’s transmittal file and the author manuscript to ensure that authenticated ORCID iDs are protected through the publication cycle.

    Read “Keeping It Authentic”

  • Letter to the Editor: RE: Seifert M. How accurate are references in Trace Elements and Electrolytes? Trace Elem Electrolytes. 2017; 34: 137-138.

    In an invited response to this letter from Dr. Matthias Seifert, Inera’s Robin Dunford, Sylvia Izzo Hunter, and Bruce Rosenblum document the results of an investigation into how our eXtyles software handles errors and omissions in author-submitted reference lists. Taken together, Dr. Seifert’s findings and our own demonstrate that Inera’s software does its job well, while also offering concrete suggestions for further improving reference accuracy beyond the use of software.

    Read the letter and our response here.

  • Wrangling Math from Microsoft Word into JATS XML Workflows

    Inera’s Caitlin Gebhard and Bruce Rosenblum clarify the different forms of equations that can be encountered in Word documents and discuss the issues and idiosyncrasies of converting these various forms to MathML, LaTeX, and/or images in the JATS XML model. This paper also touches on workflow alternatives for handling equations in various rendering environments and how those downstream requirements may affect the means of equation extraction from Word documents. This paper was presented at JATS-Con 2016.

    Read “Wrangling Math from Microsoft Word into JATS XML Workflows”
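    The two main target notations the paper compares can be illustrated with a single equation. The fragment below is a sketch for orientation, not an example taken from the paper: the same expression, E = mc², written as LaTeX (in the comment) and as the equivalent presentation MathML that a JATS workflow might carry inline.

    ```xml
    <!-- LaTeX form: E = mc^2 -->
    <!-- Equivalent presentation MathML, as it might appear in JATS content -->
    <math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
      <mi>E</mi>
      <mo>=</mo>
      <mi>m</mi>
      <msup>
        <mi>c</mi>
        <mn>2</mn>
      </msup>
    </math>
    ```

    Which form a workflow should produce depends, as the paper notes, on the downstream rendering environment: MathML suits XML-native display pipelines, while LaTeX or images may be required elsewhere.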

  • XML Publication Workflows for Standards

    Bruce Rosenblum presents an overview of XML workflow options for standards bodies that includes an ISO case study. This article is reproduced with the permission of SES, the Society for Standards Professionals. The article was first published in Standards Engineering, the official SES Journal, Vol. 65, No. 6, November/December 2013. For subscription or membership information, contact SES at [email protected].

    Read “XML Publication Workflows for Standards”

  • Variations in XML Reference Tagging in Scholarly Publication

    Bruce Rosenblum provides a brief history of reference tagging in SGML and XML. The paper discusses specific reference markup structures in the Journal Article Tag Suite (JATS), from the common to the arcane. The evolution from <citation> and <nlm-citation> to the newer <mixed-citation> and <element-citation> elements in 3.x is reviewed, including a discussion of the workflow implications of each model. Bruce concludes with some observations about the intersection among reference markup, online reference linking, and the true meaning of references in the world of electronic publishing. This paper was presented at JATS-Con 2011.

    Read “Variations in XML Reference Tagging in Scholarly Publication”
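    For readers unfamiliar with the two newer models the paper reviews, here is a minimal illustrative contrast (hypothetical content, not drawn from the paper): <mixed-citation> preserves the punctuation and spacing of the typeset reference as mixed content, while <element-citation> contains only tagged elements, leaving punctuation to be generated at rendering time.

    ```xml
    <!-- mixed-citation: tagged elements interleaved with literal punctuation -->
    <mixed-citation publication-type="journal">
      <string-name><surname>Smith</surname> <given-names>J</given-names></string-name>.
      <article-title>An example article</article-title>.
      <source>J Example Stud</source>. <year>2010</year>;<volume>12</volume>:<fpage>1</fpage>-<lpage>10</lpage>.
    </mixed-citation>

    <!-- element-citation: elements only; punctuation is supplied by the rendering layer -->
    <element-citation publication-type="journal">
      <string-name><surname>Smith</surname> <given-names>J</given-names></string-name>
      <article-title>An example article</article-title>
      <source>J Example Stud</source>
      <year>2010</year>
      <volume>12</volume>
      <fpage>1</fpage>
      <lpage>10</lpage>
    </element-citation>
    ```

    The workflow implications follow from this difference: the mixed model captures a publisher's house punctuation exactly as edited, while the element model defers formatting decisions to the rendering stage.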

  • NLM Journal Publishing DTD Flexibility: How and Why Applications of the NLM DTD Vary Based on Publisher-Specific Requirements

    On the basis of a review of more than 20 implementations of the DTD, this paper discusses various interpretations chosen by a range of publishers as well as the business or technical requirements that led to those decisions. The implications, pro and con, of this flexibility are examined. The paper concludes with the suggestion that this flexibility is one factor that has led to wide adoption of the NLM DTD Suite. Presented at JATS-Con 2010 by Bruce Rosenblum.

    Read “NLM Journal Publishing DTD Flexibility”

  • E-Journal Archive DTD Feasibility Study

    A report prepared by Inera under a Mellon Foundation grant for the Harvard University Libraries that surveys the DTDs of ten journal publishers.

    Read “E-Journal Archive DTD Feasibility Study”


Interviews and Press

  • NISO Open Teleconference: A Conversation with NISO Fellow Bruce Rosenblum

    Bruce chats with NISO Associate Executive Director Nettie Lagace about his long history in the electronic publishing industry, how his experiences have shaped the way we create and adopt standards, and what we expect from our information-sharing processes today.

    Listen to the recorded interview on the NISO website.

  • JATS—Where’s It Going, Where Has It Been? (NISO Newsline)

    NISO Newsline: Has the NISO version of the standard been widely adopted?

    Bruce Rosenblum: There has been wide adoption of JATS in scholarly publishing. […] JATS has been more successful than we ever imagined. In many ways, it was an accident waiting to happen. By the time the NLM DTD got out the door, people were really looking for an off-the-shelf XML standard. A large part of the market was locked out of going toward XML without such a standard.

    Read Bruce’s full interview with NISO Newsline here.

  • ALPSP Awards Spotlight on... Edifix, a cloud based bibliographic references service (ALPSP Blog)

    ALPSP: Why do you think it demonstrates publishing innovation?

    BR: […] The critical innovation Edifix brings to the bibliographic reference problem is its parsing engine – that is, its sophisticated ability to automatically identify the elements of plain-text references. This ability to accurately burst a reference into its parts and then put it back together enables all of the advanced Edifix services, from copyediting to data correction to structured output (including an output format that will let you import into a reference manager like EndNote without all of that manual labor).

    Read the full interview on the ALPSP blog.

  • eXtyles: Interview with Elizabeth Blake and Bruce Rosenblum (PLOS Blog)

    7. How does eXtyles use the NLM DTD?

    Bruce Rosenblum: eXtyles users, with no knowledge of XML, can create high-quality XML according to the NLM DTD as a simple one-button action after using eXtyles to easily complete editorial preparation of a manuscript. In other words, eXtyles XML creation is a natural by-product of normal manuscript preparation for publication, and it requires no specialized user knowledge.

    Read the full interview on the Internet Archive.