Photo from SCI 2015

SCI 2015 has concluded; stay tuned for SCI 2016

The 2015 Scholarly Communication Institute concluded last Thursday and it was an exhilarating and exhausting week for everybody involved. As we hoped, participants started new collaborations, incubated new ideas, developed new plans, built new things, and made new friends. One team completed and submitted a grant proposal by the end of SCI in order to continue their project, and other teams are already working on their next steps. We got lucky with beautiful fall weather, ate well, took walks in the DuBose house gardens, and had tours of nearby universities and their host cities.

I won’t recount the whole week here, but will point you to some links where you can get the gist of what it was like to be at SCI.

We’ll soon begin work on planning SCI 2016, and will announce the RFP early in the new year. Check back on this blog or follow @TriangleSCI for the announcement, and we hope you will consider submitting a proposal for the next round.

A big thanks again to everybody who participated, and especially to the Andrew W. Mellon Foundation for making this possible and to our partners at Duke University, North Carolina Central University, North Carolina State University, The University of North Carolina at Chapel Hill, the Triangle Research Libraries Network, the National Humanities Center, and the American Council of Learned Societies for their work to make it a success.


[ Featured photo and slideshow photos by Eric Dye, used under a CC license ]

SCI team time

Think, Do, Collaborate, Cross-Pollinate

We’re less than a month away from the beginning of the second Scholarly Communication Institute held in the Research Triangle area of North Carolina, and are looking forward to this year’s cohort coming to Chapel Hill for five days of thinking, planning, and doing in a collaborative and relaxed setting. Last year, the first year of SCI in its new home, most of the participants were from the local area, and we tested out a new model for how such an institute might work. By all accounts it was a great success, and this year participants are coming from far and wide – from all across the United States, and about one third from other countries too, including as far away as Perth, Australia.

What do we do during these Institutes?

About a third of the schedule is unstructured, what we call “team time”. Each of the invited participants is part of a team working on a project they proposed during the RFP process – they set their own goals, process, and deliverables. During the team times there are no rules – teams can brainstorm, research their topics, run a charrette, document plans, develop software, write a paper, test models – last year participants did all of these things and more during their unstructured team times.

Another third of the schedule involves the entire cohort in discussions together, what we call “plenary” meetings. During these sessions all the teams come together in conversations regarding issues of mutual interest to all of their projects. Some of these sessions start with a focus on a particular team’s project, allowing them to seek advice from the broader group on issues that are challenging them, or to seek feedback on ideas they are trying to advance. Other plenary sessions are conversations guided by several facilitators, who throughout each day have been engaging with each of the teams, and listening for and suggesting areas of intersection between the different projects.

DuBose House gardens

The final major piece of the schedule is social time. We know that often the best insights come when you’re not necessarily looking for them, but rather over a meal, or drinks, or when taking a walk someplace you haven’t been before. So we’ve built a lot of time for that into the schedule. Breakfast and lunch each day will be in the rooms of a historic house on the grounds of the conference center, with ample time after lunch to take a walk in the nearby gardens. On one evening we’ll have a reception at the National Humanities Center, where SCI participants will have an opportunity to talk with fellows and staff of the Humanities Center as well as invited guests from nearby universities. And on other evenings there will be optional small group dinners at various restaurants in Durham and Chapel Hill, with visits to Duke University and the University of North Carolina along the way.

However, almost all of this is flexible. Last year, we adjusted the schedule along the way, based on suggestions from participants, and in response to observations about how useful different types of activities were at different times of the Institute. Mealtimes are fixed, but aside from that the schedule is fair game.

Slide: Notes for a retreat

What there won’t be at the Institute are PowerPoint slides. No reading a prepared talk, no deciding which conference track you’re going to attend, no vendor sales pitches. Alright, maybe there are a few presentations, but they’re brief and mostly about sparking ideas and setting tone. We’ll have some brief remarks at the receptions, and on the first day, Tom Scheinfeldt will be opening the Institute meetings with some observations on setting the conditions for a productive retreat. On the last day each team will practice their “elevator pitch” with SCI’s advisory board, answering these questions about their project: What? So what? and What next?

The participants will come together on October 11 mostly not having worked together or even met each other before, and will leave on October 15 having started new collaborations, incubated new ideas, developed new plans, and built new things. They will also have eaten well, relaxed away from their usual work, and, we hope, made new friends.

Over the next few weeks you’ll be able to follow the progress of the 2015 Triangle Scholarly Communication Institute via the #TriangleSCI hashtag on Twitter, and afterwards, via this blog. To get a sense of what it was like last year, see this Storify thread that collected a representative sample of tweets from SCI 2014, and these blog posts from SCI 2014. And if you’re interested in participating next year, keep an eye on this blog for the next RFP, to be announced in early 2016.

An Analytical Attribution Framework

This is the fifth and final post in a series about each of the teams that will be attending SCI 2015, and their projects. This one was submitted by Christopher Blackwell.

What are the goals of your project and how do they fit the theme of this year’s Institute?

Attributing and valuing scholarship was easy when scholarship was monographic and communities of scholars were small. It was easy to attach an author to a citation when the author was universally known (“Aristotle”, “Linnaeus”) and the citation pointed to a clearly defined, grossly granular publication, one of relatively few to have emerged in a given year. Scholarship has always emerged from collaboration, but in a rigid hierarchy it was easy to collapse a team of researchers to a single named authority.

When scholarship was largely monographic, attribution could remain monolithic: a name or one-dimensional list of names attached to a work. Just as scholarship supported one primary and straightforward method of interaction—reading—attribution likewise supported a relatively simple set of methods: credit for authorship, ranking in a list of authors, a multiplier based on the perceived value of the work.

Detail from the Venetus A manuscript, showing Iliad 3.1-9

The promise of digital scholarship lies in the potential for synthesis and analysis, a much richer body of operations that may extract more meaning and prompt more insight from a given body of data. We wonder if approaches to synthesis and analysis that have proven fruitful for our own research might also be fruitful ways of approaching how we credit and value contributions to that research. We have all encountered problems of attributing and valuing authorship in situations like:

  • many editors producing a single edition of a text,
  • a group of developers contributing to a single software project,
  • many editors indexing or commenting on a body of data (texts, images, &c.),
  • scholars producing complementary analyses of a given text (that is, one scholar produces a syntactic analysis, and another a semantic analysis),
  • scholars producing exclusive analyses of the same data (that is, one scholar analyzes syntax one way, another analyzes it another way).

In each case, while it is possible to attach “authorship” to individual pieces of work—lines of code or XML, individual indexed relationships, individual analysis—it is extremely difficult to quantify the significance of each author’s contribution:

  • Editor A enters an initial OCR text of the Iliad into a GitHub repository, thus “contributing” 15,000 lines; Editor B meticulously documents variants, receiving credit for only 75 lines.
  • Author N writes a short algorithm that is called innumerable times throughout the execution of a piece of software; it is brilliant because it consists of only 12 lines of code.
  • Scholar A captures the syntax of a complex sentence in Thucydides; Scholar B builds on that initial analysis, making it better. It would be desirable to capture Scholar B’s debt to Scholar A, and the extent to which the two analyses differ.
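The difficulty is easy to see even in a toy model. The sketch below (all names, line counts, and weights are hypothetical, not part of the team’s proposal) contrasts raw line-count credit with significance-weighted credit:

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    author: str
    lines: int      # raw lines touched in the repository
    weight: float   # editorially assigned significance per line (subjective!)

contributions = [
    Contribution("Editor A", 15000, 0.01),  # bulk OCR import
    Contribution("Editor B", 75, 5.0),      # meticulously documented variants
]

def naive_credit(cs):
    """Credit share based purely on line counts."""
    total = sum(c.lines for c in cs)
    return {c.author: c.lines / total for c in cs}

def weighted_credit(cs):
    """Credit share after scaling each line by its assigned significance."""
    scores = {c.author: c.lines * c.weight for c in cs}
    total = sum(scores.values())
    return {a: s / total for a, s in scores.items()}

print(naive_credit(contributions))     # Editor A dominates: ~0.995 vs ~0.005
print(weighted_credit(contributions))  # weighting reverses the ranking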

Who is on your team, and what are you hoping they will contribute to the project?

The members of this proposed working group have extensive experience applying innovative approaches to analysis, for topics that require collaborative effort across diverse data, often under conditions of uncertainty:

  • documenting conflicting interpretations of damaged text-bearing artifacts,
  • integrating various kinds of image-data for recovering lost text,
  • exploring the intersection of syntactic and semantic graphs of texts,
  • associating metadata with texts and data-structures at differing levels of granularity,
  • capturing iterative analyses of corpora undergoing collaborative editing,
  • aligning diverse data across generic and chronological axes,
  • building learning portfolios to track specific performance in the acquisition of a foreign language.

Our team members are:

  • Bridget Almas, the lead software developer and architect for the Perseus Digital Library
  • Christopher W. Blackwell, Project Architect for the Homer Multitext and co-developer (with Neel Smith) of the Canonical Text Service Protocol
  • Francesco Mambrini, research fellow at the Digital Humanities Department (IT-Referat) of the German Archaeological Institute
  • Ségolène Tarte, senior research fellow at the University of Oxford’s e-Research Centre
  • Gabriel A. Weaver, a Research Scientist at the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign

What do you look forward to most from SCI, and what do you hope to accomplish through the Institute?

All the members of our team have talked about these issues, and possible technological solutions, for years, in various ad hoc conversations. The Institute will give us the most valuable opportunity of dedicated in corpore time and space to align our individual ongoing work and technologies with our shared goal of flexible, expressive, machine-actionable attribution and evaluation.

We are also looking forward to the opportunity to share ideas with, gather criticism from, and present our work clearly to the larger group that will gather in October.

Venetus A

What are your plans for next steps after the Institute this fall?

We are all engaged in collaborative work that could immediately serve as test-beds for ideas about analytical attribution. Blackwell’s work on historical botany, for example, continues to engage undergraduate students from across disciplines, over relatively short periods of time, contributing diverse and very specific data to an evolving digital library: taxonomic indexing, medical commentary, historical essays, transcriptions of letters, compilation of geo-spatial data, photography. Almas, in her work on Perseids, has an immediate need and audience for innovative approaches to citation of complex and evolving analyses. Mambrini’s research at the DAI is likewise focused on analysis of texts and meta-analysis of scholarly interpretation.

Weaver notes that an analytical framework for attribution would be immediately useful in the domain of computer science as a discipline and within industry. Currently, there is demand among practitioners to be able to search, retrieve, and measure the evolution of multiple versions of security policies and compliance reports over time.

The proposed discussion for this SCI workshop could have a strong impact on the current activities of the Deutsches Archäologisches Institut, and any conclusions that we reach would be welcome contributions to any of the ongoing outreach initiatives of the Institute, e.g. the Digital Classicist Berlin series of symposia. Mambrini is a co-chair of the conference on “Corpus-Based Research in the Humanities” (next held in Warsaw in December 2015), whose participants would be specifically interested in this topic.

Is there anything else you’d like to say about your project or participation in the Institute?

The Scholarly Communication Institute represents an opportunity that is all too rare: a space for forward-looking conversation among scholars from different disciplines. We are excited at the prospect, and honored to have been invited.


Modeling contributorship with TaDiRAH

This is the fourth in a series of posts about each of the teams that will be attending SCI 2015, and their projects. This one was submitted by Micah Vandegrift.

The goal of our project, codename TaDiRize, is to examine the expanding model of contributorship in the humanities, especially as digital work becomes more broadly recognized. Digital projects often require a team of scholars, and the mounting diversity of team members involved in the production of digital scholarship has prompted a diverse set of questions surrounding the challenges of assigning credit and authorship. We feel this aligns well with this year’s Institute theme, which focuses on the valuation of digital scholarship.

We plan to address this topic by developing a model for applying the Taxonomy of Digital Research Activities in the Humanities (TaDiRAH) to contributor activities and outputs as a first step toward better assessment of collaborative scholarship. Building on similar initiatives like Project CRediT, we will discuss, challenge, and begin to apply the concept of “credit where it’s due” for digital humanities scholarship.
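One way to picture such a model is a record that tags each contributor’s activities with TaDiRAH’s top-level research goals. The sketch below is purely illustrative (the contributor names and record shape are invented; only the goal names come from the published TaDiRAH taxonomy):

```python
# TaDiRAH's top-level research goals, per the published taxonomy.
TADIRAH_GOALS = {
    "capture", "creation", "enrichment", "analysis",
    "interpretation", "storage", "dissemination",
}

def add_activity(record, contributor, goal, description):
    """Attach a TaDiRAH-tagged activity to a project's credit record."""
    if goal not in TADIRAH_GOALS:
        raise ValueError(f"unknown TaDiRAH goal: {goal}")
    record.setdefault(contributor, []).append(
        {"goal": goal, "description": description}
    )

project = {}
add_activity(project, "Contributor X", "capture", "digitized source images")
add_activity(project, "Contributor Y", "enrichment", "added TEI markup")
add_activity(project, "Contributor Y", "analysis", "topic-modeled the corpus")

# Summarise who contributed under which goals
for name, activities in project.items():
    print(name, sorted({a["goal"] for a in activities}))
```

A credit statement generated from such records (“Contributor Y: analysis, enrichment”) carries more information than a position in an ordered author list, which is the kind of “credit where it’s due” granularity at stake here.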

The composition of our team is essential to this project. Social scientist Cassidy Sugimoto and philosopher J. Britt Holbrook bring research expertise in the area of “scholarly impact.” Korey Jackson, having worked in alt-academic publishing capacities, introduces a big-picture point of view and non-traditional humanities experience. Zach Coble, April Hathcock, and Micah Vandegrift each work on the ground in the midst of publishing, research, librarianship, and digital scholarship, with unique backgrounds and perspectives. We hope to enrich the greater Scholarly Communication Institute with varied skills, knowledge, and interdisciplinary perspectives.

Overall, we are most looking forward to interacting with the other teams in this collaborative, innovative environment. Following the institute we plan to promote our TaDiRized model around the digital humanities and scholarly communication community for comments, ideas, suggestions and ultimately improvements. We hope to continue to work together as a team, and with other TriSCI15 groups to advance the discussion about validating digital scholarship.

Firemen practice the church raise with a ladder.

From the Fire Fighter Photographs collection at the Allen County (Indiana) Public Library, used by permission.

Author, Student, Programmer, Peer: Valuing Heterogeneous Teams in Networked Knowledge Production

This is the third in a series of posts about each of the teams that will be attending SCI 2015, and their projects. This one is adapted from the text of the proposal submitted by Daniel Powell.

Guiding Questions

Digital scholarship is often a deeply collaborative and networked enterprise, one which – in its various forms – involves multiple practitioners from a variety of academic contexts. This working group believes formalised evaluative structures have not kept pace with either the realities of social knowledge creation or with the numerous technological efficiencies provided by digital tools and platforms. Credit, promotion, funding, and credentialing are more complex topics than ever, yet many individuals and institutions rely on simple, outdated strictures to make judgements.


Guided by the following interrelated questions, we hope to interrogate this landscape of knowledge production and mobilisation:

  • How should institutions of higher learning, and individuals embedded within those institutions, value and evaluate networked knowledge production undertaken in wide-ranging collaboration using networked digital tools?
  • What is the role of digital methods and platforms in developing new pedagogical practices and curricular structures that foster digital scholarship in university classrooms? How are such tools being deployed to bridge the troubling gap between bifurcated models of “teaching” and “research” as discrete activities?
  • What is the role of mentored praxis in evaluating intellectual labour and student progress?
  • How can we rethink traditional models of authorship and intellectual production to ensure that the work of heterogeneous teams of knowledge producers – teams that can and do include the general public, research stakeholders, students, faculty, alternative academic staff, librarians, computer IT professionals, etc – is accurately understood and valued?
  • What can we do to rethink traditional models for publishing and editorial practices, graduate and undergraduate standards for evaluation, and tenure and promotion guidelines to make them more effective in networked knowledge environments?


Overall, our objective is to link discussions of social knowledge creation (crowdsourcing, labour practices on digital projects, wiki culture, etc) with heterogeneous knowledge producers (undergraduate students, graduate research assistants, library personnel, altac staff, etc) to produce new ways of understanding how to measure and evaluate digital scholarship.

Pragmatically, we hope to use this working group to promulgate evaluative standards and guidelines for faculty, administrators, students, and staff working within and around digital scholarship. In this effort we build on the robust foundation developed by, among others, the Modern Language Association, whose efforts in various workshops, publications, and committees have begun these conversations. [For a summary of this work, see the 2011 issue of Profession, published by MLA and available for free here:] The MLA Guidelines for Evaluating Work in Digital Humanities and Digital Media – themselves the culmination of several years’ work on the topic – recognise, for example, that digital scholarly practitioners “engage in collaborative work” far more often than their non-digital counterparts, but stop short of recommending specific frameworks of evaluation. Compounding the difficulties inherent in formally evaluating digital research is the collaborative involvement of students, graduate researchers, library staff, etc. The struggle for evaluators is not limited to form (archive, blog post, database, source code) but also encompasses disciplinary norms and shared authorship practices. We hope to blend these two concerns – digital scholarship as form and digital scholarship as collaborative process – to discuss and share guidelines that address credit, mentorship frameworks, and scholarly merit for digital work.

In a theoretical sense, we hope to confront the emergent body of evidence and scholarly products that indicate a qualitative change in how knowledge is produced in a networked age. By this we mean the shift from single-author, long-form prose research (especially in the humanities) to distributed models of intellectual production that intersect with social media connectivity, digitally facilitated coauthorship, and well-defined and traceable patterns of intellectual contribution. Social knowledge creation is quickly becoming a fact of life for digital scholarship, as is the reality of iterative, long term development for digital projects.

We envision this exploration and discussion proceeding along three axes:

  1. A survey and examination of existing documentation and guidelines related to digital scholarship and collaborative knowledge production. Much of this can be compiled prior to SCI and brought to bear immediately on early discussions.
  2. Qualitative discussion and sharing of the institutional experiences with pedagogy, digital scholarship, and professional evaluation that our working group’s composition makes possible. In this we will leverage the diversity of voices we have brought together at SCI.
  3. Synthesis and creation of holistic guidelines and documentation for moving conversations on this topic forward in multiple departments, institutions, and scholarly organisations. These are intended for practitioners of scholarly work in digital form and in large teams to use in situations of tenure, promotion, and credit apportioning.

In large part, the challenges to new systems of evaluation and credit are not technological or infrastructural per se. Instead, they are social, habituated by longstanding disciplinary norms and expectations. They are deeply embedded in administrative norms and processes, from informal expectations to the literal paperwork used within evaluative frameworks. They find expression in tenure & promotion guidelines that ignore collaborative work or frame digital scholarship as service; in evaluation frameworks like the Research Excellence Framework in the United Kingdom or the Excellence in Research for Australia, which overvalue monographs in rigid point-based systems that determine funding; in requirements for depositing dissertations that preclude, by definition, digital work; and so on. Our hope is that the documents and discussion that emerge from SCI on this topic can serve as an insurgency against those forces that stifle innovative research and actively separate “researchers” from other knowledge producers on digital projects.

Pool of knowledge

Relevance of Working Group Participants

Our working group has come together around a single idea: that to best discuss how collaborative and cross-demographic digital scholarship should be evaluated, multiple voices and viewpoints must be represented.

To that end, our group is diverse: tenured, internationally known researchers in the digital humanities and e-literature; undergraduate students working on digital scholarly projects; advanced graduate students who have participated in large-scale collaborative projects; alternative academic staff dedicated to fostering digital scholarship at small liberal arts colleges; early career and tenured faculty devoted to digital knowledge production at teaching-intensive undergraduate institutions; and digital scholarship librarians invested in altmetrics and distributed communities of practice. We represent departments of English and Digital Humanities, large research libraries, several digital research laboratories and centres, and Andrew Mellon funded initiatives. Geographically, we represent the United States, Canada, and the United Kingdom. Together, we represent a cross-section of digital scholarly practices in contemporary academia.

Relationship to SCI 2015 Theme

Our working group proposes to directly engage with a number of the questions put forward by SCI for 2015. In response to the multiple questions asked by SCI, we see the following possibilities for exploration and dissemination:

  • In exploring what it means to be an author and how to value, attribute, and reward the work of multiple contributors, we hope to push for a redefinition of the author/researcher that encompasses individuals who are often excluded from such discussions, especially in pedagogical and mentorship contexts.
  • In surveying varied systems of incentives for digital scholarship, we anticipate finding an emphasis on single-author monographs and articles. The process of challenging these norms from the multiple viewpoints of our working group – teaching faculty, librarian, student, tenured researcher – will allow us to synthesise what these frameworks should be.
  • In addressing the relative value of innovative digital scholarship to multiple audiences and users, we envision an opportunity to synthesise pedagogy, knowledge creation, and diverse disciplinary activities in a more holistic framework. In other words, digital scholarship forces us to reconsider the existing separation between teaching and research, as well as the ‘research’ and ‘service’ functions of university departments and organisations.

Working Group Participants

Daniel Powell (Convener) is a Marie Skłodowska-Curie Fellow in the Digital Scholarly Editing Initial Training (DiXiT) Network, a Marie Curie Action funded by the European Commission’s 7th Framework Programme for Research and Technological Development. Based at the Department of Digital Humanities at King’s College London, he researches collaborative knowledge creation, social editing practices, and crowdsourcing. Powell is also a Doctoral Candidate in English at the University of Victoria, where he has for a number of years been affiliated with the Electronic Textual Cultures Lab. At both institutions, he has worked extensively on issues of graduate training and mentorship; historicising patterns of academic behaviour; systemic discussion of university development; and large-scale digital projects. He is a member of the Modern Language Association’s Committee on Information Technology, Project Manager for the Andrew W. Mellon-funded Renaissance Knowledge Network, and editor (along with Melissa Dalgleish) of Graduate Training in the 21st Century, a project within the agenda-setting #Alt-Academy collection on MediaCommons. Having completed an AB at a small, Southern liberal arts college in the United States, undertaken postgraduate research in Canada, and now positioned at a global leader in digital scholarship, Powell is in a position to bring broad, comparative knowledge of multiple institutions and countries to bear on discussions of evaluation, credit, collaboration, and pedagogy.

Eric Dye is a photographer, graphic designer, and journalist, as well as an undergraduate in liberal studies at Penn State Erie, The Behrend College. He works at The Behrend Beacon as the Creative Director and Opinion Editor. He has spent over eight years independently studying photography and is now using those skills for portraiture, photojournalism, and running a small business. Eric also regularly blogs about photography. He attended the ACP National College Journalism Convention in Fall 2012 and Spring 2015. Supported by funding from the Undergraduate Student Summer Research Fellowship at Penn State Erie, he will be conducting research in the summer of 2015 to review and analyse the history of the locomotive industry in Erie, Pennsylvania. This research will be compiled and represented through a photo essay to be disseminated through the 12th Street Project and hosted by the Penn State Digital Humanities Lab. This project seeks to provide a resource for the past and future members of the locomotive industry on behalf of the local community, as well as to inspire new industry enthusiasm. Dye brings an undergraduate perspective to the working group, based on involvement with the Penn State Digital Humanities Lab and independent entrepreneurial activity.

Dene Grigar is an Associate Professor and Director of the Creative Media & Digital Culture Program at Washington State University – Vancouver. Her research focuses on the creation, curation, preservation, and criticism of Electronic Literature. This research relies on a deep knowledge of media production and is expressed through traditional publications (e.g. essays, articles, chapters) but also through varied activities involving curated exhibits and multimedia design. Her work has historically found itself at the cusp of changes wrought by the evolving notions of literature and associated literary activities as they are impacted by digital media and, so, has been continuously evolving in response to technological and cultural considerations. As Director of an academic program in a new and emerging field, Grigar has had to find ways to credential faculty, demonstrate scholarly viability of collaborative research, and develop assessment documents that evaluate excellence. She brings 25 years of teaching experience in higher education to our working group.

Jacob Heil is the Andrew W. Mellon Digital Scholar for the Five Colleges of Ohio, a consortium comprising the College of Wooster, Denison University, Kenyon College, Oberlin College, and Ohio Wesleyan University. As part of this grant-funded initiative, he works with faculty, librarians, educational technologists, and students to design and carry out digital pedagogical projects. In scale, these range from simple digital collections through TEI-encoded editions to GIS-enabled, mobile-ready presentations of historical maps. In his previous role as the project manager for the Early Modern OCR Project (eMOP) in Texas A&M University’s Initiative for Digital Humanities, Media, and Culture (IDHMC), Jacob coordinated the first phase of eMOP’s international, inter-institutional collaboration designed to teach machines to read early printed materials. While his formal scholarly training is in book history and early modern English drama, he has found his way to managing large-scale collaborative efforts and, with the Five Colleges, to fostering collaboration by helping to build up a culture in which digital pedagogies and scholarship are sewn into the fabric of the liberal arts campus. Currently, in addition to working through his own questions about early modern drama and print history, he is invested in thinking through the ways in which the digital cultures of small liberal arts colleges and consortia can inform those of larger, research-intensive institutions.

Aaron Mauro is Assistant Professor of Digital Humanities and English at Penn State Erie, The Behrend College. He is the director of the Penn State Digital Humanities Lab at Behrend. The Lab currently oversees three research projects, including the EULA Tool, the 12th Street Project, and the Hammermill Archive. As co-chair of the Digital Media, Arts, and Technology program at Penn State Erie, he teaches regularly on diverse topics relating to digital culture, computational text analysis, and scholarly communication. His articles on U.S. literature and culture have appeared in Modern Fiction Studies, Mosaic, and Symploke, among others. He has also published on issues relating to digital humanities in both Digital Studies and Digital Humanities Quarterly. Mauro will bring a unique perspective that bridges research, teaching, curriculum development, knowledge mobilisation, and collaboration with the scope and spirit of the liberal arts.

Bridget Jenkins is an English and Professional Writing major at Penn State Erie, The Behrend College, in the class of 2016. She is currently the managing editor at her college newspaper, The Behrend Beacon, a position that recently allowed her to travel to Los Angeles for the Associated Collegiate Press College Journalism Conference. In the summer of 2015, she will be conducting research on the Masonic Temple of Erie, PA. Working in conjunction with the 12th Street Project, her work will be published in a collection that aims to record the history, culture, and contemporary voices of those living in the Erie area. Jenkins’ multimedia project will include a visual and oral history of the Masonic Temple, a building which represents a prime example of early 20th century architecture and the economic and cultural prosperity that made such structures possible. She argues that this building represents an enduring link between the early 1900s and today.

Sarah Potvin is the Digital Scholarship Librarian in the Office of Scholarly Communication of the Texas A&M University Libraries, where she holds the rank of Assistant Professor. Her recent scholarly work has examined the sociotechnical infrastructure behind digital scholarship, a category that encompasses community-building and norms, evaluative structures (ranging from formal promotion & tenure guidelines to the use of bibliometrics and altmetrics as proxies), and the development of platforms and policies. This focus is reflected in her work as a founding co-editor of dh+lib; in her more localised involvement with the Texas A&M digital humanities working group (co-convener), the Texas Digital Library Metadata community group (chair), and the Texas A&M Libraries’ digital asset management system assessment task force (chair) and digital scholarship/media promotion & tenure guidelines task force; and in her membership, past and present, on international program committees for the Digital Humanities, Dublin Core Metadata Initiative, and DSpace User Group (Open Repositories) conferences (as well as her work as an organiser of unconferences). She has been employed in research-related positions since she was 16, most often with titles like ‘research assistant,’ ‘research analyst,’ and ‘editorial assistant,’ affording a view of authorship norms across multiple disciplines and settings, within and outside of formal university structures.

Raymond G. Siemens is Canada Research Chair in Humanities Computing and Distinguished Professor in the Faculty of Humanities at the University of Victoria, in English and Computer Science. He is founding editor of the electronic scholarly journal Early Modern Literary Studies, among the first open access academic e-journals, and his publications include, among others, Blackwell’s Companion to Digital Humanities (with Susan Schreibman and John Unsworth), Blackwell’s Companion to Digital Literary Studies (with Schreibman), A Social Edition of the Devonshire MS, and Literary Studies in the Digital Age (with Kenneth Price). He directs the Implementing New Knowledge Environments project, the Digital Humanities Summer Institute and the Electronic Textual Cultures Lab, and serves as Vice President / Director of the Canadian Federation of the Humanities and Social Sciences for Research Dissemination, recently serving also as Chair of the international Alliance of Digital Humanities Organisations’ Steering Committee, the MLA Committee on Information Technology, and the MLA Committee on Scholarly Editions. Siemens brings a deep knowledge of institutional practices related to digital humanities, extensive experience with tenure & promotion practices for non-traditional scholarship, and a firsthand perspective on mentoring students in praxis-based settings.

Sharing, Dissemination, and Follow Up Activities

We are excited by the possibilities that might emerge from our Working Group discussions, and anticipate a number of concrete results from our time together:

  • A web-based and public facing collection of relevant guidelines, documentation, and academic publications. This might collect documents promulgated by scholarly associations on evaluating digital scholarship (such as the MLA, AHA, and various universities and consortia); publications on practitioner experiences in these areas; examples of collaborative digital projects that have modelled evaluation practices; and working group narratives of their experiences in existing evaluation frameworks.
  • A white paper exploring the issues here outlined, emphasising especially the intersection of collaborative knowledge practices, pedagogy, and non-tenure track research activity in digital forms. This would also contain our synthesised insights into the current state of play in this area.
  • An appendix to the white paper consisting of well-defined guidelines and recommendations for evaluating collaborative scholarship in pedagogical contexts. This can serve as a starting point for further discussion in numerous institutional contexts.
  • A glossary of existing digital scholarship platforms that are being used to create collaborative scholarship. This might include wikis, CommentPress, the Google Drive platform, etc. A special emphasis will be put on tools & platforms used in pedagogical contexts.

More generally, it is our hope that SCI participants – in our working group and in the wider institute – take our insights and documents back to their local institutions and organisations to prompt local conversations. Our document outputs and web resources can have a cascading effect, being deployed in multiple contexts for diverse purposes.

[ Photo credits: photos used under CC license. ]

The Qualities of Quality – Validating and justifying digital scholarship beyond traditional values frameworks

This is the second in a series of posts about each of the teams that will be attending SCI 2015, and their projects. This one is adapted from the text of the proposal submitted by Samuel Moore.

What does validation mean outside of a values/normative framework?

Justice Potter Stewart famously quipped, in a US Supreme Court case about pornography, that he could not define obscenity but that he “knew it when he saw it”. In a very different sphere, virtually every research funder and institution in the world includes “quality” (sometimes replaced with “excellence”) as a key target in their mission statements, goals, or criteria for assessment. Some of them even seek to define the term. But like many normative claims about the distinctive characteristics of prestigious activities, these definitions are slippery. In many cases they are circular, entirely retrospective, or reducible to “what those who matter know when they see it”.


At the same time, research makes a particular claim to being necessarily unplanned in its overall direction, while also being an expensive and therefore exclusive activity. Decisions therefore need to be made about which research, and which researchers, will be supported with the limited funds available. It is an article of faith that “curiosity-driven” research is ultimately the most productive, rather than research directed to specific societal goals. An objective measure of some form – “quality” – is therefore required to justify and prioritise investment in research without direct application.

Is “quality” merely a rhetorical fiction required to square this circle, or does it capture and express values that underlie the research that is worthy of public funding? Is the confusion about what is meant by quality in national and institutional research assessment a serious issue that is really a stand-in for questions about authority (and credibility) or is it simply a tool of realpolitik, one of the messy compromises required in running real-world institutions?

Traditionally quality is determined through a process of peer review. This social process has become the cornerstone of scholarly practice while at the same time having its validity and utility highly contested. Arguably quality and peer review are mutually supportive concepts where to attack one is seen to be attacking the other and threatening the stability of institutions of scholarship. Yet analysis of peer review records rarely shows any coupling between the two concepts. Nonetheless, peer review, of both grants and literature, confers credibility and acts as a socially validated proxy of quality within the research community. In turn the outlets (funders, journals) which certify a peer review process become proxies of these proxies. The irony is that the focus on these secondary proxies has led to a further layer of proxy measures (the Impact Factor) that decouples the conferring of prestige and credibility from the peer review process that it is supposed to be based on. Even if “quality” is not a mistaken concept it is a largely debased one.

We plan to bring together different perspectives and skill-sets on this issue, inspired by a Twitter conversation between two of us. This discussion, focussing on the question of whether “quality” is a single or heterogeneous concept, draws on analytical work seeking correlations within the scores in real-world assessment rankings, and also brings experience of the large and novel data now available on the use and discussion of research outputs. In addition, we bring experience of the intersection of new forms of digital scholarship and its political implications for the future of the university, as well as linguistic and philological analysis, to round out our team.

In the context of the institute we propose to use this initial experience in order to combine a narrative approach to statements of quality from research funders, institutions, assessors and researchers with an analytical approach that can test whether the claims made can be supported by the information used. We will begin with a dissection and analysis of the rhetorics of “quality” from various fields and an enumeration of the current proxy measures (journal brands, publisher names, citation indices, impact factor, altmetrics, peer-review procedures, invitation-only journals) that are claimed to accurately pre-filter for, or retrospectively label, “excellence”.

Our hypothesis is that “quality” as an objective scale is not a useful concept: it is socially determined and driven by existing power structures. Moreover, it is now such a confused concept, dependent on layers of re-interpretation and measurement through weakly relevant proxies, that it is not even well socially determined.

However, quality can be re-imagined as a multi-variate construct that can be deployed to address different priorities. This shift from “quality” to “qualities” has potentially valuable practical outcomes in focussing our attention on different aspects of communicated research outputs. It should also, importantly, give us pause when the term is used across disciplinary boundaries; quality and its evaluation must be tied to the purpose of the research, which, itself, must be situated within specific disciplinary practices. Most importantly, it raises profound political questions around the consensus justifications for publicly funded research. If we are to address “qualities” rather than “quality”, we are required to examine the societal values and expectations that underpin the public funding of research.

Working Group members

  • Samuel Moore is a PhD student in the Department of Digital Humanities at King’s College London. His research focusses on the extent to which open-access publishing in the humanities is a disruptive process or merely transformational in the UK higher education context. He is also Managing Editor of the Ubiquity Press Metajournals, which publish structured summaries of openly available research objects, such as open-source software, data and bioresources. Consequently, Samuel is deeply interested in academic credit, novel research outputs, and the future of the university.
  • Cameron Neylon is a failed scientist and amateur humanist currently working as Advocacy Director at PLOS. He has worked for the past decade on the challenges of bringing scholarly communications onto the web including issues of Open Access, Open Data, incentives and assessment structures. He was a contributor to the Altmetrics Manifesto and the Panton Principles, and has written widely on research assessment, peer review and the challenges of research governance.
  • Dr. Martin Paul Eve is a Senior Lecturer in Literature, Technology and Publishing at Birkbeck, University of London. He is a founder of the Andrew W. Mellon Foundation-funded Open Library of Humanities, the author of Open Access and the Humanities: Contexts, Controversies and the Future (open access from Cambridge University Press, 2014), and the lead developer of the open-source XML typesetting platform, meTypeset.
  • Damian Pattinson obtained his PhD in neuroscience from University College London, where he studied the development of sensory pathways in the laboratory of Prof Maria Fitzgerald. After a brief postdoc at King’s College London, Damian joined the BMJ as a Scientific Editor on Clinical Evidence. He moved to the online clinical resource BMJ Best Practice shortly after its conception, first as Commissioning Editor and later as Senior Editor. He joined PLOS ONE in February 2010 as Executive Editor, and became Editorial Director in October 2012.
  • Jennifer Lin, PhD is Senior Product Manager at PLOS. She is the primary lead of the Article-Level Metrics initiative and the publisher’s data program. She earned her PhD at Johns Hopkins University. She has 15 years of experience in community outreach, change management, product development, and project management in scholarly communications, education, and the public sector.
  • Daniel Paul O’Donnell is Professor of English at the University of Lethbridge (Alberta, Canada). He trained as an Anglo-Saxon philologist and has been active in what is now known as the Digital Humanities since his undergraduate days at the Dictionary of Old English in the late 1980s. His current work focuses on the place and practice of the Humanities in the digital age, particularly in terms of social organisation, communication practices, and globalisation. He is the founding director of several initiatives, including the Lethbridge Journal Incubator, and is a former chair of the Text Encoding Initiative.


The group is purposefully cross-disciplinary and comprises members from academia and the open-access publishing community. Our position is radical inasmuch as it potentially undermines current assumptions and hierarchies in the research enterprise. There is a real opportunity to influence how publishers, funders, and universities approach the idea of research quality in their systems and organisations; we will achieve this primarily by influencing the narrative around “quality” and how the word is used. The group has a track record of targeted interventions that, over time, change discourse. In the context of the institute we will start this process by preparing a report for online publication, and will consider the opportunities for a more formal (and possibly more in-depth) publication.

We will follow this up with OpEds targeted at both traditional institutional audiences (Times Higher Education, Chronicle of Higher Education, mainstream media) as well as online communities (LSE Impact Blog). We also have good contacts with key players including the European Commission, HEFCE (UK), Social Sciences and Humanities Research Council (Canada), and other funders. We will seek opportunities to present our outputs in relevant forums across a range of stakeholder groups. There are also many community initiatives focussing on incentives for researchers and the link to assessment. We are already engaged with many of these and will use them as a further means for dissemination.


[ Image credits: images used under CC license. ]

[ edited on 18 May to update the bio of Martin Eve ]

Collecting and analyzing usage data for online scholarly publications

This is the first in a series of posts about each of the teams that will be attending SCI 2015, and their projects. This one is adapted from the text of the proposal submitted by Kevin S. Hawkins.


Digital technologies have made it relatively easy and inexpensive for a broad range of traditional and new publishing entities to produce quality scholarship in digital forms. Unfortunately, aspirant digital publishers have encountered significant problems in collecting and analyzing usage data about such publications. A key challenge is that usage data is often available on more than one platform, including internal systems (such as a publisher’s institutional repository or other local web publishing infrastructure), vendor-hosted solutions (such as bepress Digital Commons), and third-party platforms that include open-access content (such as HathiTrust, the OAPEN Library, the author’s institutional repository, a disciplinary repository, and the Internet Archive). Aggregated usage data allows a publisher to assess the impact of its online publications, to make strategic and business decisions about its publication operations, and to share the data with its authors. Furthermore, usage data helps publishers demonstrate their value to funders and administrators, and to potential authors skeptical of online, especially open-access, publishing.

To give a concrete example, Writing History in the Digital Age is a volume of essays subjected to open peer review and commenting on a website hosted by Trinity College. Subsequent to its availability on the Trinity College site, the University of Michigan Press (U-M Press) published the book as part of its digitalculturebooks series, making it freely available to read online and available for purchase in print and as an e-book. Unfortunately, this desirable increase in access has made it difficult for the editors, contributors, or even U-M Press to assess how the book is used in its various editions, given the siloization of data in each platform. At present, Jack Dougherty, one of the co-editors, collects usage data from the Trinity College site using Google Analytics, and U-M Press collects usage data from its platform for the free online version using a combination of Google Analytics and spreadsheets of data from a homegrown usage statistics system. But there is no practical way for U-M Press to combine the usage data from its two sources, much less with the data from the editor, to see which version of the book readers prefer and whether they buy a copy of the book after exploring one of the free versions. Similarly, if either co-editor or a contributor wished to demonstrate the impact of their work to a promotion & tenure committee, they would need to request the data from all of these sources and go through the difficult and time-consuming process of compiling it, including aligning the incompatible data recorded by these tools.
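The alignment problem described above can be made concrete with a small sketch. This hypothetical Python example normalizes records from two incompatible sources into a common schema before summing them; all field names, editions, and sample values here are invented for illustration and do not reflect any real Google Analytics export or press reporting format:

```python
# Hypothetical sketch: aligning usage records from two platforms whose
# exports use different field names and date formats, then merging them
# into a single schema so totals can be compared across editions.
from datetime import datetime


def normalize_ga(row):
    # A Google-Analytics-style export row (field names are assumed).
    return {
        "date": datetime.strptime(row["ga:date"], "%Y%m%d").date().isoformat(),
        "edition": "open-web",
        "views": int(row["ga:pageviews"]),
    }


def normalize_local(row):
    # A homegrown-spreadsheet-style row (field names are assumed).
    return {
        "date": row["day"],        # already ISO "YYYY-MM-DD"
        "edition": row["format"],  # e.g. "ebook"
        "views": int(row["hits"]),
    }


def merge(ga_rows, local_rows):
    # Sum views per (date, edition) across both normalized sources.
    combined = [normalize_ga(r) for r in ga_rows] + \
               [normalize_local(r) for r in local_rows]
    totals = {}
    for rec in combined:
        key = (rec["date"], rec["edition"])
        totals[key] = totals.get(key, 0) + rec["views"]
    return totals
```

Even this toy version glosses over the hard parts the working group identifies: real platforms disagree about what counts as a "use" (pageviews vs. sessions vs. downloads), and standards like COUNTER exist precisely to specify such filtering rules.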

In response to the situation exemplified by the above scenario, this working group will do the following:

  1. Look at examples of usage data reports from platforms that produce reports conforming to the COUNTER and related PIRUS standards (which were designed with libraries and content managers in mind) and at the user interfaces of and reports produced by the web analytics tools Google Analytics and Piwik to see:
    • which data is useful to authors and publishers of scholarly literature
    • what kind of data is missing but important for authors and publishers to know
  2. Formulate a set of functional requirements for the study of usage data by authors and scholarly publishers.
  3. Create prototypes of the user interface and usage reports (both inspired by the web analytics tools examined) that a tool for collecting and analyzing usage data for scholarly publications would provide.
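To illustrate the kind of report prototype step 3 envisions, here is a minimal, hypothetical sketch that rolls per-title daily usage into a monthly table. The record layout and rendering are invented for illustration; real COUNTER reports are far richer and strictly standardized:

```python
# Hypothetical sketch: aggregate daily usage records into a simple
# tab-separated monthly report, one row per title, one column per month.
from collections import defaultdict


def monthly_report(records):
    """records: iterable of dicts with "title", "date" ("YYYY-MM-DD"),
    and "requests" keys (an assumed, simplified layout)."""
    table = defaultdict(lambda: defaultdict(int))
    for r in records:
        month = r["date"][:7]  # "YYYY-MM"
        table[r["title"]][month] += r["requests"]
    # Render: header row of months, then one totals row per title.
    months = sorted({m for cols in table.values() for m in cols})
    lines = ["Title\t" + "\t".join(months)]
    for title in sorted(table):
        lines.append(title + "\t" +
                     "\t".join(str(table[title].get(m, 0)) for m in months))
    return "\n".join(lines)
```

A real prototype would of course need to handle access types, platforms, and deduplication rules, but even a sketch like this helps surface the functional requirements step 2 asks for.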

Our working group includes scholars and staff at various organizations with experience publishing works of scholarship online on more than one platform. Many of them have struggled to collect and analyze usage data on their publications and can speak to the sort of data that would be most useful to authors and their publishers, as well as other stakeholders such as funders or administrative leadership at hosting institutions. It also includes an expert on bibliometrics and altmetrics, two methods of quantifying the impact of a work of scholarship through its citations and other mentions online. We will explore ways of including these metrics with data about usage to provide a fuller picture of the total impact of works of scholarship.

Composition of the working group

Kevin S. Hawkins is director of library publishing at the University of North Texas Libraries, where he has established a new scholarly publishing service at the UNT Libraries that complements the UNT Press. Previously, he spent ten years with Michigan Publishing, which includes the University of Michigan Library and Press, whose publications are available on various platforms that produce incompatible usage data. He also currently serves as president of the board of the Library Publishing Coalition.

Sarah V. Melton is digital projects coordinator at the Emory Center for Digital Scholarship at Emory University, where she coordinates the open-access program. She is also a practicing scholar, completing her PhD at Emory University and serving as digital publishing strategist for Southern Spaces and on the editorial board of the Atlanta Studies Network.

Lucy Montgomery is director of the Centre for Culture and Technology at Curtin University and deputy director of Knowledge Unlatched, a non-profit organization piloting a new approach to funding open access monographs. KU’s pilot collection is available on more than one platform, and Montgomery has been closely involved in studying the usage of the pilot collection.

Lisa Schiff is technical lead of the Publishing Group of the California Digital Library. She is responsible for ensuring that CDL’s current and future programs and services related to publishing are as effective and robust as possible. She is also contributing to a Mellon-funded project at CDL and the UC Press to develop a web-based open-source content management system and workflow management system to support the publication of open-access monographs in the humanities and social sciences.  She is a member of the editorial board of the Journal of Librarianship and Scholarly Communication and is co-chair of the ORCID Business Steering Group.

Rodrigo Costas is a researcher at the Centre for Science and Technology Studies at Leiden University. He studies bibliometrics and altmetrics and is interested in the conceptual and empirical differences between altmetrics and usage indicators.

Plans for beyond the SCI

The working group will produce documents that could be used to guide development of a tool for publishers of online scholarship—including university presses, libraries, and digital scholarship centers—to collect, analyze, and share usage data and altmetrics regarding their publications. We will make these freely available online and seek input from the wider community after the SCI.

Some of the working group members are already seeking funding to develop such a tool, so the final set of documents will also serve to demonstrate to potential funders that an extensive planning phase has already taken place.

[ Streamgraph and analytics images used under CC license. ]