[At Jeff McClurken’s invitation, I was recently part of a panel focused on reviewing digital history at the Organization of American Historians’ annual meeting. My portion of the discussion was to focus on reviewing digital public history projects, which have their own particularities that make them different from some other genres of digital history. I welcomed the opportunity because I think that the work of review is one of the most important of an historian’s professional obligations. Below is a version of my comments.]
A generous and conscientious review process at a crucial stage can make the difference between a mediocre project and a great one. And a careful review after a project launches can be an essential authorizing element for that work and the people who produced it.
The work of reviewing is an act of leadership in the field—in digital history, in academic history, in public history. As such, we would do well to consider the qualities that we seek in effective leaders before we turn to the form and content of an effective review.
We seek out leaders
- who prize collaboration and cooperation;
- who have vision, but make room for other voices;
- who honor many types of experience and expertise;
- who acknowledge the important contributions of others;
- who clearly admit that they do not have all the answers.
Individuals who embody these qualities often stand out as the people we turn to when we want to move our work forward. They are people we trust. I would submit that these are also the people we want to review our work.
We can and should do our best to create a culture of reviewing that is humane and constructive. In that effort we might turn to the groundbreaking work of the HuMetricsHSS Project to help structure our thinking. The project is working through a process to create and disseminate a “humane evaluation framework” that builds upon the values that participants have identified as central to humanities and social science disciplines, including collegiality, quality, equality, openness, and community.
These values parallel those that we commonly see at work in the approach of effective leaders. And they are values that we should seek from our reviewers and their work. Putting these qualities front and center means that the work of review becomes generative, rather than a competitive, zero-sum practice.
What do we value in a review?
Of course, good single-authored book reviews can also be generative. One of the best discourses on what truly makes a good book review comes from one generous leader in the field reading the work of another generous leader in the field. In January 2017, Karin Wulf, Director of the Omohundro Institute for Early American History and Culture, wrote an excellent piece for Scholarly Kitchen in which she examined Annette Gordon-Reed’s NYRB review of Robert Parkinson’s The Common Cause: Creating Race and Nation in the American Revolution (2016). Using Gordon-Reed’s review as an example, Wulf explained, “The most effective review brings readers — those who have read or might read the book, but often those who have not and may not — into a broader, informed conversation about the topics the book addresses.” Wulf clearly outlines the key duties of an exemplary reviewer: to be fair in their evaluation of the work, to connect the work to a larger context, and to bring the readers into a new sense of the significance of the project.
These duties are also the duties of a reviewer of a digital public history project. But because digital projects differ from books in important ways, reviews of those projects can and should differ from reviews of monographs. In moving outside of the formal conventions of a print monograph, digital projects break open the possibilities of form and function for historical work, and the same should be true for their review. To create a humane and generative review process for digital projects, we need to ask what we want our digital projects and our digital project reviews to accomplish in the world, and when we want to do that work.
Unlike traditional print publications, it is rare for digital work to go through the pre-publication peer review and evaluation process that strengthens work in so many ways—whether that process is open and public, or closed and blind. Many larger funders ask for formative evaluation, but in reality that process is often shunted to the side in the face of the larger demands of building the project. It would be wonderful if a constructive public review process could begin early in the life of a digital project, influencing the process of the project work as it moves forward.
Any opportunity we might have to undertake constructive review and evaluation of digital works-in-progress would go a long way toward building a larger community of practice around that work. It would also give us another chance to situate new and experimental work in the context of peer projects, similar either in content or methodological approach. Placing those developing projects in conversation with one another, and with older, more established projects, provides a possibility of mentoring and sharing the lessons learned through the long process of bringing digital work to fruition.
Review as a Process and Conversation
To some extent, all digital public history work is undertaken in the service of “putting history to work in the world” – making history legible and relevant to individuals beyond the academy. As a result, no digital public history project can be divorced from its audience. That focus on audience and their experience of digital history often leads to a recognition of the shared authority that is a reality in public work, where the perspectives and experiences of the public are essential to the work. In some cases, this entails work that is crafted in collaboration with and in response to the public audience, their interests, and their needs.
In the end, publicly engaged humanities work is necessarily part of a larger conversation that is as much a process as it is a product. The review form needs to reflect this process. Thus, we might consider moving from review as reportage to review as conversation—a conversation that addresses the varied aspects of the work itself. What if the review process were also public, collaborative, and dialogic?
A number of the most generative digital history communities are already modeling a practice of dialogic review for monographs. For example, Black Perspectives regularly selects new works for concentrated attention, which means a full week of insightful review from different perspectives and a response from the creator. The S-USIH blog also hosts similar “salons.” Bringing a number of scholars together to concentrate on a single book, these features mirror the longstanding practice of gathering many voices to reflect on the long-term impact of groundbreaking scholarship or a paradigm-shifting methodological approach. But the difference here is that the online communities are focusing their attention on emerging work—trying to anticipate the ways that new scholarship is surfacing innovative approaches and insights for our understanding of the past.
Some shifts in process and perspective will be necessary to embrace the conversation model of review for digital public history projects. More importantly, a conversational model of review would require the field to be open to a collaborative mode of leadership that embraces shared authority.
Since digital public history works are (or should be) intentionally targeted at a specific non-scholarly audience, they often do not include the larger intellectual framing that is usually laid out in a monograph’s introduction. A conscientious reviewer depends on that positioning and methodological discussion to offer a fair reading of the work. As a result, I urge all projects to create a deep “About” section that includes a set of essential framing statements:
- goals statement from the project
- audience profile
- staffing/labor statement
- process narrative
- evaluation process and outcomes, if there are any
- development trajectory, if work is ongoing
Once these important contextual materials have been made public, we might take the step of opening digital public history projects to a review conversation that includes many perspectives—perspectives that match important aspects of the work. Rather than turning to a single individual to evaluate a project, we could embrace a conversational model that honors many perspectives and types of expertise. An editor could invite reviews from:
- target audience members
- content experts
- technology experts
- design and user experience experts
- public historians doing similar work
Finally, we might ask the creators to respond to and engage with this cluster of perspectives, considering the lessons learned through the process of the project and the review. In this way, the review process itself becomes an act of public humanities, approximating the complexity of digital public work and its many meanings in the world, addressing the form, the content, the execution, and the public import of the work. Most importantly, this conversational model of review moves us from valuing single-voiced judgment to valuing multi-vocal learning and growth.