Posthumous Pardons & the Postracial Myth

A University of Michigan Law School report released today points out that the United States exonerated more people in 2013 than in any other year. News outlets have been debating the report’s significance. Does it demonstrate that the number of innocent people in prison is declining, or merely that the toll of mass incarceration is higher than ever? While these are important questions that will no doubt be taken up in the coming weeks, I’m particularly curious about the handful of cases before 1950 cited by the report, as well as the related issue of posthumous pardons (particularly for African American men).

The Scottsboro Boys in 1931. Image via AP/Getty.

Three of the twelve posthumous exonerations that the report lists took place in Alabama, where in November the state Board of Pardons and Paroles voted to pardon Haywood Patterson, Charles Weems, and Andy Wright, the three remaining “Scottsboro Boys” who hadn’t already been pardoned or had the charges against them dropped. If you’re not familiar with the case, the boys had been part of a group of nine black teenagers accused of raping two white women in Alabama in 1931. The pardon is being hailed as a sort of final step in Alabama’s civil rights trajectory. As the New York Times put it, the pardon “clos[es] one of the most notorious chapters of the South’s racial history.”

George Stinney. Image via UAlbany National Death Penalty Archives.

Most recently, the case of George Stinney, a fourteen-year-old African American boy who was put to death in 1944 for the rape and murder of two white girls, has been reopened in South Carolina. Stinney, who holds the unfortunate title of being the youngest person executed in the United States in the twentieth century, was convicted in a brief trial by an all-white jury. The gruesome details of Stinney’s case include the fact that his Bible was used as a booster seat so that he would fit into the electric chair, and that his feet could not reach the floor of the execution chamber. The fate of Stinney’s case awaits a court ruling on whether autopsies in South Carolina should be considered public information or medical records. Even so, the case has received widespread media attention, with local and national audiences urging the state to “rectify the wrong done so long ago…when miscarriages of justice against people like [Stinney] were so commonplace.”

Stephen Greenspan, Clinical Professor of Psychology at the University of Colorado, suggests that the increasing number of posthumous pardons reflects “a growing understanding from recent cases that innocent people are frequently convicted, and sometimes executed, often as a result of unfair and biased trial processes or prosecutorial and police misconduct.”

There’s something disquieting, though, about the rhetoric surrounding cases like Stinney’s and those of the Scottsboro Boys. Pronouncing “case closed” on the racism of the early- and mid-twentieth-century justice system provides a way to distance ourselves from the past, even when (as the Michigan report shows) courts today routinely put black people in prison with little evidence or provocation. While I don’t deny that a formal pardon might provide some measure of peace for the families of the exonerated, it’s a relatively low-stakes measure when compared to pardoning people who are still alive. As Clive Stafford Smith asks, “would our ‘modern courts’ have let Stinney off just because he did not commit the crime? The simple, sorry answer is, no.” Judicially sponsored racism is not over. Institutionalized practices that lead to the wrongful imprisonment of a disproportionate number of African American men are undeniably still with us.

I’d be curious to look at more complete data regarding posthumous pardons in the United States. Specifically, I wonder what motivates state (and national) officials to issue them, as well as how the demographics compare to pardons of those who are still living.

For further reading:

Alexander, Michelle. The New Jim Crow: Mass Incarceration in the Age of Colorblindness. New York: The New Press, 2010.

Bui, Yen, and Jeanette L. Jordan. “Amnesty and Pardon.” In The Encyclopedia of Criminology and Criminal Justice. Blackwell Publishing Ltd, 2014. http://onlinelibrary.wiley.com/doi/10.1002/9781118517383.wbeccj056/abstract.

Greenspan, Stephen. “Posthumous Pardons Granted in American History” (March 2011). http://www.deathpenaltyinfo.org/documents/PosthumousPardons.pdf.

Reimagining Pedagogy at the Penn Social Impact House

Mapping out ideas during an afternoon session at the Penn Social Impact House.

Sponsored by the University of Pennsylvania, the Penn Social Impact House is a two-week immersion fellowship for students and recent alumni seeking to test innovative ideas for changing the world outside their classrooms. The students’ ventures span a wide range of interests: one fellow, for example, is developing an online learning platform (similar to a MOOC) for struggling high school students; another is designing a mobile app to teach financial literacy; and a third is creating a comprehensive program to engage at-risk youth and teach them computer programming.

While course credit is available, the Social Impact House consists largely of extracurricular work in experiential learning, design thinking, and practical applications of the fellows’ own ideas. Dozens of mentors, including faculty members in a number of disciplines, nonprofit innovators, and researchers, volunteer their time to visit the institute.

In some ways, the structure of the Social Impact House adheres to basic tenets of the university setting. Many mentors are accomplished faculty members from a number of universities, and a comprehensive curriculum designed in advance culminates in fellows’ final presentations of their work. Yet the overall experience is dramatically different from even the most interdisciplinary classroom. The diverse group of fellows includes undergraduates and graduate students earning degrees in subjects ranging from architecture to computer science, as well as a few recent alumni from across the university. Daily sessions are structured around a series of collaborative experiential learning exercises. A visit to a local farmers market, for example, spurred fellows to contextualize their own projects within the communities and populations they serve.

During downtime, fellows led skill shares and taught each other elements of their specialties, which ranged from computer programming to design. Their multidisciplinarity meant that conversations ranging from history, politics, and international relations to marketing and financial strategy were not uncommon. By the end of this year’s institute, fellows had progressed enormously both in developing their ventures and in nurturing their sense of global context.

We cannot, of course, typically sequester our university students in a house together for the entirety of a course. So what might we learn from an experience like the Social Impact House? Here are a few of my takeaways:

  1. Mix things up. Putting a diverse group of students together—both in terms of experience and academic interests—allows them to work toward a common goal (in this case, growing their ventures) from completely different perspectives. In my experience, opportunities for mentorship, collaboration, and innovation thrive in this sort of environment.
  2. Bring experiential learning to the forefront. For the Penn fellows, a hands-on problem-solving experience was much more engaging than a case study. For historians, experiential components might consist of role plays, tie-ins with current events, and field trips.
  3. Cultivate community. Living together might not be possible (or desirable!) for the duration of a university course, but there are countless other ways to nourish community in a group learning environment. Feelings of shared experience and vulnerability are more likely to emerge from hands-on experience than passive learning. Service-based or place-grounded components strengthen the ties of class community as well.

The community-based incubator could serve a number of other purposes within a university. What if this model were implemented for honors students or majors during the summer or a holiday break? What if students from across the university could come together in this manner to devise solutions to local and international problems?

I’m grateful to have had the experience of being a staff member at the Social Impact House, and I’m looking forward to integrating some of these strategies into my classroom this semester.

Benefits of Digital Experimentation

I just came across this write-up by Chris Cantwell of the Religion in American History blog. The post refers to Lincoln Logarithms, a project I worked on last semester in conjunction with a few of my graduate student colleagues. Cantwell hits on a couple of the big benefits of doing experimental digital work: people can access it quickly, and they often don’t care that I’m not a tenured professor.

The project’s real strength stems from the fact that it was conceived, researched, and published primarily by DiSC’s three graduate student fellows. According to Emory’s widely circulated press release, these graduate students worked in conjunction with the university’s librarians and academic staff not only to analyze the library’s sermon collection, but also to evaluate the reliability of new methods in digital humanities research. And they did this all while getting significant media attention and exposure. Quick: can you name the last graduate student seminar paper to be written up by one of the largest professional organizations in the humanities?

This potential to connect student work with the wider world is one of the most exciting, and perhaps the most novel, opportunities the insurgent digital humanities affords. At every level, collaborative digital projects like Lincoln Logarithms have the potential to make our students not just passive receptors of scholarly knowledge, but active contributors to its production as well.


#tooFEW Feminists Engage Wikipedia

Wikipedia is the fifth most visited site on the web, and it’s estimated that over 90% of its editors are men. I have engaged with Wikipedia nearly every day for the past several years, but until a few days ago I had never made a single Wikipedia edit. Not one.

Women are much less likely to actively create Wikipedia content than men. Does the dearth of female editors mean that everything on Wikipedia is super-sexist? No. But the effects are many: fewer articles on notable women, lower-quality articles about feminist topics, and a general lack of investment by women in a site that is created entirely by its users.

New Wikipedia editors gather at Emory University’s Digital Scholarship Commons for the Atlanta, GA location of #tooFEW.

I’m reasonably tech-savvy, and because of my academic research I have quite a bit of specialized knowledge about little-known but important women. Yet I’ve never taken the time to create Wikipedia articles for them. Consequently, I was thrilled to help organize #tooFEW, a Wikipedia edit-a-thon that took place on Saturday at several locations across the United States. In partnership with folks at Scripps College, Duke University, Barnard College, and other locations, we gathered for a day of knowledge-building and Wikipedia editing. Our group in Atlanta included librarians, undergraduates, graduate students, faculty, and community members. Our edits ranged from comma-splice removal to the creation of brand-new articles.

We communicated with many of the long-distance editors via Twitter and a chatroom, asking questions and voicing difficulties. We listed the articles we were trying to improve via a Wikipedia page, and several organizers created a series of terrific guides (some of which are listed at the end of this post).

My edits may not have made a difference to anyone but me. And as with any user-created site, there’s a chance that none of them will stick. Now, though, I have a Wikipedia username, I know how to navigate the editing interface, and I have connected—virtually and in person—with hundreds of other Wikipedia editors who share my interests. I’m much more likely to edit in the future, and I feel a bit more like a steward of Wikipedia than merely a passive consumer of content.

#tooFEW Guides and Commentary

Occupy Wall Tweet

I’m happy to report that Tweeting #OWS, a project I worked on at the Emory Digital Scholarship Commons (DiSC), has been nominated for a 2012 DH award! The DH awards recognize talent and expertise in the worldwide digital humanities community and are nominated and voted for entirely by the public. Tweeting #OWS is nominated in the “Best DH visualization or infographic” category alongside some pretty amazing projects.

For the project, we began with a corpus of 10 million Occupy-related tweets and whittled them down by location and hashtags used. I developed a series of visualizations that chart trending topics, hubs of twitter activity across the United States, and temperature correlations.
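
For readers curious about the mechanics, here is a minimal sketch of that whittling-down step in Python with pandas. The file name, column names, and hashtag list are illustrative assumptions, not the project’s actual data format.

```python
# Minimal sketch of filtering an Occupy-related tweet corpus by hashtag and
# location, then counting daily activity. Column names and the input file
# are assumptions for illustration, not the Tweeting #OWS data layout.
import pandas as pd

# Each row: timestamp, tweet text, and a user-supplied location string.
tweets = pd.read_csv("ows_tweets.csv", parse_dates=["created_at"])

# Keep tweets that use one of the Occupy-related hashtags we care about.
hashtags = {"#ows", "#occupywallstreet", "#occupy"}
has_tag = tweets["text"].str.lower().apply(
    lambda text: any(tag in text for tag in hashtags)
)

# Keep tweets whose location field mentions New York.
in_nyc = tweets["location"].str.contains("new york|nyc", case=False, na=False)

nyc_ows = tweets[has_tag & in_nyc]

# Count tweets per day, the series that feeds the trending/activity charts.
daily_counts = nyc_ows.set_index("created_at").resample("D").size()
print(daily_counts.head())
```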

Voting is open to the public through February 17.

Tweeting #OWS
The DH Awards

Life Sentence: Music and Wrongful Incarceration Come Together

I’m pleased to announce the launch of a site I developed in conjunction with a team of amazing musicians and nonprofit innovators. Life Sentence: The Album is a collection of music based on the life of Clarence Harrison, who spent 18 years in a Georgia prison for a crime he didn’t commit. Working on this project has meant nourishing my main academic interests—incarceration and technology—while working with some seriously inspiring folks. I’ll continue to add content to the site in the coming months, including interviews with Clarence and opportunities to volunteer. All proceeds from Life Sentence: The Album will benefit the Georgia Innocence Project, a nonprofit that works to free innocent people imprisoned in Georgia and Alabama.

Check out Life Sentence: The Album on the web, Facebook, and Twitter.

On the Topic of Topic Modeling: NEH/MITH Workshop Wrap-up

Map of Twitter activity around the workshop (image courtesy of @lmrhody).

Overview

Saturday’s Topic Modeling for the Humanities Workshop at MITH was a terrific opportunity to zero in on the mechanics, methods, and applications of topic modeling. In light of recent online conversations about possible overuses and misapplications of MALLET, Saturday’s talks (geared towards humanists) provided some much-needed insight regarding when, why, and how topic modeling might help humanities research. My best takeaway was this helpful reminder: for humanists, topic modeling is not an end in itself; it is a means to test hypotheses, search for patterns, and enrich scholarly research. Perhaps most importantly, I finally feel confident in my pronunciation of Latent Dirichlet Allocation (it’s dee-rish-lay).

According to the workshop’s organizers, 75% of the 55 or so people in attendance are actively working on projects involving topic modeling. This includes my own historical study of prison newspapers. But in all honesty, my approach to topic modeling so far has gone a bit like this: “Whee! I’m plugging text into MALLET! I have results that look like ‘real’ data! But what did I just do? And what do I do with my results?”

Topic modeling as a dolly zoom. Thanks @mcburton.

Talking through the process of topic modeling and interpreting the results with the humanists and computer scientists at the workshop helped demystify the more opaque elements of LDA. Two of the most helpful analogies were topic modeling as a “dolly zoom” into portions of the text (@mcburton) and MALLET output as an index to a huge, mostly unread book (@patrick_mj).
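
To make the “plugging text into MALLET” step a bit more concrete, here is a minimal LDA sketch in Python using gensim rather than MALLET itself (the toolkit discussed at the workshop); the toy documents are invented for illustration and are far too small for a real model.

```python
# Minimal LDA sketch using gensim instead of MALLET; the tiny toy corpus
# below is invented and much too small for meaningful modeling.
from gensim import corpora, models

documents = [
    "the warden announced new rules for the prison library",
    "prisoners published a newspaper inside the prison walls",
    "the library added books donated by the local church",
    "the newspaper covered parole hearings and prison labor",
]

# Tokenize (a real pipeline would also strip stop words and rare terms).
texts = [doc.lower().split() for doc in documents]

# Map tokens to integer ids and convert each document to bag-of-words counts.
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# Fit the model; num_topics and passes are settings you tune, not givens.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=50)

# Each topic is a weighted list of co-occurring words, not a labeled theme.
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)
```

The printed word lists are exactly the “index to a huge, mostly unread book”: they tell you where to look, but interpreting (or ignoring) them is where the humanist’s judgment comes in.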

A head-spinning amount of information was presented during the daylong workshop, and I hope to see recaps from some of the prolific bloggers and tweeters in attendance in the coming days. Here’s a very abbreviated version of what I found most helpful from each speaker:

  • Matt Jockers, Thematic Change and Authorial Innovation in the 19th Century Novel — I’ve often been asked questions about my project like “Isn’t ignoring some topics to focus on others skewing your data?” or “If your results are different every time you run the model, how is your data ‘real’?” Matt did a great job pointing out how the essence of scholarly work already involves placing attention on some themes at the expense of others. In other words, it is perfectly okay to ignore some topics. When I develop a topic model, I am injecting my assumptions about what matters into the construction and interpretation of the model.
  • Rob Nelson, Analyzing Nationalism and Other Slippery ‘isms’ — A topic is a list of co-occurring words, but the appearance of a topic can mean many different things. The analysis of wartime rhetoric in the North versus the South shows just how crucial historical context is to the model’s analysis.
  • Jordan Boyd-Graber, Incorporating Human Knowledge and Insights into Probabilistic Models of Text — Topic models are “willfully ignorant” of the meanings of words, which can be both good and bad. We already insert ourselves into the model simply by choosing what to focus on. However, if we use human engagement to shift how topics are defined, we can get “better” topics as a result.
  • Jo Guldi, Paper Machines: A Tool for Analyzing Large-Scale Digital Corpora — As a point of entry into large volumes of archival text, topic modeling can tell us where to start looking. Specifically, topic modeling can be a useful first step in identifying patterns, breaks, and archival dissent. In short, topic models can provide critical distance in a way that leafing through archival pages can’t.
  • Chris Johnson-Roberson, Paper Machines: A Tool for Analyzing Large-Scale Digital Corpora — The GUI offered by Paper Machines (the plugin for Zotero) can help us sort through archives that contain more data than we could ever read. Paper Machines is a great example of how a GUI can “democratize” topic modeling and data visualization for folks uncomfortable with the command line and the underlying math.
  • David Mimno, The Details: How We Train Big Topic Models on Lots of Text — “Computer-assisted humanities” might be a better term than “digital humanities” in terms of the type of scholarship we should aim to produce. Also, when accompanied by a mind-blowingly succinct explanation, it might actually be possible for a humanist like me to understand the math behind Gibbs Sampling.
  • David Blei, Topic Modeling in the Humanities Roundtable Discussion — If you’re working with a corpus spanning a long range of time (i.e., the better part of a century or longer), the language that makes up your topics is going to change. Dynamic topic models can account for this problem, offering quite a few advantages over trying to re-model your topics over smaller periods of time.

Going Forward

The final workshop Q&A addressed issues of cross-fertilization, including how humanists and computer scientists can effectively collaborate on topic modeling projects. The consensus seemed to be that computer scientists want clean, interesting corpora to work with, and—I hope this goes without saying—should not be viewed simply as executors of humanists’ projects.

If I had to create my ideal environment in which to move forward with topic modeling projects, it would include:

  1. A more hands-on follow up event where we could workshop our projects;
  2. A summer statistics institute for humanists;
  3. Detailed documentation for guiding data from the input through the visualization phase.

Digitization and the “Canon”

As a final note, here is some commentary on a topic that came up in the Twitter backchannels but not in the workshop itself. Most of the topic modeling projects we heard about make use of already digitized newspapers and literary works (with the exception of Jo Guldi’s archival work). “Because it was already digitized” seems to be a go-to reason for corpus selection in a lot of topic modeling projects. Funding for digitization is much less readily available right now than funding for digital innovation, so the self-selection evident in topic modeling corpora isn’t likely to change anytime soon. The push to include non-canonical texts in digital humanities work is severely hampered as a result.

Many thanks to Jen Guiliano, Travis Brown, and the workshop presenters for all of their hard work. I’m looking forward to more topic modeling fun at this month’s Chicago Colloquium on Digital Humanities and Computer Science.

Further Reading

Workshop Zotero archive
Slides from David Mimno’s Workshop Presentation (PDF)
Collaborative Google Doc with Workshop Notes (courtesy of Brian Croxall)
Thomas Padilla’s AYBABTU or Topic Modeling in the Humanities

Supercharge Your Zotero Library Using Paper Machines: Part II

A version of this post originally appeared on the Emory DiSC blog.

In my last post I discussed how Paper Machines, the text analysis add-on for Zotero, can help you visualize your research. Some of Paper Machines’ features are pretty self-explanatory, but others are less intuitive. Here I’ve tried to expand on some of the potentially complicated aspects of Paper Machines to supplement the documentation available on the developer’s site.

Getting Started
Paper Machines is available for Zotero Standalone and Mozilla Firefox. To install the Paper Machines add-on in Firefox, download the XPI file, then load it by navigating to tools → add-ons → install add-ons → get add-on from file. In Zotero Standalone, navigate to tools → add-ons → gear icon → install add-on from file.

Once you’ve installed the add-on, you can adjust various default settings.


You can analyze the contents of your Zotero library by right clicking on any collection and selecting “Extract Text for Paper Machines.” Once the text is extracted, you have the option of running various processes and viewing the corresponding visualizations.

Word Cloud
Paper Machines’ default word cloud is automatically displayed at the lower left corner of the Zotero pane. You can also compare sets of text using multiple word clouds, which can be divided either chronologically or by subcollection. This option requires that you select among multiple filter methods:

  • None produces a simple word cloud based on raw frequency.
  • Tf*idf eliminates words that are deemed unimportant to the corpus.
  • Dunning’s log-likelihood measures the probability of a word occurring in one corpus of text versus another (a small sketch of the calculation follows this list).
  • Mann-Whitney U assesses how consistently a given term appears in one corpus versus another. Here’s a good post about the differences between Dunning’s log-likelihood and Mann-Whitney U.
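
Since these filter methods are general statistics rather than anything specific to Paper Machines, a small sketch may help. Below is the commonly used two-term form of Dunning’s log-likelihood (often reported as a “keyness” score) in Python; whether this matches Paper Machines’ internal implementation exactly is an assumption, and the counts in the example are invented.

```python
# Rough sketch of Dunning's log-likelihood (G^2) for a single word compared
# across two corpora; the example counts are made up for illustration.
import math

def dunning_log_likelihood(count_a, total_a, count_b, total_b):
    """G^2 statistic for one word's frequency in corpus A versus corpus B."""
    # Expected counts if the word were used at the same rate in both corpora.
    rate = (count_a + count_b) / (total_a + total_b)
    expected_a = total_a * rate
    expected_b = total_b * rate

    g2 = 0.0
    for observed, expected in ((count_a, expected_a), (count_b, expected_b)):
        if observed > 0:  # 0 * log(0) is treated as 0
            g2 += observed * math.log(observed / expected)
    return 2 * g2

# e.g. "occupy" appears 120 times in a 50,000-word corpus
# versus 15 times in a 60,000-word corpus
print(dunning_log_likelihood(120, 50_000, 15, 60_000))
```

Higher scores indicate words whose frequencies differ more sharply between the two corpora, which is what makes the measure useful for comparative word clouds.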

Topic Modeling
Using the MALLET toolkit, Paper Machines can determine what topics (derived from groups of words that appear together) arise most frequently in your text. Topics can be charted over time (in days), within specific subcollections, or by mutual information. You can also adjust the topic modeling settings, including:

  • Tf*idf (See above.)
  • Porter stemming modifies words by removing their suffixes. “Worked” and “working,” for example, would both be counted under the word “work” (see the short example after this list).
  • JSTOR for Data Research uses data from JSTOR to supplement the data in your Zotero library. You must have a JSTOR account to use this function.
  • Number of Iterations (under “Advanced Options”): Paper Machines defaults to 1000; the larger the number of iterations, the longer the sampling will take; smaller numbers will produce lower-quality models.
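
As a quick illustration of what Porter stemming does to your tokens, here is a short sketch using NLTK’s PorterStemmer (this assumes the nltk package is installed; it only demonstrates the effect of stemming, not how Paper Machines applies it internally).

```python
# Illustration of Porter stemming using NLTK (assumes nltk is installed).
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["worked", "working", "works", "workers"]:
    print(word, "->", stemmer.stem(word))
# worked -> work, working -> work, works -> work, workers -> worker
```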

Right click on a collection for the Paper Machines menu.

There are a number of other adjustable fields under “Advanced Options,” but the default settings should work well for almost everyone. If you’re interested in delving into the mechanics of topic modeling, I’d suggest starting with this post from The Programming Historian 2, as well as “A Whirlwind Tour of Automated Language Processing for the Humanities and Social Sciences,” a book chapter by Douglas Oard.

Certain Paper Machines functions—for example, Periodical PDF Import and Classifier—are still in the experimental phase, so I’ll explore them after they’ve been updated further. Be sure to select “automatically update” under the add-on preferences so you can benefit from the new functionality that’s continually being added to Paper Machines.

Occupy Wall Street: Tweets vs. Temperature

When I set out to compare two elements in a visualization, 9 times out of 10 the results are underwhelming. But when I created this graph for the Tweeting #OWS project, the extent of the correlation between temperature and Twitter activity was a huge surprise. For the inner area, I used daily Occupy Wall Street-themed Twitter data from New York City. The outer area shows the average temperature in New York City over the same year.

The graph provokes some interesting questions. Why did heavily publicized OWS events seem to take place on especially warm days? Did people come out to protest because it was hot, or did they go inside to tweet about it for the same reason? Did police arrest people on hot days because they were less likely to reassemble?
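
For anyone who wants to try a similar comparison, here is a rough sketch of overlaying daily tweet counts and daily average temperatures with pandas and matplotlib. It is not the nested-area layout of the project graphic, and the file and column names are assumptions for illustration.

```python
# Sketch of overlaying daily tweet counts and daily average temperature;
# the input files and column names are assumptions, not the project's data.
import pandas as pd
import matplotlib.pyplot as plt

tweets = pd.read_csv("nyc_ows_daily_counts.csv", parse_dates=["date"], index_col="date")
temps = pd.read_csv("nyc_daily_temps.csv", parse_dates=["date"], index_col="date")

fig, ax_tweets = plt.subplots(figsize=(10, 4))
ax_temp = ax_tweets.twinx()  # second y-axis so the two scales don't fight

ax_tweets.fill_between(tweets.index, tweets["tweet_count"], alpha=0.6, label="OWS tweets")
ax_temp.plot(temps.index, temps["avg_temp_f"], color="tab:red", label="Avg. temp (F)")

ax_tweets.set_ylabel("Tweets per day")
ax_temp.set_ylabel("Average temperature (F)")
plt.title("OWS tweets vs. temperature, New York City")
plt.show()
```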