Kieran Healy posted last year about “sleeping beauties” in philosophy: papers that went several years without receiving any citations but ended up accumulating many. The pattern is unusual; most papers either begin attracting citations soon after publication and continue to do so, or are never cited at all. I suspect literary studies and history are less paper-driven than philosophy, and I would encourage everyone to read his post for more context on citations in the humanities.
Two stories caught my attention yesterday. The first was a review of some recent studies of citation practices by field, broadly considered. The claim that alarmed a number of people on Twitter was that “82%” of humanities scholarship is never cited. I pointed out that it is a mistake to assume that “never cited” means “never read.” That anyone would even make this inference is quite mysterious to me. Let me explain: this semester, I have been teaching, for the first time, a course on the Victorian novel.
A problem shared by many of the co-citation graphs I discussed in the last post is that they are too dense to be easily readable. I created the sliders as a way of alleviating this problem, but some of the data sets remain too dense at any citation threshold. Viewing only one of the communities at a time seemed like a plausible solution, but I was far from sure how to implement it in d3.
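Whatever the d3 side looks like, the underlying operation reduces to subsetting the graph data before it is drawn. Here is a minimal sketch in Python of that filtering step, assuming the nodes carry a community label from a prior community-detection pass; the function name, field names, and the toy film-studies nodes are all illustrative, not the actual data.

```python
# Hypothetical sketch: restrict a co-citation graph to a single community.
# The node/link dictionaries mirror the JSON that d3's force layout consumes;
# the "community" attribute is assumed to come from earlier community detection.

def filter_by_community(nodes, links, community):
    """Keep only nodes in one community, plus the links whose
    endpoints both survive the filter."""
    kept = {n["id"] for n in nodes if n["community"] == community}
    sub_nodes = [n for n in nodes if n["id"] in kept]
    sub_links = [l for l in links
                 if l["source"] in kept and l["target"] in kept]
    return sub_nodes, sub_links

nodes = [
    {"id": "Mulvey 1975", "community": 0},
    {"id": "Bordwell 1985", "community": 1},
    {"id": "Doane 1987", "community": 0},
]
links = [
    {"source": "Mulvey 1975", "target": "Doane 1987"},
    {"source": "Mulvey 1975", "target": "Bordwell 1985"},
]

sub_nodes, sub_links = filter_by_community(nodes, links, 0)
# Community 0 keeps two nodes and the one link between them.
```

Dropping cross-community links along with the hidden nodes is the design choice that keeps the force layout stable: d3 will otherwise complain about links whose endpoints no longer exist.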
I’ve created several new co-citation graphs recently. While I enjoy looking at the visualizations, I haven’t yet analyzed any of them thoroughly. The film studies network was intriguing to me for several reasons, and I’m going to explore it now in more detail.
I downloaded just over 12K articles from various film studies journals in Web of Science. The journals are Sight and Sound; Film Comment; Literature/Film Quarterly; American Film; Cinema Journal; Screen; Historical Journal of Film, Radio, and Television; Journal of Popular Film & Television; Wide Angle; Film Quarterly; Journal of Film and Video; Film Criticism; and Quarterly Review of Film & Video.
I’ve written here and here about creating co-citation networks in D3 from Web of Science data. My first experiment, described above, was creating a threshold slider. I next wanted to try to create a chronological slider that would let you adjust the date range of the citations displayed in the network.
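Before the slider can do anything in the browser, the citations need to be filterable by year. A minimal sketch of that windowing step, in Python: the field names and the sample references are illustrative, and in practice the years would be parsed out of the Web of Science records rather than typed by hand.

```python
# Hypothetical sketch of the data side of a chronological slider: given
# cited references tagged with publication years, keep only those inside
# the slider's current [start, end] window.

def filter_by_year(references, start, end):
    """Return the references published within the inclusive year range."""
    return [r for r in references if start <= r["year"] <= end]

refs = [
    {"cite": "Eisenstein 1949", "year": 1949},
    {"cite": "Metz 1974", "year": 1974},
    {"cite": "Mulvey 1975", "year": 1975},
]

windowed = filter_by_year(refs, 1970, 1980)
# Only the two 1970s references survive the 1970–1980 window.
```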
There are doubtless many ways of going about doing this, and I’m reasonably sure that the method I’m going to describe is far from ideal.
After reading Kieran Healy’s latest post about women and citation patterns in philosophy, I wanted to revisit the co-citation graph I had made of five journals in literary and cultural theory. As I noted, one of these journals is Signs, which is devoted specifically to feminist theory. I didn’t think that its presence would skew the results too much, but I wanted to test it. Here are the top thirty citations in those five journals:
I wanted to modify this script by Neal Caren to create an adjustable graph that lets you control the citation threshold for the nodes that appear on it. If, for example, you want to see only those nodes with twenty or more citations, you can move the slider to that value, and the data will update automatically. I have created three of these: Modernist Journals, Literary Theory, and Rhetoric and Composition.
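The logic behind the threshold is simple to state in Python terms: count how many times each work is cited, then keep only the works at or above the slider's current cutoff. The sketch below is illustrative, not Caren's actual code, and the cited works are invented.

```python
# Hypothetical sketch of the threshold slider's filtering logic:
# tally citation counts, then drop everything below the cutoff.
from collections import Counter

def apply_threshold(cited_works, threshold):
    """Return {work: citation_count} for works cited at least `threshold` times."""
    counts = Counter(cited_works)
    return {work: n for work, n in counts.items() if n >= threshold}

cited = ["Derrida 1967"] * 25 + ["Butler 1990"] * 20 + ["Said 1978"] * 5
visible = apply_threshold(cited, 20)
# At a threshold of twenty, only Derrida and Butler remain visible.
```

In the d3 versions linked above the same comparison runs in the browser on the slider's `input` event, so no round trip to the server is needed.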
I’ve been interested in humanities citation analysis for some time now, though I had been somewhat frustrated in that work by JSTOR pulling its citation data from its DfR portal a year or so ago. It was only a day or two ago, with Kieran Healy’s fascinating post on philosophy citation networks, that I noticed that the Web of Science database makes this information available in a relatively accessible format. Healy used Neal Caren’s work on sociology journals as a model.
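The counting at the heart of such a network is straightforward once each article’s reference list has been extracted (in Web of Science exports the cited references live in the CR field): two works are co-cited whenever they appear together in the same article’s references. A minimal sketch, with invented reference lists standing in for the parsed records:

```python
# Hypothetical sketch of co-citation counting: every pair of works that
# appears together in one article's reference list gets its tally bumped.
from collections import Counter
from itertools import combinations

articles = [
    ["Healy 2013", "Caren 2012", "Moretti 2005"],
    ["Healy 2013", "Moretti 2005"],
    ["Caren 2012", "Moretti 2005"],
]

pairs = Counter()
for refs in articles:
    # sorted(set(...)) deduplicates within an article and makes the
    # pair (a, b) canonical regardless of citation order.
    for a, b in combinations(sorted(set(refs)), 2):
        pairs[(a, b)] += 1
```

The resulting tallies become the edge weights of the network, and the threshold slider described elsewhere on this site simply hides edges whose tally falls below the cutoff.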
I have been interested in bibliometrics for some time now. Humanities citation data has always been harder to come by than that of the sciences, largely because the importance of citation-count as a metric has never much caught on there. Another important reason is a generalized distrust and suspicion of quantification in the humanities. And there are very good reasons to be suspicious of assigning too much significance to citation-counts in any discipline.
I’ve been thinking a lot recently about a simple question: can machine learning detect patterns of disciplinary change that are at odds with received understanding? The forms of machine learning I’ve been using to test this, LDA and its dynamic variant, do a very good job of picking up the patterns you would expect to find in, say, a large corpus of literary journals. The model I built of several theoretically oriented journals in JSTOR, for example, shows much the same trends that anyone familiar with the broad contours of literary theory would expect.
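For readers who haven’t fit one of these models, here is a minimal sketch of the LDA workflow using scikit-learn’s implementation; this is not necessarily the toolkit behind the models described above, and the four-“document” corpus is purely illustrative.

```python
# Minimal LDA sketch with scikit-learn. The toy corpus below is invented;
# a real run would use thousands of full-text articles.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "structuralism narrative poetics genre",
    "ideology marxism class hegemony",
    "narrative genre poetics form",
    "class ideology hegemony critique",
]

vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(docs)          # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(dtm)           # one topic distribution per document

print(doc_topics.shape)  # (4, 2): four documents, two topics
```

Each row of `doc_topics` sums to one, which is what makes it possible to track a topic’s share of a journal over time, the kind of trend line the dynamic variant models directly.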