The New York Philharmonic has an excellent online archive of all of its concerts since 1842. This article uses that archive to investigate which composers tend to be programmed together.
Many datasets of composers tell us relatively little about them, so we sometimes have to guess details from the information available – such as the composer’s name. Forenames, for example, are often a good indicator of gender, as described in this previous article. Titles – associated with the church, aristocracy or royalty – can also reveal gender, and tell us about occupation or social class. This article looks at what names can tell us about nationality – based on a recent attempt to identify Italian composers among the many obscure and unknown names listed in the British Library’s music catalogue.
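A crude version of that kind of name-based guess can be sketched in a few lines. The suffix list and the example names below are purely illustrative assumptions for the sketch, not the criteria actually used against the British Library catalogue:

```python
# Illustrative only: flag surnames ending in suffixes common in Italian
# names. This suffix list is an assumption for the sketch, not the
# actual rule set used in the investigation.
ITALIAN_ENDINGS = ("ini", "etti", "elli", "oni", "ucci", "acci")

def looks_italian(surname: str) -> bool:
    """Crude heuristic: does the surname end in a typically Italian suffix?"""
    return surname.lower().endswith(ITALIAN_ENDINGS)

composers = ["Paganini", "Handel", "Zanetti", "Byrd"]
print([c for c in composers if looks_italian(c)])  # ['Paganini', 'Zanetti']
```

A real attempt would of course need a much richer rule set, and would still misclassify names that cross borders, which is part of what makes the exercise interesting.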
Deduplication is an important, though often messy and time-consuming, part of many statistical investigations. It is usually required when data comes from several different sources, to identify all of the records that actually refer to the same thing. For example, I have recently been deduplicating the names appearing in the ‘women composers’ sources listed in this previous article. Deduplication may also be needed where several publications of the same work are described in different ways in a library catalogue.
I have recently been working on extracting data on women composers from the various sources listed in this previous article. The first source on that list is a scanned copy of a French translation of a book – Les femmes compositeurs de musique – compiled in 1910 by Otto Ebel. It is available at archive.org here. Although I’ve not had great success in the past in extracting usable data from scanned books, this appears to be a reasonably tidy scan of Ebel, which looks like a useful source on women composers, so I thought I would give it a go.
Triangulation is a research technique that involves looking at the same thing from two different perspectives. In surveying, it enables positions and distances to be calculated by measuring angles from two locations. In the social sciences, it can increase the reliability of conclusions if they are found by two (or more) different methods. And in statistical historical musicology, looking for the same works or composers in two or more datasets can tell us a lot about the characteristics of the datasets, and about the works’ patterns of survival or dissemination.
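In its simplest form, this kind of dataset triangulation is just set comparison. The two lists below are made up for illustration; real datasets would first need the deduplication and name-matching discussed elsewhere on this blog:

```python
# Two made-up composer lists standing in for real datasets.
dataset_a = {"Beethoven", "Mozart", "Chaminade", "Farrenc"}
dataset_b = {"Beethoven", "Mozart", "Haydn", "Chaminade"}

in_both = dataset_a & dataset_b   # known to both sources
only_a = dataset_a - dataset_b    # survive only via source A
only_b = dataset_b - dataset_a    # survive only via source B

print(sorted(in_both))   # ['Beethoven', 'Chaminade', 'Mozart']
print(sorted(only_a))    # ['Farrenc']
print(sorted(only_b))    # ['Haydn']

# The overlap rate is one crude measure of how the datasets relate.
print(f"overlap as a share of A: {len(in_both) / len(dataset_a):.0%}")  # 75%
```

What falls into `only_a` and `only_b` is often the interesting part: it hints at each source's coverage and biases.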
Often in statistical analysis we need to select things at random. For example, if it is impractical to work with a complete dataset, the only option might be to use a random sample. The science of statistics tells us how to analyse a sample in order to reach conclusions about the entire dataset, and gives us ways to calculate margins of error based on the size of the sample. But I digress.
So, how might we pick a random composer?
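The mechanics of the draw itself are straightforward once you have a list to draw from; the hard part, as the question implies, is deciding what that list should be. A minimal sketch, with a made-up list standing in for a real dataset:

```python
import math
import random

# A made-up list standing in for a full composer dataset.
composers = ["Byrd", "Tallis", "Purcell", "Handel", "Arne", "Boyce"]

random.seed(42)                       # fixed seed so the draw repeats
pick = random.choice(composers)       # one composer, uniformly at random
sample = random.sample(composers, 3)  # three composers, without replacement
print(pick, sample)

# Margin of error for a proportion p estimated from a sample of size n,
# at roughly 95% confidence: 1.96 * sqrt(p * (1 - p) / n).
p, n = 0.5, 100
print(f"±{1.96 * math.sqrt(p * (1 - p) / n):.3f}")  # ±0.098
```

Note that `random.choice` is only uniform over the list you give it, so any bias in how the list was compiled carries straight through to the "random" composer.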
I have recently been trying to collect data from the Listening Experience Database (LED) in order to put together a proposal for a conference paper. The LED is a nicely constructed database using linked open data and a structure based on something called the ‘Semantic Web’. Unlike traditional databases, which have a hierarchical ‘tree’ structure, the Semantic Web is a true ‘network’, where anything can be linked to anything else. The LED, for example, includes links to data in a number of other databases. Have a look at the LED and follow a few links and you will see what this means – a very rich and flexible means of linking data together.
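The idea behind linked data can be illustrated in miniature: facts are stored as subject–predicate–object triples, so anything can point at anything else. The names below are invented for the example and are not the LED's actual vocabulary:

```python
# Linked data represents facts as (subject, predicate, object) triples,
# forming a network rather than a tree. All names here are invented
# for illustration, not taken from the LED.
triples = [
    ("experience:1", "heard", "work:eroica"),
    ("work:eroica", "composedBy", "composer:beethoven"),
    ("experience:1", "recordedIn", "source:diary_1804"),
]

def objects(subject: str, predicate: str) -> list:
    """Follow one kind of link outwards from a subject."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("experience:1", "heard"))          # ['work:eroica']
print(objects("work:eroica", "composedBy"))      # ['composer:beethoven']
```

Chaining such lookups is what makes 'following a few links' across databases possible: the object of one triple becomes the subject of the next.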
In what ways can statistical techniques be used to investigate topics in historical musicology? I think there are four main approaches – hypothesis testing, quantification, modelling and exploration. Their use depends on the topic, the data, and the type of question you are trying to answer.
These four types often overlap. It is hard to do modelling without some exploration and quantification, for example. Also, after you have spent so long collecting the data, cleaning it, and getting it into a form for statistical analysis, why not squeeze the most out of it and do some general exploration after testing your hypotheses?