Today I attended the second one-day workshop in the CHASE Arts and Humanities in the Digital Age programme, which looked at Digital Images (and video), and was delivered by Matthew Sillence from the University of East Anglia. Of all the workshop topics this is the one with which I am least familiar, and that possibly will be the least relevant to my PhD. However, I found it interesting in that we learned about the sort of data that can be obtained from image and video files, and how that can be exported and rendered in a textual format – this made me feel like I was on much more familiar territory, and demonstrated how in fact image and video data could be manipulated and converted into Linked Data.
We spent much of the workshop using ImageJ software to measure various parameters of an image, and to calibrate those measurements against the real-world dimensions of the digitised object (you cannot measure a physical painting in pixels). Matthew suggested ways in which quantifying an image like this might apply to current research in Art History, such as determining the total area of a painting that is covered in gold leaf, and tracking how that proportion changes across multiple paintings over time.
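The gold-leaf idea can be sketched in a few lines of code. This is not how ImageJ itself does it (there you would use colour thresholding via the GUI or macro language); it is a minimal pure-Python stand-in, where an "image" is a nested list of (R, G, B) tuples and the "gold" colour range is an assumption I have invented for illustration.

```python
# Sketch: estimate what fraction of a digitised painting is covered in gold
# leaf by counting pixels whose colour falls inside an assumed "gold" range.
# The range below is illustrative, not a calibrated value.

def is_gold(pixel, lo=(180, 140, 0), hi=(255, 215, 120)):
    """Return True if an (R, G, B) pixel falls inside the assumed gold range."""
    return all(lo[c] <= pixel[c] <= hi[c] for c in range(3))

def gold_coverage(image):
    """Fraction of pixels classified as gold, from 0.0 to 1.0."""
    pixels = [p for row in image for p in row]
    return sum(is_gold(p) for p in pixels) / len(pixels)

# A tiny 2x2 stand-in image: two gold pixels, one dark ground, one blue robe.
tiny = [
    [(212, 175, 55), (212, 175, 55)],
    [(40, 30, 20), (30, 60, 180)],
]
print(f"Gold coverage: {gold_coverage(tiny):.0%}")  # prints "Gold coverage: 50%"
```

With the real-world calibration mentioned above (pixels per centimetre), the same fraction could be converted into an absolute area, which is what makes comparisons across differently sized paintings possible.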
Matthew also showed how ImageJ could be used to enhance images for inclusion in an article or thesis, by adding labelled arrows, or a scale bar to show, for example, the size of the physical painting. While I might not be exporting data from images, or measuring them, the ability to include additional information on images I use to complement my written work would be very helpful.
During the afternoon session, we learned how to batch process multiple images, measuring the average brightness value of each and then creating a visualisation that shows this value (as a grayscale rectangle) for each image in the set. Unfortunately this didn’t seem to work on Macs with the standard installation of ImageJ, but an alternative version can be downloaded. This looked very impressive, and is a good way of immediately displaying trends over a collection of images. While it is probably unlikely that I will end up using this in my research, it is definitely an interesting technique to be aware of, and hopefully will spring to mind if I end up working with an image collection in the future.
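The batch exercise reduces to a simple loop, which can be sketched without ImageJ at all. Here each "image" is again a nested list, this time of 0–255 grayscale values, and the montage of grayscale rectangles is approximated by a strip of ASCII shades (darkest character for darkest image); everything in the sketch is a stand-in for the graphical output the workshop produced.

```python
# Sketch of the batch-brightness exercise: compute the mean brightness of
# each image in a set, then render the whole set as a strip of shades,
# one character per image, from dark (' ') to light ('@').

def mean_brightness(image):
    """Average 0-255 grayscale value over all pixels."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def brightness_strip(images, shades=" .:-=+*#%@"):
    """One shade character per image, darker character = darker image."""
    chars = []
    for img in images:
        idx = min(int(mean_brightness(img) / 256 * len(shades)), len(shades) - 1)
        chars.append(shades[idx])
    return "".join(chars)

# Three tiny stand-in images: dark, mid-tone, bright.
images = [
    [[10, 20], [30, 40]],
    [[120, 130], [140, 150]],
    [[240, 250], [250, 240]],
]
print(repr(brightness_strip(images)))  # prints "' +@'"
```

Laying the shades out in chronological order is what makes a trend across the collection visible at a glance, which is exactly what the grayscale-rectangle montage achieved.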
We also looked at video annotation (coding) using Anvil, which allows the user to add extra ‘tracks’ to the relevant frame(s) of a video file, containing transcriptions, comments or terms from a specified vocabulary. These vocabularies are called coding schemes, and when annotating video in this way it is a good idea to use an agreed standard scheme where possible. The specification for the scheme is written in XML – again a familiar technology to me, and relevant to my existing work. Matthew advised that similar schemes exist for coding interview data, e.g. in NVivo, so this is definitely something to look into once I start the interview phase of my research.
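The track-and-vocabulary idea can be illustrated with a small XML fragment. To be clear, the element and attribute names below are invented for illustration and are not Anvil’s actual coding-scheme format; the point is simply that annotations are time-stamped spans drawn from a controlled vocabulary, which standard XML tooling can then validate and query.

```python
# Illustrative only: a made-up XML layout for time-coded video annotations,
# NOT Anvil's real specification. Each <annotation> marks a span of the
# video with a term from a fixed vocabulary, plus a free-text comment.
import xml.etree.ElementTree as ET

SCHEME = """
<track name="gesture" vocabulary="pointing,nodding,waving">
  <annotation start="0.0" end="2.5" term="pointing">Speaker indicates the map</annotation>
  <annotation start="2.5" end="4.0" term="nodding">Agreement with interviewer</annotation>
</track>
"""

track = ET.fromstring(SCHEME)
vocab = set(track.get("vocabulary").split(","))

for ann in track.findall("annotation"):
    term = ann.get("term")
    # Enforce the "agreed scheme": every term must come from the vocabulary.
    assert term in vocab, f"'{term}' is not in the agreed vocabulary"
    print(f"{ann.get('start')}-{ann.get('end')}s  {term}: {ann.text}")
```

Because the annotations end up as structured, machine-readable XML, this is also the point at which video data starts to look like something that could be exported and linked to other datasets.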
Overall, this was an interesting and enjoyable day. I had come expecting to learn a lot, due to the unfamiliarity of the subject matter, but was not expecting to be able to relate some of the points so closely to my own research. When looking at Linked Data resources in future, I will now be much more aware of the kind of data that can be exported from an image or video, and how this could potentially be connected to data from other sources.