Tools: Kaggle test data set
This page is for the creation and organisation of a 240 image test data set for the Signs of Literacy Kaggle research competition. The competition will run from November 2018 to early January 2019.
Wikitable display of KaggleTestData as of Saturday, June 16th, 2018 at 21:12 (n=33)
Test data set
We will soon have 120 snippets and metadata from our English High Court of Admiralty data up on the MarineLives wiki. We will then add a further 120 snippets and metadata from the Alle Amsterdamser Akten (Dutch notarial archives).
In the short term, we need to submit a 240 graded snippet test data set to Kaggle, for Kaggle data scientists to play with. They will then provide feedback to us, before we create the much larger Kaggle training data set for the November Kaggle research competition. Our medium term solution, with the help of Picturae, will be to have 10,000 images up on a Picturae controlled IIIF server, with the snippets created in Recogito referring back to the IIIF server images.
We have created a simple semantic form, which displays an image snippet, displays its classification as a marke, initial, or signature, and allows input of the metadata fields: name, occupation, age, place of residence, and date of the source deposition or source notarial document.
Our semantic wiki enables all these snippets to be sorted by any aspect of the metadata, by their classification as marke, initial, or signature, and by the grading for sophistication of execution we choose to give them. We have created two sets of input metadata fields for four people - Colin Greenstreet, Dr Mark Hailwood, Mark Ponte and Dr Jelle van Lottum. One set of input fields is for a simple "simple, medium, sophisticated" tag; the second set is for a forced ranking of 1 to 40, with 1 as most sophisticated and 40 as least sophisticated.
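To picture what this sorting gives us, each snippet's metadata can be thought of as a flat record. The following Python sketch is purely illustrative - the records, field names, and values here are hypothetical, not drawn from the actual wiki data - but it shows the kind of sorting by any metadata field that the semantic wiki provides:

```python
from operator import itemgetter

# Hypothetical snippet records mirroring the wiki's metadata fields
snippets = [
    {"id": "HCA_001", "class": "marke", "name": "John Browne", "age": 34, "date": "1655-03-12"},
    {"id": "HCA_002", "class": "signature", "name": "Peter Ellis", "age": 28, "date": "1654-11-02"},
    {"id": "HCA_003", "class": "initial", "name": "Wm. Carter", "age": 41, "date": "1656-01-20"},
]

# Sort by any metadata field, e.g. deponent age or snippet classification
by_age = sorted(snippets, key=itemgetter("age"))
by_class = sorted(snippets, key=itemgetter("class"))
print([s["id"] for s in by_age])  # ['HCA_002', 'HCA_001', 'HCA_003']
```

In the wiki itself this sorting is done through semantic queries rather than Python, but the underlying idea - every snippet carries a uniform set of queryable properties - is the same.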
Once we have got the first 120 snippets up on the MarineLives wiki, we will grade the three classes of snippet (markes, initials, and signatures) by "sophistication of execution". Rather than attempting to prediscuss what this means between the graders, we will each independently think about what grading criteria would look like for markes, initials and signatures, and then grade the 120 snippets within the three classes (not attempting to compare markes, initials and signatures as classes in terms of sophistication, just doing the grading within each class).
We plan to grade in two ways:
Firstly, using our own criteria for sophistication of execution, we assign a "simple", "medium", or "sophisticated" tag to the markes, initials, and signatures within their class.
Secondly, again using our own criteria for sophistication of execution, we rank the snippets within their class by sophistication of execution, with 1 for the most sophisticated and 40 for the least sophisticated. We will NOT allow ties, so each snippet will have a different ranking number.
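Because the second grading method forbids ties, a grader's output can be checked mechanically: each snippet in a class must carry a distinct rank, and together the ranks must cover 1 to 40 exactly. This is a minimal sketch of such a check; the function and snippet names are hypothetical:

```python
def validate_forced_ranking(rankings, n=40):
    """Check that a forced ranking uses each rank 1..n exactly once (no ties)."""
    return sorted(rankings.values()) == list(range(1, n + 1))

# Hypothetical example: 40 snippet IDs mapped to the ranks one grader assigned
example = {f"snippet_{r:03d}": r for r in range(1, 41)}
print(validate_forced_ranking(example))  # True: every rank from 1 to 40 used once
```

The same check catches both a tie (two snippets sharing a rank) and a gap (a rank left unused), since either breaks the exact 1-to-40 coverage.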
Early next week we plan to hold a discussion amongst the graders about the definitions we have used and the criteria we have developed and applied, and to compare how consistent (or not) we as C21st humans were in grading C17th markes, initials and signatures.
It will be interesting to see what process the graders develop to do the grading, and not just the grading criteria and results. Comparing 40 markes, initials, or signatures is probably just manageable: with only 40 snippets to grade, we could if necessary paste them all onto a PowerPoint page and shuffle them round until we have them in a grading order that satisfies us. But that will clearly not work for 10,000 images.
We are still working on the idea of using conjoint analysis to present graders with random binary comparisons of markes, initials and signatures, and to allow input of a "more sophisticated/less sophisticated" binary choice. This method would enable us to cope with the forced ranking of 10,000 snippets, and would also lend itself to working with significant numbers of volunteers on a semi-automated basis to accumulate grading data.
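One standard way to turn many random binary "more sophisticated / less sophisticated" judgments into a full ranking is an Elo-style rating update, as used in chess: every snippet starts with the same rating, each comparison nudges the winner up and the loser down, and the final ratings induce a ranking. This is only a sketch of that idea, not a method we have committed to, and the snippet names and comparison outcomes below are invented for illustration:

```python
def elo_update(ratings, winner, loser, k=32):
    """Adjust Elo-style ratings after one binary 'more sophisticated' judgment."""
    ra, rb = ratings[winner], ratings[loser]
    expected_win = 1 / (1 + 10 ** ((rb - ra) / 400))
    ratings[winner] = ra + k * (1 - expected_win)
    ratings[loser] = rb - k * (1 - expected_win)

# Hypothetical snippets, all starting at the same rating
snippet_ids = ["marke_01", "marke_02", "marke_03"]
ratings = {s: 1000.0 for s in snippet_ids}

# Each pair is (judged more sophisticated, judged less sophisticated)
comparisons = [("marke_02", "marke_01"), ("marke_02", "marke_03"), ("marke_01", "marke_03")]
for winner, loser in comparisons:
    elo_update(ratings, winner, loser)

# Rank from most to least sophisticated by final rating
ranking = sorted(ratings, key=ratings.get, reverse=True)
print(ranking)  # marke_02 first: it was preferred in both of its comparisons
```

A scheme like this scales naturally to 10,000 snippets and many volunteers, since each judgment is a quick binary choice and no single grader ever has to hold the whole ranking in their head; more formal alternatives for the same task include Bradley-Terry models.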
Ideally, we would get a software developer interested in this. A solution built on the Mirador IIIF viewer would be attractive, since it would force users into a close reading of the images, and would benefit from the fact that Picturae will be putting all 10,000 source images for our Kaggle training data set onto an IIIF server.
We are also checking whether there is off-the-shelf conjoint analysis software we could use.