Categories: Python, Wikidata, Wikimedia projects

Sunday Query : use SPARQL and Python to fix typographical errors on Wikidata

My turn to make a #SundayQuery! As Harmonia Amanda just said in her own article, I was about to explain how to write a Python script to fix the results of her query… But I thought I should start with another script, similar but shorter and easier to understand. The script for Harmonia is here, though.

On Thursday, I published an article about medieval battles, and since then, I have started fixing battle items on Wikidata. One of the most repetitive fixes is the capitalization of the French labels: as they have been imported from Wikipedia, the labels have an unnecessary capital first letter (“Bataille de Saint-Pouilleux en Binouze” instead of “bataille de Saint-Pouilleux en Binouze”).

The query

So first, we need to find all the items that have this typo:

http://tinyurl.com/jljf6xr
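
The query behind that link should look something like this (a reconstruction based on the explanations below):

    SELECT ?item ?label
    WHERE {
      ?item wdt:P31/wdt:P279* wd:Q178561 .
      ?item rdfs:label ?label .
      FILTER(LANG(?label) = "fr") .
      FILTER(STRSTARTS(?label, "Bataille ")) .
    }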

Some basic explanations:

  • ?item wdt:P31/wdt:P279* wd:Q178561 .  looks for items that are battles or subclasses of battles, just to be sure I’m not making changes to some book called “Bataille de Perpète-les-Olivettes”…
  • On the next line, I query the labels for the items with ?item rdfs:label ?label . and filter to keep only those in French with FILTER(LANG(?label) = "fr") . As I need to use the label inside the query and not merely for display (as Harmonia Amanda just explained in her article), I cannot use the wikibase:label service, so I use the semantic web standard rdfs:label.
  • The last line is a FILTER, which keeps only the results that match the function inside it. Here, STRSTARTS checks whether ?label begins with "Bataille ".

As of the time I write this, running the query returns 3521 results. Far too many to fix by hand, and I know of no existing tool that would fix that for me. So, I guess it’s Python time!

The Python script

I love Python. I absolutely love Python. The language is great for putting together a useful app within minutes, easily readable (it’s basically English, in fact), not cluttered with gorram series of brackets or semicolons, and it generally has great libraries for the things I do the most: scraping webpages, parsing and sorting data, checking ISBNs ((I hope I’ll be able to write something about it sometime soon.)) and making websites. Oh, and making SPARQL queries, of course ((Plus, the examples in the official documentation are Firefly-based. Yes sir, Captain Tightpants.)).

Two snake charmers with a python and a couple of cobras.
Not to mention that the name of the language has a “snake charmer” side 😉

Preliminary thoughts

If you don’t know Python, this article is not the right place to learn it, but there are numerous resources available online ((For example, https://www.codecademy.com/learn/python or https://docs.python.org/3.5/tutorial/.)). Just make sure they are up-to-date and cover Python 3. The rest of this article assumes that you have a basic understanding of Python (indentation, variables, strings, lists, dictionaries, imports and “for” loops), and that Python 3 and pip are installed on your system.

Why Python 3? Because we’ll handle strings that come from Wikidata and are thus encoded in UTF-8, and Python 2 makes you jump through hoops to handle it. Plus, we are in 2016, for Belenos’ sake.

Why pip? Because we need a non-standard library to make SPARQL queries, called SPARQLWrapper, and the easiest way to install it is to use this command:
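
    pip3 install sparqlwrapper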

Now, let’s start scripting!

For a start, let’s just query the full list of the sieges ((I’ve fixed the battles in the meantime 😉 )):
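
A minimal sketch of the script, reconstructed from the walkthrough below (assuming Q188055 as the “siege” item, and https://query.wikidata.org/sparql as the endpoint URL):

    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = "https://query.wikidata.org/sparql"

    sparql = SPARQLWrapper(endpoint)

    # Same kind of query as before, for sieges instead of battles
    sparql.setQuery("""
    SELECT ?item ?label
    WHERE {
      ?item wdt:P31/wdt:P279* wd:Q188055 .
      ?item rdfs:label ?label .
      FILTER(LANG(?label) = "fr") .
      FILTER(STRSTARTS(?label, "Siège ")) .
    }
    """)

    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()

    print(results)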

That’s quite a bunch of lines, but what does this script do? As we’ll see, most of this will be included in every script that uses a SPARQL query.

  • First, we import two things from the SPARQLWrapper module: the SPARQLWrapper object itself and a JSON constant that it will use later (don’t worry, you won’t have to manipulate JSON files yourself).
  • Next, we create an “endpoint” variable, which contains the full URL of the SPARQL endpoint of Wikidata ((And not the web access to the endpoint, which is just “https://query.wikidata.org/”.)).
  • Next, we create a SPARQLWrapper object that will use this endpoint to make queries, and put it in a variable simply called “sparql”.
  • We apply the setQuery function to this variable, which is where we put the query we used earlier. Note that if you build the query string with Python’s format() function, you need to replace { and } with {{ and }}, as braces are the placeholder delimiters in format strings; in a plain string, they can stay as they are.
  • sparql.setReturnFormat(JSON) tells the script that what the endpoint returns is formatted in JSON.
  • results = sparql.query().convert() actually makes the query to the server and converts the response to a Python dictionary called “results”.
  • And for now, we simply print the result on screen, just to see what we get.

Let’s open a terminal and launch the script:
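
Assuming the script is saved as fix_sieges.py (a stand-in name), the run looks something like this, abridged and with a made-up label:

    $ python3 fix_sieges.py
    {'head': {'vars': ['item', 'label']},
     'results': {'bindings': [{'item': {'type': 'uri',
         'value': 'https://www.wikidata.org/entity/Q815196'},
       'label': {'type': 'literal', 'xml:lang': 'fr',
         'value': 'Siège de Saint-Pouilleux'}}, …]}}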

That’s a bunch of things, but we can see that it contains a dictionary with two entries:

  • “head”, which contains the name of the two variables returned by the query,
  • and “results”, which itself contains another dictionary with a “bindings” key, associated with a list of the actual results, each of them being a Python dictionary. Phew…

Let’s examine one of the results:
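
It looks something like this (same made-up label as above):

    {'item': {'type': 'uri', 'value': 'https://www.wikidata.org/entity/Q815196'},
     'label': {'type': 'literal', 'xml:lang': 'fr', 'value': 'Siège de Saint-Pouilleux'}}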

It is a dictionary that contains two keys (label and item), each of them having as its value another dictionary whose “value” key holds, this time, the actual value we want to get. Yay, finally!

Parsing the results

Let’s parse the “bindings” list with a Python “for” loop, so that we can extract the value:
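
    for result in results['results']['bindings']:
        label = result['label']['value']
        qid = result['item']['value'].split('/')[-1]
        # Print both values, just to check them
        print(qid, label)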

Let me explain the  qid = result['item']['value'].split('/')[-1]  line: as the item name is stored as a full URL (“https://www.wikidata.org/entity/Q815196” and not just “Q815196”), we need to split it on the ‘/’ character. For this, we use Python’s split() function, which transforms the string into a Python list containing this:
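
    ['https:', '', 'www.wikidata.org', 'entity', 'Q815196']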

We only want the last item in the list. In Python, that means the item with the index -1, hence the [-1] at the end of the line. We then store this in the qid variable.

Let’s launch the script:
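
It prints one line per item, along these lines (made-up label again):

    $ python3 fix_sieges.py
    Q815196 Siège de Saint-Pouilleux
    …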

Fixing the issue

We are nearly there! Now what we need is to replace that first proud capital “S” with a modest “s”:
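
    label = label[:1].lower() + label[1:]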

What is happening here? A Python string works like a list, so we take the part of the string between the beginning of the “label” string and the position after the first character (“label[:1]”) and force it to lower case (“.lower()”). We then concatenate it with the rest of the string (position 1 to the end, or “label[1:]”) and assign all this back to the “label” variable.

Last thing, print it in a format that is suitable for QuickStatements:
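
    print("{}\tLfr\t{}".format(qid, label))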

That first line seems barbaric? It’s in fact pretty straightforward: "{}\tLfr\t{}" is a string that contains a first placeholder for a variable (“{}”), then a tab character (“\t”), then the QuickStatements keyword for the French label (“Lfr”), then another tab and finally the second placeholder. Then, we use the format() function to replace the placeholders with the content of the “qid” and “label” variables. The final script should look like this:
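
The same reconstruction as above, with the check-print replaced by the fix and the QuickStatements output:

    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = "https://query.wikidata.org/sparql"

    sparql = SPARQLWrapper(endpoint)

    sparql.setQuery("""
    SELECT ?item ?label
    WHERE {
      ?item wdt:P31/wdt:P279* wd:Q188055 .
      ?item rdfs:label ?label .
      FILTER(LANG(?label) = "fr") .
      FILTER(STRSTARTS(?label, "Siège ")) .
    }
    """)

    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()

    for result in results['results']['bindings']:
        label = result['label']['value']
        qid = result['item']['value'].split('/')[-1]
        # Lower-case the first letter and output a QuickStatements line
        label = label[:1].lower() + label[1:]
        print("{}\tLfr\t{}".format(qid, label))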

Let’s run it:
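
With made-up values again, the output is one tab-separated line per item:

    $ python3 fix_sieges.py
    Q815196	Lfr	siège de Saint-Pouilleux
    …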

Yay! All we have to do now is to copy and paste the result to QuickStatements and we are done.

Title picture: Photograph of typefaces by Andreas Praefcke (public domain)

Categories: Wikidata

Sunday Query: all surnames marked as disambiguation pages, with an English Wikipedia link and with “(surname)” in their English label

It’s Sunday again! Time for the queries! Last week I showed you the basics of SPARQL; this week I wanted to show you how we could use SPARQL to do maintenance work. I assume you now understand the use of PREFIX, SELECT, WHERE.

I have been a member of the WikiProject:Names for years. When I’m not working on Broadway and the Royal Academy of Dramatic Art archives, ((Yes, I really want you to read this one.)) I am one of the people who ensure that “given name:Christopher (Iowa)” is transformed back to “given name:Christopher (given name)”. Over the last few weeks I’ve corrected thousands of wrong uses of the given name/family name properties, and for this work, I used dozens of SPARQL queries. I thought it could be interesting to show how I used SPARQL to create a list of strictly identical errors that I could then treat automatically.

What do we search?

If you read the constraint violation reports, you’ll see that the most frequent error for the property “family name” (P734) is the use of a disambiguation page as a value instead of a family name. We can do a query like this:
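
(A reconstruction; the original query is behind the link below.)

    SELECT ?person ?name
    WHERE {
      ?person wdt:P734 ?name .
      ?name wdt:P31 wd:Q4167410 .
    }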

link to the query. The results are in the thousands. Sigh.

But then we find something more interesting: there are entities which are both a disambiguation page and a family name. What?! That’s ontologically wrong. Using the wrong value as a family name is human error; but an entity can’t be both a specific type of Wikimedia page and a family name. It’s like saying a person could just as well be a book. Ontologically absurd. So all items with both P31 values need to be corrected. How many are there?
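
(Reconstructed the same way:)

    SELECT ?item
    WHERE {
      ?item wdt:P31 wd:Q4167410 .
      ?item wdt:P31 wd:Q101352 .
    }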

link to the query.
Several thousand again. Actually, there are more entities which are both a disambiguation page and a family name than there are people using disambiguation pages as family names. This means there are family name/disambiguation pages in the database which aren’t used. They’re still wrong, but it doesn’t show in the constraint violation reports.

If we explore, we see that there are different cases out there: some of the family name/disambiguation pages are in reality disambiguation pages, some are family names, some are both (they link to articles on different Wikipedias, some about a disambiguation page and some about a family name; these need to be separated). Too many different possibilities: we can’t automate the correction. Well… maybe we can.

Narrowing our search

If we can’t treat all disambiguation/family name pages in one go, maybe we can ask a more precise question. In our long list of violations, I asked for the English labels and found some unbelievable ones. There were items named “Poe (surname)”. As disambiguation pages. That’s a wrong use of labels, which shouldn’t include clarifications about the subject in brackets (that’s what the description is for); but if these items are about a surname, they shouldn’t be disambiguation pages either! So, so wrong.

Querying labels

But still, there is good news! We can isolate these entries. For that, we’ll have to query not the relations between items but the labels of the items themselves. Until now, we had used the SERVICE wikibase:label workaround, a tool which only exists on the Wikidata endpoint, because it was really easy and we only wanted human-readable results, not really to query labels. But now that we want to query them, the workaround isn’t enough; we’ll need to do it the real SPARQL way, using rdfs.

Our question now is: can I list all items which are both family names and disambiguation pages, whose English label contains “(surname)”?
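
(A reconstruction, assuming CONTAINS for the label test:)

    SELECT DISTINCT ?item ?label
    WHERE {
      ?item wdt:P31 wd:Q4167410 .
      ?item wdt:P31 wd:Q101352 .
      ?item rdfs:label ?label .
      FILTER(LANG(?label) = "en") .
      FILTER(CONTAINS(?label, "(surname)")) .
    }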

link to the query. We had several hundred results. ((Then. We had several hundred results then. I’m happy to say it isn’t true now.)) You should observe the changes I made in the SELECT DISTINCT, as I don’t use the SERVICE wikibase:label workaround.

Querying sitelinks

Can we automate the correction now? Well… no. There are still problems. In this list, there are items which have links to several Wikipedias: the English one about the surname, and the other(s) about a disambiguation page. Worse, there are items which don’t have an English interwiki any longer, because it was deleted or linked to another item (like the “real” family name item), and the wrong English label persisted. So maybe we can filter our list down to only items with a link to the English Wikipedia. For this, we’ll use schema.
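
(Reconstructed with the schema:about/schema:isPartOf pattern, which is how sitelinks are exposed on the Wikidata endpoint:)

    SELECT DISTINCT ?item ?label
    WHERE {
      ?item wdt:P31 wd:Q4167410 .
      ?item wdt:P31 wd:Q101352 .
      ?item rdfs:label ?label .
      FILTER(LANG(?label) = "en") .
      FILTER(CONTAINS(?label, "(surname)")) .
      ?WParticle schema:about ?item ;
                 schema:isPartOf <https://en.wikipedia.org/> .
    }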

link to the query. Well, that’s better! But our problem is still here: if an item has several sitelinks, maybe the other sitelink(s) are not about the family name. So we want the items with an English interwiki and only an English interwiki. Like this:
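
(Reconstructed, counting the sitelinks with GROUP BY and HAVING:)

    SELECT DISTINCT ?item ?label
    WHERE {
      ?item wdt:P31 wd:Q4167410 .
      ?item wdt:P31 wd:Q101352 .
      ?item rdfs:label ?label .
      FILTER(LANG(?label) = "en") .
      FILTER(CONTAINS(?label, "(surname)")) .
      ?sitelink schema:about ?item .
      ?WParticle schema:about ?item ;
                 schema:isPartOf <https://en.wikipedia.org/> .
    }
    GROUP BY ?item ?label
    HAVING(COUNT(DISTINCT ?sitelink) = 1)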

link to the query.

Several things: we separated ?sitelink and ?WParticle. We use ?sitelink to count the number of sitelinks, and ?WParticle to pin down the particular sitelink we want (the link to the English Wikipedia). Note that we need to use GROUP BY, like last week.

Polishing the query

Just to be on the safe side (we are never safe enough before automating corrections), we’ll also check that all the items on our list are only family name/disambiguation pages, and not also marked as a location or something equally strange. So we query that they have only two P31 (instance of) values, these two being defined as Q101352 (family name) and Q4167410 (disambiguation page).
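
(Reconstructed, adding a ?class variable so we can count the P31 values:)

    SELECT DISTINCT ?item ?label
    WHERE {
      ?item wdt:P31 wd:Q4167410 .
      ?item wdt:P31 wd:Q101352 .
      ?item wdt:P31 ?class .
      ?item rdfs:label ?label .
      FILTER(LANG(?label) = "en") .
      FILTER(CONTAINS(?label, "(surname)")) .
      ?sitelink schema:about ?item .
      ?WParticle schema:about ?item ;
                 schema:isPartOf <https://en.wikipedia.org/> .
    }
    GROUP BY ?item ?label
    HAVING(COUNT(DISTINCT ?sitelink) = 1 && COUNT(DISTINCT ?class) = 2)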

link to the query.

It should give you a beautiful “no matching records found”. Yesterday, it gave me 175 items which I knew I could correct automatically. Which I have done, with a Python script made by Ash_Crow. If you are good, he’ll make a #MondayScript in response to this #SundayQuery!

(Main picture: Name List of Abhiseka – Public Domain, photograph done by Kūkai.)

Categories: Wikidata

Sunday Query: The 200 Oldest Living French Actresses

Hello, here’s Harmonia Amanda again (I think Ash_Crow just agreed to let me squat his blog indefinitely). This article shouldn’t be too long, for once. ((Well, nothing compared to my previous one, which all of you should have read.)) It started as a hashtag on Twitter, #SundayQuery, where I wrote SPARQL queries for people who couldn’t find how to ask their question. So this article, and the next ones if I keep this up, are basically a kind of “how to translate a question into SPARQL, step by step” tutorial.

The question this week was asked by Jean-No: who are the oldest French actresses still alive?

To begin: the endpoint and PREFIX…

SPARQL is the language you use to query a semantic database. For that, you use an endpoint, a service that accepts SPARQL queries and returns results. There are many SPARQL endpoints around, but we will be using the Wikidata endpoint.

A semantic base is made up of triples of information: subject, predicate, object (in Wikidata, we usually say item, property, value, but it’s the exact same thing). As querying with full URIs all the time would be really tedious, SPARQL needs PREFIX.
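
For example, the usual set:

    PREFIX wd: <http://www.wikidata.org/entity/>
    PREFIX wdt: <http://www.wikidata.org/prop/direct/>
    PREFIX p: <http://www.wikidata.org/prop/>
    PREFIX ps: <http://www.wikidata.org/prop/statement/>
    PREFIX pq: <http://www.wikidata.org/prop/qualifier/>
    PREFIX wikibase: <http://wikiba.se/ontology#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX bd: <http://www.bigdata.com/rdf#>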

These are the standard prefixes used to query Wikidata. If used, they should be stated at the top of the query. You can add other prefixes as needed, too. Actually, the Wikidata endpoint was created specifically to query Wikidata, so all these prefixes are already declared, even if you don’t see them. But just because they aren’t visible doesn’t mean they aren’t there, so always remember: SPARQL NEEDS PREFIX. There.

All French women

The first thing to declare, after the prefixes, is what we want as results. We can use the command SELECT or the command SELECT DISTINCT (this one clears off duplicates), and then list the variables.

As we are looking for humans, we’ll call this variable “person” (but we could choose anything we want).

We will then define the conditions our variable has to match, with WHERE. We are seeking the oldest French actresses still alive. We need to cut that into little understandable bits. So the first step is: we are querying for human beings (and not fictional actresses). Human beings are all items which have the property “instance of” (P31) with the value “human” (Q5).

So:
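
    SELECT ?person
    WHERE {
      ?person wdt:P31 wd:Q5 .
    }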

This query will return all humans in the database. ((Well, if it doesn’t time out through the sheer number of results.)) But we don’t want all humans, we want only the female ones. So we want humans who also answer to “sex or gender” (P21) with the value “female” (Q6581072).
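
    SELECT ?person
    WHERE {
      ?person wdt:P31 wd:Q5 .
      ?person wdt:P21 wd:Q6581072 .
    }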

And we want them French as well! So we add “country of citizenship” (P27) with the value “France” (Q142).
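
    SELECT ?person
    WHERE {
      ?person wdt:P31 wd:Q5 .
      ?person wdt:P21 wd:Q6581072 .
      ?person wdt:P27 wd:Q142 .
    }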

Here it is, all French women. We could also write the query this way:
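
(Presumably with the semicolon shorthand, which avoids repeating the subject:)

    SELECT ?person
    WHERE {
      ?person wdt:P31 wd:Q5 ;
              wdt:P21 wd:Q6581072 ;
              wdt:P27 wd:Q142 .
    }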

It’s exactly the same query.

All French actresses

Well, that seems like a good beginning, but we don’t want all women, we only want actresses. It could be as simple as “occupation” (P106) “actor” (Q33999), but it’s not. In reality, Q33999 doesn’t cover all actors and actresses: it’s a class. Subclasses like “stage actor”, “television actor” or “film actor” all have “subclass of” (P279) “actor” (Q33999). We don’t only want the humans with occupation:actor; we also want those with occupation:stage actor, and so on.

So we need to introduce another variable, which I’ll call ?occupation.
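
(Reconstructed; any occupation that is “actor” or a subclass of it:)

    SELECT DISTINCT ?person
    WHERE {
      ?person wdt:P31 wd:Q5 ;
              wdt:P21 wd:Q6581072 ;
              wdt:P27 wd:Q142 ;
              wdt:P106 ?occupation .
      ?occupation wdt:P279* wd:Q33999 .
    }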

Actually, if we really wanted to avoid the ?occupation variable, we could have written:
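
    ?person wdt:P106/wdt:P279* wd:Q33999 .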

It’s the same thing, but I find it less clear for beginners.

Still alive

So now we need to filter by age. The first step is to ensure they have a birth date (P569). As we don’t care (for now) what this birth date is, we’ll introduce a new variable instead of setting a value.
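
    SELECT DISTINCT ?person
    WHERE {
      ?person wdt:P31 wd:Q5 ;
              wdt:P21 wd:Q6581072 ;
              wdt:P27 wd:Q142 ;
              wdt:P106/wdt:P279* wd:Q33999 ;
              wdt:P569 ?birthDate .
    }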

Now we want the French actresses still alive, which we’ll translate to “without a ‘date of death’ (P570)”. For that, we’ll use a filter:
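
(Presumably FILTER NOT EXISTS, added inside the WHERE block:)

    FILTER NOT EXISTS { ?person wdt:P570 ?deathDate . }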

Here it is, all French actresses still alive!

The Oldest

To find the oldest, we’ll need to order our results. So after the WHERE section of our query, we’ll add another request: now that you have found my results, SPARQL, can you order them the way I want? Logically, this is called ORDER BY. We can “order by” any variable we want. We could order by ?person, but as it’s the only variable we have selected so far, that wouldn’t tell us much. To order by another variable, we first need to select it.

The obvious way would be to order by ?birthDate. The thing is, Wikidata sometimes has more than one birth date for a person, because of conflicting sources, which translates into the same people appearing twice. So we’ll group the people by their Qid (using GROUP BY), so that the duplicates now exist within a group. Then we use SAMPLE (in our SELECT) to take only the first birth date found in each group… and then ORDER BY it:
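
(Reconstructed; ?date is an assumed name for the sampled birth date:)

    SELECT ?person (SAMPLE(?birthDate) AS ?date)
    WHERE {
      ?person wdt:P31 wd:Q5 ;
              wdt:P21 wd:Q6581072 ;
              wdt:P27 wd:Q142 ;
              wdt:P106/wdt:P279* wd:Q33999 ;
              wdt:P569 ?birthDate .
      FILTER NOT EXISTS { ?person wdt:P570 ?deathDate . }
    }
    GROUP BY ?person
    ORDER BY ?date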

This gives us all French actresses ordered by date of birth, the oldest first. We can already see that we have problems with the data: actresses whose date of birth we don’t know (with “unknown value”) and actresses manifestly dead for centuries but whose date of death we don’t know. But still! Here is our query, and we have answered Jean-No’s question as best we could.

With labels

Well, we certainly answered the question, but this big list of Qs and numbers isn’t very human-readable. We should ask to have the labels too! We could do this the proper SPARQL way, using rdfs and such, but we are querying Wikidata on the Wikidata endpoint, so we’ll use the local tool instead.

We add to the query:
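
    SERVICE wikibase:label {
      bd:serviceParam wikibase:language "fr,en" .
    }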

which means we ask to have the labels in French (as we are querying French people) and, if a label doesn’t exist in French, then in English. But just adding that doesn’t work: we need to SELECT it too! (And to add it to the GROUP BY.) So:
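
(?personLabel is the variable the label service binds automatically:)

    SELECT ?person ?personLabel (SAMPLE(?birthDate) AS ?date)
    WHERE {
      ?person wdt:P31 wd:Q5 ;
              wdt:P21 wd:Q6581072 ;
              wdt:P27 wd:Q142 ;
              wdt:P106/wdt:P279* wd:Q33999 ;
              wdt:P569 ?birthDate .
      FILTER NOT EXISTS { ?person wdt:P570 ?deathDate . }
      SERVICE wikibase:label {
        bd:serviceParam wikibase:language "fr,en" .
      }
    }
    GROUP BY ?person ?personLabel
    ORDER BY ?date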

Tada! It’s much more understandable now!

Make it pretty

I don’t want all living French actresses, children included; I want the oldest. So I add a LIMIT to the results. ((And it will be quicker that way.))
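
(At the very end of the query, matching the 200 of the title:)

    ORDER BY ?date
    LIMIT 200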

And why should I settle for a table of identifiers and names when I could have the results as a timeline? We ask for that at the top of the query, even before the SELECT: ((Beware that this doesn’t work on all SPARQL endpoints out there, even if it exists on the Wikidata one.))
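
    #defaultView:Timeline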

Wait, wait! Can I have pictures too? But only if they have a picture; I still want the results if they don’t have one (and I want only one picture if they happen to have several, so I use SAMPLE again). YES WE CAN:
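
(The full reconstruction; “image” is P18, and ?image is an assumed variable name:)

    #defaultView:Timeline
    SELECT ?person ?personLabel (SAMPLE(?birthDate) AS ?date) (SAMPLE(?picture) AS ?image)
    WHERE {
      ?person wdt:P31 wd:Q5 ;
              wdt:P21 wd:Q6581072 ;
              wdt:P27 wd:Q142 ;
              wdt:P106/wdt:P279* wd:Q33999 ;
              wdt:P569 ?birthDate .
      FILTER NOT EXISTS { ?person wdt:P570 ?deathDate . }
      OPTIONAL { ?person wdt:P18 ?picture . }
      SERVICE wikibase:label {
        bd:serviceParam wikibase:language "fr,en" .
      }
    }
    GROUP BY ?person ?personLabel
    ORDER BY ?date
    LIMIT 200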

Here is the link to the results. I hope you had fun with this SundayQuery!

(Main picture: Mademoiselle Ambroisine – created between 1832 and 1836 by an artist whose name is unreadable – Public Domain.)