OpenStreetMap Ireland & MapWarper.net completed all 675 Sheets for Townlands.ie

The OpenStreetMap Ireland community and http://www.townlands.ie/ have rectified all 675 sheets on Mapwarper.net, leading towards the goal of mapping all townlands in OpenStreetMap. Awesome stuff – it’s a very cool project.

They have also just passed the milestone of 47,000 townlands added!


My fun tube sign parody site closed due to TfL Lawyers

A couple of years ago I made an open source parody image generator / service information sign maker, and today it’s been shut down due to lawyers from Transport for London (TfL).

I made it in a weekend, as during the week I was doing my civic duty alongside some good legal professionals on a period of Jury Service. People made images to share with their friends, make jokes, announce anniversaries, quote prayers, recite poetry, advertise events, leave personal messages, write inspirational comments and so on. I have not seen any offensive images that people made with it, either on the site or on Twitter. When it launched it got a fair bit of praise and positive coverage from many places including BBC America, ITV, The Londonist, The Atlantic Cities, The Guardian, The Next Web, B3ta.com, etc.

Example of a real TfL funny London Underground Service Information Sign (from http://weknowmemes.com/wp-content/uploads/2012/09/apple-maps-london-tube-sign.jpg)

This Thursday (10 Dec 2015) I got an email with a scanned letter from a lawyer at Transport for London (TfL), a UK public transport authority. Here it is with names, emails and addresses redacted.

[Scanned letter from TfL’s lawyers]

So, I’ve destroyed the site, deleted the code and emailed them my confirmation of that. I decided to do it as soon as I was able (within about 24 hours of the request) as I didn’t want the distraction and hassle, so I can get back to work.

As of last Thursday the site tubesign.herokuapp.com is offline and I cannot put it back online. The Ruby code, misc files and the CC-BY-SA images on Heroku where it was hosted have all been deleted. My repository on GitHub has also been deleted, although others may have copied their own forks of the MIT licensed code. It was only a few lines of unspectacular Ruby code anyhow.

Some people have speculated that this may also have been due to the candidate for Mayor of London and MP for Tooting, Mr Sadiq Khan, tweeting one of the images someone made showing the hashtag “#youaintnomuslimbruv” – and then dozens of people replying to say it was made using the parody website. Perhaps we will never know; it doesn’t really matter. It appears that whilst a Labour MP, ‘Mr Khan is no Corbynite leftwinger‘, but one would imagine that he might stick up for RMT Union members against TfL management. You too should support the staff during their industrial actions – it was these same TfL bosses who issued this takedown.

I was surprised to see that letter in my inbox and disappointed that TfL were not willing to be more civil and reasonable in their approach. However, it’s not the first time TfL have acted in this way – there was a similar case in 2006 about a fan website of tube map variations; I remember it going around the blogs at the time.

Big institutions struggle and work slowly with technology, but is it just me, or is it a bit surprising to see that they have made no progress in almost ten years?

Now back to making some better transport maps.

Updates (last updated 16 Dec)


A Digital Gazetteer of Places for the Library of Congress and the NYPL

I’m proud to tell you that the project I worked on last year with Topomancy for the Library of Congress and the New York Public Library has just been released to the Internet. It’s an open source, fast, temporally enabled, versioned gazetteer using open data. It also works as a platform for a fully featured, function-laden historical gazetteer.

You can check out the official release at the Library of Congress’s GitHub page, and find issues, documentation and development branches on Topomancy’s Gazetteer project site.

Here is an introduction to the project, giving an overview, discussing the challenges and listing the features of this software application. Enjoy!

DEMO: Head over to http://loc.gazetteer.us/ for the Digital Gazetteer – a more global gazetteer – and to http://nypl.gazetteer.us/ for the NYPL’s NYC-focused historic gazetteer. If you want to try out the conflation, remediation and administrative roles, let us know at team@topomancy.com.

Introduction, Overview, Features

A gazetteer is a geographic dictionary, sometimes found as an index to an atlas. It’s a geographic search engine for named places in the world. This application is a temporal gazetteer with data source remediation, relations between places and full revision history, and it works with many different data sources. It’s fast, written in Python and uses ElasticSearch as a database. The application was primarily written as a Digital Gazetteer for the Library of Congress’s search and bibliographic geographic remediation needs, and was also developed for the New York Public Library’s Chronology of Place project. It is currently being used in production by the Library of Congress to augment their search services. The software is MIT licensed.

Fig 1. Library of Congress Gazetteer

Architecture

* JSON API
* Backbone.js frontend
* Django
* ElasticSearch as a revision-enabled document store
* PostGIS

Data Model

* Simple data model
* Core properties: name, type, geometry
* Alternate names (incl. language, local, colloquial etc.)
* Administrative hierarchy
* Timeframe
* Relations between places (conflation, geographic, temporal, etc.)
* Edit history, versioning, rollbacks, reverts

Features

Search

* Text search (with wildcard, AND and OR support – Lucene query syntax; see the example after this list)
* Temporal search
* Search according to data source and data type
* Search within a geographic bounding box
* Search within the geography of another Place.
* GeoJSON and CSV results
* Search can consider alternate names, administrative boundaries and address details
* API search of historical tile layers
* Server side dynamic simplification option for complex polygon results
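
To make these search options concrete, here is a hypothetical query against the demo instance. The endpoint path and parameter names here are my assumptions for illustration, not the documented API – check the project’s GitHub wiki for the real interface:

require 'net/http'
require 'json'

# Hypothetical example: the endpoint path and parameter names below are
# assumptions, not the documented API.
uri = URI('http://loc.gazetteer.us/1.0/place/search.json')
uri.query = URI.encode_www_form(
  'q'          => '"New Amsterdam" OR "New York"', # Lucene query syntax
  'bbox'       => '-74.3,40.5,-73.7,40.9',         # bounding box (W,S,E,N)
  'start_date' => '1600-01-01',                    # temporal search bounds
  'end_date'   => '1700-01-01',
  'format'     => 'geojson'                        # GeoJSON results
)
results = JSON.parse(Net::HTTP.get(uri))
results['features'].each { |f| puts f['properties']['name'] }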

Fig 2. Gazetteer Text Search

Fig 3. Temporal Search

Fig 4. Geographic Search

Place

* Place has alternate names and administrative boundaries
* Similar features search (similar names, distance, type etc.)
* Temporal data type with fuzzy upper and lower bounds
* Display of any associated source raster tile layer (e.g. historical map)
* Full vector editing interface for editing / creation
* Creation of composite places from the union of existing places
* Full revision history of changes, rollback and rollforward

Fig 5. Alternate Names

Fig 6. Similar Names

Fig 7. Vector Editing

Relations between places

These are:
* Conflates (A is the same place as B)
* Contains (A contains B spatially)
* Replaces (A is the same as B but over time has changed status, temporal)
* Subsumes (B is incorporated into A and loses independent existence, temporal)
* Comprises (B comprises A if A contains B, along with C, D and E)

We will delve into these relationship concepts later.

Site Admin

* GeoDjango administrative pages
* Administrative Boundary operations
* Batch CSV import of places for create / update
* Edit feature code definitions
* Edit groups and users and roles etc
* Edit layers (tile layers optionally shown for some features)
* Add / Edit data origin definitions

Fig 8. Feature code editing

Fig 9. Django origin editing

Background, Requirements and Challenges

Library of Congress and Bibliographic Remediation

The Library has lots of bibliographic metadata and lots of geographic information, much of it historical, almost all of it unstructured. For example, they have lots of metadata about books: where each was published, the topics, subjects etc. They want to improve the quality of the geographic information associated with the metadata, and to augment site search.

So the Library needs an authoritative list of places. The Library fully understands the need for authoritative lists – they have authority files for things, ideas, places, people, files etc. – but no centralised listing of them, and where there are geographic records there may be no actual geospatial information attached to them.

Initial Challenges

So we start with a simple data model, where a named location on the Earth’s surface has a name, a type and a geometry. All very simple, right? But actually it’s a complex problem. Take the name of a place: which name to use? What happens if a place has multiple names, and what happens if multiple records describe the same place? Taxonomies are also a large concern; for example, establishing a set schema for every different type of feature on the Earth is not trivial!

What’s the geometry of a place? Is it a point, is it a polygon, and at what scale? For administrative datasets, it’s often impossible to get a good detailed global administrative dataset – often the data simply isn’t there. Timeframes and temporal gazetteers are another large area for research (see OpenHistoricalMap.org if this intrigues you!), and the ways we describe places in time are very varied, for example “in the 1880s”, “mid 19th Century” or “1 May 2012 at 3pm”. What about places which are vague or composed of other places, like “The South” (of the US) – how would a gazetteer handle these? And the relationships between places are another very varied research topic.

Approach

So we think the project has addressed these challenges. For names, the system accepts multiple additional alternate names, and conflation enables fixing multiple records together so that search shows the correct authoritative results. The Digital Gazetteer allows places to have any type of geometry (e.g. point, line, polygon); all the database needs is a centroid to make search work. For temporal support, places have datestamps for start and end dates, but crucially there are in addition fuzzy extents specified in days. This enables a place, for example, to have a fuzzy start date (sometime in the year 1911) and a clear end date (23 May 1945) – see the sketch after the figure below. For “The US South” example, composite places were created: the system generates the union of the component places and makes a new one. The component places still exist – they just have a relationship with their siblings and with their new parent composite place. This brings us to how the Digital Gazetteer handles relations between places.

Fig 10. Composite Place
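
As a concrete sketch of that temporal model – the field names here are my own invention, not necessarily the actual schema – a place record might look something like this:

# Illustrative only: field names are assumptions, not the project's schema.
# A place with a fuzzy start (sometime during 1911) and an exact end date.
place = {
  name:      "Council Offices",
  type:      "building",
  geometry:  { type: "Point", coordinates: [-0.1372, 50.8225] },
  timeframe: {
    start:       "1911-01-01",
    start_range: 365,          # fuzziness in days: any time during 1911
    end:         "1945-05-23",
    end_range:   0             # exact: the end date is known precisely
  }
}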

Relationships

Let’s look in a bit more detail at the relationship model. Basically, the relationships between places help in conflation (reducing duplicate records) and in increasing search accuracy. The five relationships are as follows:

* Conflates
* Contains
* Replaces
* Subsumes
* Comprises

Conflates

This is the most common relationship between records initially. It is effectively an ontological statement that the place in one record is the same as that described in another record – that entries A and B are the same place. It’s a spatial or a name type of relation. For example, if we had 5 records for the Statue of Liberty and 4 of them were conflated to the remaining one, then when you searched for the statue you’d get that one record, with a link to each of the other four. Conflates hides the conflated records from search results.

Contains

Contains is a geographical relationship. Quite simply, place A contains place B. So for example, the town of Brighton would contain St. Matthew’s Church.

Replaces

Replaces is mainly a temporal relation, where one place replaces another if the other place has significantly changed status, name, type or boundary. For example, the building representing the Council Offices of a town from 1830-1967 is replaced by a bank.

Subsumes

Subsumes is mainly a temporal relation, where a place B becomes incorporated into another place A and loses its independent existence. For example, the ward of Ifield, which existed from 1780 to 1890, becomes subsumed into the ward of Crawley.

Comprises

Comprises is primarily a spatial or name relation. Place A comprises place B, along with places C, D and E. This relation creates composite places, which inherit the geometries of their component places. For example, “The US South” can be considered a composite place, comprised of Virginia, Alabama etc. Virginia in this case comprises “The US South”, and the composite place “The US South” has the union of the geometry of all the places it is comprised by.

Data Sources

* OpenStreetMap (OSM)
* Geonames
* US Census TIGER/Line
* Natural Earth
* Historical Marker Database (HMDB)
* National Historical GIS (NHGIS)
* National Register of Historic Places Database (NRHP)
* Library of Congress Authority Records

Further Challenges

Automatic Conflation

There remain two main areas for future development and research: automatic conflation and search ranking. Since there are multiple datasets, there will of course be multiple records for the same place. The challenge is how to automatically find the same place from similar records by some kind of similarity measure – for example, distance geographically, and distance in terms of name and place type. It is tricky to get right, but the system would be able to undo any of the robot’s mistakes. Further information about this topic can be found on the GitHub wiki: https://github.com/topomancy/gazetteer/wiki/Conflation
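
To illustrate the idea – and this is only a naive sketch, not the project’s actual algorithm; the trigram approach, thresholds and weights are all invented – a conflation candidate check might combine name, distance and type similarity like this:

require 'set'

# Naive illustration only: thresholds and the trigram approach are
# assumptions, not the gazetteer's actual conflation algorithm.

# Jaccard similarity over character trigrams as a cheap name distance.
def trigrams(s)
  s = s.downcase.strip
  (0..s.length - 3).map { |i| s[i, 3] }.to_set
end

def name_similarity(a, b)
  ta, tb = trigrams(a), trigrams(b)
  return 0.0 if ta.empty? || tb.empty?
  (ta & tb).size.to_f / (ta | tb).size
end

# Haversine distance in km between two [lat, lon] centroids.
def distance_km(p1, p2)
  rad = Math::PI / 180
  lat1, lon1 = p1.map { |x| x * rad }
  lat2, lon2 = p2.map { |x| x * rad }
  a = Math.sin((lat2 - lat1) / 2)**2 +
      Math.cos(lat1) * Math.cos(lat2) * Math.sin((lon2 - lon1) / 2)**2
  6371 * 2 * Math.asin(Math.sqrt(a))
end

# Two records are conflation candidates if their names are similar,
# their centroids are close, and their feature types match.
def conflation_candidate?(a, b)
  name_similarity(a[:name], b[:name]) > 0.6 &&
    distance_km(a[:centroid], b[:centroid]) < 5 &&
    a[:type] == b[:type]
end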

Search Ranking

By default the gazetteer uses full text search, which also takes into account alternate names and administrative boundaries, but there is a need to float the more relevant places up the search results. We can also sort by distance from the search centre when searching within geographic bounds, which helps find similar places for conflation. We could probably look at weighting results based on place type, population and area, although population and area may not be available for many urban areas of the world. One of the most promising areas of research is using Wikipedia request logs as a proxy for importance – places could rank higher if they are viewed on Wikipedia more than other places.
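
As a rough sketch of that kind of weighting (the formula and weights here are entirely invented for illustration, not the gazetteer’s actual ranking):

# Invented weights, purely illustrative – not the gazetteer's actual ranking.
def boosted_score(text_score, population, wikipedia_views)
  text_score * (1.0 +
    0.2 * Math.log10(1 + population.to_f) +      # bigger places float up
    0.3 * Math.log10(1 + wikipedia_views.to_f))  # frequently viewed places float up
end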

Further Issues

Some other issues which I haven’t got space to go into here include: synchronising changes up- and downstream to and from the various services and datasets; the licensing of the datasets, especially if they are being combined; and what level of participation in the conflation and remediation steps a gazetteer should have, which depends on where the gazetteer is based and who it is being used for.

NYPL Chronology Of Place

I mentioned at the beginning of the post that the New York Public Library (NYPL) was also involved with the development of the Gazetteer. That project was called The Chronology of Place and, as the name suggests, it is more temporal in nature. But it’s also more focused geographically: whereas the LoC are interested in the US and the world as a whole, the NYPL’s main focus is the City of New York. They wanted to dive deep into each building of the city, exploring the history and geography of buildings, streets and neighbourhoods.

Fig 11. NYPL Chronology of Place

Fig 11. NYPL Chronology of Place

Thus the level of detail was more fine grained, which is reflected in some custom default cartography in the web application client. A nondescript building in a street in a city, for example, is not usually considered a “place” worthy of a global gazetteer, but for the NYPL each building was significant. Also, the NYPL has extensive access to historical maps via the NYPL Map Warper, which Topomancy developed for them, and around a hundred thousand digitized vector buildings from these historical map atlases. This data, along with data from the city, was added to the system to augment the results. Additional data sources include the Census’s historical township boundary datasets, NYC Landmarks Preservation Commission landmarks and NYC building footprints.

There were two additional features added to the application for the NYPL’s Chronology of Place. The first was expanding the data model to include street addresses, so that a building with no name can be used, and the second was displaying raster tile layers (often from historical maps) for specific features. Thus, the building features which were digitized from the historical maps can be viewed alongside the source raster map they came from.

Fig 12. Custom/Historical layers shown

Fig 12. Custom/Historical layers shown

A Web Maps Primer using MapWarper.net (via NYPL Blog)

Mauricio from the innovative NYPL Labs has just published an extensive tutorial on how to use MapWarper.net with GeoJSON, MapboxJS and JSFiddle to create your own historical web map – as he says, it is “a primer on working with various free web mapping tools so you can make your own awesome maps.” The end result is worth checking out.

In the tutorial the following steps are included:

  1. geo-referencing the scanned map so that web tiles can be generated
  2. generating GeoJSON data to be overlaid
  3. creating a custom base map (to serve as reference/present day)
  4. integrating all assets in an interactive web page

It’s a very detailed introduction to a wide range of new, free and open geo tools on the web, and I cannot recommend it highly enough! It’s also great to see mapwarper.net being used in this way!

Devise Omniauth OAuth Strategy for MediaWiki (Wikipedia, WikiMedia Commons)

Authentication of MediaWiki users with a Rails Application using Devise and Omniauth

Wikimaps is a Wikimedia Commons project to georeference/georectify historical maps – read the Wikimaps blog here. It uses a customised version of the Mapwarper open source map georectification software, as seen on http://mapwarper.net, adapted to speak with the Commons infrastructure and running on the Wikimedia Foundation’s Labs servers. We needed a way to allow Commons users to log in easily, and so I developed the omniauth-mediawiki strategy gem so that your Ruby applications can authenticate on Wikimedia wikis, like Wikipedia.org and Wikimedia Commons.

[Screenshot: Wikimaps Warper login options]

The Wikimaps Warper application uses Devise, which works very nicely with OmniAuth. The above image shows traditional login with username and password and, using OmniAuth, login via Wikimedia Commons, GitHub and OpenStreetMap.

After clicking the Wikimedia Commons button the user is presented with the wiki’s OAuth authorization page.

It may not be that pretty, but when the user allows the connection they are redirected back to our app and logged in.

This library used the omniauth-osm library as an initial framework to build upon.

The code is on GitHub here: https://github.com/timwaters/omniauth-mediawiki

The gem on RubyGems is here: https://rubygems.org/gems/omniauth-mediawiki

And you can install it by including it in your Gemfile or by doing:

gem install omniauth-mediawiki

Create new registration

The mediawiki.org registration page is where you create an OAuth consumer registration for your application. You can specify all Wikimedia wikis or a specific one to work with. Registration creates a key and secret which work for your own user immediately, so you can start developing straight away, although currently a wiki admin has to approve each registration before other wiki users can use it. Hopefully this will change as more applications move away from HTTP Basic to more secure authentication and authorization strategies in the future!

Usage

Usage is as per any other OmniAuth 1.0 strategy. So, assuming you’re using Rails, you need to add the strategy to your `Gemfile` alongside omniauth:

gem 'omniauth'
gem 'omniauth-mediawiki'

Once these are in, you need to add the following to your `config/initializers/omniauth.rb`:

Rails.application.config.middleware.use OmniAuth::Builder do
  provider :mediawiki, "consumer_key", "consumer_secret"
end

If you are using Devise, this is how it looks in your `config/initializers/devise.rb`:

config.omniauth :mediawiki, "consumer_key", "consumer_secret", 
    {:client_options => {:site => 'http://commons.wikimedia.org' }}

If you would like to use this plugin against a particular wiki, you can use the environment variable WIKI_AUTH_SITE to set the server to connect to. Alternatively, you can pass the site as a client_options entry to the omniauth config, as seen above. If no site is specified, the http://www.mediawiki.org wiki will be used.

Notes

In general, see the pages around https://www.mediawiki.org/wiki/OAuth/For_Developers for more information.

When registering a new OAuth consumer you need to specify the callback URL properly, e.g. for development:

http://localhost:3000/u/auth/mediawiki/callback
http://localhost:3000/users/auth/mediawiki/callback

This is different from many other OAuth authentication providers, which allow consumer applications to specify what the callback should be. Here we have to define the URL when we register the application – it’s not possible to alter the URL after the registration has been made.

Internally the strategy library has to use `/w/index.php?title=` paths in a few places, like so:

:authorize_path => '/wiki/Special:Oauth/authorize',
:access_token_path => '/w/index.php?title=Special:OAuth/token',
:request_token_path => '/w/index.php?title=Special:OAuth/initiate',

This could be due to a bug in the OAuth extension, or due to how the wiki redirects from /wiki/Special pages to /w/index.php pages… I suspect this may change in the future.

Another thing to note is that the MediaWiki OAuth implementation uses a cool but non-standard way of identifying the user. OmniAuth and Devise need a way to get the identity of the user. Calling `/w/index.php?title=Special:OAuth/identify` returns a JSON Web Token (JWT). The JWT is signed using the OAuth secret, so the library decodes it and gets the user information.
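
As a minimal sketch of that identify step (simplified from what the strategy does internally; `identify_jwt` here stands for the raw response body from the identify endpoint):

require 'jwt'

# The JWT returned by Special:OAuth/identify is signed with the consumer
# secret, so we can verify and decode it with the jwt gem.
payload, _header = JWT.decode(identify_jwt, "consumer_secret")
puts payload["username"]  # the logged-in wiki user's name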

Calling the MediaWiki API

OmniAuth is mainly about authentication – it’s not really about using OAuth to do things on the user’s behalf – but it’s relatively easy to do so if you want. The OmniAuth developers recommend using it in conjunction with other libraries; for example, if you are using omniauth-twitter, you should use the Twitter gem with the OAuth credentials to post tweets. There is no such OAuth-aware gem for MediaWiki. Existing Ruby libraries such as MediaWiki Gateway and MediaWiki Ruby API currently only use usernames and passwords – but they are worth looking at for help in crafting the necessary requests.

So we will have to use the OAuth library and call the MediaWiki API directly. In this example we’ll call the Wikimedia Commons API.

Within a Devise / OmniAuth setup, in the callback method, you can directly get an OAuth::AccessToken via `request.env["omniauth.auth"]["extra"]["access_token"]`, or you can get the token and secret from `request.env["omniauth.auth"]["credentials"]["token"]` and `request.env["omniauth.auth"]["credentials"]["secret"]`.
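
For example, a Devise callback controller might persist those credentials on the user record for later API calls – a minimal sketch, with the controller, model and column names assumed:

# Sketch only: controller, model and column names are illustrative.
class Users::OmniauthCallbacksController < Devise::OmniauthCallbacksController
  def mediawiki
    auth = request.env["omniauth.auth"]
    # Find or create the local user and store the OAuth credentials
    @user = User.find_or_create_by(username: auth["info"]["name"])
    @user.update(auth_token:  auth["credentials"]["token"],
                 auth_secret: auth["credentials"]["secret"])
    sign_in_and_redirect @user, event: :authentication
  end
end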

Assuming the authentication token and secret are stored in the user model, the following could be used to query the MediaWiki API at a later date:

require 'oauth'

# Recreate the consumer and access token from the stored credentials
@consumer = OAuth::Consumer.new "consumer_key", "consumer_secret",
            {:site => "https://commons.wikimedia.org"}
@access_token = OAuth::AccessToken.new(@consumer, user.auth_token, user.auth_secret)
# Ask the API for the user's rights and edit count
uri = 'https://commons.wikimedia.org/w/api.php?action=query&meta=userinfo&uiprop=rights|editcount&format=json'
resp = @access_token.get(URI.encode(uri))
logger.debug resp.body.inspect
# {"query":{"userinfo":{"id":12345,"name":"WikiUser",
# "rights":["read","writeapi","purge","autoconfirmed","editsemiprotected","skipcaptcha"],
# "editcount":2323}}}

Here we called the query action for userinfo, asking for rights and editcount information.

Leeds Creative Labs – Initial steps and ideas around The Hajj

Cross posted from The Leeds Creative Labs blog.

I signed up to take part in the Leeds Creative Labs Summer 2014 programme with the hope that it would result in something interesting – something that a techie would never normally get the opportunity to do. It’s certainly exceeded that expectation: it’s been a fascinating, enthralling process so far, and I feel honoured to have been selected to participate.

I’m the designated “technologist”, in partnership with Dr Seán McLoughlin and Jo Merrygold on this project around the Hajj and British Muslims. Usually I tend to do geospatial, collaborative and open data projects, although I’m also a member of the Leeds group of psychogeographers. Psychogeography is intentionally vague to describe, but one definition is that it’s about the feelings and effects of space and place on people. It’s also about a critique of space – a way to see how modern-day consumerism/capitalism is changing how our spaces are, and by extension how we behave in these spaces.

We had our first meeting last week – it was a “show and tell” by Seán and Jo, sharing some of the ideas, research, themes and topics that could be relevant to what we will be doing.

Show and tell

Seán, from the School of Philosophy, Religion and the History of Science, introduced his research on Islam and Muslim culture, politics and society in contexts of contemporary migration, diaspora and transnationalism. In particular his work has been around, and with, South Asian heritage British Muslim communities. The current focus of his work, and the primary subject of this project, is researching British Muslim pilgrims’ experiences of the Hajj.

The main resources are audio interviews, transcripts and online questionnaires from a number of different sources: pilgrims of all ages and backgrounds, and other people related to the Hajj “industry” such as tour operators and charities.

Towards the end of the year there are a few set days for the Hajj – a once in a lifetime pilgrimage to the holy Saudi Arabian city of Mecca. You have probably seen photos such as this one, where thousands of pilgrims circle the Kaaba – the sacred cuboid house right in the centre of the most sacred Muslim mosque.

It’s literally the most sacred point in Islam – the focal point for prayers and thoughts. Muslims orient themselves towards this building when praying, and the place is thought about everywhere: people may have paintings of it in their homes in the UK, and they may bring back souvenirs of their Hajj pilgrimage. You can see how the psychogeography of space and place, and its effects on people’s emotions and thoughts, could be very applicable here!

And yet the Hajj itself is about more than just the Kaaba – it involves a number of activities around the area. Here’s a map!

The Hajj

These activities, each with their own days and particular ways of doing them, are literally meant to follow in the footsteps of key religious figures of the past. I will let the interested reader discover these for themselves, but there are a number of fascinating issues surrounding the Hajj for British Muslims, which Seán outlined.

Here’s a small example of some of these themes:

* Organising the Hajj (tour operators, travel etc.)
* What the personal experiences of the pilgrims were.
* How Mecca has changed, and how the Hajj has changed.
* The commercial, the profane, the everyday, the transcendent and the sacred.
* How this particular location and event works over time and space.
* The differences and similarities of people and cultures, and possible experiences of poverty.
* “Hajj is not a holiday” and Hajj ratings.
* Differences in approach of modern British Muslims to going on the Hajj (compared to, say, their grandparents).
* Returning home, and the meaning and expectations of returnees (called Hajjis).

What we did and didn’t do

We didn’t rush to define our project outputs – but we all agreed that we wanted to produce something!

Echoing Maria’s earlier post, we are trying to leave the options open for what we hope to do, allowing our imaginations to run and to explore options. I think this does justice to the concept of experimentation and collaboration, and should help us be more creative. We can see which ideas spark our imaginations and which address the issues best – what examples and existing things are out there that can be re-appropriated or borrowed, and which things point us in the right direction.

What I did after

So after the show and tell my mind was spinning with new ideas and concepts. It took me a few days to go over the material, do some research of my own, and see what sorts of things I might be able to contribute. It’s certainly sparked my curiosity!

I was to prepare a show and tell (an ideas brain-dump) for the next meeting. The examples I prepared ranged from cut-and-paste transcriptions, 3D maps, FourSquare and social media, to story maps, interactive audio presentations and oral history applications. I also gave a few indications of possible uses of psychogeography with the themes. I hope to use this blog to share some of these ideas in later posts.

Initially I mentioned the difference between a “hacker” approach and the straight client-and-consultant way of doing development – for example, encouraging collaborative play and exploration rather than hands-off development, and allowing things to remain open. The further steps will be crystallising some of these ideas: finding better examples and working out what we want to look at or devote more time to. We’d then be able to focus on some aims and requirements for a creative, interesting project.