Devise Omniauth OAuth Strategy for MediaWiki (Wikipedia, Wikimedia Commons)

Authentication of MediaWiki users with a Rails Application using Devise and Omniauth

Wikimaps is a Wikimedia Commons project to georeference/georectify historical maps. Read the Wikimaps blog here. It uses a customised version of the Mapwarper open source map georectification software, as seen on http://mapwarper.net, to speak with the Commons infrastructure, and runs on the Wikimedia Foundation's Labs servers. We needed a way to allow Commons users to log in easily, and so I developed the omniauth-mediawiki strategy gem so your Ruby applications can authenticate against Wikimedia wikis, like Wikipedia.org and Wikimedia Commons.


The Wikimaps Warper application uses Devise, which works very nicely with OmniAuth. The above image shows traditional login with username and password alongside OmniAuth logins via Wikimedia Commons, GitHub and OpenStreetMap.

After clicking the Wikimedia Commons button the user is presented with this:

It may not be that pretty, but once the user grants access they are redirected back to our app and logged in.

This library used the omniauth-osm library as an initial framework to build upon.

The code is on github here:   https://github.com/timwaters/omniauth-mediawiki

The gem on RubyGems is here: https://rubygems.org/gems/omniauth-mediawiki

And you can install it by including it in your Gemfile or by doing:

gem install omniauth-mediawiki

Create new registration

The mediawiki.org registration page is where you create an OAuth consumer registration for your application. You can specify all Wikimedia wikis or a specific one to work with. Registration creates a key and secret which work with your own user straight away, so you can start developing immediately, although currently a wiki admin has to approve each registration before other wiki users can use it. Hopefully this will change as more applications move away from HTTP Basic to more secure authentication and authorization strategies!


Usage

Usage is as per any other OmniAuth 1.0 strategy. If you're using Rails, add the strategy to your `Gemfile` alongside omniauth:

gem 'omniauth'
gem 'omniauth-mediawiki'

Once these are in, you need to add the following to your `config/initializers/omniauth.rb`:

Rails.application.config.middleware.use OmniAuth::Builder do
  provider :mediawiki, "consumer_key", "consumer_secret"
end

If you are using Devise, this is how it looks in your `config/initializers/devise.rb`:

config.omniauth :mediawiki, "consumer_key", "consumer_secret", 
    {:client_options => {:site => 'http://commons.wikimedia.org' }}

If you would like to use this plugin against a particular wiki, you can use the environment variable WIKI_AUTH_SITE to set the server to connect to. Alternatively you can pass the site as a client_option to the omniauth config, as seen above. If no site is specified, the http://www.mediawiki.org wiki will be used.

Notes

In general, see the pages around https://www.mediawiki.org/wiki/OAuth/For_Developers for more information.

When registering a new OAuth consumer you need to specify the callback URL properly, e.g. for development:

http://localhost:3000/u/auth/mediawiki/callback

http://localhost:3000/users/auth/mediawiki/callback

This is different from many other OAuth authentication providers which allow the consumer applications to specify what the callback should be. Here we have to define the URL when we register the application. It’s not possible to alter the URL after the registration has been made.

Internally the strategy library has to use `/w/index.php?title=` paths in a few places, like so:

:authorize_path => '/wiki/Special:Oauth/authorize',
:access_token_path => '/w/index.php?title=Special:OAuth/token',
:request_token_path => '/w/index.php?title=Special:OAuth/initiate',

This could be due to a bug in the OAuth extension, or due to how the wiki redirects from /wiki/Special pages to /w/index.php pages. I suspect this may change in the future.

Another thing to note is that the MediaWiki OAuth implementation uses a cool but non-standard way of identifying the user. OmniAuth and Devise need a way to get the identity of the user. Calling `/w/index.php?title=Special:OAuth/identify` returns a JSON Web Token (JWT). The JWT is signed using the OAuth secret, so the library decodes it and gets the user information.
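To make that concrete, here is a rough sketch of what decoding such a token involves, using only the Ruby standard library. This hand-rolled HS256 check is for illustration only (the strategy itself uses a JWT library), and the function name is made up:

```ruby
require 'openssl'
require 'base64'
require 'json'

# Illustrative only: verify and decode a JWT like the one returned by
# Special:OAuth/identify. The token is signed (HS256) with the OAuth
# consumer secret, so the same secret can verify it.
def decode_identify_jwt(token, consumer_secret)
  header_b64, payload_b64, sig_b64 = token.split('.')
  signing_input = "#{header_b64}.#{payload_b64}"
  expected = OpenSSL::HMAC.digest('sha256', consumer_secret, signing_input)
  actual = Base64.urlsafe_decode64(sig_b64)
  raise 'bad signature' unless expected == actual
  JSON.parse(Base64.urlsafe_decode64(payload_b64))
end
```

In production you would also want a constant-time signature comparison and checks on the issuer and audience claims.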

Calling the MediaWiki API

OmniAuth is mainly about authentication; it's not really about using OAuth to do things on users' behalf, but it's relatively easy to do so if you want to. The recommendation is to use it in conjunction with other libraries: for example, if you are using omniauth-twitter, you should use the Twitter gem with the OAuth authentication variables to post tweets. There is no such gem for MediaWiki that uses OAuth. Existing Ruby libraries such as MediaWiki Gateway and MediaWiki Ruby API currently only use usernames and passwords, but they are worth looking at for help in crafting the necessary requests.

So we will have to use the OAuth library and call the MediaWiki API directly:

In this example we’ll call the Wikimedia Commons API.

Within a Devise/OmniAuth setup, in the callback method, you can directly get an OAuth::AccessToken via `request.env["omniauth.auth"]["extra"]["access_token"]`, or you can get the token and secret from `request.env["omniauth.auth"]["credentials"]["token"]` and `request.env["omniauth.auth"]["credentials"]["secret"]`.
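As a minimal sketch (the helper name and the returned field names are hypothetical), pulling those two values out of the auth hash for storage on the user model might look like:

```ruby
# Extract the OAuth token and secret from an OmniAuth auth hash,
# ready to be saved on the user record (field names illustrative).
def extract_oauth_credentials(auth)
  creds = auth['credentials'] || {}
  { auth_token: creds['token'], auth_secret: creds['secret'] }
end
```

In a Devise callbacks controller this would typically be called with `request.env["omniauth.auth"]` and the result persisted alongside the user.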

Assuming the authentication token and secret are stored on the user model, the following could be used to query the MediaWiki API at a later date.

@consumer = OAuth::Consumer.new "consumer_key", "consumer_secret",
            {:site=>"https://commons.wikimedia.org"}
@access_token = OAuth::AccessToken.new(@consumer, user.auth_token, user.auth_secret)
uri = 'https://commons.wikimedia.org/w/api.php?action=query&meta=userinfo&uiprop=rights|editcount&format=json'
resp = @access_token.get(URI.encode(uri))
logger.debug resp.body.inspect
# {"query":{"userinfo":{"id":12345,"name":"WikiUser",
# "rights":["read","writeapi","purge","autoconfirmed","editsemiprotected","skipcaptcha"],
# "editcount":2323}}}

Here we called the query action for userinfo, asking for rights and editcount information.

Leeds Creative Labs – Initial steps and ideas around The Hajj

Cross posted from The Leeds Creative Labs blog.

I signed up to take part in the Leeds Creative Labs Summer 2014 programme with the hope that it would result in something interesting, something that a techie would never normally get the opportunity to do. It’s certainly exceeded that expectation – it’s been a fascinating, enthralling process so far, and I feel honoured to have been selected to participate.


I’m the designated “technologist”, in partnership with Dr Seán McLoughlin and Jo Merrygold on this project around the Hajj and British Muslims. Usually I tend to do geospatial, collaborative and open data projects, although I’m also a member of the Leeds group of psychogeographers. Psychogeography is intentionally vague to define, but one definition is that it’s about the feelings and effects of space and place on people. It’s also a critique of space – a way to see how modern-day consumerism/capitalism is changing our spaces and, by extension, how we behave in them.

We had our first meeting last week – it was a “show and tell” by Seán and Jo to share some of the ideas, research, themes and topics that could be of relevance to what we will be doing.

Show and tell

Seán, from the School of Philosophy, Religion and The History of Science introduced his research on Islam and Muslim culture, politics and society in contexts of contemporary migration, diaspora and transnationalism. In particular his work has been around and with South Asian heritage British Muslim communities. The current focus of his work, and the primary subject of this project is about researching British Muslim pilgrims’ experiences of the Hajj.

The main resources are audio interviews, transcripts and online questionnaires from a number of different sources: pilgrims of all ages and backgrounds, and other people connected to the Hajj “industry”, such as tour operators and charities.

Towards the end of the year are a few set days for the Hajj – a once in a lifetime pilgrimage to the holy Saudi Arabian city of Mecca. You have probably seen similar photos such as this where thousands of pilgrims circle the Kaaba – the sacred cuboid house right in the centre of the most sacred Muslim mosque.

It’s literally the most sacred point in Islam. It’s the focal point for prayers and thoughts; Muslims orient themselves towards this building when praying. The place is thought about everywhere – for example, people may have paintings of this building in their homes in the UK, and they may bring back souvenirs of their Hajj pilgrimage. You can see that the psychogeography of space and place on the emotions and thoughts of people could be very applicable here!

And yet the Hajj itself is more than just about the Kaaba – it’s a number of activities around the area. Here’s a map!

The Hajj

These activities, all with their own days and particular ways of doing them, literally follow in the footsteps of key religious figures of the past. I will let the interested reader discover these for themselves, but there are a number of fascinating issues surrounding the Hajj for British Muslims, which Seán outlined.

Here’s a small example of some of these themes:

Organising the Hajj (tour operators, travel etc).
What the personal experiences of the pilgrims were.
How Mecca has changed, and how the Hajj has changed.
The commercial, the profane, the everyday and the transcendent and the sacred.
How this particular location and event works over time and space.
What are the differences and similarities of people and cultures, and possible experiences of poverty.
“Hajj is not a holiday” and Hajj Ratings.
Differences in approach of modern British Muslims to going to the Hajj (compared to say their grandparents).
Returning home and the meaning and expectations of returnees (called Hajjis).
What we did and didn’t do

We didn’t rush to define our project outputs – but we all agreed that we wanted to produce something!

Echoing Maria’s earlier post, we are trying to leave the options open for what we hope to do, allowing our imaginations to run and to explore possibilities. I think this does justice to the concept of experimentation and collaboration, and should help us be more creative. We can see which ideas spark our imaginations, which address the issues better, what examples and existing things are out there that can be re-appropriated or borrowed, and which things point us in the right direction.

What I did after

So after the show and tell my mind was spinning with new ideas and concepts. It took me a few days to go over the material and do some research of my own, and see what sorts of things I might be able to contribute to. It’s certainly sparked my curiosity!

I was to prepare for a show and tell (an ideas brain-dump) for the next meeting. The examples I prepared included things from cut and paste transcriptions, 3D maps, FourSquare and social media, to story maps, to interactive audio presentations and oral history applications. I also gave a few indications as to possible uses of psychogeography with the themes. I hope to use this blog to share some of these ideas in later posts.

Initially I mentioned the difference between a “hacker” approach and the straight client-and-consultant way of doing development – for example, encouraging collaborative play and exploration rather than hands-off development, and allowing things to remain open. The further steps will be crystallizing some of these ideas: finding better examples and working out what we want to look at or devote more time to. We’d then be able to focus on some aims and requirements for a creative, interesting project.

State of the Map Europe 2014 – Pure OpenStreetMap.

Karlsruhe

State of the Map Europe 2014 was in the German city of Karlsruhe. It was a planned city, designed and built around 1715 – pre-motor car, but with wide avenues – and half of the city seems to be a park. It’s also famous for being the home of the Karlsruhe Addressing Scheme, an example of a folksonomy tagging convention that everyone pointed to and adopted, thanks to the great mappers there, including the folks from Geofabrik.de, who also organised the conference. Here are some notes from the conference:

Nature of the conference

The European conference seemed much more intimate, with a focus on developers and contributors, compared to the US conference, which I think had more end users and people sent there by their bosses. Pretty much every single session was on topic (except for the closing buzzword-laden keynote!), and as such there were no enlightening talks about psychogeography, general historical mapping, or other geospatial software. It was pure OSM.

All the talks are online and the video recordings are on YouTube; I encourage you to view them.

3D maps

3D Maps, such as Mapzen and OSMBuildings were prominent – and both showed off some very creative ways of representing 3D maps.

Geocoder and Gazetteers

This was the only dedicated track in the conference – it was full of gazetteers, with announcements from OpenCage and Mapzen. All appear to be using ElasticSearch, the same as we (Topomancy) did last year for the NYPL and Library of Congress. Check out the gazetteer here.

Other stuff

Trees – Jerry did a talk about mapping trees: how they were represented in historical maps, and how we can use SVG symbols to display woods and trees in a better way. Jerry led an expedition and workshop on the morning of the hack day to show participants the different habitats, surface types and variance in the environment that mappers could take into consideration.

Mapbox WebGL – Constantine, a European engineer at Mapbox, gave a fascinating talk about the complexity of the technical challenges with vector tiles and 3D maps. I really enjoyed it.


OpenGeoFiction - using the OSM stack to create fictional worlds  – not fantasy or science fiction, but amazing experiments in amateur planning, utopian visions and creative map making. OpenGeoFiction.net

The fictional world of Opengeofiction is thought to be in modern times. So it doesn’t have orcs or elves, but rather power plants, motorways and housing projects. But also picturesque old towns, beautiful national parks and lonely beaches.

I love this project!

Vector Tiles – Andy Allan talked about his new vector tile software solution at ThunderForest, being one of the only people to know the ins and outs of how Mapbox do the Mapnik / TileMill vector magic. ThunderForest powers the cycle map. Vector maps have lots of advantages, and I think we’d probably use them for OpenHistoricalMap purposes at some stage. Contact Andy for your vector mapping and online cartographic needs!

POI Checker – from the same house as WheelMap.org comes POI Checker. It allows organisations to compare their data with data in OSM, and gives a very neat diff view of Points of Interest. This could be a good project to follow.

Historical Stuff

OpenHistoricalMap – there were a few things about historical maps at the conference, although in my opinion fewer than at any previous SOTM. I did a lightning talk about OpenHistoricalMap and completely failed to mention the cool custom UK-centric version of the NYPL’s Building Inspector.

Opening Keynote  – this was peppered with the history of the city and gave a number of beautiful historical map examples. Watch the video.

Map Roulette v2 – Serge gave a talk about the new version of Map Roulette: it is being customised to be able to run almost any custom task on the system. We chatted at the hack day about whether the tasks from the Building Inspector could be a good fit for the new Map Roulette – I will look into this!


NYPL Adds 20,000 High Resolution Maps to the NYPL Warper – free to download

NYPL Warper – New Maps!

This week’s news was about a project I’ve been working on for the last few months with Topomancy: adding a whole load of new maps to one of the largest libraries around, the New York Public Library. These were added to an award-winning crowdsourced geo-rectification, historical map exploration and discovery application. Users can download full-resolution TIFF files without needing to log in, and if a map has been geo-referenced/rectified/warped, you can freely download the warped versions too. The images are all under CC-Zero licenses – so, effectively, Public Domain in nature. Credit to the library is appreciated though.

From motherboard.vice.com/read/new-york-public-library-releases-20000-beautiful-high-resolution-maps


The announcement of the freely available 20,000 maps from the NYPL this week has been covered in a few places including OpenGLAM MotherBoard, OpenCulture and InfoDocket amongst others!


How

Folks may recognise that the Warper has been around for a little while now, so here’s what we did. We hooked it up with the NYPL Digital Collections API – this changed the way it requests images: instead of internally requesting them from the image server, it uses the API properly. A whole suite of import processes was also built, to enable searching for maps in the repository, importing individual map sheets, importing whole atlases or layers full of maps, and, most usefully, importing newly digitized maps. A by-product of this was extracting some of the library code into the nypl_repo Ruby gem. There’s even some documentation for the nypl_repo gem for interacting with the NYPL Digital Collections API.

The code for the NYPL Warper can be found on GitHub – although if you want to do this at home, have a look at the code for MapWarper.net, also available on GitHub.

geosearch

A little-used feature of the warper – using a map to search for maps!

Magic, Illusion, Perception @ March Leeds Superpositon

A few days ago saw the most recent meeting of the Superposition group in Leeds. That night was under the theme “Magic, Illusion and Perception”. I’ve pinched a lot of the text in this post from that one!

There were four talks. The first was about a “curiosity” machine that uses lasers to draw moving images on clouds – a kind of zoopraxiscope – which was taken up in a small plane, where images of a moving horse were projected onto a cloud. Wonderful stuff.


Ben Dalton’s talk ‘Zines in the age of ‘big data’?’ introduced and proposed the idea of bundle publishing. At odds with current trends in digital distribution, bundle publishing involves editing a large collection of digital content and then publishing it on a specific date as a single, large file. This was the most intriguing talk of the evening: instead of streams, or blogs, or things, media could be published and shared in huge bundles of files. I’m encouraged partly by online publications such as The New Inquiry as an alternative to a blog roll. Ben is also interested in pseudonyms. A team of writers may publish under the same pseudonym, which would have its own character and style of writing. There are also the pseudonyms used by “anon” users – names that become familiar to people.

Experimental jazz musician and neuroscientist Christophe de Bézenac talked about the blurring of self and other in music and psychosis. Having studied at the Conservatoire de Strasbourg and been a regular performer at international music festivals, he explained how perceptual ideas have guided his musical practice and how his musical work has, in turn, fed into his empirical neuroscience research into psychosis. This talk really excited the audience, with discussions about what ambiguity is – ambiguous language, music and so on. What is the crowd? What is the mob? Can someone experience things as a group? Fascinating stuff.


Professional magician and sleight-of-hand artist Tony O’Neill discussed his creative process within the magical syllabus, sharing his current findings on the power of suggestion and self-belief. He showed that magic and fortune telling can be used to help people, even when they know what the process is all about. I wonder if a city needs more magicians, or whether this type of magic could be used on a group of people. Among the things discussed: how you can change someone’s mind by planting suggestions.


Tube Sign – Service Information image generation service

Part of a series of posts covering some small projects I did whilst unable to work. They range from the role of familiar strangers on the internet and anti-social networks, through meteorological hacks and funny memes, to Twitter bots. This post is about a funny meme image generation service.

Sometimes I surf the internet for funny pictures. Although I have a healthy distrust for the ones with cats, there was one class of amusing image which caught my eye: those funny or inspirational London Underground passenger information signs. I was seeing these every week and thought… “I could do that”. So I did: I created tubesign.herokuapp.com, and a few other people found it funny. At one point there were about 50 people visiting at any one time, and when I added statistics there were 13,000 views on the second day, with an image being created every second. At the time of writing it has had over 50,000 views.


An actual real life TfL service information Sign! from http://weknowmemes.com/wp-content/uploads/2012/09/apple-maps-london-tube-sign.jpg

How I did it.

First of all I looked into fonts – I wanted a good handwriting font which would look as if someone had used a marker on a whiteboard. Google Fonts delivered, and I chose Reenie Beanie.

It uses Sinatra, Ruby and RMagick, and is hosted on the Heroku platform – even at its busiest it was able to cope on the free tier. It doesn’t use a database, but it does cache requests for images.

I use a bit of random number generation to change the angle the text is written at, and change the indent a bit.
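A rough sketch of those two ideas together – a random tilt and indent per sign, and an in-memory cache so identical requests are only computed once. The RMagick drawing itself is omitted, and all names here are hypothetical rather than the app's actual code:

```ruby
require 'digest'

# Hypothetical sketch of Tube Sign's request handling (no actual drawing).
SIGN_CACHE = {}

# Each sign gets a slightly random angle and indent so the text
# looks hand-written rather than typeset.
def sign_params(text)
  { text: text,
    angle: rand(-3.0..3.0),      # degrees of tilt for the drawn text
    indent: 10 + rand(0..20) }   # left offset in pixels
end

# Cache per distinct text, so repeated requests reuse the same sign.
def cached_sign(text)
  key = Digest::MD5.hexdigest(text)
  SIGN_CACHE[key] ||= sign_params(text)
end
```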

The code for Tube Sign is on GitHub, but give it a go first: tubesign.herokuapp.com

Viral & coverage

I posted this on Facebook and my friends gave it a go, with some hilarious images being created, and then it spread to Twitter, where more and more people found it. Then blogs, mainly London-based ones, found it.

The first image used – I could not find the source for this, so its use was discontinued.

Someone said that the original image was someone’s copyright, so I changed it to a CC-By-SA image by Flickr user Lrosa, which also meant that all images created were under the same licence.

Creative Commons by Share Alike, Attribution image which is the image being used on the application. Image from Flickr, Lrosa, http://www.flickr.com/photos/lrosa/1138285047/

The main media outlets that covered it were: BBC America, ITV, The Londonist, The Atlantic Cities, The Guardian, The Next Web and the B3ta.com newsletter (very proud of that one).

Now the traffic is in the hundreds, with the number of people visiting at any one time countable on one hand.

Future

  • Live preview
  • Better font rendering – defocus
  • Add range of images for different places (Bombay signs, Leeds Metro signs etc)
  • Store images, allow voting, create gallery

Rain Prediction for the immediate future using Met Office DataPoint – rain graph

Part of a series of posts covering some small projects I did whilst unable to work. They range from the role of familiar strangers on the internet and anti-social networks, through meteorological hacks and funny memes, to Twitter bots. This post is about a meteorological hack.

The Met Office in the UK have in this last year published an API for their range of services – all part of the open data movement I think.

DataPoint is a way of accessing freely available Met Office data feeds in a format that is suitable for application developers. It is aimed at professionals, the scientific community and student or amateur developers, in fact anyone looking to re-use Met Office data within their own innovative applications.

The year before, in Denver, USA, a couple of awesome mapping and weather geeks showed me a mobile app that showed when it was going to rain – and, more importantly, when it wasn’t going to rain – at very high temporal resolution. You can use the app to know whether to get a swift half and then leave to catch the bus, or whether to stay in for an hour until the showers end. It was very detailed and highly useful. This app, Dark Sky, was freaking awesome. I wanted it here in the UK, so when the Met Office announced their API I was interested.

You cannot do what Dark Sky does with the Met Office DataPoint API – but what you can do is some interpolation. The API for precipitation forecasts only gives access to a 3-hourly map tile.

http://www.metoffice.gov.uk/datapoint/product/precipitation-forecast-map-layer

Although further poking around shows that they do have an undocumented 1-hourly image.


These map tiles could then be used. http://rain-graph.herokuapp.com is the resulting application, with the code here: https://github.com/timwaters/rain_graph

It’s a Ruby Sinatra application which, for a given location, grabs the precipitation tile for each hour from now. It looks at the pixel value for that location and determines the amount of rain predicted. It shows when the heaviest rain is predicted and when it should stop. Interpolation is done by the graph engine itself – no fancy meteorological modelling is done (at this stage). It uses chunky_png to get the pixel values.
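Reading “the pixel value for the given location” means first converting a lat/lon into a tile, and a pixel within that tile. Here is a sketch of the standard Web Mercator (“slippy map”) maths, assuming 256-pixel tiles – the general formula, not the app’s actual code:

```ruby
# Convert a lat/lon to [tile_x, tile_y, pixel_x, pixel_y] for a
# 256px Web Mercator map tile at the given zoom level.
def latlon_to_tile_pixel(lat, lon, zoom, tile_size = 256)
  n = 2**zoom                          # tiles per axis at this zoom
  x = (lon + 180.0) / 360.0 * n
  lat_rad = lat * Math::PI / 180.0
  y = (1.0 - Math.log(Math.tan(lat_rad) + 1.0 / Math.cos(lat_rad)) / Math::PI) / 2.0 * n
  tile_x = x.floor
  tile_y = y.floor
  [tile_x, tile_y, ((x - tile_x) * tile_size).floor, ((y - tile_y) * tile_size).floor]
end
```

With the tile fetched, ChunkyPNG’s `image[pixel_x, pixel_y]` gives the colour value to translate into a precipitation amount.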


All requests are cached to avoid hitting the Met Office API, and because an image won’t change for an hour. Additionally it uses another API method to get human-readable upcoming forecast text for that location, and displays it under the graph. Contrary to popular global belief it’s not always raining in the UK, so most of the time the graph will show nothing at all!

Considerations:

Pixels to Lat Lon:
Since a lat/lon location is quite specific, it could map to a single pixel in a tile, and that pixel could have a lower or higher value than the ones surrounding it. I could use a kernel average – do a 6×6 pass over the surrounding pixels and take the average value. But since there are tiles at lower zoom levels, zooming out means the spatial extent of one pixel covers that larger area – the lower-zoom tiles would do the work for us.
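For reference, that kernel-average idea is simple to sketch over a plain 2-D array of values (the real code would read values via chunky_png; the window size and grid here are hypothetical):

```ruby
# Average the values in a k-by-k window centred on (x, y) of a 2-D grid,
# ignoring positions that fall outside the grid edges.
def kernel_average(grid, x, y, k = 3)
  half = k / 2
  vals = []
  (y - half..y + half).each do |yy|
    next if yy < 0 || yy >= grid.length
    (x - half..x + half).each do |xx|
      next if xx < 0 || xx >= grid[yy].length
      vals << grid[yy][xx]
    end
  end
  vals.sum.to_f / vals.size
end
```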

Interpolation between forecasts:
It wasn’t clear whether the forecast images showed the predicted situation over the whole hour, or the situation at that moment. Should a user look at an animation to see how a rain cloud moves from A to B and guess that in between there would be rain, or should they assume there will be no rain if none is shown?

User Interface:
It looks a bit bland – we should show the image tiles underneath, perhaps when hovering over a point.

Accuracy:
I haven’t tested the accuracy of this.

Location hard coding:
The text forecasts are hardcoded to a set number of regions, but we could do a closest point and get the correct forecast for the given lat and lon.

Use Yr.No API

Yr.no has a detailed hour-by-hour forecast API giving the amount of precipitation for a place.

http://www.yr.no/place/United_Kingdom/England/Leeds/hour_by_hour_detailed.html

<time from="2013-12-06T19:00:00" to="2013-12-06T20:00:00">
  <!-- Valid from 2013-12-06T19:00:00 to 2013-12-06T20:00:00 -->
  <symbol number="3" name="Partly cloudy" var="mf/03n.11" />
  <precipitation value="0" /><!-- Valid at 2013-12-06T19:00:00 -->
  <windDirection deg="294.2" code="WNW" name="West-northwest" />
  <windSpeed mps="4.3" name="Gentle breeze" />
  <temperature unit="celsius" value="1" />
  <pressure unit="hPa" value="1004.9" />
</time>
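Pulling the hourly precipitation values out of a response like that needs only the standard library’s REXML. A sketch (not the rain-graph code) returning [start_time, value] pairs:

```ruby
require 'rexml/document'

# Parse yr.no hour-by-hour forecast XML and return
# [from_time, precipitation_value] pairs, one per <time> element.
def precipitation_series(xml)
  doc = REXML::Document.new(xml)
  doc.get_elements('//time').map do |t|
    [t.attributes['from'], t.elements['precipitation'].attributes['value'].to_f]
  end
end
```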

Markov Chains, Twitter and Radical Texts

The next few posts will cover some pet projects I did whilst unable to work due to recent civic duty. They range from the role of familiar strangers on the internet and anti-social networks, through meteorological hacks and funny memes, to Twitter bots. The first in this series is about what happens when you combine Markov chains, radical texts and Twitter.

Detournement is a technique now considered to be the forerunner of remixes or mashups, but with a satirical, political nature. Have a look at the Wikipedia entry for detournement if you want to know more about it. Basically you do something to something which twists or re-routes it so that it makes new meanings. It was the Situationists, led by Debord, who really adopted and ran with this as a practice.


Debord would frequently plagiarise other radical texts in his own work. (The Situationists were also behind the original notion of psychogeography – something you may have caught me talking about before.)

So what would happen if we could detourn, or mashup, or plagiarise Debord’s own writings? What if we could publish the results periodically? And what if we had a 140-character limit? Yeah, so these are my experiments with those ideas.

Bruna Rizzi; it is from this disastrous exaggeration. The peasant class could not recognize the practical change of products

The proletariat is objectively reinforced by the progressive disappearance of the globe as the bureaucracy can

Markov chains basically work like this: take a couple of sentences, such as “A lazy dog likes cheese” and “My house likes to be clean”, then look at groups of two or three words together. If two of these groups share the same word (“likes”), make a new sentence using that word to chain them together: “My house likes cheese” or “A lazy dog likes to be clean”. Markov chains result in sentences that look human-readable. The more sentences you feed the population sample, the better or more varied the set of generated sentences.
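That word-chaining idea can be sketched in a few lines of plain Ruby (the marky_markov gem used later does essentially this, with persistence and smarter tokenising):

```ruby
# Build a first-order word-level Markov chain: map each word to the
# list of words that follow it in the sample sentences.
def build_chain(sentences)
  chain = Hash.new { |h, k| h[k] = [] }
  sentences.each do |s|
    s.split.each_cons(2) { |a, b| chain[a] << b }
  end
  chain
end

# Walk the chain from a seed word, picking a random successor at each
# step, until a dead end or the word limit is reached.
def generate(chain, seed, max_words = 10)
  out = [seed]
  (max_words - 1).times do
    nexts = chain[out.last]
    break if nexts.empty?
    out << nexts.sample
  end
  out.join(' ')
end
```

Feeding in the two example sentences above, `generate(chain, 'My')` yields either “My house likes cheese” or “My house likes to be clean”.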

Some radical texts are complete nonsense and really hard to read, so perhaps applying Markov chains to them can help reveal what truths the obscure language hides.

@markov = MarkyMarkov::TemporaryDictionary.new
@markov.parse_file "debord.txt"
raw_text = @markov.generate_23_words

My solution uses Ruby, the Twitter gem and the marky_markov gem.

https://github.com/timwaters/rattoo is the work-in-progress Twitter bot – it currently runs on Heroku, using the scheduler to periodically tweet a sentence, check whether any other users have asked it questions, and reply back to them.