How to use the aerial photographs of Nantes with Leaflet

[Screenshot of an interactive map]

This short post explains how you can use the aerial photographs of Loire-Atlantique to create interactive maps with Leaflet.

The aerial photographs of Loire-Atlantique are part of the Open Data datasets of the Pays de la Loire region, and to access and use the data you have to accept their license.

To access the images, the vuduciel.loire-atlantique.fr site delivers a WMS (Web Map Service) feed that can easily be integrated with Leaflet using a TileLayer.WMS. For example, with a few lines of HTML you can generate a map and add a marker:

<html>
<head>
    <link rel="stylesheet" href="http://cdn.leafletjs.com/leaflet-0.7.1/leaflet.css" />
    <style>
        #map { height: 480px; }
    </style>
</head>
<body>
    <div id="map"></div>
    <script src="http://cdn.leafletjs.com/leaflet-0.7.1/leaflet.js"></script>
    <script>
        // Centre the map on Nantes
        var map = L.map('map').setView([47.2162, -1.5492], 14);

        // WMS feed serving the Loire-Atlantique aerial photographs
        var laURL = "http://services.vuduciel.loire-atlantique.fr/geoserver/ows/";
        var loireAtlantique = L.tileLayer.wms(laURL, {
            layers: 'ORTHO44:jp2',
            format: 'image/jpeg',
            attribution: "Open Data D\u00e9partement de Loire-Atlantique"
        });
        loireAtlantique.addTo(map);

        // The marker has to be on the map before its popup can be opened
        var marker = L.marker([47.2162, -1.5492]);
        marker.addTo(map);
        marker.bindPopup("Ch&acirc;teau des ducs de Bretagne").openPopup();
    </script>
</body>
</html>

Here is the result with the code from the example.

If you need to modify the images delivered by the WMS, or you want to serve the images statically from your own web server, you can use landez, for example, which lets you download the tiles of a particular area of the map, modify them and host them in a directory of your web server.
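
For example, here is a minimal sketch based on the landez README; the WMS URL and layer come from the snippet above, while the bounding box, zoom levels and output file name are arbitrary values chosen for the illustration:

from landez import MBTilesBuilder

# Download the tiles covering central Nantes from the WMS feed
# and pack them into an MBTiles file (values below are placeholders).
mb = MBTilesBuilder(
    wms_server="http://services.vuduciel.loire-atlantique.fr/geoserver/ows/",
    wms_layers=["ORTHO44:jp2"],
    wms_options=dict(format="image/jpeg"),
    filepath="nantes.mbtiles",
)
mb.add_coverage(bbox=(-1.60, 47.19, -1.50, 47.24), zoomlevels=[13, 14, 15])
mb.run()

Depending on how you want to host the tiles, check the landez documentation for its other export options.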

Disclaimer: my mother tongue is Spanish, so don't hesitate to correct me if there are any spelling mistakes :)

Testing your REST client in Python

When you start to write a client for a REST API in Python, at the beginning it's easy to test it in a Python interactive session, but at some point you'll have to write tests, and at that moment you'll see that it's not easy to test your code against live data from the RESTful web API. You may run into various problems: the network may fail while the tests run, the web server may be temporarily down, or the tests may become slow due to network latency.

A solution to this problem is to use mock objects: they simulate the behavior of your real objects in a controlled way. In this case a mock object can simulate the behavior of the urlopen function (from the urllib2 module) and return something like an HTTP response (a file-like object) without hitting the real REST API. The returned file-like object can map a RESTful resource to a file that contains a pre-saved response from the real web server.

To show the idea, I wrote a simple REST client for the GitHub API. Here's what the directory structure looks like:

$ tree project
project
├── client.py
└── tests/
    ├── test_client.py
    └── resources/
        └── users/
            └── test_user

3 directories, 3 files

I use nose as the test runner and Michael Foord's mock library to create mock objects. You can install them into a virtualenv by typing:

$ pip install nose mock

Here's the content of the client.py file:

import json
from urllib2 import urlopen


class ClientAPI(object):
    def request(self, user):
        """Fetch a GitHub user's profile and return it as a dict."""
        url = "https://api.github.com/users/%s" % user
        response = urlopen(url)

        raw_data = response.read().decode('utf-8')
        return json.loads(raw_data)

As you can see, it calls urlopen, parses the JSON data from the HTTP response and returns a Python dictionary.
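
A test for this client might then patch urlopen and hand back the resource file as the fake HTTP response. This is only a minimal sketch of what tests/test_client.py could contain; it assumes that tests/resources/users/test_user holds a pre-saved JSON response for a hypothetical test_user account:

import os
import sys
import unittest

from mock import patch

# Make client.py importable regardless of where the tests are run from.
sys.path.insert(0, os.path.join(os.path.dirname(__file__), os.pardir))
import client

RESOURCES = os.path.join(os.path.dirname(__file__), 'resources')


class ClientAPITest(unittest.TestCase):

    def test_request_returns_parsed_json(self):
        fixture = os.path.join(RESOURCES, 'users', 'test_user')
        # Patch urlopen as imported in client.py, so no real request is made;
        # the pre-saved file plays the role of the file-like HTTP response.
        with patch('client.urlopen') as mock_urlopen:
            mock_urlopen.return_value = open(fixture)
            data = client.ClientAPI().request('test_user')

        mock_urlopen.assert_called_once_with(
            'https://api.github.com/users/test_user')
        self.assertTrue(isinstance(data, dict))

Running the tests is then just a matter of calling nosetests from the project directory.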


"On an open Internet, where all links are created equal, good ideas win. Anyone, anywhere can share an idea that can be seen by millions."
— Alexis Ohanian, Reddit co-founder

Mario Bros in Nantes

Mario Bros in the streets of Nantes. Can you guess where I found it?

How to use the Percolate API with Python

Recently, I've been working on migrating a Django project from Solr to Elasticsearch. Both of them are great search servers based on Apache Lucene, but Elasticsearch has an interesting feature called Percolate that's missing in Solr.

Percolate is the reverse operation of indexing and then searching: instead of sending docs, indexing them, and then running queries, one sends queries, registers them, and then sends docs and finds out which queries match each doc.

So, here's an example of percolation from the Percolate API documentation, done with Python. See the setup section to learn how to set up Elasticsearch and get it running.

First, you'll want to install pyelasticsearch:

$ pip install pyelasticsearch

Open a terminal and type python to use the Python interactive console:

>>> from pyelasticsearch import ElasticSearch
>>> es = ElasticSearch('http://localhost:9200')

You need to create a new index, named test:

>>> es.create_index('test')
{u'acknowledged': True, u'ok': True}

Now, you must specify a query to index. As _percolator is also an index, we use the index method to index it; doc_type is the name of the index created before (test) and kuku is our query id:

>>> query = {'query': {'term': {'field1': 'value1'}}}
>>> es.index(index='_percolator', doc_type='test', doc=query, id='kuku')
{u'_type': u'test', u'_id': u'kuku', u'ok': True, u'_version': 1, u'_index': u'_percolator'}

Finally, to test a percolate request we need to call the percolate method with a document:

>>> doc = {'doc': {'field1': 'value1'}}
>>> es.percolate(index='test', doc_type='type1', doc=doc)
{u'matches': [u'kuku'], u'ok': True}

We can see that kuku matches our document.
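
Just to illustrate the other side (this extra snippet is not part of the original example), a document that doesn't satisfy the registered query should come back with an empty matches list:

>>> other_doc = {'doc': {'field1': 'another value'}}
>>> es.percolate(index='test', doc_type='type1', doc=other_doc)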

For more details, see the Elasticsearch API reference and the pyelasticsearch documentation.

Pac-Man Ghosts in Nantes

One of the ghosts from the Pac-Man video game that I discovered in the streets of Nantes.

Running Scrapy on Amazon EC2

Sometimes it can be useful to crawl sites with Scrapy using temporary resources in the cloud, and Amazon EC2 is perfect for this task. You can launch an Ubuntu instance and schedule your spiders using the Scrapyd API. With boto, a Python interface to Amazon Web Services, you can launch instances and install the Scrapy daemon using the user data feature to run a script on boot.

First, you need an AWS account with your access keys, an EC2 security group accepting TCP connections on port 6800, and a key pair for the selected region. After that, you must choose an Ubuntu EC2 image; here you can find a list of Ubuntu AMIs.
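
Since the rest of the walkthrough isn't reproduced here, the following is only a rough sketch of how launching such an instance with boto could look; the region, credentials, AMI ID, key pair name, security group name and user-data script are all placeholders:

import boto.ec2

# Script run by cloud-init on first boot; installing Scrapyd via pip and
# starting it in the background is an assumption, a packaged service may
# be preferable in practice.
USER_DATA = """#!/bin/bash
apt-get update
apt-get install -y python-pip
pip install scrapyd
scrapyd &
"""

conn = boto.ec2.connect_to_region(
    'us-east-1',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY')

reservation = conn.run_instances(
    'ami-00000000',               # an Ubuntu AMI for the chosen region
    key_name='my-keypair',
    instance_type='t1.micro',
    security_groups=['scrapyd'],  # must allow TCP connections on port 6800
    user_data=USER_DATA)

instance = reservation.instances[0]

Once the instance is running and Scrapyd answers on port 6800, you can deploy and schedule your spiders through the Scrapyd API.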


Ubatar and the Ubuntu App Showdown

For the last 3 weeks I've been working on developing an application to participate in a contest called the Ubuntu App Showdown.

My application is called Ubatar and its main objective is to provide a solution to this idea. Here you can see some videos of the latest version of Ubatar fulfilling its purpose.

During these 3 weeks I learned many things, reading a lot of source code, looking for examples and discussing with other developers. I can only say that I really enjoyed it and that it was an incredible experience. Thanks to the whole Ubuntu development team and the community at large.

And this is not the end of the project; there are still many things to improve. If you want to help, please contact me via this form. Also, any questions or suggestions can be sent to Questions for Ubatar on Launchpad.