FOSS4G CRUD Benchmark
Jorge Rocha, Universidade do Minho
Performance shoot-outs have been presented each year at FOSS4G events1. The effort put into preparing the benchmark, and the code changes contributed back to the community as a result of this exercise, are really helping to improve the software.
Currently, many web mapping applications are two-way, in the sense that they are used both to present geospatial data to users and to collect it from them. The examples range from sharing cycling tracks (with lots of points) to public participation in urban management (with a few geometries). OpenStreetMap is one of the best examples of these Web 2.0 concepts in the geospatial domain. Using Goodchild's metaphor of six billion sensors, we need very fast upload services for spatial data.
Since geospatial data presentation is already evaluated each year at FOSS4G events, in this paper we present a different benchmark, one that measures how fast open source servers handle spatial contributions.
We set up a basic benchmark environment to evaluate the performance of CRUD (create, read, update and delete) geospatial operations. Datasets were created for the benchmark, covering both simple and more complex geometries, but whenever possible data from the 2010 benchmark2 was reused. Tests were run with 1, 10, 100, 1000 and 5000 CRUD operations in a single request.
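As a minimal sketch of how several create operations can be batched into a single request, the snippet below builds a WFS-T 1.1.0 Transaction document containing N Insert elements. The feature type `topp:points` and its attributes are hypothetical placeholders, not the actual benchmark schema.

```python
def build_wfs_insert(n):
    """Build a WFS-T 1.1.0 Transaction that inserts n point features
    in a single request. Feature type and attribute names are
    illustrative placeholders."""
    header = (
        '<wfs:Transaction service="WFS" version="1.1.0" '
        'xmlns:wfs="http://www.opengis.net/wfs" '
        'xmlns:gml="http://www.opengis.net/gml" '
        'xmlns:topp="http://www.openplans.org/topp">'
    )
    inserts = []
    for i in range(n):
        inserts.append(
            "<wfs:Insert>"
            "<topp:points>"
            f"<topp:name>pt{i}</topp:name>"
            "<topp:geom>"
            '<gml:Point srsName="EPSG:4326">'
            f"<gml:pos>{i % 360 - 180} 0</gml:pos>"
            "</gml:Point>"
            "</topp:geom>"
            "</topp:points>"
            "</wfs:Insert>"
        )
    return header + "".join(inserts) + "</wfs:Transaction>"

payload = build_wfs_insert(10)
print(payload.count("<wfs:Insert>"))  # 10
```

The payload would then be POSTed to the server's WFS endpoint; varying `n` over 1, 10, 100, 1000 and 5000 reproduces the batch sizes used in the tests.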
The services considered are: two different WFS-T implementations (deegree and GeoServer), the RDBMS PostgreSQL, the MapFish Server and the OSM API v0.6.
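For the direct RDBMS case, the batched equivalent of the WFS-T transaction is a single multi-row INSERT. The sketch below builds such a statement for PostgreSQL/PostGIS; the table and column names are illustrative assumptions, not the benchmark's actual schema.

```python
def build_batch_insert(points, table="benchmark_points"):
    """Build one multi-row INSERT covering len(points) create operations,
    so the whole batch reaches PostgreSQL as a single statement.
    Table and column names are hypothetical."""
    rows = ", ".join(
        f"('pt{i}', ST_GeomFromText('POINT({x} {y})', 4326))"
        for i, (x, y) in enumerate(points)
    )
    return f"INSERT INTO {table} (name, geom) VALUES {rows};"

sql = build_batch_insert([(0.0, 0.0), (1.0, 1.0)])
```

Sending one statement per batch, rather than one per feature, keeps the RDBMS measurements comparable with the WFS-T transactions, where each request also carries the whole batch.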
The WFS-T, MapFish and direct RDBMS CRUD operations are compared in terms of performance. The same operations were carried out on the same server, submitted by a local client, without any other processing overhead or network delays. The machine was reconfigured and rebooted before each test. Whenever necessary, developers were contacted to verify that the server software had the correct dependencies and configuration.
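The measurement procedure above can be sketched as a simple wall-clock timing loop over the batch sizes. The `submit` callable is a placeholder for whichever client call (WFS-T, MapFish or SQL) sends one request carrying n operations; the dummy workload shown is only there to make the sketch runnable.

```python
import time

def time_batches(submit, batch_sizes=(1, 10, 100, 1000, 5000)):
    """Time one request per batch size. `submit(n)` stands in for a
    client call that sends n CRUD operations in a single request."""
    results = {}
    for n in batch_sizes:
        start = time.perf_counter()
        submit(n)  # the request being measured
        results[n] = time.perf_counter() - start
    return results

# Dummy stand-in for a real WFS-T/MapFish/SQL client call.
timings = time_batches(lambda n: sum(range(n)))
print(sorted(timings))  # [1, 10, 100, 1000, 5000]
```

In the actual runs each timing would be taken on a freshly rebooted machine, so a loop like this would be restarted per configuration rather than run once end to end.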
Measurements were also made for the OSM API v0.6, but only against the production OSM server, with real contributions arranged in changesets of different sizes. These performance values are not comparable with the former ones, however, since we were not able to set up an appropriate environment in time for the benchmark.
Jorge Rocha is an Assistant Professor at Universidade do Minho.