3.2.1. Core data sources
The core plugins are part of the Mapnik source code itself, and are usually available in all builds of the Mapnik library. (TODO: add link to the Mapnik GitHub repository.)
CSV
The CSV plugin reads simple column separated data from a file when specified using the file parameter, or directly from the XML style file when using the inline parameter. In the latter case all lines following the inline parameter tag are read as CSV input until the closing parameter tag is reached. If the inline data contains <, > or & characters, you should enclose it in a <![CDATA[…]]> section to prevent the content from being interpreted as XML.
When giving a file path, it is taken as relative to the directory the style file is in, unless a base parameter is given. In that case a relative file path is interpreted as relative to the directory path given in the <FileSource> of that base name.
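As a sketch of how this fits together (the FileSource name and paths here are made up for illustration), a FileSource is declared in the Map element and then referenced via the base parameter:

<!-- in the Map element -->
<FileSource name="csvdata">/srv/mapdata/csv</FileSource>

<!-- in the layer's datasource: resolves to /srv/mapdata/csv/cities.csv -->
<Datasource>
<Parameter name="type">csv</Parameter>
<Parameter name="base">csvdata</Parameter>
<Parameter name="file">cities.csv</Parameter>
</Datasource>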
Processing performance can be improved by creating an additional .index file using the [mapnik-index] tool.
<!-- read from file path/to/file.csv -->
<Datasource>
<Parameter name="type">csv</Parameter>
<Parameter name="file">path/to/file.csv</Parameter>
</Datasource>
<!-- read inline data -->
<Datasource>
<Parameter name="type">csv</Parameter>
<Parameter name="inline"><![CDATA[
lat,lon,text
52.0,8.5,"Bielefeld"
]]></Parameter>
</Datasource>
By default the CSV plugin tries to identify the field delimiter by looking at the first line of the file, checking for the , ; | and TAB characters. Whichever of these characters is seen most often is considered the separator character, unless you explicitly specify a different one with the separator parameter, e.g. <Parameter name="separator">;</Parameter>.
In cases where the data does not contain a header line, one can be given as content of the headers parameter.
The default quoting and escape characters are " and \, but they can be changed with the quote and escape parameters.
Line endings are auto detected, too, so files with DOS/Windows (\r\n), Linux/Unix (\n) or MacOS (\r) style line endings are read correctly out of the box.
The CSV plugin assumes that the data it reads is UTF-8 encoded; a different encoding can be specified using the encoding parameter.
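Putting these parsing options together, a datasource for a hypothetical semicolon separated, Latin-1 encoded file without a header line might look like this (the file name and header names are made up, and the headers value is assumed to be written just like a regular header line would be):

<Datasource>
<Parameter name="type">csv</Parameter>
<Parameter name="file">data/places.csv</Parameter>
<Parameter name="separator">;</Parameter>
<Parameter name="headers">lat,lon,name</Parameter>
<Parameter name="encoding">latin1</Parameter>
</Datasource>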
Column data can be referred to by the column's header name, using [column_name] placeholders in expressions. The following column names have a special meaning and are used to retrieve the actual geometry data for a line:
- lat or latitude: Point latitude
- lon, lng, long, or longitude: Point longitude
- wkt: Geometry data in Well Known Text format
- geojson: Geometry data in GeoJSON format
So each input file needs either a lat/lon column pair, or a wkt or geojson column, to be usable as a Mapnik data source.
When parsing the header line fails, or no geometry column(s) can be detected in it, the plugin will print a warning by default and not return any data. When the strict parameter is set to true, style processing will instead be terminated completely by throwing a Mapnik exception.
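Geometry does not have to come from lat/lon pairs; as a sketch (the data values are made up), a wkt column combined with strict mode looks like this:

<Datasource>
<Parameter name="type">csv</Parameter>
<Parameter name="strict">true</Parameter>
<Parameter name="inline"><![CDATA[
id,name,wkt
1,"Sample track","LINESTRING(8.3 52.1, 8.5 52.0, 8.9 51.9)"
]]></Parameter>
</Datasource>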
Parameter | Type | Default | Description |
---|---|---|---|
encoding | string | utf-8 | Text encoding used in the CSV data |
row_limit | int | none | Read only this many data rows, ignore the rest. |
headers | string | none | Header names if the file contains none on the first line |
strict | boolean | false | Terminate Mapnik on hitting errors? |
quote | char | " | Quote character used for string columns in the data |
escape | char | \ | TODO: does this even really exist? |
separator | char | auto detected | Field separator, typically a comma, semicolon, pipe or TAB character |
extent | 4xfloat | none | ignore data that is completely outside this extent bounding box |
inline | text | none | CSV data to be read directly from the style file |
file | file path | none | path of CSV file to read |
base | string | none | name of a <FileSource> to use as base path |
TODO:
- .index file support? See also the mapnik-index utility
- NULL handling?
GDAL
Parameter | Type | Default | Description |
---|---|---|---|
 | | | |
base | string | none | name of a <FileSource> to use as base path |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
GeoJSON
While the GeoJSON format is also supported by the OGR input plugin, a dedicated native GeoJSON plugin was added for performance reasons, as this format has become more and more common.
Processing performance can be improved by creating an additional .index file using the [mapnik-index] tool.
Parameter | Type | Default | Description |
---|---|---|---|
base | string | none | name of a <FileSource> to use as base path |
cache_features | boolean | true | |
encoding | string | utf-8 | Encoding used for textual information |
file | file path | none | Path of a GeoJSON file to read for input. |
inline | string | none | Inline GeoJSON data as part of the style file itself |
num_features_to_query | int | 5 | How many features of a feature set to read up front to determine what property names exist in the data |
<?xml version="1.0" encoding="utf-8"?>
<Map background-color='white'>
<Style name="style">
<Rule>
<PointSymbolizer file="symbols/[file]"/>
</Rule>
</Style>
<Layer name="layer">
<StyleName>style</StyleName>
<Datasource>
<Parameter name="type">geojson</Parameter>
<Parameter name="inline"><![CDATA[
{
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {
"file": "dot.svg"
},
"geometry": {
"type": "Point",
"coordinates": [1, 1]
}
},
{
"type": "Feature",
"properties": {
"file": "bug.svg"
},
"geometry": {
"type": "Point",
"coordinates": [2, 1]
}
}
]
}
]]></Parameter>
</Datasource>
</Layer>
</Map>
OGR
The OGR input plugin supports a large number of different vector formats via the GDAL/OGR library. For a complete list of supported formats see the Vector Drivers list in the GDAL documentation.
The OGR plugin is typically used for GPX (for which no special input plugin exists) and for OSM data (for which it replaced the older OSM plugin, which has been moved to the non-core-plugins repository and is usually not included in Mapnik binary builds anymore). So we only go into detail for these two data formats below.
Parameter | Type | Default | Description |
---|---|---|---|
base | string | none | name of a <FileSource> to use as base path |
driver | string | auto detect | actual vector format driver to use |
encoding | string | utf-8 | |
 | | | |
file | file path | none | path of input file to read |
 | string | none | inline vector file data to read directly from the style file |
layer | string | none | name of the input layer to actually process |
layer_by_index | int | none | number of the input layer to actually process |
 | | | |
 | string | none | alias for |
OGR GPX
The GPX backend reads GPX XML files and provides the contained data via the following five layers:
- routes: Returns routes from the GPX file's <rte> tags as lines. Each route is given an extra route_id attribute.
- tracks: Returns tracks from the GPX file's <trk>/<trkseg> tags as multilines. Each track is given an extra track_id attribute.
- route_points: Returns <rtept> route points from all routes, with an extra route_fid field referring to the route_id of the route that a point belongs to.
- track_points: Returns <trkpt> track points from all tracks, with extra track_fid and track_seg_id attributes added.
- waypoints: Returns the waypoints from the GPX file's <wpt> tags.
Any extra tags that a route, track or point may have, like <name> or <ele> (for elevation), can be accessed in filter expressions and symbolizers by name, e.g. as [name] or [ele].
The following example shows a marker for all GPX points with a non-empty <name> tag:
<Style name="named_point">
<Rule>
<Filter>not ([name] = null or [name] = '')</Filter>
<PointSymbolizer file="marker.svg"/>
<TextSymbolizer face-name="DejaVu Sans Book" size="10" placement="point">[name]</TextSymbolizer>
</Rule>
</Style>
<Layer>
<StyleName>named_point</StyleName>
<Datasource>
<Parameter name="type">ogr</Parameter>
<Parameter name="driver">gpx</Parameter>
<Parameter name="file">file.gpx</Parameter>
<Parameter name="layer">waypoints</Parameter>
</Datasource>
</Layer>
For more details see the original GDAL documentation for the GPX backend.
OGR OSM
The OGR plugin can read uncompressed OSM XML data andt the more compact, but not human readable, PBF format. File formats are auto detected when using the .osm
or .pbf
file extensions. When using files with other extensions, like e.g. .xml
for OSM XML, the driver
parameter needs to be set to osm
explicitly.
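A minimal sketch of such an explicit driver selection (the file name here is made up) could look like this:

<Datasource>
<Parameter name="type">ogr</Parameter>
<Parameter name="driver">osm</Parameter>
<Parameter name="file">extract.xml</Parameter>
<Parameter name="layer">points</Parameter>
</Datasource>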
The OSM backend provides data in the following five layers:
- points: Nodes that have significant tags attached.
- lines: Ways that are recognized as non-areas.
- multilinestrings: Relations that define a multilinestring (type=multilinestring or type=route).
- multipolygons: Ways that are recognized as areas, and relations that form a polygon or multipolygon (e.g. type=multipolygon or type=boundary).
- other_relations: Relations that are not in multilinestrings or multipolygons.
<Datasource>
<Parameter name="type">ogr</Parameter>
<Parameter name="driver">osm</Parameter>
<Parameter name="file">ways.osm</Parameter>
<Parameter name="layer">lines</Parameter>
</Datasource>
While rendering OSM data directly can work out OK for small amounts of data, the usually preferred way to present OSM data is to first import it into PostGIS using either the osm2pgsql or imposm import tool, and then use the PostGIS Datasource. This requires some extra effort up front, but performs better on larger data sets, and allows for more sophisticated preprocessing of the OSM input data than the few fixed rules statically built into the OGR OSM backend.
For more details see the original GDAL documentation for the OSM backend.
PgRaster
Parameter | Type | Default | Description |
---|---|---|---|
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
PostGIS
Parameter | Type | Default | Description |
---|---|---|---|
 | | | |
host | string | none | PostgreSQL server host name or address |
port | string | none | PostgreSQL server port |
user | string | none | Database user |
password | string | none | Database user password |
dbname | string | none | Name of database to use |
connect_timeout | int | 4 | Connect timeout in seconds |
persist_connection | boolean | true | Reuse connection for subsequent queries |
 | | | |
autodetect_key_field | boolean | false | Whether to auto detect the primary key if none is given in key_field |
cursor_size | int | 0 | Fetch this many features at a time, or all when zero. |
estimate_extent | boolean | false | Try to estimate the extent from the data retrieved |
extent | floatx4 | none | Extent bounding box |
extent_from_subquery | boolean | false | |
geometry_field | string | none | The result field that the geometry to process is in. Auto detected when not given. |
geometry_table | string | none | Name of the table the geometry is retrieved from. Auto detected when not given, but this may fail for complex queries. |
initial_size | int | 1 | Initial connection pool size |
intersect_max_scale | int | 0 | |
intersect_min_scale | int | 0 | |
key_field | string | none | Primary key field of the table the geometry is retrieved from. Auto detected when not given and autodetect_key_field is true. |
key_field_as_attribute | boolean | true | |
max_size | int | 10 | Max. connection pool size |
max_async_connection | int | 1 | Run that many queries in parallel, must be <= max_size |
row_limit | int | 0 | Only return this many features if > 0 |
simplify_geometries | boolean | false | |
simplify_dp_ratio | float | 1/20 | |
simplify_dp_preserve | boolean | false | |
 | float | 0.0 | |
simplify_snap_ratio | float | 1/40 | |
srid | int | 0 | SRID of returned features, auto detected when zero |
table | string | none | Name of a table, or SQL query text |
twkb_rounding_adjustment | float | 0.0 | |
twkb_encoding | boolean | false | |
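A minimal PostGIS datasource then only needs the connection parameters and a table; the connection values and table name below are placeholders, not a recommendation:

<Datasource>
<Parameter name="type">postgis</Parameter>
<Parameter name="host">localhost</Parameter>
<Parameter name="dbname">gis</Parameter>
<Parameter name="user">mapnik</Parameter>
<Parameter name="password">secret</Parameter>
<Parameter name="table">roads</Parameter>
</Datasource>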
Aside from the basic PostgreSQL connection parameters (host, port, user, password, dbname), you can also add additional connection parameter keywords in the host or dbname parameter (probably the others, too, but I have not tested this yet) for more fine grained connection control.
You can e.g. set a datasource specific application name with this:
<Parameter name='host'>localhost application_name=my_style</Parameter>
Or set a specific schema search path:
<Parameter name='host'>localhost options='-c search_path=foo,public'</Parameter>
Probably most important though, this allows for using SSL/TLS. In its most basic form you'd just enforce SSL/TLS encryption being used:
<Parameter name='host'>localhost sslmode=require</Parameter>
The PostGIS datasource supports two different methods to return data to Mapnik: in regular well known binary (WKB) or, with PostGIS v2.2 or later, tiny well known binary (TWKB) format. This is controlled by the twkb_encoding option.
When using TWKB, the twkb_rounding_adjustment parameter controls the resolution the TWKB aims for. A value of 0.5 would lead to a coarseness of about one pixel; the default of 0.0 usually ends up in the range of 0.05 to 0.2 pixels. This is done by using the twkb_rounding_adjustment parameter to calculate the tolerance parameter for ST_Simplify() and ST_RemoveRepeatedPoints(), and the decimaldigits_xy parameter for ST_AsTWKB().
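As an illustrative sketch (database and table names made up), switching a datasource to TWKB with a coarser rounding could look like this:

<Datasource>
<Parameter name="type">postgis</Parameter>
<Parameter name="dbname">gis</Parameter>
<Parameter name="table">roads</Parameter>
<Parameter name="twkb_encoding">true</Parameter>
<Parameter name="twkb_rounding_adjustment">0.5</Parameter>
</Datasource>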
When using WKB (the default), simplification can be controlled via the simplify_geometries, simplify_snap_ratio, simplify_dp_preserve, simplify_dp_ratio, simplify_prefilter and simplify_clip_resolution parameters. (TODO: describe in more detail)
simplify_clip_resolution is used for both formats, and controls at what map scale geometries start getting clipped to the rendering window when non-zero.
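As a sketch, again with made-up connection values, server-side simplification for WKB output could be switched on like this:

<Datasource>
<Parameter name="type">postgis</Parameter>
<Parameter name="dbname">gis</Parameter>
<Parameter name="table">boundaries</Parameter>
<Parameter name="simplify_geometries">true</Parameter>
<Parameter name="simplify_dp_preserve">true</Parameter>
</Datasource>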
The following special tokens can be used in SQL queries, and will be replaced by the actual Mapnik values for the current render request:
- !bbox!: the map bounding box
- !scale_denominator!: the current scale denominator
- !pixel_width!, !pixel_height!: width and height of a pixel (TODO: depends on the SRS, is this ° with latlon and meters with google mercator?)
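For illustration, a subquery can use these tokens to pre-filter on the database side; the table and column names below are hypothetical, in the style of an osm2pgsql import:

<Datasource>
<Parameter name="type">postgis</Parameter>
<Parameter name="dbname">gis</Parameter>
<Parameter name="table"><![CDATA[
(SELECT way, name FROM planet_osm_line WHERE way && !bbox!) AS lines_in_view
]]></Parameter>
</Datasource>

Wrapping the query in a CDATA section avoids having to escape the & characters, just as with inline CSV data above.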
Raster
Parameter | Type | Default | Description |
---|---|---|---|
base | string | none | name of a <FileSource> to use as base path |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
Shape
The shape input plugin can read the ESRI shapefile format. The OGR plugin also supports shapefiles, but the shape plugin has more direct support for this. It is also better maintained and tested.
Shapefiles are often used instead of databases for data that doesn't change that often, or where data available in a database requires some preprocessing. Common examples are boundaries, coastlines, and elevation contour lines.
OpenStreetMap, for example, provides land polygons, water polygons, coastlines, and antarctic ice sheet polygons and outlines as regularly updated shapefiles on the OsmData Download Server. Due to the way large bodies of land and water are constructed by grouping individual coastline segments into polygon relations in OSM, there is always a risk of such lines not really forming closed polygons. The OSM shapefiles are generated by extracting and aggregating the line segment data, and are only published when they contain no unclosed polygons.
Another often used source of shapefiles is Natural Earth, which provides public domain geo data for lots of physical and cultural features.
Shapefile processing performance can be increased by creating an index file using the [shapeindex] tool that is included in the Mapnik source code, and usually also in binary distribution packages.
Parameter | Type | Default | Description |
---|---|---|---|
file | file path | none | shapefile path |
base | string | none | name of a <FileSource> to use as base path |
encoding | string | utf-8 | encoding used for text fields in the shapefile |
row_limit | int | none | maximum number of rows to process |
<?xml version="1.0" encoding="utf-8"?>
<Map background-color='blue'>
<Style name="countries">
<Rule>
<PolygonSymbolizer fill="green"/>
</Rule>
</Style>
<Layer name="countries">
<StyleName>countries</StyleName>
<Datasource>
<Parameter name="type">shape</Parameter>
<Parameter name="file">data/world-countries.shp</Parameter>
</Datasource>
</Layer>
</Map>
SQLite
Parameter | Type | Default | Description |
---|---|---|---|
 | | | |
 | | | |
base | string | none | name of a <FileSource> to use as base path |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
 | | | |
TopoJSON
Parameter | Type | Default | Description |
---|---|---|---|
base | string | none | name of a <FileSource> to use as base path |
 | | | |
 | | | |