Hi all,
Out of frustration with getting a Tile Server running, I’ve been making a NodeJS OSM query engine, with the aim of later creating a pure NodeJS Tile Server on top of it.
It requires no DB support, so it should work on basically any OS/device that NodeJS runs on; in effect I’ve created a custom database engine…
The process first requires converting the binary OSM (PBF) data into a NodeJS-friendly file format…
I’ve currently got indexes for node/way/relation IDs and for tags… I’ve yet to do a spatial index, but that shouldn’t be too bad…
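For anyone curious about the spatial index side: it doesn’t exist yet, so the sketch below is just one way it *could* work, a simple fixed-grid index where nodes are bucketed by grid cell. All the names and the cell size here are made up for illustration, none of it is in my current code:

```javascript
// Sketch of a fixed-grid spatial index (illustrative only, not the real engine).
// Nodes are dropped into buckets keyed by their grid cell; a bbox query then
// only scans the cells the box overlaps, and returns *candidate* ids (the
// caller would still filter on exact coordinates).
function GridIndex(cellDeg) {
    this.cellDeg = cellDeg; // cell size in degrees, e.g. 0.1
    this.cells = {};        // "x,y" -> array of node ids
}
GridIndex.prototype.key = function (lat, lon) {
    return Math.floor(lon / this.cellDeg) + ',' + Math.floor(lat / this.cellDeg);
};
GridIndex.prototype.add = function (id, lat, lon) {
    var k = this.key(lat, lon);
    (this.cells[k] = this.cells[k] || []).push(id);
};
GridIndex.prototype.queryBBox = function (minLat, minLon, maxLat, maxLon) {
    var out = [], x, y, k;
    for (x = Math.floor(minLon / this.cellDeg); x <= Math.floor(maxLon / this.cellDeg); x += 1) {
        for (y = Math.floor(minLat / this.cellDeg); y <= Math.floor(maxLat / this.cellDeg); y += 1) {
            k = x + ',' + y;
            if (this.cells[k]) { out = out.concat(this.cells[k]); }
        }
    }
    return out;
};
```

The real thing would obviously want to live on disk alongside the other indexes, but the lookup idea is the same.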
The size of the converted data is only about twice that of the original PBF file, e.g. for the whole world it’s about 50 GB. This is with uid, user, version, changeset, timestamp and created_by removed, but I don’t think it’s much bigger with them kept in.
As an example of what can currently be done, this just averages the lat/lons of all the nodes in ways tagged natural=wood… a strange query, I know!!
function testWithCache() {
    var j = new JDataRead({filePath: path.join(fn + '.build', 'jdata')});
    return j.init().then(function () {
        //simple test: query the db for all ways with tag natural=wood,
        //then loop and read all the nodes for each way...
        function testJDataRead6() {
            var totNodes = 0, avlat = 0, avlon = 0;
            console.time('t1');
            return j.findTagAndValue('way', 'natural', 'wood').then(function (ways) {
                return common.promise_loop(0, ways.length, function (i) {
                    var way = ways[i];
                    return j.getByIds('node', way.nodeRefs).then(function (nodes) {
                        var l, node;
                        for (l = 0; l < nodes.length; l += 1) {
                            node = nodes[l];
                            //sum the lats & lons up to get an average at the end
                            avlat += node.lat;
                            avlon += node.lon;
                            totNodes += 1;
                        }
                        return Promise.resolve();
                    });
                });
            }).then(function () {
                console.log(totNodes, avlat / totNodes, avlon / totNodes);
                console.timeEnd('t1');
            });
        }
        //run multiple times to see what the cache does
        testJDataRead6().then(testJDataRead6).then(testJDataRead6).then(testJDataRead6);
    });
}
Using a converted east-yorkshire-with-hull-latest.osm.pbf, the above results in →
[D:\webdev2\tests\OSM\index.js:581] 29194 53.85765228973043 -0.43414646313283384
[console.js:84] t1: 2848ms
[D:\webdev2\tests\OSM\index.js:581] 29194 53.85765228973043 -0.43414646313283384
[console.js:84] t1: 1062ms
[D:\webdev2\tests\OSM\index.js:581] 29194 53.85765228973043 -0.43414646313283384
[console.js:84] t1: 1035ms
[D:\webdev2\tests\OSM\index.js:581] 29194 53.85765228973043 -0.43414646313283384
[console.js:84] t1: 1030ms
IOW: on the first pass, where the cache is empty, it’s reading all the nodes for these ways at about 10,000 nodes per second; with the cache warm it’s nearly 30,000 nodes per second. And this is only using a single core of a Q9400, reading from a hard drive.
I’ve no idea what speed PostgreSQL reads at, but for pure JavaScript (NodeJS) I think that’s pretty good, especially when you consider it’s reading the ways and then doing a kind of inner join on their nodeRefs.
Once a query has been done, working with the data couldn’t be easier as it’s just a standard JavaScript object, e.g. for way 288294538 you would get something like →
{
    id: 288294538,
    tags: {
        wood: 'deciduous',
        natural: 'wood'
    },
    nodeRefs: [ 2918568430, 2918568371, 2918568374, 2918568431, 2918568430 ]
}
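And because it’s a plain object, any post-processing is just ordinary JS. As a tiny example (isClosedWay is a hypothetical helper I’ve just made up here, not part of the engine), you could check whether a way is a closed ring, which that natural=wood way happens to be:

```javascript
// Hypothetical helper: a way is a closed ring when it has more than two
// node refs and the first and last refs are the same node.
function isClosedWay(way) {
    var refs = way.nodeRefs;
    return refs.length > 2 && refs[0] === refs[refs.length - 1];
}

// The way object from the example above:
var way = {
    id: 288294538,
    tags: { wood: 'deciduous', natural: 'wood' },
    nodeRefs: [2918568430, 2918568371, 2918568374, 2918568431, 2918568430]
};
console.log(isClosedWay(way)); // true
```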
Now the reason for my post: I’m happy to make my work open source, but I doubt I’d have much time to maintain it etc., so I’m wondering if there are any NodeJS users out there who would be happy to take this on as a project? I’m hoping to have a basic Tile Server running in NodeJS first though, and to tidy up what I’ve already done!!
Regards
Keith…