Assisted Mapping Demo in Bangkok (looking for feedback)

Hi,

I have been trying to figure out where machine learning techniques have the most potential in the mapping process.

Currently, it doesn’t seem like these techniques will be robust enough in a fully automated setting (e.g. Facebook’s “AI-assisted road import”). Thus, I’m imagining a system that suggests tiles that have relatively “major” missing roads to a human editor, who then traces the roads and tags them appropriately.

The idea is that the system would make it easier to search for areas where the map needs improvement. Rather than scrolling through the city to look for missing roads, users are able to jump to these suggested tiles. Tiles that appear to contain roads that are more “important” to the road network would be prioritized. Hopefully this would improve the overall efficiency of mapping.
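To make the prioritization idea concrete, here is a minimal sketch of how tiles might be ranked (the scoring function and all names are hypothetical, not what the demo actually does):

```python
# Hypothetical sketch of tile prioritization: rank tiles by the total length
# of detected-but-unmapped road, weighted by an "importance" score.
# The scoring scheme and data layout are illustrative assumptions.

def tile_priority(detections):
    """detections: list of (length_m, importance) for missing roads in a tile."""
    return sum(length * importance for length, importance in detections)

tiles = {
    "tile_a": [(120.0, 1.0), (300.0, 2.5)],  # one minor, one major missing road
    "tile_b": [(80.0, 1.0)],
}

# Suggest tiles with the highest priority first.
ranked = sorted(tiles, key=lambda t: tile_priority(tiles[t]), reverse=True)
print(ranked)  # → ['tile_a', 'tile_b']
```

The "Jump" button would then simply step through this ranked list.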

I have a demo (modified iD editor) running in a small region of Bangkok. In addition to suggesting tiles, the demo allows you to add an overlay on top of the imagery that highlights the roads found by a machine learning approach in yellow. The process is relatively straightforward:

  • Switch the background imagery to “Imagery + RoadTracer” to get the highlighted roads (yellow).

  • Press the checkmark “Jump” button on the right panel (or use the “J” hotkey) to jump to the next suggested tile.

  • Trace out roads as you like, save.

  • Jump to the next tile, etc.

I am very interested in feedback on where to go from here. I would appreciate thoughts on both the fundamental semi-automated architecture (e.g., if most mappers use GPS traces then this wouldn’t be useful, or maybe imagery is used a lot but the time spent searching for areas that need improvement is actually negligible) and on this specific implementation of the idea.

The demo is running here (machine is in AWS Singapore, hopefully the connection speed is reasonable):

http://osmdemo2.lndyn.com/

BTW, regarding the source tag: the machine learning model is trained on “DigitalGlobe Premium Imagery”, but currently the demo does not automatically set this tag. Another thing that I want to explore, and that I think would be especially useful for new contributors, is an assisted tagging system that suggests to the user what tags to use for a road; I need to think more about how this could be integrated nicely into the user interface, though. Maybe it should just be a warning when the system is confident that the wrong tag has been selected (I saw some issues with new contributors mis-tagging roads, which this could mitigate).

(This is for a research project. We have been working on automatic map inference for a couple of years and now want to see if it can actually be deployed to improve mapping. The code for both the machine learning model and the modified iD will eventually be made open source; I’ll try to make that happen sooner rather than later. The machine learning component is an extension of our most recent paper, https://roadmaps.csail.mit.edu/roadtracer/, for which we have already released code.)

Thanks!
Favyen

I’m not from Thailand, but this seems interesting!
Could you point to an area which is reported by your tool so people could check how it works? Otherwise it’s like finding a needle in a haystack (at least for me) :wink:

Hey, thanks! You can press the checkmark Jump button added to the right panel to jump to a suggested location. The bounding box is:

(13.78745672547208, 100.35479077148437) to (13.915458799179646, 100.48662670898437)

Hello Favyen,

Interesting idea. Your server was a bit slow to come back with results, but with enough patience it is possible to get the idea.

The jump function always brought me to an area where roads were missing. However, I would not agree that these missing roads are any more major than the surrounding roads.

At the following location the highlighted road is quite far from the actual road. But it still succeeded in highlighting missing roads.
Have you considered creating maproulette challenges out of them?

Here the yellow marker of the northern road does not follow the road at all:

http://osmdemo2.lndyn.com/#background=RoadTracer&disable_features=boundaries&map=18.29/13.79044/100.44163

Stephan

Thanks for the feedback Stephan!

I agree about the missing roads not being major; that part still needs a bit of thought and work. Maybe I should look at a region farther away from a major city, since there might not be major missing roads here. The geometry is not perfect, which I was hoping wouldn’t be a problem in this context (since a human would trace the road anyway), but I do have some methods in mind to improve the geometry; I’ll try them out.

Hm, adding these as tasks on MapRoulette makes a lot of sense! I will get in touch with them to see what they think about these tasks.

How difficult would it be to train your system to recognize paved roads, not just roads in general?

In Thailand many roads are in the process of being improved and getting a proper surface.

Could your system be extended to find roads tagged as unpaved which have a high probability of being paved?
The other way round is unfortunately not possible, as we don’t have accurate capture dates for the satellite images.

In case you can’t do something like this: can you give a rough ballpark estimate of the minimum amount of training data that would be needed? I am tempted to have a look myself but have very limited experience with machine learning.

This particular system is designed for extracting lines, but more standard classification CNNs could certainly be applied.

I tried training a CNN to classify roads as paved/unpaved.

Here are the results on unpaved roads where the CNN reports high confidence that the road is paved.

(The visualization system is the same as before, so change imagery to “Imagery + RoadTracer” and use the jump button (checkmark) in the right panel. Note that now the yellow overlay is just some arbitrarily drawn lines around the point on the road that the classifier thinks is paved.)

http://osmdemopaved.lndyn.com/#background=RoadTracer

  • I think about 1/3 of the time it is incorrect, often because the road looks like it might be paved in the imagery the model is run on (DigitalGlobe Premium Imagery, zoom 19), but in other imagery you can tell it is actually unpaved.
  • About 1/3 of the time it is hard to tell if it is correct or incorrect.
  • And about 1/3 of the time I think it is correct. But sometimes it is just detecting errors, like a small segment of road is labeled “ground”.

I think inputting multiple imagery sources could improve the accuracy. If we can get the accuracy high enough I think this would be perfect for MapRoulette tasks.

Also this is still around Bangkok, perhaps I should be looking at some other regions in Thailand?

Let me know what you think! And also if you have other problems that you think could be solved from imagery.

Edit: actually some of the cases where it looks paved in one imagery but unpaved in another might be because of what you said, it used to be unpaved but then later was paved, e.g.:

Looks quite promising. I clicked Jump ten times, and each time I would have rated the road as paved. The worst example was a bridge on a major highway.
Sometimes I had to consult the other imagery layers. Probably I should have saved the links for you to review. From DG Premium it was not clear, but I believe in the Clarity layer it was obviously paved.

I agree that Bangkok might be the wrong area to detect unpaved roads. Even your “maybe unpaved” example might actually be paved. The road surface has the same tone as the surrounding roads, and it covers quite a large area, so it is either concrete (paved) or maybe gravel.

The resolution of these imagery layers is not good enough to tell. Check out Isaan for more examples of real unpaved roads.

Here is a sample of where I extended an unfinished road today:
http://osmdemopaved.lndyn.com/#background=EsriWorldImageryClarity&disable_features=boundaries&id=n3508000907&map=17.15/14.97271/103.52764

The road facing north turns from paved into unpaved.

How many samples do you need to train the network? Manual feedback could improve it.
Does it just learn from color?

This is a good example of how I wish it to recognize:
http://osmdemopaved.lndyn.com/#background=DigitalGlobe-Premium&disable_features=boundaries&id=w541962324&map=18.15/15.01317/103.55526

Road markings on asphalt. Very clear indication of a paved road.

You did not mention it, but I assume you do pre-processing to mask the bitmap image based on a buffered geometry of the road, right? You might also want to split roads into smaller segments at junctions, or check per imagery date, as a road can change from paved to unpaved like in my first example.

Cases like this should probably be specially reported, as it could either be that the road is actually only partially paved, or that the imagery is not recent enough and meanwhile the full road has been paved.

Stephan

Looks quite promising. I clicked Jump ten times, and each time I would have rated the road as paved. The worst example was a bridge on a major highway.

Hm, just to be clear, do you think that the imagery in these examples is clear enough to update the road metadata from unpaved to paved? Or would you prefer it only flag cases where it is much more obvious, e.g. with road markings on asphalt?

How many samples do you need to train the network? Manual feedback could improve it.

I think if you focus on very obvious examples of paved roads then 100 examples might be enough. This one was trained with about 3000 examples though.

Does it just learn from color?

It is a bit tricky to tell. I’ll try to do some experiments to see if it is paying attention to other features like the road markings.
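One simple experiment here would be an occlusion test: grey out patches of the input image and see where the paved-confidence drops the most. A toy numpy sketch (the `predict_paved` function is a dummy stand-in for the trained CNN, and the patch size is an arbitrary assumption):

```python
import numpy as np

# Occlusion-sensitivity sketch: occlude patches of the input and record how
# much the "paved" confidence drops. Regions with large drops are the ones
# the model is actually paying attention to (e.g. road markings).

def predict_paved(image):
    # Dummy stand-in for the CNN: "confidence" is just the mean brightness
    # of the central rows, so occluding the centre lowers it.
    h = image.shape[0]
    return float(image[h // 3 : 2 * h // 3].mean())

def occlusion_map(image, patch=16):
    baseline = predict_paved(image)
    h, w = image.shape[:2]
    drops = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i : i + patch, j : j + patch] = 0.5  # grey patch
            drops[i // patch, j // patch] = baseline - predict_paved(occluded)
    return drops  # large values = regions the model relies on

img = np.ones((64, 64)) * 0.9  # toy bright "road surface"
drops = occlusion_map(img)
# With this dummy model, only patches overlapping the central rows
# cause a confidence drop.
```

Running the same loop with the real model would show whether the markings, the road surface, or the surroundings drive the prediction.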

You did not mention it, but I assume you do pre-processing to mask the bitmap image based on a buffered geometry of the road, right?

This model is actually very simple (much simpler than the one I was using to identify roads). A point on the OSM edge corresponding to a paved/unpaved road is selected (randomly along the edge during training, but the midpoint during testing). The input to the CNN is the imagery centered at that point. It is trained with cross-entropy loss to output [1, 0] for unpaved and [0, 1] for paved.

I didn’t use any masking; instead, the point on the road is always in the center of the image, so the CNN can learn from that. There are some errors that arise due to this, e.g. if an unpaved road goes under a highway and we pick a point in the underpass section, then the CNN will report that it is paved. The simplest modification would be to include the line as an additional input to the CNN, so that the input would consist of the three RGB channels from the imagery plus one channel showing the OSM edge position.
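For concreteness, here is a rough numpy sketch of the input construction described above (the crop size, the `make_input` helper, and the array layout are illustrative assumptions, not the actual code):

```python
import numpy as np

# Sketch of the training input: an imagery crop centred on a sampled point
# of the way, plus an optional fourth channel rasterising the OSM edge.
# CROP, make_input, and LABELS are hypothetical names for illustration.

CROP = 64  # crop size in pixels (assumed)

def make_input(imagery, cx, cy, edge_mask=None):
    """imagery: HxWx3 array; (cx, cy): pixel of the sampled point on the way."""
    half = CROP // 2
    crop = imagery[cy - half : cy + half, cx - half : cx + half, :]
    if edge_mask is None:
        return crop  # 3 channels: the point is implicitly at the centre
    mask = edge_mask[cy - half : cy + half, cx - half : cx + half, None]
    return np.concatenate([crop, mask], axis=2)  # 4 channels: RGB + edge

# One-hot targets for the cross-entropy loss: [1, 0] unpaved, [0, 1] paved.
LABELS = {"unpaved": np.array([1.0, 0.0]), "paved": np.array([0.0, 1.0])}

imagery = np.random.rand(256, 256, 3)
edge = np.zeros((256, 256))
edge[128, 100:156] = 1.0  # rasterised OSM edge passing through the centre

x = make_input(imagery, 128, 128, edge_mask=edge)
print(x.shape)  # (64, 64, 4)
```

The extra edge channel would let the CNN distinguish the tagged road from, say, a highway it passes under.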

You might also want to split roads into smaller segments at junctions, or check per imagery date, as a road can change from paved to unpaved like in my first example.

Yeah, that makes sense. Actually, right now it is evaluating segments between vertices on the way, not the entire way, so portions between junctions would be split; but if there is a very long edge, it wouldn’t be split. For now it might make sense to split a long edge into multiple segments and only report edges where the CNN has high confidence in all of the segments; then, if that works well, we could see about handling roads that are half paved, half unpaved.
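The splitting idea might look something like this (the segment length, confidence threshold, and `classify_paved` stand-in are all assumptions for illustration):

```python
import math

# Sketch of the segment-splitting idea: cut a long edge into fixed-length
# pieces and only report the edge as paved if the classifier is confident
# on every piece. All constants here are assumed, not tuned values.

MAX_SEG_M = 100.0   # assumed maximum segment length in meters
THRESHOLD = 0.9     # assumed confidence threshold

def split_edge(length_m, max_seg=MAX_SEG_M):
    """Return the midpoints (distance along the edge) of the segments."""
    n = max(1, math.ceil(length_m / max_seg))
    seg = length_m / n
    return [seg * (i + 0.5) for i in range(n)]

def edge_is_paved(length_m, classify_paved):
    # Report the edge only if ALL segment midpoints look paved.
    return all(classify_paved(m) >= THRESHOLD for m in split_edge(length_m))

# Toy classifier: confident everywhere except near the far end of the edge.
clf = lambda midpoint_m: 0.95 if midpoint_m < 250 else 0.5

print(edge_is_paved(90, clf))    # True: one segment, midpoint at 45 m
print(edge_is_paved(300, clf))   # False: the last segment's midpoint is 250 m
```

Requiring all segments to agree keeps half-paved roads out of the report rather than mislabeling them.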

The bridge was this one:
https://www.openstreetmap.org/edit?editor=id&way=96065624#map=19/13.70975/100.45347

surface=ground was so clearly wrong that I could fix it.

While writing this I opened it again to get more samples.
First hit:
http://osmdemopaved.lndyn.com/#background=RoadTracer&disable_features=boundaries&id=w583106592&map=19.36/14.01344/100.72841

Yes, based on the imagery this might be changed to paved. Russ added it a month ago. I pinged him to ask for the source.

next one:
http://osmdemopaved.lndyn.com/#background=DigitalGlobe-Premium&disable_features=boundaries&id=w533509802&map=19.36/13.96067/100.39030

A bit more tricky. Cycling through the imagery, I see no reason why I would have added the “unpaved” flag. The road around the bridge to the north looks a bit more “used”, probably tire marks.

So let’s cross-check with sources we unfortunately can’t use:
April 2012 imagery from Google maps
https://www.google.com/maps/@13.9613451,100.3896184,3a,75y,190.1h,80.8t/data=!3m9!1e1!3m7!1stBxBW8Pqzj5JsQkFOi0YMg!2e0!7i13312!8i6656!9m2!1b1!2i26

https://www.google.com/maps/@13.9597329,100.3897324,3a,75y,265.52h,71.11t/data=!3m6!1e1!3m4!1sMibqdYnLiQCrN7qWGPbVcQ!2e0!7i13312!8i6656

So six years ago at least sections of that way were unpaved. Even in the street-level photos, the color of the gravel nearly matches the color of the concrete.

This certainly needs a survey on the ground to be certain. The tagging was added last October by Russ; I just pinged him. Maybe we can get some insight into whether it was based on an on-the-ground survey or aerial tracing. The changeset comment does not help here.

So to conclude:
Gravel vs. concrete is really tricky. The error rate might be too high to change things based on it alone. Obvious mistakes like the bridge are fixable.

By training a network on paved roads with markings, I hope to reduce the error rate to a much lower level. So I would see this as a first step; trying to get too much done in the first step might lead to problems. Small evolutionary steps instead of a revolution.

Also, the road category matters. Unpaved residential roads, probably with compacted gravel like the one above, are not that uncommon, while unpaved tertiary roads tend to get paved earlier. Those could be checked first, also based on their general importance to the road network.

With a relatively small sample size of training images: would masking help the network? There is a lot of noise in the pictures, so the network might learn better if you force it to only learn from “road” areas.

You could cross-check by looking at the other end of the scale: are roads classified as unpaved really unpaved? If the results look similar to other roads, the network might have learned something else, like detecting whether a road is in an urban area or runs through fields.

OK, thanks for the clarification, it is very helpful!

I agree that focusing on major roads makes sense for now, and that requiring obvious indications from the imagery that the road is paved is necessary.

  • I manually selected a bit over 100 examples of roads tagged asphalt that have clear markings in the imagery, and retrained the model.
  • I then tested this model on the midpoint of ways with highway=tertiary, surface=unpaved, and length between a certain range.
  • I only found ~300 such ways across all of Thailand; I haven’t had a chance to check whether there is an error in the script yet.
  • But from those 300 ways, the model reported 4 that it thinks are paved.
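For reference, the way-selection step might look roughly like this (the OSM tag values are the real ones mentioned above, but the length range and data layout are assumptions):

```python
# Sketch of the test-set selection: keep ways tagged highway=tertiary and
# surface=unpaved whose length falls in an assumed range, then the model
# classifies each way's midpoint. The length bounds are illustrative only.

MIN_LEN_M, MAX_LEN_M = 200.0, 2000.0  # assumed length range

def candidate_ways(ways):
    return [
        w for w in ways
        if w["tags"].get("highway") == "tertiary"
        and w["tags"].get("surface") == "unpaved"
        and MIN_LEN_M <= w["length_m"] <= MAX_LEN_M
    ]

ways = [
    {"id": 1, "tags": {"highway": "tertiary", "surface": "unpaved"}, "length_m": 500.0},
    {"id": 2, "tags": {"highway": "residential", "surface": "unpaved"}, "length_m": 500.0},
    {"id": 3, "tags": {"highway": "tertiary", "surface": "unpaved"}, "length_m": 50.0},
]

print([w["id"] for w in candidate_ways(ways)])  # → [1]
```

A surprisingly low count from such a filter (like the ~300 ways above) could come from the length bounds or from surface values other than the literal string "unpaved".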

I put the detections on the same system:

http://osmdemopaved.lndyn.com/#background=RoadTracer&disable_features=boundaries&map=18.77/15.89871/104.20480

Currently the system is only looking at the midpoint of the way, but I manually checked these examples and I do see markings and similar road style along the entire way.

I will see about getting more test examples; the 300 number seems kind of weird, and I haven’t had time to investigate it yet.