3D printing tactile maps

I wish to enable printing tactile maps for blind people, most likely using a 3D printer.

The initial goal is to produce maps that are suitable for pedestrian navigation, and so cover an area from a few hundred meters to some two kilometers across. The solution should be easily usable by blind people themselves, so it will eventually include an accessible website that produces files ready for 3D printing; no manual customization of the map contents should be needed.

The easiest implementation appears to be to extend OSM2World. The required changes are at least:

  • Raise roads – this alone goes a long way to enabling pedestrian navigation
  • Add a cube underneath the map for physical support
  • Clip objects that extend outside the selected map area

My question is: does it make sense to extend OSM2World, or should I look into some other pieces of the OpenStreetMap ecosystem?

There is OSM-Importer for Blender: https://github.com/vvoovv/blender-geo

Some 3D features for OSM aren’t finished yet, but Blender is for sure a more flexible solution for your project than OSM2World.

While Blender is a great tool for designing customized 3D prints, my central goal is to make this usable by the blind themselves. That means any workflow that includes using a visual tool, such as any sort of 3D editor, is out.

Would it perhaps make sense to invoke Blender in background mode, so that only an .osm import (using your add-on) followed by an export to .stl is executed? If that’s a viable path, it could work well, as all the powerful 3D editing features of Blender would still be available.

Nearly all of Blender’s features are available via the Python API.

A Blender Python script can be run from the command line, without opening the Blender GUI:

blender -b -P /path/to/python/script.py
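A minimal sketch of the kind of script that -P could run might look like the following. The .osm import operator id shown here is only a placeholder assumption about what the add-on registers (check its actual operator name); the STL export uses Blender’s bundled exporter.

    import bpy

    # Import the .osm file via the add-on. The operator id below is an assumption,
    # not a verified name; look up the add-on's registered operator before using it.
    bpy.ops.import_scene.osm(filepath="/path/to/area.osm")

    # Export the whole scene as STL for 3D printing (bundled io_mesh_stl exporter).
    bpy.ops.export_mesh.stl(filepath="/path/to/area.stl")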

Hi, maintainer of OSM2World here. I’ve been keeping an eye on 3D printing for a while and there is an existing unpublished extension that allows OSM2World to directly output STL for 3D printing, which I could put online if you are interested. (It’s still really basic, though.) OSM2World can also be used to export files that can be further edited in Blender should you prefer that.

If you want to use OSM2World, I’d be glad to help you with getting started and any problems that might occur – it’s a worthy cause after all.

I also have a question for you, though: Do you intend to go for actual models, including e.g. 3D buildings (such as this one), or maps that only show labelled roads and such?

Hello, Tordanik. Glad to hear you’re interested in this!

STL output certainly seems relevant, although at least Shapeways also accepts OBJ files.

Changes like adding a support cube underneath the map seem to fit OSM2World’s architecture awkwardly, because the cube isn’t logically related to any OSM element. Further, to my understanding OSM2World’s goal is to create a visually accurate representation of the map data, whereas in a tactile navigation map realism is far less important than the clarity of key components like roads and possibly some POIs, such as acoustic street crossing signals. Are OSM2World’s goals really in line with the needs of a tactile navigation map?

I have prototyped elevating roads in my fork of OSM2World: https://github.com/skarkkai/OSM2World

Post-processing OSM2World’s OBJ files in Blender is a very reasonable possibility. All of the following modifications would seem to be easy in Blender:

  • Conversion from OBJ to STL
  • Elevating roads
  • Removing all geometry outside of the map area
  • Adding the support cube underneath the map
  • Applying tactile textures to surfaces (e.g. making non-paved areas and/or forests feel rough)
  • Scaling the entire map (to produce 3D prints of appropriate size)

That said, OSM2World does a very good job of producing accurate geometry, and it seems to be improving rapidly (e.g. http://forum.openstreetmap.org/viewtopic.php?id=52803). OSM2World looks like a robust tool that I’d be happy to use as a basis for the map geometry.

Something I’d struggle to implement myself is a smart placement of labels (mainly for streets), and I would be very happy if OSM2World could help with that. A navigation map can’t possibly fit even abbreviated street names (there are many streets and braille letters can’t be very small), so my current idea is that streets would have a small number (in braille) right next to them, and the underside of the printed map would contain a mapping from the numbers to street names.

My only goal, for now, is to support effective tactile navigation. OSM2World’s current output is fine for that, once streets are elevated. It’s fairly important that the buildings exist on the map with roughly correct 2D shapes, but the height dimension isn’t important.

Later on, I may consider extending the solution for other purposes, such as the kind of models with 3D buildings that you link to. That said, it’s certainly more fun when buildings are physically accurate, though you probably don’t want to 3D print a skyscraper.

Hello Samuli, have you made any progress in your fork yet?

I’ve been thinking about this for a while, and it certainly depends. Something like adding a cube underneath is small enough that I have zero issues with adding it to OSM2World proper, plus it is necessary for any kind of 3D printing (which I definitely consider to be within the scope of OSM2World). The same applies to the ability to add tactile textures to surfaces, which is not so different from the capabilities necessary to produce visually appealing 3D prints. One limiting factor here is OSM2World’s still relatively primitive style files.

When it comes to aspects such as distorting or generalizing the course of roads or other features for improved clarity, though, I’m not quite sure it fits into the conceptual approach and software architecture of OSM2World, so it would probably be best to do this in a fork. Even with a fork, I believe that a lot of progress can be shared with the main development branch where it makes sense.

That sounds like a reasonable design. What kind of help would you need for this?

Glad to hear from you again, Tordanik!

My changes to OSM2World have turned out to be pretty small, so I’ve simply kept them in patch form: https://github.com/skarkkai/touch-mapper/blob/master/osm2world.patch . I have made no attempt to write the changes in a way that would make them mergeable into mainline, because I didn’t know your thoughts on that.

My code currently creates a usable, 3D-printable tactile map from OSM data. Here is a sample rendering of the result: https://raw.githubusercontent.com/skarkkai/touch-mapper/master/visualization/rendering-20151202-noss.jpg

I have made the following behavior changes to OSM2World (the width and height rules are illustrated with a small sketch after the list):

  • Roads have a minimum width. This is critical because cheaper 3D printers in particular can only print details above a fairly large minimum horizontal size, e.g. 0.5 mm. At the same time, the narrower a road, the easier it is to feel with your fingers, so this is a balancing act.

  • Larger roads are narrower than in reality. It’s a bit difficult to tell a single wide road apart from two roads with a small gap in between (typical with sidewalks). My OSM2World road width adjustment hack seems to work in many cases, but might fail badly in others.

  • Buildings have a minimum and maximum height. The minimum makes it clear they are buildings; the maximum is there because their real-world height doesn’t matter much, and you don’t want to 3D print a very tall building on this type of tactile map.

  • All buildings have a GabledRoof. This makes it harder to confuse them with other types of objects. I’m not sure how much this matters.

  • Only roads and buildings are rendered. Everything else that OSM2World currently supports turns out to be unnecessary (and I want to keep the code as simple as possible, because the conversion should work reliably for any input).

  • OSM2World outputs (to stdout) the map boundaries in .obj space. This is necessary for further processing.
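To make the width and height rules above concrete, here is a small illustrative sketch of the clamping logic in Python (the real changes live in the Java patch linked above; the numeric limits here are made-up placeholders, not the values used in the patch):

    # Illustrative only: the clamping rules described above, expressed as plain functions.
    MIN_ROAD_WIDTH_M = 3.0        # must exceed the printer's minimum feature size at map scale
    MAX_ROAD_WIDTH_M = 8.0        # keeps a wide road from reading like two parallel roads
    MIN_BUILDING_HEIGHT_M = 4.0   # tall enough to clearly feel like a building
    MAX_BUILDING_HEIGHT_M = 12.0  # real height beyond this doesn't help a tactile map

    def tactile_road_width(real_width_m):
        """Clamp a road's real-world width into the printable, finger-readable range."""
        return min(max(real_width_m, MIN_ROAD_WIDTH_M), MAX_ROAD_WIDTH_M)

    def tactile_building_height(real_height_m):
        """Clamp a building's height between the minimum and maximum described above."""
        return min(max(real_height_m, MIN_BUILDING_HEIGHT_M), MAX_BUILDING_HEIGHT_M)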

I then postprocess the resulting .obj file in Blender as follows (a rough sketch of such a script appears after the list):

  • Raise roads

  • Clip roads at map edge

  • Remove buildings that are partially (or entirely) outside the map

  • Add the support cube

  • Output as .stl
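For reference, a rough sketch of the shape of such a post-processing script, run headlessly as blender -b -P postprocess.py. This is not the actual Touch Mapper code: the clipping and building-removal steps are omitted, the object-name matching and sizes are assumptions about the OSM2World output, and the import/export operators are the ones bundled with Blender releases of that era.

    import bpy

    ROAD_RAISE = 0.3   # assumed height offset for roads, in .obj units
    CUBE_DEPTH = 10.0  # assumed thickness of the support cube
    MAP_HALF = 100.0   # assumed half-width of the map area (in practice from OSM2World's stdout output)

    # Start from an empty scene so the default cube/camera/lamp don't end up in the print.
    bpy.ops.object.select_all(action='SELECT')
    bpy.ops.object.delete()

    # Import the OSM2World output (bundled OBJ importer).
    bpy.ops.import_scene.obj(filepath="/path/to/map.obj")

    # Raise roads; matching by object name is an assumption about how OSM2World labels OBJ objects.
    for obj in bpy.data.objects:
        if "Road" in obj.name:
            obj.location.z += ROAD_RAISE

    # Add the support cube underneath the map; the default cube is 2x2x2, so scaling by MAP_HALF
    # gives the full map width, and placing it at -CUBE_DEPTH/2 puts its top face at z = 0.
    bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, -CUBE_DEPTH / 2.0))
    bpy.context.object.scale = (MAP_HALF, MAP_HALF, CUBE_DEPTH / 2.0)

    # Export everything as STL for 3D printing (bundled io_mesh_stl exporter).
    bpy.ops.export_mesh.stl(filepath="/path/to/map.stl")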

The remaining nontrivial feature in my plans is marking a few of the largest roads. My current idea is to print a single braille character (a digit, starting from 1) where the largest roads cross the map edges. Having the characters at the edges of the map makes them easy to discover, and it also removes the problem of finding optimal positions for them. Printing onto the bottom of the support cube seems questionable given the physical constraints of 3D printing, so I plan to provide a separate textual mapping from the road numbers to their names.
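As an illustration of the braille digit idea (not something implemented yet): digits in braille reuse the dot patterns of the letters a–j in the standard 2×3 cell, so turning a road number into raised-dot positions could look roughly like this. The ~2.5 mm spacing is the usual braille dot spacing, but whether it prints and reads well on a given printer is something to verify.

    # Dots of a braille cell are numbered 1-2-3 down the left column and 4-5-6 down
    # the right. Digits 1-9 and 0 reuse the patterns of letters a-j (the number sign
    # prefix is left out here for compactness).
    BRAILLE_DIGIT_DOTS = {
        "1": (1,), "2": (1, 2), "3": (1, 4), "4": (1, 4, 5), "5": (1, 5),
        "6": (1, 2, 4), "7": (1, 2, 4, 5), "8": (1, 2, 5), "9": (2, 4), "0": (2, 4, 5),
    }

    DOT_SPACING_MM = 2.5  # standard braille dot-to-dot spacing; verify on the target printer

    def braille_dot_positions(digit, origin=(0.0, 0.0)):
        """Return (x, y) centers in mm for the raised dots of one braille digit."""
        # (column, row) offsets of dots 1..6 within a cell, with row 0 at the bottom.
        cell = {1: (0, 2), 2: (0, 1), 3: (0, 0), 4: (1, 2), 5: (1, 1), 6: (1, 0)}
        ox, oy = origin
        return [(ox + cell[d][0] * DOT_SPACING_MM, oy + cell[d][1] * DOT_SPACING_MM)
                for d in BRAILLE_DIGIT_DOTS[digit]]

Each returned position would then get a small raised dome in the mesh at the chosen spot on the map edge.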

Tactile textures may not be useful, so I plan to add them only if user feedback indicates they are needed.

In summary, I wish for the following changes to OSM2World (in addition to my patch):

  • Output information, in any form, about where the centerline of each named road crosses the map edge (a small sketch of this computation follows the list). The current .obj file output may contain this information, but clearly not in an ideal format.

  • A way to specify a width for the narrowest roads without wide roads (such as multi-lane streets) becoming wider than they are in reality. I don’t know what kind of algorithm makes sense here.
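Purely as an illustration of the first wish (not something OSM2World provides), finding where a road centerline crosses a rectangular map boundary amounts to intersecting each centerline segment with the four boundary lines. A sketch, assuming the centerline is available as a list of 2D points in the map plane:

    def boundary_crossings(centerline, xmin, ymin, xmax, ymax):
        """Return the points where a centerline (list of (x, y) tuples) crosses the map edges.
        Segments with both endpoints outside the map area are ignored for simplicity."""
        def inside(x, y):
            return xmin <= x <= xmax and ymin <= y <= ymax

        crossings = []
        for (x1, y1), (x2, y2) in zip(centerline, centerline[1:]):
            if inside(x1, y1) == inside(x2, y2):
                continue  # segment is entirely inside or entirely outside the map area
            # Intersect the segment with each boundary line and keep intersections that
            # lie both on the segment (0 <= t <= 1) and within the extent of the map edge.
            if x1 != x2:
                for xb in (xmin, xmax):
                    t = (xb - x1) / (x2 - x1)
                    y = y1 + t * (y2 - y1)
                    if 0.0 <= t <= 1.0 and ymin <= y <= ymax:
                        crossings.append((xb, y))
            if y1 != y2:
                for yb in (ymin, ymax):
                    t = (yb - y1) / (y2 - y1)
                    x = x1 + t * (x2 - x1)
                    if 0.0 <= t <= 1.0 and xmin <= x <= xmax:
                        crossings.append((x, yb))
        return crossings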

Having given this further thought, I might in the future stop using OSM2World entirely and instead create the tactile map directly from OSM data. In that implementation, buildings would simply be extruded vertically, and roads would be paths straightforwardly thickened into tubes.

However, even in that case, I might later add a “small scale map” mode, which would aim for much greater physical accuracy than I currently do, and for that OSM2World would be an important tool.

Well, whatever works best for you. 🙂 Honestly, OSM2World does not have that much to offer when it comes to things like labelling, and when all you need are extruded buildings and roads, then doing things from scratch is reasonable. Feel free to contact me when you are looking into adding physically accurate models! And sorry for the sometimes slow replies.

This “3D printed tactile maps for the blind” project of mine is now released at http://touch-mapper.org/

Use of OSM2World is presently very limited, but later on I might look into extending the site to also offer physically more accurate models at different scales (town, city, region).

Congratulations! 🙂

Hello Samuli,

Is it true that you use Google’s geocoding service to find positions on an OSM-based map?

Why not use one of the OSM-based search engines (http://wiki.openstreetmap.org/wiki/Search_engines)?

I do use Google’s geocoding service. The reason is that it is, unfortunately, much better. I initially used OSM’s service, but it was effectively unusable, mainly because it didn’t understand street numbers, AFAICT. Touch Mapper’s maps typically cover about 500 meters, which is less than the length of a long street. If a user enters an address like “Mystreet 1000”, it’s unacceptable to get a map for “Mystreet 1” (which would often happen), especially when the user is blind and can’t verify what area the map covers.