Need help uploading planet-latest.osm.pbf to OpenStreetMap.

We wish to use OSM APIs for our mobile application, so I downloaded the OSM API code from GitHub and deployed it on our AWS server with the configuration below.
AWS instance type = r4.2xlarge
RAM = 61 GB
Processor = Intel Xeon E5-2686 v4
Clock speed = 2.3 GHz

I downloaded planet-latest.osm.pbf and tried to upload it as per your documentation.
export GH_FOREGROUND=false && export JETTY_PORT=8989 && ./graphhopper.sh web planet-latest.osm.pbf

JAVA_OPTS is set to use 17 GB as the heap size:
export JAVA_OPTS="-server -Xconcurrentio -Xmx17000m -Xms17000m"

But it does not get deployed and throws an out-of-memory exception.

Exception in thread "PBF Reader" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:336)
at com.graphhopper.reader.osm.OSMInputFile.process(OSMInputFile.java:251)
at com.graphhopper.reader.osm.pbf.PbfDecoder.sendResultsToSink(PbfDecoder.java:96)
at com.graphhopper.reader.osm.pbf.PbfDecoder.processBlobs(PbfDecoder.java:151)
at com.graphhopper.reader.osm.pbf.PbfDecoder.run(PbfDecoder.java:162)
at com.graphhopper.reader.osm.pbf.PbfReader.run(PbfReader.java:47)

Can somebody help me deploy the planet OSM file? Please let me know where I am going wrong.
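For anyone hitting the same "GC overhead limit exceeded" error: the usual workaround is to give the import step a much larger heap, run it separately from the web server, and write a GC log to confirm how collection behaves. A minimal sketch, assuming this graphhopper.sh version supports a separate import action and that roughly 28 GB of heap is free on this 61 GB machine (both are assumptions):

# Standard HotSpot flags on Java 8; the heap size here is a guess for a 61 GB machine.
export JAVA_OPTS="-server -Xmx28g -Xms28g -XX:+PrintGCDetails -Xloggc:gc-import.log"
# Build the graph once, without starting Jetty.
./graphhopper.sh import planet-latest.osm.pbf
# Then serve from the already-built graph.
export JETTY_PORT=8989 && ./graphhopper.sh web planet-latest.osm.pbf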

What are you actually trying to implement? Map display, map editing, routing…?

We will mainly be using the ReverseDistance API and the Geocoding API for our mobile application.
We were finally able to deploy the planet OSM PBF file after a number of retries; the run took about 20 hours. We faced a lot of issues during the deployment, where the process was just not completing. Once the process started, it took up a lot of memory, then went into a hung state, and we got the exception given earlier.

We started the process with 32 GB RAM, and as memory utilization grew we increased it to 64 GB, but even that was consumed very quickly and the process would again go into a hung state without completing. Then we tried the MMAP configuration for datareader.dataaccess and were finally able to complete one cycle of deployment this morning, after the process ran for around 20 hours.
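A minimal sketch of what that MMAP switch might look like, assuming the datareader.dataaccess key mentioned above is set in GraphHopper's config.properties (key names have changed between versions, and some versions use graph.dataaccess instead, so check config-example.properties for yours):

# Assumption: datareader.dataaccess is the right key for this GraphHopper version;
# other versions use graph.dataaccess. MMAP lets the OS page the graph in and out
# instead of keeping it on the Java heap, trading speed for a smaller heap footprint.
echo "datareader.dataaccess=MMAP" >> config.properties
./graphhopper.sh web planet-latest.osm.pbf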

This has raised a few questions for us about the OSM (world data) deployment:

  1. Why does the deployment process take such a long time to complete? 20 hours is a very long time, even though we are deploying world content, which is admittedly large.

  2. Memory utilization is very high during the process. The recommended server configuration given to us was 32 GB RAM and a 1 TB HDD. We had to upgrade the RAM to 64 GB, and even that didn't seem to suffice until our last attempt, when it utilized around 62 GB at peak. We felt the garbage collection process was probably not functioning properly. We also observed that CPU utilization was 100% towards the end of the process.

  3. What is the recommended CPU configuration?

  4. We found that once the server is stopped and then restarted, OSM has to be re-deployed on the server. Why is that the case? (See the note after this list.)

  5. Once the OSM planet file is deployed, what facility is provided for updating to a newer version? Is there a provision for applying only the incremental changes in the new file? (See the sketch after this list.)
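On question 4: as far as I know, GraphHopper persists the imported graph in a folder next to the PBF (by default something like planet-latest-gh) and reloads it on restart, so a full re-import should only be needed if that folder was deleted or the import never finished cleanly. On question 5: GraphHopper has no incremental update of an already-imported graph, but the planet file itself can be brought up to date with OSM's published diffs before re-importing. A sketch using osmupdate from the osmctools package (the tool choice and file names are my assumptions):

# Assumption: osmctools is installed. osmupdate downloads the published
# OSM change files and applies them to produce an up-to-date planet PBF.
osmupdate planet-latest.osm.pbf planet-updated.osm.pbf
# Re-import the refreshed planet (a full import, not incremental).
./graphhopper.sh import planet-updated.osm.pbf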

If your purpose is geocoding, then you need to install an instance of Nominatim, OSM’s geocoder. Not “OSM API code” (I honestly don’t know what you mean by that).
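To make that concrete: Nominatim exposes /search for geocoding and /reverse for reverse geocoding, and a self-hosted instance answers the same queries as the public server. A quick sketch against the public endpoint (the coordinates are just an example; a production mobile app must run its own instance per the usage policy):

# Forward geocoding: free-text query to structured JSON results.
curl "https://nominatim.openstreetmap.org/search?q=Brandenburg+Gate,+Berlin&format=jsonv2"
# Reverse geocoding: coordinates to the nearest address.
curl "https://nominatim.openstreetmap.org/reverse?lat=52.5163&lon=13.3777&format=jsonv2"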