
#1 2019-10-11 21:14:04

wambacher
Member
From: Schlangenbad/Wambach, Germany
Registered: 2009-12-16
Posts: 16,497
Website

Every year again: database is stuck :(

Hi,

it was long overdue: the diff update for my live database is stuck because one table has a problem.

I am trying to recover it right now, but that can take a long time.

- How long? No idea, but I am trying to avoid a complete reload.

- What is broken? The table planet_osm_line with about 183 million rows (244 GB + 84 GB for indices)

- What is down? Missing Boundaries & Fools. Since no data is being changed, no processing needs to run either.

- What still works? The Boundaries Map and all the other online maps, although with static data, and of course the web server.

Regards
walter

Offline

#2 2019-10-15 20:55:43

wambacher
Member
From: Schlangenbad/Wambach, Germany
Registered: 2009-12-16
Posts: 16,497
Website

Re: Every year again: database is stuck :(

Moin,

this really doesn't look good :(

Right now the import with osm2pgsql is causing problems. I remember a good write-up (OSM wiki?) that described exactly this, in particular which parameters to change in postgresql.conf so that the import runs through.

IIRC there is also a document by Frederik on this.

I have 64 GB of physical memory and I am getting "out of memory" errors.

Puzzled regards
walter

Offline

#3 2019-10-16 10:43:05

lonvia
Member
Registered: 2009-10-22
Posts: 27

Re: Every year again: database is stuck :(

I have 64 GB of physical memory and I am getting "out of memory" errors.

Which version of osm2pgsql is that? The current 1.0.0 has known problems right now when importing without a flatnode file for the nodes (see https://github.com/openstreetmap/osm2pgsql/pull/960).

Offline

#4 2019-10-16 10:49:08

wambacher
Member
From: Schlangenbad/Wambach, Germany
Registered: 2009-12-16
Posts: 16,497
Website

Re: Every year again: database is stuck :(

lonvia wrote:

I have 64 GB of physical memory and I am getting "out of memory" errors.

Which version of osm2pgsql is that? The current 1.0.0 has known problems right now when importing without a flatnode file for the nodes (see https://github.com/openstreetmap/osm2pgsql/pull/960).

I first used 1.0.0 and then fell back to 0.96, but there I got the same "out of memory" problems.

Regards
walter

Offline

#5 2019-10-16 11:15:03

lonvia
Member
Registered: 2009-10-22
Posts: 27

Re: Every year again: database is stuck :(

I first used 1.0.0 and then fell back to 0.96, but there I got the same "out of memory" problems.

Then it is something else. I would rather turn the '-C' knob of osm2pgsql or the 'shared_buffers' parameter of PostgreSQL. If it is still stuck on Saturday, we can have a look in Karlsruhe.
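A minimal sketch of how those two knobs could be turned; the cache size, buffer size, and database name here are made-up example values, not recommendations from this thread:

```shell
# Lower osm2pgsql's node cache: -C is given in MB, so this reserves
# roughly 16 GB. ("gis" and the planet file name are placeholders.)
osm2pgsql --slim -C 16000 --database gis planet-latest.osm.pbf

# Shrink PostgreSQL's shared_buffers; the change only takes effect
# after a server restart.
psql -U postgres -c "ALTER SYSTEM SET shared_buffers = '4GB';"
sudo systemctl restart postgresql
```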

Offline

#6 2019-10-17 11:14:58

wambacher
Member
From: Schlangenbad/Wambach, Germany
Registered: 2009-12-16
Posts: 16,497
Website

Re: Every year again: database is stuck :(

lonvia wrote:

I would rather turn the '-C' knob of osm2pgsql or the 'shared_buffers' parameter of PostgreSQL.

Gladly, but I am at my wits' end.
I still have the osm2pgsql script I used for a reimport in 2018, but unfortunately no longer the matching postgresql config.

If it is still stuck on Saturday, we can have a look in Karlsruhe.

I have already cancelled my trip to Karlsruhe.

more postgresql.auto.conf
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
autovacuum = 'off'
shared_buffers = '32GB'
max_stack_depth = '4MB'
autovacuum_max_workers = '6'
log_autovacuum_min_duration = '5min'
work_mem = '4GB'
listen_addresses = '*'
max_connections = '300'
max_worker_processes = '1'
max_parallel_workers_per_gather = '1'
max_wal_size = '8GB'
max_wal_senders = '0'
max_locks_per_transaction = '128'
ssl = 'on'
log_min_messages = 'error'
log_connections = 'off'
log_hostname = 'off'
maintenance_work_mem = '16GB'
log_min_duration_statement = '10000'
checkpoint_completion_target = '0.85'
track_io_timing = 'on'
effective_cache_size = '2GB'
log_checkpoints = 'on'
wal_level = 'minimal'
wal_compression = 'on'
checkpoint_timeout = '5min'
# No flat file any more!
# + extra_attributes
#
set -x
cd /osm/db/$1/create
OSM2PGSQL=/opt/install/osm/osm2pgsql/osm2pgsql-0.93.0-dev/build/osm2pgsql

$OSM2PGSQL --verbose \
           --create \
           --slim \
           --exclude-invalid-polygon \
           --extra-attributes \
           --style /osm/db/wno_2017.style \
           --port 5432 \
           --database $1 \
           --latlon \
           --username postgres \
           --hstore-all \
           --hstore-add-index \
           --tablespace-main-data  $1_ts1 \
           --tablespace-main-index $1_is2 \
           --tablespace-slim-data  $1_ts2 \
           --tablespace-slim-index $1_is1 \
           -C 26000 \
           --cache-strategy optimized \
           --number-processes 12 \
           --keep-coastlines \
           --multi-geometry \
import/$2

Dejected regards
walter

Offline

#7 2019-10-17 11:49:19

toc-rox
Member
From: Münster
Registered: 2011-07-20
Posts: 2,118
Website

Re: Every year again: database is stuck :(

I recently did a full-planet import (osm2pgsql 1.0.0, 64 GB RAM), although without the option of incremental updates. If it helps, I could share my configurations.

Offline

#8 2019-10-17 11:55:16

wambacher
Member
From: Schlangenbad/Wambach, Germany
Registered: 2009-12-16
Posts: 16,497
Website

Re: Every year again: database is stuck :(

toc-rox wrote:

I recently did a full-planet import (osm2pgsql 1.0.0, 64 GB RAM), although without the option of incremental updates. If it helps, I could share my configurations.

Yes, please. Please include the PostgreSQL config as well, since that is where I suspect the problem.

Regards
walter

Offline

#9 2019-10-17 13:39:44

toc-rox
Member
From: Münster
Registered: 2011-07-20
Posts: 2,118
Website

Re: Every year again: database is stuck :(

Maybe this helps:

Database:
PostgreSQL 9.5.19 on x86_64-pc-linux-gnu


Import (no incremental updates possible):
nohup ./osm2pgsql --username postgres \
                  --multi-geometry \
                  --hstore \
                  --slim --drop \
                  --flat-nodes nodes.bin \
                  --create \
                  --cache 48000 \
                  --number-processes 8 \
                  --database osmtest \
                  --style openstreetmap-carto.style \
                  --tag-transform-script openstreetmap-carto.lua \
                  planet-latest.osm.pbf 1>load_osmtest.out 2>&1 &


PostgreSQL config:
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
#   name = value
#
# (The "=" is optional.)  Whitespace may be used.  Comments are introduced with
# "#" anywhere on a line.  The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal.  If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, or use "pg_ctl reload".  Some
# parameters, which are marked below, require a server shutdown and restart to
# take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on".  Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units:  kB = kilobytes        Time units:  ms  = milliseconds
#                MB = megabytes                     s   = seconds
#                GB = gigabytes                     min = minutes
#                TB = terabytes                     h   = hours
#                                                   d   = days


#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------

# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.

data_directory = '/var/lib/postgresql/9.5/main'		# use data in another directory
					# (change requires restart)
hba_file = '/etc/postgresql/9.5/main/pg_hba.conf'	# host-based authentication file
					# (change requires restart)
ident_file = '/etc/postgresql/9.5/main/pg_ident.conf'	# ident configuration file
					# (change requires restart)

# If external_pid_file is not explicitly set, no extra PID file is written.
external_pid_file = '/var/run/postgresql/9.5-main.pid'			# write an extra PID file
					# (change requires restart)


#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------

# - Connection Settings -

#listen_addresses = 'localhost'		# what IP address(es) to listen on;
					# comma-separated list of addresses;
					# defaults to 'localhost'; use '*' for all
					# (change requires restart)
port = 5432				# (change requires restart)
max_connections = 100			# (change requires restart)
#superuser_reserved_connections = 3	# (change requires restart)
unix_socket_directories = '/var/run/postgresql'	# comma-separated list of directories
					# (change requires restart)
#unix_socket_group = ''			# (change requires restart)
#unix_socket_permissions = 0777		# begin with 0 to use octal notation
					# (change requires restart)
#bonjour = off				# advertise server via Bonjour
					# (change requires restart)
#bonjour_name = ''			# defaults to the computer name
					# (change requires restart)

# - Security and Authentication -

#authentication_timeout = 1min		# 1s-600s
ssl = true				# (change requires restart)
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
					# (change requires restart)
#ssl_prefer_server_ciphers = on		# (change requires restart)
#ssl_ecdh_curve = 'prime256v1'		# (change requires restart)
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'		# (change requires restart)
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'		# (change requires restart)
#ssl_ca_file = ''			# (change requires restart)
#ssl_crl_file = ''			# (change requires restart)
#password_encryption = on
#db_user_namespace = off
#row_security = on

# GSSAPI using Kerberos
#krb_server_keyfile = ''
#krb_caseins_users = off

# - TCP Keepalives -
# see "man 7 tcp" for details

#tcp_keepalives_idle = 0		# TCP_KEEPIDLE, in seconds;
					# 0 selects the system default
#tcp_keepalives_interval = 0		# TCP_KEEPINTVL, in seconds;
					# 0 selects the system default
#tcp_keepalives_count = 0		# TCP_KEEPCNT;
					# 0 selects the system default


#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------

# - Memory -

shared_buffers = 8GB			# min 128kB
					# (change requires restart)
#huge_pages = try			# on, off, or try
					# (change requires restart)
#temp_buffers = 8MB			# min 800kB
#max_prepared_transactions = 0		# zero disables the feature
					# (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
work_mem = 256MB			# min 64kB
maintenance_work_mem = 4GB		# min 1MB
#autovacuum_work_mem = -1		# min 1MB, or -1 to use maintenance_work_mem
#max_stack_depth = 2MB			# min 100kB
dynamic_shared_memory_type = posix	# the default is the first option
					# supported by the operating system:
					#   posix
					#   sysv
					#   windows
					#   mmap
					# use none to disable dynamic shared memory

# - Disk -

#temp_file_limit = -1			# limits per-session temp file space
					# in kB, or -1 for no limit

# - Kernel Resource Usage -

#max_files_per_process = 1000		# min 25
					# (change requires restart)
#shared_preload_libraries = ''		# (change requires restart)

# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0			# 0-100 milliseconds
#vacuum_cost_page_hit = 1		# 0-10000 credits
#vacuum_cost_page_miss = 10		# 0-10000 credits
#vacuum_cost_page_dirty = 20		# 0-10000 credits
#vacuum_cost_limit = 200		# 1-10000 credits

# - Background Writer -

#bgwriter_delay = 200ms			# 10-10000ms between rounds
#bgwriter_lru_maxpages = 100		# 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0		# 0-10.0 multipler on buffers scanned/round

# - Asynchronous Behavior -

#effective_io_concurrency = 1		# 1-1000; 0 disables prefetching
#max_worker_processes = 8


#------------------------------------------------------------------------------
# WRITE AHEAD LOG
#------------------------------------------------------------------------------

# - Settings -

#wal_level = minimal			# minimal, archive, hot_standby, or logical
					# (change requires restart)
fsync = on				# turns forced synchronization on or off
#synchronous_commit = on		# synchronization level;
					# off, local, remote_write, or on
#wal_sync_method = fsync		# the default is the first option
					# supported by the operating system:
					#   open_datasync
					#   fdatasync (default on Linux)
					#   fsync
					#   fsync_writethrough
					#   open_sync
#full_page_writes = on			# recover from partial page writes
#wal_compression = off			# enable compression of full-page writes
#wal_log_hints = off			# also do full page writes of non-critical updates
					# (change requires restart)
#wal_buffers = -1			# min 32kB, -1 sets based on shared_buffers
					# (change requires restart)
#wal_writer_delay = 200ms		# 1-10000 milliseconds

#commit_delay = 10000			# range 0-100000, in microseconds
#commit_siblings = 100			# range 1-1000

# - Checkpoints -

checkpoint_timeout = 15min		# range 30s-1h
#max_wal_size = 1GB
#min_wal_size = 80MB
checkpoint_completion_target = 0.9	# checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s		# 0 disables

# - Archiving -

#archive_mode = off		# enables archiving; off, on, or always
				# (change requires restart)
#archive_command = ''		# command to use to archive a logfile segment
				# placeholders: %p = path of file to archive
				#               %f = file name only
				# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0		# force a logfile segment switch after this
				# number of seconds; 0 disables


#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------

# - Sending Server(s) -

# Set these on the master and on any standby that will send replication data.

#max_wal_senders = 0		# max number of walsender processes
				# (change requires restart)
#wal_keep_segments = 0		# in logfile segments, 16MB each; 0 disables
#wal_sender_timeout = 60s	# in milliseconds; 0 disables

#max_replication_slots = 0	# max number of replication slots
				# (change requires restart)
#track_commit_timestamp = off	# collect timestamp of transaction commit
				# (change requires restart)

# - Master Server -

# These settings are ignored on a standby server.

#synchronous_standby_names = ''	# standby servers that provide sync rep
				# comma-separated list of application_name
				# from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0	# number of xacts by which cleanup is delayed

# - Standby Servers -

# These settings are ignored on a master server.

#hot_standby = off			# "on" allows queries during recovery
					# (change requires restart)
#max_standby_archive_delay = 30s	# max delay before canceling queries
					# when reading WAL from archive;
					# -1 allows indefinite delay
#max_standby_streaming_delay = 30s	# max delay before canceling queries
					# when reading streaming WAL;
					# -1 allows indefinite delay
#wal_receiver_status_interval = 10s	# send replies at least this often
					# 0 disables
#hot_standby_feedback = off		# send info from standby to prevent
					# query conflicts
#wal_receiver_timeout = 60s		# time that receiver waits for
					# communication from master
					# in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s	# time to wait before retrying to
					# retrieve WAL after a failed attempt


#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------

# - Planner Method Configuration -

#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on

# - Planner Cost Constants -

#seq_page_cost = 1.0			# measured on an arbitrary scale
#random_page_cost = 4.0			# same scale as above
#cpu_tuple_cost = 0.01			# same scale as above
#cpu_index_tuple_cost = 0.005		# same scale as above
#cpu_operator_cost = 0.0025		# same scale as above
effective_cache_size = 16GB

# - Genetic Query Optimizer -

#geqo = on
#geqo_threshold = 12
#geqo_effort = 5			# range 1-10
#geqo_pool_size = 0			# selects default based on effort
#geqo_generations = 0			# selects default based on effort
#geqo_selection_bias = 2.0		# range 1.5-2.0
#geqo_seed = 0.0			# range 0.0-1.0

# - Other Planner Options -

default_statistics_target = 1000	# range 1-10000
#constraint_exclusion = partition	# on, off, or partition
#cursor_tuple_fraction = 0.1		# range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8		# 1 disables collapsing of explicit
					# JOIN clauses


#------------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#------------------------------------------------------------------------------

# - Where to Log -

#log_destination = 'stderr'		# Valid values are combinations of
					# stderr, csvlog, syslog, and eventlog,
					# depending on platform.  csvlog
					# requires logging_collector to be on.

# This is used when logging to stderr:
#logging_collector = off		# Enable capturing of stderr and csvlog
					# into log files. Required to be on for
					# csvlogs.
					# (change requires restart)

# These are only used if logging_collector is on:
#log_directory = 'pg_log'		# directory where log files are written,
					# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'	# log file name pattern,
					# can include strftime() escapes
#log_file_mode = 0600			# creation mode for log files,
					# begin with 0 to use octal notation
#log_truncate_on_rotation = off		# If on, an existing log file with the
					# same name as the new log file will be
					# truncated rather than appended to.
					# But such truncation only occurs on
					# time-driven rotation, not on restarts
					# or size-driven rotation.  Default is
					# off, meaning append to existing files
					# in all cases.
#log_rotation_age = 1d			# Automatic rotation of logfiles will
					# happen after that time.  0 disables.
#log_rotation_size = 10MB		# Automatic rotation of logfiles will
					# happen after that much log output.
					# 0 disables.

# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'

# This is only relevant when logging to eventlog (win32):
#event_source = 'PostgreSQL'

# - When to Log -

#client_min_messages = notice		# values in order of decreasing detail:
					#   debug5
					#   debug4
					#   debug3
					#   debug2
					#   debug1
					#   log
					#   notice
					#   warning
					#   error

#log_min_messages = warning		# values in order of decreasing detail:
					#   debug5
					#   debug4
					#   debug3
					#   debug2
					#   debug1
					#   info
					#   notice
					#   warning
					#   error
					#   log
					#   fatal
					#   panic

#log_min_error_statement = error	# values in order of decreasing detail:
					#   debug5
					#   debug4
					#   debug3
					#   debug2
					#   debug1
					#   info
					#   notice
					#   warning
					#   error
					#   log
					#   fatal
					#   panic (effectively off)

#log_min_duration_statement = -1	# -1 is disabled, 0 logs all statements
					# and their durations, > 0 logs only
					# statements running at least this number
					# of milliseconds


# - What to Log -

#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default		# terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '%t [%p-%l] %q%u@%d '			# special values:
					#   %a = application name
					#   %u = user name
					#   %d = database name
					#   %r = remote host and port
					#   %h = remote host
					#   %p = process ID
					#   %t = timestamp without milliseconds
					#   %m = timestamp with milliseconds
					#   %i = command tag
					#   %e = SQL state
					#   %c = session ID
					#   %l = session line number
					#   %s = session start timestamp
					#   %v = virtual transaction ID
					#   %x = transaction ID (0 if none)
					#   %q = stop here in non-session
					#        processes
					#   %% = '%'
					# e.g. '<%u%%%d> '
#log_lock_waits = off			# log lock waits >= deadlock_timeout
#log_statement = 'none'			# none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1			# log temporary files equal or larger
					# than the specified size in kilobytes;
					# -1 disables, 0 logs all temp files
log_timezone = 'localtime'


# - Process Title -

#cluster_name = ''			# added to process titles if nonempty
					# (change requires restart)
#update_process_title = on


#------------------------------------------------------------------------------
# RUNTIME STATISTICS
#------------------------------------------------------------------------------

# - Query/Index Statistics Collector -

#track_activities = on
#track_counts = on
#track_io_timing = off
#track_functions = none			# none, pl, all
#track_activity_query_size = 1024	# (change requires restart)
stats_temp_directory = '/var/run/postgresql/9.5-main.pg_stat_tmp'


# - Statistics Monitoring -

#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off


#------------------------------------------------------------------------------
# AUTOVACUUM PARAMETERS
#------------------------------------------------------------------------------

#autovacuum = on			# Enable autovacuum subprocess?  'on'
					# requires track_counts to also be on.
#log_autovacuum_min_duration = -1	# -1 disables, 0 logs all actions and
					# their durations, > 0 logs only
					# actions running at least this number
					# of milliseconds.
#autovacuum_max_workers = 3		# max number of autovacuum subprocesses
					# (change requires restart)
#autovacuum_naptime = 1min		# time between autovacuum runs
#autovacuum_vacuum_threshold = 50	# min number of row updates before
					# vacuum
#autovacuum_analyze_threshold = 50	# min number of row updates before
					# analyze
#autovacuum_vacuum_scale_factor = 0.2	# fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1	# fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000	# maximum XID age before forced vacuum
					# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000	# maximum multixact age
					# before forced vacuum
					# (change requires restart)
#autovacuum_vacuum_cost_delay = 20ms	# default vacuum cost delay for
					# autovacuum, in milliseconds;
					# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1	# default vacuum cost limit for
					# autovacuum, -1 means use
					# vacuum_cost_limit


#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------

# - Statement Behavior -

#search_path = '"$user", public'	# schema names
#default_tablespace = ''		# a tablespace name, '' uses the default
#temp_tablespaces = ''			# a list of tablespace names, '' uses
					# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0			# in milliseconds, 0 is disabled
#lock_timeout = 0			# in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_freeze_table_age = 150000000
#bytea_output = 'hex'			# hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
#gin_fuzzy_search_limit = 0
#gin_pending_list_limit = 4MB

# - Locale and Formatting -

datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
timezone = 'localtime'
#timezone_abbreviations = 'Default'     # Select the set of available time zone
					# abbreviations.  Currently, there are
					#   Default
					#   Australia (historical usage)
					#   India
					# You can create your own file in
					# share/timezonesets/.
#extra_float_digits = 0			# min -15, max 3
#client_encoding = sql_ascii		# actually, defaults to database
					# encoding

# These settings are initialized by initdb, but they can be changed.
lc_messages = 'en_US.UTF-8'			# locale for system error message
					# strings
lc_monetary = 'en_US.UTF-8'			# locale for monetary formatting
lc_numeric = 'en_US.UTF-8'			# locale for number formatting
lc_time = 'en_US.UTF-8'				# locale for time formatting

# default configuration for text search
default_text_search_config = 'pg_catalog.english'

# - Other Defaults -

#dynamic_library_path = '$libdir'
#local_preload_libraries = ''
#session_preload_libraries = ''


#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------

#deadlock_timeout = 1s
#max_locks_per_transaction = 64		# min 10
					# (change requires restart)
#max_pred_locks_per_transaction = 64	# min 10
					# (change requires restart)


#------------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------

# - Previous PostgreSQL Versions -

#array_nulls = on
#backslash_quote = safe_encoding	# on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#lo_compat_privileges = off
#operator_precedence_warning = off
#quote_all_identifiers = off
#sql_inheritance = on
#standard_conforming_strings = on
#synchronize_seqscans = on

# - Other Platforms and Clients -

#transform_null_equals = off


#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------

#exit_on_error = off			# terminate session on any error?
#restart_after_crash = on		# reinitialize after backend crash?


#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------

# These options allow settings to be loaded from files other than the
# default postgresql.conf.

#include_dir = 'conf.d'			# include files ending in '.conf' from
					# directory 'conf.d'
#include_if_exists = 'exists.conf'	# include file only if it exists
#include = 'special.conf'		# include file


#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------

# Add settings for extensions here

Offline

#10 2019-10-17 16:31:57

Wulf4096
Member
From: Hamburg
Registered: 2018-10-23
Posts: 588

Re: Every year again: database is stuck :(

What kind of disks do you have in there? And roughly how long did it take?

Offline

#11 2019-10-17 21:45:51

wambacher
Member
From: Schlangenbad/Wambach, Germany
Registered: 2009-12-16
Posts: 16,497
Website

Re: Every year again: database is stuck :(

Wulf4096 wrote:

What kind of disks do you have in there? And roughly how long did it take?

3 TB SSD for the slim tables (nodes, ways and rels), 4 TB HDD for the rest.

Duration: several days, and then the diffs still have to be applied on top.

But that is/was not the problem. I had to reduce the cache size (-C) drastically for the import to start at all. And now it seems to be running.
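As a back-of-envelope illustration (my own arithmetic on the configs posted earlier in this thread, not a claim made by anyone here), the fixed allocations alone already exceed the machine's RAM, which would explain the "out of memory" aborts:

```shell
# Sum the fixed memory allocations from the posted configs (all in GB).
SHARED_BUFFERS_GB=32      # shared_buffers = '32GB'
MAINT_WORK_MEM_GB=16      # maintenance_work_mem = '16GB'
OSM2PGSQL_CACHE_GB=26     # -C 26000 (the option takes MB)
TOTAL=$((SHARED_BUFFERS_GB + MAINT_WORK_MEM_GB + OSM2PGSQL_CACHE_GB))
echo "fixed allocations: ${TOTAL} GB of 64 GB RAM"   # 74 GB: over budget
```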

Regards
walter

Offline

#12 2019-10-18 17:20:21

Nop
Moderator
Registered: 2009-01-26
Posts: 2,525

Re: Every year again: database is stuck :(

Which version is it running with now?

And why are you doing without a flatfile, anyway?


Nothing is too difficult for the man who does not have to do it himself...
Projects: Reit- und Wanderkarte with navigation - map generator Map Composer - GPS track editor Track Guru

Offline

#13 2019-10-18 18:30:10

lonvia
Member
Registered: 2009-10-22
Posts: 27

Re: Every year again: database is stuck :(

As Nop says, you should definitely use a flatnode file for planet imports (parameter: -F <filename>). It makes everything many times faster. Rule of thumb: the file for the flatnode store needs roughly as much space as the planet PBF currently is, and then you allow for another 10% of growth per year.

I would also lower 'shared_buffers'. I use 2-4 GB there with 64 GB of RAM. The problem with 'shared_buffers' is that PostgreSQL allocates this memory permanently and never releases it. Since osm2pgsql is itself quite memory-hungry, the two end up in conflict.

With these two changes the import should hopefully run through.

One other observation: 'autovacuum = off' is usually not a good idea, because it confuses PostgreSQL's query planner.

And here are two more options you could still tune, which will not really affect the import itself much, but rather operation afterwards:
You can set 'effective_cache_size' to about 75% of RAM (i.e. 50 GB).
If you have SSDs, you can also set 'effective_io_concurrency = 500'.
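The sizing rule of thumb above can be sketched as a quick calculation; the planet size and the number of years of headroom are made-up example values:

```shell
# Estimate the disk space to reserve for the flatnode file (-F):
# roughly the current planet PBF size, grown by ~10% per year.
PLANET_GB=50   # assumed current planet PBF size
YEARS=3        # assumed planning horizon
NEEDED=$(awk -v p="$PLANET_GB" -v y="$YEARS" \
             'BEGIN { printf "%d", p * 1.1 ^ y + 0.5 }')
echo "reserve about ${NEEDED} GB for the flatnode file"

# The import would then point osm2pgsql at it, e.g.:
# osm2pgsql --slim -F /osm/flatnodes.bin ... planet-latest.osm.pbf
```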

Offline

#14 2019-10-18 18:46:42

wambacher
Member
From: Schlangenbad/Wambach, Germany
Registered: 2009-12-16
Posts: 16,497
Website

Re: Every year again: database is stuck :(

Nop wrote:

Which version is it running with now?

It was running with osm2pgsql 0.96 until an hour ago. Then it aborted again due to lack of memory.

And why are you doing without a flatfile, anyway?

That worked fine the last time. But now I will use one again.

Offline

#15 2019-10-18 18:54:53

wambacher
Member
From: Schlangenbad/Wambach, Germany
Registered: 2009-12-16
Posts: 16,497
Website

Re: Every year again: database is stuck :(

lonvia wrote:

As Nop says, you should definitely use a flatnode file for planet imports (parameter: -F <filename>). It makes everything many times faster. Rule of thumb: the file for the flatnode store needs roughly as much space as the planet PBF currently is, and then you allow for another 10% of growth per year.

I would also lower 'shared_buffers'. I use 2-4 GB there with 64 GB of RAM. The problem with 'shared_buffers' is that PostgreSQL allocates this memory permanently and never releases it. Since osm2pgsql is itself quite memory-hungry, the two end up in conflict.

I have done both.

One other observation: 'autovacuum = off' is usually not a good idea, because it confuses PostgreSQL's query planner.

I only switched that off for the import. Or is there anything wrong with that?

And here are two more options you could still tune, which will not really affect the import itself much, but rather operation afterwards:
You can set 'effective_cache_size' to about 75% of RAM (i.e. 50 GB).
If you have SSDs, you can also set 'effective_io_concurrency = 500'.

Thanks and regards
walter

Offline

#16 2019-10-29 14:40:33

wambacher
Member
From: Schlangenbad/Wambach, Germany
Registered: 2009-12-16
Posts: 16,497
Website

Re: Every year again: database is stuck :(

Moin,

the server is slowly getting rolling again :)

The database is up to date, the data collector is running, the missings look reasonably good, the CLI of the OSM Boundaries Map is running, and now I "only" have to check the GUI of the OSM Boundaries Map more closely.

The other applications (list below) should also be running, although Fools is still disconnected.

Tired regards
Walter

Last edited by wambacher (2019-10-29 14:42:34)

Offline

#17 2020-03-21 17:14:48

wambacher
Member
From: Schlangenbad/Wambach, Germany
Registered: 2009-12-16
Posts: 16,497
Website

Re: Every year again: database is stuck :(

Moin,

several things have happened:

- One table in the database is corrupt - planet_osm_line. Most applications keep running, but no more updates of the OSM data are possible.
- Then my development machine died.
- The GeoServer, which is the interface between the DB and the applications, went down as well.
- And a few more things.

All that mess within two weeks. That was very motivating, and you can probably imagine the rest. (*)

Current state:

- The dev machine is running again (without data loss)
- The current GeoServer is installed, but does not yet "know" all the schemas and layouts that the web applications need.
- The OSM Software Watchlist has been working again for a few weeks.

Still open:

- Changes to the web applications so that they inform users that maintenance is in progress ("server not found" is not exactly great)
- Import of the current planet file
- Finish configuring the GeoServer (Emergency ...)
- Processing of the boundaries data (Boundaries Map)
- Missing Boundaries
- and certainly more.

Currently: maintenance popups

Regards
walter

*) TL;DR: thoroughly fed up.

Offline

#18 2020-03-22 08:05:51

blaubaer11
Member
Registered: 2009-07-22
Posts: 392

Re: Every year again: database is stuck :(

Hello,
that sounds expensive and like a lot of work...

Thanks a lot for the info. So there is hope again for an emergency map, among other things.

Offline


Powered by FluxBB