Automated edits for name tags

I don’t think the “fixme” tagging is working. There are more than 30,000 nodes and 1,700 ways with a “fixme” tag in Israel, not to mention “fixme” in “note” tags…

Having a table with columns for “name”, “name:he”, “name1”, and “name:he1” would be more efficient.

See this post for how to update OSM tags using a CSV file.

I’d say fixmes pending review indefinitely are better than name mismatches pending review indefinitely.

By the way, almost all node fixmes are from the GTFS bus stop import. It may be wise to bulk-remove those to shine a light on the more important fixmes.

Edit: Sorry, I was confusing this with something else and this comment is wrong regarding overpass. Please ignore the previous version of this comment.

I won’t apply auto-fixes for now. I’ll look into manually updating them (via the CSV method or some other way).

Nope, I still don’t like this. It’s wasted manpower. Even if I manually fix them all, the mismatches will accumulate over time and the manual fix will have to be repeated periodically, and most mismatch “errors” will in fact be legitimate edits where someone changes one name tag and forgets the other. It’s mechanical drudgery.

Wouldn’t it be easier to just treat the two tags as a single tag and have the bot auto-synchronize them? We don’t even need a fixme for this; if a user mis-edits a tag, it’s not the synchronization’s fault. Mis-edits are normal in OSM, and name tag mis-edits should be handled like any other mis-edit, through monitoring tools and the like.

I can understand the need for a fixme tag on the first edit only (because some autofixes will be grabbed from deeper history), but there’s no need for a fixme when this is synchronized periodically. By the way, it would only add 372 fixmes to the 30,000 already present.
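To make the suggestion concrete, here’s a minimal sketch of what that synchronization could look like; the function names and the mismatch handling are my own assumptions, not the bot’s actual code:

```python
# Minimal sketch of the "bind name and name:he" idea. Function names and the
# mismatch handling are assumptions for illustration, not the bot's code.

HEBREW_LETTERS = set("אבגדהוזחטיכךלמםנןסעפףצץקרשת")

def has_hebrew(text: str) -> bool:
    return any(ch in HEBREW_LETTERS for ch in text)

def sync_name_tags(tags: dict) -> dict:
    """Return the tag changes needed to keep name and name:he in sync."""
    name = tags.get("name")
    name_he = tags.get("name:he")
    if not name or not has_hebrew(name):
        return {}                   # nothing to synchronize
    if name_he is None:
        return {"name:he": name}    # duplicate the Hebrew name into name:he
    # Existing mismatch: leave it to humans (or a separate autofix pass).
    return {}
```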

For the record, http://overpass-turbo.eu/s/qmy is a query that finds elements in Israel that have a Hebrew “name” tag and a “name:he” tag that are different.

It has an option to output a CSV file by uncommenting the CSV output definition.
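For reference, a query along these lines should give roughly the same result through the Overpass API; the shortlink above is authoritative, and the area filter and Hebrew regex below are my own reconstruction:

```python
# Rough reconstruction of that kind of query via the Overpass API; the
# shortlink is authoritative, the area filter and regex below are my guesses.
# Requires the `requests` package.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

QUERY = """
[out:csv(::type, ::id, name, "name:he"; true; ",")];
area["ISO3166-1"="IL"][admin_level=2]->.il;
nwr(area.il)["name"~"[א-ת]"]["name:he"](if: t["name"] != t["name:he"]);
out;
"""

resp = requests.post(OVERPASS_URL, data={"data": QUERY}, timeout=180)
resp.raise_for_status()
print(resp.text)  # CSV rows: type, id, name, name:he
```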

The current element count is 62 nodes, 430 ways, and 11 relations.

There are also 586 in total if we count Arabic, Hebrew, and English, 372 of which are auto-fixable by swiftfast_bot.

Does everyone agree that we should always have Hebrew “name” tags duplicated in “name:he”? I asked prior to running the scripts and I think everyone agreed, but I cannot find the post anymore. (Rationale: the language of the “name” tag varies in Israel, but “name:he” guarantees Hebrew.)

I agree that the following cases should not be handled automatically:

  • name tags with foreign-language characters
  • name tags that are different from the name:he tags

The current rules are similar, except for the second point. I think they are roughly as follows: copy “name” to “name:he” only when all characters in the name tag are Hebrew, digits, English letters, spaces, or symbols, with at least one Hebrew character.

This allows copying things like “KSP מחשבים”. Do you think that’s a bad idea?
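As a sketch of how I read that rule (the allowed symbol set below is a guess, not the bot’s actual whitelist):

```python
# Sketch of the character rule: Hebrew, digits, English letters, spaces, or
# symbols only, with at least one Hebrew character. The allowed symbol set
# below is a guess; the bot's actual whitelist may differ.
import re

HEBREW_RE = re.compile(r"[\u05D0-\u05EA]")  # א through ת
ALLOWED_RE = re.compile(r"^[\u05D0-\u05EA\u05F3\u05F4A-Za-z0-9 '().,/&-]+$")

def is_copyable(name: str) -> bool:
    """True if this 'name' value may be duplicated into 'name:he'."""
    return bool(HEBREW_RE.search(name)) and bool(ALLOWED_RE.match(name))

# is_copyable("KSP מחשבים")   -> True   (Hebrew plus Latin letters)
# is_copyable("Herzl Street") -> False  (no Hebrew characters)
```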

I should publish the source code soon. (I wanted to fully automate it first, but that’s not going to happen any time soon.)

It’s not good enough because it has no notion of the English content and would also copy “Herzl הרצל”, a naming scheme we cleaned up in Jerusalem a while ago.

Why is that a bad thing? It wouldn’t introduce a new error; it would just leave an already existing error unfixed. This is similar to Sanniu’s opposition to the autofixes.

I think everything would be much simpler if we treated the bot as a convenience copy machine that “binds” name and name:he. If the bot copies a faulty tag, it’s not the bot’s fault, and it doesn’t really make things worse; a human needed to fix that anyway.

In that scenario, there was no error in the name:he tag, and the bot created one.
On the other hand, the human work needed for a fix is doubled by the bot.
As someone who has spent significant time manually fixing errors, I would like bots to “do no evil” rather than spread it.

The bot does not introduce additional fixing effort. In the case of “Herzl הרצל”, you had to fix both “name” and “name:he” regardless of the bot’s work. There were two errors (missing name:he, wrong name), and two errors remained.

(In fact, I’d argue the bot makes this slightly easier, because you don’t have to type “name:he” into JOSM; you just edit the value.)

On the other hand, the bot alleviates effort by fixing many cases (like “KSP מחשבים”).

Regarding autofixes (fixing cases where name and name:he both exist but mismatch, by tracing history): yes, a minority of nodes that had one error would end up with two, but most nodes that had one error would end up with zero. The net result is less manual work.

(Autofixes are not enabled, but the code works.)

Here are a few ideas for safe corrections that a bot could make to save manual work:

  • Remove leading spaces, trailing spaces, and multiple spaces from the “name” tag and all “name:*” tags
  • Add “outer” role on members of “type=polygon” and “type=boundary” relations

The bot supports this (and does it opportunistically); I’ll just need to fetch all of the bad names with an Overpass regex to do it for all nodes.
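For what it’s worth, the cleanup itself is simple; something like this sketch (illustrative, not the bot’s actual code):

```python
# Illustrative whitespace cleanup for name tags (not the bot's actual code).
# Candidates could be fetched with an Overpass tag regex along the lines of
# ["name"~"^ | $|  "], i.e. names with leading, trailing, or doubled spaces.
import re

def clean_whitespace(value: str) -> str:
    """Strip leading/trailing whitespace and collapse internal runs."""
    return re.sub(r"\s+", " ", value).strip()

def clean_name_tags(tags: dict) -> dict:
    """Return only the name / name:* tags whose value actually changes."""
    fixes = {}
    for key, value in tags.items():
        if key == "name" or key.startswith("name:"):
            cleaned = clean_whitespace(value)
            if cleaned != value:
                fixes[key] = cleaned
    return fixes

# clean_name_tags({"name": " רחוב  הרצל "})  ->  {"name": "רחוב הרצל"}
```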

Out of scope for this script, but that’s a good idea for a different script.

I’m all for saving manual work, and my point is that the “all characters in the name tag are either Hebrew, digits, English, space, or symbol characters, with at least 1 Hebrew character” rule and the “autofixes” have a net result of vast work savings for most nodes, despite copying an error in a minority of nodes.

Not just for nodes. Ways and relations too.

http://overpass-turbo.eu/s/qCx

It supports this for all primitives. I didn’t mean “nodes”. Sorry.

I’ve found that many fixes, done mainly by the Moveit team, are applied only to the name tag, leaving name:* unchanged, which leads to mismatches between English and Hebrew street names. Thanks to the old name:he tag these problems are easily detectable on KeepRight, but if you run your bot, all such problems will be papered over…

Not connected to this, but I was thinking about how to detect name:he/name:en tag mismatches. Maybe a table could be created for streets that have the same name:he but a different name:en in different cities or street parts?

That’s a good point. I’ll keep autofixes off.

I’ve been thinking of an algorithm that compares a Hebrew and an English string and decides if they’re likely the same. No table needed. It’s not tested yet: “Normalize” Hebrew and English, then compare. Normalization roughly as follows:

  • We start with a Hebrew and an English string and apply these steps:

  • Remove all vowels (u, a, ו, etc.)

  • Lowercase everything

  • Convert all Hebrew characters to English (א > a, ב > b, and so on).

  • Normalize problematic / phonetically similar characters (e.g. b, v, ב all become b)

  • Normalize the remaining problem characters, like צ, which may transliterate to ts, tz, etc. (this requires real-world testing)

Now compare the strings. If they’re not identical, mark the node as suspicious. False positives will help me refine this.
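To make the idea concrete, here’s an untested sketch; the transliteration table and consonant folding are placeholder guesses that would need the real-world tuning mentioned above:

```python
# Untested sketch of the comparison idea above. The transliteration table and
# consonant folding are placeholder guesses, not a worked-out mapping.

TRANSLIT = {
    "א": "", "ב": "b", "ג": "g", "ד": "d", "ה": "h", "ו": "", "ז": "z",
    "ח": "h", "ט": "t", "י": "", "כ": "k", "ך": "k", "ל": "l", "מ": "m",
    "ם": "m", "נ": "n", "ן": "n", "ס": "s", "ע": "", "פ": "p", "ף": "p",
    "צ": "ts", "ץ": "ts", "ק": "k", "ר": "r", "ש": "sh", "ת": "t",
}
FOLD = {"v": "b", "w": "b", "c": "k", "q": "k", "f": "p"}  # similar consonants
VOWELS = set("aeiou")

def normalize(text: str) -> str:
    out = []
    for ch in text.lower():
        for c in TRANSLIT.get(ch, ch):     # Hebrew -> rough Latin, else as-is
            c = FOLD.get(c, c)             # fold phonetically similar letters
            if c in VOWELS:                # drop vowels
                continue
            if c.isalnum():                # drop spaces, punctuation, etc.
                out.append(c)
    return "".join(out)

def likely_same(hebrew: str, english: str) -> bool:
    """True if the two names look like transliterations of each other."""
    return normalize(hebrew) == normalize(english)

# likely_same("שלום", "Shalom")  ->  True   ("shlm" == "shlm")
# likely_same("שלום", "Herzl")   ->  False  ("shlm" != "hrzl")
```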