It is true, but originally that wasn’t even my intent. What I want to do is make wiki parsing easier. So as a start, one would simply replace any code that accesses the MediaWiki API and searches for templates in the results with an access to my JSON representation of wiki templates.
Of course it would be nice if dynamic pages could be generated directly based on that JSON, but as you say this might create additional requirements on the server, such as filtering. Do you think this is possible without semantic knowledge specific to each application?
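One way filtering could work without application-specific semantics is to compare field names and values supplied in the query purely generically. The following is a minimal sketch of that idea; the field names (`platform`, `license`) and sample records are invented for illustration, not taken from the actual template JSON.

```python
# Hypothetical sketch: generic, semantics-free filtering over template JSON.
# The server only compares field names and values from the query; it does not
# need to know what "license" or "platform" mean.

def filter_templates(templates, criteria):
    """Return templates whose fields match every key/value pair in criteria."""
    def matches(template, key, wanted):
        value = template.get(key)
        if isinstance(value, list):      # list-typed fields: membership test
            return wanted in value
        return value == wanted           # scalar fields: equality test

    return [t for t in templates
            if all(matches(t, k, v) for k, v in criteria.items())]

templates = [
    {"name": "AppA", "platform": ["Android", "Linux"], "license": "GPL"},
    {"name": "AppB", "platform": ["Windows"], "license": "proprietary"},
]
print(filter_templates(templates, {"platform": "Linux", "license": "GPL"}))
```

The only convention the server needs is the field typing itself (list vs. scalar), which argues for the strongly typed fields discussed elsewhere in this thread.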
I’m not sure if I fully understand this. Would you expect the user to paste the template code into that validator, or would that functionality become available somehow after saving the page?
There are a lot of successful examples: Google Play and the App Store.
Anyway, it is just a database. We have to think about how to design its structure, and to design the structure we need a list of all use cases and requirements. I will prepare the requirements for the Catalogue.
We can provide three options:
An option to parse templates the way it works right now. We would just have to add some samples and documentation, and maybe implement a template parser in Java?
An option to download the complete new JSON file. The difference from the first option is that this file is easy to parse.
An option to download part of the new JSON file using server-side filtering.
The options would be implemented one by one, in the order listed, and users would choose whichever suits them. It is like the .NET/Java approach: you can start at any level of abstraction (File → Stream → Serializer).
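To make the first option concrete, here is a minimal sketch of what "parse templates as they are" might look like. Real MediaWiki templates can nest and span multiple lines, so this flat, regex-based parser is only an illustration of the idea, and the `Software` template and its parameters are invented examples.

```python
import re

# Minimal sketch of option 1: extract a template name and its key=value
# parameters from flat wikitext. Nested templates are NOT handled; a real
# parser would need proper brace matching.

def parse_template(wikitext):
    """Return {"name": ..., "params": {...}} for the first flat template found."""
    match = re.search(r"\{\{([^|}]+)\|([^}]*)\}\}", wikitext)
    if not match:
        return None
    name = match.group(1).strip()
    params = {}
    for part in match.group(2).split("|"):
        if "=" in part:
            key, _, value = part.partition("=")
            params[key.strip()] = value.strip()
    return {"name": name, "params": params}

print(parse_template("{{Software|name=AppA|price=free|platform=Android}}"))
```

Comparing this with simply loading a ready-made JSON file shows why option 2 is the easier consumer experience: the parsing burden moves to the generator.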
I prefer “after saving the page”.
If the wiki does not support triggers, we can add a button to regenerate the JSON file.
I believe it is better to have strongly typed fields. If a field is of text type, the consumer should use it as text; if a field is of list type, the consumer should treat it as a list. Otherwise it is hard to keep the software under control. What we need is a template validator (to fix errors at edit time) and a JSON generator with an option to generate JSON files in deprecated formats, so that old consumers remain supported at least for some time in case the format changes. For example, right now the price field is a list, and OSM-JSON v1 will have price as a list. However, I would like to have separate fields per currency per app store. If we change the JSON structure, we will still be able to generate OSM-JSON v1.
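The price example above can be sketched as a small down-conversion step in the generator. The field names (`prices`, `price`) and the v1 shape are assumptions made up for illustration; the point is that the generator, not each consumer, absorbs format changes.

```python
# Hypothetical sketch: generate the deprecated OSM-JSON v1 shape (price as a
# list) from a newer per-currency structure. Field names are invented.

def to_osm_json_v1(record):
    """Collapse per-currency price fields back into the v1 list-typed field."""
    v1 = dict(record)
    prices = v1.pop("prices", {})        # e.g. {"USD": "1.99", "EUR": "1.79"}
    v1["price"] = [f"{amount} {currency}"
                   for currency, amount in sorted(prices.items())]
    return v1

new_record = {"name": "AppA", "prices": {"USD": "1.99", "EUR": "1.79"}}
print(to_osm_json_v1(new_record))
# price becomes ["1.79 EUR", "1.99 USD"], sorted by currency code
```

Old consumers keep reading the v1 file unchanged, while new consumers can switch to the richer structure at their own pace.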
I would be happy to make OSM Software Catalog one of the official applications. But I think TTTBot should be fixed too; at least Linux users need it, as do those who do not want to run any software locally.
Dmytro Ovdiienko, thanks for the effort, but the software catalog should be web-based (without any installation or platform dependency) and available in multiple languages (otherwise it offers little over what we had). Filtering is a nice feature, but it is not crucial compared to simple translated X/Y tables: software/feature(s).
Translated tables allow you to “filter” by one feature only. Try to find a freeware navigator application that can calculate routes offline and supports vector maps.
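That three-criteria query is exactly what a single static table cannot express. A sketch of it, with invented catalog entries and feature names just for illustration:

```python
# Sketch of the multi-feature query a translated table cannot answer.
# Entries and boolean feature columns are made up for illustration.

catalog = [
    {"name": "NavA", "freeware": True,  "offline_routing": True,  "vector_maps": True},
    {"name": "NavB", "freeware": True,  "offline_routing": False, "vector_maps": True},
    {"name": "NavC", "freeware": False, "offline_routing": True,  "vector_maps": True},
]

wanted = ("freeware", "offline_routing", "vector_maps")
result = [app["name"] for app in catalog if all(app[f] for f in wanted)]
print(result)  # only NavA satisfies all three criteria at once
```

Each static table answers one column of this query; combining several criteria requires either many pre-built tables or a filtering interface over the database.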
Boolean columns (such as feature X present/absent) are easy to sort using HTML widgets.
“Operating system” would indeed be an interesting filtering criterion - but again, you can display multiple static tables instead of a database plus a filtering interface.
Simple reports with sane defaults are easier to use than a database plus a filtering interface. There are too many steps in filtering for our use case. You don’t want to send a link to a user, then ask him to enter the right filters, and then discuss what you see together. It should work with a single link and no actions from the other party.
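One common way to reconcile filtering with the “single link, no actions” requirement is to encode the filter state in the URL, so the recipient opens exactly the same report. A sketch, where the base URL and parameter names are hypothetical:

```python
from urllib.parse import urlencode, parse_qs

# Sketch: encode filter state into a shareable link. Base URL and parameter
# names are invented for illustration.

BASE = "https://wiki.openstreetmap.org/software"

def report_link(filters):
    """Build a single URL that fully describes the filtered report."""
    return BASE + "?" + urlencode(sorted(filters.items()))

link = report_link({"freeware": "yes", "offline_routing": "yes"})
print(link)

# The receiving side reconstructs the same filters from the query string:
query = link.split("?", 1)[1]
print({k: v[0] for k, v in parse_qs(query).items()})
```

With this approach a filtering interface and a one-click shared report are not mutually exclusive: the sender does the filtering once, and the link carries it.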
I believe it is a good idea to add a link to the Software page to the menu on the left side of the wiki page (below the “The Map” link). The variety of available programs is what differentiates OSM.