Weather reporting

As an example of a practical "XML over HTTP" application that would look good enough to an end-user... and motivated by a friend mentioning that he was going to convert one of his old laptops into a digital picture frame... I have been working on an "XML over HTTP" application for weather reports.

The weather reports come in as XML over HTTP. The idea is that the application chooses an appropriate pretty picture for the weather forecast for tomorrow, combines it with a little overlay using SVG, and outputs a picture that is set as the Windows desktop wallpaper by a third application (I'm thinking Active Desktop or something more secure). This means that when using the application, your desktop wallpaper is a pretty picture that matches tomorrow's weather, with all the info on the side in case the picture doesn't mean much to you.
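The first step of that pipeline, fetching the XML and pulling out tomorrow's forecast, could look roughly like this. This is only a sketch: the element and attribute names (`forecast`, `day`, `weather`, `cloudiness`) are made up for illustration, since the real feed's schema will differ.

```python
import xml.etree.ElementTree as ET

# In the real application the document would come from the network, e.g.:
#   xml_text = urllib.request.urlopen(forecast_url).read()

def tomorrows_forecast(xml_text):
    """Return (weather, cloudiness) for the forecast labelled 'tomorrow'."""
    root = ET.fromstring(xml_text)
    for day in root.findall("forecast"):
        if day.get("day") == "tomorrow":
            return day.findtext("weather"), int(day.findtext("cloudiness"))
    return None

# Example document in the assumed shape:
sample = """<weatherReport>
  <forecast day="today"><weather>rain</weather><cloudiness>7</cloudiness></forecast>
  <forecast day="tomorrow"><weather>clear</weather><cloudiness>1</cloudiness></forecast>
</weatherReport>"""

print(tomorrows_forecast(sample))  # ('clear', 1)
```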

The current problem is that the weather report distinguishes 8 levels of cloudiness combined with 16 types of weather, which multiplies out to 128 distinct weather states, far more than I want to end up finding pictures for. Right now the data structure is set up so that the 16 weather types have default pictures that are used when separate per-cloudiness pictures aren't set. This, however, makes the application dull and boring when not enough pictures have been entered.
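The fallback idea can be sketched as a simple two-step dictionary lookup: try the exact (weather, cloudiness) pair first, then fall back to the weather type's default. The file names here are invented for illustration.

```python
# Picture table: a None cloudiness key marks the default picture
# for that weather type.
pictures = {
    ("clear", 1): "clear_mostly_sunny.jpg",
    ("clear", None): "clear_default.jpg",
    ("rain", None): "rain_default.jpg",
}

def pick_picture(weather, cloudiness):
    """Exact match first, then the per-type default."""
    return (pictures.get((weather, cloudiness))
            or pictures.get((weather, None)))

print(pick_picture("clear", 1))  # clear_mostly_sunny.jpg
print(pick_picture("rain", 5))   # rain_default.jpg (fallback)
```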

Ultimately, to make this application work out, it would have to be a community effort: a website would allow visitors to enter picture URLs and associate them with different weather states, or link them to weather transitions (the application could pick a picture depending on what weather today is turning into what weather tomorrow, for an added sense of realism). The server would then let the user pick, or randomly select, a suitable image from the pool and use it for that user's wallpaper. The problem with this scenario is, of course, that the weather reports are for Slovenia, and finding a suitably sized community of picture authors is somewhat unlikely.

Still, this is an interesting project. I will prepare the code and try to make it as universal as possible, then leave it up to the end-users whether they want to make it live or not. The current code is able to interpret a weather forecast and output a picture using SVG. I still have to add rendering into a format Windows can display, and the overlay box with the details.


Human Resource Management

Thomasz (from NET) and I discussed creating a computerized interface for Human Resource Management during the last weekly NET meeting. While a current interface exists (the user addition thingy for Joomla, which is essentially a forum member registration and login tool), it is not very practical: when a new organization comes to work with us, we can't immediately produce a list of things we could do for them as a team of experts.

If a similar situation arose at my job, I would definitely have used a database to solve it. Databases are uniquely suited to this type of problem: enter your people into a table, fill out the data you can, and do your best to keep it up to date, either through convenience or automation. Then when somebody asks you a question, pull up a query and tell them.

Doing something like this for any group of people larger than six, without a database, would be a major nightmare: how can you give someone an answer you are honestly sure of, when they present you with a set of complex criteria and you have 50 people to consider and compare? Obviously, however, a database is not good at subjective comparisons either, so the key to setting up a database you can use is setting up the fields so that comparing them with logic functions or simple math gives you the answers you'll need.
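As a minimal sketch of what that looks like in practice, here is the idea in SQLite. The choice of fields (skills in a separate table, availability as hours per week) is just one possible design, not the one we'll necessarily use; the point is that objective fields can be compared with plain SQL logic.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, hours_per_week INTEGER);
CREATE TABLE skill  (person_id INTEGER, skill TEXT);
""")
db.executemany("INSERT INTO person VALUES (?,?,?)",
               [(1, "Ana", 10), (2, "Bor", 4), (3, "Cene", 12)])
db.executemany("INSERT INTO skill VALUES (?,?)",
               [(1, "PHP"), (1, "XML"), (2, "PHP"), (3, "XML")])

# "Who knows XML and can spare at least 8 hours a week?"
rows = db.execute("""
    SELECT p.name FROM person p JOIN skill s ON s.person_id = p.id
    WHERE s.skill = 'XML' AND p.hours_per_week >= 8
    ORDER BY p.name
""").fetchall()
print([r[0] for r in rows])  # ['Ana', 'Cene']
```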

So I remembered the Semantic Wiki extension I had set up on our wiki purely out of curiosity. The last time I was working with it, I realized it produced database-like functionality: if you give the properties in your articles their values, you are able to query your wiki for the data and end up with a value, a list or a table. I thought that was really cool and still do. For this particular situation, we will be using this functionality to experiment with different fields (table columns, in database-speak) and seeing how they work out for the output we need.
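For readers who haven't seen it, such a query uses Semantic MediaWiki's inline `#ask` syntax. The property and category names below (Member, Skill, Email, Hours per week) are made-up examples, not the ones we will actually define:

```
{{#ask: [[Category:Member]] [[Skill::XML]]
 |?Email
 |?Hours per week
 |format=table
}}
```

Dropped into a wiki page, this renders a table of all members with the XML skill, along with the requested property columns.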

Once we have a good idea of what to pick, we will decide what to implement it in. At that time I will likely be discussing it with Igor. I am looking at the idea of keeping Semantic Wiki underneath, since it so conveniently produces "XML over HTTP" output, which would work neatly with you-know-what. But then again, it is also very complicated, especially considering that we won't need this flexibility once we are all set, and that reimplementing "XML over HTTP" output using PHP is child's play.

We will see.


Article done

The article I was writing, Technocratic information exchange, is now 'complete'. Actually, section 7.3 is still not finalized, since the software package on which it is based is not yet complete.

Accepting peer review on the subject here. I still have to prepare a presentation for it... but these are things I can do simultaneously.


Writing article

I am currently working on an article regarding my XML over HTTP project mentioned here. It is taking shape on the NET article writing wiki here. I have named it "Technocratic information exchange".

I think I have managed to describe the concept much better in the article, and I have also managed to include comprehensive diagrams and references that I did not have an equal chance of including in my blogging and forum posting on the issue.

The article is not yet finished... Part 5, describing how my software development fits into all of this, is not yet written, and I usually write the introduction and conclusion last... as well as, of course, the abstract. But those do not take much effort; it is the creative stuff that is hardest to write.

I intend to present the content of the article during this year's NET AGM (Annual General Meeting), on May 15-17th in Umeå, Sweden, where I will of course also include all the progress that I have made on the project in the meantime. Get in touch with NET if you also wish to attend!


Publicizing XML over HTTP

A few days ago I started investigating the options for obtaining XML over HTTP data from devices managed by the government, which produce data that is already made publicly available in other formats.

I have been looking at atmospheric quality control sensors at two locations and traffic counting and categorization sensors in Ljubljana, as well as statistical data offered by the Slovenian Eurostat equivalent. If this data were made available using XML over HTTP, it would be easy for third parties to produce a wide variety of usable webpages with live data, from WAP-based traffic reports to national energy output analyses.

I have sent out a number of emails to different government and maintenance addresses in a quest for information. I had hoped to accumulate enough information to prepare a project I could do for them, with a complete solution that offers XML over HTTP. I have received some responses so far, so here is what I got:
* The statistical office of Slovenia reports that they are aware of the value of offering XML over HTTP and are planning to implement this in the next version of their website. All we can thus do for now is wait and see.
* The municipality of Ljubljana reports that data from their atmospheric quality control sensors is collected and made available on their website (as we already knew) and in a standard format over FTP, which can be made available for educational purposes upon agreement. FTP access can be scripted, thus the data could be made available using XML over HTTP.
* The maintainer of the traffic counting and categorization system reports that the devices call home using a GSM communicator, meaning the data is likely gathered on another system via some kind of automated mechanism. Further investigation would be needed to figure out whether an XML over HTTP access point could be created without incurring any additional costs.
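The FTP-scripting route boils down to: fetch the file, convert it, serve the result. As a sketch, assuming the sensor data arrives as simple CSV rows of (time, value) — the actual "standard format" on their server may well differ — the conversion step could be:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Fetching would be done with ftplib; the converted XML can then be
# served by any web server, giving us XML over HTTP.
def csv_to_xml(csv_text):
    """Convert 'time,value' CSV lines into a simple measurements document."""
    root = ET.Element("measurements")
    for time, value in csv.reader(io.StringIO(csv_text)):
        m = ET.SubElement(root, "measurement")
        m.set("time", time)
        m.text = value
    return ET.tostring(root, encoding="unicode")

print(csv_to_xml("08:00,41.2\n09:00,39.8"))
```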

I have also received a hint from Thomasz of NET on how to properly address government institutions. I plan to contact them in the name of NET with the framework ready for a project to enable XML over HTTP access, as well as to demonstrate a practical implementation of a third-party XML data presentation program.


Senzor update 2

I have been working some more on the project to create an example program for an industrial application that utilizes XML over HTTP. The program's data output now works flawlessly for 1 channel, which includes outputting the data into an SVG-based graph. I have also set up the appropriate mechanisms to ensure that the XML to SVG processing is done client-side when supported (Firefox, Opera, etc.) and server-side when not (Internet Explorer).
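The client-side half of that mechanism is the standard `xml-stylesheet` processing instruction: a browser with XSLT support applies the transform itself when it loads the raw XML. The file and element names here are illustrative, not the project's actual ones:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="graph.xsl"?>
<data channel="1">
  <!-- measurements go here; graph.xsl turns them into the SVG graph -->
</data>
```

For browsers without XSLT or SVG support, the server can detect this (e.g. from the User-Agent header), apply the same stylesheet itself, and send the rendered result instead.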

In the future I will work on making the user interface as convenient and pretty as it can be, on expanding the hardware I/O software to read from all 16 channels, and on looking for a way to implement feedback.


RSS - continued 5

I updated the XSLTaggregator code to properly handle RFC 822 dates, for sources that omit the weekday.
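The weekday in an RFC 822 date is indeed optional. To illustrate the two forms the code now has to accept (in Python rather than the XSLT the aggregator actually uses), the standard library parses both to the same moment:

```python
from email.utils import parsedate_to_datetime

# RFC 822 allows "Tue, 05 May 2009 12:00:00 GMT" but also the same
# date with the weekday omitted; both must denote the same instant.
with_day = parsedate_to_datetime("Tue, 05 May 2009 12:00:00 GMT")
without_day = parsedate_to_datetime("05 May 2009 12:00:00 GMT")
print(with_day == without_day)  # True
```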


RSS - continued 4

I have made a few minor adjustments to the XSLTaggregator program. Apparently I got part of the RSS 2.0 standard wrong: RSS 2.0 tags have no namespace. I corrected this, and validation now succeeds. This also makes the RSS feed viewable in Internet Explorer 7.
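That matches the RSS 2.0 spec: core elements like `channel` and `item` live in no namespace at all (only extension modules such as Dublin Core use one). A quick illustration in Python:

```python
import xml.etree.ElementTree as ET

rss = """<rss version="2.0"><channel>
  <title>NET Blogs</title>
  <item><title>Hello</title></item>
</channel></rss>"""

root = ET.fromstring(rss)
# Plain, un-namespaced paths work; a '{some-uri}title' lookup
# would find nothing, because there is no namespace to match.
print(root.findtext("channel/title"))        # NET Blogs
print(root.findtext("channel/item/title"))   # Hello
```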

The source code ZIP is automatically updated with the latest code of course and I have also updated the websites that offer these combined feeds, including the one that offers the NET Blogs feed.

EDIT: I also got the idea to set up a combined RSS feed for all Zeitgeist Movement blogs. I don't know if the idea will work out but I will try. Posted about it on the Zeitgeist Movement forums. Also set up a common website for all XSLTaggregator installations I intend to set up.


Biogas update 2

Just thought I'd also mention that the car modification company, mentioned in an earlier blog, has replied to my email, answering that the conversion costs approx. 1150 € plus taxes and that cars can be converted to run on methane too.

This basically means that the technology is all there and there is nothing more to ponder about in that respect.


RSS - continued 3

The XSLTaggregator program has been updated to include a properly functioning sorting algorithm. While this is not actually necessary, as any properly designed RSS tool should be able to display the entries in the proper order anyway, this improvement is meant for all the improperly designed RSS tools. ;) The order of choice is latest-first, the same as typical blogs.
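In Python terms (the real implementation is XSLT 1.0, where date sorting takes a bit more gymnastics), latest-first ordering amounts to sorting on the parsed pubDate in reverse; the item data below is invented for illustration:

```python
from email.utils import parsedate_to_datetime

items = [
    {"title": "Older",  "pubDate": "Sat, 02 May 2009 08:00:00 GMT"},
    {"title": "Newest", "pubDate": "Tue, 05 May 2009 12:00:00 GMT"},
    {"title": "Oldest", "pubDate": "Fri, 01 May 2009 09:30:00 GMT"},
]

# Sort on the parsed date, newest entry first.
items.sort(key=lambda i: parsedate_to_datetime(i["pubDate"]), reverse=True)
print([i["title"] for i in items])  # ['Newest', 'Older', 'Oldest']
```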

I would like to, again, thank the helpful people who helped me:
* Combine the for-each loops that check all items in all documents into a single one
* Properly sort by date in XSLT 1.0

The code is available on the existing link published in the last blog post.

Also, apparently the other members of NET have agreed to set up blogs for the purpose, which is great, as it means my code will actually be used in at least one place.


RSS - continued 2

The source code for the XSLT-based RSS aggregator is now complete and released under the GPL (v3) at:

There is hopefully sufficient documentation included with the files. Some very basic technical skills are required to get it doing what you'd want it to do, but it's all very clean and flexible. It is officially faster than Planet, too (benchmarked on my Celeron 300 MHz).

If there will be any modifications to the code, they will always be published under that link.