I recently attended an international conference where a PhD student presented a comparison tool designed to find differences between documents marked up in an XML grammar used for tagging academic articles. The purpose of this tool is to improve the peer-review process for such articles. As every good PhD student should, he began with a review of the existing tools available on the market, both commercial products and free open-source software. This review gave each tool a score out of 36, calculated by running it through a series of tests designed to determine its suitability for the task of finding differences in academic articles. DeltaXML featured within this review and was allocated only 11 points out of 36. I have to admit to feeling a little affronted!
I had to ask myself why we had scored so poorly, particularly when I knew for certain that many of the features outlined as desirable in an XML differencing tool are present in XML Compare. I decided to do a little bit of digging.
First of all, when and how was our tool evaluated? We offer a 14-day evaluation licence for XML Compare, and this includes a fully functional version of the product. After a little searching I found a record of an evaluation from a couple of years ago, but surprisingly I found no record of any conversation with our technical sales team. While we understand that nobody wants to be pestered by over-eager salespeople, we regularly email evaluators because we know that out-of-the-box results might not be exactly as expected; crucially, we can help you set things up to get better results. We have over 20 years’ experience in configuring our products, we know the output format inside out, and our technical team all receive extensive training in XSLT, the XML language used to write configuration pipelines for XML Compare. During your evaluation we will work closely with you to fine-tune the comparison to your specific needs. This is a free-of-charge service that we offer to all of our evaluators, and we’d love everyone to take advantage of it.
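To give a flavour of what such a configuration step can look like, here is a minimal, purely illustrative XSLT input filter (not an actual DeltaXML-supplied filter) of the kind that might run before comparison, so that cosmetic whitespace differences are not reported as changes:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical pre-comparison filter: an identity transform that
     collapses runs of whitespace in text nodes, so purely cosmetic
     formatting differences do not show up in the comparison result. -->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="2.0">

  <!-- Identity template: copy everything through unchanged. -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- Normalize whitespace in text nodes. -->
  <xsl:template match="text()">
    <xsl:value-of select="normalize-space(.)"/>
  </xsl:template>

</xsl:stylesheet>
```

Real-world configurations go much further than this, of course, which is exactly why talking to someone who writes these pipelines every day pays off.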
So what of our 11 out of 36 score? The final part of my investigation was to get hold of the test files that were used to review XML Compare and apply a configuration pipeline that would gain a more respectable score. The result? Well, the test scoring system was far from clear, but by my reckoning the score is at least 33 out of 36, if not 34. What we’ll never know is this: would 33 out of 36 have been good enough to fulfil the stated requirement and save all those years of PhD research?
If you’re looking for a comparison tool that will handle your specific XML grammar, then consider DeltaXML’s XML Compare. Take advantage of our 14-day evaluation and if you don’t get the result you want right out of the box please, please don’t ignore our technical sales team when they get in touch – they really do know what they’re doing! And who knows, you could save yourself years’ worth of effort writing, debugging, and maintaining your own code.
Don’t build it, buy it.