It is possible to compare very large datasets using XML Compare; example test data has demonstrated that files over 1GB in size can be loaded from disk and compared in around 7 minutes.
NOTE: The Professional Named User licence for DeltaXML has a node limit and therefore will NOT process very large files. Please contact us if you need an evaluation to check processing of large files.
There are many different factors that affect performance with large files, apart from the CPU type and speed and the amount of physical memory, both of which must be adequate for the job. Some of the more important factors are discussed in the following sections. We also provide some typical metrics for DeltaXML on basic machines.
White space nodes are generally significant in XML files. Each sequence of white space characters, e.g. a newline or space, can be treated as a text node in the XML file. This can increase the memory image size and slow the comparison process. It can also result in differences being identified that are not significant.
In many situations white space nodes are not important and can be ignored. If a file has a DTD, an XML parser can use this as the file is read in to identify whether white space is ignorable or not. If a white space node is ignorable, for example because it appears between two markup tags, DeltaXML will ignore it in the comparison process.
If there is no DTD, white space nodes should be removed either using an editor or by processing with an XSL filter such as normalize-space.xsl, though using XSL can be time-consuming for large files. The delta files generated by DeltaXML have no white space added to them: if you look at them in an editor you will see that new lines are added only inside tags. This may look strange at first, but it is an effective way to have shorter lines without adding white space nodes to an XML file. White space inside tags will be ignored by any XML parser.
Remember also that indentation of PCDATA within a file has an effect: often white space in PCDATA and attributes should be normalized before comparison. Otherwise, again, there will be a lot of differences reported that are not important.
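As a rough illustration of this kind of pre-comparison clean-up (this is a minimal Python sketch, not DeltaXML's own normalize-space.xsl filter), white-space-only text nodes can be dropped and runs of white space in PCDATA and attribute values collapsed before the files are compared:

```python
# Minimal sketch of white space normalization before comparison:
# drop white-space-only text nodes and collapse internal runs of
# white space in remaining text and attribute values.
import re
import xml.etree.ElementTree as ET

def normalize_ws(elem):
    """Recursively normalize white space below elem, in place."""
    for child in elem:
        normalize_ws(child)
    # Text that is pure white space (e.g. indentation between tags)
    # becomes None, i.e. no text node at all.
    if elem.text is not None:
        elem.text = re.sub(r"\s+", " ", elem.text).strip() or None
    if elem.tail is not None:
        elem.tail = re.sub(r"\s+", " ", elem.tail).strip() or None
    elem.attrib = {k: re.sub(r"\s+", " ", v).strip()
                   for k, v in elem.attrib.items()}

doc = ET.fromstring("<a>\n  <b>  hello   world </b>\n</a>")
normalize_ws(doc)
print(ET.tostring(doc, encoding="unicode"))  # <a><b>hello world</b></a>
```

After normalization the two inputs differ only where their significant content differs, so the comparator no longer reports indentation changes.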
There is a performance difference between comparing 'flat' XML files, i.e. those with a large number of records at one level, and more deeply nested files; the latter tend to require less processing because there are fewer nodes at each level. Comparison of orderless data is generally slower.
Performance is also affected by the number of differences: it is quickest when there are no differences! The more differences there are, the slower the comparison process, because the software is trying to find a best match between the files. The LCS algorithm used in DeltaXML for pattern matching of ordered sequences has optimal performance for small numbers of differences and slows significantly for large numbers of differences.
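DeltaXML's LCS implementation is internal, but Python's difflib.SequenceMatcher is built on the same idea of aligning ordered sequences, and illustrates the point: a long run of identical records is matched as a single block, so a file with one deleted record reduces to one small edit operation.

```python
# Illustrative sketch (not DeltaXML's own algorithm): LCS-style matching
# of two ordered sequences of records that differ by a single deletion.
from difflib import SequenceMatcher

old = ["rec%d" % i for i in range(10)]
new = old[:4] + old[5:]          # the same records with "rec4" deleted

sm = SequenceMatcher(a=old, b=new, autojunk=False)
# Everything except the single deletion is matched as 'equal' blocks.
ops = [op for op in sm.get_opcodes() if op[0] != "equal"]
print(ops)  # [('delete', 4, 5, 4, 4)]
```

With many scattered insertions and deletions the matcher has far more candidate alignments to explore, which is why heavily-changed files take longer to compare.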
DeltaXML shares text strings, so many different text strings will result in a larger memory image and may cause the program to hit memory size limitations sooner. On the other hand, files with many identical strings will be stored very efficiently.
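The effect of string sharing can be sketched with Python's sys.intern (the mechanism DeltaXML uses internally may differ): repeated identical text values all reference one shared object, while distinct values each need their own storage.

```python
# Sketch of the string-sharing idea: identical text values reference one
# shared object rather than being stored as separate copies.
import sys

# A thousand records all carrying the same text value.
values = [sys.intern("status=" + "OK") for _ in range(1000)]

# Every list entry is the *same* object, so the text is stored once.
print(all(v is values[0] for v in values))  # True
```

A file with a thousand distinct text values would instead hold a thousand separate strings, producing a correspondingly larger memory image.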
The DeltaXML API has the ability to generate a delta file with 'changes-only' or a 'full delta' that includes unchanged data as well.
The time for comparison and the memory required is independent of the type of delta being produced. However the full-context delta output is typically larger and will require more disk IO and CPU time to write to disk.
The size of the JVM heap is one of the main factors determining the size of datasets which DeltaXML can process. The size of the heap, the amount of available RAM and other JVM configuration options affect both capacity and performance (too small a heap will result in excessive garbage collection; similarly, not enough RAM will cause performance degradation). The following guidelines are suggested:
The java -Xmx argument can be used to increase the fairly small default JVM heap size. For example, invoking java with the -Xmx512m command-line argument will allocate half a gigabyte of RAM to the JVM heap.
Ensure that the heap size requested with the -Xmx argument is available as free RAM.
Using the server JVM (java -server ...) is recommended for best performance.
The use of Multiple Page Size Support (java -XX:+UseMPSS ...) on Solaris provided a 5% runtime improvement in testing, with no measurable memory overhead.
The incremental garbage collector (java -Xincgc ...) showed no benefit when tested.
We expected that the parallel garbage collector (java -XX:+UseParallelGC ...) would provide improved run times on multiprocessors, as garbage collection could occur concurrently on a separate CPU. It actually had the opposite effect, doubling the elapsed runtime and trebling the CPU time consumed.
Reading from disk-based files, for example using the command.jar command-line interpreter, is typically slower than processing SAX events produced from an existing in-memory data representation. In addition to the reduced disk IO, a more significant speedup arises from avoiding the lexical analysis/tokenization otherwise performed by a SAX parser. We also recommend testing different SAX parsers and comparing their performance on your data if you need to read XML files from disk.
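The two input routes can be contrasted with a small Python sketch (the principle carries over to any SAX implementation): the same handler is driven once by a parser that must tokenize serialized XML, and once by pre-built events fired directly from program code, with no tokenization step at all.

```python
# Contrast: parsing serialized XML versus firing equivalent SAX events
# directly from an in-memory representation.
import xml.sax
from xml.sax.handler import ContentHandler

class ElementCounter(ContentHandler):
    def __init__(self):
        super().__init__()
        self.count = 0
    def startElement(self, name, attrs):
        self.count += 1

# Route 1: the parser must lexically analyse the byte stream.
parsed = ElementCounter()
xml.sax.parseString(b"<a><b/><b/></a>", parsed)

# Route 2: an in-memory producer fires the equivalent events directly.
direct = ElementCounter()
direct.startDocument()
direct.startElement("a", {})
for _ in range(2):
    direct.startElement("b", {})
    direct.endElement("b")
direct.endElement("a")
direct.endDocument()

print(parsed.count, direct.count)  # 3 3
```

Both routes deliver identical events to the consumer; the second simply skips the parsing cost.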
It is difficult to give accurate performance metrics for the reasons outlined above. But some examples may help as an indication.
There are very few large XML datasets which are publicly available, so we have used the XMark benchmark generator from: http://www.xml-benchmark.org/. This is typically used for testing XML content repositories/databases and XQuery implementations. Suggestions for alternative benchmark data, particularly documents, are welcome.
Example files of test data were generated using the following command lines with the xmlgen application; the intention was to generate files of around 1GByte in size:
$ ./xmlgen -v
This is xmlgen, version 0.92
by Florian Waas (email@example.com)
$ ./xmlgen -f 10.0 -d -o f10.xml
$ ./xmlgen -e -o auction.dtd
$ ls -l f10.xml
-rw-rw-r-- 1 nigelw staff 1172322571 Nov 7 16:36 f10.xml
Some characteristics of the generated file are described using the following XPaths and their result values:
While the shortest runtime is obtained from an identity comparison (i.e. comparing the same file or data with itself), we wanted a more realistic test with a small number of changes. To achieve this we deleted 7 random grandchild elements from the data, choosing from the elements with large numbers of children.
The following XPaths describe the elements which were deleted:
This 'trimmed' file was called f10t.xml and was slightly smaller than the original input.
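The trimming step can be sketched as follows (this is a hypothetical reconstruction, not the script actually used for f10t.xml; the element names here are invented for illustration): collect all grandchild elements, pick a few at random with a seeded RNG, and remove them.

```python
# Hypothetical sketch of producing a "trimmed" test file: delete n random
# grandchild elements so the comparison has a small, known set of changes.
import random
import xml.etree.ElementTree as ET

def trim(root, n, seed=0):
    """Remove n randomly chosen grandchildren (children of root's children)."""
    rng = random.Random(seed)  # seeded so the trim is reproducible
    pairs = [(child, gc) for child in root for gc in list(child)]
    for child, gc in rng.sample(pairs, n):
        child.remove(gc)

# Toy stand-in for the generated dataset: 3 containers of 10 records each.
src = "<site>" + ("<people>" + "<person/>" * 10 + "</people>") * 3 + "</site>"
root = ET.fromstring(src)

before = sum(len(child) for child in root)
trim(root, 7)
after = sum(len(child) for child in root)
print(before, after)  # 30 23
```

Writing the modified tree back out yields a slightly smaller file with exactly n known differences from the original.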
Test hardware was a Sun x4100, with:
Test software was XML Compare 5.2
The command-line driver was used to run and time the tests. The following command was used:
$ time java -server -d64 -Xmx6g -jar
/usr/local/deltaxml/DeltaXMLCore-5_2/command.jar compare delta f10.xml f10t.xml
DeltaXML Command Processor, version: 1.5
Copyright (c) 2000-2008 DeltaXML Ltd. All rights reserved.
Using: XML Compare, version: 5.2
The above command represents the basic, default command-line usage. The UNIX time results show a comparison time of around 7.5 minutes for these 1GByte data files. Faster times were obtained with the following techniques:
"Indent=no"to the command-line saves around 10 seconds from both the CPU and elapsed times.
"Enhanced Match 1=false"to the command line reduces the times to 6m38s real/9m05s user.
Some further issues relating to performance are discussed below.
We welcome feedback on these results and are prepared to look at tuning and performance issues for customers through our normal support channels. Any suggestions for large XML datasets which can be used for benchmarking and performance testing would also be welcomed.
We often have enquiries about handling large files, 500Mb to several Gb. Here are a few other comments and suggestions.
Download an evaluation of XML Compare to try it on your own files. The Professional Named User edition has a 1M node limit - it will tell you if you hit this. The Professional Server and Enterprise do not have this limit. If you do word-by-word comparison the node count goes up a lot because the text is split into words and a word is counted as a node in the XML tree.
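A rough sketch of why word-by-word comparison inflates the node count (the exact counting rules are DeltaXML's; this only illustrates the principle): each text node is split into one node per word, so text-heavy documents grow quickly.

```python
# Illustration of node-count inflation under word-by-word comparison:
# each text node is split into one node per word.
import xml.etree.ElementTree as ET

def node_count(root, by_word=False):
    count = 0
    for elem in root.iter():
        count += 1                      # the element itself
        if elem.text and elem.text.strip():
            words = elem.text.split()
            count += len(words) if by_word else 1
    return count

doc = ET.fromstring("<p>the quick brown fox jumps</p>")
print(node_count(doc), node_count(doc, by_word=True))  # 2 6
```

A single paragraph of five words goes from 2 nodes to 6, so a long document can approach a node limit much sooner when compared word by word.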
You would need to have sufficient memory - exactly how much depends on the nature of the data but for 500Mb files we would initially suggest somewhere in the 4 to 8 Gb range.
DeltaXML will work fine in a 64 bit environment, provided that you:
1. use a 64 bit OS and hardware
2. use a 64 bit JVM and invoke it appropriately.
For example, use the command line access like this:
$ java -Xmx4g -jar command.jar compare delta file1.xml file2.xml
The java -version command will often report the use of a 32 or 64 bit JVM, for example:
$ java -version
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)
Add the -d64 argument if your default JVM is reported as 32 bit, and use the -Xmx argument to adjust the heap size should you get any Java memory exceptions. The example above was for 4GBytes, which works well on a Mac desktop with 8GB of RAM.
We spend time optimizing our products and associated XML tools for lower memory footprints - some recent work was reported in the paper "XML Pipeline Performance", presented at XML Prague 2010 (March 13th and 14th, 2010, Prague, CZ), which describes advanced methods for optimizing XML pipeline performance. (Please note that the performance figures presented predate the Saxon 9.3 release, which addresses some of the issues discussed.)
Be sure to remove white space from large input files. Performance depends on file structure and text content so needs to be evaluated on your own data.
However, it is clear from the above that DeltaXML can be used successfully with very large XML datasets.