.\" Automatically generated by Pod::Man 4.14 (Pod::Simple 3.42) .\" .\" Standard preamble: .\" ======================================================================== .de Sp \" Vertical space (when we can't use .PP) .if t .sp .5v .if n .sp .. .de Vb \" Begin verbatim text .ft CW .nf .ne \\$1 .. .de Ve \" End verbatim text .ft R .fi .. .\" Set up some character translations and predefined strings. \*(-- will .\" give an unbreakable dash, \*(PI will give pi, \*(L" will give a left .\" double quote, and \*(R" will give a right double quote. \*(C+ will .\" give a nicer C++. Capital omega is used to do unbreakable dashes and .\" therefore won't be available. \*(C` and \*(C' expand to `' in nroff, .\" nothing in troff, for use with C<>. .tr \(*W- .ds C+ C\v'-.1v'\h'-1p'\s-2+\h'-1p'+\s0\v'.1v'\h'-1p' .ie n \{\ . ds -- \(*W- . ds PI pi . if (\n(.H=4u)&(1m=24u) .ds -- \(*W\h'-12u'\(*W\h'-12u'-\" diablo 10 pitch . if (\n(.H=4u)&(1m=20u) .ds -- \(*W\h'-12u'\(*W\h'-8u'-\" diablo 12 pitch . ds L" "" . ds R" "" . ds C` "" . ds C' "" 'br\} .el\{\ . ds -- \|\(em\| . ds PI \(*p . ds L" `` . ds R" '' . ds C` . ds C' 'br\} .\" .\" Escape single quotes in literal strings from groff's Unicode transform. .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" .\" If the F register is >0, we'll generate index entries on stderr for .\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index .\" entries marked with X<> in POD. Of course, you'll have to process the .\" output yourself in some meaningful fashion. .\" .\" Avoid warning from groff about undefined register 'F'. .de IX .. .nr rF 0 .if \n(.g .if rF .nr rF 1 .if (\n(rF:(\n(.g==0)) \{\ . if \nF \{\ . de IX . tm Index:\\$1\t\\n%\t"\\$2" .. . if !\nF==2 \{\ . nr % 0 . nr F 2 . \} . \} .\} .rr rF .\" ======================================================================== .\" .IX Title "MediaWiki::DumpFile 3pm" .TH MediaWiki::DumpFile 3pm "2022-06-15" "perl v5.34.0" "User Contributed Perl Documentation" .\" For nroff, turn off justification. Always turn off hyphenation; it makes .\" way too many mistakes in technical documents. .if n .ad l .nh .SH "NAME" MediaWiki::DumpFile \- Process various dump files from a MediaWiki instance .SH "SYNOPSIS" .IX Header "SYNOPSIS" .Vb 1 \& use MediaWiki::DumpFile; \& \& $mw = MediaWiki::DumpFile\->new; \& \& $sql = $mw\->sql($filename); \& $sql = $mw\->sql(\e*FH); \& \& $pages = $mw\->pages($filename); \& $pages = $mw\->pages(\e*FH); \& \& $fastpages = $mw\->fastpages($filename); \& $fastpages = $mw\->fastpages(\e*FH); \& \& use MediaWiki::DumpFile::Compat; \& \& $pmwd = Parse::MediaWikiDump\->new; .Ve .SH "ABOUT" .IX Header "ABOUT" This module is used to parse various dump files from a MediaWiki instance. The most likely case is that you will want to be parsing content at http://download.wikimedia.org/backup\-index.html provided by WikiMedia which includes the English and all other language Wikipedias. .PP This module is the successor to Parse::MediaWikiDump acting as a near full replacement in feature set and providing an independent 100% backwards compatible \s-1API\s0 that is faster than Parse::MediaWikiDump is (see the MediaWiki::DumpFile::Compat and MediaWiki::DumpFile::Benchmarks documentation for details). .SH "STATUS" .IX Header "STATUS" This software is maturing into a stable and tested state with known users; the \s-1API\s0 is stable and will not be changed. 
.SH "FUNCTIONS"
.IX Header "FUNCTIONS"
.SS "sql"
.IX Subsection "sql"
Returns an instance of MediaWiki::DumpFile::SQL. This object can be used to
parse any arbitrary \s-1SQL\s0 dump file used to recreate a single table in
the MediaWiki instance.
.SS "pages"
.IX Subsection "pages"
Returns an instance of MediaWiki::DumpFile::Pages. This object parses the
contents of the page dump file and supports both single and multiple
revisions per article as well as the associated metadata. The dump can be
parsed in either normal or fast mode; fast mode can only extract the
article titles and text contents, and comes with restrictions. A usage
sketch appears at the end of this section.
.SS "fastpages"
.IX Subsection "fastpages"
Returns an instance of MediaWiki::DumpFile::FastPages. This class is a
subclass of MediaWiki::DumpFile::Pages that enables fast mode by default
and uses a tuned iterator interface with slightly less overhead.
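.PP
For illustration, here is a minimal sketch that walks a page dump using
the iterator interface of MediaWiki::DumpFile::Pages (see that module's
documentation for the full interface); the dump file name below is only a
placeholder:
.PP
.Vb 11
\& use MediaWiki::DumpFile;
\&
\& my $mw = MediaWiki::DumpFile\->new;
\& my $pages = $mw\->pages(\*(Aqpages\-articles.xml\*(Aq);
\&
\& # next() returns the next page object, or undef at end of input
\& while (defined(my $page = $pages\->next)) {
\&     # revision() returns the first revision of the page
\&     my $text = $page\->revision\->text;
\&     print $page\->title, ": ", length($text), " characters\en";
\& }
.Ve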
.SH "SPEED"
.IX Header "SPEED"
MediaWiki::DumpFile now runs in a slower configuration when installed
without the recommended Perl modules; this was done so that the package can
be installed without a C compiler and still have some utility. There is
also a fast mode available when parsing the \s-1XML\s0 document that can
give significant speed boosts while giving up support for anything except
the article titles and text contents. If you want to decrease the
processing overhead of this system, follow this guide:
.IP "Install XML::CompactTree::XS" 4
.IX Item "Install XML::CompactTree::XS"
Having this module on your system will cause XML::TreePuller to use it
automatically \- if it is not already installed, this will net you a
dramatic speed boost: a 3\-4 times speed increase when not using fast mode.
.IP "Use fast mode if possible" 4
.IX Item "Use fast mode if possible"
Details of fast mode and the restrictions it imposes are in the
MediaWiki::DumpFile::Pages documentation. Fast mode is also available in
the compatibility library as a new option. Fast mode can give you a further
3\-4 times speed increase over parsing with XML::CompactTree::XS installed,
but it does not require that module to function; fast mode is nearly the
same speed with or without XML::CompactTree::XS installed.
.IP "Stop using compatibility mode" 4
.IX Item "Stop using compatibility mode"
If you are using the compatibility \s-1API\s0 you lose performance; the
compatibility \s-1API\s0 is a set of wrappers around the MediaWiki::DumpFile
\s-1API\s0 and, while it is faster than the original
Parse::MediaWikiDump::Pages, it is still slower than
MediaWiki::DumpFile::Pages by a few percent.
.IP "Use MediaWiki::DumpFile::FastPages" 4
.IX Item "Use MediaWiki::DumpFile::FastPages"
This is a subclass of MediaWiki::DumpFile::Pages that enables fast mode by
default and uses a tuned iterator that decreases overhead by another few
percent. This is generally the absolute fastest fully supported and tested
way to parse the \s-1XML\s0 dump files; a usage sketch appears after this
list.
.IP "Start hacking" 4
.IX Item "Start hacking"
I've put considerable effort into finding the fastest ways to parse the
\s-1XML\s0 dump files. Probably the most important part of this research
has been an \s-1XML\s0 benchmarking suite I created specifically for
measuring the performance of parsing the MediaWiki page dump files.
.Sp
The benchmark suite is present in the module tarball in the speed_test/
directory. It contains a comprehensive set of test cases that measure the
performance of a good number of \s-1XML\s0 parsers and parsing schemes from
\s-1CPAN.\s0 You can use this suite as a starting point to see how various
parsers work and how fast they go; you can also use it to reliably verify
the performance impact of experiments in parsing.
.Sp
The result of my research into \s-1XML\s0 parsers was to create
XML::TreePuller, which is the heart of the \s-1XML\s0 processing system in
MediaWiki::DumpFile::Pages \- it's fast, but I'm positive there is room for
improvement. Increasing the speed of that module will increase the speed of
MediaWiki::DumpFile::Pages as well.
.Sp
Please consider sharing the results of your hacking with me by opening a
ticket in the bug reporting system as documented below.
.Sp
The following test case is notable and could be used by anyone who just
needs to extract article titles and text:
.RS 4
.IP "XML-Bare" 4
.IX Item "XML-Bare"
Wow, is it fast! And wrong! Just so very wrong... but it does pass the
tests *shrug*
.RE
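.PP
As a concrete illustration of the fast path recommended above, here is a
minimal sketch using \fBfastpages()\fR from the \s-1SYNOPSIS\s0 and the
two\-element (title, text) iterator of MediaWiki::DumpFile::FastPages; the
dump file name below is only a placeholder:
.PP
.Vb 9
\& use MediaWiki::DumpFile;
\&
\& my $mw = MediaWiki::DumpFile\->new;
\& my $fast = $mw\->fastpages(\*(Aqpages\-articles.xml\*(Aq);
\&
\& # fast mode yields only titles and text; no other metadata is available
\& while (my ($title, $text) = $fast\->next) {
\&     print "$title\en";
\& }
.Ve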
.SS "Benchmarks"
.IX Subsection "Benchmarks"
See MediaWiki::DumpFile::Benchmarks for a comprehensive report on dump file
processing speeds.
.SH "AUTHOR"
.IX Header "AUTHOR"
Tyler Riddle, \f(CW\*(C`\*(C'\fR
.SH "LIMITATIONS"
.IX Header "LIMITATIONS"
.IP "English Wikipedia comprehensive dump files not supported" 4
.IX Item "English Wikipedia comprehensive dump files not supported"
There are two types of MediaWiki dump files sharing one schema: ones with
one revision of a page per entry and ones with multiple revisions of a page
per entry. This software is designed to parse either case and provide a
consistent \s-1API\s0; however, it comes with the restriction that an
entire entry must fit in memory. The normal English Wikipedia dump file is
around 20 gigabytes and each entry easily fits into \s-1RAM\s0 on most
machines.
.Sp
In the case of the comprehensive English Wikipedia dump files, the file
itself is measured in terabytes and a single entry can be 20 gigabytes or
more. It is technically possible for the original
Parse::MediaWikiDump::Revisions (not the compatibility version provided in
this module) to parse that dump file; however, Parse::MediaWikiDump runs at
a few megabytes per second under the best of conditions.
.SH "BUGS"
.IX Header "BUGS"
Please report any bugs or feature requests to
\f(CW\*(C`bug\-mediawiki\-dumpfile at rt.cpan.org\*(C'\fR, or through the
web interface at . I will be notified, and then you'll automatically be
notified of progress on your bug as I make changes.
.IP "56843 ::Pages\->\fBcurrent_byte()\fR wraps at 2 gigs+" 4
.IX Item "56843 ::Pages->current_byte() wraps at 2 gigs+"
If you have a large \s-1XML\s0 file, where the file size is greater than
what a signed 32\-bit integer can hold, the value returned from this method
can go negative.
.SH "SUPPORT"
.IX Header "SUPPORT"
You can find documentation for this module with the perldoc command.
.PP
.Vb 1
\& perldoc MediaWiki::DumpFile
.Ve
.PP
You can also look for information at:
.IP "\(bu" 4
\&\s-1RT: CPAN\s0's request tracker
.Sp
.IP "\(bu" 4
AnnoCPAN: Annotated \s-1CPAN\s0 documentation
.Sp
.IP "\(bu" 4
\&\s-1CPAN\s0 Ratings
.Sp
.IP "\(bu" 4
Search \s-1CPAN\s0
.Sp
.SH "ACKNOWLEDGEMENTS"
.IX Header "ACKNOWLEDGEMENTS"
All of the people who reported bugs or feature requests for
Parse::MediaWikiDump.
.SH "COPYRIGHT & LICENSE"
.IX Header "COPYRIGHT & LICENSE"
Copyright 2009 \*(L"Tyler Riddle\*(R".
.PP
This program is free software; you can redistribute it and/or modify it
under the terms of either: the \s-1GNU\s0 General Public License as
published by the Free Software Foundation; or the Artistic License.
.PP
See http://dev.perl.org/licenses/ for more information.