"""A high-level cross-protocol url-grabber.

Using urlgrabber, data can be fetched in three basic ways:

  urlgrab(url) copy the file to the local filesystem
  urlopen(url) open the remote file and return a file object
     (like urllib2.urlopen)
  urlread(url) return the contents of the file as a string
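
For example (a minimal sketch; the URL is a placeholder, not part of
this module):

  from urlgrabber import urlgrab, urlopen, urlread

  urlgrab('http://example.com/file.txt', filename='file.txt')
  fo = urlopen('http://example.com/file.txt')
  data = fo.read()
  fo.close()
  text = urlread('http://example.com/file.txt')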

When using these functions (or methods), urlgrabber supports the
following features (a sketch of the corresponding options follows the
list):

  * identical behavior for http://, ftp://, and file:// urls
  * http keepalive - faster downloads of many files by using
    only a single connection
  * byte ranges - fetch only a portion of the file
  * reget - for a urlgrab, resume a partial download
  * progress meters - the ability to report download progress
    automatically, even when using urlopen!
  * throttling - restrict bandwidth usage
  * retries - automatically retry a download if it fails. The
    number of retries and failure types are configurable.
  * authenticated server access for http and ftp
  * proxy support - support for authenticated http and ftp proxies
  * mirror groups - treat a list of mirrors as a single source,
    automatically switching mirrors if there is a failure
    (sketched in a comment at the end of this module)
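
Most of these features are controlled through keyword options, either
set on a URLGrabber instance or passed to an individual call. A brief
sketch (the URL is a placeholder; 'throttle' is an absolute limit in
bytes/second when greater than 1):

  from urlgrabber.grabber import URLGrabber

  g = URLGrabber(retry=3, throttle=10000)   # retry failures, cap bandwidth
  g.urlgrab('http://example.com/big.iso',
            filename='big.iso', reget='simple')  # resume a partial file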
"""

__version__ = '3.10'
__date__    = '2013/10/09'
__author__  = 'Michael D. Stenner <mstenner@linux.duke.edu>, ' \
              'Ryan Tomayko <rtomayko@naeblis.cx>, ' \
              'Seth Vidal <skvidal@fedoraproject.org>, ' \
              'Zdenek Pavlas <zpavlas@redhat.com>'
__url__     = 'http://urlgrabber.baseurl.org/'

from grabber import urlgrab, urlopen, urlread
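
# A sketch of the mirror-group feature mentioned in the docstring; the
# mirror URLs and filename below are placeholders:
#
#   from urlgrabber.grabber import URLGrabber
#   from urlgrabber.mirror import MirrorGroup
#
#   mg = MirrorGroup(URLGrabber(), ['http://mirror1.example.com/pub/',
#                                   'http://mirror2.example.com/pub/'])
#   mg.urlgrab('somefile.tar.gz')  # on failure, the next mirror is tried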