This file documents the GNU Wget utility for downloading network data.
Copyright © 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 Free Software Foundation, Inc.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License.
GNU Wget is a free utility for non-interactive download of files from the Web. It supports http, https, and ftp protocols, as well as retrieval through http proxies.
This chapter is a partial overview of Wget's features.
By default, Wget is very simple to invoke. The basic syntax is:
wget [option]... [URL]...
Wget will simply download all the urls specified on the command line. URL is a Uniform Resource Locator, as defined below.
However, you may wish to change some of the default parameters of Wget. You can do it in two ways: permanently, by adding the appropriate command to .wgetrc (see Startup File), or by specifying it on the command line.
URL is an acronym for Uniform Resource Locator. A uniform resource locator is a compact string representation for a resource available via the Internet. Wget recognizes the url syntax as per rfc1738. This is the most widely used form (square brackets denote optional parts):
http://host[:port]/directory/file
ftp://host[:port]/directory/file
You can also encode your username and password within a url:
ftp://user:password@host/path
http://user:password@host/path
Either user or password, or both, may be left out. If you leave out either the http username or password, no authentication will be sent. If you leave out the ftp username, anonymous will be used. If you leave out the ftp password, your email address will be supplied as a default password.
Important Note: if you specify a password-containing url on the command line, the username and password will be plainly visible to all users on the system, by way of ps. On multi-user systems, this is a big security risk. To work around it, use wget -i - and feed the urls to Wget's standard input, each on a separate line, terminated by C-d.
You can encode unsafe characters in a url as %xy, xy being the hexadecimal representation of the character's ascii value. Some common unsafe characters include % (quoted as %25), : (quoted as %3A), and @ (quoted as %40). Refer to rfc1738 for a comprehensive list of unsafe characters.
Wget also supports the type feature for ftp urls. By default, ftp documents are retrieved in the binary mode (type i), which means that they are downloaded unchanged. Another useful mode is the a (ASCII) mode, which converts the line delimiters between the different operating systems, and is thus useful for text files. Here is an example:
ftp://host/directory/file;type=a
Two alternative variants of url specification are also supported, because of historical (hysterical?) reasons and their widespread use.
ftp-only syntax (supported by NcFTP):
host:/dir/file
http-only syntax (introduced by Netscape):
host[:port]/dir/file
These two alternative forms are deprecated, and may cease being supported in the future.
If you do not understand the difference between these notations, or do not know which one to use, just use the plain ordinary format you use with your favorite browser.
Since Wget uses GNU getopt to process command-line arguments, every option has a long form along with the short one. Long options are more convenient to remember, but take time to type. You may freely mix different option styles, or specify options after the command-line arguments. Thus you may write:
wget -r --tries=10 http://fly.srk.fer.hr/ -o log
The space between the option accepting an argument and the argument may be omitted. Instead of -o log you can write -olog.
You may put several options that do not require arguments together, like:
wget -drc URL
This is completely equivalent to:
wget -d -r -c URL
Since the options can be specified after the arguments, you may terminate them with --. So the following will try to download url -x, reporting failure to log:
wget -o log -- -x
The options that accept comma-separated lists all respect the convention that specifying an empty list clears its value. This can be useful to clear the .wgetrc settings. For instance, if your .wgetrc sets exclude_directories to /cgi-bin, the following example will first reset it, and then set it to exclude /~nobody and /~somebody. You can also clear the lists in .wgetrc (see Wgetrc Syntax).
wget -X '' -X /~nobody,/~somebody
Most options that do not accept arguments are boolean options, so named because their state can be captured with a yes-or-no (boolean) variable. For example, --follow-ftp tells Wget to follow FTP links from HTML files and, on the other hand, --no-glob tells it not to perform file globbing on FTP URLs. A boolean option is either affirmative or negative (beginning with --no). All such options share several properties.
Unless stated otherwise, it is assumed that the default behavior is the opposite of what the option accomplishes. For example, the documented existence of --follow-ftp assumes that the default is to not follow FTP links from HTML pages.
Affirmative options can be negated by prepending the --no- to the option name; negative options can be negated by omitting the --no- prefix. This might seem superfluous: if the default for an affirmative option is to not do something, then why provide a way to explicitly turn it off? But the startup file may in fact change the default. For instance, using follow_ftp = on in .wgetrc makes Wget follow FTP links by default, and using --no-follow-ftp is the only way to restore the factory default from the command line.
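For example (the URL is purely illustrative): assuming follow_ftp = on has been set in .wgetrc, the factory default can be restored for a single run with:
wget --no-follow-ftp -r http://example.com/index.html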
If this function is used, no urls need be present on the command line. If there are urls both on the command line and in an input file, those on the command lines will be the first ones to be retrieved. If --force-html is not specified, then file should consist of a series of URLs, one per line.
However, if you specify --force-html, the document will be regarded as html. In that case you may have problems with relative links, which you can solve either by adding <base href="url"> to the documents or by specifying --base=url on the command line.
If the file is an external one, the document will be automatically treated as html if the Content-Type matches text/html. Furthermore, the file's location will be implicitly used as base href if none was specified.
This is equivalent to the presence of a BASE tag in the html input file, with URL as the value for the href attribute of the BASE tag.
For instance, if you specify http://foo/bar/a.html for URL, and Wget reads ../baz/b.html from the input file, it would be resolved to http://foo/baz/b.html.
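As a brief sketch (the file name links.html is hypothetical): to read relative links from a local file and resolve them against a chosen base, one might run:
wget --force-html --base=http://foo/bar/ -i links.html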
Use of -O is not intended to mean simply use the name file instead of the one in the URL; rather, it is analogous to shell redirection: wget -O file http://foo is intended to work like wget -O - http://foo > file; file will be truncated immediately, and all downloaded content will be written there.
For this reason, -N (for timestamp-checking) is not supported in combination with -O: since file is always newly created, it will always have a very new timestamp. A warning will be issued if this combination is used.
Similarly, using -r or -p with -O may not work as you expect: Wget won't just download the first file to file and then download the rest to their normal names: all downloaded content will be placed in file. This was disabled in version 1.11, but has been reinstated (with a warning) in 1.11.2, as there are some cases where this behavior can actually have some use.
Note that a combination with -k is only permitted when downloading a single document, as in that case it will just convert all relative URIs to external ones; -k makes no sense for multiple URIs when they're all being downloaded to a single file; -k can be used only when the output is a regular file.
When running Wget without -N, -nc, -r, or -p, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. (This is also the behavior with -nd, even if -r or -p are in effect.) When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, "no-clobber" is actually a misnomer in this mode: it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.
When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.
When running Wget with -N, with or without -r or -p, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file (see Time-Stamping). -nc may not be specified at the same time as -N.
Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.
wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
If there is a file named ls-lR.Z in the current directory, Wget will assume that it is the first portion of the remote file, and will ask the server to continue the retrieval from an offset equal to the length of the local file.
Note that you don't need to specify this option if you just want the current invocation of Wget to retry downloading a file should the connection be lost midway through. This is the default behavior. -c only affects resumption of downloads started prior to this invocation of Wget, and whose local files are still sitting around.
Without -c, the previous example would just download the remote file to ls-lR.Z.1, leaving the truncated ls-lR.Z file alone.
Beginning with Wget 1.7, if you use -c on a non-empty file, and it turns out that the server does not support continued downloading, Wget will refuse to start the download from scratch, which would effectively ruin existing contents. If you really want the download to start from scratch, remove the file.
Also beginning with Wget 1.7, if you use -c on a file which is of equal size as the one on the server, Wget will refuse to download the file and print an explanatory message. The same happens when the file is smaller on the server than locally (presumably because it was changed on the server since your last download attempt); because continuing is not meaningful, no download occurs.
On the other side of the coin, while using -c, any file that's bigger on the server than
locally will be considered an incomplete download and only
(length(remote) - length(local)) bytes will be downloaded and
tacked onto the end of the local file. This behavior can be desirable in
certain cases: for instance, you can use wget
-c to download just the new portion that's been appended to a
data collection or log file.
However, if the file is bigger on the server because it's been changed, as opposed to just appended to, you'll end up with a garbled file. Wget has no way of verifying that the local file is really a valid prefix of the remote file. You need to be especially careful of this when using -c in conjunction with -r, since every file will be considered as an "incomplete download" candidate.
Another instance where you'll get a garbled file if you try to use -c is if you have a lame http proxy that inserts a transfer interrupted string into the local file. In the future a rollback option may be added to deal with this case.
Note that -c only works with ftp servers and with http servers that support the Range header.
The bar indicator is used by default. It draws an ascii progress bar graphics (a.k.a thermometer display) indicating the status of retrieval. If the output is not a TTY, the dot bar will be used by default.
Use --progress=dot to switch to the dot display. It traces the retrieval by printing dots on the screen, each dot representing a fixed amount of downloaded data.
When using the dotted retrieval, you may also set the style by specifying the type as dot:style. Different styles assign different meaning to one dot. With the default style each dot represents 1K, there are ten dots in a cluster and 50 dots in a line. The binary style has a more computer-like orientation: 8K dots, 16-dot clusters and 48 dots per line (which makes for 384K lines). The mega style is suitable for downloading very large files: each dot represents 64K retrieved, there are eight dots in a cluster, and 48 dots on each line (so each line contains 3M).
Note that you can set the default style using the progress command in .wgetrc. That setting may be overridden from the command line. The exception is that, when the output is not a TTY, the dot progress will be favored over bar. To force the bar output, use --progress=bar:force.
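For example, to select the mega dot style for a large download (the URL is illustrative):
wget --progress=dot:mega http://example.com/big-file.iso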
By default, when a file is downloaded, its timestamps are set to match those from the remote file. This allows the use of --timestamping on subsequent invocations of wget. However, it is sometimes useful to base the local file's timestamp on when it was actually downloaded; for that purpose, the --no-use-server-timestamps option has been provided.
wget --spider --force-html -i bookmarks.html
This feature needs much more work for Wget to get close to the functionality of real web spiders.
When interacting with the network, Wget can check for timeout and abort the operation if it takes too long. This prevents anomalies like hanging reads and infinite connects. The only timeout enabled by default is a 900-second read timeout. Setting a timeout to 0 disables it altogether. Unless you know what you are doing, it is best not to change the default timeout settings.
All timeout-related options accept decimal values, as well as subsecond values. For example, 0.1 seconds is a legal (though unwise) choice of timeout. Subsecond timeouts are useful for checking server response times or for testing network latency.
Of course, the remote server may choose to terminate the connection sooner than this option requires. The default read timeout is 900 seconds.
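For example (illustrative values and URL), all timeouts can be capped at once, or the read timeout can be adjusted on its own:
wget --timeout=15 http://example.com/file
wget --read-timeout=60 http://example.com/file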
This option allows the use of decimal numbers, usually in conjunction with power suffixes; for example, --limit-rate=2.5k is a legal value.
Note that Wget implements the limiting by sleeping the appropriate amount of time after a network read that took less time than specified by the rate. Eventually this strategy causes the TCP transfer to slow down to approximately the specified rate. However, it may take some time for this balance to be achieved, so don't be surprised if limiting the rate doesn't work well with very small files.
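A typical invocation might look like the following; the rate and URL are illustrative:
wget --limit-rate=20k http://example.com/archive.tar.gz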
The time can be specified in seconds, in minutes using the m suffix, in hours using the h suffix, or in days using the d suffix. Specifying a large value for this option is useful if the network or the destination host is down, so that Wget can wait long enough to reasonably expect the network error to be fixed before the retry. The waiting interval specified by this function is influenced by --random-wait, which see.
By default, Wget will assume a value of 10 seconds.
A 2001 article in a publication devoted to development on a popular consumer platform provided code to perform this analysis on the fly. Its author suggested blocking at the class C address level to ensure automated retrieval programs were blocked despite changing DHCP-supplied addresses.
The --random-wait option was inspired by this ill-advised recommendation to block many unrelated users from a web site due to the actions of one.
Don't use proxies, even if the appropriate *_proxy environment variable is defined.
For more information about the use of proxies with Wget, see Proxies.
Note that quota will never affect downloading a single file. So if you specify wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz, all of the ls-lR.gz will be downloaded. The same goes even when several urls are specified on the command-line. However, quota is respected when retrieving either recursively, or from an input file. Thus you may safely type wget -Q2m -i sites; download will be aborted when the quota is exceeded.
Setting quota to 0 or to inf unlimits the download quota.
However, it has been reported that in some situations it is not desirable
to cache host names, even for the duration of a short-running application like
Wget. With this option Wget issues a new DNS lookup (more precisely, a new
getaddrinfo) each time it
makes a new connection. Please note that this option will not affect
caching that might be performed by the resolving library or by an external
caching layer, such as NSCD.
If you don't understand exactly what this option does, you probably won't need it.
By default, Wget escapes the characters that are not valid or safe as part of file names on your operating system, as well as control characters that are typically unprintable. This option is useful for changing these defaults, perhaps because you are downloading to a non-native partition, or because you want to disable escaping of the control characters, or you want to further restrict characters to only those in the ascii range of values.
The modes are a comma-separated set of text values. The acceptable values are unix, windows, nocontrol, ascii, lowercase, and uppercase. The values unix and windows are mutually exclusive (one will override the other), as are lowercase and uppercase. Those last are special cases, as they do not change the set of characters that would be escaped, but rather force local file paths to be converted either to lower- or uppercase.
When unix is specified, Wget escapes the character / and the control characters in the ranges 0-31 and 128-159. This is the default on Unix-like operating systems.
When windows is given, Wget escapes the characters \, |, /, :, ?, ", *, <, >, and the control characters in the ranges 0-31 and 128-159. In addition to this, Wget in Windows mode uses + instead of : to separate host and port in local file names, and uses @ instead of ? to separate the query portion of the file name from the rest. Therefore, a URL that would be saved as www.xemacs.org:4300/search.pl?input=blah in Unix mode would be saved as www.xemacs.org+4300/search.pl@input=blah in Windows mode. This mode is the default on Windows.
If you specify nocontrol, then the escaping of the control characters is also switched off. This option may make sense when you are downloading URLs whose names contain UTF-8 characters, on a system which can save and display filenames in UTF-8 (some possible byte values used in UTF-8 byte sequences fall in the range of values designated by Wget as controls).
The ascii mode is used to specify that any bytes whose values are outside the range of ascii characters (that is, greater than 127) shall be escaped. This can be useful when saving filenames whose encoding does not match the one used locally.
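For example (illustrative URLs), control-character escaping can be switched off, or Windows-safe lowercase names can be forced:
wget --restrict-file-names=nocontrol http://example.com/page.html
wget --restrict-file-names=windows,lowercase http://example.com/Another_Page.html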
Neither option should be needed normally. By default, an IPv6-aware Wget will use the address family specified by the host's DNS record. If the DNS responds with both IPv4 and IPv6 addresses, Wget will try them in sequence until it finds one it can connect to. (Also see the --prefer-family option described below.)
These options can be used to deliberately force the use of IPv4 or IPv6
address families on dual family systems, usually to aid debugging or to deal
with broken network configuration. Only one of --inet6-only and --inet4-only may be specified at the same time.
Neither option is available in Wget compiled without IPv6 support.
This avoids spurious errors and connect attempts when accessing hosts that
resolve to both IPv6 and IPv4 addresses from IPv4 networks. For example,
www.kame.net resolves to 2001:200:0:8002:203:47ff:fea5:3085 and to
220.127.116.11. When the preferred family is IPv4, the IPv4 address is used first; when the preferred family is IPv6, the IPv6 address is used first; if the specified value is none, the address order returned by DNS is used without change.
Unlike -4 and -6, this option doesn't inhibit access to any
address family, it only changes the order in which the addresses are
accessed. Also note that the reordering performed by this option is
stable: it doesn't affect order of addresses of the same family.
That is, the relative order of all IPv4 addresses and of all IPv6 addresses
remains intact in all cases.
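For example, to try the IPv6 address of the dual-homed host mentioned above first:
wget --prefer-family=IPv6 http://www.kame.net/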
You can set the default state of IRI support using the iri command in .wgetrc. That setting may be overridden from the command line.
Wget uses the function nl_langinfo() and then the CHARSET environment variable to get the locale. If it fails, ascii is used.
You can set the default local encoding using the local_encoding command in .wgetrc. That setting may be overridden from the command line.
For HTTP, the remote encoding can be found in the HTTP Content-Type header and in the HTML Content-Type http-equiv meta tag.
You can set the default encoding using the remote_encoding command in .wgetrc. That setting may be overridden from the command line.
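As a minimal sketch (the URL and encodings are illustrative), the corresponding command-line options can be combined like this:
wget --iri --local-encoding=UTF-8 --remote-encoding=iso-8859-2 http://example.com/page.html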
Take, for example, the directory at ftp://ftp.xemacs.org/pub/xemacs/. If you retrieve it with -r, it will be saved locally under ftp.xemacs.org/pub/xemacs/. While the -nH option can remove the ftp.xemacs.org/ part, you are still stuck with pub/xemacs. This is where --cut-dirs comes in handy; it makes Wget not see number remote directory components. Here are several examples of how --cut-dirs option works.
No options        -> ftp.xemacs.org/pub/xemacs/
-nH               -> pub/xemacs/
-nH --cut-dirs=1  -> xemacs/
-nH --cut-dirs=2  -> .

--cut-dirs=1      -> ftp.xemacs.org/xemacs/
...
If you just want to get rid of the directory structure, this option is similar to a combination of -nd and -P. However, unlike -nd, --cut-dirs does not lose with subdirectories: for instance, with -nH --cut-dirs=1, a beta/ subdirectory will be placed in xemacs/beta, as one would expect.
Note that filenames changed in this way will be re-downloaded every time you re-mirror a site, because Wget can't tell that the local X.html file corresponds to remote URL X (since it doesn't yet know that the URL produces output of type text/html or application/xhtml+xml).
As of version 1.12, Wget will also ensure that any downloaded files of type text/css end in the suffix .css, and the option was renamed from --html-extension, to better reflect its new behavior. The old option name is still acceptable, but should now be considered deprecated.
At some point in the future, this option may well be expanded to include suffixes for other types of content, including content types that are not parsed by Wget.
Wget will encode them using either the basic (insecure), the digest, or the Windows NTLM authentication scheme.
Another way to specify username and password is in the url itself (see URL
Format). Either method reveals your password to anyone who bothers to run
ps. To prevent the passwords from being seen, store them in
.wgetrc or .netrc, and make sure to protect those files from
other users with
chmod. If the passwords are really important, do
not leave them lying in those files either: edit the files and delete them
after Wget has started the download.
This option is useful when, for some reason, persistent (keep-alive) connections don't work for you, for example due to a server bug or due to the inability of server-side scripts to cope with the connections.
Caching is allowed by default.
You will typically use this option when mirroring sites that require that you be logged in to access some or all of their content. The login process typically works by the web server issuing an http cookie upon receiving and verifying your credentials. The cookie is then resent by the browser when accessing that part of the site, and so proves your identity.
Mirroring such a site requires Wget to send the same cookies your browser sends when communicating with the site. This is achieved by --load-cookies: simply point Wget to the location of the cookies.txt file, and it will send the same cookies your browser would send in the same situation. Different browsers keep textual cookie files in different locations:
If you cannot use --load-cookies, there might still be an alternative. If your browser supports a cookie manager, you can use it to view the cookies used when accessing the site you're mirroring. Write down the name and value of the cookie, and manually instruct Wget to send those cookies, bypassing the official cookie support:
wget --no-cookies --header "Cookie: name=value"
Since the cookie file format does not normally carry session cookies, Wget marks them with an expiry timestamp of 0. Wget's --load-cookies recognizes those as session cookies, but it might confuse other browsers. Also note that cookies so loaded will be treated as other session cookies, which means that if you want --save-cookies to preserve them again, you must use --keep-session-cookies again.
Unfortunately, some http servers (cgi programs, to be more precise) send out bogus Content-Length headers, which makes Wget go wild, as it thinks not all the document was retrieved. You can spot this syndrome if Wget retries getting the same document again and again, each time claiming that the (otherwise normal) connection has closed on the very same byte.
With this option, Wget will ignore the Content-Length header, as if it never existed.
You may define more than one additional header by specifying --header more than once.
wget --header='Accept-Charset: iso-8859-2' \
     --header='Accept-Language: hr' \
     http://fly.srk.fer.hr/
Specification of an empty string as the header value will clear all previous user-defined headers.
As of Wget 1.10, this option can be used to override headers otherwise
generated automatically. This example instructs Wget to connect to localhost,
but to specify foo.bar in the Host header:
wget --header="Host: foo.bar" http://localhost/
In versions of Wget prior to 1.10 such use of --header caused sending of duplicate headers.
Security considerations similar to those with --http-password pertain here as well.
The http protocol allows the clients to identify themselves using a User-Agent header field. This enables distinguishing the www software, usually for statistical purposes or for tracing of protocol violations. Wget normally identifies as Wget/version, version being the current version number of Wget.
However, some sites have been known to impose the policy of tailoring the
output according to the
User-Agent-supplied information. While
this is not such a bad idea in theory, it has been abused by servers denying
information to clients other than (historically) Netscape or, more frequently,
Microsoft Internet Explorer. This option allows you to change the
User-Agent line issued by Wget. Use of this option is
discouraged, unless you really know what you are doing.
Specifying empty user agent with --user-agent="" instructs Wget not to send the
User-Agent header in http requests.
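For example, the header can be suppressed entirely, or replaced with a custom string (the identification shown is purely illustrative):
wget --user-agent="" http://example.com/
wget --user-agent="MyDownloader/0.1" http://example.com/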
Both --post-data and --post-file expect content of the form key1=value1&key2=value2, with percent-encoding for special characters; the only difference is that one expects its content as a command-line parameter and the other accepts its content from a file. In particular, --post-file is not for transmitting files as form attachments: those must appear as key=value data (with appropriate percent-coding) just like everything else. Wget does not currently support multipart/form-data for transmitting POST data; only application/x-www-form-urlencoded. Only one of --post-data and --post-file should be specified.
Please be aware that Wget needs to know the size of the POST data in
advance. Therefore the argument to
--post-file must be a regular
file; specifying a FIFO or something like /dev/stdin won't work. It's not quite clear how to
work around this limitation inherent in HTTP/1.0. Although HTTP/1.1 introduces
chunked transfer that doesn't require knowing the request length in
advance, a client can't use chunked unless it knows it's talking to an
HTTP/1.1 server. And it can't know that until it receives a response, which in
turn requires the request to have been completed: a chicken-and-egg problem.
Note: if Wget is redirected after the POST request is completed, it will not send the POST data to the redirected URL. This is because URLs that process POST often respond with a redirection to a regular page, which does not desire or accept POST. It is not completely clear that this behavior is optimal; if it doesn't work out, it might be changed in the future.
This example shows how to log to a server using POST and then proceed to download the desired pages, presumably only accessible to authorized users:
# Log in to the server.  This can be done only once.
wget --save-cookies cookies.txt \
     --post-data 'user=foo&password=bar' \
     http://server.com/auth.php

# Now grab the page or pages we care about.
wget --load-cookies cookies.txt \
     -p http://server.com/interesting/article.php
If the server is using session cookies to track user authentication, the above will not work because --save-cookies will not save them (and neither will browsers) and the cookies.txt file will be empty. In that case use --keep-session-cookies along with --save-cookies to force saving of session cookies.
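For example, the login step above could be adjusted like this (the server URL and form fields are as illustrative as in the previous example):
wget --save-cookies cookies.txt \
     --keep-session-cookies \
     --post-data 'user=foo&password=bar' \
     http://server.com/auth.php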
If this is set to on, experimental (not fully-functional) support for Content-Disposition headers is enabled. This can currently result in extra round-trips to the server for a HEAD request, and is known to suffer from a few bugs, which is why it is not currently enabled by default.
This option is useful for some file-downloading CGI programs that use
Content-Disposition headers to describe what the name of a
downloaded file should be.
Use of this option is not recommended, and is intended only to support some few obscure servers, which never send HTTP authentication challenges, but accept unsolicited auth info, say, in addition to form-based authentication.
To support encrypted HTTP (HTTPS) downloads, Wget must be compiled with an external SSL library, currently OpenSSL. If Wget is compiled without SSL support, none of these options are available.
Specifying SSLv2, SSLv3, or TLSv1 forces the use of the corresponding protocol. This is useful when talking to old and buggy SSL server implementations that make it hard for OpenSSL to choose the correct protocol version. Fortunately, such servers are quite rare.
As of Wget 1.10, the default is to verify the server's certificate against the recognized certificate authorities, breaking the SSL handshake and aborting the download if the verification fails. Although this provides more secure downloads, it does break interoperability with some sites that worked with previous Wget versions, particularly those using self-signed, expired, or otherwise invalid certificates. This option forces an insecure mode of operation that turns the certificate verification errors into warnings and allows you to proceed.
If you encounter certificate verification errors or ones saying that common name doesn't match requested host name, you can use this option to bypass the verification and proceed with the download. Only use this option if you are otherwise convinced of the site's authenticity, or if you really don't care about the validity of its certificate. It is almost always a bad idea not to check the certificates when transmitting confidential or important data.
Without this option Wget looks for CA certificates at the system-specified locations, chosen at OpenSSL installation time.
Such a directory is prepared using the c_rehash utility supplied with OpenSSL. Using --ca-directory is more efficient than --ca-certificate when many certificates are installed because it allows Wget to fetch certificates on demand.
Without this option Wget looks for CA certificates at the system-specified locations, chosen at OpenSSL installation time.
On such systems the SSL library needs an external source of randomness to
initialize. Randomness may be provided by EGD (see --egd-file below) or read from an external source
specified by the user. If this option is not specified, Wget looks for random data in $RANDFILE or, if that is unset, in $HOME/.rnd. If none of those are available, it is
likely that SSL encryption will not be usable.
If you're getting the Could not seed OpenSSL PRNG; disabling SSL. error, you should provide random data using some of the methods described above.
OpenSSL allows the user to specify his own source of entropy using the
RAND_FILE environment variable. If this variable is unset, or if
the specified file does not produce enough randomness, OpenSSL will read
random data from EGD socket specified using this option.
If this option is not specified (and the equivalent startup command is not used), EGD is never contacted. EGD is not needed on modern Unix systems that support /dev/random.
Another way to specify username and password is in the url itself (see URL
Format). Either method reveals your password to anyone who bothers to run
ps. To prevent the passwords from being seen, store them in
.wgetrc or .netrc, and make sure to protect those files from
other users with
chmod. If the passwords are really important, do
not leave them lying in those files either: edit the files and delete them
after Wget has started the download.
Note that even though Wget writes to a known filename for this file, this is not a security hole in the scenario of a user making .listing a symbolic link to /etc/passwd or something and asking root to run Wget in his or her directory. Depending on the options used, either Wget will refuse to write to .listing, making the globbing/recursion/time-stamping operation fail, or the symbolic link will be deleted and replaced with the actual .listing file, or the listing will be written to a .listing.number file.
Even though this situation isn't a problem, though, root should never run Wget in a non-trusted user's directory. A user could do
something as simple as linking index.html
to /etc/passwd and asking
root to run Wget with -N
or -r so the file will be overwritten.
By default, globbing will be turned on if the url contains a globbing character. This option may be used to turn globbing on or off permanently.
You may have to quote the url to protect it from
being expanded by your shell. Globbing makes Wget look for a directory
listing, which is system-specific. This is why it currently works only with Unix ftp servers (and the ones emulating Unix ls output).
If the machine is connected to the Internet directly, both passive and
active FTP should work equally well. Behind most firewall and NAT
configurations passive FTP has a better chance of working. However, in some
rare firewall configurations, active FTP actually works when passive FTP
doesn't. If you suspect this to be the case, use this option, or set
passive_ftp=off in your init file.
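For example, to force active FTP for a single run (the URL is illustrative):
wget --no-passive-ftp ftp://ftp.example.com/pub/file.tar.gz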
When --retr-symlinks is specified, however, symbolic links are traversed and the pointed-to files are retrieved. At this time, this option does not cause Wget to traverse symlinks to directories and recurse through them, but in the future it should be enhanced to do this.
Note that when retrieving a file (not a directory) because it was specified on the command-line, rather than because it was recursed to, this option has no effect. Symbolic links are always traversed in this case.
wget -r -nd --delete-after http://whatever.com/~popular/page/
The -r option is to retrieve recursively, and -nd to not create directories.
Note that --delete-after deletes files on the local machine. It does not issue the DELE command to remote FTP sites, for instance. Also note that when --delete-after is specified, --convert-links is ignored, so .orig files are simply not created in the first place.
Each link will be changed in one of two ways:
Example: if the downloaded file /foo/doc.html links to /bar/img.gif, also downloaded, then the link in doc.html will be modified to point to ../bar/img.gif. This kind of transformation works reliably for arbitrary combinations of directories.
Example: if the downloaded file /foo/doc.html links to /bar/img.gif (or to ../bar/img.gif), then the link in doc.html will be modified to point to http://hostname/bar/img.gif.
Because of this, local browsing works reliably: if a linked file was downloaded, the link will refer to its local name; if it was not downloaded, the link will refer to its full Internet address rather than presenting a broken link. The fact that the former links are converted to relative links ensures that you can move the downloaded hierarchy to another directory.
Note that only at the end of the download can Wget know which links have been downloaded. Because of that, the work done by -k will be performed at the end of all the downloads.
Ordinarily, when downloading a single html page, any requisite documents that may be needed to display it properly are not downloaded. Using -r together with -l can help, but since Wget does not ordinarily distinguish between external and inlined documents, one is generally left with leaf documents that are missing their requisites.
For instance, say document 1.html contains an <IMG> tag referencing 1.gif and an <A> tag pointing to external document 2.html. Say that 2.html is similar but that its image is 2.gif and it links to 3.html. Say this continues up to some arbitrarily high number.
If one executes the command:
wget -r -l 2 http://site/1.html
then 1.html, 1.gif, 2.html, 2.gif, and 3.html will be downloaded. As you can see, 3.html is without its requisite 3.gif because Wget is simply counting the number of hops (up to 2) away from 1.html in order to determine where to stop the recursion. However, with this command:
wget -r -l 2 -p http://site/1.html
all the above files and 3.html's requisite 3.gif will be downloaded. Similarly,
wget -r -l 1 -p http://site/1.html
will cause 1.html, 1.gif, 2.html, and 2.gif to be downloaded. One might think that:
wget -r -l 0 -p http://site/1.html
would download just 1.html and 1.gif, but unfortunately this is not the case, because -l 0 is equivalent to -l infthat is, infinite recursion. To download a single html page (or a handful of them, all specified on the command-line or in a -i url input file) and its (or their) requisites, simply leave off -r and -l:
wget -p http://site/1.html
Note that Wget will behave as if -r had been specified, but only that single page and its requisites will be downloaded. Links from that page to external documents will not be followed. Actually, to download a single page and all its requisites (even if they exist on separate websites), and make sure the lot displays properly locally, this author likes to use a few options in addition to -p:
wget -E -H -k -K -p http://site/document
To finish off this topic, it's worth knowing that Wget's idea of an external document link is any URL specified in an <A> tag, an <AREA> tag, or a <LINK> tag other than <LINK REL="stylesheet">.
According to specifications, html comments are expressed as sgml declarations. Declaration is special markup that begins with <! and ends with >, such as <!DOCTYPE ...>, that may contain comments between a pair of -- delimiters. html comments are empty declarations, sgml declarations without any non-comment text. Therefore, <!--foo--> is a valid comment, and so is <!--one-- --two-->, but <!--1--2--> is not.
On the other hand, most html writers don't perceive comments as anything other than text delimited with <!-- and -->, which is not quite the same. For example, something like <!------------> works as a valid comment as long as the number of dashes is a multiple of four (!). If not, the comment technically lasts until the next --, which may be at the other end of the document. Because of this, many popular browsers completely ignore the specification and implement what users have come to expect: comments delimited with <!-- and -->.
Until version 1.9, Wget interpreted comments strictly, which resulted in missing links in many web pages that displayed fine in browsers, but had the misfortune of containing non-compliant comments. Beginning with version 1.9, Wget has joined the ranks of clients that implements naive comments, terminating each comment at the first occurrence of -->.
If, for whatever reason, you want strict comment parsing, use this option to turn it on.
In the past, this option was the best bet for downloading a single page and its requisites, using a command-line like:
wget --ignore-tags=a,area -H -k -K -r http://site/document
However, the author of this option came across a page with tags like
<LINK REL="home" HREF="/"> and came to the realization that
specifying tags to ignore was not enough. One can't just tell Wget to ignore
<LINK>, because then stylesheets will not be downloaded.
Now the best bet for downloading a single page and its requisites is the
dedicated --page-requisites option.
Wget may return one of several error codes if it encounters problems.
With the exceptions of 0 and 1, the lower-numbered exit codes take precedence over higher-numbered ones, when multiple types of errors are encountered.
In versions of Wget prior to 1.12, Wget's exit status tended to be unhelpful and inconsistent. Recursive downloads would virtually always return 0 (success), regardless of any issues encountered, and non-recursive fetches only returned the status corresponding to the most recently-attempted download.
GNU Wget is capable of traversing parts of the Web (or a single http or ftp server), following links and directory structure. We refer to this as recursive retrieval, or recursion.
With http urls, Wget retrieves and parses the html or css from the given url, retrieving the files the document refers to, through markup like href or src, or css uri values specified using the url() functional notation. If the freshly downloaded file is also of type text/html, application/xhtml+xml, or text/css, it will be parsed and followed further.
Recursive retrieval of http and html/css content is breadth-first. This means that Wget first downloads the requested document, then the documents linked from that document, then the documents linked by them, and so on. In other words, Wget first downloads the documents at depth 1, then those at depth 2, and so on until the specified maximum depth.
The maximum depth to which the retrieval may descend is specified with the -l option. The default maximum depth is five layers.
When retrieving an ftp url
recursively, Wget will retrieve all the data from the given directory tree
(including the subdirectories up to the specified depth) on the remote server,
creating its mirror image locally. ftp retrieval is also
limited by the
depth parameter. Unlike http
recursion, ftp recursion is performed depth-first.
By default, Wget will create a local directory tree, corresponding to the one found on the remote server.
Recursive retrieving can find a number of applications, the most important of which is mirroring. It is also useful for www presentations, and for any other situation where slow network connections should be bypassed by storing the files locally.
You should be warned that recursive downloads can overload the remote servers. Because of that, many administrators frown upon them and may ban access from your site if they detect very fast downloads of big amounts of content. When downloading from Internet servers, consider using the -w option to introduce a delay between accesses to the server. The download will take a while longer, but the server administrator will not be alarmed by your rudeness.
Of course, recursive download may cause problems on your machine. If left to run unchecked, it can easily fill up the disk. If downloading from local network, it can also take bandwidth on the system, as well as consume memory and CPU.
Try to specify the criteria that match the kind of download you are trying to achieve. If you want to download only one page, use --page-requisites without any additional recursion. If you want to download things under one directory, use -np to avoid downloading things from other directories. If you want to download all the files from one directory, use -l 1 to make sure the recursion depth never exceeds one. See Following Links, for more information about this.
Recursive retrieval should be used with care. Don't say you were not warned.
When retrieving recursively, one does not wish to retrieve loads of unnecessary data. Most of the time the users bear in mind exactly what they want to download, and want Wget to follow only specific links.
For example, if you wish to download the music archive from fly.srk.fer.hr, you will not want to download all the home pages that happen to be referenced by an obscure part of the archive.
Wget possesses several mechanisms that allow you to fine-tune which links it will follow.
Wget's recursive retrieval normally refuses to visit hosts different than the one you specified on the command line. This is a reasonable default; without it, every retrieval would have the potential to turn your Wget into a small version of google.
However, visiting different hosts, or host spanning, is sometimes a useful option. Maybe the images are served from a different server. Maybe you're mirroring a site that consists of pages interlinked between three servers. Maybe the server has two equivalent names, and the html pages refer to both interchangeably.
wget -rH -Dserver.com http://www.server.com/
You can specify more than one address by separating them with a comma, e.g.
wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu \
     http://www.foo.edu/
When downloading material from the web, you will often want to restrict the retrieval to only certain file types. For example, if you are interested in downloading gifs, you will not be overjoyed to get loads of PostScript documents, and vice versa.
Wget offers two options to deal with this problem. Each option description lists a short name, a long name, and the equivalent command in .wgetrc.
So, specifying wget -A gif,jpg will make Wget download only the files ending with gif or jpg, i.e. gifs and jpegs. On the other hand, wget -A "zelazny*196[0-9]*" will download only files beginning with zelazny and containing numbers from 1960 to 1969 anywhere within. Look up the manual of your shell for a description of how pattern matching works.
Of course, any number of suffixes and patterns can be combined into a comma-separated list, and given as an argument to -A.
So, if you want to download a whole page except for the cumbersome mpegs and .au files, you can use wget -R mpg,mpeg,au. Analogously, to download all files except the ones beginning with bjork, use wget -R "bjork*". The quotes are to prevent expansion by the shell.
The -A and -R options may be combined to achieve even better fine-tuning of which files to retrieve. E.g. wget -A "*zelazny*" -R .ps will download all the files having zelazny as a part of their name, but not the PostScript files.
Note that these two options do not affect the downloading of html files (as determined by a .htm or .html filename suffix). This behavior may not be desirable for all users, and may be changed for future versions of Wget.
Note, too, that query strings (strings at the end of a URL beginning with a question mark (?)) are not included as part of the filename for accept/reject rules, even though these will actually contribute to the name chosen for the local file. It is expected that a future version of Wget will provide an option to allow matching against query strings.
Finally, it's worth noting that the accept/reject lists are matched twice against downloaded files: once against the URL's filename portion, to determine if the file should be downloaded in the first place; then, after it has been accepted and successfully downloaded, the local file's name is also checked against the accept/reject lists to see if it should be removed. The rationale was that, since .htm and .html files are always downloaded regardless of accept/reject rules, they should be removed after being downloaded and scanned for links, if they did match the accept/reject lists. However, this can lead to unexpected results, since the local filenames can differ from the original URL filenames in the following ways, all of which can change whether an accept/reject rule matches:
This behavior, too, is considered less-than-desirable, and may change in a future version of Wget.
Regardless of other link-following facilities, it is often useful to place the restriction of what files to retrieve based on the directories those files are placed in. There can be many reasons for this: the home pages may be organized in a reasonable directory structure; or some directories may contain useless information, e.g. /cgi-bin or /dev directories.
Wget offers three different options to deal with this requirement. Each option description lists a short name, a long name, and the equivalent command in .wgetrc.
So, if you wish to download from http://host/people/bozo/ following only links to bozo's colleagues in the /people directory and the bogus scripts in /cgi-bin, you can specify:
wget -I /people,/cgi-bin http://host/people/bozo/
The same as with -A/-R, these two options can be combined to get a better fine-tuning of downloading subdirectories. E.g. if you want to load all the files from /pub hierarchy except for /pub/worthless, specify -I/pub -X/pub/worthless.
The --no-parent option (short -np) is useful in this case. Using it guarantees that you will never leave the existing hierarchy. Supposing you issue Wget with:
wget -r --no-parent http://somehost/~luzer/my-archive/
You may rest assured that none of the references to /~his-girls-homepage/ or /~luzer/all-my-mpegs/ will be followed. Only the archive you are interested in will be downloaded. Essentially, --no-parent is similar to -I/~luzer/my-archive, only it handles redirections in a more intelligent fashion.
Note that, for HTTP (and HTTPS), the trailing slash is very important to --no-parent. HTTP has no concept of a directory; Wget relies on you to indicate what's a directory and what isn't. In http://foo/bar/, Wget will consider bar to be a directory, while in http://foo/bar (no trailing slash), bar will be considered a filename (so --no-parent would be meaningless, as its parent is /).
When -L is turned on, only the relative links are ever followed. Relative links are here defined as those that do not refer to the web server root. For example, these links are relative:
<a href="foo.gif"> <a href="foo/bar.gif"> <a href="../foo/bar.gif">
These links are not relative:
<a href="/foo.gif"> <a href="/foo/bar.gif"> <a href="http://www.server.com/foo/bar.gif">
Using this option guarantees that recursive retrieval will not span hosts, even without -H. In simple cases it also allows downloads to just work without having to convert links.
This option is probably not very useful and might be removed in a future release.
The rules for ftp are somewhat specific, as it is necessary for them to be. ftp links in html documents are often included for purposes of reference, and it is often inconvenient to download them by default.
To have ftp links followed from html documents, you need to specify the --follow-ftp option. Having done that, ftp links will span hosts regardless of -H setting. This is logical, as ftp links rarely point to the same host where the http server resides. For similar reasons, the -L option has no effect on such downloads. On the other hand, domain acceptance (-D) and suffix rules (-A and -R) apply normally.
Also note that followed links to ftp directories will not be retrieved recursively further.
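For example, to follow ftp links encountered during a recursive download (the URL is illustrative):
wget -r --follow-ftp http://example.com/downloads.html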
One of the most important aspects of mirroring information from the Internet is updating your archives.
Downloading the whole archive again and again, just to replace a few changed files is expensive, both in terms of wasted bandwidth and money, and the time to do the update. This is why all the mirroring tools offer the option of incremental updating.
Such an updating mechanism means that the remote server is scanned in search of new files. Only those new files will be downloaded in the place of the old ones.
A file is considered new if one of these two conditions is met:
1. A file of that name does not already exist locally.
2. A file of that name does exist, but the remote file was modified more recently than the local file.
To implement this, the program needs to be aware of the time of last modification of both local and remote files. We call this information the time-stamp of a file.
The time-stamping in GNU Wget is turned on using --timestamping (-N) option, or through
timestamping = on
directive in .wgetrc. With this option, for
each file it intends to download, Wget will check whether a local file of the
same name exists. If it does, and the remote file is not newer, Wget will not download it.
If the local file does not exist, or the sizes of the files do not match, Wget will download the remote file no matter what the time-stamps say.
The usage of time-stamping is simple. Say you would like to download a file so that it keeps its date of modification.
wget -S http://www.gnu.ai.mit.edu/
ls -l shows that the time stamp on the local file
equals the state of the
Last-Modified header, as returned by the
server. As you can see, the time-stamping info is preserved locally, even
without -N (at least for http).
Several days later, you would like Wget to check if the remote file has changed, and download it if it has.
wget -N http://www.gnu.ai.mit.edu/
Wget will ask the server for the last-modified date. If the local file has the same timestamp as the server, or a newer one, the remote file will not be re-fetched. However, if the remote file is more recent, Wget will proceed to fetch it.
The same goes for ftp. For example:
wget "ftp://ftp.gnu.org/pub/gnu/*"
(The quotes around that URL are to prevent the shell from trying to interpret the *.)
After download, a local directory listing will show that the timestamps match those on the remote server. Reissuing the command with -N will make Wget re-fetch only the files that have been modified since the last download.
If you wished to mirror the GNU archive every week, you would use a command like the following, weekly:
wget --timestamping -r ftp://ftp.gnu.org/pub/gnu/
Note that time-stamping will only work for files for which the server gives a
timestamp. For http, this depends on getting a
Last-Modified header. For ftp, this depends
on getting a directory listing with dates in a format that Wget can parse (see FTP Time-Stamping).
Time-stamping in http is implemented by checking the Last-Modified header. If you wish to retrieve the file foo.html through http, Wget will check whether foo.html exists locally. If it doesn't, foo.html will be retrieved unconditionally.
If the file does exist locally, Wget will first check its local time-stamp (similar to the way ls -l checks it), and then send a HEAD request to the remote server, demanding the information on the remote file. The Last-Modified header is examined to find which file was modified more recently (which makes it newer). If the remote file is newer, it will be downloaded; if it is older, Wget will give up.
When --backup-converted (-K) is specified in conjunction with -N, server file X is compared to local file X.orig, if extant, rather than being compared to local file X, which will always differ if it's been converted by --convert-links (-k).
Arguably, http time-stamping should be implemented using the If-Modified-Since request.
In theory, ftp time-stamping works much the same as http, only ftp has no headers: time-stamps must be ferreted out of directory listings.
If an ftp download is recursive or uses globbing, Wget
will use the ftp
LIST command to get a file
listing for the directory containing the desired file(s). It will try to analyze
the listing, treating it like Unix
ls -l output, extracting the
time-stamps. The rest is exactly the same as for http.
Note that when retrieving individual files from an ftp
server without using globbing or recursion, listing files will not be downloaded
(and thus files will not be time-stamped) unless -N is specified.
The assumption that every directory listing is a Unix-style listing may sound extremely constraining, but in practice it is not, as many non-Unix ftp servers use the Unixoid listing format because most (all?) of the clients understand it. Bear in mind that rfc959 defines no standard way to get a file list, let alone the time-stamps. We can only hope that a future standard will define this.
Another non-standard solution includes the use of the MDTM command that is supported by some ftp servers (including wu-ftpd), which returns the exact time of the specified file. Wget may support this command in the future.
Once you know how to change default settings of Wget through command-line arguments, you may wish to make some of those settings permanent. You can do that in a convenient way by creating the Wget startup file, .wgetrc.
While .wgetrc is the main initialization file, it is convenient to have a special facility for storing passwords. Thus Wget reads and interprets the contents of $HOME/.netrc, if it finds it. You can find the .netrc format in your system manuals.
Wget reads .wgetrc upon startup, recognizing a limited set of commands.
When initializing, Wget will look for a global startup file, /usr/local/etc/wgetrc by default (or some prefix other than /usr/local, if Wget was not installed there) and read commands from there, if it exists.
Then it will look for the user's file. If the environment variable WGETRC is set, Wget will try to load that file. Failing that, no further attempts will be made. If WGETRC is not set, Wget will try to load $HOME/.wgetrc.
The fact that user's settings are loaded after the system-wide ones means that in case of collision user's wgetrc overrides the system-wide wgetrc (in /usr/local/etc/wgetrc by default). Fascist admins, away!
The syntax of a wgetrc command is simple:
variable = value
The variable will also be called command. Valid values are different for different commands.
The commands are case-insensitive and underscore-insensitive. Thus DIr__PrefiX is the same as dirprefix. Empty lines, lines beginning with # and lines containing white-space only are discarded.
Commands that expect a comma-separated list will clear the list on an empty command. So, if you wish to reset the rejection list specified in the global wgetrc, you can do it with:
reject =
The complete set of commands is listed below. Legal values are listed after the =. Simple Boolean values can be set or unset using on and off or 1 and 0.
Some commands take pseudo-arbitrary values. address values can be hostnames or dotted-quad IP addresses. n can be any positive integer, or inf for infinity, where appropriate. string values can be any non-empty string.
Most of these commands have direct command-line equivalents. Also, any wgetrc command can be specified on the command line using the --execute switch (see Basic Startup Options.)
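For instance (the url is a placeholder), the wgetrc command timestamping = on can equally well be given on the command line as:
wget -e timestamping=on http://www.example.com/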
The password command (which sets the password for both ftp and http retrieval) used to be named passwd prior to Wget 1.10, and the user command used to be named login.
The ignore_length command, when set to on, makes Wget ignore the Content-Length header; the same as --ignore-length.
This is the sample initialization file, as given in the distribution. It is divided in two sections, one for global usage (suitable for the global startup file), and one for local usage (suitable for $HOME/.wgetrc). Be careful about the things you change.
Note that almost all the lines are commented out. For a command to have any effect, you must remove the # character at the beginning of its line.
###
### Sample Wget initialization file .wgetrc
###

## You can use this file to change the default behaviour of wget or to
## avoid having to type many many command-line options. This file does
## not contain a comprehensive list of commands -- look at the manual
## to find out what you can put into this file.
##
## Wget initialization file can reside in /usr/local/etc/wgetrc
## (global, for all users) or $HOME/.wgetrc (for a single user).
##
## To use the settings in this file, you will have to uncomment them,
## as well as change them, in most cases, as the values on the
## commented-out lines are the default values (e.g. "off").

## Global settings (useful for setting up in /usr/local/etc/wgetrc).
## Think well before you change them, since they may reduce wget's
## functionality, and make it behave contrary to the documentation:
##

# You can set retrieve quota for beginners by specifying a value
# optionally followed by 'K' (kilobytes) or 'M' (megabytes). The
# default quota is unlimited.
#quota = inf

# You can lower (or raise) the default number of retries when
# downloading a file (default is 20).
#tries = 20

# Lowering the maximum depth of the recursive retrieval is handy to
# prevent newbies from going too "deep" when they unwittingly start
# the recursive retrieval. The default is 5.
#reclevel = 5

# By default Wget uses "passive FTP" transfer where the client
# initiates the data connection to the server rather than the other
# way around. That is required on systems behind NAT where the client
# computer cannot be easily reached from the Internet. However, some
# firewall software explicitly supports active FTP and in fact has
# problems supporting passive transfer. If you are in such
# environment, use "passive_ftp = off" to revert to active FTP.
#passive_ftp = off

# The "wait" command below makes Wget wait between every connection.
# If, instead, you want Wget to wait only between retries of failed
# downloads, set waitretry to maximum number of seconds to wait (Wget
# will use "linear backoff", waiting 1 second after the first failure
# on a file, 2 seconds after the second failure, etc. up to this max).
#waitretry = 10

##
## Local settings (for a user to set in his $HOME/.wgetrc). It is
## *highly* undesirable to put these settings in the global file, since
## they are potentially dangerous to "normal" users.
##
## Even when setting up your own ~/.wgetrc, you should know what you
## are doing before doing so.
##

# Set this to on to use timestamping by default:
#timestamping = off

# It is a good idea to make Wget send your email address in a `From:'
# header with your request (so that server administrators can contact
# you in case of errors). Wget does *not* send `From:' by default.
#header = From: Your Name <firstname.lastname@example.org>

# You can set up other headers, like Accept-Language. Accept-Language
# is *not* sent by default.
#header = Accept-Language: en

# You can set the default proxies for Wget to use for http, https, and ftp.
# They will override the value in the environment.
#https_proxy = http://proxy.yoyodyne.com:18023/
#http_proxy = http://proxy.yoyodyne.com:18023/
#ftp_proxy = http://proxy.yoyodyne.com:18023/

# If you do not want to use proxy at all, set this to off.
#use_proxy = on

# You can customize the retrieval outlook. Valid options are default,
# binary, mega and micro.
#dot_style = default

# Setting this to off makes Wget not download /robots.txt. Be sure to
# know *exactly* what /robots.txt is and how it is used before changing
# the default!
#robots = on

# It can be useful to make Wget wait between connections. Set this to
# the number of seconds you want Wget to wait.
#wait = 0

# You can force creating directory structure, even if a single file is
# being retrieved, by setting this to on.
#dirstruct = off

# You can turn on recursive retrieving by default (don't do this if
# you are not sure you know what it means) by setting this to on.
#recursive = off

# To always back up file X as X.orig before converting its links (due
# to -k / --convert-links / convert_links = on having been specified),
# set this variable to on:
#backup_converted = off

# To have Wget follow FTP links from HTML files by default, set this
# to on:
#follow_ftp = off

# To try ipv6 addresses first:
#prefer-family = IPv6

# Set default IRI support state
#iri = off

# Force the default system encoding
#locale = UTF-8

# Force the default remote server encoding
#remoteencoding = UTF-8
The examples are divided into three sections loosely based on their complexity.
wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &
The ampersand at the end of the line makes sure that Wget works in the background. To unlimit the number of retries, use -t inf.
wget ftp://ftp.gnu.org/pub/gnu/
links index.html
(Wget retrieves the ftp directory listing and converts it to an index.html file, which you can then view with a browser such as links.)
wget -i file
If you specify - as file name, the urls will be read from standard input.
wget -r http://www.gnu.org/ -o gnulog
wget --convert-links -r http://www.gnu.org/ -o gnulog
wget -p --convert-links http://www.server.com/dir/page.html
The html page will be saved to www.server.com/dir/page.html, and the images, stylesheets, etc., somewhere under www.server.com/, depending on where they were on the remote server.
wget -p --convert-links -nH -nd -Pdownload \
     http://www.server.com/dir/page.html
wget -S http://www.lycos.com/
wget --save-headers http://www.lycos.com/
more index.html
wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
More verbose, but the effect is the same. -r -l1 means to retrieve recursively (see Recursive Download), with maximum depth of 1. --no-parent means that references to the parent directory are ignored (see Directory-Based Limits), and -A.gif means to download only the gif files. -A "*.gif" would have worked too.
wget -nc -r http://www.gnu.org/
Note, however, that this usage is not advisable on multi-user systems
because it reveals your password to anyone who looks at the output of ps.
wget -O - http://jagor.srce.hr/ http://www.srce.hr/
You can also combine the two options and make pipelines to retrieve the documents from remote hotlists:
wget -O - http://cool.list.com/ | wget --force-html -i -
crontab
0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog
wget --mirror --convert-links --backup-converted \
     http://www.gnu.org/ -o /home/me/weeklog
wget --mirror --convert-links --backup-converted \
     --html-extension -o /home/me/weeklog \
     http://www.gnu.org/
Or, with less typing:
wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog
This chapter contains all the stuff that could not fit anywhere else.
Proxies are special-purpose http servers designed to transfer data from remote servers to local clients. One typical use of proxies is lightening network load for users behind a slow connection. This is achieved by channeling all http and ftp requests through the proxy, which caches the transferred data. When a cached resource is requested again, the proxy will return the data from its cache. Another use for proxies is for companies that separate (for security reasons) their internal networks from the rest of the Internet. In order to obtain information from the Web, their users connect and retrieve remote data using an authorized proxy.
Wget supports proxies for both http and ftp retrievals. The standard way to specify proxy location, which Wget recognizes, is using the following environment variables:
http_proxy, https_proxy: if set, these variables should contain the urls of the proxies for http and https connections respectively.
ftp_proxy: this variable should contain the url of the proxy for ftp connections. It is quite common that http_proxy and ftp_proxy are set to the same url.
no_proxy: this variable should contain a comma-separated list of domain extensions for which the proxy should not be used. For instance, if the value of no_proxy is .mit.edu, the proxy will not be used to retrieve documents from MIT.
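A Bourne-shell sketch, reusing the placeholder proxy from the sample wgetrc above:
export http_proxy=http://proxy.yoyodyne.com:18023/
export https_proxy=http://proxy.yoyodyne.com:18023/
export ftp_proxy=http://proxy.yoyodyne.com:18023/
export no_proxy=.mit.edu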
In addition to the environment variables, proxy location and settings may be specified from within Wget itself.
Some proxy servers require authorization to enable you to use them. The
authorization consists of username and password, which
must be sent by Wget. As with http authorization, several
authentication schemes exist. For proxy authorization only the
Basic authentication scheme is currently implemented.
You may specify your username and password either through the proxy url or through the command-line options. Assuming that the company's proxy is located at proxy.company.com at port 8001, a proxy url location containing authorization data might look like this:
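http://user:password@proxy.company.com:8001/
(Here user and password stand in for your actual proxy credentials.)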
Alternatively, you may use the --proxy-user and --proxy-password options, and the equivalent .wgetrc settings proxy_user and proxy_password to set the proxy username and password.
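For example (credentials and url are placeholders):
wget --proxy-user=user --proxy-password=password http://www.example.com/
or, in .wgetrc:
proxy_user = user
proxy_password = password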
Like all GNU utilities, the latest version of Wget can be found at the master GNU archive site ftp.gnu.org, and its mirrors. For example, Wget 1.13.4 can be found at ftp://ftp.gnu.org/pub/gnu/wget/wget-1.13.4.tar.gz
The official web site for GNU Wget is at http://www.gnu.org/software/wget/. However, most useful information resides at The Wget Wgiki, http://wget.addictivecode.org/.
The primary mailinglist for discussion, bug-reports, or questions about GNU Wget is at email@example.com. To subscribe, send an email to firstname.lastname@example.org, or visit http://lists.gnu.org/mailman/listinfo/bug-wget.
You do not need to subscribe to send a message to the list; however, please note that unsubscribed messages are moderated, and may take a while before they hit the list, usually around a day. If you want your message to show up immediately, please subscribe to the list before posting. Archives for the list may be found at http://lists.gnu.org/pipermail/bug-wget/.
An NNTP/Usenettish gateway is also available via Gmane. You can see the Gmane archives at http://news.gmane.org/gmane.comp.web.wget.general. Note that the Gmane archives conveniently include messages from both the current list, and the previous one. Messages also show up in the Gmane archives sooner than they do at lists.gnu.org.
Additionally, there is the email@example.com mailing list. This is a non-discussion list that receives bug report notifications from the bug-tracker. To subscribe to this list, send an email to firstname.lastname@example.org, or visit http://addictivecode.org/mailman/listinfo/wget-notify.
Previously, the mailing list email@example.com was used as the main discussion list, and another list, firstname.lastname@example.org was used for submitting and discussing patches to GNU Wget.
Archives of messages from both of these older lists remain available through the Gmane archives mentioned above.
In addition to the mailinglists, we also have a
support channel set up via IRC at irc.freenode.org,
#wget. Come check it out!
You are welcome to submit bug reports via the GNU Wget bug tracker (see http://wget.addictivecode.org/BugTracker).
Before actually submitting a bug report, please try to follow a few simple guidelines.
Also, while I will probably be interested to know the contents of your .wgetrc file, just dumping it into the debug message is probably a bad idea. Instead, you should first try to see if the bug repeats with .wgetrc moved out of the way. Only if it turns out that .wgetrc settings affect the bug, mail me the relevant parts of the file.
Note: please make sure to remove any potentially sensitive information from
the debug log before sending it to the bug address. The
-d option will not go out of its way to collect sensitive information, but the log will
contain a fairly complete transcript of Wget's communication with the server,
which may include passwords and pieces of downloaded data. Since the bug
address is publically archived, you may assume that all bug reports are
visible to the public.
If Wget has crashed, try to run it in a debugger, e.g. gdb `which wget` core and type where to get the backtrace. This may not work if the system administrator has disabled core files, but it is safe to try.
Like all GNU software, Wget works on the GNU system. However, since it uses GNU Autoconf for building and configuring, and mostly avoids using special features of any particular Unix, it should compile (and work) on all common Unix flavors.
Various Wget versions have been compiled and tested under many kinds of Unix systems, including GNU/Linux, Solaris, SunOS 4.x, Mac OS X, OSF (aka Digital Unix or Tru64), Ultrix, *BSD, IRIX, AIX, and others. Some of those systems are no longer in widespread use and may not be able to support recent versions of Wget. If Wget fails to compile on your system, we would like to know about it.
Thanks to kind contributors, this version of Wget compiles and works on 32-bit Microsoft Windows platforms. It has been compiled successfully using MS Visual C++ 6.0, Watcom, Borland C, and GCC compilers. Naturally, it lacks some of the features available on Unix, but it should work as a substitute for people stuck with Windows. Note that Windows-specific portions of Wget are not guaranteed to be supported in the future, although this has been the case in practice for many years now. All questions and problems in Windows usage should be reported to the Wget mailing list at email@example.com where the volunteers who maintain the Windows-related features might look at them.
Support for building on MS-DOS via DJGPP has been contributed by Gisle Vanem; a port to VMS is maintained by Steven Schweda, and is available at http://antinode.org/.
Since the purpose of Wget is background work, it catches the hangup signal
(SIGHUP) and ignores it. If the output was on standard output, it
will be redirected to a file named wget-log. Otherwise,
SIGHUP is ignored.
This is convenient when you wish to redirect the output of Wget after having started it.
$ wget http://www.gnus.org/dist/gnus.tar.gz &
...
$ kill -HUP %%
SIGHUP received, redirecting output to `wget-log'.
Other than that, Wget will not try to interfere with signals in any way.
C-c, kill -TERM and kill -KILL should kill it alike.
This chapter contains some references I consider useful.
It is extremely easy to make Wget wander aimlessly around a web site, sucking all the available data along the way. Just type wget -r site, and you're set. Great? Not for the server admin.
As long as Wget is only retrieving static pages, and doing it at a reasonable
rate (see the --wait option), there's not
much of a problem. The trouble is that Wget can't tell the difference between
the smallest static page and the most demanding CGI. A site I know has a section
handled by a CGI Perl script that converts Info files to html on the fly. The script is slow, but works well enough for
human users viewing an occasional Info file. However, when someone's recursive
Wget download stumbles upon the index page that links to all the Info files
through the script, the system is brought to its knees without providing
anything useful to the user. (This task of converting Info files could be done
locally, and access to Info documentation for all installed GNU software on a
system is available from the info command.)
To avoid this kind of accident, as well as to preserve privacy for documents that need to be protected from well-behaved robots, the concept of robot exclusion was invented. The idea is that the server administrators and document authors can specify which portions of the site they wish to protect from robots and those they will permit access.
The most popular mechanism, and the de facto standard supported by all the major robots, is the Robots Exclusion Standard (RES) written by Martijn Koster et al. in 1994. It specifies the format of a text file containing directives that instruct the robots which URL paths to avoid. To be found by the robots, the specifications must be placed in /robots.txt in the server root, which the robots are expected to download and parse.
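A minimal /robots.txt might look like this (the path is only an example); it asks all robots to stay out of the /cgi-bin/ tree:
User-agent: *
Disallow: /cgi-bin/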
Although Wget is not a web robot in the strictest sense of the word, it can download large parts of the site without the user's intervention to download an individual page. Because of that, Wget honors RES when downloading recursively. For instance, when you issue:
wget -r http://www.server.com/
First the index of www.server.com will be downloaded. If Wget finds that it wants to download more documents from that server, it will request http://www.server.com/robots.txt and, if found, use it for further downloads. robots.txt is loaded only once per server.
Until version 1.8, Wget supported the first version of the standard, written by Martijn Koster in 1994 and available at http://www.robotstxt.org/wc/norobots.html. As of version 1.8, Wget has supported the additional directives specified in the internet draft <draft-koster-robots-00.txt> titled A Method for Web Robots Control. The draft, which has as far as I know never made it to an rfc, is available at http://www.robotstxt.org/wc/norobots-rfc.txt.
This manual no longer includes the text of the Robot Exclusion Standard.
The second, less known mechanism, enables the author of an individual
document to specify whether they want the links from the file to be followed by
a robot. This is achieved using the
META tag, like this:
<meta name="robots" content="nofollow">
This is explained in some detail at http://www.robotstxt.org/wc/meta-user.html. Wget supports this method of robot exclusion in addition to the usual /robots.txt exclusion.
If you know what you are doing and really really wish to turn off the robot
exclusion, set the
robots variable to off in your .wgetrc. You can achieve the same effect from the
command line using the
-e switch, e.g. wget
-e robots=off url....
When using Wget, you must be aware that it sends unencrypted passwords through the network, which may present a security problem. Here are the main issues, and some solutions.
The passwords given on the command line are visible to anyone using ps. The best way around it is to use
wget -i - and feed the urls to Wget's standard input, each on a separate line, terminated by C-d. Another workaround is to use .netrc to store passwords; however, storing unencrypted passwords is also considered a security risk.
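A minimal sketch, with placeholder host and credentials, using a shell here-document instead of typing the url interactively:
wget -i - <<EOF
ftp://myname:mypassword@ftp.example.com/file.txt
EOF
The password never appears on Wget's command line, so it is not visible in the output of ps.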
GNU Wget was written by Hrvoje Niksic firstname.lastname@example.org.
However, the development of Wget could never have gone as far as it has, were it not for the help of many people, either with bug reports, feature proposals, patches, or letters saying Thanks!.
Special thanks goes to the following people (no particular order):
Dan Harkless, who contributed a lot of high-quality code and documentation, as well as --page-requisites and related options. He was the principal maintainer for some time and released Wget 1.6.
Kaveh R. Ghazi, for on-the-fly ansi2knr-ization and lots of portability fixes.
The following people have provided patches, bug/build reports, useful suggestions, beta testing services, fan mail and all the other things that make maintenance so much fun:
Tim Adam, Adrian Aichner, Martin Baehr, Dieter Baron, Roger Beeman, Dan Berger, T. Bharath, Christian Biere, Paul Bludov, Daniel Bodea, Mark Boyns, John Burden, Julien Buty, Wanderlei Cavassin, Gilles Cedoc, Tim Charron, Noel Cragg, Kristijan Conkas, John Daily, Andreas Damm, Ahmon Dancy, Andrew Davison, Bertrand Demiddelaer, Alexander Dergachev, Andrew Deryabin, Ulrich Drepper, Marc Duponcheel, Damir Dzeko, Alan Eldridge, Hans-Andreas Engel, Aleksandar Erkalovic, Andy Eskilsson, Joao Ferreira, Christian Fraenkel, David Fritz, Mike Frysinger, Charles C. Fu, FUJISHIMA Satsuki, Masashi Fujita, Howard Gayle, Marcel Gerrits, Lemble Gregory, Hans Grobler, Alain Guibert, Mathieu Guillaume, Aaron Hawley, Jochen Hein, Karl Heuer, Madhusudan Hosaagrahara, HIROSE Masaaki, Ulf Harnhammar, Gregor Hoffleit, Erik Magnus Hulthen, Richard Huveneers, Jonas Jensen, Larry Jones, Simon Josefsson, Mario Juric, Hack Kampbjorn, Const Kaplinsky, Goran Kezunovic, Igor Khristophorov, Robert Kleine, KOJIMA Haime, Fila Kolodny, Alexander Kourakos, Martin Kraemer, Sami Krank, Jay Krell, Simos KSenitellis, Christian Lackas, Hrvoje Lacko, Daniel S. Lewart, Nicolas Lichtmeier, Dave Love, Alexander V. Lukyanov, Thomas Lussnig, Andre Majorel, Aurelien Marchand, Matthew J. Mellon, Jordan Mendelson, Ted Mielczarek, Robert Millan, Lin Zhe Min, Jan Minar, Tim Mooney, Keith Moore, Adam D. Moss, Simon Munton, Charlie Negyesi, R. K. Owen, Jim Paris, Kenny Parnell, Leonid Petrov, Simone Piunno, Andrew Pollock, Steve Pothier, Jan Prikryl, Marin Purgar, Csaba Raduly, Keith Refson, Bill Richardson, Tyler Riddle, Tobias Ringstrom, Jochen Roderburg, Juan Jose Rodriguez, Maciej W. Rozycki, Edward J. Sabol, Heinz Salzmann, Robert Schmidt, Nicolas Schodet, Benno Schulenberg, Andreas Schwab, Steven M. Schweda, Chris Seawood, Pranab Shenoy, Dennis Smit, Toomas Soome, Tage Stabell-Kulo, Philip Stadermann, Daniel Stenberg, Sven Sternberger, Markus Strasser, John Summerfield, Szakacsits Szabolcs, Mike Thomas, Philipp Thomas, Mauro Tortonesi, Dave Turner, Gisle Vanem, Rabin Vincent, Russell Vincent, Zeljko Vrba, Charles G Waldman, Douglas E. Wegscheid, Ralf Wildenhues, Joshua David Williams, Benjamin Wolsey, Saint Xavier, YAMAZAKI Makoto, Jasmin Zainul, Bojan Zdrnja, Kristijan Zimmer, Xin Zou.
Apologies to all who I accidentally left out, and many thanks to all the subscribers of the Wget mailing list.
Copyright © 2000, 2001, 2002, 2007, 2008, 2009, 2010, 2011 Free Software Foundation, Inc. http://fsf.org/ Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The purpose of this License is to make a manual, textbook, or other functional and useful document free in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of copyleft, which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The Document, below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as you. You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A Modified Version of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A Secondary Section is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The Invariant Sections are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The Cover Texts are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A Transparent copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not Transparent is called Opaque.
Examples of suitable formats for Transparent copies include plain ascii without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The Title Page means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, Title Page means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.
The publisher means any person or entity that distributes copies of the Document to the public.
A section Entitled XYZ means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as Acknowledgements, Dedications, Endorsements, or History.) To Preserve the Title of such a section when you modify the Document means that it remains a section Entitled XYZ according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section Entitled Endorsements, provided it contains nothing but endorsements of your Modified Version by various parties; for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled History in the various original documents, forming one section Entitled History; likewise combine any sections Entitled Acknowledgements, and any sections Entitled Dedications. You must delete all sections Entitled Endorsements.
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an aggregate if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled Acknowledgements, Dedications, or History, the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License.
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it.
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License or any later version applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Document.
Massive Multiauthor Collaboration Site (or MMC Site) means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A Massive Multiauthor Collaboration (or MMC) contained in the site means any set of copyrightable works thus published on the MMC site.
CC-BY-SA means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco, California, as well as future copyleft versions of that license published by that same organization.
Incorporate means to publish or republish a Document, in whole or in part, as part of another Document.
An MMC is eligible for relicensing if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008.
The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing.
To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:
Copyright (C) year your name. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled ``GNU Free Documentation License''.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the "with...Texts." line with this:
with the Invariant Sections being list their titles, with the Front-Cover Texts being list, and with the Back-Cover Texts being list.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.