Sunday, December 20, 2009

The wget Command

Sometimes we need to download a large number of files from a directory, but the usual way of downloading them is not possible for some reason, for example because of closed ports. Suppose we want to download hundreds or thousands of files from a directory; fetching them one by one is impractical. wget has a trick for this: it can download all the files recursively from the Linux console. The command is as follows:

wget -r -l2 --no-parent -A.zip -R.html,.gif http://xxxx
note:
-r = --recursive, turn on recursive retrieval.
-l2 = --level=2, limit the recursion depth to two levels.
--no-parent = in recursive retrievals, never ascend to the parent directory.
-A = acclist, specify a comma-separated list of filename suffixes or patterns to accept.
-R = rejlist, specify a comma-separated list of filename suffixes or patterns to reject.
http://xxxx = the URL of the intended website.
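Building on the options above, the sketch below assembles a variant of the command into a variable so each flag is easy to inspect before running it. The host example.com and the extra flags -nd (don't recreate the remote directory tree locally), -c (resume interrupted downloads), and --limit-rate (cap bandwidth) are my own additions for illustration, not part of the original command.

```shell
# Hypothetical URL -- substitute the directory you actually want to mirror.
URL="http://example.com/files/"

# -nd saves everything into the current directory instead of
# recreating the remote path; -c resumes partial downloads;
# --limit-rate=200k keeps the transfer from saturating the link.
CMD="wget -r -l2 --no-parent -nd -c --limit-rate=200k -A.zip $URL"

# Print the assembled command; run it with: eval "$CMD"
echo "$CMD"
```

Echoing the command first is just a convenience for double-checking the flags; once it looks right, it can be executed directly.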

example:
airaku@airaku-desktop:~$ wget -r -l2 --no-parent -A.zip -R.html,.gif http://cran.r-project.org/bin/windows/contrib/2.8/

and the result:
--2010-03-10 13:32:01-- http://cran.r-project.org/bin/windows/contrib/2.8/
Resolving cran.r-project.org... 137.208.57.37
Connecting to cran.r-project.org|137.208.57.37|:80... connected.
HTTP request sent, awaiting response... 200 OK
Saving to: `cran.r-project.org/bin/windows/contrib/2.8/aaMI_1.0-1.zip'

[ <=> ] 364,307 105K/s in 3.4s

2010-03-10 13:32:05 (105 KB/s) - `cran.r-project.org/bin/windows/contrib/2.8/index.html' saved [364307]
