wget

don't re-retrieve files unless newer than local

  -N,  --timestamping
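
for example (the url and filename here are placeholders), a re-run only fetches the file if the remote copy is newer than the local one:

wget -N https://example.com/file.iso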

download a site recursively

wget -e robots=off -r -np --page-requisites --convert-links https://SITE
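
what the flags do: -e robots=off ignores robots.txt rules, -r recurses, -np (--no-parent) never ascends above the starting directory, --page-requisites also fetches the css/js/images each page needs, --convert-links rewrites links so the local copy browses offline.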

don't follow redirects, keep the connection alive, tee output to a file

wget --max-redirect 0  http://..../{1..1000} 2>&1 | tee file.txt
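
the {1..1000} is brace expansion done by the shell, so wget gets all 1000 urls in one invocation and can keep the connection alive between them; adding -S (--server-response) also logs the response headers, so you can see where each url would have redirected.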

recursively download images

wget -nd -r -A jpeg,jpg,bmp,gif,png http://www.domain.com
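
to keep the crawl shallow you can cap the recursion depth with -l (the depth of 2 is just an example):

wget -nd -r -l 2 -A jpeg,jpg,bmp,gif,png http://www.domain.com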

output to stdout

wget http://connect.doodle3d.com/api/signin.php -O -
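
usually combined with -q so wget's own progress output doesn't mix into the body:

wget -qO- http://connect.doodle3d.com/api/signin.php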

output folder / output directory

wget --directory-prefix=FOLDER URL
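
-P is the short form of --directory-prefix:

wget -P FOLDER URL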

wildcards (actually shell brace expansion)

wget http://site.com/c={a..z}
wget http://site.com/c={3000..4000}
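
the braces are expanded by the shell, not by wget; in bash 4+ a zero-padded range works too (url is hypothetical):

wget http://site.com/img{001..100}.jpg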

don't download if file exists

wget -nc   # or --no-clobber: skip files that already exist locally
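
full invocation (url is a placeholder):

wget -nc http://example.com/file.zip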

download urls from a file

wget -i file.txt
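
file.txt holds one url per line; combine with -N or -nc so re-running the same list is cheap:

wget -N -i file.txt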

download all files from an ftp folder

wget 'ftp://domain.com/folder/*'   # quoted so the shell doesn't eat the glob; wget expands wildcards in ftp urls itself
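
an alternative is a recursive fetch limited to that directory:

wget -r -np ftp://domain.com/folder/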

recursively rip a page or site

wget -r http://site
wget -r --no-parent http://site
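
-m (--mirror) is shorthand for -r -N -l inf --no-remove-listing, handy for a complete copy:

wget -m http://site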

basic auth

you can just supply the username and password in the URL like this:

wget http://user:password@domain.com
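
to keep the password out of the url (and out of logs that record urls), pass it as flags instead, or have wget prompt for it:

wget --user=user --password=password http://domain.com
wget --user=user --ask-password http://domain.com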