Re: [Bug-wget] Feature question ...
From: Tony Lewis
Subject: Re: [Bug-wget] Feature question ...
Date: Thu, 19 Apr 2012 08:27:52 -0700
You're looking for:
--page-requisites get all images, etc. needed to display HTML page.
wget URL --page-requisites
should give you what you need.
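If you also need everything collapsed into one directory with the page's links rewritten (as described in the quoted message below), a rough sketch combining --page-requisites with a few other standard wget options might look like this; http://example.com/page.html is a placeholder URL and the flag combination is a suggestion, not a verified recipe for Garry's exact setup:

# fetch the page plus its images/CSS/JS, even from other hosts,
# skip the directory tree, and rewrite links to point at the local copies
wget --page-requisites --span-hosts --no-directories --convert-links \
     --directory-prefix=archive http://example.com/page.html

With --no-directories, wget resolves name clashes by appending numeric suffixes (file.1, file.2, ...), which roughly gives the "unique file names" behaviour asked for, and --convert-links then rewrites the saved HTML to reference those local copies.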
-----Original Message-----
From: address@hidden
[mailto:address@hidden On Behalf Of Garry
Sent: Thursday, April 19, 2012 2:45 AM
To: address@hidden
Subject: [Bug-wget] Feature question ...
Hi,
not exactly a bug question/report, but something I was trying to get done
with wget and have either overlooked in the docs, misread, or it's simply
not possible at the moment ...
I'm trying to mirror a full web page, but with some restrictions ... I need
a single page (either the full path, or - if it's the main page in a
directory - just that index page) to be downloaded, with all embedded media
(at least images, CSS, JS includes, etc.), even if those files are not
stored on that server. As I need the information for archival purposes, I do
not want a full tree of directories rebuilt, as wget would normally do. All
the files should be downloaded and stored under unique file names in the
same directory as the page file, and of course the HTML page should be
rewritten to use relative paths to those renamed files.
Can this be done with wget? Or if not, is there a different program
(on Linux) that will do this?
Tnx, Garry