Bug In Wget
From: address@hidden
Subject: Bug In Wget
Date: Wed, 10 Mar 2021 02:29:42 +0200
Hello, Bug-wget.
I have found a bug in wget. A real bug, and I am not mistaken: I wrote here
once already, and now I am writing a second time.
I have found one unpleasant peculiarity of Wget: sometimes it cannot
completely copy a site recursively. This happens when the site's pages and
directories are generated dynamically from a database (MySQL) and do not
exist as real files, for example the pages of an online shop.
Command:
wget.exe -x -c --no-check-certificate -i getprom1.txt -P ".\shop1"
where getprom1.txt contains the list of URLs:
https://modastori.prom.ua/g39944845-zhenskaya-obuv
https://modastori.prom.ua/g39944845-zhenskaya-obuv/page_2
we get one file:
./shop1/g39944845-zhenskaya-obuv/page_2
And with the command
wget.exe -x -c --no-check-certificate -i getprom2.txt -P ".\shop2"
where getprom2.txt contains the same URLs in reverse order:
https://modastori.prom.ua/g39944845-zhenskaya-obuv/page_2
https://modastori.prom.ua/g39944845-zhenskaya-obuv
we get two files:
./shop2/g39944845-zhenskaya-obuv/page_2
./shop2/g39944845-zhenskaya-obuv.1
The URL lists are identical, but we get two different results.
In the first case wget cannot create the files correctly: it creates the
first file, then deletes it, creates a directory with that file's name, and
creates the second file inside that directory. So the content of the first
page is lost.
When we recursively download sites like this, we lose a lot of pages.
I hope you will understand me.
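The clash described above can be sketched outside of wget: a path cannot be
both a regular file and a directory, so saving the listing page first and
then one of its subpages forces a collision. This is a minimal illustration
of the reported behaviour (the file names are taken from the report; the
delete-then-recreate step mimics what the report says wget does, it is not
wget itself):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
# Step 1: the listing URL .../g39944845-zhenskaya-obuv is saved as a plain file.
echo "listing page" > g39944845-zhenskaya-obuv
# Step 2: to store .../g39944845-zhenskaya-obuv/page_2 with -x, a directory
# of the same name is needed, which the filesystem forbids:
if ! mkdir g39944845-zhenskaya-obuv 2>/dev/null; then
  # Resolving the clash by replacing the file with a directory loses
  # the listing page's content (the behaviour the report complains about):
  rm g39944845-zhenskaya-obuv
  mkdir g39944845-zhenskaya-obuv
fi
echo "page 2" > g39944845-zhenskaya-obuv/page_2
ls -R
```

In the reverse order the directory exists first, so the listing page cannot
be written under its own name and ends up as g39944845-zhenskaya-obuv.1,
which matches the second result shown above.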
--
Best regards,
Kmb697 mailto:kmb697@yandex.ru