I'm a bit of an amateur when it comes to using wget like this, so pardon me if my question is dumb/silly, or if I missed something in the manual.
I'm trying to figure out whether there is a way for wget to download the same content I'd see when viewing a page from a browser. The information I'm bulk downloading isn't in the page's static HTML; it gets pulled in from somewhere else after the page loads (presumably by JavaScript). A recursive download doesn't solve the problem because of various restrictions set up on the site.
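To make it concrete, here's roughly what I've been doing (the URL is just a placeholder, not the real site). A plain fetch like this returns the bare HTML, without the data the browser shows:

```shell
# Fetch the page as wget sees it -- this is only the static HTML,
# not what the browser renders after its scripts have run.
wget -O page.html "https://example.com/some/page"

# The data I actually want never appears in page.html,
# so grepping for it comes up empty.
grep "the-data-i-want" page.html
```

Is there some wget option I'm missing that would get the fully loaded version, or does the separately fetched data mean wget alone can't do this?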
Any help would be very much appreciated. Thanks in advance.