I am working on a phishing quest where I have to clone a company login form and test the colleagues who failed our cyber-security training. My plan so far is the following:
$ wget --mirror --convert-links --page-requisites https://somerandomloginform
I'm stuck at the first step: I thought --convert-links would download not only the HTML page but also all the related JS files, but that didn't happen, so maybe I misunderstood the option (see the annotated variant below).
Take the Gmail login form as an example: is it even possible to clone it with a simple command, without tools like SET (the Social-Engineer Toolkit)? And since I am a web developer, I also wonder whether there is any protection against this.
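For reference, here is the variant I have been experimenting with; the comments are my summary of the relevant entries in man wget, and the --domains value simply reuses the placeholder domain from the command above:

#  --page-requisites   download the CSS/JS/images that the HTML references
#  --convert-links     only rewrites URLs in the downloaded files; it fetches nothing by itself
#  --adjust-extension  save text/html and CSS responses with the matching file extension
#  --span-hosts        allow requisites hosted on other domains (e.g. a CDN)
#  --domains           restrict that spanning to the listed domains
$ wget --mirror --page-requisites --convert-links --adjust-extension \
       --span-hosts --domains=somerandomloginform \
       https://somerandomloginform
# Note: wget does not execute JavaScript, so anything the page injects at
# runtime is still missing from the mirror.

If that reading of the flags is wrong, it would explain what I am seeing.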
The easiest way is Ctrl+S (saving the page from the browser); a more involved option is web scraping.