How to get the content of a webpage: curl vs wget?

Hello,
What I am trying to do is fetch the HTML of a website automatically.
First I decided to do it manually, so in a terminal I entered the command below:

$ wget http://www.***.*** -q -O code.html

Unfortunately the code.html file was empty.

When I entered the command below, it gave error 303-304:

$ wget http://www.***.***
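(As a side note, wget's -S / --server-response option prints the HTTP headers it receives, so a run like the one below should show exactly where those 303/304 responses are coming from:)

$ wget -S http://www.***.*** -O code.html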

When I try the command below, I see the data I want, but it only streams to the terminal window, where it can't easily be copied:

$ curl http://www.***.***

Could you please let me know if there is a way to fetch the data with curl and save it to a file?

Thanks in advance
Boris

Try

curl -L -o code.html http://website.domain

-L will follow redirects, and -o saves the response to code.html instead of printing it to the terminal.
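Equivalently, since curl writes the page to standard output by default, you can redirect that output to a file with your shell (same placeholder hostname as above):

curl -L http://website.domain > code.html

Without -L, curl saves only the body of the redirect response itself, which is often empty or just a short "Moved" page.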
