Wget/curl and JavaScript

What can I use instead of wget/curl when I need to log into websites that use JavaScript?

Wget and curl don't handle JavaScript.

Your question makes no sense.

The web uses HTTP as a universal protocol for client-server web transactions.

The server and the client are independent entities that can exchange data because of HTTP. JavaScript or not, curl or not, wget or not: these are independent and mostly unrelated issues.

That is why there are communication protocols, to decouple the application and underlying programming language from the client-server comms. In this case:

Hypertext Transfer Protocol (HTTP)

So, it should not matter if "shoestring" is talking to "elephant ear", as long as both the client and the server adhere to the same HTTP protocol standard(s).

Anyway, underlying most JavaScript implementations like Node.js (on the server side) is C++ (the V8 engine), and wget and curl are written in C. At the core of most of all this is C or C++ (not that it matters).

Regardless of the programming language, HTTP-based client-server applications can communicate (exchange data) because they follow the same set of communication protocol standards.
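To make that concrete, here is a minimal sketch in Node.js (with example.com as a placeholder host) that speaks raw HTTP/1.1 over a plain TCP socket. wget, curl, and every browser put essentially these same bytes on the wire:

```javascript
// Minimal sketch: speaking raw HTTP/1.1 over a plain TCP socket.
// Any client that writes these same bytes gets the same response,
// whether it is wget, curl, a browser, or this script.
const net = require('node:net');

const socket = net.connect(80, 'example.com', () => {
  socket.write(
    'GET / HTTP/1.1\r\n' +
    'Host: example.com\r\n' +
    'Connection: close\r\n' +
    '\r\n'
  );
});

socket.on('data', (chunk) => process.stdout.write(chunk));
socket.on('end', () => console.log('\n--- connection closed ---'));
```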

Wget and curl sometimes don't show the same content as web browsers do. The parts generated by JavaScript don't seem to be included in wget and curl output.

Your reply has little to do with your original statement:

I have no idea (without a wild guess on my part) what that means. Please be precise when talking tech :slight_smile:

I work with server-side data every day, and I have not used wget or curl to pull any web site data for a web application in over 6 years, maybe longer.

Most modern-day web developers use JavaScript libraries to pull data from the web when building a web app.

Of course, we all occasionally use wget and curl for simple tasks like pulling a single file from a web site, or some other very simple function. But in general, even that is rapidly becoming obsolete, as most web developers use GitHub for their repos and use git to store and push the data.

And of course, some people still use wget- and curl-like tools when they want to spider another web site and try to pull its data (often without the website owner's permission).

It would help if you would explain what you are trying to do.

Are you trying to "spider" or "content scrape" web sites?

It does not sound like you are building a web app.

What are you doing, really?

I'm creating login, download and upload scripts for different websites, and have become curious during the process.

I'm not doing anything against anyone's permission.

To some extent you can work around JavaScript things by reverse-engineering the JavaScript and figuring out what webpage data is actually being sent where. Essentially you're working out the "protocol" so you can throw away the JavaScript and do it yourself. This is tedious and fraught.
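As a rough illustration, suppose the browser's network inspector shows the login page POSTing form fields to /login and the server handing back a session cookie. A Node.js sketch of replaying that (the URL, field names, and cookie handling are all hypothetical; every site differs):

```javascript
// Hypothetical sketch of replaying a reverse-engineered login.
// The URL, form fields, and cookie name are made up; substitute
// whatever the browser's network inspector actually shows.
// Uses the global fetch available in Node.js 18+.
async function login(username, password) {
  const resp = await fetch('https://example.com/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ user: username, pass: password }),
    redirect: 'manual', // many sites answer a login POST with a redirect
  });

  // The session is usually handed back in a Set-Cookie header;
  // later requests must send it back by hand.
  const cookie = resp.headers.get('set-cookie');
  if (!cookie) throw new Error('login failed: no session cookie');
  return cookie.split(';')[0];
}

async function download(cookie, url) {
  const resp = await fetch(url, { headers: { Cookie: cookie } });
  return resp.text();
}
```

The hard part is not this code; it is discovering which fields, headers, and tokens the site's JavaScript actually sends, which is exactly the tedious part mentioned above.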

If you want to actually execute JavaScript natively, you will need a web browser. links2 is a console-mode browser that still has a JavaScript interpreter, but automating it doesn't seem any easier than automating any other browser.

If you want to process JavaScript outside of the browser, you can use Node.js or other V8-based runtimes.

Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine.
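For instance, Node ships a built-in vm module that runs a string of JavaScript in a fresh V8 context, outside any browser. A minimal sketch (the scraped snippet here is made up):

```javascript
// Minimal sketch: executing a string of JavaScript with V8
// outside any browser, via Node's built-in vm module.
const vm = require('node:vm');

// Pretend this string was lifted out of a site's <script> tag.
const scrapedCode = 'token = parts.map(p => p.length).join("-")';

// The sandbox object becomes the global scope of the script.
const sandbox = { parts: ['abc', 'de', 'f'], token: null };
vm.runInNewContext(scrapedCode, sandbox);

console.log(sandbox.token); // "3-2-1"
```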

See also, as a reference on how to "screen scrape" with JavaScript and render JavaScript:

The Ultimate Guide to Web Scraping with Node.js
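A common approach along these lines is driving a real headless browser from Node.js, so the page's JavaScript actually executes before you scrape it. Here is a sketch using Puppeteer, a library this thread hasn't otherwise mentioned (the URL, selectors, and environment variables are hypothetical placeholders):

```javascript
// Hypothetical sketch: logging into a JavaScript-heavy site with
// Puppeteer (npm install puppeteer). URL and selectors are made up;
// SITE_USER and SITE_PASS are assumed to be set in the environment.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  await page.goto('https://example.com/login', { waitUntil: 'networkidle0' });
  await page.type('#username', process.env.SITE_USER);
  await page.type('#password', process.env.SITE_PASS);
  await Promise.all([
    page.waitForNavigation(), // wait for the post-login page to load
    page.click('#login-button'),
  ]);

  // The page's JavaScript has run, so this is the fully rendered HTML.
  const html = await page.content();
  console.log(html);

  await browser.close();
})();
```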

You can research, modify and adapt these ideas as you see fit for your web app.