QoS script

Does anyone know how I can determine which user is UPLOADING via the HTTP protocol? Like sending video to YouTube?

Or maybe how to throttle multimedia uploads? Or any uploads?
Maybe by packet size?

The difference between uploading and downloading via HTTP is simply one of traffic direction. The protocol is the same, but most of the traffic will be upstream.

Sure, I know that.
Just... how can I classify that type of data transfer?
For example, to send it to QoS, or to xt_route?

That type of transfer is classified as 'HTTP', the same kind as any other HTTP transfer. If you want to restrict it, restrict the upload rate of all HTTP.
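For concreteness, a minimal HTB sketch on the WAN interface could look like this (eth0, the rates, and the class numbers are all assumptions to adapt to your link):

```
# Cap all outbound HTTP (dport 80) in its own 100kbit class on eth0;
# everything else shares the remaining bandwidth.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:1 htb rate 1mbit ceil 1mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 900kbit ceil 1mbit    # default
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 100kbit ceil 100kbit  # HTTP upload
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dport 80 0xffff flowid 1:20
```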

Is there any way I can separate that traffic into legitimate HTML and video uploads? Maybe by outgoing packet size?

Basically I want users to surf fast (my QoS is set up like that), but when they upload files/media, I want that to be as slow as possible.

If I limit all HTTP, then both upload and download will be restricted, i.e. requests for pages will be slow.

More likely by the URL involved, if you can get it. Are you using a caching proxy of any sort?

A back-door possibility is to use lsof to show what files and sockets are open on the server, and post-process to link sockets to processes, processes to files, and files to size or extension. Of course, if there is a multithreaded server process, it gets fuzzy again.
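Just as a hedged sketch of that idea (the server process name httpd is an assumption):

```
# All fds of the web server, minus memory-mapped libraries, so a
# post-processing script can correlate sockets with large open files.
lsof -a -p "$(pgrep -d, httpd)" -d '^mem'
# Only the established TCP sockets of the same server:
lsof -a -c httpd -i TCP -s TCP:ESTABLISHED
```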

Finding a way to throttle sockets that have moved too many bytes might be nasty to persistent HTTP/1.1 users, a kind of usage that is friendlier to the server than file-per-connection, never mind FTP's two connections. If you could serve one packet out per socket in strict rotation, long connections for big files would have less effect on new ones.

Optimally, you want the oldest connection to finish, so you have fewer concurrent connections and the least redo activity, yet you want new connections to be favored for a while, since they may want very little. If you can move aging connections to a "long" pool tagged with their start time, you can favor the young connections and allow the oldest to take from their excess bandwidth. You may have more bandwidth than the oldest connection needs, so whatever it cannot use of that excess goes to the second oldest, and so on.
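Incidentally, the "one packet out per socket in strict rotation" part is roughly what SFQ (stochastic fairness queueing) already does per flow; the age-based pools would still need custom logic on top. A standalone one-line sketch, assuming eth0 is the upstream interface:

```
# SFQ round-robins packets between flows, so one long-running upload
# can no longer starve freshly opened connections on the same link.
tc qdisc add dev eth0 root sfq perturb 10
```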

I am not sure what tools facilitate this, or if any tool can tag HTTP/1.1 persistent connections and tell whether such a connection is not moving large files.

Packet size is maxed out for every transfer with more than an MTU's worth of data to move at that moment, but the MTU is characteristically only 1500 bytes, not much of a challenge to fill.

Large files served out might have their sockets tagged, or be segregated on a server that is allowed less priority to the network.

Uploads are really hard, since you cannot look at the file in advance. Extensions and even Content-Type: are not very reliable, easily subverted for download/upload by a zip or a temporary rename. Files may be encoded to make them unsniffable for type signatures.

Just spitballing! :smiley:

That's some good thinking! :slight_smile:

Another question... how can I track connection time?
Yet another: can I track a connection from when it enters --state NEW until the end of the transfer?
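Modern conntrack can get you part of the way there. A sketch, assuming the conntrack-tools utility is installed and the kernel supports the accounting/timestamp knobs:

```
# Per-connection byte/packet counters and connection start timestamps:
sysctl -w net.netfilter.nf_conntrack_acct=1
sysctl -w net.netfilter.nf_conntrack_timestamp=1
# Then list live connections to port 80, counters included:
conntrack -L -p tcp --dport 80
```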

If that were possible, then I could put `-m string --string "upload"` in front of it and capture the whole connection into some filter or something.
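Roughly, yes: the string match can fire once, and CONNMARK can carry that decision for the rest of the connection. A hedged sketch (the pattern "upload" and the mark value are assumptions, and matching plain-text HTTP payloads is fragile):

```
# If an outbound HTTP packet contains "upload", mark the whole
# connection; every later packet inherits the mark for tc to act on.
iptables -t mangle -A POSTROUTING -p tcp --dport 80 \
    -m string --string "upload" --algo bm \
    -j CONNMARK --set-mark 0x2
iptables -t mangle -A POSTROUTING -p tcp --dport 80 \
    -j CONNMARK --restore-mark
# A tc fw filter can then steer fwmark 0x2 into a slow class:
tc filter add dev eth0 parent 1: protocol ip prio 2 handle 2 fw flowid 1:20
```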

The filename/Content-Type idea is nice... I can use the L7 filter to capture .avi files... but the thing is, I can only do it for upload and download together, not one in particular?
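One way around that: the l7 match classifies the whole connection, but shaping is attached per interface, so marking in mangle and filtering only on the WAN-facing egress leaves downloads untouched. A sketch (the pattern name depends on which l7-protocols definitions you have installed; eth0 as WAN is an assumption):

```
# Mark video-over-HTTP connections, then shape ONLY on the WAN side.
iptables -t mangle -A POSTROUTING -o eth0 \
    -m layer7 --l7proto httpvideo -j MARK --set-mark 0x3
tc filter add dev eth0 parent 1: protocol ip prio 3 handle 3 fw flowid 1:20
# No matching filter on the LAN-facing interface, so downloads pass freely.
```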

Corona688 - no, no caching... I have two servers that control about 300 users accessing the internet, and if even one of them is uploading a file, everything gets slow.

Why not limit upload speed, then? Limiting upload speed somewhat (not draconically) may improve their experience a lot by not letting uploads hog the entire connection. I know with our ISP we get 1 megabit down and 100 kilobit up, but not at the same time, so uploads clog our connection badly. Limiting uploads to 70 kilobit helps significantly.
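In its simplest form that is one token-bucket qdisc on the upstream interface; a sketch using the 70-kilobit figure (interface and burst/latency numbers are assumptions):

```
# Cap total egress at ~70kbit so uploads can't saturate the uplink.
tc qdisc add dev eth0 root tbf rate 70kbit burst 10kb latency 70ms
```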

I did that... my upload is capped at 80% by QoS and my ACKs have priority... I can utilize 100% download and 100% upload at the same time. The thing is, packets get delayed when upload is at 100%... no matter what, that lag just keeps coming back.
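(For readers following along, the ACK prioritization mentioned there is typically the classic wondershaper-style trick; a sketch, assuming a high-priority class 1:5 already exists:)

```
# Send bare ACKs (small packets with only ACK set) to the priority
# class so downloads keep flowing while the uplink is saturated.
iptables -t mangle -A POSTROUTING -o eth0 -p tcp \
    --tcp-flags SYN,RST,ACK ACK -m length --length :64 \
    -j CLASSIFY --set-class 1:5
```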

I came here in case someone knows a thing or two about advanced QoS in combination with iptables and perhaps the L7 filter... or just to get new ideas on how to do things.

The straightforward question is: how can I throttle upload speed to (say) .upload.youtube.com? How can I detect that kind of connection? Basically I need some innovative idea... I have none, or none of mine work...
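Since plain HTTP carries the hostname in the Host: header, one crude possibility is a string match on it; a hedged sketch (hostname, mark value, and the whole approach assume unencrypted HTTP):

```
# Mark connections whose requests name the upload host, so tc can
# steer them into a slow class. Breaks the moment the site uses HTTPS.
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --dport 80 \
    -m string --string "Host: upload.youtube.com" --algo bm \
    -j CONNMARK --set-mark 0x4
```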

I had one idea a while back: use counters and calculate how much a user uploaded in, say, one minute. If that sum is greater than it should be, then the user gets throttled by QoS. With `-m string --string "upload"` I could perhaps initiate the counters, plus a script that would harvest them?
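The harvesting part is scriptable with plain iptables counters. A rough sketch, assuming a mangle chain UPCOUNT holding one `-s <user-ip> -j RETURN` accounting rule per user (chain name, quota, and mark are all assumptions):

```
#!/bin/sh
# Every 60s: read per-user upload byte counters; throttle heavy uploaders.
QUOTA=5000000   # bytes per minute before throttling (made-up number)
while true; do
    # -nvxL prints exact byte counts; with '-j RETURN' rules the source
    # IP is field 8 of each rule line (lines 1-2 are headers).
    iptables -t mangle -nvxL UPCOUNT | awk -v q="$QUOTA" \
        'NR > 2 && $2 > q { print $8 }' |
    while read ip; do
        # Mark this user's traffic for the slow class, once only.
        iptables -t mangle -C POSTROUTING -s "$ip" -j MARK --set-mark 0x2 \
            2>/dev/null || \
        iptables -t mangle -A POSTROUTING -s "$ip" -j MARK --set-mark 0x2
    done
    iptables -t mangle -Z UPCOUNT   # reset counters for the next interval
    sleep 60
done
```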

So if anyone knows how I might do that... hm, iptraf? I've searched everywhere... a simple script/program like this for Linux, which would be extremely useful, simply doesn't exist...