UNIX.com response times

Friends, Admins, Countrymen,

For a few days now, this site has been dragging its feet again - 40+ seconds to open e.g. "New Topics" or "Home", 24+ for "Subscribed Threads". For comparison: subsecond responses from Wikipedia, Englisch-Deutsch Wörterbuch - leo.org: Startseite, and similar sites.

This is more than demotivating - anybody else experiencing this behaviour? Can someone help?

Rgds
Rüdiger

Same here. For reference, this is my equipment:

Intel i7-6800K CPU @ 3.40GHz, 128GB Ram
2x NVidia GeForce GTX 750 Ti
1000MBit Full-duplex cable-bound
Linux Kernel 4.10.0.33 x86_64
Mozilla Firefox 55.0.2 (64-bit)

As I am in Germany (like RudiC), can somebody from another region of the world tell us whether the problem is worldwide?

bakunin

Hi.

Same here in Minnesota, USA. It took 20 seconds (counting one-one-thousand, two-one-thousand, ...) just to bring up this thread.

I test my 'net connection speed every day: 40 Mbps, with slight variations, none of which have been significant in the past month or so (when they are, I reboot the DSL modem to force a new server at the ISP).

I usually use Mozilla Firefox 52.3.0, on this box:

A Mac mini, Mozilla Firefox 55.0.3:

OS, ker|rel, machine: Apple/BSD, Darwin 16.7.0, x86_64
Distribution        : macOS 10.12.6 (16G29), Sierra

is also slow.

For site comparison, LQ responds almost instantly, as does stackexchange.

Best wishes ... cheers, drl

We have a satellite connection, so I have always blamed it for poor response. Does anyone have a problem logging in? Sometimes I have to enter my user and password 5 times before I get successfully logged in.
Maybe the hamsters don't like the new food?

Yes, unix.com is pretty slow from the UK right now (and has been in recent days), but it doesn't seem unique to unix.com.

Some other sites which are normally quick are also quite sluggish, so I'm not sure what's going on. That isn't much help, I know!

---------- Post updated at 05:42 PM ---------- Previous update was at 05:03 PM ----------

@jgt...... I don't see the issue you describe with logging in from the UK, i.e. having to make multiple attempts. The site can just say 'unix.com is not responding', but when I wait and do nothing, it eventually logs me in. Not the same problem as you are seeing.

jgt: It's not that bad. You probably can blame your satellite for that.

Mostly it's just lags in loading pages.

Just for the record: it's back to bearable now. 3 to 5 sec - not lightning fast, but OK.

It's usually blazing fast for me, honestly. I've never seen anything but a minute or two of slow performance, then it's back to fast the majority of the time.

I'm in Asia and it's blazing fast for me now.

However, yesterday I noticed a period of very slow response and I traced the problem to BingBot hammering the site.

Most of the problems I have noticed lately have been related to bots indexing the site in an abusive way.

Normally I block them, but I hesitate to block the BingBot, so I just set the crawl-delay to 1:

User-agent: msnbot 
Crawl-delay: 1

User-agent: bingbot
Crawl-delay: 1

Let's see if this helps slow down the bingbots!
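For anyone who wants to double-check such rules, Python's standard library can parse a robots.txt and report the crawl delay. A quick sketch (the robots.txt content is taken from the post above; "SomeOtherBot" is a made-up name to show the unmatched case):

```python
from urllib.robotparser import RobotFileParser

# The two crawl-delay groups from the post above.
ROBOTS_TXT = """\
User-agent: msnbot
Crawl-delay: 1

User-agent: bingbot
Crawl-delay: 1
"""

rp = RobotFileParser()
rp.modified()  # mark as "freshly fetched" so the parser answers queries
rp.parse(ROBOTS_TXT.splitlines())

# Matched agents get their delay; unmatched agents get None.
for agent in ("msnbot", "bingbot", "SomeOtherBot"):
    print(agent, "->", rp.crawl_delay(agent))
```

Of course, this only tells you what the file asks for - whether a bot obeys it is another matter entirely.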

PS: This is a constant problem that we have to deal with..... most bots do not follow the robots.txt directives, especially those from China, Russia, etc.
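A quick way to spot the bots that ignore robots.txt is to tally user agents in the web server's access log. A minimal sketch (the log lines, IPs, and paths here are made up for illustration; in practice you would read your real combined-format log instead of the inline sample):

```python
from collections import Counter

# Made-up lines in Apache "combined" log format
# (the user agent is the last double-quoted field).
SAMPLE_LOG = [
    '66.249.66.1 - - [05/Sep/2017:10:00:01 +0000] "GET /t/1 HTTP/1.1" 200 5120 "-" "Googlebot/2.1"',
    '40.77.167.2 - - [05/Sep/2017:10:00:02 +0000] "GET /t/2 HTTP/1.1" 200 4096 "-" "bingbot/2.0"',
    '40.77.167.2 - - [05/Sep/2017:10:00:02 +0000] "GET /t/3 HTTP/1.1" 200 4096 "-" "bingbot/2.0"',
    '40.77.167.2 - - [05/Sep/2017:10:00:03 +0000] "GET /t/4 HTTP/1.1" 200 4096 "-" "bingbot/2.0"',
    '203.0.113.9 - - [05/Sep/2017:10:00:04 +0000] "GET /t/5 HTTP/1.1" 200 2048 "-" "Mozilla/5.0"',
]

def user_agent(line):
    # The user agent is the content of the last pair of double quotes.
    return line.rsplit('"', 2)[1]

hits = Counter(user_agent(line) for line in SAMPLE_LOG)
for agent, count in hits.most_common():
    print(f"{count:6d}  {agent}")
```

Whatever tops that list with thousands of hits per hour is usually your culprit, and a candidate for blocking.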

I also noticed 30+ second response times for a few days.
But now it's good again, 1-2 seconds :)

Yeah, it's a full time job just managing malicious and poorly configured bots.

I'll try to look at some other adjustments to see if I can mitigate this in other ways. Some DB parameter tweaking might also help.

This seems to have done the trick, unix.com is back to normal again now.

bakunin

Thanks!

It's constant admin work beating back all the misconfigured and malicious indexing bots. They pop up and try to suck all the posts and threads from the site with no benefit to us or the site, only draining resources and slowing the site down.

Most bots, unfortunately, do not follow the robots.txt directives and must be blocked.

I couldn't help but notice that over recent days response has been really fast (accessing from the UK).

I have read on the Internet that SSL (HTTPS) sites are often faster than HTTP sites; but not sure if this is the reason it seems faster.

Is that partly because encryption includes compression? It wouldn't seem to be transferring enough data each time to make a noticeable difference though.

This is not the case. In fact, SSL works like this (short introduction to encryption theory):

First, we need to establish the difference between asymmetric and symmetric encryption methods.

In symmetric encryption the same key is used to encrypt as well as decrypt the message. The key is shared between the sender and the receiver beforehand. Advantage: keys can be smaller (typically 128 or 256 bits) and it allows for two-way communication. Disadvantage: whoever knows the key can encode as well as decode.
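As a toy illustration of the "same key both ways" property, here is a repeating-key XOR - NOT a secure cipher, just the simplest possible symmetric scheme (key and message are made up):

```python
from itertools import cycle

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Repeating-key XOR: applying it twice with the same key restores the data."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"sekrit"
message = b"meet me at the server room"
ciphertext = xor_cipher(key, message)          # encrypt
assert xor_cipher(key, ciphertext) == message  # decrypt: same key, same function
```

Real symmetric ciphers (AES, ChaCha20) are vastly more sophisticated, but the symmetry - one shared key for both directions - is the same.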

Asymmetric encryption works with two different keys: one (the "public" key) is used (only!) to encrypt the message. To decrypt it one needs the other, "private" key. You can send your public key around without caring who knows it, because it only allows encryption. As long as you keep your private key to yourself, you alone can decrypt anything encrypted with your public key. Advantage: you don't need to share the (private) key with anyone. Disadvantage: it allows only one-way communication and uses significantly larger keys (1024 or 2048 bits for RSA nowadays).

The most common asymmetric algorithms are RSA and elliptic curves (ECC). RSA is based on the fact that integer factorisation is computationally difficult and expensive: you build the product of two very large prime numbers; the product is easy to calculate (and published), but without knowing the factors it is hard to recover them (the private key) from the product. ECC's security rests on the discrete logarithm problem: the elliptic curve is defined over a Galois (finite) field, not the real numbers, and computing the discrete logarithm of a random curve element with respect to a publicly known base point is infeasible.
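The RSA idea can be shown with deliberately tiny numbers - a textbook toy, far too small for real use (real keys are 1024+ bits precisely so that factoring n is infeasible):

```python
# Textbook RSA with toy primes -- for illustration only.
p, q = 61, 53
n = p * q                  # 3233, the public modulus (published)
phi = (p - 1) * (q - 1)    # 3120, easy to compute only if you know p and q
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent, 2753 (modular inverse, Python 3.8+)

m = 65                     # the "message" (must be < n)
c = pow(m, e, n)           # encrypt with the public key (e, n)
assert pow(c, d, n) == m   # decrypt with the private key (d, n)
print(f"n={n}, e={e}, d={d}, ciphertext={c}")
```

An attacker who could factor 3233 back into 61 and 53 could recompute d instantly - which is exactly why real moduli are hundreds of digits long.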

As asymmetric encryption only works one-way, how is it used for information exchange, say, between a web server and the browser? The idea is to use a handshake-procedure to establish a session:

1) The server sends its public key to the client.
2) The client creates a symmetric session key, encrypts it with the server's public key, and sends it back.
3) The server decrypts the session key, and
4) both client and server use this symmetric key for the duration of the session.

All these algorithms do NOT compress anything at all. In fact they are neutral to the amount of data being transferred.

I hope this helps.

bakunin

Update: here is a document explaining why HTTPS is faster than HTTP.

It boils down to SPDY, an additional session layer developed by Google, from which only HTTPS profits. Basically it is not HTTPS vs. HTTP but multiplexed sessions over a single TCP connection versus unmultiplexed sessions. It would be possible to do HTTP over SPDY too (it is just not done). HTTP/2 is basically SPDY standardised and developed further.

In principle plain HTTP is slightly faster than HTTPS: intermediate stations can cache content, so not every retransmission has to originate from the server. HTTPS lacks that, because relaying stations cannot read what they transmit.

SPDY speeds things up mainly via session multiplexing. This saves a lot of time otherwise lost to delayed TCP ACKs, which can take up to 500 ms each.

bakunin

The site is a bit slower because I bumped up GoogleBot to index the site at "full pull" speed; this is normally too high a pull rate, and I will need to back it off soon.