
Fix for PageSpeed Module Causing High Server Load

Posted on Wednesday, December 9, 2015 @ 04:05:25 PM CST by David Yee
 

Google's PageSpeed Module has had plenty of praise heaped upon it for its ability to perform a myriad of performance optimizations that help web servers deliver content more quickly. I had tried it in the past on my servers (running CentOS and Apache) but was always hit by heavy, unacceptable server loads, so I ended up disabling it each time. That was a couple of years ago, so recently I gave it another go and downloaded the latest beta version (mod_pagespeed version 1.9.32.10-7423). Installation was easy, though I ran into a hiccup because I had a previous version installed, so I had to uninstall it first:

rpm -e mod-pagespeed-stable-1.0.22.7-2005.i386

then:

rpm -Uvh mod-pagespeed-beta_current_i386.rpm

worked like a charm.

So I left it at the default settings and just let PageSpeed do its magic. I then went to PageSpeed Insights to quantify any performance difference, and it did give 4 to 5 points better scoring for mobile. The results at WebPageTest.org were mixed, though the filmstrip view made it clear that the sites started rendering on screen sooner, which is always a positive. Checking on the iPhone, the sites did feel snappier. Great- or so I thought.





After about an hour, however, the load on my Linux servers climbed steadily until it reached 13 to 15, whereas normally it sits between 0.5 and 2. I looked around the web for an explanation, and the best I could gather was that PageSpeed was building its initial cache. So I continued to monitor the servers throughout the day, restarting Apache when necessary. Unfortunately, by early on day 2 the heavy server load continued to plague the machines, so I started toying with some settings. First, I disabled rewrite_images because I have some photo-heavy sites, but that did not help. I then tried different httpd.conf settings such as Timeout, MaxKeepAliveRequests, and KeepAliveTimeout, but that made zero difference as well. All I could tell, especially from iotop (a handy tool that shows disk I/O per process), was that disk usage was extremely heavy. So I looked into what was being cached and noticed that many duplicate pages, with URLs containing sid and similar GET variables, were being cached. I added the following to my pagespeed.conf file under /etc/httpd/conf.d/:

ModPagespeedDisallow "*"
ModPagespeedAllow "*.html"
ModPagespeedAllow "*.jpg"
ModPagespeedAllow "*.png"
ModPagespeedAllow "*.js"
ModPagespeedAllow "*.css"
ModPagespeedAllow "*.php"
ModPagespeedAllow "*/"
ModPagespeedDisallow "*-sid-*"
ModPagespeedDisallow "*cache*"

The first line disallows all pages by default. The next lines specifically allow URLs ending with certain extensions (plus directory URLs ending in "/"), and the last two lines explicitly exclude any URL containing "-sid-" or "cache". Some of my pages are rewritten by mod_rewrite from "?sid=" to "-sid-", so if you have session variables in your URLs, what you need to disallow depends on which CMS you use.
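As an illustration of the kind of rewrite I mean (the pattern and sid format here are hypothetical; yours will depend on your CMS), a rule like this maps the pretty "-sid-" form back to the real query string, so PageSpeed only ever sees the "-sid-" URLs that the Disallow line above excludes:

```apache
# Hypothetical example: /article-sid-ab12cd.html -> /article.php?sid=ab12cd
# PageSpeed then sees only the "-sid-" form, which
# ModPagespeedDisallow "*-sid-*" keeps out of the cache.
RewriteEngine On
RewriteRule ^(.+)-sid-([0-9a-f]+)\.html$ $1.php?sid=$2 [L,QSA]
```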

That seemed to help a bit, but I was still getting loads of over 10. So I decided to create a tmpfs RAM drive of sorts and place the cache directory on it. My servers have 8GB of RAM, so I figured 1 gig should suffice. That SOLVED THE PROBLEM- server load went back down to under 2 on average. But a new problem crept up- I started to get messages like "Failed to mkdir /cache/mod_pagespeed/!clean!lock!: No space left on device" even though the tmpfs drive had over 30% space left. Apparently it had run out of inodes, which cap the maximum number of files a filesystem can hold. The default was about 180K inodes; once I raised the limit to 500K it worked just fine. What worked for me was as follows:

mkdir /cache/
chown apache /cache/
mount -t tmpfs -o size=1024M,mode=0777,nr_inodes=500k tmpfs /cache
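If you suspect the same inode problem, df's -i flag shows inode usage rather than byte usage (shown here against /; substitute your tmpfs mount point such as /cache):

```shell
# "No space left on device" with free bytes remaining usually means the
# filesystem has exhausted its inodes; df -i reports inode totals.
df -i /    # substitute /cache to check the tmpfs mount
```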

Don't forget to add a corresponding entry in /etc/fstab as well:

tmpfs /cache tmpfs size=1024M,mode=0777,nr_inodes=500k 0 0

And of course you need to edit pagespeed.conf and change the default cache directory from "/var/cache/mod_pagespeed/" to something like "/cache/mod_pagespeed/".
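The directive in question is ModPagespeedFileCachePath; the module also exposes size and inode limits on the file cache (directive names as I found them in the docs for this version- double-check against yours, and the limit values below are just examples):

```apache
# In /etc/httpd/conf.d/pagespeed.conf: point the file cache at the tmpfs
ModPagespeedFileCachePath "/cache/mod_pagespeed/"
# Optional: cap the cache so it cannot outgrow the 1GB / 500K-inode tmpfs
ModPagespeedFileCacheSizeKb 512000
ModPagespeedFileCacheInodeLimit 400000
```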

Restart Apache and bam- enjoy all the benefits of mod_pagespeed without the awful CPU and I/O utilization that was driving the heavy server load. The following is a sample output from iotop AFTER switching to tmpfs for PageSpeed's cache:

I don't have a prior screenshot of iotop before switching to the RAM disk, but at least 10 httpd processes were all up there, each eating up 10% to 25%.
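Even without screenshots, the load averages from uptime tell the same story:

```shell
# The three trailing numbers are the 1-, 5-, and 15-minute load averages.
# After restarting Apache (e.g. `service httpd restart` on CentOS 6),
# these should settle back under ~2 once the tmpfs cache is in place.
uptime
```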

BTW, I had a weird issue where rows of Instagram images and Facebook text were garbled on a custom social news page I had written in PHP. After debugging, it turned out that I had forgotten to close a noscript HTML tag. The page had displayed fine with the mistake before, but because PageSpeed rewrites the HTML and tries to consolidate tags, it grouped some of the images and text into a single noscript tag, hence the messed-up result.
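To sketch the mistake (element names and file names here are made up; the real markup was more involved), it looked something like this:

```html
<!-- Broken: the first <noscript> is never closed, so an HTML rewriter
     may fold the following image row into it -->
<noscript><img src="insta1.jpg">
<div class="row"><img src="insta2.jpg"></div>

<!-- Fixed: -->
<noscript><img src="insta1.jpg"></noscript>
<div class="row"><img src="insta2.jpg"></div>
```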

I do have a minor annoyance to report, however. I keep getting an error_log entry saying that a specific HTML file has a CSS parsing error, but it doesn't tell me which line, even after I changed the Apache log level from warn to debug. If it were a CSS file, I think it would have provided the line number; or perhaps it is an issue with this beta version. No big deal, though I continue to search for what is causing the CSS parse error message.


