Accelerating a High-Load WordPress Website
This case study outlines the process of accelerating a content-rich WordPress website that had to promptly process a large number of user requests and deliver large amounts of content from the backend server to the browser.
The Challenge
Our client is a retail trader with a branded online store. As the company’s inventory grew, it expanded its catalog with new categories and product descriptions and added new product reviews and informative articles to its blog.
Snowballing content and a large number of simultaneous user requests led to slow loading times and poor website performance. The existing system was unable to promptly extract data from the server and provide it to customers.
The client asked Digital Clever Solutions to accelerate content delivery on their platform and improve the shopping experience of customers.
The Solution
We solved the slow loading time issue in the following ways:
- Set up the Varnish cache server to work in conjunction with the Nginx web server that was used for SSL termination.
- Shifted all settings and cache controls to Varnish.
- Used the WordPress W3 Total Cache plugin to purge the Varnish cache.
- Set up WordPress to work in conjunction with PHP and Apache.
- Disabled caching in Nginx, Apache, and PHP, since Varnish handles all caching.
- Acquired an SSL Certificate to protect the website from the restrictions on insecure HTTP traffic introduced by Google Chrome in October 2020.
Technology Stack
Nginx
Nginx is a web server that provides high performance and stability. It can be used for:
- proxying;
- caching;
- load-balancing;
- fault-tolerance;
- media streaming;
- handling static and index files;
- auto indexing;
- support for HTTP/2 with dependency-based and weighted prioritization.
We used Nginx for SSL termination.
What is SSL Termination?
To transmit data from a user's computer to a web server securely, the traffic must be encrypted and the server must present a certificate of authentication, establishing a Secure Sockets Layer (SSL) connection. At the end of that connection, the data is decrypted and passed to the backend server. If decryption takes place directly on the application server, it can slow down its other processes.
It is more effective to decrypt and verify data on a load balancer placed between the client and server sides. This operation is called SSL termination (offloading). It takes the burden of decryption off the application server's CPU and frees those resources for other tasks. SSL termination speeds up page loading, keeps the website running smoothly under load, and simplifies the management of SSL certificates.
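As a minimal sketch of this setup (illustrative values only; the domain, certificate paths, and backend address are placeholders, not the project's actual configuration), an Nginx server block that terminates SSL and forwards plain HTTP to a backend looks roughly like this:
# Illustrative SSL-termination sketch; paths and addresses are placeholders
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        # Traffic leaves Nginx unencrypted; the original scheme
        # is passed to the backend in a header.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The production configuration used in this project is shown in the Configuring the System section below.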
Organizations concerned about the safety of their sensitive data often ask the following questions:
- Will decryption on a load balancer, which ends the SSL connection before the data reaches the backend server, reduce security?
- Will the short, unencrypted hop between the load balancer and the server be exposed to attacks?
In some cases, the answer may be “Yes.” There are several ways to restore security:
- data can be re-encrypted when it is sent from the load balancer to the backend server (a minimal sketch follows this list);
- the load balancer can be placed in the same data center as the backend server;
- a self-signed SSL certificate can be used for the internal connection.
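As a sketch of the first option (we did not need it in this project, since all components run on the same host and communicate over 127.0.0.1), Nginx can re-encrypt proxied traffic on its way to the backend; the backend address and CA bundle path below are hypothetical:
# Re-encrypting traffic between the load balancer and the backend (hypothetical values)
location / {
    proxy_pass https://backend.internal:8443;
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/ssl/certs/internal-ca.pem;
}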
SSL offloading can be hardware- or software-based. With a software solution, updates and advanced security features are easier to obtain and install, whereas new hardware is costly to purchase. Modern SSL termination software should support the Advanced Encryption Standard New Instructions (AES-NI), which speed up encryption and improve resistance to external attacks.
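As a quick sanity check (assuming a Linux host with OpenSSL installed, which the original setup does not describe), you can confirm that the CPU advertises AES-NI and benchmark the hardware-accelerated AES path:
# Check that the CPU advertises the AES-NI instruction set (Linux)
grep -m1 -o aes /proc/cpuinfo && echo "AES-NI available"

# Rough throughput benchmark using the hardware-accelerated AES-GCM cipher
openssl speed -evp aes-128-gcm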
Varnish
Varnish is a reverse proxy placed between the Internet and the company’s web server. This is not a standalone solution, since it requires a dedicated web server such as Nginx or Apache.
Varnish is the entry point for all HTTP requests directed to the company’s website. It answers as many of them as possible from its cache, saving the backend’s storage and processing resources. Varnish can be used to cache both dynamic and static content. According to its developers, Varnish can accelerate content delivery by a factor of 300 to 1,000, depending on the architecture.
Acceleration is ensured by a number of factors:
- The cache server works faster than the source server when delivering content, since the workload on the former is lower.
- The cache server handles all the static data such as CSS and JavaScript files, reducing the load on the source server.
- Time to first byte (TTFB) is reduced, because cached responses skip the backend’s application and database processing (see the measurement example after this list).
- Varnish can be used as part of a highly available environment, serving cached content even while the web server is down.
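A simple way to observe the TTFB improvement (a hedged example; the URL is a placeholder) is curl's timing output, run before and after the cache warms up:
# Measure time to first byte; run twice and compare the cold and cached values
curl -s -o /dev/null -w "TTFB: %{time_starttransfer}s\n" https://www.example.com/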
Apache
Apache is open-source, cross-platform software used by roughly 46% of websites worldwide. Although often called a web server, Apache is actually a program that runs on the server machine.
Apache establishes a connection with the visitor’s browser when delivering files back and forth within the client-server structure. When a visitor views a page on your website, their browser sends a request to your server, and Apache returns a response with all the requested content. Apache is easily customizable, thanks to its modular structure.
Using Apache in conjunction with Nginx lets us make the most of both: compensating for the weaknesses of one technology with the strengths of the other greatly improves the performance of the overall system.
Acquisition of an SSL Certificate
We installed an SSL Certificate on the Nginx server, since all modern browsers flag unencrypted traffic as insecure. Starting with Chrome 86 in October 2020, the browser began blocking insecurely delivered (HTTP) content and downloads on otherwise secure pages. An SSL Certificate is essential for the survival and growth of online businesses in the modern web environment.
In addition to ensuring secure data transfer and confidentiality, an SSL Certificate enhances an enterprise’s reputation. The Secure Site Seal on a corporate website signifies a legitimate organization that is concerned about the information safety of its clients, nurturing long-lasting and trusting business relationships.
There are three types of SSL Certificates:
- Extended Validation (EV) SSL Certificate. The Certificate Authority (CA) checks whether the organization has the right to use a certain domain name and thoroughly examines its authenticity and integrity according to the 2007 EV Guidelines. Businesses of any type can obtain an EV certificate. CAs are audited annually against the EV Audit Guidelines to confirm their right to issue EV SSL Certificates.
- Organization Validation (OV) SSL Certificate. The CA checks whether the organization has the right to use a certain domain name and performs a lighter review of its authenticity and integrity.
- Domain Validation (DV) SSL Certificate. The CA only checks whether the organization has the right to use a certain domain name.
The choice of the certificate type depends on the goals, size and financial capabilities of a business. Most large companies pass extended validation to confirm maximum security. Small enterprises like our client’s online store that cannot afford costly validation usually lean towards domain validation. This option saves time and money, since there is no need for tedious paperwork. A DV SSL Certificate can be obtained almost immediately after submitting the application, which is why we chose that option.
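The case study does not name the certificate authority we used. As one common route, a DV certificate from Let's Encrypt can be issued and wired into Nginx with certbot (Debian-style packages and a placeholder domain assumed):
# Install certbot and request a DV certificate for Nginx (hypothetical domain)
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d www.example.com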
Configuring the System
Here we outline the process for configuring Nginx, Varnish, Apache, PHP, and WordPress. It is assumed that all the components are up and running and an SSL Certificate is available. File locations may vary, depending on the OS version.
- The first step is to configure PHP (version 7.3). We disable opcache in the php.ini file (/etc/php/7.3/apache2/php.ini): opcache.enable = 0.
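The relevant fragment of that file looks like this:
; excerpt from /etc/php/7.3/apache2/php.ini
opcache.enable = 0
Apache then needs to be restarted to pick up the change (the apache2 service name assumes a Debian-style install):
sudo systemctl restart apache2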
- Configuring Apache involves configuring virtual hosts. We use port 8080.
<VirtualHost 127.0.0.1:8080>
...
SetEnvIf X-Forwarded-Proto https HTTPS=on
...
</VirtualHost>
The setenvif_module must be installed and enabled: check that the apache2/mods-enabled/setenvif.load file contains the line LoadModule setenvif_module /usr/lib/apache2/modules/mod_setenvif.so.
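On a Debian-style install (which the paths above imply), the module can be enabled and Apache restarted with:
sudo a2enmod setenvif
sudo systemctl restart apache2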
- Let’s look at configuring Varnish in more detail. Pay close attention to the configuration file: its format varies from version to version. In this case study, we used Varnish 6.1.1.
Useful information sources that will help you get started:
- Installation guide: https://packagecloud.io/varnishcache/varnish60lts/install;
- Recommended reading: https://book.varnish-software.com/4.0/;
- Pay special attention to this chapter: https://book.varnish-software.com/4.0/chapters/VCL_Basics.html#varnish-finite-state-machine;
- Since the names and ordering of features may vary from version to version, review the upgrade notes: https://varnish-cache.org/docs/6.0/whats-new/upgrading-6.0.html.
By default, Varnish listens on port 6081. We changed it to 8085. Since Varnish runs as a systemd unit, its startup options are defined in /usr/lib/systemd/system/varnish.service (see also man varnishd).
More details: https://www.varnish-cache.org/docs/6.1/.
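Rather than editing the packaged unit file in place, the listen port can be overridden with a systemd drop-in; the VCL path and cache size below are illustrative defaults, not values taken from the project:
# Created with: systemctl edit varnish
# (written to /etc/systemd/system/varnish.service.d/override.conf)
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -a :8085 -f /etc/varnish/default.vcl -s malloc,256m
After that, run systemctl daemon-reload and systemctl restart varnish. With Varnish listening on port 8085, the full VCL configuration we used is shown below.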
vcl 4.0;

# std is needed for std.healthy() and std.duration() below
import std;

# Default backend definition pointing to a content server.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

acl purgers {
    "localhost";
    "127.0.0.1";
}

sub vcl_recv {
    # Happens before we check if we have this in cache already.
    #
    # Typically you clean up the request here, removing unnecessary cookies,
    # rewriting the request, etc.

    # Only a single backend
    set req.backend_hint = default;

    # Setting http headers for backend
    set req.http.X-Forwarded-For = client.ip;
    set req.http.X-Forwarded-Proto = "https";

    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "You are not allowed to purge."));
        }
        ban("req.url ~ /");
        return (purge);
    }

    # pass wp-admin urls and comments and cron
    if (req.url ~ "(wp-login|wp-admin|comments-post.php|cron.php)" || req.url ~ "preview=true" || req.url ~ "xmlrpc.php") {
        return (pass);
    }
    if (req.url ~ "(tinymce|wp-mce|plugin.min.js)" || req.url ~ "preview=true" || req.url ~ "xmlrpc.php") {
        return (pass);
    }
    if (req.url ~ "/wp-json/") {
        return (pass);
    }

    # pass wp-admin cookies
    if (req.http.cookie) {
        if (req.http.cookie ~ "(wordpress_|wp-settings-)") {
            return (pass);
        } else {
            unset req.http.cookie;
        }
    }

    set req.http.X-Cache-TTL-Requested = req.ttl;

    # Force lookup if the request is a no-cache request from the client
    if (req.http.Cache-Control ~ "private") {
        return (pass);
    }

    if (req.url ~ "\.(css|js|png|gif|jp(e)?g)") {
        unset req.http.cookie;
    }

    set req.http.Surrogate-Capability = "key=ESI/1.0";

    if (req.http.Authorization) {
        return (pass);
    }

    return (hash);
}

sub vcl_hash {
    if (req.http.X-Forwarded-Proto) {
        hash_data(req.http.X-Forwarded-Proto);
    }
}

sub vcl_hit {
    # Set up some debugging headers that will be passed back out to
    # the client in the HTTP response.
    set req.http.X-Cache-Keep = obj.keep;
    set req.http.X-Cache-TTL-Remaining = obj.ttl;
    set req.http.X-Cache-Age = obj.keep - obj.ttl;

    # Different requests ask for the same object but with different TTLs. We are
    # repurposing obj.keep to store the original requested TTL. Thus we have
    # (obj.keep - obj.ttl) as the time spent in storage.
    #
    # A straightforward cache hit, so deliver it.
    if (obj.keep - obj.ttl <= req.ttl) {
        set req.http.X-Cache-Result = "HIT";
        return (deliver);
    }

    if (std.healthy(req.backend_hint)) {
        if (obj.keep - obj.ttl - 10s <= req.ttl) {
            set req.http.X-Cache-Result = "HIT-with-slight-grace";
            return (deliver);
        }
        # No candidate for grace. Fetch a fresh object.
        else {
            set req.http.X-Cache-Result = "HIT-stale-so-fetch";
            # use return (miss) instead of deprecated return (fetch) in vcl_hit
            return (miss);
        }
    }
    # The backend is unhealthy, so use full grace.
    else {
        if (obj.keep - obj.ttl - obj.grace <= req.ttl) {
            set req.http.X-Cache-Result = "HIT-with-full-grace";
            return (deliver);
        } else {
            # This will of course result in an error response since the backend is
            # unhealthy.
            set req.http.X-Cache-Result = "HIT-stale-so-fetch";
            return (miss);
        }
    }
}

sub vcl_miss {
    # We can end up here after a hit and then fetch.
    if (req.method == "PURGE") {
        return (synth(404, "Object not in cache."));
    }
    if (!req.http.X-Cache-Result) {
        set req.http.X-Cache-Result = "MISS";
    }
    return (fetch);
}

sub vcl_backend_response { # old vcl_fetch
    # Happens after we have read the response headers from the backend.
    #
    # Clean the response headers, removing silly Set-Cookie headers
    # and other mistakes your backend makes.

    # retry a few times if backend is down
    if (beresp.status == 503 && bereq.retries < 3) {
        return (retry);
    }

    if (beresp.http.X-No-Cache) {
        set beresp.uncacheable = true;
        set beresp.ttl = 120s;
        return (deliver);
    }

    # Parse ESI and remove the Surrogate-Control header
    if (beresp.http.Surrogate-Control ~ "ESI/1.0") {
        unset beresp.http.Surrogate-Control;
        set beresp.do_esi = true;
    }

    # Grace: how long the object may be served stale after its TTL expires
    # default beresp.grace = 1h;
    set beresp.grace = 8h;

    # default beresp.keep = std.duration(bereq.http.X-Cache-TTL, 900s);
    set beresp.keep = std.duration(bereq.http.X-Cache-TTL, 6h);
    set beresp.ttl = beresp.keep;

    if (beresp.ttl > 0s) {
        set beresp.http.x-obj-ttl = beresp.ttl + "s";
    }

    return (deliver);
}

sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send the
    # response to the client.
    # You can do accounting or modifying the final object here.
    if (req.http.X-Cache-Keep) {
        set resp.http.X-Cache-Keep = req.http.X-Cache-Keep;
    }
    if (req.http.X-Cache-TTL-Remaining) {
        set resp.http.X-Cache-TTL-Remaining = req.http.X-Cache-TTL-Remaining;
    }
    if (req.http.X-Cache-Age) {
        set resp.http.X-Cache-Age = req.http.X-Cache-Age;
    }
    if (req.http.X-Cache-TTL-Requested) {
        set resp.http.X-Cache-TTL-Requested = req.http.X-Cache-TTL-Requested;
    }
    if (req.http.X-Cache-Result) {
        set resp.http.X-Cache-Result = req.http.X-Cache-Result;
    }
    if (resp.http.x-obj-ttl) {
        set resp.http.Expires = "" + (now + std.duration(resp.http.x-obj-ttl, 3600s));
        unset resp.http.x-obj-ttl;
    }

    # Remove some HTTP headers:
    unset resp.http.Server;
    unset resp.http.X-Powered-By;
    unset resp.http.Via;
    unset resp.http.X-Varnish;
    unset resp.http.X-Cacheable;
    unset resp.http.Age;
    unset resp.http.X-Pingback;
    unset resp.http.Link;

    return (deliver);
}
Here are the parts of this configuration that deserve special attention:
sub vcl_recv {
...
set req.http.X-Forwarded-For = client.ip;
set req.http.X-Forwarded-Proto = "https";
...
}
The same adjustment was needed on the Apache side (the SetEnvIf directive shown earlier). We also excluded certain WordPress URLs (wp-login, wp-admin, etc.) from caching:
sub vcl_recv {
...
if (req.url ~ "(wp-login|wp-admin|comments-post.php|cron.php)" || req.url ~ "preview=true" || req.url ~ "xmlrpc.php") {
return (pass);
}
...
}
For testing, we expose diagnostic information in custom response headers; it can be removed once everything is verified (a quick check follows the fragment below):
sub vcl_hit {
...
set req.http.X-Cache-Result = "some text";
...
}
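While these headers are in place, repeated requests should show a MISS followed by HITs; a quick check with curl (hypothetical domain) looks like this:
# The first request warms the cache, the second should report a HIT
curl -sI https://www.example.com/ | grep -i x-cache
curl -sI https://www.example.com/ | grep -i x-cache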
The following fragment makes up to three attempts to reach the backend (Apache):
sub vcl_backend_response {
...
if (beresp.status == 503 && bereq.retries < 3) {
return (retry);
}
...
}
If the backend still fails to respond, a 503 error is returned to the client.
In this project, the Varnish cache had a lifetime (grace period) of 8 hours:
sub vcl_backend_response {
...
set beresp.grace = 8h;
...
}
We also stripped unnecessary response headers in vcl_deliver:
sub vcl_deliver {
...
unset resp.http.Server;
unset resp.http.X-Powered-By;
unset resp.http.Via;
unset resp.http.X-Varnish;
unset resp.http.X-Cacheable;
unset resp.http.Age;
unset resp.http.X-Pingback;
unset resp.http.Link;
...
}
- Next, we configured Nginx to proxy all incoming requests to Varnish:
server {
    ...
    location / {
        ...
        proxy_http_version 1.1;

        # varnish http proxy
        proxy_pass http://127.0.0.1:8085/;

        # disable proxy cache
        proxy_no_cache 1;
        proxy_cache_bypass 1;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_ssl_session_reuse off;
        ...
    }
    ...
}
- We turned off caching in Nginx since Varnish was used for this purpose.
proxy_no_cache 1;
proxy_cache_bypass 1;
- To control cache purging from WordPress, we installed and configured the W3 Total Cache plugin.
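For reference, if WP-CLI is available on the server (an assumption not covered by the original setup), the plugin can be installed and activated from the command line; its purge settings are then adjusted in the WordPress admin panel:
# Install and activate W3 Total Cache via WP-CLI, run from the WordPress root
wp plugin install w3-total-cache --activate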
At this point, the configuration is complete.
Results/Achievements
Following are the outcomes of the project:
- Nginx, Varnish, and Apache greatly reduced the response time of the client’s website. Despite the growing amount of content, users quickly receive requested data. There is no downtime, even at moments of peak load.
- The installation of an SSL Certificate prepared the online store for the restrictions on insecure HTTP traffic introduced with Chrome 86 in October 2020.
- Thanks to the performance upgrades and better user experience, the client saw an increase in the number of purchases and longer time spent on the website.
Digital Clever Solutions continues to support this project, implementing new optimization features in response to changes in web requirements.