Composr Tutorial: Optimising Performance
Written by Chris Graham
Composr is very heavily optimised so that pages load as quickly as possible, but there are also many ways to tune performance. This tutorial will provide information on techniques and issues for increasing throughput, so that your site may support more visitors. Some of these techniques are easy, others are programmer-level in complexity. The tips of most importance are given in bold.
Performance tuning is multi-faceted, broadly fitting into:
- Making all page requests faster
- Making the most common page requests extremely fast (e.g. static caching for guests)
- Improving the number of page requests per unit time (throughput)
- Removing specific bottlenecks
- Stopping requests blocking each other (e.g. database locks)
- Blocking abuse, such as bad bots, DOS attacks, misrouted traffic, and flooders
- Improving page download speeds (e.g. compression and minification)
- Improving front-end rendering time
- Optimising server configuration
Limiting factors often are:
- CPU (most common)
- Memory
- Disk access speed (quite common if you do not have an SSD and fast I/O channel)
- Networking speed (less common)
Table of contents
Composr configuration
Composr caches
Composr provides many forms of cache, to allow the system to run as quickly as possible. Caching is one of the best ways to improve performance as it cuts out the need to do repeated work.
The Composr caches are:
- language cache: this removes the need for Composr to parse the .ini language files on each load
- template cache: this removes the need for Composr to parse the .tpl template files on each load
- Comcode page cache: this removes the need for Composr to parse the .txt Comcode pages on each load
- Comcode cache: this removes the need for Composr to parse Comcode whenever it is used
- block cache: this removes the need for many blocks to be fully executed whenever they are viewed – they are cached against the parameters they are called up with using a per-block tailored scheme
- theme image cache: this removes the need for Composr to search for theme image files whenever they are referenced by code (a code could translate to perhaps 10 different URLs, due to the flexibility of the system)
- values caches: this isn't a specific cache, but caching of values such as member post counts removes the need for Composr to recalculate them on-the-fly
- persistent cache: this caches common data outside the database, in some kind of persistent cache handler (see the next section)
- advanced admin cache: this can be turned on in the Admin Zone configuration to let admins have cached pages on their computer that are displayed immediately (without server communication) as an interstitial for roughly 1 second while the server generates the up-to-date page
- static cache: this can be turned on from the Installation Options editor to feed static pages to bots or Guests
- self learning cache: this can be turned on from the Installation Options editor and allows pages to learn what resources they need, for efficient bulk loading of essentials while avoiding loading full resource sets upfront
Technical note for programmers
Composr is not designed to "cache everything as it will be displayed" because the highly dynamic nature of the system makes that impossible. Instead, Tempcode is typically cached against relevant parameters, which provides a "half way" between determined output and neutral data.
Static caching
The static caching provides extreme performance by bypassing most of the Composr framework. It works on the principle that bots and guest users don't need dynamically-generated content (in most situations), and thus whole pages can be cached.
Enable the static cache from Installation Options.
Composr tries to be smart about what is statically cached, but you also have some control over it via some options available within the Installation Options.
Note(s):
- Unless you prevent static caching running for the purchase and shopping modules, they will no longer be available to Guests – this is because Composr cannot track eCommerce activity to guest users if they are working via statically cached pages.
Persistent caching
PHP requests are completely standalone. No data is held in memory from one request to another. The persistent cache stores regularly-accessed data in memory between requests, so that it does not need to be loaded from the database or re-calculated on each page load. This cache removes about 30% of the page load time, but most importantly, takes load away from the database, allowing the database to become less of a limiting factor in high throughput situations. Composr does not cache processed content in memory that has no special featured status, as this would only trade reliance on CPU for reliance on memory in a non-productive fashion.
The cache is implemented to work with any one of:
- APC/APCu (maintenance status), which is a PHP extension that provides in-memory storage features as well as an opcode cache and is associated with core PHP development (hence why we recommend it as the opcode cache to use)
- memcache (maintenance status) ('memcache' is the PHP extension for the 'memcached' server), which provides a heavyweight solution to memory sharing – it is not recommended that this be used for typical websites, as memcached requires additional configuration
- or memcached (maintenance status), which works via the other PHP memcached server extension
- Wincache (maintenance status) – a PHP accelerator extension developed by Microsoft, optimised for Windows
- XCache (maintenance status) – another PHP accelerator extension
- disk cache (maintenance status) – stores data under the caches/persistent directory – while this does increase disk access, it still provides a performance boost over not having a persistent cache
The PHP extensions work by either holding memory resident within the web server's PHP module (so it stays in memory between requests), or within a standalone persistent caching server that runs on the network. The filesystem handler just stores data on disk, but disk caching will usually mean the data loads very quickly, and certainly more quickly than many disparate filesystem and database read operations.
Composr will not use a persistent cache by default but it may be enabled from the Installation Options editor.
Be aware that some persistent cache backends may be tied into a PHP process, rather than held system wide. This is the case for APC. Therefore if you are using a CGI architecture the full contents of the cache will be dumped each time a web request finishes (defeating the entire purpose); for fast CGI it's not as bad, as processes serve multiple requests, although it does mean there is a lot of duplication between each fast CGI process.
Do not set the persistent cache's memory (typically done in the PHP extension configuration) higher than needed, especially on a fast CGI or module-based PHP installation – for at least some persistent cache backends the memory is pre-allocated, for each PHP process that is running. For APC, this is set via the apc.shm_size setting.
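For example, if using APCu, a minimal php.ini sketch might look like this (the size shown is purely illustrative; size it against your actual working set):
Code (Text)
; php.ini (or a conf.d snippet) – illustrative values only
apc.enabled = 1
apc.shm_size = 32M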
General advice
General tips:
- Don't have addons installed that you don't need. Some have particular performance costs, such as catalogues (which adds custom field checking to all content types), while others just add general overhead in terms of sitemap generation, language string presence, moniker registration, and hook execution.
- Enable the disable_smart_decaching option in the Installation Options. You can make this smarter by only allowing smart decaching when the FTP log has been modified recently.
- If you get a lot of 404 errors, it is best to make a static 404 page instead of using Composr's, which can be relatively intensive. You activate a custom 404 page by putting a reference to it in the .htaccess file (Apache only; our recommended .htaccess file does this for Composr's own 404 page) – see the sketch after these tips. For IIS, there is sample code in the web.config file.
- Enable the "Output compression" configuration option. Unless you're using Cloudflare's optimisation features, in which case disable it so Cloudflare can do that instead.
- Sending e-mails can be an intensive task because Composr has to work out what CSS to include for each e-mail, put templating together, and connect through and send to the e-mail server. Turning on the "E-mail queue" configuration option will avoid this happening immediately during user actions, deferring it. It will make forum posting, for example, faster.
- Disabling URL monikers can help with performance; however this will also impact SEO if you think your URL paths are key to that.
- If you use www in your base URL, set an explicit cookie domain in _config.php to include that. This will cause Google Analytics cookies to be set on the www subdomain, reducing the cookie volume on other domains.
- Deny guest access to pages that are necessarily computationally intensive – to stop bots consuming too many resources.
- For MySQL you can configure Composr to use persistent database connections. This is not recommended on shared hosting because it may annoy the webhost, but if you have a dedicated server it will cut down load times as a new database connection does not need to be established. These are enabled through the "Installation Options" (http://yourbaseurl/config_editor.php).
- There are many further config options and set_value commands for tuning performance described in the Code Book.
- Servers may have incorrectly firewalled DNS resolution, in which case the Composr "Spammer checking level" setting must be set to "Never" to avoid a 28 second timeout happening on each anti-spammer check.
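Regarding the static 404 page tip above, a minimal Apache .htaccess sketch, assuming you have created a flat 404.html file in your web root (the filename is hypothetical):
Code (Text)
# Serve a cheap static page for 404s instead of a full Composr-generated error page
ErrorDocument 404 /404.html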
Moderation tips:
- Avoid very long forum topics – split them up if they get too long. This is because jumping to the latest post has to search back through all posts in the topic to work out pagination positioning. Additionally just going deep into the topic uses a lot of resources for the same kind of reason.
- If you have large numbers of topics then consider deleting topics that have little value.
Stay on the well-trodden path:
- Even though Composr supports various database vendors, it is optimised for MySQL. You should also run the latest stable MySQL version so that you benefit from its performance optimisations.
- Even though Composr supports various third party forums it is optimised for Conversr.
Aggressive caching for bots
If you want to serve cached pages to bots, put a line like this into your _config.php file:
Code (PHP)
$SITE_INFO['static_caching_hours'] = '3';
The cache lifetime in this example would be 3 hours, but you can change it to whatever you require.
The cache files are saved under the caches/persistent directory.
If you want any Guest user to be cached like this, set:
Code (PHP)
$SITE_INFO['any_guest_cached_too'] = '1';
Disk activity
If you have a hard disk that is slow, for whatever reason, you can put these settings into _config.php to reduce access significantly:
Code (PHP)
/* The best ones, can also be enabled via the config_editor.php interface */
$SITE_INFO['disable_smart_decaching'] = '1'; // Don't check file times to check caches aren't stale
$SITE_INFO['no_disk_sanity_checks'] = '1'; // Assume that there are no missing language directories, or other configured directories; things may crash horribly if they are missing and this is enabled
$SITE_INFO['hardcode_common_module_zones'] = '1'; // Don't search for common modules, assume they are in default positions
$SITE_INFO['prefer_direct_code_call'] = '1'; // Assume a good opcode cache is present, so load up full code files via this rather than trying to save RAM by loading up small parts of files on occasion
/* Very minor ones */
$SITE_INFO['charset'] = 'utf-8'; // To avoid having to do lookup of character set via a preload of the language file
$SITE_INFO['known_suexec'] = '1'; // To assume .htaccess is writable for implementing security blocks, so don't check
$SITE_INFO['dev_mode'] = '0'; // Don't check for dev mode by looking for traces of Git
$SITE_INFO['no_extra_logs'] = '1'; // Don't allow extra permission/query logs
$SITE_INFO['no_extra_bots'] = '1'; // Don't read in extra bot signatures from disk
$SITE_INFO['no_extra_closed_file'] = '1'; // Don't support reading closed.html for closing down the site
$SITE_INFO['no_installer_checks'] = '1'; // Don't check the installer is not there
$SITE_INFO['assume_full_mobile_support'] = '1'; // Don't check the theme supports mobile devices (via loading theme.ini), assume it always does
$SITE_INFO['no_extra_mobiles'] = '1'; // Don't read in extra mobile device signatures from disk
Rate limiting
A site can be impacted by a flood of requests from a machine, and it is wise to block this. If you're on a shared host, request floods can even get you suspended, or your whole site automatically rate limited.
To enable Composr's inbuilt rate limiting, add this to _config.php:
Code (PHP)
$SITE_INFO['rate_limiting'] = '1';
$SITE_INFO['rate_limit_time_window'] = '10';
$SITE_INFO['rate_limit_hits_per_window'] = '5';
This produces soft errors with the correct HTTP header. The errors happen early, before Composr boots up.
Note that anyone behind a shared proxy server will share an IP address. See how to properly configure IP addresses under the "Reverse proxying" section.
PHP time limit
If you are really constrained for resources, and worry some requests may take too long in some cases, you can lower the PHP time limit from Composr's default 60 seconds. Note that Composr does raise the limit on its own when really needed.
Code (PHP)
$SITE_INFO['max_execution_time'] = '10'; // Ten second maximum execution time
Note that PHP's time limit does not count time taken doing database queries and other external tasks, except on Windows servers.
Ensure configuration is set
If a configuration page has never been saved, default config values from that page will be calculated on the fly, which is a little slower. Go through and save them all, or run this Commandr command:
Code
:require_code('config2'); foreach (array_keys($GLOBALS['CONFIG_OPTIONS_CACHE']) as $key) if (get_option($key)!==NULL) set_option($key,get_option($key));
'keep' parameters
This is not recommended, but if you really need to squeeze performance, you can disable the 'keep' parameters:
Code (PHP)
$SITE_INFO['no_keep_params'] = '1'; // Disable 'keep' parameters, which can lead to a small performance improvement as URLs can be compiled directly into the template cache
Composr front-end development
Template tips:
- If your website doesn't need to be able to run without a wider Internet connection (i.e. isn't an Intranet), then you could blank out any of the unmodified JavaScript libraries Composr includes (e.g. jQuery) and instead include references to a JavaScript CDN via direct includes in the HTML_HEAD.tpl template. Then it may run directly out of a user's existing web cache.
- Composr won't hard-code image dimensions into templates, but some may use the IMG_WIDTH/IMG_HEIGHT symbols to auto-detect them. This has a very small performance impact – you may wish to hard-code the dimensions once your theme is complete.
Block tips:
- You can mark out parts of a complex Comcode layout as having quick_cache by moving the content into individual Comcode pages then using the main_include_module block to insert it back, with the quick_cache parameter turned on for that block.
- The defer-loading option block parameter is also useful (as it decouples block generation from initial load, and allows blocks to generate in parallel), although it puts a requirement on JavaScript (so crawlers may not get the content for example).
- If you are using a particularly visually complex block (e.g. deep pop-out menus) then use the quick_cache block parameter on it if possible.
- The main_news and main_forum_news blocks have an optimise parameter which will simplify down the Comcode of anything displayed, such that it is stored statically. This usually will have no impact, but may increase performance if a lot of Comcode tags were used. It does have an impact if dynamic elements are used within posts, such as Comcode that checks a user's usergroup or displays the current date/time.
Comcode tips:
- Comcode tabs can take a page-link rather than normal Comcode contents, for the tab to open up via AJAX (don't use this on the default tab though, as that won't work).
- If you make very heavy use of large and complex tooltips then you may want to consider loading these via AJAX. This will require some fairly advanced coding skills, but it may be worth doing. Look at the COMCODE_MEMBER.tpl template for an example of AJAX tooltip code.
- You can add {$,page hint: Quick Cache} into Comcode pages to enable the equivalent of quick block caching for that page.
Miscellaneous tips:
- Serve pre-compressed CSS and JS files by uncommenting lines in themes/*/templates_cached/.htaccess (Apache only); these lines are commented out by default because they cause problems on some servers
- Merge always-needed CSS and JS files into the global files via the "Globally-included CSS/JS files" configuration options
Content Delivery Networks (CDNs)
There are a number of approaches you can take if you want to serve media from third party servers. Doing this reduces latency and your own hosting costs, and brings other potential side benefits such as automatic transcoding and making the web requests cookieless.
Theme/content files
You can have randomised content delivery networks used for your theme images, CSS files, JavaScript files, and other files (such as within the File/media library). Change the Content Delivery Network option to something like:
Code (Text)
cdn.example.com
or:
Code (Text)
example1.example.com,example2.example.com
As you can see, it is a comma-separated list of domain names. These domain names must be serving your files from a directory structure that mirrors that of your normal domain precisely. How you achieve that is up to you, but here are some ideas:
- Use a commercial CDN service
- There are many commercial CDN services available that can be configured in via this option. Many of them will automatically transfer over any file you reference on their service (via URL equivalencies) so that you don't need to worry about copying files over yourself.
- Use your own CDN servers, and something like rsync to keep files in sync
- If you are very sophisticated, you may implement selection of the best geographic server location at the DNS level
- Use Cloudflare, but just on the CDN domain (hence getting all the CDN advantages of Cloudflare but without having to proxy your entire website)
- If you set Cloudflare to control your entire site then this is actually creating a bottleneck and a performance degradation, as you are relying on all your content flowing through Cloudflare, even for users in the same building as your server
- Just run your CDN domains off the same server (this is called 'domain sharding', and is like CDN-lite)
Composr will randomise which CDNs it uses, so parallelisation can work more efficiently (this is the only reason for the comma-separation). Web browsers have a limit on how many parallel web requests may come from a single server, and this works around that.
By default Composr will pipe CSS, JavaScript, and theme image files through the CDN.
You can use the CDN_FILTER directive and symbol to pass anything within through the CDN. This is described further below.
HTML_COMPRESS and CDN_FILTER directives
Composr has HTML compression. If you wish you can edit the GLOBAL_HTML_WRAP template to wrap the contents like:
Code (HTML)
{+START,HTML_COMPRESS,relativeurls|protocolrelativeurls|selfclose|redundantclose|quotes|cdata|delayjavascript|comments}{+START,CDN_FILTER}
...
{+END}{+END}
The beauty of these directives is that they run fast, because they don't do a full parse of the HTML, and the risk of accidentally breaking anything is removed because you have full control over which portions of your HTML they run on.
Both directives make some assumptions based upon Composr coding standards, so make sure you stick to Composr HTML standards when using them.
Note that these directives are disabled if you temporarily disable output minification.
HTML_COMPRESS directive
The HTML_COMPRESS rules available are:
- cdata – remove CDATA code from JavaScript, which is needed for XHTML but not HTML
- comments – remove HTML comments
- we don't include [m]any in Composr because we use Tempcode comments, but it may be useful to remove them from third-party code you've integrated
- delayjavascript – move any inline JavaScript to the end of the HTML (just before </body>), to stop render blocking
- we don't have any in Composr, but you may have integrated some
- it is possible JavaScript may need to run early for the page to render convincingly, or for accurate page-load-speed tracking (in the case of Google Analytics). In this case you need to add a data-optimize="no" attribute to the appropriate <script> tag(s) – see the example after this list.
- protocolrelativeurls – make absolute URLs that are on the same protocol (e.g. https) shorter
- quotes – remove unneeded HTML quotes (this is invalid XHTML, but valid HTML)
- redundantclose – remove unneeded closing tags for tags that get automatically closed
- relativeurls – make URLs relative where possible, greatly shortening them
- if you're using this then only do it in GLOBAL_HTML_WRAP, not anywhere that may be cached in the block cache, as the URL base won't stay the same
- if you're using a <base> tag with a new base URL, you shouldn't use this
- selfclose – remove unneeded tag self-closing indicators for tags that always self-close
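For the delayjavascript rule above, here is a minimal sketch of opting a timing-sensitive script out of the delaying (the script body is purely illustrative):
Code (HTML)
<!-- This inline script is left in place rather than being moved to the end of the page -->
<script data-optimize="no">
	var pageRenderStart = Date.now();
</script>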
More advanced discussion
We don't do anything particularly assumptive or risky in this directive; for that you can consider Cloudflare or mod_pagespeed. For example:
- We don't recompress images, or target scaled-down images to different devices, or switch file extensions on you. We assume you'll compress your own theme images well, and do some image compression when we generate thumbnails. It would be technically very challenging for us to implement these image switching techniques, and we feel we'd cause as many problems as we'd solve.
- We don't make changes to your stylesheet structure. Almost any change can alter the priorities styles are applied with, i.e. break your layout.
- We don't automatically inline images. You can do this manually using the IMG_INLINE symbol, in full knowledge of your real trade-offs.
- We don't automatically merge stylesheets or script files. A technique to do this manually (i.e. in knowledge of your real trade-offs) is discussed in this tutorial though.
- We don't try and implement an initially-inlined-then-locally-cached mechanism using a shared client/server memory of what resources have already been sent. This is a really cool technique that can make initial page-load time optimal, and subsequent page-load time optimal, without any kind of trade-off. However it is very complex and should not be implemented within the scope of a CMS. It also is not necessarily optimal except for simple sites, due to the possibility of connection stutter during the initial monolithic page transfer.
CDN_FILTER directive
The CDN_FILTER directive is the equivalent of individually wrapping the CDN_FILTER symbol around each URL within the contained HTML. This allows images in WYSIWYG-edited code to be passed through the CDN without this having to be seen or considered at the editing level.
You must not run this directive outside (i.e. around) the HTML_COMPRESS directive, as HTML_COMPRESS will break the Composr coding standards that this directive relies on.
If you are using this directive then there are some .htaccess rules you can use (commented out by default) to use long-lasting HTTP Expires headers (Apache-only). The last-modified timestamp is automatically a part of the CDN URLs used, so this works very well (if you change the file, the URL changes, so the cache is implicitly expired).
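The exact bundled rules may differ, but such Apache rules generally take this shape (a sketch, assuming mod_expires is available):
Code (Text)
<IfModule mod_expires.c>
	ExpiresActive On
	# Long lifetimes are safe here because the CDN URLs change whenever the files change
	ExpiresByType text/css "access plus 1 year"
	ExpiresByType application/javascript "access plus 1 year"
	ExpiresByType image/png "access plus 1 year"
</IfModule>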
Improved HTTP piping
In the HTML <head> you can define <link> tags for (increasing cost):
- dns-prefetch – pre-resolve a domain name you are going to need to use somewhere in your HTML/CSS/JS
- preconnect – pre-connect to a web server you are going to take resources from somewhere in your HTML/CSS/JS, including TLS resolution
- prefetch – pre-fetch a URL
- prerender – pre-fetch and pre-render a URL (if the browser supports it, Google Chrome dropped support)
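For illustration, these tags take the following form (the domains and URLs are hypothetical):
Code (HTML)
<link rel="dns-prefetch" href="//cdn.example.com">
<link rel="preconnect" href="https://cdn.example.com">
<link rel="prefetch" href="https://www.example.com/likely-next-page">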
If you have a situation where you can predict what links people will click, you can use a standard HTML approach to define preloading for those links with rel="preload".
Various HTTP streaming techniques are discussed in a blog post I wrote.
HTTP/2 is great, enable it on your server if you can. You can use preload <link> tags to define what the HTTP/2 implementation should automatically push. Just be aware that this will waste bandwidth for users who already have those resources cached.
The work on HTTP/3 is also interesting (previously Google QUIC). Google SDCH was extremely interesting (but Google dropped support due to lack of interest). At the time of writing it is too early to really think about these, but they should provide automatic optimisations to Composr if implemented.
Database management
MySQL tips:
- Automatically kill slow searches (see "Auto-kill slow searches" in the Search tutorial).
- You can alter the sessions table to be of type HEAP, which will have it stored in memory rather than on disk (it has no need to be saved to disk permanently) – see the sketch below.
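A minimal sketch of that change via the MySQL console (the table name assumes the default cms_ table prefix; HEAP is nowadays spelled MEMORY):
Code (Text)
ALTER TABLE cms_sessions ENGINE=MEMORY;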
Huge databases
If you have really large databases then two issues come into play:
- Composr will start making sensible changes to site behaviour to stop things grinding to a halt
- You might start worrying about databases being too large for a single database server and need to implement 'sharding'
You may also want to switch tables to InnoDB instead of MyISAM. If you do this then run this command in Commandr too:
Code
:set_value('innodb', '1'); set_value('slow_counts', '1');
If using InnoDB you may want to customise the innodb_flush_log_at_trx_commit and innodb_flush_log_at_timeout settings. They provide a trade-off between performance and data integrity of recent transactions in the case of critical failure.
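As a sketch, a common performance-leaning compromise in my.cnf is to write the log on each commit but only flush it to disk about once a second (assess the data-loss trade-off for yourself):
Code (Text)
[mysqld]
innodb_flush_log_at_trx_commit = 2
innodb_flush_log_at_timeout = 1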
Sharding
If you have so much data (hundreds of GB, millions of records) that you can't house it in a single database server then you have a good kind of problem, because clearly you are being incredibly successful. It's at this point that serious programming or database administration will need to happen to adapt Composr to your needs. MySQL does have support for 'sharding' that can happen transparently to Composr, where you could use multiple hard-disks together to serve a single database. However this is not the commodity hardware approach many people prefer.
An alternative is to implement a No-SQL database driver into Composr. There is nothing stopping this happening so long as SQL is mapped to it. We have no out-of-the-box solution, but we do have full SQL parsing support in Composr for the intentionally-limited SQL base used by Composr (in the XML database driver), so we have a lot of the technology needed to build the necessary translation layer. Practically speaking though this is a serious job, and at this point you are so huge you should have a full-time team dedicated to performance work.
Composr adaptations
Composr has in the past been tested with up to a million of each of the following:
- Comment topic posts for a single resource
- Ratings for a single resource
- Trackbacks for a single resource
- Forum/topic trackers (if you do this though things will get horribly slow – imagine the number of e-mails sent out)
- Authors
- Members
- Newsletter subscribers
- Point transactions
- Friends to a single member
- Friends of a single member
- Banners
- Comcode pages
- Calendar events
- Subscribers to a single calendar event
- Catalogues (but only a few hundred should contain actual entries – the rest must be empty)
- Catalogue categories
- Catalogue entries in a single catalogue category
- Shopping orders
- Chatrooms (only a few can be public though)
- Chat messages in a single chatroom
- Download categories
- Downloads in a single download category
- Polls
- Votes in a single poll
- Forums under a single forum
- Forum topics under a single forum
- Forum posts in a single topic
- Clubs (but not usergroups in general)
- Galleries under a single gallery
- Images under a single gallery
- Videos under a single gallery (not validated, to test validation queue)
- Quizzes
- Hack attempts
- Logged hits
- News
- Blogs
- Support tickets
- Wiki+ pages
- Wiki+ posts
If there is a lot of data then Composr will do a number of things to work around the problem:
- Choose-to-select lists will either become non-active or be restricted just to a selection of the most recent entries (instead the user can follow in-situ edit links to get to edit something).
- A very small number of features, like A-Z indexes, will become non-functional.
- Pagination features will become more obvious.
- In some cases, subcategories may not be shown. For example, if there are hundreds of personal galleries, those galleries will need to be accessed via member profiles rather than gallery browsing. This is because pagination is not usually implemented for subcategory browsing.
- The sitemap might not show subtrees of content if the subtree would be huge.
- Some Composr requests will on average become very slightly slower (more database queries) as optimised algorithms that load all content from database tables at once have to be replaced with ones that do multiple queries instead.
- Entry/Category counts for subtrees will only show the number of immediate entries rather than the recursive number
- Birthdays or users-online won't show (for example)
- The IS_IN_GROUP symbol and if_in_group Comcode tags will no longer fully consider clubs, only regular usergroups
- Usergroup selection lists won't include clubs except sometimes the ones you're in
- With very large numbers of catalogue entries, only in-database (indexed) sorting methods will work, so you can't have the full range of normal ordering control
- Selectcode will not work thoroughly when using category tree filters if there are more than 1000 subcategories
There is a risk that people could perform a DDoS attack. For example, someone might submit huge numbers of blog items, and then override default RSS query settings to download them all, from lots of computers simultaneously. Composr cannot protect against this (we don't put in limits that would break expected behaviour for cases when people explicitly ask for complex requests, and if we did it would just shift the hackers' focus to a different target), but if you have this much exposure that hackers would attempt this you should be budgeting for a proper network security team to detect and prevent such attacks.
Be aware of these reasonable limits (unless you have dedicated programming resources to work around them):
- Don't create more than 60 Custom Profile Fields, as MySQL will run out of index room and things may get slow!
- Composr will stop you putting more than 300 children under a single Wiki+ page. You shouldn't want to though!
- Composr will stop you putting more than 300 posts under a single Wiki+ page. You shouldn't want to though!
- Don't create more than about 1000 zones (anything after the 50th shouldn't contain any modules either). Use customised page monikers to build a 'directory structure' instead.
- LDAP support won't run smoothly with 1000's of LDAP users in scope (without changes anyway).
- Just generally don't do anything unusually silly, like make hundreds of usergroups available for selection when members join.
MySQL searches
MySQL full-text search can be a resource hog if your server is not configured properly and you have a large amount of content. To make it run efficiently, the MySQL key_buffer_size setting (think of it as a general index buffer; it's not just for keys) must be high enough to contain all the indexes involved in searching:
- the full-text index on the translate table
- indexes on the fields that join into that table
- other indexes involved in the query (e.g. for sorting or additional constraints)
If the key buffer size is not large enough then indexing will work via disk, and for full-text searches or joins, that can be very slow. In particular, if a user searches for common words, the index portion relating to those words may be large and require a large amount of traversal – you really don't want this to be running off of disk.
If you notice searches for random phrases are sometimes fast, and sometimes slow, it likely indicates the key buffer has filled up and is pushing critical indexes out.
You can test cache coverage by priming the key buffer via the MySQL console. This example would be for searches on forum posts, and a key buffer size of 500MB:
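A sketch of such a priming session (the table names here are assumptions based on the default cms_ table prefix and MyISAM tables; prime whatever indexes your searches actually hit):
Code (Text)
SET GLOBAL key_buffer_size = 500 * 1024 * 1024;
LOAD INDEX INTO CACHE cms_translate, cms_f_posts;
SHOW STATUS LIKE 'Key%';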
You'll get a result like:
Code (Text)
+------------------------+------------+
| Variable_name | Value |
+------------------------+------------+
| Key_blocks_not_flushed | 0 |
| Key_blocks_unused | 0 |
| Key_blocks_used | 239979 |
| Key_read_requests | 2105309418 |
| Key_reads | 219167 |
| Key_write_requests | 26079637 |
| Key_writes | 18706139 |
+------------------------+------------+
7 rows in set (0.05 sec)
Once you get it right, full-text searches on large databases should complete in a small number of seconds, rather than tens of seconds. The first search may be slow, but subsequent ones should not be.
It also never hurts to optimise (via OPTIMIZE TABLE or myisamchk -r *.MYI) your MySQL tables. This helps MySQL know how to better conduct queries in general, and re-structures data in a more efficient way.
If getting Composr search to work well does not seem feasible, there is a simple non-bundled addon for using Google to do your site searches. Of course this would not have any sense of permissions and would be limited to Guest content, but that's fine for most use cases.
MySQL database backups
You may have some kind of script that does database backups. What follows is some advice for that script (an example command is sketched after these tips)…
- Call mysqldump with the --skip-lock-tables --quick --lock-tables=false parameters. In MySQL 5.7+ there is a mysqlpump command, which is an alternative to mysqldump. It is more efficient and doesn't need tuning as much.
- Do not backup the contents of certain volatile/large/non-important tables. For mysqldump use multiple --ignore-table=database.table parameters (you cannot omit the database name in this syntax). For mysqlpump use a single --exclude-tables=table1,table2,table3 style parameter.
The tables:
- cache
- cached_comcode_pages
- captchas
- cron_caching_requests
- post_tokens
- messages_to_render
- ip_country
- sessions
- stats
- temp_block_permissions
- url_title_cache
- urls_checked
- digestives_tin
- Run the backup at a time when nobody is accessing the site.
- Avoid backing up test databases; there's no need to stress your machine to back up what does not need to be backed up.
- Consider a professional backup solution. If you have a large site, any amount of load peaking and row locking could cause major issues, so consider investing in a MySQL/disk backup solution that is smarter.
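As a sketch, a backup command following the advice above might look like this (the database name, credentials, and table prefix are all hypothetical):
Code (Text)
mysqldump --skip-lock-tables --quick --lock-tables=false \
  --ignore-table=mydb.cms_cache --ignore-table=mydb.cms_sessions --ignore-table=mydb.cms_stats \
  -u backup_user -p mydb > /backups/mydb-$(date +%F).sql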
Miscellaneous configuration
robots.txt tips:
- Block unwanted bots you see in your logs using robots.txt
- block URLs that you don't care to be indexed
- set a Crawl-delay value
Web server configuration
PHP architecture
Not all PHP configurations are created equal.
Opcode caching
Speed can be approximately doubled if an "opcode cache" is installed as a PHP extension. These caches store compiled PHP code so that PHP does not need to re-compile scripts on every page view. The main solutions for this are:
- Zend OpCache (this is bundled by default in PHP 5.6 and higher, recommended)
- APC (free) or APCu (free)
- wincache (free, Windows only)
- xcache (free)
Opcode caching configuration sometimes lines up with persistent caching configuration, as Opcode cache backends often will also work as persistent cache backends.
Only one opcode cache can be used, and they often need to be manually compiled against the PHP version on the server and then installed as an extension. Such system administration is beyond the scope of this tutorial.
PHP version
Tips:
- Use PHP 7 or higher, as PHP 7 introduced massive speed improvements.
PHP process model
The fastest way to configure PHP is usually as FPM (FastCGI), on top of a threaded web server (e.g. Apache's mpm_event):
- While the traditional Apache mod_php is fast and simple, it cannot run on top of mpm_event because it is not thread-safe, so it doesn't scale up well – you'll find you need more Apache processes just to deal with static requests, but each comes with the overhead of mod_php.
- suPHP and CGI are likely too slow for you.
- FPM will give you a suEXEC-like capability without actually needing to configure suEXEC.
- FPM has the capability of scaling up and down the number of PHP FastCGI processes dynamically, within the bounds you configure.
- Be aware that this requires a baseline of RAM for every website on the server that is running under a different username because the FastCGI processes are user-specific.
- The more RAM you have, the quicker parallel PHP requests can be handled as more FastCGI instances can be configured to remain resident in RAM.
- Don't have an Apache or FastCGI (FPM) configuration that allows more Apache/PHP processes than you have memory for when a server is under high load – it is much better to have saturated CPU than to have your server start page swapping.
- If you are using Composr features where AJAX is involved, expect parallel PHP requests even when only a single user is on the website.
- You'll likely want to carefully configure your FastCGI settings for each website, based on available RAM and anticipated load – a sketch of the relevant pool directives follows this list.
- If you are configuring your websites via a hosting control panel such as ISPConfig, make sure the default www site isn't reserving FastCGI instances when it's not even being used.
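As a sketch, the per-site FPM pool directives being referred to look something like this (the values and paths are purely illustrative and must be sized against your RAM and expected load):
Code (Text)
; /etc/php-fpm.d/example.com.conf – illustrative values only
[example.com]
user = examplecom
group = examplecom
listen = /run/php-fpm/example.com.sock
pm = dynamic
pm.max_children = 20
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6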
Further PHP advice
PHP configuration tips:
- Make sure PHP was not compiled with --disable-ctype.
Alternative PHP engines (not supported)
We have tried in the past to support:
- HHVM (by Facebook) [no longer PHP-compatible]
- PeachPie (for .net) [active]
- Roadsend (commercial software) [dead]
- Quercus (Caucho-sponsored) [dead]
- Project Zero (IBM-sponsored) [dead]
Reverse proxying
It is common to put a high-performance web server between a dynamic website like Composr and its users. This would typically come in one of three forms:
- Use a common third-party proxying service, usually Cloudflare (Cloudflare would route requests to your server, and apply its own optimisations and caching). This effectively creates a CDN because Cloudflare has many servers around the world: although it is different to how CDNs were described earlier in this tutorial because it works at the level of reverse-proxying, rather than having static content served from a different domain name.
- Use nginx (an extremely high-performance server for serving static content) in front of Apache on a single server.
- Process requests via a smart router/firewall – typically Apache would then run on an internal network, otherwise firewalled off from the outside world.
If you are using a reverse proxy then it's important PHP sees the correct IP addresses of end-users, not your proxy server.
You can solve this by setting the Composr trusted_proxies installation option (edited via config_editor.php).
By default we trust Cloudflare IP ranges.
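For illustration, a sketch of the corresponding _config.php line (the addresses are hypothetical, and you should check the option's documentation for the exact value format it accepts):
Code (PHP)
$SITE_INFO['trusted_proxies'] = '203.0.113.10,198.51.100.20';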
mod_pagespeed / Cloudflare optimisations
Google's mod_pagespeed adds some nice micro-optimisations. Most of these are done automatically by Composr, or can be achieved in Composr using the HTML_COMPRESS directive; however, the image optimisations may be of interest. mod_pagespeed is not supplied with Apache, but can be installed on most Linux servers. However you would need server access, or for the webhost to do this for you.
The Cloudflare service (described in the above section) provides some of the same optimisations that mod_pagespeed does, as well as a CDN. However personally I'd opt to use mod_pagespeed and a custom CDN, as it is a more straightforward configuration that doesn't require proxying your traffic (which itself will impact performance). That said, Cloudflare does provide nice anti-spam and traffic filtering, so if you have a big spam/DOS problem, it can be a very useful tool. Cloudflare's preloading feature also looks interesting, but it is not likely to result in noticeable speedup for most sites.
Further advice
Tips:
- Sometimes you may be flooded with random traffic. For example, the Chinese "Great Firewall" has been observed to randomly reroute traffic to avoid it reaching certain sites. If this happens, you may want to use a .htaccess file (Apache only) to spot undesired request patterns and terminate those requests.
- Configure your TLS options appropriately; basically there is a trade-off between perfect privacy ("Perfect Forward Secrecy") and speed – and TLS 1.3 offers good speed improvements.
- Set up fail2ban so that you aren't constantly having to log failed login attempts by hackers to SSH etc (this can be a big I/O performance hit).
- You may want to use only systemd's journald, rather than rsyslogd; on some Linux systems rsyslogd is configured to mirror entries written to journald, which can be very intensive due to the frequent reading in and out. journald is higher performance as it uses an efficient binary format with a cursor feature.
- Consider whether you want journaling or not – it has a significant I/O overhead.
- If you use Dropbox for server backup, and have other users actively saving files into the account, only keep it started for a short time after your backup script finishes – as it will do a lot of I/O updating its SQLite databases whenever any file in Dropbox is changed.
- Your server should be using no, or very little, swap space. If swap space is being used it implies that the server doesn't think the free RAM is enough, and it will be actively moving data into and out of that swap space as your server serves web requests – a real performance killer. You can adjust the threshold at which a Linux server will start using swap space (see the sketch below).
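On Linux the threshold mentioned in the last tip is the vm.swappiness sysctl; a sketch of lowering it (the value is illustrative):
Code (Text)
# Check the current value, then lower it so swap is only used as a last resort
sysctl vm.swappiness
sudo sysctl vm.swappiness=10
# Persist the setting across reboots
echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/99-swappiness.conf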
Hosting scenarios
Shared hosting
CloudLinux
CloudLinux is popular with webhosts. It is a modified version of Red Hat Linux that can limit the performance of individual accounts, based on the lve tools.
CloudLinux tracks a maximum average CPU figure, calculated over the worst 5 second time period your site has in any particular hour (by default). This is calculated across all CPU cores, so it tends to trip on request floods (rather than any single slow request), with the request processing spanning multiple cores.
CloudLinux also tracks average CPU, but this is less of an issue because the average tends to be much less than the maximum, but it is the maximum that hosts will typically look at.
Webhosts may automatically produce warnings based on the CloudLinux maximums getting hit, or even automatic suspensions. At best, sites will automatically have their CPU kept to the configured maximum.
cPanel users will find a resource usage graph is made available. Note that the 100% line on this graph represents 100% of the configured limit, not 100% of a CPU core or of combined CPU. The average and maximum lines are also scaled proportionally to this configured limit. This will confuse you if you aren't aware of it, because everywhere else lve is configured and monitored in terms of percentage of total capacity.
VPS
A VPS (virtual private server) is similar to shared hosting, except you get a fully-manageable virtual machine. This is more secure and configurable.
Dedicated servers
A faster and more dedicated server will make Composr run faster. This may seem obvious, but in the efforts of optimisation, it is easily forgotten. CPU speed will be the most limiting factor for most websites, so this is the first thing that should be considered. Our analysis of web servers targeted at enthusiasts and small businesses shows that servers commonly have only around 10% of the performance of a good desktop machine – look at the CPU family carefully, don't just count the GHz! If you're not careful your 'server' may be a tiny ARM machine in a massive rack, with shared network storage.
Go with an SSD in your machine if possible.
The Cloud
If you have outgrown normal shared hosting / a single VPS or server, you can do a cloud deployment of Composr. Options to consider (in increasing complexity):
- Auto-scaling webhost. There are some webhosts that will automatically set up scaling for you behind-the-scenes. For example, Composr runs perfectly on Rackspace Cloud Sites hosting, where you can set up quite large instances (we have been told that behind-the-scenes there is database replication, to share database load).
- VM provisioning services, such as Amazon EC2. There is a Composr Bitnami image for easy installation on Amazon's infrastructure. Bitnami also provide their own Amazon-based hosting service. There are some solutions to auto-scale out new cloud instances to meet demand, although many companies handle it manually. You will need some kind of load balancer.
- You can use Google App Engine, to get automated scaling, with a trade-off that it is a bit harder to initially set up. This is the most elegant and maintainable option and we generally prefer it over '1' and '2' if you are seriously investing in a maintainable infrastructure. We discuss this in the separate Google App Engine tutorial. At the time of writing Google App Engine PHP support is unstable and won't run Composr properly – this is outside our control, and we have been discussing it with Google directly.
- You can set up your own cloud system on your own servers, or cheap dedicated servers. You could look at using something like OpenStack to achieve this. This is similar to Facebook's infrastructure and a very challenging option, but perhaps the best from a huge-scale cost perspective.
Please be aware that an auto-scaling cloud solution is not the same as just installing Composr on a cloud instance, so options 2/3/4 above are not simple out-of-the-box solutions. If you are just installing on a cloud instance then there's little difference to a VPS. Achieving effective automatic scaling requires significant expertise. If you want to use multiple instances you will need to set up your own MySQL replication and file-synching, which Composr does support (read on).
Shared filesystem
All web servers will need to share files (as we don't try to store file data in the database). There are 3 approaches with Composr:
- The best way to do this is with network storage (e.g. S3 or SAN).
- Composr has support for automatically synching local file changes between servers.
- Syndicate large media files to other storage, and sync all other files as per '2'
We recommend SAN storage as it is easier to set up, doesn't require mirroring of every file on multiple machines, doesn't have sync delay, and has potential for much larger file stores. The down-side is the higher cost and the need for everything to be in the same data centre. You will need to take various factors into account when making your decision, such as cost, availability of programmers, existing hosting limitations, volume of data, nature of data, and requirements for geographic distribution.
SAN storage
There is not much for us to say here. SAN storage would be configured as normal filesystem storage by a skilled IT person.
S3 storage
Composr has no direct inbuilt S3 support, but this is intentional, because you do not need it. Assuming you have a Linux dedicated server or cloud instance, you may use the 'FUSE' system to make the Composr upload directory/directories use S3. First you need to mount S3 storage to a directory, such as uploads/attachments, using a filesystem driver:
How to Mount S3 Bucket on CentOS and Ubuntu with S3FS - TecAdmin
You'll want to mount a subdirectory of your S3 storage rather than a root. I recommend you mirror the same basic filesystem structure as Composr, so that you are mapping subdirectories with equivalence.
You'll need to rename uploads/attachments to something else to do this, and then move the old contents of that directory back into the S3 version of it.
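A sketch of such a mount using s3fs (the bucket name, paths, and credentials file are hypothetical; see the linked guide for the full setup):
Code (Text)
# Mount the matching subdirectory of the bucket over the Composr attachments directory
s3fs mybucket:/uploads/attachments /var/www/composr/uploads/attachments \
  -o passwd_file=/etc/passwd-s3fs -o allow_other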
You are now immediately using S3 for storage; however, URLs are still coming through your own server, which will work, but not efficiently.
Let's say we want normal image attachments to route through S3. We would edit the MEDIA_IMAGE_WEBSAFE.tpl template, changing:
Code
{URL*}
to:
Code
{$PREG_REPLACE,^{$CUSTOM_BASE_URL}/uploads/attachments/,http://whateverYourAmazonBaseURLIS/uploads/attachments/,{URL}}
Alternatively, you could do a similar thing using rewrite rules in the .htaccess/web.config, but this would add additional latency.
Synching local file changes
This section is mainly applicable to programmers
In order to implement file change synchronisation, you need to create a simple PHP file in data_custom/sync_script.php that defines these functions:
Code (PHP)
/**
* Provides a hook for file synchronisation between mirrored servers. Called after any file creation, deletion or edit.
*
* @param PATH $filename File/directory name to sync on (may be full or relative path)
*/
function master__sync_file($filename)
{
// Implementation details up to the network administrators; might work via NFS, SCP, etc
}
/**
* Provides a hook for file-move synchronisation between mirrored servers. Called after any rename or move action.
*
* @param PATH $old File/directory name to move from (may be full or relative path)
* @param PATH $new File/directory name to move to (may be full or relative path)
*/
function master__sync_file_move($old, $new)
{
// Implementation details up to the network administrators; might work via NFS, SCP, etc
}
You may want to code these functions so that log files are not synched, keeping log files separate on each machine. Or you might want to dissect them and pass them on to something like a network-based syslog.
Upload syndication
This section is mainly applicable to programmers, although specific service implementations may be available within addons
Composr contains 2 kinds of hooks for sending off uploads to third-party servers:
- upload_syndication – transfers attachments and gallery files to other services, useful both for syndication and remote hosting; supports a basic UI for determining whether syndication happens
- cdn_transfer – transparently transfers uploads directly to other services at an early stage; has no UI
Database replication
In order to implement replication, just change the db_site_host and db_forums_host values using http://yourbaseurl/config_editor.php (or by editing _config.php by hand in a text editor) so that they contain a comma-separated list of host names. The first host in the list must be the source server. It is assumed that each server has equivalent credentials and database naming. Due to the source server being a bottleneck, it will never be picked as a read-access server in the randomisation process, unless there is only one replica.
It is advised not to set up replication for the Composr sessions table, as this is highly volatile. Instead you should remove any display of 'users online' from your templates, because if you're at the point of replication there will be too many to list anyway (and Composr will have realised this and stopped showing it consistently in many cases, to stop performance issues).
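For example, a sketch of the relevant _config.php lines (the hostnames are illustrative; the first entry is the source server):
Code (PHP)
$SITE_INFO['db_site_host'] = 'db-source.example.com,db-replica1.example.com,db-replica2.example.com';
$SITE_INFO['db_forums_host'] = 'db-source.example.com,db-replica1.example.com,db-replica2.example.com';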
Load balancing
Round-Robin DNS could be used to choose a frontend server from the farm randomly, or some other form of load balancing could be used, such as one based on a reverse proxy server.
A proper load balancer is better than Round-Robin DNS as it can detect when servers are down, or overloaded, and properly route traffic accordingly. You may also have a load balancer that automatically provisions and starts and stops VMs for you based on demand patterns. You may even have a load balancer that acts as a front-end to initiating Cron jobs on whatever server is most available to run them at any point in time.
Geographic distribution of servers
Some people with very ambitious goals want to have multiple servers dotted around the world, able to operate in isolation from each other for maximum redundancy and performance. Priority-based geo-DNS resolution would be used for request routing. Composr cannot currently support this, as ID numbers would conflict if database servers are not kept in strict sync with each other. We hope in a future version we can make changes for it to be possible (see 0003147: Review of cloud filesystem support - Composr CMS feature tracker).
Platform notes
Amazon EC2
Composr does quite a lot of disk I/O when checking for installed files, etc. (much more than a traditional hard-coded custom application would need to do). Because of this we recommend using 'magnetic' rather than 'SSD' storage, because SSD storage on Amazon's infrastructure has very low IOPS allowances (presumably so that their internal bandwidth is used for very-fast burst reads/writes from/to fast SSD rather than more regular reads/writes).
Calculating hosting requirements
This section contains a simple methodology with made-up figures… Let's say there is a 0.5 second load time per page, and that is for 1 core of a 2-core 2GHz machine. Let's say that at peak, there are 25 users, loading a new page every 20 seconds.
We only really consider full page hits, as that's where serious processing lies. Things like image downloads have minimal CPU impact.
Max page hits per second on said machine:
1 / 0.5 seconds * 2 cores = 4 hits per second.
Peak load:
25 users producing hits / 20 seconds between hits = 1.25 hits per second
So, in this sample we can take more hits than at peak. But of course you need to use real numbers.
It's quite a simplistic model, as things often burst, which means things queue up a bit, but also even out over time. Also if there's locking, such as a write operation locking a database table, things can queue, but that doesn't necessarily mean there'll be a high CPU cost – it may just mean traffic is rearranged while locked requests wait a bit longer.
If you are planning on using Amazon instances, you can resize them after-the-fact, but it's rather complex:
Change the instance type - Amazon Elastic Compute Cloud
You effectively have to take it down, then get Amazon to shift over your hard disk image to where they need to put it for a larger instance.
Diagnosing server slow-downs
If you are administrating a server you could come across situations where the server 'grinds to a halt', or spits out depleted-resources messages. This isn't a Composr problem; just as any desktop computer can take on too much work, so can a server.
In these kinds of situations you need to identify which server resource is depleted. Typically it is one of:
- Disk I/O
- Memory
- CPU
- Network I/O (available bandwidth)
A good system administrator will:
- Stay on top of performance metrics, to know what needs optimising or where to spend more on hardware.
- Develop experience isolating the cause of slow-downs, pointing to programmers where they need to do profiling and optimisation.
- Configure a server so that excess activity results in appropriate error messages, rather than a crashed server (for example, by configuring the Apache settings to limit the number of processes and threads to what the server can actually handle).
Potential bottlenecks
Here's a list of metrics you may want to be considering…
- Memory:
- Free non-swap memory
- Assigned swap memory
- Swapping rate
- MySQL memory usage specifically
- Combined PHP memory usage specifically
- Processes:
- Process count
- Process count of PHP CLI processes specifically
- Process count of PHP web processes specifically
- Queues:
- Queued MySQL queries
- Queued web requests
- CPU (†):
- Uptime
- CPU utilisation
- Local I/O:
- Disk I/O load
- I/O wait time
- I/O queue length
- Performance:
- I/O read latency
- I/O write latency
- I/O read throughput
- I/O write throughput
- Network I/O:
- Network I/O load
- Inbound bandwidth utilisation
- Outbound bandwidth utilisation
- Packet loss percentage
- Inbound ping time
- Outbound ping time
- Inbound network pipe speed
- Outbound network pipe speed
† I/O load will also raise these metrics, which can be confusing.
Diagnostic tools
There are various commands on Linux that are useful during diagnosis…
Command | Purpose | Hints |
---|---|---|
cat /proc/meminfo | Show detailed memory information | |
uptime | Show the CPU load at different points in time | If the load level is higher than the number of CPU cores in the server, you have a serious issue. |
ps -Af | Show all active tasks | This will show you if you have a run-away number of processes (e.g. dozens of Apache instances). |
top -n1 | Show active processes, sorted by CPU usage; also, total CPU usage, memory usage, and I/O wait time | This will tell you what processes are using a lot of CPU or memory (press M to sort by memory), as well as giving you good clues to what resource is primarily depleted. You may also want to try atop which is similar but better. |
iostat -xt 1 | Show CPU and disk load | This is useful to find disk I/O load. |
iotop -a -P | Show active processes by I/O load | This command usually is not installed by default. It is very useful to find what processes are doing a lot of I/O. The given parameters will open it up in cumulative mode, which is most helpful as otherwise you can only see spikes in I/O. Use the left/right arrows to change the column sorting. |
vmstat 1 | Watch virtual memory | Watching the numbers changing will tell you if the server is 'page swapping' due to low memory (which is a huge performance killer). |
fatrace | Watch disk reads | This command is usually not installed by default. It will give you much more insight than just looking at live load numbers. If you have a problem with I/O queue buildup (which can happen whatever your I/O throughput is, due to latency) then this will help you find what the I/O is being spent on. Note that writes to a swap partition will not show up here. In my testing I also found some other minor file accesses not showing up, and I don't know why. |
lsof | List open files | Gives an idea what files are being regularly read and/or written to. |
strace -e write -f -y -p <process ID> | Trace write calls made by a running process | Gives an idea what files a specific process is regularly writing to. |
Additionally, the Apache web server has mod_status, which can be configured to show what web requests are coming in and how long they take. This tells you a lot more than the Apache access log will, although looking at the Apache access log is still important for finding actual timestamps of requests (i.e. to gauge throughput) and for making sure you're not overlooking request volume from individual hosts (which may use KeepAlive and therefore stay on the same Apache slots).
Some of the sample commands above are configured to keep showing results in a loop. Note that you can still miss things doing this; for example, from experience, if swapping is happening you may not actually see it in the vmstat output.
There is an 'art' to finding the cause of a performance slow-down. Often just one depleted resource will have a knock-on effect on others. For example, if I/O is saturated, memory may become depleted as Apache processes back up.
Tips
Some further tips:
- If you are on a VPS or shared hosting, ask the webhost whether there is too much load on the host machine at some points during the day. For a VPS, consider whether you have dedicated cores, virtual cores in a small shared pool (worst, depending on your 'neighbours'), or virtual cores in a large pool (very efficient and adaptive, so long as the host's cores aren't all full). Note that a VPS cannot truly know what load the host machine is under, so it will not be optimal in how it manages system load – for example, low-priority tasks such as flushing disk write caches may happen when the VM 'thinks' the disk is under low load, while the real disk may actually be under high load.
- If you are on a VPS then don't assume that I/O will behave like a regular machine's HDD or SSD – there's a good chance the host is using some kind of storage array with its own latency, in addition to VM overhead. I/O is always queued on any architecture, so you will get queue buildup on both the VPS and the host, exacerbated by the latency. From my testing it seems Linux will flush disk writes early (to log files, for example) if it thinks the disk is free, which can actually lock things up badly on a VPS.
- Don't forget that system scheduler hooks may have a performance impact, especially because by default opcode caches are not enabled for command-line PHP (which is typically how the scheduler runs).
- Be wary that web requests may be slowed down by things external to that request – for example if requests have been backlogged, resulting in saturated processing – or if intensive background tasks (such as Cron) are running in parallel.
- If you are seeing swap space used even when you have plenty of free RAM and a low swappiness setting, you can do swapoff -a; swapon -a to force it to clear.
- Check that your robots.txt file (if you have one) can actually be fetched by its URL (it's easy to accidentally set the wrong file permissions on it and not notice, for example).
Improving Apache access logs
Staring at an Apache access log can be frustrating, especially if you have a lot of static file requests, or your server's traffic is split across different logs. You also can't graph a raw log.
It may therefore be useful to use a custom script to visualise a log. Here is one I wrote for the composr.app web server (which runs ISPConfig):
Code (PHP)
<?php
// Config
$cutoff = time() - 60 * 60 * 5;
$php_only = true;
$filter_403 = true;
$common_fast_urls = array(
'https://composr.app/backend.php?type=rss&mode=news',
);
error_reporting(E_ALL);
ini_set('display_errors', '1');
$results = array();
$files = glob('/var/www/*/log/access.log');
foreach ($files as $file_path) {
if (!file_exists($file_path)) {
continue;
}
echo 'Processing ' . $file_path . "\n";
flush();
$domain = basename(dirname(dirname($file_path)));
if ($cutoff === null) {
$lines = file($file_path);
} else {
$lines = explode("\n", shell_exec('tail -n 1000 ' . $file_path));
}
echo ' found ' . strval(count($lines)) . ' lines' . "\n";
flush();
foreach ($lines as $line) {
$parsed = parse_log_line($domain, $line);
if (
($parsed !== null) &&
((!$filter_403) || ($parsed['response_code'] != 403)) &&
(($cutoff === null) || ($parsed['timestamp'] > $cutoff)) &&
((!$php_only) || (preg_match('#\.(php|htm)#', $parsed['url']) != 0)) &&
(!in_array($parsed['url'], $common_fast_urls))
) {
$results[] = $parsed;
}
}
}
usort($results, function ($a, $b) {
return ($a['timestamp'] > $b['timestamp']) ? 1 : (($a['timestamp'] == $b['timestamp']) ? 0 : -1);
});
foreach ($results as $i => $result) {
if ($i == 0) {
foreach (array_keys($result) as $j => $key) {
if ($j != 0) {
echo "\t";
}
echo '"' . str_replace('"', '""', $key) . '"';
}
echo "\n";
}
foreach (array_values($result) as $j => $val) {
if ($j != 0) {
echo "\t";
}
if (empty($val)) {
echo '"-"'; // Otherwise Excel will mess up column alignment
} else {
echo '"' . str_replace('"', '""', $val) . '"';
}
}
echo "\n";
}
function parse_log_line($hostname, $line)
{
/*
Log file format for our configured/chosen LogFormat in Apache...
Remote hostname. Will log the IP address if HostnameLookups is set to Off, which is the default. If it logs the hostname for only a few hosts, you probably have access control directives mentioning them by name. See the Require host documentation.
Remote logname (from identd, if supplied). This will return a dash unless mod_ident is present and IdentityCheck is set On.
Remote user if the request was authenticated. May be bogus if return status (%s) is 401 (unauthorized).
Time the request was received, in the format [18/Sep/2011:19:18:28 -0400]. The last number indicates the timezone offset from GMT
First line of request.
Status. For requests that have been internally redirected, this is the status of the original request. Use %>s for the final status.
Bytes sent, including headers. May be zero in rare cases such as when a request is aborted before a response is sent. You need to enable mod_logio to use this.
Referer
User-agent
The time taken to serve the request, in microseconds. [was added on to the default manually, not normally present]
*/
$matches = array();
if (preg_match('#^(\d+\.\d+\.\d+\.\d+) [^ ]+ [^ ]+ \[([^\[\]]*)\] "(\w+) ([^" ]*) HTTP/([\d\.]+)" (\d+) (\d+) "([^"]*)" "([^"]*)" (\d+)$#', $line, $matches) != 0) {
return array(
'ip_address' => $matches[1],
'date_time' => $matches[2],
'timestamp' => strtotime($matches[2]),
'http_method' => $matches[3],
'url' => 'https://' . $hostname . $matches[4], // We don't actually know in Apache if it is https or http, let's assume https
'http_version' => floatval($matches[5]),
'response_code' => intval($matches[6]),
'bytes' => intval($matches[7]),
'referer' => (trim($matches[8], " \t-") == '') ? null : $matches[8],
'user_agent' => (trim($matches[9], " \t-") == '') ? null : $matches[9],
'time_microseconds' => intval($matches[10]),
);
}
return null;
}
The script outputs a tab-separated format that can be copied and pasted directly into Excel. Excel can then be used to sort and graph the results very easily.
This script will need customising for your particular architecture.
Benchmarking
Slow-downs may be the result of something inherent in your architecture that you didn't expect, and not directly related to your load.
Here are a few useful Linux benchmarking tools:
- sysbench – examine your CPU performance
- hdparm -Tt /dev/<device> – quick benchmarking for read speeds on the given disk
- dd – while not actually a benchmarking tool, it can help you find your disk write performance (you can find examples online)
- bonnie++ – examine detailed disk I/O characteristics
- siege – gauge your web throughput
Debugging system scheduler performance
If you find the system scheduler is running slowly, you can find out which individual hook is the cause via the Low-level logging feature (Admin Zone > Audit > Low-level logging).
Composr back-end development
Tips:
- If you have custom .php scripts that are called a lot, consider whether they could just be static files – or whether the output can be statically cached and a transparent webserver-level redirect used to bypass PHP (a minimal sketch of this pattern follows these tips). PHP has far more overhead than a static file, and is often a bottleneck due to the limited number of FastCGI workers.
- See the non-bundled performance_compile addon.
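For the first tip, here is a minimal sketch of caching a custom script's output to a static file (the file location and 60-second lifetime are arbitrary assumptions, not a Composr API; a webserver-level rewrite pointing at the cached file would avoid PHP entirely):
Code (PHP)
// Hypothetical custom script: serve a cached copy if it is less than 60 seconds old
$cache_file = __DIR__ . '/data_custom/my_script_cache.html'; // arbitrary example location
if ((is_file($cache_file)) && (time() - filemtime($cache_file) < 60)) {
    readfile($cache_file);
    exit();
}

ob_start();
// ... expensive page generation goes here ...
$output = ob_get_flush(); // send the output and keep a copy
file_put_contents($cache_file, $output, LOCK_EX);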
Profiling
A developer can run a profiler to work out bottlenecks in code, and decide on optimisations.
xdebug
For hard-core programming, xdebug would be used. It collects the most detailed profiling information and there are many tools for analysing the data.
Here are a few things to look at:
- Functions called a lot (function calls are costly, perhaps the function isn't really needed that much)
- Functions that take a long time to run individually (perhaps they can be optimised)
- Functions that take a long time to run in aggregate (perhaps they can be optimised)
- Disk operations called a lot (e.g. is_file) (perhaps they can be avoided)
tideways-xhprof
Facebook developed xhprof, a fast profiler that can be used on live servers. It collects less detailed data than xdebug. xhprof was discontinued, but tideways-xhprof is a well-maintained fork.
To use it on a live server you can install the tideways-xhprof extension, then add this PHP code to your _config.php file:
Code (PHP)
if ((class_exists('\Tideways\Profiler')) && (isset($_SERVER['HTTP_HOST']))) {
\Tideways\Profiler::setServiceName($_SERVER['HTTP_HOST']);
}
if (function_exists('tideways_xhprof_enable')) {
global $TIDEWAYS_INIT_TIME;
$TIDEWAYS_INIT_TIME = microtime(true);
tideways_xhprof_enable();
register_shutdown_function(function() {
if ((class_exists('\Tideways\Profiler')) && (function_exists('get_self_url_easy'))) {
\Tideways\Profiler::setTransactionName(get_self_url_easy(true));
}
register_shutdown_function(function() {
$save_id = strval(time()) . '-' . uniqid('', true);
global $TIDEWAYS_INIT_TIME;
//require_code('global4');
$context_data = array(
'wall_time' => microtime(true) - $TIDEWAYS_INIT_TIME,
//'cpu_performance' => calculate_performance_score(),
'cat /proc/meminfo' => shell_exec('cat /proc/meminfo'),
'uptime' => shell_exec('uptime'),
'ps -Af' => shell_exec('ps -Af'),
'top -n1' => shell_exec('top -n1'),
'iostat' => shell_exec('iostat'),
'iotop -n1 -b' => shell_exec('iotop -n1 -b'),
'$_SERVER' => $_SERVER,
'$_ENV' => $_ENV,
'$_GET' => $_GET,
'$_POST' => $_POST,
'$_COOKIE' => $_COOKIE,
'$_FILES' => $_FILES,
);
file_put_contents(dirname(__FILE__) . '/safe_mode_temp/composr-' . $save_id . '.context', json_encode($context_data, JSON_PRETTY_PRINT));
$data = tideways_xhprof_disable();
file_put_contents(dirname(__FILE__) . '/safe_mode_temp/composr-' . $save_id . '.xhprof', serialize($data));
});
});
}
The collected .context files can be viewed in any text editor, and are designed to give a little context to the environment status when the profiling happened.
Commercial dashboards
New Relic and Tideways both provide high-quality performance monitoring dashboards, for a fee. To a limited extent Google Analytics can also identify performance issues.
Composr's inbuilt profiler
Composr's built-in profiler is enabled via a hidden option (described in the Code Book, and in comments in sources/profiler.php).
MySQL
MySQL has a number of great features for diagnosing performance issues.
Slow query log
MySQL has a "slow query log" feature, which logs slow-running queries to a file.You enable it through the MySQL configuration (including at run-time). Documenting how is simple but beyond the scope of this tutorial.
General log
To get an idea what kind of queries are running, you can enable the general log. Maybe queries aren't slow per se, but you may (for example) have too many queries and need to see where the workload is. You enable it through the MySQL configuration (including at run-time). Documenting how is simple but beyond the scope of this tutorial.
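A minimal run-time sketch (the file path is an arbitrary example; remember to turn the log off again, as it grows quickly):
Code (SQL)
SET GLOBAL general_log = 'ON';
SET GLOBAL general_log_file = '/var/log/mysql/general.log'; -- example path
-- ...reproduce the workload, then:
-- SET GLOBAL general_log = 'OFF';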
Process list
The MySQL query SHOW FULL PROCESSLIST is very useful for showing current activity when you notice the database being slow.
Performance schema
MySQL has the capability to do detailed query performance logging, recording problematic queries across a number of metrics. Data is stored in the performance_schema database, which can be queried directly.
It's a little tricky to get performance schema running:
- It needs to be enabled in the main MySQL configuration (i.e. cannot be done at run-time).
- Actual instruments and consumers need enabling. To enable all of them:
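A minimal sketch (the exact contents of the setup tables vary between MySQL versions):
Code (SQL)
UPDATE performance_schema.setup_instruments SET ENABLED = 'YES', TIMED = 'YES';
UPDATE performance_schema.setup_consumers SET ENABLED = 'YES';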
- You can then query the data. For example, to find out what queries are creating on-disk temporary tables:
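One possible query, against the statement digest summary table (a sketch; column availability depends on MySQL version):
Code (SQL)
SELECT DIGEST_TEXT, COUNT_STAR, SUM_CREATED_TMP_DISK_TABLES
FROM performance_schema.events_statements_summary_by_digest
WHERE SUM_CREATED_TMP_DISK_TABLES > 0
ORDER BY SUM_CREATED_TMP_DISK_TABLES DESC
LIMIT 20;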
MySQL also maintains some simple global counts of problematic queries as status variables. For example, to find use of temporary structures:
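A sketch using MySQL's standard status counters:
Code (SQL)
SHOW GLOBAL STATUS LIKE 'Created_tmp%';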
Documenting all the things you can query is beyond the scope of this tutorial. Check the MySQL documentation and/or look at what tables and fields there are.
Composr's page stats
If you have the stats addon you can also see what percentage of clock time each page is running for, using this query:
Code (SQL)
SELECT the_page,SUM(milliseconds),SUM(milliseconds)/((SELECT config_value FROM ocp_config WHERE the_name='stats_store_time')*24*60*60*1000)*100 FROM ocp_stats GROUP BY the_page ORDER BY SUM(milliseconds);
Filesystem
If you create a data_custom/debug_fs.log file then all disk activity will be written to it when you load the site with keep_debug_fs=1.
See also
- Moving sites
- Installation on Google App Engine
- Webhosting for Composr
- Website Health
- https://pagespeed.web.dev/
- https://composr.app/tr…g_view_page.php?tag_id=16
- https://share.transistor.fm/s/38a4e496