#482 - Detailed line-by-line review of what code actually runs during output (beyond profiling)
| Identifier | #482 |
|---|---|
| Issue type | Feature request or suggestion |
| Title | Detailed line-by-line review of what code actually runs during output (beyond profiling) |
| Status | Closed (rejected) |
| Tags | Type: Performance (custom) |
| Handling member | Chris Graham |
| Addon | core |
| Description | There are a few techniques for improving performance:
- Profiling: what functions are called a lot, then reducing the call count (function calls are relatively expensive in PHP, and a high count also indicates that something might be getting called more than necessary, or repeated)
- Profiling: what functions take a long time for each call, then trying to optimise them to run faster
- Profiling: what functions take a lot of time across all executions in total, then trying to optimise them to run faster
- Code coverage: if code that gets loaded never runs, move it into an auxiliary file and only load it on demand
- Analysing the code with a "does this need to happen at all?" mindset - in many cases you can restructure code so the more complex stuff is bypassed for the majority of execution scenarios, e.g. by recognising and hard-coding the most common execution cases
- Laboriously manually stepping through the bootstrapping code, exiting with an indication of how much time was taken so far; any time a jump in the time is observed, trying to optimise that jump away before continuing
- Inlining execution directly, instead of relying on making function calls (either PHP calls or Composr ones)
- Doing more caching, so things don't need re-calculating for each page request
- Trying to make the system self-learn how it executes, to remove the need to choose between excessive pre-loading and costly iterative loading

And less useful but still notable ones:
- (Optimising frontend performance, JS minification, etc. - this usually has the smallest gain, and isn't so interesting because it doesn't solve scaling problems at all)
- (Offsetting execution onto the client rather than the server - rarely viable, because it introduces a JS dependency, might increase the amount of data to be transferred, and can be impossible due to the need for server-side security checks)
- (Persistent cache)
- (HipHop PHP)

Most of these are analytical ways (usually using a profiler) to gauge performance bottlenecks matching certain cost metrics, and then to optimise them away. A lot of time has been spent on most of these over the years, resulting in perhaps a 3x speedup in v8 compared to v2. There isn't a whole lot left to gain using existing tools, and you need to be increasingly creative to find optimisations that become less and less effective.

An interesting new approach I just had is to write a simple tool that takes the Composr code and builds a "profiling version" of it. It would run a code parse, then intersperse logging code after every line, to output what line was called, how long it took, and what data it worked on (a rough sketch of such a tool is given below the table). That would then be saved as a debug version of Composr that, when run, produces this very extensive logging. Normal profilers don't give this amount of data, as they work just on function calls and statistics, rather than producing a complete dump.

Once we have a complete dump we can then try to edge Composr towards the optimised scenario: a Composr that does the absolute theoretical minimum to get the data the user needs to see sent to their browser. That would be simply echoing it out, taking the bits needed from whatever block caches (etc.) hold that data, and doing whatever permission checks are required. Reality is very different, with the need to pull out lots of language strings, do lots of run-time computation, pass things up and down different levels of the framework, etc.
But by having a complete execution dump, and spending a decent chunk of time on it, we can get a better view of how we might create new lookup tables, new self-learning techniques, caching by default except where exceptions are automatically observed (identified use of dynamic/volatile/member-specific elements - a second sketch below illustrates this idea), and so on, to be able to "slipstream" through everything. The new approach provides a new window into how Composr executes, to spark this kind of creativity and to provide a direct one-to-one comparison against hand-written (i.e. non-flexible, overly-specific, non-modular) optimal algorithms. |
| Steps to reproduce | |
| Funded? | No |
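For illustration, here is a minimal sketch of what the proposed line-level instrumenter could look like. Everything in it is hypothetical: `instrument_file` and `cms_profile_probe` are invented names, not existing Composr APIs, and the token-walking logic is deliberately naive (a real tool would need to handle `;` inside class bodies, among other edge cases, as noted in the comments).

```php
<?php
// instrument.php - rough sketch of a "profiling build" generator.
// Reads a PHP source file and splices a logging probe after every
// statement terminator, so the rewritten file reports which line just
// ran and how long it took. All names here are hypothetical.

function instrument_file(string $in_path, string $out_path): void
{
    $tokens = token_get_all(file_get_contents($in_path));

    $out = '';
    $paren_depth = 0; // ';' inside for(...) headers must not get a probe
    foreach ($tokens as $token) {
        if (is_array($token)) { // [token id, source text, line number]
            $out .= $token[1];
            continue;
        }

        $out .= $token;
        if ($token === '(') {
            $paren_depth++;
        } elseif ($token === ')') {
            $paren_depth--;
        } elseif ($token === ';' && $paren_depth === 0) {
            // Probe after each complete statement. A real tool must also
            // skip ';' in class bodies (properties, constants, abstract
            // method signatures), where an injected call is a syntax error.
            $out .= ' \\cms_profile_probe(__FILE__, __LINE__);';
        }
    }

    file_put_contents($out_path, $out);
}

// Runtime shim the probes call: logs file, line, time elapsed since the
// previous probe, and current memory use. Passing get_defined_vars() in
// from the call site would additionally capture "what data it worked on",
// at a heavy performance cost.
function cms_profile_probe(string $file, int $line): void
{
    static $last = null;
    $now = microtime(true);
    if ($last !== null) {
        error_log(sprintf('%s:%d +%.6fs mem=%d', $file, $line, $now - $last, memory_get_usage()));
    }
    $last = $now;
}

instrument_file($argv[1], $argv[2]);
```

Running a request against the instrumented build would then yield the complete line-by-line trace described above, which could be diffed against the theoretical-minimum execution path.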
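And here is an equally hypothetical sketch of the "cache by default, learn the exceptions" idea from the end of the description: output for a key is cached on first sight, and if a later spot-check finds the same key producing different output, the key is marked volatile and bypasses the cache from then on. `learned_cache` is an invented name, and a real implementation would fold member-specific state into the key rather than use flat files like this.

```php
<?php
// learned_cache.php - illustrative sketch only; not a real Composr API.
// Caches generated output by default, but "learns" which keys are dynamic:
// a sampled re-generation that disagrees with the cached copy marks the
// key as volatile, and volatile keys bypass the cache permanently.

function learned_cache(string $key, callable $generate): string
{
    $cache_path = sys_get_temp_dir() . '/lc_' . md5($key);
    $volatile_path = $cache_path . '.volatile';

    if (file_exists($volatile_path)) {
        return $generate(); // Learned exception: dynamic content, never cache
    }

    if (file_exists($cache_path)) {
        // Spot-check ~1% of cache hits for volatility
        if (mt_rand(0, 99) === 0) {
            $fresh = $generate();
            if ($fresh !== file_get_contents($cache_path)) {
                touch($volatile_path); // Exception observed: stop caching
                return $fresh;
            }
        }
        return file_get_contents($cache_path);
    }

    $output = $generate();
    file_put_contents($cache_path, $output);
    return $output;
}

// Example: a block whose output is static for most parameter combinations
echo learned_cache('block:main_news:page=1', function () {
    return '<div>...rendered block...</div>';
});
```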

Comments