View Issue Details
ID | Project | Category | View Status | Date Submitted | Last Update |
---|---|---|---|---|---|
658 | Composr | core | public | 2012-07-02 18:30 | 2013-10-10 19:22 |
Reporter | Chris Graham | Assigned To | Chris Graham | ||
Priority | normal | Severity | feature | ||
Status | resolved | Resolution | fixed | ||
Summary | 658: Output streaming | ||||
Description | If Tempcode isn't in some kind of diagnostic mode (showing the Tempcode tree or whatever), then have a new paradigm for Tempcode evaluation in the API. Currently we build up a complete Tempcode tree and then evaluate it. If we drop the idea of preprocessing (discussed in another issue), we no longer need to do this.

Instead of building up the GLOBAL_HTML_WRAP template last, build it first. Obviously the 'MIDDLE' parameter would be unbound at this point, as execution has not finished, so we initially call do_template without passing 'MIDDLE'. Then, instead of evaluate_echo(), we call a new method on the Tempcode object: evaluate_echo__until_stalled(). This method iterates over the Tempcode seq_parts until it reaches one that has not been bound; it then quits out, storing where it was. Once we know 'MIDDLE' is computed, we call evaluate_echo__until_stalled(true) again and it finishes the output (true indicates it should now expect all parameters to be known, as we're finishing off, and give errors for any that are missing). This allows data transfer to run in parallel with server-side execution, and (theoretically) reduces memory consumption.

We can go further and allow output of the MIDDLE component iteratively (e.g. post-by-post for a forum topic). We'd check a global state that says whether we're currently outputting, and if we are, we call evaluate_echo__until_stalled() on the screen template we're currently building up (like 'GLOBAL_HTML_WRAP', this would need preinitialising with some missing variables calculated later). The pattern in this case would be a bit more complex, as we'd be splitting a variable into multiple parts. The code would look something like:
```php
$tpl = do_template('CNS_TOPIC_SCREEN', array('TITLE' => $title, 'POSTS' => NULL));
if (is_outputting_already()) {
    $tpl->evaluate_echo__until_stalled();
}
foreach ($posts as $post) {
    $_post = render_post($post);
    $tpl->extend_binding('POSTS', $_post);
    if (is_outputting_already()) {
        // Tells it 'POSTS' is not done yet, so output what it has but do not advance the iterator past it
        $tpl->evaluate_echo__until_stalled('POSTS');
    }
}
$tpl->mark_fully_bound('POSTS');
if (is_outputting_already()) {
    $tpl->evaluate_echo__until_stalled(true);
}
// When attach'd, $tpl would have to know that output had already happened, stored as a note
// inside $tpl -- so when evaluate_echo__until_stalled() was called on the global template,
// it would skip over this value for MIDDLE, knowing it was done already.
return $tpl;
```
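The stall-and-resume mechanism can be sketched in a language-agnostic way. This is a minimal Python analogue with illustrative names (Composr's real engine is PHP, and its seq_parts structure is richer than this): evaluation echoes literal parts in order, stalls at the first unbound parameter, and resumes from the stored position on the next call.

```python
# Illustrative sketch, NOT Composr's actual API: a template whose evaluation
# can stall at an unbound parameter and resume later from where it stopped.
import sys

class StreamingTemplate:
    def __init__(self, parts):
        # parts: list of literal strings, or ('param', name) placeholders
        self.parts = parts
        self.bindings = {}
        self.pos = 0  # where evaluation stalled on the previous call

    def bind(self, name, value):
        self.bindings[name] = value

    def evaluate_echo_until_stalled(self, final=False, out=sys.stdout):
        # Echo parts in order until we hit a placeholder with no binding.
        while self.pos < len(self.parts):
            part = self.parts[self.pos]
            if isinstance(part, tuple):
                name = part[1]
                if name not in self.bindings:
                    if final:
                        # Finishing off: every parameter must be known by now
                        raise KeyError('unbound parameter: ' + name)
                    return  # stall; resume from self.pos on the next call
                out.write(self.bindings[name])
            else:
                out.write(part)
            self.pos += 1

# Usage mirroring the GLOBAL_HTML_WRAP flow described above:
wrap = StreamingTemplate(['<html><body>', ('param', 'MIDDLE'), '</body></html>'])
wrap.evaluate_echo_until_stalled()        # header goes out, stalls at MIDDLE
wrap.bind('MIDDLE', '<p>screen output</p>')
wrap.evaluate_echo_until_stalled(final=True)  # finishes the page
```

The key design point is that the stall position is stored on the template object itself, so the caller never has to track how much has already been sent.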
Additional Information | This is all rather tricky, as Composr would need to be able to bomb out of the Tempcode engine mid-way, and keep track of what it has already done. Some extra data will need putting into the Tempcode engine. We'll also need to do it for both the runtime and compiler engines, meaning extra work. It can only happen if preprocessing has been disabled.

We'd need a way for pages to define their meta-data (title, meta tags, description) before run() is called. Maybe it'd be something like can_run_incrementally(), which has a code contract to set the meta-details if it returns true. Once site.php has called can_run_incrementally(), it knows whether it can do incremental output.

If output has already started, we don't need to cache blocks, or actually build up the real Tempcode tree. This should help performance a lot too - less memory, and no time spent growing the complex Tempcode tree.
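The proposed contract can be sketched as follows (Python, with hypothetical names - can_run_incrementally() is the name suggested above, everything else is illustrative): a page that returns true from can_run_incrementally() guarantees its meta-data is already set, so the dispatcher can flush the page header before run() executes.

```python
# Hypothetical sketch of the proposed contract: can_run_incrementally()
# must set the meta-details before returning True, so the <head> can be
# streamed before run() is called. Names other than can_run_incrementally
# are illustrative, not Composr's API.
class Page:
    def __init__(self):
        self.meta = None

    def can_run_incrementally(self):
        # Code contract: returning True means meta-data is now set.
        self.meta = {'title': 'Topic: streaming', 'description': 'demo'}
        return True

    def run(self):
        # Body generation may be slow; with incremental output the header
        # has already gone out by the time this executes.
        return '<p>body generated later</p>'

def serve(page):
    chunks = []
    if page.can_run_incrementally():
        # Header can be flushed immediately; meta is guaranteed set.
        chunks.append('<head><title>%s</title></head>' % page.meta['title'])
        chunks.append(page.run())
    else:
        # Fallback: must fully run before the title is known
        # (assumes run() sets page.meta as a side effect in this case).
        body = page.run()
        chunks.append('<head><title>%s</title></head>' % page.meta['title'])
        chunks.append(body)
    return ''.join(chunks)
```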
Tags | Type: Performance | ||||
Time estimation (hours) | 24 | ||||
Sponsorship open | |||||
child of | 657 | Resolved | Chris Graham | Cut-down the Tempcode preprocessing step |
|
I really like the idea of being able to specify which pages are slipstreamed. We don't need to ensure the whole of any Composr site can work with it; we can do it on a page-by-page basis, optimising the pages that are hit most on any particular site. |
|
This is implemented, except we're not iteratively streaming MIDDLE as it is generated. That would give very little performance gain, because MIDDLE in total isn't usually that time-consuming (at this point we have already off-loaded some complexity into pre_run, and pre-sent a lot of output to keep the bandwidth busy). There is also a major problem: if an error occurred mid-way through generating MIDDLE, we could not recover to display a clean error message.

That said, we can still do this on a module-by-module basis if we so choose. We just do it by not attaching Tempcode objects, but instead echoing them out immediately. This can also be multi-layered. It's no more work than having an API to do it, and actually cleaner.

In terms of what is implemented: almost all modules have been made to support output streaming. Comcode pages have it implemented. Minimodules do not, as they are not sophisticated enough to pre-declare things like screen titles, but are likely to use them.

All in all, this is a really nice improvement. We don't yet have the self-learning cache implemented, so things look a bit funky when CSS includes aren't loaded initially (the browser has to re-render the page, so it flickers); when that is done, it will be even better. There is no overall performance negative to the streaming, and strong positives: psychologically, output is seen starting sooner; bandwidth is kept busy due to lower latency; and the connection is kept prioritised through routers due to the faster response. |
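The "echo immediately instead of attaching" alternative described above can be sketched like this (Python, illustrative names only; in Composr this would be plain echo calls in PHP module code): when output is already streaming, each rendered post is written straight out and nothing accumulates in memory; otherwise the module falls back to building up the result as before.

```python
# Sketch of the module-level alternative: echo each rendered piece as soon
# as it is ready, rather than attaching it to a growing template object.
# render_post/output_topic are illustrative names, not Composr's API.
import sys

def render_post(post):
    return '<div class="post">%s</div>' % post

def output_topic(posts, outputting_already, out=sys.stdout):
    if outputting_already:
        # Streaming path: write immediately, keep nothing in memory.
        for post in posts:
            out.write(render_post(post))
        return None  # nothing to attach; output has already happened
    # Fallback path: build up the full result as before.
    return ''.join(render_post(p) for p in posts)
```

Because the streaming path returns nothing to attach, callers naturally cannot double-output it, which is the same property the note describes tracking inside $tpl in the attached-Tempcode design.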