My approach is to flatten the data that we store in the database. Instead of dumping serialized data into p_data for each bucket/interval, we will flatten out the keys. Every key-value pair (data point) will be its own row in the database.
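The flattening step can be sketched like this (the `flatten()` helper and the `||` path separator are illustrative assumptions, not the actual implementation):

```python
def flatten(data, prefix=""):
    """Turn nested dicts into flat path -> value pairs, joining key segments with '||'.

    Each resulting pair would become one row in the database instead of
    one serialized blob per bucket/interval.
    """
    rows = {}
    for key, value in data.items():
        path = f"{prefix}||{key}" if prefix else key
        if isinstance(value, dict):
            rows.update(flatten(value, path))
        else:
            rows[path] = value
    return rows

nested = {"posts": {"count": 42, "topics": 7}, "users": {"active": 3}}
print(flatten(nested))
# {'posts||count': 42, 'posts||topics': 7, 'users||active': 3}
```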
Pros:
- Much lower memory use, since we are no longer selecting entire dumps of p_data; these can easily be multiple MB each.
- The flat key structure means we can select the groups of data points we want using LIKE `keys||to||select||%`, instead of loading entire dumps, running unserialize on them, and then finding the data points we want. Since this happens directly in SQL, we can also select keys in batches (e.g., 100 at a time) to avoid OOM.
Cons:
- Many more rows in the database (but they will be smaller)
- Many more SQL queries involved (but that is mainly on the scheduler; graphs won't see much of an increase, since all the data points they need are selected together with a single wildcard LIKE statement)
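A minimal sketch of the flat schema and the wildcard selection, using sqlite3 for illustration (the table and column names here are assumptions, not the real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One row per data point, instead of one serialized p_data blob per interval.
conn.execute("CREATE TABLE data_points (interval_start INT, key TEXT, value TEXT)")
conn.executemany(
    "INSERT INTO data_points VALUES (?, ?, ?)",
    [
        (1700000000, "posts||count", "42"),
        (1700000000, "posts||topics", "7"),
        (1700000000, "users||active", "3"),
    ],
)

# Select just one subtree of keys with a wildcard LIKE,
# rather than unserializing a whole dump to find them.
cur = conn.execute(
    "SELECT key, value FROM data_points WHERE key LIKE ? ORDER BY key",
    ("posts||%",),
)

# fetchmany() reads in fixed-size batches (e.g. 100 rows) to avoid OOM.
selected = {}
while batch := cur.fetchmany(100):
    selected.update(batch)
print(selected)
# {'posts||count': '42', 'posts||topics': '7'}
```

The trade-off is visible here: three rows replace one blob, but the read side only ever touches the rows it asked for.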
I am testing these changes now. Processing times have been reduced to about 4-5 minutes, and I will continue to monitor the results.