This approach would be beneficial for compressing historical data: it would reduce the storage needed to historize values, speed up retrieval of large data sets, and resolve slowness issues. It would also be worth considering options such as time-weighted average values and interpolated values at set intervals.
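For illustration only, a minimal sketch of what "time-weighted average values at set intervals" could look like, assuming raw samples arrive as sorted (timestamp, value) pairs and values are held constant between samples (stepped interpolation); the data layout and function name are hypothetical, not the PI archive implementation:

```python
from datetime import datetime, timedelta
from typing import List, Tuple

Sample = Tuple[datetime, float]

def time_weighted_averages(samples: List[Sample],
                           interval: timedelta) -> List[Sample]:
    """Collapse raw samples into one time-weighted average per interval,
    holding the last value between samples (stepped interpolation).
    Assumes `samples` is sorted by timestamp."""
    if len(samples) < 2:
        return list(samples)
    out: List[Sample] = []
    bucket_start = samples[0][0]
    while bucket_start < samples[-1][0]:
        bucket_end = bucket_start + interval
        weighted_sum, covered = 0.0, 0.0
        for (t0, v0), (t1, _) in zip(samples, samples[1:]):
            # Overlap of the segment [t0, t1) with the current bucket.
            lo, hi = max(t0, bucket_start), min(t1, bucket_end)
            if hi > lo:
                w = (hi - lo).total_seconds()
                weighted_sum += v0 * w
                covered += w
        if covered > 0:
            out.append((bucket_start, weighted_sum / covered))
        bucket_start = bucket_end
    return out
```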
This would be extremely useful for us. It would help reduce the volume of data in our archives and ensure that the data matches the current compression rules for each tag.
This would be massively useful, to the extent that I actually wrote a tool to apply simple lossless compression. We have thousands of tags with no compression at all, and many are being collected at a 1-second scan rate. Even after we have fixed these going forward, we still have several years' worth of archives containing this data. It makes retrieval and analysis slow and inefficient.
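As a hedged sketch of one form "simple lossless compression" could take: drop interior points in any run of identical values, keeping only the first and last, so the stored series still reproduces the original exactly. This is purely illustrative, not the commenter's tool and not PI's swinging-door compression:

```python
from datetime import datetime
from typing import List, Tuple

Sample = Tuple[datetime, float]

def drop_repeated_values(samples: List[Sample]) -> List[Sample]:
    """Remove points whose value equals both its neighbours, so only the
    first and last point of each run of identical values is kept."""
    out: List[Sample] = []
    for i, (ts, val) in enumerate(samples):
        is_interior_repeat = (
            0 < i < len(samples) - 1
            and samples[i - 1][1] == val
            and samples[i + 1][1] == val
        )
        if not is_interior_repeat:
            out.append((ts, val))
    return out
```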
We are finding this to be an issue right now. We are bringing over data from many remote sites, many of which have very little compression set. This would be a great idea.
I'm finding that I'm getting two data values for the same timestamp (from bad compression/exception settings), which is causing my PE calcs to error out because of the number of archive reads required for yearly data when it's only February! Having tools to remove duplicate values at the same timestamp would be great. It would also help when importing data from other systems that have very little compression.
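For context, a minimal sketch of the kind of cleanup described here, keeping only one value per timestamp (here, the last one written); the data layout and the keep-the-last rule are assumptions for illustration:

```python
from datetime import datetime
from typing import List, Tuple

Sample = Tuple[datetime, float]

def dedupe_timestamps(samples: List[Sample]) -> List[Sample]:
    """Collapse events sharing a timestamp; later events overwrite earlier
    ones, and the result is returned sorted by timestamp."""
    seen = {}
    for ts, val in samples:
        seen[ts] = val
    return sorted(seen.items())
```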
This would be very useful. When connecting new data sources, it takes some time to work out what useful exception and compression settings would be.
After we have figured out good values, it would be nice to easily apply them backwards to all the data stored since the beginning, so that any analyses run on consistently compressed data.
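A hedged sketch of re-applying a deviation setting to already-stored history: this is a plain deadband pass in the spirit of exception reporting, not PI's swinging-door compression algorithm, and the deviation value and data layout are assumptions for illustration:

```python
from datetime import datetime
from typing import List, Tuple

Sample = Tuple[datetime, float]

def apply_deadband(samples: List[Sample], deviation: float) -> List[Sample]:
    """Keep a sample only if it moves more than `deviation` away from the
    last kept sample; the first and last points are always kept."""
    if not samples:
        return []
    kept = [samples[0]]
    for ts, val in samples[1:-1]:
        if abs(val - kept[-1][1]) > deviation:
            kept.append((ts, val))
    if len(samples) > 1:
        kept.append(samples[-1])
    return kept
```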
This capability would be a great time saver to fix 'bloated' archives when fast scan classes (1 sec) cause the archives to fill 4x faster than estimated. PI to PI seems to be our only way to process these down to a reasonable size.