
We just received a question from a prospective customer following yesterday's webinar on the OSIsoft PI System on mobile. It went something like this:

“In this morning’s webinar, it was mentioned that the refresh rate of the application at this customer was set to 30 seconds. We would be interested in higher refresh rates… possibly, 10 seconds or sooner. We would want PI data to be viewable to Transpara Visual KPI with a minimum of latency. Can you comment on the feasibility of data being viewable in Transpara within 10 seconds or less after originating in PI?”

Great question, and the answer was good enough that we thought everyone would benefit from it. Oh, and the answer is not just for those using OSIsoft PI – it applies to all of our supported data sources.

Here are the details…

The cache interval is set by each customer in the Visual KPI Server Manager according to their needs (see image below). This interval determines how quickly we refresh our cached data from each data source.

The cache interval is a “wait time” between executions, and it can be safely set all the way down to zero seconds. This means that each time the cache runs, we wait zero seconds before starting the next cache cycle. Since the duration of each execution cycle depends on the number of values being fetched (we batch them in groups of 5,000 for efficiency), the cache execution time is what determines the “freshness” of our data.
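To make the mechanics concrete, here is a minimal sketch of how such a loop behaves. This is an illustration only, not Visual KPI's actual implementation; the function names (`run_cache_cycle`, `cache_loop`, `fetch_batch`) are hypothetical. It shows the two properties described above: values are fetched in batches of 5,000, and the configured interval is a wait *between* cycles, so a zero-second interval just means the next cycle starts immediately.

```python
import time

def run_cache_cycle(fetch_batch, value_ids, batch_size=5000):
    """One cache cycle: fetch every value from the source in batches."""
    results = {}
    for i in range(0, len(value_ids), batch_size):
        batch = value_ids[i:i + batch_size]
        results.update(fetch_batch(batch))  # one round-trip per batch
    return results

def cache_loop(fetch_batch, value_ids, wait_seconds=0, cycles=3):
    """Repeat cache cycles, waiting `wait_seconds` between executions.

    With wait_seconds=0 the next cycle starts as soon as the previous
    one finishes, so data freshness is bounded only by how long each
    cycle takes to run (i.e., by the number of values being fetched).
    """
    snapshot = {}
    for _ in range(cycles):
        snapshot = run_cache_cycle(fetch_batch, value_ids)
        time.sleep(wait_seconds)  # the configurable "cache interval"
    return snapshot
```

With 12,000 values, for example, each cycle issues three batched requests (5,000 + 5,000 + 2,000), and at a zero-second interval those cycles run back to back.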

Here’s a critical point: we do not cache most data at all. We only cache some data in order to maintain referential integrity across all of the screens where data from many sources might live (KPI Maps, List Views, Rollup Bars, etc.). In contrast, trends (as one example), when invoked, trigger an immediate call to the data source, guaranteeing you the absolute latest data up to the millisecond of the request. All trends also support AutoPlay, a feature that uses HTML5 asynchronous callbacks to smoothly scroll any trend in any browser on any device. These callbacks typically occur every 3 seconds, so trends appear to scroll smoothly from right to left.
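The AutoPlay behavior described above amounts to polling the source on a fixed interval and appending each new point to the trend. A simplified model of that loop, with hypothetical function names (`fetch_latest`, `on_update`) standing in for the real callback machinery, might look like this:

```python
import time

def autoplay(fetch_latest, on_update, interval_seconds=3, iterations=5):
    """Poll the data source on a fixed interval and push each new point
    to the trend renderer, approximating the AutoPlay behavior.

    fetch_latest: direct call to the source for the newest value
    on_update:    redraws the trend with the accumulated points,
                  which is what produces the right-to-left scroll
    """
    points = []
    for _ in range(iterations):
        points.append(fetch_latest())   # always the freshest value
        on_update(list(points))         # redraw / scroll the trend
        time.sleep(interval_seconds)    # the ~3-second callback cadence
    return points
```

Because each iteration goes straight to the source rather than to a cache, the trend is only ever as stale as the polling interval itself.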

To summarize, we get data straight from the source wherever possible, so there is near-zero delay; and in cases where we are combining multiple sources, you can turn the cache interval down to zero seconds, so speed is limited only by the number of values being fetched.

Hope this helps.  Please leave a comment if you have any questions about the post.