Thoughts on Auto-Tuning & Calibration

I have a question regarding the internal calculations performed by eQUEST. I'm new to eQUEST and just looking to understand some basics. Is there some way to modify eQUEST's internal calculations so that a baseline model can be adjusted to fit existing utility bills, or is there no way to modify them, meaning we need various workarounds to fit the baseline eQUEST model to the bills?

As an energy engineer on the performance contracting side of the industry, a defining skill set for my job is creating and then calibrating models to fit historical utility data.  We calibrate our models to a degree of rigor that allows our business to guarantee savings projected from those models (and to write shortfall checks when we’re wrong).  I don’t generally talk up my background, but I think in this case it helps to know where my voice is coming from as I offer a nuanced response:

It’s possible (again, not built-in) to automate iterative model input manipulation to “auto-tune” a building energy simulation to match a set of utility bills.  You can even get the curves to fit extremely tightly over multiple meters.  I’ve gone so far as to build some such tools from scratch, and that experience has taught me some very important lessons I didn’t set out to find.  Among them, an “auto-tuned” model where many inputs are guided by randomization and computer logic can in practice become very difficult to trust for projecting savings, even on a relative “doesn’t need to be seen on the bills” level.
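
For reference, “fit” in this context is usually scored with the monthly NMBE and CV(RMSE) statistics from ASHRAE Guideline 14 (commonly cited monthly targets: NMBE within ±5%, CV(RMSE) under 15%).  Here’s a minimal Python sketch of those two statistics; the billed/simulated numbers are made up purely for illustration:

    import math

    def nmbe(measured, simulated):
        """Normalized Mean Bias Error (%). Guideline 14 divides by (n - p), p = 1."""
        n = len(measured)
        mean_m = sum(measured) / n
        bias = sum(m - s for m, s in zip(measured, simulated))
        return 100.0 * bias / ((n - 1) * mean_m)

    def cv_rmse(measured, simulated):
        """Coefficient of Variation of RMSE (%), same (n - p) convention."""
        n = len(measured)
        mean_m = sum(measured) / n
        sse = sum((m - s) ** 2 for m, s in zip(measured, simulated))
        return 100.0 * math.sqrt(sse / (n - 1)) / mean_m

    # Twelve months of billed vs. simulated kWh (illustrative numbers only):
    billed    = [42000, 39000, 35000, 31000, 30000, 36000,
                 45000, 47000, 38000, 32000, 33000, 40000]
    simulated = [43100, 38200, 34800, 31900, 29500, 35200,
                 46300, 46100, 37500, 33000, 32400, 41000]

    print("NMBE:     %+.1f%%" % nmbe(billed, simulated))
    print("CV(RMSE): %.1f%%"  % cv_rmse(billed, simulated))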

On the other hand, if you are careful to bound “auto-tuning” techniques to reasonable input ranges, and specifically to address “unknowable” model inputs which cannot be measured or reasonably estimated/inferred, the results can become much more useful, even enlightening.  This “optimal” usage of the likes of Monte Carlo analysis, with or without machine learning algorithms, is anything but an “easy” button.
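
To make “bounded” concrete, here’s a hypothetical sketch of that kind of constrained Monte Carlo search.  None of this is built into eQUEST: the input names and ranges are invented for illustration, run_simulation() is a stub for whatever batch-run harness you rig around your engine, and cv_rmse() is from the earlier sketch:

    import random

    # Only the "unknowable" inputs get sampled, each within a defensible range.
    # Names and ranges are illustrative, not eQUEST/DOE-2 keywords.
    TUNABLE_BOUNDS = {
        "infiltration_ach":       (0.1, 0.6),   # whole-building leakage: unmeasurable here
        "wall_insulation_rvalue": (5.0, 13.0),  # no details on what's inside the wall
        "plug_load_diversity":    (0.6, 0.9),   # unmetered receptacle behavior
    }

    def run_simulation(inputs):
        """Stub: call your batch-run harness and return 12 monthly kWh totals."""
        raise NotImplementedError

    def monte_carlo_tune(billed_kwh, n_trials=500):
        best = None
        for _ in range(n_trials):
            trial = {name: random.uniform(lo, hi)
                     for name, (lo, hi) in TUNABLE_BOUNDS.items()}
            fit = cv_rmse(billed_kwh, run_simulation(trial))
            if best is None or fit < best[0]:
                best = (fit, trial)
        return best   # (CV(RMSE) %, winning inputs) -- still sanity-check the winner!

Note the loop can only ever propose values a reviewer could defend; that bounding is the whole difference between “enlightening” and curve-fit garbage.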

I use DOE-2/eQUEST as my primary energy simulation platform; however, all of the above advice is platform-agnostic and holds true whether you’re crunching degree-day analyses in Excel or wielding rooms of supercomputers in the cloud with EnergyPlus.

If calibration matters, and you’re not doing so just to tick some prescriptive box, best practice during model development is to keep mindful track of which inputs are (one way to track this bookkeeping in code is sketched after the list):

  1. Known
    • General hierarchy of “known,” from least to most reliable:  Design/Construction Documents < As-Builts < RCx reports < Current field measurements & observations
    • Be mindful that construction documents and nameplate data are better than nothing, but commonly do not match reality and may be better considered as “informed estimates.”  Allow some room for doubt. 
  2. Estimated
    • For existing buildings, most inputs will be “estimated.”
    • If, for example, you have to define fan power based on the scheduled static pressure loss and airflows on the drawings… that’s just aligning your estimate with the designer’s.  The actual value is probably something different.
    • Software defaults that you understand and are ready to “own” or explain fall under this category.
    • This includes anything “auto-sized.”
  3. Guesswork
    • This includes software defaults that you are relying upon but haven’t yet investigated/understood. 
    • This includes “known unknowns” for lack of information / resources. 
    • A pretty common example is envelope constructions where (a) you have no architectural details/specifications to reference all the layers in the middle and (b) you aren’t budgeted/resourced to tear up a client’s walls to find out what’s inside.
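
One lightweight way to keep that bookkeeping honest is to tag every input with its provenance as you build the model, so the tuning stage only ever touches what it should.  A hypothetical Python sketch (the entries are examples, not a complete input list):

    from dataclasses import dataclass

    @dataclass
    class ModelInput:
        name: str
        value: float
        provenance: str        # "known" | "estimated" | "guesswork"
        source: str            # where the value came from
        bounds: tuple = None   # tuning range; only guesswork gets one

    inputs = [
        ModelInput("chiller_kw_per_ton", 0.58, "known",
                   "current field measurements & trend logs"),
        ModelInput("supply_fan_bhp", 22.0, "estimated",
                   "scheduled static/airflow from drawings (designer's estimate)"),
        ModelInput("wall_u_value", 0.09, "guesswork",
                   "no architectural details; not budgeted to open up walls",
                   bounds=(0.05, 0.20)),
    ]

    # Only the guesswork inputs are fair game for auto-tuning:
    tunable = {i.name: i.bounds for i in inputs if i.provenance == "guesswork"}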

Considering the degree of input complexity for something like an eQUEST model, I feel there will always be some blend of all of these input categories for every project and every individual modeler.  Experience helps, though as the years pile on, for every new topic I get a lock on measuring/estimating, I feel like I learn about two more issues that were previously not on my radar… “the more I see, the less I know!”

Having rough estimates and unknowns is fine, but the more you know or can reasonably estimate, the better your initial calibration results will turn out, and the quicker the iterative “tuning” process will go.  When you have kept a good record of which inputs are particularly solid vs. estimated/guesswork, you can work your way up the tree, marrying that knowledge to the assumed/tested sensitivity of each input on the results, and plot a course to find your way back to the billed amounts!
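
“Marrying that knowledge to input sensitivity” can start as a simple one-at-a-time perturbation pass, reusing the hypothetical run_simulation() stub and bounds dictionary from the earlier sketches: swing each uncertain input across its range while holding the others steady, and spend your estimating effort on the biggest movers first:

    def rank_sensitivity(base_inputs, bounds):
        """One-at-a-time sensitivity: swing each input across its range,
        holding the others at base values, and rank by annual-kWh impact."""
        impacts = {}
        for name, (lo, hi) in bounds.items():
            low_run  = run_simulation({**base_inputs, name: lo})
            high_run = run_simulation({**base_inputs, name: hi})
            impacts[name] = abs(sum(high_run) - sum(low_run))
        # Biggest movers first -- these deserve the most careful estimates.
        return sorted(impacts.items(), key=lambda kv: kv[1], reverse=True)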

Hope this is helpful!

~Nick
