Thursday, December 20, 2012

FDS 6 Verification and Validation

In recent years, we have formalized the process of developing and releasing new versions of FDS and Smokeview, taking our cue from the way it is done by commercial software developers. The key to this process is a set of calculations that we divide into two categories -- relatively short, simple test cases that we use to verify that the programs are properly solving the equations we've written down, and relatively long, sometimes complex cases that we use to validate the accuracy of the model by comparing with actual experiments.

The short verification cases are the sample input files that we distribute with each new bundle. They are run automatically each night by a script we call 'firebot' to catch the simple coding mistakes that we inevitably make as we develop new routines. We not only run these cases, but also automatically regenerate all the plots that you find in the FDS and Smokeview User's Guides, plus the FDS Verification Guide. Any result that falls outside a certain tolerance is reported to all of us via email, and we then fix the problem while it is still fresh in our minds.
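The tolerance check at the heart of this kind of nightly regression testing is simple. The sketch below is only an illustration of the idea, not firebot itself; the case names, values, and 1% tolerance are invented for the example.

```python
# Hypothetical sketch of a nightly regression tolerance check.
# Case names, expected values, and the tolerance are made up for illustration;
# they do not come from the actual firebot script.
def check_cases(results, expected, tol=0.01):
    """Return the cases whose computed value deviates from the expected
    value by more than the relative tolerance `tol`."""
    failures = []
    for case, expected_value in expected.items():
        computed = results[case]
        rel_error = abs(computed - expected_value) / abs(expected_value)
        if rel_error > tol:
            failures.append((case, rel_error))
    return failures

# One case drifts outside the 1% tolerance and is flagged for follow-up.
expected = {"plume_case": 300.0, "ceiling_jet_case": 450.0}
results = {"plume_case": 300.5, "ceiling_jet_case": 460.0}
print(check_cases(results, expected))
```

In practice a script like this would also email the list of failures to the developers, which is what turns a silent numerical drift into a bug report the same night it appears.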

The longer validation cases are run with each minor release of FDS and Smokeview, because it takes one to two weeks to complete all of them (about 800 calculations) on our 256-core Linux cluster. Because these cases are run less frequently, there is almost always something amiss with one or two of them that is more difficult to diagnose. In fact, a good part of the two years we spent preparing FDS 6 was devoted to addressing discrepancies in some of our most reliable experimental data sets. As frustrating as that was, the fact that we knew about these problems at all was due to the new V&V process we initiated. In the past, it was far more difficult to detect a fundamental problem with the algorithm because we did not systematically run all the cases at once and analyze the results in a consistent way. Basically, we just eyeballed things, and that does not always reveal subtle problems.
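Analyzing the results "in a consistent way" means reducing many model-versus-experiment comparisons to summary statistics rather than eyeballing individual plots. As a hedged illustration of the kind of statistic involved, the sketch below computes a geometric-mean bias from paired predicted and measured values; the data are invented, and the actual statistics in the Validation Guide are more involved.

```python
import math

# Hedged sketch of a summary statistic for comparing model predictions (M)
# against experimental measurements (E) across many cases.
# The data values below are invented for illustration.
def bias_factor(measured, predicted):
    """Geometric-mean bias: exp of the mean of ln(M/E).
    A value of 1.0 indicates no systematic over- or under-prediction;
    values above 1.0 indicate systematic over-prediction."""
    logs = [math.log(m / e) for m, e in zip(predicted, measured)]
    return math.exp(sum(logs) / len(logs))

measured = [100.0, 200.0, 150.0]   # e.g. measured peak temperature rises
predicted = [110.0, 210.0, 160.0]  # corresponding model predictions
print(round(bias_factor(measured, predicted), 3))  # prints 1.072
```

Tracking a number like this across releases is what makes a subtle algorithmic regression visible, even when every individual plot still looks plausible.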

When we released FDS version 1 in 2000, we did not have a formal V&V process. We did develop test cases, and we did compare calculations with experiments, but we did not do so in a systematic way. We felt that papers published by ourselves and others would suffice, but we soon learned otherwise. This lesson was reinforced when we began working with the US Nuclear Regulatory Commission on a V&V study of five fire models commonly used by the nuclear industry. The most important thing we learned is that published results using older versions of the software cannot be used to justify the use of a model to the AHJ (Authority Having Jurisdiction). We cannot republish our validation papers with the release of each new version, so we decided to develop and maintain our own versions of the V&V guides that the NRC published (NUREG-1824).

Now the FDS Verification and Validation Guides are the key volumes that quantify the robustness and accuracy of the model. When we say that FDS can or cannot do something, what we really mean is that we have documented calculations that show the range and accuracy of the model for a particular application. For those of you using FDS for design or forensic applications, the FDS V&V Guides should be the first place to look to determine whether FDS is appropriate for your use. Every few days someone writes to the Discussion Group asking something like, "Does anybody know if FDS can do ...?" What that person really ought to do is check the V&V Guides. The question is not only whether it can be done, but how well it can be done. FDS has a lot of potential applications, but you need to check the Guides before deciding whether it is applicable.

Finally, we spend a considerable amount of effort putting the V&V Guides together: running the cases, working on the statistics, the scripts, and so on. We have appealed to the user community, especially the students and professors working with FDS, to help us with V&V cases. So far, the results have been disappointing. For a variety of reasons, we are not able to capture in our Guides the work with FDS that we see submitted to journals and conferences. We suspect that the main reason for this is that a student's first objective is to graduate, then maybe publish the thesis in some way. Working with us to get the results into our Guides is either a low priority or something that no one has even considered.

I sometimes meet students at meetings who do not think that their thesis work is worthy of our Guides. Think again -- not only would the work be worthy in most cases, but it is a necessity. For all of us to continue to develop and use tools like FDS and Smokeview, we all have to contribute to their upkeep. In this case, that means extending the range of application represented by the V&V cases. If you did your thesis on smoke detector algorithms, it is in your best interest to get this work into the Guides so that we can maintain the capability in future versions. If you just hack some routine into FDS 5.3.whatever, don't expect it to be accepted down the road by the AHJ.

Further, owing to the relatively small size of fire protection engineering, there are probably few other engineering disciplines where a master's degree student can have his or her work put directly into practice so readily. Ask engineers in other disciplines what kinds of software packages they use and whether they have any means of being involved in their development. One of the appealing things about FPE is that there are so many opportunities to have an impact. To see how you can have an impact, read through the following wiki:

http://code.google.com/p/fds-smv/wiki/Contribution_Guide