Friday, September 18, 2009

FDS Verification and Validation

A few years ago, Prof. Jose Torero of the University of Edinburgh remarked during a presentation that for any given quantity you might want to predict with a fire model, you can find two or three papers in the literature reporting that FDS works well, and two or three reporting that it does not. Putting aside for the moment what is meant by "works well," the point Jose was making is that it is very difficult for practicing engineers to know when to trust model predictions and when not to. The complementary processes of Verification and Validation are intended as checks of the mathematical algorithms and physical submodels, respectively, and a considerable amount of work has been devoted to the V&V of FDS over the past decade. Yet the problem Jose alluded to remains.

When FDS was first released, we had in mind that V&V would be performed by students and engineers using the model for research or commercial applications, and that the results would be published in the fire literature. This did indeed happen, and there are numerous papers, reports, and theses spread across the various journals and websites. However, several years ago, as we were working on a V&V study with the US Nuclear Regulatory Commission, it became apparent that we could not depend on the fire literature alone as a repository of FDS V&V work, for several reasons:
  1. V&V, especially Validation work, cannot be easily crammed into a short journal article.
  2. Results obtained with older versions of the model lose their validity after a few years.
  3. Often the experimental conditions and uncertainties are unknown.
  4. Often the work is performed by students who are just learning how to use the model.
  5. There are too many different ways of quantifying accuracy, which gets back to the question above as to what "works well" means (one simple metric is sketched just after this list).
  6. Cases have to be re-run with each new release, and we cannot expect journals to keep publishing the same old stuff.
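
To make item 5 a bit more concrete, below is a minimal sketch of one common way to score a single point-to-point comparison: the relative difference between the predicted and measured peak values of some quantity. This is just one of many possible metrics (others weight the entire time history, or fold in the experimental uncertainty), and the function name and numbers here are invented for illustration; they are not taken from the Guides.

    # One simple accuracy metric for a point-to-point comparison:
    # the relative difference between predicted and measured peaks.
    # The numbers below are invented for illustration.

    def relative_difference(predicted_peak, measured_peak):
        """(Model - Experiment) / Experiment for a single comparison."""
        return (predicted_peak - measured_peak) / measured_peak

    measured = 312.0   # hypothetical peak temperature rise (C), experiment
    predicted = 286.0  # hypothetical peak temperature rise (C), FDS

    eps = relative_difference(predicted, measured)
    print("Relative difference: {:+.1%}".format(eps))
    # prints: Relative difference: -8.3%

Whether -8.3% means the model "works well" depends on the application, the quantity, and the experimental uncertainty, which is exactly why the verdict of any single journal article is hard to generalize.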

For these reasons, we decided to maintain two manuals, Volumes 2 and 3 of the FDS Technical Reference Guide, called the FDS Verification and Validation Guides, respectively. In these, we have compiled existing V&V work and continually add case studies to demonstrate mathematical accuracy and physical fidelity. The Validation Guide

http://fds-smv.googlecode.com/svn/trunk/FDS/trunk/Manuals/All_PDF_Files/FDS_5_Validation_Guide.pdf

now contains hundreds of experiments and thousands of individual point-to-point comparisons for a wide variety of output quantities. The Verification Guide

http://fds-smv.googlecode.com/svn/trunk/FDS/trunk/Manuals/All_PDF_Files/FDS_5_Verification_Guide.pdf

is more recent, but it is growing.

Everyone who uses FDS ought to become familiar, at least to some extent, with these Guides. They are not the sort of thing you sit down and read cover to cover, however. Rather, they are reference documents to consult whenever the question arises, "Can FDS do that?"

We would especially like to encourage students who are interested in working with FDS to look through these Guides. More than anything else, they indicate subjects of current interest, especially areas that we are working to improve. In addition to consulting the Guides, we encourage you to contact us via the Discussion Group (or off-line if you like) and indicate which areas you might want to work in. Using specific examples directly out of the V&V Guides is a great way to start a collaboration, because we are already familiar with the cases and with the analytical or experimental techniques behind them. It is far more difficult for us to work with you when all we see are a few comparisons of FDS with experiments we are not familiar with. The value of working with such a large body of test data is that anomalies in one or two experiments show up as outliers when compared against hundreds of other measurements.

If you have a verification case or an experimental dataset (or several) that you would like to contribute, it would be very helpful if the data and FDS input files could be prepared in a way that is similar to the cases already in the repository. It takes a significant amount of time to boil down megabytes of test data into a form that can be easily plotted and compared to the model. We do not have enough time to take a test report, set up the input files, work the experimental data into a usable form, run the cases, prepare the output graphs, and document the process for each and every experimental test series. If you have already done this, it takes much less time to re-organize the material into a form that we can easily work into one of the Guides. But please contact us early in the process. We have developed plenty of useful techniques for doing V&V, and if these are adopted early, there is a much greater likelihood that the work will make a significant contribution to the whole project.
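
To illustrate what "a form that can be easily plotted" means in practice, here is a minimal Python sketch that reads a measured time history from a reduced experimental data file and the corresponding predicted time history from an FDS device output file, so that the two can be plotted together. The file names and column labels ('Test_1_HGL.csv', 'TC_1', and so on) are hypothetical; the two header rows (units, then column names) are, however, the actual layout of the CHID_devc.csv file that FDS writes.

    # Minimal sketch: read matching columns from a reduced experimental
    # CSV file and an FDS device output file for side-by-side plotting.
    import csv

    def read_column(path, time_label, data_label, header_rows):
        """Return (times, values) for one column of a CSV file.
        FDS device output (CHID_devc.csv) has two header rows:
        units on the first line, column names on the second."""
        with open(path) as f:
            rows = [r for r in csv.reader(f) if r]  # skip blank lines
        names = [name.strip() for name in rows[header_rows - 1]]
        t, d = names.index(time_label), names.index(data_label)
        times = [float(row[t]) for row in rows[header_rows:]]
        values = [float(row[d]) for row in rows[header_rows:]]
        return times, values

    # Reduced experimental data, one header row (hypothetical file):
    exp_t, exp_T = read_column('Test_1_HGL.csv', 'Time', 'Temp', header_rows=1)
    # The corresponding FDS prediction (hypothetical device name):
    fds_t, fds_T = read_column('test_1_devc.csv', 'Time', 'TC_1', header_rows=2)

The exact tools matter less than the end result: one small, clearly labeled file per test, with consistent time units and column names, so that each measured quantity can be lined up against its prediction without digging through the original test report.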