Tuesday, October 7, 2014

FDS 6.1.2 released

A maintenance version of FDS has been released and is available for download at

https://code.google.com/p/fds-smv/

A few things to note:

  1. We are only releasing 64-bit versions of the software from now on. Of course, the source code is available for anyone who wishes to compile the code. 
  2. We have switched the Windows implementation of MPI from MPICH to Intel MPI. Instructions are in the updated User's Guide, and a sample launch command is shown after this list. As we discussed in an earlier blog post, MPICH, the free implementation of MPI from Argonne National Laboratory, is no longer supported under Windows. The new Windows download of FDS includes all of the libraries needed to run Intel MPI; no extra download is required. That said, our appeal for beta testing of the software was disappointing -- only a handful of testers volunteered. This is especially frustrating for us because we hear no end of complaints that FDS 6 is slower than FDS 5, even though it is more accurate, more robust, and so on. If you want the code to run faster, support our efforts to make that happen.
  3. Those of you who have submitted bug reports in the last few months, please go to the Issue Tracker and see if the issue has been marked "fixed." If so, download the new version and verify that it has indeed been fixed; if it has not, let us know. For 9 out of 10 issues, we never hear back from the person who originally posted the bug report, so we never really know whether the issue was resolved. Sometimes, years later, we meet that person at a meeting or wherever and learn that the problem was never fixed. We cannot make progress this way.
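
For reference, launching a parallel job with the Intel MPI bundle looks much the same as it did with MPICH. The command below is only a sketch: it assumes a case named job_name.fds that has been divided into four meshes, all run on one machine. The exact syntax, including how to spread the processes across several computers on a network, is given in the updated User's Guide.

mpiexec -n 4 fds_mpi job_name.fds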

Wednesday, September 10, 2014

Windows Test Bundle for FDS Parallel Processing using Intel MPI

We are looking for volunteers to test a bundle of FDS that uses the Intel MPI libraries for Windows. The installation procedure is the same as that of the current release of FDS. The test bundle can be downloaded from:

https://drive.google.com/folderview?id=0B_wB1pJL2bFQcURod1UyZTJUaEE&usp=sharing#list

A few things to note. First, we have only tested this version on a Windows domain network; that is, a network where user accounts are centrally managed. It is OK if this network has a firewall; the installation script will set up the necessary exceptions to allow FDS to run in parallel across the network even with the firewall in place. Second, the test bundle installs FDS 6.1.1, and the old MPICH executable (fds_mpi.exe) will be overwritten. Anyone who has MPICH working and wants to keep using it until the official release should avoid the test installation, because it not only overwrites the executable but also installs the Intel variant of mpiexec. Finally, there is no need to install the MPI package yourself. Everything you need to run should be in the test package -- at least, that is what we want to test.

Volunteers should post to the FDS discussion group whether they have succeeded or failed to run a simple test case, such as the two-mesh example below. Do not spend too much time fussing with this -- if the installation procedure is not simple, then we have to rethink it. We found the old MPICH procedure fairly difficult, and we believe the new process is much simpler.
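
If you need a case to try, something like the following two-mesh input will do; the file name mpi_test.fds and the mesh sizes are arbitrary, and any small case with more than one mesh works just as well. Launch it with the installed mpiexec, requesting one MPI process per mesh.

&HEAD CHID='mpi_test', TITLE='Two-mesh Intel MPI test' /
&MESH IJK=16,16,16, XB=0.0,1.0,0.0,1.0,0.0,1.0 /
&MESH IJK=16,16,16, XB=1.0,2.0,0.0,1.0,0.0,1.0 /
&TIME T_END=10.0 /
&TAIL /

mpiexec -n 2 fds_mpi mpi_test.fds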

Friday, August 22, 2014

FDS formulation to appear in Journal of Computational Physics

I apologize for the shameless self-promotion, but we code grunts need citations, too.

I am happy to report that the formulation developed and implemented in FDS 6 will appear in the high-impact, peer-reviewed Journal of Computational Physics. Please cite the article in addition to the FDS Tech Guide when referring to the mathematical formulation. Here is a link to the article:

A velocity divergence constraint for large-eddy simulation of low-Mach flows

Abstract:

The velocity divergence (rate of fluid volumetric expansion) is a flow field quantity of fundamental importance in low-Mach flows. It directly affects the local mass density and therefore the local temperature through the equation of state. In this paper, starting from the conservative form of the sensible enthalpy transport equation, we derive a discrete divergence constraint for use in large-eddy simulation (LES) of low-Mach flows. The result accounts for numerical transport of mass and energy, which is difficult to eliminate in relatively coarse, engineering LES calculations when total variation diminishing (TVD) scalar transport schemes are employed. Without the correction terms derived here, unresolved (numerical) mixing of gas species with different heat capacities or molecular weights may lead to erroneous mixture temperatures and ultimately to an imbalance in the energy budget. The new formulation is implemented in a publicly available LES code called the Fire Dynamics Simulator (FDS). Accuracy of the flow solver for transport is demonstrated using the method of manufactured solutions. The conservation properties of the present scheme are demonstrated on two simple energy budget test cases, one involving a small fire in a compartment with natural ventilation and another involving mixing of two gases with different thermal properties.
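
For readers who just want the flavor of the result: in the continuous setting, the divergence constraint follows from the continuity equation together with the ideal gas equation of state evaluated at the spatially uniform background pressure. A sketch of that familiar continuous form is below, where T is the temperature, W-bar the mixture molecular weight, and p-bar the background pressure; the contribution of the paper is the corresponding discrete constraint, including the correction terms for numerical transport of mass and energy mentioned in the abstract.

\nabla \cdot \mathbf{u} \;=\; -\frac{1}{\rho}\frac{D\rho}{Dt}
\;=\; \frac{1}{T}\frac{DT}{Dt} \;-\; \frac{1}{\overline{W}}\frac{D\overline{W}}{Dt} \;-\; \frac{1}{\bar{p}}\frac{d\bar{p}}{dt}

The material derivatives of T and W-bar are then expressed through the sensible enthalpy and species transport equations, which is where the heat release and the transport terms enter.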

Wednesday, July 9, 2014

Good-bye to 32 bit apps?

We currently compile FDS and Smokeview for MS Windows, Linux, and Apple OS X. We release both 32-bit and 64-bit versions of each program for each OS. Maintaining the 32-bit apps is becoming increasingly troublesome, and we anticipate that in the coming year we will drop support for 32-bit operating systems. I suspect that this will not be an issue for Linux and OS X users, but we do not know about Windows. MS no longer supports Windows XP, the last OS to ship with 32-bit as the default, but there might still be a lot of these machines around. So this is an opportunity for any and all 32-bit users to make their case for us to continue supporting 32-bit versions. If we do not see a significant call for keeping 32-bit support, we will drop it to simplify our compilation and release activities.

FDS Parallel Processing using MPI (Message Passing Interface)

The latest release of FDS (6.1.0) runs with OpenMP by default; that is, the code uses multiple cores/processors of a single computer to process a single mesh. In other words, OpenMP is a "shared memory, multi-processor" form of parallel processing. We are now looking at the MPI version of FDS, in which multiple computers process multiple meshes -- distributed memory, multi-processor.

For Linux and OS X, we use Open MPI, an open-source, free set of MPI libraries. For Windows, we have been using MPICH2 (MPI-2), a similar set of libraries distributed by Argonne National Laboratory. We recently learned that the MPICH team is dropping support for Windows, and a team from Microsoft has developed MS-MPI, a set of libraries similar to MPICH, to take its place. We have experimented with MS-MPI, but we discovered that it is designed for use on a Windows HPC server, essentially a dedicated cluster of computers running a special OS specifically for high performance computing. MPICH had the advantage of running on an ordinary office network, which we recognize is probably a common configuration for small engineering firms.

With support for MPICH on Windows going away, and MS-MPI not quite what we are looking for, we are searching for another alternative. There are two that we know of. First, Open MPI can, in theory, be ported to Windows, but we discovered that this involves installing Cygwin, essentially a unix/linux emulator for MS Windows, which proved to be quite onerous. Second, there is Intel MPI. At NIST, we use Intel compilers for both the FDS and Smokeview releases, and Intel sells its own MPI libraries as an add-on to its existing compilers. This would also allow us to distribute the run-time libraries needed for you to run our compiled version of FDS. We are currently testing Intel MPI, and while we do, if any of you have experience with it, or comments on MPI in general, we would like to hear from you. You can post your comments via the FDS-SMV Discussion Group under this thread.

Thursday, May 29, 2014

FDS 6.1.0 Released

Today we are releasing a minor version of FDS, 6.1.0. A minor release means that there are some changes in functionality between FDS 6.0.x and FDS 6.1.0. For a list of these changes, consult the release notes:

https://code.google.com/p/fds-smv/wiki/FDS_Release_Notes

For most of you, the most significant change is that we are now releasing the OpenMP version of FDS as the default. In the past, we released the OpenMP version as an option. Now, if you run FDS in what we used to call "serial" mode, i.e.

fds job_name.fds

the executable will exploit multiple cores of your machine -- typically four, although it depends on your hardware. Benchmark testing has shown that four cores is about the optimum for a typical Windows-based PC, providing a speed-up of roughly a factor of two; we would like to hear from you if you experience something dramatically different. More details are provided in the User's Guide.
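
If you want to experiment with the thread count, the standard OpenMP environment variable OMP_NUM_THREADS controls it; the User's Guide describes the details for each platform. For example, to request four threads before launching a job:

set OMP_NUM_THREADS=4        (Windows command prompt)
export OMP_NUM_THREADS=4     (Linux or OS X shell)
fds job_name.fds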

Keep in mind that OpenMP and MPI are two very different kinds of parallel processing. The MPI version of FDS, which is unchanged in this release, can exploit shared or distributed memory architectures; that is, it can use multiple cores on a single machine or on multiple machines on a network. To make MPI work, you must divide the computational domain into multiple meshes. OpenMP, on the other hand, can only exploit shared memory: it can make either single or multiple mesh calculations run faster, but you are limited to a single machine. We are exploring the possibility of using OpenMP and MPI together to exploit multiple cores on multiple machines of a network, but for the moment, we want to make sure that the OpenMP version works reliably.

The implementation of OpenMP in FDS was begun by Christian Rogsch several years ago and has more recently been carried forward by Daniel Haarhoff. Thanks to Boris Stock and Cian Davis, who helped work out a few kinks in the new release, and to all of you who have contributed results of your benchmarking exercises. This greatly helps us understand how both the MPI and OpenMP versions behave under the various operating systems we develop for. We will continue to put together benchmarking cases as we improve the parallel processing. Let us know via the FDS-SMV Discussion Group if there is any trouble with the new version.