
Will Your Fire Engineering Simulation Tool Hit the Target?

By: Michael Spearpoint, OFR Consultants, UK

Greg Baker, Fire Research Group, New Zealand

Nils Johansson, Lund University, Sweden

Introduction

As part of a recent SFPE Foundation-sponsored research project on the use of fire engineering tools, a survey of fire engineers and other industry participants was completed [1]. The survey identified that a wide range of tools are in current use in the international fire engineering community, ranging from simple hand-calculation methods at one end of the spectrum, through to sophisticated computer simulation tools at the other extreme. The findings suggest that a range of tool complexity is perfectly acceptable in modern fire engineering practice, and that it is not always necessary to use the most sophisticated tools for every application. At the same time, the SFPE is currently undertaking a project to revise the existing guide on how to substantiate the use of a fire model[1] for a given application [2]. The theme that different model complexity is suitable for different end-use applications was also a key message in a recent presentation by the lead author of this article to the Institution of Mechanical Engineers (IMechE) in the UK [3].

From the authors’ viewpoint, there can be a perception among some stakeholders across the fire engineering community that, to achieve the desired level of ‘accuracy’ in computer modelling, it is always necessary to use the most sophisticated tools. However, the level of accuracy required will depend on the objective of the fire engineering modelling being undertaken, and there is not necessarily a direct correlation between model accuracy and model complexity.

The concept of model accuracy vs. model complexity is not a new topic and has been discussed and debated within the fire engineering community over a number of years. The primary objective of this article is therefore to serve as a timely reminder of the issues underpinning that debate, and to summarise some of the key issues involved using the analogy of shooting arrows (or firing bullets) at a target.

Classic view of model accuracy

The accuracy of a model is a combination of ‘trueness’ and ‘precision’ where trueness is the ability of a model to predict reality and precision[2] is the ability to provide repeatable predictions. Many readers will probably be aware of the ‘classic’ view of model accuracy illustrated in Figure 1 in which arrows have been shot at a target.

An important aspect of the different target illustrations is to clearly understand how ‘trueness’ and ‘precision’ differ from each other. Trueness is determined only by the average of all the values, i.e., the individual values can have a wide spread, but so long as their average is close to the centre of the target, then high trueness is achieved. Conversely, precision relates directly to how tightly the values are grouped. The combination of the two components is also important. High trueness and high precision result in high accuracy, whereas low trueness and low precision result in low accuracy. Alternatively, a low/high combination of the two components will result in a moderate (i.e., neither low nor high) level of accuracy. As such, an ‘accurate’ model provides an acceptable level of both trueness and precision whereas an inaccurate model does not. The judgment of what is acceptable is a matter for discussion between stakeholders (e.g., model developers, model users, approval authorities, etc.) and clearly both trueness and precision form a continuum of possibilities.
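To make the distinction concrete, the short Python sketch below (illustrative only; the predictions and the reference value are invented rather than taken from any particular model or experiment) expresses trueness as the offset of the mean of repeated predictions from a reference value, and precision as the spread of those predictions.

```python
import statistics

def trueness_and_precision(predictions, true_value):
    """Summarise repeated model predictions against a reference value.

    Trueness is expressed here as the offset of the mean prediction from
    the reference ('true') value; precision as the sample standard
    deviation of the predictions (i.e., how tightly they are grouped).
    """
    mean_prediction = statistics.mean(predictions)
    trueness_error = mean_prediction - true_value   # closeness of the average to the target centre
    precision = statistics.stdev(predictions)       # spread of the grouping
    return trueness_error, precision

# Invented values: repeated predictions of a peak gas temperature (deg C)
# compared against an assumed measured value of 550 deg C.
predictions = [605, 590, 615, 600, 610]
error, spread = trueness_and_precision(predictions, true_value=550)
print(f"trueness error = {error:.0f} C, precision (std dev) = {spread:.0f} C")
```

In this example the predictions are tightly grouped (high precision) but systematically offset from the reference value (low trueness), corresponding to one of the intermediate cases illustrated in Figure 1.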


Figure 1. ‘Classic’ view of model accuracy where the centre of each figure represents the ‘true’ value.

At the same time, it is also possible to compare these dual elements of the collective model accuracy with the complementary concepts of model verification and validation (commonly known as model V&V). The existing edition of the SFPE Guidelines for substantiating a fire model [2] has a review of the definitions of verification and validation from various industry standards along with a broader discussion of verification that addresses the issues of model use quality control. The International Standard ISO 20414:2020 [4] provides a formal definition for the two V&V terms, as follows:

·         Verification: process of determining that a calculation method implementation accurately represents the developer’s conceptual description of the calculation method and the solution to the calculation method.

·         Validation: process of determining the degree to which a calculation method is an accurate representation of the real world from the perspective of the intended uses of the calculation method.

It is noted that both definitions use the word ‘accurate’, and it might be better if verification were described as ‘…a calculation method implementation correctly represents the developer’s conceptual description…’. In addition to model V&V (which are both processes), there is the concept of model reliability (a model characteristic), i.e., inputting the same values into the model will always (reliably) produce the same output values.

Within fire safety engineering, ‘validation’ is often taken to mean the process by which the model developer (or other third-party investigators) assesses a model’s capability to reproduce results from experiments, standardised tests, real-world events, etc. However, McGrattan et al. [5] note that:

“A common misconception about model validation is that it is the responsibility of the model developers. Actually, it is the responsibility of the end users or regulatory authority (AHJ) acting on their behalf. After all, to say that the model has been verified and validated means that it has been deemed acceptable for a particular use by the end user or AHJ.”

As a result, some commentators prefer the term ‘benchmarking’ to describe model assessment, with ‘validation’ being something that is performed by a party that has some approval role within a design process. Given the above comment from McGrattan et al., a question that might be asked is why a well-used model such as FDS has a validation guide [6] rather than a ‘benchmarking’ document[3].

The illustration in Figure 1 of arrows hitting a target is a useful metaphor for model accuracy, but it also implicitly suggests a number of factors that are not clearly expressed in fire safety design when it comes to the user of the model, namely:

·         They have been given, or have chosen, an appropriate model

·         They are sufficiently able/competent to use the model

·         They can clearly see the target – either the final or an intermediate target (noting that a ‘target’ could refer to a fire safety objective, performance metric, acceptance criterion, etc.; put another way, to understand the context of the ‘target’, the modeller needs to know the objective of doing the modelling)

·         They have no time limit to take the shots (and no limit on arrows)

·         They understand how success is judged

Appropriate tool – is the level of accuracy suitable?

Taking each point in the previous bullet list in turn in the following sections, first consider the choice of tool: there are various fire models available to users, ranging from a simple one-line equation to sophisticated computational tools that employ coupled non-linear algorithms. For fire dynamics problems, the tools are conventionally classed as hand calculations (for example, [7]), zone models [8] and field models [9]. In the context of shooting at a target, is the selected tool (i.e., the type of bow) going to have the capacity to score hits? A poorly designed and manufactured bow will invariably fail to achieve its objective no matter how good the user is, and if the target is at a longer range than the bow can reach, then other factors are irrelevant.

Understanding the accuracy of a model is primarily addressed by the benchmarking process; however, there will be decisions made by the developer(s) of a model that will likely affect the model output. A reasonable expectation is that a developer aims for as accurate a representation of reality as possible and does not build in conservativeness. However, this is not as simple as it might first seem; for example, a model may include an empirical correlation in which the creator of that correlation has selected what they consider to be appropriate upper (or lower) bounding values. Furthermore, the developer may have made decisions on whether certain parameters are hard-coded and so cannot be altered by the user, what might be acceptable threshold values for inputs, what any default values should be [10], and how inputs might need to be bounded to achieve numerical stability. Regardless of its implementation, a model will have a certain applicability that is expressed by whether its accuracy is appropriate to the problem at hand. For example, the final accuracy may be more important for a forensic analysis, whereas the extent of conservativeness may be a factor when undertaking design.

Whatever the objectives of the modelling, there needs to be consideration of whether the uncertainties associated with the inputs are more significant than the uncertainties associated with the model output. In his seminal work, Elms [11] discussed the quality of the information available, the sensitivity of the model to that information and the quality of the model, noting that “a model can only degrade information; it cannot improve it”. In other words, there is no point using a ‘high quality’ model on ‘low quality’ information. Elms further pointed out that “The sensitivity-modified quality of any item of input information should not be made significantly better than that of either the item with the lowest quality, or the model”. Recent work by Hopkin et al. [12] has investigated such issues in the context of CFD modelling.
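As a simple illustration of Elms’ point, the Python sketch below takes a hand calculation of the kind referenced in [7] (here the MQH correlation for upper-layer temperature rise) and propagates an assumed ±30% uncertainty in the design heat release rate through it; the compartment geometry, boundary properties and uncertainty band are invented purely for illustration.

```python
import math

def mqh_temperature_rise(q_kw, a_o, h_o, h_k, a_t):
    """Upper-layer temperature rise (K) from the MQH correlation (see [7]).

    q_kw : heat release rate (kW)
    a_o  : area of the ventilation opening (m2)
    h_o  : height of the ventilation opening (m)
    h_k  : effective heat transfer coefficient of the enclosure boundaries (kW/m2.K)
    a_t  : total enclosure surface area excluding the opening (m2)
    """
    return 6.85 * (q_kw ** 2 / (a_o * math.sqrt(h_o) * h_k * a_t)) ** (1.0 / 3.0)

# Hypothetical compartment with a 1.0 m x 2.0 m doorway, 60 m2 of bounding
# surfaces and h_k = 0.03 kW/m2.K; the design heat release rate of 1000 kW
# is assumed to carry a +/-30% uncertainty.
for q in (700.0, 1000.0, 1300.0):
    dt = mqh_temperature_rise(q, a_o=2.0, h_o=2.0, h_k=0.03, a_t=60.0)
    print(f"Q = {q:4.0f} kW -> predicted temperature rise ~ {dt:.0f} K")
```

The resulting spread in predicted temperature rise of well over 100 K comes entirely from the uncertainty in the input; feeding the same low-quality information into a more sophisticated model would not reduce it.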

User ability

Research often focuses on the technical question of model accuracy – how well a model represents an experiment/reality. Just as using the metaphor of shooting arrows at the target implies that the shooter has some level of competence in using a bow and arrow, the outcome of applying a model cannot be easily separated from the user of the model.

Fire modelling is an art that depends on the ability of the user. Factors such as the user’s knowledge of fire science, access to literature resources, experience of modelling in general, familiarity with the particular tool, and even their ethical behaviour, may all impact on the predictions. As an example, Johansson et al. [13] saw in a round robin study that, even though clear instructions were given to the users on building layout and heat release rate, some of the results demonstrated significant variation, partly because of user errors but also because the users selected different but similarly rational inputs arising from their individual judgements. The ability of a model user is a complex topic that this article cannot do justice to, so the reader is directed to work such as that of Rein et al. [14] and Baker et al. [15].

Defining the target

The target metaphor is predicated on the notion that the target is not obscured from the shooter. Whether the target is assumed to be stationary or moving is ambiguous, but it is often illustrated as being static. However, when considering the use of a fire model there may be uncertainty as to where the target is, in addition to the uncertainty associated with the model and the user capability. Buildings typically go through various design stages, as expressed in the UK by the RIBA stages (Figure 2). Early in a building project, detailed decisions regarding the building form are likely yet to be finalised. For example, the addition or removal of a stair, door or wall might represent a small change for the building as a whole but might have very large implications for egress and fire safety.


Figure 2. The RIBA design stages (adapted from original RIBA graphic)

Not only is there uncertainty associated with the building, which diminishes through the design stages, but there is also uncertainty associated with the fire scenario and the environmental conditions at the time of an incident (e.g., whether ventilation openings such as doors and windows are open or closed, operation of HVAC, wind conditions, solar gains). Where the regulatory fire safety expectations are expressed in functional terms, there may be further uncertainty in how modelling is shown to meet those expectations. Thus, it could be argued that the target is moving and also not necessarily clearly defined.

Number of simulations

The target metaphor makes no explicit assumption about the number of arrows available to the user nor how quickly the shots are taken. However, complex fire models not only may take longer to run but may also require more effort to set up and to interpret the output when compared with simple tools. Within the commercial design process there is always a limit on time and budget, which may result in there being only sufficient resource to carry out a limited number of simulations. The user of a model needs the resources to select appropriate input values for one or many parameters. These resources include an understanding of the impact of the user’s selections, knowledge of whether any default values are applicable, and adequate reference material in the literature to determine appropriate values. More complex tools will likely need more input parameters to be defined. Often effort is needed to justify non-critical inputs, whether by referring back to previous studies, by sensitivity analysis, or by presenting a sufficiently compelling argument. Addressing all these elements takes time, and therefore the number of ‘shots’ is going to be limited.

Where there is a desire to carry out a risk-based design, it is not necessarily practical to perform very time-consuming calculations. As discussed above, given the inherent uncertainties that exist in design, it might be argued that multiple simulations using a less accurate tool are preferable. Assessing the trueness and selecting a bounding value may be of more utility than investing in trying to hit the centre of the target.

Judging success

The target metaphor judges those arrows that fall into the central bullseye as being more successful than those that hit the target in the outer rings, which in turn are more successful than those that miss the target altogether, i.e., it is an illustration of accuracy. When using fire models, how much does accuracy ultimately matter – should everything hit the bullseye, or is hitting the target anywhere adequate? By analogy, at first glance a standard dartboard (Figure 3) looks to be the same as the target shown in Figure 1, but in actuality hitting the bullseye is rarely the most ‘successful’ option in matchplay.


Figure 3. A standard dartboard in which hitting the triple-20 region is the chosen tactic in matchplay to gain success

Success will need to be judged through the ability to obtain the necessary outputs that can then be appropriately interpreted. McGrattan et al. [5] suggest that “Sophisticated comparison metrics may be more trouble than they are worth”, which poses questions such as whether having detailed smoke density output provides any more value than an average layer height. Providing detailed model output also requires the means to communicate that information in a way that stakeholders are able to understand.

If the level of model accuracy is understood, then this can be factored into an assessment. Analysis can be adjusted appropriately so that if during benchmarking it has been found that a model has a certain bias then this can be accounted for. Rather than needing to have more precision, a bias might be addressed by adding a ‘safety factor’. Where multiple simulations have been used through stochastic modelling techniques to get resultant distributions then a level of acceptability can be selected by using an appropriate percentile.
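As a minimal sketch of that last point (Python, with invented distribution parameters and a hypothetical bias value rather than figures from any benchmarking study), the snippet below applies a known model bias as a correction factor to a set of stochastic simulation outputs and then reads an acceptance value off the corrected distribution at a chosen percentile.

```python
import random
import statistics

random.seed(1)

# Hypothetical stochastic study: 500 simulations of, say, time to untenable
# conditions (s); the distribution parameters are invented for illustration.
raw_outputs = [random.gauss(mu=240.0, sigma=30.0) for _ in range(500)]

# Suppose benchmarking had indicated the model over-predicts this quantity
# by about 10% on average; correct for that bias before judging acceptability.
bias_factor = 1.10
corrected = sorted(t / bias_factor for t in raw_outputs)

# Select a level of acceptability from the corrected distribution,
# e.g. the 5th percentile as a conservative design value.
design_value = corrected[int(0.05 * len(corrected))]
print(f"mean = {statistics.mean(corrected):.0f} s, "
      f"5th percentile design value = {design_value:.0f} s")
```

The choice of percentile (here the 5th) is itself part of the discussion between stakeholders about what constitutes an acceptable level of trueness and precision.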

Conclusion

This article has used the metaphor of shooting arrows at a target to briefly discuss some aspects of applying fire models that are sometimes missed. Selecting a fire model requires the consideration of a number of practical factors. Simply relying on what might be considered the most complex tools might not be the optimum approach, even if it is accepted that the level of sophistication is a measure of accuracy. Meanwhile, the authors of this article are building on the work of others to examine the accuracy of different types of models, work which illustrates that more sophistication does not necessarily equate to higher accuracy.

References

[1]         Wade, C., Nilsson, D., Baker, G., Olsson, P. (2021). Fire engineering practitioner tools – Survey and analysis of needs, FRG Report 2102015/1, SFPE Educational and Scientific Foundation Inc, higherlogicdownload.s3.amazonaws.com/SFPE/c2f91981-c014-4bec-97f4-1225586937ac/UploadedImages/Final_Report_with_Cover_Page.pdf.

[2]         SFPE. (2010). SFPE Guide: Guidelines for substantiating a fire model for a given application: Society of Fire Protection Engineers, Bethesda, MA, USA.

[3]         Spearpoint, M. (2022). Selecting your fire simulation tool when dealing with uncertainty (or, Choose your fire simulation weapon). Presentation to Institution of Mechanical Engineers, London, Sept. 2022.

[4]         ISO. (2020). International Standard ISO 20414:2020 Fire safety engineering – Verification and validation protocol for building fire evacuation models, International Organization for Standardization, Geneva, Switzerland.

[5]         McGrattan, K., Peacock, R., Overholt, K. (2014). Fire model validation – Eight lessons learned, Fire Safety Science – Proceedings of the Eleventh International Symposium, pp. 958-968, International Association of Fire Safety Science, doi: 10.3801/IAFSS.FSS.11-958

[6]         McGrattan, K., McDermott, R., Vanella, M., Hostikka, S., Floyd, J. (2022). Fire Dynamics Simulator technical reference guide volume 3: validation, National Institute of Standards and Technology, Gaithersburg, MD, NIST SP 1018-3. doi: 10.6028/NIST.SP.1018.

[7]         Walton, W. D., Thomas, P. H., Ohmiya, Y. (2016). Estimating temperatures in compartment fires, SFPE Handbook, Chapter 30, pp. 996-1023: Springer, New York, NY, USA.

[8]         Walton, W. D., Carpenter, D. J., Wood, C. B. (2016). Zone computer fire models for enclosures, SFPE Handbook, Chapter 31, pp. 1024-1033: Springer, New York, NY, USA.

[9]         McGrattan, K., Miles, S. (2016). Modeling fires using computational fluid dynamics (CFD), SFPE Handbook, Chapter 32, pp. 1034-1065: Springer, New York, NY, USA.

[10]     Gwynne, S. M. V., Kuligowski, E., Spearpoint, M., Ronchi, E. (2015). Bounding defaults in egress models. Fire and Materials, 39(4), pp.335–352. doi: 10.1002/fam.2212

[11]     Elms, D. G. (1992). Consistent crudeness in system construction, in Optimization and Artificial Intelligence in Civil and Structural Engineering, B. H. V. Topping, Ed. Dordrecht: Springer Netherlands, pp. 71–85. doi: 10.1007/978-94-017-2490-6_6.

[12]     Hopkin, D., Hopkin, C., Spearpoint, M., Ralph, B., Van Coile, R. (2019). Scoping study on the significance of mesh resolution vs. scenario uncertainty in the CFD modelling of residential smoke control systems, Interflam: 15th International Conference on Fire Safety Engineering, 1–3 July, Egham, UK.

[13]     Johansson, N., Anderson, J., McNamee, R., Pelo, C. (2021). A round robin of fire modelling for performance-based design, Fire and Materials, 45(8), pp. 985–998, doi: 10.1002/fam.2891.

[14]     Rein, G. et al. (2009). Round-robin study of a priori modelling predictions of the Dalmarnock Fire Test One, Fire Safety Journal, 44(4), pp. 590-602. doi: 10.1016/j.firesaf.2008.12.008

[15]     Baker, G., Spearpoint, M., Frank, K., Wade, C., Sazegara, S. (2017). The impact of user decision-making in the application of computational compartment fire models. Fire Safety Journal, 91, 964–972. doi: 10.1016/j.firesaf.2017.03.068



[1] Hereafter we use ‘model’ and ‘simulation tool’ synonymously rather than distinguishing between a conceptual model and an implementation of that model using a computer.

[2] For a really interesting read on the historical development of measurement precision, get a copy of “How Precision Engineers Created the Modern World” by Simon Winchester.

[3] This resulted in a separate email discussion with Kevin McGrattan about terminology. He pointed out that Wikipedia states that benchmarking has its origins in the practice of fixing a rifle in a benchtop vice and firing it many times to ascertain the spread of its “marks” on a target. This methodology removes the user effects and he uses this analogy to explain bias and scatter statistics.