It's the End of the World as We Know It (and I feel fine)
By: Michael Spearpoint, OFR Consultants, UK
Konstantinos Chotzoglou, Efectis, UK
Danny Hopkin, OFR Consultants, UK
Yorgos Kanellopoulos, OFR Consultants, UK
ChatGPT, OpenAI L.L.C., San Francisco, CA, USA
A chatbot is a software program that simulates conversation with human users, often used for customer service or simple tasks. GPT-3 is a language generation model developed by OpenAI that has been trained on a diverse dataset of text to predict the next word in a sequence of words. In the context of fire engineering, a chatbot could potentially assist in summarizing concepts and documents for non-expert team members and assist in research and writing. The role of a fire engineer is to ensure the safety of buildings and structures in the event of a fire by working with architects to comply with codes and regulations and developing evacuation plans. Tasks performed by fire engineers include conducting fire safety assessments, recommending changes to improve safety, and selecting fire protection systems.
Chatbots are not appropriate to use for writing academic papers as they are not capable of conducting original research or contributing new insights or ideas to a field of study. Additionally, academic integrity requires that students and researchers properly cite their sources and give credit to others for their work, which would not be possible if a chatbot were used to write a paper. However, chatbots can be used to assist with elements of an academic paper such as summarizing previous work or generating introductory text. It is important to note that detecting if text has been generated by a chatbot can be difficult, but signs such as lack of originality, lack of context, stilted or repetitive language and inconsistencies or errors may indicate that the text was generated by a chatbot.
What is a chatbot?
Readers are likely aware of the increased use of chatbots in society, and there has recently been quite a bit of discussion online following the release of the ChatGPT tool, which uses GPT‑3, a state-of-the-art language generation model developed by OpenAI. This has led to conversations on whether such a tool should be used in fire engineering, and if so, how it could be used, what the implications might be, and so on. We therefore thought it might be useful to widen the discussion across the community to see what people think about such tools and where this might all be going. To make the exercise more interesting, and for full disclosure, this article has been written using the responses from ChatGPT (Dec 15 and Jan 9 versions, https://chat.openai.com/), with the questions as posed by the authors and answers supplied by the chatbot.
Many readers are probably already familiar with chatbots, and when ChatGPT was asked “What is a chatbot?” the response was:
A chatbot is a software program that is designed to simulate conversation with human users, especially over the Internet. Chatbots are often used to provide customer service, answer frequently asked questions, or complete simple tasks. Chatbots can be integrated into messaging apps, websites, mobile apps, or other platforms, and they can be designed to operate using artificial intelligence or simple rules-based algorithms. Some chatbots are designed to understand and process natural language input from users, while others may only be able to respond to predefined commands or prompts. Chatbots are commonly used in customer service and marketing, but they have many other potential applications as well.
When ChatGPT was asked “what information were used to train GPT-3”, the reply was:
GPT-3 (Generative Pre-trained Transformer 3) is a language generation model developed by OpenAI that has been trained on a large dataset of text in order to learn to predict the next word in a sequence of words, a task known as language modeling. The specific data used to train GPT-3 may vary, but it is likely to be a diverse and representative set of text from the internet, books, and other sources. The goal of using a large and diverse dataset is to ensure that the model can perform well on a wide range of language generation tasks. In general, the training data for language models like GPT-3 consists of large collections of text from a variety of sources, such as the internet, books, and other documents. The specific data used to train each individual model may vary, but in general, the goal is to use a diverse and representative set of data in order to ensure that the model can perform well on a wide range of tasks.
Chatbots and fire engineering
Within the fire engineering discipline there might be different applications for a chatbot, such as summarising concepts and documents for the benefit of non-expert design team members, or, in the context of research, writing articles or academic papers. At a high level the chatbot has potential utility in helping members of the design team better understand the role of a fire engineer. When prompted to “summarise the role of a fire engineer for the benefit of an architect” and “what tasks do fire engineers do?”, ChatGPT noted:
A fire engineer is a professional who applies engineering principles and techniques to the design, construction, and maintenance of buildings and structures to ensure their safety in the event of a fire. They work with architects to ensure that the building is designed and constructed in compliance with fire safety codes and regulations. They also assist in the development of evacuation plans and the selection of fire protection systems. In summary, the role of a fire engineer is to ensure that the building is designed and constructed to minimize the risk of fire and to protect occupants in the event of a fire.
Fire engineers typically perform a variety of tasks, including:
1. Conducting fire safety assessments of existing buildings to identify potential hazards and recommend changes to improve safety.
2. Analyzing building plans and designs to ensure compliance with fire safety codes and regulations.
3. Assisting in the selection and design of fire protection systems, such as sprinklers, alarms, and smoke control systems.
4. Developing evacuation plans and emergency procedures for buildings.
5. Conducting fire drills and training building occupants on emergency procedures.
6. Investigating fires to determine their cause and make recommendations to prevent similar incidents in the future.
7. Collaborating with architects, builders, and other professionals to incorporate fire safety measures into the design and construction of buildings.
8. Keeping up to date with new fire safety codes and regulations and advising clients on how to comply with them.
9. Researching new fire safety technologies and methods and making recommendations for their use.
10. Participating in professional organizations and staying current with advances in fire safety engineering.
As a test of the tool in relation to Item 8 on the above list, ChatGPT was asked “In Approved Document B when should a door be hung to open in the direction of escape?” with the reply:
According to Approved Document B (ADB), which provides guidance on the building regulations in the United Kingdom that relate to fire safety, doors in escape routes and exits should be hung to open in the direction of escape in order to allow people to easily and quickly evacuate a building in the event of a fire.
In general, all doors in escape routes and exits should be hung to open in the direction of escape, except for doors that are:
· Used as part of a smoke control system
· Used to contain a fire
· Used to protect a means of escape from the effects of fire
It is important to note that the specific requirements for doors in escape routes and exits may vary depending on the specific characteristics of the building, such as the layout of the building, the type of occupancy, and the number of people expected to use the routes and exits. It is always best to consult the latest version of ADB or other relevant guidance to determine the current requirements for your specific situation.
It is a good thing that ChatGPT suggested the latest (or any) version of ADB should be consulted, since its response does not even closely match any edition the lead author has from 1992 onwards. In the latest version, Clause 5.11 on direction of opening states:
The door of any doorway or exit should be hung to open in the direction of escape whenever reasonably practicable. It should always be hung to open in the direction of escape if either of the following conditions applies.
a. More than 60 people might be expected to use it during a fire.
b. There is a very high risk of fire with potential for rapid fire growth, such as with some industrial activities.
It is also incorrect when it says ADB provides “…guidance on the building regulations in the United Kingdom…”, as ADB is only relevant to England and Wales, each of which has its own specific editions. Trying several other questions related to the guidance in ADB also resulted in incorrect answers, so clearly the chatbot is not ready (yet?) for this kind of task.
Given Item 2 on the previous list of fire engineering roles, can a chatbot replace the fire engineer? Asking ChatGPT “Can you develop/generate a fire safety strategy for residential purposes building as per ADB?” resulted in:
Sure, here are some general guidelines for developing a fire safety strategy for a residential building:
1. Conduct a fire risk assessment to identify potential fire hazards and identify the people who may be at risk.
2. Implement appropriate fire prevention measures, such as installing smoke alarms, maintaining electrical systems, and properly storing flammable materials.
3. Develop an emergency evacuation plan and ensure that all residents are aware of it. This should include a designated assembly area outside of the building.
4. Ensure that all exits are clearly marked and unobstructed, and that there is at least one designated fire escape route from each floor.
5. Install and maintain firefighting equipment, such as fire extinguishers and hose reels, in appropriate locations throughout the building.
6. Consider installing additional fire safety measures, such as sprinkler systems or fire doors, depending on the size and layout of the building.
7. Regularly test and maintain all fire safety equipment to ensure that it is in good working order.
8. Provide fire safety training to all residents, including how to use firefighting equipment and how to evacuate the building safely.
9. Establish a procedure for reporting fires and ensure that all residents know how to do so.
10. Work with local authorities to ensure that your fire safety strategy complies with relevant regulations and codes, such as the Abu Dhabi Building Code.
The above response suggests that generic guidelines may be provided by a chatbot, but for the time being it seems that a fire strategy is going to have to be developed by qualified fire engineers.
Chatbots in research
Item 9 on the list of fire engineering roles mentions research. Clearly one question to raise is “Should chatbots be used to write academic papers”, to which ChatGPT responded with:
It is generally not appropriate to use chatbots to write academic papers. Academic papers are typically expected to be the result of original research or scholarship, and chatbots are not capable of conducting original research or contributing new insights or ideas to a field of study. Additionally, academic integrity requires that students and researchers properly cite their sources and give credit to others for their work, which would not be possible if a chatbot were used to write a paper. It is important for students and researchers to do their own work and to follow ethical guidelines when writing academic papers. Using a chatbot to write a paper would likely be considered cheating and could result in academic penalties.
Notwithstanding the obvious ethical standpoint, given that a chatbot cannot generate original ideas, it might be concluded that the question of whether to use one for an academic paper is irrelevant. However, academic papers typically include material such as introductory text, a review of previous work, and other elements that could potentially be generated or informed by the use of a chatbot. For example, when tasked with summarising the introduction of a recent paper by Hopkin et al. in bullet format for a lay person, ChatGPT provided the following:
· Timber is being considered more frequently as a framing solution for new office buildings in the UK due to its embodied carbon, aesthetics, and constructability.
· Hybrid construction solutions, using timber (in the form of cross-laminated timber or CLT) alongside other materials such as steel and concrete, are becoming more popular in the UK.
· Recent guidance has clarified the design evidence that should be provided by engineers to demonstrate adequate structural fire performance in buildings using combustible structural framing solutions.
· The configuration, scale, and fire design of commercial buildings are increasingly different from the research that has been conducted to date, which has tended to focus on smaller residential enclosures.
· A series of experiments have been conducted at ITB in Warsaw to support designers in the realization of mass timber commercial buildings. This paper describes the experiments and reports on the findings to date.
In the opinion of the authors, ChatGPT has generated a satisfactory representation of the content of the paper, and on that basis, a summary of this article has been created using ChatGPT.
One apparently straightforward solution to the question of using a chatbot is to ban its use for research purposes. On the face of it this seems a reasonable proposition, but to enforce such a ban it would be necessary to be able to detect that text originated from a chatbot. When the question “How can you tell if text has been generated by a chatbot?” was posed to ChatGPT, the response was:
There are a few signs that text may have been generated by a chatbot:
1. Lack of originality: Chatbots are not capable of conducting original research or generating new insights or ideas. If the text lacks originality or seems to be repeating information that is already widely known, it may have been generated by a chatbot.
2. Lack of context: Chatbots may have difficulty understanding or generating text that is contextually relevant or appropriate. If the text seems unrelated or out of context, it may have been generated by a chatbot.
3. Stilted or repetitive language: Chatbots may have difficulty generating text that flows naturally or sounds like human speech. If the text is stilted or repetitive, it may have been generated by a chatbot.
4. Inconsistencies or errors: Chatbots may make mistakes or generate text that is inconsistent with other information. If the text contains errors or inconsistencies, it may have been generated by a chatbot.
It is important to note that not all chatbots will exhibit these characteristics, and some chatbots may be able to generate text that is more sophisticated and realistic. However, if the text exhibits one or more of these characteristics, it is possible that it may have been generated by a chatbot.
When asked “what context would you require to definitively determine whether the text was produced by an AI or a human?”, the response was:
In order to definitively determine whether text was produced by an AI or a human, several pieces of context would be required. Some of the key factors that could be used to determine the origin of the text include:
1. The specific language model or AI system that was used to generate the text, if known. Each AI language model has its own unique characteristics and capabilities, and comparing the text to known examples of text generated by a specific AI system would provide strong evidence as to its origin.
2. The specific task or application that the text was generated for. Some AI systems are better suited to specific tasks, such as language translation or question answering, and the characteristics of the text would reflect the requirements of the task.
3. The amount and type of input or context provided to the AI system. Most AI systems require some input or context to generate text, and the nature of that input can provide clues as to the origin of the text.
4. The quality and coherence of the text. Text generated by AI can be highly coherent and fluent, but it may also contain errors or inconsistencies that are characteristic of machine-generated text.
5. The style, tone and language use, AI generated text might not be able to capture the nuances, subtleties and the emotional context of a human's writing.
It is important to note that even with all this context, it may not be possible to definitively determine whether text was produced by an AI or a human. However, by considering multiple pieces of context, it would be possible to make a strong inference as to the origin of the text.
Fire engineers, such as the authors of this article, do not generally use such tools on a daily basis and would not be ‘suspicious’ that ChatGPT had been used to produce part of a piece of work (or maybe they will be from now on). Moreover, a lack of emotion in text is a characteristic of academic papers, since authors are expected to present their findings in a factual and dispassionate manner. Authors and reviewers of research work will therefore face additional challenges in identifying whether text was produced by an AI.
However, given the challenges of detecting text from a chatbot, and that chatbots are a new tool that may have genuine utility, an alternative view might be to allow them under certain circumstances, provided they are used appropriately. For example, if there were a way to cite or acknowledge the use of a chatbot, this might be seen as acceptable. Asking ChatGPT “Is there a way to cite the text from a chatbot?” generated the following:
It is generally not appropriate to cite the text from a chatbot as a source in an academic paper. Chatbots are designed to provide automated responses to questions or prompts, and their responses are not the result of original research or scholarship. Therefore, it would not be appropriate to use the text from a chatbot as a source in an academic paper. If you are using information from a chatbot in your paper, it would be more appropriate to paraphrase or summarize the information in your own words and to provide a citation for the source of the information, if applicable.
Interestingly, although the response starts by stating that it is unacceptable to cite text from a chatbot, it goes on to suggest that using one to help create a summary might be appropriate. It would be interesting to hear people’s views on that approach. But what if there were a way to identify a specific text using a ‘hash function’ that could be provided as a citation (somewhat analogous to a DOI)? Not being experts in AI development, the authors asked ChatGPT “Can you use a hash function to generate specific text from a chatbot”, with the result:
It is possible to use a hash function to generate specific text from a chatbot, but it would depend on the specific implementation of the chatbot and how it is programmed to generate responses.
A hash function is a mathematical algorithm that takes an input (called the "message") and produces a fixed-size output (called the "hash value" or "digest"). The output of a hash function is designed to be unique and deterministic, meaning that the same input will always produce the same output, and it is infeasible to determine the input from the output.
In the context of a chatbot, a hash function could potentially be used to generate specific responses based on the input message. For example, the chatbot could be programmed to use a hash function to map specific input messages to predefined responses. However, this would likely require a significant amount of programming and would be dependent on the specific implementation of the chatbot.
It is important to note that chatbots are not capable of conducting original research or contributing new insights or ideas to a field of study. Therefore, it would not be appropriate to use a chatbot to generate text for an academic paper or other research project.
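The hashing side of the idea is at least straightforward to illustrate. The sketch below (the authors' own illustration, not a feature of ChatGPT; the function name `cite_hash` and the choice of SHA-256 are assumptions for the example) shows how a deterministic digest of a chatbot's response could serve as a short, citable identifier:

```python
import hashlib


def cite_hash(text: str) -> str:
    """Return a short, deterministic digest of a chatbot response.

    Identical response text always yields the same digest, so the digest
    could act as a citable identifier (loosely analogous to a DOI).
    Note that the original text cannot be recovered from the digest
    alone; it only lets a reader verify a quoted response matches it.
    """
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return digest[:16]  # truncated purely for readability


# Hypothetical usage: cite the opening of a ChatGPT reply.
response = "It is possible to use a hash function to generate specific text..."
print(f"ChatGPT response, digest {cite_hash(response)}")
```

Of course, this only fingerprints a given response after the fact; it does not make the chatbot reproduce the text from the digest, which is the harder part of the proposal.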
The response suggests the approach is viable but not without its challenges. The alternative is to have ChatGPT listed as a co-author of a paper, and the first published editorial article to do this is already available. It therefore only seems appropriate to also list ChatGPT as a co-author of this article.
It would appear chatbots (and other associated AI tools, such as the DreamStudio art generator used to create the image at the end of this article) are here to stay, and although it is evident they still have a long way to go, it is also highly likely their capability will increase. Readers who have not already done so are invited to give them a try to discover what they can and cannot do, as we are certainly still learning. In the meantime, we expect there to be a necessary ongoing debate across industry and academia about the use of these tools.
D. Hopkin et al., ‘Large-scale enclosure fire experiments adopting CLT slabs with different types of polyurethane adhesives: Genesis and preliminary findings’, Fire, 5(2), 2022.
S. O’Connor and ChatGPT, ‘Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse?’, Nurse Education in Practice, 66, 2023.
In addition to the co-authors of this article, the lead author would like to acknowledge the discussions with Steve Gwynne, Hui Xie and Wojciech Węgrzyński on this topic, and Ieuan Rickard for pointing out the first paper to have ChatGPT as a co-author.
Image created using DreamStudio (beta.dreamstudio.ai/dream) using the phrase “Robot feeling fine looking at the end of the world, Dutch master”