Algorithm and the Black Box Problem




By Edmund Ang

“It’s all in the algorithm” is a popular phrase in discussions of technology and key decisions. It is well known that algorithms power software services such as search engines and intelligent assistants. What is less obvious is that algorithms are increasingly used in safety-critical equipment, such as intruder detection systems, multi-criteria smoke detectors and video fire detection systems.

These smart technologies rely on algorithms to reduce false positives — e.g., spurious fire alarms caused by dust — and false negatives, i.e., failing to identify a real incident. For example, most modern aspirating smoke detection systems rely on an algorithm that processes the signals from the LED and infrared sensors to determine whether an alarm is real or spurious.
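To make the idea concrete, here is a minimal sketch of how such a dual-wavelength discrimination step might work. The ratio test, the threshold values and the function name are illustrative assumptions for this article, not any manufacturer's actual logic:

```python
# Illustrative sketch: dual-wavelength particle discrimination.
# Small smoke particles scatter short-wavelength (blue LED) light more
# strongly, relative to infrared, than large dust particles do, so the
# ratio of the two signals can help separate real smoke from nuisance dust.
# All thresholds below are made-up placeholders, not real product values.

BLUE_ALARM_LEVEL = 0.30   # hypothetical minimum blue-scatter signal
SMOKE_RATIO = 1.5         # hypothetical blue/IR ratio above which we call "smoke"

def classify_sample(blue: float, ir: float) -> str:
    """Return 'smoke', 'dust', or 'clear' for one pair of sensor readings."""
    if blue < BLUE_ALARM_LEVEL:
        return "clear"                 # signal too weak to act on
    if ir > 0 and blue / ir >= SMOKE_RATIO:
        return "smoke"                 # small particles: likely a real fire
    return "dust"                      # large particles: likely a nuisance

print(classify_sample(0.10, 0.05))  # weak signal -> clear
print(classify_sample(0.60, 0.20))  # blue-dominated scatter -> smoke
print(classify_sample(0.60, 0.50))  # comparable scatter -> dust
```

Even this toy version hints at the difficulty the article describes: the right thresholds depend on the environment, and real products must handle drift, transients and combinations of aerosols far beyond two fixed numbers.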

While the possibilities of pairing algorithms with advanced hardware are exciting, there is a concern that these systems are becoming a black box for end-users and even their creators. The first reason is their increasing complexity. Using the aspirating detection system as an example, the decision-making process for differentiating a true alarm from a false one appears straightforward, but the reality is anything but: the system must be robust enough to make this distinction across many environments and scenarios. If a machine learning-based algorithm is used in the future, the complexity will increase exponentially, and because such models evolve, no one will fully understand the cause and effect behind a given decision.

A case in point in another industry is the financial market’s flash crashes over the last few years, which have been attributed to the use of algorithms in high-speed trading.

Secondly, there is no standardized testing method. Each manufacturer maintains its own proprietary design, and there is no common method for testing the robustness of an algorithm against various scenarios and edge cases. Consequently, there is no consistent expectation for the performance of such systems.

Thirdly, the algorithms are mainly closed source. Acknowledging the need to protect a company’s intellectual property, most of the algorithms paired with these hardware technologies are proprietary or hidden. This means only the owners with access to the source code have a reasonable chance of understanding and stress-testing the algorithm.

If the current situation is left unchecked, we will soon face a black box problem in which no one fully understands the technology at hand. We could then only trust, blindly and with no verifiable assurance, that the system is robust enough to avoid a catastrophic failure like those seen in other industries.

That said, compared to other industries, the use of algorithms in fire safety-critical technologies is still primitive. Now is precisely the opportunity for the fire industry to lay a strong foundation, ensuring a full understanding of and control over the algorithms used today and in the future.

Two Humble Suggestions
Firstly, we need to adopt an open source mindset. While acknowledging the necessity of protecting intellectual property, this mindset can still be instilled as products are developed. At this stage, it is still possible to open the algorithms used in safety-critical technologies in sufficient detail for the wider engineering and research community to help examine and stress-test them. A manufacturer can test the general cases, but no single organization can identify all the edge cases for these algorithms.

Secondly, we need to develop industry-agreed testing and algorithm disclosure methods that provide a level playing field for manufacturers. The fire industry, in collaboration with testing laboratories and standards-setting bodies, should develop a testing method for safety-critical technologies in which an algorithm is crucial to the functionality of the system, ensuring consistent expectations for performance. Further, the industry must collectively agree on a standardized algorithm disclosure method so that professionals using such systems can understand the algorithms’ decision-making process.
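The shape of such an industry-agreed test could be very simple: a shared library of recorded scenarios with expected verdicts, run identically against every manufacturer's detection function. The sketch below is a hypothetical illustration; the scenario values, labels and the placeholder detector are invented for this example:

```python
# Illustrative sketch: a shared, industry-agreed scenario suite.
# Each scenario pairs sensor inputs with the expected verdict; any
# manufacturer's detector function could be run against the same list.
# Scenarios and the example detector are hypothetical placeholders.

SCENARIOS = [
    {"name": "smouldering fire",     "blue": 0.60, "ir": 0.20, "expected": "smoke"},
    {"name": "construction dust",    "blue": 0.60, "ir": 0.50, "expected": "dust"},
    {"name": "clean-room baseline",  "blue": 0.05, "ir": 0.02, "expected": "clear"},
]

def run_suite(detector) -> float:
    """Run every scenario through `detector` and return the pass rate."""
    passed = 0
    for s in SCENARIOS:
        verdict = detector(s["blue"], s["ir"])
        if verdict == s["expected"]:
            passed += 1
        else:
            print(f"FAIL {s['name']}: got {verdict}, expected {s['expected']}")
    return passed / len(SCENARIOS)

def example_detector(blue: float, ir: float) -> str:
    """Trivial stand-in for a manufacturer's proprietary algorithm."""
    if blue < 0.30:
        return "clear"
    return "smoke" if ir > 0 and blue / ir >= 1.5 else "dust"

print(f"pass rate: {run_suite(example_detector):.0%}")
```

The value of the shared suite is precisely that the detector stays a black box to the test: only its inputs and verdicts are standardized, so intellectual property is preserved while performance expectations become comparable.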

The effort required to implement these suggestions would, of course, be enormous. Nonetheless, I firmly believe that if we all come together, the collective intelligence of the fire industry will prevail.

Edmund Ang is a fire & risk engineer and PhD researcher at Hazelab Imperial College, London, UK. 

© 2019 SFPE | All Rights Reserved