The proliferation of Artificial Intelligence (AI) technology such as ChatGPT (and other services, such as “Grok” from X) has brought AI to the forefront of discussions about software development, software applications, and how this computing power can be used to serve us. Public Safety software is no different. There are interesting applications for AI that are already in use or under development. There are also, however, hazards where AI technology is concerned, hazards that will need to be addressed in any use of the technology moving forward. The more the software can do, the greater these concerns become.

For example, AI can be used to facilitate facial recognition. Public Safety software that scans surveillance footage against an image database in search of potential matches increasingly relies on this type of technology. There is also compelling potential for AI in dispatching and emergency call processing. Imagine a 911 call center that can field hundreds of calls simultaneously with a relatively small staff of human dispatchers, reducing the stress and workload on those dispatchers while decreasing response times and ensuring all calls are handled quickly and efficiently.
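To make the matching step concrete, the sketch below illustrates, in broad strokes, how a face-recognition pipeline of this kind typically decides what counts as a match: a probe image from footage is converted into a numeric embedding and compared against a gallery of enrolled embeddings, with anything scoring above a similarity threshold reported as a candidate “hit.” This is a minimal, hypothetical Python sketch; the function names, the embedding representation, and the 0.85 threshold are illustrative assumptions, not a description of any particular product.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(probe: np.ndarray, gallery: dict[str, np.ndarray],
                 threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return gallery identities whose similarity to the probe embedding
    meets or exceeds the threshold, highest score first.

    The threshold value is illustrative only: lowering it surfaces more
    candidates but also more false "hits"; raising it misses genuine matches.
    """
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    return sorted([(n, s) for n, s in scores if s >= threshold],
                  key=lambda pair: pair[1], reverse=True)
```

The threshold is the crucial lever in a system like this: set it too low and the software reports false hits of exactly the kind discussed next; set it too high and genuine matches slip through.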

In both cases, however, the potential for problems should be obvious: What happens if facial recognition software employing AI technology scores a “hit”… incorrectly? There is potential for harm to the person misidentified, but there is also potential legal liability for the companies responsible for the technology and its use. The same is true when AI technology is used to facilitate dispatching and the fielding of emergency calls. What happens when the AI makes a mistake and a call is not properly addressed? The more complex AI becomes, the more it can do. The more it can do, the more responsibility it can take on… and the higher the stakes when it errs. Ali Asgary, writing in The Conversation, notes that we “are now reaching a turning point where AI is becoming a potential source of risk at a scale that should be incorporated into risk and emergency management phases — mitigation or prevention, preparedness, response and recovery. AI hazards can be classified into two types: intentional and unintentional. Unintentional hazards are those caused by human errors or technological failures.”

He continues, “As the use of AI increases, there will be more adverse events caused by human error in AI models or technological failures in AI based technologies. These events can occur in all kinds of industries including transportation (like drones, trains or self-driving cars), electricity, oil and gas, finance and banking, agriculture, health and mining. Intentional AI hazards are potential threats that are caused by using AI to harm people and properties. AI can also be used to gain unlawful benefits by compromising security and safety systems.”

Asgary concludes, “In my view, this simple intentional and unintentional classification may not be sufficient in case of AI. Here, we need to add a new class of emerging threats — the possibility of AI overtaking human control and decision-making. This may be triggered intentionally or unintentionally. Many AI experts have already warned against such potential threats.”

Whether or not the use of AI ever produces such science-fiction-style outcomes, the industry must address the potential for error, both within AI itself and stemming from its application in public safety software. Caliber Public Safety will continue to keep its finger on the pulse of these advancing technologies, incorporating them responsibly where benefits can be obtained. This is the future of public safety, and it is a bright future indeed.