AI-based automated anomaly detection systems are gaining popularity due to the explosion of data generated by all kinds of devices and the ever-evolving sophistication of threats from hackers and other bad actors. Anomaly detection can be applied across many business scenarios: monitoring the financial transactions of a fintech company, highlighting fraudulent activity in a network, catching e-commerce price glitches among millions of products, and so on. A well-designed anomaly detection system can manage millions of metrics at scale and filter them down to a small number of consumable incidents that yield actionable insights.
Before deploying an anomaly detection system, companies should ask the following questions to ensure they choose the right product for their needs:
1] What is the alert frequency (5 minutes/ 10 minutes/ 1 hour or 1 day)
2] Requirement of a scalable solution (Big data vs. regular RDBMS data)
3] On-premise or cloud-based solution (Docker vs. AWS EC2 instance)
4] Unsupervised vs. Semi-supervised solution
5] How to read & prioritize various anomalies in order to take appropriate action (Point based vs. Contextual vs. Collective anomalies)
6] Alert integration with systems
What is the alert frequency (5 minutes/ 10 minutes/ 1 hour or 1 day): Alert frequency depends largely on the sensitivity of the process being measured, including how quickly someone must react. Some applications demand low latency, such as detecting suspicious payment transactions and notifying users of possible card misuse within minutes. Other applications are less sensitive to change and less severe, such as total inbound and outbound calls from cellular towers, which can be aggregated to an hourly level rather than measured at 5-minute intervals. For each process, one must balance sensitivity against producing the right number of alerts.
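To make the trade-off concrete, aggregating a metric to a coarser frequency is a one-liner in pandas. The sketch below uses synthetic call-volume data and rolls 5-minute samples up into hourly totals before any detection runs:

```python
import numpy as np
import pandas as pd

# Simulated call-volume metric sampled every 5 minutes (synthetic data)
rng = pd.date_range("2023-01-01", periods=288, freq="5min")  # one full day
calls = pd.Series(np.random.default_rng(0).poisson(20, size=288), index=rng)

# A less sensitive process can be aggregated to hourly totals,
# trading alert latency for fewer, more stable data points
hourly = calls.resample("1h").sum()
print(len(hourly))  # 24 hourly buckets instead of 288 five-minute points
```

The same pipeline can then feed the detector at whichever granularity matches the reaction time the business actually needs.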
Requirement of a scalable solution (Big data vs. regular RDBMS data): Some businesses, such as e-commerce or fintech, need to store their data in a big-data environment because of its volume and velocity, whereas others, such as banking, can comfortably rely on mainframe systems. In big-data scenarios, hardware and software scalability are handled by systems like Hadoop and Spark respectively; in regular scenarios, an RDBMS with Python programming is sufficient.
On-premise or cloud-based solution (Docker vs. AWS EC2 instance): In certain businesses, such as fintech and banking, data cannot be placed in the cloud because of compliance and confidentiality requirements. For other businesses, like e-commerce, where those constraints do not apply, the data can be uploaded to a private cloud. An anomaly detection solution should account for these aspects, so that it can be deployed either as a Docker image for on-premise use or as an AWS EC2-based solution for cloud requirements.
Unsupervised vs. Semi-supervised solution: Deploying unsupervised learning algorithms to detect anomalies in time-series data is the most common approach, but these systems are notorious for generating a high number of false positives. If a business finds itself flooded with alerts, it can prioritize them by anomaly score and raise the threshold to focus on the most critical anomalies. Semi-supervised algorithms also exist: they re-train based on user feedback on the generated anomalies, which teaches the model not to repeat the same mistakes later. Bear in mind, however, that integrating semi-supervised algorithms comes with its own challenges.
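As an illustrative sketch of score-based prioritization (assuming scikit-learn's IsolationForest as the unsupervised detector, on synthetic data), the anomaly scores can be thresholded so that only the most critical alerts surface:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Mostly normal metric values plus a few injected extremes (synthetic data)
normal = rng.normal(loc=100, scale=5, size=(500, 1))
extremes = np.array([[160.0], [30.0], [175.0]])
X = np.vstack([normal, extremes])

model = IsolationForest(random_state=42).fit(X)
scores = -model.score_samples(X)  # higher score = more anomalous

# Raising the threshold trims the alert volume to the most critical cases
threshold = np.quantile(scores, 0.99)
critical = np.flatnonzero(scores > threshold)
print(critical)
```

Lowering the quantile widens the net; raising it sacrifices recall for a manageable alert count, which is exactly the lever the paragraph above describes.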
How to read & prioritize various anomalies in order to take action: Anomalies vary in nature: point, contextual, and collective. Point anomalies are individual data points in a single series that are anomalous on their own, in isolation. Contextual anomalies are data points that are anomalous only in a particular context, typically a time period, and would otherwise be considered normal. For example, a surge in call volume during the afternoon would not be considered an anomaly, whereas the same surge at midnight would be. Like point anomalies, contextual anomalies appear in individual series. Finally, collective anomalies appear across multiple data series, and together the collection tells a complete story. Companies should define which types of anomalies they are looking for in order to get the most out of an anomaly detection system. In addition, by prioritizing anomalies with a scoring system, higher-severity anomalies can be given more attention.
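A minimal sketch of the contextual case, on synthetic hourly call volumes: each point is scored against the mean and standard deviation of its own hour of day, so an afternoon-sized surge injected at midnight stands out even though the same value is perfectly normal in the afternoon:

```python
import numpy as np
import pandas as pd

# Two weeks of hourly call volumes: busy afternoons, quiet nights (synthetic)
idx = pd.date_range("2023-01-01", periods=14 * 24, freq="1h")
rng = np.random.default_rng(1)
base = np.where((idx.hour >= 12) & (idx.hour < 18), 500, 50)
calls = pd.Series(base + rng.normal(0, 10, len(idx)), index=idx)

# Inject an afternoon-sized volume at midnight: contextually anomalous
calls.iloc[-24] = 500.0  # midnight of the last day

# Score each point against the mean/std of its own hour of day
grouped = calls.groupby(calls.index.hour)
z = (calls - grouped.transform("mean")) / grouped.transform("std")
print(z.abs().idxmax())  # the midnight surge tops the contextual score
```

A plain point-anomaly detector looking only at the value 500 would pass right over it; conditioning on the hour-of-day context is what flags it.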
Alert integration with systems: Once alerts have been generated, they need to be integrated with the in-house systems already available. If this is not done, people must be dedicated to the verification process, which becomes tedious, especially in the case of false positives. Ideally, alerts from an anomaly detection system should feed into an email notification system, an SMS notification system, or a dashboard that notifies users as soon as a glitch is detected.
Conclusion: The explosion of data generated by devices and applications will only accelerate in the coming years. Technology has always been a double-edged sword: alongside its great benefits come great challenges, including misuse, hacking, and safety issues. Deploying AI-enabled anomaly detection systems is a practical way to combat these issues, and by selecting the appropriate configurations, businesses can obtain the best possible performance.
Pratap Dangeti is the Principal Data Scientist at CrunchMetrics. He has close to 9 years of experience in the field of analytics across domains including banking, IT, credit & risk, manufacturing, hi-tech, utilities, and telecom. His technical expertise includes statistical modelling, machine learning, big data, deep learning, NLP, and artificial intelligence. As a hobbyist, he has written two books in the field of machine learning and NLP.