Contributions

Tackling Anomalies in Factory of the Future Networks with AI and Visualization

Visualization is key to making abstract data more understandable, usable and actionable. It helps us communicate existing information more efficiently and discover new trends or anomalies in large swathes of data. Visualizations often take advantage of almost automatic processes in our brains, like noticing red items in a sea of grey; hence, they decrease our cognitive load when interpreting the information. Combined with artificial intelligence, we can analyse and present even more complex data in a human-friendly manner.

CyberFactory#1 focuses on designing, developing, integrating and demonstrating a set of key enabling capabilities to foster optimization and resilience of the Factories of the Future (FoF). The project consists of 28 partners from seven countries, namely, Canada, Finland, France, Germany, Portugal, Spain, and Turkey.

Our research and development work described here was conducted primarily in the FoF dynamic risk management and resilience work package, which comprised four tasks related to the cybersecurity of the FoF, depicted in the figure below.

Figure 1. Structure of the FoF dynamic risk management and resilience work package

 

The collaboration was primarily based on the Human/Machine (H/M) behaviour watch task, with the objective of detecting anomalies on the factory floor with the help of sensors, cameras and other monitoring equipment. VTT, as the research partner, contributed skills in cybersecurity, network traffic monitoring and visualisation, while the SME partner, Houston Analytics, brought extensive know-how in applying Artificial Intelligence and Machine Learning across business and other sectors. In this particular collaboration, Houston Analytics provided the anomaly analysis and VTT tested different visualization options for it.

The Path from Anomaly Data to Detailed Visualizations

The system was based on real-world data: six months of post-manufacturing quality measurement data provided by Bittium. The original database contained logged errors and quality data on defective products. Even though the data was not explicitly designed for machine learning (ML) usage, we could use it for autonomous learning.

The database was analysed with machine learning algorithms designed by Houston Analytics to detect anomalies. The objective of the analysis was to gain insight into improving fault detection. Furthermore, we wanted to enable predictive maintenance, cut factory downtime and reduce the number of sub-par or rejected products. We used the measurement data to formulate an anomaly score, which enabled the system to report the most likely anomalies and allowed users to see which elements influenced the score.
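As an illustration of the scoring idea, the sketch below uses scikit-learn's IsolationForest on a table of per-unit measurements. The library choice, file name and column names are assumptions made for this example; they are not the models actually built by Houston Analytics.

import pandas as pd
from sklearn.ensemble import IsolationForest

# Load the post-manufacturing measurement data (hypothetical file and columns).
df = pd.read_csv("quality_measurements.csv")
features = df.drop(columns=["unit_id", "timestamp"])

# Fit an unsupervised model and turn its output into an anomaly score,
# where a higher value means "more anomalous".
model = IsolationForest(n_estimators=200, random_state=42).fit(features)
df["anomaly_score"] = -model.score_samples(features)

# Report the units most likely to be anomalous.
print(df.sort_values("anomaly_score", ascending=False).head(10))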

The visualization of the dataset became an important tool for making the wide attribute space of the measurement data, with about 50 measurement vectors for each tested unit, understandable for humans. The large number of dataset features prevents efficient modelling in a human-readable form, but using feature vector transformations we can calculate a top anomaly feature space. This turns the feature space into an anomaly space, which is much easier to visualize in fewer dimensions.
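One hedged sketch of what such a transformation could look like (the project's actual method may differ) is to turn each unit's roughly 50 raw measurements into robust per-feature deviation scores and keep only the few strongest deviations per unit:

import pandas as pd

def to_anomaly_space(features: pd.DataFrame, top_k: int = 3) -> pd.DataFrame:
    # Robust per-feature deviation: distance from the column median, scaled
    # by the median absolute deviation (floored to avoid division by zero).
    med = features.median()
    mad = (features - med).abs().median().clip(lower=1e-9)
    z = ((features - med) / (1.4826 * mad)).abs()

    # For each unit, keep only the names and magnitudes of its top deviations.
    rows = []
    for _, row in z.iterrows():
        top = row.nlargest(top_k)
        record = {}
        for i, (name, value) in enumerate(top.items(), start=1):
            record[f"top{i}_feature"] = name
            record[f"top{i}_deviation"] = value
        rows.append(record)
    return pd.DataFrame(rows, index=z.index)

The result has only a handful of columns per unit, which is far easier to plot and to read than the original 50-dimensional measurement space.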

Visualizations can be used in two different ways: to convey a known story to an audience in a powerful way, or to discover new information within complicated data. In this use case, we needed to explore different kinds of options for visualizing the anomaly data to find new insights. We used open-source tools to build the visualizations, namely Python with Pandas and Dash. Pandas is a widely used data analysis and manipulation library, and Dash is a framework for building dashboards and other data apps that are easily used via a web browser. Dash offers plenty of built-in options for plotting data, and its web interface includes basic controls for things like zooming or selecting data points.
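To give an idea of how little code such a view requires, here is a minimal Dash sketch. The data file, column names and the 99th-percentile threshold used to highlight anomalies are illustrative assumptions, not the project's actual dashboard.

import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

# Hypothetical dataset of units with timestamps and precomputed anomaly scores.
df = pd.read_csv("scored_measurements.csv")
df["anomalous"] = df["anomaly_score"] > df["anomaly_score"].quantile(0.99)

# Colour-code the suspicious points; hovering reveals the unit identifier,
# and zooming and point selection come with the graph component out of the box.
fig = px.scatter(df, x="timestamp", y="anomaly_score",
                 color="anomalous", hover_data=["unit_id"])

app = Dash(__name__)
app.layout = html.Div([html.H3("Anomaly overview"), dcc.Graph(figure=fig)])

if __name__ == "__main__":
    app.run(debug=True)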

In the example image below, the user can easily find a couple of anomalous results by their colour. They can then hover the mouse over a particular result to get some identifying information, or zoom in to see the surrounding results in more detail.

Figure 2. Example of one of the Dash plots used for discovering anomalies in the dataset

 

Where to Go Next

One future research topic related to this development would be the use of AI in other target areas of the H/M behaviour watch, though without restricting ourselves solely to that topic. One of the CyberFactory#1 research partners from Portugal, ISEP, has already conducted research on the use of AI in human behaviour monitoring on the factory floor; its results could be enhanced with the visualization mechanisms used in our work or with the analysis capabilities that Houston Analytics possesses.

In conclusion, one of the main themes of the project is to improve the resilience of FoFs. The data gathered on a factory floor may be very complex and abstract; therefore, we need to process it to make it more understandable and actionable to us humans. In this particular case, we first used AI to analyse the data and then applied different visualizations to gain insight into it.

 

Acknowledgements

We wish to thank Jari Partanen from Bittium Wireless for providing the measurement data, and Tommi Havukainen and Ville Laitinen from Houston Analytics for creating the anomaly analysis database for use in our research work.

Authors:

Outi-Marja Latvala (research scientist at VTT), Mirko Sailio (research scientist at VTT), and Jarno Salonen (senior scientist at VTT).

 

 

AI Manipulation and Security – Who should be interested?

Artificial Intelligence supports businesses by producing knowledge for decision-making and, in some cases, by enabling predictive actions. Yet the use of AI comes not only with merits but also with some notable threats: like any other connected IT system, it is a lucrative target for potential malicious attackers. AI can be misled through manipulation into faulty decisions, or it can be used to spy on a company's confidential information. The best potential impact of AI is achieved through tight co-operation throughout the organization, where the board and C-level also comprehend the balance between the threats and the opportunities it poses.

 

I personally have a long-standing interest in the opportunities of AI, ranging from my studies during the previous century all the way to my current board roles in AI-focused companies such as Houston Analytics. From this perspective it is quite clear to me that applications of AI go through the same development path as any other radically industry-shaking innovation: AI will transform from a separate technology cherished by technocrats into an integral part of business. Timing is critical to delivering the best possible impact for the targeted change.

Companies often start their exploration of the AI landscape with separate proofs of concept that have little or no connection to actual business needs. If the desired result and the connection to the business environment are not defined, these exercises remain isolated and die once interest fades. The acquisition of the needed talent is sometimes a reason for this isolation of AI-related activities: talent can be bought from outside, and the results are then felt to be separate from the company. If talent is recruited into the company, the mistake can be to isolate the team too far from the business stakeholders, so the end results again seem too academic and the benefits remain low.

I see these as growing pains of AI on its path towards maturity, but also as an evolution in the thinking of decision makers as they try to understand AI's potential as a driver for change and as a tool to increase corporate intelligence. The role of AI must be understood at the strategic level in order to set the direction of activities correctly. It is a fundamental change in a company's modus operandi and in the way data assets are utilized. Changes of this magnitude cannot be carried out only by the individual efforts of in-depth experts or even by individual organizational units. They require the involvement and commitment of top management, with a common understanding of the desired direction and result.

AI is a common object of academic research projects, and it is also the main theme of many corporate innovation activities. CyberFactory#1, for example, is exploring AI's opportunities and threats in future factory environments. The project recently held a webinar on threats related to AI manipulation.

 

AI transforms the way decisions are made

AI is already embedded in many daily operations of companies. It will change the way decisions are made in a very fundamental way: the classifications made by AI models are decisions, which gives control to the AI. Decision-making is getting faster, and the quality of decisions is naturally becoming more uniform. This establishes an interesting new target for potential attackers seeking ways to interfere with decision-making.

AI enables the utilization of passive data assets in a whole new way. Data can be converted into intelligence by using it as training material for an AI model. The features embedded in the data become available to the organization through learning, increasing efficiency and foresight. In many companies AI is already on the front line as a customer-facing solution. Its behaviour shapes how well the company is perceived to address customer needs and expectations, so its performance becomes as critical to the company's image as the capabilities of traditional customer service. This makes AI a pervasive strategic element that impacts processes in multiple locations throughout the organization. Even though the AI itself is not accountable, the decisions, classifications and predictions it makes can steer several critical processes, which in turn have a wide impact on how a company operates.

 

The weak spots of an AI solution emerge in the interfaces

An AI solution is part of a company's normal infrastructure. As with all other integrated IT solutions, when analysing the security of an AI solution you need to focus on the spots where external influence is possible. AI solutions are especially interesting targets for influence because they are an integral part of the decision-making process. AI lives and develops on the data feed it receives, and the data and its sources are a natural vulnerability where attackers can try to influence process behaviour. It is impossible to prohibit all possible forms of influence in advance; therefore, companies need active measures to act and react while the process is running. Broadly, attack patterns can be divided into four main categories: poisoning, inference, extraction, and evasion.

Poisoning of the model leads to incorrect learning. In this scenario the attacker knows the sources of the training data and has the means to poison it with falsified material. The goal is to change the model already during the learning phase and thereby indirectly influence how the AI model will later, in production, steer the process. The developer of the model is responsible for understanding the data used for training: it has to be clean and reliable. It is important to comprehend the structure of the data, its characteristics and the forces potentially impacting its content. It is equally important to have a clear view of the main characteristics of unmanipulated data and the allowed variation ranges of its values.
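As a simple illustration of that last point (a sketch only, not a complete defence), incoming training batches can be checked against value ranges derived from trusted reference data before any retraining takes place. The tolerance below is an arbitrary example value.

import pandas as pd

def validate_training_batch(batch: pd.DataFrame, reference: pd.DataFrame,
                            tolerance: float = 0.05) -> pd.DataFrame:
    # Allowed ranges learned from trusted, unmanipulated reference data,
    # widened by a small fraction of each feature's observed span.
    span = reference.max() - reference.min()
    low = reference.min() - tolerance * span
    high = reference.max() + tolerance * span

    # Keep only rows where every value stays inside its allowed range;
    # the rest are held back for inspection instead of being used for training.
    in_range = ((batch >= low) & (batch <= high)).all(axis=1)
    dropped = int((~in_range).sum())
    if dropped:
        print(f"Held back {dropped} suspicious rows before retraining")
    return batch[in_range]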

An inference attack can reveal a company's confidential information. If the model is trained on a combination of private and public data, attackers can run their own classifier on the public data and in this way deduce characteristics of the company's internal data. The approach relies on the correlation created between internal and external data. If, as in many cases, the volume of available external data exceeds the volume of internal data, the correlation gets even stronger, revealing an even better view into the internal data. Inference attacks can be made more difficult by minimizing the use of external data and by breaking the statistical correlation between the data sets.

Extraction provides attackers with knowledge about the model being used. Their goal is to understand the behaviour of the model and, with that knowledge, either to reproduce a copy of the model for their own use or to create a view into the training data that was used to build the original model. A copy of the model gives attackers a view into the company's business model or into the process the model controls. These attacks are usually carried out through interfaces that have been left open. Mitigations for this attack pattern are strict access-rights management and monitoring of how the available interfaces are used.
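One way to put the monitoring side into practice is to count prediction queries per client and stop serving clients whose query volume looks like systematic probing of the model. The sketch below is illustrative only; the window length and threshold are assumed values that would need tuning per deployment.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600          # sliding window of one hour (assumed)
MAX_QUERIES_PER_WINDOW = 500   # assumed threshold; tune per deployment

_query_history = defaultdict(deque)

def allow_query(client_id: str) -> bool:
    # Track recent prediction queries per client and refuse further queries
    # from clients whose volume suggests systematic probing of the model.
    now = time.time()
    history = _query_history[client_id]
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()
    if len(history) >= MAX_QUERIES_PER_WINDOW:
        # In a real deployment this would also raise an alert for review.
        return False
    history.append(now)
    return True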

Evasion attacks make AI models delusional. The model is confused by input that has been laced with characteristics that make classifications fail. This is possible whenever a model interacts with external input. The added or embedded characteristics are often beyond human perception, like high-frequency sounds added in the background or confusing patterns added to pictures. To defend against this, you need to understand the characteristics of the input and its normal variations. You might also want to understand how the model behaves when receiving extreme inputs. A good mitigation practice is to pre-process the input with another model trained to recognize anomalies and to filter out data that falls outside the desired boundaries.
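A minimal sketch of that gating idea is shown below, assuming scikit-learn's EllipticEnvelope as the anomaly detector and a generic production model with a predict method; both are assumptions made for this example rather than a prescribed implementation.

import numpy as np
from sklearn.covariance import EllipticEnvelope

class GatedModel:
    # Pass inputs to the production model only if a separate detector,
    # trained on clean data, considers them normal.

    def __init__(self, production_model, clean_inputs: np.ndarray):
        self.production_model = production_model
        # The detector learns the "desired boundaries" from trusted inputs.
        self.gate = EllipticEnvelope(contamination=0.01).fit(clean_inputs)

    def predict(self, x: np.ndarray):
        accepted = self.gate.predict(x) == 1   # 1 = inlier, -1 = outlier
        results = np.full(len(x), None, dtype=object)
        if accepted.any():
            # Only inputs inside the learned boundary reach the real model.
            results[accepted] = self.production_model.predict(x[accepted])
        return results, accepted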

 

The journey to build smarter businesses continues

Awareness of the strategic importance of AI is gradually rising among companies, although during this transitional phase one still hears comments that AI is just another fancy technology among many. It is hard to argue against that, as AI is a technology. Yet it has a special feature: the capability to transform a company's data assets into knowledge that can steer processes and, through continuous learning, produce a cumulative competitive advantage for the company. If isolated from the business and its processes, AI can remain a series of trials or a separate technical approach to implementing single steps in traditional processes. But if used more broadly, it can become a strategic asset producing cumulative benefit.

This thought of AI as an integral part of strategy inspired me and my colleague Colin Shearer to start a series of articles, with the goal of eventually forming a book. You can follow our progress on this and other articles in our LinkedIn group: "Building Smarter Businesses: Guidance for company leaders on adopting and succeeding with AI". Our goal is to shed light on the strategic role of AI from the point of view of top management and to give an understanding of the opportunities and challenges related to it, while aiming to make businesses smarter.

 

Author: Seppo Heikura, Senior Advisor at Houston Analytics Ltd

 

Learn more:

Resilience Capabilities for the Factory of the Future Webinar

Join the LinkedIn Group: “Building Smarter Businesses: Guidance for company leaders on adopting and succeeding with AI”