Tackling Anomalies in Factory of the Future Networks with AI and Visualization

Visualization is key in making abstract data more understandable, usable and actionable. It helps us to communicate existing information more efficiently and to discover new trends or anomalies in large swathes of data. Visualizations often take advantage of almost automatic processes in our brains, like noticing red items in a sea of grey; hence, they decrease our cognitive load when interpreting the information. Combined with artificial intelligence, we can analyse and present even more complex data in a human-friendly manner.

CyberFactory#1 focuses on designing, developing, integrating and demonstrating a set of key enabling capabilities to foster optimization and resilience of the Factories of the Future (FoF). The project consists of 28 partners from seven countries, namely, Canada, Finland, France, Germany, Portugal, Spain, and Turkey.

Our research and development work described here was conducted primarily in the FoF dynamic risk management and resilience work package, which comprised the following four tasks related to the cybersecurity of the FoF, depicted in the figure below.

Figure 1. Structure of the FoF dynamic risk management and resilience work package


The collaboration was primarily based on the Human/Machine (H/M) behaviour watch task, with the objective of detecting anomalies on the factory floor with the help of sensors, cameras and other monitoring equipment. VTT, as the research partner, contributed skills in cybersecurity, network traffic monitoring and visualization, while the SME partner, Houston Analytics, brought extensive know-how in applying Artificial Intelligence and Machine Learning to different business and other sectors. In this particular collaboration, Houston Analytics provided the anomaly analysis and VTT tested different visualization options for it.

The Path from Anomaly Data to Detailed Visualizations

The system was based on real-world data: a six-month period of post-manufacturing quality measurement data provided by Bittium. The original database contained logged errors and quality data on defective products. Even though the data was not explicitly designed for machine learning (ML) usage, we could use it for autonomous learning.

The database was analysed with machine learning algorithms designed by Houston Analytics to detect anomalies. The objective of the analysis was to improve fault detection. Furthermore, we wanted to enable predictive maintenance, cut factory downtime and reduce the number of sub-par or rejected products. We used the measurement data to formulate an anomaly score, which enabled the system to report the most likely anomalies and let users see which elements influenced the score.

The visualization of the dataset became an important tool for making the wide attribute space of the measurement data, about 50 measurement vectors per tested unit, understandable for humans. The large number of dataset features prevents efficient modelling in human-readable form, but using feature vector transformations we can calculate a top-anomaly feature space. This turns the feature space into an anomaly space, which in turn is much easier to visualize in fewer dimensions.
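As a rough illustration of such a transformation (the actual Houston Analytics pipeline is not reproduced here, and the function and column names below are ours), the wide measurement space can be collapsed into a per-unit anomaly score that also records which measurements drove it:

```python
import pandas as pd

def anomaly_scores(df: pd.DataFrame, top_k: int = 2) -> pd.DataFrame:
    """Collapse a wide measurement space into a small anomaly space.

    Each measurement column is z-scored; a unit's anomaly score is the
    mean of its top_k largest absolute deviations, and the names of those
    measurements are kept so users can see what influenced the score.
    """
    z = (df - df.mean()) / df.std(ddof=0)
    dev = z.abs()
    score = dev.apply(lambda row: row.nlargest(top_k).mean(), axis=1)
    top_features = dev.apply(lambda row: list(row.nlargest(top_k).index), axis=1)
    return pd.DataFrame({"score": score, "top_features": top_features})

# Toy data: 8 tested units, 4 measurements, unit u7 off-spec on m2 and m4
df = pd.DataFrame(
    {"m1": [1.00, 1.10, 0.90, 1.00, 1.05, 0.95, 1.00, 1.00],
     "m2": [5.00, 5.20, 4.90, 5.10, 5.00, 5.05, 4.95, 9.00],
     "m3": [0.20, 0.22, 0.21, 0.20, 0.19, 0.21, 0.20, 0.20],
     "m4": [7.00, 7.10, 6.90, 7.00, 7.05, 6.95, 7.00, 7.80]},
    index=[f"u{i}" for i in range(8)])
result = anomaly_scores(df)
```

In this sketch the roughly 50-dimensional measurement space shrinks to a score plus a short list of culprit measurements per unit, which is what makes low-dimensional visualization practical.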

Visualizations can be used in two different ways: to convey a known story to an audience in a powerful way, or to discover new information within complicated data. In this use case, we needed to explore different kinds of options for visualizing the anomaly data to find new insights. We used open source tools to build the visualizations, namely Python with Pandas and Dash. Pandas is a widely used data analysis and manipulation library, and Dash is a framework for building dashboards and other data apps that are easily used via a web browser. There are plenty of built-in options for plotting data with Dash, and the web interface includes basic controls for things like zooming or selecting data points.

In the example image below, the user can easily find a couple of anomalous results by their colour. They can then hover the mouse over a particular result to get some identifying information, or zoom in to see the surrounding results in more detail.

Figure 2. Example of one of the Dash plots used for discovering anomalies in the dataset
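As an illustration of the data preparation behind such a plot (the project's actual dashboard code is not reproduced here; the `prepare_plot_frame` helper and its column names are hypothetical), the frame handed to a Dash scatter could carry a colour column for the red-versus-grey effect and a hover-text column with identifying information:

```python
import numpy as np
import pandas as pd

def prepare_plot_frame(scores: pd.DataFrame, threshold: float = 2.0) -> pd.DataFrame:
    """Add the columns a Dash/Plotly scatter needs: a colour that makes
    anomalies stand out from the grey mass, and hover text with
    identifying information for each result."""
    frame = scores.copy()
    frame["colour"] = np.where(frame["score"] > threshold, "red", "grey")
    frame["hover"] = frame.index.astype(str) + " | score=" + frame["score"].round(2).astype(str)
    return frame

scores = pd.DataFrame({"score": [0.4, 0.6, 2.7, 0.5]},
                      index=["u0", "u1", "u2", "u3"])
frame = prepare_plot_frame(scores)
# In a Dash app, this frame could then feed e.g.
#   px.scatter(frame, x=frame.index, y="score",
#              color="colour", hover_name="hover")
```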


Where to Go Next

One of the future research topics related to this development would be the use of AI in other target areas related to the H/M behaviour watch, but without restricting ourselves solely to that topic. One of the CyberFactory#1 research partners from Portugal, ISEP, has already conducted research on the use of AI in human behaviour monitoring on the factory floor, the results of which could be enhanced with the visualization mechanisms used in our work or the analysis capabilities that Houston Analytics possesses.

In conclusion, one of the main themes of the project is to improve the resiliency of FoFs. The data gathered on a factory floor may be very complex and abstract; therefore, we need to process it in order to make it more understandable and actionable to us humans. In this particular case, we first used AI to analyse the data and then applied different visualizations to gain insight on the data.



We wish to thank Jari Partanen from Bittium Wireless for providing the measurement data, and Tommi Havukainen and Ville Laitinen from Houston Analytics for creating the anomaly analysis database for use in our research work.


Outi-Marja Latvala (research scientist at VTT), Mirko Sailio (research scientist at VTT), and Jarno Salonen (senior scientist at VTT).



AI Manipulation and Security – Who should be interested?

Artificial Intelligence is supporting businesses by producing knowledge for decision-making and, in some cases, enabling predictive actions. Yet the usage of AI comes not only with merits; it also includes some notable threats: like any other connected IT system, it is a lucrative target for potential malicious attackers. AI can be misled through manipulation into faulty decisions, or it can be used to spy on a company’s confidential information. The best potential impact of AI can be achieved through tight co-operation throughout the organization, where the board and C-level also comprehend the balance between the threats and the opportunities it poses.


I have personally had a long-standing interest in the opportunities of AI, ranging from my studies during the previous century all the way to my current board roles in AI-focused companies like Houston Analytics. With this perspective it is quite clear to me that applications of AI go through a development path similar to any other radically industry-shaking innovation: it will transform from a separate technology cherished by technocrats into an integral part of business. Timing is critical to deliver the best possible impact for the targeted change.

Companies often start their exploration of the AI landscape with separate proofs of concept without a clear, or even any, connection to actual business needs. If the desired result and the connection to the business environment are not defined, these exercises remain isolated and die once the fleeting interest fades. The acquisition of the needed talent is sometimes a reason for this isolation of AI-related activities: if the talent is hired from outside, the results are felt to be separate from the company; if the talent is recruited into the company, the mistake can be isolating the team too far from the business stakeholders, so that the end results seem too academic and the benefits remain low.

I see these as growing pains of AI on its path towards maturity, but also as the evolution of decision makers' thinking as they try to understand AI's potential as a driver for change and a tool to increase corporate intelligence. The role of AI must be understood at the strategic level to set the direction of activities correctly. It is a fundamental change in a company's modus operandi and in the way data assets are utilized. Changes of this magnitude cannot be carried out only by the individual efforts of in-depth experts or even by individual organizational units. They require the involvement and commitment of top management, with a common understanding of the desired direction and result.

AI is a common object of academic research projects, and it is also the main theme of many corporate innovation activities. CyberFactory#1, for example, is exploring AI's opportunities and threats in future factory environments. The project recently held a webinar on threats related to AI manipulation.


AI transforms the way decisions are made

AI is already embedded in many daily operations of companies. It will change the way decisions are made in a very fundamental way: the classifications made by AI models are decisions, which hands control to the AI. Decision-making is getting faster, and the quality of decisions is naturally becoming more uniform. This establishes an interesting new target for potential attackers seeking ways to interfere with decision-making.

AI enables the utilization of passive data assets in a whole new way. Data can be converted into intelligence by using it as training material for an AI model. Features embedded in the data become available to organizations through learning, increasing efficiency and foresight. In many companies AI is already on the front line as a customer-facing solution. Its behaviour shapes the appearance of the company’s capability to address customer needs and expectations. In this role its performance becomes as critical to maintaining the company’s image as the capabilities of traditional customer service. This makes AI a pervasive strategic element that impacts processes in multiple locations throughout the organization. Even though AI is not itself accountable, the decisions, classifications and predictions it makes can steer several critical processes, which in turn have a wide impact on how a company operates.


The weak spots of an AI solution emerge in the interfaces

AI as a solution is part of a company’s normal infrastructure. As with all other integrated IT solutions, when analysing the security of an AI solution, you need to focus on the spots where external influence is possible. AI solutions are especially interesting targets for influence because they are an integral part of the decision-making process. AI lives and develops on the data feed it receives. Data and its sources are therefore a natural vulnerable spot where attackers can try to influence process behaviour. It is impossible to prohibit all possible forms of influence in advance; companies need active measures to act and react while the process is running. Attack patterns can basically be divided into four main categories: poisoning, inference, extraction, and evasion.

Poisoning of the model leads to incorrect learning. In this scenario the attacker knows the sources of the training data and has means of poisoning it with falsified material. The goal is to change the model already during the learning phase and thereby indirectly impact how the AI model will later, while in production, steer the process. The developer of the model is responsible for understanding the data that is used for training: it has to be clean and reliable. It is important to comprehend the structure of the data, its characteristics and the forces potentially impacting its content. Another important part is to have a clear view of the main characteristics of unmanipulated data and the allowed variation ranges of its values.
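The effect of poisoning can be made concrete with a deliberately simple toy model: a nearest-centroid classifier (an illustrative stand-in, not one of the models discussed in this article). Here an attacker injects falsely labelled training points, dragging one class centroid across the input space so that a later production-time decision flips:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_centroids(X, y):
    """'Train' a nearest-centroid classifier: one mean point per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Clean training data: class 0 around (0, 0), class 1 around (4, 4)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Poisoning: the attacker injects points near (4, 4) falsely labelled as
# class 0, dragging the class-0 centroid towards class 1's region.
X_poison = np.vstack([X, rng.normal(4, 0.5, (50, 2))])
y_poison = np.concatenate([y, np.zeros(50, dtype=int)])

clean = train_centroids(X, y)
poisoned = train_centroids(X_poison, y_poison)

probe = np.array([2.5, 2.5])          # clearly on class 1's side
before = predict(clean, probe)         # class 1 with the clean model
after = predict(poisoned, probe)       # class 0 after poisoning
```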

An inference attack can reveal a company’s confidential information. If the model is trained on a combination of private and public data, attackers can run their own classifier on the public data and thereby deduce characteristics of the company’s internal data. This approach relies on the correlation created between internal and external data. If, as in many cases, the volume of available external data exceeds the volume of internal data, the correlation gets even stronger, revealing an even better view into the internal data. Inference attacks can be made more difficult by minimizing the usage of external data and by breaking the statistical correlation between the data sets.

Extraction provides attackers with knowledge about the model in use. The attackers' goal is to understand the behaviour of the model and, with that knowledge, either to reproduce a copy of the model for their own use or to gain a view into the training data used to build the original model. A copy of the model gives attackers a view into the company’s business model or the process controlled by it. These attacks are usually carried out through interfaces that have been left open. Mitigations for this attack pattern include strict access-rights management and monitoring the use of the available interfaces.
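A toy sketch of extraction, again using an illustrative nearest-centroid classifier as the victim (not an actual CyberFactory#1 model): by probing an open interface on a grid of inputs and observing only the returned labels, the attacker rebuilds a functionally equivalent copy.

```python
import numpy as np

# Victim model: a black box the attacker can only query label-by-label.
victim_centroids = {0: np.array([0.0, 0.0]), 1: np.array([4.0, 4.0])}

def victim_query(x):
    return min(victim_centroids, key=lambda c: np.linalg.norm(x - victim_centroids[c]))

# Extraction: probe the open interface on a grid of inputs and rebuild a
# surrogate model from the observed labels alone.
grid = np.array([[a, b] for a in np.linspace(-2, 6, 20)
                        for b in np.linspace(-2, 6, 20)])
labels = np.array([victim_query(x) for x in grid])
surrogate = {c: grid[labels == c].mean(axis=0) for c in np.unique(labels)}

def surrogate_query(x):
    return min(surrogate, key=lambda c: np.linalg.norm(x - surrogate[c]))

# The stolen copy agrees with the victim almost everywhere.
test_points = np.random.default_rng(1).uniform(-2, 6, (200, 2))
agreement = np.mean([victim_query(x) == surrogate_query(x) for x in test_points])
```

The point of the sketch is that no internal access was needed: an open, unmonitored query interface alone suffices, which is why access-rights management and interface monitoring are the natural mitigations.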

Evasion attacks make AI models delusional. The model is confused by input that is laced with characteristics that make classifications fail. This is possible whenever a model interacts with external input. The added or embedded characteristics are often beyond human perception, like high-frequency sounds added in the background or confusing patterns added to pictures. To defend against this, you need to understand the characteristics of the input and its normal variation. You may also want to understand how the model behaves when receiving extreme inputs. A good mitigation practice is to pre-process the input with another model trained to recognize anomalies and to filter out data that falls outside the desired boundaries.
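A minimal sketch of evasion on the same kind of illustrative nearest-centroid classifier: a perturbation that is small relative to the spread of the data, and thus easy for a human to miss, pushes a genuine input across the decision boundary.

```python
import numpy as np

# Toy nearest-centroid classifier: class 0 near (0, 0), class 1 near (4, 4).
centroids = {0: np.array([0.0, 0.0]), 1: np.array([4.0, 4.0])}

def classify(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

x = np.array([1.8, 1.8])   # a genuine class-0 input

# Evasion: nudge the input towards the other class's centroid just far
# enough to cross the decision boundary (which here lies at x + y = 4).
direction = centroids[1] - centroids[0]
adversarial = x + 0.35 * direction / np.linalg.norm(direction)
```

An anomaly-detecting pre-processor of the kind suggested above would flag exactly this: an input that sits suspiciously close to, or just across, the boundary region while deviating from the normal input distribution.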


The journey to build smarter businesses continues

Awareness of the strategic importance of AI is gradually rising among companies, although during this transitional phase one still hears comments that AI is just another fancy technology among others. It is hard to argue against that, as AI is a technology. Yet it has a special feature: the capability to transform a company's data assets into knowledge that can steer processes and, with continuous learning, produce a cumulative competitive advantage for the company. If isolated from the business and its processes, AI can remain a series of trials or a separate technical approach to implementing single steps in traditional processes. But used more broadly, it can become a strategic asset producing cumulative benefit.

This thought about AI as an integral part of strategy inspired me and my colleague Colin Shearer to start a series of articles together, with the goal of finally forming a book. You can follow our progress on this and other articles in our LinkedIn group: “Building Smarter Businesses: Guidance for company leaders on adopting and succeeding with AI”. Our goal is to shed light on the strategic role of AI from the point of view of top management and to give an understanding of the opportunities and challenges related to it, while aiming to make businesses smarter.


Author: Seppo Heikura, Senior Advisor at Houston Analytics Ltd


Learn more:

Resilience Capabilities for the Factory of the Future Webinar

Join the LinkedIn Group: “Building Smarter Businesses: Guidance for company leaders on adopting and succeeding with AI”


Are there hidden costs of untrusted technology in 5G private networks?

In some European metropolitan areas, you can already see a 5G symbol on your mobile phone display. Nevertheless, most networks are still in the planning phase and mobile network operators (MNOs for short) have not yet made a final decision on which equipment provider they will purchase the network technology from. This applies even more to private corporate networks, so-called campus networks, despite the decision being potentially significant for the security of the factory of the future.

In many European countries, there are currently discussions about the economic possibilities connected with the new mobile communications standard 5G. These concern possible leaps in productivity, but also the security gaps and dependencies associated with greater networking that would arise if these new mobile networks were built with Chinese technology, for example. As a result of these discussions, some states have excluded untrusted network equipment suppliers from building domestic 5G networks or have set the regulatory hurdles so high that the result is tantamount to a ban. The question slowly moving up the agenda is: is it necessary to also regulate private networks with respect to the technology they use? From the perspective of an economist, this should only be the case if using untrusted technology has a detrimental effect on customers, suppliers or employees for which they are not compensated. Economists call such effects negative externalities.

Network equipment providers for 5G networks are expected to have a high level of trustworthiness in order to participate in an infrastructure that controls large parts of a factory of the future. It is particularly difficult for Chinese suppliers to establish this credibility. They are often seen as untrustworthy, operating from a country without sufficient rule of law, which exercises strict state control over their business conduct and management. Moreover, Western intelligence agencies, cybersecurity firms and the media regularly report that China is the country of origin for numerous attempts at industrial espionage.

If companies with such origins are nevertheless involved in the deployment of 5G networks in Europe, this will come at a significant cost. Only part of these costs is incurred by the company operating the network and choosing the network providers. A large part must be borne by other parts of society, which, in the absence of further regulation, have no influence on the choice of network provider.

Even when factories of the future decide which providers to procure 5G network technology from, they do not take all costs into account – either because these are hidden costs that will be incurred later (life-cycle costs) or because they are borne by others than the network operators (external costs). Of course, many security-related costs will also occur if 5G networks are built exclusively with trusted technology. However, these costs will be lower, because a trusted provider is a cooperating partner in securing the network from external influences.

If non-trusted providers are a part of a private 5G network, additional efforts will have to be made

  • to test and verify the software updates provided.
  • to share information with other private network operators, government agencies responsible for network security, and with suppliers and customers of the cyber factory of the future. New information sharing and analysis centers need to be established among industry participants.
  • to build additional sensors into the network to monitor network traffic and detect unintended data flows to third parties.
  • to develop and integrate new AI tools into network management as an early warning system for covert data exfiltration.
  • to devote resources to enforce regulatory policies and compliance to compensate for the lack of trust in the network.
  • to cover damages caused by cyber-attacks by spending (more) money on cyber insurance to deal with the financial consequences.

If a 5G network contains untrusted technology, more of the burden of protecting the data and machines controlled over the network falls on the operator, but potentially also on other parts of their value chain. The latter will have to spend more resources on classic cybersecurity tools, or will have to leave the value chain that makes up the cyber factory of the future and thus forgo potential productivity gains.

European 5G technology providers will have a hard time competing with companies that do not need to make a profit in order to stay in the 5G business – for example because they are backed by a state for strategic reasons. To internalize the external costs and to guarantee a level playing field, regulation should be considered not only for nationwide networks but also for private 5G campus networks. The goal is either to exclude non-trusted technology or to require operators of campus networks to invest in the necessary additional protection when using non-trusted technology.

Authors: Johannes Rieckmann and Tim Stuchtey, BIGS

A more detailed description and estimate of the hidden costs of untrusted vendors in 5G networks can be found in the policy paper and the country studies for Germany, France, Italy and Portugal. The virtual presentation of the policy paper takes place on the 16th of March at 2pm (CET).

The Misuse of the Use-Cases of CyberFactory#1

A Misuse-Case (MUC), which is derived from a Use-Case (UC)*, describes the steps and scenarios, which a user/actor performs in order to accomplish a malicious act against a system or business process. They are still UCs in the sense that they define the steps that a user performs to achieve a goal, even if the goal is not a positive or a desired one from the perspective of the business process or system designers.

A MUC covers, for example:

  • Safety hazards, whether originating from security vulnerabilities or inherent to the novel technologies developed in the project,
  • Security attacks by outsiders,
  • Attacks by workers,
  • Insider threats, giving the required attention to economic, psychological and societal aspects.

Figure 1: Misuse-Case Task Approach

To document the right MUCs, the project team first worked on selecting an appropriate approach. In the specific case of CyberFactory#1 (CF#1), a two-phased approach was chosen: first, generic and independent risks were collected, and these were then consolidated into MUCs.

Within CF#1, the risk assessment considered the following aspects:

  • Impact Level (categorized in high, medium, low)
  • Probability Level (categorized in high, medium, low)
  • Risk Source, Risk Source Type and Risk Location
  • Attack Vector
  • Vulnerability
  • Target Asset and Target Asset Type
  • Threat Agent and Threat Agent Type
  • If applicable: References (CVE, etc.)
  • Risk Result (Impact Detail), Outcome and Impact Nature

Example risk: “Lack of OT capacity in current IT cybersecurity products (mainly SIEM)”

  • Impact Level: High
  • Probability Level: Medium
  • Risk Source | Type | Location: SIEM & other IT based cybersecurity products | Legacy Infrastructure | FoF
  • Attack Vector: Technical security attacks against OT solutions
  • Vulnerability: Lack of OT interoperability for existing IT based SIEMs and existing cybersecurity products
  • Target Asset | Type: OT Systems | FoF
  • Threat Agent | Type: Hackers & hacking software | Hacker
  • References: N/A
  • Risk Result | Outcome: Stop of production | loss of safety
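The attribute scheme above can be captured in a simple record type. The sketch below is illustrative: the field names are our paraphrase of the listed aspects, not the project's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One risk record with the aspects used in the CF#1 assessment.

    Field names are illustrative, not the project's actual schema.
    """
    impact_level: str                # high / medium / low
    probability_level: str           # high / medium / low
    risk_source: str                 # source | source type | location
    attack_vector: str
    vulnerability: str
    target_asset: str                # asset | asset type
    threat_agent: str                # agent | agent type
    references: list = field(default_factory=list)   # e.g. CVE IDs
    impact_detail: str = ""          # risk result | outcome

# The example risk above as a record:
siem_gap = Risk(
    impact_level="high",
    probability_level="medium",
    risk_source="SIEM & other IT based cybersecurity products | Legacy Infrastructure | FoF",
    attack_vector="Technical security attacks against OT solutions",
    vulnerability="Lack of OT interoperability for existing IT based SIEMs",
    target_asset="OT Systems | FoF",
    threat_agent="Hackers & hacking software | Hacker",
    impact_detail="Stop of production | loss of safety",
)
```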

After the first stage, a total of 153 risks had been identified. Here are the statistics of those risks by their level and source type:

Figure 2: Risks by Risk Level

Figure 3: Risks by Source Type

Following the selected methodology and the identified risks, one or more misuse-cases were selected and further defined for each use-case within the CyberFactory#1 project. In particular, the risks were connected to the use-cases and their implementation where no risk mitigation was yet available. The risks are assessed and listed by source type; although many risks relate to the new use cases, the legacy infrastructure also contributes quite a number of new risks that will be addressed within the project.

What’s next?

As the project team progresses through the main work packages and tasks, we keep the misuse-cases in mind when designing, implementing and testing, preventing them as a by-product in the scope of a security-by-design approach.

Author: Murat Lostar, CEO & Founder, Lostar Inc.

*To learn more about our use-cases, see our article on it here

The Use-Cases of CyberFactory#1

The key problem addressed by CyberFactory#1 is the need to conciliate the optimization of the supply and manufacturing chain of the Factory of the Future (analyzed by means of Use-Cases) with the need for security, safety and resilience against cyber and cyber-physical threats (analyzed by means of Misuse-Cases).

Therefore, in order to study this key problem, ten pilots from the aerospace, automotive, machinery and electronics industries have been developed around several use-cases (UCs). These UCs were then described and matched with the Key Capabilities defined in the CyberFactory#1 project proposal plan (technical value chain items):

UC1. Airbus Defence and Space (Spain):

At Airbus, three sub-use cases are defined for the deployment of the Industrial Internet of Things (IIoT) for flexible management and optimization of manufacturing as well as assembly lines within the aerospace industry.

  • UC1.1 Description – Roboshave (Tablada Site): Connectivity of the Roboshave station to the IIoT to improve traceability, supervision and maintenance of the processes.
  • UC1.2 Description – Autoclave (CBC Site): Real-time monitoring and quality process automation across the IIoT for the process of composite parts curing and forming within Autoclaves area.
  • UC1.3 Description – Gap Gun (San Pablo Sur Site): Automation of the data acquisition using a Gap Gun device (smart tool for gaps and steps measuring) with a centralized data storage and the possibility for further data analysis.



UC2. S21Sec (Spain):

This UC addresses Human/Machine collaboration in manufacturing for quality control.

  • UC Description: The evolution of TRIMEK’s METROLAB solution, which focuses on quality control laboratory services, towards Zero Defect through its integration with fully automated processes within the auxiliary automotive industry (controlling environmental variables and interconnecting the shop floor). This means an overall enhancement of the Metrolab scenario (incorporation of several cybersecurity tools/services, including cobots).



UC3. Bittium (Finland):

This UC is concerned with a cyber-secure networked supply chain and information architecture.

  • UC Description: The goal is to create a consistent and secure information architecture and develop processes as well as information tools, which are able to support digital partnered manufacturing and deliveries, in order to achieve supply chain optimization.

UC4. High Metal (Finland):

This UC will develop a highly automated food production line of the future (in this particular case for cheese making).

  • UC Description: The High Metal UC introduces a new integrated platform-based concept for cheese manufacturing that enables: better flexibility for product quality changes, scalability for production increases, shorter installation as well as production start-up time and better efficiency and easier maintenance compared to traditional dairy production lines.

UC5. IDEPA (Portugal):

This UC will digitalize a textile production line (legacy machines) for the automotive industry.

  • UC Description: The goal is to increase efficiency (and also security, safety and resilience) focusing on the development of a new generation of ERP tools, considering Security Awareness and providing Data & Knowledge as a service. This should be achieved along with IDEPA business transformation (connectivity of legacy machines).

UC6. VESTEL (Turkey):

This UC is concerned with the optimization of material handling in PCB assembly lines.

  • UC Description: The objective is to move from conventional material handling, managed by operators without data gathered from the machines (no traceability), to a new setup in which the machines in the electronic board assembly line are integrated with the ERP system, warehouse and carrier robots, in order to optimize production and improve traceability, while also considering cybersecurity aspects.



UC7. Bombardier Transportation (Germany):

This UC aims to optimize the material supply for rail vehicle production.

  • UC Description: The main objective of this UC is to optimize the material supply for railway vehicle production by building an automatic supply system from the warehouse directly to the workstations, providing safe and automated delivery of material at its various physical levels (many different customer projects are carried out in parallel at the Bautzen plant in Germany).


UC8. InSystems (Germany):

This UC addresses the optimization of an autonomous transport robot fleet (ProANT).

  • UC Description: This UC is focused on the collection of data from normal operations of a transport robot fleet that can be used for detecting individual patterns via ML and predictive systems. This information can be also used for logistics optimization, and in a dynamical way for adaptation to continuous changes.


What is the general purpose of the use cases within the CyberFactory#1 project?

These use cases contribute to the creation of the Factory of the Future (FoF) concept, which is the key goal of the CyberFactory#1 project. The main objectives addressed by the different use-case developments, which may help to create this FoF concept, can be summarized as follows:

  • Automation of E2E processes across M2B & B2M communications.
  • Real time (or near real time) situational awareness and factory systems monitoring.
  • Enhanced visibility and traceability of the activity within the Factory.
  • Optimization and secure communications for Supply Chain (Distributed Manufacturing).
  • Advanced data analytics and Machine Learning for processes improvement.
  • Connectivity and integration of the Factory systems (Factory as a System of Systems).
  • Communications security and global security management.

Author: José Antonio Rivero Martinez, Automation for Industrial Means, Industrial Means Dpt. – Manufacturing Engineering, Airbus Defence and Space

PS: If you are interested in more depth in one or more of the UCs, we are happy to put you in touch with the relevant UC owner(s). For all inquiries, please use the following email address: info@cyberfactory-1.org.


New Business Models for the Creation of Value in the Factory of the Future

One of the main objectives of CyberFactory#1 is to devise innovative ways of delivering value to the several industry sectors involved in the project through the enhancement of optimization and resilience in production environments. The project has recently delivered a set of new business models featuring value propositions that go beyond traditional approaches, based on intelligent product servitization (i.e. transforming product sales into service provision), knowledge extraction from data and a focus on intellectual property (i.e. enhancing the exploitation and protection of industrial intellectual property).

Innovative business models for eight industry sectors

The project maps eight paradigmatic sectors and actors in the Factory of the Future (FoF) value chain, divided into two main value chain stages: users (i.e. industrial sectors which represent the end users of the new technologies and approaches developed in CyberFactory#1 – Figure 1) and suppliers (i.e. industrial sectors which provide enabling technologies to be applied in the end user activities – Figure 2).

Figure 1 – CyberFactory#1 FOF Value Chain – Users

Figure 2 – CyberFactory#1 FOF Value Chain – Suppliers

For each of these sectors, the CyberFactory#1 project developed a business model. The work, coordinated by the respective leading industry partner in the project, started with a rigorous analysis of the internal and external environments (including competition and market player analysis), which was consolidated into a business model canvas. The canvas was then extended to a full-fledged business model, with input from the CyberFactory#1 partners throughout the process.

The business models were presented at the ICTurkey event in Istanbul (July 5th 2019) by the project coordinator, further raising the interest of potential external partners in the project, in particular concerning the application and exploitation of the project technologies.

Data as a basis for services

The “factory of the future” paradigm envisions a production environment in which massive amounts of data flow bottom-up from the shop floor to the highest levels of management. This data yields great value, since it contains useful information that can be used to increase efficiency and performance as well as to enhance decision-making. However, this data flow must be protected against unintended use and has to be trustworthy.

The new business models focus on the exploitation of data to extract valuable information and insights, making it an integral part of the transformation of products into services. Thereby, they provide increased value to industrial organizations and their customers. The exploitation of data lakes is at the core of the CyberFactory#1 business models.

Data exploitation is the key to more profitable business models based on service provision, which rely on a continuous flow of value to customers instead of discrete product sale transactions (i.e. sales of distinct items). This continuous flow of value is delivered through the “as-a-service” paradigm, meaning that high-value services can be provided in a continuous way. Intelligence can be offered “as-a-service” through on-demand knowledge discovery from data, as well as Artificial Intelligence-as-a-service (for example, provision of on-demand insight reports on production optimization). Management applications such as Enterprise Resource Planning (ERP) systems and security platforms can benefit from enhanced data value exploitation and can themselves also be provided “as-a-service” (for example, manufacturing management-as-a-service).

Lower adoption costs, greater flexibility, higher value

Servitization supports new revenue streams, as it also enables per-mile billing, capped (“plafond”) billing, flat rates or “per call” billing. This lowers adoption costs, decreases risks for both producers and consumers, and grants higher flexibility as well as scalability, meaning that organizations become more capable of reacting efficiently to changes in markets.

Enhanced security also empowers service-based paradigms, as they rely on more frequent exchanges of data between value chain actors. Ensuring security and trust between actors makes the value chain more resilient and capable of delivering value even in the event of internal or external cyberattacks, while also protecting intellectual property and business-critical information. This is especially important for enhancing protection against counterfeit goods, strengthening brands and safeguarding IP-driven competitive advantages.

Higher flexibility also opens the door to customization services (“mass customization”), allowing both industrial suppliers and users to lower production costs while still satisfying ever-changing customer requirements. Intelligent servitization based on data exploitation, higher flexibility, and enhanced security and trust drives value creation in next-generation industrial organizations, specifically in key sectors of European industry.

Bringing benefits to European Industry

By focusing on core sectors of European industry, the CyberFactory#1 project also aims to build a community of manufacturing companies that can partner up with the project consortium and get involved. This is an excellent way of strengthening ties, sharing knowledge and raising awareness of the benefits of the project's developments, including being part of enhanced value chains and considering new approaches to market and value creation.

Authors: João Mourinho, Innovation Manager, Sistrade Software Consulting & Américo Nascimento, Research/Consultant, Sistrade Software Consulting


The Project DNA of CyberFactory#1

Achieving efficient and resilient Factories of the Future (FoF)

This is the aim of CyberFactory#1 over its three-year project duration. The project is the outcome of a user-driven investigation into the security implications of the digital transformation of aerospace manufacturing lines. This investigation was carried out in 2017-2018 within the scope of an eponymous multifunctional working group within Airbus, including manufacturing and security professionals from the Airbus Commercial Aircraft and Airbus Defence and Space divisions. The project idea was drafted by mid-2017, and a proposal was brought to the ITEA cluster for extension to broader industrial sectors facing similar digital transition challenges, such as the rail systems, automotive, machine manufacturing and textile industries.

A consortium of a total of 31 partners from France, Canada, Finland, Germany, Portugal, Spain and Turkey was established, involving a balanced set of industrial pilots, technology providers and research organizations. This led to the definition of a large set of use cases and misuse cases targeting the convergence of industrial process optimization and manufacturing system resilience challenges. The consortium, managed by Airbus CyberSecurity, defined a set of twelve key capabilities necessary to achieve efficient and resilient FoF. These capabilities belong to three capacities: 1) FoF modeling and simulation, 2) FoF monitoring, control and optimization, and 3) FoF security and resilience. For each of these three capacities, a set of four capabilities addresses, respectively, the technical, economic, human and societal dimensions of digital transition.

This equal consideration for the technological and non-technological aspects of digital transition makes our project original and highly applicable in operational environments, compared to the many techno-centric projects currently blooming in the Industry 4.0 area. The equal consideration of both optimization and resilience challenges likewise ensures an adequate cost/benefit rationale in the selection of organizational and technological set-ups for industrial transformation.

The project was kicked off on 18th December 2018 with support from the Spanish funding authority. Finland, Canada, Germany, Portugal and Turkey later confirmed their support, while the UK and France remain self-funded participants at this stage. Close to one year after the project start, CyberFactory#1 has already successfully delivered a set of ten detailed pilot use cases and as many misuse cases, covering topics such as remote asset monitoring, statistical process control, robot fleet optimization, real-time inventory and predictive maintenance, and threats such as rogue device insertion, industrial data spoofing, distributed denial of service and adversarial machine learning. Upcoming is the definition of generic secure and optimized architectures for Factories of the Future.


Author: Adrien Bécue, Project Coordinator, Head of Innovation and R&T, Airbus CyberSecurity