Controlling Supervised Industry 4.0 Processes through Logic Rules and Tensor Deformation Functions

Industry 4.0 solutions are composed of autonomous engineered systems where heterogeneous agents act in a choreographed manner to create complex workflows. Agents work at low level in a flexible and independent manner, and their actions and behaviour can only be sparsely manipulated. Besides, agents such as humans tend to show very dynamic behaviour, and processes may be executed in a very anarchic, but still correct, way. Thus, innovative and more flexible control techniques are required. In this work, supervisory control techniques are employed to guarantee the correct execution of distributed and choreographed processes in Industry 4.0 scenarios. At prosumer level, processes are represented using soft models where logic rules and deformation indicators are used to analyse the correctness of executions. These logic rules are verified using specific engines at business level. These engines are fed with deformation metrics obtained through tensor deformation functions at production level. To apply deformation functions, processes are represented as discrete flexible solids in a phase space, under external forces representing the variations in every task's inputs. The proposed solution presents two main novelties and original contributions. On the one hand, the innovative use of soft models and deformation indicators allows the implementation of this control solution not only in traditional industrial scenarios where rigid procedures are followed, but also in other future engineered applications. On the other hand, the original integration of logic rules and events makes it possible to control any kind of device, including those which do not have an explicit control plane or interface. Finally, to evaluate the performance of the proposed solution, an experimental validation using a real pervasive computing infrastructure is carried out.


Introduction
Industry 4.0 (Lasi et al., 2014) refers to a new industrial revolution where current production solutions and robots are being replaced by Cyber-Physical Systems (Bordel et al., 2017b) and other innovative engineered systems such as pervasive computing or sensing infrastructures (Ebling and Want, 2017). Traditionally, in industrial scenarios, managers define, at a very high abstraction level, the processes to be executed and supported by production systems (Sánchez et al., 2016). Dashboards and other graphic environments are employed for this purpose. In these models, the actions to be performed at low level are indicated, together with their input parameters. Connections among tasks and their temporal organization are strict and cannot be modified. Moreover, valid ranges for task outputs may sometimes also be defined, as common industrial systems and robots are very predictable and precise (Bordel et al., 2018c). This top-down approach employs technologies such as YAWL (Van Der Aalst and Ter Hofstede, 2005) or BPMN (Geiger et al., 2018) to create and validate processes, and assumes that every low-level infrastructure follows a request-actuation design paradigm. In this paradigm, every low-level system is provided with an interface through which requests may be received. After each request, the hardware system performs a certain action or actuation and, after that, stops its operation to wait for a new request (Bordel et al., 2018b). These systems, then, are totally controllable by external agents and perfectly match the request sequences that are created through modelling technologies such as YAWL (Bordel et al., 2017a).
However, Industry 4.0 scenarios are different. Cyber-Physical Systems and pervasive computing infrastructures are not typically provided with open interfaces, and they tend to act autonomously according to deterministic algorithms or even learning technologies (Bordel et al., 2020b). Dense environments where thousands of agents with heterogeneous capabilities are deployed and working together are the most common application scenarios for Industry 4.0. This context, furthermore, can get more complex if humans are considered (Bordel et al., 2017c). In fact, Industry 4.0 is inclusive, and many manufacturing companies produce handmade products where human labour is essential. Production systems including humans are even more heterogeneous, as people's behaviour is very variable and dynamic. And, obviously, human agents are not externally controllable (Bordel et al., 2017c). Thus, in Industry 4.0, complex tasks and services are provided through the choreographed coordination of heterogeneous agents acting in an autonomous way (Bordel et al., 2018a). This bottom-up approach is also compatible with computational processes defined at high level, if decomposition and transformation engines are considered. However, it is a very costly and inefficient approach, as many false negative alarms are triggered: when autonomous agents are included, processes may be executed in a very anarchic but still correct manner.
Therefore, traditional industrial control mechanisms are not valid for these scenarios, and innovative technologies are needed (Bordel et al., 2020a). Thus, in this work we propose a new supervisory control mechanism for Industry 4.0 scenarios, matching the special characteristics of distributed processes supported by coordinated autonomous and heterogeneous agents. This new technology defines processes (at prosumer level) using soft models where flexible logic rules and general deformation metrics guarantee the correctness of executions. These rules are verified in a passive manner at business level using specific engines. These engines receive information (events) from lower layers and run a validation procedure to analyse whether minimum rules are being met by low-level agents. These events are enriched with information at production level, describing whether tasks are being correctly executed or not. Finally, at production level, the physical parameters of tasks under execution are monitored. Using tensor deformation functions (Bordel et al., 2017a), the global similarity between the expected executions and the real actions is measured. To allow these calculations, processes are represented as discrete flexible solids in a multidimensional phase space. This generalized approach is focused on reducing the false negative alarms observed in traditional control systems. All information is acquired through observation and recognition mechanisms, which are not described in this paper but already exist (Bordel et al., 2019).
The rest of the paper is organized as follows: Section 2 introduces the state of the art on control mechanisms for Industry 4.0 scenarios. Section 3 presents the proposed technology, including the three abstraction levels (prosumer, business, and production) and their associated mechanisms. Section 4 provides an experimental validation of the proposal. Finally, Sections 5 and 6 discuss the results of this experimental validation and present the conclusions of our work.

State of the Art
Industry 4.0 is one of the most popular research topics nowadays (Lu, 2017); thus, many different control solutions for these new scenarios have been reported. In fact, almost every existing control mechanism has already been applied and integrated into Industry 4.0 technologies and scenarios: from traditional monolithic instruments (Kretschmer et al., 2016) to the most modern intelligent algorithms (Meissner et al., 2017). Besides, a large catalogue of specific control solutions for Industry 4.0 scenarios has also been reported.
Two basic types of control solutions for Industry 4.0 have been described: supervisory control (Wonham and Cai, 2019) and embedded control (Aminifar, 2016) solutions. In supervisory control mechanisms, a surveillance system monitors the behaviour of the hardware infrastructure and intervenes in the production process if it goes outside an acceptable variation margin. On the other hand, embedded control mechanisms are integrated into the production processes themselves and continuously regulate the evolution and behaviour of the hardware platform.
One of the most popular supervisory control mechanisms in Industry 4.0 are SCADA (Supervisory Control And Data Acquisition) systems (Mohammad et al., 2019). SCADA solutions can manage heterogeneous infrastructures through complex and heavy modular software tools (Calderón Godoy and González Pérez, 2018). Traditionally, SCADA solutions were built as monolithic platforms where specific industrial protocols, such as OPC, were employed (Boyer, 2016). Initial applications for Industry 4.0 also followed this paradigm (Merchan et al., 2018). Nevertheless, new and different modules for slightly distributed SCADA solutions have recently been reported (Branger and Pang, 2015), and even cloud-based platforms may be found (Sajid et al., 2016). Furthermore, some SCADA mechanisms for industrial scenarios based on the Internet-of-Things have been described (Wollschlaeger et al., 2017). The main problem of SCADA systems is their security weaknesses: many security issues affecting SCADA systems in Industry 4.0 scenarios have been reported (Igure et al., 2006; Chhetri et al., 2017). Moreover, in largely distributed production systems, communication delays usually create complex malfunctions in SCADA control functions. Thus, different stability analyses (Foruzan et al., 2016), mathematical models to compensate delays (Silva et al., 2018), and evolution analyses to detect problems (Gu et al., 2019) have been reported. In any case, these problems are still present, and only distributed solutions including a small number of devices are working nowadays.
Other supervisory control solutions for future industrial scenarios based on autonomous agents have been described. These mechanisms are typically defined as discrete event systems (Wonham et al., 2018) and are focused on autonomous robots (Gonzalez et al., 2018) and similar mobile machines (Roszkowska, 2002), although other applications to real-time solutions (Sampath et al., 1995) and fault diagnosis (Moreira and Basilio, 2014) may be found. Logic rules have also been employed to implement robot navigation frameworks (Kloetzer and Mahulea, 2016), and different mechanisms to compute optimal (Fabre and Jezequel, 2009) or clean (Iqbal et al., 2012) paths in robotized Industry 4.0 have also been reported. Works in this area also evaluate, in a formal way, the scalability (Hill and Lafortune, 2017) and software characteristics (Goryca and Hill, 2013) of supervisory software solutions for Industry 4.0.
Contrary to all these previous works, the problem and scenario addressed in this work are more general. First, we consider not only robots and similar devices but also pervasive infrastructures and humans, which introduces an important challenge. Besides, all previous supervisory control solutions assume there is an interface through which the surveillance system can intervene and act in the production process; however, in most future engineered solutions (and, of course, when humans are considered), that is not a realistic assumption.
Embedded control solutions can be classified into two different groups: vertical and horizontal architectures. Vertical architectures are, probably, the genuine approach for Industry 4.0 applications. In this approach, computational processes are transformed and decomposed (Ivanov et al., 2016a; Bagheri et al., 2015), so executable units may be transferred and delegated to remote production infrastructures or even cloud services (Bordel et al., 2018c). On the contrary, horizontal architectures are traditional embedded control paradigms which have been adapted to Industry 4.0 (Lalwani et al., 2006). Feedback control systems are the most traditional approach. In these mechanisms, a complex production system is represented through a block diagram where different key indicators are calculated at each step. Feedback loops guarantee that, if any deviation is detected, that information is considered in previous steps to correct the situation. Feedback control solutions for traditional linear production schemes (Bensoussan et al., 2009; Lin et al., 2018) are very popular, but additional proposals for new nonlinear schemes (Spiegler and Naim, 2017; Zhang et al., 2019) may also be found. Furthermore, different studies about how randomness (Garcia et al., 2012), disturbances (Scholz-Reiter et al., 2011), and fluctuations (Yang and Fan, 2016) affect the global behaviour and performance of these feedback control mechanisms have been published. On the other hand, optimal control applications have been reported; optimal control is the most common technique in horizontal control architectures for Industry 4.0.
In these scenarios, cloud production systems are the most common application for these technologies (Frazzon et al., 2018; Rossit et al., 2019). Optimal control is characterized by a process evolution that is not allowed to enter certain states or areas of the phase space. With this view, solutions for optimal planning (Sokolov et al., 2018) and efficient activity scheduling in Industry 4.0 may be found. As in previous topics, works on robustness and resilience analyses have also been reported (Aven, 2017; Ivanov et al., 2016b).
The main problem of embedded control mechanisms in Industry 4.0 applications is that they require total access to every component in the industrial system, as control modules must be integrated into every component and all of them must be interconnected to generate the expected global behaviour. Contrary to these systems, the approach proposed in this paper is also valid for proprietary solutions (very usual in industrial applications) which cannot be accessed or easily modified, as a single transversal supervisory component supports the whole control policy.
Finally, control mechanisms based on modern technologies such as artificial intelligence and fuzzy logic may be found (Diez-Olivan et al., 2019). Although fuzzy logic is not a recent technology (Nguyen et al., 2018), new solutions for industrial scenarios have recently been reported. These solutions are scarce, but some new proposals for nonlinear and event-driven processes may be found (Pan and Yang, 2017). In this context, case studies about real implementations are also a relevant contribution (Theorin et al., 2017; Golob and Bratina, 2018). These technologies are very powerful but are not flexible enough to deal with the anarchic executions caused by human behaviour, as learning algorithms need to have previously observed every execution model that is to be accepted. Table 1 summarizes the main advantages and disadvantages of the previously described works, and their differences in terms of the problem addressed.

New Supervisory Control Solution for Distributed Industry 4.0 Processes
This section describes the new proposal for supervisory control in Industry 4.0 scenarios, where processes are described using soft models, logic rules, and deformation functions and metrics. Section 3.1 gives a global overview of the proposed solution. Section 3.2 presents the new soft models used to represent processes at prosumer level. Section 3.3 analyses how logic rules (at business level) may be employed to verify, in a flexible manner, the process executions performed by autonomous agents and humans. And Section 3.4 describes the proposed technologies for the production level, where deformation functions and metrics are deployed to evaluate workflows and feed the verification engines at business level.

General Overview
Figure 1 shows the proposed architecture for the described new control mechanism. At the highest level, managers define processes (1) using any of the existing process description technologies, such as BPMN or YAWL. Industrial production processes are usually very large and complex, including many activities and tasks which may be partially related (or not). Besides, some technologies enable defining quality indicators for tasks' outputs, so the activity execution may be rejected if those indicators are not met. Process definitions at this level (named the prosumer level, as managers act as producers and consumers in the proposed technology) are typically graphic, and only relations among tasks and quality measurements are described.
The process model based on standard technologies (hereinafter we consider the YAWL language) is then analysed and decomposed (2) to soften and relax the model. The idea is to transform a hard or rigid process definition, where deformations and variations are not expressed (and, then, not accepted by the control solution), into a soft model (3) where the global quality and organization restrictions are equivalent but dynamic changes, variations and deformations are admissible.
To perform this transformation, a specific engine (2) analyses the YAWL process model and identifies the key branches or subprocesses whose exterior structure cannot be modified. For example, a subprocess generating as output a subproduct which is the input of a second subprocess must be executed strictly before that second subprocess. However, the tasks inside both subprocesses could be executed in any order, and the same subproduct would still be created.
After this analysis and decomposition procedure, two different soft process descriptions are generated. The first one is a global description based on logic rules (3). To generate this description, each subprocess is represented by a discrete state, and the global state map is regulated by a set of logical rules extracted and instantiated from a logic predicate repository (4), where different general conditions are stored (temporal order, necessary conditions, etc.). These logical rules represent the minimum restrictions that the production process must fulfil, while all additional restrictions artificially introduced by the limitations of graphic process representations are removed. The second one is a set of reports (5) where deformation metrics about each subprocess are indicated. The quality indicators of each task are considered together, and three global metrics for the whole subprocess are obtained: stiffness, strength and ductility.
The global soft description is transferred to a verification engine (6), where logical rules are validated (or not) in real time. This global understanding of the process takes place at business level, where information from managers (process models) and from the autonomous hardware platforms is matched to determine whether production processes are being correctly executed or not. This verification engine takes two information sources as input: events directly generated by hardware devices (7) following an event-driven paradigm, and events generated (8) by the control components in lower layers (the production layer). These events are employed to validate the logical rules and determine whether the minimum conditions for the process execution to be considered valid are being met.
All reports about subprocesses (9) are transferred to a lower level (the production level), where low-level (physical) information is collected to evaluate whether the process deformation is above the proposed global metrics. To do that, information from different recognition technologies (human-oriented, device-oriented, etc.) is collected (10). These recognition technologies are not addressed in this paper, as any of the previously existing mechanisms may be integrated (Bordel et al., 2019; Bordel and Alcarria, 2017). All information sources are integrated (11) to perform a global evaluation of the process deformation. To obtain those global metrics, a mathematical algorithm (12) is employed. This algorithm is derived by understanding production processes as discrete flexible solids in a generalized phase space. The dynamic behaviour of autonomous devices is represented as external forces acting on the solid (process) and deforming it. Using different deformation functions (and the provided deformation metrics), it is evaluated whether the final process execution is similar enough to the model to be considered a valid execution. A decision-making engine (13) performs these functions. A report (14) with an event-like result (whose format may be adapted in an event generation module) is sent to the rule verification engine (6), as one of the information sources described before.

Process Description at Prosumer Level
Using the YAWL language, managers may define processes in a very easy manner with graphic instruments (see Fig. 2). At the highest level (prosumer level), managers (prosumers) define a complex production process or workflow W. This workflow is an ordered sequence T of M_T tasks, connected through M_E oriented edges E (1). Each task t_i ∈ T is labelled with a discrete timestamp n_i indicating the planned temporal order for the workflow. On the other hand, each edge e_i ∈ E is labelled with a collection SP_i indicating the list of subproducts flowing (as inputs and/or outputs) between tasks along the edge (2). Labelling applications λ_T and λ_E are employed to relate labels to tasks and edges. To learn which tasks t_i are connected through oriented edges e_i, an incidence application γ_W is defined (3). This application relates each edge e_i with the ordered pair of tasks t_i, t_j this edge connects; the edge is outgoing from t_i and incoming to t_j. Besides, in YAWL, tasks may be labelled with a list I_i of M_I quality indicators, employed to accept or reject the actual task execution (4). These indicators may refer to maximum delays, jitter, errors in physical dimensions, etc. A labelling application λ_ind is employed to relate indicators and tasks (5). This list must be exhaustive: every possible indicator must be present in every task. If an indicator does not apply to a particular task, the corresponding value in the list should be empty but still present.
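To make this model concrete, the following is a minimal sketch (in Python, with illustrative names that are not part of the paper) of the workflow structure defined by (1)-(6): tasks labelled with timestamps and an exhaustive indicator list, and oriented edges labelled with subproducts and bifurcation conditions.

```python
# Minimal illustrative sketch of the YAWL-derived workflow model: tasks labelled
# with discrete timestamps and quality indicators, and oriented edges labelled
# with subproduct lists and bifurcation conditions.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    timestamp: int                                    # n_i: planned temporal order
    indicators: dict = field(default_factory=dict)    # I_i: exhaustive list, None if not applicable

@dataclass
class Edge:
    source: str                                       # t_i (outgoing)
    target: str                                       # t_j (incoming)
    subproducts: list = field(default_factory=list)   # SP_i
    conditions: list = field(default_factory=list)    # C_i: bifurcation annotations

@dataclass
class Workflow:
    tasks: dict                                       # name -> Task (plays the role of λ_T, λ_ind)
    edges: list                                       # list of Edge (γ_W is implicit in source/target)

# Example: two tasks connected by an edge carrying one subproduct.
w = Workflow(
    tasks={
        "knead": Task("knead", timestamp=1, indicators={"max_delay_s": 300, "temperature_C": None}),
        "bake":  Task("bake",  timestamp=2, indicators={"max_delay_s": 1800, "temperature_C": 180}),
    },
    edges=[Edge("knead", "bake", subproducts=["dough"])],
)
```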
Moreover, bifurcation conditions can be considered in YAWL process definitions (see Fig. 2). These conditions are modelled as lists C_i of semantic annotations associated with edges through a labelling application λ_cond (6). These annotations c_i^j are logical predicates referring to output characteristics, time, etc. This whole process description (including all labels) is then analysed and decomposed in a specific engine. The first step is to calculate the key process branches or subprocesses ω_i (7). A key subprocess is characterized by only one input point (only one task receives inputs from the physical world) and only one output point (only one task generates a physical service or product). Besides, relations among tasks within the subprocess do not have associated subproducts (SP_i = ∅), but input and output edges of the subprocess do have associated subproducts. Inside a subprocess, then, the temporal order is artificially induced by the YAWL language, as it requires all tasks to be connected in sequences; but the process does not require tasks to be executed in that order, and it could be altered. When a key subprocess is detected, it is extracted to an independent report to be analysed separately using deformation metrics, and in the global description the subprocess is replaced by a state s_j, taken from a set S_W of possible states for this workflow. A transition relation ρ_W is then built to connect all states s_j according to the restrictions in the original process (8). Algorithm 1 describes the proposed mechanism to calculate the subprocesses, the new states, and the transition relation.
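The detection rule for key subprocesses can be sketched as follows; this is only an illustrative reading of the rule (Algorithm 1 itself is not reproduced), reusing the Workflow structure from the previous sketch.

```python
# Hedged sketch of the key-subprocess detection rule: a key subprocess is a
# group of tasks connected by edges without subproducts (SP_i = ∅), with exactly
# one subproduct-carrying edge entering it and one leaving it.
def find_key_subprocesses(workflow):
    # Group tasks connected by internal (subproduct-free) edges using union-find.
    parent = {t: t for t in workflow.tasks}
    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t
    for e in workflow.edges:
        if not e.subproducts:                           # internal edge
            parent[find(e.source)] = find(e.target)

    groups = {}
    for t in workflow.tasks:
        groups.setdefault(find(t), set()).add(t)

    key_subprocesses = []
    for members in groups.values():
        incoming = [e for e in workflow.edges if e.subproducts and e.target in members]
        outgoing = [e for e in workflow.edges if e.subproducts and e.source in members]
        if len(incoming) == 1 and len(outgoing) == 1:   # single input and output point
            key_subprocesses.append(members)
    return key_subprocesses
```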
On the other hand, the annotations C_i must be transformed into logic predicates referring to the states s_j and/or the discrete timestamps n_i. To do that, a semantic algorithm may be employed. This algorithm is not described in this paper, but similar solutions may be found in the state of the art [8]. For this purpose, a repository of generic logic predicates is considered. These generic predicates are instantiated according to the annotations C_i to generate a set R of atomic propositions r_i, or logical rules. An interpretation function L_W is finally defined, indicating which predicates from R are true in each state s_j (8).
The set of states S_W, together with the interpretation function L_W and the transition relation ρ_W, defines a Kripke structure K (9), which is transferred to the business layer for further processing. This Kripke structure is formally defined if only one additional element is considered: the set init of states from which the workflow may be initiated.
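A minimal sketch of this structure (with assumed field names, not the authors' code) is shown below; it is reused later when describing the verification step at business level.

```python
# Minimal sketch of the Kripke structure K = (S_W, init, ρ_W, L_W): states,
# initial states, a transition relation, and an interpretation function mapping
# each state to the atomic propositions that hold in it.
from dataclasses import dataclass

@dataclass
class KripkeStructure:
    states: set          # S_W, including one state per key subprocess
    init: set            # states from which the workflow may be initiated
    transitions: set     # ρ_W: set of (state, next_state) pairs
    labels: dict         # L_W: state -> set of true atomic propositions

    def successors(self, state):
        return {t for (s, t) in self.transitions if s == state}

# Example with two subprocess states and one temporal-order restriction.
k = KripkeStructure(
    states={"s_knead", "s_bake"},
    init={"s_knead"},
    transitions={("s_knead", "s_bake")},
    labels={"s_knead": {"dough_ready"}, "s_bake": {"dough_ready", "product_ready"}},
)
```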
A second element is generated in the analysis and decomposition engine. For each key subprocess ω_i, each task t_i is labelled with an exhaustive set of indicators I_i. These indicators are split into two different vectors, I_i^a and I_i^r (10). On the one hand, absolute indicators I_i^a, referring to state variables (such as time, temperature, etc.), are preserved (including empty positions). On the other hand, relative indicators I_i^r, referring to tolerances, errors, etc., are processed to calculate three basic and global deformation metrics: stiffness, strength and ductility.
• Stiffness (F) represents the ability of a process to resist deformations. In other words, it indicates how much a process may absorb variations in the input parameters and conditions and still keep the planned values of the state variables.
• Strength (G) represents the ability of a process to prevent unsatisfactory executions. It describes how resistant the internal organization of the process is, producing correct executions even if inputs and/or state variables suffer great changes with respect to the original model.
• Ductility (D) represents the ability of a process to be deformed and still produce correct executions. This parameter indicates how flexible a process is, so even large changes in the state variables are considered valid executions.

Stiffness is related to the security margins and safeguards between consecutive tasks in the process model (11). The larger these security margins are, the easier it is for the process to resist global deformations, as the effect of the inputs on the first tasks is absorbed by the margins. The function calculating the stiffness of a process, σ_F, must then be strictly increasing: linear, logarithmic or exponential functions could be employed depending on the scenario.
Strength is related to the number of restrictions and indicators in the list I_i^r (12). The more restrictions are considered, the more probable it is that the process generates an unsatisfactory execution. Besides, if we consider all restrictions as independent effects, the probability of unsatisfactory executions grows exponentially with the number of restrictions. Then, the function calculating the process strength, σ_G, is exponentially decreasing.
Finally, ductility is obtained from the tolerances and relative errors in the list I_i^r (13). As tolerances go up, processes can be deformed to a higher degree while the process execution remains valid. Then, the function calculating the ductility of the process, σ_D, is strictly increasing: linear, logarithmic or exponential functions could be employed depending on the scenario. Then, the subprocess descriptions {ω_i}, together with the vectors {I_i^a} and the deformation indicators {(F_i, G_i, D_i)}, are transferred to the production layer for further processing.
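The following sketch illustrates one possible instantiation of these three metrics, following only the qualitative relations stated in (11)-(13); the concrete function shapes (here logarithmic and exponential) are scenario-dependent assumptions, not values fixed by the paper.

```python
# Hedged sketch of the global deformation metrics derived from I_i^r:
# stiffness grows with the security margins between consecutive tasks, strength
# decays exponentially with the number of restrictions, and ductility grows
# with the declared tolerances.
import math

def stiffness(security_margins):
    # σ_F: strictly increasing in the aggregated security margins (11).
    return math.log1p(sum(security_margins))

def strength(num_restrictions, base=0.9):
    # σ_G: exponentially decreasing in the number of restrictions (12),
    # assuming independent restriction effects.
    return base ** num_restrictions

def ductility(tolerances):
    # σ_D: strictly increasing in the declared tolerances (13).
    return math.log1p(sum(tolerances))

# Example for a subprocess with three tasks.
F = stiffness([0.5, 0.2, 0.8])            # margins between consecutive tasks
G = strength(num_restrictions=7)
D = ductility([0.05, 0.10, 0.02])
```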

Global Process Control and Rule Validation at Business Level
At business level, a Kripke structure (9) describing the process model is received. On the other hand, from this level, the control solution is viewed as a discrete event system (DES) Z (14), where X is the set of possible states in the system; E is the finite set of events; f is the transition function indicating the next state for a given current state and a received event (15); Γ is the function indicating which events are active in a given state (16), P(E) being the power set of E; X_0 is the initial state of the system; and X_m is the set of marked states (i.e. allowed final states).
In this DES, E* is the Kleene closure of E, i.e. the set of all possible strings in the system (including the empty string), a string being the juxtaposition of events describing the evolution of the system; in other words, E* is the set of all possible evolutions the system may follow. Two different languages (sets of strings) can then be defined. The marked language L_m (17) contains all strings that finish in a marked state. The generated language L_g (18) contains all possible strings, i.e. all possible system evolutions, even those that are not acceptable. In traditional supervisory control systems, the underlying event-driven infrastructure is assumed to be fully controllable (all events are controllable) or partially controllable (only some events are uncontrollable). However, in future Industry 4.0 scenarios all events tend to be uncontrollable (as devices and people are totally dynamic and autonomous). In that case, common supervisory control theory is not applicable, as its objective is to avoid certain undesired controllable events (and, in Industry 4.0, they are all uncontrollable). On the contrary, the objective of the proposed supervisory mechanism is to create reports when undesired states are reached.
Moreover, in this scenario, the events and states in the low-level platform are unknown and, probably, infinite. Then, supervisory control cannot be performed directly over the workers and hardware devices, but over the Kripke structure. Actually, this structure is a fair Kripke structure, as two additional conditions are met in every process model:
• Justice or weak fairness requirement: many states s_j activate the same logical predicate.
• Compassion or strong fairness requirement: if many states s_j activate the same logical predicate, many states s_k do not activate that logical predicate.
In that way, a two-level verification is performed in the "rule verification engine" (see Fig. 3). In this solution, the state set X of the DES Z and the state set of the Kripke structure K are the same, although the Kripke state set S_W includes a special state s_null to represent unknown situations (19). This state s_null does not activate any logical predicate (20). The initial states are also the same (21). Discrete events ε in the DES Z are words g of the generated language, describing the states the Kripke structure has reached (22) before the current state s_current. Then, the transition function in the DES may be easily constructed from the transition relation in K (23).

In those conditions, the proposed supervisory control algorithm operates as follows. First, physical events E_phy (obtained through the event generation module from physical information and reports) are employed to determine which states s_j of the Kripke structure are allowed in the production process (several of them, because of the justice and compassion requirements). This action is performed using the interpretation function (24). If only one state is allowed, a new DES event ε_i is generated by juxtaposing the newly reached state to the previous string prefix (25). This information is introduced as a string prefix in the DES Z, and the supervisory control module determines which future states are allowed by the business model (26) and, eventually, reports whether the executed process is incompatible with the business requirements (27). Algorithm 2 describes the proposed control solution.
Different solutions to validate logic rules have been reported (Leucker, 2017); any of them may be integrated into the proposed mechanism.
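As an illustration only (the paper's Algorithm 2 is not reproduced), the following sketch shows the supervisory step just described, reusing the KripkeStructure sketch from Section 3.2: physical events select the compatible states through the interpretation function, and transitions not allowed by the business model trigger a report.

```python
# Hedged sketch of the two-level verification: observed propositions select the
# Kripke states compatible with L_W (24); when a single state remains, the DES
# advances (25), and states not allowed by ρ_W produce a report (26)-(27).
def compatible_states(kripke, observed_propositions):
    # States whose true propositions match the observed ones.
    return {s for s in kripke.states
            if kripke.labels.get(s, set()) == observed_propositions}

def supervise(kripke, event_stream):
    history = []                                   # string prefix of reached states
    current = None
    for observed in event_stream:                  # observed: set of true propositions
        candidates = compatible_states(kripke, observed)
        if len(candidates) != 1:
            continue                               # ambiguity tolerated (fairness requirements)
        state = candidates.pop()
        allowed = kripke.init if current is None else kripke.successors(current)
        if state not in allowed:                   # transition not allowed by the business model
            yield ("report", history + [state])
        history.append(state)
        current = state

# Usage with the structure k defined earlier:
# alerts = list(supervise(k, [{"dough_ready"}, {"dough_ready", "product_ready"}]))
```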

Subprocess Control Using Deformation Functions at Production Level
Finally, at production level, a collection of labelled key subprocesses {ω_i} and deformation metrics {(F_i, G_i, D_i)} is received. At this point, every task t_i in any subprocess ω_i is totally defined by its set of absolute quality indicators I_i^a. This set is an M_{I-A}-dimensional vector which, eventually, can be represented as a point in a generalized M_{I-A}-dimensional phase space. To do that, it is enough to consider the position vector x_i of that point (28) in a general Euclidean space. Besides, in the most common case, every subprocess ω_i is composed of a very large set of tasks, so if the same procedure is repeated for every task, the whole subprocess is finally represented as a point cloud in the phase space. This point cloud may be understood as a discrete solid in the phase space (29). This solid, moreover, is flexible, as from the beginning we have assumed that the tasks inside each subprocess may be altered (see Fig. 4). In this context, M_{ω_i} is the number of tasks in the subprocess ω_i. In those conditions, the subprocess execution may be understood as a function deforming this solid (30). The question to address, then, is whether the suffered deformation is high enough to consider the execution unsatisfactory.
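A minimal sketch of this phase-space representation is given below (illustrative code, reusing the Workflow sketch from Section 3.2; missing indicators are simply filled with zeros here, which is an assumption).

```python
# Each task becomes a point whose coordinates are its absolute quality
# indicators I_i^a (28); the subprocess becomes a discrete "solid", i.e. a point
# cloud with one row per task (29).
import numpy as np

def task_position(task, indicator_names):
    # x_i: position vector of the task in the M_{I-A}-dimensional phase space.
    return np.array([task.indicators.get(name) or 0.0 for name in indicator_names],
                    dtype=float)

def subprocess_solid(tasks, indicator_names):
    # One row per task, one column per absolute indicator.
    return np.vstack([task_position(t, indicator_names) for t in tasks])

# Example: planned solid built from the workflow model; the observed solid would
# be built from the values monitored by the recognition layer.
indicator_names = ["max_delay_s", "temperature_C"]
planned = subprocess_solid(list(w.tasks.values()), indicator_names)
# observed = subprocess_solid(monitored_tasks, indicator_names)
```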
In the proposed model, a set of external forces (31) acts on the solid at each point t_k. These forces represent the dynamic, variable, and anarchic behaviour of autonomous devices and humans, which tends to deform the original process model (see Fig. 6). If forces acted along only one dimension, the deformation δ_{i-k}^j suffered in that dimension could be easily calculated through Hooke's law (32), where F_i is the stiffness parameter calculated at the prosumer layer. Although it is not completely correct in terms of physical meaning, we generalize this law by taking modular (norm-based) functions, so the global aggregated deformation in all directions of the phase space may also be estimated through Hooke's law (33) and, even, the global deformation for the entire solid, by integrating along the entire solid's surface (34). Considering now the ductility D_i and the strength G_i, we can extend the common Hooke's law to include inelastic areas and break zones (where executions are considered unsatisfactory). This new deformation function includes three different branches (35), see Fig. 7.
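The three-branch behaviour can be summarized with the following sketch; the threshold expressions used here to separate the elastic, inelastic and break areas are illustrative assumptions, not the paper's expression (35).

```python
# Hedged sketch of the region test suggested by (35) and Fig. 7: the global
# deformation is compared against thresholds derived from stiffness (F),
# strength (G) and ductility (D).
def deformation_zone(global_deformation, F, G, D):
    elastic_limit = D              # assumed: tolerance-driven limit of the elastic area
    break_limit = D + G / F        # assumed: additional margin before the break zone
    if global_deformation <= elastic_limit:
        return "elastic"           # transitory, local deformation: valid execution
    if global_deformation <= break_limit:
        return "inelastic"         # permanent but acceptable deformation: valid execution
    return "break"                 # unsatisfactory execution

# Example:
# zone = deformation_zone(0.8, F=2.1, G=0.48, D=0.17)
```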
As can be seen in Fig. 7, it is not necessary to evaluate the external forces to learn whether the global subprocess deformation is too high for the execution to be considered valid. These forces are extremely complicated to analyse, as almost no data about them is available. On the contrary, the global deformation can be evaluated through the absolute quality indicators I_i^a, which are monitored through the recognition technologies at low level. For any deformation value in the break area, the execution is considered unsatisfactory; otherwise, it is considered valid. For deformation values in the elastic area, external forces only deform the process in a transitory manner, i.e. only some tasks are affected (those which are directly affected by the forces). The security margins absorb the secondary effects on other tasks, and no permanent global deformation is suffered. On the contrary, when deformation values are in the inelastic area, the security margins are not enough to compensate the external forces, and a permanent, global deformation appears; nevertheless, the executions are still valid, although deformed. Moreover, two contradictory situations must be considered before the deformation calculation.
On the one hand, in the inelastic area, small variations in the estimated global deformation may cause the subprocess execution to be rejected, so high-precision algorithms are needed to perform those analyses. On the other hand, Industry 4.0 scenarios present real-time requirements, so algorithms must be fast and efficient, and complex high-precision mechanisms cannot always be employed.
Thus, we propose a two-phase deformation evaluation: in the first step, a very simple algorithm where all state variables and quality indicators are considered independent is employed; only if this algorithm places the global deformation in the inelastic area is a second step triggered, in which a more complex and much more precise algorithm is employed.
To calculate the first, approximate global deformation of the subprocess ω_i after an execution, we consider all quality indicators I_i^a to be totally independent. Then, the deformation φ_{i-k}^j along each indicator and at each task can be calculated independently. This calculation is based on the ratio between the planned value of each quality indicator and the value actually observed during the execution, expressed as a unitary, Green, Almansi or Hencky deformation (37)-(40), or through any other application-specific algorithm. Each one of these expressions is adequate for a range of deformation values (see Table 2). Then, we always initially calculate the unitary deformation and, after that, we can recalculate the value using a more adequate expression if needed.
As can be seen in Table 2, using the unitary deformation we can evaluate the subprocess rotation and absolute deformation. Then, for processes where rotation and large deformation values are present, the Hencky function is the most precise calculation method. Equally, for null rotation and large deformation values, specific logarithmic algorithms and expressions may be proposed depending on the scenario. Finally, for small deformation values we can use the Green or Almansi function (as desired) if rotation is present, or the unitary deformation if no rotation and only small deformation values are observed.
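For illustration, the standard one-dimensional forms of these deformation measures and the selection rule of Table 2 can be sketched as follows; the paper's exact expressions (37)-(40) are not reproduced, and the standard continuum-mechanics definitions are used instead.

```python
# Illustrative 1D deformation measures (standard definitions) and the selection
# rule of Table 2. `planned` and `observed` play the role of the planned and
# monitored values of one absolute quality indicator.
import math

def unitary(planned, observed):     # engineering strain
    return (observed - planned) / planned

def green(planned, observed):       # Green-Lagrange strain
    return (observed**2 - planned**2) / (2 * planned**2)

def almansi(planned, observed):     # Almansi strain
    return (observed**2 - planned**2) / (2 * observed**2)

def hencky(planned, observed):      # logarithmic (true) strain
    return math.log(observed / planned)

def select_measure(rotation_present, large_deformation):
    # Selection rule described in Table 2.
    if rotation_present:
        return hencky if large_deformation else green   # Almansi is an equally valid small-strain choice
    # Without rotation: unitary strain for small deformations; for large ones the
    # paper leaves the expression as an ad hoc logarithmic definition, for which
    # the Hencky strain is used here as a stand-in.
    return hencky if large_deformation else unitary

# First pass: start from the unitary deformation, then recalculate if needed.
# phi = select_measure(rotation_present=True, large_deformation=True)(300.0, 340.0)
```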
After calculating the global deformation for the subprocess, if this value is in the elastic area, the execution is accepted; if it is in the break area, the execution is rejected; and if the value is in the inelastic area, a more precise evaluation is needed. In this more precise calculation mechanism, we assume the realistic scenario where quality indicators are dependent on each other. Then, the deformation parameters are obtained through the deformation tensor (41), where differential expressions are approximated using numerical expressions (42). Basically, for the initial task we employ the previous expressions (37)-(40) to start the calculation algorithm; the other tasks are evaluated through backward differences. There are also tensor expressions for the Almansi deformation (43) and for the Green-Lagrange deformation (44), which can be employed depending on the scenario under study (selecting the one that is closer to reality in each case).
The modulus calculation (34), in this case, refers to the tensor modulus, in contrast to the previous case, where a vector modulus was employed.
With this much more precise value, the caused deformation is finally evaluated. If it remains in the elastic or inelastic area, the execution is approved; if it falls in the break area, it is rejected. Algorithm 3 describes the entire solution at production level.
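The two-phase evaluation can be sketched as follows (Algorithm 3 itself is not reproduced); the aggregation of per-task deformations and the backward-difference construction of the tensor are illustrative assumptions, and the sketch reuses the deformation_zone and subprocess_solid helpers defined earlier.

```python
# Hedged sketch of the two-phase production-level evaluation. Phase 1 treats all
# indicators as independent and aggregates unitary deformations (33)-(34);
# phase 2 approximates a per-task deformation gradient via backward differences
# of the displacement field (41)-(42).
import numpy as np

def phase1_deformation(planned, observed):
    eps = np.divide(observed - planned, planned,
                    out=np.zeros_like(planned, dtype=float), where=planned != 0)
    return float(np.linalg.norm(eps) / eps.shape[0])

def phase2_deformation(planned, observed):
    u = observed - planned                              # displacement field
    grad = np.zeros_like(u)
    grad[0] = np.divide(u[0], planned[0],               # first task: expressions (37)-(40)
                        out=np.zeros_like(u[0]), where=planned[0] != 0)
    grad[1:] = u[1:] - u[:-1]                           # backward differences for the other tasks
    return float(np.mean(np.linalg.norm(grad, axis=1)))

def evaluate_subprocess(planned, observed, F, G, D):
    zone = deformation_zone(phase1_deformation(planned, observed), F, G, D)
    if zone == "inelastic":                             # only then run the precise phase
        zone = deformation_zone(phase2_deformation(planned, observed), F, G, D)
    return zone != "break"                              # valid unless the break area is reached

# Example (using the planned solid built earlier and a monitored counterpart):
# valid = evaluate_subprocess(planned, observed, F, G, D)
```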

Experimental Validation
In order to evaluate the performance of the proposed solution, an experimental validation was designed and carried out. The experiments were based on pervasive sensing and computing platforms deployed in a laboratory, emulating an Industry 4.0 manufacturing environment. In this environment, workers and work automation devices were expected to perform a certain workflow along the day, composed of production activities taken from food manufacturing companies such as baking companies (see Table 4). These production activities, besides, are composed of a quite large collection of tasks, which are finally monitored.

The scenario was deployed at Universidad Politécnica de Madrid, where a 30 m² space was conditioned as a working environment. Because of the sanitary restrictions during 2020 in Europe, only three people could be in the laboratory at the same time. Then, when a larger number of people were involved in the experiment, different experiment realizations (each one with only three participants) were performed. The electronic devices present in this installation were configured to simulate a food production environment. Three different information sources and devices were considered in order to monitor people and device activities: RFID tags, infrared barriers, and accelerometers and general sensors (temperature, humidity, light, etc.). Table 3 shows the composition of the deployed infrastructure. All these elements were built around an Arduino Nano platform and connected through Bluetooth to a gateway supported by a Raspberry Pi device. This gateway, finally, communicates all the information to a central server in the cloud for data storage and back-end deployment. The server was deployed on a Linux architecture (Ubuntu 18.04 LTS) with the following hardware characteristics: Dell R540 Rack 2U, 96 GB RAM, two Intel Xeon Silver 4114 2.2 GHz processors, and a 2 TB SATA 7.2K rpm hard disk.

Table 2. Different deformation calculation expressions.
• No rotation, small deformation: unitary deformation φ.
• No rotation, large deformation: ad hoc logarithmic definitions.
• Rotation, small deformation: Green or Almansi function.
• Rotation, large deformation: Hencky function.
The autonomous devices in the proposed scenario belonged to three different types (see Table 3):
• Autonomous robots: cleaning robots and software-based robots, acting in an autonomous manner to help workers in the management of the production facility. Software-based robots consisted of agents interacting with virtual PLCs simulating bakery processes, service interruptions, etc.
• Electronic ink displays: low-cost and low-consumption devices employed to display information about the next activities to be performed, warnings and alerts, etc. All these displays were controlled from the local Raspberry Pi gateway.
• Cyber-Physical Systems and pervasive systems to control living conditions: all environmental conditions, such as temperature or humidity, were monitored by pervasive computing platforms and traditional feedback control loops.
In order to recognize the activities being performed, two different technologies were employed. To recognize human activities, we used a two-phase solution composed of different machine learning and pattern recognition layers (Wonham and Cai, 2019). To recognize the activities performed by autonomous devices, we employed previous works on artificial intelligence mechanisms for Internet-of-Things applications based on signal processing (Bordel et al., 2020b). These components were deployed together with the proposed solution in the referred cloud server. This server also offered a prosumer webpage, where YAWL-based workflows of production activities could be generated. Workflows are based on bakery production activities from the state of the art (Katz et al., 1963), and the most common activities are shown in Table 4.

Table 4. Most common production activities in the experimental validation.
• Supervising bakery products: visual inspection of products, quality assessment and product discard.
• Supervising bakery production: monitoring of production processes, operating status, controlling speed, stops and notifications.
• Pastry packaging: dispensing, grouping, labelling and packaging.
• Auxiliary operations in the food industry: customer service, warehouse cleaning, etc.
The experiment considered four different workflows with a variable number of tasks and production activities. All workflows were executed by autonomous devices and twelve people in an eight-hour session (a standard labour schedule). Participants were selected as a homogeneous community, balanced with respect to gender and age. The experiment included three different phases:
• Training phase: participants received some training about the activities and workflows they had to perform, and about the context and experiment conditions.
• Process execution phase: each participant was interviewed in this phase. The participant was asked to execute a certain process, described in a short document, as any company would do.
• Evaluation phase: in this phase, the records obtained by the system about the validity of the executions were compared to the observations made by experts (who finally determined whether an execution was correct or not).
Two different experiments were performed using the described infrastructure. During the first experiment, the precision and success rate of the proposed supervisory control mechanism were evaluated. The number (percentage) of activities and workflows correctly detected as successful or unsatisfactory executions was calculated, including the percentage of false positives and false negatives. The experiments were repeated for workflows with different numbers of tasks, and they were also compared to traditional top-down control approaches (Bordel et al., 2018c). During the second experiment, the scalability of the proposed control solution with respect to the complexity (number of tasks) of the executed workflow and the number of quality indicators in the task descriptions was analysed. The main indicator analysed in this second experiment was the processing delay. Figure 8 shows the results of the first experiment, comparing the success rate of the proposed control mechanism (rule verification engine, component (6) in Fig. 1) to previously existing traditional solutions [5], for workflows with different numbers of tasks M_T.

Results and Discussion
As can be seen, the proposed supervisory control mechanism reaches a higher success rate in all cases. In a first zone, for workflows where M_T < 400, the success rate is improved by around 10% with respect to traditional control solutions, reaching a success rate of approximately 82%. However, as the number of activities in the workflow goes up, the success rate goes down for both approaches. In fact, a higher number of activities implies a higher variability and makes it much more complex to control anarchic executions. In any case, traditional solutions degrade faster than the proposed mechanism. Thus, for workflows with M_T = 600 tasks, the proposed solution presents a success rate of approximately 58%, while traditional mechanisms perform almost 300% worse. After this point, both mechanisms reach their minimum precision and remain stable.
An interesting result to evaluate is how the wrongly evaluated process executions are split into false positive and false negative detections. Figure 9 shows those results.
In general, as can be seen, traditional approaches present a much higher number of false negative detections (valid executions that are considered unsatisfactory), while the proposed solution tends to produce false positive evaluations (executions that are not valid but are finally accepted) at a higher rate than state-of-the-art technologies.
Finally, Fig. 10 shows the results of the second experiment. The scalability in terms of processing delay (from the deformation calculation module (12) to the rule verification engine (6) in Fig. 1) is analysed, considering different numbers of tasks M_T and of absolute state variables and quality indicators M_{I-A}. As can be seen, the evolution of the processing delay with respect to the number of tasks in the workflow is almost linear; the temporal order of the proposed algorithm in that variable is, then, O(n). On the other hand, the processing delay with respect to the number of quality indicators describing each task, M_{I-A}, follows a function with a higher growth rate than linear functions, which can be approximated by O(n · log(n)). In both cases we avoid exponential or high-order polynomial temporal orders, which could make the proposed solution impractical for high-complexity Industry 4.0 production scenarios. The processing delay, in any case, is in the order of hundreds of milliseconds.

Conclusions
Industry 4.0 solutions are composed of autonomous engineered systems where heterogeneous agents act in a choreographed manner to create complex workflows. In this work, supervisory control techniques are employed to guarantee the correct execution of distributed and choreographed processes in Industry 4.0 scenarios. At prosumer level, processes are represented using soft models where logic rules and deformation indicators are used to analyse the correctness of executions. These logic rules are verified using specific engines at business level. These engines are fed with deformation metrics obtained through tensor deformation functions at production level. To apply deformation functions, processes are represented as discrete flexible solids in a phase space, under external forces representing the variations in every task's inputs.
The experimental validation shows that the proposed mechanism improves the performance of traditional control solutions by between 10% and 300%, depending on the workflow to be executed. Besides, the scalability of the proposed solution is almost linear with respect to the number of tasks in the workflow, and O(n · log(n)) with respect to the number of quality indicators or state variables describing the tasks.
As future work, we intend to carry out a proof of concept in a relevant environment: a real food processing facility where production activities are monitored in a non-intrusive way. For this, it is necessary to consider, in addition to the technical challenges of integration with the sensorization devices present in such an environment, the ethical and privacy considerations for the workers in these facilities. For these tests and the continuation of this line of research, we have a research framework with relevant food processing companies, established in accordance with the DEMETER project.

Funding
The research leading to these results has received funding from the DEMETER project (H2020-DT-2018-2020).