Predictor-Based Control of Human Response to a Dynamic 3D Face Using Virtual Reality
Volume 29, Issue 2 (2018), pp. 251–264
Vytautas Kaminskas   Edgaras Ščiglinskas  

https://doi.org/10.15388/Informatica.2018.166
Pub. online: 1 January 2018      Type: Research Article      Open Access

Received
1 December 2017
Accepted
1 April 2018
Published
1 January 2018

Abstract

This paper describes how predictor-based control principles are applied to the control of a human excitement signal as a response to a dynamic virtual 3D face stimulus. A dynamic human 3D face is observed in virtual reality. The changing distance-between-eyes of the 3D face serves as the stimulus, i.e. the control signal. Human responses to the stimulus are observed using an EEG-based signal that characterizes excitement. A parameter identification method is proposed for building a stable predictive model with the smallest output prediction error. A predictor-based control law is synthesized by minimizing a generalized minimum variance control criterion in an admissible domain, which is defined by control signal boundaries. Modelling results demonstrate relatively high prediction and control quality of the excitement signal.

1 Introduction

Virtual environments have already become a part of our daily life, including computer games, learning environments (Devlin et al., 2015), business and project management environments (Mattioli et al., 2015), and social networks and their games. These applications and their multimedia elements cause negative or positive emotions and are considered virtual stimuli (Wrzesien et al., 2015). Such stimuli may be used to regulate the human psychological, emotional and social state (Devlin et al., 2015) or even to treat various mental diseases (Calvo et al., 2015). For this purpose, a control mechanism for human state regulation or stabilization is needed.
EEG-based emotion signals (excitement, frustration, engagement/boredom) are characterized as reliable and quick response signals (Hondrou and Caridakis, 2012; Mattioli et al., 2015; Sourina and Liu, 2011). However, we first need to compose and investigate mathematical models describing the dependencies between emotion signals and the stimuli that evoke them.
Predictive input-output structure models were proposed and investigated for exploring dependencies between virtual 3D face features and human reaction to them when dynamic virtual 3D face is observed without a virtual reality headset (Kaminskas et al., 2014; Kaminskas and Vidugirienė, 2016; Vaškevičius et al., 2014). Predictive models are necessary in the design of predictor-based control systems (Astrom and Wittenmark, 1997; Clarke, 1994; Kaminskas, 2007). Predictor-based control was applied to the control of human emotion signals, when a dynamic 3D face is observed without a virtual reality headset (Kaminskas et al., 2015; Kaminskas and Ščiglinskas, 2016).
This paper introduces predictor-based control with generalized minimum variance control quality principles, applied to the control of a human response signal when a dynamic virtual 3D face stimulus is observed using a virtual reality headset.

2 Experiment Planning and Cross-Correlation Analysis

A virtual 3D face with changing distance-between-eyes was used as input, and the EEG-based pre-processed excitement signal of a volunteer was measured as output (Fig. 1). The output signal was recorded with an Emotiv Epoc device, which records EEG inputs from 14 channels (placed in accordance with the international 10–20 system): AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4 (Emotiv Epoc specifications; Mattioli et al., 2015). Values of the output signal (excitement) vary from 0 to 1: the value is close to 0 when excitement is low and close to 1 when it is high.
Fig. 1
Input–output scheme.
A dynamic stimulus was formed from a changing woman's face (Kaminskas et al., 2015). A 3D face created with Autodesk MAYA was used as the "neutral" one (Fig. 2, middle). Other 3D faces were formed by changing the distance-between-eyes in an extreme manner (Fig. 2, top, bottom). The transitions between the normal and extreme stages were programmed. The "neutral" face has value 0, the largest distance-between-eyes corresponds to value 3 and the smallest to value −3 (Fig. 2). At first the "neutral" face was shown for 5 s, then the distance-between-eyes was increased continuously until the largest distance was reached after 10 s; the steady face was shown for 5 s, after which the face returned to the "normal" state in 10 s. The "normal" face was then shown for 5 s, followed by 10 s of continuous change to the face with the smallest distance-between-eyes, again 5 s of steady face, and a 10 s return to "normal". The experiment continued in the same way using 3 s intervals for the steady face and 5 s for continuous change. Eight volunteers (four males and four females) were tested. Each volunteer watched the changing virtual 3D face through a virtual reality headset, and each experiment lasted approximately 100 s. The EEG-based excitement and the changing distance-between-eyes signals were measured with sampling period ${T_{0}}=0.5$ s and recorded in real time.
Fig. 2
Experiment plan for a stimulus – testing input.
To estimate the possible relation between the human response (excitement) and the virtual 3D face feature (distance-between-eyes), a cross-correlation analysis was performed. The estimates of the maximum cross-correlation function values
(1)
\[ \underset{\tau }{\max }\big|{r_{yx}}[\tau ]\big|=\underset{\tau }{\max }\bigg|\frac{{R_{yx}}[\tau ]}{\sqrt{{R_{yy}}[0]{R_{xx}}[0]}}\bigg|\]
are shown in Table 1. In Eq. (1), ${R_{yx}}[\tau ]$ is the cross-covariance function between the distance-between-eyes (x) and excitement (y) signals, and ${R_{yy}}[\tau ]$, ${R_{xx}}[\tau ]$ are the auto-covariance functions (Vaškevičius et al., 2014). Examples of cross-correlation functions are demonstrated in Fig. 3.
Table 1
Maximum cross-correlation function values.
No. volunteer  1       2       3       4       5      6      7      8
Gender         Female  Female  Female  Female  Male   Male   Male   Male
Max. values    0.90    0.68    0.66    0.50    0.83   0.81   0.81   0.48
The shift of the maximum values of the cross-correlation functions relative to ${R_{yx}}[0]$ allows stating that a dynamic relationship exists between the virtual 3D face feature (distance-between-eyes) and the human response (excitement) to it. The high cross-correlation values justify the possibility of linear dynamic relations.
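The estimate (1) can be sketched in Python. This is an illustrative implementation, not code from the paper; the function name, the lag range and the biased covariance estimator (which lets the $1/N$ factors cancel in the ratio) are assumptions.

```python
import numpy as np

def max_cross_correlation(x, y, max_lag=20):
    """Estimate max_tau |r_yx[tau]| of Eq. (1) from mean-centred samples."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    # sqrt(R_yy[0] * R_xx[0]); the common 1/N factor cancels against R_yx.
    denom = np.sqrt(np.dot(y, y) * np.dot(x, x))
    best = 0.0
    for tau in range(-max_lag, max_lag + 1):
        if tau >= 0:
            r = np.dot(y[tau:], x[:len(x) - tau]) / denom
        else:
            r = np.dot(y[:tau], x[-tau:]) / denom
        best = max(best, abs(r))
    return best
```

For a signal and its own delayed copy the estimate is close to 1, while for unrelated noise it stays near 0, matching the interpretation of Table 1.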

3 Input–Output Model

The dependency between the human excitement signal and changes of the virtual 3D face feature (distance-between-eyes) is described by a linear input-output structure model (Kaminskas et al., 2014)
(2)
\[ A\big({z^{-1}}\big){y_{t}}={\theta _{0}}+B\big({z^{-1}}\big){x_{t}}+{\varepsilon _{t}},\]
where
(3)
\[ A({z^{-1}})=1+{\sum \limits_{i=1}^{n}}{a_{i}}{z^{-i}},\hspace{2em}B({z^{-1}})={\sum \limits_{j=0}^{m}}{b_{j}}{z^{-j}},\]
${y_{t}}$ is the output (excitement) and ${x_{t}}$ is the input (distance-between-eyes) signal, respectively observed as
\[ {y_{t}}=y(t{T_{0}}),\hspace{2em}{x_{t}}=x(t{T_{0}})\]
with sampling period ${T_{0}}$; ${\varepsilon _{t}}$ is the noise signal, ${z^{-1}}$ is the backward-shift operator (${z^{-1}}{x_{t}}={x_{t-1}}$) and ${\theta _{0}}$ is a constant.
info1181_g003.jpg
Fig. 3
Examples of cross-correlation functions of males (left) and of females (right).
Eq. (2) can be expressed in the following expanded form
(4)
\[ {y_{t}}={\theta _{0}}+{\sum \limits_{j=0}^{m}}{b_{j}}{x_{t-j}}-{\sum \limits_{i=1}^{n}}{a_{i}}{y_{t-i}}+{\varepsilon _{t}}.\]
Parameters (coefficients ${b_{j}}$ and ${a_{i}}$, degrees m and n of the polynomials (3), and the constant ${\theta _{0}}$) of the model (2) or (4) are unknown. Parameter identification is performed using the observations obtained during the experiments with the volunteers.
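For concreteness, the expanded form (4) can be simulated directly for given coefficients. The following Python sketch is illustrative (not from the paper); zero initial conditions and an optional noise sequence are assumed.

```python
import numpy as np

def simulate_arx(theta0, b, a, x, eps=None):
    """Simulate Eq. (4): y_t = theta0 + sum_j b_j x_{t-j} - sum_i a_i y_{t-i} + eps_t.

    b = [b_0, ..., b_m], a = [a_1, ..., a_n]; terms with negative
    time indices are taken as zero (zero initial conditions)."""
    n, m = len(a), len(b) - 1
    N = len(x)
    eps = np.zeros(N) if eps is None else np.asarray(eps, float)
    y = np.zeros(N)
    for t in range(N):
        acc = theta0 + eps[t]
        for j in range(m + 1):
            if t - j >= 0:
                acc += b[j] * x[t - j]
        for i in range(1, n + 1):
            if t - i >= 0:
                acc -= a[i - 1] * y[t - i]
        y[t] = acc
    return y
```

For a stable first-order model (|a1| < 1) and a constant input, the simulated output settles at the static gain value, which is useful for sanity-checking identified parameters.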

4 Parameter Identification Method

The current estimates of the parameters can be obtained in the identification process from the condition (Kaminskas, 2007)
(5)
\[ {\hat{\mathbf{c}}_{t}}:\hspace{1em}{\tilde{Q}_{t}}(\mathbf{c})=\frac{1}{t-n}{\sum \limits_{k=n+1}^{t}}{\varepsilon _{k|k-1}^{2}}(\mathbf{c})\to \underset{\mathbf{c}\in {\Omega _{\mathbf{c}}}}{\min },\]
where
(6)
\[ {\mathbf{c}^{\mathrm{T}}}=[{\theta _{0}},{b_{0}},{b_{1}},\dots ,{b_{m}},{a_{1}},{a_{2}},\dots ,{a_{n}}]\]
is a vector of the coefficients of the polynomials (3) and ${\theta _{0}}$,
(7)
\[ {\varepsilon _{t+1|t}}(\mathbf{c})={y_{t+1}}-{y_{t+1|t}}\]
is one-step-ahead output prediction error,
(8)
\[ {y_{t+1|t}}={\theta _{0}}+z\big[1-A({z^{-1}})\big]{y_{t}}+B\big({z^{-1}}\big){x_{t+1}}\]
is one-step-ahead output prediction model,
(9)
\[ {\Omega _{\mathbf{c}}}=\big\{{a_{i}}:\hspace{2.5pt}\big|{z_{i}^{A}}\big|<1,\hspace{2.5pt}i=1,2,\dots ,n\big\}\]
is the stability domain (unit disk) for the model (2), and ${z_{i}^{A}}$ are the roots of the polynomial
(10)
\[ {z_{i}^{A}}:\hspace{1em}A(z)=0,\hspace{2.5pt}i=1,\dots ,n,\hspace{1em}A(z)={z^{n}}A\big({z^{-1}}\big),\]
z is the forward-shift operator ($z{y_{t}}={y_{t+1}}$), T denotes vector transpose, and $|\cdot |$ denotes absolute value.
Predictive model (8) can be expressed in the form of linear regression
(11)
\[ {y_{t+1|t}}={\boldsymbol{\beta }_{t}^{\mathrm{T}}}\mathbf{c},\]
where
(12)
\[ {\boldsymbol{\beta }_{t}^{\mathrm{T}}}=[1,{x_{t+1}},{x_{t}},\dots ,{x_{t-m+1}},-{y_{t}},-{y_{t-1}},\dots ,-{y_{t-n+1}}].\]
Considering Eq. (7) and Eq. (11), identification criterion
(13)
\[ {Q_{t}}(\mathbf{c})=\frac{1}{t-n}{\sum \limits_{k=n+1}^{t}}{\big({y_{k}}-{\boldsymbol{\beta }_{k-1}^{\mathrm{T}}}\mathbf{c}\big)^{2}}\]
is a quadratic form of the vector variable c.
Accordingly, the solution of the minimization problem (5) is separated into two stages. In the first stage, parameter estimates are calculated by the least squares method without evaluating the restrictions
(14)
\[ {\mathbf{c}_{t}}={\bigg[{\sum \limits_{k=n+1}^{t}}{\boldsymbol{\beta }_{k-1}}{\boldsymbol{\beta }_{k-1}^{\mathrm{T}}}\bigg]^{-1}}\bigg[{\sum \limits_{k=n+1}^{t}}{y_{k}}{\boldsymbol{\beta }_{k-1}}\bigg].\]
In the second stage, these estimates are projected into stability domain (9)
(15)
\[ {\widehat{\mathbf{c}}_{t}}=\boldsymbol{\Gamma }{\mathbf{c}_{t}},\]
where
(16)
\[ \boldsymbol{\Gamma }=\left(\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c}1& \mathbf{0}& \mathbf{0}\\ {} \mathbf{0}& {\mathbf{I}_{b}}& \mathbf{0}\\ {} \mathbf{0}& \mathbf{0}& \gamma {\mathbf{I}_{a}}\end{array}\right),\hspace{1em}0<\gamma \leqslant 1\]
is a diagonal block-matrix of projection of dimension $(m+n+2)\times (m+n+2)$; ${\mathbf{I}_{b}}$ and ${\mathbf{I}_{a}}$ are identity matrices of dimensions $(m+1)\times (m+1)$ and $n\times n$, respectively.
The factor γ in matrix (16) is calculated by the equation
(17)
\[ \gamma =\min \{1,{\gamma _{\mathrm{max}}}-{\gamma _{0}}\},\]
where ${\gamma _{\mathrm{max}}}\| {\mathbf{c}_{t}}\| $ is the distance from the origin $\mathbf{0}$ to the boundary of the stability domain ${\Omega _{\mathbf{c}}}$ in the direction of ${\mathbf{c}_{t}}$, $\| \cdot \| $ is the Euclidean norm, and ${\gamma _{0}}$ is a small positive constant. The calculation of the factor γ was given in Kaminskas et al. (2014) for $n\leqslant 2$ (when the stability domain of model (2) is defined by linear inequalities) and in Kaminskas and Vidugirienė (2016) for $n\leqslant 3$ (when it is defined by linear and quadratic inequalities).
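For the first-order case ($m=0$, $n=1$) used later in the paper, the two-stage procedure (14)–(17) reduces to ordinary least squares followed by scaling of ${a_{1}}$, since the stability boundary is simply $|{a_{1}}|=1$. The following Python sketch is illustrative (function name and data layout are assumptions, not from the paper):

```python
import numpy as np

def identify_first_order(x, y, gamma0=0.01):
    """LS estimate (Eq. 14) of model (20), projected into |a1| < 1 (Eqs. 15-17).

    x, y: equal-length input/output records; regressor beta_t = [1, x_{t+1}, -y_t]
    predicts y_{t+1}. Returns (theta0, b0, a1)."""
    N = len(y)
    B = np.array([[1.0, x[t + 1], -y[t]] for t in range(N - 1)])
    target = y[1:]
    c = np.linalg.lstsq(B, target, rcond=None)[0]   # unconstrained LS, Eq. (14)
    theta0, b0, a1 = c
    if abs(a1) >= 1.0:
        # For n = 1, gamma_max = 1/|a1|; shrink a1 just inside the unit disk.
        gamma = min(1.0, 1.0 / abs(a1) - gamma0)
        a1 *= gamma
    return theta0, b0, a1
```

On noise-free data generated by a stable first-order model, the least squares stage already recovers the true coefficients and the projection stage leaves them unchanged.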
Estimates of the model orders $(\hat{m},\hat{n})$ are defined from the following conditions (Kaminskas and Vidugirienė, 2016):
(18)
\[ \hat{m}=\min \{\tilde{m}\},\hspace{2em}\hat{n}=\min \{\tilde{n}\},\]
where $\tilde{m}$ and $\tilde{n}$ are the degrees of the polynomials (3) for which the following inequalities hold
(19)
\[\begin{array}{l}\displaystyle \bigg|\frac{{\sigma _{\varepsilon }}[m,n+1]-{\sigma _{\varepsilon }}[m,n]}{{\sigma _{\varepsilon }}[m,n]}\bigg|\leqslant {\delta _{\varepsilon }},\hspace{1em}n=1,2,\dots ,\\ {} \displaystyle \bigg|\frac{{\sigma _{\varepsilon }}[m+1,n]-{\sigma _{\varepsilon }}[m,n]}{{\sigma _{\varepsilon }}[m,n]}\bigg|\leqslant {\delta _{\varepsilon }},\hspace{1em}m=0,1,\dots ,n,\end{array}\]
${\sigma _{\varepsilon }}[m,n]$ is the one-step-ahead prediction error standard deviation for model order $(m,n)$, and ${\delta _{\varepsilon }}\in [0.01,0.1]$ is a chosen constant (corresponding to a relative variation of the prediction error standard deviation from 1% to 10%).
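The stopping rule (18)–(19) along one model-order axis can be sketched as follows. This is an illustrative simplification (the paper applies the rule over both $m$ and $n$); `sigma` is assumed to hold the values ${\sigma _{\varepsilon }}$ for increasing order 1, 2, 3, …

```python
def select_order(sigma, delta=0.05):
    """Smallest order where the relative drop of sigma_e (Eq. 19) is <= delta.

    sigma: prediction-error standard deviations for orders 1, 2, 3, ...
    Returns the selected (1-based) order."""
    for i in range(len(sigma) - 1):
        if abs((sigma[i + 1] - sigma[i]) / sigma[i]) <= delta:
            return i + 1          # increasing the order no longer helps
    return len(sigma)
```

With, say, `sigma = [0.05, 0.030, 0.0295, 0.0294]` the first large drop is from order 1 to 2, after which the improvement is below the threshold, so order 2 is selected.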
Validation of the predictive models (8) was performed for each of the eight volunteers (four males and four females). For the identification of the unknown parameters, the first 60 to 100 observations of each volunteer were used; for the evaluation of the model order and prediction accuracy, all 185 observations were used. Each model was selected from twelve possible models ($n=1,2,3$; $m=0,1,2,3$). The analysis of the experiment results showed that the relation between distance-between-eyes and excitement can be described by the first order ($\hat{m}=0$, $\hat{n}=1$) model
(20)
\[ {\hat{y}_{t+1|t}}={\hat{\theta }_{0}}+{\hat{b}_{0}}{x_{t+1}}-{\hat{a}_{1}}{y_{t}}.\]
Prediction accuracy of the predictive model (20) was evaluated using the prediction error standard deviation, the relative prediction error standard deviation and the average absolute relative prediction error (Vaškevičius et al., 2014):
(21)
\[ {\sigma _{\varepsilon }}=\sqrt{\frac{1}{N-n}{\sum \limits_{t=n}^{N-1}}{\big({y_{t+1}}-{\hat{y}_{t+1|t}}\big)^{2}}},\]
(22)
\[ {\tilde{\sigma }_{\varepsilon }}=\sqrt{\frac{1}{N-n}{\sum \limits_{t=n}^{N-1}}{\bigg(\frac{{y_{t+1}}-{\hat{y}_{t+1|t}}}{{y_{t+1}}}\bigg)^{2}}}\times 100\% ,\]
(23)
\[ |\bar{\varepsilon }|=\frac{1}{N-n}{\sum \limits_{t=n}^{N-1}}\bigg|\frac{{y_{t+1}}-{\hat{y}_{t+1|t}}}{{y_{t+1}}}\bigg|\times 100\% ,\]
where $N=185$. Parameter estimates and predictor accuracy measures are provided in Table 2. Figures 4 and 5 show examples of one-step-ahead prediction results obtained with model (20) for four volunteers.
Table 2
Estimates of parameters and prediction accuracy measures.
No. Volunteer ${\hat{\theta }_{0}}$ ${\hat{b}_{0}}$ ${\hat{a}_{1}}$ ${\sigma _{\varepsilon }}$ ${\tilde{\sigma }_{\varepsilon }}(\% )$ $\left|\bar{\varepsilon }\right|\left(\% \right)$
1 Female 0.0383 −0.0115 −0.9431 0.0391 9.2 7.3
2 Female 0.0042 0.0027 −0.9674 0.0150 10.0 7.2
3 Female 0.0139 0.0060 −0.9244 0.0258 8.7 6.5
4 Female 0.0383 −0.0142 −0.9152 0.0413 10.9 7.4
5 Male 0.0056 −0.0061 −0.9935 0.0252 9.8 7.3
6 Male 0.0152 −0.0104 −0.9833 0.0324 11.4 8.9
7 Male 0.0014 −0.0028 −0.9972 0.0298 11.0 8.3
8 Male 0.0162 −0.0064 −0.9698 0.0386 9.3 7.0
Average 0.0309 10.0 7.5
Fig. 4
Examples of one-step-ahead prediction results for female volunteers (No. 1 right, No. 2 left).
Fig. 5
Examples of one-step-ahead prediction results for male volunteers (No. 5 right, No. 6 left).
The analysis of the identification results showed that the relation between distance-between-eyes and excitement is described by the first order ($\hat{m}=0$, $\hat{n}=1$) model (20). The validation results show that excitement can be predicted with, on average, less than 8% average absolute relative prediction error. Accordingly, the input-output structure model (2), (3) in the predictive form (8) can be applied to the design of a predictor-based control system for the human excitement signal.
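The one-step-ahead predictor (20) and the accuracy measures (21)–(23) can be sketched in Python. This is illustrative code (not from the paper); the error averaging is simplified to the available prediction samples, and the relative measures assume a nonzero output.

```python
import numpy as np

def predict_first_order(theta0, b0, a1, x, y):
    """One-step-ahead predictions with model (20):
    y_hat[t+1] = theta0 + b0 * x[t+1] - a1 * y[t]."""
    y_hat = np.zeros_like(y)
    y_hat[1:] = theta0 + b0 * x[1:] - a1 * y[:-1]
    return y_hat

def prediction_accuracy(y, y_hat, n=1):
    """Accuracy measures (21)-(23): sigma_e, relative sigma (%), avg abs rel error (%)."""
    e = y[n:] - y_hat[n:]
    rel = e / y[n:]
    sigma = np.sqrt(np.mean(e ** 2))
    sigma_rel = np.sqrt(np.mean(rel ** 2)) * 100.0
    mare = np.mean(np.abs(rel)) * 100.0
    return sigma, sigma_rel, mare
```

When the predictor reproduces the record exactly, all three measures are zero; on real data they yield values of the kind reported in Table 2.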

5 Generalized Minimum Variance Control

A predictor-based control law is synthesized by minimizing the control quality criterion ${Q_{t}}({x_{t+1}})$ in an admissible domain ${\Omega _{x}}$ (Kaminskas, 2007)
(24)
\[ {x_{t+1}^{\ast }}:\hspace{2.5pt}{Q_{t}}({x_{t+1}})\to \underset{{x_{t+1}}\in {\Omega _{x}}}{\min },\]
(25)
\[ {Q_{t}}({x_{t+1}})=E\big\{{\big({y_{t+1}}-{y_{t+1}^{\ast }}\big)^{2}}+q{x_{t+1}^{2}}\big\},\]
(26)
\[ {\Omega _{x}}=\big\{{x_{t+1}}:\hspace{2.5pt}{x_{\min }}\leqslant {x_{t+1}}\leqslant {x_{\mathrm{max}}},\big|{x_{t+1}}-{x_{t}^{\ast }}\big|\leqslant {\delta _{t}}\big\},\]
where E is the expectation operator, ${y_{t+1}^{\ast }}$ is the reference signal (reference trajectory for the excitement signal), ${x_{\min }}$ and ${x_{\mathrm{max}}}$ are the input signal boundaries (smallest and largest distance-between-eyes), ${\delta _{t}}>0$ restricts the change rate of the input signal, $|\cdot |$ denotes absolute value, and $q\geqslant 0$ is a weight coefficient.
Solving the minimization problem (24)–(26) for the one-step-ahead prediction model (8), the control law is described by the equations:
(27)
\[ {x_{t+1}^{\ast }}=\left\{\begin{array}{l@{\hskip4.0pt}l}\min \{{x_{\max }},{x_{t}^{\ast }}+{\delta _{t}},{\tilde{x}_{t+1}}\},\hspace{1em}& \text{if}\hspace{2.5pt}{\tilde{x}_{t+1}}\geqslant {x_{t}^{\ast }},\\ {} \max \{{x_{\min }},{x_{t}^{\ast }}-{\delta _{t}},{\tilde{x}_{t+1}}\},\hspace{1em}& \text{if}\hspace{2.5pt}{\tilde{x}_{t+1}}<{x_{t}^{\ast }},\end{array}\right.\]
(28)
\[ \tilde{B}\big({z^{-1}}\big){\tilde{x}_{t+1}}=-L\big({z^{-1}}\big){y_{t}}+{y_{t+1}^{\ast }}-{\theta _{0}},\]
(29)
\[ L({z^{-1}})=z\big[1-A({z^{-1}})\big],\]
(30)
\[ \tilde{B}\big({z^{-1}}\big)=\lambda +B\big({z^{-1}}\big),\hspace{1em}\lambda =q/{b_{0}}.\]
If the roots of polynomial
(31)
\[ \tilde{B}(z)={z^{m}}\tilde{B}\big({z^{-1}}\big)\]
are in the unity disk
(32)
\[ \big|{z_{j}^{B}}\big|<1,\hspace{2em}{z_{j}^{B}}:\tilde{B}(z)=0,\hspace{1em}j=1,\dots ,m,\]
then from (28)–(30) the following equation holds
(33)
\[ {\tilde{x}_{t+1}}=\frac{1}{{b_{0}}+\lambda }\bigg\{{\sum \limits_{i=1}^{n}}{a_{i}}{y_{t+1-i}}-{\sum \limits_{j=1}^{m}}{b_{j}}{\tilde{x}_{t+1-j}}+{y_{t+1}^{\ast }}-{\theta _{\mathrm{0}}}\bigg\}.\]
If some or all roots of polynomial (31) do not belong to the unit disk, the weight factor $|\lambda |$ is increased until all roots lie in the unit disk. The scheme of the generalized minimum variance controller (27)–(30) is illustrated in Fig. 6.
Fig. 6
The scheme of a generalized minimum variance control with constraints.
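A single step of the constrained control law (27) with (33), specialized to the first-order model (20), can be sketched as below. The function name and default bounds are illustrative assumptions; `x_min`, `x_max` and `delta` are the admissible-domain parameters of (26).

```python
def gmv_control_step(theta0, b0, a1, lam, y_t, y_ref, x_prev,
                     x_min=-3.0, x_max=3.0, delta=1.2):
    """One step of (27) with (33) for model (20): m = 0, n = 1.

    Unconstrained GMV input, Eq. (33): x_tilde = (a1*y_t + y_ref - theta0)/(b0 + lam),
    then clipped to the admissible domain (26) per Eq. (27)."""
    x_tilde = (a1 * y_t + y_ref - theta0) / (b0 + lam)
    if x_tilde >= x_prev:
        return min(x_max, x_prev + delta, x_tilde)
    return max(x_min, x_prev - delta, x_tilde)
```

Decreasing `delta` limits the variation speed of the control signal, which is the mechanism discussed with Table 3.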
Inserting the control signal described by equations (28) and (30) into the model (2), we get the closed-loop system equation
(34)
\[ \big[B\big({z^{-1}}\big)+\lambda A\big({z^{-1}}\big)\big]{y_{t}}=B\big({z^{-1}}\big)\big({y_{t}^{\ast }}-{\theta _{0}}\big)+{\varepsilon _{t}}.\]
It is clear from equation (34) that the stability of the closed-loop system depends on the characteristic polynomial
(35)
\[\begin{array}{l}\displaystyle D(z)={z^{d}}D\big({z^{-1}}\big),\\ {} \displaystyle D\big({z^{-1}}\big)=B\big({z^{-1}}\big)+\lambda A\big({z^{-1}}\big),\hspace{1em}d=\max \{m,n\},\end{array}\]
whose roots must all lie inside the unit disk
(36)
\[ \big|{z_{i}^{D}}\big|\leqslant 1,\hspace{2em}{z_{i}^{D}}:D(z)=0,\hspace{1em}i=1,2,\dots ,d.\]
The analysis of the characteristic polynomial (35) allows stating that, given a stable model from the identification (5)–(10), stability of the closed-loop system is obtained for any arrangement of the roots of the polynomial $B({z^{-1}})$ when the weight factor $|\lambda |$ is sufficiently increased.
From equation (34) we get that the steady-state component of the output signal in the stationary regime (${y_{t}^{\ast }}={y^{\ast }}$) is
(37)
\[ y={K_{\mathrm{p}}}\big({y^{\ast }}-{\theta _{0}}\big),\]
where
(38)
\[ {K_{\mathrm{p}}}=\frac{B(1)}{B(1)+\lambda A(1)}\]
is the gain of the transfer function of the reference signal ${y_{t}^{\ast }}$ in the closed loop
(39)
\[ {W_{\mathrm{p}}}({z^{-1}})=\frac{B({z^{-1}})}{B({z^{-1}})+\lambda A({z^{-1}})}.\]
Considering expression (38), the weight factor λ is calculated by the equation
(40)
\[ \lambda =\frac{{K_{0}}(1-{K_{\mathrm{p}}})}{{K_{\mathrm{p}}}},\]
where
(41)
\[ {K_{0}}=\frac{B(1)}{A(1)}\]
is a gain of the transfer function of the input-output model (2)
(42)
\[ W\big({z^{-1}}\big)=\frac{B({z^{-1}})}{A({z^{-1}})}.\]
From equation (37) it follows that the systematic control error
(43)
\[ {e_{\mathrm{p}}}={y^{\ast }}-y=(1-{K_{\mathrm{p}}}){y^{\ast }}+{K_{\mathrm{p}}}{\theta _{0}}\]
grows if ${K_{\mathrm{p}}}$ is significantly lower than unity (the weight factor $|\lambda |$ or the weight coefficient q in control criterion (25) is high). Accordingly, the gain ${K_{\mathrm{p}}}$ is selected from the interval
(44)
\[ {K_{\mathrm{p}}}\in [0.8,1],\hspace{1em}\text{if}\hspace{2.5pt}({b_{0}}>0)\wedge ({K_{0}}>0)\hspace{2.5pt}\text{or}\hspace{2.5pt}({b_{0}}<0)\wedge ({K_{0}}<0)\]
or
(45)
\[ {K_{\mathrm{p}}}\in [1,1.2],\hspace{1em}\text{if}\hspace{2.5pt}({b_{0}}>0)\wedge ({K_{0}}<0)\hspace{2.5pt}\text{or}\hspace{2.5pt}({b_{0}}<0)\wedge ({K_{0}}>0).\]
When ${K_{\mathrm{p}}}=1$ ($\lambda =0$, $q=0$), we get minimum variance control; in other cases we get generalized minimum variance control.
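For the first-order model (20), $B(1)={b_{0}}$ and $A(1)=1+{a_{1}}$, so the weight factor (40) with the gain (41) follows directly. A small illustrative sketch (function name assumed):

```python
def weight_factor(b0, a1, K_p):
    """Weight factor lambda via Eqs. (40)-(41) for model (20).

    K0 = B(1)/A(1) = b0 / (1 + a1);  lambda = K0 * (1 - K_p) / K_p."""
    K0 = b0 / (1.0 + a1)
    lam = K0 * (1.0 - K_p) / K_p
    return lam, K0
```

Choosing ${K_{\mathrm{p}}}=1$ gives $\lambda =0$ (minimum variance control); lowering ${K_{\mathrm{p}}}$ trades a larger systematic error (43) for a smoother control signal.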
Table 3
Efficiency measure of excitement control.
No. Vol. ${\delta _{t}}=12/s$ ${\delta _{t}}=1.2/s$ ${\delta _{t}}=0.3/s$
${K_{p}}$ 1 0.9 0.8 1 0.9 0.8 1 0.9 0.8
1 Female 51.1 48.9 45.2 36.8 33.9 16.0 32.0 34.8 33.1
2 Female 86.3 83.4 77.6 83.7 82.5 77.6 80.2 77.6 77.6
3 Female 33.6 32.6 30.6 31.2 31.1 29.3 27.8 24.0 27.4
4 Female 39 35.8 31.9 27.9 17.5 31.4 27.9 16.2 14.1
5 Male 192.8 159.1 121.8 128.6 158.7 121.7 132.0 158.9 121.1
6 Male 161.2 149.0 103.3 131.0 122.7 103.3 37.6 72.5 103.2
7 Male 122.8 118.4 113.1 107.1 118.2 112.6 96.0 98.2 96.9
8 Male 65.2 61.3 57.6 48.9 27.5 50.3 27.3 43.3 55.9
Average 94 86.1 72.6 74.4 74.0 67.8 57.6 65.7 66.2
Modelling experiments consisted of two phases. In the first phase, the human excitement signal as a response to the dynamic 3D face stimulus (testing input) was observed. According to these observations, parameter estimates of the predictive model (20) were calculated using identification. In the second phase, the dynamic virtual 3D face features were formed according to the control law (27) and (33) (control output). The control task was to maintain a high excitement level (reference signal). In this case the control efficiency can be evaluated by the relative measure
(46)
\[ \triangle y=\frac{{\bar{y}_{c}}-{\bar{y}_{T}}}{{\bar{y}_{T}}}\times 100\% ,\]
where ${\bar{y}_{T}}$ is the average of the output ${y_{t}^{T}}$ (excitement) as a response to the testing input, and ${\bar{y}_{c}}$ is the average of the output ${y_{t}^{c}}$ as a response to the control input. These measures are given in Table 3. Examples of excitement control results are shown in Fig. 7 and Fig. 8 (weight factor $\lambda =-0.0224$ and weight coefficient $q=0.00026$ when ${K_{p}}=0.9$, or $\lambda =-0.2346$, $q=0.00143$ when ${K_{p}}=0.8$).
Fig. 7
Examples of excitement control for volunteer No. 1 (female). Output: reference signal ${y_{t}^{\ast }}$ (solid line), output signals ${y_{t}^{c}}$ (dotted line) and ${y_{t}^{T}}$ (dashed line). Input: control signal ${x_{t}^{\ast }}$ (solid line) and testing input ${x_{t}}$ (dashed line).
The modelling results show that, using predictor-based control with constraints, a sufficiently good quality of human excitement signal control can be reached. The excitement signal level can be raised on average by about 95% (when ${K_{\mathrm{p}}}=1$, minimum variance control) and by about 85%–70% (when ${K_{\mathrm{p}}}=0.9$ and ${K_{\mathrm{p}}}=0.8$, generalized minimum variance control) in comparison with the testing input.
Fig. 8
Examples of excitement control for volunteer No. 5 (male). Output: reference signal ${y_{t}^{\ast }}$ (solid line), output signals ${y_{t}^{c}}$ (dotted line) and ${y_{t}^{T}}$ (dashed line). Input: control signal ${x_{t}^{\ast }}$ (solid line) and testing input ${x_{t}}$ (dashed line).
Control quality is influenced by the control signal variation speed, which is limited by the parameter ${\delta _{t}}$ of the admissible domain. This parameter allows decreasing the control signal variation, which is usually high in minimum variance control systems without constraints. The control signal variation also decreases when generalized minimum variance control is applied. In this case, the quality of control depends on the closed-loop gain coefficient ${K_{\mathrm{p}}}$ (38), whose value defines the weight factor λ in (30) or the weight coefficient q in control criterion (25).

6 Conclusions

Experiment planning and cross-correlation analysis results demonstrated a relatively high correlation between the 3D face feature observed using virtual reality (distance-between-eyes) and the human response (excitement) to the stimulus. The shift of the maximum values of the cross-correlation functions relative to the origin allows stating that a linear dynamic relationship exists between the distance-between-eyes and excitement signals. A parameter identification method for building a stable input-output structure model is proposed. Identification and validation results of the one-step-ahead prediction model (8) show that excitement can be predicted with, on average, less than 8% average absolute relative prediction error.
Accordingly, the input-output structure model (2), (3) in the predictive form (8) can be applied to the design of a predictor-based control system for controlling the human excitement signal as a response to a dynamic virtual 3D face. The control law is synthesized by minimizing a generalized minimum variance control criterion in an admissible domain for the input. A calculation method for the weight factor λ in the control law (27)–(30) or the weight coefficient q in the control criterion (25) is proposed, based on an admissible value of the systematic control error.
Modelling results demonstrate sufficiently good control quality of the excitement signal: the maintained signal level is on average about 90% higher (when ${K_{\mathrm{p}}}=1$, minimum variance control) and about 70% higher (when ${K_{\mathrm{p}}}=0.8$, generalized minimum variance control with a high weight coefficient) compared to the testing input. The experiment results demonstrated the possibility of decreasing the variations of the control signal by limiting the signal variation speed (decreasing the constant ${\delta _{t}}$ in expression (27)) or by using generalized minimum variance control (increasing the weight factor $|\lambda |$, calculated according to equation (40)). However, in these cases control quality decreases.

References

 
Astrom, K.J., Wittenmark, B. (1997). Computer Controlled Systems – Theory and Design, 3rd ed. Prentice Hall.
 
Calvo, R.A., D’Mello, S.K., Gratch, J., Kappas, A. (2015). The Oxford Handbook of Affective Computing. Oxford Library of Psychology. Oxford University Press.
 
Clarke, D.W. (1994). Advances in Model Predictive Control. Oxford Science Publications, UK.
 
Devlin, A.M., Lally, V., Sclater, M., Parussel, K. (2015). Inter-life: a novel, three-dimensional, virtual learning environment for life transition skills learning. Interactive Learning Environments, 23(4), 405–424.
 
Emotiv Epoc specifications. Brain-computer interface technology. Available at: http://www.emotiv.com/upload/manual/sdk/EPOCSpecifications.pdf.
 
Hondrou, C., Caridakis, G. (2012). Affective, natural interaction using EEG: sensors, application and future Directions. In: Artificial Intelligence: Theories and Applications, Vol. 7297. Springer, Berlin, pp. 331–338.
 
Kaminskas, V. (2007). Predictor-based self tuning control with constraints. In: Model and Algorithms for Global Optimization, Optimization and Its Applications, Vol. 4. Springer, Berlin, pp. 333–341.
 
Kaminskas, V., Ščiglinskas, E. (2016). Minimum variance control of human emotion as reactions to a dynamic virtual 3D face. In: AIEEE 2016: Proceedings of the 4th Workshop on Advances in Information, Electronic and Electrical Engineering, Vilnius, Lithuania, pp. 1–6.
 
Kaminskas, V., Vidugirienė, A. (2016). A comparison of Hammerstein-type nonlinear models for identification of human response to virtual 3D face stimuli. Informatica, 27(2), 283–297.
 
Kaminskas, V., Vaškevičius, E., Vidugirienė, A. (2014). Modeling human emotions as reactions to a dynamical virtual 3D face. Informatica, 25(3), 425–437.
 
Kaminskas, V., Ščiglinskas, E., Vidugirienė, A. (2015). Predictor-based control of human emotions when reacting to a dynamic virtual 3D face stimulus. In: Proceedings of the 12th International Conference on Informatics in Control, Automation and Robotics, Colmar, France, Vol. 1, pp. 582–587.
 
Mattioli, F., Caetano, D., Cardoso, A., Lamounier, E. (2015). On the agile development of virtual reality systems. In: Proceedings of the International Conference on Software Engineering Research and Practice (SERP), pp. 10–16.
 
Sourina, O., Liu, Y. (2011). A fractal-based algorithm of emotion recognition from EEG using arousal-valence model. In: Proceedings Biosignals, pp. 209–214.
 
Vaškevičius, E., Vidugirienė, A., Kaminskas, V. (2014). Identification of human response to virtual 3D face stimuli. Information Technologies and Control, 43(1), 47–56.
 
Wrzesien, M., Rodriguez, A., Rey, B., Alcaniz, M., Banos, R.M., Vara, M.D. (2015). How the physical similarity of avatars can influence the learning of emotion regulation strategies in teenagers. Computers in Human Behavior, 43, 101–111.

Biographies

Kaminskas Vytautas
vytautas.kaminskas@vdu.lt

V. Kaminskas is a rector emeritus (2016) and honorary professor (2012) of Vytautas Magnus University. He holds PhD (1972) and DrSc (1983) degrees in the field of technical cybernetics and information theory. In 1984 he was awarded the title of professor. Since 1991 he has been a member of the Lithuanian Academy of Sciences. His research interests are dynamic system modelling, identification and adaptive control. He is the author of 4 monographs and about 200 scientific papers on these topics.

Ščiglinskas Edgaras
edgaras.sciglinskas@vdu.lt

E. Ščiglinskas is a PhD student. He received BSc (2013) and MSc (2015) degrees from the Faculty of Informatics of Vytautas Magnus University. His research interests are signal processing and system modelling, virtual reality, and multimedia systems and their applications. He is the author of 3 scientific papers on these topics.

