1 Introduction
(3)
\[ \underset{\boldsymbol{s}}{\min }\big\| \boldsymbol{y}-\boldsymbol{\Phi }\boldsymbol{\Psi }^{-1}\boldsymbol{s}\big\| _{2}^{2}+\lambda \| \boldsymbol{s}\| _{1}.\]

2 Related Work
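The ℓ1-regularized recovery problem of Eq. (3) is classically solved with iterative shrinkage-thresholding. A minimal NumPy sketch of ISTA follows, with `A` standing in for the product $\boldsymbol{\Phi}\boldsymbol{\Psi}^{-1}$; this is a generic textbook solver for illustration, not the reconstruction method proposed in this paper.

```python
import numpy as np

def ista(y, A, lam=0.1, n_iter=200):
    """ISTA for  min_s ||y - A s||_2^2 + lam * ||s||_1  (cf. Eq. (3)).

    The gradient of the quadratic term is 2L-Lipschitz with
    L = ||A||_2^2, so the step size is 1 / (2L).
    """
    L = np.linalg.norm(A, 2) ** 2              # squared spectral norm of A
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = s - A.T @ (A @ s - y) / L          # gradient step on the data term
        s = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)  # soft threshold
    return s
```

Each iteration alternates a gradient step on the data-fidelity term with a soft-thresholding step that enforces sparsity of $\boldsymbol{s}$.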
3 Proposed Architecture for the CS Model
3.1 Convolutional Autoencoder
Fig. 1
(4)
\[ y_{m}=\langle \boldsymbol{\phi }_{m},\boldsymbol{x}\rangle =\sum \limits_{i=1}^{N}\phi _{m,i}\hspace{0.1667em}x_{i}.\]

(5)
\[ \begin{aligned}{}& Y=X\hspace{2.5pt}\underset{D}{\ast \ast }\hspace{2.5pt}\{\phi _{m}\},\\ {} & Y_{m}[i,j]=\sum \limits_{k}\sum \limits_{l}X[Di+k,Dj+l]\hspace{0.1667em}\phi _{m}[k,l].\end{aligned}\]

Fig. 2
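Eq. (5) states that collecting all $M$ measurements of every $B\times B$ block amounts to a 2-D convolution with stride $D$; when $D=B$ the blocks are non-overlapping and each output pixel is exactly the inner product of Eq. (4). A naive NumPy sketch (function name and loop structure are illustrative, not from the paper):

```python
import numpy as np

def conv_measurements(X, filters, D):
    """Strided-convolution measurements, cf. Eq. (5):
    Y_m[i, j] = sum_{k,l} X[D*i + k, D*j + l] * filters[m][k, l].

    With stride D equal to the filter size B, each output pixel is the
    inner product <phi_m, x> of one non-overlapping B x B block (Eq. (4)).
    """
    M = len(filters)
    B = filters[0].shape[0]
    H = (X.shape[0] - B) // D + 1
    W = (X.shape[1] - B) // D + 1
    Y = np.empty((M, H, W))
    for m in range(M):
        for i in range(H):
            for j in range(W):
                Y[m, i, j] = np.sum(X[D*i:D*i+B, D*j:D*j+B] * filters[m])
    return Y
```

In a deep-learning framework this is simply a convolution layer with `M` output channels, kernel size `B`, stride `B`, and no bias.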
Fig. 3
3.2 Predefined vs. Adaptive Measurement Matrix
3.3 Network Training Using Normalized Measurements
(6)
\[ y_{1}=\frac{1}{B^{2}}\sum \limits_{i=1}^{B^{2}}\phi _{1,i}x_{i}=\frac{1}{B^{2}}\sum \limits_{i=1}^{B^{2}}x_{i}.\]

(7)
\[ \hat{y}_{m}=\frac{y_{m}}{{\textstyle\sum _{i=1}^{B^{2}}}\phi _{m,i}}-y_{1},\hspace{1em}m\in [2,M].\]

3.4 Efficient Method for Network Initialization
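The measurement normalization of Eqs. (6)–(7) can be transcribed directly: the first measurement is the block mean, and every other measurement is rescaled by its filter sum and then mean-centered. This sketch assumes the filter sums $\sum_i \phi_{m,i}$ are nonzero; the helper name is illustrative.

```python
import numpy as np

def normalize_measurements(y, phi):
    """Normalized measurements, cf. Eqs. (6)-(7).

    y   : (M,) raw measurements of one block; y[0] comes from the all-ones
          filter scaled by 1/B^2, i.e. it equals the block mean (Eq. (6)).
    phi : (M, B*B) flattened measurement filters; sums assumed nonzero.
    """
    y_hat = np.empty(len(y) - 1)
    for m in range(1, len(y)):                  # m in [2, M] in the paper's 1-based indexing
        y_hat[m - 1] = y[m] / np.sum(phi[m]) - y[0]
    return y_hat
```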
(8)
\[ C(\boldsymbol{x})=E\big[\boldsymbol{x}\boldsymbol{x}^{T}\big]-E[\boldsymbol{x}]E\big[\boldsymbol{x}^{T}\big],\]

(9)
\[ C(\boldsymbol{x})=\frac{1}{N-1}\sum \limits_{n=1}^{N}({\boldsymbol{x}_{n}}-\bar{\boldsymbol{x}}){({\boldsymbol{x}_{n}}-\bar{\boldsymbol{x}})^{T}}.\]

(10)
\[ C(\boldsymbol{x})=\boldsymbol{U}\boldsymbol{\Sigma }\boldsymbol{U}^{T}\approx \boldsymbol{U}_{1:M}\boldsymbol{\Sigma }_{1:M}(\boldsymbol{U}_{1:M})^{T}.\]

(13)
\[ \begin{aligned}{}\boldsymbol{x}&={\boldsymbol{\Phi }^{+}}\boldsymbol{y}\\ {} &={\boldsymbol{\Phi }^{T}}{\big(\boldsymbol{\Phi }{\boldsymbol{\Phi }^{T}}\big)^{-1}}\boldsymbol{y}\\ {} &={\boldsymbol{U}_{1:M}}{\big[{({\boldsymbol{U}_{1:M}})^{T}}{\boldsymbol{U}_{1:M}}\big]^{-1}}\boldsymbol{y}\\ {} &={\boldsymbol{U}_{1:M}}\boldsymbol{y}.\end{aligned}\]

3.5 Residual Network
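Eqs. (8)–(13) suggest initializing $\boldsymbol{\Phi}$ with the transposed top-$M$ eigenvectors of the block covariance; since those eigenvectors are orthonormal, the pseudo-inverse then collapses to $\boldsymbol{U}_{1:M}$ itself. A NumPy sketch of this initialization (function and variable names are assumptions, not the paper's code):

```python
import numpy as np

def pca_measurement_matrix(blocks, M):
    """PCA initialization of the measurement matrix, cf. Eqs. (8)-(13).

    blocks : (num_samples, N) flattened training blocks.
    Returns Phi = (U_{1:M})^T of shape (M, N) and its pseudo-inverse
    Phi_plus = U_{1:M} of shape (N, M).
    """
    C = np.cov(blocks, rowvar=False)        # sample covariance, Eq. (9)
    eigvals, U = np.linalg.eigh(C)          # eigenvalues in ascending order
    U_top = U[:, ::-1][:, :M]               # top-M eigenvectors, Eq. (10)
    Phi = U_top.T
    Phi_plus = U_top                        # Eq. (13): orthonormal rows of Phi
    return Phi, Phi_plus
```

Because `eigh` returns orthonormal eigenvectors, `Phi @ Phi_plus` is the $M\times M$ identity, which is what makes the closed form of Eq. (13) hold.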
Fig. 5
3.6 Choice of the Loss Function
(14)
\[ {\mathcal{L}_{1}}\big(\big\{\boldsymbol{\Phi },{\boldsymbol{\Phi }^{+}}\big\}\big)=\big\| x-f\big\{x,\big\{\boldsymbol{\Phi },{\boldsymbol{\Phi }^{+}}\big\}\big\}\big\| _{2}^{2},\]

(15)
\[ {\mathcal{L}_{2}}\big(\{\boldsymbol{W}\}\big)=\frac{1}{2}\sum \limits_{j=2}^{3}\big\| {\phi _{j}}(x)-{\phi _{j}}\big(f\{x,\boldsymbol{W}\}\big)\big\| _{2}^{2}.\]

4 Experiments
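Eqs. (14)–(15) combine a pixel-wise squared reconstruction error with a perceptual term comparing feature maps $\phi_j$ (layers $j = 2, 3$ of a fixed feature extractor). A minimal sketch that treats the feature maps as precomputed arrays; the helper names are assumptions for illustration:

```python
import numpy as np

def l1_reconstruction_loss(x, x_rec):
    """Reconstruction loss of Eq. (14): squared l2 error between the
    original block x and the network output x_rec."""
    return np.sum((x - x_rec) ** 2)

def l2_perceptual_loss(features_x, features_rec):
    """Perceptual loss of Eq. (15): half the summed squared l2 distance
    between feature maps phi_j(x) and phi_j(f(x)), here passed in as
    precomputed arrays for layers j = 2, 3."""
    return 0.5 * sum(np.sum((fx - fr) ** 2)
                     for fx, fr in zip(features_x, features_rec))
```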
4.1 Network Training
4.2 Measurement Matrix
Table 1
PSNR [dB] | $r=0.25$ | $r=0.10$ | $r=0.04$ | $r=0.01$
PCA | 31.45 | 27.11 | 23.95 | 20.56
Linear autoencoder | 31.39 | 27.06 | 23.92 | 20.55
Fig. 7
4.3 Comparison to Other Methods
Table 2
Mean PSNR [dB] for different methods | $r=0.25$ | $r=0.10$ | $r=0.04$ | $r=0.01$
ImpReconNet (Euc) (Lohit et al., 2018) | 26.59 | 25.51 | 23.14 | 19.44
ImpReconNet (Euc + Adv) (Lohit et al., 2018) | 30.53 | 26.47 | 22.98 | 19.06
Adp-Rec (Xie et al., 2017) | 30.80 | 27.53 | – | 20.33
FCMN (Du et al., 2019) | 32.67 | 28.30 | 23.87 | 21.27
$PCS_{\textit{conv22}}$ (Du et al., 2018) | – | – | 19.38 | 18.30
$PCS_{\textit{conv34}}$ (Du et al., 2018) | – | – | 16.72 | 16.80
Proposed method | 32.00 | 26.36 | 23.67 | 20.51