1. Introduction

Video acquisition captures time-dependent natural scenes and brings real-time images directly to screens for immediate observation. It serves not only live television (TV) production but also security, military, and industrial operations, through professional video cameras, camcorders, closed-circuit TV, webcams, camera phones, and special camera systems. In traditional video acquisition, e.g., H.261, H.265, and the MPEG series, the sampling and compression procedures are implemented in sequential order. The Nyquist–Shannon sampling theorem requires the sampling rate to be at least twice the signal frequency to guarantee exact recovery. The compression procedure is implemented by video compression chipsets1 or separate software.2 Although state-of-the-art video cameras can record most natural scenes, they do not work for very high-resolution images or high-fps videos, because the growth in data storage, communication, and processing lags far behind the growth in data generation. In space exploration, an image of the space shuttle Discovery flight deck can reach 2.74 gigapixels,3 and bubble dynamics research needs 500-fps video microscopy.4 More importantly, commercial high-performance video cameras are extremely expensive; e.g., a basic model with 7500 fps, one-megapixel resolution, and 12-bit color depth (FASTCAM SA5 from Photron) costs around $100,000. The limitation comes from weak light irradiation and the readout bandwidth when capturing high-speed objects at high resolution. As shown in Fig. 1 and Eq. (1), the reflected illumination is collected by sensor arrays in a limited space–time volume. The number of electrons accumulated on each pixel is inversely proportional to the square of the ratio of the focal length to the aperture of the lens, but proportional to the exposure time, incident illumination, scene reflectivity, quantum efficiency, and pixel size.5 In video sensing, the exposure time corresponds to the temporal resolution, and the pixel size is related to the spatial resolution. In other words, the temporal and spatial resolutions constrain each other in conventional video cameras, because the imaging sensor requires a minimum number of accumulated electrons while the total number of electrons is fixed: the spatial resolution must decrease when the temporal resolution increases. Another limitation is the sensor's readout speed. The readout timing includes analog-to-digital conversion, clearing charge from the parallel register, and shutter delay; e.g., a one-megapixel, 1000-fps, 16-bit color camera needs a readout circuit handling 16 Gbit/s (10^6 pixels × 1000 frames/s × 16 bits).

To obtain high-resolution images and high-fps videos, the sampling rate has to be reduced, and the compressive sensing technique can be applied. Compressive sensing6 combines the sampling and compression procedures. This paradigm directly samples the signal in a compressed form, so the sampling rate can be significantly reduced. Compressive sensing has attracted great interest in imaging,7 geophysical data analysis,8 control and robotics,9 communication,10 and medical image processing.11 Compressive sensing has been applied to video since 2006, when the single-pixel camera setup was first used for video sampling.12 In this first approach, the three-dimensional (3-D) video was reconstructed from all the measurements together, using 3-D wavelets as a sparse representation.
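Equation (1) itself did not survive extraction; the proportionality it describes can be sketched as follows, with assumed symbol names that may differ from the original: N_e for the accumulated electrons, t the exposure time, I the incident illumination, R the scene reflectivity, q the quantum efficiency, p the pixel size (which may enter as area rather than linearly), and f/D the focal-length-to-aperture ratio of the lens:

```latex
N_e \;\propto\; \frac{t \, I \, R \, q \, p}{(f/D)^{2}}
```

This makes the space–time trade-off explicit: shrinking the exposure time t (higher temporal resolution) must be compensated by a larger pixel p (lower spatial resolution) to keep N_e above the sensor's minimum.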
This method cannot be used for real-time video streaming without incurring latency, because all the measurements have to be obtained before the reconstruction starts. Since then, in order to reconstruct the frames one by one for real-time streaming, most approaches sample reference frames with more measurements and capture the differences between consecutive frames with fewer measurements. There are mainly two types of strategies: sampling the frames themselves and sampling the difference between frames. In the first strategy, motion estimation techniques are applied to recover intermediate frames from reference frames so as to obtain a continuous video. For example, the evolution of dynamically textured scenes was modeled as a linear dynamical system,13 and a multiframe motion estimation algorithm was proposed.14 The latest compressive video sensing research learns a linear mapping between video sequences and the corresponding measured frames.15 In addition, the correlation between consecutive frames in the frequency domain16 and other transform domains17 has also been used. There are also several approaches to sampling the difference between two frames. For example, Stankovic et al.18 split the video frame into nonoverlapping blocks of equal size; compressive sampling was performed on sparse blocks, determined by predicting sparsities from previous reference frames that were sampled conventionally, and the remaining blocks were sampled fully. Determining the sparse blocks is time-consuming because every block has to be tested. Alternatively, directly sampling the difference between two consecutive frames was employed19 to save sampling time. Though compressive sensing techniques are used in video sensing, most approaches use the convex ℓ1 minimization to approximate the nonconvex ℓ0 minimization, which is nondeterministic polynomial-time (NP)-hard and difficult to solve.
The compressive sensing theorem can reduce the number of measurements using the ℓ1 minimization. With nonconvex regularizations, however, the number of measurements, and thus the sampling rate, can be reduced further so as to achieve real-time video capturing. Recently, many nonconvex regularizations have been proposed that outperform the ℓ1 norm in compressive sensing.20,21,22 In this paper, a single-pixel compressive video sensing framework based on the nonconvex sorted ℓ1 regularization is proposed for fast and super-resolution video. In this framework, we sample reference frames using the spatial sparsity (individual image sparsity) and the difference between two frames using the temporal sparsity. In Sec. 2, we first give a short review of compressive sensing and nonconvex solvers, and then propose our nonconvex compressive video sensing framework. The experimental results are presented in Sec. 3.

2. Compressive Video Sensing

2.1. Compressive Sensing

The core of compressive sensing is recovering a sparse vector x from a small number of linear measurements y = Ax, where A is the m × n measurement matrix with m < n. The underdetermined linear system has many solutions whenever y is in the range of A, and we are interested in finding the sparsest one among them. However, finding the sparsest solution is NP-hard. Therefore, instead of solving the NP-hard problem, people look into alternative approaches. Convex approaches are of great interest because there are many algorithms for solving convex problems and it is easy to analyze their solutions.
If x is sparse and A satisfies certain conditions such as the null space property,23 the incoherence condition,24 or the restricted isometry property,25 the following problem is equivalent to finding the sparsest solution:

min_x ||x||_1 subject to Ax = y.

When there is noise in the measurements, i.e., y = Ax + n with n being white Gaussian noise, we solve instead

min_x λ||x||_1 + (1/2)||Ax − y||_2^2,

where λ is a parameter balancing the data fitting term and the regularization term. Many algorithms have been proposed to solve these convex problems.26,27 Although the ℓ1 minimization is fully understood and stable with theoretical guarantees, the number of required measurements is still high, and the performance is not good in many applications with a small number of measurements. For example, in computed tomography, radiologists want to reduce the number of projections, and thus the radiation dose, below what ℓ1 minimization requires. For the difference between two frames in a video, we want to decrease the number of measurements further so as to realize higher-fps videos than current cameras can produce. In order to recover signals from even fewer measurements, nonconvex regularizations are applied; a short review is given in Sec. 2.2.

2.2. Nonconvex Optimization Problems for Compressive Sensing

In this section, we review several nonconvex regularizations for compressive sensing and their corresponding algorithms. Denote the true sparse signal as x̄ and the k'th iteration as x^k. The ℓp term (0 ≤ p ≤ 1) is commonly used,28 and it has ℓ0 and ℓ1 as special cases. Because of the nonconvexity, it recovers sparse signals with even fewer measurements than the convex counterpart, ℓ1. There are several approaches to solving the nonconvex problems; we describe three of them for both the noise-free and noisy cases. First, two reweighted algorithms for the noise-free case

min_x ||x||_p^p subject to Ax = y

are presented. The iteratively reweighted ℓ1 minimization (IRL1)20 replaces the ℓp term with a weighted ℓ1 term whose weights depend on the previous iteration.
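As a concrete sketch of the noisy ℓ1 problem above, the following numpy implementation of proximal gradient descent (ISTA) recovers a synthetic sparse vector; the matrix sizes, λ, and iteration count are illustrative assumptions, not settings from the paper.

```python
import numpy as np

def ista_l1(A, y, lam, iters=3000):
    """Proximal gradient (ISTA) for min_x lam*||x||_1 + 0.5*||Ax - y||_2^2."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L        # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
m, n, k = 50, 100, 5                          # underdetermined: m < n
A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
y = A @ x_true                                # noise-free measurements
x_hat = ista_l1(A, y, lam=0.01)
```

With only 50 measurements of a 100-dimensional, 5-sparse vector, the support is recovered and the coefficients match up to the small soft-threshold bias.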
The iteration is expressed as

x^{k+1} = argmin_x Σ_i w_i^k |x_i| subject to Ax = y, with w_i^k = (|x_i^k| + ε)^{p−1} for a small ε > 0,

so a weighted ℓ1 minimization problem has to be solved in every iteration, and iterative algorithms are applied. Similarly, the iteratively reweighted least squares21,22 replaces the ℓp term with a weighted least squares term whose weights depend on the previous iteration. The iteration is expressed as

x^{k+1} = argmin_x Σ_i w_i^k x_i^2 subject to Ax = y, with w_i^k = (|x_i^k|^2 + ε)^{p/2−1}.

In this case, there is an analytical solution for the weighted minimization problem, since it is equivalent to a least squares problem. Besides these two reweighted algorithms for solving ℓp minimization problems, some algorithms for convex optimization have also been applied to nonconvex problems with general nonconvex regularizations.29 One example is the forward–backward iteration for solving

min_x λR(x) + (1/2)||Ax − y||_2^2,

where R(x) is a nonconvex regularization term, including ℓp and the nonconvex sorted ℓ1 below as special cases. In each forward–backward iteration, a proximal mapping of the nonconvex regularization term follows a gradient descent step on the data fidelity term, i.e.,

x^{k+1} = prox_{γλR}(x^k − γA^T(Ax^k − y)).

However, for ℓp minimization, the proximal mapping has analytical solutions only when p = 0, 1/2, 2/3, and 1.30 The success of ℓp minimization and both iterative algorithms for solving ℓp minimization problems shows that it is better to assign small weights to components with large absolute values and large weights to zero components and components with small absolute values. A nonconvex sorted ℓ1 regularization that assigns weights based on the ranking of absolute values was developed by Huang et al.31 Let the coefficients λ_1 ≤ λ_2 ≤ … ≤ λ_n be a nondecreasing sequence of nonnegative real numbers. The nonconvex sorted ℓ1 regularization is defined as

R(x) = Σ_i λ_i |x|_[i],

where |x|_[1] ≥ |x|_[2] ≥ … ≥ |x|_[n] are the absolute values of the components of x ranked in decreasing order. Two special cases of nonconvex sorted ℓ1 are 2-level ℓ1, with only two distinct weight values, and iterative support detection (ISD), with weight 0 on the detected support and weight 1 elsewhere. In addition, Huang et al. suggested adaptively changing the weights during the iteration, instead of keeping a fixed set of weights, for better performance.
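To make the definition concrete, here is a small numpy sketch of the nonconvex sorted ℓ1 penalty and its 2-level/ISD special cases; the function names and the particular weight values are illustrative assumptions, not values from Ref. 31.

```python
import numpy as np

def sorted_l1(x, lam):
    """Nonconvex sorted l1: sum_i lam[i] * |x|_[i], where |x|_[1] >= |x|_[2] >= ...
    and lam is nondecreasing, so the largest components get the smallest weights."""
    lam = np.asarray(lam, dtype=float)
    mags = np.sort(np.abs(x))[::-1]          # |x| in decreasing order
    return float(np.dot(lam, mags))

def two_level_weights(n, k, lo=0.1, hi=1.0):
    """2-level l1: weight lo on the k largest magnitudes, hi on the rest."""
    return np.concatenate([np.full(k, lo), np.full(n - k, hi)])

def isd_weights(n, k):
    """Iterative support detection: zero weight on the detected support."""
    return two_level_weights(n, k, lo=0.0, hi=1.0)

x = np.array([3.0, -1.0, 0.5, 0.0])
# ISD with support size 1: only the entries after the largest are penalized.
val = sorted_l1(x, isd_weights(4, 1))        # 0*3 + 1*1 + 1*0.5 + 1*0 = 1.5
```

With uniform weights the penalty reduces to the ordinary ℓ1 norm, which is the sanity check below.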
The proposed update rule decreases the weights gradually from 1 to 0 during the iteration, at a controlled rate, with the number of small weights determined by the components of the current iterate that exceed a positive threshold.32

2.3. Video Compressive Sampling

A video can be considered as a series of images, as shown in Fig. 2 (left), where the coordinate space consists of both the spatial domain and the temporal domain. Each frame can be viewed as a static natural image, which is redundant because natural images are intrinsically sparse in a specific domain.24,33 Another redundancy exists between similar frames in the temporal domain. As shown in Fig. 3, more than 85% of the pixels have no significant changes. Therefore, difference coding34 in the MPEG and H.265 series reuses existing frames and updates only the pixels with significant changes. As discussed in Sec. 1, the objective of compressive video sensing is to combine the compression and sampling procedures to achieve signal compression in hardware. In our proposed compressive video sensing, there are two types of image frames: intraframes (I-frames in H.264, or reference frames) and interframes (P-frames in H.264), shown in Fig. 4. Compressive sampling is applied to both I-frames and P-frames, where P-frames are reconstructed from the difference between the P-frames and their previous frames. Since I-frames are considered static images and image compressive sampling has already been studied for single-pixel cameras,7,35 a total variation algorithm36 is applied to recover intraframes from the I-frame samples. For the P-frames, because the difference between similar frames is sparse, a nonconvex regularization is adopted to reduce the number of samples and thus increase the compression ratio. We compare the performance of four different nonconvex regularizations numerically and choose the best for the experiments. The four regularizations are: ℓp with IRL1, ISD, 2-level ℓ1, and the nonconvex sorted ℓ1 (m-level). In IRL1, the weights are updated by w_i^{k+1} = 1/(|x_i^k| + ε). For 2-level, we choose fixed weight values for the two levels.
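The temporal redundancy can be illustrated with a toy example: a small object moves by one pixel between frames, so the difference image is sparse and only a short Bernoulli measurement vector is needed for it. The frame size is an assumption; the sampling ratios borrow the airplane-experiment values from Sec. 3.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy consecutive frames: a bright 4x4 block moves one pixel to the right.
h = w = 32
frame1 = np.zeros((h, w)); frame1[10:14, 10:14] = 1.0
frame2 = np.zeros((h, w)); frame2[10:14, 11:15] = 1.0

diff = (frame2 - frame1).ravel()
unchanged = float(np.mean(diff == 0))   # fraction of pixels with no change

# Single-pixel-camera style sampling with +/-1 Bernoulli matrices:
n = h * w
m_I = int(0.18 * n)     # I-frame sampling ratio
m_P = int(0.085 * n)    # P-frame sampling ratio: far fewer rows suffice
A_I = rng.choice([-1.0, 1.0], size=(m_I, n))
A_P = rng.choice([-1.0, 1.0], size=(m_P, n))
y_I = A_I @ frame1.ravel()              # measurements for the I-frame
y_P = A_P @ diff                        # measurements for the sparse difference
```

Well over 85% of the pixels are unchanged here, matching the redundancy the section describes, which is why the P-frame measurement vector can be less than half the length of the I-frame one.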
For m-level, we choose the number of levels and the corresponding weights similarly. We compare the runtimes, root-mean-square error (RMSE), and peak signal-to-noise ratio (PSNR) of these four algorithms on the difference between two consecutive frames in Fig. 5. The difference between the left and middle images in Fig. 5 is shown on the right. We choose the measurement matrices to be randomized Bernoulli matrices with ±1 entries. The sampling rate (the number of measurements divided by the number of pixels) is varied from 6% to 35%. The comparison result is shown in Fig. 6, where the horizontal axis represents the sampling rate. When the number of measurements is small, nonconvex algorithms are unstable because they can easily be trapped at stationary points, and the strategy for adaptively updating weights may not work well. Overall, m-level is the most efficient and effective algorithm among the four. Therefore, we choose m-level for the experiments in Sec. 3. Though nonconvex algorithms are able to recover sparse signals accurately from a small number of linear measurements, there is still error due to hardware noise and modeling error. For example, with noise in the measurements, the algorithms cannot recover the sparse signals exactly. In Fig. 7, we show the exact difference image between two frames on the left and compare it with the one recovered using the nonconvex sorted ℓ1 in the middle. Notice that there are many isolated pixels with small nonzero values in the recovered difference image; these pixels are supposed to be zero. To improve this, we develop a simple and effective method to remove these pixels and update only the pixels in areas with significant changes. We apply the Sobel operator, with a pair of convolution masks, to the recovered difference image to find the edges, since the Sobel kernels compute the gradient with smoothing in both the horizontal and vertical directions.
Then a threshold is selected to obtain a binary mask that indicates the pixels with large gradient values. However, this mask does not delineate the outline of the changing area of interest, so the binary gradient mask is dilated using a vertical structuring element followed by a horizontal structuring element for a better outline. Because the mask shows only the edges of the difference image, and the areas with significant changes lie inside these edges, the whole changed areas are obtained by filling the holes inside the edges with a flood-fill operation via the MATLAB® function “imfill.” This method keeps the most significant changes and removes error from the difference image, reducing the reconstruction error in the P-frames. Figure 7(c) shows the performance of this postprocessing (denoising) procedure, and the flow chart is described in Fig. 8. Due to the frame-difference sensing mechanism, the reconstruction error accumulates, because each P-frame is reconstructed from the difference between two consecutive frames: the error in the first P-frame is carried into the second P-frame. Therefore, the reconstruction of the first P-frame after an I-frame is very important, and an improvement on this frame also improves the following P-frames. On the other hand, if the number of P-frames between two consecutive I-frames is small, we can instead compute the difference image between each P-frame and the previous I-frame to avoid the accumulated error from previous P-frames. The next numerical experiment shows that the simple denoising procedure improves the reconstruction of the first P-frame and of all the P-frames after it. In this experiment, there are five P-frames after one I-frame, and all five are plotted in Fig. 9. The first row shows the five ground-truth frames.
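A numpy-only stand-in for this Sobel/threshold/dilate/imfill chain is sketched below; the threshold value, the 3-pixel line structuring elements, and the synthetic test image are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def conv3(img, k):
    """2-D correlation with a 3x3 kernel, zero-padded to 'same' size."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def dilate(mask, offsets):
    """Binary dilation by a structuring element given as (row, col) offsets."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    H, W = mask.shape
    for di, dj in offsets:
        out |= p[1 + di:1 + di + H, 1 + dj:1 + dj + W]
    return out

def fill_holes(mask):
    """imfill stand-in: keep free cells that are unreachable from the border."""
    free = ~mask
    reach = np.zeros_like(mask)
    reach[0, :] = free[0, :]; reach[-1, :] = free[-1, :]
    reach[:, 0] |= free[:, 0]; reach[:, -1] |= free[:, -1]
    cross = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    while True:
        grown = dilate(reach, cross) & free    # flood the background inward
        if np.array_equal(grown, reach):
            return mask | (free & ~reach)      # enclosed holes become True
        reach = grown

# Recovered difference image: a true 6x6 change plus a small isolated noise pixel.
d = np.zeros((20, 20)); d[7:13, 7:13] = 1.0; d[3, 16] = 0.05

kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
grad = np.hypot(conv3(d, kx), conv3(d, kx.T))  # Sobel gradient magnitude
mask = grad > 1.0                              # threshold the gradient
mask = dilate(mask, [(-1, 0), (0, 0), (1, 0)]) # vertical line element
mask = dilate(mask, [(0, -1), (0, 0), (0, 1)]) # horizontal line element
update = fill_holes(mask)                      # significant-change region
d_clean = np.where(update, d, 0.0)             # drop isolated noise pixels
```

The weak isolated pixel never exceeds the gradient threshold, so it is removed, while the filled mask preserves the entire changed block, including its flat interior where the gradient is zero.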
The second and third rows show the reconstruction results using the difference image between two consecutive frames, and the fourth and fifth rows show the results using the difference image between the P-frames and the I-frame. The reconstruction results using m-level without the denoising step are shown in the second and fourth rows, and the results with the denoising step are shown in the third and fifth rows. The PSNR and RMSE values are listed in Tables 1 and 2. From both tables, we can see that when the difference images between two consecutive frames are used, the PSNR decreases and the RMSE increases over the five P-frames, and the denoising step improves all P-frames, especially the first one. If all the P-frames are instead compared with the I-frame, the improvement from the denoising step is large for all five P-frames. This experiment suggests comparing P-frames with the previous I-frame rather than with the previous frame, because the error in previous P-frames would otherwise accumulate. Table 1 PSNR values for the five reconstructed P-frames with four methods: difference images between two consecutive frames without the denoising step (m-level); difference images between two consecutive frames with the denoising step (denoising); difference images between P-frames and the I-frame without the denoising step (m-level*); and difference images between P-frames and the I-frame with the denoising step (denoising*).
Table 2 RMSE values for the five reconstructed P-frames with four methods: difference images between two consecutive frames without the denoising step (m-level); difference images between two consecutive frames with the denoising step (denoising); difference images between P-frames and the I-frame without the denoising step (m-level*); and difference images between P-frames and the I-frame with the denoising step (denoising*).
The whole algorithm for P-frame reconstruction is given in Table 3. Steps (a) to (c) show the nonconvex sorted ℓ1 calculation, while steps (d) and (e) are the edge-detection denoising procedure that reduces the error in compressive video sensing. Table 3 P-frame reconstruction algorithm.
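The flow of Table 3 can be sketched end to end in numpy. Here a simple ISTA soft-threshold solver stands in for the nonconvex sorted ℓ1 recovery of steps (a) to (c), and a plain magnitude threshold stands in for the edge-detection denoising of steps (d) and (e); all sizes, rates, and thresholds are illustrative assumptions.

```python
import numpy as np

def recover_sparse(A, y, lam=0.02, iters=2000):
    """ISTA stand-in for the paper's nonconvex sorted l1 solver."""
    L = np.linalg.norm(A, 2) ** 2
    d = np.zeros(A.shape[1])
    for _ in range(iters):
        g = d - A.T @ (A @ d - y) / L
        d = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return d

rng = np.random.default_rng(2)
n, m = 256, 96                       # 16x16 frame, illustrative sampling rate
I_frame = rng.random(n)              # previously reconstructed I-frame
diff_true = np.zeros(n)
diff_true[[5, 40, 200]] = [0.8, -0.6, 0.5]    # only a few pixels change
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # Bernoulli matrix
y = A @ (I_frame + diff_true) - A @ I_frame   # measure only the difference

d_hat = recover_sparse(A, y)         # steps (a)-(c): sparse difference recovery
d_hat[np.abs(d_hat) < 0.05] = 0.0    # steps (d)-(e): crude denoising stand-in
P_frame = I_frame + d_hat            # update the P-frame from the reference
```

Only the three changed pixels survive the denoising threshold, so the reconstructed P-frame matches the true frame up to the small regularization bias.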
3. Experiments

The projection measurement matrices can be implemented by spatial light modulators such as the digital micromirror device (DMD) and liquid crystal on silicon. DMDs run as fast as 32,000 Hz; we use a DMD at 6000 Hz in the experiments. A DMD chip has several thousand microscopic mirrors arranged in a rectangular array on its surface, corresponding to the pixels of the image to be reconstructed. The mirrors can be individually rotated to an on or off state, and these two states correspond to the ±1 entries of the Bernoulli measurement matrix. During the sampling process, the measurement matrix is sent to the DMD controller row by row. The matrices for P-frames are selected from the rear end of the matrix for the previous I-frame; i.e., the P-frame measurement matrix consists of the last rows of the I-frame measurement matrix, with fewer rows since P-frames need fewer measurements. In the experiments, the irradiator is an 850-nm near-infrared source (THORLABS LIU850A), and a silicon photodiode (THORLABS FDS1010) is chosen as the receiver sensor. We validate the proposed nonconvex compressive video sensing system with two experiments: a linearly moving object and a rotating object. In the first experiment, with a linearly moving airplane in Fig. 10, the frame rate is 10 fps, and there is only one P-frame between two consecutive I-frames. The sampling ratios are 18% and 8.5% for I-frames and P-frames, respectively. The proposed system records the whole scene in real time. The second experiment captures the rotation of a fan. As shown in Fig. 11, each blade is designed with a different length for easy identification. There are three P-frames between two consecutive I-frames, and each row of Fig. 11 shows one I-frame in the first column and the three following P-frames in the last three columns. The frame rate is 18 fps, and the sampling ratios are 20% and 9% for I-frames and P-frames, respectively.
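The nesting of the P-frame matrix inside the I-frame matrix can be sketched as follows; the 64 × 64 frame size is an assumption, with the sampling ratios taken from the fan experiment.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64 * 64              # assumed pixel count (one DMD mirror per pixel)
m_I = int(0.20 * n)      # I-frame sampling ratio from the fan experiment
m_P = int(0.09 * n)      # P-frame sampling ratio

# On/off mirror states realize the +/-1 entries of a Bernoulli matrix.
A_I = rng.choice([-1, 1], size=(m_I, n))
A_P = A_I[-m_P:]         # P-frame rows reused from the rear end of A_I

# Fewer rows per P-frame means fewer DMD patterns, hence faster frames.
patterns_per_group = m_I + 3 * m_P   # one I-frame followed by three P-frames
```

Reusing the tail of the I-frame matrix means the DMD controller only ever stores one pattern sequence, and a P-frame costs less than half the patterns of an I-frame.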
4. Conclusions

Nonconvex compressive sensing algorithms require fewer linear measurements to reconstruct a sparse signal than convex algorithms. In this work, the nonconvex sorted ℓ1 approach is employed to reconstruct the difference images, which are sparse, and to decrease the sampling rate. Furthermore, an edge-detection-based denoising step is applied to reduce the error in the difference image. Thus, the proposed method requires fewer measurements than conventional compressive video sensing. We tested the algorithm on real-time video reconstruction in the experiments. Though the frame rate in the experiments is only 18 fps, it can reach up to 105 fps with current DMD mirror speeds (maximum 32,000 Hz).

Acknowledgments

This research was partially supported by National Science Foundation Grants Nos. IIS-0713346 and DMS-1621798, Office of Naval Research Grants Nos. N00014-04-1-0799 and N00014-07-1-0935, the U.S. Army Research Laboratory, and the U.S. Army Research Office under Grant No. W911NF-14-1-0327.

References

1. M. Irvin, T. Kitazawa and T. Suzuki,
“A new generation of MPEG-2 video encoder ASIC and its application to new technology markets,” in Int. Broadcasting Convention Conf., 391–396 (1996).
2. G. J. Sullivan and T. Wiegand, “Video compression from concepts to the H.264/AVC standard,” Proc. IEEE 93(1), 18–31 (2005).
3. “Space shuttle Discovery flight deck by National Geographic,” http://www.gigapan.com/gigapans/102753 (July 2015).
4. M. Hepher, D. Duckett and A. Loening, “High-speed video microscopy and computer enhanced imagery in the pursuit of bubble dynamics,” Ultrason. Sonochem. 7, 229–233 (2000).
5. O. Cossairt, M. Gupta and S. K. Nayar, “When does computational imaging improve performance?,” IEEE Trans. Image Process. 22(2), 447–458 (2013).
6. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).
7. H. Chen et al., “Infrared camera using a single nano-photodetector,” IEEE Sens. J. 13(3), 949–958 (2013).
8. Y. Wang, J. Cao and C. Yang, “Recovery of seismic wavefields based on compressive sensing by an l1-norm constrained trust region method and the piecewise random sub-sampling,” Geophys. J. Int. 187(1), 199–213 (2011).
9. B. Song et al., “Compressive feedback-based motion control for nanomanipulation: theory and applications,” IEEE Trans. Rob. 30(1), 103–114 (2014).
10. P. Zhang et al., “A compressed sensing based ultra-wideband communication system,” in IEEE Int. Conf. on Communications, 1–5 (2009).
11. M. Lustig et al., “Compressed sensing MRI,” IEEE Signal Process. Mag. 25(2), 72–82 (2008).
12. M. B. Wakin et al., “Compressive imaging for video representation and coding,” in Proc. of Picture Coding Symp., 1–6 (2006).
13. A. C. Sankaranarayanan et al., “Compressive acquisition of dynamic scenes,” in European Conf. on Computer Vision, 129–142 (2010).
14. S. Bi et al., “Compressive video recovery using block match multi-frame motion estimation based on single pixel cameras,” Sensors 16(3), 1–8 (2016).
15. M. Iliadis, L. Spinoulas and A. K. Katsaggelos, “Deep fully-connected networks for video compressive sensing,” (2016).
16. J. Chen et al., “Residual distributed compressive video sensing based on double side information,” Acta Autom. Sin. 40(10), 2316–2323 (2014).
17. N. Eslahi, A. Aghagolzadeh and S. Mehdi, “Image/video compressive sensing recovery using joint adaptive sparsity measure,” Neurocomputing 200, 88–109 (2016).
18. V. Stankovic, L. Stankovic and S. Cheng, “Compressive video sampling,” in 16th European Signal Processing Conf., 1–6 (2008).
19. J. Zheng and E. L. Jacobs, “Video compressive sensing using spatial domain sparsity,” Opt. Eng. 48, 087006 (2009).
20. E. Candes, M. Wakin and S. Boyd, “Enhancing sparsity by reweighted l1 minimization,” J. Fourier Anal. Appl. 14(5), 877–905 (2008).
21. R. Chartrand and W. Yin, “Iteratively reweighted algorithms for compressive sensing,” in IEEE Int. Conf. on Acoustics, Speech and Signal Processing, 3869–3872 (2008).
22. I. Daubechies et al., “Iteratively reweighted least squares minimization for sparse recovery,” Commun. Pure Appl. Math. 63(1), 1–38 (2010).
23. A. Cohen, W. Dahmen and R. DeVore, “Compressed sensing and best k-term approximation,” J. Am. Math. Soc. 22(1), 211–231 (2009).
24. J. A. Tropp, “Greed is good: algorithmic results for sparse approximation,” IEEE Trans. Inf. Theory 50(10), 2231–2242 (2004).
25. E. Candes, J. Romberg and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52(2), 489–509 (2006).
26. D. Needell and J. Tropp, “CoSaMP: iterative signal recovery from incomplete and inaccurate samples,” Appl. Comput. Harmon. Anal. 53(12), 93–100 (2010).
27. J. Yang and X. Yuan, “Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization,” Math. Comput. 82(281), 301–329 (2013).
28. R. Chartrand, “Exact reconstruction of sparse signals via nonconvex minimization,” IEEE Signal Process. Lett. 14(10), 707–710 (2007).
29. “Fast L1-L2 minimization via a proximal operator,” (2016).
30. Z. Xu et al., “L1/2 regularization: a thresholding representation theory and a fast solver,” IEEE Trans. Neural Networks Learn. Syst. 23(7), 1013–1027 (2012).
31. X. L. Huang, L. Shi and M. Yan, “Nonconvex sorted l1 minimization for sparse approximation,” J. Oper. Res. Soc. China 3, 207–229 (2015).
32. Y. Wang and W. Yin, “Sparse signal reconstruction via iterative support detection,” SIAM J. Imaging Sci. 3(3), 462–491 (2010).
33. B. Olshausen and D. Field, “Emergence of simple-cell receptive field properties by learning a sparse code for natural images,” Nature 381, 607–609 (1996).
34. D. N. Hein and N. Ahmed, “Video compression using conditional replenishment and motion prediction,” IEEE Trans. Electromagn. Compat. EMC-26(3), 134–142 (1984).
35. M. Duarte et al., “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008).
36. C. Li, “An efficient algorithm for total variation regularization with applications to the single pixel camera and compressive sensing,” MS thesis, Rice University (2009).
Biography

Liangliang Chen received his bachelor's and master's degrees in electrical engineering from the Huazhong University of Science and Technology, Wuhan, China, in 2007 and 2009, respectively. Currently, he is pursuing his PhD at Michigan State University, East Lansing. His research interests include infrared sensing and imaging, ultraweak signal detection in nanosensors, signal processing, analog circuits, and carbon nanotube/graphene nanosensors. Ming Yan received his PhD from the University of California, Los Angeles, in 2012. He is an assistant professor at the Department of Computational Mathematics, Science and Engineering and the Department of Mathematics, Michigan State University. His research interests include signal and image processing, optimization, and parallel and distributed methods for large-scale datasets. Chunqi Qian received his BS degree in chemistry from Nanjing University and his PhD in physical chemistry from the University of California, Berkeley, in 2007. Following postdoctoral training at the National High Magnetic Field Laboratory and the National Institutes of Health, he joined Michigan State University as an assistant professor in radiology. His research interest includes the development and application of imaging technology in biomedical research. Ning Xi received his DSc degree in systems science and mathematics from Washington University in St. Louis, Missouri, USA, in 1993. Currently, he is the chair professor of robotics and automation at the Department of Industrial and Manufacturing Systems, and director of the Emerging Technologies Institute of the University of Hong Kong. He is a fellow of the Institute of Electrical and Electronics Engineers (IEEE). His research interests include robotics, manufacturing automation, micro/nanomanufacturing, nanosensors and devices, and intelligent control and systems.
Zhanxin Zhou received her bachelor's and master's degrees in control engineering from the Second Artillery Engineering College, Xi'an, China, in 1992 and 1997, respectively. She received her PhD in control engineering from Beijing Institute of Technology, Beijing, China, in 2008. Her research interests include infrared imaging, image enhancement, nonlinear filtering, and optimal control. Yongliang Yang received his BS degree in mechanical engineering from Harbin Engineering University, Harbin, China, in 2005. He received his MS and PhD degrees from the University of Arizona, Tucson, USA, in 2012 and 2014, respectively. He has been a research associate at Michigan State University since 2014. His research interests include micro/nanorobotics and their application in biomedicine. Bo Song received his BEng degree in mechanical engineering from Dalian University of Technology, Dalian, China, in 2005, and his MEng degree in electrical engineering from the University of Science and Technology of China, Hefei, China, in 2009. Currently, he is pursuing his PhD at the Department of Electrical and Computer Engineering, Michigan State University, East Lansing. His research interests include nanorobotics, nonvector space control, compressive sensing, and biomechanics. Lixin Dong received his BS and MS degrees in mechanical engineering from Xi'an University of Technology, Xi'an, China, in 1989 and 1992, respectively, and his PhD in microsystems engineering from Nagoya University, Nagoya, Japan, in 2003. He is an associate professor at Michigan State University. His research interests include nanorobotics, nanoelectromechanical systems, mechatronics, mechanochemistry, and nanobiomedical devices. He is a senior editor of the IEEE Transactions on Nanotechnology.