Significance: fNIRS-based neuroenhancement depends on the feasible detection of hemodynamic responses in target brain regions. Using the lateral occipital complex (LOC) and the fusiform face area (FFA) in the ventral visual pathway as neurofeedback targets can boost performance in visual recognition. However, the feasibility of using fNIRS to detect LOC and FFA activity in adults remains to be validated, as the depth of these regions may exceed the detection limit of fNIRS.

Aim: This study investigates the feasibility of using fNIRS to measure hemodynamic responses in the ventral visual pathway, specifically in the LOC and FFA, in adults.

Approach: We recorded hemodynamic activity in the LOC and FFA regions of 35 subjects using a portable eight-channel fNIRS instrument. A standard one-back object and face recognition task was used to elicit selective brain responses in the LOC and FFA regions. The placement of the fNIRS optodes for LOC and FFA detection was guided by our group's transcranial brain atlas (TBA).

Results: The LOC target channel (CH2) was selectively activated in response to objects, whereas the FFA target channel (CH7) did not exhibit selective activation in response to faces.

Conclusions: Our findings indicate that, although fNIRS has limitations in capturing FFA activity, the LOC emerges as a viable target for fNIRS-based detection. Furthermore, our results support the TBA-based method for setting the LOC target channel, offering a promising solution for optode placement. This feasibility study is the first validation of fNIRS for detecting cortical activity in the ventral visual pathway, underscoring its ecological validity, and establishes a technical groundwork for prospective real-life applications of fNIRS-based research.
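To make the notion of "selective activation" of a target channel concrete, the sketch below shows one common way such a channel-wise contrast is computed: average the oxy-Hb amplitude in a post-stimulus window per trial and compare conditions with a paired t-test. This is a minimal illustration, not the paper's actual pipeline; the sampling rate, analysis window, and all array names are assumptions.

```python
# Minimal sketch of a channel-wise selectivity test, assuming preprocessed
# oxy-Hb time series already epoched into trials. All data and parameters
# below are hypothetical placeholders, not the study's actual values.
import numpy as np
from scipy import stats

def mean_response(epochs, fs=10.0, window=(5.0, 15.0)):
    """Average oxy-Hb amplitude in a post-stimulus window.

    epochs: array of shape (n_trials, n_samples) for one channel.
    fs: sampling rate in Hz (assumed); window: seconds after onset.
    """
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    return epochs[:, lo:hi].mean(axis=1)

# Hypothetical epoched data for the LOC target channel (CH2):
# 20 object trials and 20 face trials, 200 samples each.
rng = np.random.default_rng(0)
object_epochs = rng.normal(0.5, 1.0, (20, 200))
face_epochs = rng.normal(0.0, 1.0, (20, 200))

obj_amp = mean_response(object_epochs)
face_amp = mean_response(face_epochs)

# Selectivity: object responses significantly larger than face responses.
t, p = stats.ttest_rel(obj_amp, face_amp)
print(f"CH2 object vs. face: t = {t:.2f}, p = {p:.4f}")
```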
Deep neural networks (DNNs) have recently been applied to the video compressive sensing (VCS) task. Existing DNN-based VCS methods compress and reconstruct the scene video in either the spatial or the temporal dimension alone, ignoring the spatial-temporal correlation of the video. Moreover, they generally use a pixel-wise loss as the loss function, which over-smooths the results. In this paper, we propose a perceptual spatial-temporal VCS network. The spatial-temporal VCS network, which compresses and recovers the video in both the space and time dimensions, preserves the spatial-temporal correlation of the video. In addition, we refine the perceptual loss by selecting specific feature-wise loss terms and adding a pixel-wise loss term; the refined perceptual loss guides the spatial-temporal network to retain more texture and structure. Experimental results show that the proposed method achieves better visual quality with less recovery time than the state of the art.
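A loss of this shape, selected feature-wise terms plus a pixel-wise term, can be sketched as follows. This is a generic illustration under assumed settings: the VGG-16 layers chosen and the term weights are placeholders, not the paper's configuration.

```python
# Minimal PyTorch sketch of a refined perceptual loss: selected VGG
# feature-wise MSE terms plus a pixel-wise MSE term. Layer indices and
# weights are illustrative assumptions, not the paper's exact settings.
import torch
import torch.nn as nn
from torchvision import models

class RefinedPerceptualLoss(nn.Module):
    def __init__(self, feature_layers=(3, 8), pixel_weight=1.0, feat_weight=0.01):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad = False  # fixed feature extractor
        self.vgg = vgg
        self.feature_layers = set(feature_layers)
        self.pixel_weight = pixel_weight
        self.feat_weight = feat_weight
        self.mse = nn.MSELoss()

    def forward(self, recon, target):
        # Pixel-wise term keeps low-frequency fidelity.
        loss = self.pixel_weight * self.mse(recon, target)
        # Feature-wise terms from selected layers retain texture and structure.
        x, y = recon, target
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.feature_layers:
                loss = loss + self.feat_weight * self.mse(x, y)
            if i >= max(self.feature_layers):
                break
        return loss

# Usage on 3-channel frames (grayscale frames would be replicated to 3 channels):
criterion = RefinedPerceptualLoss()
loss = criterion(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
```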
The Dynamic Vision Sensor (DVS) is an event-based camera that captures pixel-level brightness changes in the scene as a stream of events. In this paper, we take a distinctive approach and visualize the events the DVS captures as "DVS images". The DVS is sensitive enough to capture objects moving at high speed, but it also captures noise. To improve image quality, we remove the noise from these images. Unlike traditional images, both the noise and the objects in "DVS images" consist of scattered points, so traditional denoising methods are hard to apply. This paper proposes an efficient approach to "DVS image" noise removal based on the K-SVD algorithm, which we adapt to this application. The proposed framework can handle "DVS images" containing different amounts of noise. Experiments show that the proposed method works well on both a fixed DVS and a moving DVS.
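The general shape of such a pipeline, learn a patch dictionary, sparse-code each patch, and reconstruct, can be sketched as below. Note the hedges: scikit-learn's mini-batch dictionary learner stands in for a full K-SVD implementation, and the patch size, atom count, and sparsity level are illustrative assumptions.

```python
# Minimal sketch of patch-based dictionary denoising in the spirit of
# K-SVD, using scikit-learn's dictionary learner as a stand-in for a
# full K-SVD implementation; all parameters are illustrative assumptions.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (
    extract_patches_2d, reconstruct_from_patches_2d)

def denoise_dvs_image(img, patch_size=(8, 8), n_atoms=64, n_nonzero=3):
    """img: 2-D float array (an event-accumulation 'DVS image')."""
    patches = extract_patches_2d(img, patch_size)
    flat = patches.reshape(len(patches), -1)
    mean = flat.mean(axis=1, keepdims=True)
    flat = flat - mean  # zero-mean patches, as in standard K-SVD pipelines

    dico = MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=n_nonzero,
        random_state=0,
    ).fit(flat)

    # Sparse-code each patch and reconstruct; isolated noise points are
    # poorly represented by the learned atoms and get suppressed.
    codes = dico.transform(flat)
    recon = (codes @ dico.components_ + mean).reshape(patches.shape)
    return reconstruct_from_patches_2d(recon, img.shape)

# Hypothetical usage on a sparse noisy frame:
noisy = (np.random.default_rng(0).random((64, 64)) > 0.95).astype(float)
clean = denoise_dvs_image(noisy)
```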
Small-object detection is a challenging task in computer vision because small objects have limited resolution and carry little information. To address this, most existing methods sacrifice speed for improved accuracy. In this paper, we aim to detect small objects at fast speed, using the Single Shot MultiBox Detector (SSD), one of the best detectors with respect to the accuracy-vs-speed trade-off, as the base architecture. We propose a multi-level feature fusion method that introduces contextual information into SSD to improve accuracy on small objects. For the fusion operation, we design two feature fusion modules, a concatenation module and an element-sum module, which differ in how they add contextual information. Experimental results show that the two fusion modules improve mAP on PASCAL VOC2007 over the baseline SSD by 1.6 and 1.7 points respectively, with gains of 2-3 points on some small-object categories. Their testing speeds are 43 and 40 FPS respectively, exceeding the state-of-the-art Deconvolutional Single Shot Detector (DSSD) by 29.4 and 26.4 FPS.
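The two fusion styles can be illustrated as below: a deeper (contextual) feature map is projected and upsampled to the shallow map's resolution, then combined either by channel concatenation or by element-wise sum. This is a generic sketch; the channel sizes and the 1x1/3x3 layer choices are assumptions, not the paper's exact module design.

```python
# Minimal PyTorch sketch of concatenation vs. element-sum fusion of a
# shallow feature map with an upsampled deeper (contextual) map.
# Channel sizes and layer choices are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConcatFusion(nn.Module):
    def __init__(self, c_shallow, c_deep, c_out):
        super().__init__()
        self.reduce = nn.Conv2d(c_deep, c_shallow, kernel_size=1)
        self.smooth = nn.Conv2d(2 * c_shallow, c_out, kernel_size=3, padding=1)

    def forward(self, shallow, deep):
        deep = self.reduce(deep)
        deep = F.interpolate(deep, size=shallow.shape[-2:],
                             mode="bilinear", align_corners=False)
        # Concatenate along channels, then smooth.
        return F.relu(self.smooth(torch.cat([shallow, deep], dim=1)))

class ElementSumFusion(nn.Module):
    def __init__(self, c_shallow, c_deep, c_out):
        super().__init__()
        self.project = nn.Conv2d(c_deep, c_shallow, kernel_size=1)
        self.smooth = nn.Conv2d(c_shallow, c_out, kernel_size=3, padding=1)

    def forward(self, shallow, deep):
        deep = self.project(deep)
        deep = F.interpolate(deep, size=shallow.shape[-2:],
                             mode="bilinear", align_corners=False)
        # Add element-wise, then smooth.
        return F.relu(self.smooth(shallow + deep))

# Hypothetical SSD-like feature maps: conv4_3-style (38x38) and conv7-style (19x19).
shallow = torch.randn(1, 512, 38, 38)
deep = torch.randn(1, 1024, 19, 19)
print(ConcatFusion(512, 1024, 512)(shallow, deep).shape)      # [1, 512, 38, 38]
print(ElementSumFusion(512, 1024, 512)(shallow, deep).shape)  # [1, 512, 38, 38]
```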