HEVC/H.265 is the state of the art in digital video compression, halving the bandwidth required by the previous H.264 standard. Telemedicine services, and medical video applications in general, can benefit from these advances. However, HEVC is computationally expensive to implement. In this paper, a method for reducing HEVC complexity in the medical environment is proposed. The sequences typically processed in this context contain several homogeneous regions; by leveraging them, the HEVC encoding flow can be simplified while maintaining high quality. Compared with the HM16.2 reference software, the encoding time is reduced by up to 75% with negligible quality loss. Moreover, the algorithm is straightforward to implement on any hardware platform.
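As an illustration of the idea above, the following C sketch shows one common way block homogeneity can be exploited: a variance test that early-terminates the CU quad-tree split, skipping the costly recursive rate-distortion search below that depth. The threshold and block layout are hypothetical, illustrative assumptions, not the paper's actual criterion.

```c
/*
 * Illustrative sketch (not the paper's algorithm): early termination
 * of the HEVC CU quad-tree split for homogeneous blocks.
 * HOMOG_THRESH is a hypothetical, sequence-dependent tuning value.
 */
#include <stdint.h>
#include <stdio.h>

#define HOMOG_THRESH 64.0  /* hypothetical variance threshold */

/* Sample variance of an n x n luma block with row stride `stride`. */
static double block_variance(const uint8_t *blk, int n, int stride)
{
    double sum = 0.0, sq = 0.0;
    for (int y = 0; y < n; y++)
        for (int x = 0; x < n; x++) {
            double p = blk[y * stride + x];
            sum += p;
            sq  += p * p;
        }
    double mean = sum / (n * n);
    return sq / (n * n) - mean * mean;
}

/* Return 1 if the block is homogeneous enough to stop splitting,
 * so the recursive RD search below this depth can be skipped. */
static int skip_further_split(const uint8_t *blk, int n, int stride)
{
    return block_variance(blk, n, stride) < HOMOG_THRESH;
}

int main(void)
{
    uint8_t flat[8 * 8];
    for (int i = 0; i < 64; i++) flat[i] = 128;  /* homogeneous block */
    printf("skip split: %d\n", skip_further_split(flat, 8, 8));
    return 0;
}
```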
KEYWORDS: Video coding, Multimedia, Video, Image compression, Image storage, Video compression, Internet technology, Internet, Image processing, Video processing, Computer programming, Quantization, Detection and tracking algorithms, Algorithm development
The HEVC/H.265 standard was released in 2013. It halves the required bandwidth compared with the previous H.264 standard, opening the door to many relevant applications in multimedia video coding and transmission. Thanks to the HEVC improvements, the real-time constraints of 4K and 8K Ultra High Definition video can be met. Nonetheless, HEVC implementations require a vast amount of resources. In this contribution we propose intra- and inter-prediction techniques that reduce HEVC complexity while complying with the real-time and quality constraints. Performance is noticeably increased with respect to both the HM16.2 reference software and the x265 encoder, while maintaining similar quality.
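A minimal sketch of one such inter-prediction shortcut, assuming a standard Lagrangian rate-distortion framework: if the SKIP/merge candidate is already cheap, the exhaustive motion search and remaining mode decisions can be bypassed. The cost values and threshold below are hypothetical; the paper's actual decision rules are not reproduced here.

```c
/*
 * Illustrative early-SKIP sketch for HEVC inter prediction
 * (an assumption, not the contribution's exact method).
 * SKIP_COST_THRESH is a hypothetical tuning parameter.
 */
#include <stdio.h>

#define SKIP_COST_THRESH 1000.0  /* hypothetical RD-cost threshold */

typedef struct {
    double distortion;  /* e.g., SSE between prediction and source */
    double rate;        /* estimated bits to signal the mode */
} mode_cost;

/* Standard Lagrangian RD cost: J = D + lambda * R. */
static double rd_cost(mode_cost c, double lambda)
{
    return c.distortion + lambda * c.rate;
}

/* Return 1 when the SKIP candidate is cheap enough that full motion
 * estimation and further mode evaluation can be skipped. */
static int early_skip(mode_cost skip_candidate, double lambda)
{
    return rd_cost(skip_candidate, lambda) < SKIP_COST_THRESH;
}

int main(void)
{
    mode_cost skip = { 420.0, 3.0 };  /* toy numbers for illustration */
    printf("early SKIP taken: %d\n", early_skip(skip, 16.0));
    return 0;
}
```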
Recent advances in heterogeneous high-performance computing (HPC) have opened new avenues for demanding remote sensing applications. One of the most popular algorithms for target detection and identification is the automatic target detection and classification algorithm (ATDCA), widely used in the hyperspectral image analysis community. Previous research has already investigated the mapping of ATDCA onto graphics processing units (GPUs) and field-programmable gate arrays (FPGAs), showing impressive speedup factors that allow its exploitation in time-critical scenarios. Building on these studies, our work explores the performance portability of a tuned OpenCL implementation across a range of processing devices, including multicore processors, GPUs, and other accelerators. This approach differs from previous papers, which focused on achieving optimal performance on each platform. Here, we are more interested in the following issues: (1) evaluating whether a single code written in OpenCL achieves acceptable performance across all of them, and (2) assessing the gap between our portable OpenCL code and the hand-tuned versions previously investigated. Our study includes the analysis of different tuning techniques that expose data parallelism and enable efficient exploitation of the complex memory hierarchies found in these new heterogeneous devices. Experiments have been conducted using hyperspectral data sets collected by NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the Hyperspectral Digital Imagery Collection Experiment (HYDICE) sensors. To the best of our knowledge, this kind of analysis has not been previously conducted in the hyperspectral imaging literature, and it is important for realistically calibrating the potential of heterogeneous platforms for efficient hyperspectral image processing in real remote sensing missions.
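For context, the computational core that such OpenCL ports parallelize is the per-pixel orthogonal projection at the heart of ATDCA. The kernel below is a minimal, untuned sketch of that step in OpenCL C (an assumption for illustration, not the authors' tuned code): each work-item projects one hyperspectral pixel, and the host then reduces the brightness array to pick the next target.

```c
/*
 * Minimal OpenCL C kernel sketch of the ATDCA projection step
 * (illustrative, not the authors' tuned implementation).
 * P is the projector onto the subspace orthogonal to the targets
 * found so far; the pixel with maximum projected brightness becomes
 * the next target.
 */
__kernel void atdca_projection(
    __global const float *image,   /* num_pixels x num_bands, row-major */
    __global const float *P,       /* num_bands x num_bands projector   */
    __global float *brightness,    /* per-pixel output                  */
    const int num_bands)
{
    int pix = get_global_id(0);
    const __global float *x = image + (size_t)pix * num_bands;

    /* Because P is symmetric and idempotent, x^T P x = ||P x||^2. */
    float acc = 0.0f;
    for (int i = 0; i < num_bands; i++) {
        float pxi = 0.0f;
        for (int j = 0; j < num_bands; j++)
            pxi += P[i * num_bands + j] * x[j];
        acc += x[i] * pxi;
    }
    brightness[pix] = acc;
}
```

Keeping the kernel this generic is precisely what the portability question above probes: the same source runs on CPUs, GPUs, and accelerators, while device-specific tuning (work-group sizes, local-memory tiling of P) accounts for the gap against hand-tuned versions.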
Trex Enterprises Corporation has developed a full-body passive millimeter-wave security screening imager. The system images naturally occurring W-band blackbody radiation, which penetrates most types of clothing. When operated indoors, the primary mechanism for image formation is the contrast between body heat radiation and the room-temperature radiation emitted or reflected by concealed objects that are opaque at millimeter wavelengths. Trex Enterprises has previously demonstrated that an imager noise level of 0.25 to 0.5 K is necessary to detect and image small concealed threats indoors. Achieving this noise level in a head-to-toe image required image collection times of 24 seconds with the previous imager design. This paper first discusses the measurement of the noise temperature of the MMW detectors employed. It then explores reducing the image collection times through a new front-end amplifier design and the addition of more imaging units. By changing the orientation and direction of travel of the imaging units, the new design is able to employ more detectors and collect imagery from a subject's front and sides. The combination of lower-noise amplifiers and a new scanning architecture results in an imager appropriate for high-throughput security screening scenarios. Imagery from the new configuration is also presented.
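To see why lower-noise amplifiers and additional imaging units both shorten collection time, consider the standard total-power radiometer equation, NETD = T_sys / sqrt(B·τ). The C snippet below works through it with assumed illustrative values for system noise temperature and bandwidth; none of these figures come from the paper.

```c
/*
 * Back-of-envelope sketch using the standard radiometer equation,
 * NETD = T_sys / sqrt(B * tau). All numeric values are assumed
 * illustrative inputs, not figures from the paper.
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double t_sys = 500.0;   /* assumed system noise temperature, K */
    double bw    = 10.0e9;  /* assumed predetection bandwidth, Hz  */
    double netd  = 0.5;     /* target per-pixel noise level, K     */

    /* Integration time per pixel needed to reach the target NETD. */
    double tau = pow(t_sys / netd, 2.0) / bw;
    printf("required integration time per pixel: %.3g s\n", tau);

    /* Lower-noise amplifiers reduce t_sys (a quadratic win in tau),
     * and more imaging units integrate pixels in parallel, so both
     * shrink the head-to-toe frame collection time. */
    return 0;
}
```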
Conference Committee Involvement (4)
High-Performance Computing in Geoscience and Remote Sensing
13 September 2018 | Berlin, Germany
High-Performance Computing in Geoscience and Remote Sensing
12 September 2017 | Warsaw, Poland
High-Performance Computing in Geoscience and Remote Sensing