The initial steps of many computer vision algorithms are local feature extraction and matching. However, in the problem of recognizing objects in images with complex backgrounds, this approach has a weak point: keypoints may be found not only on the object of interest but also in the background, which leads to redundant computations and can cause mismatches. In this paper, we propose a keypoint filtering method applicable to the problem of classification and localization of ID documents in the wild. Using a lightweight deep learning model, keypoints are divided into "document" and "background" classes, after which the background keypoints are removed. Experimental results show that adding the proposed filtering step gives an average speedup of 3.14% on the entire MIDV-500 dataset and 14.77% on MIDV-2020. At the same time, the acceleration on target images with complex backgrounds reaches 81%.
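A minimal sketch of the filtering step described above, under our own assumptions: a lightweight per-keypoint classifier scores each keypoint as "document" or "background", and background keypoints are dropped before descriptor matching. All names, shapes, and the stand-in classifier are illustrative, not the authors' implementation.

```python
import numpy as np

def filter_keypoints(keypoints, descriptors, patches, classifier, threshold=0.5):
    """Keep only keypoints whose predicted 'document' probability exceeds the threshold.

    `classifier` is assumed to map an array of image patches around the keypoints
    to per-keypoint probabilities of the 'document' class.
    """
    probs = classifier(patches)          # shape: (num_keypoints,)
    keep = probs >= threshold            # boolean mask of "document" keypoints
    return keypoints[keep], descriptors[keep]

# Toy usage with a stand-in classifier (a real one would be a small CNN).
rng = np.random.default_rng(0)
kps = rng.uniform(0, 512, size=(100, 2))      # (x, y) keypoint coordinates
descs = rng.normal(size=(100, 64))            # descriptors
patches = rng.normal(size=(100, 32, 32))      # patches around keypoints
dummy_classifier = lambda p: rng.uniform(size=len(p))
kps_doc, descs_doc = filter_keypoints(kps, descs, patches, dummy_classifier)
```

Only the surviving keypoints and descriptors are then passed to the matching stage, which is where the reported speedups come from.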
U-Net-like architectures are widely used in the task of document image binarization. However, despite their good binarization quality, they also have high computational complexity, which greatly limits their use on mobile and embedded devices. The performance bottleneck of U-Net architectures lies in the first encoder layers and the last decoder layers, which operate on high-resolution input data and contain the largest number of operations. Based on this observation, in this paper we propose a new Threshold U-Net model: instead of predicting the final image, Threshold U-Net predicts a low-resolution adaptive threshold map with which the input image is binarized. The proposed architecture naturally combines the ideas of classical algorithms that calculate a binarization threshold for a specific image region with an approach based on a deep learning model with a large receptive field and an understanding of context. Threshold U-Net demonstrates binarization quality on historical documents comparable to U-Net on the DIBCO-2017 dataset. At the same time, depending on the resolution of the threshold map, Threshold U-Net is up to 2 times faster, requires up to 26% less RAM, and has up to 10% fewer parameters.
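The final binarization step implied by the abstract can be sketched as follows: the predicted low-resolution threshold map is upsampled to the input resolution and compared against the grayscale image. The nearest-neighbour upsampling and all names here are our assumptions, not the paper's exact code.

```python
import numpy as np

def binarize_with_threshold_map(image, threshold_map):
    """image: (H, W) grayscale in [0, 1]; threshold_map: (H/k, W/k) predicted map."""
    ky = image.shape[0] // threshold_map.shape[0]
    kx = image.shape[1] // threshold_map.shape[1]
    # Nearest-neighbour upsampling of the per-region thresholds to full resolution.
    full_thresholds = np.repeat(np.repeat(threshold_map, ky, axis=0), kx, axis=1)
    return (image > full_thresholds).astype(np.uint8)   # 1 = background, 0 = ink

# Toy usage: a 256x256 image with an 8x-downsampled threshold map.
img = np.random.default_rng(1).uniform(size=(256, 256))
tmap = np.full((32, 32), 0.5)
binary = binarize_with_threshold_map(img, tmap)
```

Because the network itself only has to produce the small threshold map, the expensive high-resolution layers can be avoided, which is where the speed and memory savings come from.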
In this work, we present an auto-clustering method that can be used for pattern recognition tasks and applied to the training of a metric convolutional neural network. The main idea is that the algorithm creates clusters consisting of classes that are similar from the network's point of view. Using these clusters allows the network to pay more attention to classes that are hard to distinguish. This method improves the generation of pairs during the training process, which is a relevant problem because the quality of training strongly depends on how the training data is generated. The algorithm works in parallel with the training process and is fully automatic. To evaluate this method, we chose the Korean alphabet with the corresponding PHD08 dataset and compared our auto-clustering with random mining, hard mining, and distance-based mining. The open-source framework Tesseract OCR 4.0.0 was also considered as a baseline.
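A rough sketch of the auto-clustering idea, under our own assumptions: classes that the current network embeds close together are grouped into clusters, and training pairs are then sampled mostly from within a cluster so the network focuses on classes it still confuses. Function names, the use of k-means, and all parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_classes(class_embeddings, n_clusters):
    """class_embeddings: (num_classes, dim) mean embedding per class from the current network."""
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(class_embeddings)

def sample_negative_pair(cluster_ids, rng):
    """Pick two different classes from the same cluster as a 'hard' negative pair."""
    cluster = rng.choice(cluster_ids)
    candidates = np.flatnonzero(cluster_ids == cluster)
    if len(candidates) < 2:          # degenerate cluster: fall back to random classes
        return tuple(rng.choice(len(cluster_ids), size=2, replace=False))
    return tuple(rng.choice(candidates, size=2, replace=False))

rng = np.random.default_rng(2)
embeddings = rng.normal(size=(2350, 128))   # e.g. one mean embedding per character class
clusters = cluster_classes(embeddings, n_clusters=50)
class_a, class_b = sample_negative_pair(clusters, rng)
```

In practice the clustering would be recomputed periodically from fresh embeddings, which is what allows it to run in parallel with training.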
In this work we study the effect of activation functions in a neural network. We consider how activation functions with different properties, as well as their combinations, affect the final quality of the model. Due to optimization and speed issues with most bounded functions of the sigmoid family, we propose a generalized version of the SoftSign function, the ratio function (rf). Its shape strongly depends on the introduced degree parameter, which in theory leads to a new and interesting property: contraction to zero. For evaluation, we chose the image binarization problem: starting from the U-Net architecture of the DIBCO-2017 winners, we conducted all experiments by replacing only the activation functions. Our research led to state-of-the-art binarization quality on the DIBCO-2017 test dataset: U-Net with modified activation functions significantly outperforms all existing solutions in all metrics.
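One plausible form of the ratio function consistent with the description above, treated here as an assumption since the abstract does not give the exact formula: a SoftSign-like ratio with a degree parameter d, where d = 1 recovers SoftSign and d > 1 produces the "contraction to zero" behaviour for large inputs.

```python
import numpy as np

def ratio_function(x, d=1.0):
    """Assumed form rf_d(x) = x / (1 + |x|**d).

    For d = 1 this is the classic SoftSign; for d > 1 the output tends to zero
    as |x| grows, i.e. the 'contraction to zero' property mentioned in the text.
    """
    return x / (1.0 + np.abs(x) ** d)

x = np.linspace(-10, 10, 5)
print(ratio_function(x, d=1.0))   # SoftSign-like, saturates towards +/-1
print(ratio_function(x, d=2.0))   # contracts towards 0 for large |x|
```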
Regularization methods play an important role in training artificial neural networks, improving generalization performance and preventing overfitting. In this paper, we introduce a new regularization method based on the orthogonalization of convolutional layer filters. The proposed method is easy to implement and has plug-and-play compatibility with modern training approaches, requiring no changes or adaptations on their part. Experiments on the MNIST and CIFAR10 datasets showed that the effectiveness of the suggested method depends on the number of filters in the layer, and that the maximum increase in quality is achieved for architectures with a small number of parameters, which is important for training fast and lightweight neural networks.
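A hedged sketch of filter orthogonalization as a soft penalty, which is our reading of the abstract rather than the authors' exact formulation: flatten each convolutional filter into a row vector and penalize the deviation of the filters' Gram matrix from identity, adding the penalty to the task loss with a small weight.

```python
import torch

def orthogonality_penalty(conv_weight):
    """conv_weight: (out_channels, in_channels, kH, kW) tensor of one conv layer."""
    w = conv_weight.view(conv_weight.size(0), -1)   # one row per filter
    gram = w @ w.t()                                # pairwise filter correlations
    identity = torch.eye(w.size(0), device=w.device)
    return ((gram - identity) ** 2).sum()           # squared Frobenius norm

conv = torch.nn.Conv2d(3, 16, kernel_size=3)
penalty = orthogonality_penalty(conv.weight)
# In training this would simply be added to the task loss, e.g.:
# total_loss = task_loss + 1e-4 * penalty
```

Because the penalty is just an extra additive loss term, it does not interfere with the optimizer, scheduler, or other parts of the training pipeline, which is what the plug-and-play claim refers to.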
In this paper we study real-time augmentation, a method for increasing the variability of the training dataset during the learning process. We consider the most common label-preserving deformations, which can be useful in many practical tasks. Due to the limitations of existing augmentation tools, such as increased training time or dependence on a specific platform, we developed our own real-time augmentation system. Experiments on the MNIST and SVHN datasets demonstrated the effectiveness of the suggested approach: the quality of the trained models improves, while the training time remains the same as if augmentation were not used.
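A minimal sketch of on-the-fly (real-time) augmentation: each time a batch is drawn, a random label-preserving deformation is applied, so the model almost never sees exactly the same image twice. The specific transforms and their parameters here are illustrative assumptions, not the system described in the paper.

```python
import numpy as np
from scipy import ndimage

def augment(image, rng):
    """Apply a small random rotation and shift; the class label is unchanged."""
    angle = rng.uniform(-10, 10)             # degrees
    shift = rng.uniform(-2, 2, size=2)       # pixels along (y, x)
    out = ndimage.rotate(image, angle, reshape=False, mode="nearest")
    return ndimage.shift(out, shift, mode="nearest")

def training_batches(images, labels, batch_size, rng):
    """Generator yielding freshly augmented batches during training."""
    while True:
        idx = rng.integers(0, len(images), size=batch_size)
        yield np.stack([augment(images[i], rng) for i in idx]), labels[idx]

rng = np.random.default_rng(3)
images = rng.uniform(size=(1000, 28, 28))    # e.g. MNIST-sized images
labels = rng.integers(0, 10, size=1000)
batch_x, batch_y = next(training_batches(images, labels, 32, rng))
```

To keep training time unchanged, such a generator would typically run in a background worker so that augmented batches are ready before the GPU needs them.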
This paper addresses one of the fundamental problems of machine learning: acquiring training data. Obtaining enough natural training data is rather difficult and expensive. In recent years, the use of synthetic images has become increasingly beneficial, as it saves human time and provides a large number of images that would otherwise be difficult to obtain. However, for successful learning on an artificial dataset, one should try to reduce the gap between the natural and synthetic data distributions. In this paper we describe an algorithm for creating artificial training datasets for OCR systems, using the Russian passport as a case study.
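A highly simplified sketch of synthetic text-field generation for OCR training; the pipeline in the paper, built around real document templates, fonts, and degradations, is far richer, and all names and parameters below are assumptions.

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def render_text_field(text, size=(200, 32), rng=None):
    """Render a text line on a light background and add mild noise."""
    rng = rng or np.random.default_rng()
    img = Image.new("L", size, color=int(rng.integers(200, 256)))   # paper-like background
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()            # stand-in for real document fonts
    draw.text((5, 8), text, fill=int(rng.integers(0, 60)), font=font)
    arr = np.asarray(img, dtype=np.float32)
    arr += rng.normal(0, 5, size=arr.shape)    # mild sensor-like noise
    return np.clip(arr, 0, 255).astype(np.uint8)

sample = render_text_field("IVANOV IVAN IVANOVICH")   # hypothetical field value
```

Narrowing the gap to natural data would then mean matching fonts, layouts, lighting, and capture artifacts of real document photographs, which is the harder part of such a pipeline.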