KEYWORDS: Point clouds, 3D modeling, Mouth, Head, Eye models, Nose, Information visualization, Data modeling, Visualization, Principal component analysis
Haniwa are clay figures made during the Kofun period for rituals and as talismans against evil, which makes them important archaeological materials. When classifying who created a Haniwa and where it was created, archaeologists draw on their expertise to observe visual features such as shapes, sizes, ornaments, and noses. However, classification by observation is largely subjective, so an objective evaluation method is required. In this study, to find the facial parts of Haniwa automatically, point clouds are projected onto a plane and facial parts are extracted from the positions of the holes representing eyes and mouths. Automatic extraction of facial parts was achieved by adjusting, for each model, the threshold used to judge the size of the holes.
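The projection-and-hole-detection step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the PCA projection, grid resolution, and area thresholds are assumptions, and `find_holes` is a hypothetical helper name.

```python
import numpy as np
from collections import deque

def find_holes(points, grid_size=64, min_area=3, max_area=200):
    """Project a 3D point cloud onto its two principal axes (PCA),
    rasterize it into a binary occupancy grid, and return the enclosed
    empty regions ("holes") whose pixel area lies in [min_area, max_area].
    Holes of the right size are candidate eyes and mouths; the area
    thresholds would be tuned per model, as the abstract notes."""
    pts = points - points.mean(axis=0)
    # PCA via SVD: the first two right singular vectors span the
    # best-fit projection plane.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    uv = pts @ vt[:2].T
    # Rasterize the projected points into an occupancy grid.
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    ij = ((uv - lo) / (hi - lo + 1e-9) * (grid_size - 1)).astype(int)
    occ = np.zeros((grid_size, grid_size), dtype=bool)
    occ[ij[:, 0], ij[:, 1]] = True
    # Flood-fill empty cells; empty regions not touching the border
    # are holes in the projected silhouette.
    seen = np.zeros_like(occ)
    holes = []
    for si in range(grid_size):
        for sj in range(grid_size):
            if occ[si, sj] or seen[si, sj]:
                continue
            region, touches_border = [], False
            q = deque([(si, sj)])
            seen[si, sj] = True
            while q:
                i, j = q.popleft()
                region.append((i, j))
                if i in (0, grid_size - 1) or j in (0, grid_size - 1):
                    touches_border = True
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < grid_size and 0 <= nj < grid_size
                            and not occ[ni, nj] and not seen[ni, nj]):
                        seen[ni, nj] = True
                        q.append((ni, nj))
            if not touches_border and min_area <= len(region) <= max_area:
                holes.append(region)
    return holes
```

On a synthetic planar point cloud with two rectangular gaps, the function returns exactly the two enclosed regions, which mirrors how eye and mouth openings appear after projection.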
This paper proposes a quantitative method for calculating and evaluating the facial similarity of human-shaped Haniwa based on 2D images generated from 3D point clouds measured from actual Haniwa artifacts. In Japanese archaeological research, it is extremely important to clarify the excavation information, production process, and artistic value of human-shaped Haniwa, and to classify (group) each object appropriately. To achieve this quantitatively, Lu et al. proposed a similarity evaluation method that directly uses the 3D measurement point clouds of Haniwa, and reported that its results are to some extent consistent with the clustering previously obtained subjectively by archaeologists. However, the computation time required by the iterative 3D point-matching algorithm remains a problem. This paper therefore proposes a simpler method for objective and quantitative similarity evaluation that consumes fewer computational resources by appropriately converting the 3D point clouds into 2D images, and shows that the resulting clustering is as good as in the previous research.
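One plausible form of the 3D-to-2D conversion is an orthographic depth map compared with a simple image-similarity score. The abstract does not specify the projection or the similarity measure, so the following is only an illustrative sketch; `depth_image`, `similarity`, and the cosine metric are assumptions.

```python
import numpy as np

def depth_image(points, size=32):
    """Orthographic depth map: project points onto the xy-plane and
    keep the maximum z value per pixel. This turns a 3D point cloud
    into a 2D image that can be compared cheaply, avoiding iterative
    3D point matching."""
    lo, hi = points[:, :2].min(axis=0), points[:, :2].max(axis=0)
    ij = ((points[:, :2] - lo) / (hi - lo + 1e-9) * (size - 1)).astype(int)
    img = np.full((size, size), -np.inf)
    np.maximum.at(img, (ij[:, 0], ij[:, 1]), points[:, 2])
    img[np.isinf(img)] = 0.0   # pixels hit by no point -> background
    return img

def similarity(a, b):
    """Cosine similarity between two depth images, in [-1, 1]."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Comparing fixed-size depth images costs a single dot product per pair, which is why an image-based pipeline can consume far less computation than iterative point-cloud registration.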
In this paper, we experimentally verify that deep learning based on Convolutional Neural Networks (CNNs) is an effective method for judging the freshness of seafood fillets from their images. Currently, the freshness of seafood in the Japanese fishing industry is generally estimated by eye by experienced connoisseurs. However, this requires many years of experience and is a difficult task for new workers. Furthermore, the Japanese fishing industry faces a worker shortage: an aging workforce and a lack of new workers make it difficult to pass these skills on to the next generation.
Meanwhile, CNNs have achieved considerable success in image recognition. If deep learning can be applied to predicting the freshness of fish, i.e., if the freshness of a subject can be inferred from an image of the seafood, it would offer a solution to the problems of aging and labor shortage in Japan's fisheries industry. Therefore, in this paper, we verify that CNN-based deep learning models are effective in estimating/predicting seafood freshness using tuna and squid, which are commonly caught in Iwate Prefecture, Japan, and show in experiments with over 12,000 images that representative CNN models such as ResNet-50 achieve nearly 100% prediction accuracy.
This paper discusses an approach to training-data augmentation for identifying and detecting tactile paving (braille) blocks in images with machine learning. An image recognition system that assists the visually impaired must identify the guiding blocks in an image efficiently, and machine learning, represented by CNNs, is considered effective for this purpose. However, it is labor-intensive to collect a sufficient number of training images from the pedestrian's perspective in various environments, and if the amount of training data is insufficient, the identification performance of the entire system is degraded. To solve these problems, this paper attempts to generate new training data from a small number of tactile-paving images with a GAN (Generative Adversarial Network), and demonstrates through evaluation experiments that adding the generated images to the training data can stabilize learning and improve the correct-answer rate.
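The adversarial training at the core of a GAN augmentation pipeline can be sketched as one discriminator update followed by one generator update. Everything below is an illustrative minimum (PyTorch, 32x32 single-channel patches, MLP networks), not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a noise vector to a fake 32x32 image patch (assumed size)."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 32 * 32), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 1, 32, 32)

class Discriminator(nn.Module):
    """Scores a patch as real (high logit) or generated (low logit)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )
    def forward(self, x):
        return self.net(x)

def gan_step(G, D, real, opt_g, opt_d, z_dim=64):
    """One adversarial update: D learns to separate real tactile-paving
    patches from G's fakes, then G learns to fool D. Once trained,
    batches drawn from G would be mixed into the CNN's training set."""
    bce = nn.BCEWithLogitsLoss()
    n = real.size(0)
    fake = G(torch.randn(n, z_dim))
    # Discriminator update (detach so G is not updated here).
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(n, 1))
              + bce(D(fake.detach()), torch.zeros(n, 1)))
    d_loss.backward()
    opt_d.step()
    # Generator update: push D's score on fakes toward "real".
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(n, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Real systems use convolutional generators and discriminators (DCGAN-style) at higher resolution, but the alternating update scheme is the same.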
KEYWORDS: 3D image reconstruction, 3D modeling, RGB color model, 3D image processing, Reflectivity, Atomic force microscopy, Unmanned aerial vehicles, Image restoration, Point clouds, Near infrared
This paper discusses 3D reconstruction of a rice plant community from multiple-viewpoint images taken by a small UAV, using spectral reflectance images instead of the usual RGB images. In addition, the paper examines the correlation between "the number of reconstructed 3D points" and "indices related to rice plant breeding" such as the number of matured grains. It is known that plants such as rice have higher spectral reflectance intensity in the red to red-edge wavelength range (640-770 nm). This suggests that the correlation between the number of reconstructed points and the evaluation indices will be stronger when reconstruction uses spectral reflectance images in these specific wavelength ranges rather than ordinary RGB images. This paper therefore focuses on the red-channel image and attempts 3D reconstruction after applying threshold processing. The paper also demonstrates through experimental results that the correlation coefficient between the number of reconstructed points and the measured evaluation indices is relatively high; in other words, 3D reconstruction using specific-wavelength images can potentially be applied to crop phenotyping.
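The threshold-processing step on the red channel might look like the sketch below: suppress pixels with low red-band reflectance so that feature matching and 3D reconstruction operate mainly on vegetation. The 0.5 threshold and the helper name `red_threshold_mask` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def red_threshold_mask(image, threshold=0.5):
    """Keep only pixels whose red-band reflectance exceeds a threshold.
    `image` is an H x W x 3 array with values in [0, 1]; channel 0 is
    taken to be the red (or red-edge) band. Returns the masked image
    and the boolean vegetation mask."""
    mask = image[..., 0] > threshold
    out = image.copy()
    out[~mask] = 0.0   # suppress low-reflectance (background) pixels
    return out, mask
```

The masked images would then be fed to a standard multi-view reconstruction pipeline (e.g. structure-from-motion), so that the reconstructed point count reflects plant matter rather than soil or background.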