On the epidermal surface of plants, pore-like structures called stomata, each bounded by a pair of kidney-shaped guard cells, play an important role in plant health under drought conditions. Stomata open and close to regulate transpiration. Under drought, lowering stomatal transpiration allows plants to escape water stress, provided the rate of photosynthesis remains balanced; a greater number of open stomata indicates that plants are experiencing drought stress. To assess the stomatal response, one must derive the pore aperture ratio: the lower the ratio, the more stomata are closed in response to drought, which in turn decreases transpiration. Here we show the development and implementation of StomaDetectv1, a novel deep learning model for non-destructive, high-throughput phenotyping of corn stomata, built on a custom Faster R-CNN architecture. StomaDetectv1 achieves an Average Precision of 84.988% for closed stomata areas, demonstrating its efficacy in identifying variations in stomatal traits. The model proved adept at assessing stomatal density and aperture ratios, both essential for quantifying drought resilience. This work underscores the value of integrating imaging techniques and deep learning for precision phenotyping, offering a scalable solution for monitoring plant circadian rhythms and aiding the breeding of drought-resistant crops. By furnishing breeders and geneticists with detailed insights into stomatal behavior, our approach catalyzes the development of corn varieties optimized for water-use efficiency and yield under drought, thereby advancing agricultural practices to combat climate challenges.
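The density and aperture-ratio traits described above can be derived from detector output with simple bookkeeping. The sketch below is illustrative only: the function name, the per-stoma record layout, and the image-area conversion are our assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: deriving stomatal density and pore aperture ratio
# from per-stoma detections (e.g., Faster R-CNN outputs reduced to a
# class label and a measured pore width in pixels).

def stomatal_metrics(detections, image_area_mm2):
    """detections: list of (label, pore_width_px) pairs,
    where label is 'open' or 'closed'.
    Returns (density per mm^2, aperture ratio, mean open-pore width)."""
    open_widths = [w for label, w in detections if label == "open"]
    total = len(detections)
    density = total / image_area_mm2                 # stomata per mm^2
    aperture_ratio = len(open_widths) / total if total else 0.0
    mean_open_width = (sum(open_widths) / len(open_widths)
                       if open_widths else 0.0)
    return density, aperture_ratio, mean_open_width

# Example: four detected stomata in a 0.25 mm^2 field of view.
dets = [("open", 12), ("closed", 4), ("closed", 3), ("open", 10)]
density, ratio, mean_w = stomatal_metrics(dets, image_area_mm2=0.25)
```

A lower `aperture_ratio` across a time series would correspond to the drought-induced closure the abstract describes.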
Document layout analysis, or zoning, is important for textual content analysis such as optical character recognition. Zoning document images such as digitized historical newspaper pages is challenging due to noise and the quality of the scans. Recently, effective data-driven approaches, such as those leveraging deep learning, have been proposed, albeit with the concern of requiring large amounts of training data and thus incurring the additional cost of ground truthing. We propose a zoning solution that incorporates a knowledge-driven document representation, the gravity map, into a multimodal deep learning framework to reduce the time and data required for training. We first generate a gravity map for each image, considering the centroid distance between a cell in a Voronoi tessellation and its content, together with its area, to encode visual domain knowledge of the zoning task. Second, we inject the gravity maps into a deep convolutional neural network (DCNN) during training as an additional modality to boost performance. We report on two investigations using two state-of-the-art DCNN architectures and three datasets: two sets of historical newspapers and a set of born-digital contemporary documents. Evaluations show that our solution achieved comparable segmentation accuracy using fewer training epochs and less training data than a naïve training scheme.
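The gravity-map idea can be sketched as a per-pixel field whose value grows with the area of the nearest content component and shrinks with distance to its centroid. The code below is a simplification under our own assumptions (nearest-centroid assignment stands in for the paper's Voronoi tessellation, and the `area / (1 + distance)` weighting is our choice, not the published formula).

```python
import math

# Hypothetical sketch of a gravity map: for each pixel, find the nearest
# content component by centroid distance and weight it by the component's
# area, so large nearby content exerts a stronger "pull".

def gravity_map(shape, components):
    """shape: (height, width).
    components: list of (cx, cy, area) describing content regions.
    Returns a 2D list of floats (the gravity map)."""
    h, w = shape
    gmap = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Nearest component by centroid distance (a stand-in for
            # assigning the pixel to a Voronoi cell).
            cx, cy, area = min(components,
                               key=lambda c: math.hypot(c[0] - x, c[1] - y))
            d = math.hypot(cx - x, cy - y)
            gmap[y][x] = area / (1.0 + d)
    return gmap

# Two components: a small one at (2, 2) and a larger one at (6, 6).
gm = gravity_map((8, 8), [(2.0, 2.0, 10.0), (6.0, 6.0, 40.0)])
```

In the multimodal setup the abstract describes, such a map would be stacked with the page image as an extra input channel during DCNN training.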