Crop pest management has vast economic consequences, vital to agriculture, natural lands, and even public health. In this work, we contribute a new low-cost UGV (unmanned ground vehicle) solution to this precision agriculture problem. The UGV is built from two commercial off-the-shelf (COTS) electric bikes (bi-eBike) driven by a ROS (Robot Operating System) compatible control system. Our bi-eBike system offers a mobile platform that can serve as a mobile sensing service carrying multiple sensors, such as multispectral cameras and microwave scanners, as well as a mobile actuation/application service carrying actuators such as UV-C insecticidal lights and spreaders or sprayers for beneficial insects and growth stimulants. By mapping and imaging plants in the field, farmers can treat individual plants instead of treating the entire field, reducing both their costs and their negative environmental impact. This smart bi-eBike (SBB) system can be supplemented with photovoltaic (solar) panels and a UAV (unmanned aerial vehicle) landing/charging pad. Thus, the SBB can be expected to achieve a long operational duration (10 hours or more) and large acreage coverage, with UAVs used for variability mapping for site-specific treatment. This paper describes the system-level concept, subsystem designs and integration, vehicle control electronics, autonomous navigation architecture, and some preliminary experimental results.
The normalized difference vegetation index (NDVI) has been commonly used for vegetation monitoring, such as water stress detection, crop yield assessment, and evapotranspiration estimation. However, the influence of spatial resolution on individual-tree-level NDVI derived from unmanned aerial vehicles (UAVs) is poorly understood. Therefore, in this research, the effects of the spatial resolution of UAV imagery are investigated using high-resolution multispectral images. A temporal sequence of UAV multispectral imagery was collected over an experimental pomegranate field, capturing variations across the whole 2019 growing season, at the USDA-ARS (U.S. Department of Agriculture, Agricultural Research Service) San Joaquin Valley Agricultural Sciences Center in Parlier, California, USA. The NDVI distribution of individual trees was generated at the spatial resolutions corresponding to 60 m, 90 m, and 120 m flight altitudes. Experimental results indicated how the spatial resolution of UAV imagery can affect the NDVI values of individual trees.
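The core computation behind this comparison can be sketched as follows. This is a minimal illustration, not the study's processing pipeline: the 2×2 toy reflectance arrays and the downsampling factor are made up, but they show why aggregating reflectance to a coarser resolution before taking the band ratio (as happens at higher flight altitudes) yields a different NDVI than averaging per-pixel NDVI at fine resolution:

```python
import numpy as np

def ndvi(nir, red):
    # normalized difference vegetation index, computed band-wise
    return (nir - red) / (nir + red)

def downsample(band, factor):
    # block-average a reflectance band to a coarser spatial resolution
    h, w = band.shape
    return band.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# toy 2x2 scene: three soil-dominated pixels and one bright canopy pixel
nir = np.array([[0.5, 0.5], [0.5, 0.9]])
red = np.array([[0.4, 0.4], [0.4, 0.1]])

fine_mean = ndvi(nir, red).mean()                            # mean of per-pixel NDVI
coarse = ndvi(downsample(nir, 2), downsample(red, 2))[0, 0]  # NDVI of the aggregated pixel
```

Here `fine_mean` ≈ 0.28 while `coarse` ≈ 0.30: aggregation mixes canopy and soil reflectance before the ratio is taken, which is one mechanism by which spatial resolution shifts per-tree NDVI.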
Evapotranspiration (ET) estimation is an important agricultural research topic in many regions because of water scarcity, growing populations, and climate change. ET can be analyzed as the sum of evaporation from the soil and transpiration from the crops to the atmosphere. Accurate estimation and mapping of ET are necessary for crop water management. One traditional method is to use the crop coefficient (Kc) and reference ET (ETo) to estimate actual ET. With the advent of satellite technology, remote sensing images can provide spatially distributed measurements. Satellite images are used to calculate the Normalized Difference Vegetation Index (NDVI), and the relation between NDVI and Kc is then used to generate a new Kc. The spatial resolution of multispectral satellite images, however, is in the range of meters, which is often not enough for crops with clumped canopy structures, such as trees and vines. Moreover, the frequency of satellite overpasses is not high enough to meet research or water management needs. Unmanned aerial vehicles (UAVs) can help mitigate these spatial and temporal challenges. Compared with satellite imagery, the spatial resolution of UAV images can be as high as centimeter-level. In this study, a regression model was developed using Deep Stochastic Configuration Networks (DeepSCNs). Actual evapotranspiration was estimated and compared with lysimeter data in an experimental pomegranate orchard. The UAV imagery provided a spatial, tree-by-tree view of the ET distribution.
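The traditional Kc-NDVI workflow described above can be sketched in a few lines. The linear coefficients below are placeholders, not values from this study; real Kc-NDVI relations are crop- and season-specific and must be calibrated:

```python
def kc_from_ndvi(ndvi, a=1.25, b=-0.15):
    """Hypothetical linear Kc-NDVI relation; a and b must be calibrated per crop."""
    return a * ndvi + b

def actual_et(ndvi, eto_mm_day):
    """Estimate actual ET (mm/day) as Kc * ETo."""
    return kc_from_ndvi(ndvi) * eto_mm_day

# e.g. a vigorous canopy pixel (NDVI = 0.7) on a day with ETo = 6 mm/day
eta = actual_et(0.7, 6.0)  # Kc = 0.725, so ETa = 4.35 mm/day
```

Applying this pixel-by-pixel to centimeter-level UAV NDVI maps is what yields the tree-by-tree ET view, in contrast to the meter-scale satellite product.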
Soil-borne plant-parasitic nematodes exist in many soils, and some can cause annual yield losses of 15 to 20 percent. Walnut has high economic value, and most edible walnuts in the US are produced in the fertile soils of the California Central Valley. Soil-dwelling nematode parasites are a significant threat: they cause severe root damage and reduce walnut yields. Early detection of plant-parasitic nematodes is critical for designing management strategies. In this study, we proposed the use of a new low-cost proximate radio-frequency three-dimensional sensor, the "Walabot," together with machine learning classification algorithms. This pocket-sized device, unlike remote sensing tools such as unmanned aerial vehicles (UAVs), is not limited by flight time or payload capacity. It can work flexibly in the field and provide data more promptly and accurately than UAVs or satellites. Walnut leaves from trees with different nematode infestation levels were placed on the sensor to test whether the Walabot can detect small changes in infestation level. Hypothetically, the waveforms generated by different signals may be useful for estimating the damage caused by nematodes. Scikit-learn classification algorithms, including neural networks (trained with the Adam optimizer), random forests, and Gaussian processes, were applied for data processing. Results showed that the Walabot predicted nematode infestation levels with an accuracy of 72% so far.
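A minimal version of this classification pipeline is sketched below with scikit-learn. The synthetic arrays are stand-ins for Walabot waveform features and infestation labels (the class count, feature length, and separability are invented for illustration); the random forest is one of the classifier families named above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
levels = 3          # hypothetical infestation levels (e.g. low / medium / high)
n, length = 60, 32  # synthetic "waveforms" standing in for Walabot signals

# each infestation level shifts the mean waveform; noise blurs the classes
X = np.vstack([rng.normal(loc=lvl, scale=0.8, size=(n, length)) for lvl in range(levels)])
y = np.repeat(np.arange(levels), n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)  # held-out accuracy, analogous to the reported 72%
```

Swapping `RandomForestClassifier` for `MLPClassifier` (which uses Adam by default) or `GaussianProcessClassifier` reproduces the other methods listed.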
The properties of an aluminum alloy depend strongly on the distribution, shape, and size of its microstructures. Thus, accurate segmentation of these microstructures is crucial in materials science. However, segmentation is often challenging due to large variations in microstructural appearance and a shortage of hand-labeled data. To address these challenges, we propose a hierarchical parameter transfer learning method for the automatic segmentation of microstructures in aluminum alloy micrographs, which can be seen as a generalization of the typical parameter transfer method. The proposed method uses a multilayer structure, a multinetwork structure, and retraining. It can make full use of the advantages of different networks and transfer network parameters in order from high transferability to low transferability. Several experiments are presented to verify the effectiveness of the proposed method. Our method achieves 98.88% segmentation accuracy and outperforms four typical segmentation methods.
In the last decade, technologies of unmanned aerial vehicles (UAVs) and small imaging sensors have improved significantly in terms of equipment cost, operation cost, and image quality. These low-cost platforms provide flexible access to high-resolution visible and multispectral images. As a result, many studies have been conducted on applications in precision agriculture, such as water stress detection, nutrient status detection, and yield prediction. Unlike traditional low-resolution satellite images, high-resolution UAV-based images allow much more freedom in image post-processing. For example, the very first procedure in post-processing is pixel classification, or image segmentation, for extracting regions of interest (ROI). The very high resolution makes it possible to classify pixels from a UAV-based image, yet it is still a challenge to conduct pixel classification using traditional remote sensing features such as vegetation indices (VIs), especially considering the various changes during the growing season in light intensity, crop size, crop color, etc. Deep learning provides a general framework to solve this problem. In this study, we proposed using deep learning methods to conduct image segmentation. We created a data set of pomegranate trees by flying an off-the-shelf commercial camera at 30 meters above the ground around noon, over the whole growing season from the beginning of April to the middle of October 2017. We then trained and tested two convolutional-network-based methods, U-Net and Mask R-CNN, on this data set, and compared their performance on our aerial images of pomegranate trees.
Many studies have shown that hyperspectral measurements can help monitor crop health status, such as water stress, nutrition stress, and pest stress. However, applications of hyperspectral cameras or scanners are still very limited in precision agriculture. The resolution of satellite hyperspectral images is too low to provide information at the desired scale. The resolution of field spectrometers and aerial hyperspectral cameras is fairly high, but their cost is too high for growers to afford. In this study, we are interested in whether the low-cost hyperspectral scanner SCIO can serve as a crop monitoring tool to provide crop health information for decision support. In an onion test site, three irrigation levels and four types of soil amendment were randomly assigned to 36 plots, with three replicates for each treatment combination. Each month, three onion plant samples were collected from the test site, and fresh weight, dry weight, root length, shoot length, etc. were measured for each plant. Meanwhile, three spectral measurements were made for each leaf of each sample plant using both a field spectrometer and the hyperspectral scanner. We applied dimension-reduction methods to extract low-dimensional features. Based on the data set of these features and their labels, several classifiers were built to infer the field treatment of the onions. Tests on a validation dataset (25 percent of the total measurements) showed that this low-cost hyperspectral scanner is a promising tool for crop water stress monitoring, though its performance is worse than that of the Apogee field spectrometer. The traditional field spectrometer yields the best accuracy, above 80%, whereas the best accuracy of the SCIO is around 50%.
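The dimension-reduction step can be sketched with a plain principal-component projection via SVD. The synthetic spectra below are illustrative, not SCIO or spectrometer measurements; the number of bands, samples, and the single latent factor are all assumptions:

```python
import numpy as np

def pca_features(spectra, k=3):
    """Project mean-centered spectra onto their top-k principal components."""
    X = spectra - spectra.mean(axis=0)
    # rows of vt are principal directions, ordered by decreasing variance
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:k].T

rng = np.random.default_rng(1)
# 40 synthetic "spectra" of 200 bands: one strong latent factor plus noise,
# mimicking a treatment effect that dominates the spectral variation
factor = rng.normal(size=(40, 1))
loading = rng.normal(size=(1, 200))
spectra = factor @ loading + 0.1 * rng.normal(size=(40, 200))

features = pca_features(spectra, k=3)  # low-dimensional features fed to the classifiers
```

The resulting k-dimensional features, paired with treatment labels, are what the downstream classifiers are trained on.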
Thermal cameras have recently been widely used in small Unmanned Aerial Systems (sUAS). They translate thermal energy into visible images and temperatures so that a particular object can be analyzed. Thermal imaging has great potential in agricultural applications: it can be used for estimating soil water status, scheduling irrigation, estimating almond yields, estimating water stress, and evaluating crop maturity. Their ability to measure temperature is valuable; however, there are still some concerns about uncooled thermal cameras. Unstable outdoor environmental factors can cause serious measurement drift during flight missions, and post-processing such as mosaicking might further introduce measurement errors. To address these concerns, we conducted three experiments to establish best practices for thermal image collection. The thermal cameras used in this paper are the ICI 9640 P-Series, which is common in many study areas; an Apogee MI-220 is used as the ground truth. The first experiment determines how long the thermal camera needs to warm up to reach (or approach) thermal equilibrium and produce accurate data. The second varies the camera's view angle to determine whether view angle affects the measurements. The third examines whether stitching the thermal images in Agisoft PhotoScan affects the temperature data.
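The warm-up question in the first experiment amounts to detecting when the reading drift settles. A hedged sketch follows; the settling threshold, window, and the exponential toy series are illustrative choices, not the paper's data or criterion:

```python
import numpy as np

def warmup_time(readings, t, window=5, tol=0.1):
    """Return the first time at which the spread over `window` samples drops below tol (deg C)."""
    for i in range(len(readings) - window + 1):
        if np.ptp(readings[i:i + window]) < tol:
            return t[i]
    return None  # camera never settled within the recording

# toy series: camera output settling exponentially toward 25 deg C over a minute
t = np.arange(60)                          # seconds since power-on
readings = 25.0 + 3.0 * np.exp(-t / 10.0)  # drifting reading of a constant-temperature target
t_warm = warmup_time(readings, t)
```

Comparing `readings` against a stable reference such as the Apogee MI-220 (rather than against the camera's own drift) would give the accuracy-based version of the same criterion.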
Thanks to developments in camera technologies and small unmanned aerial systems (sUAS), it is possible to collect aerial images of fields with more flexible revisits, higher resolution, and much lower cost. Furthermore, the performance of object detection based on deeply trained convolutional neural networks (CNNs) has improved significantly. In this study, we applied these technologies to melon production, where high-resolution aerial images were used to count melons in the field and predict the yield. The CNN-based object detection framework Faster R-CNN was applied to melon detection. Our results showed that sUAS plus CNNs were able to detect melons accurately in the late harvest season.
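Counting from a detector's raw output typically involves confidence thresholding plus non-maximum suppression (NMS). A self-contained sketch follows; the boxes, scores, and thresholds are made up, and this is standard NMS, not necessarily the paper's exact post-processing:

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one (x1, y1, x2, y2) box against many."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def count_melons(boxes, scores, score_thr=0.5, iou_thr=0.5):
    """Threshold by confidence, suppress overlapping duplicates, return the count."""
    order = np.argsort(scores)[::-1]
    order = order[scores[order] >= score_thr]
    count = 0
    while order.size:
        i, order = order[0], order[1:]
        count += 1
        order = order[iou(boxes[i], boxes[order]) < iou_thr]  # drop duplicates of box i
    return count

# two heavily overlapping detections of the same melon, plus one separate melon
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
n = count_melons(boxes, scores)  # the two overlapping boxes collapse to one melon
```

Summing such counts over the tiles of an orthomosaic gives the field-level melon count used for yield prediction.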
In this paper, the application of wheeled mobile robot (WMR) formation control to diffusion process characterization and control is discussed. We review current approaches to mobile robot formation control and present a new consideration of formation control within the framework of networked control systems with wireless communication. The potential benefits of robot formations in distributed diffusion process measurement and control are discussed. We present a new nonlinear control law for a general formation that can be useful in diffusion process boundary measurement. We then introduce our ongoing project, Mobile Actuator and Sensor Networks (MAS-net), on diffusion process characterization and control. Experimental results illustrate how pattern formation can be achieved in MAS-net.
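As a toy illustration of formation control (a linear consensus rule, not the nonlinear law proposed in the paper), each robot can move to reduce the disagreement between its own formation-relative position and those of its neighbors. The gain, offsets, and complete communication graph below are all assumptions:

```python
import numpy as np

def formation_step(x, offsets, adjacency, gain=0.2):
    """One consensus update; x and offsets are (n, 2) positions / desired offsets."""
    e = x - offsets                      # formation-relative position of each robot
    # each robot moves toward the average formation-relative position of its neighbors
    dx = adjacency @ e - adjacency.sum(axis=1, keepdims=True) * e
    return x + gain * dx

n = 3
offsets = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])  # desired triangle shape
adjacency = np.ones((n, n)) - np.eye(n)                   # complete communication graph
x = np.array([[5.0, 1.0], [-2.0, 4.0], [0.0, -3.0]])      # arbitrary start positions

for _ in range(100):
    x = formation_step(x, offsets, adjacency)
# x now forms the desired triangle (up to a common translation)
```

Only relative positions converge; the absolute location of the formation is free, which is what lets such a pattern be steered along a diffusion boundary.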
In this paper, we propose and demonstrate the application of concepts from digital filter design to optimize artificial optical resonant structures to produce a nearly ideal nonlinear phase shift response. Multi-stage autoregressive moving average (ARMA) optical filters (ring-resonator-based Mach-Zehnder interferometer lattices) are designed and studied. The filter group delay is used as an alternate measure, instead of finesse or quality factor, to study the nonlinear sensitivity for multiple resonances. The nonlinearity of a 4-stage ARMA filter is 17 times higher than that of the intrinsic material. We demonstrate that the nonlinear sensitivity can be increased within the same bandwidth by allocating more in-band phase or using higher-order filter structures, and that the nonlinear enhancement improves with increasing group delay. We also investigate some possible ways to pre-compensate the nonlinear response to reduce the occurrence of optical bistabilities. The impact of optical loss, including linear absorption and two-photon absorption, and fabrication tolerance are discussed in post-analysis.
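The role of group delay as a sensitivity measure can be illustrated numerically for a single lossless all-pass ring resonator, a simpler structure than the ARMA lattices studied here; the coupling value r = 0.9 and unit round-trip time are arbitrary example parameters:

```python
import numpy as np

# frequency response of a lossless all-pass ring: H(w) = (r - e^{-jw}) / (1 - r e^{-jw})
r = 0.9
w = np.linspace(-np.pi, np.pi, 2001)
H = (r - np.exp(-1j * w)) / (1 - r * np.exp(-1j * w))

# group delay = -d(phase)/d(omega), in units of the ring round-trip time
phase = np.unwrap(np.angle(H))
tau = -np.gradient(phase, w)

peak = tau[len(w) // 2]  # on resonance: (1 + r) / (1 - r) = 19 round trips
```

On resonance the delay, and hence the effective light-matter interaction time that enhances the nonlinear phase shift, scales as (1 + r)/(1 - r), illustrating the delay-bandwidth trade-off discussed above.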
This paper presents challenges and opportunities related to the problem of diffusion boundary determination and zone control via mobile actuator-sensor networks (MAS-net). This research theme is motivated by three example application scenarios: 1) safe ground boundary determination of the radiation field from multiple radiation sources; 2) nontoxic reservoir water surface boundary determination and zone control due to a toxic diffusion source; and 3) safe nontoxic 3D boundary determination and zone control of biological or chemical contamination in the air. We focus on the case of a 2D diffusion process and on using a team of ground mobile robots to track the diffusion boundary. Moreover, we assume that a number of robots can carry and move networked actuators to release a neutralizing chemical agent so that the shape of the polluted zone can be actively controlled. These two MAS-net applications, i.e., diffusion boundary determination and zone control, are formulated as model-based distributed control tasks. On the technological side, we focus on the node specialization and power supply problems. On the theoretical side, some recently developed concepts are introduced, such as regional/zone observability, regional/zone controllability, the regional/zone Luenberger observer, etc. We speculate on possible further developments in the theoretical research by noting the combination of diffusion-based path planning and regional analysis of the overall MAS-net distributed control system.
In this paper, we present preliminary results related to path-planning problems when it is known that the quantities of interest in the system are generated by a diffusion process. The use of mobile sensor-actuator networks (MAS-Net) is proposed for such problems. A discussion of such networks is given, followed by a description of the general framework of the problem. Our strategy assumes that a network of mobile sensors can be commanded to collect samples of the distribution of interest. These samples are then used as constraints for a predictive model of the process, and the predicted distribution from the model is used to determine new sampling locations. A 2-D testbed for studying these ideas is described. The testbed includes ten robots operating as a network using Intel Motes. We also present simulation results from our initial partial differential equation model of the diffusion process in the testbed.
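The PDE model referenced above can be illustrated with a minimal explicit finite-difference simulation of 2-D diffusion. The grid size, diffusion number, and periodic (np.roll) boundary treatment are simplifying assumptions, not the testbed's actual model:

```python
import numpy as np

def diffuse(c, d=0.2, steps=50):
    """Explicit FTCS steps of 2-D diffusion (stable for d <= 0.25), periodic boundaries."""
    for _ in range(steps):
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
               np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c)
        c = c + d * lap
    return c

c0 = np.zeros((32, 32))
c0[16, 16] = 1.0            # point release of the tracked substance
c = diffuse(c0)             # concentration field after 50 time steps
```

Sampling `c` at a few robot positions and fitting this forward model to the samples is, in essence, the model-predictive sampling strategy described above.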
As an outgrowth of a series of projects focused on the mobility of unmanned ground vehicles (UGV), an omni-directional (ODV), multi-robot, autonomous mobile parking security system has been developed. The system has two types of robots: the low-profile Omni-Directional Inspection System (ODIS), which can be used for under-vehicle inspections, and the mid-sized T4 robot, which serves as a ``marsupial mothership'' for the ODIS vehicles and performs coarse-resolution inspection. A key task for the T4 robot is license plate recognition (LPR). For a successful LPR task without compromising the recognition rate, the robot must be able to identify the bumper locations of vehicles in the parking area and then precisely position the LPR camera relative to the bumper. This paper describes a 2D-laser-scanner-based approach to bumper identification and laser servoing for the T4 robot. The system uses a gimbal-mounted scanning laser. As the T4 robot travels down a row of parking stalls, data is collected from the laser every 100 ms. For each parking stall in the range of the laser during the scan, the data is matched to a ``bumper box'' corresponding to where a car bumper is expected, resulting in a point cloud of data corresponding to a vehicle bumper for each stall. Next, recursive line-fitting algorithms are used to determine a line for the data in each stall's ``bumper box.'' The fitting technique uses Hough-based transforms, which are robust against segmentation problems and fast enough for real-time line fitting. Once a bumper line is fitted with an acceptable confidence, the bumper location is passed to the T4 motion controller, which moves to position the LPR camera properly relative to the bumper. The paper includes examples and results that show the effectiveness of the technique, including its ability to work in real-time.
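The Hough-based line fit at the heart of the bumper finder can be sketched as follows. The grid resolutions and the synthetic horizontal "bumper" point cloud are illustrative, not the T4's actual parameters:

```python
import numpy as np

def hough_fit(points, n_theta=180, rho_res=0.05):
    """Fit the dominant line (theta, rho) to 2-D laser returns by Hough voting."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # signed distance from the origin to each point along every candidate normal
    rho = points[:, :1] * np.cos(thetas) + points[:, 1:] * np.sin(thetas)
    idx = np.round(rho / rho_res).astype(int)
    offset = idx.min()
    acc = np.zeros((idx.max() - offset + 1, n_theta), dtype=int)
    np.add.at(acc, (idx - offset, np.arange(n_theta)), 1)  # accumulate votes
    r, t = np.unravel_index(acc.argmax(), acc.shape)       # strongest line
    return thetas[t], (r + offset) * rho_res

# synthetic "bumper" returns: a horizontal segment at y = 5 m in front of the scanner
pts = np.column_stack([np.linspace(-2.0, 2.0, 50), np.full(50, 5.0)])
theta, rho_fit = hough_fit(pts)  # expect theta near pi/2, rho near 5
```

Voting in (theta, rho) space rather than least-squares fitting is what makes the approach robust to the segmentation problems mentioned above, since outlier points simply vote for other, weaker bins.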