Prediction of microunmanned aerial vehicle flight behavior from two-dimensional intensity images
Martin Rebert, Stéphane Schertzer, Martin Laurenzis
Abstract
The increasing number of microunmanned aerial vehicles (MUAVs) is a rising risk for personal privacy and the security of sensitive areas. Owing to the highly agile maneuverability and small cross section of MUAVs, effective countermeasures (CMs) are hard to deploy, especially when a certain temporal delay occurs between the localization and the CM effect. Here, a reliable prediction of the MUAV flight behavior can increase the effectiveness of CMs. We propose a pose estimation approach to derive the three-dimensional (3-D) flight path from a stream of two-dimensional intensity images. The pose estimation in a single image results in an estimate of the current position and orientation of the quadcopter in 3-D space. Combined with a flight behavior model, this information is used to reconstruct the flight path and to predict the flight behavior of the MUAV. In our laboratory experiments, we obtained a standard deviation between 1 and 24 cm in a five-frame prediction of the 3-D position, depending on the actual flight behavior.

1.

Introduction

In the recent past, the number of incidents with intentional and unintentional misuse of small or microunmanned aerial vehicles (MUAVs) has increased due to the growing number of commercially available small quadcopters. In its annual reports,1 the Federal Aviation Administration has pointed out an increasing number of sightings and close encounters of MUAVs reported in civilian air traffic. Further, the first serious incidents where a MUAV crashed into an airplane have occurred.2 Besides this threat to civilian air traffic, misuse of MUAVs can also harm personal and public privacy and create security issues.

MUAVs can operate in highly agile flight patterns and are hard to detect due to their small cross sections.3–7 Thus, in the past decade, several groups have been working on the problem of efficiently detecting, tracking, and classifying MUAVs in different scenarios. Detection and tracking of MUAVs have been studied using single millimeter-wave RADAR8 or optical (VIS/SWIR)9 sensors, or within a multisensor approach using passive/active imaging, acoustics, and RADAR in various field trials.10–13 Further, classification of MUAVs has been investigated using image-based deep-learning and high-level data-fusion approaches.14–16 Classification can also be used to distinguish MUAVs from, e.g., birds.

Different sensors (such as RADAR, acoustics, and optics) will be embedded in a counter-MUAV system where the sensor information will be processed to reliably detect/track, classify, and localize MUAVs. Based on this information, decisions are made and countermeasures (CMs) are engaged. Owing to the internal processing time of information generation and decision-making as well as external time constraints (e.g., the propagation of a projectile), the effect on the MUAV will occur after a certain inherent time delay Δt. Owing to the agile operation and small cross section, the probability of successful CMs is relatively low.17 Here, the prediction of the MUAV position at the moment of effect could significantly increase the success rate.

First works that predict the MUAV position in the image plane of the succeeding frame (frame-to-frame prediction) of a video stream using a linear motion model have been presented.18,19 In our approach, we predict the position of the MUAV in three-dimensional (3-D) space and on a longer time scale $t_P$ (with $t_P \geq \Delta t$) or over $n$ frames. For a reliable result, we determine the 3-D position of the MUAV from pose estimation in two-dimensional (2-D) intensity images and apply a navigation model to simulate the flight behavior.

2.

Prediction of Microunmanned Aerial Vehicle Position in the Image Plane

Based on linear quadratic estimation, Kálmán20 developed a filter to reduce measurement noise effects and to estimate values (i.e., to determine the most probable value). In the computer vision community, the Kalman filter approach is widely used, for instance, for tracking and navigation of MUAVs.18,19 The Kalman filter $f(\cdot)$ is used to predict the MUAV state vector $X_i$, which contains a set of parameters, from a current measurement $y_i$ and the previous state vector $X_{i-1}$, as defined in Eq. (1). Here, $\omega$ describes the noise characteristics

Eq. (1)

$$X_i = f(X_{i-1}, y_i, \omega).$$

Depending on the application (e.g., navigation or tracking), the state vector summarizes several physical parameters [such as position, velocity, angles (roll, yaw, pitch), and angular velocities] to describe the current flight state. In our approach, the state vector is defined as $X_i = [x_i, \dot{x}_i, \ddot{x}_i]^T$, containing only the MUAV position $x_i$, velocity $\dot{x}_i$, and acceleration $\ddot{x}_i$, which are the first and second temporal derivatives of the position in the $i$'th frame. Higher-order temporal derivatives [the jerk (third) and the snap (fourth)] could be of interest in future algorithms and are neglected in the current code.

We developed a tracking and prediction filter based on Murphy’s “Kalman filter toolbox for Matlab”21 and the tutorial of Welch and Bishop.22 Our filter was designed to predict the MUAV position $n$ frames into the future from a continuous data stream and has two stages: a Kalman filter to estimate the current state vector $X_i$ and an $n$-frame prediction filter to estimate the future state $\hat{X}_{i+n}$.

The Kalman filter, as described in Sec. 5, returns a state vector $X_i$ at time $t_i$, containing position, velocity, and acceleration. Then, the MUAV state vector $\hat{X}_{i+n}$ at time $t_{i+n}$ can be predicted using a motion-equation approach, as defined in Eq. (2). The predicted position $\hat{x}_{i+n}$ is extracted by Eq. (3). We define the motion prediction equations as

Eq. (2)

$$\hat{X}_{i+n} = \begin{bmatrix} 1 & n & \tfrac{1}{2}n^2 \\ 0 & 1 & n \\ 0 & 0 & 1 \end{bmatrix} X_i$$
and

Eq. (3)

$$\hat{x}_{i+n} = [1\ 0\ 0]\, \hat{X}_{i+n}.$$

The complete n-frame prediction procedure is illustrated in Algorithm 1 as pseudocode. The computation returns a predicted position $\hat{x}_{i+n}$ and needs a set of input data consisting of the current image frame $F_i$, the previous MUAV state vector $X_{i-1}$, and the number of frames $n$ as the prediction range. First, the MUAV position $x_i$ in the current image has to be determined by, for instance, a tracking algorithm. This step will be discussed in the next paragraph. From these position measurements, the current state vector is calculated using the Kalman filter, and the MUAV state $\hat{X}_{i+n}$ at frame $i+n$ is predicted. Finally, the estimated future position $\hat{x}_{i+n}$ is extracted [see Eqs. (2) and (3)].

Algorithm 1

Pseudocode for the n-frame prediction of 2-D MUAV position in the image plane.

Input: Current frame $F_i$, previous state vector $X_{i-1}$, coefficient $n$
Output: $n$-frame prediction of the 2-D MUAV position $\hat{x}_{i+n}$
 1: Determine the MUAV position in the frame: $F_i \rightarrow x_i$
 2: Apply the Kalman filter: $X_i = f(X_{i-1}, x_i, \omega)$ [Eq. (1)]
 3: Predict the state vector in the $(i+n)$'th frame: $\hat{X}_{i+n}$ [Eq. (2)]
 4: Extract the position in the $(i+n)$'th frame: $\hat{x}_{i+n}$ [Eq. (3)]
 5: return $\hat{x}_{i+n}$
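To illustrate Eqs. (2) and (3), the following Python sketch implements the n-frame extrapolation step with NumPy. It assumes a state that stacks position, velocity, and acceleration per image axis; the variable names and example values are illustrative and not taken from the published code.

import numpy as np

def predict_n_frames(X_i, n):
    """Extrapolate the state X_i = [x, x_dot, x_ddot] by n frames using the
    constant-acceleration motion model of Eq. (2) and extract the position, Eq. (3)."""
    A = np.array([[1.0, n, 0.5 * n ** 2],
                  [0.0, 1.0, n],
                  [0.0, 0.0, 1.0]])
    X_pred = A @ X_i                              # Eq. (2): predicted state vector
    x_pred = np.array([1.0, 0.0, 0.0]) @ X_pred   # Eq. (3): predicted position
    return X_pred, x_pred

# Example: one row per derivative order, one column per image axis (column, row).
X_i = np.array([[120.0, 80.0],    # position in pixels
                [2.5, -1.0],      # velocity in pixels/frame
                [0.1, 0.3]])      # acceleration in pixels/frame^2
_, x_hat = predict_n_frames(X_i, n=5)             # five-frame position prediction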

In Fig. 1, some frames of a video stream are presented. This video was recorded inside our laboratory and shows a small quadcopter MUAV (type DJI Phantom 3) flying around in the camera field of view. The MUAV is tracked with a state-of-the-art tracker based on the consensus-based matching and tracking of key points (CMT).23,24 In this algorithm, key points are detected and described using features from accelerated segment test (FAST)25,26 and binary robust invariant scalable keypoints (BRISK)27 as the feature detector and descriptor subroutines.

Fig. 1

CMT of the key points in a MUAV video sequence.

OE_58_5_053101_f001.png

The tracker employs clustering of correspondences as the central idea to distinguish inlier and outlier key points. This approach improves the tracking results, even when correspondences are located on deformed parts of the object. As the flying MUAV shows very rapid aspect changes due to the angle between the vision system and the target, this tracker is well adapted to this particular situation. The tracking algorithm returns a vector containing the MUAV position (xi and yi are the column and row positions of the MUAV, respectively) and the size of the bounding box (width and height: wi, hi). In Fig. 1, some frames of the video stream illustrate the CMT tracking results.

An exemplary result of our tracking and prediction algorithm (Algorithm 1) is presented in Fig. 2. In Fig. 2(a), a five-frame prediction result from our approach using the first and second derivatives (□) is compared to a simple prediction approach using the first derivative only (Δ). Both prediction paths are compared to the ground-truth CMT tracking data (—). As long as the MUAV flight operation is slow and smooth, only small differences between the ground truth and the predicted paths are observed. But at turning points and during highly agile flight operation, the predictions are less accurate, especially for the simple prediction approach.

Fig. 2

Prediction of MUAV flight behavior in the 2-D image plane using the CMT tracker and the first and second derivatives ($\dot{x}$ and $\ddot{x}$): (a) ground-truth CMT track (—) and five-frame track predictions using the $\dot{x}$-only (Δ) and $\dot{x}$–$\ddot{x}$ (□) algorithms; (b) the amplitudes of the $n|\dot{x}|$ (⋯) and $\tfrac{1}{2}n^2|\ddot{x}|$ (—) components and the prediction errors, i.e., the L2-norm distances from the original track.

OE_58_5_053101_f002.png

A deeper discussion of the prediction results is presented in Fig. 2(b). In the top diagram, the amplitudes of the single flight behavior components are plotted for each frame, i.e., the first- and second-derivative terms $n|\dot{x}|$ (⋯) and $\tfrac{1}{2}n^2|\ddot{x}|$ (—). It is obvious that rapid changes in the first derivative lead to peaks in the amplitude of the second derivative.

Furthermore, in the bottom diagram of Fig. 2(b), the prediction error is illustrated as the mismatch distance between the ground-truth and predicted positions, given by the L2-norm distance $d_{L2} = \|x_i - \hat{x}_i\|_2$. For the simple prediction approach (⋯), large errors can be observed in frames that are related to highly agile or dynamic MUAV flight operations (see frames 110 to 125, 165 to 195, 205 to 220, etc.). In this specific sequence, we obtain a mean prediction error of 10.1 pixels. This mismatch distance can be reduced significantly by additionally employing the second derivative (—) in our prediction approach. With this algorithm, all previously occurring peaks are reduced significantly and a mean prediction mismatch distance of 4.7 pixels is reached.

3.

Prediction of Microunmanned Aerial Vehicle Position in Three-Dimensional Space

In three-dimensional (3-D) space, the flight path prediction has to take into account the position and motion in all spatial dimensions to reflect the complex MUAV flight behavior. Here, we want to introduce a pose estimation algorithm to derive the 3-D position and orientation of the MUAV from the 2-D intensity images. Further, we use the obtained data in an aerodynamic flight behavior model to calculate the acceleration as an additional measurement value. Then, both the determined 3-D position and the acceleration values are fed into the aforementioned Kalman filter and prediction algorithm to return the probable future MUAV position in the 3-D space.

Flight behavior of a quadcopter MUAV, for instance, is modeled in the algorithms used for navigation and control.28–33 These algorithms analyze the dynamic behavior of the spatial and angular motion to control the power consumption of the four rotors individually. Typically, these models combine a global (inertial) coordinate system and a MUAV (body) coordinate system. In our approach, we rely on the global coordinate system of the optical sensing device. In this coordinate system, the x- and y-directions are the width and the height, corresponding to the pixel column and row directions of the previous section, respectively. The z-direction is the range.

Further, we developed a simplified flight behavior model that describes the MUAV motion from the thrust T, the gravitational force G, and the aerodynamic drag force D, which can be derived from fundamental parameters such as the position and orientation in 3-D space, the velocity, the acceleration, and a few boundary conditions (aerodynamic constants and parameters). In principle, we assume that the MUAV is a floating object (the vertical component of the thrust compensates for earth gravitation). Details of this flight behavior model can be found in Secs. 6 and 7.

Position and orientation of the MUAV in 3-D space can be derived from pose estimation. For this task, the quadcopter used in our experiment can be modeled by a skeleton model consisting of four small vertical segments equally distributed around a bigger vertical segment, as shown in Fig. 3. The four small segments represent the rotation axes of each propeller and the bigger one represents the body of the quadcopter.

Fig. 3

Representation of a drone as (a) a computer graphics model and (b) a skeleton, as used in the pose estimation of the quadcopter.

OE_58_5_053101_f003.png

The use of a skeleton model has the benefit of not being too specific to the investigated MUAV. A later adaptation to another MUAV can easily be done by changing the model. The only values that have to be known are the number of propellers and their distance to the center.

For the investigated vehicle, the DJI Phantom 3,34 there are four propellers located 175 mm from the center. The number of points per segment can also vary, but it does not have a great influence; we usually use 20 points per segment. The skeleton points are designated by $S = \{s_j\}$ when placed at the origin. The pose of the skeleton corresponding to image $i$ is represented by a vector containing six values, $\phi_i = [\theta_x, \theta_y, \theta_z, x, y, z]$, where $x$ is the x-coordinate and $\theta_x$ is the angle of rotation around this axis. Analogous definitions are made for the y-axis and the z-axis. Here, $S_i(\phi_i) = \{s_{i,j}\}$ is the transformed skeleton (rotation and translation) and stands for the points at pose $\phi_i$

Eq. (4)

$$S_i(\phi_i) = R_i S + t_i,$$

Eq. (5)

$$R_i = R_z(\theta_z)\, R_y(\theta_y)\, R_x(\theta_x),$$

Eq. (6)

$$t_i = [x\ y\ z]^T,$$
where $R_x(\theta_x)$ is the rotation of angle $\theta_x$ around the x-axis.
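The skeleton construction and the rigid transform of Eqs. (4)–(6) can be sketched as follows in Python. The 175-mm arm length and 20 points per segment are taken from the text; the segment height, the angular placement of the rotor axes, and the use of scipy.spatial.transform.Rotation are assumptions made only for this illustration.

import numpy as np
from scipy.spatial.transform import Rotation

def build_skeleton(arm=0.175, n_props=4, pts_per_seg=20, seg_len=0.05):
    """Skeleton S: one short vertical segment per rotor axis placed around a
    central vertical body segment (seg_len and the rotor placement are assumed)."""
    h = np.linspace(0.0, seg_len, pts_per_seg)
    body = np.stack([np.zeros_like(h), h, np.zeros_like(h)], axis=1)
    segments = [body]
    for k in range(n_props):
        a = 2.0 * np.pi * k / n_props + np.pi / 4.0      # assumed X-configuration offsets
        segments.append(body + np.array([arm * np.cos(a), 0.0, arm * np.sin(a)]))
    return np.concatenate(segments, axis=0)              # (N, 3) points at the origin

def transform_skeleton(S, phi):
    """Apply the pose phi = [theta_x, theta_y, theta_z, x, y, z], Eqs. (4)-(6)."""
    # Extrinsic x-y-z rotations give R = Rz(theta_z) Ry(theta_y) Rx(theta_x), as in Eq. (5).
    R = Rotation.from_euler('xyz', phi[:3]).as_matrix()
    return S @ R.T + np.asarray(phi[3:])                  # rows are the transformed points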

The pose estimation is based on the use of two successive views. The pseudocode is given in Algorithm 2. First, corners are detected in the region $b_i$ of the current frame $F_i$ using the FAST corner detector26 with nonmaximum suppression to obtain a better distribution of the corners $C_i = \{c_{i,j}\}$ over the MUAV, as illustrated in Fig. 4(a). The detected corners are described using the BRIEF35 algorithm and matched with the corners from the previous frame, $C_{i-1}$, to obtain the matches $M_{i,i-1}$. At this point, the set of matches $M_{i,i-1}$ contains corners located on the drone but also mismatches, corners on the background, and corners on the propellers. In the next step, the unwanted corners (mismatches, background, and propellers) are filtered out by estimating the essential matrix36 [see Fig. 4(b)]. Hereafter, $M_{i,i-1}$, $C_{i-1}$, and $C_i$ refer to the filtered data sets.

Algorithm 2

Pseudocode for the pose estimation of the MUAV from two successive views.

Input: Current frame $F_i$, previous pose $\phi_{i-1} \in \mathbb{R}^6$, previously detected corners $C_{i-1}$, bounding box $b_i$ from the CMT, threshold $\epsilon$, skeleton $S$
Output: $\phi_i$, $C_i$
 1: Detect corners $C_i$ within $b_i$ using the FAST algorithm
 2: Match $C_i$ with $C_{i-1}$ using BRIEF descriptors and the Hamming distance → $M_{i,i-1}$
 3: Filter out propellers and background motion from $M_{i,i-1}$ by estimating the essential matrix
 4: Use $\phi_{i-1}$ to project $S$ into $F_i$ → $S_{i-1,\mathrm{proj}}$
 5: Associate $C_{i-1}$ with $S_{i-1,\mathrm{proj}}$: keep pairs $(c_{i-1,j}, s_{i-1,\mathrm{proj},j}) \in C_{i-1} \times S_{i-1,\mathrm{proj}}$ such that $d(c_{i-1}, s_{i-1,\mathrm{proj}}) \leq \epsilon$ → $C_{i-1,\mathrm{opti}}$, $C_{i,\mathrm{opti}}$
 6: Align $S$ on the rays issued from $C_{i,\mathrm{opti}}$ using $\phi_{i-1}$ as initial condition → $\phi_i$
 7: return $\phi_i$, $C_i$
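A minimal OpenCV sketch of steps 1 to 3 of Algorithm 2 (corner detection, descriptor matching, and essential-matrix filtering) could look as follows. It assumes the opencv-contrib package for the BRIEF extractor, a known camera matrix K, grayscale input frames, and, for simplicity, the same bounding box for both frames; it is not the authors' implementation.

import cv2
import numpy as np

def match_and_filter(prev_gray, cur_gray, bbox, K):
    """Steps 1-3 of Algorithm 2: FAST corners, BRIEF matching with the Hamming
    distance, and outlier rejection by estimating the essential matrix (RANSAC)."""
    x, y, w, h = bbox                                   # CMT bounding box b_i
    fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()   # needs opencv-contrib

    def detect_and_describe(img):
        keypoints = fast.detect(img[y:y + h, x:x + w], None)
        for kp in keypoints:                            # shift back to full-image coordinates
            kp.pt = (kp.pt[0] + x, kp.pt[1] + y)
        return brief.compute(img, keypoints)

    kp1, des1 = detect_and_describe(prev_gray)
    kp2, des2 = detect_and_describe(cur_gray)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Corners on the spinning propellers or on the background do not follow the
    # rigid-body epipolar constraint and are rejected by the RANSAC inlier mask.
    _, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers]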

Fig. 4

Link between the detected corners on the image and the drone skeleton model. (a) The detected corners using FAST, (b) the corners after the estimation of the essential matrix and the projected skeleton from the previous frame, and (c) skeleton points and corners used for the pose estimation of the drone.

OE_58_5_053101_f004.png

To estimate the pose of the drone, we need to associate the detected corners with points on the skeleton. A pinhole camera model is used to project the skeleton in the image plane. The camera is placed in the origin of the coordinate system with the standard axes, i.e., the depth of the scene is along the positive z-axis and the y-axis points to the ground. Using homogeneous coordinates, the camera projects the skeleton Si to Si,proj according to

Eq. (7)

$$S_{i,\mathrm{proj}} = P S_i,$$
where $P$ is the $4 \times 4$ homogeneous projection matrix.37

Further, in Fig. 4(c), we couple the detected corners and the projected skeleton points that are close together, i.e., $d(c_{i-1}, s_{i-1,\mathrm{proj}}) \leq \epsilon$ for $c_{i-1} \in C_{i-1}$ and $s_{i-1,\mathrm{proj}} \in S_{i-1,\mathrm{proj}}$. If a skeleton point satisfies the condition with several corners, we keep only the closest pair. From this we obtain subsets of $C_{i-1}$ and $S_{i-1,\mathrm{proj}}$ called $C_{i-1,\mathrm{opti}}$ and $S_{i-1,\mathrm{opti}}$. It is then straightforward to obtain $C_{i,\mathrm{opti}}$, $S_{i,\mathrm{opti}}$, and pairs $(c_{i,\mathrm{opti}}, s_{\mathrm{opti}})$ with $c_{i,\mathrm{opti}} \in C_{i,\mathrm{opti}}$ and $s_{\mathrm{opti}} \in S_{i,\mathrm{opti}}$.
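This coupling of projected skeleton points and detected corners with the threshold ε can be sketched with a k-d tree; the helper name and the array layout are assumptions made for this illustration.

import numpy as np
from scipy.spatial import cKDTree

def associate(corners_prev, skeleton_proj_prev, eps):
    """Couple each projected skeleton point s with its closest detected corner c
    when d(c, s) <= eps, as in step 5 of Algorithm 2."""
    tree = cKDTree(corners_prev)                   # corners C_{i-1}, array of shape (M, 2)
    dist, idx = tree.query(skeleton_proj_prev)     # closest corner for every skeleton point
    keep = np.flatnonzero(dist <= eps)             # apply the threshold epsilon
    return idx[keep], keep                         # indices of the coupled corners and points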

For each corner in $C_{i,\mathrm{opti}}$, a ray $r$ passing through the camera center (the origin) and the corner is computed. The pose of the drone in image $i$, $\phi_i$, is then updated by minimizing the distance between each element of $S_{i,\mathrm{opti}}$ and its corresponding ray (see Fig. 5). The minimization is performed using $\phi_{i-1}$ as a starting point

Eq. (8)

$$\phi_i = \underset{\phi_i}{\arg\min} \sum_j d(r_j, s_j)^2, \quad s_j \in S_{i,\mathrm{opti}},$$

Eq. (9)

$$r_j = P^{-1} c_{i,\mathrm{opti},j} = P^{-1} [x_c\ y_c\ 1\ 1]^T,$$
where $d(r_j, s_j)$ is the orthogonal distance from the point $s_j$ to the line $r_j$. Here, $x_c$ and $y_c$ are the corner coordinates in pixels. The minimization is performed by the Trust Region Reflective38 algorithm, which is implemented in SciPy.39 Bounds are imposed on the angular components of the pose to obtain meaningful results according to the MUAV specifications (see Table 2 in Sec. 6). The whole pose estimation procedure is summarized as pseudocode in Algorithm 2.
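A possible realization of the bounded minimization of Eq. (8) with SciPy's Trust Region Reflective solver is sketched below. Representing the rays as unit direction vectors, bounding the tilt angles by the ±35-deg specification of Table 2, and constraining the range to positive values are assumptions of this sketch, not a description of the published code.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def ray_distances(phi, S_opti, ray_dirs):
    """Residuals of Eq. (8): orthogonal distance of every transformed skeleton
    point to its ray through the camera center (rays as unit direction vectors)."""
    R = Rotation.from_euler('xyz', phi[:3]).as_matrix()
    pts = S_opti @ R.T + phi[3:]                        # skeleton at the trial pose
    proj = (pts * ray_dirs).sum(axis=1, keepdims=True) * ray_dirs
    return np.linalg.norm(pts - proj, axis=1)

def refine_pose(phi_prev, S_opti, ray_dirs, max_tilt=np.deg2rad(35.0)):
    """Minimize Eq. (8) with the Trust Region Reflective solver, starting at the
    previous pose; the tilt angles are bounded by the MUAV specification."""
    lb = [-max_tilt, -np.inf, -max_tilt, -np.inf, -np.inf, 0.0]
    ub = [max_tilt, np.inf, max_tilt, np.inf, np.inf, np.inf]
    result = least_squares(ray_distances, phi_prev, args=(S_opti, ray_dirs),
                           method='trf', bounds=(lb, ub))  # phi_prev must lie within the bounds
    return result.x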

Fig. 5

Principle for aligning the drone skeleton on the rays issued from the detected corners: the skeleton $S$, starting at the pose $\phi_{i-1}$, is aligned on the rays issued from $C_{i,\mathrm{opti}}$ shown in Fig. 4.

OE_58_5_053101_f005.png

The resulting pose vector $\phi_i$ of the current frame and the state vector $X_{i-1}$ of the previous frame are used to estimate the current state vector $X_i$ using the MUAV flight behavior model in Sec. 6 and the Kalman filter in Sec. 5. The orientation ($\theta_x$, $\theta_y$, and $\theta_z$) and the velocity $\dot{x}$ are needed to estimate the current acceleration $\ddot{x}_i$ from the flight model [Eq. (15)], taking into account gravitation, thrust, and the aerodynamic drag force. The determined acceleration vector and the position components of $\phi_i$ are then fed into the Kalman filter as measurement values ($z_i$) to update the MUAV state vector $X_i$. Finally, the $n$-frame prediction is calculated as described previously.

Figure 6 shows the results from the analysis of the same image sequence as used in Sec. 2. Owing to the lack of reliable ground-truth data for the MUAV position, a reference for the 3-D flight path (—) is calculated from the state vectors ($X_i$). In Fig. 6(a), this track is compared to the five-frame prediction (gray bullets). A good agreement between flight path and prediction can be observed in areas of constant or linear movement (no rapid change of flight direction), while at turning points, where the flight direction changes rapidly, the algorithm still tends to overestimate the flight motion.

Fig. 6

Results of the 3-D MUAV tracking and n-frame prediction algorithm applied to a video sequence recorded under laboratory conditions: (a) 3-D tracks from the Kalman filter (—) and the n-frame prediction (gray bullets); (b) amplitude of the components (first and second derivatives) and the prediction mismatch (L2-norm).

OE_58_5_053101_f006.png

In Fig. 6(b), a detailed analysis is presented. In the upper diagram, the amplitudes of the motion components, i.e., the absolute velocity and the absolute acceleration, are depicted. These values are directly derived from the algorithm. The velocity and acceleration are measured in standard units (m/s and m/s²).

In the lower diagram, the prediction error, that is, the L2-norm distance between the predicted and ground-truth positions, is plotted for each frame. In most frames, this prediction error is <20 cm. Only in a few sections of the sequence does this error exceed a value of 50 cm. These are the parts of the track where rapid changes of the flight direction occur. Further, significant prediction errors occur only in the same frames as in the 2-D case [compare Fig. 2(b)].

A more detailed discussion of the prediction error can be derived from Fig. 7. Here, the prediction error is plotted for each coordinate axis as the difference between the MUAV flight path (or track) and the predicted positions. Along the three coordinate axes, the prediction error is scattered around the zero position, giving evidence that no systematic error is present. In height ($\Delta y$) and width ($\Delta x$), the error is scattered with marginal standard deviations $\sigma$ of ±1 and ±3 cm, respectively. Owing to a more dynamic flight behavior, the standard deviation along the range axis ($\Delta z$) is $\sigma = 24.4$ cm. Here, we want to point out that the retrieval of 3-D information from a single view in a 2-D intensity image is a challenging task, especially for range estimation. The statistical analysis of the whole sequence of 319 frames is summarized in Table 1.

Fig. 7

Illustration of the prediction error as a 3-D scatter plot. The difference between track and prediction is plotted for each coordinate axis: height, width, and range.

OE_58_5_053101_f007.png

Table 1

Summary of statistics on the prediction error in a sequence of 319 frames.

Data | Mean (m) | Std. deviation σ (m) | Variance σ² (m²) | Min. (m) | Max. (m)
Δx | 0.002 | 0.03 | 0.001 | −0.13 | 0.13
Δy | 0.0002 | 0.0109 | 0.0001 | −0.03 | 0.03
Δz | 0.002 | 0.244 | 0.06 | −0.59 | 0.88

4.

Conclusions

We want to point out again that the 3-D information was derived from pose estimation in 2-D intensity images. Further, in the analyzed sequences, the MUAV was observed under a very shallow observation angle. Here, a more slanted observation angle could help to increase the accuracy of the spatial position estimation.

We successfully demonstrated the capability to extract 3-D information from 2-D intensity images by applying pose estimation with a simple skeleton model. From this pose information (i.e., position and orientation) and a simplified flight behavior model (with a floating body approximation), we could calculate the state vector that describes the current MUAV flight operation. Further, using a motion model that includes velocity and acceleration, we could predict the position of the MUAV in the near future.

The overall performance gives evidence that our approach is capable of providing a good first estimation of the future MUAV flight behavior. Unfortunately, in dynamic operation that includes rapid changes in velocity and direction, the underlying model tends to overshoot. In future investigations, this error could be diminished by the use of higher-order temporal derivatives.

In our experiments, we used a single view from a single camera to retrieve a 3-D pose estimation by mainly analyzing the positions of the four rotors. Further, the experiments were carried out at a very narrow viewing angle close to the MUAV principal plane. In these conditions, the line of sight to one or two rotors is sometimes blocked by the MUAV main body. Then, the pose estimation uses only the remaining visible rotors (three visible rotors are optimal). This situation is sufficient for a reliable pose estimation and has only a slight impact on the overall estimation of the 3-D position. In the later application in outdoor scenarios, we expect a steeper viewing angle, which should allow for an even better pose estimation.

To date, we have investigated our approach only under laboratory conditions as a proof-of-principle study. However, image data recorded in outdoor scenarios under operational conditions will contain many additional disruptive factors, such as reduced signal-to-noise ratio, limited resolution, varying lighting conditions, and cluttered background. Further, MUAVs will fly at larger ranges. Here, an active imaging device with a very narrow field of view could counter these challenges: it could be used to gate out the background, increase resolution and signal-to-noise ratio, and be independent of lighting conditions. Our team has access to a shortwave-infrared laser gated-viewing system, as presented by Christnacher et al.10 In the near future, we will perform tests to investigate the pose estimation approach under operational conditions. Lighting conditions and observing the MUAV flight behavior at a larger range can reduce the ranging accuracy of the proposed pose estimation approach, due to an expected smaller scaling effect of the MUAV with range at narrower observation angles. This effect could be compensated for by the ranging capabilities of laser imaging devices. Notwithstanding this, the determination of the tilt angles ($\theta_x$ and $\theta_z$) and, thus, the application of the flight model should not be affected as long as the target is imaged with high resolution.

Further, in the presented work, we have tested our approach on a single known MUAV, but we are convinced that our approach is of a general nature and could easily be adapted to other multicopter MUAVs. Finally, the tracking and prediction of the flight behavior of an unknown MUAV is possible if a classification of the target is performed beforehand. As long as a proper skeleton model is available, this approach should be able to track and predict the flight behavior of any multicopter MUAV. For the prediction of other MUAV types, such as fixed-wing or vertical takeoff and landing vehicles, the flight model has to be adapted carefully and in detail.

5.

Appendix A: Brief Description of the Kalman Filter

The Kalman filter is an iterative forward recursion that consists of two steps: the “time update” and the “measurement update.”22 Within the “time update,” for the $i$'th measurement, the a priori state vector $X_i^-$ and error covariance $P_i^-$ are projected from the previous values using the linear stochastic difference equation

Eq. (10)

$$X_i^- = A X_{i-1} + B u_k,$$

Eq. (11)

$$P_i^- = A P_{i-1} A^T + Q.$$

Here, Eq. (10) reflects the process model as a system of motion equations with the state-transition matrix $A$, the previous state vector $X_{i-1}$, and the control-input term $B u_k$ (in our case, $B u_k = 0$). Analogously, the projected error covariance matrix $P_i^-$ is calculated in Eq. (11) from the previous error covariance $P_{i-1}$ and the process noise covariance matrix $Q$.

Then, in the “measurement update,” the state vector $X_i$ is obtained by weighting the measurement $z_i$ and the projected state vector $X_i^-$ with the Kalman gain $K_i$. Finally, the error covariance $P_i$ is updated

Eq. (12)

$$K_i = P_i^- H^T (H P_i^- H^T + R)^{-1},$$

Eq. (13)

$$X_i = X_i^- + K_i (z_i - H X_i^-),$$

Eq. (14)

$$P_i = P_i^- - K_i H P_i^-.$$

Here, the matrices H and R describe the measurement (or observation) process and covariance.
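For reference, a minimal NumPy sketch of one Kalman iteration [Eqs. (10)–(14)] is given below; the noise covariances in the usage example are placeholders and not the values used in our experiments.

import numpy as np

def kalman_step(X_prev, P_prev, z, A, H, Q, R):
    """One Kalman iteration: time update [Eqs. (10)-(11)] followed by the
    measurement update [Eqs. (12)-(14)]; the control term Bu_k is zero here."""
    X_pri = A @ X_prev                                      # Eq. (10), a priori state
    P_pri = A @ P_prev @ A.T + Q                            # Eq. (11), a priori covariance
    K = P_pri @ H.T @ np.linalg.inv(H @ P_pri @ H.T + R)    # Eq. (12), Kalman gain
    X = X_pri + K @ (z - H @ X_pri)                         # Eq. (13), a posteriori state
    P = P_pri - K @ H @ P_pri                               # Eq. (14), a posteriori covariance
    return X, P

# Example for one coordinate with state [x, x_dot, x_ddot] and a position-only measurement.
A = np.array([[1.0, 1.0, 0.5], [0.0, 1.0, 1.0], [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])
Q = 1e-3 * np.eye(3)          # placeholder process noise covariance
R = np.array([[0.25]])        # placeholder measurement noise covariance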

6.

Appendix B: The Microunmanned Aerial Vehicle Flight Behavior Model

The MUAV flight behavior can be described by the equation of motion in Eq. (15). Here, $m$ is the mass of the MUAV and $m\ddot{x}$ is the resulting force that accelerates the MUAV in any direction. This force is the sum of the gravitational force $G = [0, -mg, 0]^T$, the total thrust $T$ of all propellers, and the drag force $D$. Owing to the lack of information about the single motor thrusts, rotational motion is neglected in this model

Eq. (15)

$$m\ddot{x} = G + T + D.$$

The overall thrust $T$ is assumed to be perpendicular to the MUAV main plane, that is, the plane in which all motors lie. Here, $T$ is oriented in the direction of the normal vector $n = [n_x, n_y, n_z]^T$. We define the thrust amplitude as $|T| = \sqrt{T_x^2 + T_y^2 + T_z^2}$ and the thrust vector as $T = |T|\,n = [T_x, T_y, T_z]^T$. The thrust amplitude takes positive values $|T| \in [0, T_{max}]$. Further, in a first approximation, we assume a floating body, which means that the vertical thrust component ($T_y$) compensates for the gravitational force

Eq. (16)

$$T_y + G_y = 0 \;\Rightarrow\; T_y = mg.$$

From this assumption, we can determine the thrust components in the other directions

Eq. (17)

$$T_x = \frac{n_x}{n_y} T_y = \frac{n_x}{n_y} mg,$$

Eq. (18)

$$T_z = \frac{n_z}{n_y} T_y = \frac{n_z}{n_y} mg.$$

Finally, we use a simplified model based on Stokes’s law to determine the drag force $D = -k_d \dot{x}$, with $k_d$ as an aerodynamic constant. This force describes the aerodynamic drag on the MUAV, which is proportional to the velocity. The aerodynamic constant $k_d$ can be determined experimentally.40,41 In this paper, we assume an isotropic aerodynamic coefficient of $k_d = 0.55\ \mathrm{kg/s}$ and a maximum thrust of $T_{max} = 15.3\ \mathrm{kg\,m/s^2}$, which can be calculated from Eq. (15) and the technical specifications of the investigated MUAV.34 Details of this calculation are summarized in Sec. 7.

From Eqs. (15)–(18), we can derive expressions for the accelerations in the x-, y-, and z-directions from the MUAV tilt angles ($\theta_x$ and $\theta_z$), which correspond to the 3-D pose of the MUAV

Eq. (19)

$$\ddot{x} = \frac{n_x}{n_y} g - \frac{k_d}{m}\dot{x}, \quad \text{with } \frac{n_x}{n_y} = \tan\theta_x,$$

Eq. (20)

$$\ddot{y} = -\frac{k_d}{m}\dot{y},$$

Eq. (21)

$$\ddot{z} = \frac{n_z}{n_y} g - \frac{k_d}{m}\dot{z}, \quad \text{with } \frac{n_z}{n_y} = \tan\theta_z.$$

Further, stable flight operation has to comply with the condition that the thrust amplitude $|T|$ is limited by the maximum thrust, $|T| < T_{max}$. This condition has a direct impact on the admissible values of the normal vector [see Eq. (22)] and is equivalent to the definition of a maximum tilt angle $\theta_{max}$

Eq. (22)

$$\frac{n_x^2 + n_y^2 + n_z^2}{n_y^2} < \left(\frac{T_{max}}{mg}\right)^2,$$

Eq. (23)

$$\frac{n_x^2 + n_z^2}{n_y^2} < \left(\frac{T_{max}}{mg}\right)^2 - 1 = \tan^2\theta_{max}.$$
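A compact Python sketch of the resulting acceleration model [Eqs. (19)–(21)] is given below, using the mass and drag constant of Table 2; the function name is chosen only for this illustration.

import numpy as np

G = 9.81      # gravitational acceleration in m/s^2
M = 1.28      # MUAV mass in kg (Table 2)
KD = 0.55     # aerodynamic drag constant in kg/s (Table 2)

def model_acceleration(theta_x, theta_z, velocity):
    """Accelerations of the floating-body flight model, Eqs. (19)-(21): horizontal
    thrust components from the tilt angles plus a Stokes-like drag term."""
    vx, vy, vz = velocity
    ax = np.tan(theta_x) * G - (KD / M) * vx     # Eq. (19)
    ay = -(KD / M) * vy                          # Eq. (20): gravity balanced by thrust
    az = np.tan(theta_z) * G - (KD / M) * vz     # Eq. (21)
    return np.array([ax, ay, az])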

7.

Appendix C: Estimation of Aerodynamic Coefficient kd and Maximum Thrust Tmax

With the definitions of the gravitational force G, the thrust T, and the drag force D as

Eq. (24)

$$G = \begin{bmatrix} 0 \\ -mg \\ 0 \end{bmatrix}, \quad T = \begin{bmatrix} \tan\theta_x\, T_y \\ T_y \\ \tan\theta_z\, T_y \end{bmatrix}, \quad \text{and} \quad D = \begin{bmatrix} -k_d\dot{x} \\ -k_d\dot{y} \\ -k_d\dot{z} \end{bmatrix},$$
the MUAV motion can be described in the x-, y-, and z-directions by the equation system in Eqs. (25)–(27), in analogy to Eq. (15). Here, $\theta_i$ is the inclination of the MUAV, or tilt angle, in the $i$-direction, and $\tan\theta_i$ is equal to $n_i/n_y$

Eq. (25)

$$m\ddot{x} = \tan\theta_x\, T_y - k_d\dot{x},$$

Eq. (26)

$$m\ddot{y} = -mg + T_y - k_d\dot{y},$$

Eq. (27)

$$m\ddot{z} = \tan\theta_z\, T_y - k_d\dot{z}.$$

In the following, the parameters $T_{max}$ and $k_d$ are derived from this equation system using the manufacturer specifications for the weight, the maximum horizontal and vertical velocities, and the maximum tilt angle of the investigated MUAV.34 The values are summarized in Table 2.

Table 2

Physical flight parameters $T_{max}$ and $k_d$ for the investigated MUAV, calculated from the manufacturer specifications.34

Parameter | Symbol | Unit | DJI Phantom 3
Total weight | m | kg | 1.28
Max ascent speed | v_v,max | m/s | 5
Max horizontal speed | v_h,max | m/s | 16
Max tilt angle | θ_max | deg | 35
Max thrust | T_max | kg·m/s² | 15.3
Drag constant | k_d | kg/s | 0.55

As illustrated in Fig. 8, we can distinguish two borderline flight operations: maximum horizontal velocity and maximum vertical velocity.

Fig. 8

Two extreme flight operations are distinguished: case 1, maximal horizontal velocity, and case 2, maximal vertical velocity.

OE_58_5_053101_f008.png

7.1.

Maximum Horizontal Velocity

First, we assume a flight operation with maximal horizontal velocity in the x-direction. In this case, the MUAV is tilted to the maximum inclination toward the flight direction, which is given by the maximum tilt angle $\theta = \theta_{max}$. In the horizontal direction (x), the velocity $\dot{x}$ is equal to $v_{h,max}$, the maximum horizontal velocity.

Further, due to this borderline flight operation, the velocities in the other directions and all accelerations are zero, $\dot{y} = \dot{z} = 0$ and $\ddot{x} = \ddot{y} = \ddot{z} = 0$. The thrust is at its maximum level, $|T| = T_{max}$, but, due to the floating-body assumption, its vertical component $T_y$ has to compensate for the gravitational force, that is, $T_y = mg$. Thus, the maximum thrust $T_{max}$ can be calculated as

Eq. (28)

$$T_{max} = \sqrt{T_x^2 + T_y^2},$$

Eq. (29)

$$= mg\sqrt{(\tan\theta_{max})^2 + 1},$$

Eq. (30)

$$T_{max} = 15.3\ \mathrm{kg\,m/s^2}.$$

Then, the aerodynamic constant kd can be calculated from Eq. (25) as

Eq. (31)

$$k_d = \frac{mg\tan\theta_{max}}{v_{h,max}} = 0.55\ \mathrm{kg/s}.$$

Although we have already determined the values for both Tmax and kd, these results can be validated by the second case, that is, maximum vertical velocity.

7.2.

Maximum Vertical Velocity

Here, we assume a flight operation with maximum vertical velocity, $\dot{y} = v_{v,max}$. In this case, the inclination angle is zero, $\theta = 0\ \mathrm{deg}$, and the maximum thrust pushes the MUAV upward in the y-direction, $T_y = T_{max}$. The velocities in the horizontal directions are zero, $\dot{x} = \dot{z} = 0$, as are all accelerations, $\ddot{x} = \ddot{y} = \ddot{z} = 0$. Then, from Eq. (26) and with the result for either $k_d$ or $T_{max}$ from the previous case, we can again derive

Eq. (32)

$$k_d = \frac{T_{max} - mg}{v_{v,max}} = 0.55\ \mathrm{kg/s},$$

Eq. (33)

$$T_{max} = mg + k_d v_{v,max} = 15.3\ \mathrm{kg\,m/s^2}.$$

Both results agree with the previously found values.
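The parameter values of Table 2 can be reproduced with a few lines of Python that evaluate Eqs. (29)–(33) from the manufacturer specifications:

import math

m, g = 1.28, 9.81                    # mass (kg) and gravitational acceleration (m/s^2)
v_h_max, v_v_max = 16.0, 5.0         # max horizontal and ascent speed (m/s)
theta_max = math.radians(35.0)       # max tilt angle

# Case 1: maximum horizontal velocity, Eqs. (29)-(31)
T_max = m * g * math.sqrt(math.tan(theta_max) ** 2 + 1)    # approximately 15.3 kg m/s^2
k_d = m * g * math.tan(theta_max) / v_h_max                # approximately 0.55 kg/s

# Case 2: maximum vertical velocity, Eqs. (32)-(33), used as a cross-check
k_d_check = (T_max - m * g) / v_v_max                      # approximately 0.55 kg/s
T_max_check = m * g + k_d * v_v_max                        # approximately 15.3 kg m/s^2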

Acknowledgments

The authors would like to thank Dr. Sebastien Hengy from ISL’s acoustics group for carefully operating the MUAV during our indoor experiments.

References

1. U.S. Department of Transportation, “Federal Aviation Administration—UAS sightings report,” https://www.faa.gov/uas/resources/uas_sightings_report/ (accessed August 2018).
2. Transport Canada, “Statement by Minister of Transport about a drone incident with a passenger aircraft in Quebec City,” https://www.canada.ca/en/transport-canada/news/2017/10/statement_by_ministeroftransportaboutadroneincidentwithapassenge.html (accessed January 2019).
3. L. To, A. Bati, and D. Hilliard, “Radar cross section measurements of small unmanned air vehicle systems in non-cooperative field environments,” in 3rd Eur. Conf. Antennas and Propag., 3637–3641 (2009).
4. S. Harman, “Characteristics of the radar signature of multi-rotor UAVs,” in Eur. Radar Conf. (EuRAD), 93–96 (2016).
5. C.-C. Tsai, C.-T. Chiang, and W.-J. Liao, “Radar cross section measurement of unmanned aerial vehicles,” in IEEE Int. Workshop Electromagn.: Appl. and Stud. Innov. Competition (iWEM), 1–3 (2016). https://doi.org/10.1109/iWEM.2016.7504915
6. M. Pieraccini, L. Miccinesi, and N. Rojhani, “RCS measurements and ISAR images of small UAVs,” IEEE Aerosp. Electron. Syst. Mag. 32, 28–32 (2017). https://doi.org/10.1109/MAES.2017.160167
7. M. Laurenzis, E. Bacher, and F. Christnacher, “Experimental and rendering-based investigation of laser radar cross sections of small unmanned aerial vehicles,” Opt. Eng. 56(12), 124106 (2017). https://doi.org/10.1117/1.OE.56.12.124106
8. D. Noetel et al., “Detection of MAVs (micro aerial vehicles) based on millimeter wave radar,” Proc. SPIE 9993, 999308 (2016). https://doi.org/10.1117/12.2242020
9. T. Müller, “Robust drone detection for day/night counter-UAV with static VIS and SWIR cameras,” Proc. SPIE 10190, 1019018 (2017). https://doi.org/10.1117/12.2262575
10. F. Christnacher et al., “Optical and acoustical UAV detection,” Proc. SPIE 9988, 99880B (2016). https://doi.org/10.1117/12.2240752
11. A. Hommes et al., “Detection of acoustic, electro-optical and RADAR signatures of small unmanned aerial vehicles,” Proc. SPIE 9997, 999701 (2016). https://doi.org/10.1117/12.2242180
12. M. Laurenzis et al., “Multi-sensor field trials for detection and tracking of multiple small unmanned aerial vehicles flying at low altitude,” Proc. SPIE 10200, 102001A (2017). https://doi.org/10.1117/12.2261930
13. S. Hengy et al., “Multimodal UAV detection: study of various intrusion scenarios,” Proc. SPIE 10434, 104340P (2017). https://doi.org/10.1117/12.2278212
14. A. Schumann et al., “Deep cross-domain flying object classification for robust UAV detection,” in 14th IEEE Int. Conf. Adv. Video and Signal Based Surveill. (AVSS), 1–6 (2017). https://doi.org/10.1109/AVSS.2017.8078558
15. A. Coluccia et al., “Drone-vs-bird detection challenge at IEEE AVSS2017,” in 14th IEEE Int. Conf. Adv. Video and Signal Based Surveill. (AVSS), 1–6 (2017). https://doi.org/10.1109/AVSS.2017.8078464
16. J. Sander et al., “High-level data fusion component for drone classification and decision support in counter UAV,” Proc. SPIE 10651, 106510F (2018). https://doi.org/10.1117/12.2306148
17. F. Racek et al., “Tracking, aiming, and hitting the UAV with ordinary assault rifle,” Proc. SPIE 10441, 104410C (2017). https://doi.org/10.1117/12.2276310
18. T. Kojima and T. Namerikawa, “Image-based position estimation of UAV using Kalman filter,” in IEEE Conf. Control Appl. (CCA), 406–411 (2015). https://doi.org/10.1109/CCA.2015.7320663
19. S. H. Tang et al., “Unscented Kalman filter for position estimation of UAV by using image information,” in 54th Annu. Conf. Soc. Instrum. and Control Eng. Jpn. (SICE), 695–700 (2015). https://doi.org/10.1109/SICE.2015.7285427
20. R. E. Kálmán, “A new approach to linear filtering and prediction problems,” J. Basic Eng. 82(1), 35–45 (1960). https://doi.org/10.1115/1.3662552
21. K. P. Murphy, “Kalman filter toolbox for Matlab,” https://www.cs.ubc.ca/murphyk/Software/Kalman/kalman_download.html (accessed November 2018).
22. G. Bishop and G. Welch, “An introduction to the Kalman filter,” in Proc. SIGGRAPH, 59 (2001).
23. G. Nebehay and R. Pflugfelder, “Consensus-based matching and tracking of keypoints for object tracking,” in IEEE Winter Conf. Appl. Comput. Vision (WACV), 862–869 (2014). https://doi.org/10.1109/WACV.2014.6836013
24. G. Nebehay and R. Pflugfelder, “Clustering of static-adaptive correspondences for deformable object tracking,” in IEEE Conf. Comput. Vision and Pattern Recognit. (CVPR), 2784–2791 (2015). https://doi.org/10.1109/CVPR.2015.7298895
25. E. Rosten and T. Drummond, “Machine learning for high-speed corner detection,” Lect. Notes Comput. Sci. 3951, 430–443 (2006). https://doi.org/10.1007/11744023
26. E. Rosten, R. Porter, and T. Drummond, “Faster and better: a machine learning approach to corner detection,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 105–119 (2010). https://doi.org/10.1109/TPAMI.2008.275
27. S. Leutenegger, M. Chli, and R. Y. Siegwart, “BRISK: binary robust invariant scalable keypoints,” in IEEE Int. Conf. Comput. Vision, 2548–2555 (2011). https://doi.org/10.1109/ICCV.2011.6126542
28. K. D. Sebesta and N. Boizot, “A real-time adaptive high-gain EKF, applied to a quadcopter inertial navigation system,” IEEE Trans. Ind. Electron. 61(1), 495–503 (2014). https://doi.org/10.1109/TIE.2013.2253063
29. P. Castillo, R. Lozano, and A. Dzul, “Stabilization of a mini rotorcraft with four rotors,” IEEE Control Syst. Mag. 25(6), 45–55 (2005). https://doi.org/10.1109/MCS.2005.1550152
30. S. Bouabdallah and R. Y. Siegwart, “Full control of a quadrotor,” in IEEE/RSJ Int. Conf. Intell. Robots and Syst., 153–158 (2007). https://doi.org/10.1109/IROS.2007.4399042
31. R. Mahony, V. Kumar, and P. Corke, “Multirotor aerial vehicles: modeling, estimation, and control of quadrotor,” IEEE Rob. Autom. Mag. 19, 20–32 (2012). https://doi.org/10.1109/MRA.2012.2206474
32. C. G. Mayhew, R. G. Sanfelice, and A. R. Teel, “On path-lifting mechanisms and unwinding in quaternion-based attitude control,” IEEE Trans. Autom. Control 58(5), 1179–1191 (2013). https://doi.org/10.1109/TAC.2012.2235731
33. G. Hoffmann et al., “Quadrotor helicopter flight dynamics and control: theory and experiment,” in AIAA Guid., Navig. and Control Conf. and Exhibit, 6461 (2007).
34. DJI, “Phantom 3 professional specs,” https://www.dji.com/ (accessed September 2018).
35. M. Calonder et al., “BRIEF: binary robust independent elementary features,” Lect. Notes Comput. Sci. 6314, 778–792 (2010). https://doi.org/10.1007/978-3-642-15561-1
36. D. Nister, “An efficient solution to the five-point relative pose problem,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 756–770 (2004). https://doi.org/10.1109/TPAMI.2004.17
37. R. I. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed., Cambridge University Press, New York (2004).
38. D. C. Sorensen, “Newton’s method with a model trust region modification,” SIAM J. Numer. Anal. 19(2), 409–426 (1982). https://doi.org/10.1137/0719026
39. SciPy Community, “SciPy v0.18.1, optimization and root finding,” https://www.scipy.org/ (accessed January 2019).
40. C. R. Russell et al., “Wind tunnel and hover performance test results for multicopter UAS vehicles,” in 72nd Am. Helicopter Soc. (AHS) Int. Annu. Forum (2016).
41. C. R. Russell, C. R. Theodore, and M. K. Sekula, “Incorporating test data for small UAS at the conceptual design level,” in AHS Int. Tech. Meeting Aeromech. Des. Transformative Vert. Flight (2018).

Biography

Martin Rebert received his master’s degree in engineering from Télécom Bretagne (France) in 2015. He is currently preparing his PhD in collaboration with the IRIMAS laboratory of the University of Haute-Alsace in Mulhouse (France) and the French–German Research Institute of Saint-Louis in Saint-Louis (France). His work is focused on computer vision and autonomous navigation.

Stéphane Schertzer is a research engineer at the French–German Research Institute of Saint-Louis (ISL) in Saint-Louis (France). He received his MSc degree in electrical engineering and computer sciences from the University of Haute-Alsace in Mulhouse (France) in 2009. He specializes in computer vision, active imaging and software development (C++, CUDA etc.) of algorithms and man–machine interfaces. He is currently developing new algorithms for active imaging systems.

Martin Laurenzis received his MS degree in physics from the University of Dortmund (Germany) in 1999 and his PhD in electrical engineering and information technology from the University of Aachen (Germany) in 2005. He has been with the French–German Research Institute of Saint–Louis (France) since 2004. His research interests are focused on microunmanned aerial vehicle detection using heterogeneous sensor networks, applied computer vision, and computational imaging.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Martin Rebert, Stéphane Schertzer, and Martin Laurenzis "Prediction of microunmanned aerial vehicle flight behavior from two-dimensional intensity images," Optical Engineering 58(5), 053101 (20 May 2019). https://doi.org/10.1117/1.OE.58.5.053101
Received: 11 March 2019; Accepted: 23 April 2019; Published: 20 May 2019
KEYWORDS
3D modeling

Filtering (signal processing)

Motion models

3D image processing

Aerodynamics

Detection and tracking algorithms
