UAV-g 2017 has three categories of publications, all presented at the conference: papers published in the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, papers published in the ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, and extended abstracts available only via this website.

Official Links to the ISPRS Annals and Archives Volumes

Papers for the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences

  • E. Cledat and D. A. Cucci, “MAPPING GNSS RESTRICTED ENVIRONMENTS WITH A DRONE TANDEM AND INDIRECT POSITION CONTROL,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    The problem of autonomously mapping highly cluttered environments, such as urban and natural canyons, is intractable with the current UAV technology. The reason lies in the absence or unreliability of GNSS signals due to partial sky occlusion or multi-path effects. High quality carrier-phase observations are also required in efficient mapping paradigms, such as Assisted Aerial Triangulation, to achieve high ground accuracy without the need for dense networks of ground control points. In this work we consider a drone tandem in which the first drone flies outside the canyon, where the GNSS constellation is ideal, visually tracks the second drone and provides indirect position control for it. This enables both autonomous guidance and accurate mapping of GNSS restricted environments without the need for ground control points. We address the technical feasibility of this concept considering preliminary real-world experiments in comparable conditions and we perform a mapping accuracy prediction based on a simulation scenario.

    @InProceedings{cledat2017uavg,
    Title = {MAPPING GNSS RESTRICTED ENVIRONMENTS WITH A DRONE TANDEM AND INDIRECT POSITION CONTROL},
    Author = {E. Cledat and D.A. Cucci},
    Booktitle = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {The problem of autonomously mapping highly cluttered environments, such as urban and natural canyons, is intractable with the current UAV technology. The reason lies in the absence or unreliability of GNSS signals due to partial sky occlusion or multi-path effects. High quality carrier-phase observations are also required in efficient mapping paradigms, such as Assisted Aerial Triangulation, to achieve high ground accuracy without the need of dense networks of ground control points. In this work we consider a drone tandem in which the first drone flies outside the canyon, where GNSS constellation is ideal, visually tracks the second drone and provides an indirect position control for it. This enables both autonomous guidance and accurate mapping of GNSS restricted environments without the need of ground control points. We address the technical feasibility of this concept considering preliminary real-world experiments in comparable conditions and we perform a mapping accuracy prediction based on a simulation scenario.},
    }
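
    The tandem concept reduces to a simple observation equation: the canyon drone's absolute position is the high-flying drone's GNSS position plus the visually tracked relative vector, with both error sources adding up. A minimal Python sketch of this idea (all values illustrative, not the authors' implementation):

    import numpy as np

    # Drone A flies above the canyon with an ideal GNSS fix; drone B is
    # tracked visually from A. B's absolute position is A's position plus
    # the rotated relative vector; covariances add for independent errors.
    p_a = np.array([465100.0, 5247300.0, 620.0])  # drone A, GNSS (toy values)
    R_a = np.eye(3)                               # attitude of A (assumed known)
    r_ab = np.array([12.0, -3.0, -45.0])          # B relative to A, visual tracking

    p_b = p_a + R_a @ r_ab                        # indirect position of drone B

    cov_gnss = np.diag([0.02, 0.02, 0.04]) ** 2   # carrier-phase-level noise [m]
    cov_track = np.diag([0.05, 0.05, 0.10]) ** 2  # visual tracking noise [m]
    cov_b = cov_gnss + R_a @ cov_track @ R_a.T    # first-order error propagation

    print(p_b, np.sqrt(np.diag(cov_b)))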

  • S. Crommelinck, R. Bennett, M. Gerke, M. N. Koeva, M. Y. Yang, and G. Vosselman, “SLIC SUPERPIXELS FOR OBJECT DELINEATION FROM UAV DATA,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    Unmanned aerial vehicles (UAV) are increasingly investigated with regard to their potential to create and update (cadastral) maps. UAVs provide a flexible and low-cost platform for high-resolution data, from which object outlines can be accurately delineated. This delineation could be automated with image analysis methods to improve existing mapping procedures that are cost-, time- and labor-intensive and hardly reproducible. This study investigates a superpixel approach, namely simple linear iterative clustering (SLIC), in terms of its applicability to UAV data. The approach is investigated in terms of its applicability to high-resolution UAV orthoimages and in terms of its ability to delineate object outlines of roads and roofs. Results show that the approach is applicable to UAV orthoimages of 0.05 m GSD and extents of 100 million and 400 million pixels. Further, the approach delineates the objects with the high accuracy provided by the UAV orthoimages at completeness rates of up to 64%. The approach is not suitable as a standalone approach for object delineation. However, it shows high potential for a combination with further methods that delineate objects at higher correctness rates in exchange for a lower localization quality. This study provides a basis for future work that will focus on the incorporation of multiple methods for an interactive, comprehensive and accurate object delineation from UAV data. This aims to support numerous application fields such as topographic and cadastral mapping.

    @InProceedings{crommelinck2017uavg,
    Title = {SLIC SUPERPIXELS FOR OBJECT DELINEATION FROM UAV DATA},
    Author = {S. Crommelinck and R. Bennett and M. Gerke and M.N. Koeva and M.Y. Yang and G. Vosselman},
    Booktitle = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Unmanned aerial vehicles (UAV) are increasingly investigated with regard to their potential to create and update (cadastral) maps. UAVs provide a flexible and low-cost platform for high-resolution data, from which object outlines can be accurately delineated. This delineation could be automated with image analysis methods to improve existing mapping procedures that are cost, time and labor intensive and of little reproducibility. This study investigates a superpixel approach, namely simple linear iterative clustering (SLIC), in terms of its applicability to UAV data. The approach is investigated in terms of its applicability to high-resolution UAV orthoimages and in terms of its ability to delineate object outlines of roads and roofs. Results show that the approach is applicable to UAV orthoimages of 0.05 m GSD and extents of 100 million and 400 million pixels. Further, the approach delineates the objects with the high accuracy provided by the UAV orthoimages at completeness rates of up to 64%. The approach is not suitable as a standalone approach for object delineation. However, it shows high potential for a combination with further methods that delineate objects at higher correctness rates in exchange of a lower localization quality. This study provides a basis for future work that will focus on the incorporation of multiple methods for an interactive, comprehensive and accurate object delineation from UAV data. This aims to support numerous application fields such as topographic and cadastral mapping.},
    }
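
    SLIC is readily available in scikit-image, so the core of the investigated pipeline can be sketched in a few lines. A hedged Python sketch (file name and parameter values are illustrative, not the paper's settings):

    import numpy as np
    from skimage import io
    from skimage.segmentation import slic, mark_boundaries

    image = io.imread("uav_orthoimage.tif")   # hypothetical RGB orthoimage
    # SLIC groups pixels into compact, color-homogeneous superpixels.
    segments = slic(image, n_segments=5000, compactness=10, start_label=1)

    # Superpixel boundaries serve as candidate object outlines (roads, roofs).
    outlined = mark_boundaries(image, segments)
    io.imsave("superpixel_outlines.png", (outlined * 255).astype(np.uint8))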

  • M. Hillemann and B. Jutzi, “UCALMICEL – UNIFIED INTRINSIC AND EXTRINSIC CALIBRATION OF A MULTI-CAMERA-SYSTEM AND A LASERSCANNER,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    Unmanned Aerial Vehicles (UAVs) with adequate sensors enable new applications in the scope between expensive, large-scale, aircraft-carried remote sensing and time-consuming, small-scale terrestrial surveying. To perform these applications, cameras and laserscanners are a good sensor combination, due to their complementary properties. To exploit this sensor combination, the intrinsics and relative poses of the individual cameras and the relative poses of the cameras and the laserscanners have to be known. In this manuscript, we present a calibration methodology for the Unified Intrinsic and Extrinsic Calibration of a Multi-Camera-System and a Laserscanner (UCalMiCeL). The innovation of this methodology, which extends the calibration of a single camera to a line laserscanner, is a unifying bundle adjustment step to ensure an optimal calibration of the entire sensor system. We use generic camera models, including pinhole, omnidirectional and fisheye cameras. For our approach, the laserscanner and each camera have to share a joint field of view, whereas the fields of view of the individual cameras may be disjoint. The calibration approach is tested with a sensor system consisting of two fisheye cameras and a line laserscanner with a range measuring accuracy of 30 mm. We evaluate the estimated relative poses between the cameras quantitatively by using an additional calibration approach for Multi-Camera-Systems based on control points which are accurately measured by a motion capture system. In the experiments, our novel calibration method achieves a relative pose estimation with a deviation below 1.8 deg and 6.4 mm.

    @InProceedings{hillemann2017uavg,
    Title = {UCALMICEL - UNIFIED INTRINSIC AND EXTRINSIC CALIBRATION OF A MULTI-CAMERA-SYSTEM AND A LASERSCANNER},
    Author = {M. Hillemann and B. Jutzi},
    Booktitle = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Unmanned Aerial Vehicle (UAV) with adequate sensors enable new applications in the scope between expensive, large-scale, aircraft- carried remote sensing and time-consuming, small-scale, terrestrial surveyings. To perform these applications, cameras and laserscanners are a good sensor combination, due to their complementary properties. To exploit this sensor combination the intrinsics and relative poses of the individual cameras and the relative poses of the cameras and the laserscanners have to be known. In this manuscript, we present a calibration methodology for the Unified Intrinsic and Extrinsic Calibration of a Multi-Camera-System and a Laserscanner (UCalMiCeL). The innovation of this methodology, which is an extension to the calibration of a single camera to a line laserscanner, is an unifying bundle adjustment step to ensure an optimal calibration of the entire sensor system. We use generic camera models, including pinhole, omnidirectional and fisheye cameras. For our approach, the laserscanner and each camera have to share a joint field of view, whereas the fields of view of the individual cameras may be disjoint. The calibration approach is tested with a sensor system consisting of two fisheye cameras and a line laserscanner with a range measuring accuracy of 30mm. We evaluate the estimated relative poses between the cameras quantitatively by using an additional calibration approach for Multi-Camera-Systems based on control points which are accurately measured by a motion capture system. In the experiments, our novel calibration method achieves a relative pose estimation with a deviation below 1.8deg and 6.4mm. },
    }
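
    The "unifying bundle adjustment" idea is that camera reprojection residuals and laserscanner point-to-plane residuals enter one joint cost function. A heavily simplified Python sketch (one pinhole camera, translation-only laser pose, toy observations; not the authors' parametrization):

    import numpy as np
    from scipy.optimize import least_squares

    def residuals(x, cam_obs, laser_pts, plane):
        fx, fy, cx, cy = x[:4]            # pinhole intrinsics
        t = x[4:7]                        # laser -> camera translation
        n, d = plane                      # calibration plane: n . X + d = 0
        res = []
        for X, uv in cam_obs:             # (3D target point, observed pixel)
            res += [fx * X[0] / X[2] + cx - uv[0],
                    fy * X[1] / X[2] + cy - uv[1]]
        for p in laser_pts:               # laser hits on the plane (laser frame)
            res.append(n @ (p + t) + d)   # signed point-to-plane distance
        return np.array(res)

    cam_obs = [(np.array([0.1, 0.0, 2.0]), np.array([650.0, 400.0])),
               (np.array([-0.1, 0.0, 2.0]), np.array([550.0, 400.0])),
               (np.array([0.0, 0.1, 2.0]), np.array([600.0, 450.0])),
               (np.array([0.0, -0.1, 2.0]), np.array([600.0, 350.0]))]
    laser_pts = [np.array([0.0, 0.1, 1.9]), np.array([0.1, 0.0, 1.9]),
                 np.array([-0.1, -0.1, 1.9])]
    plane = (np.array([0.0, 0.0, 1.0]), -2.0)

    x0 = np.array([1000.0, 1000.0, 600.0, 400.0, 0.0, 0.0, 0.0])
    sol = least_squares(residuals, x0, args=(cam_obs, laser_pts, plane))
    print(sol.x)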

  • E. Maset, A. Fusiello, F. Crosilla, R. Toldo, and D. Zorzetto, “PHOTOGRAMMETRIC 3D BUILDING RECONSTRUCTION FROM THERMAL IMAGES,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about position and attitude of the images or camera calibration parameters. Moreover, we propose a procedure based on the Iterative Closest Point (ICP) algorithm to create a model that combines the high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.

    @InProceedings{maset2017uavg,
    Title = {PHOTOGRAMMETRIC 3D BUILDING RECONSTRUCTION FROM THERMAL IMAGES},
    Author = {E. Maset and A. Fusiello and F. Crosilla and R. Toldo and D. Zorzetto},
    Booktitle = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that a commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about position and attitude of the images nor camera calibration parameters. Moreover, we propose a procedure based on Iterative Closest Point (ICP) algorithm to create a model that combines high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way. },
    }
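
    The ICP-based fusion step can be sketched with Open3D: register the thermal cloud to the denser, more accurate RGB cloud, then carry the thermal values over to the RGB geometry. A hedged Python sketch (file names and the correspondence distance are placeholders, not the paper's values):

    import numpy as np
    import open3d as o3d

    rgb = o3d.io.read_point_cloud("rgb_cloud.ply")   # hypothetical inputs
    tir = o3d.io.read_point_cloud("tir_cloud.ply")

    # Point-to-point ICP aligning the TIR cloud onto the RGB cloud.
    result = o3d.pipelines.registration.registration_icp(
        tir, rgb, 0.5, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    tir.transform(result.transformation)
    print("ICP fitness:", result.fitness)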

  • M. Maurer, M. Hofer, F. Fraundorfer, and H. Bischof, “AUTOMATED INSPECTION OF POWER LINE CORRIDORS TO MEASURE VEGETATION UNDERCUT USING UAV-BASED IMAGES,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    Power line corridor inspection is a time consuming task that is performed mostly manually. As the development of UAVs has made huge progress in recent years, and photogrammetric computer vision systems have become well established, it is time to further automate inspection tasks. In this paper we present an automated processing pipeline to inspect vegetation undercuts of power line corridors. For this, the area of inspection is reconstructed, geo-referenced and semantically segmented, and inter-class distance measurements are calculated. The presented pipeline performs an automated selection of the proper 3D reconstruction method for wiry objects (power lines) on the one hand and solid objects (the surroundings) on the other. The automated selection is realised by performing pixel-wise semantic segmentation of the input images using a Fully Convolutional Neural Network. Due to the geo-referenced semantic 3D reconstructions, a documentation of areas where maintenance work has to be performed is inherently included in the distance measurements and can be extracted easily. We evaluate the influence of the semantic segmentation on the 3D reconstruction and show that the automated semantic separation into wiry and dense objects in the 3D reconstruction routine improves the quality of the vegetation undercut inspection. We show the generalization of the semantic segmentation to datasets acquired using different acquisition routines and in different seasons.

    @InProceedings{maurer2017uavg,
    Title = {AUTOMATED INSPECTION OF POWER LINE CORRIDORS TO MEASURE VEGETATION UNDERCUT USING UAV-BASED IMAGES},
    Author = {M. Maurer and M. Hofer and F. Fraundorfer and H. Bischof},
    Booktitle = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Power line corridor inspection is a time consuming task that is performed mostly manually. As the development of UAVs made huge progress in recent years, and photogrammetric computer vision systems became well established, it is time to further automate inspection tasks. In this paper we present a automated processing pipeline to inspect vegetation undercuts of power line corridors. For this, the area of inspection is reconstructed, geo-referenced, semantically segmented and inter class distance measurements are calculated. The presented pipeline performs an automated selection of the proper 3D reconstruction method for on the one hand wiry (power line), and on the other hand solid objects (surrounding). The automated selection is realised by performing pixel-wise semantic segmentation of the input images using a Fully Convolutional Neural Network. Due to the geo-referenced semantic 3D reconstructions a documentation of areas where maintenance work has to be performed is inherently included in the distance measurements and can be extracted easily. We evaluate the influence of the semantic segmentation according to the 3D reconstruction and show that the automated semantic separation in wiry and dense objects of the 3D reconstruction routine improve the quality of the vegetation undercut inspection. We show the generalization of the semantic segmentation to datasets acquired using different acquisition routines and to varied seasons in time.},
    }
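
    Once the semantic 3D reconstruction is available, the inter-class distance measurement amounts to a nearest-neighbour query between point classes. A minimal Python sketch (random stand-in clouds and an illustrative clearance threshold, not the paper's data):

    import numpy as np
    from scipy.spatial import cKDTree

    # Stand-ins for the georeferenced semantic point clouds [m].
    power_line_pts = np.random.rand(1000, 3) * [50, 50, 5] + [0, 0, 20]
    vegetation_pts = np.random.rand(5000, 3) * [50, 50, 15]

    tree = cKDTree(power_line_pts)
    dist, _ = tree.query(vegetation_pts)   # nearest power-line point per veg point

    CLEARANCE = 5.0                        # required clearance in metres (example)
    violations = vegetation_pts[dist < CLEARANCE]
    print(f"{len(violations)} vegetation points closer than {CLEARANCE} m")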

  • A. Milioto, P. Lottes, and C. Stachniss, “REAL-TIME BLOB-WISE SUGAR BEETS VS WEEDS CLASSIFICATION FOR MONITORING FIELDS USING CONVOLUTIONAL NEURAL NETWORKS,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    UAVs are becoming an important tool for field monitoring and precision farming. A prerequisite for observing and analyzing fields is the ability to identify crops and weeds from image data. In this paper, we address the problem of detecting the sugar beet plants and weeds in the field based solely on image data. We propose a system that combines vegetation detection and deep learning to obtain a high-quality classification of the vegetation in the field into value crops and weeds. We implemented and thoroughly evaluated our system on image data collected from different sugar beet fields and illustrate that our approach allows for accurately identifying the weeds in the field.

    @InProceedings{milioto2017uavg,
    Title = {REAL-TIME BLOB-WISE SUGAR BEETS VS WEEDS CLASSIFICATION FOR MONITORING FIELDS USING CONVOLUTIONAL NEURAL NETWORKS},
    Author = {A. Milioto and P. Lottes and C. Stachniss},
    Booktitle = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {UAVs are becoming an important tool for field monitoring and precision farming. A prerequisite for observing and analyzing fields is the ability to identify crops and weeds from image data. In this paper, we address the problem of detecting the sugar beet plants and weeds in the field based solely on image data. We propose a system that combines vegetation detection and deep learning to obtain a high-quality classification of the vegetation in the field into value crops and weeds. We implemented and thoroughly evaluated our system on image data collected from different sugar beet fields and illustrate that our approach allows for accurately identifying the weeds on the field.},
    }
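
    The two-stage structure (vegetation detection first, blob-wise CNN classification second) can be sketched compactly. A hedged Python sketch using the Excess Green index, a common vegetation cue that is not necessarily the authors' choice, with the CNN stubbed out:

    import numpy as np
    from scipy import ndimage

    def vegetation_mask(rgb):                 # rgb: HxWx3 float in [0, 1]
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        exg = 2 * g - r - b                   # Excess Green index
        return exg > 0.1                      # illustrative threshold

    def classify_blob(patch):                 # placeholder for the CNN
        return "crop" if patch.mean() > 0.5 else "weed"

    rgb = np.random.rand(480, 640, 3)          # stand-in for a field image
    labels, n = ndimage.label(vegetation_mask(rgb))
    for blob_id in range(1, n + 1):            # one classification per blob
        ys, xs = np.where(labels == blob_id)
        patch = rgb[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        print(blob_id, classify_blob(patch))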

  • M. S. Mueller, S. Urban, and B. Jutzi, “SQUEEZEPOSENET: IMAGE BASED POSE REGRESSION WITH SMALL CONVOLUTIONAL NEURAL NETWORKS FOR REAL TIME UAS NAVIGATION,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    The number of unmanned aerial vehicles (UAVs) is increasing since low-cost airborne systems are available for a wide range of users. The outdoor navigation of such vehicles is mostly based on global navigation satellite system (GNSS) methods to obtain the vehicle's trajectory. The drawbacks of satellite-based navigation are failures caused by occlusions and multi-path interferences. Besides this, local image-based solutions like Simultaneous Localization and Mapping (SLAM) and Visual Odometry (VO) can, for example, be used to support the GNSS solution by closing trajectory gaps, but are computationally expensive. However, if the trajectory estimation is interrupted or not available, a re-localization is mandatory. In this paper we provide a novel method for a GNSS-free and fast image-based pose regression in a known area by utilizing a small convolutional neural network (CNN). With on-board processing in mind, we employ a lightweight CNN called SqueezeNet and use transfer learning to adapt the network to pose regression. Our experiments show promising results for GNSS-free and fast localization.

    @InProceedings{mueller2017uavg,
    Title = {SQUEEZEPOSENET: IMAGE BASED POSE REGRESSION WITH SMALL CONVOLUTIONAL NEURAL NETWORKS FOR REAL TIME UAS NAVIGATION},
    Author = {M.S. Mueller and S. Urban and B. Jutzi},
    Booktitle = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {The number of unmanned aerial vehicles (UAVs) is increasing since low-cost airborne systems are available for a wide range of users. The outdoor navigation of such vehicles is mostly based on global navigation satellite system (GNSS) methods to gain the vehicles trajectory. The drawback of satellite-based navigation are failures caused by occlusions and multi-path interferences. Beside this, local image-based solutions like Simultaneous Localization and Mapping (SLAM) and Visual Odometry (VO) can e.g. be used to support the GNSS solution by closing trajectory gaps but are computationally expensive. However, if the trajectory estimation is interrupted or not available a re-localization is mandatory. In this paper we will provide a novel method for a GNSS-free and fast image-based pose regression in a known area by utilizing a small convolutional neural network (CNN). With on-board processing in mind, we employ a lightweight CNN called SqueezeNet and use transfer learning to adapt the network to pose regression. Our experiments show promising results for GNSS-free and fast localization.},
    }
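
    Adapting SqueezeNet to pose regression essentially means keeping the feature extractor and replacing the classification head with a 7-DoF regression head (translation plus quaternion), in the spirit of PoseNet-style approaches. A hedged PyTorch sketch (head design and dimensions are illustrative, not the paper's exact architecture):

    import torch
    import torch.nn as nn
    from torchvision import models

    # weights=None keeps the sketch offline; load ImageNet weights in practice.
    backbone = models.squeezenet1_1(weights=None).features

    class SqueezePose(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = backbone               # pretrained feature extractor
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.regress = nn.Linear(512, 7)       # x, y, z + quaternion

        def forward(self, x):
            x = self.pool(self.features(x)).flatten(1)
            p = self.regress(x)
            t, q = p[:, :3], p[:, 3:]
            return t, q / q.norm(dim=1, keepdim=True)  # unit quaternion

    t, q = SqueezePose()(torch.randn(1, 3, 224, 224))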

  • E. Palazzolo and C. Stachniss, “INFORMATION-DRIVEN AUTONOMOUS EXPLORATION FOR A VISION-BASED MAV,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    Most micro aerial vehicles (MAVs) are flown manually by a pilot. When it comes to autonomous exploration for MAVs equipped with cameras, we need a good exploration strategy for covering an unknown 3D environment in order to build an accurate map of the scene. In particular, the robot must select appropriate viewpoints to acquire informative measurements. In this paper, we present an approach that computes, in real time, a smooth flight path for the exploration of a 3D environment using a vision-based MAV. We assume a known bounding box of the object or building to explore, and our approach iteratively computes the next best viewpoints using a utility function that considers the expected information gain of new measurements, the distance between viewpoints, and the smoothness of the flight trajectories. In addition, the algorithm takes into account the elapsed time of the exploration run to safely land the MAV at its starting point after a user-specified time. We implemented our algorithm and our experiments suggest that it allows for a precise reconstruction of the 3D environment while guiding the robot smoothly through the scene.

    @InProceedings{palazzolo2017uavg,
    Title = {INFORMATION-DRIVEN AUTONOMOUS EXPLORATION FOR A VISION-BASED MAV},
    Author = {E. Palazzolo and C. Stachniss},
    Booktitle = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Most micro aerial vehicles (MAV) are flown manually by a pilot. When it comes to autonomous exploration for MAVs equipped with cameras, we need a good exploration strategy for covering an unknown 3D environment in order to build an accurate map of the scene. In particular, the robot must select appropriate viewpoints to acquire informative measurements. In this paper, we present an approach that computes in real-time a smooth flight path with the exploration of a 3D environment using a vision-based MAV. We assume to know a bounding box of the object or building to explore and our approach iteratively computes the next best viewpoints using a utility function that considers the expected information gain of new measurements, the distance between viewpoints, and the smoothness of the flight trajectories. In addition, the algorithm takes into account the elapsed time of the exploration run to safely land the MAV at its starting point after a user specified time. We implemented our algorithm and our experiments suggest that it allows for a precise reconstruction of the 3D environment while guiding the robot smoothly through the scene.},
    }
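
    The viewpoint selection described above can be condensed into a scoring rule: reward expected information gain, penalize travel distance and trajectory roughness, and skip viewpoints that would break the time budget for a safe return. A minimal Python sketch (weights, gain model and candidate data are illustrative, not the paper's):

    def utility(v, alpha=1.0, beta=0.5, gamma=0.3):
        # Gain rewarded; distance and roughness penalized.
        return alpha * v["gain"] - beta * v["dist"] - gamma * v["smoothness"]

    def next_best_view(candidates, time_left, speed):
        # Keep only viewpoints from which a timely return home is possible.
        feasible = [v for v in candidates
                    if (v["dist"] + v["home_dist"]) / speed <= time_left]
        return max(feasible, key=utility, default=None)

    cands = [{"gain": 8.0, "dist": 4.0, "smoothness": 0.2, "home_dist": 10.0},
             {"gain": 9.5, "dist": 12.0, "smoothness": 1.0, "home_dist": 18.0}]
    print(next_best_view(cands, time_left=30.0, speed=1.5))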

  • C. Pinard, L. Chevalley, A. Manzanera, and D. Filliat, “END-TO-END DEPTH FROM MOTION WITH STABILIZED MONOCULAR VIDEOS,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    We propose a depth map inference system from monocular videos based on a novel dataset for navigation that mimics aerial footage from a gimbal-stabilized monocular camera in rigid scenes. Unlike most navigation datasets, the lack of rotation implies an easier structure from motion problem, which can be leveraged for different kinds of tasks such as depth inference and obstacle avoidance. We also propose an architecture for end-to-end depth inference with a fully convolutional network. Results show that although tied to camera inner parameters, the problem is locally solvable.

    @InProceedings{pinard2017uavg,
    Title = {END-TO-END DEPTH FROM MOTION WITH STABILIZED MONOCULAR VIDEOS},
    Author = {C. Pinard and L. Chevalley and A. Manzanera and D. Filliat},
    Booktitle = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {We propose a depth map inference system from monocular videos based on a novel dataset for navigation that mimics aerial footage from gimbal stabilized monocular camera in rigid scenes. Unlike most navigation datasets, the lack of rotation implies an easier structure from motion problem which can be leveraged for different kinds of tasks such as depth inference and obstacle avoidance. We also propose an architecture for end-to-end depth inference with a fully convolutional network. Results show that although tied to camera inner parameters, the problem is locally solvable},
    }
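
    Why stabilization helps: with zero rotation and a translation parallel to the image plane, the optical-flow magnitude of a point is inversely proportional to its depth, Z = f * ||t|| / ||flow||. This is the geometric core the network learns end-to-end. A minimal Python sketch of that simplified relation (stand-in flow field; not the paper's network):

    import numpy as np

    def depth_from_flow(flow, f, t_norm):
        # Valid for a rotation-free camera translating parallel to the image
        # plane; depth comes out in the units of the translation norm.
        mag = np.linalg.norm(flow, axis=-1)   # per-pixel flow magnitude [px]
        return f * t_norm / mag

    flow = np.random.rand(240, 320, 2) * 5 + 0.5  # stand-in flow field
    Z = depth_from_flow(flow, f=500.0, t_norm=0.3)
    print(Z.min(), Z.max())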

  • M. Rehak and J. Skaloud, “PERFORMANCE ASSESSMENT OF INTEGRATED SENSOR ORIENTATION WITH A LOW-COST GNSS RECEIVER,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    Mapping with Micro Aerial Vehicles (MAVs, whose weight does not exceed 5 kg) is gaining importance in applications such as corridor mapping, road and pipeline inspections, or mapping of large areas with homogeneous surface structure, e.g. forest or agricultural fields. In these challenging scenarios, integrated sensor orientation (ISO) improves effectiveness and accuracy. Furthermore, in block geometry configurations, this mode of operation allows mapping without ground control points (GCPs). Accurate camera positions are traditionally determined by carrier-phase GNSS (Global Navigation Satellite System) positioning. However, such a mode of positioning places strong requirements on receiver and antenna performance. In this article, we present a mapping project in which we employ a single-frequency, low-cost (< $100) GNSS receiver on a MAV. The performance of the low-cost receiver is assessed by comparing its trajectory with a reference trajectory obtained by a survey-grade, multi-frequency GNSS receiver. In addition, the camera positions derived from these two trajectories are used as observations in bundle adjustment (BA) projects and mapping accuracy is evaluated at check points (ChP). Several BA scenarios are considered with absolute and relative aerial position control. Additionally, the presented experiments show the possibility of BA to determine a camera-antenna spatial offset, the so-called lever-arm.

    @InProceedings{rehak2017uavg,
    Title = {PERFORMANCE ASSESSMENT OF INTEGRATED SENSOR ORIENTATION WITH A LOW-COST GNSS RECEIVER},
    Author = {M. Rehak and J. Skaloud},
    Booktitle = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Mapping with Micro Aerial Vehicles (MAVs whose weight does not exceed 5 kg) is gaining importance in applications such as corridor mapping, road and pipeline inspections, or mapping of large areas with homogeneous surface structure, e.g. forest or agricultural fields. In these challenging scenarios, integrated sensor orientation (ISO) improves effectiveness and accuracy. Furthermore, in block geometry configurations, this mode of operation allows mapping without ground control points (GCPs). Accurate camera positions are traditionally determined by carrier-phase GNSS (Global Navigation Satellite System) positioning. However, such mode of positioning has strong requirements on receiver's and antenna's performance. In this article, we present a mapping project in which we employ a single-frequency, low-cost (< $100) GNSS receiver on a MAV. The performance of the low-cost receiver is assessed by comparing its trajectory with a reference trajectory obtained by a survey-grade, multi-frequency GNSS receiver. In addition, the camera positions derived from these two trajectories are used as observations in bundle adjustment (BA) projects and mapping accuracy is evaluated at check points (ChP). Several BA scenarios are considered with absolute and relative aerial position control. Additionally, the presented experiments show the possibility of BA to determine a camera-antenna spatial offset, so-called lever-arm.},
    }
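
    The lever-arm enters the bundle adjustment as a simple observation equation: the GNSS antenna position equals the camera position plus the body-frame offset rotated into the mapping frame, which makes the offset estimable. A minimal Python sketch (all numbers illustrative):

    import numpy as np

    def antenna_position(cam_pos, R_body_to_map, lever_arm):
        # GNSS observation model: antenna = camera + rotated lever-arm.
        return cam_pos + R_body_to_map @ lever_arm

    cam_pos = np.array([400100.0, 5200050.0, 350.0])  # toy mapping-frame position
    R = np.eye(3)                                     # attitude from IMU/BA
    lever_arm = np.array([0.02, -0.05, 0.30])         # camera -> antenna offset [m]
    print(antenna_position(cam_pos, R, lever_arm))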

  • J. Schneider, C. Stachniss, and W. Foerstner, “ON THE QUALITY AND EFFICIENCY OF APPROXIMATE SOLUTIONS TO BUNDLE ADJUSTMENT WITH EPIPOLAR AND TRIFOCAL CONSTRAINTS,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    Bundle adjustment is a central part of most visual SLAM and Structure from Motion systems and thus a relevant component of UAVs equipped with cameras. This paper makes two contributions to bundle adjustment. First, we present a novel approach which exploits trifocal constraints, i.e., constraints resulting from corresponding points observed in three camera images, which allows estimating the camera pose parameters without 3D point estimation. Second, we analyze the quality loss compared to the optimal bundle adjustment solution when applying different types of approximations to the constrained optimization problem to increase efficiency. We implemented and thoroughly evaluated our approach using a UAV performing mapping tasks in outdoor environments. Our results indicate that the complexity of the constrained bundle adjustment can be decreased without losing too much accuracy.

    @InProceedings{schneider2017uavg,
    Title = {ON THE QUALITY AND EFFICIENCY OF APPROXIMATE SOLUTIONS TO BUNDLE ADJUSTMENT WITH EPIPOLAR AND TRIFOCAL CONSTRAINTS},
    Author = {J. Schneider and C. Stachniss and W. Foerstner},
    Booktitle = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Bundle adjustment is a central part of most visual SLAM and Structure from Motion systems and thus a relevant component of UAVs equipped with cameras. This paper makes two contributions to bundle adjustment. First, we present a novel approach which exploits trifocal constraints, i.e., constraints resulting from corresponding points observed in three camera images, which allows to estimate the camera pose parameters without 3D point estimation. Second, we analyze the quality loss compared to the optimal bundle adjustment solution when applying different types of approximations to the constrained optimization problem to increase efficiency. We implemented and thoroughly evaluated our approach using a UAV performing mapping tasks in outdoor environments. Our results indicate that the complexity of the constraint bundle adjustment can be decreased without loosing too much accuracy.},
    }
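
    The point-free idea rests on coplanarity constraints: for calibrated rays x1, x2 and relative pose (R, t), the epipolar residual x2^T [t]x R x1 vanishes for a correct pose, so no 3D point coordinates need to be carried in the adjustment; trifocal constraints extend this to point triplets. A minimal Python sketch of the epipolar residual (toy coordinates):

    import numpy as np

    def skew(t):
        return np.array([[0, -t[2], t[1]],
                         [t[2], 0, -t[0]],
                         [-t[1], t[0], 0]])

    def epipolar_residual(x1, x2, R, t):
        E = skew(t) @ R                    # essential matrix
        return x2 @ E @ x1                 # coplanarity residual

    x1 = np.array([0.01, 0.02, 1.0])        # normalized image coordinates
    x2 = np.array([0.03, 0.02, 1.0])
    # Pure x-translation: the residual is zero for this consistent pair.
    print(epipolar_residual(x1, x2, np.eye(3), np.array([1.0, 0.0, 0.0])))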

  • P. Silva Filho, E. H. Shiguemori, and O. Saotome, “UAV VISUAL AUTOLOCALIZATON BASED ON AUTOMATIC LANDMARK RECOGNITION,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    Deploying an autonomous unmanned aerial vehicle in GPS-denied areas is a highly discussed problem in the scientific community. There are several approaches being developed, but the main strategies considered so far are computer vision based navigation systems. This work presents a new real-time computer-vision position estimator for UAV navigation. The estimator uses images captured during flight to recognize specific, well-known landmarks in order to estimate the latitude and longitude of the aircraft. The method was tested in a simulated environment, using a dataset of real aerial images obtained in previous flights, with synchronized images, GPS and IMU data. The estimated position in each landmark recognition was compatible with the GPS data, indicating that the developed method can be used as an alternative navigation system.

    @InProceedings{filho2017uavg,
    Title = {UAV VISUAL AUTOLOCALIZATON BASED ON AUTOMATIC LANDMARK RECOGNITION},
    Author = {P. {Silva Filho} and E.H. Shiguemori and O. Saotome},
    Booktitle = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Deploying an autonomous unmanned aerial vehicle in GPS-denied areas is a highly discussed problem in the scientific community. There are several approaches being developed, but the main strategies yet considered are computer vision based navigation systems. This work presents a new real-time computer-vision position estimator for UAV navigation. The estimator uses images captured during flight to recognize specific, well-known, landmarks in order to estimate the latitude and longitude of the aircraft. The method was tested in a simulated environment, using a dataset of real aerial images obtained in previous flights, with synchronized images, GPS and IMU data. The estimated position in each landmark recognition was compatible with the GPS data, stating that the developed method can be used as an alternative navigation system.},
    }
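
    One simple way to realize such a landmark-based fix: locate a known, georeferenced landmark in the aerial frame by template matching, then convert the pixel offset from the image centre into a ground offset via the ground sampling distance. A hedged Python sketch with OpenCV (assumes nadir-looking, north-up imagery; file names, GSD and landmark coordinates are placeholders, not the authors' pipeline):

    import cv2
    import numpy as np

    frame = cv2.imread("aerial_frame.png", cv2.IMREAD_GRAYSCALE)
    landmark = cv2.imread("landmark_template.png", cv2.IMREAD_GRAYSCALE)

    scores = cv2.matchTemplate(frame, landmark, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(scores)
    if score > 0.8:                                  # illustrative threshold
        u = top_left[0] + landmark.shape[1] / 2      # landmark centre [px]
        v = top_left[1] + landmark.shape[0] / 2
        gsd = 0.10                                   # metres per pixel (example)
        east = (u - frame.shape[1] / 2) * gsd        # landmark east of nadir
        north = (frame.shape[0] / 2 - v) * gsd       # landmark north of nadir
        landmark_en = np.array([500000.0, 4500000.0])  # known landmark position
        aircraft_en = landmark_en - np.array([east, north])
        print(aircraft_en)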

  • F. Zimmermann, C. Eling, L. Klingbeil, and H. Kuhlmann, “PRECISE POSITIONING OF UAVS – DEALING WITH CHALLENGING RTK-GPS MEASUREMENT CONDITIONS DURING AUTOMATED UAV FLIGHTS,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    For some years now, UAVs (unmanned aerial vehicles) have been commonly used for different mobile mapping applications, such as in the fields of surveying, mining or archeology. To improve the efficiency of these applications, automation of both the flight and the processing of the collected data is currently being pursued. One precondition for automated mapping with UAVs is that the georeferencing is performed directly with cm-accuracies or better. Usually, a cm-accurate direct positioning of UAVs is based on an onboard multi-sensor system, which consists of an RTK-capable (real-time kinematic) GPS (global positioning system) receiver and additional sensors (e.g. inertial sensors). In this case, the absolute positioning accuracy essentially depends on the local GPS measurement conditions. Especially during mobile mapping applications in urban areas, these conditions can be very challenging, due to satellite shadowing, non-line-of-sight reception, signal diffraction or multipath effects. In this paper, two straightforward and easy-to-implement strategies will be described and analyzed, which improve the direct positioning accuracies for UAV-based mapping and surveying applications under challenging GPS measurement conditions. Based on a 3D model of the surrounding buildings and vegetation in the area of interest, a GPS geometry map is determined, which can be integrated in the flight planning process to avoid GPS-challenging environments as far as possible. If these challenging environments cannot be avoided, the GPS positioning solution is improved by using obstruction-adaptive elevation masks to mitigate systematic GPS errors in the RTK-GPS positioning. Simulations and results of field tests demonstrate the benefit of both strategies.

    @InProceedings{zimmermann2017uavg,
    Title = {PRECISE POSITIONING OF UAVS - DEALING WITH CHALLENGING RTK-GPS MEASUREMENT CONDITIONS DURING AUTOMATED UAV FLIGHTS},
    Author = {F. Zimmermann and C. Eling and L. Klingbeil and H. Kuhlmann},
    Booktitle = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {For some years now, UAVs (unmanned aerial vehicles) are commonly used for different mobile mapping applications, such as in the fields of surveying, mining or archeology. To improve the efficiency of these applications an automation of the flight as well as the processing of the collected data is currently aimed at. One precondition for an automated mapping with UAVs is that the georeferencing is performed directly with cm-accuracies or better. Usually, a cm-accurate direct positioning of UAVs is based on an onboard multi-sensor system, which consists of an RTK-capable (real-time kinematic) GPS (global positioning system) receiver and additional sensors (e.g. inertial sensors). In this case, the absolute positioning accuracy essentially depends on the local GPS measurement conditions. Especially during mobile mapping applications in urban areas, these conditions can be very challenging, due to a satellite shadowing, non-line-of sight receptions, signal diffraction or multipath effects. In this paper, two straightforward and easy to implement strategies will be described and analyzed, which improve the direct positioning accuracies for UAV-based mapping and surveying applications under challenging GPS measurement conditions. Based on a 3D model of the surrounding buildings and vegetation in the area of interest, a GPS geometry map is determined, which can be integrated in the flight planning process, to avoid GPS challenging environments as far as possible. If these challenging environments cannot be avoided, the GPS positioning solution is improved by using obstruction adaptive elevation masks, to mitigate systematic GPS errors in the RTK-GPS positioning. Simulations and results of field tests demonstrate the profit of both strategies.},
    }
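
    An obstruction-adaptive elevation mask replaces the single fixed cut-off angle with a per-azimuth horizon derived from the 3D building/vegetation model: a satellite is used only if it stands clearly above the local horizon at its azimuth. A minimal Python sketch (the horizon profile and margin are placeholders, not values from the paper):

    import numpy as np

    # Local horizon elevation, one value per degree of azimuth, from a 3D model.
    horizon = np.full(360, 10.0)     # open sky: 10 deg obstruction everywhere
    horizon[80:130] = 45.0           # a building blocks part of the eastern sky

    def usable(sat_azimuth_deg, sat_elevation_deg, margin=5.0):
        # Keep the satellite only if it clears the local horizon plus a margin.
        local = horizon[int(sat_azimuth_deg) % 360]
        return sat_elevation_deg > local + margin

    sats = [(95.0, 38.0), (95.0, 60.0), (250.0, 12.0)]  # (azimuth, elevation)
    print([usable(az, el) for az, el in sats])          # -> [False, True, False]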

Papers for the ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences

  • H. Aasen, “FIRST RESULTS OF THE SURVEY ON STATE-OF-THE-ART IN UAV SPECTRAL SAMPLING,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    UAVs are increasingly adopted as remote sensing platforms. Together with specialized sensors, they become powerful sensing systems for environmental monitoring and surveying. Spectral data offers great capabilities for gathering information about biophysical and biochemical properties. Still, capturing meaningful spectral data in a reproducible way is not trivial. In the last couple of years, small and lightweight spectral sensors, which can be carried on small flexible platforms, have become available. With their adoption in the community, the responsibility to ensure the quality of the data is increasingly shifted from specialized companies and agencies to individual researchers or research teams. Due to the complexity of the acquisition of spectral data, this poses a challenge for the community, and standardized protocols, metadata and best practice procedures are needed to make data intercomparable. In November 2016, the ESSEM COST action Innovative optical Tools for proximal sensing of ecophysiological processes (OPTIMISE, http://optimise.dcs.aber.ac.uk/) held a workshop on best practices for UAV spectral sampling. The objective of this meeting was to trace the way from particle to pixel and identify influences on the data quality/reliability, to figure out how well we are currently doing with spectral sampling from UAVs and how we can improve. Additionally, a survey was designed to be distributed within the community to get an overview of the current practices and raise awareness for the topic. This talk will introduce the approach of the OPTIMISE community towards best practices in UAV spectral sampling and present first results of the survey (http://optimise.dcs.aber.ac.uk/uav-survey/). This contribution briefly introduces the survey and gives some insights into the first results given by the interviewees.

    @InProceedings{aasen2017uavg-104,
    Title = {FIRST RESULTS OF THE SURVEY ON STATE-OF-THE-ART IN UAV SPECTRAL SAMPLING},
    Author = {H. Aasen},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {UAVs are increasingly adapted as remote sensing platforms. Together with specialized sensors, they become powerful sensing systems for environmental monitoring and surveying. Spectral data has great capabilities to the gather information about biophysical and biochemical properties. Still, capturing meaningful spectral data in a reproducible way is not trivial. Since a couple of years small and lightweight spectral sensors, which can be carried on small flexible platforms, have become available. With their adaption in the community, the responsibility to ensure the quality of the data is increasingly shifted from specialized companies and agencies to individual researchers or research teams. Due to the complexity of the data acquisition of spectral data, this poses a challenge for the community and standardized protocols, metadata and best practice procedures are needed to make data intercomparable. In November 2016, the ESSEM COST action Innovative optical Tools for proximal sensing of ecophysiological processes (OPTIMISE, http://optimise.dcs.aber.ac.uk/) held a workshop on best practices for UAV spectral sampling. The objective of this meeting was to trace the way from particle to pixel and identify influences on the data quality / reliability, to figure out how well we are currently doing with spectral sampling from UAVs and how we can improve. Additionally, a survey was designed to be distributed within the community to get an overview over the current practices and raise awareness for the topic. This talk will introduce the approach of the OPTIMISE community towards best practises in UAV spectral sampling and present first results of the survey (http://optimise.dcs.aber.ac.uk/uav-survey/). This contribution briefly introduces the survey and gives some insights into the first results given by the interviewees.},
    }

  • M. Almeida, H. Hildmann, and G. Solmaz, “DISTRIBUTED UAV-SWARM-BASED REAL-TIME GEOMATIC DATA COLLECTION UNDER DYNAMICALLY CHANGING RESOLUTION REQUIREMENTS,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    Unmanned Aerial Vehicles (UAVs) have been used for reconnaissance and surveillance missions as far back as the Vietnam War, but with the recent rapid increase in autonomy, precision and performance capabilities – and due to the massive reduction in cost and size – UAVs have become pervasive products, available and affordable for the general public. The use cases for UAVs are in the areas of disaster recovery, environmental mapping & protection and increasingly also as extended eyes and ears of civil security forces such as fire-fighters and emergency response units. In this paper we present a swarm algorithm that enables a fleet of autonomous UAVs to collectively perform sensing tasks related to environmental and rescue operations and to dynamically adapt to e.g. changing resolution requirements. We discuss the hardware used to build our own drones and the settings under which we validate the proposed approach.

    @InProceedings{almeida2017uavg-13,
    Title = {DISTRIBUTED UAV-SWARM-BASED REAL-TIME GEOMATIC DATA COLLECTION UNDER DYNAMICALLY CHANGING RESOLUTION REQUIREMENTS},
    Author = {M. Almeida and H. Hildmann and G. Solmaz},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Unmanned Aerial Vehicles (UAVs) have been used for reconnaissance and surveillance missions as far back as the Vietnam War, but with the recent rapid increase in autonomy, precision and performance capabilities - and due to the massive reduction in cost and size - UAVs have become pervasive products, available and affordable for the general public. The use cases for UAVs are in the areas of disaster recovery, environmental mapping & protection and increasingly also as extended eyes and ears of civil security forces such as fire-fighters and emergency response units. In this paper we present a swarm algorithm that enables a fleet of autonomous UAVs to collectively perform sensing tasks related to environmental and rescue operations and to dynamically adapt to e.g. changing resolution requirements. We discuss the hardware used to build our own drones and the settings under which we validate the proposed approach.},
    }

  • A. Audi, M. Pierrot-Deseilligny, C. Meynard, and C. Thom, “IMPLEMENTATION OF A REAL-TIME STACKING ALGORITHM IN A PHOTOGRAMMETRIC DIGITAL CAMERA FOR UAVS,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (like cloudy sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by the erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality, with an equivalent long exposure time, using several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the light-weight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for the resampling of images, the presented method accurately estimates the geometrical relation between the first and the N-th image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points on other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm such as feature detection and image resampling in order to achieve real-time performance, as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing time, resulting images, as well as block diagrams of the described architecture. The resulting stacked image obtained on real surveys does not appear visually impaired. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the writing time of an image to the storage device. An interesting by-product of this algorithm is the 3D rotation between poses estimated by a photogrammetric method, which can be used to recalibrate the gyrometers of the IMU in real time.

    @InProceedings{audi2017uavg-53,
    Title = {IMPLEMENTATION OF A REAL-TIME STACKING ALGORITHM IN A PHOTOGRAMMETRIC DIGITAL CAMERA FOR UAVS},
    Author = {A. Audi and M. Pierrot-Deseilligny and C. Meynard and C. Thom},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {In the recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (like cloudy sky surveys, narrow-spectral imagery and night-vision imagery) need a long-exposure time where one of the main problems is the motion blur caused by the erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a high photogrammetric quality final composite image with an equivalent long-exposure time using several images acquired with short-exposure times. Our method is inspired by feature-based image registration technique. The algorithm is implemented on the light-weight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for the resampling of images, the presented method accurately estimates the geometrical relation between the first and the N th image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, than homologous points on other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm such as features detection and images resampling in order to achieve a real-time performance as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images, as well as block diagrams of the described architecture. The resulting stacked image obtained on real surveys doesnt seem visually impaired. Timing results demonstrate that our algorithm can be used in real-time since its processing time is less than the writing time of an image in the storage device. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real-time the gyrometers of the IMU.},
    }
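
    The stacking pipeline (detect features in the first short-exposure frame, estimate the frame-to-frame geometry, resample each frame onto the first, accumulate) can be sketched with OpenCV. A hedged Python sketch that uses optical flow in place of the paper's IMU-aided template matching, with placeholder file names:

    import cv2
    import numpy as np

    frames = [cv2.imread(f"frame_{i}.png", cv2.IMREAD_GRAYSCALE) for i in range(8)]

    fast = cv2.FastFeatureDetector_create(threshold=25)
    pts0 = cv2.KeyPoint_convert(fast.detect(frames[0])).reshape(-1, 1, 2)

    stack = frames[0].astype(np.float64)
    for frame in frames[1:]:
        # Track the first frame's features into the current frame.
        pts, status, _ = cv2.calcOpticalFlowPyrLK(frames[0], frame, pts0, None)
        ok = status.ravel() == 1
        # Homography mapping the current frame back onto the first one.
        H, _ = cv2.findHomography(pts[ok], pts0[ok], cv2.RANSAC)
        stack += cv2.warpPerspective(frame, H, frame.shape[::-1]).astype(np.float64)

    result = (stack / len(frames)).astype(np.uint8)   # equivalent long exposure
    cv2.imwrite("stacked.png", result)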

  • S. Badrloo and M. Varshosaz, “VISION BASED OBSTACLE DETECTION IN UAV IMAGING,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    Detecting and preventing collisions with obstacles is crucial in UAV navigation and control. Most of the common obstacle detection techniques are currently sensor-based. Small UAVs are not able to carry obstacle detection sensors such as radar; therefore, vision-based methods are considered, which can be divided into stereo-based and mono-based techniques. Mono-based methods are classified into two groups: foreground-background separation, and brain-inspired methods. Brain-inspired methods are highly efficient in obstacle detection; hence, this research aims to detect obstacles using brain-inspired techniques, which exploit the fact that an obstacle appears to enlarge as it is approached. Recent research in this field has concentrated on matching SIFT points, along with the SIFT size-ratio factor and the area-ratio of convex hulls, in two consecutive frames to detect obstacles. This method is not able to distinguish between near and far obstacles or obstacles in complex environments, and is sensitive to wrongly matched points. In order to solve the above mentioned problems, this research calculates the dist-ratio of matched points. Then, each point is investigated to distinguish between far and close obstacles. The results demonstrated the high efficiency of the proposed method in complex environments.

    @InProceedings{badrloo2017uavg-73,
    Title = {VISION BASED OBSTACLE DETECTION IN UAV IMAGING},
    Author = {S. Badrloo and M. Varshosaz},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Detecting and preventing incidence with obstacles is crucial in UAV navigation and control. Most of the common obstacle detection techniques are currently sensor-based. Small UAVs are not able to carry obstacle detection sensors such as radar, therefore, vision-based methods are considered, which can be divided into stereo-based and mono-based techniques. Mono-based methods are classified into two groups: Foreground-background separation, and brain-inspired methods. Brain-inspired methods are highly efficient in obstacle detection, hence, this research aims to detect obstacles using brain-inspired techniques, which try to enlarge the obstacle by approaching it. A recent research in this field, has concentrated on matching the SIFT points along with, SIFT size-ratio factor and area-ratio of convex hulls in two consecutive frames to detect obstacles. This method is not able to distinguish between near and far obstacles or the obstacles in complex environment, and is sensitive to wrong matched points. In order to solve the above mentioned problems, this research calculates the dist-ratio of matched points. Then, each and every point is investigated for Distinguishing between far and close obstacles. The results demonstrated the high efficiency of the proposed method in complex environments.},
    }
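
    The expansion cue behind the dist-ratio can be made concrete: for keypoints matched across two frames, the ratio of pairwise point distances grows as the obstacle approaches, and points with large ratios are flagged as near obstacles. A minimal Python sketch (toy matched coordinates and an illustrative threshold, not the paper's exact criterion):

    import numpy as np

    prev_pts = np.array([[100, 100], [140, 100], [120, 150]], float)
    curr_pts = prev_pts * 1.25            # the object appears 25% larger

    def dist_ratios(prev_pts, curr_pts):
        # Ratio of every pairwise distance between the two frames.
        ratios = []
        n = len(prev_pts)
        for i in range(n):
            for j in range(i + 1, n):
                d_prev = np.linalg.norm(prev_pts[i] - prev_pts[j])
                d_curr = np.linalg.norm(curr_pts[i] - curr_pts[j])
                ratios.append(d_curr / d_prev)
        return np.array(ratios)

    ratios = dist_ratios(prev_pts, curr_pts)
    print("approaching obstacle:", ratios.mean() > 1.1)  # illustrative threshold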

  • M. Bagheridaneshvar, “SCALE INVARIANT FEATURE TRANSFORM PLUS HUE FEATURE,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    This paper presents an enhanced method for extracting invariant features from images based on the Scale Invariant Feature Transform (SIFT). Although SIFT features are invariant to image scale and rotation, additive noise, and changes in illumination, we believe the algorithm suffers from excess keypoints. Besides, by adding the hue feature, which is extracted from the combination of hue and illumination values in the HSI colour space version of the target image, the proposed algorithm can speed up the matching phase. Therefore, we propose the Scale Invariant Feature Transform plus Hue (SIFTH), which removes the excess keypoints based on their Euclidean distances and adds hue to the feature vector to speed up the matching process, which is the aim of feature extraction. In this paper we use the difference of hue features and the Mean Square Error of orientation histograms to find the keypoint most similar to the keypoint under processing. The keypoint matching method can identify the correct keypoint among clutter and occlusion robustly while achieving real-time performance, and it yields a similarity factor for two keypoints. Moreover, removing excess keypoints with the SIFTH algorithm helps the matching algorithm to achieve this goal.

    @InProceedings{bagheridaneshvar2017uavg-35,
    Title = {SCALE INVARIANT FEATURE TRANSFORM PLUS HUE FEATURE},
    Author = {M. Bagheridaneshvar},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {This paper presents an enhanced method for extracting invariant features from images based on Scale Invariant Feature Transform (SIFT). Although SIFT features are invariant to image scale and rotation, additive noise, and changes in illumination but we think this algorithm suffers from excess keypoints. Besides, by adding the hue feature, which is extracted from combination of hue and illumination values in HSI colour space version of the target image, the proposed algorithm can speed up the matching phase. Therefore, we proposed the Scale Invariant Feature Transform plus Hue (SIFTH) that can remove the excess keypoints based on their Euclidean distances and adding hue to feature vector to speed up the matching process which is the aim of feature extraction. In this paper we use the difference of hue features and the Mean Square Error of orientation histograms to find the most similar keypoint to the under processing keypoint. The keypoint matching method can identify correct keypoint among clutter and occlusion robustly while achieving real-time performance and it will result a similarity factor of two keypoints. Moreover removing excess keypoint by SIFTH algorithm helps the matching algorithm to achieve this goal.},
    }
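
    The two SIFTH ingredients can be sketched directly: thin out keypoints that lie closer together than a Euclidean distance threshold, and append a hue value sampled from the HSV image to each SIFT descriptor so that matching can reject colour-inconsistent candidates early. A hedged OpenCV/Python sketch (thresholds and the greedy thinning rule are illustrative, not the paper's exact method):

    import cv2
    import numpy as np

    img = cv2.imread("target.png")                   # hypothetical input image
    hue = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[..., 0].astype(np.float32)

    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), None)

    keep, kept_xy = [], []
    for i, kp in enumerate(kps):                     # greedy distance-based thinning
        xy = np.array(kp.pt)
        if all(np.linalg.norm(xy - q) > 5.0 for q in kept_xy):
            keep.append(i)
            kept_xy.append(xy)

    desc = desc[keep]
    hues = np.array([[hue[int(kps[i].pt[1]), int(kps[i].pt[0])]] for i in keep])
    desc_h = np.hstack([desc, hues])                 # 128-D SIFT + 1-D hue
    print(desc_h.shape)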

  • S. Batzdorfer, M. Bobbe, M. Becker, H. Harms, and U. Bestmann, “MULTISENSOR EQUIPPED UAV/UGV FOR AUTOMATED EXPLORATION,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017.
    [Abstract] [BibTeX] [PDF]
    The usage of unmanned systems for exploring disaster scenarios has become more and more important in recent times as a supporting system for action forces. These systems have to offer a well-balanced relationship between the quality of support and additional workload. Therefore, within the joint research project ANKommEn (a German acronym for Automated Navigation and Communication for Exploration), a system for the exploration of disaster scenarios is built up using multiple UAVs and UGVs controlled via a central ground station. The ground station serves as a user interface for defining missions and tasks conducted by the unmanned systems, which are equipped with different environmental sensors like RGB and IR cameras or LiDAR. Depending on the exploration task, results in the form of pictures, 2D stitched orthophotos or LiDAR point clouds will be transmitted via datalinks and displayed online at the ground station, or will be processed shortly after a mission, e.g. by 3D photogrammetry. For mission planning and its execution, UAV/UGV monitoring and georeferencing of environmental sensor data, reliable positioning and attitude information is required. This is gathered using an integrated GNSS/IMU positioning system. In order to increase the availability of positioning information in GNSS-challenging scenarios, a GNSS multi-constellation based approach is used, amongst others. The present paper focuses on the overall system design, including the ground station and the sensor setups on the UAVs and UGVs, the underlying positioning techniques, as well as 2D and 3D exploration based on an RGB camera mounted on board the UAV and its evaluation based on real-world field tests.

    @InProceedings{batzdorfer2017uavg-70,
    Title = {MULTISENSOR EQUIPPED UAV/UGV FOR AUTOMATED EXPLORATION},
    Author = {S. Batzdorfer and M. Bobbe and M. Becker and H. Harms and U. Bestmann},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {The use of unmanned systems for exploring disaster scenarios has become increasingly important in recent times as a means of supporting action forces. Such systems have to offer a well-balanced relationship between the quality of support and the additional workload they create. Therefore, within the joint research project ANKommEn (a German acronym for Automated Navigation and Communication for Exploration), a system for the exploration of disaster scenarios is being built up using multiple UAVs and UGVs controlled via a central ground station. The ground station serves as the user interface for defining missions and tasks conducted by the unmanned systems, which are equipped with different environmental sensors such as RGB and IR cameras or LiDAR. Depending on the exploration task, results in the form of pictures, 2D stitched orthophotos or LiDAR point clouds are transmitted via datalinks and displayed online at the ground station, or are processed shortly after a mission, e.g. by 3D photogrammetry. For mission planning and execution, UAV/UGV monitoring and the georeferencing of environmental sensor data, reliable position and attitude information is required. This is gathered using an integrated GNSS/IMU positioning system. In order to increase the availability of positioning information in GNSS-challenging scenarios, a GNSS multi-constellation based approach is used, amongst others. The present paper focuses on the overall system design, including the ground station and the sensor setups on the UAVs and UGVs, the underlying positioning techniques, as well as 2D and 3D exploration based on an RGB camera mounted on board the UAV and its evaluation based on real-world field tests.},
    }

  • R. Boesch, “THERMAL REMOTE SENSING WITH UAV-BASED WORKFLOWS,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Climate change will have a significant influence on vegetation health and growth. Predictions of higher mean summer temperatures and prolonged summer droughts may pose a threat to agricultural areas and forest canopies. Rising canopy temperatures can be an indicator of plant stress because of the closure of stomata and a decrease in the transpiration rate. Thermal cameras have been available for decades, but are still often used for single-image analysis, only in an oblique-view manner, or with visual evaluation of video sequences. Remote sensing with a thermal camera can therefore be an important data source for understanding transpiration processes. Photogrammetric workflows allow thermal images to be processed similarly to RGB data, but the low spatial resolution of thermal cameras, significant optical distortion and typically low contrast require an adapted workflow. The temperature distribution in forest canopies is typically completely unknown and less distinct than in urban or industrial areas, where metal constructions and surfaces yield high contrast and sharp edge information. The aim of this paper is to investigate the influence of interior camera orientation, tie-point matching and ground control points on the resulting accuracy of bundle adjustment and dense cloud generation with a typical photogrammetric workflow for UAV-based thermal imagery in natural environments.

    @InProceedings{boesch2017uavg-100,
    Title = {THERMAL REMOTE SENSING WITH UAV-BASED WORKFLOWS},
    Author = {R. Boesch},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Climate change will have a significant influence on vegetation health and growth. Predictions of higher mean summer temperatures and prolonged summer droughts may pose a threat to agricultural areas and forest canopies. Rising canopy temperatures can be an indicator of plant stress because of the closure of stomata and a decrease in the transpiration rate. Thermal cameras have been available for decades, but are still often used for single-image analysis, only in an oblique-view manner, or with visual evaluation of video sequences. Remote sensing with a thermal camera can therefore be an important data source for understanding transpiration processes. Photogrammetric workflows allow thermal images to be processed similarly to RGB data, but the low spatial resolution of thermal cameras, significant optical distortion and typically low contrast require an adapted workflow. The temperature distribution in forest canopies is typically completely unknown and less distinct than in urban or industrial areas, where metal constructions and surfaces yield high contrast and sharp edge information. The aim of this paper is to investigate the influence of interior camera orientation, tie-point matching and ground control points on the resulting accuracy of bundle adjustment and dense cloud generation with a typical photogrammetric workflow for UAV-based thermal imagery in natural environments.},
    }
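
    Low contrast is the main obstacle the abstract above identifies for tie-point matching in thermal blocks. The short sketch below shows one plausible pre-processing step (a percentile stretch to 8 bit plus CLAHE); it is an assumed illustration, not the workflow evaluated in the paper.

    # Sketch only: normalise a 16-bit thermal frame before matching.
    import cv2
    import numpy as np

    def stretch_thermal(t16, lo_pct=2.0, hi_pct=98.0):
        lo, hi = np.percentile(t16, [lo_pct, hi_pct])
        t = np.clip((t16.astype(np.float32) - lo) / max(hi - lo, 1.0), 0, 1)
        t8 = (255 * t).astype(np.uint8)
        # CLAHE boosts local contrast, which helps tie-point detectors.
        return cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(t8)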

  • M. A. Boon, A. Drijfhout, and S. Tesfamichael, “COMPARISON OF A FIXED-WING AND MULTI-ROTOR UAV FOR ENVIRONMENTAL MAPPING APPLICATIONS: A CASE STUDY,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    The advent and evolution of Unmanned Aerial Vehicles (UAVs) and photogrammetric techniques has provided the possibility for on-demand high-resolution environmental mapping. Orthoimages and three-dimensional products such as Digital Surface Models (DSMs) are derived from the UAV imagery and are amongst the most important spatial information tools for environmental planning. The two main types of UAVs on the commercial market are fixed-wing and multi-rotor. Both have their advantages and disadvantages, including their suitability for certain applications. Fixed-wing UAVs normally have longer flight endurance, while multi-rotors provide stable image capture and easy vertical take-off and landing. The objective of this study is therefore to assess the performance of a fixed-wing versus a multi-rotor UAV for environmental mapping applications by means of a case study. The aerial mapping of the Cors-Air model aircraft field, which includes a wetland ecosystem, was undertaken on the same day with a Skywalker fixed-wing UAV and a Raven X8 multi-rotor UAV equipped with similar sensors (digital RGB camera) under the same weather conditions. We compared the derived datasets by applying the DTMs to basic environmental mapping purposes such as slope and contour mapping, and by utilising the orthoimages for the identification of anthropogenic disturbances. The ground spatial resolution obtained was slightly higher for the multi-rotor, probably due to a slower flight speed and more images. The overall precision of the data was noticeably lower for the fixed-wing. In contrast, the orthoimages derived from the two systems showed only small variations. The multi-rotor imagery provided a better representation of vegetation, although the fixed-wing data was sufficient for the identification of environmental factors such as anthropogenic disturbances. Differences were observed when utilising the respective DTMs for mapping the wetland slope and contours, including the representation of hydrological features within the wetland. Factors such as cost, maintenance and flight time are in favour of the Skywalker fixed-wing. The multi-rotor, on the other hand, is more favourable in terms of data accuracy, including for precision environmental planning purposes, although the quality of the fixed-wing data is satisfactory for most environmental mapping applications.

    @InProceedings{boon2017uavg-85,
    Title = {COMPARISON OF A FIXED-WING AND MULTI-ROTOR UAV FOR ENVIRONMENTAL MAPPING APPLICATIONS: A CASE STUDY },
    Author = {M.A. Boon and A. Drijfhout and S. Tesfamichael},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {The advent and evolution of Unmanned Aerial Vehicles (UAVs) and photogrammetric techniques has provided the possibility for on-demand high-resolution environmental mapping. Orthoimages and three-dimensional products such as Digital Surface Models (DSMs) are derived from the UAV imagery and are amongst the most important spatial information tools for environmental planning. The two main types of UAVs on the commercial market are fixed-wing and multi-rotor. Both have their advantages and disadvantages, including their suitability for certain applications. Fixed-wing UAVs normally have longer flight endurance, while multi-rotors provide stable image capture and easy vertical take-off and landing. The objective of this study is therefore to assess the performance of a fixed-wing versus a multi-rotor UAV for environmental mapping applications by means of a case study. The aerial mapping of the Cors-Air model aircraft field, which includes a wetland ecosystem, was undertaken on the same day with a Skywalker fixed-wing UAV and a Raven X8 multi-rotor UAV equipped with similar sensors (digital RGB camera) under the same weather conditions. We compared the derived datasets by applying the DTMs to basic environmental mapping purposes such as slope and contour mapping, and by utilising the orthoimages for the identification of anthropogenic disturbances. The ground spatial resolution obtained was slightly higher for the multi-rotor, probably due to a slower flight speed and more images. The overall precision of the data was noticeably lower for the fixed-wing. In contrast, the orthoimages derived from the two systems showed only small variations. The multi-rotor imagery provided a better representation of vegetation, although the fixed-wing data was sufficient for the identification of environmental factors such as anthropogenic disturbances. Differences were observed when utilising the respective DTMs for mapping the wetland slope and contours, including the representation of hydrological features within the wetland. Factors such as cost, maintenance and flight time are in favour of the Skywalker fixed-wing. The multi-rotor, on the other hand, is more favourable in terms of data accuracy, including for precision environmental planning purposes, although the quality of the fixed-wing data is satisfactory for most environmental mapping applications.},
    }
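
    The slope mapping mentioned above reduces to simple raster calculus. A minimal sketch, assuming the DTM is a numpy height grid with a known cell size (both assumptions of this illustration):

    # Sketch only: slope in degrees from a DTM height grid.
    import numpy as np

    def slope_deg(dtm, cell_m=0.10):
        dz_dy, dz_dx = np.gradient(dtm, cell_m)   # height derivatives per axis
        return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))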

  • M. A. Boon and S. Tesfamichael, “WETLAND VEGETATION INTEGRITY ASSESSMENT WITH LOW ALTITUDE MULTISPECTRAL UAV IMAGERY,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Multispectral sensors were until recently too heavy and bulky for use on Unmanned Aerial Vehicles (UAVs), but this has changed in recent times and such sensors are now commercially available. Their usage is mostly directed towards the agricultural sector, where the focus is on precision farming. Applications of these sensors for the mapping of wetland ecosystems are rare. Here, we evaluate the performance of low-altitude multispectral UAV imagery for determining the state of wetland vegetation in a localised spatial area. Specifically, NDVI derived from multispectral UAV imagery was used to inform the assessment of the integrity of the wetland vegetation. Furthermore, we tested different software applications for processing the imagery; the advantages and disadvantages we experienced with these applications are also briefly presented in this paper. A JAG-M fixed-wing imaging system equipped with a MicaSense RedEdge multispectral camera was utilised for the survey. A single surveying campaign was undertaken in early autumn over a 17 ha study area at the Kameelzynkraal farm, Gauteng Province, South Africa. Structure-from-motion photogrammetry software was used to reconstruct the camera positions and terrain features and to derive a high-resolution orthorectified mosaic. The MicaSense Atlas cloud-based data platform, Pix4D and PhotoScan were utilised for the processing. The WET-Health level one methodology was followed for the vegetation assessment, where wetland health is a measure of the deviation of a wetland's structure and function from its natural reference condition. An on-site evaluation of the vegetation integrity was first completed. Disturbance classes were then mapped using the high-resolution multispectral orthoimages and NDVI. The WET-Health vegetation module, completed with the aid of the multispectral UAV products, indicated that the vegetation of the wetland is largely modified (PES Category D) and that its condition is expected to deteriorate (change score) in the future. However, a lower impact score was determined when utilising the multispectral UAV imagery and NDVI. The result is a more accurate estimation of the impacts in the wetland.

    @InProceedings{boon2017uavg-86,
    Title = {WETLAND VEGETATION INTEGRITY ASSESSMENT WITH LOW ALTITUDE MULTISPECTRAL UAV IMAGERY},
    Author = {M.A. Boon and S. Tesfamichael},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Multispectral sensors were until recently too heavy and bulky for use on Unmanned Aerial Vehicles (UAVs), but this has changed in recent times and such sensors are now commercially available. Their usage is mostly directed towards the agricultural sector, where the focus is on precision farming. Applications of these sensors for the mapping of wetland ecosystems are rare. Here, we evaluate the performance of low-altitude multispectral UAV imagery for determining the state of wetland vegetation in a localised spatial area. Specifically, NDVI derived from multispectral UAV imagery was used to inform the assessment of the integrity of the wetland vegetation. Furthermore, we tested different software applications for processing the imagery; the advantages and disadvantages we experienced with these applications are also briefly presented in this paper. A JAG-M fixed-wing imaging system equipped with a MicaSense RedEdge multispectral camera was utilised for the survey. A single surveying campaign was undertaken in early autumn over a 17 ha study area at the Kameelzynkraal farm, Gauteng Province, South Africa. Structure-from-motion photogrammetry software was used to reconstruct the camera positions and terrain features and to derive a high-resolution orthorectified mosaic. The MicaSense Atlas cloud-based data platform, Pix4D and PhotoScan were utilised for the processing. The WET-Health level one methodology was followed for the vegetation assessment, where wetland health is a measure of the deviation of a wetland's structure and function from its natural reference condition. An on-site evaluation of the vegetation integrity was first completed. Disturbance classes were then mapped using the high-resolution multispectral orthoimages and NDVI. The WET-Health vegetation module, completed with the aid of the multispectral UAV products, indicated that the vegetation of the wetland is largely modified (PES Category D) and that its condition is expected to deteriorate (change score) in the future. However, a lower impact score was determined when utilising the multispectral UAV imagery and NDVI. The result is a more accurate estimation of the impacts in the wetland.},
    }
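
    The NDVI used throughout the assessment above is the standard normalised difference of near-infrared and red reflectance. A minimal sketch (extracting the bands from the actual orthomosaic is left as an assumption):

    # Sketch only: NDVI = (NIR - Red) / (NIR + Red).
    import numpy as np

    def ndvi(red, nir, eps=1e-6):
        red = red.astype(np.float32)
        nir = nir.astype(np.float32)
        return (nir - red) / (nir + red + eps)   # eps avoids divide-by-zero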

  • A. Cefalu, N. Haala, S. Schmohl, I. Neumann, and T. Genz, “A MOBILE MULTI-SENSOR PLATFORM FOR BUILDING RECONSTRUCTION INTEGRATING TERRESTRIAL AND AUTONOMOUS UAV-BASED CLOSE RANGE DATA ACQUISITION,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Photogrammetric data capture of complex 3D objects using UAV imagery has become commonplace. Software tools based on algorithms like Structure-from-Motion and multi-view stereo image matching enable the fully automatic generation of densely meshed 3D point clouds. In contrast, the planning of a suitable image network usually requires considerable effort from a human expert, since this step directly influences the precision and completeness of the resulting point cloud. Planning suitable camera stations can be rather complex, in particular for objects like buildings, bridges and monuments, which frequently feature strong depth variations that must be acquired by high-resolution images at a short distance. Within the paper, we present an automatic flight mission planning tool that generates flight lines while aiming at camera configurations that maintain a roughly constant object distance, provide sufficient image overlap and avoid unnecessary stations. Planning is based on a coarse Digital Surface Model and an approximate building outline. As a proof of concept, we use the tool within our research project MoVEQuaD, which aims at the reconstruction of building geometry at sub-centimetre accuracy.

    @InProceedings{cefalu2017uavg-94,
    Title = {A MOBILE MULTI-SENSOR PLATFORM FOR BUILDING RECONSTRUCTION INTEGRATING TERRESTRIAL AND AUTONOMOUS UAV-BASED CLOSE RANGE DATA ACQUISITION},
    Author = {A. Cefalu and N. Haala and S. Schmohl and I. Neumann and T. Genz},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Photogrammetric data capture of complex 3D objects using UAV imagery has become commonplace. Software tools based on algorithms like Structure-from-Motion and multi-view stereo image matching enable the fully automatic generation of densely meshed 3D point clouds. In contrast, the planning of a suitable image network usually requires considerable effort from a human expert, since this step directly influences the precision and completeness of the resulting point cloud. Planning suitable camera stations can be rather complex, in particular for objects like buildings, bridges and monuments, which frequently feature strong depth variations that must be acquired by high-resolution images at a short distance. Within the paper, we present an automatic flight mission planning tool that generates flight lines while aiming at camera configurations that maintain a roughly constant object distance, provide sufficient image overlap and avoid unnecessary stations. Planning is based on a coarse Digital Surface Model and an approximate building outline. As a proof of concept, we use the tool within our research project MoVEQuaD, which aims at the reconstruction of building geometry at sub-centimetre accuracy.},
    }
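
    The planning goal described above, a roughly constant object distance with sufficient overlap and no unnecessary stations, rests on standard photogrammetric arithmetic. A hedged sketch of that arithmetic follows (generic, not the authors' tool; all parameter values are illustrative):

    # Sketch only: camera-station spacing for a target overlap at a
    # constant object distance. footprint = distance * sensor / focal.
    def station_spacing(distance_m, focal_mm, sensor_mm, overlap=0.8):
        footprint_m = distance_m * sensor_mm / focal_mm
        return footprint_m * (1.0 - overlap)   # advance per camera station

    # Example: 10 m offset, 15 mm lens, 13.2 mm sensor width, 80 % overlap
    # -> footprint 8.8 m, station spacing 1.76 m.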

  • C. Chen, P. Hsieh, and W. Lai, “APPLICATION OF DECISION TREE ON COLLISION AVOIDANCE SYSTEM DESIGN AND VERIFICATION FOR QUADCOPTER,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    The purpose of this research is to build a collision avoidance system for quadcopters using a decision tree algorithm. When the ultrasonic range finder judges that the distance lies within the collision avoidance interval, control authority passes from the operator to the system, which controls the altitude of the UAV. Based on previous experience in operating quadcopters, we can obtain the appropriate pitch angle. The UAS implements the following three motions to avoid collisions: Case 1, an initial slow avoidance stage; Case 2, a slow avoidance stage; and Case 3, a rapid avoidance stage. The training data of the collision avoidance tests are transmitted to the ground station via a wireless transmission module for further analysis. The entire decision tree algorithm of the collision avoidance system, the transmitted data and the ground station have been verified in flight tests. In the flight tests, the quadcopter implemented the avoidance motions in real time and moved away from obstacles steadily. Within the avoidance area, the authority of the collision avoidance system is higher than that of the operator, and the system carries out the avoidance process. The quadcopter successfully flew away from the obstacles at 1.92 metres per second, and the minimum distance between the quadcopter and the obstacle was 1.05 metres.

    @InProceedings{chen2017uavg-47,
    Title = {APPLICATION OF DECISION TREE ON COLLISION AVOIDANCE SYSTEM DESIGN AND VERIFICATION FOR QUADCOPTER},
    Author = {C. Chen and P. Hsieh and W. Lai},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {The purpose of this research is to build a collision avoidance system for quadcopters using a decision tree algorithm. When the ultrasonic range finder judges that the distance lies within the collision avoidance interval, control authority passes from the operator to the system, which controls the altitude of the UAV. Based on previous experience in operating quadcopters, we can obtain the appropriate pitch angle. The UAS implements the following three motions to avoid collisions: Case 1, an initial slow avoidance stage; Case 2, a slow avoidance stage; and Case 3, a rapid avoidance stage. The training data of the collision avoidance tests are transmitted to the ground station via a wireless transmission module for further analysis. The entire decision tree algorithm of the collision avoidance system, the transmitted data and the ground station have been verified in flight tests. In the flight tests, the quadcopter implemented the avoidance motions in real time and moved away from obstacles steadily. Within the avoidance area, the authority of the collision avoidance system is higher than that of the operator, and the system carries out the avoidance process. The quadcopter successfully flew away from the obstacles at 1.92 metres per second, and the minimum distance between the quadcopter and the obstacle was 1.05 metres.},
    }
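
    The three avoidance stages above map naturally onto a small threshold cascade, which is what a shallow decision tree amounts to at run time. The sketch below is illustrative only; the distance thresholds and pitch angles are invented placeholders, not the values used in the paper:

    # Sketch only: staged avoidance rule in the spirit of the paper.
    def avoidance_pitch_deg(range_m):
        if range_m > 6.0:    # outside the avoidance interval: operator in control
            return None
        if range_m > 4.0:    # Case 1: initial slow avoidance
            return 5.0
        if range_m > 2.0:    # Case 2: slow avoidance
            return 10.0
        return 20.0          # Case 3: rapid avoidance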

  • F. Chiabrando and L. Teppati Lose, “PERFORMANCE EVALUATION OF COTS UAV FOR ARCHITECTURAL HERITAGE DOCUMENTATION. A TEST ON S.GIULIANO CHAPEL IN SAVIGLIANO (CN) – ITALY,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    More than ever, the use of UAV platforms has become standard for image or video acquisition from an aerial point of view. Following the enormous growth in demand, the production of COTS (Commercial Off-The-Shelf) platforms and systems has increased to meet market requirements. In recent years, different platforms have been developed and sold at low to medium cost, and nowadays the offer of interesting systems is very large. One of the most important companies producing UAVs and other imaging systems is DJI (Dà-Jiāng Innovations Science and Technology Co., Ltd), founded in 2006 and headquartered in Shenzhen, China. The platforms produced by the company range from low-cost systems up to professional equipment tailored for high-resolution acquisitions useful for film-making purposes. Given the characteristics of the latest low-cost DJI platforms, their on-board sensors and the performance of modern photogrammetric software based on Structure-from-Motion (SfM) algorithms, those systems are nowadays employed for performing 3D surveys from the small up to the large scale. The present paper aims to test the image quality, flight operations, flight planning and the accuracy of the final products of three COTS platforms produced by DJI: the Mavic Pro, the Phantom 4 and the Phantom 4 PRO. The test site chosen was the Chapel of San Giuliano in the municipality of Savigliano (Cuneo, Italy), a small church with two aisles dating back to the early eleventh century.

    @InProceedings{chiabrando2017uavg-55,
    Title = {PERFORMANCE EVALUATION OF COTS UAV FOR ARCHITECTURAL HERITAGE DOCUMENTATION. A TEST ON S.GIULIANO CHAPEL IN SAVIGLIANO (CN) - ITALY},
    Author = {F. Chiabrando and L. {Teppati Lose}},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {More than ever, the use of UAV platforms has become standard for image or video acquisition from an aerial point of view. Following the enormous growth in demand, the production of COTS (Commercial Off-The-Shelf) platforms and systems has increased to meet market requirements. In recent years, different platforms have been developed and sold at low to medium cost, and nowadays the offer of interesting systems is very large. One of the most important companies producing UAVs and other imaging systems is DJI (Dà-Jiāng Innovations Science and Technology Co., Ltd), founded in 2006 and headquartered in Shenzhen, China. The platforms produced by the company range from low-cost systems up to professional equipment tailored for high-resolution acquisitions useful for film-making purposes. Given the characteristics of the latest low-cost DJI platforms, their on-board sensors and the performance of modern photogrammetric software based on Structure-from-Motion (SfM) algorithms, those systems are nowadays employed for performing 3D surveys from the small up to the large scale. The present paper aims to test the image quality, flight operations, flight planning and the accuracy of the final products of three COTS platforms produced by DJI: the Mavic Pro, the Phantom 4 and the Phantom 4 PRO. The test site chosen was the Chapel of San Giuliano in the municipality of Savigliano (Cuneo, Italy), a small church with two aisles dating back to the early eleventh century.},
    }

  • M. Cramer, H. J. Przybilla, and A. Zurhorst, “UAV CAMERAS: OVERVIEW AND GEOMETRIC CALIBRATION BENCHMARK,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes modified) cameras known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep a constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as has been proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Especially for such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.

    @InProceedings{cramer2017uavg-89,
    Title = {UAV CAMERAS: OVERVIEW AND GEOMETRIC CALIBRATION BENCHMARK},
    Author = {M. Cramer and H.J. Przybilla and A. Zurhorst},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes modified) cameras known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep a constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as has been proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Especially for such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.},
    }
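
    For readers who want to reproduce a lab-style pre-calibration of the kind the benchmark compares against in-situ results, the sketch below shows one common route (OpenCV and a chessboard target; an assumed illustration, not the benchmark's own procedure):

    # Sketch only: chessboard calibration with OpenCV.
    import cv2
    import numpy as np

    def calibrate(images_gray, pattern=(9, 6), square_m=0.04):
        obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_m
        obj_pts, img_pts = [], []
        for g in images_gray:
            found, corners = cv2.findChessboardCorners(g, pattern)
            if found:
                obj_pts.append(obj)
                img_pts.append(corners)
        rms, K, dist, _, _ = cv2.calibrateCamera(
            obj_pts, img_pts, images_gray[0].shape[::-1], None, None)
        return rms, K, dist   # reprojection RMS, camera matrix, distortion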

  • D. Duarte, F. Nex, N. Kerle, and G. Vosselman, “TOWARDS A MORE EFFICIENT DETECTION OF EARTHQUAKE INDUCED FAÇADE DAMAGES USING OBLIQUE UAV IMAGERY,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Urban search and rescue (USaR) teams require a fast and thorough building damage assessment to focus their rescue efforts accordingly. Unmanned aerial vehicles (UAV) are able to capture relevant data in a short time frame and survey otherwise inaccessible areas after a disaster, and have thus been identified as useful when coupled with RGB cameras for façade damage detection. Existing literature focuses on the extraction of 3D and/or image features as cues for damage. However, little attention has been given to the efficiency of the proposed methods, which hinders their use in an urban search and rescue context. The framework proposed in this paper aims at a more efficient façade damage detection using UAV multi-view imagery. This was achieved by directing all damage classification computations only to the image regions containing the façades, hence discarding the irrelevant areas of the acquired images and consequently reducing the time needed for this task. To accomplish this, a three-step approach is proposed: i) building extraction from the sparse point cloud computed from the nadir images collected in an initial flight, ii) use of the latter as a proxy for façade location in the oblique images captured in subsequent flights, and iii) selection of the façade image regions to be fed to a damage classification routine. The results show that the proposed framework successfully reduces the façade image regions to be assessed for damage six-fold, hence increasing the efficiency of subsequent damage detection routines. The framework was tested on a set of UAV multi-view images over a neighbourhood of the city of L'Aquila, Italy, affected in 2009 by an earthquake.

    @InProceedings{duarte2017uavg-75,
    Title = {TOWARDS A MORE EFFICIENT DETECTION OF EARTHQUAKE INDUCED FAÇADE DAMAGES USING OBLIQUE UAV IMAGERY},
    Author = {D. Duarte and F. Nex and N. Kerle and G. Vosselman},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Urban search and rescue (USaR) teams require a fast and thorough building damage assessment to focus their rescue efforts accordingly. Unmanned aerial vehicles (UAV) are able to capture relevant data in a short time frame and survey otherwise inaccessible areas after a disaster, and have thus been identified as useful when coupled with RGB cameras for façade damage detection. Existing literature focuses on the extraction of 3D and/or image features as cues for damage. However, little attention has been given to the efficiency of the proposed methods, which hinders their use in an urban search and rescue context. The framework proposed in this paper aims at a more efficient façade damage detection using UAV multi-view imagery. This was achieved by directing all damage classification computations only to the image regions containing the façades, hence discarding the irrelevant areas of the acquired images and consequently reducing the time needed for this task. To accomplish this, a three-step approach is proposed: i) building extraction from the sparse point cloud computed from the nadir images collected in an initial flight, ii) use of the latter as a proxy for façade location in the oblique images captured in subsequent flights, and iii) selection of the façade image regions to be fed to a damage classification routine. The results show that the proposed framework successfully reduces the façade image regions to be assessed for damage six-fold, hence increasing the efficiency of subsequent damage detection routines. The framework was tested on a set of UAV multi-view images over a neighbourhood of the city of L'Aquila, Italy, affected in 2009 by an earthquake.},
    }

  • T. Fiolka, F. Rouatbi, and D. Bender, “AUTOMATED DETECTION AND CLOSING OF HOLES IN AERIAL POINT CLOUDS USING AN UAS,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    3D terrain models are an important instrument in areas like geology, agriculture and reconnaissance. An automated UAS with a line-based LiDAR can create terrain models quickly and easily, even of large areas. But the resulting point cloud may contain holes and therefore be incomplete. This might happen due to occlusions, a missed flight route due to wind, or simply as a result of changes in the ground height which alter the swath of the LiDAR system. This paper proposes a method to detect holes in 3D point clouds generated during the flight and to adjust the course in order to close them. First, a grid-based search for holes in the horizontal ground plane is performed. Then a check for vertical holes, mainly created by building walls, is done. Due to occlusions and steep LiDAR angles, closing the vertical gaps may be difficult or even impossible. Therefore, the current approach deals with holes in the ground plane and only marks the vertical holes in such a way that the operator can decide on further actions regarding them. The aim is to efficiently create point clouds which can be used for the generation of complete 3D terrain models.

    @InProceedings{fiolka2017uavg-28,
    Title = {AUTOMATED DETECTION AND CLOSING OF HOLES IN AERIAL POINT CLOUDS USING AN UAS},
    Author = {T. Fiolka and F. Rouatbi and D. Bender},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {3D terrain models are an important instrument in areas like geology, agriculture and reconnaissance. An automated UAS with a line-based LiDAR can create terrain models quickly and easily, even of large areas. But the resulting point cloud may contain holes and therefore be incomplete. This might happen due to occlusions, a missed flight route due to wind, or simply as a result of changes in the ground height which alter the swath of the LiDAR system. This paper proposes a method to detect holes in 3D point clouds generated during the flight and to adjust the course in order to close them. First, a grid-based search for holes in the horizontal ground plane is performed. Then a check for vertical holes, mainly created by building walls, is done. Due to occlusions and steep LiDAR angles, closing the vertical gaps may be difficult or even impossible. Therefore, the current approach deals with holes in the ground plane and only marks the vertical holes in such a way that the operator can decide on further actions regarding them. The aim is to efficiently create point clouds which can be used for the generation of complete 3D terrain models.},
    }
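
    The first step described above, a grid-based search for holes in the ground plane, can be stated compactly: rasterise the point cloud in 2D and flag near-empty cells. A minimal sketch (cell size and point-count threshold are assumptions of this illustration):

    # Sketch only: flag ground-plane grid cells with too few returns.
    import numpy as np

    def ground_holes(xyz, cell_m=1.0, min_pts=3):
        xy = xyz[:, :2]
        ij = np.floor((xy - xy.min(axis=0)) / cell_m).astype(int)
        counts = np.zeros(ij.max(axis=0) + 1, dtype=int)
        np.add.at(counts, (ij[:, 0], ij[:, 1]), 1)
        return np.argwhere(counts < min_pts)   # cells to revisit in flight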

  • M. H. D. Franceschini, H. Bartholomeus, D. van Apeldoorn, J. Suomalainen, and L. Kooistra, “ASSESSING CHANGES IN POTATO CANOPY CAUSED BY LATE BLIGHT IN ORGANIC PRODUCTION SYSTEMS THROUGH UAV-BASED PUSHBROOM IMAGING SPECTROMETER,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Productivity of cropping systems can be constrained by several limiting factors simultaneously, and approaches that allow plants under stress to be indicated and identified in field conditions can be valuable for farmers and breeders. In organic production systems, sensing solutions are not frequently studied, despite their potential for crop trait retrieval and stress assessment. In this study, spectral data in the optical domain, acquired using a pushbroom spectrometer on board an unmanned aerial vehicle, are used to evaluate the potential of this information for the assessment of late blight (Phytophthora infestans) incidence on potato (Solanum tuberosum) under organic cultivation. Vegetation index formulations with two and three spectral bands were tested for the complete range of the spectral information acquired (i.e., from 450 to 900 nm, with 10 nm spectral resolution). This evaluation concerned the discrimination between plots cultivated with only one resistant potato variety and plots with a variety mixture comprising resistant and susceptible cultivars. Results indicated that indices based on three spectral bands performed better, and that the optimal wavelengths (i.e., near 490, 530 and 670 nm) are related not only to chlorophyll content but also to other leaf pigments like carotenoids.

    @InProceedings{franceschini2017uavg-96,
    Title = {ASSESSING CHANGES IN POTATO CANOPY CAUSED BY LATE BLIGHT IN ORGANIC PRODUCTION SYSTEMS THROUGH UAV-BASED PUSHBROOM IMAGING SPECTROMETER},
    Author = {M.H.D. Franceschini and H. Bartholomeus and D. {van Apeldoorn} and J. Suomalainen and L. Kooistra},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Productivity of cropping systems can be constrained by several limiting factors simultaneously, and approaches that allow plants under stress to be indicated and identified in field conditions can be valuable for farmers and breeders. In organic production systems, sensing solutions are not frequently studied, despite their potential for crop trait retrieval and stress assessment. In this study, spectral data in the optical domain, acquired using a pushbroom spectrometer on board an unmanned aerial vehicle, are used to evaluate the potential of this information for the assessment of late blight (Phytophthora infestans) incidence on potato (Solanum tuberosum) under organic cultivation. Vegetation index formulations with two and three spectral bands were tested for the complete range of the spectral information acquired (i.e., from 450 to 900 nm, with 10 nm spectral resolution). This evaluation concerned the discrimination between plots cultivated with only one resistant potato variety and plots with a variety mixture comprising resistant and susceptible cultivars. Results indicated that indices based on three spectral bands performed better, and that the optimal wavelengths (i.e., near 490, 530 and 670 nm) are related not only to chlorophyll content but also to other leaf pigments like carotenoids.},
    }
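
    The index search above sweeps band combinations across the 450-900 nm range. The sketch below shows the general shape of a three-band, normalised-difference-style index; the exact formulation tested in the paper is not reproduced here, so this form is an assumption for illustration only:

    # Sketch only: a generic three-band normalised index.
    import numpy as np

    def three_band_index(b490, b530, b670, eps=1e-6):
        # b* are reflectance arrays at the stated wavelengths (assumed).
        return (b530 - b490 - b670) / (b530 + b490 + b670 + eps)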

  • C. Y. Fu and J. R. Tsay, “PRECISION ANALYSIS OF VISUAL ODOMETRY BASED ON DISPARITY CHANGING,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    This paper aims to analyze the precision of the position and orientation of cameras on a Mobile Mapping System (MMS) determined by disparity-based VO (DBVO). Two forward-facing cameras on the MMS are used to obtain a sequence of stereo pairs. The Interior Orientation Parameters (IOPs) and Relative Orientation Parameters (ROPs) are derived in advance. The pose estimation is achieved by DBVO without additional control data. The procedure of DBVO consists of four steps. First, keypoint detection and matching is conducted to obtain tie points in consecutive images. Then, image rectification is implemented to transform the tie points into epipolar image space. Next, the parallax equation is applied to estimate the 3D coordinates of interest points in epipolar image space. Since their image points have different disparities in neighboring stereo pairs, the 3D coordinates of the interest points differ between neighboring pairs as well. Finally, a 3D conformal transformation is employed to derive the transformation parameters between neighboring pairs according to the change of coordinates of the interest points. The a posteriori standard deviations are adopted to assess the quality of the transformation. Besides, check data of the ground trajectory derived by photo triangulation are applied to evaluate the result. The relative errors of the horizontal and vertical translations derived by DBVO are 2% and 3% in the non-viewing direction. However, the translation in the viewing direction and the three rotation angles derived by DBVO have significant systematic errors of about 1 m, 3°, 3° and 10°, respectively. The influence of error propagation is not significant according to the chart of error-distance ratio. In an open area, the trajectory of INS/GPS is similar to ground truth, while the trajectory derived by DBVO has a 44% relative error. In a residential district, the trajectory derived by INS/GPS has a drift error of about 2 m, while the relative error of the trajectory derived by DBVO decreases to 38%. It is presumed that the systematic error results from the 3D coordinates estimated by the parallax equation because of poor intersection geometry. This will be verified by adding sideward-looking cameras in the future.

    @InProceedings{fu2017uavg-60,
    Title = {PRECISION ANALYSIS OF VISUAL ODOMETRY BASED ON DISPARITY CHANGING},
    Author = {C.Y. Fu and J.R. Tsay},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {This paper aims to analyze the precision of the position and orientation of cameras on a Mobile Mapping System (MMS) determined by disparity-based VO (DBVO). Two forward-facing cameras on the MMS are used to obtain a sequence of stereo pairs. The Interior Orientation Parameters (IOPs) and Relative Orientation Parameters (ROPs) are derived in advance. The pose estimation is achieved by DBVO without additional control data. The procedure of DBVO consists of four steps. First, keypoint detection and matching is conducted to obtain tie points in consecutive images. Then, image rectification is implemented to transform the tie points into epipolar image space. Next, the parallax equation is applied to estimate the 3D coordinates of interest points in epipolar image space. Since their image points have different disparities in neighboring stereo pairs, the 3D coordinates of the interest points differ between neighboring pairs as well. Finally, a 3D conformal transformation is employed to derive the transformation parameters between neighboring pairs according to the change of coordinates of the interest points. The a posteriori standard deviations are adopted to assess the quality of the transformation. Besides, check data of the ground trajectory derived by photo triangulation are applied to evaluate the result. The relative errors of the horizontal and vertical translations derived by DBVO are 2% and 3% in the non-viewing direction. However, the translation in the viewing direction and the three rotation angles derived by DBVO have significant systematic errors of about 1 m, 3°, 3° and 10°, respectively. The influence of error propagation is not significant according to the chart of error-distance ratio. In an open area, the trajectory of INS/GPS is similar to ground truth, while the trajectory derived by DBVO has a 44% relative error. In a residential district, the trajectory derived by INS/GPS has a drift error of about 2 m, while the relative error of the trajectory derived by DBVO decreases to 38%. It is presumed that the systematic error results from the 3D coordinates estimated by the parallax equation because of poor intersection geometry. This will be verified by adding sideward-looking cameras in the future.},
    }
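
    The parallax equation the paper applies is the standard depth-from-disparity relation for a rectified stereo pair. A minimal sketch (pinhole model; f in pixels, baseline B in metres, (cu, cv) the principal point):

    # Sketch only: 3D point from rectified-stereo disparity.
    def stereo_xyz(u, v, disparity_px, f_px, B_m, cu, cv):
        Z = f_px * B_m / disparity_px   # depth from the parallax equation
        X = (u - cu) * Z / f_px
        Y = (v - cv) * Z / f_px
        return X, Y, Z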

  • Z. Gao, Y. Song, C. Li, F. Zeng, and F. Wang, “RESEARCH ON THE APPLICATION OF RAPID SURVEYING AND MAPPING FOR LARGE SCARE TOPOGRAPHIC MAP BY MUAV AERIAL PHOTOGRAPHY SYSTEM,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    A method for the rapid acquisition and processing of large-scale topographic map data, relying on an Unmanned Aerial Vehicle (UAV) low-altitude aerial photogrammetry system, is studied in this paper, and the main workflow is elaborated. Key technologies of UAV photographic mapping are also studied, and a rapid mapping system based on an electronic plate mapping system is developed, thus changing the traditional mapping mode and greatly improving the efficiency of mapping. Production tests and accuracy evaluation of Digital Orthophoto Map (DOM), Digital Line Graphic (DLG) and other digital products were carried out in combination with a city basic topographic map update project. This provides a new technique for large-scale rapid surveying and has obvious technical advantages and good application prospects.

    @InProceedings{gao2017uavg-10,
    Title = {RESEARCH ON THE APPLICATION OF RAPID SURVEYING AND MAPPING FOR LARGE SCARE TOPOGRAPHIC MAP BY MUAV AERIAL PHOTOGRAPHY SYSTEM},
    Author = {Z. Gao and Y. Song and C. Li and F. Zeng and F. Wang},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {A method for the rapid acquisition and processing of large-scale topographic map data, relying on an Unmanned Aerial Vehicle (UAV) low-altitude aerial photogrammetry system, is studied in this paper, and the main workflow is elaborated. Key technologies of UAV photographic mapping are also studied, and a rapid mapping system based on an electronic plate mapping system is developed, thus changing the traditional mapping mode and greatly improving the efficiency of mapping. Production tests and accuracy evaluation of Digital Orthophoto Map (DOM), Digital Line Graphic (DLG) and other digital products were carried out in combination with a city basic topographic map update project. This provides a new technique for large-scale rapid surveying and has obvious technical advantages and good application prospects.},
    }

  • C. Gianni, M. Balsi, S. Esposito, and P. Fallavollita, “OBSTACLE DETECTION SYSTEM INVOLVING FUSION OF MULTIPLE SENSOR TECHNOLOGIES,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Obstacle detection is a fundamental task for Unmanned Aerial Vehicles (UAVs) as part of a Sense and Avoid system. In this study, we present a method of multi-sensor obstacle detection that demonstrated good results on different kinds of obstacles. This method can be implemented on low-cost platforms involving a DSP or a small FPGA. In this paper, we also present a study on the typical targets that can be difficult to detect because of their reflectivity, form factor and heterogeneity, and show how data fusion can often overcome the limitations of each technology.

    @InProceedings{gianni2017uavg-24,
    Title = {OBSTACLE DETECTION SYSTEM INVOLVING FUSION OF MULTIPLE SENSOR TECHNOLOGIES},
    Author = {C. Gianni and M. Balsi and S. Esposito and P. Fallavollita},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Obstacle detection is a fundamental task for Unmanned Aerial Vehicles (UAVs) as part of a Sense and Avoid system. In this study, we present a method of multi-sensor obstacle detection that demonstrated good results on different kinds of obstacles. This method can be implemented on low-cost platforms involving a DSP or a small FPGA. In this paper, we also present a study on the typical targets that can be difficult to detect because of their reflectivity, form factor and heterogeneity, and show how data fusion can often overcome the limitations of each technology.},
    }

  • H. Gonzalez-Jorge, M. Bueno, J. Martinez-Sanchez, and P. Arias, “LOW-ALTITUDE LONG-ENDURANCE SOLAR UNMANNED PLANE FOR FOREST FIRE PREVENTION: APPLICATION TO THE NATURAL PARK OF SERRA DO XURES (SPAIN),” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Unmanned aerial systems (UAS) show great potential in operations related to surveillance. These systems can be successfully applied to the prevention of forest fires, especially those caused by human intervention. The present work focuses on a study of the operational possibilities of the unmanned system AtlantikSolar, developed by ETH Zurich, for the prevention of forest fires in the Spanish natural park of Serra do Xurés, an area of 20,920 ha with height variations between 300 m and 1,500 m. The operational evaluation of AtlantikSolar is based on the use of a FLIR Tau 2 LWIR camera as imaging payload, which can detect illegal activities in the forest such as bonfires, uncontrolled burning or arson. Surveillance flight is planned for an altitude of 100 m to obey the legal limit of the Spanish UAS regulation. This altitude produces a swath width of 346.4 m and a pixel resolution between 1.5 and 1.8 pixels/m. The operation is planned to adapt the altitude to changes in the topography and obtain a constant ground resolution. The operational speed is set to 52 km/h. The UAS trajectory is adapted to the limits of the natural park and the border between Spain and Portugal. Matlab code was developed for mission planning. The complete surveillance of the natural park requires a total time of 15.6 hours for a distance of 811.6 km.

    @InProceedings{gonzalez-jorge2017uavg-15,
    Title = {LOW-ALTITUDE LONG-ENDURANCE SOLAR UNMANNED PLANE FOR FOREST FIRE PREVENTION: APPLICATION TO THE NATURAL PARK OF SERRA DO XURES (SPAIN)},
    Author = {H. Gonzalez-Jorge and M. Bueno and J. Martinez-Sanchez and P. Arias},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Unmanned aerial systems (UAS) show great potential in operations related to surveillance. These systems can be successfully applied to the prevention of forest fires, especially those caused by human intervention. The present work focuses on a study of the operational possibilities of the unmanned system AtlantikSolar, developed by ETH Zurich, for the prevention of forest fires in the Spanish natural park of Serra do Xurés, an area of 20,920 ha with height variations between 300 m and 1,500 m. The operational evaluation of AtlantikSolar is based on the use of a FLIR Tau 2 LWIR camera as imaging payload, which can detect illegal activities in the forest such as bonfires, uncontrolled burning or arson. Surveillance flight is planned for an altitude of 100 m to obey the legal limit of the Spanish UAS regulation. This altitude produces a swath width of 346.4 m and a pixel resolution between 1.5 and 1.8 pixels/m. The operation is planned to adapt the altitude to changes in the topography and obtain a constant ground resolution. The operational speed is set to 52 km/h. The UAS trajectory is adapted to the limits of the natural park and the border between Spain and Portugal. Matlab code was developed for mission planning. The complete surveillance of the natural park requires a total time of 15.6 hours for a distance of 811.6 km.},
    }
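
    The mission figures quoted above are simple arithmetic and can be reproduced directly; the field of view below is back-computed from the reported swath and is therefore an inferred value, not one stated in the abstract:

    # Sketch only: reproducing the reported mission figures.
    import math

    altitude_m = 100.0
    half_angle = math.degrees(math.atan((346.4 / 2) / altitude_m))   # ~60 deg
    swath_m = 2 * altitude_m * math.tan(math.radians(half_angle))    # 346.4 m
    hours = 811.6 / 52.0                                             # ~15.6 h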

  • S. Hese and F. Behrendt, “MULTISEASONAL TREE CROWN STRUCTURE MAPPING WITH POINT CLOUDS FROM OTS QUADROCOPTER SYSTEMS,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    OTS (Off-The-Shelf) quadrocopter systems provide a cost-effective (below 2000 Euro), flexible and mobile platform for high-resolution point cloud mapping. Various studies have shown the full potential of these small and flexible platforms. Especially in very tight and complex 3D environments, automatic obstacle avoidance, low copter weight, long flight times and precise manoeuvring are important advantages of these small OTS systems in comparison with larger octocopter systems. This study examines the potential of the DJI Phantom 4 Pro series and the Phantom 3A series for within-stand and forest tree crown 3D point cloud mapping, using both within-stand oblique imaging at different altitude levels and data captured from a nadir perspective. On a test site in Brandenburg, Germany, a beech crown was selected and measured at three different altitude levels in Point Of Interest (POI) mode with oblique data capturing, and one nadir mosaic was created with 85%/85% overlap using the Drone Deploy automatic mapping software. Three flight campaigns were performed, one in September 2016 (leaf-on), one in March 2017 (leaf-off) and one in May 2017 (leaf-on), to derive point clouds from different crown structure and phenological situations, covering the leaf-on and leaf-off status of the tree crown. After height correction, the point clouds were used with GPS georeferencing to calculate voxel-based densities on 50x10x10 cm voxel definitions, using a topological network of chessboard image objects in 0.5 m height steps in an object-based image processing environment. The comparison between leaf-off and leaf-on status was done on volume pixel definitions, comparing the attributed point densities per volume and plotting the resulting values as a function of distance to the crown centre. In the leaf-off status, SfM (structure from motion) algorithms clearly identified the central stem and also secondary branch systems. While penetration into the crown structure is limited in the leaf-on status (the point cloud is mainly a description of the interpolated crown surface), the visibility of the internal crown structure in the leaf-off status also allows mapping of the internal tree structure down to the secondary branch level. When combined, the leaf-on and leaf-off point clouds generate a comprehensive tree crown structure description that allows low-cost and detailed 3D crown structure mapping and potentially precise biomass mapping and/or internal structural differentiation of deciduous tree species types. Compared to TLS (Terrestrial Laser Scanning) based measurements, the costs are negligible, in the range of 1500-2500 Euro. This suggests the approach for low-cost but fine-scale in-situ applications and/or projects where TLS measurements cannot be obtained, and for less dense forest stands where POI flights can be performed. This study used the in-copter GPS measurements for georeferencing; better absolute georeferencing results will be obtained with DGPS reference points. The study nevertheless clearly demonstrates the potential of very low-cost OTS copter systems and the image-attributed GPS measurements of the copter for the automatic calculation of complex 3D point clouds in a multi-temporal tree crown mapping context.

    @InProceedings{hese2017uavg-57,
    Title = {MULTISEASONAL TREE CROWN STRUCTURE MAPPING WITH POINT CLOUDS FROM OTS QUADROCOPTER SYSTEMS },
    Author = {S. Hese and F. Behrendt},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {OTS (Off-The-Shelf) quadrocopter systems provide a cost-effective (below 2000 Euro), flexible and mobile platform for high-resolution point cloud mapping. Various studies have shown the full potential of these small and flexible platforms. Especially in very tight and complex 3D environments, automatic obstacle avoidance, low copter weight, long flight times and precise manoeuvring are important advantages of these small OTS systems in comparison with larger octocopter systems. This study examines the potential of the DJI Phantom 4 Pro series and the Phantom 3A series for within-stand and forest tree crown 3D point cloud mapping, using both within-stand oblique imaging at different altitude levels and data captured from a nadir perspective. On a test site in Brandenburg, Germany, a beech crown was selected and measured at three different altitude levels in Point Of Interest (POI) mode with oblique data capturing, and one nadir mosaic was created with 85%/85% overlap using the Drone Deploy automatic mapping software. Three flight campaigns were performed, one in September 2016 (leaf-on), one in March 2017 (leaf-off) and one in May 2017 (leaf-on), to derive point clouds from different crown structure and phenological situations, covering the leaf-on and leaf-off status of the tree crown. After height correction, the point clouds were used with GPS georeferencing to calculate voxel-based densities on 50x10x10 cm voxel definitions, using a topological network of chessboard image objects in 0.5 m height steps in an object-based image processing environment. The comparison between leaf-off and leaf-on status was done on volume pixel definitions, comparing the attributed point densities per volume and plotting the resulting values as a function of distance to the crown centre. In the leaf-off status, SfM (structure from motion) algorithms clearly identified the central stem and also secondary branch systems. While penetration into the crown structure is limited in the leaf-on status (the point cloud is mainly a description of the interpolated crown surface), the visibility of the internal crown structure in the leaf-off status also allows mapping of the internal tree structure down to the secondary branch level. When combined, the leaf-on and leaf-off point clouds generate a comprehensive tree crown structure description that allows low-cost and detailed 3D crown structure mapping and potentially precise biomass mapping and/or internal structural differentiation of deciduous tree species types. Compared to TLS (Terrestrial Laser Scanning) based measurements, the costs are negligible, in the range of 1500-2500 Euro. This suggests the approach for low-cost but fine-scale in-situ applications and/or projects where TLS measurements cannot be obtained, and for less dense forest stands where POI flights can be performed. This study used the in-copter GPS measurements for georeferencing; better absolute georeferencing results will be obtained with DGPS reference points. The study nevertheless clearly demonstrates the potential of very low-cost OTS copter systems and the image-attributed GPS measurements of the copter for the automatic calculation of complex 3D point clouds in a multi-temporal tree crown mapping context.},
    }
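
    As an editor's illustration (not code from the paper), the voxel-density step described above can be sketched in a few lines of Python/NumPy. The voxel size follows the abstract, while the array layout and function names are assumptions; the paper performs this step in an object-based image processing environment.

        import numpy as np

        def voxel_densities(points, voxel=(0.1, 0.1, 0.5)):
            """Count points per 10 x 10 x 50 cm voxel; `points` is an (N, 3) array in metres."""
            idx = np.floor(points / np.asarray(voxel)).astype(int)
            keys, counts = np.unique(idx, axis=0, return_counts=True)
            return {tuple(k): int(c) for k, c in zip(keys, counts)}

        def distance_to_crown_center(points, center_xy):
            """Horizontal distances used to plot density as a function of crown-centre distance."""
            return np.linalg.norm(points[:, :2] - np.asarray(center_xy), axis=1)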

  • R. Ilehag, A. Schenk, and S. Hinz, “CONCEPT FOR CLASSIFYING FACADE ELEMENTS BASED ON MATERIAL, GEOMETRY AND THERMAL RADIATION USING MULTIMODAL UAV REMOTE SENSING,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    This paper presents a concept for the classification of facade elements based on the material, the geometry and the thermal radiation of the facade, using a multimodal Unmanned Aerial Vehicle (UAV) system. Once the concept is finalized and functional, the workflow can be used for building energy demand estimation by exploiting existing methods for estimating the heat transfer coefficient and the transmitted heat loss. The multimodal system consists of a thermal, a hyperspectral and an optical sensor, which can be operated from a UAV. We present the challenges faced when dealing with sensors that operate in different spectral ranges and have different technical specifications, such as radiometric and geometric resolution. We address different approaches to data fusion, such as image registration, the generation of 3D models by image matching, and classification based on either the geometry of the object or the pixel values. As a first step towards realizing the concept, the result of a geometric calibration with a purpose-designed multimodal calibration pattern is presented.

    @InProceedings{ilehag2017uavg-63,
    Title = {CONCEPT FOR CLASSIFYING FACADE ELEMENTS BASED ON MATERIAL, GEOMETRY AND THERMAL RADIATION USING MULTIMODAL UAV REMOTE SENSING},
    Author = {R. Ilehag and A. Schenk and S. Hinz},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {This paper presents a concept for the classification of facade elements based on the material, the geometry and the thermal radiation of the facade, using a multimodal Unmanned Aerial Vehicle (UAV) system. Once the concept is finalized and functional, the workflow can be used for building energy demand estimation by exploiting existing methods for estimating the heat transfer coefficient and the transmitted heat loss. The multimodal system consists of a thermal, a hyperspectral and an optical sensor, which can be operated from a UAV. We present the challenges faced when dealing with sensors that operate in different spectral ranges and have different technical specifications, such as radiometric and geometric resolution. We address different approaches to data fusion, such as image registration, the generation of 3D models by image matching, and classification based on either the geometry of the object or the pixel values. As a first step towards realizing the concept, the result of a geometric calibration with a purpose-designed multimodal calibration pattern is presented.},
    }

  • S. Jabari, F. Fathollahi, and Y. Zhang, “APPLICATION OF SENSOR FUSION TO IMPROVE UAV IMAGE CLASSIFICATION,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a large amount of time and effort. In this study, we show that sensor fusion can improve image quality and thereby increase the accuracy of image classification. We tested two sensor fusion configurations using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera for its higher sensitivity and the colour or MS camera for its spectral properties. The resulting images are then compared to those acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The results show that the proposed sensor fusion configurations achieve higher accuracies than the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board in UAV missions and performing image fusion can help achieve higher quality images and accordingly more accurate classification results.

    @InProceedings{jabari2017uavg-49,
    Title = {APPLICATION OF SENSOR FUSION TO IMPROVE UAV IMAGE CLASSIFICATION},
    Author = {S. Jabari and F. Fathollahi and Y. Zhang},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a large amount of time and effort. In this study, we show that sensor fusion can improve image quality and thereby increase the accuracy of image classification. We tested two sensor fusion configurations using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera for its higher sensitivity and the colour or MS camera for its spectral properties. The resulting images are then compared to those acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The results show that the proposed sensor fusion configurations achieve higher accuracies than the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board in UAV missions and performing image fusion can help achieve higher quality images and accordingly more accurate classification results.},
    }
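
    For illustration only, here is a minimal Python/NumPy sketch of one classical pan/colour fusion scheme (a Brovey-style transform); the fusion method actually used in the paper may differ, and the band layout is an assumption.

        import numpy as np

        def brovey_fusion(pan, ms, eps=1e-6):
            """pan: (H, W) high-resolution band; ms: (bands, H, W) co-registered, upsampled stack."""
            intensity = ms.mean(axis=0)      # synthetic low-resolution intensity image
            gain = pan / (intensity + eps)   # per-pixel spatial-detail gain from the Pan band
            return ms * gain                 # inject Pan detail into every spectral band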

  • J. P. Jhan, J. Y. Rau, N. Haala, and M. Cramer, “INVESTIGATION OF PARALLAX ISSUES FOR MULTI-LENS MULTISPECTRAL CAMERA BAND CO-REGISTRATION,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Multi-lens multispectral cameras (MSCs), such as the Micasense RedEdge and Parrot Sequoia, record multispectral information through separate lenses. Their light weight and small size make them well suited for mounting on an Unmanned Aerial System (UAS) to collect high spatial resolution images for vegetation investigation. However, the multi-sensor geometry of the multi-lens structure induces significant band misregistration in the original images, so band co-registration is necessary in order to obtain accurate spectral information. A robust and adaptive band-to-band image transform (RABBIT) is proposed for the band co-registration of multi-lens MSCs. RABBIT first obtains the camera rig information from camera system calibration and uses the calibrated results to perform image transformation and lens distortion correction. Since calibration uncertainty leads to different amounts of systematic error, a final step optimizes the results in order to achieve better co-registration accuracy. Because parallax causes significant band misregistration when images are acquired close to the target, four datasets acquired with the RedEdge and Sequoia, comprising both aerial and close-range imagery, were used to evaluate the performance of RABBIT. The aerial results show that RABBIT achieves sub-pixel accuracy, which is suitable for the band co-registration of any multi-lens MSC. The close-range results show the same performance when band co-registration focuses on a specific target for 3D modelling, or when the target is equidistant from the camera.

    @InProceedings{jhan2017uavg-59,
    Title = {INVESTIGATION OF PARALLAX ISSUES FOR MULTI-LENS MULTISPECTRAL CAMERA BAND CO-REGISTRATION},
    Author = {J.P. Jhan and J.Y. Rau and N. Haala and M. Cramer},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Multi-lens multispectral cameras (MSCs), such as the Micasense RedEdge and Parrot Sequoia, record multispectral information through separate lenses. Their light weight and small size make them well suited for mounting on an Unmanned Aerial System (UAS) to collect high spatial resolution images for vegetation investigation. However, the multi-sensor geometry of the multi-lens structure induces significant band misregistration in the original images, so band co-registration is necessary in order to obtain accurate spectral information. A robust and adaptive band-to-band image transform (RABBIT) is proposed for the band co-registration of multi-lens MSCs. RABBIT first obtains the camera rig information from camera system calibration and uses the calibrated results to perform image transformation and lens distortion correction. Since calibration uncertainty leads to different amounts of systematic error, a final step optimizes the results in order to achieve better co-registration accuracy. Because parallax causes significant band misregistration when images are acquired close to the target, four datasets acquired with the RedEdge and Sequoia, comprising both aerial and close-range imagery, were used to evaluate the performance of RABBIT. The aerial results show that RABBIT achieves sub-pixel accuracy, which is suitable for the band co-registration of any multi-lens MSC. The close-range results show the same performance when band co-registration focuses on a specific target for 3D modelling, or when the target is equidistant from the camera.},
    }
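
    A hedged sketch of the basic band-to-band co-registration idea using feature matching and a projective transform (OpenCV); RABBIT itself additionally exploits rig calibration and an optimization step, so this is only a simplified stand-in.

        import cv2
        import numpy as np

        def coregister_band(band_src, band_ref):
            """Warp one spectral band onto a reference band via a RANSAC homography."""
            orb = cv2.ORB_create(4000)
            k1, d1 = orb.detectAndCompute(band_src, None)
            k2, d2 = orb.detectAndCompute(band_ref, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            src = np.float32([k1[m.queryIdx].pt for m in matches])
            dst = np.float32([k2[m.trainIdx].pt for m in matches])
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            h, w = band_ref.shape[:2]
            return cv2.warpPerspective(band_src, H, (w, h))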

  • K. Johansen and T. Raharjo, “MULTI-TEMPORAL ASSESSMENT OF LYCHEE TREE CROP STRUCTURE USING MULTI-SPECTRAL RPAS IMAGERY,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    The lychee tree is native to China and produces small fleshy fruit up to 5 cm in diameter. Lychee production in Australia is worth >$20 million annually. Pruning of trees encourages new growth, has a positive effect on the fruiting of lychee, makes fruit-picking easier, and may increase yield, as it increases light interception and tree crown surface area. The objective of this research was to assess changes in tree structure, i.e. tree crown circumference, width, height and Plant Projective Cover (PPC), using multi-spectral Remotely Piloted Aircraft System (RPAS) imagery collected before and after pruning of a lychee plantation. A secondary objective was to assess any variations in the results as a function of various flying heights (30, 50 and 70 m). Pre- and post-pruning results showed significant differences in all measured tree structural parameters, including an average decrease in tree crown circumference of 1.94 m, tree crown width of 0.57 m, tree crown height of 0.62 m, and PPC of 14.8%. The different flying heights produced similar measurements of tree crown width and PPC, whereas tree crown circumference and height measurements decreased with increasing flying height. These results show that multi-spectral RPAS imagery can provide a suitable means of assessing pruning efforts undertaken by contractors based on changes in tree structure of lychee plantations, and that it is important to collect imagery in a consistent manner, as varying flying heights may cause changes to tree structural measurements.

    @InProceedings{johansen2017uavg-51,
    Title = {MULTI-TEMPORAL ASSESSMENT OF LYCHEE TREE CROP STRUCTURE USING MULTI-SPECTRAL RPAS IMAGERY},
    Author = {K. Johansen and T. Raharjo},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {The lychee tree is native to China and produces small fleshy fruit up to 5 cm in diameter. Lychee production in Australia is worth >$20 million annually. Pruning of trees encourages new growth, has a positive effect on the fruiting of lychee, makes fruit-picking easier, and may increase yield, as it increases light interception and tree crown surface area. The objective of this research was to assess changes in tree structure, i.e. tree crown circumference, width, height and Plant Projective Cover (PPC), using multi-spectral Remotely Piloted Aircraft System (RPAS) imagery collected before and after pruning of a lychee plantation. A secondary objective was to assess any variations in the results as a function of various flying heights (30, 50 and 70 m). Pre- and post-pruning results showed significant differences in all measured tree structural parameters, including an average decrease in tree crown circumference of 1.94 m, tree crown width of 0.57 m, tree crown height of 0.62 m, and PPC of 14.8%. The different flying heights produced similar measurements of tree crown width and PPC, whereas tree crown circumference and height measurements decreased with increasing flying height. These results show that multi-spectral RPAS imagery can provide a suitable means of assessing pruning efforts undertaken by contractors based on changes in tree structure of lychee plantations, and that it is important to collect imagery in a consistent manner, as varying flying heights may cause changes to tree structural measurements.},
    }
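
    As a rough, editor-supplied sketch of how PPC and crown height change could be derived from pre- and post-pruning canopy height models (CHMs): the threshold, the percentile choice and the CHM input are assumptions, not the paper's workflow.

        import numpy as np

        def ppc(chm_clip, veg_thresh=0.5):
            """Plant projective cover: fraction of crown-clip pixels above a height threshold."""
            return float((chm_clip > veg_thresh).mean())

        def pruning_change(chm_pre, chm_post, veg_thresh=0.5):
            """Return (height change, PPC change) between the two epochs."""
            dh = np.nanpercentile(chm_pre, 95) - np.nanpercentile(chm_post, 95)
            return dh, ppc(chm_pre, veg_thresh) - ppc(chm_post, veg_thresh)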

  • G. Jozkow, P. Wieczorek, M. Karpina, A. Walicka, and A. Borkowski, “PERFORMANCE EVALUATION OF SUAS EQUIPPED WITH VELODYNE HDL-32E LIDAR SENSOR,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    The Velodyne HDL-32E laser scanner is increasingly used as the main mapping sensor in small commercial UASs. However, there is still little information about the actual accuracy of point clouds collected with such UASs. This work empirically evaluates the accuracy of the point cloud collected with such a UAS. The accuracy assessment covered four aspects: the impact of the sensors on theoretical point cloud accuracy, trajectory reconstruction quality, and internal and absolute point cloud accuracy. Theoretical point cloud accuracy was evaluated by calculating the 3D position error from the known errors of the sensors used. The quality of trajectory reconstruction was assessed by comparing position and attitude differences between forward and reverse EKF solutions. Internal and absolute accuracies were evaluated by fitting planes to 8 point cloud samples extracted for planar surfaces. In addition, the absolute accuracy was also determined by calculating 3D point distances between the LiDAR UAS and reference TLS point clouds. The test data consisted of point clouds collected in two separate flights performed over the same area. The experiments showed that in the tested UAS, the trajectory reconstruction, especially the attitude, has a significant impact on point cloud accuracy. The estimated absolute accuracy of the point clouds collected during both test flights was better than 10 cm; thus the investigated UAS fits the mapping-grade category.

    @InProceedings{jozkow2017uavg-92,
    Title = {PERFORMANCE EVALUATION OF SUAS EQUIPPED WITH VELODYNE HDL-32E LIDAR SENSOR},
    Author = {G. Jozkow and P. Wieczorek and M. Karpina and A. Walicka and A. Borkowski},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {The Velodyne HDL-32E laser scanner is increasingly used as the main mapping sensor in small commercial UASs. However, there is still little information about the actual accuracy of point clouds collected with such UASs. This work empirically evaluates the accuracy of the point cloud collected with such a UAS. The accuracy assessment covered four aspects: the impact of the sensors on theoretical point cloud accuracy, trajectory reconstruction quality, and internal and absolute point cloud accuracy. Theoretical point cloud accuracy was evaluated by calculating the 3D position error from the known errors of the sensors used. The quality of trajectory reconstruction was assessed by comparing position and attitude differences between forward and reverse EKF solutions. Internal and absolute accuracies were evaluated by fitting planes to 8 point cloud samples extracted for planar surfaces. In addition, the absolute accuracy was also determined by calculating 3D point distances between the LiDAR UAS and reference TLS point clouds. The test data consisted of point clouds collected in two separate flights performed over the same area. The experiments showed that in the tested UAS, the trajectory reconstruction, especially the attitude, has a significant impact on point cloud accuracy. The estimated absolute accuracy of the point clouds collected during both test flights was better than 10 cm; thus the investigated UAS fits the mapping-grade category.},
    }
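
    The internal-accuracy check described above (fitting planes to point cloud samples of planar surfaces) can be illustrated with a small NumPy least-squares plane fit; variable names are the editor's assumptions.

        import numpy as np

        def plane_fit_rms(points):
            """RMS of point-to-plane residuals for an (N, 3) planar sample."""
            centroid = points.mean(axis=0)
            # The right singular vector with the smallest singular value is the plane normal.
            _, _, vt = np.linalg.svd(points - centroid)
            normal = vt[-1]
            residuals = (points - centroid) @ normal
            return float(np.sqrt(np.mean(residuals ** 2)))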

  • D. Kim, J. Youn, and C. Kim, “AUTOMATIC FAULT RECOGNITION OF PHOTOVOLTAIC MODULES BASED ON STATISTICAL ANALYSIS OF UAV THERMOGRAPHY,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    As a malfunctioning PV (Photovoltaic) cell has a higher temperature than adjacent normal cells, it can easily be detected with a thermal infrared sensor. However, inspecting large-scale PV power plants with a hand-held thermal infrared sensor is time-consuming. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from a UAV (unmanned aerial vehicle). The proposed algorithm uses statistical analysis of the thermal intensity (surface temperature) characteristics of each PV module, with the mean intensity and standard deviation of each panel serving as parameters for fault diagnosis. One of the characteristics of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object. Consequently, a global detection rule using the mean intensity of all panels is not applicable. Therefore, a local detection rule based on the mean intensity and standard deviation range was developed to detect defective PV modules from individual arrays automatically. The performance of the proposed algorithm was tested on three sample images, which verified a detection accuracy for defective panels of 97% or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited for highly sensitive fault detection than a global detection rule.

    @InProceedings{kim2017uavg-69,
    Title = {AUTOMATIC FAULT RECOGNITION OF PHOTOVOLTAIC MODULES BASED ON STATISTICAL ANALYSIS OF UAV THERMOGRAPHY},
    Author = {D. Kim and J. Youn and C. Kim},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {As a malfunctioning PV (Photovoltaic) cell has a higher temperature than adjacent normal cells, it can easily be detected with a thermal infrared sensor. However, inspecting large-scale PV power plants with a hand-held thermal infrared sensor is time-consuming. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from a UAV (unmanned aerial vehicle). The proposed algorithm uses statistical analysis of the thermal intensity (surface temperature) characteristics of each PV module, with the mean intensity and standard deviation of each panel serving as parameters for fault diagnosis. One of the characteristics of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object. Consequently, a global detection rule using the mean intensity of all panels is not applicable. Therefore, a local detection rule based on the mean intensity and standard deviation range was developed to detect defective PV modules from individual arrays automatically. The performance of the proposed algorithm was tested on three sample images, which verified a detection accuracy for defective panels of 97% or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited for highly sensitive fault detection than a global detection rule.},
    }
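
    A minimal sketch of the local detection rule as the abstract describes it: within one array, a module is flagged when its mean thermal intensity exceeds the array mean by more than k standard deviations. The sensitivity factor k and the data layout are assumptions.

        import numpy as np

        def faulty_panels(panel_means, k=2.0):
            """Indices of modules whose mean intensity is an outlier within the array."""
            panel_means = np.asarray(panel_means, dtype=float)
            mu, sigma = panel_means.mean(), panel_means.std()
            return np.flatnonzero(panel_means > mu + k * sigma)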

  • J. Kim and T. Kim, “DEVELOPMENT OF A ROBUST IMAGE MOSAICKING METHOD FOR SMALL UNMANNED AERIAL VEHICLE,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    In this paper, a tie-point based image mosaicking method that considers the imaging characteristics of small UAVs is proposed. Small UAVs are characterized by unstable flight trajectories and low flight heights. The proposed method accounts for these characteristics in the image transformation estimation and image blending processes. For image transformation estimation, an optimal transformation model is applied adaptively based on the tie-point area ratio. The optimal tie-point area ratio was about 0.3; mosaicking error decreased considerably at this value. For image blending, composite area minimization is introduced as a step preceding image resampling. The composite areas of individual images were minimized by analyzing image overlaps between adjacent images. The proposed method was evaluated over flat and urban areas with highly overlapping multi-strip and inconsistently overlapping strip configurations. Experimental results showed that the proposed method can reliably generate mosaics not only from UAV images acquired in good environments but also from those acquired in extreme environments.

    @InProceedings{kim2017uavg-79,
    Title = {DEVELOPMENT OF A ROBUST IMAGE MOSAICKING METHOD FOR SMALL UNMANNED AERIAL VEHICLE},
    Author = {J. Kim and T. Kim},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {In this paper, a tie-point based image mosaicking method that considers the imaging characteristics of small UAVs is proposed. Small UAVs are characterized by unstable flight trajectories and low flight heights. The proposed method accounts for these characteristics in the image transformation estimation and image blending processes. For image transformation estimation, an optimal transformation model is applied adaptively based on the tie-point area ratio. The optimal tie-point area ratio was about 0.3; mosaicking error decreased considerably at this value. For image blending, composite area minimization is introduced as a step preceding image resampling. The composite areas of individual images were minimized by analyzing image overlaps between adjacent images. The proposed method was evaluated over flat and urban areas with highly overlapping multi-strip and inconsistently overlapping strip configurations. Experimental results showed that the proposed method can reliably generate mosaics not only from UAV images acquired in good environments but also from those acquired in extreme environments.},
    }
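
    A hedged sketch of the model-selection idea: if the tie-point area ratio (interpreted here as the tie points' convex-hull area over the image area; the paper's exact definition may differ) is large enough, a full projective model is estimated, otherwise a more rigid model is used. The 0.3 threshold follows the abstract.

        import numpy as np
        from scipy.spatial import ConvexHull

        def choose_transform(tie_points, image_shape, ratio_thresh=0.3):
            """Pick a transformation model from the spread of the tie points."""
            h, w = image_shape[:2]
            hull = ConvexHull(np.asarray(tie_points))   # in 2D, hull.volume is the hull area
            ratio = hull.volume / (w * h)
            return "projective" if ratio >= ratio_thresh else "affine"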

  • A. Klimkowska and I. Lee, “A PRELIMINARY STUDY OF SHIP DETECTION FROM UAV IMAGES BASED ON COLOR SPACE CONVERSION AND IMAGE SEGMENTATION,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Ship detection is an essential process supporting tasks such as fishery management, ship search, and marine traffic monitoring and control, and it helps in the prevention of illegal activities. So far, sea and shore monitoring has been carried out by ship patrols and aircraft, along with sea vessel detection from space-borne platforms. Recently, interest in applying UAV imagery to marine applications has increased, owing to advantages such as high spatial resolution and flexible acquisition times. While investigating state-of-the-art methods for ship detection from optical images acquired from different platforms, we found a significant problem with the occurrence of ship wakes. This phenomenon can prevent correct detection of the ship location and lead to overestimation of the ship size, as the ship and its wake are often considered part of the same object in the image, or wakes are detected as separate ships because their brightness can be similar to that of the vessel. In order to reduce the impact of ship wakes, we investigated the behavior of images in different color spaces to obtain data with little or almost no trace of the ship wake. We considered the following color spaces: HSV, YCbCr, NTSC, XYZ and L*a*b*, and investigated each channel of the converted images. We finally chose the 2nd channel of the L*a*b* space, in which the occurrence of ship wakes was significantly reduced. Objects of interest were detected through image segmentation. The applied method uses edge detection based on the gradient magnitude. Afterwards, several characteristics such as centroid, major and minor axes, size and orientation were calculated and later used to remove false positives and thus improve the accuracy of the proposed method.

    @InProceedings{klimkowska2017uavg-82,
    Title = {A PRELIMINARY STUDY OF SHIP DETECTION FROM UAV IMAGES BASED ON COLOR SPACE CONVERSION AND IMAGE SEGMENTATION},
    Author = {A. Klimkowska and I. Lee},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Ship detection is an essential process supporting tasks such as fishery management, ship search, and marine traffic monitoring and control, and it helps in the prevention of illegal activities. So far, sea and shore monitoring has been carried out by ship patrols and aircraft, along with sea vessel detection from space-borne platforms. Recently, interest in applying UAV imagery to marine applications has increased, owing to advantages such as high spatial resolution and flexible acquisition times. While investigating state-of-the-art methods for ship detection from optical images acquired from different platforms, we found a significant problem with the occurrence of ship wakes. This phenomenon can prevent correct detection of the ship location and lead to overestimation of the ship size, as the ship and its wake are often considered part of the same object in the image, or wakes are detected as separate ships because their brightness can be similar to that of the vessel. In order to reduce the impact of ship wakes, we investigated the behavior of images in different color spaces to obtain data with little or almost no trace of the ship wake. We considered the following color spaces: HSV, YCbCr, NTSC, XYZ and L*a*b*, and investigated each channel of the converted images. We finally chose the 2nd channel of the L*a*b* space, in which the occurrence of ship wakes was significantly reduced. Objects of interest were detected through image segmentation. The applied method uses edge detection based on the gradient magnitude. Afterwards, several characteristics such as centroid, major and minor axes, size and orientation were calculated and later used to remove false positives and thus improve the accuracy of the proposed method.},
    }
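
    An editor's sketch of the wake-suppression and segmentation steps: convert to L*a*b*, keep the 2nd channel (a*), and extract candidate blobs from the gradient magnitude. The gradient threshold is a placeholder, not a value from the paper.

        import cv2
        import numpy as np

        def ship_candidates(bgr, grad_thresh=40.0):
            lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
            a = lab[:, :, 1]                              # a* channel: wakes largely vanish
            gx = cv2.Sobel(a, cv2.CV_32F, 1, 0)
            gy = cv2.Sobel(a, cv2.CV_32F, 0, 1)
            mask = (cv2.magnitude(gx, gy) > grad_thresh).astype(np.uint8)
            n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
            return stats[1:], centroids[1:]               # per-blob geometry for false-positive removal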

  • S. Kubota, Y. Kawai, and R. Kadotani, “ACCURACY VALIDATION OF POINT CLOUDS OF UAV PHOTOGRAMMETRY AND ITS APPLICATION FOR RIVER MANAGEMENT,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    River administration facilities such as levees and river walls play a major role in preventing flooding due to heavy rain. The forms of such facilities must be constantly monitored for alteration due to rain and running water, and limited human resources and budgets make it necessary to efficiently maintain river administration facilities. During maintenance, inspection results are commonly recorded on paper documents. Continuous inspection and repair using information systems are an on-going challenge. This study proposes a maintenance management system for river facilities that uses three-dimensional data to solve these problems and make operation and maintenance more efficient. The system uses three-dimensional data to visualize river facility deformation and its process, and it has functions that visualize information about river management at any point in the three-dimensional data. The three-dimensional data is generated by photogrammetry using a camera on an Unmanned Aerial Vehicle.

    @InProceedings{kubota2017uavg-67,
    Title = {ACCURACY VALIDATION OF POINT CLOUDS OF UAV PHOTOGRAMMETRY AND ITS APPLICATION FOR RIVER MANAGEMENT},
    Author = {S. Kubota and Y. Kawai and R. Kadotani},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {River administration facilities such as levees and river walls play a major role in preventing flooding due to heavy rain. The forms of such facilities must be constantly monitored for alteration due to rain and running water, and limited human resources and budgets make it necessary to efficiently maintain river administration facilities. During maintenance, inspection results are commonly recorded on paper documents. Continuous inspection and repair using information systems are an on-going challenge. This study proposes a maintenance management system for river facilities that uses three-dimensional data to solve these problems and make operation and maintenance more efficient. The system uses three-dimensional data to visualize river facility deformation and its process, and it has functions that visualize information about river management at any point in the three-dimensional data. The three-dimensional data is generated by photogrammetry using a camera on an Unmanned Aerial Vehicle.},
    }

  • B. Leroux, J. Cali, J. Verdun, L. Morel, and H. He, “ASSESSING THE RELIABILITY AND THE ACCURACY OF ATTITUDE EXTRACTED FROM VISUAL ODOMETRY FOR LIDAR DATA GEOREFERENCING,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Airborne LiDAR systems require Direct Georeferencing (DG) in order to compute the coordinates of the surveyed points in the mapping frame. A UAV platform is no exception to this need, but its payload has to be lighter than that installed on board manned aircraft, so manufacturers need to find alternatives to heavy sensors and navigation systems. For the georeferencing of these data, a possible solution is to replace the Inertial Measurement Unit (IMU) with a camera and record the optical flow. The frames are then processed photogrammetrically to extract the External Orientation Parameters (EOP) and, therefore, the path of the camera. The major advantages of this method, called Visual Odometry (VO), are its low cost, the absence of IMU-induced drift, and the option of using Ground Control Points (GCPs) as in airborne photogrammetric surveys. In this paper we present a test bench designed to assess the reliability and accuracy of the attitude estimated from VO outputs. The test bench consists of a trolley which carries a GNSS receiver, an IMU sensor and a camera. The LiDAR is replaced by a tacheometer in order to survey already-known control points. We have also developed a methodology, applied to this test bench, for the calibration of the external parameters and the computation of the surveyed point coordinates. Several tests have revealed differences of about 2-3 centimeters between the measured control point coordinates and their known values.

    @InProceedings{leroux2017uavg-62,
    Title = {ASSESSING THE RELIABILITY AND THE ACCURACY OF ATTITUDE EXTRACTED FROM VISUAL ODOMETRY FOR LIDAR DATA GEOREFERENCING},
    Author = {B. Leroux and J. Cali and J. Verdun and L. Morel and H. He},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Airborne LiDAR systems require Direct Georeferencing (DG) in order to compute the coordinates of the surveyed points in the mapping frame. A UAV platform is no exception to this need, but its payload has to be lighter than that installed on board manned aircraft, so manufacturers need to find alternatives to heavy sensors and navigation systems. For the georeferencing of these data, a possible solution is to replace the Inertial Measurement Unit (IMU) with a camera and record the optical flow. The frames are then processed photogrammetrically to extract the External Orientation Parameters (EOP) and, therefore, the path of the camera. The major advantages of this method, called Visual Odometry (VO), are its low cost, the absence of IMU-induced drift, and the option of using Ground Control Points (GCPs) as in airborne photogrammetric surveys. In this paper we present a test bench designed to assess the reliability and accuracy of the attitude estimated from VO outputs. The test bench consists of a trolley which carries a GNSS receiver, an IMU sensor and a camera. The LiDAR is replaced by a tacheometer in order to survey already-known control points. We have also developed a methodology, applied to this test bench, for the calibration of the external parameters and the computation of the surveyed point coordinates. Several tests have revealed differences of about 2-3 centimeters between the measured control point coordinates and their known values.},
    }
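
    For context, the standard direct-georeferencing relation that such a test bench evaluates can be written in a few lines; the frame conventions and names below are generic assumptions, not the authors' notation.

        import numpy as np

        def georeference(x_platform, R_attitude, lever_arm, R_boresight, r_sensor):
            """Ground point in the mapping frame from one LiDAR range measurement.

            x_platform:  (3,) GNSS position of the platform (mapping frame)
            R_attitude:  3x3 body-to-mapping rotation (here estimated by visual odometry)
            lever_arm:   (3,) sensor offset in the body frame
            R_boresight: 3x3 sensor-to-body rotation
            r_sensor:    (3,) range vector in the sensor frame
            """
            return x_platform + R_attitude @ (lever_arm + R_boresight @ r_sensor)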

  • Q. S. Li, F. K. K. Wong, and T. Fung, “ASSESSING THE UTILITY OF UAV-BORNE HYPERSPECTRAL IMAGE AND PHOTOGRAMMETRY DERIVED 3D DATA FOR WETLAND SPECIES DISTRIBUTION QUICK MAPPING,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Lightweight unmanned aerial vehicles (UAVs) carrying novel sensors offer a low-cost, low-risk solution for data acquisition in complex environments. This study assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for classifying 13 species in a wetland area of Hong Kong. Multiple feature reduction methods and different classifiers were compared. The best result was obtained when transformed components from minimum noise fraction (MNF) and the DSM were combined in a support vector machine (SVM) classifier. Wavelength regions at the green chlorophyll-absorption peak, red, red edge and the near-infrared oxygen absorption feature were identified as providing better species discrimination. In addition, the DSM input reduces the overestimation of low plant species and the misclassification caused by shadow effects and inter-species morphological variation. This study establishes a framework for quick survey and update of wetland environments using a UAV system. The findings indicate that UAV-borne hyperspectral imagery and derived tree height information provide a solid foundation for further research such as biological invasion monitoring and bio-parameter modelling in wetlands.

    @InProceedings{li2017uavg-61,
    Title = {ASSESSING THE UTILITY OF UAV-BORNE HYPERSPECTRAL IMAGE AND PHOTOGRAMMETRY DERIVED 3D DATA FOR WETLAND SPECIES DISTRIBUTION QUICK MAPPING},
    Author = {Q.S. Li and F.K.K. Wong and T. Fung},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Lightweight unmanned aerial vehicles (UAVs) carrying novel sensors offer a low-cost, low-risk solution for data acquisition in complex environments. This study assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for classifying 13 species in a wetland area of Hong Kong. Multiple feature reduction methods and different classifiers were compared. The best result was obtained when transformed components from minimum noise fraction (MNF) and the DSM were combined in a support vector machine (SVM) classifier. Wavelength regions at the green chlorophyll-absorption peak, red, red edge and the near-infrared oxygen absorption feature were identified as providing better species discrimination. In addition, the DSM input reduces the overestimation of low plant species and the misclassification caused by shadow effects and inter-species morphological variation. This study establishes a framework for quick survey and update of wetland environments using a UAV system. The findings indicate that UAV-borne hyperspectral imagery and derived tree height information provide a solid foundation for further research such as biological invasion monitoring and bio-parameter modelling in wetlands.},
    }
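
    A brief sketch of the best-performing combination reported above (MNF components stacked with the DSM, classified by an SVM), using scikit-learn; hyperparameters and array shapes are assumptions.

        import numpy as np
        from sklearn.svm import SVC

        def classify_species(mnf, dsm, train_idx, train_labels):
            """mnf: (n_pixels, n_components); dsm: (n_pixels,) height feature."""
            features = np.column_stack([mnf, dsm])
            clf = SVC(kernel="rbf", C=10.0, gamma="scale")
            clf.fit(features[train_idx], train_labels)
            return clf.predict(features)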

  • Y. Liang, Y. Qu, and T. Cui, “A THREE-DIMENSIONAL SIMULATION AND VISUALIZATION SYSTEM FOR UAV PHOTOGRAMMETRY,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Nowadays, UAVs are widely used for large-scale surveying and mapping. Compared with manned aircraft, UAVs are more cost-effective and more responsive. However, UAVs are usually more sensitive to wind conditions, which greatly influence their positions and orientations. The flight height of a UAV is relatively low, and the relief of the terrain may result in serious occlusions. Moreover, the observations acquired by the Position and Orientation System (POS) are usually less accurate than those acquired in manned aerial photogrammetry. All of these factors introduce uncertainties into UAV photogrammetry. To investigate these uncertainties, a three-dimensional simulation and visualization system has been developed. The system is demonstrated with flight plan evaluation, image matching, POS-supported direct georeferencing, and ortho-mosaicing. Experimental results show that the presented system is effective for flight plan evaluation. The generated image pairs are accurate and false matches can be effectively filtered. The presented system dynamically visualizes the results of direct georeferencing in three dimensions, which is informative and effective for real-time target tracking and positioning. The dynamically generated orthomosaic can be used in emergency applications. The presented system has also been used for teaching the theories and applications of UAV photogrammetry.

    @InProceedings{liang2017uavg-39,
    Title = {A THREE-DIMENSIONAL SIMULATION AND VISUALIZATION SYSTEM FOR UAV PHOTOGRAMMETRY},
    Author = {Y. Liang and Y. Qu and T. Cui},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Nowadays, UAVs are widely used for large-scale surveying and mapping. Compared with manned aircraft, UAVs are more cost-effective and more responsive. However, UAVs are usually more sensitive to wind conditions, which greatly influence their positions and orientations. The flight height of a UAV is relatively low, and the relief of the terrain may result in serious occlusions. Moreover, the observations acquired by the Position and Orientation System (POS) are usually less accurate than those acquired in manned aerial photogrammetry. All of these factors introduce uncertainties into UAV photogrammetry. To investigate these uncertainties, a three-dimensional simulation and visualization system has been developed. The system is demonstrated with flight plan evaluation, image matching, POS-supported direct georeferencing, and ortho-mosaicing. Experimental results show that the presented system is effective for flight plan evaluation. The generated image pairs are accurate and false matches can be effectively filtered. The presented system dynamically visualizes the results of direct georeferencing in three dimensions, which is informative and effective for real-time target tracking and positioning. The dynamically generated orthomosaic can be used in emergency applications. The presented system has also been used for teaching the theories and applications of UAV photogrammetry.},
    }

  • S. Livens, K. Pauly, P. Baeck, J. Blommaert, D. Nuyts, J. Zender, and B. Delaure, “A SPATIOSPECTRAL CAMERA FOR HIGH RESOLUTION HYPERSPECTRAL IMAGING,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1% of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

    @InProceedings{livens2017uavg-107,
    Title = {A SPATIOSPECTRAL CAMERA FOR HIGH RESOLUTION HYPERSPECTRAL IMAGING},
    Author = {S. Livens and K. Pauly and P. Baeck and J. Blommaert and D. Nuyts and J. Zender and B. Delaure},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1% of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.},
    }

  • U. Lussem, J. Hollberg, J. Menne, J. Schellberg, and G. Bareth, “USING CALIBRATED RGB IMAGERY FROM LOW-COST UAVS FOR GRASSLAND MONITORING: CASE STUDY AT THE RENGEN GRASSLAND EXPERIMENT (RGE), GERMANY,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Monitoring the spectral response of intensively managed grassland throughout the growing season allows optimizing fertilizer inputs by monitoring plant growth. For example, site-specific fertilizer application as part of precision agriculture (PA) management requires information at short notice. However, this requires field-based measurements with hyper- or multispectral sensors, which may not be feasible in day-to-day farming practice. Exploiting the information of RGB images from consumer-grade cameras mounted on unmanned aerial vehicles (UAVs) can offer cost-efficient as well as near-real-time analysis of grasslands with high temporal and spatial resolution. The potential of RGB imagery-based vegetation indices (VIs) from consumer-grade cameras mounted on UAVs has been explored recently in several studies. However, for multitemporal analyses it is desirable to calibrate the digital numbers (DN) of RGB images to physical units. In this study, we explored the comparability of the RGBVI from a consumer-grade camera mounted on a low-cost UAV to well-established vegetation indices from hyperspectral field measurements for applications in grassland. The study was conducted in 2014 on the Rengen Grassland Experiment (RGE) in Germany. Image DN values were calibrated into reflectance by using the Empirical Line Method (Smith & Milton 1999). Depending on sampling date and VI, the correlation between the UAV-based RGBVI and VIs such as the NDVI resulted in R2 values varying from no correlation up to 0.9. These results indicate that calibrated RGB-based VIs have the potential to support or substitute hyperspectral field measurements to facilitate management decisions on grasslands.

    @InProceedings{lussem2017uavg-91,
    Title = {USING CALIBRATED RGB IMAGERY FROM LOW-COST UAVS FOR GRASSLAND MONITORING: CASE STUDY AT THE RENGEN GRASSLAND EXPERIMENT (RGE), GERMANY},
    Author = {U. Lussem and J. Hollberg and J. Menne and J. Schellberg and G. Bareth},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Monitoring the spectral response of intensively managed grassland throughout the growing season allows optimizing fertilizer inputs by monitoring plant growth. For example, site-specific fertilizer application as part of precision agriculture (PA) management requires information at short notice. However, this requires field-based measurements with hyper- or multispectral sensors, which may not be feasible in day-to-day farming practice. Exploiting the information of RGB images from consumer-grade cameras mounted on unmanned aerial vehicles (UAVs) can offer cost-efficient as well as near-real-time analysis of grasslands with high temporal and spatial resolution. The potential of RGB imagery-based vegetation indices (VIs) from consumer-grade cameras mounted on UAVs has been explored recently in several studies. However, for multitemporal analyses it is desirable to calibrate the digital numbers (DN) of RGB images to physical units. In this study, we explored the comparability of the RGBVI from a consumer-grade camera mounted on a low-cost UAV to well-established vegetation indices from hyperspectral field measurements for applications in grassland. The study was conducted in 2014 on the Rengen Grassland Experiment (RGE) in Germany. Image DN values were calibrated into reflectance by using the Empirical Line Method (Smith & Milton 1999). Depending on sampling date and VI, the correlation between the UAV-based RGBVI and VIs such as the NDVI resulted in R2 values varying from no correlation up to 0.9. These results indicate that calibrated RGB-based VIs have the potential to support or substitute hyperspectral field measurements to facilitate management decisions on grasslands.},
    }
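
    As an illustration, the two computational steps named above are compact: an empirical-line calibration maps DNs to reflectance via a linear fit over reference panels, and the RGBVI follows Bareth et al. (2016) as (G^2 - R*B) / (G^2 + R*B). Panel values and band arrays below are assumptions.

        import numpy as np

        def empirical_line(dn_band, panel_dn, panel_reflectance):
            """Fit reflectance = gain * DN + offset over reference panels, apply to a band."""
            gain, offset = np.polyfit(panel_dn, panel_reflectance, 1)
            return gain * dn_band + offset

        def rgbvi(r, g, b, eps=1e-6):
            return (g * g - r * b) / (g * g + r * b + eps)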

  • L. Magri and R. Toldo, “BENDING THE DOMING EFFECT IN STRUCTURE FROM MOTION RECONSTRUCTIONS THROUGH BUNDLE ADJUSTMENT,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Structure from Motion techniques provide low-cost and flexible methods that can be adopted in aerial surveying to collect topographic data with accurate results. Nevertheless, the so-called doming effect, due to unfortunate acquisition conditions or unreliable modeling of radial distortion, has been recognized as a critical issue that disrupts the quality of the attained 3D reconstruction. In this paper we propose a novel method, which works effectively in the presence of nearly flat soil, to tackle the doming effect a posteriori: an automatic ground detection method is used to capture the doming deformation flawing the reconstruction, which in turn is warped to the correct geometry by iteratively enforcing a planarity constraint through a Bundle Adjustment framework. Experiments on real-world datasets demonstrate promising results.

    @InProceedings{magri2017uavg-56,
    Title = {BENDING THE DOMING EFFECT IN STRUCTURE FROM MOTION RECONSTRUCTIONS THROUGH BUNDLE ADJUSTMENT},
    Author = {L. Magri and R. Toldo},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Structure from Motion techniques provide low-cost and flexible methods that can be adopted in aerial surveying to collect topographic data with accurate results. Nevertheless, the so-called doming effect, due to unfortunate acquisition conditions or unreliable modeling of radial distortion, has been recognized as a critical issue that disrupts the quality of the attained 3D reconstruction. In this paper we propose a novel method, which works effectively in the presence of nearly flat soil, to tackle the doming effect a posteriori: an automatic ground detection method is used to capture the doming deformation flawing the reconstruction, which in turn is warped to the correct geometry by iteratively enforcing a planarity constraint through a Bundle Adjustment framework. Experiments on real-world datasets demonstrate promising results.},
    }
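
    For intuition only: a much simpler post-hoc correction than the paper's in-adjustment constraint would fit a low-order surface to the detected ground points and remove the modelled dome. This sketch is the editor's simplification, not the proposed Bundle Adjustment method.

        import numpy as np

        def fit_dome(ground_xyz):
            """Least-squares quadratic surface z = f(x, y) over detected ground points."""
            x, y, z = ground_xyz.T
            A = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(x)])
            coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
            return coeffs

        def flatten(ground_xyz, coeffs):
            x, y, z = ground_xyz.T
            A = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(x)])
            return np.column_stack([x, y, z - A @ coeffs])   # subtract the modelled doming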

  • H. Meissner, M. Cramer, and B. Piltz, “BENCHMARKING THE OPTICAL RESOLVING POWER OF UAV BASED CAMERA SYSTEMS,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    UAV based imaging and 3D object point generation is an established technology. Some UAV users aim at (very) high-accuracy applications, e.g. inspection or monitoring scenarios. In order to guarantee such a level of detail and accuracy, high-resolving imaging systems are mandatory. Furthermore, image quality considerably impacts photogrammetric processing, as the tie point transfer, mandatory for forming the block geometry, fully relies on the radiometric quality of the images. Thus, empirical testing of radiometric camera performance is an important issue, in addition to the standard (geometric) calibration that is normally covered first. Within this paper the resolving power of nine different camera/lens installations has been investigated. The selected systems represent different camera classes, like DSLRs, system cameras, larger-format cameras and proprietary systems. As the systems have been tested in well-controlled laboratory conditions and objective quality measures have been derived, individual performance can be compared directly, thus representing a first benchmark on the radiometric performance of UAV cameras. The results have shown that not only the selection of an appropriate lens and camera body has an impact; in addition, the image pre-processing, i.e. the use of a specific debayering method, significantly influences the final resolving power.

    @InProceedings{meissner2017uavg-90,
    Title = {BENCHMARKING THE OPTICAL RESOLVING POWER OF UAV BASED CAMERA SYSTEMS},
    Author = {H. Meissner and M. Cramer and B. Piltz},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {UAV based imaging and 3D object point generation is an established technology. Some UAV users aim at (very) high-accuracy applications, e.g. inspection or monitoring scenarios. In order to guarantee such a level of detail and accuracy, high-resolving imaging systems are mandatory. Furthermore, image quality considerably impacts photogrammetric processing, as the tie point transfer, mandatory for forming the block geometry, fully relies on the radiometric quality of the images. Thus, empirical testing of radiometric camera performance is an important issue, in addition to the standard (geometric) calibration that is normally covered first. Within this paper the resolving power of nine different camera/lens installations has been investigated. The selected systems represent different camera classes, like DSLRs, system cameras, larger-format cameras and proprietary systems. As the systems have been tested in well-controlled laboratory conditions and objective quality measures have been derived, individual performance can be compared directly, thus representing a first benchmark on the radiometric performance of UAV cameras. The results have shown that not only the selection of an appropriate lens and camera body has an impact; in addition, the image pre-processing, i.e. the use of a specific debayering method, significantly influences the final resolving power.},
    }

  • M. M. R. Mostafa, “ACCURACY ASSESSMENT OF PROFESSIONAL GRADE UNMANNED SYSTEMS FOR HIGH PRECISION AIRBORNE MAPPING,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Recently, sophisticated multi-sensor systems have been implemented on board modern Unmanned Aerial Systems. This allows for producing a variety of mapping products for different mapping applications. The resulting accuracies match those of traditional, well-engineered manned systems. This paper presents the results of a geometric accuracy assessment project for unmanned systems equipped with multi-sensor systems for direct georeferencing purposes. There are a number of parameters that either individually or collectively affect the quality and accuracy of a final airborne mapping product. This paper focuses on identifying and explaining these parameters and their mutual interaction and correlation. Accuracy assessment of the final ground object positioning is presented through 8 real-world flight missions flown in Canada. The achievable precision of map production is addressed in some detail.

    @InProceedings{mostafa2017uavg-93,
    Title = {ACCURACY ASSESSMENT OF PROFESSIONAL GRADE UNMANNED SYSTEMS FOR HIGH PRECISION AIRBORNE MAPPING},
    Author = {M.M.R. Mostafa},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Recently, sophisticated multi-sensor systems have been implemented on board modern Unmanned Aerial Systems. This allows for producing a variety of mapping products for different mapping applications. The resulting accuracies match those of traditional, well-engineered manned systems. This paper presents the results of a geometric accuracy assessment project for unmanned systems equipped with multi-sensor systems for direct georeferencing purposes. There are a number of parameters that either individually or collectively affect the quality and accuracy of a final airborne mapping product. This paper focuses on identifying and explaining these parameters and their mutual interaction and correlation. Accuracy assessment of the final ground object positioning is presented through 8 real-world flight missions flown in Canada. The achievable precision of map production is addressed in some detail.},
    }

  • A. K. Nasir and M. Tharani, “USE OF GREENDRONE UAS SYSTEM FOR MAIZE CROP MONITORING,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    This research work presents the use of a low-cost Unmanned Aerial System (UAS), GreenDrone, for the monitoring of a maize crop. GreenDrone consists of a long-endurance fixed-wing airframe equipped with a modified Canon camera for the calculation of the Normalized Difference Vegetation Index (NDVI) and a FLIR thermal camera for Water Stress Index (WSI) calculations. Several flights were conducted over the study site in order to acquire data during different phases of crop growth. From the computed NDVI and NGB images we were able to identify areas of potentially low yield, spatial variability in plant counts, and irregularities in nitrogen and water application. Furthermore, some parameters which are important for the acquisition of good aerial images, in order to create a quality orthomosaic image, are also discussed.

    @InProceedings{nasir2017uavg-29,
    Title = {USE OF GREENDRONE UAS SYSTEM FOR MAIZE CROP MONITORING},
    Author = {A.K. Nasir and M. Tharani},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {This research work presents the use of a low-cost Unmanned Aerial System (UAS), GreenDrone, for maize crop monitoring. GreenDrone consists of a long-endurance fixed-wing airframe equipped with a modified Canon camera for the calculation of the Normalized Difference Vegetation Index (NDVI) and a FLIR thermal camera for Water Stress Index (WSI) calculations. Several flights were conducted over the study site in order to acquire data during different phases of crop growth. By calculating NDVI and NGB images we were able to identify areas of potentially low yield, spatial variability in plant counts, irregularities in nitrogen application, and issues related to water application. Furthermore, some parameters which are important for the acquisition of good aerial images, in order to create a high-quality orthomosaic, are also discussed.},
    }
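
    A minimal sketch of the NDVI computation mentioned in the abstract, assuming two co-registered single-band rasters from the modified camera; the file names and the low-yield threshold are hypothetical:

      import numpy as np
      import rasterio

      with rasterio.open("nir.tif") as f_nir, rasterio.open("red.tif") as f_red:
          nir = f_nir.read(1).astype(np.float32)
          red = f_red.read(1).astype(np.float32)

      # NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero.
      ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
      low_yield = ndvi < 0.4   # assumed threshold flagging potentially low-yield areas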

  • N. H. M. Nasir and K. N. Tahar, “3D MODEL GENERATION FROM UAV: HISTORICAL MOSQUE (MASJID LAMA NILAI),” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Preserving cultural heritage and historic sites is an important issue. These sites are subject to erosion and vandalism, and, as long-lived artifacts, they have gone through many phases of construction, damage and repair. It is important to keep an accurate record of these sites as they currently are, using 3D model building technology, so that preservationists can track changes, foresee structural problems, and allow a wider audience to virtually see and tour these sites. Due to the complexity of these sites, building 3D models is time-consuming and difficult, usually involving much manual effort. This study discusses new methods that can reduce the time needed to build a model using an Unmanned Aerial Vehicle, and aims to develop a 3D model of a historical mosque using UAV photogrammetry. To achieve this, a dataset of Masjid Lama Nilai, Negeri Sembilan, was captured using an Unmanned Aerial Vehicle. In addition, an accuracy assessment between the actual and measured values is made, and a comparison between the rendered and the textured 3D model is also carried out.

    @InProceedings{nasir2017uavg-14,
    Title = {3D MODEL GENERATION FROM UAV: HISTORICAL MOSQUE (MASJID LAMA NILAI)},
    Author = {N.H.M. Nasir and K.N. Tahar},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Preserving cultural heritage and historic sites is an important issue. These sites are subject to erosion and vandalism, and, as long-lived artifacts, they have gone through many phases of construction, damage and repair. It is important to keep an accurate record of these sites as they currently are, using 3D model building technology, so that preservationists can track changes, foresee structural problems, and allow a wider audience to virtually see and tour these sites. Due to the complexity of these sites, building 3D models is time-consuming and difficult, usually involving much manual effort. This study discusses new methods that can reduce the time needed to build a model using an Unmanned Aerial Vehicle, and aims to develop a 3D model of a historical mosque using UAV photogrammetry. To achieve this, a dataset of Masjid Lama Nilai, Negeri Sembilan, was captured using an Unmanned Aerial Vehicle. In addition, an accuracy assessment between the actual and measured values is made, and a comparison between the rendered and the textured 3D model is also carried out.},
    }

  • S. Natesan, G. Benari, C. Armenakis, and R. Lee, “LAND COVER CLASSIFICATION USING A UAV-BORNE SPECTROMETER,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Small fixed-wing and rotor-copter unmanned aerial vehicles (UAVs) are being used for low-altitude remote sensing for thematic land classification and precision agriculture applications. Various sensors operating in the non-visible spectrum, such as multispectral, hyperspectral and thermal sensors, can be used as payloads. This work presents a preliminary study on the use of an unmanned aerial vehicle equipped with a compact spectrometer for land cover type characterization. When calibrated, the spectra measured by the UAV spectrometer can be processed and compared with reference data to generate georeferenced reflection spectra, enabling the identification, classification and characterization of land cover elements. For this case study we used a DJI Flamewheel F550 hexacopter and the FLAME-NIR spectrometer for hyperspectral measurements. The calibration of the spectrometer is described, as well as the approach used to determine its spatial footprint. The ground points labelled with the spectrometer's spectral exposures can then be used to determine the land cover classification. Preliminary results of a case study are presented.

    @InProceedings{natesan2017uavg-98,
    Title = {LAND COVER CLASSIFICATION USING A UAV-BORNE SPECTROMETER},
    Author = {S. Natesan and G. Benari and C. Armenakis and R. Lee},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Small fixed-wing and rotor-copter unmanned aerial vehicles (UAVs) are being used for low-altitude remote sensing for thematic land classification and precision agriculture applications. Various sensors operating in the non-visible spectrum, such as multispectral, hyperspectral and thermal sensors, can be used as payloads. This work presents a preliminary study on the use of an unmanned aerial vehicle equipped with a compact spectrometer for land cover type characterization. When calibrated, the spectra measured by the UAV spectrometer can be processed and compared with reference data to generate georeferenced reflection spectra, enabling the identification, classification and characterization of land cover elements. For this case study we used a DJI Flamewheel F550 hexacopter and the FLAME-NIR spectrometer for hyperspectral measurements. The calibration of the spectrometer is described, as well as the approach used to determine its spatial footprint. The ground points labelled with the spectrometer's spectral exposures can then be used to determine the land cover classification. Preliminary results of a case study are presented.},
    }
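
    One common way to turn labelled reference spectra into a land cover class, consistent with the comparison against reference data described above, is the spectral angle. The sketch below is a generic illustration with invented four-band spectra, not the authors' processing chain.

      import numpy as np

      def spectral_angle(s, r):
          """Angle (radians) between a measured spectrum s and a reference spectrum r."""
          c = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
          return np.arccos(np.clip(c, -1.0, 1.0))

      references = {                      # hypothetical reference reflectance spectra
          "grass":   np.array([0.05, 0.08, 0.45, 0.50]),
          "asphalt": np.array([0.10, 0.12, 0.14, 0.15]),
          "water":   np.array([0.03, 0.04, 0.02, 0.01]),
      }

      measured = np.array([0.06, 0.09, 0.40, 0.47])  # one georeferenced exposure
      label = min(references, key=lambda k: spectral_angle(measured, references[k]))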

  • R. A. Persad and C. Armenakis, “COMPARISON OF 2D AND 3D APPROACHES FOR THE ALIGNMENT OF UAV AND LIDAR POINT CLOUDS,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    The automatic alignment of 3D point clouds acquired or generated from different sensors is a challenging problem. The objective of the alignment is to estimate the 3D similarity transformation parameters, including a global scale factor, 3 rotations and 3 translations. To do so, corresponding anchor features are required in both data sets. There are two main types of alignment: i) coarse alignment and ii) refined alignment. Coarse alignment issues include the lack of any prior knowledge of the respective coordinate systems for a source and target point cloud pair and the difficulty of extracting and matching corresponding control features (e.g., points, lines or planes) co-located on both point cloud pairs to be aligned. With the increasing use of UAVs, there is a need to automatically co-register their generated point cloud-based digital surface models with those from other data acquisition systems such as terrestrial or airborne lidar point clouds. This work presents a comparative study of two independent feature matching techniques for addressing 3D conformal point cloud alignment of UAV and lidar data in different 3D coordinate systems without any prior knowledge of the seven transformation parameters.

    @InProceedings{persad2017uavg-68,
    Title = {COMPARISON OF 2D AND 3D APPROACHES FOR THE ALIGNMENT OF UAV AND LIDAR POINT CLOUDS},
    Author = {R.A. Persad and C. Armenakis},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {The automatic alignment of 3D point clouds acquired or generated from different sensors is a challenging problem. The objective of the alignment is to estimate the 3D similarity transformation parameters, including a global scale factor, 3 rotations and 3 translations. To do so, corresponding anchor features are required in both data sets. There are two main types of alignment: i) coarse alignment and ii) refined alignment. Coarse alignment issues include the lack of any prior knowledge of the respective coordinate systems for a source and target point cloud pair and the difficulty of extracting and matching corresponding control features (e.g., points, lines or planes) co-located on both point cloud pairs to be aligned. With the increasing use of UAVs, there is a need to automatically co-register their generated point cloud-based digital surface models with those from other data acquisition systems such as terrestrial or airborne lidar point clouds. This work presents a comparative study of two independent feature matching techniques for addressing 3D conformal point cloud alignment of UAV and lidar data in different 3D coordinate systems without any prior knowledge of the seven transformation parameters.},
    }
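
    Once correspondences are found, the seven-parameter (3D conformal) transformation mentioned above has a closed-form least-squares solution (Horn/Umeyama style). The numpy sketch below is offered as background, not as either of the compared matching techniques.

      # Sketch: closed-form 3D similarity transform, dst ~ s * R @ src + t.
      import numpy as np

      def similarity_transform(src, dst):
          """src, dst: (N, 3) arrays of corresponding points."""
          mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
          A, B = src - mu_s, dst - mu_d
          U, S, Vt = np.linalg.svd(A.T @ B)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
          R = Vt.T @ D @ U.T
          s = np.sum(B * (A @ R.T)) / np.sum(A ** 2)  # least-squares optimal global scale
          t = mu_d - s * R @ mu_s
          return s, R, t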

  • M. Piras, V. Di Pietra, and D. Visintini, “3D MODELING OF INDUSTRIAL HERITAGE BUILDING USING COTSS SYSTEM: TEST, LIMITS AND PERFORMANCES,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    The role of UAV systems in applied geomatics is continuously increasing in several applications, such as inspection, surveying and geospatial data acquisition. This evolution is mainly due to two factors: new technologies and new algorithms for data processing. On the technology side, commercial UAVs, including COTS (Commercial Off-The-Shelf) systems, have been in very wide use for some years. Moreover, these UAVs make it easy to acquire oblique images, giving the possibility to overcome the limitations of the nadir approach related to the field of view and occlusions. In order to test the potential and issues of COTS systems, the Italian Society of Photogrammetry and Topography (SIFET) has organised the SBM2017, a benchmark in which anyone can participate in a shared experience. This benchmark, called Photogrammetry with oblique images from UAV: potentialities and challenges, makes it possible to collect considerations from the users, highlight the potential of these systems, define the critical aspects and the technological challenges, and compare distinct approaches and software. The case study is the Fornace Penna in Scicli (Ragusa, Italy), an inaccessible monument of industrial architecture from the early 1900s. The datasets (images and video) have been acquired from three different UAV systems: Parrot Bebop 2, DJI Phantom 4 and Flytop Flynovex. The aim of this benchmark is to generate the 3D model of the Fornace Penna, with an analysis considering different software, imaging geometry and processing strategies. This paper describes the surveying strategies, the methodologies and five different photogrammetric results (sensor calibration, exterior orientation, dense point cloud and two orthophotos), obtained separately from the single images and from the frames extracted from the video acquired with the DJI system.

    @InProceedings{piras2017uavg-88,
    Title = {3D MODELING OF INDUSTRIAL HERITAGE BUILDING USING COTSS SYSTEM: TEST, LIMITS AND PERFORMANCES},
    Author = {M. Piras and V. {Di Pietra} and D. Visintini},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {The role of UAV systems in applied geomatics is continuously increasing in several applications, such as inspection, surveying and geospatial data acquisition. This evolution is mainly due to two factors: new technologies and new algorithms for data processing. On the technology side, commercial UAVs, including COTS (Commercial Off-The-Shelf) systems, have been in very wide use for some years. Moreover, these UAVs make it easy to acquire oblique images, giving the possibility to overcome the limitations of the nadir approach related to the field of view and occlusions. In order to test the potential and issues of COTS systems, the Italian Society of Photogrammetry and Topography (SIFET) has organised the SBM2017, a benchmark in which anyone can participate in a shared experience. This benchmark, called Photogrammetry with oblique images from UAV: potentialities and challenges, makes it possible to collect considerations from the users, highlight the potential of these systems, define the critical aspects and the technological challenges, and compare distinct approaches and software. The case study is the Fornace Penna in Scicli (Ragusa, Italy), an inaccessible monument of industrial architecture from the early 1900s. The datasets (images and video) have been acquired from three different UAV systems: Parrot Bebop 2, DJI Phantom 4 and Flytop Flynovex. The aim of this benchmark is to generate the 3D model of the Fornace Penna, with an analysis considering different software, imaging geometry and processing strategies. This paper describes the surveying strategies, the methodologies and five different photogrammetric results (sensor calibration, exterior orientation, dense point cloud and two orthophotos), obtained separately from the single images and from the frames extracted from the video acquired with the DJI system.},
    }

  • M. Piras, N. Grasso, and A. A. Jabbar, “UAV PHOTOGRAMMETRIC SOLUTION USING A RASPBERRY PI CAMERA MODULE AND SMART DEVICES: TEST AND RESULTS,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Nowadays, smart technologies are an important part of our actions and lives, both in indoor and outdoor environments. Several smart devices are very easy to set up, can be integrated and embedded with other sensors, and have a very low cost. The Raspberry Pi can host a dedicated camera, the Raspberry Pi Camera Module, available in both RGB and NIR versions. The advantages of this system are its limited cost (< 60 euro), its light weight and its simplicity of use and integration. This paper describes research in which a Raspberry Pi with the Camera Module was installed on a hexacopter UAV based on the ArduCopter system, with the purpose of collecting pictures for photogrammetry. First, the system was tested with the aim of verifying the performance of the RPi camera in terms of frames per second / resolution and the power requirement. Moreover, a GNSS receiver, a Ublox M8T, was installed and connected to the Raspberry platform in order to collect the real-time position and the raw data, for data processing and to define the time reference. The IMU was also tested to see the impact of UAV rotor noise on different sensors, namely the accelerometer, gyroscope and magnetometer. A comparison of the achieved results (accuracy) on some control points of the point clouds obtained by the camera is reported as well, in order to analyse in more depth the main discrepancies in the generated point cloud and the potential of the proposed approach. In this contribution, the assembly of the system is described; in particular, the acquired dataset and the obtained results are analysed.

    @InProceedings{piras2017uavg-84,
    Title = {UAV PHOTOGRAMMETRIC SOLUTION USING A RASPBERRY PI CAMERA MODULE AND SMART DEVICES: TEST AND RESULTS},
    Author = {M. Piras and N. Grasso and A.A. Jabbar},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Nowadays, smart technologies are an important part of our actions and lives, both in indoor and outdoor environments. Several smart devices are very easy to set up, can be integrated and embedded with other sensors, and have a very low cost. The Raspberry Pi can host a dedicated camera, the Raspberry Pi Camera Module, available in both RGB and NIR versions. The advantages of this system are its limited cost (< 60 euro), its light weight and its simplicity of use and integration. This paper describes research in which a Raspberry Pi with the Camera Module was installed on a hexacopter UAV based on the ArduCopter system, with the purpose of collecting pictures for photogrammetry. First, the system was tested with the aim of verifying the performance of the RPi camera in terms of frames per second / resolution and the power requirement. Moreover, a GNSS receiver, a Ublox M8T, was installed and connected to the Raspberry platform in order to collect the real-time position and the raw data, for data processing and to define the time reference. The IMU was also tested to see the impact of UAV rotor noise on different sensors, namely the accelerometer, gyroscope and magnetometer. A comparison of the achieved results (accuracy) on some control points of the point clouds obtained by the camera is reported as well, in order to analyse in more depth the main discrepancies in the generated point cloud and the potential of the proposed approach. In this contribution, the assembly of the system is described; in particular, the acquired dataset and the obtained results are analysed.},
    }
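
    For reference, capturing a timestamped still with the Raspberry Pi Camera Module takes only a few lines with the picamera library. The resolution, storage path and use of system UTC time (rather than the Ublox time reference described above) are illustrative assumptions.

      from datetime import datetime
      from picamera import PiCamera

      camera = PiCamera()
      camera.resolution = (3280, 2464)   # V2 module full resolution

      # Timestamped file name; a real setup would use the GNSS time reference.
      stamp = datetime.utcnow().strftime("%Y%m%dT%H%M%S")
      camera.capture("/home/pi/flight/{}.jpg".format(stamp))  # hypothetical path
      camera.close()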

  • M. Pircher, J. Geipel, K. Kusnierek, and A. Korsaeth, “DEVELOPMENT OF A HYBRID UAV SENSOR PLATFORM SUITABLE FOR FARM-SCALE APPLICATIONS IN PRECISION AGRICULTURE,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Today's precision farming applications have a huge demand for data with high spatial and temporal resolution. This leads to the need for unmanned aerial vehicles (UAVs) as sensor platforms providing both easy use and high area coverage. This study shows the successful development of a prototype hybrid UAV for practical applications in precision agriculture. The UAV consists of an off-the-shelf fixed-wing fuselage, which has been enhanced with multi-rotor functionality. It was programmed to perform pre-defined waypoint missions completely autonomously, including vertical take-off, horizontal flight, and vertical landing. The UAV was tested for its return-to-home (RTH) accuracy, power consumption and general flight performance at different wind speeds. The RTH accuracy was 43.7 cm on average, with a root-mean-square error of 39.9 cm. The power consumption rose with increasing wind speed. An extrapolation of the analysed power consumption to conditions without wind resulted in an estimated 40 km travel range, assuming a 25 % safety margin of remaining battery capacity. This translates to a maximal area coverage of 300 ha for a scenario with 18 m/s airspeed, 50 minutes flight time, 120 m AGL altitude, and a desired 70 % image side-lap and 85 % forward-lap. The ground sample distance with the in-built RGB camera was 3.5 cm, which we consider sufficient for farm-scale mapping missions for most precision agriculture applications.

    @InProceedings{pircher2017uavg-32,
    Title = {DEVELOPMENT OF A HYBRID UAV SENSOR PLATFORM SUITABLE FOR FARM-SCALE APPLICATIONS IN PRECISION AGRICULTURE},
    Author = {M. Pircher and J. Geipel and K. Kusnierek and A. Korsaeth},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Today's precision farming applications have a huge demand for data with high spatial and temporal resolution. This leads to the need for unmanned aerial vehicles (UAVs) as sensor platforms providing both easy use and high area coverage. This study shows the successful development of a prototype hybrid UAV for practical applications in precision agriculture. The UAV consists of an off-the-shelf fixed-wing fuselage, which has been enhanced with multi-rotor functionality. It was programmed to perform pre-defined waypoint missions completely autonomously, including vertical take-off, horizontal flight, and vertical landing. The UAV was tested for its return-to-home (RTH) accuracy, power consumption and general flight performance at different wind speeds. The RTH accuracy was 43.7 cm on average, with a root-mean-square error of 39.9 cm. The power consumption rose with increasing wind speed. An extrapolation of the analysed power consumption to conditions without wind resulted in an estimated 40 km travel range, assuming a 25 % safety margin of remaining battery capacity. This translates to a maximal area coverage of 300 ha for a scenario with 18 m/s airspeed, 50 minutes flight time, 120 m AGL altitude, and a desired 70 % image side-lap and 85 % forward-lap. The ground sample distance with the in-built RGB camera was 3.5 cm, which we consider sufficient for farm-scale mapping missions for most precision agriculture applications.},
    }
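
    The quoted 300 ha figure can be sanity-checked with back-of-envelope strip arithmetic from the mission parameters above. The image footprint width is an assumption (the abstract does not state the camera's field of view) and turns between flight lines are ignored, so the result is only order-of-magnitude.

      airspeed    = 18.0        # m/s
      flight_time = 50 * 60.0   # s
      side_lap    = 0.70
      footprint_w = 160.0       # m, assumed image footprint width at 120 m AGL

      strip_spacing = footprint_w * (1.0 - side_lap)     # 48 m between flight lines
      line_length   = airspeed * flight_time             # 54 km of track, turns ignored
      area_ha       = strip_spacing * line_length / 1e4  # ~259 ha, same order as 300 ha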

  • A. Raimundo, D. Peres, N. Santos, P. Sebastiao, and N. Souto, “USING DISTANCE SENSORS TO PERFORM COLLISION AVOIDANCE MANEUVRES ON UAV APPLICATIONS,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Unmanned Aerial Vehicles (UAVs) and their applications are growing for both civilian and military purposes. The operability of UAVs has proved that some tasks and operations can be done easily and at a good cost-efficiency ratio. Nowadays, a UAV can perform autonomous missions, which is very useful for certain UAV applications, such as meteorology, surveillance systems, agriculture, environment mapping and search and rescue operations. One of the biggest problems that a UAV faces is the possibility of collision with other objects in the flight area. To avoid this, the Sense and Avoid algorithm was developed and implemented as a system for UAVs to avoid objects on a collision course. This algorithm uses a Light Detection and Ranging (LiDAR) sensor to detect objects in front of the UAV in mid-flight. This light sensor is connected to on-board hardware, the Pixhawk flight controller, which interfaces its communications with another piece of hardware, a Raspberry Pi. Communications between the Ground Control Station and the UAV are made via Wi-Fi or third- or fourth-generation cellular networks (3G/4G). Some tests were made in order to evaluate the Sense and Avoid algorithm's overall performance. These tests were done in two different environments: a 3D simulated environment and a real outdoor environment. Both modes worked successfully in the simulated 3D environment, and the Brake mode also worked outdoors, proving the concept.

    @InProceedings{raimundo2017uavg-33,
    Title = {USING DISTANCE SENSORS TO PERFORM COLLISION AVOIDANCE MANEUVRES ON UAV APPLICATIONS},
    Author = {A. Raimundo and D. Peres and N. Santos and P. Sebastiao and N. Souto},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Unmanned Aerial Vehicles (UAVs) and their applications are growing for both civilian and military purposes. The operability of UAVs has proved that some tasks and operations can be done easily and at a good cost-efficiency ratio. Nowadays, a UAV can perform autonomous missions, which is very useful for certain UAV applications, such as meteorology, surveillance systems, agriculture, environment mapping and search and rescue operations. One of the biggest problems that a UAV faces is the possibility of collision with other objects in the flight area. To avoid this, the Sense and Avoid algorithm was developed and implemented as a system for UAVs to avoid objects on a collision course. This algorithm uses a Light Detection and Ranging (LiDAR) sensor to detect objects in front of the UAV in mid-flight. This light sensor is connected to on-board hardware, the Pixhawk flight controller, which interfaces its communications with another piece of hardware, a Raspberry Pi. Communications between the Ground Control Station and the UAV are made via Wi-Fi or third- or fourth-generation cellular networks (3G/4G). Some tests were made in order to evaluate the Sense and Avoid algorithm's overall performance. These tests were done in two different environments: a 3D simulated environment and a real outdoor environment. Both modes worked successfully in the simulated 3D environment, and the Brake mode also worked outdoors, proving the concept.},
    }
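
    A skeletal view of a Brake-style guard loop is sketched below. The vehicle object and read_lidar_range stub are hypothetical stand-ins, not the authors' Pixhawk/Raspberry Pi code; ArduCopter does provide a Brake flight mode that holds position.

      BRAKE_DISTANCE_M = 5.0

      def read_lidar_range():
          """Hypothetical stand-in for the forward LiDAR driver; returns metres."""
          return 12.3

      def collision_guard(vehicle):
          """Switch to ArduCopter's Brake mode when an obstacle gets too close."""
          while vehicle.armed:
              if read_lidar_range() < BRAKE_DISTANCE_M:
                  vehicle.set_mode("BRAKE")   # hypothetical vehicle API
                  break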

  • J. Y. Rau, K. W. Hsiao, J. P. Jhan, S. H. Wang, W. C. Fang, and J. L. Wang, “BRIDGE CRACK DETECTION USING MULTI-ROTARY UAV AND OBJECT-BASE IMAGE ANALYSIS,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Bridges are important infrastructure for human life; thus, bridge safety monitoring and maintenance is an important issue for governments. Conventionally, bridge inspection has been conducted by in-situ human visual examination. This procedure sometimes requires an under-bridge inspection vehicle or climbing under the bridge in person. It is therefore costly and risky, as well as labor-intensive and time-consuming. In particular, its documentation procedure is subjective and lacks 3D spatial information. To cope with these challenges, this paper proposes the use of a multi-rotor UAV equipped with a SONY A7r2 high-resolution digital camera, a 50 mm fixed focal length lens and a gimbal that rotates 135 degrees up and down. The target bridge has three spans and is in total 60 meters long, 20 meters wide and 8 meters above the water level. In the end, we took about 10,000 images, but some of them were acquired hand-held from the ground using a pole 2-8 meters long. Those images were processed with Agisoft PhotoscanPro to obtain exterior and interior orientation parameters. A local coordinate system was defined by using 12 ground control points measured by a total station. After triangulation and camera self-calibration, the RMS of the control points is less than 3 cm. A 3D CAD model that describes the bridge surface geometry was manually measured in PhotoscanPro. It is composed of planar polygons and is used for searching related UAV images. Additionally, a photorealistic 3D model can be produced for 3D visualization. In order to detect cracks on the bridge surface, we utilize the object-based image analysis (OBIA) technique to segment the image into objects. Later, we derive several object features, such as density, area/bounding box ratio, length/width ratio, length, etc. Then, we can set up a classification rule set to distinguish cracks. Further, we apply semi-global matching (SGM) to obtain 3D crack information and, based on the image scale, we can calculate the width of a crack object. For spalling volume calculation, we also apply SGM to obtain dense surface geometry. Assuming the background is a planar surface, we can fit a planar function and convert the surface geometry into a DSM. Thus, for a spalling area the height will be lower than the plane and its value will be negative. We can then apply several image processing techniques to segment the spalling area and calculate the spalling volume as well. For bridge inspection and UAV image management within a laboratory, we develop a graphical user interface. The major functions include automatic crack detection using OBIA, crack editing (i.e. deleting and adding cracks), crack attributing, 3D crack visualization, spalling area/volume calculation, bridge defect documentation, etc.

    @InProceedings{rau2017uavg-54,
    Title = {BRIDGE CRACK DETECTION USING MULTI-ROTARY UAV AND OBJECT-BASE IMAGE ANALYSIS},
    Author = {J.Y. Rau and K.W. Hsiao and J.P. Jhan and S.H. Wang and W.C. Fang and J.L. Wang},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Bridges are important infrastructure for human life; thus, bridge safety monitoring and maintenance is an important issue for governments. Conventionally, bridge inspection has been conducted by in-situ human visual examination. This procedure sometimes requires an under-bridge inspection vehicle or climbing under the bridge in person. It is therefore costly and risky, as well as labor-intensive and time-consuming. In particular, its documentation procedure is subjective and lacks 3D spatial information. To cope with these challenges, this paper proposes the use of a multi-rotor UAV equipped with a SONY A7r2 high-resolution digital camera, a 50 mm fixed focal length lens and a gimbal that rotates 135 degrees up and down. The target bridge has three spans and is in total 60 meters long, 20 meters wide and 8 meters above the water level. In the end, we took about 10,000 images, but some of them were acquired hand-held from the ground using a pole 2-8 meters long. Those images were processed with Agisoft PhotoscanPro to obtain exterior and interior orientation parameters. A local coordinate system was defined by using 12 ground control points measured by a total station. After triangulation and camera self-calibration, the RMS of the control points is less than 3 cm. A 3D CAD model that describes the bridge surface geometry was manually measured in PhotoscanPro. It is composed of planar polygons and is used for searching related UAV images. Additionally, a photorealistic 3D model can be produced for 3D visualization. In order to detect cracks on the bridge surface, we utilize the object-based image analysis (OBIA) technique to segment the image into objects. Later, we derive several object features, such as density, area/bounding box ratio, length/width ratio, length, etc. Then, we can set up a classification rule set to distinguish cracks. Further, we apply semi-global matching (SGM) to obtain 3D crack information and, based on the image scale, we can calculate the width of a crack object. For spalling volume calculation, we also apply SGM to obtain dense surface geometry. Assuming the background is a planar surface, we can fit a planar function and convert the surface geometry into a DSM. Thus, for a spalling area the height will be lower than the plane and its value will be negative. We can then apply several image processing techniques to segment the spalling area and calculate the spalling volume as well. For bridge inspection and UAV image management within a laboratory, we develop a graphical user interface. The major functions include automatic crack detection using OBIA, crack editing (i.e. deleting and adding cracks), crack attributing, 3D crack visualization, spalling area/volume calculation, bridge defect documentation, etc.},
    }
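
    The spalling-volume idea described above (fit a plane to the DSM, treat negative residuals as material loss) reduces to a few lines of numpy. The depth threshold and the plain least-squares plane fit are assumptions of this sketch, not the paper's exact procedure.

      import numpy as np

      def spalling_volume(dsm, cell_area):
          """dsm: 2D array of heights (m); cell_area: pixel footprint (m^2)."""
          rows, cols = np.indices(dsm.shape)
          G = np.column_stack([rows.ravel(), cols.ravel(), np.ones(dsm.size)])
          coeff, *_ = np.linalg.lstsq(G, dsm.ravel(), rcond=None)  # z = a*r + b*c + d
          residuals = dsm - (G @ coeff).reshape(dsm.shape)
          spalling = residuals < -0.005        # assumed 5 mm depth threshold
          return np.sum(-residuals[spalling]) * cell_area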

  • S. Rhee and T. Kim, “INVESTIGATION OF 1:1,000 SCALE MAP GENERATION BY STEREO PLOTTING USING UAV IMAGES,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Large-scale maps and image mosaics are representative geospatial data that can be extracted from UAV images. Map drawing using UAV images can be performed either by creating orthoimages and digitizing them, or by stereo plotting. While maps generated by digitization may serve the need for geospatial data, many institutions and organizations require map drawing using stereoscopic vision on stereo plotting systems. However, there are several aspects to be checked for UAV images to be utilized for stereo plotting. The first aspect is the accuracy of exterior orientation parameters (EOPs) generated through automated bundle adjustment processes. It is well known that GPS and IMU sensors mounted on a UAV are not very accurate, so it is necessary to adjust the initial EOPs accurately using tie points. For this purpose, we have developed a photogrammetric incremental bundle adjustment procedure. The second aspect is unstable shooting conditions compared to aerial photography. Unstable image acquisition may produce uneven stereo coverage, which will eventually result in accuracy loss, and oblique stereo pairs will create eye fatigue. The third aspect is the small coverage of UAV images. This aspect raises an efficiency issue for stereo plotting of UAV images and, more importantly, makes contour generation from UAV images very difficult. This paper discusses effects related to these three aspects. In this study, we tried to generate a 1:1,000 scale map from the dataset using EOPs generated with software developed in-house. We evaluated the Y-disparity of the tie points extracted automatically through the photogrammetric incremental bundle adjustment process and could confirm that stereoscopic viewing is possible. Stereoscopic plotting work was carried out by a professional photogrammetrist. In order to analyse the accuracy of map drawing using stereoscopic vision, we compared the horizontal and vertical position differences between adjacent models after drawing a specific model. The results of the analysis showed that the errors were within the specification for a 1:1,000 map. Although the Y-parallax can be eliminated, it is still necessary to improve the absolute ground positioning accuracy in order to apply this technique to actual work: there are a few models in which the difference in height between adjacent models is about 40 cm. We analysed the stability of UAV images by checking angle differences between adjacent images. We also analysed the average area covered by one stereo model and discuss the possible difficulty associated with this narrow coverage. In future work we will consider how to reduce position errors and improve map drawing performance from UAVs.

    @InProceedings{rhee2017uavg-74,
    Title = {INVESTIGATION OF 1:1,000 SCALE MAP GENERATION BY STEREO PLOTTING USING UAV IMAGES},
    Author = {S. Rhee and T. Kim},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Large-scale maps and image mosaics are representative geospatial data that can be extracted from UAV images. Map drawing using UAV images can be performed either by creating orthoimages and digitizing them, or by stereo plotting. While maps generated by digitization may serve the need for geospatial data, many institutions and organizations require map drawing using stereoscopic vision on stereo plotting systems. However, there are several aspects to be checked for UAV images to be utilized for stereo plotting. The first aspect is the accuracy of exterior orientation parameters (EOPs) generated through automated bundle adjustment processes. It is well known that GPS and IMU sensors mounted on a UAV are not very accurate, so it is necessary to adjust the initial EOPs accurately using tie points. For this purpose, we have developed a photogrammetric incremental bundle adjustment procedure. The second aspect is unstable shooting conditions compared to aerial photography. Unstable image acquisition may produce uneven stereo coverage, which will eventually result in accuracy loss, and oblique stereo pairs will create eye fatigue. The third aspect is the small coverage of UAV images. This aspect raises an efficiency issue for stereo plotting of UAV images and, more importantly, makes contour generation from UAV images very difficult. This paper discusses effects related to these three aspects. In this study, we tried to generate a 1:1,000 scale map from the dataset using EOPs generated with software developed in-house. We evaluated the Y-disparity of the tie points extracted automatically through the photogrammetric incremental bundle adjustment process and could confirm that stereoscopic viewing is possible. Stereoscopic plotting work was carried out by a professional photogrammetrist. In order to analyse the accuracy of map drawing using stereoscopic vision, we compared the horizontal and vertical position differences between adjacent models after drawing a specific model. The results of the analysis showed that the errors were within the specification for a 1:1,000 map. Although the Y-parallax can be eliminated, it is still necessary to improve the absolute ground positioning accuracy in order to apply this technique to actual work: there are a few models in which the difference in height between adjacent models is about 40 cm. We analysed the stability of UAV images by checking angle differences between adjacent images. We also analysed the average area covered by one stereo model and discuss the possible difficulty associated with this narrow coverage. In future work we will consider how to reduce position errors and improve map drawing performance from UAVs.},
    }
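
    The Y-disparity evaluation mentioned above amounts to checking the residual vertical parallax of tie points after orientation; a minimal sketch, assuming tie point coordinates in an epipolar-rectified pair:

      import numpy as np

      def rms_y_disparity(pts_left, pts_right):
          """pts_*: (N, 2) tie point image coordinates after epipolar rectification."""
          dy = pts_left[:, 1] - pts_right[:, 1]
          return np.sqrt(np.mean(dy ** 2))   # small RMS => comfortable stereo viewing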

  • B. Ruf, B. Erdnuess, and M. Weinmann, “DETERMINING PLANE-SWEEP SAMPLING POINTS IN IMAGE SPACE USING THE CROSS-RATIO FOR IMAGE-BASED DEPTH ESTIMATION,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance of and interest in image-based depth estimation and model generation from aerial images have greatly increased in the photogrammetric community. In our work, we focus on algorithms that allow online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. We use a multi-view plane-sweep algorithm with semi-global matching (SGM) optimization, parallelized for general-purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect in reaching good performance is the way the scene space is sampled when creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions due to the perspective projection of the camera. Furthermore, equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective-invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving a higher sampling density in areas close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that inverse sampling achieves equal or better results than linear sampling, with fewer sampling points and thus less runtime. Our algorithm allows online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.

    @InProceedings{ruf2017uavg-21,
    Title = {DETERMINING PLANE-SWEEP SAMPLING POINTS IN IMAGE SPACE USING THE CROSS-RATIO FOR IMAGE-BASED DEPTH ESTIMATION},
    Author = {B. Ruf and B. Erdnuess and M. Weinmann},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance of and interest in image-based depth estimation and model generation from aerial images have greatly increased in the photogrammetric community. In our work, we focus on algorithms that allow online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. We use a multi-view plane-sweep algorithm with semi-global matching (SGM) optimization, parallelized for general-purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect in reaching good performance is the way the scene space is sampled when creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions due to the perspective projection of the camera. Furthermore, equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective-invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving a higher sampling density in areas close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that inverse sampling achieves equal or better results than linear sampling, with fewer sampling points and thus less runtime. Our algorithm allows online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.},
    }
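
    The image-space sampling the paper derives via the cross-ratio is closely related to sampling uniformly in inverse depth, which keeps the pixel displacement between consecutive plane hypotheses roughly constant (dense planes near the camera, sparse planes far away). A minimal sketch of that inverse sampling; the sweep range and plane count are invented:

      import numpy as np

      def inverse_depth_planes(d_near, d_far, n):
          """Plane distances spaced uniformly in inverse depth."""
          inv = np.linspace(1.0 / d_near, 1.0 / d_far, n)
          return 1.0 / inv

      planes = inverse_depth_planes(5.0, 200.0, 64)  # hypothetical sweep range in metres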

  • A. F. Scannapieco, A. Renga, G. Fasano, and A. Moccia, “ULTRALIGHT RADAR FOR SMALL AND MICRO-UAV NAVIGATION,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    This paper presents a radar approach to the navigation of small and micro Unmanned Aerial Vehicles (UAVs) in environments challenging for common sensors. A technique based on radar odometry is briefly explained and schemes for complete integration with other sensors are proposed. The focus of the paper is on ultralight radars and the interpretation of the outputs of such a sensor for autonomous navigation in complex scenarios. The experimental setup used to analyse the proposed approach comprises one multi-rotor UAV and one ultralight commercial radar. Results from flight tests involving both forward-only motion and mixed motion are presented and analysed, providing a reference for understanding the outputs of radar in complex scenarios. The radar odometry solution is compared with ground truth provided by a GPS sensor.

    @InProceedings{scannapieco2017uavg-58,
    Title = {ULTRALIGHT RADAR FOR SMALL AND MICRO-UAV NAVIGATION},
    Author = {A.F. Scannapieco and A. Renga and G. Fasano and A. Moccia},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {This paper presents a radar approach to the navigation of small and micro Unmanned Aerial Vehicles (UAVs) in environments challenging for common sensors. A technique based on radar odometry is briefly explained and schemes for complete integration with other sensors are proposed. The focus of the paper is on ultralight radars and the interpretation of the outputs of such a sensor for autonomous navigation in complex scenarios. The experimental setup used to analyse the proposed approach comprises one multi-rotor UAV and one ultralight commercial radar. Results from flight tests involving both forward-only motion and mixed motion are presented and analysed, providing a reference for understanding the outputs of radar in complex scenarios. The radar odometry solution is compared with ground truth provided by a GPS sensor.},
    }

  • J. Schmiemann, H. Harms, J. Schattenberg, M. Becker, S. Batzdorfer, and L. Frerichs, “A DISTRIBUTED ONLINE 3D-LIDAR MAPPING SYSTEM,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    In this paper we present work done within the joint development project ANKommEn. It deals with the development of a highly automated robotic system for fast data acquisition in civil disaster scenarios. One of the main requirements is a versatile system; hence the concept embraces a machine cluster consisting of multiple fundamentally different robotic platforms. To cover a large variety of potential deployment scenarios, neither the number of participants nor the precise individual layout of each platform shall be restricted within the conceptual design, leading to a variety of special requirements, such as onboard and online data processing capabilities for each individual participant and efficient data exchange structures allowing reliable random data exchange between individual robots. We demonstrate the functionality and performance by means of a distributed mapping system evaluated with real-world data in challenging urban and rural indoor/outdoor scenarios.

    @InProceedings{schmiemann2017uavg-64,
    Title = {A DISTRIBUTED ONLINE 3D-LIDAR MAPPING SYSTEM},
    Author = {J. Schmiemann and H. Harms and J. Schattenberg and M. Becker and S. Batzdorfer and L. Frerichs},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {In this paper we present work done within the joint development project ANKommEn. It deals with the development of a highly automated robotic system for fast data acquisition in civil disaster scenarios. One of the main requirements is a versatile system; hence the concept embraces a machine cluster consisting of multiple fundamentally different robotic platforms. To cover a large variety of potential deployment scenarios, neither the number of participants nor the precise individual layout of each platform shall be restricted within the conceptual design, leading to a variety of special requirements, such as onboard and online data processing capabilities for each individual participant and efficient data exchange structures allowing reliable random data exchange between individual robots. We demonstrate the functionality and performance by means of a distributed mapping system evaluated with real-world data in challenging urban and rural indoor/outdoor scenarios.},
    }

  • S. Schulte, F. Hillen, and T. Prinz, “ANALYSIS OF COMBINED UAV-BASED RGB AND THERMAL REMOTE SENSING DATA: A NEW APPROACH TO CROWD MONITORING,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Collecting vast amounts of data does not by itself fulfil the information needs related to crowd monitoring; rather, it is important to collect data that is suitable for meeting specific information requirements. In order to address this issue, a prototype was developed to facilitate the combination of UAV-based RGB and thermal remote sensing datasets. In an experimental approach, image sensors were mounted on a remotely piloted aircraft and captured two video datasets over a crowd. A group of volunteers performed diverse movements that depict real-world scenarios. The prototype, programmed in MATLAB, derives the movement on the ground. This novel detection approach using combined data is then evaluated against detection algorithms that use only a single data source. Our tests show that the combination of RGB and thermal remote sensing data is beneficial to the field of crowd monitoring regarding the detection of crowd movement.

    @InProceedings{schulte2017uavg-99,
    Title = {ANALYSIS OF COMBINED UAV-BASED RGB AND THERMAL REMOTE SENSING DATA: A NEW APPROACH TO CROWD MONITORING},
    Author = {S. Schulte and F. Hillen and T. Prinz},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Collecting vast amounts of data does not by itself fulfil the information needs related to crowd monitoring; rather, it is important to collect data that is suitable for meeting specific information requirements. In order to address this issue, a prototype was developed to facilitate the combination of UAV-based RGB and thermal remote sensing datasets. In an experimental approach, image sensors were mounted on a remotely piloted aircraft and captured two video datasets over a crowd. A group of volunteers performed diverse movements that depict real-world scenarios. The prototype, programmed in MATLAB, derives the movement on the ground. This novel detection approach using combined data is then evaluated against detection algorithms that use only a single data source. Our tests show that the combination of RGB and thermal remote sensing data is beneficial to the field of crowd monitoring regarding the detection of crowd movement.},
    }
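
    A generic way to combine the two data sources for movement detection, sketched in Python/OpenCV rather than the authors' MATLAB prototype, is to AND together frame-difference masks from the co-registered RGB and thermal streams; the thresholds are assumptions.

      import cv2

      def motion_mask(prev, curr, thresh):
          """Binary mask of pixels whose intensity changed by more than thresh."""
          diff = cv2.absdiff(curr, prev)
          _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
          return mask

      def combined_motion(rgb_prev, rgb_curr, th_prev, th_curr):
          """AND-combine RGB and thermal masks (inputs: co-registered uint8 frames)."""
          return cv2.bitwise_and(motion_mask(rgb_prev, rgb_curr, 25),
                                 motion_mask(th_prev, th_curr, 10))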

  • C. Stoecker, F. Nex, M. Koeva, and M. Gerke, “QUALITY ASSESSMENT OF COMBINED IMU/GNSS DATA FOR DIRECT GEOREFERENCING IN THE CONTEXT OF UAV-BASED MAPPING,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Within the past years, the development of high-quality Inertial Measurement Units (IMUs) and GNSS technology and of dedicated RTK (Real Time Kinematic) and PPK (Post-Processing Kinematic) solutions for UAVs has promised accurate measurements of the exterior orientation (EO) parameters that allow the images to be georeferenced. Whereas the positive impact of known precise GNSS coordinates of camera positions is already well studied, the influence of the angular observations has not been studied in depth so far. Challenges include the accuracy of GNSS/IMU observations, excessive angular motion and time synchronization problems during the flight. Thus, this study assesses the final geometric accuracy using direct georeferencing with high-quality post-processed IMU/GNSS data and PPK corrections. A comparison of different data processing scenarios, including indirect georeferencing, integrated solutions as well as direct georeferencing, provides guidance on the workability of UAV mapping approaches that require a high level of positional accuracy. The results of the current research show that the use of the post-processed APX-15 GNSS and IMU data was particularly beneficial in enhancing the image orientation quality: horizontal accuracies at the pixel level (2.8 cm) could be achieved. However, it was also shown that the angular EO parameters are still too inaccurate to be assigned a high weight during the image orientation process. Furthermore, detailed investigations of the EO parameters reveal that systematic sensor misalignments and offsets of the image block can be reduced by the introduction of four GCPs. In this regard, the use of PPK corrections reduces the time-consuming field work of measuring large numbers of GCPs and makes large-scale UAV mapping a more feasible solution for practitioners who require high geometric accuracies.

    @InProceedings{stoecker2017uavg-65,
    Title = {QUALITY ASSESSMENT OF COMBINED IMU/GNSS DATA FOR DIRECT GEOREFERENCING IN THE CONTEXT OF UAV-BASED MAPPING},
    Author = {C. Stoecker and F. Nex and M. Koeva and M. Gerke},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Within the past years, the development of high-quality Inertial Measurement Units (IMUs) and GNSS technology and of dedicated RTK (Real Time Kinematic) and PPK (Post-Processing Kinematic) solutions for UAVs has promised accurate measurements of the exterior orientation (EO) parameters that allow the images to be georeferenced. Whereas the positive impact of known precise GNSS coordinates of camera positions is already well studied, the influence of the angular observations has not been studied in depth so far. Challenges include the accuracy of GNSS/IMU observations, excessive angular motion and time synchronization problems during the flight. Thus, this study assesses the final geometric accuracy using direct georeferencing with high-quality post-processed IMU/GNSS data and PPK corrections. A comparison of different data processing scenarios, including indirect georeferencing, integrated solutions as well as direct georeferencing, provides guidance on the workability of UAV mapping approaches that require a high level of positional accuracy. The results of the current research show that the use of the post-processed APX-15 GNSS and IMU data was particularly beneficial in enhancing the image orientation quality: horizontal accuracies at the pixel level (2.8 cm) could be achieved. However, it was also shown that the angular EO parameters are still too inaccurate to be assigned a high weight during the image orientation process. Furthermore, detailed investigations of the EO parameters reveal that systematic sensor misalignments and offsets of the image block can be reduced by the introduction of four GCPs. In this regard, the use of PPK corrections reduces the time-consuming field work of measuring large numbers of GCPs and makes large-scale UAV mapping a more feasible solution for practitioners who require high geometric accuracies.},
    }
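
    The weighting issue highlighted above (accurate PPK positions versus still-inaccurate angles) enters a bundle adjustment through the a-priori weight matrix of the direct EO observations; a minimal sketch with invented standard deviations:

      import numpy as np

      sigma_pos_m   = 0.03             # assumed PPK camera position sigma, metres
      sigma_ang_rad = np.radians(0.1)  # assumed attitude sigma, 0.1 degree

      # Weight matrix for one image's EO observations (X, Y, Z, omega, phi, kappa):
      # inverse-variance weighting down-weights the angular observations.
      P = np.diag([1 / sigma_pos_m**2] * 3 + [1 / sigma_ang_rad**2] * 3)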

  • Y. Taddia, C. Corbau, E. Zambello, V. Russo, U. Simeoni, P. Russo, and A. Pellegrinelli, “UAVS TO ASSESS THE EVOLUTION OF EMBRYO DUNES,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    The balance of a coastal environment is particularly complex: the continuous formation of dunes, their destruction as a result of violent storms, the growth of vegetation and the consequent growth of the dunes themselves are phenomena that significantly affect this balance. This work presents an approach to the long-term monitoring of a complex dune system by means of Unmanned Aerial Vehicles (UAVs). Four different surveys were carried out between November 2015 and November 2016. Aerial photogrammetric data were acquired during flights by a DJI Phantom 2 and a DJI Phantom 3 with cameras in a nadiral arrangement. GNSS receivers in Network Real Time Kinematic (NRTK) mode were used to frame models in the European Terrestrial Reference System. Processing of the captured images consisted in reconstruction of a three-dimensional model using the principles of Structure from Motion (SfM). Particular care was necessary due to the vegetation: filtering of the dense cloud, mainly based on slope detection, was performed to minimize this issue. Final products of the SfM approach were represented by Digital Elevation Models (DEMs) of the sandy coastal environment. Each model was validated by comparison through specially surveyed points. Other analyses were also performed, such as cross sections and computing elevation variations over time. The use of digital photogrammetry by UAVs is particularly reliable: fast acquisition of the images, reconstruction of high-density point clouds, high resolution of final elevation models, as well as flexibility, low cost and accuracy comparable with other available techniques.

    @InProceedings{taddia2017uavg-48,
    Title = {UAVS TO ASSESS THE EVOLUTION OF EMBRYO DUNES},
    Author = {Y. Taddia and C. Corbau and E. Zambello and V. Russo and U. Simeoni and P. Russo and A. Pellegrinelli},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {The balance of a coastal environment is particularly complex: the continuous formation of dunes, their destruction as a result of violent storms, the growth of vegetation and the consequent growth of the dunes themselves are phenomena that significantly affect this balance. This work presents an approach to the long-term monitoring of a complex dune system by means of Unmanned Aerial Vehicles (UAVs). Four different surveys were carried out between November 2015 and November 2016. Aerial photogrammetric data were acquired during flights by a DJI Phantom 2 and a DJI Phantom 3 with cameras in a nadiral arrangement. GNSS receivers in Network Real Time Kinematic (NRTK) mode were used to frame models in the European Terrestrial Reference System. Processing of the captured images consisted in reconstruction of a three-dimensional model using the principles of Structure from Motion (SfM). Particular care was necessary due to the vegetation: filtering of the dense cloud, mainly based on slope detection, was performed to minimize this issue. Final products of the SfM approach were represented by Digital Elevation Models (DEMs) of the sandy coastal environment. Each model was validated by comparison through specially surveyed points. Other analyses were also performed, such as cross sections and computing elevation variations over time. The use of digital photogrammetry by UAVs is particularly reliable: fast acquisition of the images, reconstruction of high-density point clouds, high resolution of final elevation models, as well as flexibility, low cost and accuracy comparable with other available techniques.},
    }
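
    The paper does not spell out the filtering implementation; as a rough illustration only, a slope-based mask on a raster DEM might look like the following Python sketch (numpy only; the cell size and slope threshold are hypothetical placeholders, not values from the paper).

    import numpy as np

    def slope_filter(dem, cell_size=0.1, max_slope_deg=55.0):
        """Mask DEM cells whose local slope exceeds a threshold.

        Vegetation on low dunes tends to produce locally steep, noisy
        surface patches; flagging high-slope cells is one crude proxy.
        """
        # Central-difference gradients in metres per metre.
        dz_dy, dz_dx = np.gradient(dem, cell_size)
        slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
        return np.where(slope <= max_slope_deg, dem, np.nan)

    # Toy example: a smooth dune ramp with one spiky 'shrub' cell.
    dem = np.fromfunction(lambda r, c: 0.02 * r, (50, 50))
    dem[25, 25] += 1.5
    filtered = slope_filter(dem, cell_size=0.1)
    print(np.isnan(filtered).sum(), "cells flagged as vegetation")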

  • N. Takahashi, R. Wakutsu, T. Kato, T. Wakaizumi, T. Ooishi, and R. Matsuoka, “EXPERIMENT ON UAV PHOTOGRAMMETRY AND TERRESTRIAL LASER SCANNING FOR ICT-INTEGRATED CONSTRUCTION,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    In the 2016 fiscal year the Ministry of Land, Infrastructure, Transport and Tourism of Japan started a program integrating construction and ICT in earthwork and concrete placing. The new program, named i-Construction and focused on productivity improvement, adopts new technologies such as UAV photogrammetry and terrestrial laser scanning (TLS). We report a field experiment investigating whether the procedures of UAV photogrammetry and TLS following the standards for i-Construction are feasible. In the experiment we measured an embankment of about 80 metres by 160 metres immediately after earthwork on the embankment was completed. We used two UAV-camera combinations: a larger UAV, an enRoute Zion QC730, with a Sony a6000 onboard camera, and a smaller UAV, a DJI Phantom 4, with its dedicated onboard camera. Moreover, we used a FARO Focus3D X330 terrestrial laser scanner based on the phase-shift principle. The experiment results indicate that the procedures of UAV photogrammetry using a QC730 with an a6000 and TLS using a Focus3D X330 following the standards for i-Construction would be feasible. Furthermore, the results show that UAV photogrammetry using the lower-priced Phantom 4 was unable to satisfy the accuracy requirement for i-Construction; the cause of this low accuracy is under investigation. We also found that the difference in image resolution on the ground would not have a great influence on measurement accuracy in UAV photogrammetry. (A minimal sketch of a check-point accuracy test of this kind follows this entry.)

    @InProceedings{takahashi2017uavg-50,
    Title = {EXPERIMENT ON UAV PHOTOGRAMMETRY AND TERRESTRIAL LASER SCANNING FOR ICT-INTEGRATED CONSTRUCTION},
    Author = {N. Takahashi and R. Wakutsu and T. Kato and T. Wakaizumi and T. Ooishi and R. Matsuoka},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {In the 2016 fiscal year the Ministry of Land, Infrastructure, Transport and Tourism of Japan started a program integrating construction and ICT in earthwork and concrete placing. The new program, named i-Construction and focused on productivity improvement, adopts new technologies such as UAV photogrammetry and terrestrial laser scanning (TLS). We report a field experiment investigating whether the procedures of UAV photogrammetry and TLS following the standards for i-Construction are feasible. In the experiment we measured an embankment of about 80 metres by 160 metres immediately after earthwork on the embankment was completed. We used two UAV-camera combinations: a larger UAV, an enRoute Zion QC730, with a Sony a6000 onboard camera, and a smaller UAV, a DJI Phantom 4, with its dedicated onboard camera. Moreover, we used a FARO Focus3D X330 terrestrial laser scanner based on the phase-shift principle. The experiment results indicate that the procedures of UAV photogrammetry using a QC730 with an a6000 and TLS using a Focus3D X330 following the standards for i-Construction would be feasible. Furthermore, the results show that UAV photogrammetry using the lower-priced Phantom 4 was unable to satisfy the accuracy requirement for i-Construction; the cause of this low accuracy is under investigation. We also found that the difference in image resolution on the ground would not have a great influence on measurement accuracy in UAV photogrammetry.},
    }
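
    The i-Construction accuracy test itself is not detailed in the abstract; a minimal, generic sketch of checking per-axis RMSE of check points against a tolerance (all numbers below are hypothetical, not the official i-Construction thresholds) could be:

    import numpy as np

    def rmse_per_axis(measured, reference):
        """Root-mean-square error of check points, per coordinate axis."""
        d = np.asarray(measured) - np.asarray(reference)
        return np.sqrt((d ** 2).mean(axis=0))

    # Hypothetical check-point coordinates (metres): surveyed vs. reconstructed.
    reference = np.array([[10.00, 20.00, 5.00],
                          [30.00, 40.00, 5.50],
                          [50.00, 60.00, 6.00]])
    measured = reference + np.array([[0.02, -0.01, 0.04],
                                     [-0.03, 0.02, -0.05],
                                     [0.01, 0.00, 0.03]])

    tolerance = 0.05  # placeholder per-axis tolerance in metres
    ok = (rmse_per_axis(measured, reference) <= tolerance).all()
    print("within tolerance:", ok)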

  • D. Turner, A. Lucieer, M. McCabe, S. Parkes, and I. Clarke, “PUSHBROOM HYPERSPECTRAL IMAGING FROM AN UNMANNED AIRCRAFT SYSTEM (UAS) – GEOMETRIC PROCESSING WORKFLOW AND ACCURACY ASSESSMENT,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    In this study, we assess two pushbroom hyperspectral sensors carried by small (10–15 kg) multi-rotor Unmanned Aircraft Systems (UAS). We used a Headwall Photonics micro-Hyperspec pushbroom sensor with 324 spectral bands (4–5 nm FWHM) and a Headwall Photonics nano-Hyperspec sensor with 270 spectral bands (6 nm FWHM), both in the VNIR spectral range (400–1000 nm). A gimbal was used to stabilise the sensors against the aircraft flight dynamics, and for the micro-Hyperspec a tightly coupled dual-frequency Global Navigation Satellite System (GNSS) receiver, an Inertial Measurement Unit (IMU) and a Machine Vision Camera (MVC) were used for attitude and position determination. For the nano-Hyperspec, a navigation-grade GNSS receiver and IMU provided position and attitude data. This study presents the geometric results of one flight over a grass oval on which a dense Ground Control Point (GCP) network was deployed, the aim being to ascertain the geometric accuracy achievable with the system. Using the PARGE software package (ReSe Remote Sensing Applications), we ortho-rectify the pushbroom hyperspectral image strips and then quantify the accuracy of the ortho-rectification by using the GCPs as check points. The orientation (roll, pitch and yaw) of the sensor is measured by the IMU; alternatively, imagery from an MVC running at 15 Hz, together with accurate camera position data, can be processed with Structure from Motion (SfM) software to obtain an estimated camera orientation. In this study, we examine which of these data sources yields a flight strip with the highest geometric accuracy. (A brief sketch of applying such orientation angles follows this entry.)

    @InProceedings{turner2017uavg-81,
    Title = {PUSHBROOM HYPERSPECTRAL IMAGING FROM AN UNMANNED AIRCRAFT SYSTEM (UAS) - GEOMETRIC PROCESSING WORKFLOW AND ACCURACY ASSESSMENT},
    Author = {D. Turner and A. Lucieer and M. McCabe and S. Parkes and I. Clarke},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {In this study, we assess two pushbroom hyperspectral sensors carried by small (10–15 kg) multi-rotor Unmanned Aircraft Systems (UAS). We used a Headwall Photonics micro-Hyperspec pushbroom sensor with 324 spectral bands (4–5 nm FWHM) and a Headwall Photonics nano-Hyperspec sensor with 270 spectral bands (6 nm FWHM), both in the VNIR spectral range (400–1000 nm). A gimbal was used to stabilise the sensors against the aircraft flight dynamics, and for the micro-Hyperspec a tightly coupled dual-frequency Global Navigation Satellite System (GNSS) receiver, an Inertial Measurement Unit (IMU) and a Machine Vision Camera (MVC) were used for attitude and position determination. For the nano-Hyperspec, a navigation-grade GNSS receiver and IMU provided position and attitude data. This study presents the geometric results of one flight over a grass oval on which a dense Ground Control Point (GCP) network was deployed, the aim being to ascertain the geometric accuracy achievable with the system. Using the PARGE software package (ReSe Remote Sensing Applications), we ortho-rectify the pushbroom hyperspectral image strips and then quantify the accuracy of the ortho-rectification by using the GCPs as check points. The orientation (roll, pitch and yaw) of the sensor is measured by the IMU; alternatively, imagery from an MVC running at 15 Hz, together with accurate camera position data, can be processed with Structure from Motion (SfM) software to obtain an estimated camera orientation. In this study, we examine which of these data sources yields a flight strip with the highest geometric accuracy.},
    }
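
    As a rough illustration of how IMU attitude feeds into ortho-rectification (PARGE handles this internally; the axis conventions below are an assumption, not taken from the paper), a body-to-world rotation from roll, pitch and yaw can be composed as follows:

    import numpy as np

    def rpy_to_matrix(roll, pitch, yaw):
        """Body-to-world rotation from roll, pitch, yaw (radians).

        Axis conventions vary between IMUs; this assumes the common
        aerospace z-y'-x'' (yaw-pitch-roll) rotation sequence.
        """
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    # Rotate the nadir-looking view vector of a scan-line pixel into
    # world coordinates for a mildly rolled platform.
    v_body = np.array([0.0, 0.0, -1.0])
    v_world = rpy_to_matrix(np.radians(3.0), 0.0, 0.0) @ v_body
    print(v_world)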

  • J. Unger, F. Rottensteiner, and C. Heipke, “ASSIGNING TIE POINTS TO A GENERALISED BUILDING MODEL FOR UAS IMAGE ORIENTATION,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    This paper addresses the integration of a building model into the pose estimation of image sequences. Images are captured by an Unmanned Aerial System (UAS) equipped with a camera flying in between buildings. Two approaches to assign tie points to a generalised building model in object space are presented. A direct approach is based on the distances between the object coordinates of tie points and the planes of the building model (a minimal sketch of this assignment follows this entry). An indirect approach first finds planes within the tie point cloud, which are subsequently matched to model planes; finally, based on these matches, tie points are assigned to model planes. In both cases, the assignments are used in a hybrid bundle adjustment to refine the poses (image orientations). Experimental results for an image sequence demonstrate improvements in comparison to an adjustment without the building model. Differences and limitations of the two approaches for point-plane assignment are discussed – in the experiments both perform similarly with respect to the estimated standard deviations of tie points.

    @InProceedings{unger2017uavg-71,
    Title = {ASSIGNING TIE POINTS TO A GENERALISED BUILDING MODEL FOR UAS IMAGE ORIENTATION},
    Author = {J. Unger and F. Rottensteiner and C. Heipke},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {This paper addresses the integration of a building model into the pose estimation of image sequences. Images are captured by an Unmanned Aerial System (UAS) equipped with a camera flying in between buildings. Two approaches to assign tie points to a generalised building model in object space are presented. A direct approach is based on the distances between the object coordinates of tie points and the planes of the building model. An indirect approach first finds planes within the tie point cloud, which are subsequently matched to model planes; finally, based on these matches, tie points are assigned to model planes. In both cases, the assignments are used in a hybrid bundle adjustment to refine the poses (image orientations). Experimental results for an image sequence demonstrate improvements in comparison to an adjustment without the building model. Differences and limitations of the two approaches for point-plane assignment are discussed - in the experiments both perform similarly with respect to the estimated standard deviations of tie points.},
    }
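
    A minimal sketch of the direct assignment idea, assuming planes are given as a unit normal n and offset d with n·x + d = 0 (a parameterisation chosen here for illustration, not necessarily the authors'):

    import numpy as np

    def assign_points_to_planes(points, planes, max_dist=0.2):
        """Assign each 3D point to its closest plane if within max_dist.

        planes: list of (n, d) with unit normal n and offset d, so that
        a point x lies on the plane when n @ x + d == 0.
        Returns a plane index per point, or -1 if no plane is close enough.
        """
        points = np.asarray(points, dtype=float)
        # |n @ x + d| for every point (rows) and plane (columns).
        dists = np.abs(np.stack([points @ n + d for n, d in planes], axis=1))
        best = dists.argmin(axis=1)
        best[dists.min(axis=1) > max_dist] = -1
        return best

    # Two axis-aligned facade planes: x = 0 and y = 5.
    planes = [(np.array([1.0, 0.0, 0.0]), 0.0),
              (np.array([0.0, 1.0, 0.0]), -5.0)]
    pts = [[0.05, 2.0, 1.0], [3.0, 5.1, 1.0], [2.0, 2.0, 2.0]]
    print(assign_points_to_planes(pts, planes))  # -> [0, 1, -1]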

  • U. Vepakomma and D. Cormier, “POTENTIAL OF MULTI-TEMPORAL UAV-BORNE LIDAR IN ASSESSING EFFECTIVENESS OF SILVICULTURAL TREATMENTS,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    Silvicultural treatments are practiced to control resource competition and direct forest stand development to meet management objectives. Effective tracking of thinning and partial cutting treatments helps in timely mitigation and in ensuring future stand productivity. In a study conducted in autumn 2015 in a white-pine-dominated forest stand in Petawawa (Ontario, Canada), we found that almost all individual trees were detectable, the structure of individual trees and undergrowth was well pronounced, and the underlying terrain below dense undisturbed canopy was well captured with a UAS-based Riegl VUX-1 lidar, even at a range of 150 m. The site was then re-scanned the following summer with the same system. In the current study, besides examining the difference in point distribution patterns due to foliage conditions, we co-registered the two datasets and tested the potential of quantifying the effectiveness of a partial cutting silvicultural system, especially in terms of the filling of 3D spaces through vertical or lateral growth and through mortality, over a very short period of time. (A minimal sketch of voxel-based change detection follows this entry.)

    @InProceedings{vepakomma2017uavg-103,
    Title = {POTENTIAL OF MULTI-TEMPORAL UAV-BORNE LIDAR IN ASSESSING EFFECTIVENESS OF SILVICULTURAL TREATMENTS},
    Author = {U. Vepakomma and D. Cormier},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {Silvicultural treatments are practiced to control resource competition and direct forest stand development to meet management objectives. Effective tracking of thinning and partial cutting treatments helps in timely mitigation and in ensuring future stand productivity. In a study conducted in autumn 2015 in a white-pine-dominated forest stand in Petawawa (Ontario, Canada), we found that almost all individual trees were detectable, the structure of individual trees and undergrowth was well pronounced, and the underlying terrain below dense undisturbed canopy was well captured with a UAS-based Riegl VUX-1 lidar, even at a range of 150 m. The site was then re-scanned the following summer with the same system. In the current study, besides examining the difference in point distribution patterns due to foliage conditions, we co-registered the two datasets and tested the potential of quantifying the effectiveness of a partial cutting silvicultural system, especially in terms of the filling of 3D spaces through vertical or lateral growth and through mortality, over a very short period of time.},
    }
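
    One generic way to quantify the filling and emptying of 3D space between two co-registered epochs is voxel occupancy differencing; the sketch below is an illustration under that assumption (the voxel size is a hypothetical placeholder), not the authors' actual method:

    import numpy as np

    def occupied_voxels(points, voxel=0.5):
        """Set of voxel indices occupied by at least one point."""
        idx = np.floor(np.asarray(points) / voxel).astype(int)
        return set(map(tuple, idx))

    def occupancy_change(cloud_t0, cloud_t1, voxel=0.5):
        """Voxels newly filled (growth) and newly emptied (loss/mortality)."""
        v0 = occupied_voxels(cloud_t0, voxel)
        v1 = occupied_voxels(cloud_t1, voxel)
        return v1 - v0, v0 - v1

    # Toy clouds: one point appears at t1 (growth), another disappears (loss).
    t0 = np.array([[0.1, 0.1, 2.0], [1.2, 0.3, 4.0]])
    t1 = np.array([[0.1, 0.1, 2.0], [2.6, 1.1, 3.0]])
    gained, lost = occupancy_change(t0, t1)
    print(len(gained), "voxels filled,", len(lost), "voxels emptied")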

  • M. Weinmann, M. S. Mueller, M. Hillemann, N. Reydel, S. Hinz, and B. Jutzi, “POINT CLOUD ANALYSIS FOR UAV-BORNE LASER SCANNING WITH HORIZONTALLY AND VERTICALLY ORIENTED LINE SCANNERS – CONCEPT AND FIRST RESULTS,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    In this paper, we focus on UAV-borne laser scanning with the objective of densely sampling object surfaces in the local surroundings of the UAV. A line scanner which scans along the vertical direction, perpendicular to the flight direction, yields a point cloud with low point density if the UAV moves fast, while a line scanner which scans along the horizontal direction only delivers data corresponding to the altitude of the UAV and thus low scene coverage. For these reasons, we present a concept and a system for UAV-borne laser scanning using multiple line scanners. Our system consists of a quadcopter equipped with horizontally and vertically oriented line scanners. We demonstrate the capabilities of our system by presenting first results obtained for a flight within an outdoor scene. To this end, we downsample the original point cloud and use different neighborhood types to extract fundamental geometric features, which in turn can be used to interpret the scene in terms of linear, planar or volumetric structures. (A minimal sketch of such eigenvalue-based features follows this entry.)

    @InProceedings{weinmann2017uavg-52,
    Title = {POINT CLOUD ANALYSIS FOR UAV-BORNE LASER SCANNING WITH HORIZONTALLY AND VERTICALLY ORIENTED LINE SCANNERS - CONCEPT AND FIRST RESULTS},
    Author = {M. Weinmann and M.S. Mueller and M. Hillemann and N. Reydel and S. Hinz and B. Jutzi},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {In this paper, we focus on UAV-borne laser scanning with the objective of densely sampling object surfaces in the local surroundings of the UAV. A line scanner which scans along the vertical direction, perpendicular to the flight direction, yields a point cloud with low point density if the UAV moves fast, while a line scanner which scans along the horizontal direction only delivers data corresponding to the altitude of the UAV and thus low scene coverage. For these reasons, we present a concept and a system for UAV-borne laser scanning using multiple line scanners. Our system consists of a quadcopter equipped with horizontally and vertically oriented line scanners. We demonstrate the capabilities of our system by presenting first results obtained for a flight within an outdoor scene. To this end, we downsample the original point cloud and use different neighborhood types to extract fundamental geometric features, which in turn can be used to interpret the scene in terms of linear, planar or volumetric structures.},
    }
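
    Eigenvalue-based measures of linearity, planarity and sphericity, computed from the covariance of a local k-nearest-neighbour neighbourhood, are a common choice for such features; the sketch below (numpy/scipy; k is a placeholder) illustrates the general idea rather than the authors' exact formulation:

    import numpy as np
    from scipy.spatial import cKDTree

    def eigen_features(points, k=10):
        """Linearity, planarity and sphericity per point, from the
        eigenvalues of the local 3D structure tensor (k nearest neighbours)."""
        points = np.asarray(points, dtype=float)
        tree = cKDTree(points)
        _, nn = tree.query(points, k=k)
        feats = np.empty((len(points), 3))
        for i, idx in enumerate(nn):
            cov = np.cov(points[idx].T)
            l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
            feats[i] = ((l1 - l2) / l1, (l2 - l3) / l1, l3 / l1)
        return feats

    # Toy scene: points scattered along a line should score high linearity.
    line = np.c_[np.linspace(0, 10, 50),
                 np.random.normal(0, 0.01, 50),
                 np.random.normal(0, 0.01, 50)]
    print(eigen_features(line, k=10).mean(axis=0))  # linearity close to 1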

  • D. Wierzbicki, “THE PREDICTION OF POSITION AND ORIENTATION PARAMETERS OF UAV FOR VIDEO IMAGING,” in ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2017.
    [Abstract] [BibTeX] [PDF]
    The paper presents results of predicting the position and orientation parameters of an unmanned aerial vehicle (UAV) equipped with a compact digital camera. The focus of the paper is on achieving optimal accuracy and reliability of the geo-referenced video frames on the basis of data from the navigation sensors mounted on the UAV. In the experiments, two mathematical models were used for the prediction: a polynomial model and a trigonometric model. The predicted position and orientation values were compared with the readings of the low-cost GPS and INS sensors mounted on the unmanned Trimble UX-5 platform. The experiment was conducted on navigation data from 23 measurement epochs, and the predicted coordinates and rotation angles were compared with the actual Trimble UX-5 sensor readings. Based on this comparison it was determined that, for both prediction models, the best coordinate results were obtained for one coordinate component and the worst for the Y coordinate, and that the standard deviation of the XYZ coordinates from both models does not exceed the admissible criterion of 10 m for the positioning accuracy of an unmanned aircraft. The best results for the rotation angles were obtained for the pitch angle and the worst for the heading and roll angles, for both prediction models. The standard deviation of the heading, pitch and roll (HPR) angles from both models remains within the admissible accuracy of 5° only for the pitch angle, and exceeds this value for the heading and roll angles. (A minimal sketch of polynomial prediction follows this entry.)

    @InProceedings{wierzbicki2017uavg-42,
    Title = {THE PREDICTION OF POSITION AND ORIENTATION PARAMETERS OF UAV FOR VIDEO IMAGING},
    Author = {D. Wierzbicki},
    Booktitle = {ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2017},
    Abstract = {The paper presents results of predicting the position and orientation parameters of an unmanned aerial vehicle (UAV) equipped with a compact digital camera. The focus of the paper is on achieving optimal accuracy and reliability of the geo-referenced video frames on the basis of data from the navigation sensors mounted on the UAV. In the experiments, two mathematical models were used for the prediction: a polynomial model and a trigonometric model. The predicted position and orientation values were compared with the readings of the low-cost GPS and INS sensors mounted on the unmanned Trimble UX-5 platform. The experiment was conducted on navigation data from 23 measurement epochs, and the predicted coordinates and rotation angles were compared with the actual Trimble UX-5 sensor readings. Based on this comparison it was determined that, for both prediction models, the best coordinate results were obtained for one coordinate component and the worst for the Y coordinate, and that the standard deviation of the XYZ coordinates from both models does not exceed the admissible criterion of 10 m for the positioning accuracy of an unmanned aircraft. The best results for the rotation angles were obtained for the pitch angle and the worst for the heading and roll angles, for both prediction models. The standard deviation of the heading, pitch and roll (HPR) angles from both models remains within the admissible accuracy of 5° only for the pitch angle, and exceeds this value for the heading and roll angles.},
    }
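
    The abstract names a polynomial prediction model without giving its form; as an illustration only, a low-order polynomial fitted per parameter channel and extrapolated one epoch ahead (all readings below are made up) might look like:

    import numpy as np

    def predict_next(values, epochs, degree=2):
        """Fit a polynomial to past samples and extrapolate one epoch ahead."""
        coeffs = np.polyfit(epochs, values, degree)
        return np.polyval(coeffs, epochs[-1] + 1)

    # Hypothetical readings over 5 epochs: X, Y, Z (m), heading, pitch, roll (deg).
    epochs = np.arange(5)
    track = np.array([
        [100.0, 200.0, 50.0, 90.0, 2.0, 0.5],
        [102.1, 201.9, 50.2, 90.5, 2.1, 0.4],
        [104.0, 204.1, 50.3, 91.1, 1.9, 0.6],
        [106.2, 205.8, 50.5, 91.4, 2.0, 0.5],
        [108.1, 208.0, 50.6, 92.0, 2.1, 0.4],
    ])
    forecast = [predict_next(track[:, j], epochs) for j in range(track.shape[1])]
    print(np.round(forecast, 2))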

Extended Abstracts

  • H. Bartholomeus, B. Brede, A. Lau, and L. Kooistra, “CAPTURING FOREST STRUCTURE USING UAV BASED LIDAR,” in Online Proceedings of the International Conference on Unmanned Aerial Vehicles in Geomatics , 2017.
    [BibTeX] [PDF]
    @InProceedings{bartholomeus2017uavg,
    Title = {CAPTURING FOREST STRUCTURE USING UAV BASED LIDAR},
    Author = {H. Bartholomeus and B. Brede and A. Lau and L. Kooistra},
    Booktitle = {Online Proceedings of the International Conference on Unmanned Aerial Vehicles in Geomatics},
    Year = {2017},
    Note = {Extended Abstract},
    Url = {http://uavg17.ipb.uni-bonn.de/wp-content/papercite-data/pdf/bartholomeus2017uavg.pdf},
    }

  • K. Johansen, Y. Tu, C. Searle, D. Wu, and S. Phinn, “MAPPING THE CONDITION OF MACADAMIA TREE CROPS USING MULTI-SPECTRAL DRONE IMAGERY,” in Online Proceedings of the International Conference on Unmanned Aerial Vehicles in Geomatics , 2017.
    [BibTeX] [PDF]
    @InProceedings{johansen2017uavg,
    Title = {MAPPING THE CONDITION OF MACADAMIA TREE CROPS USING MULTI-SPECTRAL DRONE IMAGERY},
    Author = {K. Johansen and Y. Tu and C. Searle and D. Wu and S. Phinn},
    Booktitle = {Online Proceedings of the International Conference on Unmanned Aerial Vehicles in Geomatics},
    Year = {2017},
    Note = {Extended Abstract},
    Url = {http://uavg17.ipb.uni-bonn.de/wp-content/papercite-data/pdf/johansen2017uavg.pdf},
    }

  • J. Suomalainen, T. Hakala, and E. Honkavaara, “MEASURING INCIDENT IRRADIANCE ON-BOARD AN UNSTABLE UAV PLATFORM – FIRST RESULTS ON VIRTUAL HORIZONTALIZATION OF MULTIANGLE MEASUREMENT,” in Online Proceedings of the International Conference on Unmanned Aerial Vehicles in Geomatics , 2017.
    [BibTeX] [PDF]
    @InProceedings{suomalainen2017uavg,
    Title = {MEASURING INCIDENT IRRADIANCE ON-BOARD AN UNSTABLE UAV PLATFORM - FIRST RESULTS ON VIRTUAL HORIZONTALIZATION OF MULTIANGLE MEASUREMENT},
    Author = {J. Suomalainen and T. Hakala and E. Honkavaara},
    Booktitle = {Online Proceedings of the International Conference on Unmanned Aerial Vehicles in Geomatics},
    Year = {2017},
    Note = {Extended Abstract},
    Url = {http://uavg17.ipb.uni-bonn.de/wp-content/papercite-data/pdf/suomalainen2017uavg.pdf},
    }