Monday, May 2, 2016

Pix4D Demo

Processing UAS Data in Pix4D

Introduction:

The purpose of this week's activity is to become familiar with processing imagery from a UAS in Pix4D to create a 3D map. The program uses the geocoded points attached to each image file as a "geotag" to create a point cloud. The images are combined into a single image, and a z value is added so each location is pushed up or down according to its collected elevation.
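To get a feel for what a geotag actually is, here is a small Python sketch (using the Pillow library, nothing from Pix4D itself, and a made-up file name) that reads the GPS latitude, longitude, and altitude stored in a photo's EXIF header:

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPSINFO_TAG = 34853  # standard EXIF tag ID for the GPSInfo block

def read_geotag(path):
    """Return the GPS fields stored in an image's EXIF header, if any."""
    exif = Image.open(path)._getexif() or {}
    gps_raw = exif.get(GPSINFO_TAG, {})
    # Translate numeric GPS tag IDs (1, 2, 3, ...) into readable names
    return {GPSTAGS.get(key, key): value for key, value in gps_raw.items()}

# Hypothetical file name; any geotagged UAS photo would work here
print(read_geotag("DJI_0001.JPG"))
# Typical keys: GPSLatitude, GPSLongitude, GPSAltitude (stored as EXIF rationals)
```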

In order to do this, the pixels of each image have to be laid on top of one another precisely to create the combined image. The connection points the images use to link up are called keypoints, and it takes two keypoints to create one 3D point; without at least two, there is no way to create a 3D image. If the area of interest is bland or featureless, even more keypoints are needed, because it is harder to connect images that all look alike. It often helps to know whether sufficient points were gathered for coverage (figure 1). To test this, a process called "rapid check" can be used: a tool that sacrifices accuracy for speed and quickly runs a low-quality check on the data to ensure proper coverage.
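As a rough illustration of the "two keypoints make one 3D point" idea (a generic triangulation sketch with made-up numbers, not Pix4D's actual algorithm), the same feature seen in two overlapping photos can be intersected in 3D once the camera positions and viewing directions are known:

```python
import numpy as np

def triangulate(cam1, dir1, cam2, dir2):
    """Closest point to two camera rays: each ray starts at a camera position
    and points toward the same keypoint seen in both images."""
    d1, d2 = dir1 / np.linalg.norm(dir1), dir2 / np.linalg.norm(dir2)
    # Solve for the distance along each ray that brings the rays closest together
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = cam1 - cam2
    denom = a * c - b * b                  # ~0 would mean the rays are parallel
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    p1, p2 = cam1 + t1 * d1, cam2 + t2 * d2
    return (p1 + p2) / 2                   # midpoint of the closest approach

# Made-up example: two camera positions 20 m apart at 100 m altitude, both
# looking down toward the same ground feature below the flight line
point = triangulate(np.array([0.0, 0.0, 100.0]), np.array([0.1, 0.0, -1.0]),
                    np.array([20.0, 0.0, 100.0]), np.array([-0.1, 0.0, -1.0]))
print(point)  # ~[10, 0, 0]: the recovered 3D ground point
```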
For larger projects it is likely that multiple flights will need to be flown to cover the study area. Though the data from each flight is stored in a separate file, Pix4D can still run analysis on multiple flights at once. However, the pilot has to be sure of a few things: the conditions have to be similar between flights to ensure data integrity, and there must be sufficient overlap to connect the keypoints.
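As a back-of-the-envelope check (my own calculation, not something Pix4D asks for), the forward overlap between consecutive nadir photos can be estimated from the flying height, the camera's field of view, and the distance flown between exposures:

```python
import math

def forward_overlap(altitude_m, fov_deg, spacing_m):
    """Estimate percent overlap between consecutive nadir photos.
    altitude_m: flying height above ground
    fov_deg:    camera field of view along the flight direction
    spacing_m:  distance flown between exposures
    """
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    return max(0.0, (footprint - spacing_m) / footprint) * 100

# Made-up numbers: 100 m altitude, 60-degree field of view, a photo every 20 m
print(round(forward_overlap(100, 60, 20), 1))  # ~82.7% forward overlap
```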
Pix4D also has the ability to process oblique images (figure 2). These are images taken at an angle to the ground rather than straight down (nadir), the way we tend to picture a traditional aerial image, and they are used to capture subjects like towers and buildings. Ground control points tie the data to specific places on the ground; if none are used, Pix4D can still produce an output, but the result has no scale, orientation, or other positional information. That is why ground control points are highly recommended even though they are not strictly required.
A quality report is displayed after each processing step is run as a way to check in on the progress of a project, sort of like a print statement in PyScripter.

Figure 1: This figure displays how much overlap is needed between two flights in order to link them.

Figure 2: This diagram displays what it means to collect data from images taken at different angles to the ground. 


Methods:

In this exercise, the program was simply run to create a 3D map. To do this, a folder of images is added to the program as a "new project." Some specifications can be set at this point, and then the initial analysis of the images is run. This is the step where the keypoints within the rasters are connected to one another and the geotags link each image to the location on the globe where it was taken, giving the data a spatial reference.
In this project, 80 photographs went through initial processing, which took well over an hour. Once finished, the program had created many links between images, allowing a 3D image to eventually be built; these connections can be seen in figure 3. The connected images combine to form one image with 3D capability.

Figure 3: The geotags on each image are used to connect images to each other; the more lines connecting points, the more reliable the connection between images is. This graphic shows that the images in this project have very high connectivity.


Results:

After the program is run, several things can be done with the data and resulting images. One of the first products, displayed in the quality check, is shown in figure 4: it displays the elevation of the points alongside a compiled image of the track area constructed from the many drone photos.
Figure 4: A result of the initial processing showing the compiled image on the left and the raised features on the right.


Figure 5: The combined DEM of the images gathered from the UAS with a spatial reference.

Conclusion:

The goal of this activity was to become familiar with using images from a UAS in Pix4D as an introduction to what can be done with this kind of data. One feature is a 3D flyover, in which an animation is made of a view going over and around the area of interest; this file is a .gif that displays the imagery the way the UAS saw it as it collected points. One issue I encountered was exporting that .gif: the default export was to an Excel or .csv file format, which did not help, since what I needed was a short video clip exported as a video file, not a text file. I also had trouble with calculating the volume of an object. I was able to trace the shed shown along the track on the bottom right of the image but could not calculate its volume in cubic meters. There is more that can be done with this data, and hopefully I will be able to work with more of the tools within Pix4D, such as the flyover and calculating the volume of an object in the AOI. As this was an introduction, I hope to get more experience and eventually handle some of the more analytical aspects of this program. I have gathered a brief understanding but certainly need more practice to become efficient with it.
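For what it's worth, my understanding is that a volume like the shed's boils down to summing, for every DEM cell above a chosen base plane, the cell's height above that base times the cell's area. A toy sketch of that idea in Python (my own illustration with made-up numbers, not necessarily how Pix4D's volume tool works):

```python
import numpy as np

def volume_above_base(dsm, base_elev, cell_size_m):
    """Rough volume estimate: sum of (surface height - base) over every cell
    that sits above the base plane, times the area of one cell."""
    heights = np.clip(dsm - base_elev, 0, None)   # ignore cells below the base
    return heights.sum() * cell_size_m ** 2

# Toy 3x3 surface in meters, imagined as a small structure on flat ground at 240 m
dsm = np.array([[240.0, 242.5, 240.0],
                [240.0, 242.5, 240.0],
                [240.0, 242.5, 240.0]])
print(volume_above_base(dsm, base_elev=240.0, cell_size_m=0.5))  # 1.875 cubic meters
```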

