
Quality Reports

WebODM Lightning generates a PDF quality report for each processed task, which can help assess the quality of the results. To download it, expand a task from the cloud platform and press the Report button.

Click here for an example report, which you can use to follow along with this article.

note

If a dataset lacks georeferencing, some fields will not be available.

Dataset Summary

| Field | Description |
| --- | --- |
| Date | Date and time the dataset was processed |
| Area Coverage | Calculated using a bounding box; this reflects the map's rectangular extent, not its number of valid pixels |
| Processing Time | Time it took to process the dataset |
| Capture Start | Date/time of the first image, as specified in the EXIF metadata |
| Capture End | Date/time of the last image, as specified in the EXIF metadata |
| Coordinate Reference System | Coordinate reference system of the outputs |

Processing Summary

| Field | Description |
| --- | --- |
| Reconstructed Images | Number of images used for reconstruction |
| Reconstructed Points (Sparse) | Number of points extracted during reconstruction |
| Reconstructed Points (Dense) | Number of points in the final point cloud |
| Average Ground Sampling Distance (GSD) | Average distance between two adjacent pixels in the input images (see the sketch below) |
| Detected Features | Number of features detected during reconstruction |
| Reconstructed Features | Number of features used for reconstruction |
| Geographic Reference | GPS: GPS was used for georeferencing. GCP: GCPs were used for georeferencing. GPS and GCP: both GPS and GCPs were used for georeferencing. Alignment: the dataset was aligned to another. None: the dataset was not georeferenced. |
| GPS/GCP/Alignment Errors | Absolute average error of the geographic reference |
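The Average GSD reported above is measured from the reconstruction, but as a rough illustration, the GSD of a nadir image can be approximated from flight altitude and camera parameters. A sketch with hypothetical values (not the software's exact computation):

```python
def ground_sampling_distance(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Approximate GSD in meters/pixel for a nadir image:
    GSD = (altitude * sensor_width) / (focal_length * image_width)."""
    return (altitude_m * sensor_width_mm) / (focal_length_mm * image_width_px)

# Hypothetical drone camera: 100 m altitude, 13.2 mm sensor width,
# 8.8 mm focal length, 5472 px image width
gsd = ground_sampling_distance(100, 13.2, 8.8, 5472)
print(f"GSD: {gsd * 100:.2f} cm/pixel")  # ~2.74 cm/pixel
```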

The image that follows shows the positions of the cameras (cyan) and the initial geographical positions (red), along with an overview of the sparse point cloud.

Previews

If an orthophoto and/or elevation models (DEMs) are available, they are displayed here.

Survey Data

The diagram here displays the point cloud rendered with colors that indicate the number of photos used to reconstruct each area.

note

This diagram can highlight areas that had sufficient overlap coverage, but insufficient details for the software to use all available images.

Aim to have as much green (5+) coverage as possible.
tip

The number of cameras used to reconstruct each individual point is also stored in the point cloud in the UserData dimension.
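The stored per-point camera count can be inspected programmatically. A minimal sketch using the laspy library, assuming the point cloud was exported as LAS/LAZ (reading .laz requires the lazrs or laszip backend; the file name is hypothetical):

```python
import laspy

# Load the point cloud exported with the task's assets (path is hypothetical)
las = laspy.read("odm_georeferenced_model.laz")

# UserData holds the number of cameras used to reconstruct each point
cameras_per_point = las.user_data
print("min:", cameras_per_point.min(), "max:", cameras_per_point.max())

# Fraction of points reconstructed from 5+ cameras (the "green" areas)
well_covered = (cameras_per_point >= 5).mean()
print(f"{well_covered:.1%} of points were seen by 5+ cameras")
```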

GPS/GCP/3D Errors Details

warning

Always use checkpoints to verify the accuracy of results. While the numbers in this section provide good estimates, they are not a substitute for using checkpoints. The uncertainty from GPS measurements is not modeled into these error estimates, and the estimates will only be as good as the precision of the GPS device.

When estimating errors, the program has at its disposal measurements of real-world locations acquired with a GPS device.

Errors are differences between measured and computed values. Computed locations in the model will deviate slightly from the real world measurements, due to factors such as accuracy limitations of the GPS device, lens distortions and computational inaccuracies.

For GPS/GCP errors, this should be intuitive: measured is the position from the GPS device and computed is the position calculated by the software.

3D errors provide a measure of the relative accuracy of the reconstruction. If an object in the real world is 1 meter long, but measures 1.05 meters in the reconstructed model, the 3D error is 0.05 meters.

Technical Details

Since 3D errors have no real-world measurements to compare against, the measured part for each point is calculated by back-projecting the point to the cameras that generated it, estimating a pixel reprojection error, and performing an iterative sampling-based triangulation around the reprojection area, with the goal of finding a worst-case estimate (the maximum 3D error).

The values displayed in the report's tables are calculated from many error measurements and compiled into one metric.

  • GPS uses one measurement for each image that has GPS information.
  • GCPs use one measurement for each GCP entry.
  • 3D uses 1000 measurements sampled from the sparse point cloud.

Metrics are broken into their X, Y (horizontal) and Z (vertical) components:

| Metric | Description |
| --- | --- |
| Mean | The average error of all measurements |
| Standard Deviation | How "spread out" the measurements are from the average |
| RMS Error | The Root Mean Square (RMS) error, an average that ignores whether errors are positive or negative |

Total is a metric showing the average error across all dimensions (X/Y/Z). It is computed by taking the norm over the X/Y/Z components of each error measurement and averaging the results to get a total.
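A sketch of how these metrics could be computed from per-measurement X/Y/Z error components (numpy; the values are made up for illustration):

```python
import numpy as np

# One row per measurement, columns are the X/Y/Z error components
errors = np.array([
    [0.02, -0.01, 0.05],
    [-0.03, 0.02, -0.04],
    [0.01, 0.00, 0.03],
])

mean = errors.mean(axis=0)                 # average error per axis
std = errors.std(axis=0)                   # spread around the average
rms = np.sqrt((errors ** 2).mean(axis=0))  # sign-insensitive average

# Total: norm over the X/Y/Z components of each measurement, then average
total = np.linalg.norm(errors, axis=1).mean()
```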

If provided, checkpoints are displayed in a table separate from the GCPs.

Absolute vs. Relative

Absolute accuracy measures the relation of the model to its real position in the world. Relative accuracy is computed from the 3D errors and relates the proportions of the model to its expected real world dimensions.

note

It's entirely possible to have a model with excellent relative accuracy (all measurements and proportions are correct) but poor absolute accuracy (the entire model is shifted 50 meters from its true real world position).

Technical Details

GCPs are used to estimate absolute accuracy, if they are available. Otherwise GPS information is used.

The report also provides Circular Error (CE90) and Linear Error (LE90) estimates for horizontal X/Y and vertical Z errors, respectively. The number 90 indicates the software is 90% confident that the reported value is equal to or better than the actual value.¹
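A common way to estimate these values from error samples is to take the 90th percentile of the horizontal radial error (CE90) and of the absolute vertical error (LE90). A sketch under that assumption, with synthetic errors (the report's exact method may differ; see the footnote):

```python
import numpy as np

# dx, dy, dz: per-measurement error components (synthetic, for illustration)
rng = np.random.default_rng(0)
dx, dy, dz = rng.normal(0, 0.03, (3, 1000))

ce90 = np.percentile(np.hypot(dx, dy), 90)  # horizontal radial error
le90 = np.percentile(np.abs(dz), 90)        # vertical error
print(f"CE90: {ce90:.3f} m, LE90: {le90:.3f} m")
```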

Feature Details

Image features are points of interest in the input images. This section offers a heatmap of the features' distribution as well as a table listing statistics regarding the number of features detected and reconstructed. Reconstructed features are features that have been used to triangulate at least one point.

tip

The Reconstructed Min. field should be above 50. Smaller values suggest a lack of good features. Changing min-num-features will affect these values; see the sketch after this tip.

For best results, features should be evenly distributed across image frames.
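If you process through the API rather than the web interface, min-num-features can be passed as a task option. A minimal sketch using the PyODM client, assuming a NodeODM-compatible node is reachable at localhost:3000 (host, port, and file names are hypothetical):

```python
from pyodm import Node

# Connect to a NodeODM-compatible processing node (hypothetical address)
node = Node("localhost", 3000)

# Raise min-num-features so more features are extracted per image
task = node.create_task(
    ["images/DJI_0001.JPG", "images/DJI_0002.JPG"],
    {"min-num-features": 12000},
)
task.wait_for_completion()
```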

Reconstruction Details

Average Reprojection Error

Image features (in pixels, in camera space) are triangulated into points in 3D space. When those 3D points are reprojected back onto the images that generated them, the difference in pixel positions is the reprojection error. This error can be expressed in different units (a sketch after this table shows how they relate):

| Unit | Description |
| --- | --- |
| Normalized | Every image feature is associated with a scale parameter that describes how many pixels are covered by the feature. Normalizing means dividing the reprojection error (in pixels) of a feature by its scale |
| Pixels | Reprojection error (in pixels) |
| Angular | The angle (radians) between the camera ray passing through the feature and the ray passing through the back-projected point |

In a nutshell

Smaller reprojection errors = better.
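A rough sketch of how the three units relate for a single feature, using a pinhole model and a small-angle approximation for the angular error (all values, including the focal length, are made up; the software's exact computation may differ):

```python
import numpy as np

# Observed feature position vs. reprojected 3D point, in pixels (made up)
observed = np.array([512.0, 384.0])
reprojected = np.array([512.8, 383.4])
feature_scale = 2.0  # pixels covered by the feature
focal_px = 2400.0    # focal length in pixels (hypothetical camera)

pixel_error = np.linalg.norm(observed - reprojected)  # 1.0 pixel
normalized_error = pixel_error / feature_scale        # 0.5 (unitless)
angular_error = pixel_error / focal_px                # ~0.0004 radians
```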

Average Track Length

Average number of images that have been used to reconstruct a point.

Technical Details

A track is a set of correspondences between features from different images that depict the same object.

Average Track Length (> 2)

Same as above, but without counting tracks of length 2, which are low confidence.

In a nutshell

Higher track lengths = better

Normalized/Pixel/Angular Residuals

Histograms displaying the distribution of reprojection errors. X-axis shows the reprojection error. Y-axis shows the number of points that have that reprojection error. "Residual" in this context is the reprojection error.

Track Details

A graph displaying connectivity between image matches. A well-connected graph indicates a good reconstruction.

The table below displays how many points (Count) have been computed from N images (Length).

For example:

| Length | 2 | 3 |
| --- | --- | --- |
| Count | 1000 | 4000 |

This means 1000 points were computed from 2 images and 4000 points from 3 images.
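From such a table, the average track lengths reported above can be recovered as a count-weighted mean. A sketch using the numbers from this example:

```python
# Track length -> number of points with that length (from the example table)
tracks = {2: 1000, 3: 4000}

# Average Track Length: count-weighted mean over all tracks
avg_length = sum(l * c for l, c in tracks.items()) / sum(tracks.values())
print(avg_length)  # (2*1000 + 3*4000) / 5000 = 2.8

# Average Track Length (> 2): same, excluding low-confidence length-2 tracks
gt2 = {l: c for l, c in tracks.items() if l > 2}
avg_gt2 = sum(l * c for l, c in gt2.items()) / sum(gt2.values())
print(avg_gt2)  # 3.0
```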

Camera Models Details

Information about the cameras' internal parameters. These parameters are estimated and used to remove lens distortion.

  • Initial shows the parameters at the start of the reconstruction.
  • Optimized shows the final parameters at the end of the reconstruction.
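The estimated parameters are also exported with the task's assets in a cameras.json file. A minimal sketch that loads and prints them (the path is hypothetical, and the available keys depend on the camera/projection model):

```python
import json

# cameras.json is exported alongside the other task assets (path hypothetical)
with open("cameras.json") as f:
    cameras = json.load(f)

# One entry per camera model; keys vary with the projection type
for name, params in cameras.items():
    print(name)
    for key, value in params.items():
        print(f"  {key}: {value}")
```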

The image below displays the distribution of the average reprojection errors (residuals) across all images for a particular camera. Each arrow represents the magnitude and direction of the average reprojection error.

Some errors are notably larger in the bottom right corner, indicating difficulty in using image features from that corner.

The Residual Norm gradient displays the scale of the normalized reprojection errors. If the maximum value on this bar is 0.04, it means that a dark purple arrow in the grid indicates a reprojection error of 0.04 (in normalized units).

tip

Look at the scale, not just the size of the arrows, when assessing the reprojection errors.

Footnotes

  1. Computation of scalar accuracy metrics LE, CE, and SE as both predictive and sample based statistics: asprs.org/a/publications/proceedings/IGTF2016/IGTF2016-000255.pdf