Image processing algorithms

Radiometric Correction

Radiometric correction is done when processing L0 raw frames to L1A Top of Atmosphere frames. The image processing pipeline applies several corrections on the raw data in order to correct for the payload’s radiometric distortion. Two main dimensions are addressed: pixel-wise (spatial variance) and global (spectral variance).

Pixel-wise Correction

Pixel-wise distortion affects the spatial variation of the scene. A set of image processing steps is applied to correct it:

  • Dark frame subtraction: Dark frames are calibrated on orbit to reproduce the exact thermal conditions of production imagery. They are obtained by averaging a set of on-orbit captures of oceans at night.

  • Flat field correction: Flat fields are calibrated on orbit to reproduce the exact thermal and optical conditions of production imagery. They are obtained by averaging random captures from production imagery, comprising varied terrains and spectral signatures. The input frames are pre-validated to discard those with more than 10% saturated pixels. Averaging at least 6000 frames has been shown to guarantee convergence of the flat-field uniformity.

  • Non-linearity correction: Non-linearity is calibrated in the lab for each sensor. In the image processing pipeline, the non-linearity correction is applied to the data to recover a linear radiometric response.

  • Bad pixel filtering: Each pixel is compared with the mean estimated from its eight adjacent pixels. If the pixel is found to be an outlier, its value is replaced with the average of the neighboring pixels.

  • PSF deconvolution: The PSF of each payload is measured in the lab during the pre-launch campaigns. After the radiometric corrections, a PSF deconvolution is applied to improve the sharpness of the retrieved imagery.
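
The chain of pixel-wise steps above can be sketched as follows. This is an illustrative sketch, not Satellogic's actual pipeline code; the polynomial linearization and the outlier threshold are assumptions.

```python
import numpy as np

def correct_frame(raw, dark, flat, nl_poly, outlier_sigma=5.0):
    """Pixel-wise radiometric correction sketch.
    nl_poly is a hypothetical polynomial (np.polyval coefficients)
    that linearizes the sensor response."""
    frame = raw.astype(np.float64) - dark           # dark frame subtraction
    frame /= np.where(flat > 0, flat, 1.0)          # flat field correction
    frame = np.polyval(nl_poly, frame)              # non-linearity correction

    # Bad pixel filtering: compare each pixel with the mean of its
    # eight neighbors and replace outliers with that mean.
    padded = np.pad(frame, 1, mode="edge")
    neigh_sum = np.zeros_like(frame)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neigh_sum += padded[1 + dy:frame.shape[0] + 1 + dy,
                                1 + dx:frame.shape[1] + 1 + dx]
    neigh_mean = neigh_sum / 8.0
    resid = frame - neigh_mean
    bad = np.abs(resid) > outlier_sigma * resid.std()
    return np.where(bad, neigh_mean, frame)
```

PSF deconvolution is omitted here; it would follow these steps using the lab-measured PSF.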

Global Correction

Once the spatial variation of the scene is corrected, a global correction factor per spectral band is applied in order to correct for the spectral response of the payload. The sensor quantum efficiency along with the filter and telescope transmissivity are measured during the pre-launch campaigns to retrieve the spectral response function of each payload.
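
Applied to the data, this amounts to one multiplicative factor per band. The gain values below are invented placeholders; in practice they would be derived from the measured spectral response function of each payload.

```python
import numpy as np

# Hypothetical per-band gains derived from the measured spectral response
# (quantum efficiency x filter transmissivity x telescope transmissivity).
BAND_GAINS = {"blue": 1.12, "green": 1.05, "red": 0.98, "nir": 1.21}

def apply_global_correction(bands):
    """Scale each spectral band by its single global factor.
    `bands` maps band name -> 2-D array of pixel-wise-corrected values."""
    return {name: BAND_GAINS[name] * data for name, data in bands.items()}
```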

Top Of Atmosphere Reflectance Correction

Finally, imagery is converted to TOA reflectance (adimensional) by applying the conversion described in the equation below. The solar model is retrieved from the ASTM E-490-00 Standard Extraterrestrial Spectrum Reference (2000).

[TOA Formula]
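
The equation itself is not reproduced in this copy of the document. It is presumably the standard radiance-to-reflectance conversion, sketched below with assumed symbol names: $L$ for at-sensor band radiance, $d$ for the Earth–Sun distance in astronomical units, $E_{sun}$ for the band-averaged solar irradiance from E-490-00, and $\theta_s$ for the solar zenith angle.

```latex
\rho_{TOA} = \frac{\pi \, L \, d^{2}}{E_{sun} \cos\theta_{s}}
```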

Geometric Correction

The goal of the geometric correction is to solve, for each of the frames used to compose a product tile, fine-tuned values for the position and attitude of the camera at the time the frame was taken. The solved values are chosen such that the orthorectified images of each frame (computed from these corrected values, the camera model, and the DEM) match as closely as possible both one another and the reference map (as represented by the GCPs available in that area). Satellogic’s geometric correction process involves matching overlapping content between frames, matching frames to GCPs, and fine-tuning the satellite attitude based on this information.

Inter-Frame Matching

Pairs of raw image frames which have been captured consecutively and have overlapping content are matched against each other, and estimates of the transformation functions between these pairs are computed from the matches. This processing stage produces overlapping frames ready for further processing.
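
For the pure-translation case, estimating the shift between two overlapping frames can be sketched with FFT-based phase correlation. This is an illustrative stand-in; the matching method actually used by the pipeline is not specified here.

```python
import numpy as np

def phase_correlation_shift(frame_a, frame_b):
    """Estimate the integer (dy, dx) translation that maps frame_b
    onto frame_a using phase correlation."""
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    cross = fa * np.conj(fb)
    cross /= np.maximum(np.abs(cross), 1e-12)       # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the correlation peak to signed shifts.
    if dy > frame_a.shape[0] // 2:
        dy -= frame_a.shape[0]
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return int(dy), int(dx)
```

A real matcher must also handle rotation, scale, and perspective terms, typically via feature matching and a robust model fit.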

Frame to GCPs Matching

A major input for the geometric correction algorithm is a set of “matched GCPs” generated for each individual frame using GCPs that were built automatically from reference maps, i.e. geo-referenced imagery taken from an external provider. ESRI imagery is currently used at zoom level 17. To generate GCPs, for each region, a large number of candidate features are extracted from the reference maps. Only the features that were matched to relevant Satellogic frames, with matches that pass a set of filters, are used to generate GCP coordinates (latitude and longitude) from the reference imagery.

For featureful terrain, around 1000 GCP matches are typically found per raw frame (about 40 matches per square kilometer). When there is a sufficient number of matches for a given capture frame, a smaller partial set is chosen that is more evenly distributed over the frame. In harder cases (e.g. desert, snow fields, open water, dense forest, beaches and islands), matches may be concentrated in specific regions of the frame, which affects the geo-accuracy of the resulting image frames. In some situations, ground control points cannot be matched to the collected imagery at all, either because the ground is covered by clouds or because there is no suitable reference imagery to use as a GCP source (e.g. open water). In these cases the state of the spacecraft and camera is modelled, either interpolated or extrapolated, and orthorectification is attempted with the approximate geolocation data and estimated position.
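
Selecting the smaller, evenly distributed subset of matches can be sketched by binning matches into a coarse grid over the frame and keeping the best match per cell. The grid size and scoring are assumptions for illustration; the pipeline's actual selection criteria are not described here.

```python
def thin_matches(matches, frame_w, frame_h, grid=8):
    """Keep at most one GCP match per grid cell.
    `matches` is a list of (x, y, score) tuples in pixel coordinates;
    the highest-scoring match in each cell is retained."""
    best = {}
    for x, y, score in matches:
        cell = (int(x * grid / frame_w), int(y * grid / frame_h))
        if cell not in best or score > best[cell][2]:
            best[cell] = (x, y, score)
    return list(best.values())
```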

Bundle adjustment

At this stage, the satellite position, pointing direction and focal length at the time of capture are fine-tuned. This is done by solving equations that include the inter-frame matches and the GCP matches generated in the previous processing stages. The algorithm also takes into account constraints such as continuity of the position sequence (satellite positions should lie on a valid Earth orbit) and the accuracy of the onboard systems (the solved positions and attitudes should be within the uncertainty range, not too far from the measured telemetry data). Note that the solved camera states, together with the DEM, uniquely determine all the information required for both orthorectification and image composition. Accurate solutions should result in correct alignment of the different frames, which manifests as accurate band alignment and smooth intra-band stitching, as well as good geo-accuracy. Therefore, in optimal conditions, this stage is where band alignment is achieved.
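
The structure of such an adjustment can be sketched on a toy 1-D model: solve per-frame position offsets that explain both the GCP residuals and the inter-frame misalignments, while a prior term keeps the solution close to telemetry. All names and weights here are illustrative assumptions, not the pipeline's actual formulation.

```python
import numpy as np

def adjust_offsets(gcp_resid, pair_resid, prior_weight=0.1):
    """Toy 1-D bundle adjustment via regularized least squares.
    gcp_resid[i]  : observed geolocation error of frame i (meters).
    pair_resid[i] : observed misalignment between frames i and i+1.
    Minimizes  sum (o[i] - gcp_resid[i])^2
             + sum (o[i+1] - o[i] - pair_resid[i])^2
             + prior_weight * sum o[i]^2   # stay near telemetry
    """
    n = len(gcp_resid)
    rows, rhs = [], []
    for i in range(n):                       # GCP match terms
        r = np.zeros(n); r[i] = 1.0
        rows.append(r); rhs.append(gcp_resid[i])
    for i in range(n - 1):                   # inter-frame match terms
        r = np.zeros(n); r[i + 1] = 1.0; r[i] = -1.0
        rows.append(r); rhs.append(pair_resid[i])
    for i in range(n):                       # telemetry prior
        r = np.zeros(n); r[i] = np.sqrt(prior_weight)
        rows.append(r); rhs.append(0.0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol
```

The real problem is of course nonlinear (positions and attitudes on an orbit, projected through the camera model) and is solved iteratively, but the weighted trade-off between match terms and telemetry priors has the same shape.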

Orthorectification

The orthorectification processing solves geometric distortions caused by terrain relief and sensor and satellite position at the time of capture. At this stage, the frames are aligned to a trusted DEM and are projected to a coordinate reference system to create image tiles in a uniform format, which is independent of the particular capture position and camera angles.
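
The resampling step can be sketched as inverse mapping: for each pixel of the output ground grid, compute the corresponding source-image coordinates and sample the frame. The `ground_to_image` callable below is a hypothetical stand-in for the full chain of camera model plus DEM ray intersection.

```python
import numpy as np

def orthorectify(frame, ground_to_image, out_shape):
    """Nearest-neighbor inverse-mapping resample onto a regular grid.
    `ground_to_image(east, north) -> (row, col)` stands in for the
    camera model + DEM intersection."""
    out = np.zeros(out_shape, dtype=frame.dtype)
    for i in range(out_shape[0]):
        for j in range(out_shape[1]):
            r, c = ground_to_image(j, i)     # grid indices as ground coords
            r, c = int(round(r)), int(round(c))
            if 0 <= r < frame.shape[0] and 0 <= c < frame.shape[1]:
                out[i, j] = frame[r, c]
    return out
```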

Image Composition

The image composition stage involves band alignment and image tiling.

Each individual frame contains 4 bands (blue, green, red and NIR). The goal of this stage is to combine the input frames into a product that has all 4 bands for each pixel, with a unique value for each pixel within a given band. For efficient resource usage, the product is split into tiles (4096 × 4096 pixels each) that can be generated independently. This tiling is based on the same grid that was used to orthorectify the input frames.
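
Enumerating the tiles that cover a product can be sketched as below. The tile size comes from the text; the grid-origin convention is an assumption for illustration.

```python
TILE = 4096

def tiles_covering(width, height):
    """Return (row, col, x0, y0, x1, y1) pixel windows of the fixed
    4096 x 4096 tile grid covering a product of the given size; each
    window can be composed independently."""
    tiles = []
    for row in range(0, height, TILE):
        for col in range(0, width, TILE):
            tiles.append((row // TILE, col // TILE,
                          col, row,
                          min(col + TILE, width), min(row + TILE, height)))
    return tiles
```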

Before the actual composition of each tile, an extra co-registration algorithm is applied if the fine-tuned positions solved in the geometric correction stage were not sufficiently accurate. This extra band alignment processing adjusts the positioning of the individual orthorectified frames (based on additional image matching) before composing them into the tile.

Super resolution

Super-Resolution (SR) refers to algorithms that increase the spatial resolution of an image: the number of pixels increases, and fine details appear in the result as if a sensor with a higher nominal resolution had been used.

Satellogic applies its proprietary super-resolution model, based on a Multi-Scale Residual Network (MSRN) and adapted to Satellogic satellite images, as a post-processing step on the native orthorectified product. The model applies a 2× upscaling factor; an image resampling using pixel-area relation then brings the output to the final 0.7 m/px resolution while preserving the original radiometric quality (pixel values).

The following are benefits that are achieved with this processing:

  • Denoising: The SR model increases the SNR of each MS band.

  • Deconvolution: The model uses the learned knowledge to generate the higher frequency details that were lost due to aliasing during the creation of the native 1m images.

  • Zoom effect: The model brings the native 1m images to a synthetic 70cm resolution, uniform across all captures.


Last update: 2024-02-07