Active Image Alignment - 10-band

Note: This notebook specifically demonstrates alignment and stack creation for 10-band data. For alignment of 5-band (RedEdge) and 6-band (Altum) data, or for more details about alignment in general, see the "Active Image Alignment" notebook.

Opening Images

As we have done in previous examples, we use the Capture class from micasense.capture to open, radiometrically correct, and visualize the 10 bands of a RedEdge-MX Dual (Red+Blue) capture.

First, we'll load the autoreload extension. This lets us change underlying code (such as library functions) without having to reload the entire notebook and kernel. This is useful in this notebook because the cell that runs the alignment can take a long time to run, so with the autoreload extension we can update the code used after the alignment step for analysis and visualization without needing to re-compute the alignments each time.

In [1]:
%load_ext autoreload
%autoreload 2
In [2]:
import os, glob
import micasense.capture as capture
%matplotlib inline

panelNames = None
panelCap = None

imagePath = os.path.join('.','data','10BANDSET','000')
imageNames = glob.glob(os.path.join(imagePath,'IMG_0431_*.tif'))
panelNames = glob.glob(os.path.join(imagePath,'IMG_0000_*.tif'))

# Allow this code to align both radiance and reflectance images; by excluding
# a definition for panelNames above, radiance images will be used.
# For panel images, efforts will be made to automatically extract the panel information,
# but if the panel/firmware is before Altum 1.3.5 or RedEdge 5.1.7, the panel reflectance
# will need to be set in the panel_reflectance_by_band variable.
# Note: radiance images cannot be used to properly create the NDVI/NDRE images below.
if panelNames is not None:
    panelCap = capture.Capture.from_filelist(panelNames)
else:
    panelCap = None

capture = capture.Capture.from_filelist(imageNames)

if panelCap is not None:
    if panelCap.panel_albedo() is not None:
        panel_reflectance_by_band = panelCap.panel_albedo()
    else:
        raise IOError("Comment this line and set panel_reflectance_by_band here")
        panel_reflectance_by_band = [0.65]*len(imageNames)
    panel_irradiance = panelCap.panel_irradiance(panel_reflectance_by_band)    
    img_type = "reflectance"
    capture.plot_undistorted_reflectance(panel_irradiance)
else:
    if capture.dls_present():
        img_type='reflectance'
        capture.plot_undistorted_reflectance(capture.dls_irradiance())
    else:
        img_type = "radiance"
        capture.plot_undistorted_radiance()    

Unwarp and Align

Alignment is a three-step process:

  1. Images are unwarped using the built-in lens calibration
  2. A transformation is found to align each band to a common band
  3. The aligned images are combined and cropped, removing pixels which don't overlap in all bands.

We provide utilities to find the alignment transformations within a single capture. Our experience shows that once a good alignment transformation is found, it tends to be stable over a flight and, in most cases, over many flights. The transformation may change if the camera undergoes a shock event (such as a hard landing or drop) or if the temperature changes substantially between flights. In these events a new transformation may need to be found.

Further, since this approach finds a 2-dimensional (affine) transformation between images, it won't work when the parallax between bands results in a 3-dimensional depth field. This can happen when the camera is very close to the target, or when targets are visible at significantly different ranges, such as a nearby tree or building against a background much farther away. In these cases it will be necessary to use photogrammetry techniques to find a 3-dimensional mapping between images.

For best alignment results it's good to select a capture which has features that are visible in all bands. Man-made objects such as cars, roads, and buildings tend to work very well, while captures of only repeating crop rows tend to work poorly. Remember, once a good transformation has been found for a flight, it can generally be applied across all of the images.

It's also good to use an image for alignment which is taken near the same level above ground as the rest of the flight. Above approximately 35m AGL, the alignment will be consistent. However, if images taken closer to the ground are used, such as panel images, the same alignment transformation will not work for the flight data.

In [3]:
import cv2
import numpy as np
import matplotlib.pyplot as plt
import micasense.imageutils as imageutils
import micasense.plotutils as plotutils

## Alignment settings
match_index = 4 # index of the band to which all other bands are aligned; here we use green
max_alignment_iterations = 20
warp_mode = cv2.MOTION_HOMOGRAPHY # MOTION_HOMOGRAPHY or MOTION_AFFINE. For Altum images only use HOMOGRAPHY
pyramid_levels = 3 # for 10-band imagery we use a 3-level pyramid

print("Alinging images. Depending on settings this can take from a few seconds to many minutes")
# Can potentially increase max_iterations for better results, but longer runtimes
warp_matrices, alignment_pairs = imageutils.align_capture(capture,
                                                          ref_index = match_index,
                                                          max_iterations = max_alignment_iterations,
                                                          warp_mode = warp_mode,
                                                          pyramid_levels = pyramid_levels)

print("Finished Aligning, warp matrices={}".format(warp_matrices))
Aligning images. Depending on settings this can take from a few seconds to many minutes
Finished aligning band 4
Finished aligning band 2
Finished aligning band 3
Finished aligning band 5
Finished aligning band 0
Finished aligning band 1
Finished aligning band 9
Finished aligning band 7
Finished aligning band 8
Finished aligning band 6
Finished Aligning, warp matrices=[array([[ 1.0020056e+00, -3.9406412e-04,  2.4968414e+01],
       [ 6.6501042e-04,  1.0005579e+00,  1.7184687e+01],
       [ 2.4267777e-06, -9.4770775e-07,  1.0000000e+00]], dtype=float32), array([[ 9.9140424e-01, -5.4997763e-05,  4.8464928e+01],
       [ 3.4214940e-05,  9.9199688e-01, -1.0904565e+01],
       [-7.3307541e-07,  4.9458703e-07,  1.0000000e+00]], dtype=float32), array([[ 1.0018286e+00, -2.1810613e-04,  5.5261364e+00],
       [ 7.2287896e-04,  1.0021816e+00,  5.8704624e+00],
       [-1.9958026e-08,  9.8048474e-07,  1.0000000e+00]], dtype=float32), array([[ 9.9151838e-01, -5.4842597e-03,  4.1504356e+01],
       [ 3.8410851e-03,  9.9495345e-01,  1.7234241e+01],
       [-3.2994915e-06, -2.6061947e-08,  1.0000000e+00]], dtype=float32), array([[ 1.0000000e+00,  2.4165013e-21,  3.1297349e-14],
       [-1.3839418e-21,  1.0000000e+00,  1.4210855e-14],
       [-1.5799604e-23, -5.9906160e-24,  1.0000000e+00]], dtype=float32), array([[ 9.9775982e-01, -6.4119918e-04,  5.2152943e+01],
       [-3.4182621e-03,  9.9602389e-01,  2.0527115e+01],
       [-4.4730422e-07, -4.8827987e-06,  1.0000000e+00]], dtype=float32), array([[ 9.9623400e-01,  3.1665245e-03,  3.7465309e+01],
       [-6.7898342e-03,  9.9743778e-01,  8.9408970e+00],
       [-3.6395738e-06, -2.4694048e-06,  1.0000000e+00]], dtype=float32), array([[ 9.8944712e-01,  3.8575201e-04,  9.4857979e+00],
       [-4.0420871e-03,  9.8855078e-01,  1.5348701e+01],
       [-2.4452283e-06, -3.8092637e-06,  1.0000000e+00]], dtype=float32), array([[ 9.9748904e-01,  1.6254005e-03,  4.3281814e-01],
       [-7.3913463e-05,  9.9532980e-01,  3.5486279e+01],
       [ 3.8616540e-06, -4.0625605e-07,  1.0000000e+00]], dtype=float32), array([[ 9.9991107e-01,  6.6631651e-03, -9.0683794e+00],
       [-9.0075899e-03,  9.9834883e-01,  1.5179089e+01],
       [-1.8021225e-07, -3.6330111e-06,  1.0000000e+00]], dtype=float32)]
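
Since a good transformation tends to be stable over a flight, it can be worth persisting the matrices found above so that other captures from the same flight can be processed without re-running this slow step. Below is a minimal sketch, assuming we are free to choose a file name (warp_matrices_10band.npy is hypothetical):

import numpy as np

# save the ten 3x3 warp matrices as a single (10, 3, 3) array
np.save('warp_matrices_10band.npy', np.asarray(warp_matrices))

# ...later, or in another session, restore them as a list of 3x3 arrays
warp_matrices = list(np.load('warp_matrices_10band.npy'))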

Crop Aligned Images

After finding image alignments we may need to remove pixels around the edges which aren't present in every image in the capture. To do this we use the affine transforms found above and the image distortions from the image metadata. OpenCV provides a couple of handy helpers for this task in the cv2.undistortPoints() and cv2.transform() methods. These methods take a set of pixel coordinates and apply our undistortion matrix and our affine transform, respectively. So, just as we did when registering the images, we first apply the undistortion process to the coordinates of the image borders, then we apply the affine transformation to that result. The resulting pixel coordinates tell us where the image borders end up after this pair of transformations, and we can then crop the resultant image to these coordinates.
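
To make the mechanics concrete, here is a simplified, hypothetical sketch of that corner mapping for a single band using an affine transform. In the real helper the camera matrix, distortion coefficients, and warp matrix come from the image metadata and the alignment step; the values below are placeholders only.

import cv2
import numpy as np

h, w = 960, 1280                                  # placeholder image size
cam_matrix = np.array([[1450., 0., w/2],
                       [0., 1450., h/2],
                       [0., 0., 1.]])             # placeholder intrinsics
dist_coeffs = np.array([-0.1, 0.05, 0., 0., 0.])  # placeholder distortion
warp = np.array([[1.0, 0.0, 25.0],
                 [0.0, 1.0, 17.0]])               # placeholder 2x3 affine

# image border (corner) coordinates, shaped (N, 1, 2) as OpenCV expects
corners = np.array([[[0, 0]], [[w-1, 0]], [[w-1, h-1]], [[0, h-1]]], dtype=np.float64)

# step 1: undistort the border coordinates; P=cam_matrix keeps pixel units
undistorted = cv2.undistortPoints(corners, cam_matrix, dist_coeffs, P=cam_matrix)

# step 2: apply the alignment transform to the undistorted corners
transformed = cv2.transform(undistorted, warp)

# the intersection of every band's transformed border gives the common crop
print(transformed.reshape(-1, 2))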

In [4]:
cropped_dimensions, edges = imageutils.find_crop_bounds(capture, warp_matrices, warp_mode=warp_mode)
im_aligned = imageutils.aligned_capture(capture, warp_matrices, warp_mode, cropped_dimensions, match_index, img_type=img_type)

Visualize Aligned Images

Once the transformation has been found, it can be verified by compositing the aligned images to check the alignment. The image 'stack' containing all bands can also be exported to a multi-band TIFF file for viewing in external software such as QGIS. Useful composites are a naturally colored RGB as well as color infrared, or CIR.

In [5]:
# figsize=(30,23) # use this size for full-image-resolution display
figsize=(16,13)   # use this size for export-sized display

rgb_band_indices = [capture.band_names().index('Red'),capture.band_names().index('Green'),capture.band_names().index('Blue-444')]
cir_band_indices = [capture.band_names().index('NIR'),capture.band_names().index('Red'),capture.band_names().index('Green')]

# Create a normalized stack for viewing
im_display = np.zeros((im_aligned.shape[0],im_aligned.shape[1],im_aligned.shape[2]), dtype=np.float32 )

im_min = np.percentile(im_aligned[:,:,rgb_band_indices].flatten(), 0.5)  # modify these percentiles to adjust contrast
im_max = np.percentile(im_aligned[:,:,rgb_band_indices].flatten(), 99.5)  # for many images, 0.5 and 99.5 are good values

# for rgb true color, we use the same min and max scaling across the 3 bands to 
# maintain the "white balance" of the calibrated image
for i in rgb_band_indices:
    im_display[:,:,i] =  imageutils.normalize(im_aligned[:,:,i], im_min, im_max)

rgb = im_display[:,:,rgb_band_indices]

# for cir false color imagery, we normalize the NIR,R,G bands within themselves, which provides
# the classical CIR rendering where plants are red and soil takes on a blue tint
for i in cir_band_indices:
    im_display[:,:,i] =  imageutils.normalize(im_aligned[:,:,i])

cir = im_display[:,:,cir_band_indices]
fig, axes = plt.subplots(1, 2, figsize=figsize)
axes[0].set_title("Red-Green-Blue Composite")
axes[0].imshow(rgb)
axes[1].set_title("Color Infrared (CIR) Composite")
axes[1].imshow(cir)
plt.show()

Image Enhancement

There are many techniques for image enhancement, but one which is commonly used to improve the visual sharpness of imagery is the unsharp mask. Here we apply an unsharp mask to the RGB image to improve the visualization, and then apply a gamma curve to make the darkest areas brighter.

In [6]:
# Create an enhanced version of the RGB render using an unsharp mask
gaussian_rgb = cv2.GaussianBlur(rgb, (9,9), 10.0)
gaussian_rgb[gaussian_rgb<0] = 0
gaussian_rgb[gaussian_rgb>1] = 1
unsharp_rgb = cv2.addWeighted(rgb, 1.5, gaussian_rgb, -0.5, 0)
unsharp_rgb[unsharp_rgb<0] = 0
unsharp_rgb[unsharp_rgb>1] = 1

# Apply a gamma correction to make the render appear closer to what our eyes would see
gamma = 1.4
gamma_corr_rgb = unsharp_rgb**(1.0/gamma)
fig = plt.figure(figsize=figsize)
plt.imshow(gamma_corr_rgb, aspect='equal')
plt.axis('off')
plt.show()

Image Export

Composite images can be exported to JPEG or PNG format using the imageio package. These images may be useful for visualization or thumbnailing, and creating RGB thumbnails of a set of images can provide a convenient way to browse the imagery in a more visually appealing way than browsing the raw imagery.

In [7]:
import imageio
imtype = 'png' # or 'jpg'
imageio.imwrite('rgb.'+imtype, (255*gamma_corr_rgb).astype('uint8'))
imageio.imwrite('cir.'+imtype, (255*cir).astype('uint8'))

Stack Export

We can easily export the image stacks using the GDAL library (https://www.gdal.org). Once exported, these image stacks can be opened in software such as QGIS, and raster operations such as NDVI or NDRE computation can be done in that software. At this time the stacks don't include any geographic information.

In [8]:
from osgeo import gdal, gdal_array
rows, cols, bands = im_display.shape
driver = gdal.GetDriverByName('GTiff')
filename = "bbggrreeen" #blue,blue,green,green,red,red,rededge,rededge,rededge,nir

sort_by_wavelength = True # set to false if you want stacks in camera-band-index order
    
outRaster = driver.Create(filename+".tiff", cols, rows, bands, gdal.GDT_UInt16, options = [ 'INTERLEAVE=BAND','COMPRESS=DEFLATE' ])
try:
    if outRaster is None:
        raise IOError("could not load gdal GeoTiff driver")

    if sort_by_wavelength:
        eo_list = list(np.argsort(capture.center_wavelengths()))
    else:
        eo_list = capture.eo_indices()

    for outband,inband in enumerate(eo_list):
        outRasterBand = outRaster.GetRasterBand(outband+1)
        outdata = im_aligned[:,:,inband]
        outdata[outdata<0] = 0
        outdata[outdata>2] = 2   # limit reflectance data to 200% to allow some specular reflections
        outRasterBand.WriteArray(outdata*32768) # scale reflectance images so 100% = 32768
        outRasterBand.FlushCache()

    for outband,inband in enumerate(capture.lw_indices()):
        outRasterBand = outRaster.GetRasterBand(len(eo_list)+outband+1)
        outdata = (im_aligned[:,:,inband]+273.15) * 100 # scale from float degC back to centi-Kelvin to fit into uint16
        outdata[outdata<0] = 0
        outdata[outdata>65535] = 65535
        outRasterBand.WriteArray(outdata)
        outRasterBand.FlushCache()
finally:
    outRaster = None

Notes on Alignment and Stack Usage

"Stacks" as described above are useful in a number of processing cases. For example, at the time of this writing, many photogrammetry suites could import and process stack files without significantly impacting the radiometric processing which has already been accomplished.

Running photogrammetry on stack files instead of raw image files has both advantages and drawbacks. The primary advantage has been found to be an increase in processing speed and a reduction in program memory usage. As the photogrammetric workflow generally operates on luminance images and may not use color information, stacked images may require similar resources, and be processed at a similar speed, as single-band images. This is because one band of the stack can be used to generate the matching feature space while the others are ignored for matching purposes. For this 10-band data, that reduces the feature space 10-fold compared to matching all images separately.

One disadvantage is that stacking images outside of the photogrammetric workflow may result in poor image matching. The RedEdge is known to have stable lens characteristics over the course of normal operation, but variations in temperature or impacts to the camera through handling or rough landings may change the image alignment parameters. For this reason, we recommend finding a matching transformation for each flight (each take-off and landing). Alignment transformations from multiple images within a flight can be compared, as sketched below, to find the best transformation to apply across the whole flight. While not described or supported in this generic implementation, some matching algorithms can use a "seed" value as a starting point to speed up matching. For most cases, this seed could be the transformation found in a previous flight, or another source of a known good transformation.
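
While a seed is not supported here, transformations found from several captures can still be compared after the fact. Here is a hedged sketch (not part of the micasense library) of one such comparison: look at the spread of the per-band translation components across captures, where a small spread suggests the alignment is stable.

import numpy as np

def translation_spread(warp_matrix_sets):
    # warp_matrix_sets: a list of per-capture lists of 3x3 warp matrices
    stacks = np.asarray(warp_matrix_sets)   # shape (n_captures, n_bands, 3, 3)
    translations = stacks[:, :, :2, 2]      # per-band x/y pixel shifts
    return translations.std(axis=0)         # spread across captures, per band

# hypothetical usage with matrices found from two captures in one flight:
# spread = translation_spread([warp_matrices_a, warp_matrices_b])
# print(spread)  # large values suggest the alignment changed mid-flight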

NDVI Computation

For raw index computation on single images, the numpy package provides a simple way to do math and simple visualization on images. Below, we compute and visualize an image histogram and then use that to pick a colormap range for visualizing the NDVI of an image.

Plant Classification

After computing the NDVI and prior to displaying it, we use a very rudimentary method for focusing on the plants and removing the soil and shadow information from our images and histograms. Below we remove non-plant pixels by masking out any pixels in the image where the NIR reflectance is less than 20%. This helps to ensure that the NDVI and NDRE histograms aren't skewed substantially by soil noise.

In [9]:
from micasense import plotutils
import matplotlib.pyplot as plt

nir_band = [name.lower() for name in capture.band_names()].index('nir')
red_band = [name.lower() for name in capture.band_names()].index('red')

np.seterr(divide='ignore', invalid='ignore') # ignore divide by zero errors in the index calculation

# Compute Normalized Difference Vegetation Index (NDVI) from the NIR and red bands found above
ndvi = (im_aligned[:,:,nir_band] - im_aligned[:,:,red_band]) / (im_aligned[:,:,nir_band] + im_aligned[:,:,red_band])

# remove shadowed areas (mask pixels with NIR reflectance < 20%)
if img_type == 'reflectance':
    ndvi = np.ma.masked_where(im_aligned[:,:,nir_band] < 0.20, ndvi) 
elif img_type == 'radiance':
    lower_pct_radiance = np.percentile(im_aligned[:,:,nir_band],  10.0)
    ndvi = np.ma.masked_where(im_aligned[:,:,nir_band] < lower_pct_radiance, ndvi) 
    
# Compute and display a histogram
ndvi_hist_min = np.min(ndvi)
ndvi_hist_max = np.max(ndvi)
fig, axis = plt.subplots(1, 1, figsize=(10,4))
axis.hist(ndvi.ravel(), bins=512, range=(ndvi_hist_min, ndvi_hist_max))
plt.title("NDVI Histogram")
plt.show()

min_display_ndvi = 0.45 # further mask soil by removing low-ndvi values
#min_display_ndvi = np.percentile(ndvi.flatten(),  5.0)  # modify these percentiles to adjust contrast
max_display_ndvi = np.percentile(ndvi.flatten(), 99.5)  # for many images, 0.5 and 99.5 are good values
masked_ndvi = np.ma.masked_where(ndvi < min_display_ndvi, ndvi)

#reduce the figure size to account for colorbar
figsize=np.asarray(figsize) - np.array([3,2])

#plot NDVI over an RGB basemap, with a colorbar showing the NDVI scale
fig, axis = plotutils.plot_overlay_withcolorbar(gamma_corr_rgb, 
                                    masked_ndvi, 
                                    figsize = figsize, 
                                    title = 'NDVI filtered to only plants over RGB base layer',
                                    vmin = min_display_ndvi,
                                    vmax = max_display_ndvi)
fig.savefig('ndvi_over_rgb.png')

NDRE Computation

In the same manner, we can compute, filter, and display another index useful for the RedEdge camera, the Normalized Difference Red Edge (NDRE) index. We also filter out shadows and soil to ensure our display focuses only on the plant health.

In [10]:
# Compute Normalized Difference Red Edge (NDRE) index from the NIR and red edge bands found above
rededge_band = [name.lower() for name in capture.band_names()].index('red edge')
ndre = (im_aligned[:,:,nir_band] - im_aligned[:,:,rededge_band]) / (im_aligned[:,:,nir_band] + im_aligned[:,:,rededge_band])

# Mask areas with shadows and low NDVI to remove soil
masked_ndre = np.ma.masked_where(ndvi < min_display_ndvi, ndre)

# Compute a histogram
ndre_hist_min = np.min(masked_ndre)
ndre_hist_max = np.max(masked_ndre)
fig, axis = plt.subplots(1, 1, figsize=(10,4))
axis.hist(masked_ndre.ravel(), bins=512, range=(ndre_hist_min, ndre_hist_max))
plt.title("NDRE Histogram (filtered to only plants)")
plt.show()

min_display_ndre = np.percentile(masked_ndre, 5)
max_display_ndre = np.percentile(masked_ndre, 99.5)

fig, axis = plotutils.plot_overlay_withcolorbar(gamma_corr_rgb, 
                                    masked_ndre, 
                                    figsize=figsize, 
                                    title='NDRE filtered to only plants over RGB base layer',
                                    vmin=min_display_ndre,vmax=max_display_ndre)
fig.savefig('ndre_over_rgb.png')

Red vs NIR Reflectance

Finally, we show a classic agricultural remote sensing output in the tasseled cap plot. This plot can be useful for visualizing row crops: it plots the red reflectance channel on the X-axis against the NIR reflectance channel on the Y-axis, and the soil line shows up clearly in that space. The tasseled cap view isn't very useful for this arid data set; however, we can see the "badge of trees" of high NIR reflectance and relatively low red reflectance. This provides an example of one of the uses of aligned images for single-capture analysis.

In [11]:
x_band = red_band
y_band = nir_band
x_max = np.max(im_aligned[:,:,x_band])
y_max = np.max(im_aligned[:,:,y_band])

fig = plt.figure(figsize=(12,12))
plt.hexbin(im_aligned[:,:,x_band],im_aligned[:,:,y_band],gridsize=640,bins='log',extent=(0,x_max,0,y_max))
ax = fig.gca()
ax.set_xlim([0,x_max])
ax.set_ylim([0,y_max])
plt.xlabel("{} Reflectance".format(capture.band_names()[x_band]))
plt.ylabel("{} Reflectance".format(capture.band_names()[y_band]))
plt.show()

Last, we output the warp_matrices found for this image stack so they can be used elsewhere. Currently these can be used in the Batch Processing.ipynb notebook to save reflectance-compensated stacks of images to a directory.

In [12]:
print(warp_matrices)
[array([[ 1.0020056e+00, -3.9406412e-04,  2.4968414e+01],
       [ 6.6501042e-04,  1.0005579e+00,  1.7184687e+01],
       [ 2.4267777e-06, -9.4770775e-07,  1.0000000e+00]], dtype=float32), array([[ 9.9140424e-01, -5.4997763e-05,  4.8464928e+01],
       [ 3.4214940e-05,  9.9199688e-01, -1.0904565e+01],
       [-7.3307541e-07,  4.9458703e-07,  1.0000000e+00]], dtype=float32), array([[ 1.0018286e+00, -2.1810613e-04,  5.5261364e+00],
       [ 7.2287896e-04,  1.0021816e+00,  5.8704624e+00],
       [-1.9958026e-08,  9.8048474e-07,  1.0000000e+00]], dtype=float32), array([[ 9.9151838e-01, -5.4842597e-03,  4.1504356e+01],
       [ 3.8410851e-03,  9.9495345e-01,  1.7234241e+01],
       [-3.2994915e-06, -2.6061947e-08,  1.0000000e+00]], dtype=float32), array([[ 1.0000000e+00,  2.4165013e-21,  3.1297349e-14],
       [-1.3839418e-21,  1.0000000e+00,  1.4210855e-14],
       [-1.5799604e-23, -5.9906160e-24,  1.0000000e+00]], dtype=float32), array([[ 9.9775982e-01, -6.4119918e-04,  5.2152943e+01],
       [-3.4182621e-03,  9.9602389e-01,  2.0527115e+01],
       [-4.4730422e-07, -4.8827987e-06,  1.0000000e+00]], dtype=float32), array([[ 9.9623400e-01,  3.1665245e-03,  3.7465309e+01],
       [-6.7898342e-03,  9.9743778e-01,  8.9408970e+00],
       [-3.6395738e-06, -2.4694048e-06,  1.0000000e+00]], dtype=float32), array([[ 9.8944712e-01,  3.8575201e-04,  9.4857979e+00],
       [-4.0420871e-03,  9.8855078e-01,  1.5348701e+01],
       [-2.4452283e-06, -3.8092637e-06,  1.0000000e+00]], dtype=float32), array([[ 9.9748904e-01,  1.6254005e-03,  4.3281814e-01],
       [-7.3913463e-05,  9.9532980e-01,  3.5486279e+01],
       [ 3.8616540e-06, -4.0625605e-07,  1.0000000e+00]], dtype=float32), array([[ 9.9991107e-01,  6.6631651e-03, -9.0683794e+00],
       [-9.0075899e-03,  9.9834883e-01,  1.5179089e+01],
       [-1.8021225e-07, -3.6330111e-06,  1.0000000e+00]], dtype=float32)]
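
To move the matrices between notebooks without copying and pasting printed output, they can also be written to a file. A small sketch follows; the file name is hypothetical and JSON is just one convenient, human-readable choice:

import json
import numpy as np

# write the matrices as nested lists so the file is portable and readable
with open('warp_matrices.json', 'w') as f:
    json.dump([wm.tolist() for wm in warp_matrices], f)

# read them back as float32 arrays, matching what align_capture produced
with open('warp_matrices.json') as f:
    warp_matrices = [np.array(wm, dtype=np.float32) for wm in json.load(f)]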

Copyright (c) 2017-2018 MicaSense, Inc. For licensing information see the project git repository