## Where are the inverse equations hiding?

A common problem in computer vision is modelling lens distortion. Lens distortion distorts the shape of objects and introduces large errors in structure-from-motion applications. Techniques and tools for calibrating cameras and removing lens distortion are widely available. While books and papers readily provide the forward distortion equations (given an ideal undistorted coordinate, compute the distorted coordinate), the inverse equations are much harder to come by.

It turns out there is no analytic equation for inverting radial distortion. That might explain the mysterious absence of inverse equations, but it would still be nice if this issue were discussed in common references. First a brief summary of the background equations is given, followed by the solution to the inverse distortion problem.

**Inverse Distortion Problem:** *Given a distorted image coordinate and the distortion parameters, determine the ideal undistorted coordinate.*

## Background

Unlike the idealized pin-hole camera model, real-world cameras have lens distortions. Radial distortion is a common model for lens distortion and is summarized by the following equations:

$$x_d = x_u\,(1 + k_1 r^2 + k_2 r^4)$$
$$y_d = y_u\,(1 + k_1 r^2 + k_2 r^4)$$
$$r^2 = x_u^2 + y_u^2$$

where $(u_d, v_d)$ and $(u_u, v_u)$ are the observed distorted and ideal undistorted pixel coordinates, $(x_d, y_d)$ and $(x_u, y_u)$ are the observed distorted and ideal undistorted normalized pixel coordinates, and $k_1, k_2$ are radial distortion coefficients. The relationship between normalized and unnormalized pixel coordinates is determined by the camera calibration matrix, $(u, v, 1)^T = K\,(x, y, 1)^T$, where $K$ is the 3×3 calibration matrix.
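As a concrete sketch, the forward model can be written as follows (a minimal illustration of the two-coefficient model; function and variable names are my own, not from any particular library):

```python
def distort(x_u, y_u, k1, k2):
    """Apply radial distortion to an ideal undistorted normalized coordinate."""
    r2 = x_u * x_u + y_u * y_u           # squared distance from the principal point
    m = 1.0 + k1 * r2 + k2 * r2 * r2     # radial distortion factor
    return x_u * m, y_u * m
```

Note that in normalized coordinates the principal point sits at the origin, which is why no image center appears explicitly here.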

## Solution

While there is no analytic solution, there is an iterative one. The iterative solution works by first estimating the radial distortion’s magnitude at the distorted point and then refining the estimate until it converges:

- initialize $(x_u, y_u) \leftarrow (x_d, y_d)$
- do until convergence:
    - $r^2 \leftarrow x_u^2 + y_u^2$
    - $(x_u, y_u) \leftarrow \left( \frac{x_d}{1 + k_1 r^2 + k_2 r^4},\ \frac{y_d}{1 + k_1 r^2 + k_2 r^4} \right)$

where $r = \|(x_u, y_u)\|$ is the Euclidean norm of the current estimate, measured from the principal point/image center (the origin in normalized coordinates). Only a few iterations are required before convergence. For added speed when applying this to an entire image, the results can be cached. A similar solution can be found by adding in terms for tangential distortion.
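The iteration above can be sketched in a few lines (the tolerance and iteration cap are arbitrary choices of mine, and names are illustrative):

```python
def undistort(x_d, y_d, k1, k2, tol=1e-10, max_iters=100):
    """Iteratively invert x_d = x_u * (1 + k1*r^2 + k2*r^4) for the undistorted point."""
    x_u, y_u = x_d, y_d                      # initial guess: the distorted point itself
    for _ in range(max_iters):
        r2 = x_u * x_u + y_u * y_u           # radius of the current estimate
        m = 1.0 + k1 * r2 + k2 * r2 * r2
        x_next, y_next = x_d / m, y_d / m
        converged = abs(x_next - x_u) < tol and abs(y_next - y_u) < tol
        x_u, y_u = x_next, y_next
        if converged:
            break
    return x_u, y_u
```

A quick round trip checks the result: distorting a point and then undistorting it should recover the original coordinate to within the tolerance.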

Any chance you could point me towards a few (academic) reference articles?

Just about any 3D computer vision book (e.g. [1]) should provide the equations for radial distortion. As mentioned above, they don’t seem to discuss the inverse problem, as far as I am aware. The popular MATLAB camera calibration toolbox found at http://www.vision.caltech.edu/bouguetj/calib_doc/ uses a similar formula when it computes the inverse radial distortion.

[1] Richard Hartley and Andrew Zisserman, *Multiple View Geometry in Computer Vision*, Cambridge University Press, 2003.

Brilliant…thanks!

Unfortunately there can be multiple solutions to the inverse problem.

The iteration will then stick to the nearest minimum, which may not be the original (undistorted) value.

I know this does not pose a problem in simple applications, but estimating lens distortion from points (e.g. camera calibration, image registration, bundle adjustment) with some noise added may cause such an issue.
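As a small numeric illustration of the multiple-solution issue (using an exaggerated one-coefficient model; the value of $k_1$ is chosen purely to make the effect obvious): the radial mapping is non-monotonic for strongly negative coefficients, so two distinct undistorted radii can produce the same distorted radius.

```python
def distorted_radius(r_u, k1):
    # Radial mapping of a one-coefficient model: r_d = r_u * (1 + k1 * r_u^2)
    return r_u * (1.0 + k1 * r_u * r_u)

k1 = -0.5
r_a = 1.0
r_b = (5 ** 0.5 - 1) / 2   # the other positive root of r - 0.5*r^3 = 0.5
# Both undistorted radii map to the same distorted radius 0.5,
# so inverting r_d = 0.5 is ambiguous.
print(distorted_radius(r_a, k1), distorted_radius(r_b, k1))
```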

I tried to solve this by splitting the distortion function into a part that is monotonic up to a certain point, after which it is replaced by a linear function. This ensures the function is differentiable and invertible everywhere.
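A minimal sketch of that split, assuming an illustrative one-coefficient model with a hand-picked break point (the break point must lie below the critical radius $\sqrt{-1/(3 k_1)}$ so the slope stays positive; names are mine):

```python
def clamped_distortion(r_u, k1, r_break):
    """Follow r*(1 + k1*r^2) up to r_break, then continue linearly
    with the slope at r_break, keeping the map C^1 and invertible."""
    if r_u <= r_break:
        return r_u * (1.0 + k1 * r_u * r_u)
    f_b = r_break * (1.0 + k1 * r_break * r_break)   # value at the break point
    slope = 1.0 + 3.0 * k1 * r_break * r_break       # derivative at the break point
    return f_b + slope * (r_u - r_break)
```

Because the slope at the break point is strictly positive, the piecewise map is strictly increasing everywhere and therefore has a unique inverse.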

However, the solution is quite messy and the “breaking point” itself depends on the kappa parameters. This causes my unit tests to fail because the partial derivatives w.r.t. kappa are badly defined.

Any ideas?

I’ve never bothered to go into that much detail in trying to find the globally optimal solution. Basically, during calibration you use the forward equations (I just double-checked that statement by looking at the code), and RANSAC or some other robust technique removes outliers (which would include poor convergence) when reconstructing a scene.

How much of an error are you seeing caused by poor convergence? Are these highly distorted lenses?