Fundamentals of



Image Warping

[pic] [pic]

No warping Warping

By

Pavanesh S N

Roll No. 726

Exam Reg. No: 2SD98CS032

Examiner

Abstract

Image warping (distortion of an image) is a core technique in image processing. This seminar introduces the concept of image warping and explains one of the mapping techniques used for it, the projective mapping. It then treats the special case of Euclidean warping, along with a discussion of a Matlab implementation. An application of image warping is given, namely image mosaicing, where images are stitched together, e.g. to form a panoramic view.

Finally, many uses of image warping are given and briefly discussed.

Contents

1. Introduction

• Image Warping

• Projective mapping for image warping

2. Euclidean Warping

3. Image Mosaicing with Matlab Implementation

4. Other Warping Techniques

5. Implementation Details

6. Uses of Image Warping

1. Introduction

Image Warping

Image warping is the act of distorting a source image into a destination image according to a mapping between source space (u, v) and destination space (x, y). The mapping is usually specified by the functions x(u, v) and y(u, v).

Image warping is used in image processing primarily for the correction of geometric distortions introduced by imperfect imaging systems. Camera lenses sometimes introduce pincushion or barrel distortion, perspective views introduce a projective distortion, and other nonlinear optical components can create more complex distortions. In image processing we typically warp an image to remove its distortions, while in computer graphics we usually introduce one. Image warps are also used for artistic purposes and special effects in interactive paint programs. For image processing applications, the mapping may be derived from a model of the geometric distortions of a system, but more typically the mapping is inferred from a set of corresponding points in the source and destination images. The point correspondence can be automatic, as in stereo matching, or manual, as in paint programs. Most geometric correction systems support a limited set of mappings, usually parameterized by a grid of control points.

Image warping is useful in creating facial animations because the movements can be very smooth, with seamless transitions from one state or position to another. Control points (as in muscle-based models or underlying mesh models) or images (such as texture maps) may be warped. Warping is the distortion method that distorts and changes an image. Using one face as the starting face and another as the target face, one may change a facial expression through the use of control lines on both faces. This is more commonly used for facial models than morphing.

Projective Mapping for Image Warping

Homogeneous Notation

The homogeneous representation of points provides a consistent notation for affine and projective mappings. Homogeneous notation was used in projective geometry long before its introduction to computer graphics.

In familiar Euclidean geometry we represent points of the real plane R^2 by vectors of the form (x, y). Projective geometry deals with the projective plane, a superset of the real plane, whose homogeneous coordinates are (x', y', w). In projective geometry the 2D real point (x, y) is represented by the homogeneous vector p = (x', y', w), where w is an arbitrary nonzero number. Vectors of the form (xw, yw, w) for w != 0 form the equivalence class of homogeneous representations for the real point (x, y). To recover the actual coordinates from a homogeneous vector, we simply divide by the homogeneous component; e.g., the homogeneous vector p = (xw, yw, w) = (x', y', w) represents the actual point (x, y) = (x'/w, y'/w). This division, a projection onto the w = 1 plane, cancels the effect of scalar multiplication by w. When representing real points with homogeneous notation we could use any nonzero w, but it is usually most convenient to choose w = 1 so that the real coordinates can be recovered without division. Projective space also includes the points at infinity: vectors of the form (x', y', 0), excluding (0, 0, 0). The points at infinity lie on the line at infinity.
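The divide-by-w convention above can be sketched in a few lines. The implementations later in this seminar are in Matlab; here is a rough Python/NumPy illustration (the function names are my own, for illustration only):

```python
import numpy as np

def to_homogeneous(x, y, w=1.0):
    """Represent the real point (x, y) by the homogeneous vector (xw, yw, w), w != 0."""
    return np.array([x * w, y * w, w])

def from_homogeneous(p):
    """Project onto the w = 1 plane: divide by the homogeneous component."""
    return np.array([p[0] / p[2], p[1] / p[2]])

# (3, 4, 1) and (15, 20, 5) are in the same equivalence class:
p1 = to_homogeneous(3.0, 4.0)         # w = 1: coordinates recovered without division
p2 = to_homogeneous(3.0, 4.0, 5.0)    # any nonzero w represents the same point
print(from_homogeneous(p1), from_homogeneous(p2))   # both give [3. 4.]
```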

In homogeneous notation, 2D points are represented by 3-vectors and 3D points are represented by 4-vectors. For both affine and projective mappings, we denote points in source space by ps = (u', v', q) and points in destination space by pd = (x', y', w).

There are three simple mapping techniques: affine mapping, bilinear mapping, and projective mapping. The commonly used projective mapping is explained below.

Projective Mappings

The projective mapping, also known as the perspective or homogeneous transformation, is a projection from one plane through a point onto another plane. Homogeneous transformations are used extensively for 3D affine modeling transformations and for perspective camera transformations. The 2D projective mappings studied here are a subset of these familiar 3D homogeneous transformations.

In matrix notation, with points written as row vectors, the forward mapping is:

[ x'  y'  w ] = [ u  v  q ] [ a  d  g ]
                            [ b  e  h ]
                            [ c  f  i ]

so that the real destination coordinates are x = x'/w = (au + bv + c)/(gu + hv + i) and y = y'/w = (du + ev + f)/(gu + hv + i).

Although there are nine coefficients in the matrix above, these mappings are homogeneous, so any nonzero scalar multiple of the matrix gives an equivalent mapping. Hence there are only eight degrees of freedom in a 2D projective mapping. We can assume without loss of generality that i = 1, except in the special case where the source point (0, 0) maps to a point at infinity. A projective mapping is affine when g = h = 0.
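As a quick check of the scalar-multiple property, the sketch below (Python rather than this seminar's Matlab; the 3x3 matrix M is an arbitrary example of mine, written in column-vector convention) maps a point through M and through 5M and obtains the same real coordinates:

```python
import numpy as np

def project(M, u, v):
    """Map (u, v) through the 3x3 projective matrix M
    (column-vector convention: [x', y', w]^T = M [u, v, 1]^T)."""
    xp, yp, w = M @ np.array([u, v, 1.0])
    return xp / w, yp / w        # divide by w to recover real coordinates

M = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 3.0],
              [0.1, 0.0, 1.0]])   # bottom row not (0, 0, 1): not affine

print(project(M, 1.0, 1.0))       # same result ...
print(project(5.0 * M, 1.0, 1.0)) # ... for any nonzero scalar multiple of M
```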

A projective mapping will in general map the line at infinity to a line in the real plane. We can think of this line as the horizon line of all vanishing points, by analogy to the perspective projection. Affine mappings are the special case of projective mappings that map the line at infinity into itself. By defining the projective mapping over the projective plane, and not just the real plane, projective mappings become bijections (one-to-one and onto), except when the mapping matrix is singular. For nondegenerate mappings the forward and inverse transforms are single-valued, just as for an affine mapping.

Projective mappings share many of the desirable properties of affine mappings. Unlike bilinear mappings, which preserve equispaced points along certain lines, projective mappings do not in general preserve equispaced points (as in the figure below). Instead they preserve a quantity called the "cross ratio" of points. Like affine mappings, projective mappings preserve lines at all orientations. In fact, projective mappings are the most general line-preserving bijective mappings. Projective mappings may be composed by concatenating their matrices.

Another remarkable property is that the inverse of a projective mapping is a projective mapping. This is intuitively explained by reversing the plane-to-plane mapping by which a projective mapping is defined. The matrix for the inverse mapping is the inverse or adjoint of the forward mapping. In homogeneous algebra, the adjoint matrix can be used in place of the inverse matrix whenever an inverse transform is needed, since the two are scalar multiples of each other, and the adjoint always exists, while the inverse does not if the matrix is singular. The inverse transformation is thus:

[ u'  v'  q ] = [ x  y  w ] [ A  D  G ]  =  [ x  y  w ] [ ei-fh  fg-di  dh-eg ]
                            [ B  E  H ]                 [ ch-bi  ai-cg  bg-ah ]
                            [ C  F  I ]                 [ bf-ce  cd-af  ae-bd ]

When mapping a point by the inverse transform we compute (u, v) from (x, y). If w != 0 and q != 0 then we can choose w = 1 and calculate:

u = u'/q = (Ax + By + C) / (Gx + Hy + I)
v = v'/q = (Dx + Ey + F) / (Gx + Hy + I)
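The adjoint-instead-of-inverse trick is easy to demonstrate. Below is a small Python sketch (my own example matrix; the text above gives no numbers) that computes the 3x3 adjugate directly from cofactors, so that it exists even when the matrix is singular, and uses it to map a destination point back to its source:

```python
import numpy as np

def adjugate(M):
    """Classical adjoint of a 3x3 matrix: transposed cofactor matrix.
    Equals det(M) * inv(M) when M is nonsingular, and it always exists."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return np.array([[e*i - f*h, c*h - b*i, b*f - c*e],
                     [f*g - d*i, a*i - c*g, c*d - a*f],
                     [d*h - e*g, b*g - a*h, a*e - b*d]])

M = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 3.0],
              [0.1, 0.0, 1.0]])

x = M @ np.array([1.0, 1.0, 1.0])   # forward-map the source point (1, 1)
u = adjugate(M) @ x                 # inverse-map with the adjugate instead of inv(M)
print(u / u[2])                     # recovers (1, 1, 1) after dividing by q
```

The result of the adjugate mapping is a scalar multiple (by det M) of the true inverse mapping, which is harmless in homogeneous coordinates.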

Inferring Projective Mappings

In an interactive image warper one might specify the four corners of source and destination quadrilaterals with a tablet or mouse, and wish to warp one area to the other. This sort of task is an ideal application of projective mappings, but how do we find the mapping matrix?

A projective mapping has eight degrees of freedom, which can be determined from the source and destination coordinates of the four corners of a quadrilateral. Let the mapping take (uk, vk) to (xk, yk) for vertices numbered cyclically k = 0, 1, 2, 3. All coordinates are assumed to be real (finite). To compute the forward mapping matrix Msd, assuming that i = 1, we have eight equations in the eight unknowns a-h:

xk = (a uk + b vk + c) / (g uk + h vk + 1)
yk = (d uk + e vk + f) / (g uk + h vk + 1)

for k = 0, 1, 2, 3. This can be rewritten as an 8x8 system:

[ u0 v0 1  0  0  0 -u0x0 -v0x0 ] [ a ]   [ x0 ]
[ u1 v1 1  0  0  0 -u1x1 -v1x1 ] [ b ]   [ x1 ]
[ u2 v2 1  0  0  0 -u2x2 -v2x2 ] [ c ]   [ x2 ]
[ u3 v3 1  0  0  0 -u3x3 -v3x3 ] [ d ] = [ x3 ]
[ 0  0  0 u0 v0  1 -u0y0 -v0y0 ] [ e ]   [ y0 ]
[ 0  0  0 u1 v1  1 -u1y1 -v1y1 ] [ f ]   [ y1 ]
[ 0  0  0 u2 v2  1 -u2y2 -v2y2 ] [ g ]   [ y2 ]
[ 0  0  0 u3 v3  1 -u3y3 -v3y3 ] [ h ]   [ y3 ]

This linear system can be solved using Gaussian elimination for the forward mapping coefficients a-h. If the inverse mapping is desired instead, then either we compute the adjoint of Msd, or we follow the same procedure, starting from equation (2.2) instead of (2.1), and solve an 8x8 system for the coefficients A-H.
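The 8x8 solve is straightforward to prototype. A possible NumPy sketch (the function name and the example quadrilateral are my own; Gaussian elimination is delegated to np.linalg.solve, and the returned matrix uses column-vector convention):

```python
import numpy as np

def infer_projective(src, dst):
    """Find the 3x3 matrix (with i = 1) mapping four source corners (u, v)
    to four destination corners (x, y), from x = (au+bv+c)/(gu+hv+1) and
    y = (du+ev+f)/(gu+hv+1), rearranged into eight linear equations in a..h."""
    A, rhs = [], []
    for (u, v), (x, y) in zip(src, dst):
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x]); rhs.append(x)
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y]); rhs.append(y)
    a, b, c, d, e, f, g, h = np.linalg.solve(np.array(A, float), np.array(rhs, float))
    return np.array([[a, b, c], [d, e, f], [g, h, 1.0]])

src = [(0, 0), (1, 0), (1, 1), (0, 1)]   # unit square
dst = [(0, 0), (2, 0), (3, 2), (0, 1)]   # a general quadrilateral
M = infer_projective(src, dst)
p = M @ np.array([1.0, 1.0, 1.0])
print(p[:2] / p[2])                      # corner (1, 1) lands on (3, 2)
```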

In speed-critical special cases, there are more efficient formulas for computing the mapping matrix. The formula above handles the case where the polygon is a general quadrilateral in both source and destination spaces. We will consider three cases: square-to-quadrilateral, quadrilateral-to-square, and (again) the general quadrilateral-to-quadrilateral mapping.

Case 1: The system is easily solved symbolically in the special case where the uv quadrilateral is a unit square. If the vertex correspondence is as follows:

(u, v) = (0, 0)  ->  (x0, y0)
(u, v) = (1, 0)  ->  (x1, y1)
(u, v) = (1, 1)  ->  (x2, y2)
(u, v) = (0, 1)  ->  (x3, y3)

then the eight equations reduce to:

c = x0                          f = y0
a + c = (g + 1) x1              d + f = (g + 1) y1
a + b + c = (g + h + 1) x2      d + e + f = (g + h + 1) y2
b + c = (h + 1) x3              e + f = (h + 1) y3

If we define

Σx = x0 - x1 + x2 - x3    Σy = y0 - y1 + y2 - y3
Δx1 = x1 - x2    Δx2 = x3 - x2    Δy1 = y1 - y2    Δy2 = y3 - y2

Then the solution splits into two sub-cases:

If Σx = 0 and Σy = 0 (the affine case):

  a = x1 - x0    b = x2 - x1    c = x0    g = 0
  d = y1 - y0    e = y2 - y1    f = y0    h = 0

Otherwise (the projective case):

  g = (Σx Δy2 - Σy Δx2) / (Δx1 Δy2 - Δx2 Δy1)
  h = (Δx1 Σy - Δy1 Σx) / (Δx1 Δy2 - Δx2 Δy1)
  a = x1 - x0 + g x1    b = x3 - x0 + h x3    c = x0
  d = y1 - y0 + g y1    e = y3 - y0 + h y3    f = y0

This computation is much faster than a straightforward 8x8 system solver. The mapping above is easily generalized to map a rectangle to a quadrilateral by pre-multiplying with a scale and translation matrix.
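A sketch of this square-to-quadrilateral shortcut in Python/NumPy (the vertex numbering follows the unit-square correspondence above; the example quadrilateral is my own):

```python
import numpy as np

def square_to_quad(x, y):
    """Closed-form projective map from the unit square (0,0),(1,0),(1,1),(0,1)
    to the quadrilateral (x[k], y[k]), k = 0..3; cheaper than an 8x8 solve."""
    sx = x[0] - x[1] + x[2] - x[3]            # the sums called Σx, Σy in the text
    sy = y[0] - y[1] + y[2] - y[3]
    if sx == 0 and sy == 0:                   # affine sub-case: g = h = 0
        return np.array([[x[1] - x[0], x[2] - x[1], x[0]],
                         [y[1] - y[0], y[2] - y[1], y[0]],
                         [0.0, 0.0, 1.0]])
    dx1, dx2 = x[1] - x[2], x[3] - x[2]       # the differences Δx1, Δx2, Δy1, Δy2
    dy1, dy2 = y[1] - y[2], y[3] - y[2]
    den = dx1 * dy2 - dx2 * dy1
    g = (sx * dy2 - sy * dx2) / den
    h = (dx1 * sy - dy1 * sx) / den
    return np.array([[x[1] - x[0] + g * x[1], x[3] - x[0] + h * x[3], x[0]],
                     [y[1] - y[0] + g * y[1], y[3] - y[0] + h * y[3], y[0]],
                     [g, h, 1.0]])

quad_x, quad_y = [0, 2, 3, 0], [0, 0, 2, 1]
M = square_to_quad(quad_x, quad_y)
for k, (u, v) in enumerate([(0, 0), (1, 0), (1, 1), (0, 1)]):
    p = M @ np.array([u, v, 1.0])
    print(p[:2] / p[2], "->", (quad_x[k], quad_y[k]))   # each corner lands exactly
```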

Case 2: The inverse mapping, a quadrilateral to a square, can also be optimized. It turns out that the most efficient algorithm for computing this is not purely symbolic, as in the previous case, but numerical. We use the square-to-quadrilateral formulas just described to find the inverse of the desired mapping, and then take its adjoint to compute the quadrilateral-to-square mapping.

Case 3: Since we can compute quadrilateral-to-square and square-to-quadrilateral mappings quickly, the two mappings can easily be composed to yield a general quadrilateral-to-quadrilateral mapping (as shown below). This solution method is faster than a general 8x8 system solver.

[pic]

Quadrilateral to quadrilateral mapping as a composition of simpler mappings.

2. Euclidean Warping

In the following, we will study one particular type of warp, namely the Euclidean warp, also called the Euclidean similarity transform. This type of warp involves four parameters:

(s, α, tx, ty), which denote scale, rotation angle, and translation, respectively.

Let x = [ x y 1 ]T denote a position in the original 2D image, I, and let x' = [ x' y' 1 ]T denote the corresponding position in the warped image, I' (both in homogeneous coordinates). Looking at one pixel, equation (1) can now be written as a simple linear transformation, where T denotes the transformation matrix:

x' = T x,  where  T = [  s cos(α)   s sin(α)   tx ]
                      [ -s sin(α)   s cos(α)   ty ]
                      [     0           0       1 ]

The above transformation is implemented in the Matlab script euclideanwarp where the transformation matrix, T, that rotates an image five degrees is written as:

s = 1;

a = 5*pi/180;

tx = 0;

ty = 0;

T = [ s*cos(a) s*sin(a) tx ; -s*sin(a) s*cos(a) ty ; 0 0 1 ];

Due to the discrete nature of raster images, one is in no way ensured that each input pixel maps exactly to an output pixel. Consequently, warping is mostly performed backwards, i.e. from the output image to the input image. Since T is square and has full rank, we can easily compute the inverse transformation as:

x = T^-1 x'

In Matlab this is accomplished by:

X = T \ [ Xp(:) Yp(:) ones(n,1) ]';

where ' is the transpose operator, Xp(:) and Yp(:) are column vectors containing the x' and y' coordinates, and ones(n,1) is a column vector of ones. What is left now is to look up (or interpolate) the intensity values in the incoming image, I, at the positions in the X matrix. As seen below, this is done for each color channel.

xI = reshape( X(1,:),wp,hp)';

yI = reshape( X(2,:),wp,hp)';

Ip(:,:,1) = interp2(I(:,:,1), xI, yI, '*bilinear'); % red

Ip(:,:,2) = interp2(I(:,:,2), xI, yI, '*bilinear'); % green

Ip(:,:,3) = interp2(I(:,:,3), xI, yI, '*bilinear'); % blue
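For readers without Matlab, a rough Python/NumPy equivalent of the whole backward-warping loop might look as follows (a sketch under several assumptions: grayscale only, 0-based pixel coordinates instead of Matlab's 1-based ones, hand-rolled bilinear interpolation, and a function name of my own):

```python
import numpy as np

def euclidean_warp(I, s, a, tx, ty):
    """Backward-warp grayscale image I by a Euclidean similarity transform:
    for every output pixel, solve T x = x' for the input position and
    interpolate bilinearly there; positions outside the source become 0."""
    T = np.array([[ s*np.cos(a), s*np.sin(a), tx],
                  [-s*np.sin(a), s*np.cos(a), ty],
                  [0.0, 0.0, 1.0]])
    h, w = I.shape
    Yp, Xp = np.mgrid[0:h, 0:w]                        # output pixel grid
    P = np.linalg.solve(T, np.stack([Xp.ravel(), Yp.ravel(), np.ones(h * w)]))
    x, y = P[0], P[1]                                  # positions in the input image
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)    # bilinear interpolation
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    fx, fy = x - x0, y - y0
    out = (I[y0, x0] * (1 - fx) * (1 - fy) + I[y0, x0 + 1] * fx * (1 - fy) +
           I[y0 + 1, x0] * (1 - fx) * fy + I[y0 + 1, x0 + 1] * fx * fy)
    out[(x < 0) | (x > w - 1) | (y < 0) | (y > h - 1)] = 0
    return out.reshape(h, w)

I = np.arange(16.0).reshape(4, 4)
print(euclidean_warp(I, 1.0, 0.0, 0.0, 0.0))   # identity transform returns I
```

The key point mirrored from the script: the output grid is transformed back into the input image (the Matlab backslash becomes np.linalg.solve), and intensities are interpolated at those positions.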

3. Image Mosaicing with Matlab Implementation

One application of image warping is the merging of several images into a complete mosaic, e.g. to form a panoramic view. Here this is carried out by a Euclidean warp. In mosaicing, the transformation between images is often not known beforehand. In this example, two images are merged and we estimate the transformation by letting the user give points of correspondence (also called landmarks or fiducial markers) in each of the images. In order to recover the transformation we rearrange the warping equation (2) so that the warping parameters form the vector t in:

Z t = xp, where each point of correspondence (x, y) -> (x', y') contributes the rows:

[ x   y  1  0 ] [ a  ]   [ x' ]
[ y  -x  0  1 ] [ b  ] = [ y' ]
                [ tx ]
                [ ty ]

The warping parameters are now obtained by solving the linear system (7) in Matlab:

t = Z \ xp;

From (8) we see that (at least) two points are required to determine the four warping parameters. In the Matlab script mosaic we prompt the user to select two such points in each image using the function ginput2(). The full code for recovering the parameters is then:

figure; image(I1);

[X1 Y1] = ginput2(2);

figure; image(I2);

[X2 Y2] = ginput2(2);

Z = [ X2' Y2' ; Y2' -X2' ; 1 1 0 0 ; 0 0 1 1 ]';

xp = [ X1' Y1' ]';

t = Z \ xp;

a = t(1); % = s cos(alpha)

b = t(2); % = s sin(alpha)

tx = t(3);

ty = t(4);
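The same parameter recovery is easy to express outside Matlab. A possible Python/NumPy sketch (least squares in place of Matlab's backslash; the function name and the landmark data are hypothetical examples of mine):

```python
import numpy as np

def estimate_euclidean(x2, y2, x1, y1):
    """Recover t = [a, b, tx, ty] (a = s cos(alpha), b = s sin(alpha)) from
    corresponding points, using x' = a x + b y + tx and y' = -b x + a y + ty,
    i.e. the same equations as the Z t = xp system in the Matlab script."""
    Z, xp = [], []
    for x, y, xq, yq in zip(x2, y2, x1, y1):
        Z.append([x, y, 1, 0]); xp.append(xq)
        Z.append([y, -x, 0, 1]); xp.append(yq)
    t, *_ = np.linalg.lstsq(np.array(Z, float), np.array(xp, float), rcond=None)
    return t

# Two hypothetical landmark pairs related by a known 30-degree rotation
# plus a translation; the estimate should recover exactly these parameters:
a, b, tx, ty = np.cos(np.pi / 6), np.sin(np.pi / 6), 10.0, -4.0
x2, y2 = np.array([0.0, 5.0]), np.array([0.0, 2.0])
x1 = a * x2 + b * y2 + tx
y1 = -b * x2 + a * y2 + ty
print(estimate_euclidean(x2, y2, x1, y1))   # -> [a, b, tx, ty]
```

With exactly two points the system is square and the solution exact; with more landmarks, least squares gives the best fit, just as Matlab's backslash would.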

A demonstration of mosaicing using two input images is given in figures 1 and 2.

[pic]

[pic]

Figure 2: Image mosaic produced by mosaic.

[pic]

5. Implementation Details

euclideanwarp.m

% load input image

I = double(imread('right.jpg'));

[h w d] = size(I);

% show input image

figure; image(I/255); axis image;

title('input image');

% make transformation matrix (T)

s = 1;

a = 5*pi/180;

tx = 0;

ty = 0;

T = [ s*cos(a) s*sin(a) tx ; -s*sin(a) s*cos(a) ty ; 0 0 1 ];

% warp incoming corners to determine the size of the output image

cp = T*[ 1 1 w w ; 1 h 1 h ; 1 1 1 1 ];

Xpr = min( cp(1,:) ) : max( cp(1,:) ); % min x : max x

Ypr = min( cp(2,:) ) : max( cp(2,:) ); % min y : max y

[Xp,Yp] = ndgrid(Xpr,Ypr);

[wp hp] = size(Xp); % = size(Yp)

% do backwards transform (from out to in)

n = wp*hp;

X = T \ [ Xp(:) Yp(:) ones(n,1) ]'; % warp

% re-sample pixel values with bilinear interpolation

clear Ip;

xI = reshape( X(1,:),wp,hp)';

yI = reshape( X(2,:),wp,hp)';

Ip(:,:,1) = interp2(I(:,:,1), xI, yI, '*bilinear'); % red

Ip(:,:,2) = interp2(I(:,:,2), xI, yI, '*bilinear'); % green

Ip(:,:,3) = interp2(I(:,:,3), xI, yI, '*bilinear'); % blue

% show the warping result

figure; image(Ip/255); axis image;

title('warped image');

mosaic.m

% load input images

I1 = double(imread('left.jpg'));

[h1 w1 d1] = size(I1);

I2 = double(imread('right.jpg'));

[h2 w2 d2] = size(I2);

% show input images and prompt for correspondences

figure; subplot(1,2,1); image(I1/255); axis image; hold on;

title('first input image');

[X1 Y1] = ginput2(2); % get two points from the user

subplot(1,2,2); image(I2/255); axis image; hold on;

title('second input image');

[X2 Y2] = ginput2(2); % get two points from the user

% estimate parameter vector (t)

Z = [ X2' Y2' ; Y2' -X2' ; 1 1 0 0 ; 0 0 1 1 ]';

xp = [ X1 ; Y1 ];

t = Z \ xp; % solve the linear system

a = t(1); % = s cos(alpha)

b = t(2); % = s sin(alpha)

tx = t(3);

ty = t(4);

% construct transformation matrix (T)

T = [a b tx ; -b a ty ; 0 0 1];

% warp incoming corners to determine the size of the output image (in to out)

cp = T*[ 1 1 w2 w2 ; 1 h2 1 h2 ; 1 1 1 1 ];

Xpr = min( [ cp(1,:) 0 ] ) : max( [cp(1,:) w1] ); % min x : max x

Ypr = min( [ cp(2,:) 0 ] ) : max( [cp(2,:) h1] ); % min y : max y

[Xp,Yp] = ndgrid(Xpr,Ypr);

[wp hp] = size(Xp); % = size(Yp)

% do backwards transform (from out to in)

X = T \ [ Xp(:) Yp(:) ones(wp*hp,1) ]'; % warp

% re-sample pixel values with bilinear interpolation

clear Ip;

xI = reshape( X(1,:),wp,hp)';

yI = reshape( X(2,:),wp,hp)';

Ip(:,:,1) = interp2(I2(:,:,1), xI, yI, '*bilinear'); % red

Ip(:,:,2) = interp2(I2(:,:,2), xI, yI, '*bilinear'); % green

Ip(:,:,3) = interp2(I2(:,:,3), xI, yI, '*bilinear'); % blue

% offset and copy original image into the warped image

offset = -round( [ min( [ cp(1,:) 0 ] ) min( [ cp(2,:) 0 ] ) ] );

Ip(1+offset(2):h1+offset(2),1+offset(1):w1+offset(1),:) = double(I1(1:h1,1:w1,:));

% show the result

figure; image(Ip/255); axis image;

title('mosaic image');

6. Uses of Image Warping

Applications of image warping in brain surgery

Pathology Detection. Normal anatomic complexity makes it difficult to design automated strategies that detect abnormal brain structure. At the same time, brain structure is so variable that group-specific patterns of anatomy and function are often obscured. To distinguish abnormalities from normal variants, a realistically complex mathematical framework is required to encode information on anatomic variability in homogeneous populations. Warping algorithms offer substantial advantages for encoding patterns of anatomic variation and detecting pathology.

Analyzing Brain Data. One of the driving forces that made registration important in brain imaging was the need to perform brain-to-brain comparisons. Anatomic variations severely hamper the integration and comparison of data across subjects and groups. Warping algorithms can transfer 3D maps of functional and vascular territories onto the scan of any subject, as well as information on tissue types, cytoarchitecture, and histologic and neurochemical content.

Measuring Anatomical Differences. As a valuable by-product, 3D warping algorithms also quantify local and global shape changes. The complex profiles of dilation and contraction required to warp an atlas onto the new subject's brain provide an index of the anatomical shape differences between that subject's brain and the atlas.

Population-Based Atlases. Pathology detection algorithms will invoke deformation fields that match one brain with a large number of others. The result is a probabilistic brain atlas that encodes patterns of anatomic variation in human populations, and incorporates algorithms to detect structural variants outside of the normal range.

Measuring Brain Changes. When applied to scans acquired over many months or years from a single subject, 3D warping algorithms can also calculate measures of local and global shape change over time. In many ways, static representations of brain structure are ill-suited to investigating dynamic processes of disease. With warping algorithms, measures of dilation rates, contraction rates, and rates of shearing and divergence of the cellular architecture may be computed locally, for all structures, directly from the deformation field that matches one scan with the other. As a result, warping algorithms offer a powerful strategy to track temporal change and classify age-related, developmental or pathologic alterations in anatomy.

Clinical Applications

• Head and Neck MRI-CT Registration

• Spine Registration for Minimally Invasive Techniques

• MRI-CT Registration for Epilepsy Surgery

Displacement mapping by image warping:

Displacement mapping provides rich geometric and visual detail without requiring the user to model all this detail explicitly. In contrast to other texture mapping techniques, not only is the appearance of a surface altered, but the surface itself is displaced by an amount specified in a texture map.

While displacement mapping reduces the burden on the modeling side, it does increase the effort necessary to obtain images from the model. Two approaches are commonly taken to render displacement maps: micro-polygons and ray-tracing. Both of these methods are very time-consuming, and more efficient alternatives are desirable.

By subdividing displacement-mapped geometry into micro-polygons and displacing their vertices, a great deal of additional geometry is introduced that makes the model time-consuming to render. A fine subdivision is usually necessary to capture the details in the displacement map.

Ray-tracing is a costly rendering technique in itself, and ray-tracing inverse displacement maps has been described as being practical only for objects of the complexity of tori or sweeps. Others have presented an approach to handle micro-polygons in ray-tracing.

An alternative arises from the observation that displacement mapping is essentially equivalent to warping an image with depth. The pixel colors describe the optical surface properties, and the depth specifies how much the surface deviates from the image plane. These displacements can either be applied perpendicular to the normal of a flat surface using an orthographic projection, or they can be made to emanate from a single point, namely the center of projection in a perspective image. Image warping will take the displaced surface points to the same location in the image by first displacing them in 3D and then projecting them onto the image plane. An inverse image warping algorithm will always treat a depth difference between adjacent pixels as a surface slope. Consequently, it will fill in a connected surface between the samples.

CT software and image warping

[pic] [pic]

(a) CT bone surface    (b) CT skin surface

[pic] [pic]

(c) Warped bone surface (note bridge of nose)    (d) Corresponding warped skin surface

This is not a very dramatic example, but it illustrates how very few landmarks can create a localised warping effect.

(a) is the normal bone surface rendered from CT,

(b) is the corresponding skin surface. Both these surfaces were generated by

specifying an appropriate threshold in the ray casting software.

(c) is the bone surface of the warped dataset.

The same threshold applied in (a) is applied to the warped dataset to generate this image. The warping was defined by specifying landmarks on the ridges of the 'eyebrows', the nose and the bridge of the nose. Eight further control points were specified in the periphery of the volume. These were specified as zero vectors. A linear basis function was chosen in this example. Basically, we have just pushed in the bridge of the nose by 5 mm and seen what happens!

Note how the application of the warping to the skull, which alters the bone structure of the face, manifests itself on the skin surface. Whether this is valid or not is an interesting research topic; indeed, the nature of the relationship between the skull, the skin appearance, and the fat and age of the subject is yet to be quantified and established mathematically.

7. Conclusion

In this seminar I have given an introduction to image warping, explained the Euclidean warping technique and how to use it for image mosaicing, and described different uses of image warping (the different areas where image warping is being used), especially in the medical field. I have explained how image warping is used in CT dataset software. There are many software packages that use the concept of image warping for image processing applications.

There are many mapping techniques for image warping, but I have explained only one method here. I have also given Matlab implementation details for image mosaicing.

Research is still ongoing in the field of image warping, and we can expect new applications in the coming years.

8. References

1. Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing.

2. [Foley-van Dam82] James D. Foley and Andries van Dam, Fundamentals of Interactive Computer Graphics, Addison-Wesley, Reading, MA, 1982.

3. MATLAB for Image Processing: a toolkit.
