Patrick Zimmer


Image Processing Morphological Operations

Task Details

1 Image Acquisition

Acquire and store suitable grey-level images of a hand (for example, your own), of a face (more amusing if it's someone else's) and of the cardboard circular shapes available in the Masters laboratory. By "suitable", I mean as usual images taken under appropriate lighting and imaging conditions such that the hand, face and circular shapes fill a reasonably large proportion of the field of view, that detail within each image can be seen, for example of the vein structure on the back of the hand and of facial features, and that the images are not too dark (ie, above the noise level) without being too bright (ie, reaching the maximum signal level of the camera). In view of the exercises to be carried out, you might grab several examples of each image so that you can choose the one best suited to each exercise. For the hand examples, without moving the camera, take some images of the background against which you imaged your hand.

2 Thresholding

Using one example image each of the hand, face and circular disc, inspect their grey-level histograms and select thresholds that you think will enable you to distinguish each of the objects of interest from the image background. Carry out the thresholding and comment on whether it successfully enables you to segment the object of interest in each case. Note for which of the three images the procedure works best and for which it appears to work least well and, if you can, explain why this is so. If you do not obtain a very good segmentation in which each object and background are clearly distinguished, vary the threshold levels by trial and error or interactively until you obtain satisfactory results. If the thresholds that yield a good segmentation are different from those you initially selected from the histograms, comment on the difference and, if you can, explain why the final threshold works better.

3 Iterative Computation of the Threshold

Using the same images as in 3.2 above, use the iterative algorithm described in the lectures to compute the optimal thresholds from the image grey-level histograms to segment each image into two regions. Initialize the computation (a) by using the mean grey-level of each image, and (b) by using the threshold you initially selected in 3.2 above. Describe what happens in each case and compare the results obtained in (a) and (b) with each other and with the corresponding results obtained in 3.2 for both the initial and final thresholds used there.

4 Object Labelling

Write an algorithm to label all pixels belonging to the object of interest by using Haralick's iterative method described in the lectures. Carry out the computation using both 4 and 8 connectivity for the object and comment on the number of iterations required for each object. Investigate whether equivalent functions are provided in IDL and, if so, use them and check that they produce results similar to those obtained with the algorithm you wrote.

5 Image Subtraction

Subtract some of the background images taken for the hand examples from the example images. Describe the results obtained, commenting on the extent to which the background can be eliminated from the hand images by such means and on what you think affects this. Experiment to see if a threshold can be set in order to enable the hands in your difference images to be segmented reliably and compare the results obtained from the difference images with those you obtained in sections 3.2 and 3.3. Discuss the potential for application of such techniques as a means of detecting objects in a scene and for detecting object motion. Describe briefly what you think could be done to improve the reliability of object detection by this means.

6 Morphological Noise Removal

Use IDL functions to investigate how simple morphological operations (dilation and erosion) may be used for removing background noise from the subtracted images. Try such "cleaning" procedures on both thresholded and non-thresholded images and experiment with a small number of structuring elements of different size. Comment on the results obtained and discuss whether you think it is better: (a) to difference the images, threshold and then clean using binary morphological operations, (b) to difference the images, clean using grey-level morphology and then threshold, or (c) some other combination of these three types of operation.

7 Computing Moments

Select a well segmented image of your hand, obtained by whichever of the above techniques you like, and label the pixels belonging to the hand as in section 3.4. Compute the pixel co-ordinates of its centre of mass and use the result to calculate the three second order moments of your hand image. Repeat the procedure for a second image taken with the hand in a different position and orientation. Compute the eigenvalues and eigenvectors of the two second order moment matrices so obtained and: (a) compare the eigenvalues with each other, (b) comment on the directions of the eigenvectors obtained.

Project Work

Contents Page

1. Introduction

2. Methods

2.1 Thresholding

2.2 Iterative Thresholding

2.3 Object Labelling

2.4 Image Subtraction

2.5 Morphological Noise Removal

2.6 Computing Moments

3. Results

3.1 Thresholding

3.2 Iterative Thresholding

3.3 Object Labelling

3.4 Image Subtraction

3.5 Morphological Noise Removal

3.6 Computing Moments

4. Discussion and Conclusions

4.1 Thresholding

4.2 Iterative Thresholding

4.3 Object Labelling

4.4 Image Subtraction

4.5 Morphological Noise Removal

4.6 Computing Moments

5. Appendix

5.1 Thresholding

5.2 Iterative Thresholding

5.3 Object Labelling

5.4 Image Subtraction

5.5 Morphological Noise Removal

5.6 Computing Moments

1. Introduction

The aim of this practical is to use thresholding methods to obtain a binary image from a greyscale image. The binary image can then be subjected to operations which identify the individual objects in the image. It is shown how the second order moments of an object are calculated and how they act as a shape descriptor that is invariant to rotation.

2. Methods

2.1 Thresholding

Thresholding is performed by comparing the intensity of every pixel with a set threshold: pixels below the threshold are set to black and pixels at or above it to white.
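As a minimal sketch of this comparison (written in Python/NumPy rather than the IDL used in this project; the pixel values and the threshold of 128 below are arbitrary placeholders):

```python
import numpy as np

def threshold(image, t):
    """Pixels below t become black (0); the rest become white (255)."""
    out = np.zeros_like(image)
    out[image >= t] = 255
    return out
```

For example, thresholding the array [[10, 200], [90, 40]] at 128 gives [[0, 255], [0, 0]].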

Below are shown the images used for thresholding and the corresponding histograms (Fig. 2.1a to f).

Fig. 2.1a Fig. 2.1b Fig. 2.1c

Fig. 2.1d

The histogram for the hand image (Fig. 2.1d) shows that a threshold between 50 and 80 would separate the two distinct classes. Results were taken using 50 to capture as much of the hand as possible.

Fig. 2.1e

The histogram for the face image (Fig. 2.1e) shows that a threshold between 70 and 110 would separate the two classes. Results were taken using 70 to capture as much of the face as possible.

Fig. 2.1f

The histogram for the circles image (Fig. 2.1f) shows that a threshold between 20 and 30 would separate the distinct lower class. Results were taken using 20 to capture as much of the circles as possible.

2.2 Iterative Thresholding

The expression for the optimum threshold T is found by maximizing the between-class variance:

T = ½ (m_a + m_b)

The mean intensities of the two classes, m_a and m_b, require that a threshold be set to split the intensities into two classes first. The process of finding the optimum threshold is hence iterative, starting from an arbitrary value. This value is best chosen near the expected threshold to minimize the number of iterations required. The iterative process is set to stop when the change in threshold value is small, indicating that the threshold is very close to the optimum.

The iterations were initiated with the mean image intensity and a threshold chosen from viewing the histogram.

The mean values are calculated by summing the product of each intensity and the corresponding histogram proportion. This sum is then divided by the total sum of the histogram proportions to give the mean. The range of the values used is varied as the classes change size to give the mean of the appropriate class each time.

A WHILE statement around the threshold calculation code repeats the process until the change in threshold intensity is less than 0.5.

The threshold intensity is made an integer using the IDL FIX function so that the histogram can be split.
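The same iteration can be sketched in Python/NumPy (operating directly on the pixel values rather than on the histogram as the IDL code does, and assuming the starting value splits the intensities into two non-empty classes):

```python
import numpy as np

def iterative_threshold(image, t0, eps=0.5):
    """Midway-means iteration: the new threshold is halfway between the
    means of the two classes split by the current threshold."""
    t = float(t0)
    while True:
        lower = image[image < t]
        upper = image[image >= t]
        t_new = 0.5 * (lower.mean() + upper.mean())
        if abs(t_new - t) < eps:    # stop when the threshold settles
            return t_new
        t = t_new
```

On a toy bimodal set of intensities such as [10, 20, 200, 210], the iteration settles at 110, midway between the class means of 15 and 205.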

2.3 Object Labelling

Object labeling is performed as shown in the following example.

Using the binary image below (Fig. 2.3a) all object pixels (black) are individually labeled (Fig. 2.3b). The image array in IDL is such that the first element is in the bottom left corner. All operations on the image are hence performed in raster fashion column by column from bottom left to top right or vice versa. The object pixels are labeled in this fashion with ascending label values to aid simplicity.

Fig. 2.3a Binary image    Fig. 2.3b Initial pixel labels (1 to 21, assigned in raster order)
The pixels operated on in forward and reverse sweeps are shown in Fig. 2.3c and d respectively, using IDL label array references. The current pixel is shown shaded and its neighbors white.

Traversing the image array from bottom left to top right (forward operation) the current pixel label is propagated to neighboring pixel above and to the right (Fig. 2.3c). For eight-way connectivity the label is also propagated to the upper right pixel.

Traversing from top right to bottom left (reverse operation) the label propagation is to neighbors below and to the left (Fig. 2.3d) in order to complete the symmetry. The propagation of labels can then occur in all directions.

A replacement rule needs to be defined to avoid operations on a current column affecting operations on the previous column. Since the labels are in ascending order it is quickest to set the label to the lowest neighboring value.

To restrict operations to the object it is first tested whether the neighboring pixel in the image is black.
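A rough Python/NumPy sketch of the forward/reverse sweep procedure (4-connectivity only, sweeping row by row rather than IDL's column-by-column order; the helper name label_objects is hypothetical):

```python
import numpy as np

def label_objects(binary):
    """Iterative sweep labelling (Haralick-style), 4-connectivity.

    binary: 2D array of 0 (background) / 1 (object).
    Returns the label image and the number of sweeps performed
    (including the final sweep that confirms no change)."""
    h, w = binary.shape
    label = np.zeros((h, w), dtype=int)
    ys, xs = np.nonzero(binary)                   # object pixels in raster order
    label[ys, xs] = np.arange(1, len(ys) + 1)     # unique ascending initial labels

    def neighbour_labels(y, x):
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and label[ny, nx]:
                yield label[ny, nx]

    changed, sweeps = True, 0
    while changed:
        changed = False
        sweeps += 1
        # alternate forward and reverse sweeps
        order = zip(ys, xs) if sweeps % 2 else zip(ys[::-1], xs[::-1])
        for y, x in order:
            m = min([label[y, x], *neighbour_labels(y, x)])
            if m != label[y, x]:
                label[y, x] = m                   # propagate the lowest label
                changed = True
    return label, sweeps
```

With eight-way connectivity, the four diagonal offsets would be added to the neighbour list, merging diagonally touching pixels as described above.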

Fig. 2.3c Forward operation. With the current pixel at Label[[I],[J]], the neighbors operated on are:

Label[[I],[J+1]]
Label[[I+1],[J]]
Label[[I+1],[J+1]] (eight-way connectivity only)
Label[[I+1],[J-1]] (eight-way connectivity only)

Fig. 2.3d Reverse operation. With the current pixel at Label[[Wi-I],[He-J]], the neighbors operated on are:

Label[[Wi-I],[He-J-1]]
Label[[Wi-I-1],[He-J]]
Label[[Wi-I-1],[He-J+1]] (eight-way connectivity only)
Label[[Wi-I+1],[He-J-1]] (eight-way connectivity only)

The results of several forward and reverse operations are shown below. Using this scheme, object labeling is completed with three sweeps (forward, reverse, forward), each a single operation on each pixel.

Using four-way connectivity the pixel labeled 9 is treated as a separate object giving a total of three objects.

Using eight-way connectivity gives the same results for the first forward (Fig. 2.3e) operation. The first reverse operation (Fig. 2.3h) enables the pixel labeled 9 to be changed to 6. The result of the second forward operation would look like Fig. 2.3g with pixel 9 changed to 6, giving two objects.

The arrows show the direction of label propagation starting from the shaded pixel (column by column).

Fig 2.3e Pixel labels after first forward operation

Fig 2.3f Pixel labels after first reverse operation

 


Fig 2.3g Pixel labels after second forward operation

Fig 2.3h Pixel labels after first reverse operation using eight-way connectivity

 

There is of course a problem when operating on object pixels at the very edge of the image since there are neighbors missing. This is avoided by using images where no objects touch the edge or by inserting a thin border around the image of the background colour.

The planet image was chosen for object labeling since there are three distinct objects. Results of threshold experimentation showed that a threshold of 80 eliminates most of the background noise while capturing the objects fully.

2.4 Image Subtraction

Below are shown the images used for subtraction (Fig. 2.4a and b). The images used for thresholding are unsuitable since there is no background detail.

Fig. 2.4a Face Image Fig. 2.4b Background

An array subtraction can be performed by operating on the whole arrays.

When the result of a subtraction is negative, IDL's byte arithmetic automatically wraps the result modulo 256, e.g. 50 - 55 = 251. The alternative is to take the absolute value of the subtraction, although this has the effect of inverting the intensities in the object.

Both methods will be tried and compared.
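The wrap-around behaviour is easy to demonstrate with unsigned 8-bit arrays in Python/NumPy, whose byte arithmetic wraps the same way (the pixel values are made up for illustration):

```python
import numpy as np

face = np.array([50, 120], dtype=np.uint8)        # hypothetical pixel values
background = np.array([55, 40], dtype=np.uint8)

# uint8 arithmetic wraps modulo 256: 50 - 55 -> 251
wrapped = face - background

# widen to a signed type before subtracting to take a true absolute difference
absolute = np.abs(face.astype(int) - background.astype(int)).astype(np.uint8)

print(wrapped.tolist())    # [251, 80]
print(absolute.tolist())   # [5, 80]
```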

2.5 Morphological Noise Removal

The IDL functions ERODE and DILATE were used for noise removal. Both require a structuring element, a two-dimensional binary array. The effect of opening and closing the object was tested using 2x2, 3x3 and 4x4 elements consisting entirely of 1s.
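For reference, binary erosion, dilation and the opening/closing combinations can be sketched in plain Python/NumPy with an all-ones square structuring element. This is a simplified stand-in for IDL's ERODE and DILATE, not their exact behaviour; the dilation window is reflected so that opening and closing return objects in place:

```python
import numpy as np

def erode(img, k):
    """Binary erosion by a k x k all-ones element, anchored at the
    window's top-left corner."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h - k + 1):
        for x in range(w - k + 1):
            out[y, x] = int(img[y:y+k, x:x+k].all())
    return out

def dilate(img, k):
    """Binary dilation; the window ends at the current pixel (reflected
    element) so that dilate(erode(...)) restores surviving objects in place."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = int(img[max(0, y - k + 1):y + 1,
                                max(0, x - k + 1):x + 1].any())
    return out

def opening(img, k):
    return dilate(erode(img, k), k)   # removes islands smaller than k x k

def closing(img, k):
    return erode(dilate(img, k), k)   # fills holes smaller than k x k
```

For example, opening with k = 2 removes an isolated pixel while leaving a 2x2 block intact.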

2.6 Computing Moments

Assume a binary image with intensity I(x,y) of either 0 or 1. The total mass M of an object, assuming a density r(x,y) uniform over the object, is given by:

M = Σx Σy r(x,y)

I(x,y) is equivalent to r(x,y) assuming a normalised density of 1 per pixel. Since the object is composed of discrete samples, the mass can be obtained by summing the number of object pixels:

M = Σx Σy I(x,y)

In the image array, object values are 255 instead of 1, so the mass is divided by 255.

The centre of mass (xm, ym) is then found from:

xm = (1/M) Σx Σy x I(x,y)    ym = (1/M) Σx Σy y I(x,y)

The central moments for an image are then defined as:

U(p,q) = Σx Σy (x - xm)^p (y - ym)^q I(x,y)

So the two second order moments are given by:

U(2,0) = Σx Σy (x - xm)^2 I(x,y)    U(0,2) = Σx Σy (y - ym)^2 I(x,y)

The third second order moment, U(1,1), is obtained by transforming coordinates to axes at 45 degrees to the original set. By simple trigonometry the new coordinates (x',y') are given by:

x' = (x - y)/√2 and y' = (x + y)/√2

Then

U'(2,0) = Σx Σy ½((x - xm) - (y - ym))^2 I(x,y) = ½(U(2,0) + U(0,2)) - U(1,1)

where (x' - x'm) = ((x - xm) - (y - ym))/√2 is the projection on the axis x'. Rearranging gives U(1,1) = ½(U(2,0) + U(0,2)) - U'(2,0).

The second order moments may be written as a symmetric matrix:

[ U(2,0)  U(1,1) ]
[ U(1,1)  U(0,2) ]

The eigenvectors of this matrix should be orthogonal to one another and give the principal axes.
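The moment computation can be sketched in Python/NumPy; note this version computes U(1,1) directly from the coordinate products rather than via the 45-degree axis transformation used in the IDL code, and the function name is a placeholder:

```python
import numpy as np

def second_moments(binary):
    """Centre of mass, central second-order moment matrix and principal
    axes of a binary object (its nonzero pixels)."""
    ys, xs = np.nonzero(binary)
    cx, cy = xs.mean(), ys.mean()                 # centre of mass
    u20 = ((xs - cx) ** 2).sum()
    u02 = ((ys - cy) ** 2).sum()
    u11 = ((xs - cx) * (ys - cy)).sum()
    m = np.array([[u20, u11], [u11, u02]])
    evals, evecs = np.linalg.eigh(m)              # symmetric: real eigenvalues,
    return (cx, cy), m, evals, evecs              # orthogonal eigenvectors
```

Rotating the object by 90 degrees (e.g. transposing the array) leaves the eigenvalues unchanged while rotating the eigenvectors, which is the behaviour examined in section 3.6.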

3. Results

3.1 Thresholding

The hand, face and circle images are shown before (Fig. 3.1a, c and e) and after thresholding (Fig. 3.1b, d and f) using thresholds of 50, 60 and 20 respectively.

Fig. 3.1a Fig. 3.1b

Fig. 3.1c Fig. 3.1d

Fig. 3.1e Fig. 3.1f

3.2 Iterative Thresholding

The mean and threshold values as printed by IDL on each iteration are shown below.

Using the hand image:

- Computation initialised using the mean image intensity

Initial Threshold = 76
Iteration 1: Threshold = 78, Lower Mean = 22.3215, Upper Mean = 135.767, New Threshold = 79
Iteration 2: Threshold = 79, Lower Mean = 22.3747, Upper Mean = 135.834, New Threshold = 79

- Computation initialised using the threshold chosen in section 2.1 (50)

Fig. 3.2a

Initial Threshold = 50
Iteration 1: Threshold = 77, Lower Mean = 22.2420, Upper Mean = 135.682, New Threshold = 78
Iteration 2: Threshold = 78, Lower Mean = 22.3215, Upper Mean = 135.767, New Threshold = 79
Iteration 3: Threshold = 79, Lower Mean = 22.3747, Upper Mean = 135.834, New Threshold = 79

Using the face image:

- Computation initialised using the mean image intensity

Initial Threshold = 104
Iteration 1: Threshold = 101, Lower Mean = 42.7414, Upper Mean = 159.663, New Threshold = 101

- Computation initialised using the threshold chosen in section 2.1 (60)

Fig. 3.2b

Initial Threshold = 60
Iteration 1: Threshold = 94, Lower Mean = 41.6336, Upper Mean = 158.518, New Threshold = 100
Iteration 2: Threshold = 100, Lower Mean = 42.5539, Upper Mean = 159.476, New Threshold = 101
Iteration 3: Threshold = 101, Lower Mean = 42.7414, Upper Mean = 159.663, New Threshold = 101

Using the planet image:

- Computation initialised using the mean image intensity

Initial Threshold = 49
Iteration 1: Threshold = 88, Lower Mean = 13.0421, Upper Mean = 177.869, New Threshold = 95
Iteration 2: Threshold = 95, Lower Mean = 13.7109, Upper Mean = 180.606, New Threshold = 97
Iteration 3: Threshold = 97, Lower Mean = 13.9093, Upper Mean = 181.363, New Threshold = 97

- Computation initialised using the threshold chosen in section 2.1 (20)

Fig. 3.2c

Initial Threshold = 20
Iteration 1: Threshold = 78, Lower Mean = 12.4577, Upper Mean = 175.120, New Threshold = 93
Iteration 2: Threshold = 93, Lower Mean = 13.5261, Upper Mean = 179.865, New Threshold = 96
Iteration 3: Threshold = 96, Lower Mean = 13.8117, Upper Mean = 180.995, New Threshold = 97
Iteration 4: Threshold = 97, Lower Mean = 13.9093, Upper Mean = 181.363, New Threshold = 97

3.3 Object Labelling

Fig. 3.3a - Thresholded Image Fig. 3.3b - Labeled Image

Fig. 3.3c - Image after sweep from left to right    Fig. 3.3d - Image after sweep from right to left

3.4 Image Subtraction

Results are shown using the absolute value and the 'wrapped' value of negative intensity results. The wrapping is modulo 256, so a negative result -x appears as the intensity I = 256 - x.

Fig. 3.4d is shown inverted to preserve the look of the face

Fig. 3.4a Face Image Fig. 3.4b Background Image

Fig. 3.4c Face - Background (wrapped) Fig. 3.4d Face - Background (absolute)

Fig. 3.4e Histogram for Fig. 3.4c Fig. 3.4f Histogram for Fig. 3.4d

Fig. 3.4g Thresholded Fig. 3.4c (0) Fig. 3.4h Thresholded Fig. 3.4d (230)

Fig. 3.4i Histogram for Fig. 3.4a Fig. 3.4j Thresholded Fig. 3.4a (120)

3.5 Morphological Noise Removal

The planet and face images thresholded at 20 and 10 respectively, are good examples of images with background and object noise. The planet image has excessive background noise caused by stars and the face has foreground noise caused by dark areas on the face.

Fig. 3.5 a, c and e    Fig. 3.5 b, d and f

a Original thresholded image b Opening using a 2x2

c Closing using a 2x2 d Opening using a 3x3

e Closing using a 3x3 f Opening using a 4x4

Fig. 3.5 h and j Fig. 3.5 i and k

h Original thresholded image i Closing using a 2x2

j Closing using a 3x3 k Closing using a 4x4

3.6 Computing Moments

To see the effect of rotation on the eigenvectors, a simple 90-degree rotation was chosen. The principal axes as given by the eigenvectors should then also be at right angles to each other.

Fig. 3.6a Fig. 3.6b

Fig. 3.6 a (200x183)

Centre of Mass (37.1825 , 42.5511)

U(2,0)= 4.19686e+008

U(0,2)= 2.66802e+008

U(1,1)= 1.72790e+006

Eigenvalues

4.19705e+008 2.66783e+008

Eigenvectors

0.999936 0.0112999

-0.0112999 0.999936

Fig. 3.6 b

Eigenvalues

4.19690e+008 2.58173e+008

Eigenvectors

0.00540041 0.999985

0.999985 -0.00540041

4. Discussion and Conclusions

4.1 Thresholding

The thresholds chosen for the hand, face and circle images (Fig. 3.1a, c and e) successfully highlight the objects of interest (Fig. 3.1b, d and f). In Fig. 3.1d and f there is background noise and noise around the face and planets. The hand image (Fig. 3.1a) has a light object on a very dark background, so the thresholding works best there (Fig. 3.1b).

4.2 Iterative Thresholding

The iterative computation produced a threshold of 79 for the hand image as opposed to the chosen value of 50. This is as expected since there are two distinct peaks in the histogram at 20 and 140 (Fig. 2.1d).

Using this higher threshold does not capture as much of the hand and results in more noise around the edge of the hand.

The iterative computation produced a threshold of 101 for the face image as opposed to the chosen value of 60. The two classes in the histogram (Fig. 2.1e) peak at around 40 and 170 so this result is also expected.

Using a higher threshold does not capture some of the darkest details in the face. The left ear is also completely detached from the face by the darker shadow.

The iterative computation produced a threshold of 97 for the planet image as opposed to the chosen value of 20. The thresholded image in Fig. 3.1f shows that part of the left side of the large planet is missing but also that the background noise has disappeared.

In all cases the chosen thresholds take into account that there is more variation in object intensity than background intensity. The thresholds are hence chosen to be as near to the background intensity as possible. This is easily done by viewing the image but the histogram shapes show that variation in the upper object class is greater than in the lower. Perhaps better thresholding could be achieved if the variances of each class were accounted for in the threshold calculation.

4.3 Object Labelling

4.4 Image Subtraction

Fig. 3.4 c and d show that taking the absolute value of negative intensities appears to separate the object from the background more effectively than using the wrapped value. The wrapping effect produces areas of very high contrast where the result of the subtraction produces positive and negative values of low absolute value.

Fig 3.4 d shows that the cable-holder on the wall has successfully been removed from the background. The subtraction of this object from the face has left a darker area across the face however. This problem cannot be avoided with overlapping foreground and background objects.

Generally the results of image subtraction show that the face has been partially separated. Intensities on the face having the same value as corresponding intensities on the background will produce black regions indistinguishable from the background.

The poor separation on the left side of the face also highlights the problem of lighting. The shadow on the wall caused by the presence of the object means that the background intensities in the pictures do not match in this area.

The result of thresholding this image without using image subtraction (Fig. 3.4 j) is very poor for several reasons:

· The spread of intensities in the original image histogram is fairly even

(Fig. 3.4 i)

· There are several fairly light regions on the face, which will not separate from the light background.

· The cable-holder on the wall and the head itself create dark shadows, which will not separate from the face.

Using image subtraction captures more of the face and removes background objects effectively. The only reason that more of the face was captured was that the darker strip across the face caused by the light cable-holder covered some of the brightest areas. In this case the distortion caused by overlapping foreground and background objects has been an advantage. Generally this would be a problem and overlapping should be avoided.

In general, image subtraction is most useful for removing background detail and objects to highlight details in the image. The problem of lighting will vary for different applications. Where lighting can be accurately controlled, such as on a circuit board production line, there is no problem: the lighting is constant, so subtracting a template of the perfect circuit board will show any defects.

In the case of identifying vehicles with a motorway camera, lighting and overlapping objects would cause big problems. The lighting changes constantly through the day, and distinguishing certain colour cars from background objects would be impossible.

4.5 Morphological Noise Removal

The results for opening and closing on the object show the process of noise removal. The effect is to remove holes, islands and peninsulas smaller than the structuring element.

Fig. 3.5 b, d and f show that opening is more effective at removing background noise than closing (Fig. 3.5 c and e). Similarly, Fig. 3.5 i, j and k show that closing is more appropriate for removing noise on the object. The larger the structuring element, the larger the noise elements that can be removed. Fig. 3.5 e shows that the two planets have begun to merge as a result of the closing operation. The danger in making the structuring element too large is that details on the object begin to be affected.

In general opening and closing operations effectively remove noise. The use of different shape structuring elements would be better for certain objects but the possible range of element shapes makes the choice difficult.

4.6 Computing Moments

As expected the eigenvalues are roughly the same for both images. The eigenvalues are hence invariant to rotation.

The change in eigenvectors for a 90-degree rotation is from approximately (1, 0; 0, 1) to (0, 1; 1, 0), where both vectors are shown in the same matrix.

Since the eigenvectors describe the principal axes, which have been rotated by 90 degrees, this appears correct. The eigenvectors are as expected orthogonal to one another.

In general this practical shows how each of the procedures practiced can combine to identify and measure object properties.

Image subtraction is useful if there is a large range of intensities in both object and background or if there are background objects that need eliminating. Thresholding is the vital step in separating objects and usually requires experimentation. Object labeling is used to identify the different objects in the image for reference. Once an object has been identified, its moments can be calculated to give properties of shape that are invariant to rotation. All these methods have been shown to work well in the appropriate circumstances.

5. Appendix

Common Methods

The following code to read a tiff image into an array and invert the array horizontally (to produce an upright displayed image) precedes all the following code sections and will hence be excluded. The QUERY command to obtain the image dimensions is used in most sections of code so is included here instead of constantly repeating it. The image file destination is not specified since it is irrelevant. The IDL HISTOGRAM function is used to produce histograms and the PLOT / TV commands used to view the results as required.

;Read tiff from file
dest='<file source path>' ;eg. C:\hand.tif
imag = READ_TIFF(dest)
;Invert image horizontally
image=REVERSE(ROTATE(imag,2))
info=QUERY_TIFF(dest,inf)
;Obtain image dimensions
Wi=inf.DIMENSIONS[0]
He=inf.DIMENSIONS[1]
;Plot histogram with IDL HISTOGRAM function
PLOT, HISTOGRAM(image)
;Display image
TV, image
;Save contents of display window to tiff image file
WRITE_TIFF, '<file destination path>', TVRD() ;eg. C:\face.tif

5.1 Thresholding

The following code simply compares every pixel intensity with the set threshold. A new image is created with pixels above the threshold set to white and those below to black.

objects=MAKE_ARRAY(Wi*He, /INTEGER, VALUE=255)
FOR I = 0L, Wi*He-1 DO BEGIN
  s=image[I]
  ;Threshold intensity set
  IF s[0] LT 65 THEN objects[I]=0
ENDFOR
END

5.2 Iterative Thresholding

The mean values are calculated by summing the product of each intensity and its corresponding histogram proportion. This sum is then divided by the total sum of the histogram proportions to give the mean. The range of the values used is varied as the classes change size.

shapes=MAKE_ARRAY(Wi,He, /INTEGER, VALUE=255)
hist=FLOAT(HISTOGRAM(image)) ;Histogram array of type float
me=FIX(MEAN(image)) ;Mean intensity of image - type integer
;Calculate mean of lower class
sum=0L
FOR I=0,me-1 DO sum=(I*hist[I])+sum
mean1=sum/TOTAL(hist[0:me])
;Calculate mean of upper class
sum1=0L
FOR I=me, 254 DO sum1=(I*hist[I])+sum1
mean2=sum1/TOTAL(hist[me:255])
;Set arbitrary threshold
thold=0
;Calculate new threshold (integer)
nthold=FIX(0.5*(mean1+mean2))
;Current number of iterations
c=0
;Set iteration to end when the change in threshold is under 0.5
WHILE (nthold LT thold-0.5) OR (nthold GT thold+0.5) DO BEGIN
  ;Count iterations
  c=c+1
  ;Used as the previous threshold for comparison
  thold=nthold
  ;Calculate mean of lower class
  sum=0L
  FOR I=0,thold-1 DO sum=(I*hist[I])+sum
  me1=sum/TOTAL(hist[0:thold])
  ;Calculate mean of upper class
  sum1=0L
  FOR I=thold, 254 DO sum1=(I*hist[I])+sum1
  me2=sum1/TOTAL(hist[thold:255])
  ;Calculate new threshold
  nthold=FIX(0.5*(me1+me2))
  ;Print threshold changes
  PRINT, 'Initial Threshold=', me
  PRINT, 'Iteration number', c
  PRINT, 'Threshold=', thold
  PRINT, 'Lower Mean=', me1
  PRINT, 'Upper Mean=', me2
  PRINT, 'New Threshold=', nthold
ENDWHILE
END

5.3 Object Labelling

labeli=LABEL_REGION(image) ;IDL labelling function
label=MAKE_ARRAY(Wi,He, /LONG, VALUE=0) ;Label array
;Initial label value - set to contrast object label
;values with the background for viewing
lab=200L
;Creating a labelled array
FOR I = 0, Wi-1 DO BEGIN
  FOR J = 0, He-1 DO BEGIN
    s=image[[I],[J]]
    ;Set threshold intensity
    IF s[0] GT 60 THEN BEGIN
      image[[I],[J]]=0
      ;Labelling
      lab=lab+1
      label[[I],[J]]=lab
    ENDIF ELSE image[[I],[J]]=255
  ENDFOR
ENDFOR
;Overall loop to set number of forward and reverse sweeps
FOR P=0, 0 DO BEGIN
  ;Find connected components on forward sweep
  FOR I = 0, Wi-1 DO BEGIN
    FOR J = 0, He-1 DO BEGIN
      ;Image pixels to test
      Im=image[[I],[J]]
      Im1=image[[I],[J+1]]
      Im2=image[[I+1],[J]]
      ;Im3=image[[I+1],[J+1]] ;Using 8 connectivity
      ;Im4=image[[I+1],[J-1]] ;Using 8 connectivity
      ;Labels on which to operate
      La=label[[I],[J]]
      La1=label[[I],[J+1]]
      La2=label[[I+1],[J]]
      ;La3=label[[I+1],[J+1]] ;Using 8 connectivity
      ;La4=label[[I+1],[J-1]] ;Using 8 connectivity
      ;IF Im[0] EQ 255 THEN BEGIN ;Only operate on object (white)
      ;Propagate lower labels to neighboring pixels
      IF Im[0] EQ Im1[0] THEN label[[I],[J+1]]=La < La1
      IF Im[0] EQ Im2[0] THEN label[[I+1],[J]]=La < La2
      ;Using 8 connectivity
      ;IF Im[0] EQ Im3[0] THEN label[[I+1],[J+1]]=La < La3
      ;IF Im[0] EQ Im4[0] THEN label[[I+1],[J-1]]=La < La4
      ;ENDIF
    ENDFOR
  ENDFOR
  ;Find connected components on reverse sweep
  FOR I = 0, Wi-1 DO BEGIN
    FOR J = 0, He-1 DO BEGIN
      ;Image pixels to test
      Im=image[[Wi-I],[He-J]]
      Im1=image[[Wi-I],[He-J-1]]
      Im2=image[[Wi-I-1],[He-J]]
      ;Im3=image[[Wi-I-1],[He-J+1]] ;Using 8 connectivity
      ;Im4=image[[Wi-I+1],[He-J-1]] ;Using 8 connectivity
      ;Labels on which to operate
      La=label[[Wi-I],[He-J]]
      La1=label[[Wi-I],[He-J-1]]
      La2=label[[Wi-I-1],[He-J]]
      ;La3=label[[Wi-I-1],[He-J+1]] ;Using 8 connectivity
      ;La4=label[[Wi-I+1],[He-J-1]] ;Using 8 connectivity
      ;IF Im[0] EQ 255 THEN BEGIN ;Only operate on object (white)
      ;Propagate lower labels to neighboring pixels
      IF Im[0] EQ Im1[0] THEN label[[Wi-I],[He-J-1]]= La < La1
      IF Im[0] EQ Im2[0] THEN label[[Wi-I-1],[He-J]]= La < La2
      ;Using 8 connectivity
      ;IF Im[0] EQ Im3[0] THEN label[[Wi-I-1],[He-J+1]]= La < La3
      ;IF Im[0] EQ Im4[0] THEN label[[Wi-I+1],[He-J-1]]= La < La4
      ;ENDIF
    ENDFOR
  ENDFOR
ENDFOR
END

5.4 Image Subtraction

Having loaded the full and background image into arrays image and image1 respectively the absolute and wrapped subtractions are calculated.

imab= ABS(image1-image) ;Compute absolute subtraction
inv= ABS(imab-255) ;Invert absolute subtraction
imw=image-image1 ;Compute wrapped subtraction

5.5 Morphological Noise Removal

The TVSCL command was used to view the images in this case.

x=2 ;Structuring element size
str = MAKE_ARRAY(x,x, /INTEGER, VALUE=1)
open = DILATE(ERODE(image,str),str) ;Perform opening
clos = ERODE(DILATE(image,str),str) ;Perform closing

5.6 Computing Moments

x = make_array(width, /long) ;Array of x moments
for i = 0, width-1 do begin
  count = 0
  for j = 0, height-1 do begin
    a = image[[i],[j]]
    if (a[0] eq 255) then count = count+1 ;Count object pixels in column
    ;Multiply pixels in column by distance from origin
    x[i] = count * i
  endfor
endfor
cmx = total(x)/(total(image)/255) ;Centre of mass x coordinate
y = make_array(height, /long) ;Array of y moments
for j = 0, height-1 do begin
  count = 0
  for i = 0, width-1 do begin
    a = image[[i],[j]]
    if (a[0] eq 255) then count = count+1
    y[j] = count * j
  endfor
endfor
cmy = total(y)/(total(image)/255) ;Centre of mass y coordinate
mom = make_array(width,height)
mom1 = make_array(width,height)
mom2 = make_array(width,height)
;Calculate second order moments U(2,0), U(0,2) and U(1,1)
for i= 0, width-1 do begin
  for j= 0, height-1 do begin
    a = image[[i],[j]]
    if (a[0] eq 255) then begin
      mom[[i],[j]] = (((i-cmx)^2)*a)
      mom1[[i],[j]] = (((j-cmy)^2)*a)
      ;Projection onto the 45-degree axis (i pairs with cmx, j with cmy)
      mom2[[i],[j]] = 0.5*((a/255)*((i-cmx)-(j-cmy))^2)
    endif
  endfor
endfor
U=MAKE_ARRAY(2,2)
U[0,0]=TOTAL(mom)
U[1,0]=TOTAL(mom2)
U[1,1]=TOTAL(mom1)
U[0,1]=TOTAL(mom2)
residual = 1 & ev = 1
Eig=EIGENQL(U, EIGENVECTORS=ev, RESIDUAL=residual)
end
