Thursday, October 27, 2011

IMAGE ENHANCEMENT TECHNIQUES


ABSTRACT

          Image processing is a technique to enhance raw images received from cameras and sensors placed on satellites or aircraft, or pictures taken in normal day-to-day life, for various applications such as remote sensing, biomedical, industrial and forensic sciences.  Sometimes the images from various sensors may lack contrast and may contain different types of noise.  In order to get the required details from such images, they have to be enhanced using digital image processing techniques.  With the availability of high-speed computers and other peripherals, the usage of these techniques has grown.

          This package is useful for biomedical, graphic arts and industrial applications, etc.  It was developed in the C++ language under the MS-DOS environment.  It provides different enhancement techniques such as histogram-based enhancements; spatial-domain filters such as low pass, high pass, median, mean, Sobel and Lee filters; display of the histogram and statistics; display of images in B/W mode; etc.


CONTENTS


1. INTRODUCTION

(I)    IMAGE PROCESSING
(II)   METHODS OF IMAGE PROCESSING
(III)  APPLICATIONS OF IMAGE PROCESSING

2. IMAGE ENHANCEMENT TECHNIQUES

(I)    STATISTICS
(II)   ADDITION OR SUBTRACTION
(III)  HISTOGRAM STRETCHING
(IV)   LOGARITHMIC STRETCHING
(V)    MEDIAN FILTER
(VI)   SOBEL OPERATOR
(VII)  LOW PASS FILTER
(VIII) HIGH PASS FILTER
(IX)   LEE FILTER
(X)    COPY
(XI)   MIRROR
(XII)  MERGE
(XIII) HISTOGRAM GRAY LEVEL GRAPH

3. RESULTS AND CONCLUSION

4. ANNEXURE - DIGITAL IMAGE PROCESSING

(I)    DIGITAL IMAGE REPRESENTATION
(II)   IMAGE MODEL
(III)  FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING
(IV)   IDEA ABOUT IMAGE ENHANCEMENT TECHNIQUES
(V)    THE GRAY LEVEL HISTOGRAM
(VI)   USES OF HISTOGRAM

BIBLIOGRAPHY


1. INTRODUCTION



(I) IMAGE PROCESSING

          Images form an overwhelming part of our experiences from birth.  Before we can define image processing, we must agree upon a definition of the word image.

          “An image is a representation, likeness, or imitation of an object or thing; a vivid or graphic description; something introduced to represent something else.”

          A photograph of Abraham Lincoln, for instance, is a representation of an American President.

          An image contains descriptive information about the object it represents.

          Images can be classified into several types.


          Consider the set of all objects; images form a subset of it.  Physical images are distributions of measurable physical properties.  Optical images are spatial distributions of light intensity.  Examples of non-visible physical images are temperature, pressure, elevation and population density maps.
          A black and white image has one value of brightness at each point; a color image has three values of brightness, one each in red, green, and blue.  The three values represent intensity in different optical spectra, which the eye perceives as different colors.
          Images are used in various applications such as remote sensing, bio-medical engineering, forensic sciences, astronomy, non-destructive testing, etc.  These images can be obtained from the data collected from various satellites, scanners, etc.  In all these applications image processing is required.  Image processing, in its general form, pertains to the alteration and analysis of pictorial information.
          Image processing is defined as the various operations that can be applied to image data.  Activities in image processing mean dealing with image acquisition devices (cameras, scanners), image interface sub-systems (frame grabbers), digital computers and software for processing.
          Image processing techniques are used to improve the visual appearance of an image, or to convert the image to a form better suited for analysis by a human or a machine.
          In industries, various products have to be checked for quality before dispatching.  The image processing techniques are useful to find out the minor defects in the products.



(II) METHODS OF IMAGE PROCESSING

Functionally, three techniques are involved in implementing image processing.  They are,
A)   Optical image processing
B)    Analog image processing
C)    Digital image processing


A) OPTICAL IMAGE PROCESSING :

          In this form of processing, an arrangement of optics is used to carry out the process.

Examples are:
  1. Eyeglasses are a form of optical image processing.
  2. Another important form of optical image processing is found in the photographic dark room.  For years photographers have enhanced, manipulated and abstracted images from one form to another.  The objective has always been to produce a more favorable image.

B) ANALOG IMAGE PROCESSING :

          It refers to the alteration of the image through electrical means.  The most common example of this is the television image.
          The television signal is a voltage level which varies in amplitude to represent brightness throughout the image.  By electrically altering the signal, we correspondingly alter the final displayed image appearance.  The brightness and contrast controls on a TV set serve to adjust the amplitude and reference of the video signal, resulting in the brightening, darkening and alteration of the brightness range of the displayed image.

C) DIGITAL IMAGE PROCESSING :
          “Digital image processing is subjecting numerical representations of objects to a series of operations in order to obtain a desired result.  It starts with one image and produces a modified version of the same.  It is therefore a process that takes an image into an image.”
          Digital image processing is a form of image processing brought on by the advent of the digital computer.  This form provides flexibility and power for applications.



(III) APPLICATIONS OF IMAGE PROCESSING

  • Remote sensing for natural resource mapping (mineral, water, forest)
  • Remote sensing for prediction of natural phenomena (weather, floods, monsoons)
  • Image storage and analysis for medical diagnosis (X-ray imaging, ultrasound, CT scan, MRI, orthopaedics, cytoanalysis, video-microscopy, cosmetology)
  • Machine vision for industrial automation and quality assurance
  • Astronomy
  • Military equipment
  • Security and access control through finger-print and facial image databases
  • Video information capture and printing in Desk-Top Publishing (DTP)
  • Special effects and animation
  • Picture communication using data compression (HDTV, video-phone, tele-conferencing)


2. IMAGE ENHANCEMENT TECHNIQUES





          These are a group of operations that improve the detectability of targets or categories.  These operations include contrast improvement, edge enhancement, spatial filtering, noise suppression, image smoothing and image sharpening.  They are broadly divided into two categories:

i)    Point operations
ii)   Local operations

POINT OPERATION  is one in which the output pixel value depends only on the value of the corresponding input pixel.  Point operations are sometimes called contrast manipulation or stretching.  Addition or subtraction, logarithmic stretching and histogram-based enhancements fall into this category.
LOCAL OPERATION  is one in which the output pixel value depends on the pixel values in a neighborhood of the corresponding input point.  The median filter, Sobel operator, low pass filter, high pass filter and Lee filter fall into this category.
          I developed the software in the C++ language under the MS-DOS environment.  The programs are integrated as a menu; the executable name is MENU.EXE.  When the menu is executed, the following menu appears on the screen.  A detailed description is given below.




***********************************************
*                                             *
*        IMAGE ENHANCEMENT TECHNIQUES         *
*                                             *
***********************************************
*                                             *
*        1  : STATISTICS                      *
*        2  : ADDITION OR SUBTRACTION         *
*        3  : HISTOGRAM STRETCHING            *
*        4  : LOGARITHMIC STRETCHING          *
*        5  : MEDIAN FILTER                   *
*        6  : SOBEL OPERATOR                  *
*        7  : LOW PASS FILTER                 *
*        8  : HIGH PASS FILTER                *
*        9  : LEE FILTER                      *
*        10 : COPY                            *
*        11 : MIRROR                          *
*        12 : MERGE                           *
*        13 : HISTOGRAM GRAPH                 *
*        14 : DISPLAY THE IMAGE               *
*        15 : EXIT                            *
*                                             *
***********************************************

(I) STATISTICS

          In image processing, before applying any enhancement to an image file, it is necessary to know various parameters of the image.  These parameters are the minimum, maximum and mean gray values, variance, standard deviation, and histogram.  The software calculates the above parameters and prints the information.

DESCRIPTION :
The software reads the data from the input image file and calculates the minimum, maximum and mean gray values, variance, and standard deviation over all pixels.

                              Sum of squares  –  ( Number of pixels  X  Mean² )
Variance             =        --------------------------------------------------
                                           Number of pixels  -  1

Standard Deviation   =        SQRT( Variance )
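
          A minimal C++ sketch of these calculations is given below.  It assumes the image has already been read into a buffer of 8-bit pixels; the names (ImageStats, computeStats, pixels) are illustrative, not the package's actual routines.

#include <vector>
#include <cmath>
#include <algorithm>

struct ImageStats { int minGray, maxGray; double mean, variance, stdDev; };

// Compute the minimum, maximum, mean, variance and standard deviation
// of all pixels in the image buffer.
ImageStats computeStats(const std::vector<unsigned char>& pixels)
{
    double sum = 0.0, sumSq = 0.0;
    int minGray = 255, maxGray = 0;
    for (unsigned char p : pixels) {
        sum    += p;
        sumSq  += double(p) * p;
        minGray = std::min<int>(minGray, p);
        maxGray = std::max<int>(maxGray, p);
    }
    double n    = double(pixels.size());
    double mean = sum / n;
    double var  = (n > 1.0) ? (sumSq - n * mean * mean) / (n - 1.0) : 0.0;
    return { minGray, maxGray, mean, var, std::sqrt(var) };
}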



(II) ADDITION OR SUBTRACTION

          This program performs addition or subtraction of a constant.  It is useful to increase or decrease the intensity of a particular image by the required amount.

DESCRIPTION :
          The inputs are input image file name, output image file name and the constant value ‘K’.  For the addition operation, ‘I’ value is two.  For the subtraction operation, ‘I’ value is zero.  The input pixel value is ‘A’.  The output pixel value ‘B’ is computed using the formula,
                           B  =  A  +  ( I – 1 )  *  K
          The above modification of the pixel value is applied to the entire input image file and the result is written into the output data file.
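
          A minimal C++ sketch of this operation, assuming an 8-bit pixel buffer; clamping the result to the 0–255 range is an assumption made here, since the text above does not say how overflow is handled.

#include <vector>
#include <algorithm>

// B = A + (I - 1) * K, with I = 2 for addition and I = 0 for subtraction.
// Clamping to 0..255 is an assumption; the original description does not
// say how values outside the gray scale are handled.
void addSubtract(std::vector<unsigned char>& pixels, int I, int K)
{
    for (unsigned char& a : pixels) {
        int b = int(a) + (I - 1) * K;
        a = (unsigned char)std::max(0, std::min(255, b));
    }
}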



(III) HISTOGRAM STRETCHING

          The histogram of an image file tells us the quality of the data in terms of contrast and brightness.  If the histogram plot occupies only the lower range of the gray scale, i.e., less than 100 gray values, the image will be dark and most of the details may not be visible.  The histogram stretch program stretches the image gray values over the full range 0 to 255, so that the image has good contrast.

DESCRIPTION :
          The program first calculates the minimum and the maximum pixel values in the input image.  Then the output pixel will be modified as per the following formula,
                                  X  -  X1 
                     Y      =    --------   *     255
                                  X2 – X1

                   Where         X1     =        Minimum pixel value in the input image
                                      X2     =        Maximum pixel value in the input image
                                      X       =        Input pixel value
                                      Y        =        Output pixel value

          The above modification of the pixel value is applied to the entire input image file and the result is written into the output data file.
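
          A minimal C++ sketch of the stretch, assuming an 8-bit pixel buffer (the function name histogramStretch is illustrative):

#include <vector>
#include <algorithm>

// Linear stretch of the gray values to the full 0..255 range:
//   Y = (X - X1) / (X2 - X1) * 255
void histogramStretch(std::vector<unsigned char>& pixels)
{
    auto mm = std::minmax_element(pixels.begin(), pixels.end());
    int x1 = *mm.first, x2 = *mm.second;
    if (x1 == x2) return;                     // flat image: nothing to stretch
    for (unsigned char& x : pixels)
        x = (unsigned char)((x - x1) * 255 / (x2 - x1));
}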

(IV) LOGARITHMIC STRETCHING

          This transformation expands the contrast of small gray values and compresses the large values.  It also transforms multiplicative noise into additive noise, which can be removed more easily.  It makes low-contrast details more visible by enhancing low-contrast edges.

DESCRIPTION :
The program first transforms the input image into a log image using the following function.
                           Y     =     log ( X )
                   Where         Y        =        Output pixel value
                                      X       =        Input pixel value
          Then the software calculates the minimum and maximum log values, stretches the log values to the range 0 to 255, and writes the result into the output data file.
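
          A minimal C++ sketch of this two-step operation on an 8-bit pixel buffer; using log(1 + X) to avoid log(0) is an assumption made here.

#include <vector>
#include <cmath>
#include <algorithm>

// Y = log(X), then the log values are stretched linearly to 0..255.
// log(1 + X) is used to avoid log(0); this is an assumption.
void logStretch(std::vector<unsigned char>& pixels)
{
    std::vector<double> logs(pixels.size());
    double lmin = 1e300, lmax = -1e300;
    for (std::size_t i = 0; i < pixels.size(); ++i) {
        logs[i] = std::log(1.0 + pixels[i]);
        lmin = std::min(lmin, logs[i]);
        lmax = std::max(lmax, logs[i]);
    }
    if (lmax == lmin) return;                 // flat image: nothing to stretch
    for (std::size_t i = 0; i < pixels.size(); ++i)
        pixels[i] = (unsigned char)((logs[i] - lmin) * 255.0 / (lmax - lmin));
}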

(V) MEDIAN FILTER

          This type of filter is especially useful to remove salt-and-pepper noise and speckle noise from images.  These types of noise belong to the high frequency components.  A low pass filter removes such noise but the image becomes smooth and blurred, whereas the median filter removes the noise and smooths the image without blurring it.

DESCRIPTION :
          This software works on a 3X3 window basis.  It reads 9 pixels and finds the median value; the central pixel is then replaced with this median value.  Firstly, the software reads 3 lines of data into a buffer.  The 3X3 window operation is executed with single-pixel steps throughout the 3 lines of data, and the resultant line of data is written to the output file.  The software then reads another line of data in place of the first line, and the above process repeats till the end of the data.
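
          A minimal C++ sketch of the 3X3 median filter described above, working on a whole image buffer rather than line by line; leaving border pixels unchanged is an assumption made here for brevity.

#include <vector>
#include <algorithm>

// 3X3 median filter: each interior pixel is replaced by the median of its
// 3X3 neighbourhood.  Border pixels are copied unchanged.
std::vector<unsigned char> medianFilter(const std::vector<unsigned char>& in,
                                        int width, int height)
{
    std::vector<unsigned char> out = in;
    for (int y = 1; y < height - 1; ++y)
        for (int x = 1; x < width - 1; ++x) {
            unsigned char w[9];
            int k = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    w[k++] = in[(y + dy) * width + (x + dx)];
            std::nth_element(w, w + 4, w + 9);    // 5th of the 9 sorted values
            out[y * width + x] = w[4];
        }
    return out;
}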

(VI) SOBEL OPERATOR

          The Sobel operator is one of the edge enhancement operators.  It works well on thresholded images; a threshold is used to obtain a binary image.  This operator plays an important role in image processing for industrial and medical applications.

DESCRIPTION :
          It works on a 3X3 window basis.  Let the 3X3 window of pixels appear as follows.
                                      X11    X12    X13
                                      X21    X22   X23
                                      X31    X32   X33

          The central pixel value is modified as follows.

          Central pixel value                    =        SQRT( X² + Y² )

          Where         X       =        ( X13 + 2*X23 + X33 )  – (X11 + 2*X21 + X31)
                             Y        =        ( X11 + 2*X12 + X13 ) – ( X31 + 2*X32 + X33)

          The above procedure is repeated for the three lines of data with single-step increments (window movement).  The software then reads another line of data in place of the first line and the above process repeats till the end of the data.
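
          A minimal C++ sketch of the operator, assuming an 8-bit image buffer and clipping the result to 255 (an assumption; the text above does not say how large magnitudes are handled):

#include <vector>
#include <cmath>
#include <algorithm>

// Sobel operator: the central pixel becomes SQRT(X*X + Y*Y), where X and Y
// are the horizontal and vertical weighted differences defined above.
std::vector<unsigned char> sobel(const std::vector<unsigned char>& in,
                                 int width, int height)
{
    std::vector<unsigned char> out(in.size(), 0);
    auto p = [&](int y, int x) { return int(in[y * width + x]); };
    for (int y = 1; y < height - 1; ++y)
        for (int x = 1; x < width - 1; ++x) {
            int gx = (p(y-1, x+1) + 2*p(y, x+1) + p(y+1, x+1))
                   - (p(y-1, x-1) + 2*p(y, x-1) + p(y+1, x-1));
            int gy = (p(y-1, x-1) + 2*p(y-1, x) + p(y-1, x+1))
                   - (p(y+1, x-1) + 2*p(y+1, x) + p(y+1, x+1));
            int mag = (int)std::sqrt(double(gx * gx + gy * gy));
            out[y * width + x] = (unsigned char)std::min(255, mag);
        }
    return out;
}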



(VII) LOW PASS FILTER

          Images can contain random noise superimposed on the pixel brightness values.  The noise may be generated in the transducers which acquire the image data, or during transmission.  This is also called ‘salt and pepper noise’.  It can be removed by the process of low pass filtering or smoothing, but at the expense of some high frequency information in the image.  The filtering is performed by means of a 3X3 or 5X5 window.

DESCRIPTION :
          The software works on a 3X3 or 5X5 window basis.  It first reads 3 or 5 lines of data, then it reads a 3X3 or 5X5 window of pixels.  The pixels are multiplied by the selected mask weights and the central pixel value is replaced with the resultant value.  The process repeats with single-pixel steps throughout the 3 or 5 lines of data, and the resultant line of data is written to the output data file.  Then another line of data is read in place of the first line and the above process repeats till the end of the data.



          LPF3 (3X3 Window)

                   Enter the mask number :

                   (1)     .25     .5      .25
                           .5      1       .5
                           .25     .5      .25

                   (2)     .1      .1      .1
                           .1      .1      .1
                           .1      .1      .1

                   (3)     .1      .1      .1
                           .1      .2      .1
                           .1      .1      .1

          LPF5 (5X5 Window)

                   Enter the mask number :

                   (1)     .1      .1      .1      .1      .1
                           .1      .1      .1      .1      .1
                           .1      .1      .2      .1      .1
                           .1      .1      .1      .1      .1
                           .1      .1      .1      .1      .1

                   (2)     .1      .1      .1      .1      .1
                           .1      .1      .1      .1      .1
                           .1      .1      .1      .1      .1
                           .1      .1      .1      .1      .1
                           .1      .1      .1      .1      .1
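
          A minimal C++ sketch of the 3X3 window operation, applicable to any of the 3X3 masks above (the 5X5 case is analogous).  Dividing the result by the sum of the mask weights, when it is non-zero, is an assumption made here so that the overall brightness is preserved; the helper name convolve3x3 is illustrative.

#include <vector>
#include <algorithm>

// Generic 3X3 window operation: the central pixel is replaced by the sum of
// the products of the window pixels and the mask weights, divided by the sum
// of the weights (normalization is an assumption, see the note above).
std::vector<unsigned char> convolve3x3(const std::vector<unsigned char>& in,
                                       int width, int height,
                                       const double mask[3][3])
{
    double wsum = 0.0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            wsum += mask[i][j];
    double norm = (wsum != 0.0) ? wsum : 1.0;

    std::vector<unsigned char> out = in;
    for (int y = 1; y < height - 1; ++y)
        for (int x = 1; x < width - 1; ++x) {
            double acc = 0.0;
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j)
                    acc += mask[i][j] * in[(y + i - 1) * width + (x + j - 1)];
            int v = (int)(acc / norm + 0.5);
            out[y * width + x] = (unsigned char)std::max(0, std::min(255, v));
        }
    return out;
}

// Example: low pass 3X3 mask (1) from the list above.
const double LPF3_MASK1[3][3] = { { .25, .5, .25 },
                                  { .5,  1., .5  },
                                  { .25, .5, .25 } };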



(VIII) HIGH PASS FILTER

          Any image contains both high frequency and low frequency content.  The high frequency content is associated with frequent changes of brightness with position.  Edges, lines and some types of noise are examples of high frequency data.  This type of data plays an important role in remote sensing.  The high pass filter passes the high frequency content and filters out the low frequency content, so that the extraction of information from edges becomes simple.  The filtering is performed by means of a 3X3 or 5X5 window.

DESCRIPTION :
          The software works on a 3X3 or 5X5 window basis.  It first reads 3 or 5 lines of data, then it reads a 3X3 or 5X5 window of pixels.  The pixels are multiplied by the selected mask weights and the central pixel value is replaced with the resultant value.  The process repeats with single-pixel steps throughout the 3 or 5 lines of data, and the resultant line of data is written to the output data file.  Then another line of data is read in place of the first line and the above process repeats till the end of the data.


          HPF3 (3X3 Window)

                   Enter the mask number :

                   (1)      1      -2       1
                           -2       5      -2
                            1      -2       1

                   (2)     -1      -1      -1
                           -1       9      -1
                           -1      -1      -1

                   (3)      0      -1       0
                           -1       4      -1
                            0      -1       0

          HPF5 (5X5 Window)

                   Enter the mask number :

                   (1)      0       0      -1       0       0
                            0      -1       0      -1       0
                           -1       0      10       0      -1
                            0      -1       0      -1       0
                            0       0      -1       0       0

                   (2)     -1      -1      -1      -1      -1
                           -1      -1      -1      -1      -1
                           -1      -1      25      -1      -1
                           -1      -1      -1      -1      -1
                           -1      -1      -1      -1      -1
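
          The same convolve3x3 sketch shown under the low pass filter can be reused here; only the mask changes.  For example, with mask (2) of the 3X3 high pass filter (the image buffer and its 512X512 size in the commented usage line are hypothetical):

// High pass 3X3 mask (2), usable with the convolve3x3 sketch shown earlier.
const double HPF3_MASK2[3][3] = { { -1, -1, -1 },
                                  { -1,  9, -1 },
                                  { -1, -1, -1 } };

// Example usage with a hypothetical 512X512 image buffer:
// std::vector<unsigned char> sharpened = convolve3x3(image, 512, 512, HPF3_MASK2);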



(IX)       LEE FILTER

          Computational techniques involving contrast enhancement and noise filtering on two-dimensional image arrays have been developed based on their local mean and variance.  The Lee filter is a type of enhancement based on local statistics, especially the local mean.  It enhances the edges in a given image.  It is most effective for high-resolution data like IRS, LANDSAT and SPOT data sets.  It works on a 5X5 window basis.

DESCRIPTION :
          The Lee filter works on a 5X5 window basis.  The software first reads 5 lines of input data.  Then it reads a 5X5 window of pixels, i.e., 25 pixels, and the mean value of these 25 pixels is calculated.  The resultant pixel is calculated using the formula:
X’     =     mean + 2 * ( X – mean )
          Where         X       =        Input pixel value
                             X’      =        Output pixel value
          The central pixel is replaced with the output pixel value.  The above process is repeated till the end of data.
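
          A minimal C++ sketch of the filter exactly as described here (5X5 local mean plus doubled deviation), with the result clamped to 0–255 as an assumption:

#include <vector>
#include <algorithm>

// Lee filter as described above: 5X5 local mean, then X' = mean + 2*(X - mean),
// clamped to the 0..255 range (the clamping is an assumption).
std::vector<unsigned char> leeFilter(const std::vector<unsigned char>& in,
                                     int width, int height)
{
    std::vector<unsigned char> out = in;
    for (int y = 2; y < height - 2; ++y)
        for (int x = 2; x < width - 2; ++x) {
            double sum = 0.0;
            for (int dy = -2; dy <= 2; ++dy)
                for (int dx = -2; dx <= 2; ++dx)
                    sum += in[(y + dy) * width + (x + dx)];
            double mean = sum / 25.0;
            int v = (int)(mean + 2.0 * (in[y * width + x] - mean) + 0.5);
            out[y * width + x] = (unsigned char)std::max(0, std::min(255, v));
        }
    return out;
}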

(X) COPY

          The images taken from X-ray photographs, tomographs or other sources are of different sizes.  Sometimes the sizes are of the order of 2000 pixels X 2000 lines or so, but the area of interest in the image may be small.  This program copies the required portion of the image into another file for further processing.

DESCRIPTION :
          The inputs for the program are input and output file names, starting pixel and line and the ending pixel and line of input image.  According to the above information, the required portion from the input file will be copied into the output file.
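
          A minimal C++ sketch of the copy operation on an in-memory buffer; reading and writing the actual files is omitted, and the coordinates are taken to be inclusive (an assumption).

#include <vector>

// Copy the rectangular portion (startPixel..endPixel, startLine..endLine,
// both inclusive) of the input image into a new, smaller image.
std::vector<unsigned char> copyRegion(const std::vector<unsigned char>& in,
                                      int width,
                                      int startPixel, int startLine,
                                      int endPixel,   int endLine)
{
    std::vector<unsigned char> out;
    for (int y = startLine; y <= endLine; ++y)
        for (int x = startPixel; x <= endPixel; ++x)
            out.push_back(in[y * width + x]);
    return out;
}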

(XI) MIRROR

          Digital images are obtained through satellites and using transducers such as digitizers.  Drum scanners, frame grabbers, etc., come under the category of digitizers.  Sometimes, if a map or X-ray photograph is digitized with a scanner, a mirror image may be obtained.  So, before doing any image processing, the mirror image has to be flipped back to the right image.  This program performs that task.

DESCRIPTION :
          The program reads one line of data from the input data file, flips it from the mirror image to the right image, and writes it into the output data file.  The process continues till the end of the input data file.
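
          A minimal C++ sketch of the flip, operating on a whole image buffer one line at a time:

#include <vector>
#include <algorithm>

// Flip each line of the image left to right, turning the mirror image back
// into the right image.
void mirror(std::vector<unsigned char>& pixels, int width, int height)
{
    for (int y = 0; y < height; ++y)
        std::reverse(pixels.begin() + y * width,
                     pixels.begin() + (y + 1) * width);
}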

(XII) MERGE

          In image processing, the processed data will be in a different file.  If we want to see the input and output data files, they have to be loaded separately; they cannot be seen at the same time.  This program combines various input files into a single file based on the positions of the given co-ordinates.  If there are two images of size 512X512, the output file will have a size of 512X1024; here the two images are kept side by side.

DESCRIPTION :
          The inputs for the program are two input file names, one output file name, and the sizes of the two input files in terms of pixels and lines.  According to the above information, the two input images are merged and placed in the output file.
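
          A minimal C++ sketch of the side-by-side merge for two images of equal height (the simplest case described above; placement at arbitrary co-ordinates is not shown):

#include <vector>

// Place two images of equal height side by side; e.g. two 512X512 images
// become one image of 1024 pixels by 512 lines.
std::vector<unsigned char> mergeSideBySide(const std::vector<unsigned char>& a, int widthA,
                                           const std::vector<unsigned char>& b, int widthB,
                                           int height)
{
    std::vector<unsigned char> out;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < widthA; ++x) out.push_back(a[y * widthA + x]);
        for (int x = 0; x < widthB; ++x) out.push_back(b[y * widthB + x]);
    }
    return out;
}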

(XIII) HISTOGRAM GRAY LEVEL GRAPH

          In image processing, before applying any enhancement to an image file, it is necessary to know various parameters of the image.  These parameters are the minimum, maximum and mean gray values, variance, standard deviation, histogram values and the histogram plot.  The software calculates the above parameters and prints the information and the histogram plot on the PC terminal.

DESCRIPTION :
          The software reads the input image file data, calculates the minimum, maximum and mean gray values, the standard deviation and the histogram values, and finally plots the histogram on the PC terminal along with the above information.
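
          A minimal C++ sketch of the histogram calculation and a crude text plot on the terminal (the 60-column scaling and the sampling of every 8th gray level are arbitrary choices made here, not the package's actual display):

#include <vector>
#include <cstdio>

// Count how many pixels take each of the 256 gray levels; this table is the
// data behind the histogram plot.
std::vector<long> computeHistogram(const std::vector<unsigned char>& pixels)
{
    std::vector<long> hist(256, 0);
    for (unsigned char p : pixels)
        ++hist[p];
    return hist;
}

// A crude text plot: one row of '*' per sampled gray level, scaled so that
// the tallest bar is 60 characters wide.
void printHistogram(const std::vector<long>& hist)
{
    long peak = 1;
    for (long h : hist)
        if (h > peak) peak = h;
    for (int g = 0; g < 256; g += 8) {          // every 8th level, for brevity
        std::printf("%3d |", g);
        long stars = hist[g] * 60 / peak;
        for (long s = 0; s < stars; ++s)
            std::putchar('*');
        std::putchar('\n');
    }
}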


3. RESULTS AND CONCLUSION

          Image processing represents a set of techniques with vital application in diverse areas like manufacturing, industrial quality control, medical imaging, remote sensing, etc.

          Image processing involves sampling, digitization, storage and processing of analog picture information.

          All the programs developed are integrated under one menu.  The software works for any image up to a maximum size of 512X512.



4. ANNEXURE


DIGITAL IMAGE PROCESSING


          The word Digital relates to “Calculation by numerical methods or by discrete units.”
          We can now define a digital image to be a numerical representation of an object.
          Processing is the act of subjecting something to a process.  A process is a series of actions or operations leading to a desired result.  Thus a series of actions or operations are performed upon an object to alter its form in a desired manner.
          The restricted definition of a digital image is a sampled, quantized function of two dimensions which has been generated by optical means, sampled in an equally spaced rectangular grid pattern, and quantized in equal intervals of gray level.
          Digital image processing involves the manipulation and interpretation of digital images with the aid of a computer.  This form of image processing was brought about by the advent of the digital computer.  The data has to be converted into digital form using various scanners.  Allowing the precise implementation of the processes, this form provides the greatest flexibility and power for general image processing applications.

·         CONTRAST refers to the amplitude of gray level variations within an image.
·         NOISE is broadly defined as an additive (or multiplicative) contamination of an image.
·    THE SAMPLING DENSITY of a digital image is the number of sample points per unit measure in the domain.
·     GRAY SCALE RESOLUTION is the number of gray levels per unit measure of image amplitude.
·         MAGNIFICATION refers to the size relationship between an image and the object or image it represents.
         Magnification is a meaningful relationship between input and output digital images in a processing step.

ELEMENTS OF DIGITAL IMAGE PROCESSING :

          Digital Image Processing basically requires a computer upon which to process images.  In addition, the system must have two pieces of special input/output equipment, an image digitizer and an image display device.
          But since computers work with numerical rather than pictorial data, an image must be converted to numerical form before processing.  This conversion process is called digitization.  After processing, the final product is displayed by a process that is the reverse of digitization.
          The image is divided into small regions called picture elements or pixels or pels.  Each pixel has a location or address (line or row number and sample or column number) and an integer value called the gray level.  Gray level represents the brightness or darkness of the point.

ADVANTAGES OF DIGITAL IMAGE PROCESS :

          In industries, products have to be checked for quality before dispatching.  These are tested using Non-destructive Evaluation methods like X-ray, Ultrasonic imaging and Acoustic Emission techniques.  Because of limitations of the outputs from above techniques, minor details may not be seen clearly.  So, these can be converted into digital form using various scanners and images can be enhanced using digital image processing techniques to know the defects clearly.
          The principal advantages of digital image processing methods are their versatility, repeatability and the preservation of the original data precision.

FACTORS TO INDICATE A LIVELY FUTURE FOR DIGITAL IMAGE PROCESSING :
1)      The declining cost of computer equipment.
2)    Increasing availability of the equipment.
3)    Several new technological trends, which include parallel processing made practical by low-cost microprocessors, and the use of Charge Coupled Devices (CCD).
4)    Medical diagnosis.
5)    Remote sensing programs.


(I) DIGITAL IMAGE REPRESENTATION

          The term monochrome image refers to a two dimensional light intensity function f(x, y), where x and y denote spatial co-ordinates and the value of f at any point (x, y) is proportional to the brightness (or gray level) of the image at that point.  A digital image is an image f(x, y) that has been discretized both in spatial coordinates and in brightness.  We may consider a digital image as a matrix whose row and column indices identify a point in the image and whose corresponding element value gives the gray level at that point.  The elements of such a digital array are called image elements, picture elements, pixels or pels, the last two names being abbreviations of "picture elements".

(II) IMAGE MODEL

          The term image refers to a two dimensional light intensity function denoted by f(x, y), where the value or amplitude of f at spatial coordinates (x, y) gives the intensity of the image at that point.
          Since light is a form of energy, f(x, y) must be non-zero and finite, i.e., 0 < f(x, y) < ∞.  The images we receive in our everyday visual activities normally consist of light reflected from objects.  The basic nature of f(x, y) may be considered as being characterized by two components.  One component is the amount of source illumination incident on the scene being viewed, while the other is the amount of light reflected by the objects in the scene.  These components are appropriately called the illumination and reflectance components and are denoted by i(x, y) and r(x, y) respectively.
          The functions i(x, y) and r(x, y) combine as a product to form f(x, y).

0 < f(x, y) < ∞                          --------------------   eq(2.1)

f(x, y) = i(x, y) * r(x, y)              --------------------   eq(2.2)

where       0 < i(x, y) < ∞              --------------------   eq(2.3)

            0 <= r(x, y) <= 1            --------------------   eq(2.4)

          Eq(2.4) indicates that reflectance is bounded by 0 (total absorption) and 1 (total reflectance).  The nature of i(x, y) is determined by the light source, while r(x, y) is determined by the characteristics of the objects in the scene.

          The intensity of a monochrome image f at coordinates (x, y) is called the gray level l of the image at that point.  From the above equations it is evident that l lies in the range

lmin <= l <= lmax                        --------------------   eq(2.5)

          In theory, the only requirement on lmin is that it be positive, and on lmax that it be finite.  In practice,

       lmin = i(min) * r(min)    and          lmax = i(max) * r(max)

          The interval lmin to lmax is called the gray scale.  It is a common practice to shift this interval numerically to the interval (0,L), where l = 0 is considered black and l = L is considered white in the scale.  All intermediate values are shades of gray varying continuously from black to white.

(III) FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING

The first step in the process is image acquisition – that is, to acquire a digital image.  To do so requires an imaging sensor and the capability to digitize the signal produced by the sensor.  After the digital image has been obtained, the next step deals with preprocessing that image.  The key function of preprocessing is to improve the image in ways that increase the chances of success of the other processes, for example by enhancing contrast, removing noise, and isolating regions.
The next stage deals with segmentation.  Segmentation partitions an input image into its constituent parts or objects.  The output of the segmentation stage is usually raw pixel data.  Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing.  Description, also called feature selection, deals with extracting features that result in some quantitative information of interest or features that are basic for differentiating one class of objects from another.
The last stage involves recognition and interpretation. Recognition is the process that assigns a label to an object based on the information provided by descriptors.  Interpretation involves assigning meaning to an ensemble of recognized objects.
Knowledge about a problem domain is coded into an image processing system in the form of a knowledge base.  Thus it limits the search that has to be conducted in seeking that information.


(IV) IDEA ABOUT IMAGE ENHANCEMENT TECHNIQUES

          Just as most raw materials have to be purified, most raw images have to be enhanced.  In fact, most of the pictures obtained either from a space probe or from a microscope are not very clear, and most of them can be made better.  These pictures need to be enhanced in one way or another to make them better and more perceptible.
The popular enhancement techniques often applied are :
A)               Contrast Stretching or Enhancement
B)                Image Filtering
C)                Image ratioing or Image Transformation

A)  CONTRAST STRETCHING OR ENHANCEMENT :
          Sometimes the images (e.g., over water bodies, deserts, dense forests, snow, clouds, and over heterogeneous regions under hazy conditions) are homogeneous, i.e., they do not have much change in their gray levels.  In terms of the histogram representation, they are characterized by the occurrence of very narrow peaks.  The homogeneity can also be due to incorrect illumination of the scene.
The pictures thus obtained are not easily interpretable or are sometimes poorly perceptible to humans.  This is because there exists only a narrow range of gray levels in an image that has provision for a wider range of gray levels.  The contrast stretching methods are designed exclusively for such frequently encountered situations.  Different stretching techniques have been developed to stretch the narrow range to the whole of the available dynamic range.  These techniques can broadly be classified as linear and non-linear stretching.


B)   IMAGE FILTERING :

Spatial frequency is the number of changes in brightness value per unit area in any part of an image.  If the spatial frequency is low, the area is called a low frequency area; if it is high, it is called a high frequency area.  Algorithms that enhance images by suppressing or de-emphasizing certain frequencies and passing or emphasizing certain other frequencies are called filters.  Filters that pass high frequencies, and hence emphasize fine details and edges, are referred to as high frequency filters; conversely, low frequency filters suppress the high frequency content of the imagery while emphasizing gradual changes.
          This filtering of the image is achieved by changing the brightness values of the original image in a particular fashion.  The brightness value BV of the pixel at coordinates (u, v) of the image is changed to BV' by applying a mask C.  The coefficient/kernel values are multiplied as shown below:

C(1, 1)*BV(u-1, v-1)     C(1, 2)*BV(u-1, v)     C(1, 3)*BV(u-1, v+1)
C(2, 1)*BV(u,   v-1)     C(2, 2)*BV(u,   v)     C(2, 3)*BV(u,   v+1)
C(3, 1)*BV(u+1, v-1)     C(3, 2)*BV(u+1, v)     C(3, 3)*BV(u+1, v+1)

The new value BV' resulting from the filtering operation is calculated as:

BV'(u, v)  =  (1/N) * SUM over i = 1..3, j = 1..3 of  { C(i, j) * BV(u-2+i, v-2+j) }

C)   IMAGE RATIOING OR IMAGE TRANSFORMATION :

          The theme of the magnification technique is to get a closer view by magnifying or zooming the region of interest in the imagery.  By reduction, we can bring an unmanageably huge amount of data down to a manageable limit in order to display the imagery.

i)                   MAGNIFICATION :
This is usually done to improve the scale of display for visual interpretation or sometimes to match the scale of one image to another.  To magnify an image by a factor of m², each pixel of the original image is replaced by a block of mXm pixels, all with the same brightness value as the original pixel.

ii)                REDUCTION :
          To reduce a digital image to 1/m² of the original data, every mth row and column of the original imagery is selected and displayed.  Another way of accomplishing the same is by taking the average in each mXm block and displaying this average after proper rounding.
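
          A minimal C++ sketch of both operations on an 8-bit image buffer (pixel replication for magnification, and row/column selection for reduction):

#include <vector>

// Magnify by pixel replication: each input pixel becomes an mXm block,
// so the image grows by a factor of m * m.
std::vector<unsigned char> magnify(const std::vector<unsigned char>& in,
                                   int width, int height, int m)
{
    std::vector<unsigned char> out(width * m * height * m);
    for (int y = 0; y < height * m; ++y)
        for (int x = 0; x < width * m; ++x)
            out[y * width * m + x] = in[(y / m) * width + (x / m)];
    return out;
}

// Reduce to 1/(m*m) of the data by selecting every mth row and column.
std::vector<unsigned char> reduce(const std::vector<unsigned char>& in,
                                  int width, int height, int m)
{
    std::vector<unsigned char> out;
    for (int y = 0; y < height; y += m)
        for (int x = 0; x < width; x += m)
            out.push_back(in[y * width + x]);
    return out;
}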



(V) THE GRAY LEVEL HISTOGRAM

          One of the simplest and most useful tools in digital image processing is the gray level histogram.  This function summarizes the gray level content of an image.  While the histogram of any image contains considerable information, certain types of images are completely specified by their histograms.  Computation of the histogram is simple and may be done at little apparent cost during transfer of any image from one data set to another.

          The gray level histogram is a function showing, for each gray level, the number of pixels in the image that have that gray level.

          This has become the most frequent form of representation of an image because it conveys many properties of the image.  By looking at the histogram, one can determine the nature of the image, and by using it one can explain one's techniques and implementations as well as the results.

(VI) USES OF HISTOGRAM

A)  DIGITIZING PARAMETERS :
The histogram indicates whether an image is properly scaled within the available range of gray levels.  A digital image should make use of almost all the available gray levels.

B)   BOUNDARY THRESHOLD SELECTION :
The gray level corresponding to the minimum between the two peaks in the histogram is optimal for defining the boundary.  Given the histogram, we could determine an optimal threshold gray level for the object and compute its area without even seeing the image.


BIBLIOGRAPHY

1.  DIGITAL IMAGE PROCESSING – GONZALEZ & WINTZ, Addison-Wesley Publishing Company
2.  DIGITAL IMAGE PROCESSING – JENSON
3.  DIGITAL IMAGE PROCESSING – KENNETH R. CASTLEMAN
4.  PATTERN RECOGNITION LETTERS, Volume 13, 1992
5.  IMAGE ENHANCEMENT AND RESTORATION – R.C. GONZALEZ
6.  DIGITAL PICTURE PROCESSING – AZRIEL ROSENFELD AND AVINASH C. KAK
7.  PRAGMATIC DIGITAL IMAGE PROCESSING – BILLINGSLEY F.C.
8.  COMPUTER VISION, GRAPHICS & IMAGE PROCESSING, Volume 29, 1985
9.  COMPUTER VISION, GRAPHICS & IMAGE PROCESSING, Volume 41, 1988
10. COMPUTER VISION, GRAPHICS & IMAGE PROCESSING, Volume 56, 1988


THE END