A New Approach to the Analysis of Fingerprints
Introduction
Conventional methods of analysing fingerprints require acquisition of
their images via a sensor array, usually a CCD or, less commonly
these days, a tube sensor. In either case, the image is formed in the
system by horizontal and vertical scanning, and converted into a Cartesian
arrangement of pixels in the subsequent processing. Standard image
processing routines, such as edge detection and thinning, are used to
extract the salient features so that comparisons with previously-stored
data can be facilitated. However, there is an argument that such an
approach is quite unnatural. How can this be so? Consider the fact that,
in general terms, a fingerprint shows circular characteristics. Why should
it be constrained, when analysed, to fit into a rectangular world? The
study undertaken at Bradford shows that a different approach to
fingerprint analysis can be taken, resulting in certain advantages over
conventional methods.
Approach
The main problem associated with Cartesian-coordinate imaging
lies in the measurement of diagonal distances and velocities. Spatial
resolution along the diagonals is lower compared to measurements along the
two major axes. Consider a different geometry: if an image is broken down
into hexagons, instead of rectangles, then it is possible to use 3 major
axes. Furthermore, each pixel is equidistant from each of its 6 nearest
neighbours in the hexagonal format. Consequently, distance and velocity
measurements are facilitated. Clearly, the fact that the outside world is
Cartesian, in terms of sensors, computer memory, and the basic television
system in general, forces certain compromises when adapting to the new
approach. Nonetheless, and perhaps surprisingly, this adaptation is not
difficult. Moreover, it permits a reduction in the amount of data that must
be stored, yielding improvements in the processing of circularly-symmetric
data ranging from 13.4% to 50% [1].
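The equidistance property is easy to verify numerically. The following sketch (an illustration added here, not part of the original study) compares centre-to-centre distances to the nearest neighbours on the two grids:

```python
import math

# On a square grid, diagonal neighbours are sqrt(2) times farther away than
# axial ones; on a hexagonal grid, all six nearest neighbours are equidistant.

def square_neighbour_distances():
    offsets = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               if (dx, dy) != (0, 0)]
    return sorted({round(math.hypot(dx, dy), 6) for dx, dy in offsets})

def hex_neighbour_distances():
    # Axial hex coordinates of the six neighbours, mapped onto the plane
    # with unit lattice spacing.
    axial = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
    def to_plane(q, r):
        return (q + r / 2.0, r * math.sqrt(3) / 2.0)
    return sorted({round(math.hypot(*to_plane(q, r)), 6) for q, r in axial})

print(square_neighbour_distances())  # two distinct distances: 1 and sqrt(2)
print(hex_neighbour_distances())     # a single distance: 1
```

The single distance value on the hexagonal grid is what simplifies distance and velocity measurement.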
Pre-processing is an essential part of an automatic fingerprint
identification system, because if the fingerprint is not of an acceptable
standard automatic fingerprint recognition becomes extremely difficult.
The main reason for this is that the usual methods identify the fingerprint
from the small unique features in the image known as minutiae.
The pre-processing procedure consists of four stages:
- Filtering to smooth the ridges
- Adaptive segmentation to separate the foreground ridges from the background
- Noise removal eliminating pores and spurs
- Thinning that reduces the ridges to a width of one pixel.
Filtering
A fingerprint image contains narrow ridges separated by
equally-narrow background valleys. The image may be corrupted by various
kinds of noise, causing breaks in the ridges, joins between them, and overall
grey scale intensity variation. In order to enhance the image, filtering
is applied, which is designed to increase the contrast between ridges and
valleys. The filter should also enhance data along the same orientation as
the ridges within the local area of each pixel. The contrast enhancement
should depend only on local differences in intensities between ridges and
background. That is, the filter should perform adaptively to regional
differences in average intensity. The fingerprint image may be
transformed to a directional form that represents the local orientations
of the ridges [1,2]. Six directions are chosen to determine the local
ridge orientation, as shown in Figure 1. A filter for one direction is
designed, and filters for all other directions are generated by rotating
this basic filter. The fingerprint ridges vary in width between one and
three pixels; accordingly, a filter mask using the six nearest neighbours is
chosen (Figure 2).
The middle row has coefficients to amplify the ridges.
By looking into the block directional image, the appropriate directional
filter may be selected. This filter is used for convolution of the
fingerprint image within the region.
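The selection-and-convolution step can be sketched as follows. This is an illustrative version only: the mask shape, mask size, and block handling are assumptions, not the authors' exact filters from Figure 2.

```python
import numpy as np

# Hypothetical directional filtering: pick one of six line-shaped averaging
# masks according to the block's dominant ridge orientation, then convolve
# it over that block.

DIRECTIONS = [0, 30, 60, 90, 120, 150]  # degrees, quantised ridge orientations

def directional_mask(angle_deg, size=5):
    """A line-shaped averaging mask oriented along angle_deg."""
    mask = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in range(-c, c + 1):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c - t * np.sin(theta)))
        mask[y, x] = 1.0
    return mask / mask.sum()

def filter_block(block, angle_deg):
    """Convolve one image block with the mask chosen for its orientation."""
    mask = directional_mask(angle_deg)
    size = mask.shape[0]
    pad = size // 2
    padded = np.pad(block, pad, mode="edge")
    out = np.zeros_like(block, dtype=float)
    for i in range(block.shape[0]):
        for j in range(block.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * mask)
    return out

# Smoothing along the ridge direction preserves a vertical ridge intact:
ridge = np.zeros((8, 8))
ridge[:, 4] = 1.0                   # a vertical ridge at column 4
smoothed = filter_block(ridge, 90)  # filter oriented along the ridge
```

Averaging along the ridge direction smooths noise on the ridge without blurring it into the neighbouring valleys, which is the behaviour the text asks of the filter.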
Adaptive Binarisation
The aim of segmentation of fingerprint images is to
separate the clear fingerprint area from the non-fingerprint area. An
acceptable segmentation technique should have the following
characteristics:
- it should not be sensitive to the contrast in the input image
- the result of segmentation should be invariant to the enhancements in the input image
- it should give consistent results for varying image quality.
The histograms of fingerprint images are not bimodal. Hence, thresholding
and segmentation of these images are nontrivial. A form of adaptive
average thresholding has been found to be most suitable for this
application. By selecting a suitable region size, computing its average
and setting the ridges and valleys to distinct grey levels within the
image, uneven impressions can be removed with considerable success. For
best results, a window that encompasses one period of the fingerprint
image (i.e. one ridge and one valley) is chosen.
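A minimal sketch of this adaptive average thresholding follows; the window size and the use of non-overlapping blocks are assumptions for illustration:

```python
import numpy as np

# Adaptive average thresholding: each pixel is binarised against the mean of
# its local window (assumed to span one ridge/valley period), so slow
# variations in inking pressure cancel out.

def adaptive_binarise(image, window=8):
    """Set pixels above the local window mean to 1 (ridge), else 0 (valley)."""
    img = image.astype(float)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(0, h, window):
        for j in range(0, w, window):
            region = img[i:i + window, j:j + window]
            out[i:i + window, j:j + window] = (region > region.mean())
    return out

# A striped pattern with a slowly varying background still binarises cleanly,
# because each window is compared only against its own local average:
stripes = np.zeros((8, 8))
stripes[:, ::2] = 10.0                        # ridges every other column
shading = np.arange(8).reshape(-1, 1) * 0.5   # uneven impression intensity
binary = adaptive_binarise(stripes + shading)
```

Because the threshold is the local mean rather than a global value, the method is insensitive to overall contrast, as the list of requirements above demands.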
Noise removal
The binarisation algorithm may introduce noise in the form
of small holes in and around the ridges. In order to remove these, a
number of morphological operators are convolved with the image, as shown
in Figure 3.
The first template fills in isolated holes within the fingerprint ridges.
The second and third templates, along with their five rotations, fill in gaps
along the ridge edges. The templates switch the centre pixel from "0" to
"1" if all the conditions in that template are true.
Thinning
Generally the thickness of the fingerprint ridges varies because
of bad inking, improper printing, etc. In classifying the prints and
extracting the features from the ridges, it is necessary to reduce the
thickness of the ridges to unity. In this approach the thinning can be
seen as the result of an iterated compression process. At every step,
contour pixels are tested using suitable removal criteria. Removing them
exposes some internal pixels, which become contour pixels at the next step.
Removal operations are iteratively
applied until no further pixel is removed. Due to the symmetric erosion,
the skeleton becomes centred within the original shape and is of unit
width everywhere. Six operators shown in Figure 4 are passed over the
image along the 0, 60 and 120 degree orientations. The removal criteria are
determined by the crossing number and the sum of the nearest neighbours,
respectively [2]. The first criterion preserves the connectivity of the
thinned image, whilst the second stops any unit limbs from disappearing.
Each iteration is subdivided into six subiterations, one per template.
Note that templates (a)-(f) ensure that
only boundary pixels are removed and hence no hole is formed during any
pass of the iterations. The thinning tends to leave small noise spurs
when the input image boundary is noisy. These can be eliminated by a
final deletion pass [5].
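The two removal criteria can be illustrated on a rectangular neighbourhood; the paper applies the analogous test over the six hexagonal neighbours, so the eight-neighbour version below is an assumption made for the sketch.

```python
# Removal criteria for thinning: a contour pixel is removable when its
# crossing number is 1 (deleting it cannot break connectivity) and it has
# more than one ridge neighbour (so the tip of a unit-width limb survives).

def crossing_number(nbrs):
    """nbrs: the neighbours listed in circular order; counts 0->1 transitions."""
    return sum(1 for a, b in zip(nbrs, nbrs[1:] + nbrs[:1]) if a == 0 and b == 1)

def removable(nbrs):
    return crossing_number(nbrs) == 1 and sum(nbrs) > 1

# A boundary pixel backed by a solid run of ridge neighbours is removable;
# the end of a one-pixel-wide limb (a single ridge neighbour) is not.
edge_pixel = [1, 1, 1, 0, 0, 0, 0, 1]
limb_tip = [0, 0, 1, 0, 0, 0, 0, 0]
```

Testing only boundary pixels in this way is what guarantees that no hole can be formed during any pass.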
Methods and results
Fingerprints are inked and impressed on paper.
Physical prints are captured and digitised into an array of size 512 by
512. In order to simulate a hexagonal grid, the image is subsampled, as in
[3]. Figure 5 illustrates the hexagonal fingerprint image of size 256 by
256. Figure 6 shows the result after filtering, binarisation, noise
removal, and thinning have been applied. The ridges have been reduced to
a width of one pixel and the connectivity has been preserved.
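One common way to simulate a hexagonal grid on Cartesian hardware, assumed here to be in the spirit of [3], is to take every second row and column while shifting alternate sampled rows by one pixel, so that each sample's six nearest neighbours are approximately equidistant:

```python
import numpy as np

# Pseudo-hexagonal subsampling (an assumed scheme, after [3]): halve both
# dimensions, offsetting alternate sampled rows by one pixel to approximate
# the staggered geometry of a hexagonal lattice.

def hex_subsample(image):
    h, w = image.shape
    out = np.zeros((h // 2, w // 2), dtype=image.dtype)
    for k, r in enumerate(range(0, h, 2)):
        offset = k % 2                      # shift every other sampled row
        out[k] = image[r, offset::2][: w // 2]
    return out

# A 512 x 512 capture would reduce to 256 x 256, as in Figure 5;
# demonstrated here on a small 16 x 16 test array.
img = np.arange(16 * 16, dtype=np.uint8).reshape(16, 16)
hex_img = hex_subsample(img)                # 8 x 8 pseudo-hexagonal grid
```

The halving of the pixel count is the intrinsic data reduction referred to in the conclusions.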
Conclusions
An alternative sampling strategy to the Cartesian approach is
hexagonal sampling. It offers greater angular resolution than the former,
as its nearest neighbours are separated by sixty degrees instead of
ninety. Also, the hexagonal grid has an appealing isotropic character, due
to the fact that each cell is equidistant from its nearest neighbours. It
has been found that algorithms which pre-process fingerprints on a
hexagonal grid can be implemented quickly and efficiently; typically 20%
faster with fewer data points. The method therefore intrinsically
incorporates a form of data compression.
References
-
Mersereau R M,
Hexagonally-sampled two dimensional signals
Proceedings of the IEEE, pp 927-952, 1979.
-
Mehtre, B M, Murthy N N, Kapoor S and Chatterjee B,
Segmentation of fingerprint images using directional image
Pattern Recognition 20, pp 429-435, 1987.
-
O'Gorman L and Nickerson J V,
An approach to fingerprint filter design
Pattern Recognition 22, pp 29-38, 1989.
-
Lester L N and Sandor J,
Computer graphics on a hexagonal grid
Computers & Graphics, Vol. 8.
-
Deutsch E S,
Thinning algorithms on rectangular, hexagonal and triangular arrays
Communications of the ACM, Vol. 15, pp 827-835, 1972.
-
Gonzalez R C and Wintz P,
Digital Image Processing, pp 351-366, 1987.
-
Golay M J E,
Hexagonal parallel pattern transformations
IEEE Transactions on Computers, Vol. C-18, pp 733-740, 1969.
-
Davies E R and Plummer A P N,
Thinning algorithms: a critique and a new methodology
Pattern Recognition, Vol 14, pp 53-63, 1980.
Roger J Green and Andrew Fitz
Electronic Imaging and Media Communications Unit
University of Bradford
R.J.Green@bradford.ac.uk