**Post: #1**


INTRODUCTION

The subject of this work is image compression with fractals. Today JPEG has become an industry standard in image compression. Further research is being conducted in two areas: wavelet-based compression and fractal image compression. The fractal scheme was introduced by Michael F. Barnsley in the 1980s. His idea was that images could be stored compactly as iterated functions, which led to the development of the IFS scheme that forms the basis of fractal image compression. Further work in this area was conducted by A. Jacquin, a student of Barnsley, who published several papers on the subject. He was the first to publish an efficient algorithm based on local fractal transforms.

Fractal image compression has the following features:

• Compression has high complexity.

• Fast image decoding.

• High compression ratios can be achieved.

These features enable applications such as computer encyclopedias; Microsoft's Encarta encyclopedia shipped with this technology. The technology is relatively new.

OVERVIEW OF IMAGE COMPRESSION

Images are stored as pixels, or picture elements. Each pixel requires a certain amount of computer memory. Suppose we had to store a family album with, say, a hundred photos; holding these uncompressed on a computer would require a very large amount of storage.

This problem can be solved by image compression. The amount of data needed to represent a picture can be drastically reduced by employing image compression techniques. The human eye is insensitive to a wide variety of information loss: redundancies in images cannot be easily detected, and certain minute details can be eliminated during storage and approximated again when the image is reconstructed, keeping the error small. This is the basic idea behind image compression. Most image compression techniques are said to be lossy because they discard some of the information being stored.

The method presently employed stores the image by eliminating the high-frequency Fourier coefficients and keeping only the low-frequency ones. This is the principle behind the DCT transform, which forms the basis of the JPEG scheme of image compression.

Barnsley suggested that storing images as iterated functions of parts of themselves leads to efficient image compression. In the middle of the 1980s this concept of IFS became popular. Barnsley and his colleagues at Georgia Tech first observed the use of IFS in computer graphics applications. They found that IFS was able to compress colour images up to 10,000 times. The compression contained two phases. First the image was partitioned into segments that were as self-similar as possible. Then each part was described as an IFS with probabilities. The key to compression was the collage theorem, which gave a criterion for selecting the parameters of the transformations. Image decoding was done by iterative application of the transformations. While the decoding was automatic, the encoding required human interaction, at least in the image segmentation.

In 1989, Jacquin proposed a fully automatic algorithm for fractal image compression based on affine transforms that work locally rather than globally. Since the publication of this paper, several strides have been made in the area.

FRACTALS

Consider the photocopying machine as illustrated. This machine reduces the input image to a fraction of its size and triplicates it in the copy. Thus the final image has been scaled down as well as increased in number. On careful observation we find that the final image is self-similar to the original and contains detail at every scale: it is a fractal. If we were to put this image in a feedback loop and repeat the process iteratively, in the limit we would get a highly complex image with perfect self-similarity at every scale. This limiting image is called the attractor of the copying machine. This is the process followed to obtain many complex figures such as the fern leaf, the Cantor dust, the Sierpinski triangle and so on.
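The copy-machine iteration can be sketched in a few lines. This is an illustrative sketch only: the choice of three half-size copies and their positions is an assumption, picked so that the attractor is the familiar Sierpinski triangle.

```python
# Deterministic iteration of a toy "copy machine": three contractive maps,
# each scaling by 1/2 and translating to a corner of the unit square.
# Repeated application drives any starting set toward the attractor
# (here the Sierpinski triangle). Map choices are illustrative assumptions.

def copy_machine(points):
    """Apply all three reductions to every point and return the union."""
    out = []
    for x, y in points:
        out.append((0.5 * x, 0.5 * y))                # bottom-left copy
        out.append((0.5 * x + 0.5, 0.5 * y))          # bottom-right copy
        out.append((0.5 * x + 0.25, 0.5 * y + 0.5))   # top copy
    return out

# Start from an arbitrary single point; the initial image is irrelevant.
points = [(0.3, 0.7)]
for _ in range(6):
    points = copy_machine(points)

print(len(points))  # 3**6 = 729 points approximating the attractor
```

Note that the copies multiply threefold at each pass while shrinking, exactly as described above.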

Natural fractals are difficult to detect. No object in nature can be infinitely reduced to obtain a self-similar portion. However, every image contains some portions that are approximately self-similar at a certain scale. Thus every image may be considered to be formed of transformed parts of itself. The transformations which these parts undergo are affine transforms.

PROPERTIES OF FRACTALS

A set F is said to be a fractal if it possesses the following properties.

1. F is found to contain detail at every scale.

2. F is self-similar.

3. The fractal dimension of F is greater than its topological dimension.

4. F has got a simple algorithmic description.

AFFINE TRANSFORMATIONS

These are combinations of rotation, scaling and translation of the co-ordinate axes in an N-dimensional space.
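In the plane, an affine transformation can be written as w(x, y) = (ax + by + e, cx + dy + f). A minimal sketch, using illustrative rotation and scaling values of my own choosing:

```python
import math

def affine(a, b, c, d, e, f):
    """Return the plane map w(x, y) = (a*x + b*y + e, c*x + d*y + f)."""
    return lambda x, y: (a * x + b * y + e, c * x + d * y + f)

# Illustrative example: rotate by 90 degrees, scale by 1/2 (so the map is
# contractive), then translate by (1, 0).
s, th = 0.5, math.pi / 2
w = affine(s * math.cos(th), -s * math.sin(th),
           s * math.sin(th),  s * math.cos(th), 1.0, 0.0)

print(w(1.0, 0.0))  # approximately (1.0, 0.5)
```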

The figure shows an example of an affine transformation W which moves an image f towards W(f); that is, it is a contractive transformation.

FRACTAL IMAGE COMPRESSION

Let us consider the photocopying machine which we saw earlier. Suppose we were to feed the output of this machine back as input and continue the process iteratively. We find that all the output images converge to the same image, and that this final image is unchanged by the process. We call this image the attractor of the copying machine.

Because the copying machine reduces the input image, the copies of the initial image will shrink towards points as we repeatedly feed the output back as input; there will be more and more copies, but the copies at each stage get smaller and smaller. So the initial image does not affect the final attractor; in fact, it is only the position and orientation of the copies that determines what the final image will look like.

The final result is determined by the way the input image is transformed, so we only need to describe these transformations. Different transformations lead to different attractors, with the technical limitation that the transformations must be contractive; that is, a given transformation applied to any two points in the input image must bring them closer together in the copy. This condition is quite natural, since if points in the copy were spread out the attractor might have to be of infinite size.

A common feature of these attractors is that in the position of each copy of the original square there is a transformed copy of the whole image. Thus the images formed are all fractals. This method of creating fractals is due to John Hutchinson.

M. Barnsley suggested that storing images as collections of transformations could lead to image compression. The complex figure of a fern, for example, is generated from just four affine transforms. Each affine transform is defined by six numbers a, b, c, d, e and f, so storing them on a computer requires very little memory. Suppose we are required to store the image of a face: if a small number of transformations could generate that face, then it too could be stored compactly.
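The four fern transforms are well known; the sketch below draws the attractor with the "chaos game", applying a randomly chosen map at each step. The coefficient table is the standard published Barnsley fern; the point count is an arbitrary choice.

```python
import random

# The four affine maps of the classic Barnsley fern, each given by the six
# numbers (a, b, c, d, e, f) mentioned in the text, plus a selection
# probability. Twenty-eight numbers encode an intricately detailed image.
MAPS = [
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),  # stem
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),  # successive smaller leaflets
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),  # largest left leaflet
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # largest right leaflet
]

def fern_points(n, seed=0):
    """Generate n attractor points with the chaos game: repeatedly apply a
    randomly chosen map to the current point."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n):
        a, b, c, d, e, f, _p = rng.choices(MAPS, weights=[m[6] for m in MAPS])[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts

pts = fern_points(10_000)
print(len(pts))  # 10000 points tracing the fern
```

Plotting `pts` as a scatter reveals the fern at any chosen resolution.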

If we scale the transformations defining an image, the resulting attractor is also scaled. This is an implication of fractal image compression: the extra detail needed for decoding at larger sizes is generated automatically by the encoding transforms. Although this generated detail is artificial, in many cases it looks realistic, and this is a useful feature of the method.

Magnifying the original, by contrast, shows pixelisation: the dots that make up the image become clearly discernible, because a pixel image contains no detail beyond its stored resolution.

Standard image compression methods can be evaluated by their compression ratio: the ratio of the memory required to store an image as a collection of pixels to the memory required to store a representation of the image in compressed form.

The compression ratio of a fractal scheme is easy to misunderstand, since the image can be decoded at any scale. In practice it is important either to give the initial and decompressed image sizes or to use the same sizes for a proper evaluation. Because the decoded image is not exactly the same as the original, such schemes are said to be lossy.

ITERATED FUNCTION SYSTEM

A mathematical model of the copying machine described earlier is called an Iterated Function System (IFS). An IFS consists of a collection of contractive transformations wi; this collection defines a map W(S) = ∪ wi(S), i = 1, 2, …, n, where S is the input and W(S) is the output of the copier. Whatever the initial image S, the image after repeated iterations tends to the attractor XW. This XW is unique for a given W.

Two important facts are now listed.

• When the wi are contractive in the plane, then W is contractive on a space of subsets of the plane. This was proved by Hutchinson.

• If we are given a contractive map W on a space of images, then there is a special image, called the attractor and denoted XW, with the following properties:

1. The attractor is the fixed point of W: applying W to it leaves it unchanged, W(XW) = XW.

2. XW is unique: if we find any set S satisfying W(S) = S, then S is the attractor of W.

SELF SIMILARITY IN IMAGES

Natural images are not self-similar in this strict sense. A typical image of a face does not contain the type of self-similarity found in fractals; it does not appear to contain affine transformations of itself. But it contains a different type of self-similarity: the image is formed of suitably transformed parts of itself. These transformed parts do not in general fit together to form an exact copy of the original image, so we must allow some error in representing an image as a set of self-transformations. This means that an image encoded as a set of transformations will not be an identical copy of the original but an approximation.

To get a mathematical model of an image we express it as a function z = f(x, y), where z is the grey level at position (x, y).

METRIC ON IMAGES

If we want to know whether a transformation W is contractive, we have to define a distance between two images. A metric is a function that measures distance: there are metrics that measure the distance between two images, between two points, between two sets, and so on.

Two of the most commonly used metrics are

dsup(f, g) = sup(x,y) | f(x,y) − g(x,y) |

and the rms metric

drms(f, g) = [ ∫∫ ( f(x,y) − g(x,y) )² dx dy ]^(1/2)

The supremum metric finds the position where two images f and g differ the most and takes this value as the distance between the two images. The rms metric is more convenient in applications.
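On sampled images both metrics are easy to compute. The sketch below uses a mean-based discrete version of the rms metric, which is a common convention for pixel arrays (an assumption, not the only possible discretisation of the integral):

```python
import numpy as np

# The two metrics from the text, computed on images represented as 2-D
# arrays of grey levels.

def d_sup(f, g):
    """Supremum metric: the largest pointwise difference."""
    return np.max(np.abs(f - g))

def d_rms(f, g):
    """Root-mean-square metric (discrete, mean-based version)."""
    return np.sqrt(np.mean((f - g) ** 2))

f = np.zeros((4, 4))
g = np.zeros((4, 4))
g[0, 0] = 8.0  # the images differ in a single pixel

print(d_sup(f, g))  # 8.0
print(d_rms(f, g))  # sqrt(64/16) = 2.0
```

The example shows why the rms metric is gentler: a single wrong pixel dominates dsup but is averaged away in drms.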

PARTITIONED IFS

The individual transformations described earlier are performed on parts of the image rather than on the whole. This forms a partitioned IFS, also called a PIFS. There are two spatial dimensions, and the grey level adds a third dimension. A portion of the original image, called a domain Di, is mapped to a part of the produced image, called a range Ri, by a transformation wi. Since W(f) must be an image, the ranges Ri cover the whole image and are adjacent but non-overlapping.

In the PIFS case, a fixed point or attractor is an image f that satisfies W(f) = f: applying the transformations to the image gives back the same image. Thus if we are required to code a natural image, it is first partitioned into blocks, and an iterated function system formed from the parts of the image yields the output image.
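Decoding a PIFS is simply iteration of W from any starting image. The toy 1-D sketch below is entirely illustrative (the two range blocks, the single shared domain, and the contrast/brightness values are all invented for the example); it shows convergence to the fixed point regardless of the starting signal.

```python
import numpy as np

# Toy 1-D PIFS decoder: the signal has two range blocks, each mapped from
# a 2:1 downsampled copy of the whole signal with contrast s and
# brightness o. Decoding iterates W until it settles on the fixed point.

def W(f, params):
    out = np.empty_like(f)
    half = len(f) // 2
    domain = 0.5 * (f[0::2] + f[1::2])  # downsample the whole signal 2:1
    for i, (s, o) in enumerate(params):
        out[i * half:(i + 1) * half] = s * domain + o
    return out

params = [(0.5, 10.0), (0.5, 40.0)]   # (contrast, brightness) per range
f = np.zeros(8)                        # the initial signal is irrelevant
for _ in range(40):
    f = W(f, params)

print(np.round(f, 3))                # the decoded fixed point
print(np.allclose(W(f, params), f))  # True: applying W changes nothing
```

Because |s| < 1 the map is contractive, so the iteration is guaranteed to converge, in line with the fixed-point facts listed earlier.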

AN ILLUSTRATIVE EXAMPLE

Suppose we are dealing with a 256x256 pixel image in which each pixel can take one of 256 levels of grey. Let R1, R2, … be the 8x8 pixel non-overlapping sub-squares of the image, and let D be the collection of all 16x16 pixel overlapping sub-squares of the image. For each Ri we search through all of D to find a Di ∈ D that looks most like the image above Ri, i.e. that minimizes the rms difference after mapping.

That is, we find pieces Di and maps wi so that when we apply a wi to a part of the image over Di we get something that is very close to the part of image over Ri.

There are 8 ways to map one square onto another, so this means comparing 8 x 241 x 241 = 464,648 squares with each of the 1024 range squares. Also, since a domain Di has 4 times as many pixels as an Ri, we must either subsample or average the 2x2 sub-squares corresponding to each pixel of Ri when we minimize the error.

Minimizing the error means two things: first, finding a good choice of Di, and second, finding good contrast and brightness settings si and oi for wi. A choice of Di along with corresponding si and oi determines a map wi. Once we have a collection w1, w2, …, we can decode the image by iterating towards XW. Note that the choice of metric affects both the contractivity of the transformation and the fidelity of the system.
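The search just described can be sketched as brute force over domain positions and the 8 square symmetries. This is a toy setup, with a hypothetical 16x16 image, 4x4 ranges and 8x8 domains rather than the 256x256 case above, and least-squares contrast/brightness fitting:

```python
import numpy as np

def orientations(block):
    """The 8 symmetries of a square: 4 rotations of the block and 4 of
    its transpose."""
    for b in (block, block.T):
        for k in range(4):
            yield np.rot90(b, k)

def best_domain(image, range_block, domain_size=8):
    """Exhaustively search domain positions and orientations for the best
    rms cover of one range block."""
    n = range_block.shape[0]
    best = (np.inf, None)
    h, w = image.shape
    for i in range(h - domain_size + 1):
        for j in range(w - domain_size + 1):
            d = image[i:i + domain_size, j:j + domain_size]
            # average 2x2 sub-squares so the domain matches the range size
            d = d.reshape(n, 2, n, 2).mean(axis=(1, 3))
            for t in orientations(d):
                a, b = t.ravel(), range_block.ravel()
                # least-squares contrast s and brightness o
                s, o = np.polyfit(a, b, 1) if a.std() > 0 else (0.0, b.mean())
                err = np.sqrt(np.mean((s * a + o - b) ** 2))
                if err < best[0]:
                    best = (err, (i, j))
    return best

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (16, 16)).astype(float)
err, pos = best_domain(img, img[0:4, 0:4])
print(err >= 0.0 and pos is not None)  # True
```

Running this for every range block, and recording (pos, orientation, s, o) per block, is the whole encoder; the cost of the exhaustive loop is why encoding is slow while decoding stays fast.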

PARTITIONING OF IMAGES

The basic idea behind partitioning of images is to first partition the image into some collection of ranges Ri, and then for each Ri seek, from some collection of image pieces, a Di that has a low rms error when mapped onto Ri. If we know Ri and Di, we can determine the remaining coefficients. The main partitioning schemes are given below.

QUADTREE PARTITIONING

Quadtree partitioning generalizes the fixed range sizes. In a quadtree partition, a square that is not covered well enough is broken up into four equal-sized sub-squares. The process repeats recursively, starting from the whole image and continuing until the squares are small enough to be covered within some specified rms tolerance. Small squares can be covered better than large ones because contiguous pixels in an image tend to be highly correlated.

Let us assume the image size is 256x256 pixels. Choose for the collection D of permissible domains all the sub-squares in the image of sizes 8, 12, 16, 32, 48 and 64. Partition the image recursively by a quadtree method until the squares are of size 32, then attempt to cover each square by a larger domain. If a predetermined tolerance is met, call the square Ri and the covering domain Di; otherwise subdivide the square and repeat the process.
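The recursive subdivision can be sketched as follows. The `covers_well` predicate here is a hypothetical stand-in for the real rms tolerance test against the domain pool:

```python
# Recursive quadtree partition sketch (assumes a square power-of-two
# image; `covers_well` stands in for the rms tolerance check).

def quadtree(x, y, size, covers_well, min_size=4, out=None):
    """Collect (x, y, size) range squares; split any square that is not
    covered well enough, down to min_size."""
    if out is None:
        out = []
    if size <= min_size or covers_well(x, y, size):
        out.append((x, y, size))
    else:
        h = size // 2
        for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
            quadtree(x + dx, y + dy, h, covers_well, min_size, out)
    return out

# Toy predicate: pretend only squares of size <= 8 meet the tolerance.
parts = quadtree(0, 0, 32, lambda x, y, s: s <= 8)
print(len(parts))  # 16 squares of size 8
```

With a real predicate the partition becomes adaptive: flat regions stop subdividing early, while detailed regions split down to small ranges.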

HV PARTITIONING

A weakness of the quadtree based scheme is that it makes no attempt to select the domain pool in a content-dependent way. The collection must be chosen to be very large so that a good fit to a given range can be obtained. A way to remedy this while increasing the flexibility of the range partition is to use a HV partition.

In an HV partition, a rectangular image is recursively partitioned either horizontally or vertically to form two new rectangles. The partitioning repeats recursively until a covering tolerance is satisfied.

This scheme is more flexible since the position of the partition is variable. We can try to make the partitions in such a way that they share some self-similar structure. For example, we can try to arrange the partitions so that the edges in the image tend to run diagonally through them. It is then possible to use the larger partitions to cover the smaller ones with a reasonable expectation of a good cover.

TRIANGULAR PARTITIONING

Here the partitioning is not restricted to the horizontal and vertical directions. It is flexible so that triangles in the scheme can be chosen to share self-similar properties as before. This scheme is still in its infancy.

These partitioning schemes are adaptive, since they use a range size that depends on the local image complexity. For a fixed image, more transformations lead to better fidelity but worse compression. This tradeoff between compression and fidelity leads to two different approaches to encoding an image: if fidelity is to be maximized, partitioning should continue until the rms error falls below a set value; if compression is to be increased, the number of transformations should be limited.

OTHER PARTITIONING SCHEMES

Partitioning schemes come in various varieties. In the triangular scheme a rectangular image is divided diagonally into two triangles. This scheme has several potential advantages. It is flexible in that the triangles can be chosen to share self similar properties. The artifacts arising from the covering do not run horizontally and vertically which is less distracting. Also the triangles can have any orientation so we break away from the rigid 90 degree rotation schemes. Fixed partitioning schemes are also being developed.

LEAST SQUARES METHOD

In practice we compare a domain and a range using the rms metric, which also allows easy computation of optimal values for the contrast and brightness. Given two squares with pixel intensities a1, a2, …, an (from Di) and b1, b2, …, bn (from Ri), we seek s and o that minimize R = Σ (s·ai + o − bi)².
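Setting the partial derivatives of R with respect to s and o to zero gives the standard least-squares solution, s = [n Σ ai bi − Σ ai Σ bi] / [n Σ ai² − (Σ ai)²] and o = (Σ bi − s Σ ai) / n. A minimal sketch:

```python
def contrast_brightness(a, b):
    """Least-squares contrast s and brightness o minimizing
    R = sum((s*a[i] + o - b[i])**2)."""
    n = len(a)
    sa, sb = sum(a), sum(b)
    sab = sum(x * y for x, y in zip(a, b))
    saa = sum(x * x for x in a)
    denom = n * saa - sa * sa
    if denom == 0:            # flat domain block: fall back to the mean
        return 0.0, sb / n
    s = (n * sab - sa * sb) / denom
    o = (sb - s * sa) / n
    return s, o

# b is exactly 0.5*a + 3, so the fit should recover s = 0.5 and o = 3.
a = [10.0, 20.0, 30.0, 40.0]
b = [0.5 * x + 3.0 for x in a]
print(contrast_brightness(a, b))  # (0.5, 3.0)
```

The `denom == 0` guard handles a constant domain block, where only the brightness offset is meaningful; contractivity of the final map additionally requires |s| < 1.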

APPLICATIONS OF FRACTALS

Creating a fractal requires just a few lines of software, yet the resulting picture can be rich in detail and would require a large amount of memory if stored directly. This forms the basis of fractal compression of pictures. Given an arbitrary picture, one has to find which portions of the image can be thought of as self-similar versions of other portions; the self-affine transformations can then be found. The image can then be displayed quickly and at any magnification, with fractal detail generated at every level. Genetic algorithms, often implemented in MATLAB, have been used for efficient encoding of fractals.

FUTURE DEVELOPMENTS

New schemes of image compression are being developed continually. The most notable is the fractal scheme, which offers wide scope and remains under active research; new partitioning schemes, in particular, are still being developed. Another recent advance in this field has come through wavelet transforms, a highly efficient method related to the Fourier representation of an image. This scheme stands as a competitive development to the fractal approach.

CONCLUSION

The power of fractal encoding is shown by its ability to rival the DCT, which forms the basis of the JPEG scheme of image compression and has had the benefit of thousands of man-hours of research, optimization and general tweaking. The fractal scheme has thus won for itself more than the passing attention of the average engineer.

REFERENCES

1. Yuval Fisher, Fractal Image Compression.

2. ISTE-AICTE STTP, Advances in Signal Compression Technology, January 2003.

3. Raouf Hamzaoui, Image Compression Techniques.

ABSTRACT

Storing an image on a computer requires a very large memory. This problem can be averted by the use of various image compression techniques. Most images contain some amount of redundancy that can be removed when the image is stored and then replaced when it is reconstructed.

Fractal image compression is a recent technique based on representing an image by transformations of itself. The self-transformability of an image is assumed and exploited in fractal coding. It provides high compression ratios and fast decoding, and it is also simple and easy to implement.

ACKNOWLEDGEMENT

I express my sincere gratitude to Dr. Nambissan, Prof. & Head, Department of Electrical and Electronics Engineering, MES College of Engineering, Kuttippuram, for his cooperation and encouragement.

I would also like to thank my seminar guide and staff in-charge, Asst. Prof. Gylson Thomas (Department of EEE), for his invaluable advice and wholehearted cooperation, without which this seminar would not have seen the light of day.

Gracious gratitude to all the faculty of the Department of EEE and friends for their valuable advice and encouragement.

CONTENTS

1. INTRODUCTION

2. OVERVIEW OF IMAGE COMPRESSION

3. FRACTALS

4. PROPERTIES OF FRACTALS

5. FRACTAL IMAGE COMPRESSION

6. ITERATED FUNCTION SYSTEM

7. SELF SIMILARITY IN IMAGES

8. PARTITIONED IFS

9. PARTITIONING OF IMAGES

10. HV PARTITIONING

11. TRIANGULAR PARTITIONING

12. APPLICATIONS OF FRACTALS

13. FUTURE DEVELOPMENTS

14. CONCLUSION

15. REFERENCES
