CAPTCHA (Completely Automated Public Turing test to Tell Computers and Humans Apart) is a program that can generate and grade tests that most humans can pass but that current computer programs cannot. The concept behind such a program arose from real-world problems faced by internet companies such as Yahoo and AltaVista. Yahoo offers its users free email accounts. The intended users are humans, but Yahoo discovered that various web companies and others were using bots to sign up for thousands of email accounts every minute, from which they could send out junk mail. The solution was to require that a user solve a CAPTCHA test before receiving an account: the program picks a word from a dictionary and produces a distorted, noisy image of the word; the user is presented with the image and asked to type the word that appears in it. Given the type of deformations used, most humans succeed at this test, while current programs (including OCR programs) fail.

Our goal in this project is to use various techniques to break the visual CAPTCHA, thus exposing the flaws in the CAPTCHA and encouraging the design of tougher, more complex CAPTCHAs. These CAPTCHAs provide excellent problem sets since the clutter they contain is adversarial: it is designed to confuse computer programs. The input to our program is a color image stored in PNG format. This image goes through a series of image transformations to yield a binary image. Isolated letters are extracted from the binary image and resized, which makes them noisy. This noisy input is given to a feed-forward neural network, which outputs the letter that best matches the input.

The problem of identifying words in such severe clutter provides valuable insight into the more general problem of object recognition in scenes. The methods that we present are instances of a framework designed to tackle this general problem.
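The pipeline described above (binarize the image, isolate the letters, resize them to a fixed input size, and classify each with a feed-forward network) can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the function names, the column-projection segmentation, the 8x8 input size, and the single-hidden-layer network with untrained weights are all assumptions made for the example.

```python
import numpy as np

def binarize(img, threshold=128):
    """Threshold a grayscale image (values 0-255) into a binary {0,1} array.
    Dark pixels (ink) become 1, light background becomes 0."""
    return (img < threshold).astype(np.uint8)

def segment_letters(binary):
    """Isolate letters via column projection: each maximal run of columns
    that contains any ink is treated as one letter. (A real CAPTCHA breaker
    would need something more robust against overlapping letters.)"""
    cols = binary.sum(axis=0) > 0
    letters, start = [], None
    for x, has_ink in enumerate(cols):
        if has_ink and start is None:
            start = x
        elif not has_ink and start is not None:
            letters.append(binary[:, start:x])
            start = None
    if start is not None:
        letters.append(binary[:, start:])
    return letters

def resize_nearest(letter, shape=(8, 8)):
    """Crude nearest-neighbour resize to the network's fixed input size;
    this rescaling is one source of the noise mentioned in the text."""
    h, w = letter.shape
    rows = np.arange(shape[0]) * h // shape[0]
    cols = np.arange(shape[1]) * w // shape[1]
    return letter[np.ix_(rows, cols)]

def feed_forward(x, w1, w2):
    """Forward pass of a one-hidden-layer network with sigmoid units;
    returns the index of the output unit with the highest score,
    i.e. the closest matching letter."""
    hidden = 1.0 / (1.0 + np.exp(-(x @ w1)))
    scores = hidden @ w2
    return int(np.argmax(scores))
```

A toy run of the pipeline, with random (untrained) weights standing in for a trained network:

```python
img = np.full((10, 20), 255, dtype=np.uint8)  # white background
img[2:8, 2:6] = 0                             # first "letter" blob
img[3:9, 10:15] = 0                           # second "letter" blob
letters = segment_letters(binarize(img))      # two isolated letters
small = resize_nearest(letters[0])            # 8x8 network input
rng = np.random.default_rng(0)
w1 = rng.standard_normal((64, 16))
w2 = rng.standard_normal((16, 26))            # 26 output classes (a-z)
pred = feed_forward(small.flatten().astype(float), w1, w2)
```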