VEDA LPR is our current software solution for vehicle license plate
recognition (LPR), intended to automatically locate and read license
plates in digital images.
The VEDA LPR engine is not merely a (Windows / 32-bit) extension of an
earlier product: it is an almost completely new design, development and
implementation, started from an old (DOS / 16-bit) LPR / ALPR / ANPR approach of
ours, namely VEDA CAR (1993-94).
The software accepts as input both color and grayscale image files (captured
either by video cameras or by digital still cameras), as well as raw 256-gray-level
images already prepared in a buffer in computer memory. It does not depend on
any specific image acquisition board (frame grabber). Supported image file formats
include JPEG, PNG, PCX, BMP, TIFF and IMG (Imaging Technology 256-gray-level
uncompressed format).
VEDA LPR is not meant as a universal, ready-to-run solution,
but rather as a foundation on which customized ANPR / ALPR
applications can be built.
Thus, once interfaced with a specific image acquisition chain (camera, frame
grabber, triggering devices), possibly a controller for an access device (gate /
barrier), and database management software, the VEDA LPR engine can easily
be used to monitor, control and/or validate access to restricted areas and to
public or private parking lots and garages; to monitor highway (or other road)
traffic and detect red-light or speed-limit infringements; or to control entry
and exit at customs, toll collection points, etc.
A VEDA LPR engine SDK / API is available for
developers / integrators.
As with VEDA CAR, we designed the new LPR engine with two main parts: an image segmentation subsystem and a recognition / training (OCR) subsystem. The first is meant to globally analyze a 256-gray-level input image and find the area(s) of interest potentially containing alphanumeric characters. Once such an area is found, it is converted to a monochrome bitmap using a smart, adaptive binarization technique. "Noise" filters and algorithms compensating for geometrical distortions (skew, slant, rotation) are then applied to the resulting black-and-white bitmap. The next step is to split this bitmap into rows of text (if needed), and the rows into their component characters. Vectors of appropriate parameters are generated from these character bitmaps, as internal representations of each one, and passed as input to the second subsystem. At this level, if the purpose is recognition, a powerful neural-like pattern classification algorithm is used to "identify" them, based on one or more existing OCR knowledge bases. If the purpose is training a (selected) knowledge base, specific representations (patterns) derived for each character sample, explicitly "tagged" interactively by the user, are "learned"; a dedicated user interface is provided for this purpose. Thus, on further recognition sessions the system's experience is enhanced, and after a number of training sessions the knowledge becomes good enough to ensure average recognition rates above 99%.
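The parameter vectors actually used by VEDA LPR are proprietary and not documented here, but the step "vectors of appropriate parameters are generated from these character bitmaps" can be illustrated with a common, generic technique: zoning, i.e. dividing the monochrome character bitmap into a grid of cells and recording the black-pixel density of each cell. This is only a sketch of the idea, not VEDA's actual representation:

```python
def zoning_features(bitmap, grid=(4, 4)):
    """Divide a monochrome character bitmap (list of rows of 0/1 values)
    into grid cells and return the black-pixel density of each cell.
    The resulting fixed-length vector is size- and position-tolerant
    enough to serve as a simple internal character representation."""
    rows, cols = len(bitmap), len(bitmap[0])
    gy, gx = grid
    features = []
    for cy in range(gy):
        for cx in range(gx):
            # Integer cell boundaries that cover the bitmap exactly.
            y0, y1 = rows * cy // gy, rows * (cy + 1) // gy
            x0, x1 = cols * cx // gx, cols * (cx + 1) // gx
            cell = [bitmap[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            features.append(sum(cell) / len(cell) if cell else 0.0)
    return features
```

Such a vector would then be handed to the recognition / training subsystem in place of the raw bitmap.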
To be as fast as possible, the global image segmentation system (interest area finder) does not use traditional image analysis techniques such as Fourier analysis or Sobel or Laplace edge detection. Instead, we developed a "contrast vaults" analysis method and algorithm to provide the most effective results. Briefly, a fast image-to-"contrast vaults" map transformation is performed on a per-block basis, i.e. a numeric value derived from an average "contrast vaults" measurement is attached to each block. Next, a conventional (histogram-based) method is applied to that map to find a proper segmentation threshold, meant to isolate "blobs" from the background. Filtering and connectivity analysis are applied to the initially retained "runs" of blocks in order to obtain the candidate blobs. These are then subjected to a blob filtering process, and only the appropriate ones are kept. Usually, the area(s) containing the car license plate prove(s) to be among the remaining blob(s). Once the interest area(s) are thus segmented, a smart, adaptive-threshold binarization (using statistical methods and several test points) is performed, and the resulting black-and-white bitmap(s) are filtered of "noise", de-skewed / de-slanted, and finally passed on for character segmentation to the recognition (or, respectively, the training) subsystem.
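The "contrast vaults" measure itself is proprietary, but the overall per-block scheme described above can be sketched with a simple (max - min) contrast stand-in: attach a contrast value to each block, then threshold the resulting map to keep candidate text blocks. The block size, the contrast measure and the fixed threshold below are all illustrative assumptions, not VEDA's actual parameters:

```python
def block_contrast_map(image, block=8):
    """Attach to each block a contrast value; here a simple (max - min)
    over the block's gray levels stands in for the proprietary
    'contrast vaults' measurement. `image` is a list of rows of 0-255 values."""
    h, w = len(image), len(image[0])
    cmap = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            pix = [image[y][x]
                   for y in range(by, min(by + block, h))
                   for x in range(bx, min(bx + block, w))]
            row.append(max(pix) - min(pix))
        cmap.append(row)
    return cmap

def threshold_blocks(cmap, thr):
    """Mark blocks whose contrast exceeds thr as candidate text blocks (1),
    the rest as background (0); runs of 1-blocks would then undergo
    connectivity analysis and blob filtering."""
    return [[1 if v > thr else 0 for v in row] for row in cmap]
```

In the real engine the threshold is not fixed but found from a histogram of the map, and the surviving runs of blocks are merged into blobs before filtering.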
The recognition / training (OCR) subsystem is based on one of our proprietary approaches / technologies in the field of neural and neural-like classifiers. This technology features a high degree of noise tolerance and a strong power of generalization: in practice, it can learn and subsequently recognize virtually any graphic symbol. It is also used as the basic recognition / training engine for our VEDA OCR/NeurOCR application.
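The classifier itself is proprietary, so only its train/recognize interface can be illustrated here. A minimal nearest-mean (prototype) classifier over the character feature vectors shows the shape of such a subsystem: training accumulates tagged samples into per-class prototypes, recognition returns the class whose prototype is closest. This is a generic stand-in, not VEDA's neural-like algorithm:

```python
class PrototypeClassifier:
    """Minimal nearest-mean classifier sketch: training averages tagged
    feature vectors into one prototype per label; recognition picks the
    label whose prototype is nearest (squared Euclidean distance)."""

    def __init__(self):
        self.sums = {}    # label -> per-dimension running sums
        self.counts = {}  # label -> number of training samples

    def train(self, vector, label):
        # Accumulate the sample into the label's running prototype.
        s = self.sums.setdefault(label, [0.0] * len(vector))
        for i, v in enumerate(vector):
            s[i] += v
        self.counts[label] = self.counts.get(label, 0) + 1

    def recognize(self, vector):
        # Return the label of the nearest prototype (mean of its samples).
        best, best_d = None, float("inf")
        for label, s in self.sums.items():
            n = self.counts[label]
            d = sum((v - si / n) ** 2 for v, si in zip(vector, s))
            if d < best_d:
                best, best_d = label, d
        return best
```

Repeated training sessions refine the prototypes, which mirrors (in a much simplified way) how the knowledge base's "experience" grows with each session.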
Our training experiments focused on car license plates were performed on several hundreds (even thousands) of color and gray-level images, taken under various conditions and at different resolutions (e.g. 640x480, 768x512, 768x576, 800x600, 1024x768, 1384x1030, and others both smaller and larger), with quite varied framing of the vehicles in the captured scenes. These vehicles mainly carried Romanian (RO), French (F), Italian (I), Dutch (NL), German (D), British (GB), Spanish (E), Slovakian (SK), Czech (CZ), Israeli (IL), Brazilian (BR) and Hong Kong (HK) license plates, but also American (USA), Mexican (MEX), Belgian (B), Greek (GR), Turkish (TR), Australian (AU), and others. This amount of training finally ensured very good average recognition rates. We emphasize again that the recognition / training system can, in fact, recognize and learn any kind of machine-printed text, if properly segmented within the image. To avoid possible confusions (such as '1' with 'I', '0' with 'O' or 'D', '8' with 'B', etc.) in real-time applications, a customized list of allowed number formats / syntaxes may be defined and used, case by case, as a post-recognition automatic correction guide in such specific applications.
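The post-recognition correction guide described above can be sketched as follows: keep a table of commonly confused characters and a list of allowed plate syntaxes (as regular expressions), and if the raw reading matches no syntax, try flipping confusable characters until one does. The confusion table, the sample format and the single-flip strategy are illustrative assumptions; a real deployment would define its own formats per country and application:

```python
import re

# Hypothetical confusion pairs ('1'/'I', '0'/'O', '8'/'B', 'D' read as '0').
CONFUSIONS = {"I": "1", "1": "I", "O": "0", "0": "O", "B": "8", "8": "B", "D": "0"}

def correct(plate, formats):
    """Return the raw reading if it matches an allowed format; otherwise
    try single-character flips of confusable characters and return the
    first candidate that matches. Falls back to the raw reading."""
    candidates = [plate]
    for i, ch in enumerate(plate):
        if ch in CONFUSIONS:
            candidates.append(plate[:i] + CONFUSIONS[ch] + plate[i + 1:])
    for cand in candidates:
        if any(re.fullmatch(f, cand) for f in formats):
            return cand
    return plate  # no single flip produces a valid plate
```

A production system would likely score multiple flips against the recognizer's own confidence values rather than accept the first syntactic match.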
You can download a FREE DEMO application of VEDA LPR (for Windows / 32-bit) from HERE. You will get a vedalpr.zip archive file, which must be unzipped into a (preferably new, empty) directory / folder on your Windows-based computer. This ZIP file contains a readme.txt text file (which should be read first in order to correctly install and run the DEMO) and an automatic installer of all the necessary archived files, install.bat (which must be run to extract all the required files onto your hard disk). Finally, you can use the extracted DEMO application (vedalpr.exe), or the character pattern training tool (vedapatt.exe) if you want to build or update an OCR knowledge base using images of your own. Please note that, when trying the DEMO, you must first select the image files to test it on and the OCR knowledge base to be used. The demonstration can then be started either to automatically load and analyze image after image, with an adjustable delay between them (to see the segmentation and recognition result for each one), or manually, step by step, one image (next / previous) at a time.
We always appreciate comments and suggestions about our VEDA LPR
engine.
Please note that, for extended LPR performance testing, some freely downloadable
image data sets are available, providing color and grayscale images of various resolutions,
containing frontal and rear views of various vehicles with many kinds of international
registration plates, taken from different positions and distances and under real-life
conditions, e.g.:
Any or all of the above-mentioned data sets may be used for testing our VEDA LPR Demo.
CONTACT: Mr. Mihnea VREJOIU