{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "mjAScbd2vl9P"
},
"source": [
"# OCR model for reading Captchas\n",
"\n",
"**Author:** [A_K_Nain](https://twitter.com/A_K_Nain)\n",
"\n",
"**Date created:** 2020/06/14\n",
"\n",
"**Last modified:** 2020/06/26\n",
"\n",
"**Description:** How to implement an OCR model using CNNs, RNNs and CTC loss."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wWvlZPBJvl9U"
},
"source": [
"## Introduction\n",
"\n",
"This example demonstrates a simple OCR model built with the Functional API. Apart from\n",
"combining CNNs and RNNs, it also illustrates how you can instantiate a new layer\n",
"and use it as an \"Endpoint layer\" for implementing CTC loss. For a detailed\n",
"guide to layer subclassing, please check out\n",
"[this page](https://keras.io/guides/making_new_layers_and_models_via_subclassing/)\n",
"in the developer guides."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Yq0Pe4Zuvl9U"
},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "5q-xCl8Qvl9V"
},
"outputs": [],
"source": [
"import os\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import sys\n",
"\n",
"from pathlib import Path\n",
"from collections import Counter\n",
"\n",
"import tensorflow as tf\n",
"from tensorflow import keras\n",
"from tensorflow.keras import layers\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "KIc-3qB0L5OE",
"outputId": "5162487a-c946-4569-aa2a-b6dd3944e85a",
"colab": {
"base_uri": "https://localhost:8080/"
}
},
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"True"
]
},
"metadata": {},
"execution_count": 2
}
],
"source": [
"tf.executing_eagerly()\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "sSm7N--8vl9W"
},
"source": [
"## Load the data: [Captcha Images](https://www.kaggle.com/fournierp/captcha-version-2-images)\n",
"Let's download the data."
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {
"id": "g3EVJfHBvl9X",
"outputId": "63444ebd-1935-47ff-c031-0060c94fe3fc",
"colab": {
"base_uri": "https://localhost:8080/"
}
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Number of images found: 10610\n",
"Number of labels found: 10610\n",
"Number of unique characters: 22\n",
"Characters present: [' ', '0', '2', '4', '5', '8', 'A', 'D', 'G', 'H', 'J', 'K', 'M', 'N', 'P', 'R', 'S', 'T', 'V', 'W', 'X', 'Y']\n"
]
}
],
"source": [
"substitutions = {\n",
" 'B': '8',\n",
" 'F': 'P',\n",
" 'U': 'V',\n",
" '6': 'G',\n",
" 'Z': '2',\n",
" 'O': '0'\n",
"}\n",
"\n",
"def apply_substitutions(input_string):\n",
" output_string = \"\"\n",
" for char in input_string:\n",
" if char in substitutions:\n",
" output_string += substitutions[char]\n",
" else:\n",
" output_string += char\n",
"\n",
" return output_string\n",
"\n",
"data_dir = Path(\"./images_10k/\")\n",
"\n",
"# Get list of all the images\n",
"images = sorted(list(map(str, list(data_dir.glob(\"*.png\")))))\n",
"labels = [apply_substitutions(img.split(os.path.sep)[-1].split(\".png\")[0]) for img in images]\n",
"\n",
"# Maximum length of any captcha in the dataset\n",
"max_length = max([len(label) for label in labels])\n",
"labels = [x + ' ' * (max_length - len(x)) for x in labels]\n",
"\n",
"characters = set(char for label in labels for char in label)\n",
"characters = sorted(list(characters))\n",
"\n",
"print(\"Number of images found: \", len(images))\n",
"print(\"Number of labels found: \", len(labels))\n",
"print(\"Number of unique characters: \", len(characters))\n",
"print(\"Characters present: \", characters)\n",
"\n",
"# Batch size for training and validation\n",
"batch_size = 16\n",
"\n",
"# Desired image dimensions\n",
"img_width = 300\n",
"img_height = 80\n",
"\n",
"# Factor by which the image is going to be downsampled\n",
"# by the convolutional blocks. We will be using two\n",
"# convolution blocks and each block will have\n",
"# a pooling layer which downsamples the features by a factor of 2.\n",
"# Hence, the total downsampling factor will be 4.\n",
"downsample_factor = 4\n",
"\n",
"\n"
]
},
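{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check (illustrative, not part of the original pipeline):\n",
"every visually ambiguous character should be replaced by its canonical\n",
"counterpart from the substitution map, and all other characters should\n",
"pass through unchanged."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Each ambiguous character maps to its canonical form: B->8, 6->G, U->V, Z->2, O->0\n",
"assert apply_substitutions(\"B6UZO\") == \"8GV20\"\n",
"# Characters outside the map are left as-is; only 'O' is substituted here\n",
"assert apply_substitutions(\"HELLO\") == \"HELL0\"\n",
"print(\"substitution map OK\")"
]
},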
{
"cell_type": "markdown",
"metadata": {
"id": "gqn-NjRovl9Y"
},
"source": [
"## Preprocessing"
]
},
{
"cell_type": "code",
"source": [
"!rm -rf sdir"
],
"metadata": {
"id": "8bVogUbzY6Fi"
},
"execution_count": 48,
"outputs": []
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {
"id": "MjQltH0Mvl9Y"
},
"outputs": [],
"source": [
"from skimage.morphology import opening, square, label\n",
"from skimage.measure import regionprops\n",
"from skimage.io import imread, imsave\n",
"from skimage import img_as_ubyte\n",
"\n",
"# Mapping characters to integers\n",
"char_to_num = layers.StringLookup(\n",
" vocabulary=list(characters), mask_token=None,\n",
")\n",
"\n",
"# Mapping integers back to original characters\n",
"num_to_char = layers.StringLookup(\n",
" vocabulary=char_to_num.get_vocabulary(), mask_token=None, invert=True\n",
")\n",
"\n",
"def filter_image(img, kernel_size=3, num_components=8, min_height_ratio=0.25, max_height_ratio=1):\n",
" # Binarize the image\n",
" binary_image = img < 0.5 # Pixels with a value less than 0.5 will be True (1)\n",
"\n",
" # Label connected components in the image\n",
" label_image = label(binary_image)\n",
"\n",
" # Get properties of the labeled regions\n",
" properties = regionprops(label_image)\n",
"\n",
" # Sort the regions by area (in descending order)\n",
" properties.sort(key=lambda x: x.area, reverse=True)\n",
"\n",
" # Create an empty image to store the result\n",
" filtered_image = np.zeros_like(label_image, dtype=bool)\n",
"\n",
" # Keep only the largest components that satisfy the height constraints\n",
" for prop in properties[:num_components]:\n",
" minr, minc, maxr, maxc = prop.bbox\n",
" height = maxr - minr\n",
" if height > max_height_ratio * img.shape[0] or height < min_height_ratio * img.shape[0]:\n",
" continue\n",
" filtered_image[label_image == prop.label] = 1\n",
"\n",
" return filtered_image == 0\n",
"\n",
"\n",
"def read_and_process(imgpath, cdir):\n",
"    img = imread(imgpath, as_gray=True)\n",
" img = np.hstack([img, np.ones((img_height, img_width - img.shape[1]))]).astype(\"float32\")\n",
" img = filter_image(img)\n",
" output_path = os.path.join(cdir, Path(imgpath).stem + \".png\")\n",
" imsave(output_path, np.clip(img_as_ubyte(img), 0, 238))\n",
"    return tf.convert_to_tensor((1 - img).astype(\"float32\").reshape((img_height, img_width, 1)))\n",
"\n",
"def load_data(images, labels, cache, shuffle=True):\n",
" os.makedirs(cache, exist_ok=True)\n",
" # 1. Get the total size of the dataset\n",
" size = len(images)\n",
" # 2. Make an indices array and shuffle it, if required\n",
" indices = np.arange(size)\n",
" if shuffle:\n",
" np.random.shuffle(indices)\n",
"    # 3. Shuffle images and labels together\n",
"    x_train, y_train = images[indices], labels[indices]\n",
"    # 4. Preprocess every image and cache the cleaned copy on disk\n",
"    x_train = [read_and_process(x, cache) for x in x_train]\n",
" return x_train, y_train\n",
"\n",
"\n",
"# Load and preprocess the full dataset; the train/validation split is done later\n",
"rx_train, ry_train = load_data(np.array(images), np.array(labels), Path(\"sdir\"))\n",
"\n",
"\n",
"def encode_single_sample(img, label):\n",
"    # 1. Make sure the image is a float32 tensor in [0, 1]\n",
"    img = tf.image.convert_image_dtype(img, tf.float32)\n",
"    # 2. Transpose the image because we want the time\n",
"    # dimension to correspond to the width of the image\n",
"    img = tf.transpose(img, perm=[1, 0, 2])\n",
"    # 3. Map the characters in the label to numbers\n",
"    label = char_to_num(tf.strings.unicode_split(label, input_encoding=\"UTF-8\"))\n",
"    # 4. Return a dict as our model is expecting two inputs\n",
"    return {\"image\": img, \"label\": label}\n"
]
},
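{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustrative check (assuming the `char_to_num`/`num_to_char` lookups\n",
"defined above), encoding a label to integer ids and decoding back should\n",
"recover the original string."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Round-trip a sample label through the two StringLookup layers\n",
"sample = \"8GV20\"\n",
"ids = char_to_num(tf.strings.unicode_split(sample, input_encoding=\"UTF-8\"))\n",
"recovered = tf.strings.reduce_join(num_to_char(ids)).numpy().decode(\"utf-8\")\n",
"print(sample, \"->\", ids.numpy(), \"->\", recovered)\n",
"assert recovered == sample"
]
},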
{
"cell_type": "markdown",
"metadata": {
"id": "fnwhurZ-vl9Z"
},
"source": [
"## Create `Dataset` objects"
]
},
{
"cell_type": "code",
"execution_count": 64,
"metadata": {
"id": "IEec36ZDL5OH",
"outputId": "e79ca0b6-4bce-4830-c863-d7167cd2f666",
"colab": {
"base_uri": "https://localhost:8080/"
}
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"2653\n",
"7957\n"
]
}
],
"source": [
"split_index = int(len(rx_train) * 0.75)\n",
"\n",
"# First 75% of the data becomes the training set\n",
"x_train = rx_train[:split_index]\n",
"y_train = ry_train[:split_index]\n",
"\n",
"# Remaining 25% becomes the validation set\n",
"x_valid = rx_train[split_index:]\n",
"y_valid = ry_train[split_index:]\n",
"\n",
"print(len(x_valid))\n",
"print(len(x_train))"
]
},
{
"cell_type": "code",
"execution_count": 65,
"metadata": {
"id": "k2MZdcpXvl9Z"
},
"outputs": [],
"source": [
"train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))\n",
"train_dataset = (\n",
" train_dataset.map(\n",
" encode_single_sample, num_parallel_calls=tf.data.AUTOTUNE\n",
" )\n",
" .batch(batch_size)\n",
" .prefetch(buffer_size=tf.data.AUTOTUNE)\n",
")\n",
"\n",
"validation_dataset = tf.data.Dataset.from_tensor_slices((x_valid, y_valid))\n",
"validation_dataset = (\n",
" validation_dataset.map(\n",
" encode_single_sample, num_parallel_calls=tf.data.AUTOTUNE\n",
" )\n",
" .batch(batch_size)\n",
" .prefetch(buffer_size=tf.data.AUTOTUNE)\n",
")"
]
},
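{
"cell_type": "markdown",
"metadata": {},
"source": [
"An illustrative peek at one batch: since `encode_single_sample` transposes\n",
"each image from `(height, width, 1)` to `(width, height, 1)`, the batched\n",
"images should have shape `(batch_size, 300, 80, 1)` and the labels\n",
"`(batch_size, max_length)`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect the shapes of one batch from the training pipeline\n",
"for batch in train_dataset.take(1):\n",
"    print(batch[\"image\"].shape, batch[\"label\"].shape)"
]
},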
{
"cell_type": "markdown",
"metadata": {
"id": "NI0NRV5Ivl9Z"
},
"source": [
"## Visualize the data"
]
},
{
"cell_type": "code",
"execution_count": 66,
"metadata": {
"id": "7GT5RSNgvl9Z",
"outputId": "67525f74-36e2-4ca5-9246-6990c7c5a368",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 405
}
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": [
"