## Train and Test Fast-RCNN on Another Dataset

This is actually an old post from my GitHub; I have put it here for better exposition and presentation.

## 1. Training on INRIA

I will illustrate how to train Fast-RCNN on another dataset in the following steps with INRIA Person as the example dataset.

### 1.1 Format Your Dataset

At first, the dataset must be organized in the following required format.

```
INRIA
|-- data
    |-- Annotations
        |-- *.txt (annotation files)
    |-- Images
        |-- *.png (image files)
    |-- ImageSets
        |-- train.txt
```


The train.txt contains all the names (without extensions) of the image files that will be used for training. For example:

```
crop_000011
crop_000603
crop_000606
crop_000607
crop_000608
```
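Writing this file by hand is tedious for larger datasets. As a small sketch (assuming the directory layout shown above; `write_image_set` is a hypothetical helper, not part of Fast-RCNN), you could generate it from the contents of `data/Images`:

```python
import os

def write_image_set(dataset_root, split='train'):
    """List image basenames (no extension) from data/Images
    into data/ImageSets/<split>.txt, one name per line."""
    images_dir = os.path.join(dataset_root, 'data', 'Images')
    out_path = os.path.join(dataset_root, 'data', 'ImageSets', split + '.txt')
    names = sorted(os.path.splitext(f)[0]
                   for f in os.listdir(images_dir)
                   if f.lower().endswith(('.png', '.jpg')))
    with open(out_path, 'w') as fh:
        fh.write('\n'.join(names) + '\n')
    return names
```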


### 1.2 Construct IMDB

Add a new Python file describing the new dataset to the directory $FRCNN_ROOT/lib/datasets; see inria.py. Then take the following steps:

- Modify self._classes in the constructor to fit your dataset.
- Be careful with the extensions of your image files. See image_path_from_index in inria.py.
- Write the function for parsing annotations. See _load_inria_annotation in inria.py.
- Do not forget to add the import statements in your own Python file and in the other Python files in the same directory.

Then modify factory.py in the same directory. For example, to add INRIA Person, I would add:

```python
inria_devkit_path = '/home/szy/INRIA'
for split in ['train', 'test']:
    name = '{}_{}'.format('inria', split)
    __sets[name] = (lambda split=split: datasets.inria(split, inria_devkit_path))
```

### 1.3 Generate Region Proposals

Modify the MATLAB file selective_search.m in the directory $FRCNN_ROOT/selective_search; if you do not have that directory, you can find it here.

```matlab
image_db = '/home/szy/INRIA/';
image_filenames = textread([image_db '/data/ImageSets/train.txt'], '%s', 'delimiter', '\n');
for i = 1:length(image_filenames)
    if exist([image_db '/data/Images/' image_filenames{i} '.jpg'], 'file') == 2
        image_filenames{i} = [image_db '/data/Images/' image_filenames{i} '.jpg'];
    end
    if exist([image_db '/data/Images/' image_filenames{i} '.png'], 'file') == 2
        image_filenames{i} = [image_db '/data/Images/' image_filenames{i} '.png'];
    end
end
selective_search_rcnn(image_filenames, 'train.mat');
```


Run this MATLAB script and move the output train.mat to the root directory of your dataset, here /home/szy/INRIA/. As this is a time-consuming process, please be patient.
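It is worth sanity-checking the generated .mat file before training. A hedged sketch, assuming the file stores a cell array named `boxes` with one (n, 4) matrix per image, in MATLAB's [y1 x1 y2 x2] 1-based convention that Fast-RCNN's loader reorders to 0-based [x1 y1 x2 y2]:

```python
import scipy.io as sio

def load_proposals(mat_path):
    """Load selective-search proposals from a .mat file.
    Assumes a cell array 'boxes' with one (n, 4) matrix per image;
    reorders MATLAB's [y1 x1 y2 x2] columns to [x1 y1 x2 y2]
    and shifts 1-based indices to 0-based."""
    raw = sio.loadmat(mat_path)['boxes'].ravel()
    return [raw[i][:, (1, 0, 3, 2)] - 1 for i in range(raw.shape[0])]
```

Checking that the number of entries matches the number of lines in train.txt catches most path mistakes early.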

### 1.4 Modify Prototxt

For example, if you want to use the model VGG_CNN_M_1024, you should modify train.prototxt in $FRCNN_ROOT/models/VGG_CNN_M_1024. The changes mainly concern the number of classes you want to train. Let's assume that the number of classes is C (do not forget to count the background class). Then you should:

- Modify num_classes to C;
- Modify num_output in the cls_score layer to C;
- Modify num_output in the bbox_pred layer to 4 * C.

See here for more details.

### 1.5 Train!

In the directory $FRCNN_ROOT, run the following command in the shell.

```shell
./tools/train_net.py --gpu 0 --solver models/VGG_CNN_M_1024/solver.prototxt \
    --weights data/imagenet_models/VGG_CNN_M_1024.v2.caffemodel --imdb inria_train
```


Be careful with the argument imdb, as it specifies the dataset you will train on. Then take a break and wait for the training to finish.

## 2. Test Fast-RCNN on INRIA

I will illustrate how to test Fast-RCNN on another dataset in the following steps, again taking INRIA Person as the example dataset.

### 2.1 Format Your Dataset

At first, the dataset must be organized in the required format.

```
INRIA
|-- data
    |-- Annotations
        |-- *.txt (annotation files)
    |-- Images
        |-- *.png (image files)
    |-- ImageSets
        |-- test.txt
    |-- results
        |-- test (empty before testing)
    |-- VOCcode (optional)
```


The test.txt contains all the names (without extensions) of the image files that will be used for testing. For example, here are a few lines from test.txt:

```
crop_000001
crop_000002
crop_000003
crop_000004
crop_000005
```


### 2.2 Construct IMDB

This is the same as for training. Actually, you do not need to implement _load_inria_annotation again; you can simply reuse inria.py to construct the IMDB for your own dataset. For example, to test on a dataset named TownCenter, just add the following to factory.py.

```python
towncenter_devkit_path = '/home/szy/TownCenter'
for split in ['test']:
    name = '{}_{}'.format('towncenter', split)
    __sets[name] = (lambda split=split: datasets.inria(split, towncenter_devkit_path))
```


### 2.3 Generate Region Proposals

This is the same as for training. Note that the output should be test.mat rather than train.mat.

### 2.4 Modify Prototxt

This is the same as for training.

### 2.5 Prepare Your Evaluation Code

In its original framework, Fast-RCNN uses MATLAB wrappers to evaluate the results. As the evaluation process is not very complex, you can instead modify the function evaluate_detections in inria.py. Since INRIA Person provides some MATLAB evaluation files in the PASCAL-VOC format, you can modify them slightly and use them directly. See here for more details.

If you do not want to use the evaluation function built into the Fast-RCNN framework, you can find the results in the directory results/test under the root directory of your dataset and evaluate them yourself.
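If you go the do-it-yourself route, the core of a PASCAL-style detection evaluation is just IoU matching of detections against ground truth. A minimal sketch (not the VOC devkit code; `match_detections` and its greedy matching are illustrative simplifications, and the usual VOC threshold of 0.5 is assumed):

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def match_detections(dets, gts, thresh=0.5):
    """Greedily match detections (highest score first) to ground-truth
    boxes; each gt matches at most once. Returns (true_pos, false_pos)."""
    unmatched = list(gts)
    tp = fp = 0
    for det in sorted(dets, key=lambda d: -d[0]):  # det = (score, x1, y1, x2, y2)
        box = det[1:]
        best = max(unmatched, key=lambda g: iou(box, g), default=None)
        if best is not None and iou(box, best) >= thresh:
            unmatched.remove(best)
            tp += 1
        else:
            fp += 1
    return tp, fp
```

From the per-image (tp, fp) counts and the total number of ground-truth boxes, precision and recall (and from there an AP curve) follow directly.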

### 2.6 Test!

In the directory $FRCNN_ROOT, run the following command in the shell.

```shell
./tools/test_net.py --gpu 1 --def models/VGG_CNN_M_1024/test.prototxt \
    --net output/default/train/vgg_cnn_m_1024_fast_rcnn_iter_40000.caffemodel --imdb inria_test
```


Be careful with the argument imdb, as it specifies the dataset you will test on.

## 3. Results

See here.

## 4. References

[Fast-RCNN](https://github.com/rbgirshick/fast-rcnn) by Ross Girshick

Written on May 12, 2016