
Classic Caffe Models: A Hands-On faster-rcnn Object Detection Case (Part 1)

This post records how to compile and test py-faster-rcnn, the preparation work for the hands-on case. If you want to train on your own dataset, see the next post, Classic Caffe Models: A Hands-On faster-rcnn Object Detection Case (Part 2) (training on the KITTI dataset).

Before compiling py-faster-rcnn, make sure Caffe is already installed on the machine. If it is not, refer to the Simple Guide to Compiling and Installing Caffe on CentOS.

Now let's start compiling and installing py-faster-rcnn.

Step 1: Download the source code

Download the source from the command line: git clone --recursive https://github.com/rbgirshick/py-faster-rcnn.git

Note: be sure to keep --recursive; it ensures that the caffe-fast-rcnn submodule under faster-rcnn is cloned as well.

           If caffe-fast-rcnn was not downloaded, fetch it manually by running: git submodule update --init --recursive

           If the git command is not available, install it with: yum -y install git

Step 2: Build the lib directory

1. Build the lib directory (with a GPU)

        First determine your GPU's compute capability, then modify py-faster-rcnn/lib/setup.py accordingly (as shown below):

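        For reference, the value to change lives inside the Extension('nms.gpu_nms', ...) entry of lib/setup.py: the -arch flag passed to nvcc should match your GPU's compute capability (sm_61 below is only an example for a compute-capability 6.1 card; substitute your own value):

                # lib/setup.py, inside Extension('nms.gpu_nms', ...):
                extra_compile_args={'gcc': ["-Wno-unused-function"],
                                    'nvcc': ['-arch=sm_61',   # was '-arch=sm_35'; use your GPU's value
                                             '--ptxas-options=-v',
                                             '-c',
                                             '--compiler-options',
                                             "'-fPIC'"]},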

        Run the build:

                $>cd py-faster-rcnn/lib

                $>make

2. Build the lib directory (without a GPU)

        (a) First modify py-faster-rcnn/lib/setup.py as follows (remove the GPU-specific configuration):

                Comment out CUDA = locate_cuda() at line 58 of the file

                Also comment out the Extension section containing nms.gpu_nms that starts at line 125 (i.e., comment out the entire block shown below)

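        For reference, the block to comment out looks roughly like this in the upstream lib/setup.py (exact contents may differ slightly between versions):

                # lib/setup.py -- comment out the whole GPU NMS Extension so that
                # nothing is compiled against CUDA on a CPU-only machine:
                # Extension('nms.gpu_nms',
                #     ['nms/nms_kernel.cu', 'nms/gpu_nms.pyx'],
                #     library_dirs=[CUDA['lib64']],
                #     libraries=['cudart'],
                #     language='c++',
                #     runtime_library_dirs=[CUDA['lib64']],
                #     extra_compile_args={'gcc': ["-Wno-unused-function"],
                #                         'nvcc': ['-arch=sm_35',
                #                                  '--ptxas-options=-v',
                #                                  '-c',
                #                                  '--compiler-options',
                #                                  "'-fPIC'"]},
                #     include_dirs = [numpy_include, CUDA['include']]
                # ),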

        (b) Disable the GPU setting in py-faster-rcnn/lib/fast_rcnn/config.py

                At line 205, change True to False in __C.USE_GPU_NMS = True, as shown below:

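        The edit itself is a single line (the exact line number may differ slightly between versions):

                # lib/fast_rcnn/config.py
                # Use GPU implementation of non-maximum suppression
                __C.USE_GPU_NMS = False    # was: True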

        (c) Disable the GPU import in py-faster-rcnn/lib/fast_rcnn/nms_wrapper.py

                Comment out from nms.gpu_nms import gpu_nms at line 9 of the file, as shown below:

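        The top of lib/fast_rcnn/nms_wrapper.py then looks roughly like this; with the import commented out and USE_GPU_NMS set to False above, nms() always takes the cpu_nms path:

                # lib/fast_rcnn/nms_wrapper.py
                from fast_rcnn.config import cfg
                # from nms.gpu_nms import gpu_nms    # commented out: no CUDA on this machine
                from nms.cpu_nms import cpu_nms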

        Note: without a GPU, building without these changes will fail with errors about missing CUDA.

Step 3: Build FasterRCNN

1. Configure the Makefile.config of caffe-fast-rcnn

        Copy Makefile.config.example in the caffe-fast-rcnn directory to Makefile.config, then edit Makefile.config.
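        For example (assuming you start from the py-faster-rcnn root directory):

                $>cd caffe-fast-rcnn

                $>cp Makefile.config.example Makefile.config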

        The changes to make are the following:

                a) Uncomment the line WITH_PYTHON_LAYER := 1.

                b) If you have a GPU, also uncomment the line USE_CUDNN := 1.

        Here is the content of my Makefile.config:

## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
#CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
#	You should not set this flag if you will be reading LMDBs with any
#	possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
# For CUDA >= 9.0, comment the *_20 and *_21 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_52,code=sm_52 \
		-gencode arch=compute_60,code=sm_60 \
		-gencode arch=compute_61,code=sm_61 \
		-gencode arch=compute_61,code=compute_61

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := open
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib
BLAS_INCLUDE := /usr/include/openblas

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib64/python2.7/site-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
		# $(ANACONDA_HOME)/include/python2.7 \
		# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
#PYTHON_LIBRARIES := boost_python3 python3.6m
#PYTHON_INCLUDE := /usr/include/python3.6m \
#                  /usr/local/lib64/python3.6/site-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @
           

2. Build caffe-fast-rcnn

        In the caffe-fast-rcnn directory, run:

                $>make all -j8 && make pycaffe 

Step 4: Download the pre-trained detector models

1. Download the model archive

        In the py-faster-rcnn/data directory, run the following command to download the model archive:

                $>./scripts/fetch_faster_rcnn_models.sh

        When the download finishes, you will have the archive faster_rcnn_models.tgz in the py-faster-rcnn/data directory.

2. Extract the archive to get the model files

        Extract faster_rcnn_models.tgz with:

                $>tar -zxvf faster_rcnn_models.tgz

        Extraction yields two model files: VGG16_faster_rcnn_final.caffemodel and ZF_faster_rcnn_final.caffemodel.

        Both models were trained on the VOC2007 dataset.

Step 5: Run detection on images (run the demo)

Before running the demo, put the images you want to detect into the py-faster-rcnn/data/demo directory.

In the py-faster-rcnn directory, run the following command (the GPU is used by default; to run on the CPU, append --cpu):

        $>./tools/demo.py
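        For example, to run on the CPU with the smaller ZF model (tools/demo.py in the upstream repository provides the --cpu and --net flags; --net defaults to vgg16):

                $>./tools/demo.py --cpu --net zf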
