
Classic Caffe Models: Faster R-CNN Object Detection in Practice (Part 1)

This post records how to build and test py-faster-rcnn, as preparation for the hands-on case study. If you want to train on your own dataset, see the next post, Classic Caffe Models: Faster R-CNN Object Detection in Practice (Part 2) (training on the KITTI dataset).

Before building py-faster-rcnn, make sure Caffe is already installed on your machine. If it is not, see the Quick Guide to Building and Installing Caffe on CentOS.

Now let's build and install py-faster-rcnn.

Step 1: Download the source code

Run the following in a terminal to clone the source: git clone --recursive https://github.com/rbgirshick/py-faster-rcnn.git

Note: make sure to include --recursive; it also fetches the caffe-fast-rcnn submodule inside py-faster-rcnn.

           If caffe-fast-rcnn was not fetched, download it manually with: git submodule update --init --recursive

           If the git command is not available, install it first with: yum -y install git

Step 2: Build the lib directory

1. Building the lib directory (with a GPU)

        First determine your GPU's compute capability, then modify the py-faster-rcnn/lib/setup.py file accordingly:

(screenshot: py-faster-rcnn/lib/setup.py with the -arch flag changed to match the GPU's compute capability)
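In the stock lib/setup.py, the GPU NMS extension looks roughly like the sketch below (this is a reconstruction, not an exact copy of the file). The only change needed is the -arch value, which must match your GPU's compute capability; sm_61 here is just an example, e.g. for a GTX 1080:

```python
# Excerpt sketch of py-faster-rcnn/lib/setup.py (GPU build).
# The stock file ships with -arch=sm_35; change it to match your card.
Extension('nms.gpu_nms',
    ['nms/nms_kernel.cu', 'nms/gpu_nms.pyx'],
    library_dirs=[CUDA['lib64']],
    libraries=['cudart'],
    language='c++',
    runtime_library_dirs=[CUDA['lib64']],
    extra_compile_args={'gcc': ["-Wno-unused-function"],
                        'nvcc': ['-arch=sm_61',   # <-- match your GPU here
                                 '--ptxas-options=-v',
                                 '-c',
                                 '--compiler-options',
                                 "'-fPIC'"]},
    include_dirs=[numpy_include, CUDA['include']])
```

If the -arch value does not match your hardware, the build may succeed but gpu_nms will fail at runtime.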

        Run the build commands:

                $>cd py-faster-rcnn/lib

                $>make

2. Building the lib directory (without a GPU)

        (a) First modify the py-faster-rcnn/lib/setup.py file as follows to disable the GPU-specific configuration:

                Comment out CUDA = locate_cuda() at line 58 of the file

                Also comment out the Extension block containing nms.gpu_nms that starts at line 125 (i.e., comment out that entire block)

(screenshot: the nms.gpu_nms Extension block in setup.py commented out)
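For the CPU-only build, the edits amount to commenting two things out of lib/setup.py. A sketch (the elided fields stay as they are in the stock file):

```python
# --- py-faster-rcnn/lib/setup.py, CPU-only sketch ---

# 1) around line 58: disable CUDA detection
# CUDA = locate_cuda()

# 2) around line 125: disable the GPU NMS extension inside ext_modules
# Extension('nms.gpu_nms',
#     ['nms/nms_kernel.cu', 'nms/gpu_nms.pyx'],
#     library_dirs=[CUDA['lib64']],
#     libraries=['cudart'],
#     language='c++',
#     ...
# ),
```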

        (b) Disable GPU NMS in the /py-faster-rcnn/lib/fast_rcnn/config.py file

                At line 205 of that file, change the True in __C.USE_GPU_NMS = True to False, as shown below:

(screenshot: config.py with __C.USE_GPU_NMS = False)

        (c) Disable the GPU import in the py-faster-rcnn/lib/fast_rcnn/nms_wrapper.py file

                Comment out from nms.gpu_nms import gpu_nms at line 9 of that file, as shown below:

(screenshot: nms_wrapper.py with the gpu_nms import commented out)
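After the edit, nms_wrapper.py looks roughly like this (reconstructed from the stock file; treat it as a sketch rather than an exact copy). With USE_GPU_NMS set to False the GPU branch is never taken, so the commented-out import causes no NameError:

```python
from fast_rcnn.config import cfg
# from nms.gpu_nms import gpu_nms   # line 9: commented out for CPU-only builds
from nms.cpu_nms import cpu_nms

def nms(dets, thresh, force_cpu=False):
    """Dispatch to either CPU or GPU NMS implementations."""
    if dets.shape[0] == 0:
        return []
    if cfg.USE_GPU_NMS and not force_cpu:
        return gpu_nms(dets, thresh, device_id=cfg.GPU_ID)  # unreachable when USE_GPU_NMS is False
    else:
        return cpu_nms(dets, thresh)
```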

        Note: without a GPU, building the unmodified code will fail with errors about missing CUDA.
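For reference, this is what the NMS step itself computes. Below is a self-contained pure-NumPy version of greedy non-maximum suppression, equivalent in spirit to the project's Cython cpu_nms (this is my own illustrative code, not the project's):

```python
import numpy as np

def py_nms(dets, thresh):
    """Greedy NMS. dets is an (N, 5) array of [x1, y1, x2, y2, score];
    returns the indices of the boxes to keep."""
    x1, y1, x2, y2 = dets[:, 0], dets[:, 1], dets[:, 2], dets[:, 3]
    scores = dets[:, 4]
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]          # indices by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # intersection of the top-scoring box with all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0.0, xx2 - xx1 + 1)
        h = np.maximum(0.0, yy2 - yy1 + 1)
        inter = w * h
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # drop every remaining box that overlaps the kept box too much
        order = order[np.where(iou <= thresh)[0] + 1]
    return keep
```

Boxes are processed in descending score order; any remaining box whose IoU with an already-kept box exceeds thresh is suppressed.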

Step 3: Build Faster R-CNN

1. Configure caffe-fast-rcnn's Makefile.config

        Copy Makefile.config.example in the caffe-fast-rcnn directory to Makefile.config, then edit Makefile.config.

        The changes to make are:

                a) Uncomment the line WITH_PYTHON_LAYER := 1.

                b) If you have a GPU, also uncomment the line USE_CUDNN := 1.

        For reference, here is my complete Makefile.config:

## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
#CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
#	You should not set this flag if you will be reading LMDBs with any
#	possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
# For CUDA >= 9.0, comment the *_20 and *_21 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_52,code=sm_52 \
		-gencode arch=compute_60,code=sm_60 \
		-gencode arch=compute_61,code=sm_61 \
		-gencode arch=compute_61,code=compute_61

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := open
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib
BLAS_INCLUDE := /usr/include/openblas

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib64/python2.7/site-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
		# $(ANACONDA_HOME)/include/python2.7 \
		# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
#PYTHON_LIBRARIES := boost_python3 python3.6m
#PYTHON_INCLUDE := /usr/include/python3.6m \
#                  /usr/local/lib64/python3.6/site-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @
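The CUDA_ARCH value above is just a list of -gencode pairs, one per compute capability, plus a PTX fallback for forward compatibility with newer GPUs. A small illustrative helper (my own, not part of Caffe) that generates such a line from a list of capabilities:

```python
def gencode_flags(capabilities):
    """Build a Makefile-style CUDA_ARCH value from compute-capability
    strings such as "5.2" or "6.1"."""
    flags = []
    for cap in capabilities:
        sm = cap.replace(".", "")
        flags.append("-gencode arch=compute_{0},code=sm_{0}".format(sm))
    # PTX fallback: lets GPUs newer than the last listed architecture
    # JIT-compile the kernels at load time
    last = capabilities[-1].replace(".", "")
    flags.append("-gencode arch=compute_{0},code=compute_{0}".format(last))
    return " \\\n\t\t".join(flags)

print(gencode_flags(["3.0", "3.5", "5.0", "5.2", "6.0", "6.1"]))
```

Running it with the capabilities listed in the config reproduces the CUDA_ARCH block shown above.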
           

2. Build caffe-fast-rcnn

        Run the following in the caffe-fast-rcnn directory:

                $>make all -j8 && make pycaffe 

Step 4: Download the pre-trained detection models

1. Download the model archive

        In the py-faster-rcnn/data directory, run the following command to download the model archive:

                $>./scripts/fetch_faster_rcnn_models.sh

        When the download finishes, the archive faster_rcnn_models.tgz will be in the py-faster-rcnn/data directory

2. Extract the model files from the archive

        Extract the faster_rcnn_models.tgz file with:

                $>tar -zxvf faster_rcnn_models.tgz

        This produces two model files, VGG16_faster_rcnn_final.caffemodel and ZF_faster_rcnn_final.caffemodel

        Both models were trained on the PASCAL VOC 2007 dataset

Step 5: Run detection on images (the demo)

Before running the demo, put the images you want to detect into the py-faster-rcnn/data/demo directory

Run the following in the py-faster-rcnn directory (the GPU is used by default; to run on CPU, append --cpu):

        $>./tools/demo.py
