
caffe: fuck compile error again: error: a value of type "const float *" cannot be used to initialize an entity of type "float *"

wangxiao@wangxiao-GTX980:~/Downloads/caffe-master$ make -j8

find: `wangxiao/bvlc_alexnet/spl': No such file or directory

find: `caffemodel': No such file or directory

find: `wangxiao/bvlc_alexnet/0.77': No such file or directory

NVCC src/caffe/layers/euclidean_loss_layer.cu

src/caffe/layers/euclidean_loss_layer.cu(41): error: a value of type "const float *" cannot be used to initialize an entity of type "float *"

          detected during instantiation of "void caffe::EuclideanLossLayer<Dtype>::Backward_gpu(const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &) [with Dtype=float]"

(140): here

src/caffe/layers/euclidean_loss_layer.cu(42): error: a value of type "const float *" cannot be used to initialize an entity of type "float *"

src/caffe/layers/euclidean_loss_layer.cu(41): error: a value of type "const double *" cannot be used to initialize an entity of type "double *"

          detected during instantiation of "void caffe::EuclideanLossLayer<Dtype>::Backward_gpu(const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &) [with Dtype=double]"

src/caffe/layers/euclidean_loss_layer.cu(42): error: a value of type "const double *" cannot be used to initialize an entity of type "double *"

4 errors detected in the compilation of "/tmp/tmpxft_000007e5_00000000-16_euclidean_loss_layer.compute_50.cpp1.ii".

make: *** [.build_debug/cuda/src/caffe/layers/euclidean_loss_layer.o] Error 1

wangxiao@wangxiao-GTX980:~/Downloads/caffe-master$

----------------------------------------------------------------------

Cause analysis:

The main reason is that I added the following lines to the Euclidean distance loss layer:

       Dtype* diff_cpu_data = bottom[i]->mutable_cpu_diff();

       Dtype* label_data = bottom[1]->cpu_data();    // label data: 0 or 1

       Dtype* predict_data = bottom[0]->cpu_data();  // predict data

Now look at the reported error again:

detected during instantiation of "void caffe::EuclideanLossLayer<Dtype>::Backward_gpu(const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &) [with Dtype=float]"

The const qualifier is clearly missing:

So, change them to:

const Dtype* label_data = bottom[1]->cpu_data();    // label data: 0 or 1

const Dtype* predict_data = bottom[0]->cpu_data();  // predict data

Recompile, and everything is OK ~~
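For reference, here is a minimal, self-contained C++ sketch of the underlying rule. The ToyBlob class below is only a stand-in made up for illustration; the real caffe::Blob declares const Dtype* cpu_data() const and Dtype* mutable_cpu_data() in the same spirit. The point is that a const return value can only initialize a const-qualified pointer, and any writing has to go through the mutable_* accessor.

#include <cstddef>
#include <vector>

// Toy stand-in for caffe::Blob<Dtype> (illustration only, not the real class):
// the read-only accessor returns a const pointer, and only the mutable_*
// accessor returns a writable pointer.
template <typename Dtype>
class ToyBlob {
 public:
  explicit ToyBlob(std::size_t n) : data_(n, Dtype(0)) {}
  const Dtype* cpu_data() const { return data_.data(); }  // read-only view
  Dtype* mutable_cpu_data() { return data_.data(); }      // writable view
 private:
  std::vector<Dtype> data_;
};

int main() {
  ToyBlob<float> blob(10);

  // float* bad = blob.cpu_data();  // reproduces the error above: a value of
  //                                // type "const float *" cannot be used to
  //                                // initialize an entity of type "float *"

  const float* label_data = blob.cpu_data();   // OK: const-qualified pointer
  float* diff_data = blob.mutable_cpu_data();  // OK: writable accessor
  diff_data[0] = 1.0f;                         // writing through the mutable
                                               // pointer is allowed
  (void)label_data;
  return 0;
}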

2. Problem 2:

CXX src/caffe/layers/accuracy_layer.cpp

src/caffe/layers/accuracy_layer.cpp: In instantiation of ‘void caffe::AccuracyLayer<Dtype>::Forward_cpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]’:

src/caffe/layers/accuracy_layer.cpp:286:1:   required from here

src/caffe/layers/accuracy_layer.cpp:98:21: error: assignment of read-only location ‘*(bottom_data + 84u)’

     bottom_data[21] = 1; bottom_data[22] = 0; bottom_data[23] = 0; bottom_data[24] = 0;

Seeing the error, the cause should be obvious: a value is being assigned to a read-only location...

But I still want to modify that value and use it as the final result, so what to do? Well, the only option is to first copy it into an intermediate variable. But then another question comes up: can a const variable be assigned to an ordinary variable at all?

For example: float a = const float b? Is that OK? I don't think so; let's try it first...
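Before trying it inside Caffe, here is a minimal plain-C++ sketch (my own illustration, not part of the build above) of what the language actually allows: copying the value of a const variable into an ordinary variable compiles fine; the "read-only location" error only appears when writing through a pointer to const. So in the accuracy layer a writable pointer would normally have to come from bottom[i]->mutable_cpu_data() rather than cpu_data().

#include <iostream>

int main() {
  const float b = 0.5f;  // a const value, like the data behind cpu_data()

  float a = b;   // OK: copying the value of a const variable into an
                 // ordinary variable is allowed
  a = 1.0f;      // OK: only the copy changes; b itself stays 0.5f

  const float* bottom_data = &b;
  // bottom_data[0] = 1.0f;  // error: assignment of read-only location,
  //                         // the same error as in accuracy_layer.cpp

  std::cout << a << " " << b << std::endl;  // prints: 1 0.5
  return 0;
}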
