A small question about the Crop_layer source code

The crop_copy function in caffe/src/caffe/layers/crop_layer.cpp contains the following code. In my opinion, the for loop in the else branch serves no purpose and only adds useless repeated computation. If anyone understands why it is there, please leave a comment. Thanks in advance!

  if (cur_dim + 1 < top[0]->num_axes()) {
    // We are not yet at the final dimension, call copy recursively
    for (int i = 0; i < top[0]->shape(cur_dim); ++i) {
      indices[cur_dim] = i;
      crop_copy(bottom, top, offsets, indices, cur_dim+1,
                src_data, dest_data, is_forward);
    }
  } else {
    // We are at the last dimension, which is stored contiguously in memory
    for (int i = 0; i < top[0]->shape(cur_dim); ++i) {
      // prepare index vectors reduced (red) and with offsets (off)
      std::vector<int> ind_red(cur_dim, 0);    // offset vector for the top blob
      std::vector<int> ind_off(cur_dim+1, 0);  // offset vector for the bottom blob
      for (int j = 0; j < cur_dim; ++j) {  // note: cur_dim == 3 here, so j is at most 2; ind_red is zero-initialized
          ind_red[j] = indices[j];
          ind_off[j] = indices[j] + offsets[j];
      }
      ind_off[cur_dim] = offsets[cur_dim];  // last dimension of ind_off
      // do the copy
      if (is_forward) {
        caffe_copy(top[0]->shape(cur_dim),
            src_data + bottom[0]->offset(ind_off),
            dest_data + top[0]->offset(ind_red));
      } else {
        // in the backwards pass the src_data is top_diff
        // and the dest_data is bottom_diff
        caffe_copy(top[0]->shape(cur_dim),
            src_data + top[0]->offset(ind_red),
            dest_data + bottom[0]->offset(ind_off));
      }
    }
  }
}

xmyqsh


I am also using Crop and found the same problem.
The for loop in the else branch is indeed unnecessary.
I had a look at crop.cu, and it does not have this extra for loop.
Also, the following kernel in crop.cu:
// Copy (one line per thread) from one array to another, with arbitrary
// strides in the last two dimensions.
template <typename Dtype>
__global__ void copy_kernel(const int n, const int height, const int width,
    const int src_outer_stride, const int src_inner_stride,
    const int dest_outer_stride, const int dest_inner_stride,
    const Dtype* src, Dtype* dest) {
  CUDA_KERNEL_LOOP(index, n) {
    int src_start = index / height * src_outer_stride
        + index % height * src_inner_stride;
    int dest_start = index / height * dest_outer_stride
        + index % height * dest_inner_stride;
    for (int i = 0; i < width; ++i) {
      dest[dest_start + i] = src[src_start + i];
    }
  }
}
can be optimized to:
// Copy (one line per thread) from one array to another, with arbitrary
// strides in the last two dimensions.
template <typename Dtype>
__global__ void copy_kernel(const int n, const int height, const int width,
    const int src_outer_stride, const int src_inner_stride,
    const int dest_outer_stride, const int dest_inner_stride,
    const Dtype* src, Dtype* dest) {
  CUDA_KERNEL_LOOP(index, n) {
    int src_start = index * src_inner_stride;
    int dest_start = index * dest_inner_stride;
    for (int i = 0; i < width; ++i) {
      dest[dest_start + i] = src[src_start + i];
    }
  }
}
In other words, this saves N*C*H*2 each of the division, multiplication, modulo, and addition operations.
Does that look right to you?
