Training my own image classifier with Caffe (2 classes, 500 training images and 30 test images per class) gives very poor results. Why?

  1. SOLVER.prototxt

net: "./train_val.prototxt"
test_iter: 25
test_interval: 500
test_initialization: false
display: 100
average_loss: 40
base_lr: 0.01
lr_policy: "step"
stepsize: 100000
gamma: 0.1
max_iter: 450000
momentum: 0.9
weight_decay: 0.0002
snapshot: 100000
snapshot_prefix: "caffenet_train"
solver_mode: GPU
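Before anything else, the test settings are worth sanity-checking against the dataset size stated in the title (a minimal check; `test_iter` comes from the solver above and the TEST-phase `batch_size` from train_val.prototxt below):

```python
# Sanity check: how many samples does one test pass evaluate,
# versus how many unique test images actually exist?
test_iter = 25            # from solver.prototxt
test_batch_size = 100     # from train_val.prototxt, TEST phase
num_test_images = 2 * 30  # 2 classes x 30 test images each

samples_per_test_pass = test_iter * test_batch_size
print(samples_per_test_pass)   # 2500 samples requested per test pass
print(num_test_images)         # but only 60 unique test images exist
# Caffe cycles through the LMDB, so each test image is evaluated ~41 times
print(samples_per_test_pass // num_test_images)
```

In other words, `test_iter` × test `batch_size` should roughly equal the number of test images; here it exceeds it by a factor of about 41.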

  2. DEPLOY.prototxt

name: "CaffeNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 10 dim: 3 dim: 200 dim: 200 } }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}

  3. TRAIN_VAL.prototxt

name: "CaffeNet"
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mirror: true
    crop_size: 200
    mean_file: "imagenet_mean.binaryproto"
  }
  data_param {
    source: "train_lmdb"
    batch_size: 50
    backend: LMDB
  }
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    mirror: false
    crop_size: 200
    mean_file: "imagenet_mean.binaryproto"
  }
  data_param {
    source: "val_lmdb"
    batch_size: 100
    backend: LMDB
  }
}

[Screenshot of the training log output: QQ截图20171229103214.jpg]

 

Hustachao - post-90s engineering guy in Shanghai


You need to understand the network's hyperparameters and tune them.
I'm a beginner too.

莲子熊猫


The dataset is too small. I'd say you need at least a thousand images, and after that it comes down to tuning the hyperparameters.

DaoYB033 - All In AI


"accuracy = 0.5008",对于二分类来说,这等同《都选C》的节奏,说明你的训练模型根本没起作用。大神们的模型不好使,相信你也清楚原因在自身:
1、数据量少是硬伤;NN离不开大数据,多大合适?建议:类比该模型的典型数据集与自有数据集的区分度复杂度等,估算达到预期目标的各数据集大小。
2、生搬硬套现有模型参数;调参是难点中的难点,先不说数学理论功底,也不说caffe源码精读,建议:你先把用到的这几个模型文件中每一项含义弄明白。(你的输出打印与所贴文件不一致,猜测你修改然后训练尝试过N次,与其急于求成的浪费时间,静下来深度学习吧,才能教机器深度学习~_~)
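As a starting point for point 2, here is one way the posted solver could be scaled down to a ~1,000-image dataset. This is a hedged sketch, not a recipe: every value below is an assumption to be tuned, and it additionally assumes the TEST-phase batch_size in train_val.prototxt is lowered to 10 so that one test pass covers the 60 test images exactly once.

```
net: "./train_val.prototxt"
test_iter: 6          # 6 iters x TEST batch_size 10 = 60 = all test images once
test_interval: 100
base_lr: 0.001        # 10x lower; a large lr often diverges on small datasets
lr_policy: "step"
stepsize: 1000        # decay well within max_iter, not at iteration 100000
gamma: 0.1
max_iter: 5000        # 450000 iters x batch 50 would revisit 1000 images ~22500 times
momentum: 0.9
weight_decay: 0.0005
snapshot: 1000
snapshot_prefix: "caffenet_train_small"
solver_mode: GPU
```

The point is not these particular numbers but the consistency constraints behind them: test_iter must match the test set size, and stepsize must be smaller than max_iter or the learning rate never decays.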
