How do you do multi-label training in Caffe, and what does the final softmax output look like?

Traditional single-label classification learns from a set of examples each associated with a single label l drawn from a set of mutually exclusive labels L, with |L| > 1. In multi-label classification, each example is instead associated with a subset of L. Historically, multi-label classification was motivated mainly by text categorization and medical diagnosis; today the demand for multi-label methods keeps growing across modern applications such as protein function classification, music categorization, and semantic scene classification.
In semantic scene classification, one photo can belong to several conceptual classes at once, e.g. to both "sunset" and "beach".
So what does a multi-label network look like in Caffe, and how do you get the final layer to output results for all classes at once? Here is what I have:
name: "CIFAR10_quick"
layer {
name: "cifar"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
mean_file: "mean.binaryproto"
}
data_param {
source: "imagenet_train_lmdb_30_42"
batch_size: 1
backend: LMDB
}
}
layer {
name: "cifar"
type: "Data"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
mean_file: "mean.binaryproto"
}
data_param {
source: "imagenet_test_lmdb_30_42"
batch_size: 1
backend: LMDB
}
}
# The original net had a Silence layer feeding conv1 via a blob "silency_data".
# Silence produces no top blob, so "silency_data" never exists and the net fails
# to parse; conv1 must read "data" directly.
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 32
pad: 2
kernel_size: 5
stride: 1
weight_filler {
type: "gaussian"
std: 0.0001
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "pool1"
top: "pool1"
}
layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 32
pad: 2
kernel_size: 5
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu2"
type: "ReLU"
bottom: "conv2"
top: "conv2"
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: AVE
kernel_size: 3
stride: 2
}
}
layer {
name: "conv3"
type: "Convolution"
bottom: "pool2"
top: "conv3"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 64
pad: 2
kernel_size: 5
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu3"
type: "ReLU"
bottom: "conv3"
top: "conv3"
}
layer {
name: "pool3"
type: "Pooling"
bottom: "conv3"
top: "pool3"
pooling_param {
pool: AVE
kernel_size: 3
stride: 2
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "pool3"
top: "ip1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 64
weight_filler {
type: "gaussian"
std: 0.1
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "ip2"
type: "InnerProduct"
bottom: "ip1"
top: "ip2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 10
weight_filler {
type: "gaussian"
std: 0.1
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "sigmoid5"
type: "Sigmoid"
bottom: "ip2"
top: "pred"
}
layer {
name: "loss"
# Use the sigmoid output "pred", not the raw "ip2"; otherwise the Sigmoid layer
# above is dangling and the loss regresses unbounded logits against 0/1 labels.
# (SigmoidCrossEntropyLoss on "ip2" is the more common multi-label choice.)
type: "EuclideanLoss"
bottom: "pred"
bottom: "label"
top: "loss"
}
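For reference, a common multi-label setup in Caffe swaps the single-label LMDB Data layer for an HDF5 data layer carrying a multi-hot label vector, and trains with SigmoidCrossEntropyLoss on the final inner-product output. A minimal sketch (the source file name is a placeholder, not from this thread; K stands for the number of classes):

```
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"   # shape N x K, each entry 0 or 1 (multi-hot)
  hdf5_data_param {
    source: "train_h5_list.txt"  # placeholder: text file listing .h5 files
    batch_size: 32
  }
}
# ... conv/pool/fc layers as above, ending in "ip2" with num_output: K ...
layer {
  name: "loss"
  type: "SigmoidCrossEntropyLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
# At test time, run a Sigmoid layer on "ip2" to get per-class probabilities.
```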


cjwdeq

Upvoted by: anan1205

I agree with YJango. To add one point: some multi-label tasks now put N binary classifiers at the end of the network and sum their losses, which also captures some of the relationship information between classes.
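The "N binary classifiers with summed losses" idea can be sketched numerically; this is a numpy illustration with made-up scores and targets, not code from the thread:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Raw scores from the last layer for one sample, 4 classes (made-up numbers).
scores = np.array([2.0, -1.0, 0.5, -3.0])
# Multi-hot target: this sample belongs to classes 0 and 2 simultaneously.
targets = np.array([1.0, 0.0, 1.0, 0.0])

# Each class gets its own sigmoid "binary classifier"...
probs = sigmoid(scores)
# ...and its own binary cross-entropy term; the total loss is their sum.
per_class_loss = -(targets * np.log(probs) + (1 - targets) * np.log(1 - probs))
total_loss = per_class_loss.sum()
print(probs.round(3))
print(round(total_loss, 3))
```

Unlike softmax, the per-class probabilities here do not compete: several of them can be close to 1 at once, which is exactly what multi-label classification needs.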

YJango - graduate student in Japan, majoring in artificial intelligence


I'm not sure what you're asking.
Modern classifiers use softmax to output a probability for every class, not just a single label.

Say you have 120 classes.
During training, the target is a vector of length 120 that is all zeros except for a 1 at the index of the true class.
At prediction time, the output is again a 120-dimensional vector, but each dimension now holds the probability of the corresponding class.
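YJango's description can be sketched in numpy (5 classes instead of 120 for brevity; the scores are made up):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(z - z.max())
    return e / e.sum()

num_classes = 5          # the answer uses 120; 5 keeps the example short
true_class = 2

# Training target: all zeros except a 1 at the true class index (one-hot).
target = np.zeros(num_classes)
target[true_class] = 1.0

# At prediction time the network emits one score per class...
scores = np.array([0.3, -1.2, 2.5, 0.1, -0.4])
# ...and softmax turns them into a probability for every class.
probs = softmax(scores)

print(probs.argmax())    # prints 2: index of the most likely class
```

Note that softmax probabilities always sum to 1, so exactly this setup assumes one true class per sample; for genuinely multi-label targets, per-class sigmoids are the usual replacement.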

zhubenfulovepoe


I've tried writing the network definition many times myself, but it always errors out.
