Visualizing TensorFlow training with TensorBoard
Published: 2019-04-30


TensorBoard usage steps

① Specify the variables to display on the dashboard

② Compute these variables during training and write them to a log file

③ Parse the log file:

   tensorboard --logdir=<log directory>

 

 

We add the TensorBoard statements on top of the VGGNet code.

Original code

import tensorflow as tf
import os
import pickle
import numpy as np

CIFAR_DIR = "dataset/cifar-10-batches-py"
print(os.listdir(CIFAR_DIR))

def load_data(filename):
    """read data from data file."""
    with open(filename, 'rb') as f:
        data = pickle.load(f, encoding='bytes')
        return data[b'data'], data[b'labels']

# tensorflow.Dataset.
class CifarData:
    def __init__(self, filenames, need_shuffle):
        all_data = []
        all_labels = []
        for filename in filenames:
            data, labels = load_data(filename)
            all_data.append(data)
            all_labels.append(labels)
        self._data = np.vstack(all_data)
        self._data = self._data / 127.5 - 1  # normalize to (-1, 1)
        self._labels = np.hstack(all_labels)
        print(self._data.shape)
        print(self._labels.shape)
        self._num_examples = self._data.shape[0]
        self._need_shuffle = need_shuffle
        self._indicator = 0
        if self._need_shuffle:
            self._shuffle_data()

    def _shuffle_data(self):
        # [0,1,2,3,4,5] -> [5,3,2,4,0,1]
        p = np.random.permutation(self._num_examples)
        self._data = self._data[p]
        self._labels = self._labels[p]

    def next_batch(self, batch_size):
        """return batch_size examples as a batch."""
        end_indicator = self._indicator + batch_size
        if end_indicator > self._num_examples:
            if self._need_shuffle:
                self._shuffle_data()
                self._indicator = 0
                end_indicator = batch_size
            else:
                raise Exception("have no more examples")
        if end_indicator > self._num_examples:
            raise Exception("batch size is larger than all examples")
        batch_data = self._data[self._indicator: end_indicator]
        batch_labels = self._labels[self._indicator: end_indicator]
        self._indicator = end_indicator
        return batch_data, batch_labels

train_filenames = [os.path.join(CIFAR_DIR, 'data_batch_%d' % i) for i in range(1, 6)]
test_filenames = [os.path.join(CIFAR_DIR, 'test_batch')]
train_data = CifarData(train_filenames, True)
test_data = CifarData(test_filenames, False)

x = tf.placeholder(tf.float32, [None, 3072])
y = tf.placeholder(tf.int64, [None])  # [None], eg: [0,5,6,3]
x_image = tf.reshape(x, [-1, 3, 32, 32])
# 32 * 32
x_image = tf.transpose(x_image, perm=[0, 2, 3, 1])

# conv1: neuron maps / feature maps (output images)
conv1_1 = tf.layers.conv2d(x_image, 32, (3, 3),  # 32 output channels, 3x3 kernel
                           padding='same', activation=tf.nn.relu, name='conv1_1')
conv1_2 = tf.layers.conv2d(conv1_1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv1_2')
# 16 * 16
pooling1 = tf.layers.max_pooling2d(conv1_2, (2, 2), (2, 2),  # 2x2 kernel, stride 2
                                   name='pool1')

conv2_1 = tf.layers.conv2d(pooling1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv2_1')
conv2_2 = tf.layers.conv2d(conv2_1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv2_2')
# 8 * 8
pooling2 = tf.layers.max_pooling2d(conv2_2, (2, 2), (2, 2), name='pool2')

conv3_1 = tf.layers.conv2d(pooling2, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv3_1')
conv3_2 = tf.layers.conv2d(conv3_1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv3_2')
# 4 * 4 * 32
pooling3 = tf.layers.max_pooling2d(conv3_2, (2, 2), (2, 2), name='pool3')

# [None, 4 * 4 * 32]
flatten = tf.layers.flatten(pooling3)
y_ = tf.layers.dense(flatten, 10)

# y_ -> softmax
# y -> one_hot
# loss = ylogy_
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)

# indices
predict = tf.argmax(y_, 1)
# [1,0,1,1,1,0,0,0]
correct_prediction = tf.equal(predict, y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float64))

with tf.name_scope('train_op'):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

init = tf.global_variables_initializer()
batch_size = 20
train_steps = 10000
test_steps = 100

# train 10k: 73.4%
with tf.Session() as sess:
    sess.run(init)
    for i in range(train_steps):
        batch_data, batch_labels = train_data.next_batch(batch_size)
        loss_val, acc_val, _ = sess.run(
            [loss, accuracy, train_op],
            feed_dict={x: batch_data, y: batch_labels})
        if (i + 1) % 100 == 0:
            print('[Train] Step: %d, loss: %4.5f, acc: %4.5f'
                  % (i + 1, loss_val, acc_val))
        if (i + 1) % 1000 == 0:
            test_data = CifarData(test_filenames, False)
            all_test_acc_val = []
            for j in range(test_steps):
                test_batch_data, test_batch_labels = test_data.next_batch(batch_size)
                test_acc_val = sess.run(
                    [accuracy],
                    feed_dict={x: test_batch_data, y: test_batch_labels})
                all_test_acc_val.append(test_acc_val)
            test_acc = np.mean(all_test_acc_val)
            print('[Test ] Step: %d, acc: %4.5f' % (i + 1, test_acc))

 

 

Specifying the variables to display on the dashboard

Because loss is a scalar, we use tf.summary.scalar.

All the TensorBoard-related functions live under tf.summary.

One way to think about it: tf.summary.scalar gives the loss node an extra name, an identifier under which the values the node takes during training are aggregated.

In the log file this might be stored along the lines of:
'loss': <10, 1.1>, <20, 1.08>

meaning the loss was 1.1 at training step 10 and 1.08 at step 20.
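If you want to check what was written, those <step, value> pairs can be read straight back out of the event file; a minimal sketch using the TF 1.x tf.train.summary_iterator (the file name below is hypothetical, it is generated automatically):

    import tensorflow as tf

    event_file = 'run_vgg_tensorboard/train/events.out.tfevents.xxx'  # hypothetical path
    for event in tf.train.summary_iterator(event_file):
        for value in event.summary.value:
            if value.tag == 'loss':
                print(event.step, value.simple_value)  # e.g. 10 1.1, 20 1.08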

When the file is parsed, TensorBoard turns these pairs into a curve.

 

For images, use tf.summary.image.
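In this post the image summary ends up looking like this (excerpted from the listing below); x_image was normalized to (-1, 1), but tf.summary.image expects pixel values, so it is first mapped back to the 0-255 range:

    source_image = (x_image + 1) * 127.5                               # back to 0-255
    inputs_summary = tf.summary.image('inputs_summary', source_image)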

 

We now have three summaries: loss_summary, accuracy_summary and inputs_summary. Feeding every one of them into sess.run individually during training would be tedious, so TensorBoard provides a function that merges them all: tf.summary.merge_all().

The other option is to state explicitly which summaries to merge: tf.summary.merge([loss_summary, accuracy_summary]).

The second form exists because the variables we care about during training and during testing are not necessarily the same. During training we might want everything (loss, accuracy and the input images), while during testing we might only care about accuracy and loss.
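The corresponding lines, excerpted from the full listing below:

    loss_summary = tf.summary.scalar('loss', loss)
    accuracy_summary = tf.summary.scalar('accuracy', accuracy)
    inputs_summary = tf.summary.image('inputs_summary', source_image)

    merged_summary = tf.summary.merge_all()                                    # everything -> used for training
    merged_summary_test = tf.summary.merge([loss_summary, accuracy_summary])   # loss/accuracy only -> used for testing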

 

Code after the additions

import tensorflow as tf
import os
import pickle
import numpy as np

CIFAR_DIR = "dataset/cifar-10-batches-py"
print(os.listdir(CIFAR_DIR))

def load_data(filename):
    """read data from data file."""
    with open(filename, 'rb') as f:
        data = pickle.load(f, encoding='bytes')
        return data[b'data'], data[b'labels']

# tensorflow.Dataset.
class CifarData:
    def __init__(self, filenames, need_shuffle):
        all_data = []
        all_labels = []
        for filename in filenames:
            data, labels = load_data(filename)
            all_data.append(data)
            all_labels.append(labels)
        self._data = np.vstack(all_data)
        self._data = self._data / 127.5 - 1  # normalize to (-1, 1)
        self._labels = np.hstack(all_labels)
        print(self._data.shape)
        print(self._labels.shape)
        self._num_examples = self._data.shape[0]
        self._need_shuffle = need_shuffle
        self._indicator = 0
        if self._need_shuffle:
            self._shuffle_data()

    def _shuffle_data(self):
        # [0,1,2,3,4,5] -> [5,3,2,4,0,1]
        p = np.random.permutation(self._num_examples)
        self._data = self._data[p]
        self._labels = self._labels[p]

    def next_batch(self, batch_size):
        """return batch_size examples as a batch."""
        end_indicator = self._indicator + batch_size
        if end_indicator > self._num_examples:
            if self._need_shuffle:
                self._shuffle_data()
                self._indicator = 0
                end_indicator = batch_size
            else:
                raise Exception("have no more examples")
        if end_indicator > self._num_examples:
            raise Exception("batch size is larger than all examples")
        batch_data = self._data[self._indicator: end_indicator]
        batch_labels = self._labels[self._indicator: end_indicator]
        self._indicator = end_indicator
        return batch_data, batch_labels

train_filenames = [os.path.join(CIFAR_DIR, 'data_batch_%d' % i) for i in range(1, 6)]
test_filenames = [os.path.join(CIFAR_DIR, 'test_batch')]
train_data = CifarData(train_filenames, True)
test_data = CifarData(test_filenames, False)

x = tf.placeholder(tf.float32, [None, 3072])
y = tf.placeholder(tf.int64, [None])  # [None], eg: [0,5,6,3]
x_image = tf.reshape(x, [-1, 3, 32, 32])
# 32 * 32
x_image = tf.transpose(x_image, perm=[0, 2, 3, 1])

# conv1: neuron maps / feature maps (output images)
conv1_1 = tf.layers.conv2d(x_image, 32, (3, 3),  # 32 output channels, 3x3 kernel
                           padding='same', activation=tf.nn.relu, name='conv1_1')
conv1_2 = tf.layers.conv2d(conv1_1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv1_2')
# 16 * 16
pooling1 = tf.layers.max_pooling2d(conv1_2, (2, 2), (2, 2),  # 2x2 kernel, stride 2
                                   name='pool1')

conv2_1 = tf.layers.conv2d(pooling1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv2_1')
conv2_2 = tf.layers.conv2d(conv2_1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv2_2')
# 8 * 8
pooling2 = tf.layers.max_pooling2d(conv2_2, (2, 2), (2, 2), name='pool2')

conv3_1 = tf.layers.conv2d(pooling2, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv3_1')
conv3_2 = tf.layers.conv2d(conv3_1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv3_2')
# 4 * 4 * 32
pooling3 = tf.layers.max_pooling2d(conv3_2, (2, 2), (2, 2), name='pool3')

# [None, 4 * 4 * 32]
flatten = tf.layers.flatten(pooling3)
y_ = tf.layers.dense(flatten, 10)

# y_ -> softmax
# y -> one_hot
# loss = ylogy_
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)

# indices
predict = tf.argmax(y_, 1)
# [1,0,1,1,1,0,0,0]
correct_prediction = tf.equal(predict, y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float64))

with tf.name_scope('train_op'):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

loss_summary = tf.summary.scalar('loss', loss)
accuracy_summary = tf.summary.scalar('accuracy', accuracy)
# x_image was normalized to (-1, 1) above, but tf.summary.image expects pixel
# values in 0-255, so map it back before logging, otherwise the images look wrong.
source_image = (x_image + 1) * 127.5
inputs_summary = tf.summary.image('inputs_summary', source_image)

merged_summary = tf.summary.merge_all()
merged_summary_test = tf.summary.merge([loss_summary, accuracy_summary])

init = tf.global_variables_initializer()
batch_size = 20
train_steps = 10000
test_steps = 100

# train 10k: 73.4%
with tf.Session() as sess:
    sess.run(init)
    for i in range(train_steps):
        batch_data, batch_labels = train_data.next_batch(batch_size)
        loss_val, acc_val, _ = sess.run(
            [loss, accuracy, train_op],
            feed_dict={x: batch_data, y: batch_labels})
        if (i + 1) % 100 == 0:
            print('[Train] Step: %d, loss: %4.5f, acc: %4.5f'
                  % (i + 1, loss_val, acc_val))
        if (i + 1) % 1000 == 0:
            test_data = CifarData(test_filenames, False)
            all_test_acc_val = []
            for j in range(test_steps):
                test_batch_data, test_batch_labels = test_data.next_batch(batch_size)
                test_acc_val = sess.run(
                    [accuracy],
                    feed_dict={x: test_batch_data, y: test_batch_labels})
                all_test_acc_val.append(test_acc_val)
            test_acc = np.mean(all_test_acc_val)
            print('[Test ] Step: %d, acc: %4.5f' % (i + 1, test_acc))

 

Computing the variables during training and writing them to a file

There is no need to specify a concrete file name, only a folder; the file inside the folder is named automatically.
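For reference, the auto-generated file name looks roughly like this (the timestamp and host name depend on your machine):

    train/events.out.tfevents.1556600000.my-host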

 

Create the folders

 

Create the writers

A writer is essentially a file handle; once we have it we can write (or read) summaries.

You can pass sess.graph when creating the train writer, or leave it out.
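The writer creation, excerpted from the listing below; passing sess.graph to the train writer is what makes the computation graph appear in TensorBoard's GRAPHS tab:

    train_writer = tf.summary.FileWriter(train_log_dir, sess.graph)  # also records the graph
    test_writer = tf.summary.FileWriter(test_log_dir)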

Before a summary can be written out, it first has to be computed.

merged_summary and merged_summary_test can be treated like any other tensor: add them to the list passed to sess.run and they are evaluated along with everything else.

Computing them is relatively expensive, though, so we only do it every 100 steps.

The original training loop therefore gets rewritten a little.

Note that for the test summary we usually want the test data to stay fixed, so we draw one fixed batch of test data and labels up front.
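The core of the rewritten loop, excerpted from the full listing below: merged_summary is appended to the sess.run targets only every output_summary_every_steps steps, and the test summary is evaluated on the fixed batch at the same cadence:

    eval_ops = [loss, accuracy, train_op]
    should_output_summary = ((i + 1) % output_summary_every_steps == 0)
    if should_output_summary:
        eval_ops.append(merged_summary)
    eval_ops_results = sess.run(eval_ops, feed_dict={x: batch_data, y: batch_labels})
    if should_output_summary:
        train_writer.add_summary(eval_ops_results[-1], i + 1)
        test_summary_str = sess.run([merged_summary_test],
                                    feed_dict={x: fixed_test_batch_data,
                                               y: fixed_test_batch_labels})[0]
        test_writer.add_summary(test_summary_str, i + 1)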

 

Code after the additions

import tensorflow as tf
import os
import pickle
import numpy as np

CIFAR_DIR = "dataset/cifar-10-batches-py"
print(os.listdir(CIFAR_DIR))

def load_data(filename):
    """read data from data file."""
    with open(filename, 'rb') as f:
        data = pickle.load(f, encoding='bytes')
        return data[b'data'], data[b'labels']

# tensorflow.Dataset.
class CifarData:
    def __init__(self, filenames, need_shuffle):
        all_data = []
        all_labels = []
        for filename in filenames:
            data, labels = load_data(filename)
            all_data.append(data)
            all_labels.append(labels)
        self._data = np.vstack(all_data)
        self._data = self._data / 127.5 - 1  # normalize to (-1, 1)
        self._labels = np.hstack(all_labels)
        print(self._data.shape)
        print(self._labels.shape)
        self._num_examples = self._data.shape[0]
        self._need_shuffle = need_shuffle
        self._indicator = 0
        if self._need_shuffle:
            self._shuffle_data()

    def _shuffle_data(self):
        # [0,1,2,3,4,5] -> [5,3,2,4,0,1]
        p = np.random.permutation(self._num_examples)
        self._data = self._data[p]
        self._labels = self._labels[p]

    def next_batch(self, batch_size):
        """return batch_size examples as a batch."""
        end_indicator = self._indicator + batch_size
        if end_indicator > self._num_examples:
            if self._need_shuffle:
                self._shuffle_data()
                self._indicator = 0
                end_indicator = batch_size
            else:
                raise Exception("have no more examples")
        if end_indicator > self._num_examples:
            raise Exception("batch size is larger than all examples")
        batch_data = self._data[self._indicator: end_indicator]
        batch_labels = self._labels[self._indicator: end_indicator]
        self._indicator = end_indicator
        return batch_data, batch_labels

train_filenames = [os.path.join(CIFAR_DIR, 'data_batch_%d' % i) for i in range(1, 6)]
test_filenames = [os.path.join(CIFAR_DIR, 'test_batch')]
train_data = CifarData(train_filenames, True)
test_data = CifarData(test_filenames, False)

x = tf.placeholder(tf.float32, [None, 3072])
y = tf.placeholder(tf.int64, [None])  # [None], eg: [0,5,6,3]
x_image = tf.reshape(x, [-1, 3, 32, 32])
# 32 * 32
x_image = tf.transpose(x_image, perm=[0, 2, 3, 1])

# conv1: neuron maps / feature maps (output images)
conv1_1 = tf.layers.conv2d(x_image, 32, (3, 3),  # 32 output channels, 3x3 kernel
                           padding='same', activation=tf.nn.relu, name='conv1_1')
conv1_2 = tf.layers.conv2d(conv1_1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv1_2')
# 16 * 16
pooling1 = tf.layers.max_pooling2d(conv1_2, (2, 2), (2, 2),  # 2x2 kernel, stride 2
                                   name='pool1')

conv2_1 = tf.layers.conv2d(pooling1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv2_1')
conv2_2 = tf.layers.conv2d(conv2_1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv2_2')
# 8 * 8
pooling2 = tf.layers.max_pooling2d(conv2_2, (2, 2), (2, 2), name='pool2')

conv3_1 = tf.layers.conv2d(pooling2, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv3_1')
conv3_2 = tf.layers.conv2d(conv3_1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv3_2')
# 4 * 4 * 32
pooling3 = tf.layers.max_pooling2d(conv3_2, (2, 2), (2, 2), name='pool3')

# [None, 4 * 4 * 32]
flatten = tf.layers.flatten(pooling3)
y_ = tf.layers.dense(flatten, 10)

# y_ -> softmax
# y -> one_hot
# loss = ylogy_
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)

# indices
predict = tf.argmax(y_, 1)
# [1,0,1,1,1,0,0,0]
correct_prediction = tf.equal(predict, y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float64))

with tf.name_scope('train_op'):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

loss_summary = tf.summary.scalar('loss', loss)
accuracy_summary = tf.summary.scalar('accuracy', accuracy)
# x_image was normalized to (-1, 1) above, but tf.summary.image expects pixel
# values in 0-255, so map it back before logging, otherwise the images look wrong.
source_image = (x_image + 1) * 127.5
inputs_summary = tf.summary.image('inputs_summary', source_image)

merged_summary = tf.summary.merge_all()
merged_summary_test = tf.summary.merge([loss_summary, accuracy_summary])

LOG_DIR = '.'
run_label = 'run_vgg_tensorboard'
run_dir = os.path.join(LOG_DIR, run_label)
if not os.path.exists(run_dir):
    os.mkdir(run_dir)
train_log_dir = os.path.join(run_dir, 'train')
test_log_dir = os.path.join(run_dir, 'test')
if not os.path.exists(train_log_dir):
    os.mkdir(train_log_dir)

init = tf.global_variables_initializer()
batch_size = 20
train_steps = 10000
test_steps = 100
output_summary_every_steps = 100

# train 10k: 73.4%
with tf.Session() as sess:
    sess.run(init)
    # train and test summaries go to separate folders, one writer each
    train_writer = tf.summary.FileWriter(train_log_dir, sess.graph)
    test_writer = tf.summary.FileWriter(test_log_dir)
    fixed_test_batch_data, fixed_test_batch_labels = test_data.next_batch(batch_size)
    for i in range(train_steps):
        batch_data, batch_labels = train_data.next_batch(batch_size)
        eval_ops = [loss, accuracy, train_op]
        should_output_summary = ((i + 1) % output_summary_every_steps == 0)
        if should_output_summary:
            eval_ops.append(merged_summary)
        eval_ops_results = sess.run(
            eval_ops,
            feed_dict={x: batch_data, y: batch_labels})
        loss_val, acc_val = eval_ops_results[0:2]
        if should_output_summary:
            train_summary_str = eval_ops_results[-1]
            train_writer.add_summary(train_summary_str, i + 1)
            test_summary_str = sess.run(
                [merged_summary_test],
                feed_dict={x: fixed_test_batch_data,
                           y: fixed_test_batch_labels})[0]
            test_writer.add_summary(test_summary_str, i + 1)
        if (i + 1) % 100 == 0:
            print('[Train] Step: %d, loss: %4.5f, acc: %4.5f'
                  % (i + 1, loss_val, acc_val))
        if (i + 1) % 1000 == 0:
            test_data = CifarData(test_filenames, False)
            all_test_acc_val = []
            for j in range(test_steps):
                test_batch_data, test_batch_labels = test_data.next_batch(batch_size)
                test_acc_val = sess.run(
                    [accuracy],
                    feed_dict={x: test_batch_data, y: test_batch_labels})
                all_test_acc_val.append(test_acc_val)
            test_acc = np.mean(all_test_acc_val)
            print('[Test ] Step: %d, acc: %4.5f' % (i + 1, test_acc))

 

Writing a helper function to log more statistics of a variable to TensorBoard

including the mean, standard deviation, minimum, maximum and a histogram.

 

You may notice that many TensorFlow functions carry a reduce_ prefix; it means they reduce a multi-dimensional tensor to a single value, such as its minimum or its mean, hence the name.
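A small illustration of the reduce_ idea (not from the original post, just a sketch in TF 1.x style): each call collapses a 2x2 tensor to a single number.

    t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    with tf.Session() as sess:
        print(sess.run(tf.reduce_mean(t)))  # 2.5
        print(sess.run(tf.reduce_min(t)))   # 1.0
        print(sess.run(tf.reduce_max(t)))   # 4.0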

Here the output of every convolutional layer is logged to TensorBoard:

import tensorflow as tf
import os
import pickle
import numpy as np

CIFAR_DIR = "dataset/cifar-10-batches-py"
print(os.listdir(CIFAR_DIR))

def load_data(filename):
    """read data from data file."""
    with open(filename, 'rb') as f:
        data = pickle.load(f, encoding='bytes')
        return data[b'data'], data[b'labels']

# tensorflow.Dataset.
class CifarData:
    def __init__(self, filenames, need_shuffle):
        all_data = []
        all_labels = []
        for filename in filenames:
            data, labels = load_data(filename)
            all_data.append(data)
            all_labels.append(labels)
        self._data = np.vstack(all_data)
        self._data = self._data / 127.5 - 1  # normalize to (-1, 1)
        self._labels = np.hstack(all_labels)
        print(self._data.shape)
        print(self._labels.shape)
        self._num_examples = self._data.shape[0]
        self._need_shuffle = need_shuffle
        self._indicator = 0
        if self._need_shuffle:
            self._shuffle_data()

    def _shuffle_data(self):
        # [0,1,2,3,4,5] -> [5,3,2,4,0,1]
        p = np.random.permutation(self._num_examples)
        self._data = self._data[p]
        self._labels = self._labels[p]

    def next_batch(self, batch_size):
        """return batch_size examples as a batch."""
        end_indicator = self._indicator + batch_size
        if end_indicator > self._num_examples:
            if self._need_shuffle:
                self._shuffle_data()
                self._indicator = 0
                end_indicator = batch_size
            else:
                raise Exception("have no more examples")
        if end_indicator > self._num_examples:
            raise Exception("batch size is larger than all examples")
        batch_data = self._data[self._indicator: end_indicator]
        batch_labels = self._labels[self._indicator: end_indicator]
        self._indicator = end_indicator
        return batch_data, batch_labels

train_filenames = [os.path.join(CIFAR_DIR, 'data_batch_%d' % i) for i in range(1, 6)]
test_filenames = [os.path.join(CIFAR_DIR, 'test_batch')]
train_data = CifarData(train_filenames, True)
test_data = CifarData(test_filenames, False)

x = tf.placeholder(tf.float32, [None, 3072])
y = tf.placeholder(tf.int64, [None])  # [None], eg: [0,5,6,3]
x_image = tf.reshape(x, [-1, 3, 32, 32])
# 32 * 32
x_image = tf.transpose(x_image, perm=[0, 2, 3, 1])

# conv1: neuron maps / feature maps (output images)
conv1_1 = tf.layers.conv2d(x_image, 32, (3, 3),  # 32 output channels, 3x3 kernel
                           padding='same', activation=tf.nn.relu, name='conv1_1')
conv1_2 = tf.layers.conv2d(conv1_1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv1_2')
# 16 * 16
pooling1 = tf.layers.max_pooling2d(conv1_2, (2, 2), (2, 2),  # 2x2 kernel, stride 2
                                   name='pool1')

conv2_1 = tf.layers.conv2d(pooling1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv2_1')
conv2_2 = tf.layers.conv2d(conv2_1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv2_2')
# 8 * 8
pooling2 = tf.layers.max_pooling2d(conv2_2, (2, 2), (2, 2), name='pool2')

conv3_1 = tf.layers.conv2d(pooling2, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv3_1')
conv3_2 = tf.layers.conv2d(conv3_1, 32, (3, 3),
                           padding='same', activation=tf.nn.relu, name='conv3_2')
# 4 * 4 * 32
pooling3 = tf.layers.max_pooling2d(conv3_2, (2, 2), (2, 2), name='pool3')

# [None, 4 * 4 * 32]
flatten = tf.layers.flatten(pooling3)
y_ = tf.layers.dense(flatten, 10)

# y_ -> softmax
# y -> one_hot
# loss = ylogy_
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)

# indices
predict = tf.argmax(y_, 1)
# [1,0,1,1,1,0,0,0]
correct_prediction = tf.equal(predict, y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float64))

with tf.name_scope('train_op'):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

# `name` opens a separate name scope per layer so the summaries do not clash.
def variable_summary(var, name):
    with tf.name_scope(name):
        mean = tf.reduce_mean(var)  # mean
        with tf.name_scope('stddev'):
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
        tf.summary.scalar('mean', mean)
        tf.summary.scalar('stddev', stddev)
        tf.summary.scalar('min', tf.reduce_min(var))
        tf.summary.scalar('max', tf.reduce_max(var))
        tf.summary.histogram('histogram', var)  # histogram

with tf.name_scope('summary'):
    variable_summary(conv1_1, 'conv1_1')
    variable_summary(conv1_2, 'conv1_2')
    variable_summary(conv2_1, 'conv2_1')
    variable_summary(conv2_2, 'conv2_2')
    variable_summary(conv3_1, 'conv3_1')
    variable_summary(conv3_2, 'conv3_2')
    # merge_all() below will aggregate all of these as well

loss_summary = tf.summary.scalar('loss', loss)
accuracy_summary = tf.summary.scalar('accuracy', accuracy)
# x_image was normalized to (-1, 1) above, but tf.summary.image expects pixel
# values in 0-255, so map it back before logging, otherwise the images look wrong.
source_image = (x_image + 1) * 127.5
inputs_summary = tf.summary.image('inputs_summary', source_image)

merged_summary = tf.summary.merge_all()
merged_summary_test = tf.summary.merge([loss_summary, accuracy_summary])

LOG_DIR = '.'
run_label = 'run_vgg_tensorboard'
run_dir = os.path.join(LOG_DIR, run_label)
if not os.path.exists(run_dir):
    os.mkdir(run_dir)
train_log_dir = os.path.join(run_dir, 'train')
test_log_dir = os.path.join(run_dir, 'test')
if not os.path.exists(train_log_dir):
    os.mkdir(train_log_dir)

init = tf.global_variables_initializer()
batch_size = 20
train_steps = 10000
test_steps = 100
output_summary_every_steps = 100

# train 10k: 73.4%
with tf.Session() as sess:
    sess.run(init)
    # train and test summaries go to separate folders, one writer each
    train_writer = tf.summary.FileWriter(train_log_dir, sess.graph)
    test_writer = tf.summary.FileWriter(test_log_dir)
    fixed_test_batch_data, fixed_test_batch_labels = test_data.next_batch(batch_size)
    for i in range(train_steps):
        batch_data, batch_labels = train_data.next_batch(batch_size)
        eval_ops = [loss, accuracy, train_op]
        should_output_summary = ((i + 1) % output_summary_every_steps == 0)
        if should_output_summary:
            eval_ops.append(merged_summary)
        eval_ops_results = sess.run(
            eval_ops,
            feed_dict={x: batch_data, y: batch_labels})
        loss_val, acc_val = eval_ops_results[0:2]
        if should_output_summary:
            train_summary_str = eval_ops_results[-1]
            train_writer.add_summary(train_summary_str, i + 1)
            test_summary_str = sess.run(
                [merged_summary_test],
                feed_dict={x: fixed_test_batch_data,
                           y: fixed_test_batch_labels})[0]
            test_writer.add_summary(test_summary_str, i + 1)
        if (i + 1) % 100 == 0:
            print('[Train] Step: %d, loss: %4.5f, acc: %4.5f'
                  % (i + 1, loss_val, acc_val))
        if (i + 1) % 1000 == 0:
            test_data = CifarData(test_filenames, False)
            all_test_acc_val = []
            for j in range(test_steps):
                test_batch_data, test_batch_labels = test_data.next_batch(batch_size)
                test_acc_val = sess.run(
                    [accuracy],
                    feed_dict={x: test_batch_data, y: test_batch_labels})
                all_test_acc_val.append(test_acc_val)
            test_acc = np.mean(all_test_acc_val)
            print('[Test ] Step: %d, acc: %4.5f' % (i + 1, test_acc))

 

Parsing the log files

Several folders can be opened at the same time, but then each folder has to be given a name:

tensorboard --logdir=train:run_vgg_tensorboard/train,test:run_vgg_tensorboard/test

Both runs then show up in TensorBoard and can be toggled on and off with a mouse click.

Besides the scalar curves, the IMAGES, GRAPHS, DISTRIBUTIONS and HISTOGRAMS tabs show the logged input images, the computation graph, and the per-layer distributions and histograms.

 

 

 

