TensorFlow: Deep Learning with a Simple Convolutional Neural Network on Fashion-MNIST


Fashion MNIST Classification

The Fashion MNIST dataset is nowadays referred to as the "Hello World" of deep learning, having taken over that role from handwritten-digit recognition.

The reason is presumably that, as deep learning has advanced, handwritten-digit recognition has become too easy.

Official example

Official code

Here we try a convolutional neural network for the classification instead.

Official convolutional neural network reference

Loading the Fashion MNIST Data

import tensorflow as tf
from tensorflow import keras

# Helper libraries
import numpy as np
import matplotlib.pyplot as plt

print(tf.__version__)

fashion_mnist = tf.keras.datasets.fashion_mnist

# 60,000 training and 10,000 test images, 28x28 grayscale, labels 0-9
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# Scale pixel values from [0, 255] to [0, 1]
train_images, test_images = train_images / 255.0, test_images / 255.0
print(train_images.shape)  # (60000, 28, 28)
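
As a quick sanity check that images and labels line up, the first few training samples can be plotted with their class names (a minimal sketch using the matplotlib import above):

plt.figure(figsize=(8, 4))
for i in range(8):
    plt.subplot(2, 4, i + 1)  # 2x4 grid of thumbnails
    plt.imshow(train_images[i], cmap='binary')
    plt.title(class_names[train_labels[i]])
    plt.axis('off')
plt.show()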

Defining the Convolutional Neural Network Model

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))

# An alternative, shallower variant (two conv blocks, 5x5 second kernel):
# model = tf.keras.models.Sequential([
#     tf.keras.layers.Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)),
#     tf.keras.layers.MaxPooling2D(),
#     tf.keras.layers.Conv2D(64, kernel_size=(5, 5), activation='relu'),
#     tf.keras.layers.MaxPooling2D(),
#     tf.keras.layers.Flatten(),
#     tf.keras.layers.Dense(64, activation='relu'),
#     tf.keras.layers.Dense(10, activation='softmax'),
# ])
model.summary()
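
model.summary() makes the shape flow explicit: the 28×28×1 input becomes 26×26×32 after the first 3×3 convolution (valid padding trims a pixel on each side), 13×13×32 after pooling, then 11×11×64, 5×5×64, and finally 3×3×64, which flattens to 576 features feeding the two Dense layers. In total the model has roughly 93k trainable parameters.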

Training the Model

model.compile(optimizer='adam',
              loss=tf.keras.losses.sparse_categorical_crossentropy,
              metrics=['accuracy'])

# Conv2D expects a channel axis, so reshape (N, 28, 28) -> (N, 28, 28, 1)
history = model.fit(train_images.reshape(-1, 28, 28, 1), train_labels, epochs=10,
                    validation_data=(test_images.reshape(-1, 28, 28, 1), test_labels),
                    batch_size=1000)
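
The History object returned by fit records per-epoch metrics; a minimal sketch of plotting the learning curves (the 'accuracy'/'val_accuracy' key names match the TF 2.x log below):

plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()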

Training Results

As you can see, with the same 10 epochs of training, the convolutional network reaches

val_loss: 0.3499 - val_accuracy: 0.8765

which is somewhat more accurate than the official tutorial's plain dense network:

loss: 0.3726 - accuracy: 0.8635

Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 105s 2ms/sample - loss: 1.1311 - accuracy: 0.6256 - val_loss: 0.6830 - val_accuracy: 0.7482
Epoch 2/10
60000/60000 [==============================] - 105s 2ms/sample - loss: 0.5803 - accuracy: 0.7813 - val_loss: 0.5385 - val_accuracy: 0.8018
Epoch 3/10
60000/60000 [==============================] - 105s 2ms/sample - loss: 0.4933 - accuracy: 0.8199 - val_loss: 0.4858 - val_accuracy: 0.8226
Epoch 4/10
60000/60000 [==============================] - 106s 2ms/sample - loss: 0.4413 - accuracy: 0.8405 - val_loss: 0.4407 - val_accuracy: 0.8419
Epoch 5/10
60000/60000 [==============================] - 106s 2ms/sample - loss: 0.4057 - accuracy: 0.8553 - val_loss: 0.4226 - val_accuracy: 0.8514
Epoch 6/10
60000/60000 [==============================] - 105s 2ms/sample - loss: 0.3795 - accuracy: 0.8643 - val_loss: 0.3933 - val_accuracy: 0.8591
Epoch 7/10
60000/60000 [==============================] - 108s 2ms/sample - loss: 0.3583 - accuracy: 0.8727 - val_loss: 0.3835 - val_accuracy: 0.8633
Epoch 8/10
60000/60000 [==============================] - 106s 2ms/sample - loss: 0.3437 - accuracy: 0.8780 - val_loss: 0.3694 - val_accuracy: 0.8641
Epoch 9/10
60000/60000 [==============================] - 106s 2ms/sample - loss: 0.3322 - accuracy: 0.8810 - val_loss: 0.3570 - val_accuracy: 0.8701
Epoch 10/10
60000/60000 [==============================] - 116s 2ms/sample - loss: 0.3258 - accuracy: 0.8831 - val_loss: 0.3499 - val_accuracy: 0.8765
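
The final numbers can also be confirmed outside the fit log by evaluating on the test set directly (a minimal sketch; with metrics=['accuracy'] the call returns the loss and accuracy):

test_loss, test_acc = model.evaluate(test_images.reshape(-1, 28, 28, 1), test_labels, verbose=2)
print(test_acc)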

Making a Simple Prediction

And sure enough, it's a coat.

# Softmax probabilities for the 11th test image
pred = model.predict(test_images[10].reshape(-1, 28, 28, 1))
print(pred)

print(test_labels[10])  # ground-truth label index

plt.figure()
plt.imshow(test_images[10])
plt.colorbar()
plt.grid(False)
plt.show()

# Map the argmax of the softmax output to its class name
print(class_names[np.argmax(pred)])
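
As a cross-check against the val_accuracy above, predictions over the whole test set can be compared with the labels (predictions is a name introduced here, not from the original post):

predictions = model.predict(test_images.reshape(-1, 28, 28, 1))
print((predictions.argmax(axis=1) == test_labels).mean())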
