

Python Machine Learning: Cat vs. Dog Classification with PyTorch

1. Environment Configuration

Install Anaconda

For the detailed installation process, please refer to the dedicated Anaconda installation article.

Configure PyTorch by installing torch and torchvision (the commands below use the Tsinghua PyPI mirror):

pip install -i https://pypi.tuna.tsinghua.edu.cn/simple torch
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple torchvision
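
After installation, a quick check (a minimal sketch) confirms that both packages import correctly and reports whether a CUDA-capable GPU is visible:

# Verify the installation: print the package versions and CUDA availability.
import torch
import torchvision

print(torch.__version__, torchvision.__version__)
print('CUDA available:', torch.cuda.is_available())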

2. Preparing the Dataset

2.1 Downloading the dataset

Download address of the dataset on the Kaggle website:
https://www.kaggle.com/lizhensheng/-2000

2.2 Organizing the dataset

Unzip the downloaded archive and sort the images into train, validation, and test folders, each of which contains cats and dogs subfolders, as sketched in the helper script below.
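
A small helper script can produce that layout. The sketch below assumes the extracted archive is a flat folder of files named cat.0.jpg, dog.0.jpg, and so on (the usual Kaggle naming); the source path and index ranges are placeholders to adjust for the actual download.

# Split the flat image folder into train/validation/test, each with cats/ and dogs/ subfolders.
import os
import shutil

src_dir = 'E:\\Cat_And_Dog\\kaggle\\original'                 # assumed location of the unzipped images
base_dir = 'E:\\Cat_And_Dog\\kaggle\\cats_and_dogs_small'     # target root used by the code below

splits = {'train': range(0, 1000), 'validation': range(1000, 1500), 'test': range(1500, 2000)}

for split, indices in splits.items():
    for category in ('cats', 'dogs'):
        dst = os.path.join(base_dir, split, category)
        os.makedirs(dst, exist_ok=True)
        prefix = category[:-1]                                # 'cat' or 'dog' file-name prefix
        for i in indices:
            fname = '{}.{}.jpg'.format(prefix, i)
            shutil.copyfile(os.path.join(src_dir, fname), os.path.join(dst, fname))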

 

3. A Cat vs. Dog Classification Example

Import the required libraries

# Import libraries
import torch.nn.functional as F
import torch.optim as optim
import torch
import torch.nn as nn
import torch.nn.parallel
 
import torch.optim
import torch.utils.data
import torch.utils.data.distributed
import torchvision.transforms as transforms
import torchvision.datasets as datasets

Set the hyperparameters

# Set the hyperparameters
# number of samples per batch
BATCH_SIZE = 20
# number of training epochs
EPOCHS = 10
# use the GPU if available, otherwise fall back to the CPU
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

Image preprocessing and augmentation

# Data preprocessing and augmentation
 
transform = transforms.Compose([
    transforms.Resize(100),                  # scale the shorter side to 100 px
    transforms.RandomVerticalFlip(),
    transforms.RandomCrop(50),               # random 50 x 50 crop
    transforms.RandomResizedCrop(150),       # final network input size: 150 x 150
    transforms.ColorJitter(brightness=0.5, contrast=0.5, hue=0.5),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
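
Because the last size-changing step is RandomResizedCrop(150), every sample reaches the network as a 3 x 150 x 150 tensor, which is the input size the fully connected layer of the model below assumes. A quick check with a dummy image (a sketch) confirms this:

# Check the output shape of the preprocessing pipeline with a dummy RGB image.
from PIL import Image

dummy = Image.new('RGB', (200, 200))
print(transform(dummy).shape)   # torch.Size([3, 150, 150])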

Load the dataset and create the data loaders

# Load the training and validation datasets
 
dataset_train = datasets.ImageFolder('E:\\Cat_And_Dog\\kaggle\\cats_and_dogs_small\\train', transform)
 
print(dataset_train.imgs)
 
# Mapping from class folder names to labels
 
print(dataset_train.class_to_idx)
 
dataset_test = datasets.ImageFolder('E:\\Cat_And_Dog\\kaggle\\cats_and_dogs_small\\validation', transform)
 
# Mapping from class folder names to labels
 
print(dataset_test.class_to_idx)
 
# Create the data loaders
 
train_loader = torch.utils.data.DataLoader(dataset_train, batch_size=BATCH_SIZE, shuffle=True)
 
test_loader = torch.utils.data.DataLoader(dataset_test, batch_size=BATCH_SIZE, shuffle=True)
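
Pulling a single batch from the loader (a minimal sketch) shows the tensor shapes the network will receive and the integer labels assigned by ImageFolder (cats -> 0, dogs -> 1):

# Inspect one batch: images are [BATCH_SIZE, 3, 150, 150], labels are 0 or 1.
images, labels = next(iter(train_loader))
print(images.shape, labels.shape)
print(labels[:10])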

Define the network model

# Define the network
class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 32, 3)
        self.max_pool1 = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(32, 64, 3) 
        self.max_pool2 = nn.MaxPool2d(2) 
        self.conv3 = nn.Conv2d(64, 64, 3) 
        self.conv4 = nn.Conv2d(64, 64, 3) 
        self.max_pool3 = nn.MaxPool2d(2) 
        self.conv5 = nn.Conv2d(64, 128, 3) 
        self.conv6 = nn.Conv2d(128, 128, 3) 
        self.max_pool4 = nn.MaxPool2d(2) 
        self.fc1 = nn.Linear(4608, 512)   # 128 channels * 6 * 6 spatial positions = 4608 features
        self.fc2 = nn.Linear(512, 1)      # single sigmoid output: probability of class 1 (dog)
  
    def forward(self, x): 
        in_size = x.size(0) 
        x = self.conv1(x) 
        x = F.relu(x) 
        x = self.max_pool1(x) 
        x = self.conv2(x) 
        x = F.relu(x) 
        x = self.max_pool2(x) 
        x = self.conv3(x) 
        x = F.relu(x) 
        x = self.conv4(x) 
        x = F.relu(x) 
        x = self.max_pool3(x) 
        x = self.conv5(x) 
        x = F.relu(x) 
        x = self.conv6(x) 
        x = F.relu(x)
        x = self.max_pool4(x) 
        # Flatten the feature maps to [batch, 4608]
        x = x.view(in_size, -1)
        x = self.fc1(x)
        x = F.relu(x) 
        x = self.fc2(x) 
        x = torch.sigmoid(x) 
        return x
 
modellr = 1e-4
 
# Instantiate the model and move it to the selected device
 
model = ConvNet().to(DEVICE)
 
# Use the Adam optimizer with a small learning rate
 
optimizer = optim.Adam(model.parameters(), lr=modellr)
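
A dummy forward pass (a sketch) is an easy way to confirm that a 150 x 150 input really does reach fc1 with 4608 features and that the model outputs one probability per image:

# Push a zero-filled batch of one image through the untrained network.
dummy_input = torch.zeros(1, 3, 150, 150).to(DEVICE)
print(model(dummy_input).shape)   # torch.Size([1, 1])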

Adjust the learning rate

def adjust_learning_rate(optimizer, epoch):
    """Decay the learning rate by a factor of 10 every 5 epochs."""
    modellrnew = modellr * (0.1 ** (epoch // 5))
    print("lr:", modellrnew)
    for param_group in optimizer.param_groups:
        param_group['lr'] = modellrnew
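
For reference, PyTorch's built-in StepLR scheduler implements essentially the same policy; a sketch of the equivalent setup (not used in the rest of this article):

# Equivalent built-in scheduler: multiply the learning rate by 0.1 every 5 epochs.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
# In the training loop, call scheduler.step() once per epoch instead of adjust_learning_rate().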

Define the training and validation procedures

# Training loop
def train(model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        # BCE expects float targets with the same shape as the output: [batch, 1].
        data, target = data.to(device), target.to(device).float().unsqueeze(1)
        optimizer.zero_grad()
        output = model(data)
        loss = F.binary_cross_entropy(output, target)
        loss.backward()
        optimizer.step()
        # Report progress every 10 batches.
        if (batch_idx + 1) % 10 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, (batch_idx + 1) * len(data), len(train_loader.dataset),
                100. * (batch_idx + 1) / len(train_loader), loss.item()))
# Validation loop
def val(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device).float().unsqueeze(1)
            output = model(data)
            # Accumulate the mean BCE loss of each batch.
            test_loss += F.binary_cross_entropy(output, target, reduction='mean').item()
            # Threshold the sigmoid outputs at 0.5 to obtain hard 0/1 predictions.
            pred = torch.tensor([[1] if num[0] >= 0.5 else [0] for num in output]).to(device)
            correct += pred.eq(target.long()).sum().item()

    # Average the accumulated batch losses over the number of batches.
    test_loss /= len(test_loader)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

Train and save the model

# Train for EPOCHS epochs, validating after each one, then save the whole model.
for epoch in range(1, EPOCHS + 1):
    adjust_learning_rate(optimizer, epoch)
    train(model, DEVICE, train_loader, optimizer, epoch)
    val(model, DEVICE, test_loader)

torch.save(model, 'E:\\Cat_And_Dog\\kaggle\\model.pth')
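
torch.save(model, ...) pickles the whole module object, which ties the checkpoint to this exact class definition. Saving only the weights is a more portable alternative; a sketch (the model_weights.pth file name is just an example):

# Alternative: save only the parameters and rebuild the model when loading.
torch.save(model.state_dict(), 'E:\\Cat_And_Dog\\kaggle\\model_weights.pth')

# Later, in another script:
# model = ConvNet().to(DEVICE)
# model.load_state_dict(torch.load('E:\\Cat_And_Dog\\kaggle\\model_weights.pth'))
# model.eval()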

Training results

The loop above prints the learning rate, the training loss every 10 batches, and the validation loss and accuracy for each epoch.

 

4. Running Predictions on Test Images

Prepare an image and run a prediction. The script below redefines the network, reloads the saved model, and classifies a single test image.

from __future__ import print_function, division
from PIL import Image
 
from torchvision import transforms
import torch.nn.functional as F
 
import torch
import torch.nn as nn
import torch.nn.parallel
# Define the network (must match the architecture used during training)
class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 32, 3)
        self.max_pool1 = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(32, 64, 3)
        self.max_pool2 = nn.MaxPool2d(2)
        self.conv3 = nn.Conv2d(64, 64, 3)
        self.conv4 = nn.Conv2d(64, 64, 3)
        self.max_pool3 = nn.MaxPool2d(2)
        self.conv5 = nn.Conv2d(64, 128, 3)
        self.conv6 = nn.Conv2d(128, 128, 3)
        self.max_pool4 = nn.MaxPool2d(2)
        self.fc1 = nn.Linear(4608, 512)
        self.fc2 = nn.Linear(512, 1)
 
    def forward(self, x):
        in_size = x.size(0)
        x = self.conv1(x)
        x = F.relu(x)
        x = self.max_pool1(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = self.max_pool2(x)
        x = self.conv3(x)
        x = F.relu(x)
        x = self.conv4(x)
        x = F.relu(x)
        x = self.max_pool3(x)
        x = self.conv5(x)
        x = F.relu(x)
        x = self.conv6(x)
        x = F.relu(x)
        x = self.max_pool4(x)
        # Flatten the feature maps to [batch, 4608]
        x = x.view(in_size, -1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = torch.sigmoid(x)
        return x
# Path of the saved model
model_save_path = 'E:\\Cat_And_Dog\\kaggle\\model.pth'
 
# ------------------------ Load data --------------------------- #
# Test-time preprocessing.
# Note: this reuses the same pipeline as training, including the random augmentation;
# a deterministic alternative is sketched after this block.
transform_test = transforms.Compose([
    transforms.Resize(100),
    transforms.RandomVerticalFlip(),
    transforms.RandomCrop(50),
    transforms.RandomResizedCrop(150),
    transforms.ColorJitter(brightness=0.5, contrast=0.5, hue=0.5),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
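
If deterministic predictions are preferred, a fixed resize and center crop that still yields the 150 x 150 input the network expects could be used instead; a sketch (not what the original code above uses):

# Deterministic test-time preprocessing: no random augmentation.
transform_test_deterministic = transforms.Compose([
    transforms.Resize(160),
    transforms.CenterCrop(150),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])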
 
 
class_names = ['cat', 'dog']  # Order matters: it must match class_to_idx from training (cats -> 0, dogs -> 1)
 
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
 
# ------------------------ Load the model and run inference --------------------------- #
model = torch.load(model_save_path)
model.eval()
# print(model)
 
image_PIL = Image.open('E:\\Cat_And_Dog\\kaggle\\cats_and_dogs_small\\test\\cats\\cat.1500.jpg')

image_tensor = transform_test(image_PIL)
# Add a batch dimension; equivalent to image_tensor = torch.unsqueeze(image_tensor, 0)
image_tensor.unsqueeze_(0)
# Move the input to the same device as the model; otherwise the forward pass fails on GPU.
image_tensor = image_tensor.to(device)

out = model(image_tensor)
# Threshold the sigmoid output at 0.5: index 0 -> 'cat', index 1 -> 'dog'.
pred = 1 if out.item() >= 0.5 else 0
print(class_names[pred])
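
To check more than one image, the same steps can be wrapped in a loop; a sketch, assuming the test folder layout created earlier:

# Classify the first few images of the cats test folder and print the predicted labels.
import os

test_dir = 'E:\\Cat_And_Dog\\kaggle\\cats_and_dogs_small\\test\\cats'
for fname in sorted(os.listdir(test_dir))[:5]:
    img = Image.open(os.path.join(test_dir, fname)).convert('RGB')
    tensor = transform_test(img).unsqueeze(0).to(device)
    with torch.no_grad():
        prob = model(tensor).item()
    print(fname, class_names[int(prob >= 0.5)], round(prob, 3))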

Prediction result


Judging from the actual training runs, the overall accuracy is not high. Further testing shows that the model only identifies cats reliably and tends to misclassify dogs.

5. References

Implementing cat vs. dog classification

This concludes the article on implementing cat vs. dog classification with PyTorch.
