Overfitting is a phenomenon that can occur when a neural network is trained on too little data, or when its output comes to depend too heavily on a few specific neurons. An overfitted network fits the training samples closely but does not generalize, so its predictions on new data are inaccurate. Common ways to counter it include:
1. Increase the amount of training data, so the network cannot simply memorize a handful of samples.
2. L1, L2, ... regularization: add a penalty based on the magnitude of the learned parameters to the loss, so the loss grows when the parameters grow too large; this penalty mechanism keeps overfitting in check (a minimal sketch follows this list).
3. Dropout regularization: during training, randomly mask part of the network's connections so that the network is never complete; the predictions then cannot rely too heavily on any particular neurons. The script below compares a plain network with a dropout network on the same data.
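For the L2 case, here is a minimal sketch of the penalty idea; the tiny Linear model, the random data, and the 1e-4 coefficient are placeholders chosen only for illustration. PyTorch optimizers expose a built-in weight_decay argument that applies this penalty for you, and the same effect can also be written into the loss by hand.
import torch

net = torch.nn.Linear(1, 1)      # placeholder model, only for illustration
x = torch.randn(16, 1)
y = 2 * x
# Option 1: built-in L2 penalty via the optimizer's weight_decay argument
opt = torch.optim.Adam(net.parameters(), lr=0.01, weight_decay=1e-4)
# Option 2: add the penalty to the loss by hand (then set weight_decay=0 above)
mse = torch.nn.functional.mse_loss(net(x), y)
l2_penalty = sum((p ** 2).sum() for p in net.parameters())
loss = mse + 1e-4 * l2_penalty   # larger parameters -> larger loss
loss.backward()
opt.step()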
import torch
import matplotlib.pyplot as plt
N_SAMPLES = 20
N_HIDDEN = 300
# training data
x = torch.unsqueeze(torch.linspace(-1, 1, N_SAMPLES), 1)
y = x + 0.3*torch.normal(torch.zeros(N_SAMPLES, 1), torch.ones(N_SAMPLES, 1))
# test data
test_x = torch.unsqueeze(torch.linspace(-1, 1, N_SAMPLES), 1)
test_y = test_x + 0.3*torch.normal(torch.zeros(N_SAMPLES, 1), torch.ones(N_SAMPLES, 1))
# visualize the training and test sets
plt.scatter(x.data.numpy(), y.data.numpy(), c='magenta', s=50, alpha=0.5, label='train')
plt.scatter(test_x.data.numpy(), test_y.data.numpy(), c='cyan', s=50, alpha=0.5, label='test')
plt.legend(loc='upper left')
plt.ylim((-2.5, 2.5))
plt.show()
# Network 1: no dropout regularization
net_overfitting = torch.nn.Sequential(
    torch.nn.Linear(1, N_HIDDEN),
    torch.nn.ReLU(),
    torch.nn.Linear(N_HIDDEN, N_HIDDEN),
    torch.nn.ReLU(),
    torch.nn.Linear(N_HIDDEN, 1),
)
# Network 2: with dropout regularization
net_dropped = torch.nn.Sequential(
    torch.nn.Linear(1, N_HIDDEN),
    torch.nn.Dropout(0.5),  # randomly zero 50% of this layer's outputs during training
    torch.nn.ReLU(),
    torch.nn.Linear(N_HIDDEN, N_HIDDEN),
    torch.nn.Dropout(0.5),  # randomly zero 50% of this layer's outputs during training
    torch.nn.ReLU(),
    torch.nn.Linear(N_HIDDEN, 1),
)
# choose the optimizers
optimizer_ofit = torch.optim.Adam(net_overfitting.parameters(), lr=0.01)
optimizer_drop = torch.optim.Adam(net_dropped.parameters(), lr=0.01)
# choose the loss function (mean squared error)
loss_func = torch.nn.MSELoss()
plt.ion()
for t in range(500):
    # the standard forward pass / loss / backward pass / update sequence
    pred_ofit = net_overfitting(x)
    pred_drop = net_dropped(x)
    loss_ofit = loss_func(pred_ofit, y)
    loss_drop = loss_func(pred_drop, y)
    optimizer_ofit.zero_grad()
    optimizer_drop.zero_grad()
    loss_ofit.backward()
    loss_drop.backward()
    optimizer_ofit.step()
    optimizer_drop.step()
    if t % 10 == 0:
        # switch to evaluation mode so dropout is disabled while plotting the current fit
        net_overfitting.eval()
        net_dropped.eval()
        # visualization
        plt.cla()
        test_pred_ofit = net_overfitting(test_x)
        test_pred_drop = net_dropped(test_x)
        plt.scatter(x.data.numpy(), y.data.numpy(), c='magenta', s=50, alpha=0.3, label='train')
        plt.scatter(test_x.data.numpy(), test_y.data.numpy(), c='cyan', s=50, alpha=0.3, label='test')
        plt.plot(test_x.data.numpy(), test_pred_ofit.data.numpy(), 'r-', lw=3, label='overfitting')
        plt.plot(test_x.data.numpy(), test_pred_drop.data.numpy(), 'b--', lw=3, label='dropout(50%)')
        plt.text(0, -1.2, 'overfitting loss=%.4f' % loss_func(test_pred_ofit, test_y).data.numpy(),
                 fontdict={'size': 20, 'color': 'red'})
        plt.text(0, -1.5, 'dropout loss=%.4f' % loss_func(test_pred_drop, test_y).data.numpy(),
                 fontdict={'size': 20, 'color': 'blue'})
        plt.legend(loc='upper left')
        plt.ylim((-2.5, 2.5))
        plt.pause(0.1)
        # switch back to training mode and continue training
        net_overfitting.train()
        net_dropped.train()
plt.ioff()
plt.show()
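As a side note on the eval()/train() switching above: nn.Dropout only zeroes activations while its module is in training mode; in evaluation mode it acts as an identity mapping. That is why the networks are put into eval() before the test predictions are plotted and back into train() afterwards. A minimal sketch of that behaviour:
import torch

drop = torch.nn.Dropout(0.5)
x = torch.ones(1, 8)
drop.train()    # training mode: roughly half the values are zeroed, the rest scaled by 1/(1-0.5) = 2
print(drop(x))  # e.g. tensor([[2., 0., 2., 2., 0., 0., 2., 2.]])
drop.eval()     # evaluation mode: dropout does nothing
print(drop(x))  # tensor([[1., 1., 1., 1., 1., 1., 1., 1.]])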