LSTM - Predicting the same constant values after a while

Problem description

I have a variable that I want to forecast 30 years into the future. Unfortunately, I don't have many samples.
# imports required by the snippets below
import gc
import numpy as np
import pandas as pd
from numpy.random import seed
from tensorflow import set_random_seed
from keras.models import Sequential
from keras.layers import Dense, LSTM, CuDNNLSTM, Masking
from keras.backend import clear_session

df = pd.DataFrame({'FISCAL_YEAR': [1979, 1980, 1981, 1982, 1983, 1984,
                                   1985, 1986, 1987, 1988, 1989, 1990,
                                   1991, 1992, 1993, 1994, 1995, 1996,
                                   1997, 1998, 1999, 2000, 2001, 2002,
                                   2003, 2004, 2005, 2006, 2007, 2008,
                                   2009, 2010, 2011, 2012, 2013, 2014,
                                   2015, 2016, 2017, 2018, 2019],
                   'VALS': [1341.9, 1966.95, 2085.75, 2087.1000000000004,
                            2760.75, 3461.4, 3156.3, 3061.8,
                            2309.8500000000004, 2320.65, 2535.3,
                            2964.6000000000004, 2949.75, 2339.55,
                            2327.4, 2571.75, 2299.05, 1560.6000000000001,
                            1370.25, 1301.4, 1215.0, 5691.6, 6281.55,
                            6529.950000000001, 17666.100000000002,
                            14467.95, 15205.050000000001, 14717.7,
                            14426.1, 12946.5, 13000.5, 12761.550000000001,
                            13076.1, 13444.650000000001, 13444.650000000001,
                            13321.800000000001, 13536.45, 13331.25,
                            12630.6, 12741.300000000001, 12658.95]})
Here is my code:
def build_model(n_neurons, dropout, s, cudnn=False):
    lstm = Sequential()
    if cudnn:
        lstm.add(CuDNNLSTM(n_neurons))
        n_epochs = 200
    else:
        lstm.add(Masking(mask_value=-1, input_shape=(s[1], s[2])))
        lstm.add(LSTM(n_neurons, dropout=dropout))
        n_epochs = 500
    lstm.add(Dense(1))
    #lstm.add(Activation('softmax'))
    lstm.compile(loss='mean_squared_error', optimizer='adam')
    return lstm, n_epochs
def create_df(dfin, fwd, lstmws):
    ''' Input Normalization '''
    idx = dfin.FISCAL_YEAR.values[fwd:]
    dfx = dfin[[varn]].copy()
    dfy = dfin[[varn]].copy()
    # LSTM window - use last lstmws values
    for i in range(0, lstmws - 1):
        dfx = dfx.join(dfin[[varn]].shift(-i - 1), how='left', rsuffix='{:02d}'.format(i + 1))
    dfx = (dfx - vmnx).divide(vmxx - vmnx)
    dfx.fillna(-1, inplace=True)  # replace missing values with -1
    dfy = (dfy - vmnx).divide(vmxx - vmnx)
    dfy.fillna(-1, inplace=True)  # replace missing values with -1
    return dfx, dfy, idx
def forecast(dfin, dfx, lstm, idx, gapyr=1):
    ''' Model Forecast '''
    xhat = dfx.values
    xhat = xhat.reshape(xhat.shape[0], lstmws, int(xhat.shape[1] / lstmws))
    yhat = lstm.predict(xhat)
    yhat = yhat * (vmxx - vmnx) + vmnx
    dfout = pd.DataFrame(list(zip(idx + gapyr, yhat.reshape(1, -1)[0])), columns=['FISCAL_YEAR', varn])
    dfout = pd.concat([dfin.head(1), dfout], axis=0).reset_index(drop=True)
    # append last prediction to X and use it for the next prediction
    dfin = pd.concat([dfin, dfout.tail(1)], axis=0).reset_index(drop=True)
    return dfin
def lstm_training(dfin, lstmws, fwd, num_years, batchsize=4, cudnn=False, n_neurons=47, dropout=0.05, retrain=False):
    ''' LSTM Parameter '''
    seed(2018)
    set_random_seed(2018)
    gapyr = 1  # Forecast +1 Year
    dfx, dfy, idx = create_df(dfin, fwd, lstmws)
    X, y = dfx.iloc[fwd:-gapyr].values, dfy[fwd + gapyr:].values[:, 0]
    X, y = X.reshape(X.shape[0], lstmws, int(X.shape[1] / lstmws)), y.reshape(len(y), 1)
    lstm, n_epochs = build_model(n_neurons, dropout, X.shape, cudnn)
    ''' LSTM Training Start '''
    if batchsize == 1:
        history_i = lstm.fit(X, y, epochs=25, batch_size=batchsize, verbose=0, shuffle=False)
    else:
        history_i = lstm.fit(X, y, epochs=n_epochs, batch_size=batchsize, verbose=0, shuffle=False)
    dfin = forecast(dfin, dfx, lstm, idx)
    lstm.reset_states()
    if not retrain:
        for fwd in range(1, num_years):
            dfx, dfy, idx = create_df(dfin, fwd, lstmws)
            dfin = forecast(dfin, dfx, lstm, idx)
            lstm.reset_states()
    del dfy, X, y, lstm
    gc.collect()
    clear_session()
    return dfin, history_i
varn = "VALS"
#LSTM-window
lstmws = 10
vmnx,vmxx = df[varn].astype(float).min(),df[varn].astype(float).max()
dfin,history_i = lstm_training(dfin,lstmws,0,2051-2018)
In my first version I retrained the model after every newly appended prediction, and the predictions never converged to a constant. But because retraining after every new observation is very time-consuming, I had to change that.
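For reference, that slow retraining variant can be reconstructed on top of the functions above roughly as follows; this is a hypothetical sketch of what is described, not the original code:

dfcur = df.copy()
for step in range(2051 - 2018):
    # rebuild and refit the model on all data seen so far (including
    # earlier predictions), then append a single one-year-ahead forecast
    dfcur, history_i = lstm_training(dfcur, lstmws, 0, 1)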
My results:
dfin.VALS.values
array([ 1341.9 , 1966.95 , 2085.75 , 2087.1 ,
2760.75 , 3461.4 , 3156.3 , 3061.8 ,
2309.85 , 2320.65 , 2535.3 , 2964.6 ,
2949.75 , 2339.55 , 2327.4 , 2571.75 ,
2299.05 , 1560.6 , 1370.25 , 1301.4 ,
1215. , 5691.6 , 6281.55 , 6529.95 ,
17666.1 , 14467.95 , 15205.05 , 14717.7 ,
14426.1 , 12946.5 , 13000.5 , 12761.55 ,
13076.1 , 13444.65 , 13444.65 , 13321.8 ,
13536.45 , 13331.25 , 12630.6 , 12741.3 ,
12658.95 , 10345.97167969, 12192.12792969, 13074.4296875 ,
13264.40917969, 12956.1796875 , 12354.1953125 , 11659.03125 ,
11044.06933594, 10643.19921875, 10552.52246094, 10552.52246094,
10552.52246094, 10552.52246094, 10552.52246094, 10552.52246094,
10552.52246094, 10552.52246094, 10552.52246094, 10552.52246094,
10552.52246094, 10552.52246094, 10552.52246094, 10552.52246094,
10552.52246094, 10552.52246094, 10552.52246094, 10552.52246094,
10552.52246094, 10552.52246094, 10552.52246094, 10552.52246094,
10552.52246094, 10552.52246094])
How can I avoid getting the same prediction for the last 20+ years?

Edit:

I prepended more random data to see whether the small sample size was the problem, but after a while the predictions become constant again.
df0 = pd.DataFrame([range(1900,1979),list(np.random.rand(1979-1900)*(vmxx-vmnx)+vmnx)],index=["FISCAL_YEAR","VALS"]).T
df = pd.concat([df0,df])
df["FISCAL_YEAR"] = df["FISCAL_YEAR"].astype(int)
df.index = range(1900,2020)
One odd thing I observed is that the predictions become identical after 10 years, i.e. after the window size; and if I increase lstmws to 20, the predictions likewise collapse to a constant after 20 years:
lstmws = 20
Result:
{'FISCAL_YEAR': [2020, 2021, 2022, 2023, 2024, 2025, 2026, 2027, 2028, 2029,
                 2030, 2031, 2032, 2033, 2034, 2035, 2036, 2037, 2038, 2039,
                 2040, 2041, 2042, 2043, 2044, 2045, 2046, 2047, 2048, 2049,
                 2050, 2051, 2052],
 'VALS': [11183.32421875, 12388.28125, 13151.013671875, 12543.6796875,
          12590.0888671875, 12002.583984375, 11822.8857421875,
          11479.6572265625, 11423.1279296875, 11444.5751953125,
          11506.60546875, 11563.3173828125, 11595.0029296875,
          11599.8955078125, 11586.8037109375, 11571.337890625,
          11574.541015625, 11620.7900390625, 11734.2431640625,
          11934.216796875, 11934.216796875, 11934.216796875,
          11934.216796875, 11934.216796875, 11934.216796875,
          11934.216796875, 11934.216796875, 11934.216796875,
          11934.216796875, 11934.216796875, 11934.216796875,
          11934.216796875, 11934.216796875]}
Accepted answer

In my experience with LSTMs (I've been using them to generate dance sequences like this), I've found two things that particularly help keep the model from stagnating and predicting the same output.
Add a mixture density layer

First, it helps to use a mixture density network instead of an L2 loss (which is what you have). For details, read Christopher Bishop's paper on MDN layers, but in essence an L2 loss trains the model to predict the conditional average of y for a given input x. If a single x value has several possible outputs y0, y1, y2, each with some probability (as many complex systems do), you will want to look at an MDN layer with a negative log-likelihood loss. Here is the Keras implementation I'm using.

Reading your case more closely now, though, this may not help you, since you appear to be forecasting a time series in which, by definition, each x maps to a single y.
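For what it's worth, here is a minimal sketch of what an MDN head with a negative log-likelihood loss can look like, assuming TensorFlow 2.x Keras; the component count K_MIX, layer sizes, and input shape are illustrative choices, not taken from the linked implementation:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

K_MIX = 5  # number of mixture components (illustrative choice)

inp = layers.Input(shape=(10, 1))                      # (window, features)
h = layers.LSTM(47)(inp)
pi = layers.Dense(K_MIX, activation='softmax')(h)      # mixture weights
mu = layers.Dense(K_MIX)(h)                            # component means
sigma = layers.Dense(K_MIX, activation='softplus')(h)  # component stddevs
out = layers.Concatenate()([pi, mu, sigma])
mdn = Model(inp, out)

def mdn_nll(y_true, y_pred):
    # negative log-likelihood of y_true under the predicted Gaussian mixture
    pi, mu, sigma = tf.split(y_pred, 3, axis=-1)
    sigma = sigma + 1e-6  # numerical stability
    z = (y_true - mu) / sigma
    log_comp = -0.5 * tf.square(z) - tf.math.log(sigma) - 0.5 * np.log(2.0 * np.pi)
    return -tf.reduce_mean(tf.reduce_logsumexp(tf.math.log(pi + 1e-8) + log_comp, axis=-1))

mdn.compile(optimizer='adam', loss=mdn_nll)

At generation time you would sample a component according to pi and then sample from that component's Gaussian, rather than always emitting a single mean; that sampling is what keeps generated sequences from flattening out.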
Feed the LSTM longer sequences

Next, I found it helps to feed the LSTM the n sequence values that precede the value I'm trying to predict. The bigger n, the better the results I've seen (though training runs slower). Many of the papers I've read use the previous 1024 sequence values to predict the next sequence value.

You don't have many observations, but you could try feeding in the previous 8 observations to predict the next one; a sketch of this windowing follows.
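As a concrete sketch (make_windows is a hypothetical helper, not part of the question's code), building (samples, 8, 1) inputs from the question's df could look like this:

import numpy as np

def make_windows(series, n):
    # each sample holds the n values that precede its target value
    X, y = [], []
    for i in range(len(series) - n):
        X.append(series[i:i + n])
        y.append(series[i + n])
    return np.asarray(X).reshape(-1, n, 1), np.asarray(y)

# 8 previous observations per sample, as suggested above
X, y = make_windows(df['VALS'].values.astype(float), 8)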
Make sure the output data has the same structure as the training data

Finally, I ended up on this thread some years later because I was training a model with a categorical cross-entropy loss and one-hot vectors as input. When I used the trained model to generate sequences, I was doing:
# this predicts the same value over and over
predict_length = 100
sequence = X[0]
for i in range(predict_length):
    # note that z is a dense vector -- it needs to be converted to one-hot!
    z = model.predict(np.expand_dims(sequence[-sequence_length:], 0))
    sequence = np.vstack([sequence, z])
when I should have been converting the output prediction to a one-hot vector:
# this predicts new values :)
predict_length = 1000
sequence = X[0]
for i in range(predict_length):
    # z is still a dense vector; we'll convert it to one-hot below
    z = model.predict(np.expand_dims(sequence[-sequence_length:], 0)).squeeze()
    # convert z to a one-hot vector to match the training data
    prediction = np.zeros(len(types),)
    prediction[np.argmax(z)] = 1
    sequence = np.vstack([sequence, prediction])
I suspect this last step is why most people end up on this thread!