Project data
PN_IB: data from 2011-03-01 to 2019-10-01, at monthly intervals
PN_usage: data from 2011-06-25 to 2017-08-05, at weekly intervals
long_term_pred_results: data from 2014-10-04 to 2018-03-31, at weekly intervals; the predicted quantity is the IB volume
Project structure
File | Purpose |
---|---|
Demo_code.py | Acts as the main entry point: does the preprocessing and calls long_term_pred.py |
Long_term_pred.py | Long-term prediction function; calls Anomaly_Detection.py |
Anomaly_Detection.py | Anomaly detection function; calls the detect_anoms function |
detect_anoms.py | Implements the ESD anomaly detection algorithm |
Outlier_replace.py | Replaces outliers, using a Kalman filter |
Comparison of R and Python third-party packages
Language | Function | Purpose | Links and notes |
---|---|---|---|
R | mdy() | Converts dates from month/day/year form to year-month-day form | https://www.rdocumentation.org/packages/splusTimeDate/versions/2.5.0-137/topics/mdy; how to install mdy: https://www.rdocumentation.org/packages/lubridate/versions/1.7.3 |
Python | pd.to_datetime(pd.Series, format='%m/%d/%Y') | Parses dates in the given format into YYYY-MM-DD year-month-day form | |
R | seq(time.min, time.max, by="week") | Generates a time sequence from the minimum to the maximum date, at weekly intervals | |
Python | start = PN_usage['date'].min(); end = PN_usage['date'].max(); sevendayscnt = int((end - start).days / 7 + 1); fulldate = []; for i in range(0, sevendayscnt): fulldate.append(start + datetime.timedelta(days=7 * i)) | Generates a time sequence from one date to another at 7-day intervals | No ready-made function for this; DIY |
R | data_decomp <- stl(ts(data[[2L]], frequency = num_obs_per_period), s.window = "periodic", robust = TRUE) | Decomposes a time series into trend, seasonal, and remainder (noise) components | |
Python | decomposition = sm.tsa.seasonal_decompose(data.set_index('date'), model="additive", filt=None, freq=4, two_sided=True) # the result is not identical to R's decomposition | Decomposes a time series into trend, seasonal, and remainder (noise) components | Plot with decomposition.plot() |
R | query <- reinterpolate(PN_IB[[2L]], new.length = round(dim(PN_IB)[1] * 4.34524)) | Interpolation | |
Python | yinput = y['ib']; xinput = np.linspace(start=1, stop=len(y), num=len(y), endpoint=True); xnew = np.linspace(start=1, stop=len(y), num=round(len(y) * 4.34524), endpoint=True); f = interpolate.interp1d(xinput, yinput, kind='slinear'); query = f(xnew) | Interpolation | from scipy import interpolate; interp1d's kind parameter has many options — in testing, kind='slinear' matches R's reinterpolate exactly |
R | PN_usage[[2L]] <- lowess(PN_usage[[2L]], f=0.2)$y | Locally weighted regression (LOWESS), used to smooth a curve | https://stats.stackexchange.com/questions/125359/lowess-r-python-statsmodels-vs-matlab-biopython |
Python | data['usage'] = pd.DataFrame(sm.nonparametric.lowess(exog=list(data['usage'].index), endog=list(data['usage']), frac=0.2)).iloc[:, 1] | LOWESS smoothing | import statsmodels.api as sm; API: http://www.statsmodels.org/stable/generated/statsmodels.nonparametric.smoothers_lowess.lowess.html; visualization: https://stackoverflow.com/questions/20618804/how-to-smooth-a-curve-in-the-right-way |
R | reference <- zscore(reference) | z-score transform | |
Python | reference = stats.zscore(reference, ddof=0) # the Python and R z-score results differ by about 0.001 | z-score transform | from scipy import stats |
R | score <- lb_improved(reference, query, window.size = 10, norm = 'L2') | Computes the distance between two time series | https://www.rdocumentation.org/packages/dtwclust/versions/3.1.1/topics/lb_improved; the threshold on the R side is 10 |
Python | distance, path = fastdtw(query, reference, radius=20, dist=euclidean) | Computes the distance between two time series | The result differs considerably from R's. I set the threshold to 35; this still needs tuning. |
R | results <- solve.QP(t(X2) %*% X2, t(reference) %*% X2, cbind(c(min(y[[2L]]), 1), c(1, 0)), c(0, 0)) | Solves a quadratic program | Official API: https://www.rdocumentation.org/packages/quadprog/versions/1.5-5/topics/solve.QP |
Python | query = ynew; reference = data['usage']; X2 = pd.DataFrame({'ibofquery': ynew, 'one': 1}).values; P = np.dot(X2.T, X2); q = np.dot(reference.values.T, X2) * (-1); G = matrix([[-1.0, -1.0], [-1.0, 0.0]], tc='d'); h = matrix([0.0, 0.0], tc='d'); sol = solvers.qp(matrix(P), matrix(q), G=G, h=h); baseline = sol['x'][1] + sol['x'][0] * query # baseline is an array; len(query) corresponds to R's length(query) | Solves a quadratic program | from cvxopt import matrix, solvers (conda install -c conda-forge cvxopt); usage guide: https://segmentfault.com/a/1190000004439482. Big pitfall: the sign conventions in the objective function differ between the official solve.QP documentation and cvxopt, so passing Python exactly the same parameters as R gives the wrong answer! |
R | func_sigma <- match.fun(mad); func_sigma(data[[2L]]) | Computes the MAD (median absolute deviation) | MAD (median absolute deviation) is a robust measure of the variability of a univariate sample; it is used in the ESD algorithm |
Python | data_sigma = data['count'].mad() | Computes the MAD | pd.Series has a built-in mad(). Caution: pandas' mad() is the *mean* absolute deviation, not the median absolute deviation; to match R's mad() use e.g. (data['count'] - data['count'].median()).abs().median() * 1.4826 |
R | t <- qt(p, (n - i - 1L)) | Returns a quantile of the t distribution | https://www.quora.com/In-R-what-is-the-difference-between-dt-pt-and-qt-in-reference-to-the-student-t-distribution; http://www.r-tutor.com/category/r-functions/qt |
Python | t = ttest.ppf(p, (n - i - 1)) # inverse CDF, like qt() in R | Returns a quantile of the t distribution | from scipy.stats.distributions import t as ttest; https://blog.csdn.net/m0_37777649/article/details/74938120. Note: the t object has many methods, e.g. cdf(), pdf(), ppf(); ppf() was chosen because, per the API docs, it is the inverse CDF, which is what R's qt() computes. https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.t.html |
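To show how the qt()/ppf() row above feeds into the ESD test used by detect_anoms.py, here is a minimal sketch of the standard generalized-ESD critical value; the alpha, n, and i values are made up for illustration and are not from the project:

```python
from scipy.stats import t as tdist

# Illustrative values only (assumptions, not from the source):
# significance level, sample size, and current ESD iteration index.
alpha, n, i = 0.05, 100, 1

# Tail probability used by the generalized ESD test at iteration i.
p = 1.0 - alpha / (2.0 * (n - i + 1))

# Inverse CDF of the t distribution: equivalent to R's qt(p, n - i - 1).
t_crit = tdist.ppf(p, n - i - 1)

# Standard critical value lambda_i of the generalized ESD test.
lam = t_crit * (n - i) / (((n - i - 1 + t_crit ** 2) * (n - i + 1)) ** 0.5)
```

An observation is flagged as anomalous when its studentized deviation exceeds lam; repeating this for i = 1..k removes up to k outliers.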
Basic syntax
Reading files
R:
usage = read.csv('data/TopmostPN_filter_weekly_consumption(change)/R31 1019 usage.csv', header = TRUE)
ib = read.csv('data/TopmostPN_ib_filter(change)/total topmost ib with new startpoint2.0.csv', header = FALSE)
Python:
usage = pd.read_csv('R31 1019 usage.csv')
ib = pd.read_csv('total topmost ib with new startpoint2.0.csv', header=None)
DataFrame slicing
Data format
usage:
TopmostPN date usage
1 04W0433 01/01/2011 6
2 04W4459 01/01/2011 1
3 13N7096 01/01/2011 3
4 41W1479 01/01/2011 1
5 42T0152 01/01/2011 3
6 42T0476 01/01/2011 1
7 42W3769 01/01/2011 1
8 42X4880 01/01/2011 1
9 42X5067 01/01/2011 1
10 43N8359 01/01/2011 1
R counts from 1 (there is no index 0), so 2:3 means columns 2 and 3, both inclusive:
pn = 'ABCDEF'
PN_usage = usage[usage[[1]] == pn, 2:3]
Python counts from 0 and slice ends are exclusive, so 1:3 also means the 2nd and 3rd columns:
pn = 'ABCDEF'
PN_usage = usage[usage['TopmostPN'] == pn].iloc[:, 1:3]  # keep date and usage for the matching PN
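As a quick sanity check of the off-by-one point above, a toy example (the data values are made up, but the columns mirror the usage frame):

```python
import pandas as pd

# Toy frame with the same column layout as usage (values are made up).
df = pd.DataFrame({'TopmostPN': ['04W0433', '04W4459', '04W0433'],
                   'date': ['01/01/2011', '01/08/2011', '01/15/2011'],
                   'usage': [6, 1, 3]})

# iloc is 0-based and end-exclusive: [:, 1:3] keeps the 2nd and 3rd
# columns (date, usage) — exactly the columns R's usage[, 2:3] keeps.
subset = df[df['TopmostPN'] == '04W0433'].iloc[:, 1:3]
print(list(subset.columns))  # ['date', 'usage']
```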
Date handling
Normalizing to a date format
R:
# mdy() requires library("lubridate") first
PN_usage$date = mdy(PN_usage$date)  # month/day/year becomes year-month-day
PN_usage = PN_usage[order(PN_usage$date),]  # sort the whole dataframe by date, ascending
rownames(PN_usage) = 1:length(rownames(PN_usage))  # equivalent to Python's df.reset_index()
Python:
PN_usage['date'] = pd.to_datetime(PN_usage['date'], format='%m/%d/%Y')  # converts the object dtype to datetime64[ns]
PN_usage = PN_usage.sort_values(by='date')
PN_usage = PN_usage.reset_index()
Filling the data for missing dates with 0
R:
# The preprocessing below: if not every pair of adjacent dates differs by 7 days,
# some dates are missing, so generate the full weekly range between the min and max dates
if(!(all(diff(PN_usage[[1]]) == 7))){  # i.e., not all adjacent dates are 7 days apart
  # Syntax (defaults): diff(x, lag = 1, differences = 1, ...)
  # For a numeric vector x, each element is the next value minus the previous one,
  # i.e., a lag-1 difference
  time.min <- PN_usage[[1]][1]
  time.max <- PN_usage[[1]][length(PN_usage[[1]])]
  all.dates <- seq(time.min, time.max, by="week")  # weekly sequence from the earliest to the latest date
  all.dates.frame <- data.frame(list(date=all.dates))
  PN_usage <- merge(all.dates.frame, PN_usage, all=T)
  PN_usage[[2]][which(is.na(PN_usage[[2]]))] <- 0
}
# PN_usage is now the weekly consumption over the whole period
Python:
if not all(PN_usage['interval'] == 7):  # mirrors the R check: all adjacent dates 7 days apart
    start = PN_usage['date'].min()
    end = PN_usage['date'].max()
    sevendayscnt = int((end - start).days / 7 + 1)
    fulldate = []
    for i in range(0, sevendayscnt):
        fulldate.append(start + datetime.timedelta(days=7 * i))
    fulldatedf = pd.DataFrame(fulldate)
    fulldatedf.columns = ['date']
    result = pd.merge(fulldatedf, PN_usage, on='date', how='outer')  # the default is an inner join
    PN_usage = result[['date', 'usage']].fillna(0)
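The loop-and-merge approach above can also be written with pandas built-ins; a sketch, assuming the date column is already datetime64 and the series is weekly (the toy data is made up):

```python
import pandas as pd

# Toy weekly series with one missing week (2011-01-08); values are made up.
PN_usage = pd.DataFrame({
    'date': pd.to_datetime(['01/01/2011', '01/15/2011'], format='%m/%d/%Y'),
    'usage': [6, 3],
})

# pd.date_range plays the role of R's seq(time.min, time.max, by="week"),
# and reindex(..., fill_value=0) replaces the merge + fillna(0) pair.
full = pd.date_range(PN_usage['date'].min(), PN_usage['date'].max(), freq='7D')
PN_usage = (PN_usage.set_index('date')
                    .reindex(full, fill_value=0)
                    .rename_axis('date')
                    .reset_index())
print(PN_usage['usage'].tolist())  # [6, 0, 3]
```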
Getting the number of rows of a dataframe
R:
tt <- nrow(PN_usage)
Python:
tt = len(PN_usage)