#Regression Problem Case 3: airquality data from the base R datasets package#
The response variable is the ozone concentration in the air, in parts per billion, represented by Ozone. The goal is to explain this variable using three covariates:
- Solar.R: solar radiation
- Wind: average wind speed in miles per hour
- Temp: maximum daily temperature in degrees Fahrenheit

The analysis is split into four files:
- Parte1: analysis with the covariate Temp, comparing models by predictive quality
- Parte2: analysis with the covariate Temp, comparing models by goodness of fit
- Parte3: analysis with the three covariates, comparing models by predictive quality
- Parte4: analysis with the three covariates, comparing models by goodness of fit
pkg=c("mcglm","xtable","cluster","class","mvtnorm","ggplot2","splines","e1071","kernlab","mlbench","rpart","MASS","gtools","geoR","pROC","data.table","mgcv","gamlss.data")
sapply(pkg,require,character.only=T)
## mcglm xtable cluster class mvtnorm ggplot2
## TRUE TRUE TRUE TRUE TRUE TRUE
## splines e1071 kernlab mlbench rpart MASS
## TRUE TRUE TRUE TRUE TRUE TRUE
## gtools geoR pROC data.table mgcv gamlss.data
## TRUE TRUE TRUE TRUE TRUE TRUE
source("conditional_prediction.R")
#set.seed(13032019)
set.seed(13032019)
data("airquality")
dat=na.omit(data.frame(y=sqrt(airquality$Ozone),X1=airquality$Solar.R,X2=airquality$Wind,X3=airquality$Temp))
summary(dat)
## y X1 X2 X3
## Min. : 1.000 Min. : 7.0 Min. : 2.30 Min. :57.00
## 1st Qu.: 4.243 1st Qu.:113.5 1st Qu.: 7.40 1st Qu.:71.00
## Median : 5.568 Median :207.0 Median : 9.70 Median :79.00
## Mean : 6.016 Mean :184.8 Mean : 9.94 Mean :77.79
## 3rd Qu.: 7.874 3rd Qu.:255.5 3rd Qu.:11.50 3rd Qu.:84.50
## Max. :12.961 Max. :334.0 Max. :20.70 Max. :97.00
ggplot(data=dat,aes(x=X3,y=y))+geom_point(alpha=.5,size=4,col="black")+xlab("Temperatura em Fahrenheit")+ylab("Raiz quadrada da média de ozônio em partes por bilhão")
ggplot(data=dat,aes(x=X2,y=y))+geom_point(alpha=.5,size=4,col="black")+xlab("Velocidade do vento em milhas por hora")+ylab("Raiz quadrada da média de ozônio em partes por bilhão")
ggplot(data=dat,aes(x=X1,y=y))+geom_point(alpha=.5,size=4,col="black")+xlab("Radiação solar")+ylab("Raiz quadrada da média de ozônio em partes por bilhão")
ggplot(data=dat,aes(x=X1*X2,y=y))+geom_point(alpha=.5,size=4,col="black")+xlab("Interação da Radiação solar com a velocidade de vento")+ylab("Raiz quadrada da média de ozônio em partes por bilhão")
ggplot(data=dat,aes(x=X1*X3,y=y))+geom_point(alpha=.5,size=4,col="black")+xlab("Interação da Radiação solar com a temperatura do ar")+ylab("Raiz quadrada da média de ozônio em partes por bilhão")
ggplot(data=dat,aes(x=X2*X3,y=y))+geom_point(alpha=.5,size=4,col="black")+xlab("Interação da velocidade do vento com a temperatura do ar")+ylab("Raiz quadrada da média de ozônio em partes por bilhão")
ggplot(data=dat,aes(x=X1*X2*X3,y=y))+geom_point(alpha=.5,size=4,col="black")+xlab("Interação das três covariáveis")+ylab("Raiz quadrada da média de ozônio em partes por bilhão")
There seems to be a pattern in the response for the interaction of solar radiation with temperature and for the interaction of wind speed with air temperature. However, since a similar pattern already appears when the response is plotted against solar radiation alone and against wind speed alone, these interactions will probably not need to be included in the model, as the covariates on their own already explain the same information.
There are 111 complete observations in total. They are split into 66 for training (≈60%) and 45 for testing (≈40%).
#indexes<-sample(nrow(dat),floor(0.6*nrow(dat)),replace = F)
indexes=c(34,90,20,86,66,11,36,3,65,8,59,2,94,107,56,76,101,80,6,17,63,19,15,97,22,18,27,111,13,38,7,16,23,69,31,42,103,82,5,99,39,62,105,78,70,53,47,44,108,37,50,14,96,29,89,35,83,1,54,72,75,25,52,9,110,85)
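The train and test objects used in the model fits below are not created in the code shown; assuming they come directly from these indices, a minimal sketch is:
train <- dat[indexes, ]   # 66 observations (~60%) used for fitting
test  <- dat[-indexes, ]  # remaining 45 observations (~40%) held out for prediction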
Several models were fitted in order to compare them in terms of predictive quality and overall goodness of fit.
First, models of the MCGLM class were fitted, whose mean and covariance are expressed as
$E(\mathrm{Ozone}) = \mu$
$V(\mathrm{Ozone}) = \Omega$
For the MCGLMs, the mean was specified in two ways: either as a constant,
$\mu = \beta_0$
or with the covariates,
$\mu = \beta_0 + \beta_1\,\mathrm{Wind} + \beta_2\,\mathrm{Temp}$
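# Intercept-only MCGLM: identity link, Tweedie variance with fixed power, independent (identity) covariance structure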
Z0 <- mc_id(train)
fit.iid.0 <- mcglm(linear_pred = c(y~1), list(Z0),
link = "identity", variance = "tweedie",
data = train, power_fixed = T,
control_algorithm = list("verbose" = F,
"method" = "chaser",
"tuning" = .3,
"max_iter"=1000),
covariance = "identity",
control_initial = list(regression=list(c(6)),
power=list(c(0)),
tau=list(c(1)),
rho=0))
plot(fit.iid.0,'algorithm')
summary(fit.iid.0)
## Call: y ~ 1
##
## Link function: identity
## Variance function: tweedie
## Covariance function: identity
## Regression:
## Estimates Std.error Z value
## (Intercept) 6.003375 0.3053544 19.66035
##
## Dispersion:
## Estimates Std.error Z value
## 1 6.153927 0.9097819 6.764178
##
## Algorithm: chaser
## Correction: TRUE
## Number iterations: 29
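# Gaussian GLM fitted only to supply starting values (regression coefficients) for the MCGLM with covariates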
fit.norm.1=glm(y~X2+X3,data = train,family = gaussian())
form1=fit.norm.1$formula
n.cov=names(fit.norm.1$coefficients)[-1]
form1=y~X2+X3
ptm=proc.time()
fit.iid.1 <- mcglm(linear_pred = c(form1), list(Z0),
link = "identity", variance = "tweedie",
data = train, power_fixed = T,
control_algorithm = list("verbose" = F,
"method" = "chaser",
"tuning" = .5,
"max_iter"=1000),
covariance = "identity",
control_initial = list(regression=list(fit.norm.1$coefficients),
power=list(c(0)),
tau=list(c(5)),
rho=0))
proc.time()-ptm
## user system elapsed
## 0.90 0.03 1.00
summary(fit.iid.1)
## Call: y ~ X2 + X3
##
## Link function: identity
## Variance function: tweedie
## Covariance function: identity
## Regression:
## Estimates Std.error Z value
## (Intercept) -2.6641210 1.96173574 -1.358043
## X2 -0.2350696 0.05797174 -4.054899
## X3 0.1440339 0.02038469 7.065787
##
## Dispersion:
## Estimates Std.error Z value
## 1 1.872072 0.3311735 5.652845
##
## Algorithm: chaser
## Correction: TRUE
## Number iterations: 16
The covariates were significant and, in addition, the dispersion dropped from 6.15 to 1.87.
The next models account for the dependence between observations. This first one uses an exponential kernel $K_{exp}$, given by
$K_{exp} = \exp(-D/\phi)$
where
$D_{ij} = \lVert x_i - x_j \rVert$
and $\lVert \cdot \rVert$ denotes the Euclidean distance.
A grid of 200 candidate values for $\phi$ was specified, ranging from $\min(D)/3$ to $(\min(D)+\max(D))/2$.
D=dist(train[,c("X1","X2","X3")])
phi.grid=seq(min(D)/3,(min(D)+max(D))/2,len=200)
Computing $K_{exp}$ for each value in the $\phi$ grid (in the code, via the Matérn covariance with $\kappa = 0.5$, which is exactly the exponential kernel):
Z_exp=lapply(1:length(phi.grid), function(i) as.matrix(cov.spatial(D,cov.model = "matern",cov.pars = c(1,phi.grid[i]),kappa = .5)))
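An optional sanity check, using only geoR and base R, that the Matérn call with kappa = 0.5 reproduces exp(-D/phi):
# Optional check: Matern with kappa = 0.5 equals the exponential kernel exp(-d/phi)
all.equal(as.numeric(cov.spatial(D, cov.model = "matern", cov.pars = c(1, phi.grid[1]), kappa = 0.5)),
          exp(-as.numeric(D)/phi.grid[1]))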
gof=lapply(1:length(Z_exp),function(i) mc_sic_covariance(fit.iid.0, scope = list(Z_exp[[i]]), idx = 1, data = train, response = 1))
sic= unlist(gof)[seq(1,(length(unlist(gof))-4),by=5)]
lattice::xyplot(sic~phi.grid,xlab = "Phi",ylab = "SIC",main="Kernel exponencial",type=c("p","l"),lwd=2,pch=19)
seq(1,length(Z_exp))[min(sic)==sic]
## [1] 26
The value selected on the grid corresponds to $\phi = 20.6$.
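The positional extraction of the SIC values above is hard to read; an equivalent and arguably clearer version (assuming the SIC is the first element returned by mc_sic_covariance for each candidate kernel, as the positional indexing implies) would be:
sic <- sapply(gof, function(g) unlist(g)[1])  # first element of each result = SIC
which.min(sic)                                # index of the kernel with the smallest SIC
phi.grid[which.min(sic)]                      # corresponding value of phi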
ptm=proc.time()
fit.exp.0 <- mcglm(linear_pred = c(y~1), list(list(Z0[[1]],Z_exp[[26]])),
link = "identity", variance = "tweedie",
data = train, power_fixed = T,
control_algorithm = list("verbose" = F,
"method" = "chaser",
"tuning" = .1,
"max_iter"=1000,
tol=10e-7),
covariance = "identity",
control_initial = list(regression=list(fit.iid.0$Regression),
power=list(c(0)),
tau=list(c(4,2)),
rho=0))
proc.time()-ptm
## user system elapsed
## 10.67 0.08 11.02
summary(fit.exp.0)
## Call: y ~ 1
##
## Link function: identity
## Variance function: tweedie
## Covariance function: identity
## Regression:
## Estimates Std.error Z value
## (Intercept) 5.541544 0.7628743 7.264033
##
## Dispersion:
## Estimates Std.error Z value
## 1 6.121392 1.617420 3.784665
## 2 5.954059 1.785606 3.334474
##
## Algorithm: chaser
## Correction: TRUE
## Number iterations: 103
plot(fit.exp.0,"algorithm")
gof=lapply(1:length(Z_exp),function(i) mc_sic_covariance(fit.iid.1, scope = list(Z_exp[[i]]), idx = 1, data = train, response = 1))
sic= unlist(gof)[seq(1,(length(unlist(gof))-4),by=5)]
lattice::xyplot(sic~phi.grid,xlab = "Phi",ylab = "SIC",main="Kernel exponencial",type=c("p","l"),lwd=2,pch=19)
seq(1,length(Z_exp))[min(sic)==sic]
## [1] 7
ptm=proc.time()
fit.exp.1 <- mcglm(linear_pred = c(form1), list(list(Z0[[1]],Z_exp[[7]])),
link = "identity", variance = "tweedie",
data = train, power_fixed = T,
control_algorithm = list("verbose" = F,
"method" = "chaser",
"tuning" = .1,
"max_iter"=1000,
tol=10e-7),
covariance = "identity",
control_initial = list(regression=list(fit.iid.1$Regression),
power=list(c(0)),
tau=list(c(4,2)),
rho=0))
proc.time()-ptm
## user system elapsed
## 13.50 0.14 15.15
summary(fit.exp.1)
## Call: y ~ X2 + X3
##
## Link function: identity
## Variance function: tweedie
## Covariance function: identity
## Regression:
## Estimates Std.error Z value
## (Intercept) -2.3435669 2.06437331 -1.135244
## X2 -0.2422501 0.05765853 -4.201462
## X3 0.1402396 0.02243001 6.252318
##
## Dispersion:
## Estimates Std.error Z value
## 1 1.849730 0.3507716 5.273317
## 2 1.548689 0.5515875 2.807694
##
## Algorithm: chaser
## Correction: TRUE
## Number iterations: 115
plot(fit.exp.1,"algorithm")
Again, models with dependence between observations. This second one uses a Gaussian kernel $K_{gauss}$, given by
$K_{gauss} = \exp\{-(D/\phi)^2\}$
where
$D_{ij} = \lVert x_i - x_j \rVert$
and $\lVert \cdot \rVert$ denotes the Euclidean distance.
A grid of 200 candidate values for $\phi$ was specified, ranging from $\min(D)/5.92$ to $(\min(D)+\max(D))/2$.
phi.grid=seq(min(D)/5.92,(min(D)+max(D))/2,len=200)
Computing $K_{gauss}$ for each value in the $\phi$ grid (in the code, approximated by the Matérn covariance with $\kappa = 2.5$):
Z_gauss=lapply(1:length(phi.grid), function(i) as.matrix(cov.spatial(D,cov.model = "matern",cov.pars = c(1,phi.grid[i]),kappa = 2.5)))
gof=lapply(1:length(Z_gauss),function(i) mc_sic_covariance(fit.iid.0, scope = list(Z_gauss[[i]]), idx = 1, data = train, response = 1))
sic= unlist(gof)[seq(1,(length(unlist(gof))-4),by=5)]
lattice::xyplot(sic~phi.grid,xlab = "Phi",ylab = "SIC",main="Kernel gaussiano",type=c("p","l"),lwd=2,pch=19)
seq(1,length(Z_gauss))[min(sic)==sic]
## [1] 11
ptm=proc.time()
fit.gauss.0 <- mcglm(linear_pred = c(y~1), list(list(Z0[[1]],Z_gauss[[11]])),
link = "identity", variance = "tweedie",
data = train, power_fixed = T,
control_algorithm = list("verbose" = F,
"method" = "chaser",
"tuning" = .1,
"max_iter"=1000,
tol=10e-7),
covariance = "identity",
control_initial = list(regression=list(fit.iid.0$Regression),
power=list(c(0)),
tau=list(c(4,2)),
rho=0))
proc.time()-ptm
## user system elapsed
## 13.53 0.14 14.48
summary(fit.gauss.0)
## Call: y ~ 1
##
## Link function: identity
## Variance function: tweedie
## Covariance function: identity
## Regression:
## Estimates Std.error Z value
## (Intercept) 5.612437 0.7452062 7.531388
##
## Dispersion:
## Estimates Std.error Z value
## 1 6.515884 1.673157 3.894365
## 2 5.307999 1.649041 3.218840
##
## Algorithm: chaser
## Correction: TRUE
## Number iterations: 122
plot(fit.gauss.0,"algorithm")
gof=lapply(1:length(Z_gauss),function(i) mc_sic_covariance(fit.iid.1, scope = list(Z_gauss[[i]]), idx = 1, data = train, response = 1))
sic= unlist(gof)[seq(1,(length(unlist(gof))-4),by=5)]
lattice::xyplot(sic~phi.grid,xlab = "Phi",ylab = "SIC",main="Kernel gaussiano",type=c("p","l"),lwd=2,pch=19)
seq(1,length(Z_gauss))[min(sic)==sic]
## [1] 4
ptm=proc.time()
fit.gauss.1 <- mcglm(linear_pred = c(form1), list(list(Z0[[1]],Z_gauss[[4]])),
link = "identity", variance = "tweedie",
data = train, power_fixed = T,
control_algorithm = list("verbose" = F,
"method" = "chaser",
"tuning" = .1,
"max_iter"=1000,
tol=10e-7),
covariance = "identity",
control_initial = list(regression=list(fit.iid.1$Regression),
power=list(c(0)),
tau=list(c(4,2)),
rho=0))
proc.time()-ptm
## user system elapsed
## 10.49 0.19 10.80
summary(fit.gauss.1)
## Call: y ~ X2 + X3
##
## Link function: identity
## Variance function: tweedie
## Covariance function: identity
## Regression:
## Estimates Std.error Z value
## (Intercept) -2.4038056 2.05899419 -1.167466
## X2 -0.2427903 0.05781199 -4.199653
## X3 0.1411903 0.02230682 6.329468
##
## Dispersion:
## Estimates Std.error Z value
## 1 1.845468 0.3428036 5.383454
## 2 1.030753 0.4862822 2.119661
##
## Algorithm: chaser
## Correction: TRUE
## Number iterations: 114
plot(fit.gauss.1,"algorithm")
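In the next block a third dependence structure is built: the covariates are clustered with k-means (from 2 up to 34 clusters) and each clustering defines a discrete kernel $K_{disc}$ via mc_mixed, presumably equal to 1 for pairs of observations in the same cluster and 0 otherwise. A toy illustration of that block structure, using hypothetical cluster labels and plain base R instead of mc_mixed:
id.toy <- c(1, 1, 2, 2, 2)                 # hypothetical cluster assignment
K.toy  <- 1 * outer(id.toy, id.toy, "==")  # 1 if two observations share a cluster, 0 otherwise
K.toy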
n.clust=seq(1,nrow(train)/2)
set.seed(222)
ag.n=lapply(1:length(n.clust), function(i) kmeans(train[,c("X1","X2","X3")],i+1,iter.max = 100))
Z_disc=lapply(1:length(n.clust), function(i) as.matrix(mc_mixed(~0+factor(id.n),data = data.frame(id.n=ag.n[[i]]$cluster))[[1]]))
gof=lapply(1:length(Z_disc),function(i) mc_sic_covariance(fit.iid.0, scope = list(Z_disc[[i]]), idx = 1, data = train, response = 1))
sic= unlist(gof)[seq(1,(length(unlist(gof))-4),by=5)]
lattice::xyplot(sic~n.clust,xlab = "Clusters",ylab = "SIC",main="Kernel via k-médias",type=c("p","l"),lwd=2,pch=19)
seq(1,length(Z_disc))[min(sic)==sic]
## [1] 11
##################################
ptm=proc.time()
fit.kmedias.0 <- mcglm(linear_pred = c(y~1), list(list(Z0[[1]],Z_disc[[14]])),
link = "identity", variance = "tweedie",
data = train, power_fixed = T,
control_algorithm = list("verbose" = F,
"method" = "chaser",
"tuning" = .1,
"max_iter"=1000,
tol=10e-7),
covariance = "identity",
control_initial = list(regression=list(fit.iid.0$Regression),
power=list(c(0)),
tau=list(c(3,1)),
rho=0))
proc.time()-ptm
## user system elapsed
## 12.66 0.21 15.62
summary(fit.kmedias.0)
## Call: y ~ 1
##
## Link function: identity
## Variance function: tweedie
## Covariance function: identity
## Regression:
## Estimates Std.error Z value
## (Intercept) 6.088405 0.4899603 12.42632
##
## Dispersion:
## Estimates Std.error Z value
## 1 2.943130 0.5411914 5.438243
## 2 2.773867 1.2973360 2.138126
##
## Algorithm: chaser
## Correction: TRUE
## Number iterations: 105
plot(fit.kmedias.0,"algorithm")
gof=lapply(1:length(Z_disc),function(i) mc_sic_covariance(fit.iid.1, scope = list(Z_disc[[i]]), idx = 1, data = train, response = 1))
sic= unlist(gof)[seq(1,(length(unlist(gof))-4),by=5)]
lattice::xyplot(sic~n.clust,xlab = "Clusters",ylab = "SIC",main="Kernel via k-médias",type=c("p","l"),lwd=2,pch=19)
seq(1,length(Z_disc))[min(sic)==sic]
## [1] 24
##################################
ptm=proc.time()
fit.kmedias.1 <- mcglm(linear_pred = c(form1), list(list(Z0[[1]],Z_disc[[17]])),
link = "identity", variance = "tweedie",
data = train, power_fixed = T,
control_algorithm = list("verbose" = F,
"method" = "chaser",
"tuning" = .1,
"max_iter"=1000,
tol=10e-7),
covariance = "identity",
control_initial = list(regression=list(fit.iid.1$Regression),
power=list(c(0)),
tau=list(c(3,1)),
rho=0))
proc.time()-ptm
## user system elapsed
## 11.48 0.22 13.29
summary(fit.kmedias.1)
## Call: y ~ X2 + X3
##
## Link function: identity
## Variance function: tweedie
## Covariance function: identity
## Regression:
## Estimates Std.error Z value
## (Intercept) -2.5350721 1.94563441 -1.302954
## X2 -0.2318234 0.05491013 -4.221870
## X3 0.1416492 0.02098050 6.751466
##
## Dispersion:
## Estimates Std.error Z value
## 1 1.5078585 0.2711197 5.561597
## 2 0.3959571 0.2886163 1.371915
##
## Algorithm: chaser
## Correction: TRUE
## Number iterations: 112
plot(fit.kmedias.1,"algorithm")
gof=lapply(1:length(Z_disc),function(i) mc_sic_covariance(fit.gauss.0, scope = list(Z_disc[[i]]), idx = 1, data = train, response = 1))
sic= unlist(gof)[seq(1,(length(unlist(gof))-4),by=5)]
lattice::xyplot(sic~n.clust,xlab = "Clusters",ylab = "SIC",main="Kernel via k-médias",type=c("p","l"),lwd=2,pch=19)
seq(1,length(Z_disc))[min(sic)==sic]
## [1] 13
##################################
ptm=proc.time()
fit.gauss.kmedias.0 <- mcglm(linear_pred = c(y~1), list(list(Z0[[1]],Z_gauss[[11]],Z_disc[[12]])),
link = "identity", variance = "tweedie",
data = train, power_fixed = T,
control_algorithm = list("verbose" = F,
"method" = "chaser",
"tuning" = .01,
"max_iter"=1000,
tol=10e-6),
covariance = "identity",
control_initial = list(regression=list(fit.iid.0$Regression),
power=list(c(0)),
tau=list(c(6,5,2)),
rho=0))
proc.time()-ptm
The model did not converge.
summary(fit.gauss.kmedias.0)
plot(fit.gauss.kmedias.0,"algorithm")
gof=lapply(1:length(Z_disc),function(i) mc_sic_covariance(fit.gauss.1, scope = list(Z_disc[[i]]), idx = 1, data = train, response = 1))
sic= unlist(gof)[seq(1,(length(unlist(gof))-4),by=5)]
lattice::xyplot(sic~n.clust,xlab = "Clusters",ylab = "SIC",main="Kernel via k-médias",type=c("p","l"),lwd=2,pch=19)
e=seq(1,length(Z_disc))[min(sic)==sic]
##################################
ptm=proc.time()
fit.gauss.kmedias.1 <- mcglm(linear_pred = c(form1), list(list(Z0[[1]],Z_gauss[[4]],Z_disc[[e]])),
link = "identity", variance = "tweedie",
data = train, power_fixed = T,
control_algorithm = list("verbose" = F,
"method" = "chaser",
"tuning" = .05,
"max_iter"=1000,
tol=10e-7),
covariance = "identity",
control_initial = list(regression=list(fit.iid.1$Regression),
power=list(c(0)),
tau=list(c(3,1,1)),
rho=0))
## Warning: Maximum iterations number reached.
proc.time()-ptm
## user system elapsed
## 157.11 1.95 181.83
summary(fit.gauss.kmedias.1)
## Call: y ~ X2 + X3
##
## Link function: identity
## Variance function: tweedie
## Covariance function: identity
## Regression:
## Estimates Std.error Z value
## (Intercept) 0.2690121 1.196654032 0.2248036
## X2 -0.2459318 0.052412645 -4.6922220
## X3 0.1073863 0.009907961 10.8383869
##
## Dispersion:
## Estimates Std.error Z value
## 1 1.0089091 0.9437541 1.0690382
## 2 -1.4285910 1.8904372 -0.7556935
## 3 0.9807753 1.3656885 0.7181545
##
## Algorithm: chaser
## Correction: TRUE
## Number iterations: 1000
plot(fit.gauss.kmedias.1,"algorithm")
gof=lapply(1:length(Z_disc),function(i) mc_sic_covariance(fit.exp.0, scope = list(Z_disc[[i]]), idx = 1, data = train, response = 1))
sic= unlist(gof)[seq(1,(length(unlist(gof))-4),by=5)]
lattice::xyplot(sic~n.clust,xlab = "Clusters",ylab = "SIC",main="Kernel via k-médias",type=c("p","l"),lwd=2,pch=19)
e=seq(1,length(Z_disc))[min(sic)==sic]
##################################
ptm=proc.time()
fit.exp.kmedias.0 <- mcglm(linear_pred = c(y~1), list(list(Z0[[1]],Z_exp[[26]],Z_disc[[e]])),
link = "identity", variance = "tweedie",
data = train, power_fixed = T,
control_algorithm = list("verbose" = F,
"method" = "rc",
"tuning" = .7,
"max_iter"=1000,
tol=10e-5),
covariance = "identity",
control_initial = list(regression=list(fit.iid.0$Regression),
power=list(c(0)),
tau=list(c(6,5,1)),
rho=0))
proc.time()-ptm
The model did not converge.
summary(fit.exp.kmedias.0)
plot(fit.exp.kmedias.0,"algorithm")
gof=lapply(1:length(Z_disc),function(i) mc_sic_covariance(fit.exp.1, scope = list(Z_disc[[i]]), idx = 1, data = train, response = 1))
sic= unlist(gof)[seq(1,(length(unlist(gof))-4),by=5)]
lattice::xyplot(sic~n.clust,xlab = "Clusters",ylab = "SIC",main="Kernel via k-médias",type=c("p","l"),lwd=2,pch=19)
e=seq(1,length(Z_disc))[min(sic)==sic]
##################################
ptm=proc.time()
fit.exp.kmedias.1 <- mcglm(linear_pred = c(form1), list(list(Z0[[1]],Z_exp[[7]],Z_disc[[e]])),
link = "identity", variance = "tweedie",
data = train, power_fixed = T,
control_algorithm = list("verbose" = F,
"method" = "chaser",
"tuning" = .05,
"max_iter"=1000,
tol=10e-7),
covariance = "identity",
control_initial = list(regression=list(fit.iid.1$Regression),
power=list(c(0)),
tau=list(c(3,1,1)),
rho=0))
proc.time()-ptm
The model did not converge.
summary(fit.exp.kmedias.1)
plot(fit.exp.kmedias.1,"algorithm")
source("tablemcglm.R")
models<-c("fit.iid.0","fit.iid.1","fit.exp.0","fit.exp.1","fit.gauss.0","fit.gauss.1","fit.kmedias.0","fit.kmedias.1","fit.gauss.kmedias.1")
tabcomp2<- data.frame(Omega=c("$\\tau_0\\textbf{I}$","$\\tau_0\\textbf{I}$","$\\tau_0\\textbf{I}+\\tau_1\\textbf{K}_{exp}$","$\\tau_0\\textbf{I}+\\tau_1\\textbf{K}_{exp}$","$\\tau_0\\textbf{I}+\\tau_1\\textbf{K}_{gauss}$","$\\tau_0\\textbf{I}+\\tau_1\\textbf{K}_{gauss}$","$\\tau_0\\textbf{I}+\\tau_1\\textbf{K}_{disc}$","$\\tau_0\\textbf{I}+\\tau_1\\textbf{K}_{disc}$","$\\tau_0\\textbf{I}+\\tau_1\\textbf{K}_{gauss}+\\tau_2\\textbf{K}_{kmedias}$"),Media=c("~1","~x","~1","~x","~1","~x","~1","~x","~x"),LogLik=est(models,e="like"),pAIC=est(models,e="aic"),pBIC=est(models,e="ess"))
#print(xtable(tabcomp2),sanitize.text.function = function(x){x},type="html")
| # | Omega | Mean | LogLik | pAIC | pBIC |
|---|-------|------|--------|------|------|
| 1 | $\tau_0\mathbf{I}$ | ~1 | -154.12 | 312.24 | 316.62 |
| 2 | $\tau_0\mathbf{I}$ | ~x | -115.84 | 239.68 | 248.44 |
| 3 | $\tau_0\mathbf{I}+\tau_1\mathbf{K}_{exp}$ | ~1 | -134.19 | 274.38 | 280.95 |
| 4 | $\tau_0\mathbf{I}+\tau_1\mathbf{K}_{exp}$ | ~x | -113.44 | 236.88 | 247.83 |
| 5 | $\tau_0\mathbf{I}+\tau_1\mathbf{K}_{gauss}$ | ~1 | -133.51 | 273.02 | 279.59 |
| 6 | $\tau_0\mathbf{I}+\tau_1\mathbf{K}_{gauss}$ | ~x | -113.36 | 236.72 | 247.67 |
| 7 | $\tau_0\mathbf{I}+\tau_1\mathbf{K}_{disc}$ | ~1 | -140.94 | 287.88 | 294.45 |
| 8 | $\tau_0\mathbf{I}+\tau_1\mathbf{K}_{disc}$ | ~x | -112.02 | 234.04 | 244.99 |
| 9 | $\tau_0\mathbf{I}+\tau_1\mathbf{K}_{exp}+\tau_2\mathbf{K}_{kmedias}$ | ~1 | -135.32 | 278.64 | 287.40 |
| 10 | $\tau_0\mathbf{I}+\tau_1\mathbf{K}_{exp}+\tau_2\mathbf{K}_{kmedias}$ | ~x | -112.01 | 236.02 | 249.16 |
| 11 | $\tau_0\mathbf{I}+\tau_1\mathbf{K}_{gauss}+\tau_2\mathbf{K}_{kmedias}$ | ~1 | -129.87 | 267.74 | 276.50 |
| 12 | $\tau_0\mathbf{I}+\tau_1\mathbf{K}_{gauss}+\tau_2\mathbf{K}_{kmedias}$ | ~x | -111.95 | 235.90 | 249.04 |
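The est() helper comes from tablemcglm.R, which is not shown here; assuming the measures correspond to the pseudo goodness-of-fit statistics provided by mcglm's gof() function, an equivalent summary could be sketched as:
# Hedged sketch: pseudo goodness-of-fit measures for each converged model,
# assuming mcglm::gof() returns them (pseudo log-likelihood, pAIC, pBIC, ...)
fits <- list(fit.iid.0, fit.iid.1, fit.exp.0, fit.exp.1,
             fit.gauss.0, fit.gauss.1, fit.kmedias.0, fit.kmedias.1,
             fit.gauss.kmedias.1)
lapply(fits, gof)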