Bayesian First Aid (cont.)

Test of Proportions

A lot of questions out there involve estimating the proportion or relative frequency of success of two or more groups (where a success could be a saved life, a click on a link, or a happy baby), and there is a little-known R function that does just that: prop.test. Here we’ll use its Bayesian version, bayes.prop.test.

It extends the binomial test, which estimates the underlying relative frequency of success given a number of trials, to several groups at once.

For \(m\) groups, where the \(i\)-th group had \(x_i\) successes out of \(n_i\) trials:

\[x_i \sim \text{Binomial}(\theta_i, n_i)\]

\[\theta_i \sim \text{Beta}(1,1)\]

Here’s an example with two groups:

trial.1 <- c( 43, 275)   # number of successes in each group
trial.2 <- c(170,1454)   # number of trials in each group
model <- bayes.prop.test(trial.1, trial.2)
model
## 
##  Bayesian First Aid propotion test
## 
## data: trial.1 out of trial.2
## number of successes:    43,  275
## number of trials:      170, 1454
## Estimated relative frequency of success [95% credible interval]:
##   Group 1: 0.26 [0.19, 0.32]
##   Group 2: 0.19 [0.17, 0.21]
## Estimated group difference (Group 1 - Group 2):
##   0.07 [-0.0034, 0.13]
## The relative frequency of success is larger for Group 1 by a probability
## of 0.975 and larger for Group 2 by a probability of 0.025 .

It seems that the first group has a higher relative frequency of success, by around 7 percentage points. However, this is not very solid evidence, since the 95% credible interval includes 0, if only barely.
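
As a sanity check, note that the Beta(1,1) prior is conjugate to the binomial likelihood, so each posterior is available in closed form: \(\theta_i | x_i \sim \text{Beta}(1 + x_i, 1 + n_i - x_i)\). Group 1’s estimate can therefore be reproduced exactly with qbeta:

# exact posterior for Group 1 (43 successes in 170 trials): Beta(44, 128)
qbeta(c(0.025, 0.5, 0.975), 1 + 43, 1 + 170 - 43)
# approx. 0.19, 0.25, 0.32 -- in line with Group 1's interval above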

We can calculate the probability that the difference between the two groups is small (say, less than 5 percentage points):

ts <- as.data.frame(model)
head(ts)
##   theta1 theta2 x_pred1 x_pred2
## 1 0.3004 0.1976      46     305
## 2 0.2749 0.1769      58     279
## 3 0.2518 0.1836      39     263
## 4 0.2867 0.1751      48     258
## 5 0.2024 0.1986      36     309
## 6 0.2417 0.1900      47     282
diff <- mean(abs( ts$theta1 - ts$theta2 ) < 0.05) # cf. Kruschke's ROPE (region of practical equivalence)
diff
## [1] 0.3277

So, there’s a 33% probability that the relative frequencies of success in the two groups are practically equivalent, which is weak evidence for the ‘no difference’ hypothesis.
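
Conversely, the 0.975 probability from the printed summary can be recovered from the same posterior draws:

mean(ts$theta1 > ts$theta2) # approx. 0.975, as in the printed summary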

plot(model)


diagnostics(model)
## 
## Iterations = 1:5000
## Thinning interval = 1 
## Number of chains = 3 
## Sample size per chain = 5000 
## 
##   Diagnostic measures
##                    mean     sd mcmc_se n_eff  Rhat
## theta[1]          0.256  0.033   0.000 14727 1.000
## theta[2]          0.190  0.010   0.000 14791 1.000
## x_pred[1]        43.451  8.024   0.067 14311 1.000
## x_pred[2]       275.637 20.968   0.175 14385 1.001
## theta_diff[1,2]   0.066  0.035      NA    NA    NA
## 
## mcmc_se: the estimated standard error of the MCMC approximation of the mean.
## n_eff: a crude measure of effective MCMC sample size.
## Rhat: the potential scale reduction factor (at convergence, Rhat=1).
## 
##   Model parameters and generated quantities
## theta: The relative frequency of success
## x_pred: Predicted number of successes in a replication
## theta_diff[i,j]: the difference between two groups (theta[i] - theta[j])


model.code(model)
## ### Model code for the Bayesian First Aid  ###
## ### alternative to the test of proportions ###
## require(rjags)
## 
## # Setting up the data
## x <- c(43, 275) 
## n <- c(170, 1454) 
## 
## # The model string written in the JAGS language
## model_string <- "model {
##   for(i in 1:length(x)) {
##     x[i] ~ dbinom(theta[i], n[i])
##     theta[i] ~ dbeta(1, 1)
##     x_pred[i] ~ dbinom(theta[i], n[i])
##   }
## }"
## 
## # Running the model
## model <- jags.model(textConnection(model_string), data = list(x = x, n = n), 
##                     n.chains = 3, n.adapt=1000)
## samples <- coda.samples(model, c("theta", "x_pred"), n.iter=5000)
## 
## # Inspecting the posterior
## plot(samples)
## summary(samples)
## 
## # You can extract the mcmc samples as a matrix and compare the thetas 
## # of the groups. For example, the following shows the median and 95%
## # credible interval for the difference between Group 1 and Group 2.
## samp_mat <- as.matrix(samples)
## quantile(samp_mat[, "theta[1]"] - samp_mat[, "theta[2]"], c(0.025, 0.5, 0.975))

Let’s see an example with four groups:

smokers  <- c(83, 90, 129, 70)
patients <- c(86, 93, 136, 82)
model <- bayes.prop.test(smokers, patients)
model
## 
##  Bayesian First Aid propotion test
## 
## data: smokers out of patients
## number of successes:   83,  90, 129,  70
## number of trials:      86,  93, 136,  82
## Estimated relative frequency of success [95% credible interval]:
##   Group 1: 0.96 [0.91, 0.99]
##   Group 2: 0.96 [0.92, 0.99]
##   Group 3: 0.94 [0.90, 0.98]
##   Group 4: 0.85 [0.77, 0.92]
## Estimated pairwise group differences (row - column) with 95 % cred. intervals:
##                         Group                        
##            2               3               4       
##   1        0             0.01            0.11      
##     [-0.064, 0.055] [-0.045, 0.072]  [0.023, 0.2]  
##   2                      0.02            0.11      
##                     [-0.042, 0.072]  [0.025, 0.2]  
##   3                                      0.09      
##                                      [0.014, 0.18]
plot(model)


The 4th group seems slightly different from the others: its pairwise differences with the other three groups all have 95% credible intervals that exclude zero.
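
We can quantify this from the posterior draws. A minimal sketch, assuming as.data.frame exposes one theta column per group (theta1 … theta4), as it did in the two-group case:

post <- as.data.frame(model)
# posterior probability that Group 4 has the lowest rate of the four groups
mean(post$theta4 < pmin(post$theta1, post$theta2, post$theta3))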

model.code(model)
## ### Model code for the Bayesian First Aid  ###
## ### alternative to the test of proportions ###
## require(rjags)
## 
## # Setting up the data
## x <- c(83, 90, 129, 70) 
## n <- c(86, 93, 136, 82) 
## 
## # The model string written in the JAGS language
## model_string <- "model {
##   for(i in 1:length(x)) {
##     x[i] ~ dbinom(theta[i], n[i])
##     theta[i] ~ dbeta(1, 1)
##     x_pred[i] ~ dbinom(theta[i], n[i])
##   }
## }"
## 
## # Running the model
## model <- jags.model(textConnection(model_string), data = list(x = x, n = n), 
##                     n.chains = 3, n.adapt=1000)
## samples <- coda.samples(model, c("theta", "x_pred"), n.iter=5000)
## 
## # Inspecting the posterior
## plot(samples)
## summary(samples)
## 
## # You can extract the mcmc samples as a matrix and compare the thetas 
## # of the groups. For example, the following shows the median and 95%
## # credible interval for the difference between Group 1 and Group 2.
## samp_mat <- as.matrix(samples)
## quantile(samp_mat[, "theta[1]"] - samp_mat[, "theta[2]"], c(0.025, 0.5, 0.975))

Poisson Test

Given a set of counts per time period, and assuming a Poisson distribution, the goal is to estimate the \(\lambda\) parameter of the Poisson. The test assumes that every count comes from a time period of equal length, and that the parameter is fixed, i.e., all counts come from the same distribution.

sales.per.period <- c(14,16,9,18,10,6,13)
poisson.test(x=sum(sales.per.period), T=length(sales.per.period),r=10) # r -- hypothesized rate
## 
##  Exact Poisson test
## 
## data:  sum(sales.per.period) time base: length(sales.per.period)
## number of events = 86, time base = 7, p-value = 0.06345
## alternative hypothesis: true event rate is not equal to 10
## 95 percent confidence interval:
##   9.827 15.173
## sample estimates:
## event rate 
##      12.29

The Bayesian version uses the following model:

\[x \sim \text{Poisson}(\lambda_{\text{total}})\]

\[\lambda_{\text{total}} = \lambda \cdot T\]

\[\lambda \sim \text{Gamma}(0.5,0.00001)\]

The \(\text{Gamma}(0.5,0.00001)\) is a JAGS-friendly approximation of the Jeffreys prior for the parameter of the Poisson (which is \(p(\lambda) \propto 1/\sqrt{\lambda}\)).
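
To see why, note that the \(\text{Gamma}(0.5, 0.00001)\) density is proportional to \(\lambda^{-1/2} e^{-0.00001 \lambda}\), and the exponential factor is essentially 1 for any plausible rate:

lambda <- seq(0.1, 50, by = 0.1)
ratio  <- dgamma(lambda, 0.5, 0.00001) / lambda^(-1/2)
max(ratio) / min(ratio) # approx. 1.0005, i.e. effectively proportional to 1/sqrt(lambda)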

Using bayes.poisson.test:

model <- bayes.poisson.test(x=sum(sales.per.period), T=length(sales.per.period), r=10)
model
## 
##  Bayesian Fist Aid poisson test - one sample
## 
## number of events: 86, time periods: 7
## Estimated event rate:
##   12 
## 95% credible interval:
##   9.7 15 
## The event rate is more than 10 by a probability of 0.969 
## and less than 10 by a probability of 0.031  .
plot(model)


diagnostics(model)
## 
## Iterations = 101:5100
## Thinning interval = 1 
## Number of chains = 3 
## Sample size per chain = 5000 
## 
##   Diagnostic measures
##         mean     sd mcmc_se n_eff  Rhat
## lambda 12.35  1.325   0.011 15313 1.001
## x_pred 86.45 13.083   0.104 15741 1.001
## 
## mcmc_se: the estimated standard error of the MCMC approximation of the mean.
## n_eff: a crude measure of effective MCMC sample size.
## Rhat: the potential scale reduction factor (at convergence, Rhat=1).
## 
##   Model parameters and generated quantities
## lambda: the rate of the process.
## x_pred: predicted event count during 7 periods.


model.code(model)
## ### Model code for the Bayesian First Aid one sample Poisson test ###
## require(rjags)
## 
## # Setting up the data
## x <- 86 
## t <- 7 
## 
## # The model string written in the JAGS language
## model_string <- "model {
##   x ~ dpois(lambda * t)
##   lambda ~ dgamma(0.5, 0.00001)
##   x_pred ~ dpois(lambda * t)
## }"
## 
## # Running the model
## model <- jags.model(textConnection(model_string), data = list(x = x, t = t), n.chains = 3)
## samples <- coda.samples(model, c("lambda", "x_pred"), n.iter=5000)
## 
## # Inspecting the posterior
## plot(samples)
## summary(samples)
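
Since the Gamma prior is conjugate to the Poisson likelihood, this posterior also has a closed form, \(\lambda \sim \text{Gamma}(0.5 + x, 0.00001 + T)\), which lets us cross-check the MCMC summary:

x <- 86; t <- 7
(0.5 + x) / (0.00001 + t)                     # posterior mean, approx. 12.36
qgamma(c(0.025, 0.975), 0.5 + x, 0.00001 + t) # approx. 9.9 and 15.1
# bayes.poisson.test reports a highest-density interval, so its bounds
# can differ slightly from these equal-tailed quantiles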

The test also accepts two counts and compares their rates. In the following example we check whether the cancer rate in the first group, which was X-rayed, is more than 1.5 times that of the second group, the control group:

cancer.cases         <- c(41,15)             # events in each group
person.per.thousand  <- c(28.011,19.025)     # exposure per group (person-years, in thousands)
model <- bayes.poisson.test(x=cancer.cases, T=person.per.thousand, r=1.5)
model
## 
##  Bayesian Fist Aid poisson test - two sample
## 
## number of events: 41 and 15, time periods: 28.011 and 19.025
## 
##   Estimates [95% credible interval]
## Group 1 rate: 1.5 [1.0, 1.9]
## Group 2 rate: 0.80 [0.42, 1.2]
## Rate ratio (Group 1 rate / Group 2 rate):
##               1.8 [1.1, 3.5]
## 
## The event rate of group 1 is more than 1.5 times that of group 2 by a probability 
## of 0.757 and less than 1.5 times that of group 2 by a probability of 0.243 .

So there is some evidence, with probability around 75%, that the cancer rate of the X-rayed patients is more than 1.5 times that of the control group.
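
This 0.757 figure can be reproduced from the posterior draws. A sketch, assuming as.data.frame exposes the two rates as lambda1 and lambda2 (check names(post) for the actual column names):

post <- as.data.frame(model)
mean(post$lambda1 / post$lambda2 > 1.5) # approx. 0.757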

plot(model)


diagnostics(model)
## 
## Iterations = 101:5100
## Thinning interval = 1 
## Number of chains = 3 
## Sample size per chain = 5000 
## 
##   Diagnostic measures
##              mean    sd mcmc_se n_eff Rhat
## lambda[1]   1.485 0.230   0.002 15000    1
## lambda[2]   0.815 0.208   0.002 15000    1
## x_pred[1]  41.547 9.128   0.075 15000    1
## x_pred[2]  15.501 5.576   0.046 14814    1
## rate_diff   0.670 0.310   0.003 15000    1
## rate_ratio  1.860    NA   0.005 14512    1
## 
## mcmc_se: the estimated standard error of the MCMC approximation of the mean.
## n_eff: a crude measure of effective MCMC sample size.
## Rhat: the potential scale reduction factor (at convergence, Rhat=1).
## 
##   Model parameters and generated quantities
## lambda[1]: the rate of the process of group 1.
## lambda[2]: the rate of the process of group 2.
## x_pred[1]: predicted event count of group 1 during 28.01 periods.
## x_pred[2]: predicted event count of group 2 during 19.02 periods.
## rate_diff: The difference lambda[1] - lambda[2].
## rate_ratio: The ratio lambda[1] / lambda[2].


model.code(model)
## ### Model code for the Bayesian First Aid two sample Poisson test ###
## require(rjags)
## 
## # Setting up the data
## x <- c(41, 15) 
## t <- c(28.011, 19.025) 
## 
## # The model string written in the JAGS language
## model_string <- "model {
##   for(group_i in 1:2) {
##     x[group_i] ~ dpois(lambda[group_i] * t[group_i])
##     lambda[group_i] ~ dgamma(0.5, 0.00001)
##     x_pred[group_i] ~ dpois(lambda[group_i] * t[group_i])
##   }
##   rate_diff <- lambda[1] - lambda[2]
##   rate_ratio <- lambda[1] / lambda[2]
## }"
## 
## # Running the model
## model <- jags.model(textConnection(model_string), data = list(x = x, t = t), n.chains = 3)
## samples <- coda.samples(model, c("lambda", "x_pred", "rate_diff", "rate_ratio"), n.iter=5000)
## 
## # Inspecting the posterior
## plot(samples)
## summary(samples)

It’s possible to extend the Poisson test to more than two groups: just copy-paste the R and JAGS code above, supply longer data vectors, and update the for(group_i in 1:2) loop to the required number of groups, as in the sketch below.
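
For instance, a minimal three-group sketch (the third group’s data is made up for illustration):

require(rjags)

# hypothetical third group appended to the data
x <- c(41, 15, 22)
t <- c(28.011, 19.025, 20.5)

model_string <- "model {
  for(group_i in 1:3) {
    x[group_i] ~ dpois(lambda[group_i] * t[group_i])
    lambda[group_i] ~ dgamma(0.5, 0.00001)
    x_pred[group_i] ~ dpois(lambda[group_i] * t[group_i])
  }
}"

model <- jags.model(textConnection(model_string), data = list(x = x, t = t), n.chains = 3)
samples <- coda.samples(model, c("lambda", "x_pred"), n.iter = 5000)
summary(samples)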

Zelig package

Zelig is a unified framework for a large range of statistical models. Here we’ll check its Bayesian models.

library(MCMCpack)
library(Zelig)

Bayesian Logistic Regression

A logistic model for \(p_i = P(Z_i=1|X_i)\) is

\[\text{logit}(p_i) = \log\left(\frac{p_i}{1-p_i}\right) = \beta_0 + \beta_1 x_i\]

\[\beta_0, \beta_1 \sim U(-\infty,+\infty)\]
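
The inverse of the logit link maps linear predictors back to probabilities; in R this is plogis:

# plogis is the inverse logit: plogis(z) = 1 / (1 + exp(-z))
plogis(0)        # 0.5
plogis(c(-2, 2)) # approx. 0.12 and 0.88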

The model option for zelig is model = "logit.bayes". Here’s an example of training and testing using Bayesian logistic regression (check here for more info):

head(mtcars)
##                    mpg cyl disp  hp drat    wt  qsec vs am gear carb
## Mazda RX4         21.0   6  160 110 3.90 2.620 16.46  0  1    4    4
## Mazda RX4 Wag     21.0   6  160 110 3.90 2.875 17.02  0  1    4    4
## Datsun 710        22.8   4  108  93 3.85 2.320 18.61  1  1    4    1
## Hornet 4 Drive    21.4   6  258 110 3.08 3.215 19.44  1  0    3    1
## Hornet Sportabout 18.7   8  360 175 3.15 3.440 17.02  0  0    3    2
## Valiant           18.1   6  225 105 2.76 3.460 20.22  1  0    3    1
set.seed(101)
n <- nrow(mtcars)
sample.set <- sample(1:n, 0.7*n)
train.set  <- mtcars[sample.set,]
test.set   <- mtcars[-sample.set,]

# model the probability of a manual transmission in a vehicle based on its engine horsepower & weight
z.out <- zelig(am ~ hp + wt, model = "logit.bayes", data = train.set, verbose=FALSE, cite=FALSE)
# test a new record:
x.out <- setx(z.out, wt=test.set[1,"wt"], hp=test.set[1,"hp"]) # this is test.set[1,]
s.out <- sim(z.out, x = x.out)
summary(s.out)
## 
## Model:  logit.bayes 
## Number of simulations:  1000 
## 
## Values of X
##            (Intercept) hp   wt
## Merc 450SE           1 93 2.32
## attr(,"assign")
## [1] 0 1 2
## 
## Expected Value: E(Y|X) 
##             mean  sd   50% 2.5% 97.5%
## Merc 450SE 0.934 0.1 0.974 0.63     1
## 
## Predicted Value: Y|X 
##                0     1
## Merc 450SE 0.068 0.932
test.set[1,"am"] # compare with real value
## [1] 1
# check the entire test set
ams <- rep(NA, nrow(test.set))
for(i in 1:nrow(test.set)) {
   
  x.out <- setx(z.out, wt=test.set[i,"wt"], hp=test.set[i,"hp"])
  s.out <- sim(z.out, x = x.out)
  
  ams[i] <- which.max(s.out$stats$`Predicted Value: Y|X`)-1
}
table(test.set$`am`, ams, dnn = c("real","predicted"))
##     predicted
## real 0 1
##    0 4 0
##    1 1 5
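
From the confusion matrix above, the overall accuracy on this (very small) test set is:

conf.mat <- table(test.set$am, ams, dnn = c("real", "predicted"))
sum(diag(conf.mat)) / sum(conf.mat) # 0.9 -- nine of the ten test cars classified correctly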

To check more Bayesian models from this package, consult Zelig’s manual.

Laplace’s Demon

This is another package that offers many different MCMC algorithms. Check its webpage for more info.

# to install, download from http://www.bayesian-inference.com/softwaredownload
install.packages(pkgs="path\\LaplacesDemon_[version].tar.gz", repos=NULL, type="source")

# check package documentation
vignette("BayesianInference")
vignette("LaplacesDemonTutorial")
vignette("Examples")

Let’s implement the following model (ref):

\[y_i \sim \mathcal{N}(\mu,\sigma)\]

\[\mu \sim \mathcal{N}(0,100)\]

\[\sigma \sim \text{LogNormal}(0,4)\]

and get some data:

set.seed(1337)
y <- rnorm(20,10,5)

LaplacesDemon has no special mechanism for keeping parameters inside a bounded range, so \(\sigma\) is sampled on the log scale and then exponentiated to make sure it is always positive.

library(LaplacesDemon)

model <- function(theta, data) {
  mu       <- theta[1]
  sigma    <- exp(theta[2])                           # theta[2] is log(sigma)
  log_lik  <- sum( dnorm(data$y, mu, sigma, log=T) )  # sum( log p(y_i|theta) )
  log_post <- log_lik + dnorm(mu, 0, 100, log=T) + dlnorm(sigma, 0, 4, log=T)
  
  # return a list in the format required by LaplacesDemon
  list(LP=log_post,                # log-posterior (up to a constant)
       Dev= -2*log_lik,            # deviance, used for the DIC
       Monitor=c(log_post,sigma),  # extra quantities monitored during sampling
       yhat=NA,                    # no posterior predictive samples here
       parm=theta)                 # the (possibly transformed) parameters
}

data.list <- list(N=length(y), y=y, mon.names=c("log_post","sigma"), parm.names=c("mu","log.sigma"))
mcmc.samples <- LaplacesDemon(Model=model, 
                              Data=data.list, 
                              Iterations=1e4, 
                              Algorithm="HARM", # Hit-and-Run Metropolis, www.bayesian-inference.com/mcmcharm
                              Thinning=1
                             )
## 
## Laplace's Demon was called on Mon Oct 13 12:50:10 2014
## 
## Performing initial checks...
## WARNING: Initial Values were not supplied.
## Suggestion: 1 possible instance(s) of for loops
##      were found in the Model specification. Iteration speed will
##      increase if for loops are vectorized in R or coded in a
##      faster language such as C++ via the Rcpp package.
## 
## Laplace Approximation will be used on initial values.
## Suggestion: 1  possible instance(s) of for loops
##      were found in the Model specification. Iteration speed will
##      increase if for loops are vectorized in R or coded in a
##      faster language such as C++ via the Rcpp package.
## Sample Size:  20 
## Laplace Approximation begins...
## Iteration:  10  of  100 ,   LP:  -91.2 
## Iteration:  20  of  100 ,   LP:  -85.6 
## Iteration:  30  of  100 ,   LP:  -82.8 
## Iteration:  40  of  100 ,   LP:  -79 
## Iteration:  50  of  100 ,   LP:  -72.9 
## Estimating the Covariance Matrix
## Creating Summary from Point-Estimates
## Estimating Log of the Marginal Likelihood
## Laplace Approximation is finished.
## 
## The covariance matrix from Laplace Approximation has been scaled
## for Laplace's Demon, and the posterior modes are now the initial
## values for Laplace's Demon.
## 
## Algorithm: Hit-And-Run Metropolis 
## 
## Laplace's Demon is beginning to update...
## Iteration: 100,   Proposal: Multivariate,   LP:-73.5
## Iteration: 200,   Proposal: Multivariate,   LP:-72.9
## Iteration: 300,   Proposal: Multivariate,   LP:-72.7
## Iteration: 400,   Proposal: Multivariate,   LP:-74.5
## Iteration: 500,   Proposal: Multivariate,   LP:-73
## ...
## Iteration: 9800,   Proposal: Multivariate,   LP:-75.6
## Iteration: 9900,   Proposal: Multivariate,   LP:-73.2
## Iteration: 10000,   Proposal: Multivariate,   LP:-72.6
## 
## Assessing Stationarity
## Assessing Thinning and ESS
## Creating Summaries
## Estimating Log of the Marginal Likelihood
## Creating Output
## 
## Laplace's Demon has finished.

We can present the results from the sampling:

Consort(mcmc.samples)
## 
## #############################################################
## # Consort with Laplace's Demon                              #
## #############################################################
## Call:
## LaplacesDemon(Model = model, Data = data.list, Iterations = 10000, 
##     Thinning = 1, Algorithm = "HARM")
## 
## Acceptance Rate: 0.4179
## Algorithm: Hit-And-Run Metropolis
## Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
##        mu log.sigma 
##   1.49688   0.02556 
## 
## Covariance (Diagonal) History: (NOT SHOWN HERE)
## Deviance Information Criterion (DIC):
##          All Stationary
## Dbar 127.733    127.733
## pD     1.866      1.866
## DIC  129.599    129.599
## Initial Values:
##        mu log.sigma 
##    12.719     1.699 
## 
## Iterations: 10000
## Log(Marginal Likelihood): -62.64
## Minutes of run-time: 0.03
## Model: (NOT SHOWN HERE)
## Monitor: (NOT SHOWN HERE)
## Parameters (Number of): 2
## Posterior1: (NOT SHOWN HERE)
## Posterior2: (NOT SHOWN HERE)
## Recommended Burn-In of Thinned Samples: 0
## Recommended Burn-In of Un-thinned Samples: 0
## Recommended Thinning: 40
## Specs: (NOT SHOWN HERE)
## Status is displayed every 100 iterations
## Summary1: (SHOWN BELOW)
## Summary2: (SHOWN BELOW)
## Thinned Samples: 10000
## Thinning: 1
## 
## 
## Summary of All Samples
##              Mean     SD     MCSE    ESS      LB  Median      UB
## mu         12.355 1.2235 0.127777  103.6  10.079  12.343  14.871
## log.sigma   1.740 0.1599 0.005314 1636.8   1.453   1.732   2.079
## Deviance  127.733 1.9316 0.117931  264.3 125.826 127.141 132.812
## log_post  -73.539 0.9981 0.063120  249.6 -76.262 -73.238 -72.561
## sigma       5.774 0.9563 0.032264 1594.9   4.277   5.651   7.998
## 
## 
## Summary of Stationary Samples
##              Mean     SD     MCSE    ESS      LB  Median      UB
## mu         12.355 1.2235 0.127777  103.6  10.079  12.343  14.871
## log.sigma   1.740 0.1599 0.005314 1636.8   1.453   1.732   2.079
## Deviance  127.733 1.9316 0.117931  264.3 125.826 127.141 132.812
## log_post  -73.539 0.9981 0.063120  249.6 -76.262 -73.238 -72.561
## sigma       5.774 0.9563 0.032264 1594.9   4.277   5.651   7.998
## 
## Demonic Suggestion
## 
## Due to the combination of the following conditions,
## 
## 1. Hit-And-Run Metropolis
## 2. The acceptance rate (0.4179) is within the interval [0.15,0.5].
## 3. At least one target MCSE is >= 6.27% of its marginal posterior
##    standard deviation.
## 4. Each target distribution has an effective sample size (ESS)
##    of at least 100.
## 5. Each target distribution became stationary by
##    1 iteration.
## 
## Laplace's Demon has not been appeased, and suggests
## copy/pasting the following R code into the R console,
## and running it.
## 
## Initial.Values <- as.initial.values(mcmc.samples)
## mcmc.samples <- LaplacesDemon(Model, Data=data.list, Initial.Values,
##      Covar=NULL, Iterations=4e+05, Status=333333, Thinning=40,
##      Algorithm="HARM", Specs=list(alpha.star=NA, B=NULL)
## 
## Laplace's Demon is finished consorting.
plot(mcmc.samples, BurnIn=1e3, data.list)


Modelling with this package is less user-friendly, since there is no declarative language like in BUGS, JAGS or Stan. On the other hand, it offers many configuration options.