Step 1: Building the reinforcement learning model
Arguments
- data
A data frame in which each row represents a single trial (a toy row is sketched after this list), see data
- colnames
Column names in the data frame, see colnames
- behrule
The agent’s implicitly formed internal rule, see behrule
- funcs
The functions forming the reinforcement learning model, see funcs
- params
Parameters used by the model’s internal functions, see params
- priors
Prior probability density function of the free parameters, see priors
- settings
Other model settings, see settings
- engine
Specifies whether the core Markov Decision Process (MDP) update loop is executed in C++ or in R.
- ...
Additional arguments passed to internal functions.
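For orientation, the example below maps the columns of the built-in TAB dataset onto these roles. A single trial row of such a data frame might look like the following minimal sketch; the values are illustrative and not taken from TAB:
toy_trial <- data.frame(
  Subject = 1,                                # subid: subject identifier
  Block = 1, Trial = 1,                       # block and trial indices
  L_choice = "A", R_choice = "B",             # objects presented (object)
  L_reward = 1, R_reward = 0,                 # reward attached to each object (reward)
  Sub_Choose = "A",                           # the response actually made (action)
  Frame = "Gain", NetWorth = 100, RT = 0.52   # extra columns (exinfo)
)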
Value
An S4 object of class multiRL.model, with the following slots:
- input
An S4 object of class multiRL.input, containing the raw data, column specifications, parameters and ...
- behrule
An S4 object of class multiRL.behrule, defining the latent learning rules.
- result
An S4 object of class multiRL.result, storing trial-level outputs of the Markov Decision Process.
- sumstat
An S4 object of class multiRL.sumstat, providing summary statistics across different estimation methods.
- extra
A list containing additional user-defined information.
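Because the return value is an S4 object, its components can be read with the @ operator. For instance, given the fitted object multiRL.model created in the example below (a sketch assuming standard S4 slot access, with the slot names listed above):
multiRL.model@input    # raw data, column specifications, parameters
multiRL.model@behrule  # latent learning rules
multiRL.model@result   # trial-level MDP outputs
multiRL.model@sumstat  # summary statistics
multiRL.model@extra    # user-defined extras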
Examples
multiRL.model <- multiRL::run_m(
  data = multiRL::TAB[multiRL::TAB[, "Subject"] == 1, ],  # subject 1 only
  behrule = list(                  # the agent's internal cue-response rule
    cue = c("A", "B", "C", "D"),
    rsp = c("A", "B", "C", "D")
  ),
  colnames = list(                 # map data-frame columns to model roles
    subid = "Subject", block = "Block", trial = "Trial",
    object = c("L_choice", "R_choice"),
    reward = c("L_reward", "R_reward"),
    action = "Sub_Choose",
    exinfo = c("Frame", "NetWorth", "RT")
  ),
  params = list(
    free = list(                   # free parameters
      alpha = 0.5,
      beta = 0.5
    ),
    fixed = list(                  # fixed parameters
      gamma = 1,
      delta = 0.1,
      epsilon = NA_real_,
      zeta = 0
    ),
    constant = list(               # model constants
      seed = 123,
      Q0 = NA_real_,
      reset = NA_real_,
      lapse = 0.01,
      threshold = 1,
      bonus = 0,
      weight = 1,
      capacity = 0,
      sticky = 0
    )
  ),
  priors = list(                   # log prior densities for the free parameters
    alpha = function(x) stats::dbeta(x, shape1 = 2, shape2 = 2, log = TRUE),
    beta = function(x) stats::dexp(x, rate = 1, log = TRUE)
  ),
  settings = list(
    name = "TD",
    mode = "fitting",
    estimate = "MLE",
    policy = "off",
    system = c("RL", "WM")
  ),
  engine = "R"                     # run the MDP update loop in R rather than C++
)
multiRL.summary <- multiRL::summary(multiRL.model)
#> Model Fit:
#> Accuracy: 100%
#> Log-Likelihood: -369.89
#> Log-Prior Probability: -0.09
#> Log-Posterior Probability: -369.99
#> AIC: 743.79
#> BIC: 751.56
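The information criteria in the printout follow their standard definitions. As a quick arithmetic check against the reported log-likelihood, assuming the two free parameters alpha and beta are what is counted (our assumption, not package output):
k <- 2           # free parameters: alpha, beta
LL <- -369.89    # reported log-likelihood
2 * k - 2 * LL   # AIC = 743.78, matching the printed 743.79 up to rounding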