Step 3: Optimizing parameters to fit real data
Usage
fit_p(
  estimate,
  data,
  colnames,
  behrule,
  ids = NULL,
  funcs = NULL,
  priors = NULL,
  settings = NULL,
  models,
  algorithm,
  lowers,
  uppers,
  control,
  ...
)

Arguments
- estimate
The estimation method to use, see estimate
- data
A data frame in which each row represents a single trial, see data
- colnames
Column names in the data frame, see colnames
- behrule
The agent’s implicitly formed internal rule, see behrule
- ids
Subject IDs of the participants whose data are to be fitted (see the single-model sketch after this list).
- funcs
The functions forming the reinforcement learning model, see funcs
- priors
Prior probability density function of the free parameters, see priors
- settings
Other model settings, see settings
- models
The reinforcement learning models to be fitted.
- algorithm
The optimization algorithm to use, drawn from the algorithm packages that multiRL supports, see algorithm
- lowers
Lower bounds of the free parameters in each model, given as one numeric vector per model.
- uppers
Upper bounds of the free parameters in each model, given as one numeric vector per model.
- control
Settings that manage various aspects of the iterative process, see control
- ...
Additional arguments passed to internal functions.
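The list-valued arguments are parallel: the i-th entries of models, lowers, and uppers belong together, and each bounds vector holds one element per free parameter of that model. As a minimal sketch of a single-model, single-participant call (reusing the data, column mapping, and behavioural rule from the full example below; the value passed to ids and the single-element settings name are assumptions for illustration, not documented behaviour):

# Minimal sketch: fit only the TD model for a single participant.
# Assumption: `ids` accepts subject identifiers as they appear in the data;
# the value 1 below is purely hypothetical.
fit.single <- multiRL::fit_p(
  estimate = "MLE",
  data = multiRL::TAB,
  ids = 1,                      # hypothetical subject identifier
  colnames = list(
    object = c("L_choice", "R_choice"),
    reward = c("L_reward", "R_reward"),
    action = "Sub_Choose"
  ),
  behrule = list(
    cue = c("A", "B", "C", "D"),
    rsp = c("A", "B", "C", "D")
  ),
  models = list(multiRL::TD),   # one model ...
  settings = list(name = "TD"),
  algorithm = "NLOPT_GN_MLSL",
  lowers = list(c(0, 0)),       # ... so one bounds vector,
  uppers = list(c(1, 5)),       #     one element per free parameter
  control = list(core = 1, iter = 100)
)

The full three-model call below follows the same pattern.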
Example
# fitting
fitting.MLE <- multiRL::fit_p(
  estimate = "MLE",
  data = multiRL::TAB,
  colnames = list(
    object = c("L_choice", "R_choice"),
    reward = c("L_reward", "R_reward"),
    action = "Sub_Choose"
  ),
  behrule = list(
    cue = c("A", "B", "C", "D"),
    rsp = c("A", "B", "C", "D")
  ),
  models = list(multiRL::TD, multiRL::RSTD, multiRL::Utility),
  settings = list(name = c("TD", "RSTD", "Utility")),
  algorithm = "NLOPT_GN_MLSL",
  lowers = list(c(0, 0), c(0, 0, 0), c(0, 0, 0)),
  uppers = list(c(1, 5), c(1, 1, 5), c(1, 5, 1)),
  control = list(core = 10, iter = 100)
)
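Once fitting finishes, the object stored in fitting.MLE can be examined with standard R tools; a minimal sketch that assumes nothing about its internal layout beyond it being an ordinary R object:

# Inspect the top-level structure of the fitted result
# (its exact fields depend on fit_p's output format, which is not documented here).
str(fitting.MLE, max.level = 2)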