Step 4: Replaying the experiment with optimal parameters
Usage
rpl_e(
  result,
  free_params = NULL,
  data,
  colnames,
  behrule,
  ids = NULL,
  models,
  funcs = NULL,
  priors = NULL,
  settings = NULL,
  ...
)
Arguments
- result
Result from rcv_d or fit_p.
- free_params
To prevent ambiguity regarding the free parameters, their names can be defined explicitly by the user (see the sketch after this argument list).
- data
A data frame in which each row represents a single trial, see data
- colnames
Column names in the data frame, see colnames
- behrule
The agent’s implicitly formed internal rule, see behrule
- ids
The subject IDs of the participants whose data are to be replayed.
- models
The reinforcement learning models to replay, see models
- funcs
The functions forming the reinforcement learning model, see funcs
- priors
Prior probability density functions of the free parameters, see priors
- settings
Other model settings, see settings
- ...
Additional arguments passed to internal functions.
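When automatic detection of the parameter names is ambiguous, free_params can spell them out per model. A minimal sketch, assuming rpl_e accepts one character vector of names per model, ordered to match the models list; every parameter name below is illustrative, not defined by the package:
# Hedged sketch: one character vector of free-parameter names per model,
# in the same order as `models`. All names here are assumptions.
free_params <- list(
  c("eta", "tau"),                # TD: learning rate, inverse temperature
  c("eta_pos", "eta_neg", "tau"), # RSTD: separate learning rates
  c("eta", "gamma", "tau")        # Utility: adds a curvature parameter
)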
Example
# Task data and column/rule definitions shared by both replays
data <- multiRL::TAB
colnames <- list(
  object = c("L_choice", "R_choice"),
  reward = c("L_reward", "R_reward"),
  action = "Sub_Choose"
)
behrule <- list(
  cue = c("A", "B", "C", "D"),
  rsp = c("A", "B", "C", "D")
)
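# Replay the experiment using the parameter-recovery result (rcv_d output)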
replay.recovery <- multiRL::rpl_e(
  result = recovery.MLE,
  data = data,
  colnames = colnames,
  behrule = behrule,
  models = list(multiRL::TD, multiRL::RSTD, multiRL::Utility),
  settings = list(name = c("TD", "RSTD", "Utility")),
  omit = c("data", "funcs")
)
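# Replay the experiment using the empirical fitting result (fit_p output)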
replay.fitting <- multiRL::rpl_e(
  result = fitting.MLE,
  data = data,
  colnames = colnames,
  behrule = behrule,
  models = list(multiRL::TD, multiRL::RSTD, multiRL::Utility),
  settings = list(name = c("TD", "RSTD", "Utility")),
  omit = c("funcs")
)
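The returned replay objects can then be inspected with base R. The internal layout of rpl_e's return value is not documented in this section, so the probes below are generic rather than part of the package's API:
# Generic base-R probes; the object's internal structure is an assumption.
str(replay.fitting, max.level = 1)  # list the top-level components
summary(replay.fitting)             # default summary, if a method exists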