Step 1: Building the reinforcement learning model

Usage

run_m(
  data,
  colnames = list(),
  behrule = list(),
  funcs = list(),
  params = list(),
  priors = list(),
  settings = list(),
  engine = "Cpp",
  ...
)

Arguments

data

A data frame in which each row represents a single trial (a sketch of the expected layout follows this argument list), see data

colnames

Column names in the data frame, see colnames

behrule

The agent’s implicitly formed internal rule, see behrule

funcs

The functions forming the reinforcement learning model, see funcs

params

Parameters used by the model’s internal functions, see params

priors

Prior probability density function of the free parameters, see priors

settings

Other model settings, see settings

engine

Specifies whether the core MDP update loop is executed in C++ ("Cpp", the default) or in R.

...

Additional arguments passed to internal functions.
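
For orientation, a data frame with one row per trial and the column layout used in the example below might look like the sketch that follows. The values are made up purely for illustration; the bundled multiRL::TAB dataset used in the example already follows this layout (plus the extra columns passed via exinfo).

 # Hypothetical two-trial excerpt (illustrative values only)
 demo_trials <- data.frame(
   Subject    = c(1, 1),       # participant identifier
   Block      = c(1, 1),       # block number
   Trial      = c(1, 2),       # trial number within the block
   L_choice   = c("A", "C"),   # object offered on the left
   R_choice   = c("B", "D"),   # object offered on the right
   L_reward   = c(0, 1),       # reward if the left object is chosen
   R_reward   = c(1, 0),       # reward if the right object is chosen
   Sub_Choose = c("B", "C")    # the option the subject actually chose
 )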

Example

 # Fit a TD model to subject 1's trials by maximum likelihood,
 # with the core update loop running in C++
 multiRL.model <- multiRL::run_m(
   data = multiRL::TAB[multiRL::TAB[, "Subject"] == 1, ],
   behrule = list(
     cue = c("A", "B", "C", "D"),
     rsp = c("A", "B", "C", "D")
   ),
   colnames = list(
     subid = "Subject", block = "Block", trial = "Trial",
     object = c("L_choice", "R_choice"),
     reward = c("L_reward", "R_reward"),
     action = "Sub_Choose",
     exinfo = c("Frame", "NetWorth", "RT")
   ),
   params = list(
     free = list(          # free parameters, estimated during fitting
       alpha = 0.5,
       beta = 0.5
     ),
     fixed = list(         # parameters held fixed (not estimated)
       gamma = 1,
       delta = 0.1,
       epsilon = NA_real_,
       zeta = 0
     ),
     constant = list(      # model constants
       Q0 = NA,
       lapse = 0.01,
       threshold = 1,
       bonus = 0
     )
   ),
   priors = list(          # log prior densities for the free parameters
     alpha = function(x) {stats::dbeta(x, shape1 = 2, shape2 = 2, log = TRUE)},
     beta = function(x) {stats::dexp(x, rate = 1, log = TRUE)}
   ),
   settings = list(
     name = "TD",
     mode = "fitting",
     estimate = "MLE",
     policy = "off"
   ),
   engine = "Cpp"
 )

 # Summarise the fitted model
 multiRL.summary <- multiRL::summary(multiRL.model)
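
As the "Step 1" in the title indicates, run_m() only builds and fits the model; the returned model object (and its summary) is what the subsequent steps of the multiRL workflow operate on.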