
This function processes the synthetic datasets generated by simulate_list(). For each simulated dataset, it fits every model specified in the fit_model list. In essence, it iteratively calls optimize_para() on each simulated dataset.

The fitting procedure is analogous to that performed by fit_p(), and it likewise uses parallel computation across subjects to substantially accelerate parameter estimation.
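
The following is a conceptual sketch of that nested iteration, not the package's actual implementation. It assumes simulated_list is the output of simulate_list(), fit_model is a list of model functions, and lower/upper are the parameter bounds; the remaining values are placeholders.

# Conceptual sketch only; objects `simulated_list`, `fit_model`,
# `lower`, and `upper` are assumed to already exist.
results <- list()
for (i in seq_along(simulated_list)) {
  for (j in seq_along(fit_model)) {
    # each simulated dataset is fitted with each candidate model
    results[[length(results) + 1]] <- binaryRL::optimize_para(
      data = simulated_list[[i]],
      id = 1,
      n_trials = 360,
      n_params = length(lower),
      obj_func = fit_model[[j]],
      lower = lower,
      upper = upper,
      iteration = 10,
      algorithm = "L-BFGS-B"
    )
  }
}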

Usage

recovery_data(
  list,
  id = 1,
  n_trials,
  n_params,
  funcs = NULL,
  policy,
  model_name,
  fit_model,
  lower,
  upper,
  initial_params = NA,
  initial_size = 50,
  iteration = 10,
  seed = 123,
  nc = 1,
  algorithm
)

Arguments

list

[list]

A list generated by function simulate_list()

id

[vector]

Specifies which subject's data to use. In parameter and model recovery analyses, the specific subject ID is usually irrelevant: although the trial order may vary somewhat across subjects, the sequence of reward feedback is typically pseudo-random, so any subject's data serves equally well as a template.

When id = NULL, the program automatically detects the subject IDs present in the dataset, randomly selects one of them as a sample, and performs the parameter and model recovery based on that subject's data.

default: id = 1

n_trials

[integer]

The total number of trials in your experiment.

n_params

[integer]

The number of free parameters in your model.

funcs

[character]

A character vector containing the names of all user-defined functions required for the computation. When parallel computation is enabled (i.e., `nc > 1`), user-defined models and their custom functions might not be automatically accessible within the parallel environment.

Therefore, if you have created your own reinforcement learning model that modifies any of the package's default functions (util_func = func_gamma, rate_func = func_eta, expl_func = func_epsilon, bias_func = func_pi, prob_func = func_tau), you must explicitly provide the names of your custom functions as a character vector here.
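
For example, if your model replaces the utility and learning-rate functions with your own (the function names below are hypothetical), you would pass:

funcs = c("my_util_func", "my_rate_func")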

policy

[character]

Specifies the learning policy to be used. This determines how the model updates action values based on observed or simulated choices. It can be either "off" or "on".

  • Off-Policy (Q-learning): This is the most common approach for modeling reinforcement learning in Two-Alternative Forced Choice (TAFC) tasks. In this mode, the model's goal is to learn the underlying value of each option by observing the human participant's behavior. It achieves this by consistently updating the value of the option that the human actually chose. The focus is on understanding the value representation that likely drove the participant's decisions.

  • On-Policy (SARSA): In this mode, the target policy and the behavior policy are identical. The model first computes the selection probability of each option from the current values. Critically, it then uses these probabilities to sample its own action, and the value update is performed on the action that the model itself selected. This approach focuses on directly mimicking the agent's stochastic choice patterns, rather than only learning the underlying values from a fixed sequence of actions (a minimal sketch contrasting the two modes follows below).

default: policy = "off"
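
The sketch below contrasts the two modes conceptually; it is not the package's internal code. It assumes two option values Q, a softmax inverse temperature tau, a learning rate eta, the observed human choice human_choice, and the received reward r.

# Conceptual sketch only; all variable names are illustrative assumptions.
p <- exp(tau * Q) / sum(exp(tau * Q))      # softmax choice probabilities

if (policy == "off") {
  a <- human_choice                        # off-policy: update the option the human chose
} else {
  a <- sample(seq_along(Q), 1, prob = p)   # on-policy: the model samples its own action
}

Q[a] <- Q[a] + eta * (r - Q[a])            # delta-rule update on the selected action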

model_name

[character]

The name of your model.

fit_model

[function]

The model function(s) to be fitted to each simulated dataset generated by simulate_list().

lower

[vector]

Lower bounds of free parameters

upper

[vector]

Upper bounds of free parameters

initial_params

[numeric]

Initial values for the free parameters that the optimization algorithm will search from. These are primarily relevant when using algorithms that require an explicit starting point, such as L-BFGS-B. If not specified, the function will automatically generate initial values close to zero.

default: initial_params = NA.

initial_size

[integer]

This parameter corresponds to the population size in genetic algorithms (GA). It specifies the number of initial candidate solutions that the algorithm starts with for its evolutionary search. This parameter is only required for optimization algorithms that operate on a population, such as `GA` or `DEoptim`.

default: initial_size = 50.

iteration

[integer]

The number of iterations the optimization algorithm will perform when searching for the best-fitting parameters during the fitting phase. A higher number of iterations may increase the likelihood of finding a global optimum but also increases computation time.

seed

[integer]

Random seed. This ensures that the results are reproducible and remain the same each time the function is run.

default: seed = 123

nc

[integer]

Number of cores to use for parallel processing. Since fitting optimal parameters for each subject is an independent task, parallel computation can significantly speed up the fitting process:

  • `nc = 1`: The fitting proceeds sequentially. Parameters for one subject are fitted completely before moving to the next subject.

  • `nc > 1`: The fitting is performed in parallel across subjects. For example, if `nc = 4`, the algorithm will simultaneously fit data for four subjects. Once these are complete, it will proceed to fit the next batch of subjects (e.g., subjects 5-8), and so on, until all subjects are processed.

default: nc = 1

algorithm

[character] Choose an optimization algorithm from: L-BFGS-B, GenSA, GA, DEoptim, PSO, Bayesian, CMA-ES.

In addition, any algorithm from the nloptr package is also supported. If your chosen nloptr algorithm requires a local search, you need to supply a character vector of length two: the first element specifies the algorithm used for the global search, and the second the algorithm used for the local search.
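
For instance, a single string selects one algorithm, while a two-element vector pairs an nloptr global algorithm with a local optimizer (the nloptr names below illustrate this pattern; check nloptr's documentation for the algorithms you intend to use):

# one algorithm
algorithm = "L-BFGS-B"

# nloptr global search plus local search
algorithm = c("NLOPT_GN_MLSL", "NLOPT_LN_BOBYQA")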

Value

A data frame containing the results of the parameter recovery and model recovery analyses.

Examples

if (FALSE) { # \dontrun{
binaryRL.res <- binaryRL::optimize_para(
  data = Mason_2024_G2,
  id = 1,
  n_params = 3,
  n_trials = 360,
  obj_func = binaryRL::RSTD,
  lower = c(0, 0, 0),
  upper = c(1, 1, 10),
  iteration = 100,
  algorithm = "L-BFGS-B"
)

summary(binaryRL.res)
} # }
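
The example above shows a single call to optimize_para(), the function that recovery_data() invokes internally. The sketch below illustrates how recovery_data() itself might be called, using the argument names from the Usage block; the model, bounds, and other values are illustrative assumptions, and simulated stands in for the output of simulate_list(). It is not a verbatim package example.

if (FALSE) { # \dontrun{
recovery <- binaryRL::recovery_data(
  list = simulated,
  id = 1,
  n_trials = 360,
  n_params = 3,
  policy = "off",
  model_name = c("RSTD"),
  fit_model = list(binaryRL::RSTD),
  lower = c(0, 0, 0),
  upper = c(1, 1, 10),
  iteration = 100,
  seed = 123,
  nc = 1,
  algorithm = "L-BFGS-B"
)
} # }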