Example Datasets

Mason_2024_Exp1
Experiment 1 from Mason et al. (2024)
Mason_2024_Exp2
Experiment 2 from Mason et al. (2024)
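Both datasets ship with the package, so they can be loaded and inspected directly. The snippet below is a minimal sketch assuming the standard data() access and that each dataset is a trial-level data frame; see ?Mason_2024_Exp1 for the documented columns.

    # Load and inspect one of the bundled example datasets (assumes the
    # usual data() mechanism; see ?Mason_2024_Exp1 for the actual columns)
    library(binaryRL)
    data("Mason_2024_Exp1", package = "binaryRL")
    str(Mason_2024_Exp1)
    head(Mason_2024_Exp1)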

Steps

run_m()
Step 1: Building the reinforcement learning model
rcv_d()
Step 2: Generating fake data for parameter and model recovery
fit_p()
Step 3: Optimizing parameters to fit real data
rpl_e()
Step 4: Replaying the experiment with optimal parameters
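The four steps form one pipeline: build and check a model with run_m(), test whether its parameters and model identity can be recovered from simulated data with rcv_d(), fit the model to the real data with fit_p(), and replay the experiment with the fitted parameters via rpl_e(). The skeleton below only records that order; the calls are left as comments because their argument lists are not shown here, so consult ?run_m, ?rcv_d, ?fit_p and ?rpl_e for the actual signatures.

    # Skeleton of the four-step workflow. The calls are commented out and
    # their arguments omitted; see the help pages for the real interfaces.
    library(binaryRL)
    data("Mason_2024_Exp1", package = "binaryRL")   # example input data

    # Step 1: build the reinforcement learning model
    # model <- run_m(...)

    # Step 2: simulate fake data and check parameter / model recovery
    # recovery <- rcv_d(...)

    # Step 3: optimize the free parameters against the real data
    # fit <- fit_p(...)

    # Step 4: replay the experiment with the optimal parameters
    # replay <- rpl_e(...)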

Models

TD()
Model: TD
RSTD()
Model: RSTD
Utility()
Model: Utility
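In their standard textbook forms, TD learns option values with a single learning rate, RSTD (risk-sensitive TD) uses separate learning rates for positive and negative prediction errors, and the Utility model passes rewards through a power utility function before learning. The sketch below writes these three update rules as plain R functions for illustration only; it is not the package's code, and the constructors above may parameterize the models differently.

    # Standard forms of the three value-update rules (illustrative sketch,
    # not the package's internal implementation).

    # TD: one learning rate eta applied to the prediction error
    update_td <- function(v, reward, eta) {
      v + eta * (reward - v)
    }

    # RSTD: separate learning rates for positive and negative prediction errors
    update_rstd <- function(v, reward, eta_pos, eta_neg) {
      pe <- reward - v
      eta <- if (pe >= 0) eta_pos else eta_neg
      v + eta * pe
    }

    # Utility: rewards pass through a power utility function before learning
    update_utility <- function(v, reward, eta, gamma) {
      u <- sign(reward) * abs(reward)^gamma
      v + eta * (u - v)
    }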

Functions

func_gamma()
Function: Utility Function
func_eta()
Function: Learning Rate
func_epsilon()
Function: Exploration Strategy
func_pi()
Function: Upper-Confidence-Bound
func_tau()
Function: Soft-Max Function
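These functions name the building blocks that the models above are assembled from: a utility transform (func_gamma), a learning-rate rule (func_eta), an exploration strategy (func_epsilon), an upper-confidence-bound bonus (func_pi), and a soft-max choice rule (func_tau). As a concrete reference point, the sketch below writes out the standard binary soft-max rule; the function name and the inverse-temperature parameterization are illustrative assumptions, not the signature of func_tau().

    # Standard soft-max choice rule for a two-option task (illustration only;
    # func_tau() may parameterize this differently).
    softmax_binary <- function(v_left, v_right, tau) {
      # tau acts as an inverse temperature: larger tau -> more deterministic
      p_left <- 1 / (1 + exp(-tau * (v_left - v_right)))
      c(left = p_left, right = 1 - p_left)
    }

    softmax_binary(v_left = 0.6, v_right = 0.4, tau = 5)
    # returns approximately c(left = 0.731, right = 0.269)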

Processes

optimize_para()
Process: Optimizing Parameters
simulate_list()
Process: Simulating Fake Data
recovery_data()
Process: Recovering Fake Data
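Together these processes implement the usual recovery check: simulate data from known parameters, refit the simulated data, and compare the recovered values with the generating ones. The sketch below shows that generic logic on a toy two-armed bandit with a TD learner and soft-max choices; it deliberately avoids the package's API, so every function and parameter name in it is illustrative.

    # Generic parameter-recovery logic on a toy two-armed bandit
    # (illustration only; not the package's simulate_list()/recovery_data()).
    set.seed(1)

    simulate_agent <- function(eta, tau, n_trials = 300, p_reward = c(0.7, 0.3)) {
      v <- c(0, 0)
      choice <- reward <- integer(n_trials)
      for (t in seq_len(n_trials)) {
        p1 <- 1 / (1 + exp(-tau * (v[1] - v[2])))     # soft-max choice probability
        choice[t] <- if (runif(1) < p1) 1L else 2L
        reward[t] <- rbinom(1, 1, p_reward[choice[t]])
        v[choice[t]] <- v[choice[t]] + eta * (reward[t] - v[choice[t]])  # TD update
      }
      data.frame(choice = choice, reward = reward)
    }

    neg_log_lik <- function(par, data) {
      eta <- par[1]; tau <- par[2]
      v <- c(0, 0); nll <- 0
      for (t in seq_len(nrow(data))) {
        p1 <- 1 / (1 + exp(-tau * (v[1] - v[2])))
        p_choice <- if (data$choice[t] == 1L) p1 else 1 - p1
        nll <- nll - log(p_choice + 1e-10)
        v[data$choice[t]] <- v[data$choice[t]] +
          eta * (data$reward[t] - v[data$choice[t]])
      }
      nll
    }

    true_par <- c(eta = 0.30, tau = 5)
    fake <- simulate_agent(true_par[1], true_par[2])
    fit <- optim(c(0.5, 1), neg_log_lik, data = fake, method = "L-BFGS-B",
                 lower = c(0.01, 0.1), upper = c(1, 20))
    rbind(true = true_par, recovered = fit$par)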

Summary

summary(<binaryRL>)
S3 summary method for objects of class binaryRL