The structure of eta depends on the model type:

- Temporal Difference (TD) model: eta is a single numeric value representing the learning rate.
- Risk-Sensitive Temporal Difference (RSTD) model: eta is a numeric vector of length two, where eta[1] is the learning rate for "bad" outcomes, meaning the reward is lower than the expected value, and eta[2] is the learning rate for "good" outcomes, meaning the reward is higher than the expected value (this matches the default func_eta shown in the Examples).
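For example (plain R; the numeric values below are purely illustrative):

eta <- 0.3          # TD: a single scalar learning rate
eta <- c(0.3, 0.7)  # RSTD: eta[1] for "bad" outcomes, eta[2] for "good" outcomes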
Usage
func_eta(
  i,
  L_freq,
  R_freq,
  L_pick,
  R_pick,
  L_value,
  R_value,
  var1 = NA,
  var2 = NA,
  value,
  utility,
  reward,
  occurrence,
  eta,
  alpha,
  beta
)
Arguments
- i
The current row number.
- L_freq
The number of times the left option has appeared.
- R_freq
The number of times the right option has appeared.
- L_pick
The number of times the left option was picked.
- R_pick
The number of times the right option was picked.
- L_value
The current value of the left option.
- R_value
The current value of the right option.
- var1
[character] Column name of extra variable 1. If your model requires information beyond the reward and the expected value, such as whether the choice frame is Gain or Loss, you can pass that column (e.g., 'Frame') into the model as var1.
default: var1 = "Extra_Var1"
- var2
[character] Column name of extra variable 2. If a single extra variable (var1) does not meet your needs, you can pass a second column into the model as var2.
default: var2 = "Extra_Var2"
- value
The expected value of the stimulus in the subject's mind at this point in time.
- utility
The subjective value that the subject assigns to the objective reward.
- reward
The objective reward received by the subject after selecting a stimulus.
- occurrence
The number of times the same stimulus has been chosen.
- eta
[numeric] Parameters used in the Learning Rate Function, rate_func, representing the rate at which the subject updates the difference (prediction error) between the reward and the expected value held in mind. The structure of eta depends on the model type:

For the Temporal Difference (TD) model, a single learning rate is used throughout the experiment:
$$V_{new} = V_{old} + \eta \cdot (R - V_{old})$$

For the Risk-Sensitive Temporal Difference (RSTD) model, two different learning rates are used depending on whether the reward is lower or higher than the expected value:
$$V_{new} = V_{old} + \eta_{-} \cdot (R - V_{old}), \quad R < V_{old}$$
$$V_{new} = V_{old} + \eta_{+} \cdot (R - V_{old}), \quad R > V_{old}$$

TD: eta = 0.3
RSTD: eta = c(0.3, 0.7), where eta[1] corresponds to η− ("bad" outcomes) and eta[2] to η+ ("good" outcomes), matching the default func_eta shown in the Examples; see also the numeric sketch after this argument list.
- alpha
[vector] Extra parameters that may be used in customized functions.
- beta
[vector] Extra parameters that may be used in customized functions.
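To make the update rules above concrete, here is a minimal plain-R sketch (the numbers are illustrative assumptions, not package defaults) applying one TD update and one RSTD update:

# TD update: V_new = V_old + eta * (R - V_old)
V_old  <- 10
reward <- 16
eta    <- 0.3
V_old + eta * (reward - V_old)                  # 10 + 0.3 * 6 = 11.8

# RSTD update: select the learning rate first, then update
eta  <- c(0.3, 0.7)
rate <- if (reward < V_old) eta[1] else eta[2]  # "good" outcome here -> eta[2]
V_old + rate * (reward - V_old)                 # 10 + 0.7 * 6 = 14.2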
Note
When customizing these functions, please do not modify the arguments. Instead, modify only the `if-else` statements or the internal logic to adapt the function to your needs. A customization sketch follows the Examples below.
Examples
if (FALSE) { # \dontrun{
func_eta <- function(
  # Trial number
  i,
  # Number of times each option has appeared
  L_freq,
  R_freq,
  # Number of times each option has been chosen
  L_pick,
  R_pick,
  # Current value of each option
  L_value,
  R_value,
  # Extra variables
  var1 = NA,
  var2 = NA,
  # Expected value for this stimulus
  value,
  # Subjective utility
  utility,
  # Reward observed after choice
  reward,
  # Occurrence count for this stimulus
  occurrence,
  # Free parameter: the learning rate(s)
  eta,
  # Extra parameters
  alpha,
  beta
){
  ################################# [ TD ] ####################################
  if (length(eta) == 1) {
    # A single learning rate is applied on every trial
    eta <- as.numeric(eta)
  ################################ [ RSTD ] ###################################
  } else if (length(eta) > 1 && utility < value) {
    # "Bad" outcome (utility below the expected value): use eta[1]
    eta <- eta[1]
  } else if (length(eta) > 1 && utility >= value) {
    # "Good" outcome (utility at or above the expected value): use eta[2]
    eta <- eta[2]
  ################################ [ ERROR ] ##################################
  } else {
    eta <- "ERROR" # Error check: eta has an unexpected structure
  }
  return(eta)
}
} # }
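As an illustration of the customization the Note describes, the following hypothetical variant (not part of the package) replaces the internal logic with a learning rate that decays each time the same stimulus is encountered, while leaving the argument list untouched:

if (FALSE) { # \dontrun{
# Hypothetical: the effective learning rate shrinks with repeated
# encounters, so early trials drive larger value updates than later ones.
func_eta <- function(
  i,
  L_freq, R_freq,
  L_pick, R_pick,
  L_value, R_value,
  var1 = NA, var2 = NA,
  value, utility, reward, occurrence,
  eta, alpha, beta
){
  # eta: base learning rate; occurrence: prior encounters of this stimulus
  eta <- as.numeric(eta) / (1 + occurrence)
  return(eta)
}
} # }

Arguments the variant does not use are still accepted (and simply ignored), which is why the Note asks you to keep the signature intact.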