Authors

  • YuKi. Author, maintainer.

  • Xinyu. Author.

Citation

Source: DESCRIPTION

YuKi, Xinyu (2025). multiRL: Reinforcement Learning Tools for Multi-Armed Bandit. R package version 0.1.9, https://yuki-961004.github.io/multiRL/.

@Manual{,
  title = {multiRL: Reinforcement Learning Tools for Multi-Armed Bandit},
  author = {{YuKi} and {Xinyu}},
  year = {2025},
  note = {R package version 0.1.9},
  url = {https://yuki-961004.github.io/multiRL/},
}
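
The citation entry above can also be retrieved from within R, assuming the multiRL package is installed. A minimal sketch using the base utils functions citation() and toBibtex():

  # Print the plain-text citation for the installed package
  citation("multiRL")

  # Convert the same citation object to a BibTeX entry
  toBibtex(citation("multiRL"))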