
rsrl

A fast, extensible reinforcement learning framework in Rust

11 releases (4 breaking)

0.5.0 Jun 26, 2018
0.4.2 Apr 24, 2018
0.4.1 Feb 25, 2018
0.3.0 Feb 12, 2018
0.1.0 Dec 24, 2017


MIT license

160KB
4.5K SLoC

RSRL (api)


Reinforcement learning should be fast, safe and easy to use.

Overview

rsrl provides generic constructs for running reinforcement learning (RL) experiments: a simple, extensible framework together with efficient implementations of existing methods for rapid prototyping.

Installation

[dependencies]
rsrl = "0.5"
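
Since the usage example below calls slog's `info!` macro via `#[macro_use] extern crate slog;`, the consuming crate also needs slog as a direct dependency. A sketch of the fuller manifest, where the slog version shown is an assumption (any release compatible with the example's macro usage will do):

```toml
[dependencies]
rsrl = "0.5"
# slog is required by the logging calls in the usage example;
# "2" is an assumed compatible major version, not pinned by rsrl itself.
slog = "2"
```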

Usage

The code below shows how one could use rsrl to evaluate a GreedyGQ agent using a Fourier basis function approximator to solve the canonical mountain car problem.

See examples/ for more...

extern crate rsrl;
#[macro_use]
extern crate slog;

use rsrl::{
    control::gtd::GreedyGQ,
    core::{run, Evaluation, Parameter, SerialExperiment, make_shared, Trace},
    domains::{Domain, MountainCar},
    fa::{projectors::fixed::Fourier, LFA},
    geometry::Space,
    policies::fixed::EpsilonGreedy,
    logging,
};

fn main() {
    let logger = logging::root(logging::stdout());

    let domain = MountainCar::default();
    let mut agent = {
        let n_actions = domain.action_space().card().into();

        // Build the linear value functions using a Fourier basis projection.
        let bases = Fourier::from_space(3, domain.state_space());
        let v_func = make_shared(LFA::simple(bases.clone()));
        let q_func = make_shared(LFA::multi(bases, n_actions));

        // Build a stochastic behaviour policy with exponential epsilon.
        let eps = Parameter::exponential(0.99, 0.05, 0.99);
        let policy = make_shared(EpsilonGreedy::new(q_func.clone(), eps));

        GreedyGQ::new(q_func, v_func, policy, 1e-1, 1e-3, 0.99)
    };

    let domain_builder = Box::new(MountainCar::default);

    // Training phase:
    let _training_result = {
        // Start a serial learning experiment up to 1000 steps per episode.
        let e = SerialExperiment::new(&mut agent, domain_builder.clone(), 1000);

        // Realise 1000 episodes of the experiment generator.
        run(e, 1000, Some(logger.clone()))
    };

    // Testing phase:
    let testing_result = Evaluation::new(&mut agent, domain_builder).next().unwrap();

    info!(logger, "solution"; testing_result);
}
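
The behaviour policy above uses an exponentially decaying epsilon built with `Parameter::exponential(0.99, 0.05, 0.99)`. As a rough intuition for that schedule, here is a minimal self-contained sketch of exponential decay toward a floor; the function name and the assumption that the three arguments mean (initial value, floor, per-step decay factor) are illustrative, not rsrl's actual implementation:

```rust
// Hypothetical sketch of an exponentially decaying schedule, analogous to
// (but not the actual implementation of) rsrl's Parameter::exponential.
// Each step multiplies the value by `decay`, clamped from below at `floor`.
fn epsilon_after(init: f64, floor: f64, decay: f64, steps: u32) -> f64 {
    (init * decay.powi(steps as i32)).max(floor)
}

fn main() {
    // Epsilon shrinks geometrically, never dropping below the floor.
    println!("{:.4}", epsilon_after(0.99, 0.05, 0.99, 0));    // 0.9900
    println!("{:.4}", epsilon_after(0.99, 0.05, 0.99, 100));  // ≈ 0.36
    println!("{:.4}", epsilon_after(0.99, 0.05, 0.99, 1000)); // 0.0500 (clamped at the floor)
}
```

Under this reading, the agent explores heavily early in training and converges toward mostly greedy action selection, which matches the "stochastic behaviour policy" comment in the example.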

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate and adhere to the AngularJS commit message conventions.

License

MIT
