Ensemble Methods for Reinforcement Learning with Function Approximation

Stefan Faußer and Friedhelm Schwenker

Institute of Neural Information Processing, University of Ulm, 89069 Ulm, Germany
{stefan.fausser,friedhelm.Schwenker}@uni-ulm.de

Abstract. Ensemble methods allow multiple models to be combined to increase the predictive performance, but they mostly utilize labelled data. In this paper we propose several ensemble methods to learn a combined parameterized state-value function of multiple agents. For this purpose the Temporal-Difference (TD) and Residual-Gradient (RG) update methods as well as a policy function are adapted to learn from joint decisions. Such joint decisions include Majority Voting and Averaging of the state-values. We apply these ensemble methods to the simple pencil-and-paper game Tic-Tac-Toe and show that an ensemble of three agents outperforms a single agent in terms of the Mean-Squared Error (MSE) to the true values as well as in terms of the resulting policy. Further, we apply the same methods to learn the shortest path in a 20 × 20 maze and empirically show that the learning speed is faster and that the resulting policy, i.e. the number of correctly chosen actions, is better for an ensemble of multiple agents than for a single agent.

1 Introduction

In a single-agent problem, multiple agents can be combined to act as a committee agent. The aim here is to raise the performance of the single acting agent. In contrast, in a multi-agent problem multiple agents are needed to act in the same environment with the same (cooperative) or opposed (competitive) goals. Such multi-agent problems are formulated in a Collaborative Multiagent MDP (CMMDP) model. The Sparse Cooperative Q-Learning algorithm has been successfully applied to the distributed sensor network (DSN) problem, where the agents cooperatively focus the sensors to capture a target (Kok et al. 2006 [6]). In the predator-prey problem, multiple agents are predators (one agent per predator) and hunt the prey. For this problem the Q-Learning algorithm has also been used, where each agent maintains its own independent Q-table (Partalas et al. 2007 [7]). Further, an extension to add and remove agents during learning, i.e. to perform self-organization in a network of agents, has been proposed (Abdallah et al. 2007 [8]). While Multi-Agent Reinforcement Learning (MARL) as described above is well-grounded in research, only little is known for the case where multiple agents are combined into a single (committee) agent for single-agent problems. One reason may be that RL algorithms with state-tables theoretically converge to a global minimum independent of the initialized state-values, and therefore multiple runs with distinct state-value initializations result in the same solution, with always the same bias and no variance. However, in Ensemble Learning methods like Bagging (Breiman 1996 [3]) the idea is to reduce variance and to improve the ensemble's overall performance. Sun and Peterson 1999 [5] have studied the partitioning of the input and output space and have developed techniques using Genetic Algorithms (GA) to partition the spaces. Multiple agents applied the Q-Learning algorithm to learn the action-values in subspaces and were combined through a weighting scheme to a single agent. However, the extensive use of heuristics and the large computation time of the GA make this approach unusable for MDPs with large state spaces. In a more recent work (Wiering et al. 2008 [9]), the action-values resulting from multiple independently learnt RL algorithms (Q-Learning, SARSA, Actor-Critic, etc.) are combined to decide about the best action to take. As Q-Learning tends to converge to a different fixed point than SARSA and Actor-Critic, the action-value functions therefore have a different bias and variance.

A Markov Decision Process (MDP) with a large state space imposes several problems on Reinforcement Learning (RL) algorithms. Depending on the number of states, it may be possible to use RL algorithms that save the state-values in tables. However, for huge state spaces another technique is to learn a parameterized state-value function by linear or nonlinear function approximation. While the state-values in tables are independent of each other, the function-approximated state-values are highly dependent based on the selection of the feature space and may therefore converge faster. In an application to English Draughts (Faußer et al. 2010 [10]), which has about 10^31 states, the training of the parameterized state-value function needed about 5,000,000 episodes to reach an amateur player level. Although a parameterized state-value function with simple features can be learnt, it may not converge to a global fixed point, and multiple runs with distinct initial weights tend to result in functions with different solutions (different bias and large variance) of the state-values.

Our contribution in this paper is to describe several ensemble methods that aim to increase the learning speed and the final performance compared to that of a single agent. We show the newly derived TD update method as well as the new policy to learn from joint decisions. Although we use parameterized state-value functions in order to deal with large-state MDPs, we have applied the methods to simpler problems to be able to compare performances. Our work differs from others in that we combine multiple agents for a single-agent problem, and in our general way of combining multiple state-values, which enables targeting problems with large state spaces. It can be empirically shown that these ensemble methods improve the overall performance of multiple agents for the pencil-and-paper game Tic-Tac-Toe as well as for several mazes.

2 Reinforcement Learning with Parameterized State-Value Functions

Assume we want to estimate a smooth (differentiable) state-value function $V_\theta^\pi(s)$ with parameter vector $\theta$, where $V_\theta^\pi(s) \approx V^*(s)$ for all $s$, using the TD Prediction method (Sutton & Barto 1998 [1]). Starting in a certain state $s_t$ we take an action $a_t$ defined by a given policy $\pi$, observe a reward (signal) $r_{t+1}$ and move from state $s_t$ to state $s_{t+1}$. For this single state transition we can model the Squared TD Prediction Error (TDPE) as follows:

$$TDPE(\theta) = \Big( r_{t+1} + \gamma V_\theta(s_{t+1}) - V_\theta(s_t) \,\Big|\, s_{t+1} = \pi(s_t) \Big)^2 \tag{1}$$

The aim is to minimize the above error by updating the parameter vector $\theta$. Applying the gradient-descent technique to the TDPE results in two possible update functions. The first one is Temporal-Difference learning with $\gamma V_\theta(s_{t+1})$ kept fixed as training signal:

$$\Delta\theta_{TD} := \alpha \Big( r_{t+1} + \gamma V_\theta(s_{t+1}) - V_\theta(s_t) \,\Big|\, s_{t+1} = \pi(s_t) \Big) \cdot \frac{\partial V_\theta(s_t)}{\partial \theta} \tag{2}$$

and the second one is Residual-Gradient learning (Baird 1995 [2]) with $\gamma V_\theta(s_{t+1})$ treated as variable in terms of $\theta$:

$$\Delta\theta_{RG} := -\alpha \Big( r_{t+1} + \gamma V_\theta(s_{t+1}) - V_\theta(s_t) \,\Big|\, s_{t+1} = \pi(s_t) \Big) \cdot \Big( \gamma \frac{\partial V_\theta(s_{t+1})}{\partial \theta} - \frac{\partial V_\theta(s_t)}{\partial \theta} \Big) \tag{3}$$

In both equations $\alpha > 0$ is the learning rate and $\gamma \in (0, 1]$ discounts future state-values. Now suppose that policy $\pi$ is a function that chooses one successor state $s_{t+1}$ out of the set of all possible successor states $S_{successor}(s_t)$ based on its state-value:

$$\pi(s_t) := \underset{s_{t+1}}{\operatorname{argmax}} \Big( V_\theta(s_{t+1}) \,\Big|\, s_{t+1} \in S_{successor}(s_t) \Big) \tag{4}$$

It is quite clear that this simple policy can only be as good as the estimations of $V_\theta(s_{t+1})$. Thus an improvement of the estimations of $V_\theta(s_{t+1})$ results in a more accurate policy $\pi$ and therefore in a better choice of a successor state $s_{t+1}$. An agent using this policy tries to maximize its summed high-rated rewards and avoids getting low-rated rewards as much as possible. The optimal state-value function $V^*(s)$ is:

$$V^*(s) = E\Big[ \sum_{t=0}^{\infty} \gamma^t r_{t+1} \,\Big|\, s_0 = s \Big] \tag{5}$$

While a parameterized state-value function can only approximate the optimal state-values to a certain degree, it is expected that a function approximation of these state-values results in faster learning, i.e. needs fewer learning iterations than learning with independent state-values. Furthermore, different initializations of the weights $\theta$ may result in different state-values after each learning step.
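As a concrete illustration of the update rules (2)–(4), the following sketch implements them for a linear state-value function. The linear features, the constants and the environment interface are illustrative assumptions; the experiments in this paper use 2-layer MLPs, whose gradient would be obtained by backpropagation instead.

```python
# Minimal sketch of the TD (eq. 2) and RG (eq. 3) updates and the greedy policy
# (eq. 4) for a linear state-value function. Linear features are an assumption
# made for brevity; the paper uses 2-layer MLPs with backpropagated gradients.
import numpy as np

class LinearValueFunction:
    def __init__(self, n_features, alpha=0.01, gamma=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.theta = rng.uniform(-0.1, 0.1, n_features)   # weight vector theta
        self.alpha, self.gamma = alpha, gamma

    def value(self, phi):        # V_theta(s) = theta^T phi(s)
        return float(self.theta @ phi)

    def grad(self, phi):         # dV_theta(s)/dtheta for a linear approximator
        return phi

    def td_update(self, phi_t, r, phi_t1):
        # eq. (2): gamma * V(s_{t+1}) is treated as a fixed training signal
        delta = r + self.gamma * self.value(phi_t1) - self.value(phi_t)
        self.theta += self.alpha * delta * self.grad(phi_t)

    def rg_update(self, phi_t, r, phi_t1):
        # eq. (3): residual-gradient step, V(s_{t+1}) is differentiated as well
        delta = r + self.gamma * self.value(phi_t1) - self.value(phi_t)
        self.theta -= self.alpha * delta * (
            self.gamma * self.grad(phi_t1) - self.grad(phi_t))

def greedy_policy(vf, successor_features):
    # eq. (4): pick the successor state with the highest estimated value
    return int(np.argmax([vf.value(phi) for phi in successor_features]))
```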

3 Ensemble Methods

Suppose a single-agent problem is given, e.g. finding the shortest path through a maze. Given a set of M agents $\{A_1, A_2, \ldots, A_M\}$, each with its own nonlinear function approximator, for instance a Multi-Layer Perceptron (MLP), and either TD updates (2) or RG updates (3) to adapt the weights, it is possible to train the M agents either independently or dependently in terms of their state-value updates and their decisions. Irrespective of the training method, the decisions of all M agents can be combined as a Joint Majority Decision:

$$\pi^{VO}(s_t) := \underset{s_{t+1}}{\operatorname{argmax}} \sum_{i=1}^{M} N_i(s_t, s_{t+1}) \tag{6}$$

where $N_i(s_t, s_{t+1})$ models the willingness of agent $i$ to move from state $s_t$ to state $s_{t+1}$:

$$N_i(s_t, s_{t+1}) = \begin{cases} 1, & \text{if } \pi_i(s_t) = s_{t+1}, \\ 0, & \text{else} \end{cases} \tag{7}$$

Policy $\pi_i(s_t)$ is equivalent to equation (4) but with a subscript to denote which agent, i.e. which function approximator, to use. The state-values of all agents can further be combined into an Average Decision based on averaging the state-values:

$$\pi^{AV}(s_t) := \underset{s_{t+1}}{\operatorname{argmax}} \frac{1}{M} \sum_{i=1}^{M} \Big( V_{\theta_i}(s_{t+1}) \,\Big|\, s_{t+1} \in S_{successor}(s_t) \Big) \tag{8}$$

Here $V_{\theta_i}(s_{t+1})$ is the state-value of agent $i$ using the weights $\theta_i$ of this agent. In summary, three policies, namely $\pi(s_t)$ (4), $\pi^{VO}(s_t)$ (6) and $\pi^{AV}(s_t)$ (8), are available to decide about the best state-value, where only the last two include the state-values of the other agents in an ensemble, i.e. perform a joint decision. One way of constructing a RL ensemble is to train the M agents independently and to combine their state-values for a joint decision, using one of the above described policies, after the training. Another way is to use the joint decision during the learning process. In this case it may be necessary to add some noise to the policies to keep the agents (state-value functions) diverse. Another suggestion is to have different starting positions for each agent in the MDP, resulting in a better exploration of the MDP.
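The two joint policies follow directly from equations (6)–(8). The sketch below assumes agents that expose a `value(features)` method as in the previous sketch and candidate successor states given as feature vectors; this interface is an illustrative assumption, not the authors' implementation.

```python
# Sketch of the joint decision policies. `agents` is a list of value functions
# with a value(features) method; `successors` lists the feature vectors of all
# candidate successor states of the current state.
import numpy as np

def single_decision(agent, successors):
    # eq. (4), applied per agent
    return int(np.argmax([agent.value(phi) for phi in successors]))

def joint_majority_decision(agents, successors):
    # eqs. (6)/(7): each agent casts one vote for its preferred successor state
    votes = np.zeros(len(successors))
    for agent in agents:
        votes[single_decision(agent, successors)] += 1.0
    return int(np.argmax(votes))

def average_decision(agents, successors):
    # eq. (8): rank the successors by the mean of all agents' value estimates
    avg_values = [np.mean([a.value(phi) for a in agents]) for phi in successors]
    return int(np.argmax(avg_values))
```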

3.1 Combining the State-Values

While joint decisions during the learning process implicitly update the state-values of one agent dependent on the state-values of the other M − 1 agents, it can be a further improvement to explicitly combine the state-values. Assume agent $i$ is currently in state $s_t$ and, based on one of the policies described in the last section, moves to state $s_{t+1}$ and gets reward $r_{t+1}$. Independent of the chosen policy, the state-values of the successor state $s_{t+1}$ of all M agents can be combined to an Average Predicted Value:

$$V^{AV}(s_{t+1}) = \frac{1}{M} \sum_{i=1}^{M} V_{\theta_i}(s_{t+1}) \tag{9}$$

As the weights of the function approximators of all agents differ because of diverse initialization of the weights, exploration, different starting positions of the agents, decision noise and instabilities in the weight updates, it is expected that the combination of the state-values results in a more stable and better predicted state-value. Now this single state transition can be modelled in a Squared Average Predicted TD Error (ATDPE) function including the Average Predicted Value instead of one state-value of this single agent $i$:

$$ATDPE(\theta_i) = \Big( r_{t+1} + \gamma \frac{1}{M} \sum_{j=1}^{M} V_{\theta_j}(s_{t+1}) - V_{\theta_i}(s_t) \,\Big|\, s_{t+1} = \pi(s_t) \Big)^2 \tag{10}$$

By gradient descent of the ATDPE function, as we have done in section 2 with the TDPE function, this formulates a new combined TD update function:

$$\Delta\theta_i^{CTD} := \alpha \Big( r_{t+1} + \gamma V^{AV}(s_{t+1}) - V_{\theta_i}(s_t) \,\Big|\, s_{t+1} = \pi(s_t) \Big) \cdot \frac{\partial V_{\theta_i}(s_t)}{\partial \theta_i} \tag{11}$$

as well as a new combined RG update function:

$$\Delta\theta_i^{CRG} := -\alpha \Big( r_{t+1} + \gamma V^{AV}(s_{t+1}) - V_{\theta_i}(s_t) \,\Big|\, s_{t+1} = \pi(s_t) \Big) \cdot \Big( \frac{1}{M} \gamma \frac{\partial V_{\theta_i}(s_{t+1})}{\partial \theta_i} - \frac{\partial V_{\theta_i}(s_t)}{\partial \theta_i} \Big) \tag{12}$$

With one of the above update functions the agents learn from the average predicted state-values. Theoretically, this can be further combined with one of the previously described joint decision policies. Using the simple single-decision policy (4), this results in an interesting ensemble where each agent decides based on its own state-values but learns from the average predicted state-values. In this case less noise in the decision functions is required, as the agents mainly keep their bias. With one of the joint policies, i.e. Joint Majority Decision (6) or Average Decision (8), all agents perform joint decisions and learn from the average predicted state-values. The combined update functions and the policies for joint decisions only need some additional memory to save the state-values of all agents; as this memory is far smaller than that of the function approximator weights, it can be ignored in memory considerations. Therefore training an ensemble of M agents takes M times the memory space and M times the computation time of a single agent.
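A sketch of the combined updates in the same style as above may look as follows; it reuses the agent interface of the earlier sketches and therefore inherits its assumptions.

```python
# Sketch of the Average Predicted Value (eq. 9) and the combined TD/RG updates
# (eqs. 11 and 12) for agent i; reuses the agent interface of the earlier sketches.
import numpy as np

def average_predicted_value(agents, phi_t1):
    # eq. (9): mean of all agents' predictions for the successor state
    return float(np.mean([a.value(phi_t1) for a in agents]))

def combined_td_update(agents, i, phi_t, r, phi_t1):
    # eq. (11): agent i's TD update with the averaged successor value as target
    a = agents[i]
    delta = r + a.gamma * average_predicted_value(agents, phi_t1) - a.value(phi_t)
    a.theta += a.alpha * delta * a.grad(phi_t)

def combined_rg_update(agents, i, phi_t, r, phi_t1):
    # eq. (12): residual-gradient variant; the 1/M factor is agent i's share
    # of the averaged successor value
    a = agents[i]
    M = len(agents)
    delta = r + a.gamma * average_predicted_value(agents, phi_t1) - a.value(phi_t)
    a.theta -= a.alpha * delta * (a.gamma * a.grad(phi_t1) / M - a.grad(phi_t))
```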

[Figure 1: two panels, "Ensemble Learning with 20x20 Maze" and "Ensemble Learning with 20x20 Maze, learning from avg. predicted state-values"; x-axis: iterations * 100; y-axis: number of best states chosen by policy; curves: 1 agent single decision, 5 agents single decision, 5 agents avg decision, 5 agents vote decision (right panel with averaged state-values).]

Fig. 1. Empirical results of different ensemble methods applied to five 20 × 20 mazes. Measured are the number of states that are correctly chosen by the resulting policy, where left the agents have learnt from joint decisions and right the agents have learnt from joint decisions and averaged state-values.

4 Experiments and Results

To evaluate the behaviour of the ensemble methods described in the last sections we have performed experiments with the pencil-and-paper game Tic-Tac-Toe and with several mazes. For fair evaluations we have performed multiple runs, where we have given the same seed for the pseudo random number generator to all ensemble methods to ensure that the weights of the parameterized state-value functions have been identically initialized. For example, if we have performed 2 test runs, then we have given seed1 to all evaluated methods in the first test run and seed2 ≠ seed1 in the second test run. The given values are the averaged values of the multiple runs.
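To illustrate this protocol, a minimal sketch of the seeding scheme is given below; the seed values and method names are hypothetical placeholders.

```python
# Sketch of the evaluation protocol: within one test run every method starts
# from identically initialized weights (same seed); different test runs use
# different seeds and the reported values are averaged over the runs.
import numpy as np

seeds = [11, 23]                                   # hypothetical seeds, one per test run
for seed in seeds:
    for method in ("single", "vote", "average"):   # methods under comparison
        np.random.seed(seed)                       # identical initialization per run
        # ... build the agents, train them with `method`, record the performance ...
```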

4.1 Maze

In the maze problem an agent tries to find the shortest path to the goal. For our experiments we have created five 20 × 20 mazes, each with 100 randomly positioned barriers. A barrier can be set horizontally or vertically between two states and does not fill out a whole state. Each maze has one goal, where the goal position is roughly in the upper-left, upper-right, lower-left, lower-right or in the middle of the maze. An agent receives a reward of 1 if it moves to the goal and a reward of 0 otherwise. From each state there are up to 4 possible successor states. The agent cannot move over a barrier or outside the maze. We have applied the Breadth-first search algorithm (Russell & Norvig 2002 [12]) to calculate the true state-values and the optimal policy. For the experiments we have designed M = 5 agents, where each agent has its own 2-layer MLP with 8 input neurons, 3 hidden neurons and one output neuron. The input neurons are coded as follows:


1. x position, 2. y position, 3. 20 − x, 4. 20 − y, 5. 1 if x ≥ 11 otherwise 0, 6. 1 if x ≤ 10 otherwise 0, 7. 1 if y ≥ 11 otherwise 0, 8. 1 if y ≤ 10 otherwise 0 (a short code sketch of this coding is given at the end of this subsection). For all evaluations the agents had the following common training parameters: (combined) TD update, α = 0.01, γ = 0.9, epsilon-greedy exploration strategy with ε = 0.3, tangens hyperbolicus (tanh) transfer functions for hidden layer and output layer, and uniformly distributed random noise between ±0.05 for joint decisions. Each agent has its own randomly initialized start state and maintains its current state. If an agent reaches the goal or fails to reach the goal within 100 iterations, it starts at a new randomly initialized start state. The results of an ensemble of five agents compared to a single agent, with values averaged over 10 test runs and 5 mazes, can be seen in figure 1. Comparing the ensemble methods in terms of the number of states that are correctly chosen by the resulting policy, the methods with joint decisions are better than the methods learning from joint decisions and average predicted state-values. Moreover, a simple combination of five independently trained agents (5 agents single decision curve) seems to be the best, followed by a combination of five dependently trained agents with Joint Majority Voting decisions. All ensemble methods learn faster and have a better final performance than a single agent within 30,000 iterations.
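A minimal sketch of the 8-dimensional input coding listed above, assuming maze coordinates x, y in 1..20; the helper name is hypothetical.

```python
# Hypothetical helper implementing the 8-dimensional input coding of a maze
# position (x, y) described above, with x and y assumed to lie in 1..20.
import numpy as np

def maze_features(x, y):
    return np.array([
        x, y,                        # 1.-2. raw coordinates
        20 - x, 20 - y,              # 3.-4. complementary coordinates
        1.0 if x >= 11 else 0.0,     # 5. indicator x >= 11
        1.0 if x <= 10 else 0.0,     # 6. indicator x <= 10
        1.0 if y >= 11 else 0.0,     # 7. indicator y >= 11
        1.0 if y <= 10 else 0.0,     # 8. indicator y <= 10
    ], dtype=float)
```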

4.2 Tic-Tac-Toe

The pencil-and-paper game Tic-Tac-Toe is a competitive 2-player game where each player marks one of at most nine available spaces turnwise until one player either has three of his own marks horizontally, vertically or diagonally, resulting in a win, or all spaces are marked, resulting in a draw. Tic-Tac-Toe has 5477 valid states excluding the empty position, and starting from the empty position the game always results in a draw if both players perform the best moves. For our experiments we have designed M = 3 agents, where each agent has its own 2-layer MLP with 9 input neurons, 5 hidden neurons and one output neuron. One input neuron binary codes one game space and is −1 if the space is occupied by the opponent, 1 if the space is occupied by the agent, or 0 if the space is empty. The weights of the MLP are updated by the (combined) RG update function. A reward of 1 is received if the agent moves to a terminal state where he has won, and a reward of 0 otherwise, i.e. for a transition to a non-terminal state or to a terminal state where he has lost or reached a draw. For all evaluations the agents had the following common training parameters: α = 0.0025, γ = 0.9, epsilon-greedy exploration strategy with ε = 0.3, tangens hyperbolicus (tanh) transfer functions for hidden layer and output layer, and uniformly distributed random noise between ±0.05 for joint decisions. Each agent learns by self-play, i.e. uses the same decision policy and state-values for an inverted position to decide which action the opponent should take. Irrespective of the ensemble method, all agents learn episode-wise and start from the same initial state (the empty position). To calculate the true state-values and the optimal policy we have slightly modified the Minimax algorithm (Russell & Norvig 2002 [12]) to include the rewards and the discounting rate γ.
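A sketch of the board coding and the position inversion used for self-play follows; the board representation (a length-9 sequence with +1 for the agent's marks, −1 for the opponent's and 0 for empty spaces) is an assumption for illustration, not the authors' data structure.

```python
# Sketch of the Tic-Tac-Toe input coding and the self-play inversion described
# above; the length-9 board representation is an illustrative assumption.
import numpy as np

def board_features(board):
    # one input neuron per space: -1 opponent, +1 agent, 0 empty
    return np.array(board, dtype=float)

def inverted(board):
    # self-play: the same value function evaluates the opponent's candidate
    # moves on the position with the roles of the two players swapped
    return [-mark for mark in board]
```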

[Figure 2: six panels in three pairs, "Ensemble Learning with Tic-Tac-Toe" and "Ensemble Learning with Tic-Tac-Toe, learning from avg. predicted state-values"; x-axis: iterations * 1000; y-axes: MSE to true values, number of best states chosen by policy, and percentage MSE / percentage number of best states relative to a single agent; curves: 1 agent single decision, 3 agents single decision, 3 agents avg decision, 3 agents vote decision (with averaged state-values where noted).]

Fig. 2. Empirical results of different ensemble methods applied to Tic-Tac-Toe. The first two figures show the Mean-Squared Error (MSE) to the true state-values. The next two figures show the number of best states that are chosen by the resulting policy; higher values are better. The last two figures compare the MSE to the true state-values (left) and the number of best states (right) of an ensemble of three agents to a single agent.


The results of an ensemble of three agents compared to a single agent, with values averaged over 10 test runs, can be seen in figure 2. Examining the MSE to the true values, an ensemble of three agents with single independent decisions that learns from the average predicted state-values reaches the lowest error. During the first 100,000 training episodes its MSE is almost always lower than the MSE of a single agent with three times the training episodes. This is especially true for the first ≈ 20,000 and the last ≈ 40,000 iterations. All other ensemble methods perform better than a single agent, except the three agents that learnt from joint Majority Voting decisions but did not learn from the average predicted state-values. Lowering the noise for the joint decisions might result in better MSE values in this case. Comparing the number of best states that are chosen by the resulting policy, all ensembles without exception perform better than a single agent. Consider that Tic-Tac-Toe has 4520 non-terminal states.

5 Conclusion

We have described several ensemble methods that are novel in the way they are integrated into Reinforcement Learning with function approximation. The necessary extensions of the TD and RG update formulas to learn from average predicted state-values have been shown. Further, the policies for joint decisions, namely Majority Voting and Averaging of the state-values, have been formulated. For two applications, namely Tic-Tac-Toe and five different 20 × 20 mazes, we have empirically shown that these ensembles have a faster learning speed and a better final performance than a single agent. While we have chosen simple applications to be able to unify the measurement of the performances, we emphasize that our ensemble methods are most useful for large-state MDPs with simple feature spaces and a small number of hidden neurons. An application to such a large-state MDP to further evaluate these ensemble methods may be done in another contribution.

References

1. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
2. Baird, L.: Residual Algorithms: Reinforcement Learning with Function Approximation. In: Proceedings of the 12th International Conference on Machine Learning, pp. 30–37 (1995)
3. Breiman, L.: Bagging Predictors. Machine Learning 24, 123–140 (1996)
4. Schapire, R.E.: The Strength of Weak Learnability. Machine Learning 5(2), 197–227 (1990)
5. Sun, R., Peterson, T.: Multi-Agent Reinforcement Learning: Weighting and Partitioning. Neural Networks 12(4-5), 727–753 (1999)
6. Kok, J.R., Vlassis, N.: Collaborative Multiagent Reinforcement Learning by Payoff Propagation. Journal of Machine Learning Research 7, 1789–1828 (2006)
7. Partalas, I., Feneris, I., Vlahavas, I.: Multi-Agent Reinforcement Learning using Strategies and Voting. In: 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2007), vol. 2, pp. 318–324 (2007)


8. Abdallah, S., Lesser, V.: Multiagent Reinforcement Learning and Self-Organization in a Network of Agents. In: Proceedings of the Sixth International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2007), pp. 172–179 (2007)
9. Wiering, M.A., van Hasselt, H.: Ensemble Algorithms in Reinforcement Learning. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics 38, 930–936 (2008)
10. Faußer, S., Schwenker, F.: Learning a Strategy with Neural Approximated Temporal-Difference Methods in English Draughts. In: ICPR 2010, pp. 2925–2928 (2010)
11. Bishop, C.M.: Neural Networks for Pattern Recognition. Oxford University Press, Oxford (1995)
12. Russell, S.J., Norvig, P.: Artificial Intelligence: A Modern Approach, 2nd edn. Prentice-Hall, Englewood Cliffs (2002)