Online Learning: Theory, Algorithms, and Applications - Ph.D thesis



Connectionist learning in real time: Sutton-Barto adaptive element and classical conditioning of the nictitating membrane response, Proceedings of the Seventh Annual Conference of the Cognitive Science Society. Temporal credit assignment in reinforcement learning (Ph.D. dissertation).

The algorithms considered include some from learning automata theory, mathematical learning theory, early "cybernetic" approaches to learning, Samuel's checker-playing program, Michie and Chambers's "Boxes" system, and a number of new algorithms. The tasks were selected to involve, first in isolation and then in combination, the issues of misleading generalizations, delayed reinforcement, unbalanced reinforcement, and secondary reinforcement.

The tasks range from simple, abstract "two-armed bandit" tasks to a physically realistic pole-balancing task. The results indicate several areas where the algorithms presented here perform substantially better than those previously studied. An unbalanced distribution of reinforcement, misleading generalizations, and delayed reinforcement can greatly retard learning and in some cases even make it counterproductive.

Performance can be substantially improved in the presence of these common problems through the use of mechanisms of reinforcement comparison and secondary reinforcement. We present a new algorithm similar to the "learning-by-generalization" algorithm used for altering the static evaluation function in Samuel's checker-playing program.

Simulation experiments indicate that the new algorithm performs better than a version of Samuel's algorithm suitably modified for reinforcement learning tasks. Theoretical analysis in terms of an "ideal reinforcement signal" sheds light on the relationship between these two algorithms and other temporal credit-assignment algorithms.

A theory of salience change dependent on the relationship between discrepancies on successive trials on which the stimulus is present.
Synthesis of nonlinear control surfaces by a layered associative network, Biological Cybernetics 43.
Adaptation of learning rate parameters (A. G. Barto and R. S. Sutton).
Toward a modern theory of adaptive networks: Expectation and prediction, Psychological Review 88. Translated into Spanish by G. Ruiz, to appear in the journal Estudios de Psicologia.
An adaptive network that constructs and uses an internal model of its world, Cognition and Brain Theory 4.
Goal seeking components for adaptive intelligence: An initial assessment. Appendix C is available separately.
Associative search network: A reinforcement learning associative memory, Biological Cybernetics 40.
Landmark learning: An illustration of associative search, Biological Cybernetics 42.
A unified theory of expectation in classical and instrumental conditioning. Bachelor's thesis, Stanford University.
Learning theory support for a single channel theory of the brain.

Frequently used models of the interaction between an agent and its environment, such as Markov Decision Processes (MDPs) or Semi-Markov Decision Processes (SMDPs), do not capture the fact that, in an asynchronous environment, the state of the environment may change during computation performed by the agent. In an asynchronous environment, minimizing reaction time—the time it takes for an agent to react to an observation—also minimizes the time in which the state of the environment may change following observation.

In many environments, the reaction time of an agent directly impacts task performance by permitting the environment to transition into either an undesirable terminal state or a state where performing the chosen action is inappropriate. We propose a class of reactive reinforcement learning algorithms that address this problem of asynchronous environments by immediately acting after observing new state information. We compare a reactive SARSA learning algorithm with the conventional SARSA learning algorithm on two asynchronous robotic tasks (emergency stopping and impact prevention), and show that the reactive RL algorithm reduces the reaction time of the agent by approximately the duration of the algorithm's learning update.
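
As a minimal illustration of the reordering being described (not the authors' implementation; the env.step/env.emit interface, the epsilon_greedy helper, and the tabular q array are assumptions made for this sketch), a reactive SARSA step emits the next action before performing the learning update:

import numpy as np

# Conventional SARSA: observe s', perform the learning update, then select and emit a'.
# Reactive SARSA: observe s', immediately select and emit a', then perform the update,
# so the action is taken before the (comparatively slow) learning computation.

def epsilon_greedy(q, s, n_actions, eps, rng):
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(q[s]))

def reactive_sarsa_episode(env, q, alpha=0.1, gamma=0.99, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n_actions = q.shape[1]
    s = env.reset()
    a = epsilon_greedy(q, s, n_actions, eps, rng)
    done = False
    while not done:
        s_next, r, done = env.step(a)            # the environment keeps evolving asynchronously
        a_next = epsilon_greedy(q, s_next, n_actions, eps, rng)
        env.emit(a_next)                          # act first: minimal reaction time
        target = r if done else r + gamma * q[s_next, a_next]
        q[s, a] += alpha * (target - q[s, a])     # learning update happens after acting
        s, a = s_next, a_next
    return q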

This new class of reactive algorithms may facilitate safer control and faster decision making without any change to standard learning guarantees. The performance of TD methods often depends on well-chosen step-sizes, yet few algorithms have been developed for setting the step-size automatically for TD learning. An important limitation of current methods is that they adapt a single step-size shared by all the weights of the learning system. A vector step-size enables greater optimization by specifying parameters on a per-feature basis. Furthermore, adapting parameters at different rates has the added benefit of being a simple form of representation learning. Here we extend the Incremental Delta-Bar-Delta (IDBD) method of step-size adaptation to temporal-difference learning, yielding TIDBD, which adapts a vector of step-sizes by stochastic meta-descent.

We demonstrate that TIDBD is able to find appropriate step-sizes in both stationary and non-stationary prediction tasks, outperforming ordinary TD methods and TD methods with scalar step-size adaptation; we demonstrate that it can differentiate between features which are relevant and irrelevant for a given task, performing representation learning; and we show on a real-world robot prediction task that TIDBD is able to outperform ordinary TD methods and TD methods augmented with AlphaBound and RMSprop.
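
A minimal sketch of the underlying idea, assuming linear TD(0) without eligibility traces (variable names are ours, and the trace-based variants studied in the paper differ in detail):

import numpy as np

# Per-feature step-size adaptation for linear TD(0) in the spirit of IDBD/TIDBD:
# a meta step-size theta performs gradient descent on a log step-size beta_i per weight.

def tidbd_td0_update(w, beta, h, x, r, x_next, gamma=0.99, theta=0.01):
    delta = r + gamma * np.dot(w, x_next) - np.dot(w, x)   # TD error
    beta += theta * delta * x * h                          # meta-gradient step on log step-sizes
    alpha = np.exp(beta)                                   # per-feature step-sizes
    w += alpha * delta * x                                 # TD(0) update with a vector step-size
    h = h * np.clip(1.0 - alpha * x * x, 0.0, None) + alpha * delta * x
    return w, beta, h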

Our goal is to train these networks in an incremental manner, without computationally expensive experience replay. We propose reducing such interference with two efficient input transformation methods that are geometric in nature and match well the geometric properties of ReLU gates. The first is tile coding, a classic binary encoding scheme originally designed for local generalization based on the topological structure of the input space. The second, EmECS, is a new method we introduce; it is based on geometric properties of convex sets and topological embedding of the input space into the boundary of a convex set.

We discuss the behavior of the network when it operates on the transformed inputs. We also compare it experimentally with some neural nets that do not use the same input transformations, and with the classic algorithm of tile coding plus a linear function approximator, and on several online reinforcement learning tasks, we show that the neural net with tile coding or EmECS can achieve not only faster learning but also more accurate approximations.
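
For concreteness, here is a minimal one-dimensional tile coder (a simplified sketch; practical tile coders handle multiple dimensions and use hashing, as in Sutton's tiles3 software, which is not reproduced here):

import numpy as np

# Several overlapping tilings, each offset by a fraction of the tile width,
# produce a sparse binary feature vector with local generalization.

def tile_code_1d(x, lo=0.0, hi=1.0, n_tilings=8, tiles_per_tiling=10):
    features = np.zeros(n_tilings * tiles_per_tiling)
    tile_width = (hi - lo) / tiles_per_tiling
    for t in range(n_tilings):
        offset = t * tile_width / n_tilings           # each tiling is shifted slightly
        idx = int((x - lo + offset) / tile_width)
        idx = min(max(idx, 0), tiles_per_tiling - 1)  # clip to the valid range
        features[t * tiles_per_tiling + idx] = 1.0
    return features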

Our results strongly suggest that geometric input transformation of this type can be effective for interference reduction and takes us a step closer to fully incremental reinforcement learning with neural nets. Multi-step temporal-difference (TD) methods address a bias-variance trade-off between reliance on current estimates, which could be poor, and incorporating longer sampled reward sequences into the updates.

Especially in the off-policy setting, where the agent aims to learn about a policy different from the one generating its behaviour, the variance in the updates can cause learning to diverge as the number of sampled rewards used in the estimates increases. In this paper, we introduce per-decision control variates for multi-step TD algorithms, and compare them to existing methods. Our results show that including the control variates can greatly improve performance on both on-policy and off-policy multi-step temporal difference learning tasks.
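
As a hedged illustration (notation ours, not taken from the paper), the per-decision control variate modifies the recursively defined off-policy return so that

$$ G_t \;=\; \rho_t\big(R_{t+1} + \gamma\, G_{t+1}\big) \;+\; (1 - \rho_t)\, V(S_t), \qquad \rho_t = \frac{\pi(A_t \mid S_t)}{\mu(A_t \mid S_t)}. $$

Because $\mathbb{E}_{\mu}[\rho_t \mid S_t] = 1$, the added term $(1-\rho_t)V(S_t)$ has expectation zero and leaves the expected update unchanged while reducing its variance; this is the general control-variate idea that per-decision multi-step algorithms build on.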

A learned model may predict a full distribution over next states, a sample of the next state, or just the expected next state. The third case, that of an expectation model, is particularly appealing because the expectation is compact and deterministic; this is the case most commonly used, but often in a way that is not sound for non-linear models such as those obtained with deep learning. In this paper we introduce the first MBRL algorithm that is sound for non-linear expectation models and stochastic environments. Key to our algorithm, based on the Dyna architecture, is that the model is never iterated to produce a trajectory, but only used to generate single expected transitions to which a Bellman backup with a linear approximate value function is applied.

In our results, we also consider the extension of the Dyna architecture to partial observability. We show the effectiveness of our algorithm by comparing it with model-free methods on partially-observable navigation tasks. ABSTRACT: Temporal difference (TD) learning is an important approach in reinforcement learning, as it combines ideas from dynamic programming and Monte Carlo methods in a way that allows for online and incremental model-free learning.

A key idea of TD learning is that it is learning predictive knowledge about the environment in the form of value functions, from which it can derive its behavior to address long-term sequential decision making problems. In this paper, we introduce an alternative view on the discount rate, with insight from digital signal processing, to include complex-valued discounting. Our results show that setting the discount rate to appropriately chosen complex numbers allows for online and incremental estimation of the Discrete Fourier Transform (DFT) of a signal of interest with TD learning.
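
A rough sketch of the idea (our notation; the paper's estimator may differ in detail): with a complex discount

$$ \gamma \;=\; \beta\, e^{-i 2\pi m / N}, \qquad 0 < \beta < 1, $$

the discounted return $G_t = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}$ is an exponentially windowed version of the $m$-th DFT coefficient of the upcoming reward signal, and its expectation is a value function that can be estimated with ordinary TD updates applied separately to the real and imaginary parts.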

We thereby extend the types of knowledge representable by value functions, which we show are particularly useful for identifying periodic effects in the reward sequence. ABSTRACT: Temporal-difference (TD) learning methods are widely used in reinforcement learning to estimate the expected return for each state, without a model, because of their significant advantages in computational and data efficiency. For many applications involving risk mitigation, it would also be useful to estimate the variance of the return by TD methods. In this paper, we describe a way of doing this that is substantially simpler than those proposed by Tamar, Di Castro, and Mannor, or those proposed by White and White. We show that two TD learners operating in series can learn expectation and variance estimates.
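
A hedged sketch of the "two TD learners in series" idea in the tabular case (our notation; the paper's exact algorithm may differ): the first learner estimates the expected return and its TD error feeds the second learner, which treats the squared TD error as its reward and uses a squared discount.

import numpy as np

def value_and_variance_step(V, VarV, s, r, s_next, alpha=0.1, gamma=0.99, lam=0.9):
    delta = r + gamma * V[s_next] - V[s]                   # TD error of the value learner
    V[s] += alpha * delta                                   # first learner: expected return
    gamma_bar = (gamma * lam) ** 2                          # squared discount for the variance learner
    delta_bar = delta ** 2 + gamma_bar * VarV[s_next] - VarV[s]
    VarV[s] += alpha * delta_bar                            # second learner: variance estimate
    return V, VarV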

With these two modifications, the variance learning problem becomes a conventional TD learning problem to which standard theoretical results can be applied. Our formal results are limited to the table lookup case, for which our method is still novel, but the extension to function approximation is immediate, and we provide some empirical results for the linear function approximation case. Our experimental results show that our direct method behaves just as well as a comparable indirect method, but is generally more robust. We suggest one advantage of this particular type of memory is the ability to easily assign credit to a specific state when remembered information is found to be useful.

Inspired by this idea, and the increasing popularity of external memory mechanisms to handle long-term dependencies in deep learning systems, we propose a novel algorithm which uses a reservoir sampling procedure to maintain an external memory consisting of a fixed number of past states. The algorithm allows a deep reinforcement learning agent to learn online to preferentially remember those states which are found to be useful to recall later on. Critically this method allows for efficient online computation of gradient estimates with respect to the write process of the external memory.
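
The underlying sampling procedure is classic reservoir sampling, shown below as a plain uniform sketch (the paper builds on this with learned, non-uniform write probabilities, which are not reproduced here):

import random

# Maintain a uniform random sample of k items from a stream of unknown length using O(k) memory.

def reservoir_sample(stream, k, seed=0):
    rng = random.Random(seed)
    reservoir = []
    for t, item in enumerate(stream):
        if t < k:
            reservoir.append(item)        # fill the reservoir first
        else:
            j = rng.randint(0, t)         # keep the new item with probability k/(t+1)
            if j < k:
                reservoir[j] = item
    return reservoir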

Thus unlike most prior mechanisms for external memory it is feasible to use in an online reinforcement learning setting. ABSTRACT: Unifying seemingly disparate algorithmic ideas to produce better performing algorithms has been a longstanding goal in reinforcement learning. Currently, there are a multitude of algorithms that can be used to perform TD control, including Sarsa, Q-learning, and Expected Sarsa.

These methods are often studied in the one-step case, but they can be extended across multiple time steps to achieve better performance. Each of these algorithms is seemingly distinct, and no one dominates the others for all problems. The mixture can also be varied dynamically which can result in even greater performance.

ABSTRACT: We consider off-policy temporal-difference (TD) learning in discounted Markov decision processes, where the goal is to evaluate a policy in a model-free way by using observations of a state process generated without executing the policy. These results not only lead immediately to a characterization of the convergence behavior of a least-squares based implementation of our scheme, but also prepare the ground for further analysis of gradient-based implementations.

In such a continuous domain, we also propose four off-policy IPI methods—two are the ideal PI forms that use advantage and Q-functions, respectively, and the other two are natural extensions of the existing off-policy IPI schemes to our general RL framework. Compared to the IPI methods in optimal control, the proposed IPI schemes can be applied to more general situations and do not require an initial stabilizing policy to run; they are also strongly relevant to the RL algorithms in CTS, such as advantage updating, Q-learning, and value-gradient-based (VGB) greedy policy improvement.

Our on-policy IPI is basically model-based but can be made partially model-free; each off-policy method is also either partially or completely model-free. The mathematical properties of the IPI methods—admissibility, monotone improvement, and convergence towards the optimal solution—are all rigorously proven, together with the equivalence of on- and off-policy IPI. Finally, the IPI methods are simulated with an inverted-pendulum model to support the theory and verify the performance. ABSTRACT: To estimate the value functions of policies from exploratory data, most model-free off-policy algorithms rely on importance sampling, where the use of importance sampling ratios often leads to estimates with severe variance.

It is thus desirable to learn off-policy without using the ratios. However, such an algorithm does not exist for multi-step learning with function approximation. In this paper, we introduce the first such algorithm based on temporal-difference TD learning updates. We show that an explicit use of importance sampling ratios can be eliminated by varying the amount of bootstrapping in TD updates in an action-dependent manner.

Our new algorithm achieves stability using a two-timescale gradient-based TD update. A prior algorithm based on lookup table representation called Tree Backup can also be retrieved using action-dependent bootstrapping, becoming a special case of our algorithm. In two challenging off-policy tasks, we demonstrate that our algorithm is stable, effectively avoids the large variance issue, and can perform substantially better than its state-of-the-art counterpart. These examples were chosen to illustrate a diversity of application types, the engineering needed to build applications, and most importantly, the impressive results that these methods are able to achieve.

ABSTRACT: This work presents an overarching perspective on the role that machine intelligence can play in enhancing human abilities, especially those that have been diminished due to injury or illness. As a primary contribution, we develop the hypothesis that assistive devices, and specifically artificial arms and hands, can and should be viewed as agents in order for us to most effectively improve their collaboration with their human users.

We believe that increased agency will enable more powerful interactions between human users and next generation prosthetic devices, especially when the sensorimotor space of the prosthetic technology greatly exceeds the conventional control and communication channels available to a prosthetic user.

We then introduce the idea of communicative capital as a way of thinking about the communication resources developed by a human and a machine during their ongoing interaction. Using this schema of agency and capacity, we examine the benefits and disadvantages of increasing the agency of a prosthetic limb.

To do so, we present an analysis of examples from the literature where building communicative capital has enabled a progression of fruitful, task-directed interactions between prostheses and their human users. We then describe further work that is needed to concretely evaluate the hypothesis that prostheses are best thought of as agents. The agent-based viewpoint developed in this article significantly extends current thinking on how best to support the natural, functional use of increasingly complex prosthetic enhancements, and opens the door for more powerful interactions between humans and their assistive technologies.

Experience replay introduces a new hyper-parameter, the memory buffer size, which needs careful tuning. Unfortunately, the importance of this hyper-parameter has long been underestimated in the community. In this paper we conduct a systematic empirical study of experience replay under various function representations.

We showcase that a large replay buffer can significantly hurt performance. Moreover, we propose a simple O(1) method to remedy the negative influence of a large replay buffer. We showcase its utility in both simple grid worlds and challenging domains like Atari games. Eligibility traces, the usual way of handling them, work well with linear function approximators. However, this was limited to action-value methods. In this paper, we extend this approach to handle n-step returns, generalize it to policy gradient methods, and empirically study the effect of such delayed updates in control tasks.

Specifically, we introduce two novel forward actor-critic methods and empirically investigate our proposed methods with the conventional actor-critic method on mountain car and pole-balancing tasks. From our experiments, we observe that forward actor-critic dramatically outperforms the conventional actor-critic in these standard control tasks.

Notably, this forward actor-critic method has produced a new class of multi-step RL algorithms without eligibility traces. The performance of a learning system depends on the type of representation used for representing the data. Typically, these representations are hand-engineered using domain knowledge. More recently, the trend is to learn these representations through stochastic gradient descent in multi-layer neural networks, which is called backprop.

Learning the representations directly from the incoming data stream reduces the human labour involved in designing a learning system. More importantly, it allows a learning system to scale to difficult tasks. In this paper, we introduce a new incremental learning algorithm called crossprop, which learns the incoming weights of hidden units based on the meta-gradient descent approach that was previously introduced by Sutton and Schraudolph for learning step-sizes.

The final update equation introduces an additional memory parameter for each of these weights and generalizes the backprop update equation. From our experiments, we show that crossprop learns and reuses its feature representation while tackling new and unseen tasks, whereas backprop relearns a new feature representation. In addition to learning from the current trial, the new model supposes that animals store and replay previous trials, learning from the replayed trials using the same learning rule.

This simple idea provides a unified explanation for diverse phenomena that have proved challenging to earlier associative models, including spontaneous recovery, latent inhibition, retrospective revaluation, and trial spacing effects. For example, spontaneous recovery is explained by supposing that the animal replays its previous trials during the interval between extinction and test. These include earlier acquisition trials as well as recent extinction trials, and thus there is a gradual re-acquisition of the conditioned response.

We present simulation results for the simplest version of this replay idea, where the trial memory is assumed empty at the beginning of an experiment, all experienced trials are stored and none removed, and sampling from the memory is performed at random. Even this minimal replay model is able to explain the challenging phenomena, illustrating the explanatory power of an associative model enhanced by learning from remembered as well as real experiences. To accomplish this, machines need to learn about their human users' intentions and adapt to their preferences.

In most current research, a user has conveyed preferences to a machine using explicit corrective or instructive feedback; explicit feedback imposes a cognitive load on the user and is expensive in terms of human effort. The primary objective of the current work is to demonstrate that a learning agent can reduce the amount of explicit feedback required for adapting to the user's preferences pertaining to a task by learning to perceive a value of its behavior from the human user, particularly from the user's facial expressions; we call this face valuing.

We empirically evaluate face valuing on a grip selection task. Our preliminary results suggest that an agent can quickly adapt to a user's changing preferences with minimal explicit feedback by learning a value function that maps facial features extracted from a camera image to expected future reward. We believe that an agent learning to perceive a value from the body language of its human user is complementary to existing interactive machine learning approaches and will help in creating successful human-machine interactive applications.

ABSTRACT: In this paper we introduce the idea of improving the performance of parametric temporal-difference (TD) learning algorithms by selectively emphasizing or de-emphasizing their updates on different time steps. Our treatment includes general state-dependent discounting and bootstrapping functions, and a way of specifying varying degrees of interest in accurately valuing different states. Machine learning, and in particular learned predictions about user intent, could help to reduce the time and cognitive load required by amputees while operating their prosthetic device.

Objectives: The goal of this study was to compare two switching-based methods of controlling a myoelectric arm: non-adaptive or conventional control and adaptive control involving real-time prediction learning. Study Design: Case series study. Methods: We compared non-adaptive and adaptive control in two different experiments. In the first, one amputee and one non-amputee subject controlled a robotic arm to perform a simple task; in the second, three able-bodied subjects controlled a robotic arm to perform a more complex task. For both tasks, we calculated the mean time and total number of switches between robotic arm functions over three trials.

Results: Adaptive control significantly decreased the number of switches and total switching time for both tasks compared with the conventional control method. Conclusion: Real-time prediction learning was successfully used to improve the control interface of a myoelectric robotic arm during uninterrupted use by an amputee subject and able-bodied subjects.

Clinical Relevance: Adaptive control using real-time prediction learning has the potential to help decrease both the time and the cognitive load required by amputees in real-world functional situations when using myoelectric prostheses. Conventional algorithms wait until observing actual outcomes before performing the computations to update their predictions. If predictions are made at a high rate or span over a large amount of time, substantial computation can be required to store all relevant observations and to update all predictions when the outcome is finally observed.

We show that the exact same predictions can be learned in a much more computationally congenial way, with uniform per-step computation that does not depend on the span of the predictions. We apply this idea to various settings of increasing generality, repeatedly adding desired properties and each time deriving an equivalent span-independent algorithm for the conventional algorithm that satisfies these desiderata. Interestingly, along the way several known algorithmic constructs emerge spontaneously from our derivations, including dutch eligibility traces, temporal difference errors, and averaging.

Each step, we make sure that the derived algorithm subsumes the previous algorithms, thereby retaining their properties. Ultimately we arrive at a single general temporal-difference algorithm that is applicable to the full setting of reinforcement learning. ABSTRACT: In this article we develop the perspective that assistive devices, and specifically artificial arms and hands, may be beneficially viewed as goal-seeking agents.

We further suggest that taking this perspective enables more powerful interactions between human users and next generation prosthetic devices, especially when the sensorimotor space of the prosthetic technology greatly exceeds the conventional myoelectric control and communication channels available to a prosthetic user. Using this schema, we present a brief analysis of three examples from the literature where agency or goal-seeking behaviour by a prosthesis has enabled a progression of fruitful, task-directed interactions between a prosthetic assistant and a human director.

While preliminary, the agent-based viewpoint developed in this article extends current thinking on how best to support the natural, functional use of increasingly complex prosthetic enhancements. The appeal of true online temporal-difference methods comes from their good performance, low computational cost, and their simple interpretation, given by their forward view.

Algorithmically, these true online methods only make two small changes to the update rules of the regular methods, and the extra computational cost is negligible in most cases. However, they follow the ideas underlying the forward view much more closely. In particular, they maintain an exact equivalence with the forward view at all times, whereas the traditional versions only approximate it for small step-sizes.

We hypothesize that these true online methods not only have better theoretical properties, but also dominate the regular methods empirically. In this article, we put this hypothesis to the test by performing an extensive empirical comparison. We use linear function approximation with tabular, binary, and non-binary features.

Our results suggest that the true online methods indeed dominate the regular methods. An additional advantage is that no choice between traces has to be made for the true online methods. Besides the empirical results, we provide an in-depth analysis of the theory behind true online temporal-difference learning. In addition, we show that new true online temporal-difference methods can be derived by making changes to the online forward view and then rewriting the update equations.
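
For reference, a sketch of the linear true online TD(lambda) update as it is commonly presented in textbook pseudocode (variable names are ours); the dutch trace and the V_old correction term are what maintain the exact equivalence with the online forward view:

import numpy as np

def true_online_td_step(w, e, V_old, x, r, x_next, alpha=0.1, gamma=0.99, lam=0.9):
    V = np.dot(w, x)
    V_next = np.dot(w, x_next)
    delta = r + gamma * V_next - V
    e = gamma * lam * e + (1.0 - alpha * gamma * lam * np.dot(e, x)) * x   # dutch eligibility trace
    w = w + alpha * (delta + V - V_old) * e - alpha * (V - V_old) * x
    return w, e, V_next   # V_next becomes V_old on the next step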

Our domains include a challenging one-state and two-state example, random Markov reward processes, and a real-world myoelectric prosthetic arm. We assess the algorithms along three dimensions: computational cost, learning speed, and ease of use. ABSTRACT: Emphatic algorithms are temporal-difference learning algorithms that change their effective state distribution by selectively emphasizing and de-emphasizing their updates on different time steps.

Recent works by Sutton, Mahmood and White and Yu show that by varying the emphasis in a particular way these algorithms become stable and convergent under off-policy training with linear function approximation. This paper serves as a unified summary of the available results from both works.

Additionally, we empirically demonstrate the benefits of emphatic algorithms, due to the flexible learning using state-dependent discounting, bootstrapping and a user-specified allocation of function approximation resources. Weighted importance sampling (WIS) is generally considered superior to ordinary importance sampling but, when combined with function approximation, it has hitherto required computational complexity that is O(n²) or more in the number of features.

In this paper we introduce new off-policy learning algorithms that obtain the benefits of WIS with O(n) computational complexity. Our algorithms maintain for each component of the parameter vector a measure of the extent to which that component has been used in previous examples. This measure is used to determine component-wise step sizes, merging the ideas of stochastic gradient descent and sample averages. We present our main WIS-based algorithm first in an intuitive acausal form (the forward view) and then derive a causal algorithm using eligibility traces that is equivalent but more efficient (the backward view).

In three small experiments, our algorithms performed significantly better than prior O(n) algorithms for off-policy policy evaluation. ABSTRACT: In reinforcement learning, the notions of experience replay, and of planning as learning from replayed experience, have long been used to find good policies with minimal training data. Replay can be seen either as model-based reinforcement learning, where the store of past experiences serves as the model, or as a way to avoid a conventional model of the environment altogether.

In this paper, we look more deeply at how replay blurs the line between model-based and model-free methods. First, we show for the first time an exact equivalence between the sequence of value functions found by a model-based policy-evaluation method and by a model-free method with replay. Second, we present a general replay method that can mimic a spectrum of methods ranging from the explicitly model-free TD(0) to the explicitly model-based linear Dyna.

Finally, we use insights gained from these relationships to design a new model-based reinforcement learning algorithm for linear function approximation. ABSTRACT: The present experiment tested whether or not the time course of a conditioned eyeblink response, particularly its duration, would expand and contract as the magnitude of the conditioned response (CR) changed massively during acquisition, extinction, and reacquisition.

The CR duration remained largely constant throughout the experiment, while CR onset and peak time occurred slightly later during extinction. The results suggest that computational models can account for these results by using two layers of plasticity conforming to the sequence of synapses in the cerebellar pathways that mediate eyeblink conditioning. However, its most effective variant, weighted importance sampling, does not carry over easily to function approximation and, because of this, it is not utilized in existing off-policy learning algorithms.

In this paper, we take two steps toward bridging this gap. First, we show that weighted importance sampling can be viewed as a special case of weighting the error of individual training samples, and that this weighting has theoretical and empirical benefits similar to those of weighted importance sampling. ABSTRACT: We consider the problem of learning models of options for real-time abstract planning, in the setting where reward functions can be specified at any time and their expected returns must be efficiently computed.

We introduce a new model for an option that is independent of any reward function, called the universal option model (UOM). We prove that the UOM of an option can construct a traditional option model given a reward function, and also supports efficient computation of the option-conditional return. We extend the UOM to linear function approximation, and we show it gives the TD solution of option returns and value functions of policies over options. We provide a stochastic approximation algorithm for incrementally learning UOMs from data and prove its consistency.

We demonstrate our method in two domains. The first domain is a real-time strategy game, where the controller must select the best game unit to accomplish dynamically-specified tasks. Our experiments show that UOMs are substantially more efficient than previously known methods in evaluating option returns and policies over options. However, their algorithm is restricted to on-policy learning. In the more general case of off-policy learning, in which the policy whose outcome is predicted and the policy used to generate data may be different, their algorithm cannot be applied.

One reason for this is that the algorithm bootstraps and thus is subject to instability problems when function approximation is used. To address these limitations, we generalize their equivalence result and use this generalization to construct the first online algorithm to be exactly equivalent to an off-policy forward view. In the general theorem that allows us to derive this new algorithm, we encounter a new general eligibility-trace update. ABSTRACT: Q-learning, the most popular of reinforcement learning algorithms, has always included an extension to eligibility traces to enable more rapid learning and improved asymptotic performance on non-Markov problems.

Its appeal comes from its equivalence to a clear and conceptually simple forward view, and the fact that it can be implemented online in an inexpensive manner.

In a sense this is unavoidable for the conventional forward view, as it itself presumes that the estimates are unchanging during an episode. Our algorithm uses a new form of eligibility trace similar to but different from conventional accumulating and replacing traces. In this paper we present results with a robot that learns to next in real time, making thousands of predictions about sensory input signals at timescales from 0. Our predictions are formulated as a generalization of the value functions commonly used in reinforcement learning, where now an arbitrary function of the sensory input signals is used as a pseudo reward, and the discount rate determines the timescale.

This approach is sufficiently computationally efficient to be used for real-time learning on the robot and sufficiently data efficient to achieve substantial accuracy within 30 minutes. Moreover, a single tile-coded feature representation suffices to accurately predict many different signals over a significant range of timescales. We also extend nexting beyond simple timescales by letting the discount rate be a function of the state and show that nexting predictions of this more general form can also be learned with substantial accuracy.

General nexting provides a simple yet powerful mechanism for a robot to acquire predictive knowledge of the dynamics of its environment. ABSTRACT: We introduce a new method for robot control that combines prediction learning with a fixed, crafted response—the robot learns to make a temporally-extended prediction during its normal operation, and the prediction is used to select actions as part of a fixed behavioral response. Our method for robot control combines a fixed response with online prediction learning, thereby producing an adaptive behavior.

This method is different from standard non-adaptive control methods and also from adaptive reward-maximizing control methods. We show that this method improves upon the performance of two reactive controls, with visible benefits within 2. In the first experiment, the robot turns off its motors when it predicts a future over-current condition, which reduces the time spent in unsafe over-current conditions and improves efficiency.

In the second experiment, the robot starts to move when it predicts a human-issued request, which reduces the apparent latency of the human-robot interface. Managing human-machine interaction is a problem of considerable scope, and the simplification of human-robot interfaces is especially important in the domains of biomedical technology and rehabilitation medicine. For example, amputees who control artificial limbs are often required to quickly switch between a number of control actions or modes of operation in order to operate their devices.

We suggest that by learning to anticipate (predict) a user's behaviour, artificial limbs could take on an active role in a human's control decisions so as to reduce the burden on their users. Recently, we showed that RL in the form of general value functions (GVFs) could be used to accurately detect a user's control intent prior to their explicit control choices. In the present work, we explore the use of temporal-difference learning and GVFs to predict when users will switch their control influence between the different motor functions of a robot arm.

Experiments were performed using a multi-function robot arm that was controlled by muscle signals from a user's body similar to conventional artificial limb control. Our approach was able to acquire and maintain forecasts about a user's switching decisions in real time. It also provides an intuitive and reward-free way for users to correct or reinforce the decisions made by the machine learning system.

We expect that when a system is certain enough about its predictions, it can begin to take over switching decisions from the user to streamline control and potentially decrease the time and effort needed to complete tasks. This preliminary study therefore suggests a way to naturally integrate human- and machine-based decision making systems. ABSTRACT: Temporal-difference (TD) learning is one of the most successful and broadly applied solutions to the reinforcement learning problem; it has been used to achieve master-level play in chess, checkers and backgammon. Monte-Carlo tree search is a recent algorithm for simulation-based search, which has been used to achieve master-level play in Go.

We have introduced a new approach to high-performance planning Silver et al. Our method, TD search, combines TD learning with simulation-based search. Like Monte-Carlo tree search, value estimates are updated by learning online from simulated experience. Like TD learning, it uses value function approximation and bootstrapping to efficiently generalise between related states. Without any explicit search tree, our approach outperformed a vanilla Monte-Carlo tree search with the same number of simulations.

Many different approaches exist for learning representations, but what constitutes a good representation is not yet well understood. Much less is known about how to learn representations from an unending stream of data. In this work, we view the problem of representation learning as one of learning features. We study an important case where learning is done fully online. In the presence of an unending stream of data, the computational cost of the learning element should not grow with time and cannot be much more than that of the performance element.

Few methods can be used effectively in this case. We show that a search approach to representation learning can naturally fit into this setting. In this approach good representations are searched by generating different features and then testing them for utility. We show that a fully online method can be developed, which utilizes this generate and test approach, learns representations continually, and adds only a small fraction to the overall computation.

We believe online representation search constitutes an important step toward effective and inexpensive solutions to representation learning problems. This efficiency is largely determined by the order in which backups are performed. A particularly effective ordering strategy is the one employed by prioritized sweeping. Prioritized sweeping orders backups according to a heuristic, such that backups that cause a large value change are selected first.

The Bellman error predicts the value change caused by a full backup exactly, but its computation is expensive. Hence, methods do not use the Bellman error as a heuristic, but try to approximate the ordering induced by it with a heuristic that is cheaper to compute. We introduce the first efficient prioritized sweeping method that uses exact Bellman error ordering. The core of this method is a novel backup that allows for efficient computation of the Bellman error, while its effect as well as its computational complexity is equal to that of a full backup.
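
For context, here is a sketch of classic tabular prioritized sweeping with the usual heuristic priority (magnitude of the last observed value change); this is the baseline being improved upon, not the exact Bellman-error-ordered method described above. The model dict, predecessors function, and Q-as-dict-of-dicts layout are assumptions of this sketch.

import heapq
import itertools

def prioritized_sweeping(Q, model, predecessors, s, a, r, s_next,
                         alpha=0.1, gamma=0.95, theta=1e-4, n_planning=10):
    model[(s, a)] = (r, s_next)
    counter = itertools.count()                   # tie-breaker for the heap
    pqueue = []
    p = abs(r + gamma * max(Q[s_next].values()) - Q[s][a])
    if p > theta:
        heapq.heappush(pqueue, (-p, next(counter), (s, a)))   # max-heap via negated priority
    for _ in range(n_planning):
        if not pqueue:
            break
        _, _, (s_i, a_i) = heapq.heappop(pqueue)
        r_i, s_i_next = model[(s_i, a_i)]
        Q[s_i][a_i] += alpha * (r_i + gamma * max(Q[s_i_next].values()) - Q[s_i][a_i])
        for (s_p, a_p) in predecessors(s_i):      # predecessors whose values may now change
            r_p, _ = model[(s_p, a_p)]
            p = abs(r_p + gamma * max(Q[s_i].values()) - Q[s_p][a_p])
            if p > theta:
                heapq.heappush(pqueue, (-p, next(counter), (s_p, a_p)))
    return Q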

We demonstrate empirically that the obtained method achieves substantially higher convergence rates than other prioritized sweeping implementations. In this article, we present a preliminary study of different cases where it may be beneficial to use a set of temporally extended predictions—learned and maintained in real time—within an engineered or learned prosthesis controller.

Our study demonstrates the first successful combination of actor-critic reinforcement learning with real-time prediction learning. We evaluate this new approach to control learning during the myoelectric operation of a robot limb. Our results suggest that the integration of real-time prediction and control learning may speed control policy acquisition, allow unsupervised adaptation in myoelectric controllers, and facilitate synergies in highly actuated limbs.

These experiments also show that temporally extended prediction learning enables anticipatory actuation, opening the way for coordinated motion in assistive robotic devices. Our work therefore provides initial evidence that real-time prediction learning is a practical way to support intuitive joint control in increasingly complex prosthetic systems.

The ability to make accurate and timely predictions enhances our ability to control our situation and our environment. Assistive robotics is one prominent area where foresight of this kind can bring improved quality of life. In this article, we present a new approach to acquiring and maintaining predictive knowledge during the online, ongoing operation of an assistive robot. To our knowledge, this work is the first demonstration of a practical method for real-time prediction learning during myoelectric control.

Our approach therefore represents a fundamental tool for addressing one major unsolved problem: amputee-specific adaptation during the ongoing operation of a prosthetic device. The findings in this article also contribute a first explicit look at prediction learning in prosthetics as an important goal in its own right, independent of its intended use within a specific controller or system. Our results suggest that real-time learning of predictions and anticipations is a significant step towards more intuitive myoelectric prostheses and other assistive robotic devices.

ABSTRACT: Rabbits were classically conditioned using compounds of tone and light conditioned stimuli (CSs) presented with either simultaneous onsets (Experiment 1) or serial onsets (Experiment 2) in a delay conditioning paradigm. CR peaks were consistently clustered around the time of unconditioned stimulus (US) delivery. In both cases, serial compound training altered CR timing. Since online learning is performed in a sequence of rounds, we denote by $x_t$ the $t$-th vector in a sequence of vectors $x_1, x_2, \ldots$. The $i$-th element of $x_t$ is denoted by $x_{t,i}$.

Sets are designated by upper case letters (e.g., $S$). A norm of a vector $x$ is denoted by $\|x\|$. Throughout the dissertation, we make extensive use of several notions from convex analysis. In the appendix we overview basic definitions and derive some useful tools. Here we summarize some of our notations.

The gradient of a differentiable function $f$ is denoted by $\nabla f$ and the Hessian is denoted by $\nabla^2 f$. If $f$ is non-differentiable, we denote its sub-differential set by $\partial f$. Random variables are designated using upper case letters (e.g., $Z$). We use the notation $\mathbb{P}[A]$ to denote the probability of an event $A$.

The expected value of a random variable $Z$ is denoted by $\mathbb{E}[Z]$. In some situations, we have a deterministic function $h$ that receives a random variable as input. We denote by $\mathbb{E}[h(Z)]$ the expected value of the random variable $h(Z)$. Occasionally, we omit the dependence of $h$ on $Z$. In this case, we clarify the meaning of the expectation by using the notation $\mathbb{E}_Z[h]$. In this section we give a high-level overview of related work in different research fields. The last section of each of the chapters below includes a detailed review of previous work relevant to the specific contents of each chapter.

In game theory, the problem of sequential prediction has been addressed in the context of playing repeated games with mixed strategies. A player who can achieve low regret (i.e., regret that grows sublinearly with the number of rounds) is called Hannan consistent. Hannan consistent strategies have been obtained by Hannan [63], Blackwell [9] (in his proof of the approachability theorem), Foster and Vohra [49, 50], Freund and Schapire [55], and Hart and Mas-Colell [64]. Von Neumann's classical minimax theorem has been recovered as a simple application of regret bounds [55].

The importance of low regret strategies was further amplified by showing that if all players follow certain low regret strategies then the game converges to a correlated equilibrium (see, for example, [65, 0]). Playing repeated games with mixed strategies is closely related to the expert setting widely studied in the machine learning literature [42, 82, 85, 9].

Prediction problems have also intrigued information theorists since the early days of the information theory field. For example, Shannon estimated the entropy of the English language by letting humans predict the next symbol in English texts []. Motivated by applications of data compression, Ziv and Lempel [24] proposed an online universal coding system for arbitrary individual sequences. In the compression setting, the learner is not committed to a single prediction but rather assigns a probability over the set of possible outcomes.

The success of the coding system is measured by the total likelihood of the entire sequence of symbols. Feder, Merhav, and Gutman [47] applied universal coding systems to prediction problems, where the goal is to minimize the number of prediction errors. Their basic idea is to use an estimation of the conditional probabilities of the outcomes given previous symbols, as calculated by the Lempel-Ziv coding system, and then to randomly guess the next symbol based on this conditional probability. Another related research area is the statistics without probability approach developed by Dawid and Vovk [34, 35], which is based on the theory of prediction with low regret.

See also the inspiring book of Cesa-Bianchi and Lugosi [20] about learning, prediction, and games. The potential-based decision strategy formulated by Cesa-Bianchi and Lugosi differs from our construction, which is based on online convex programming [23]. The analysis presented in [9] relies on a generalized Blackwell's condition, which was proposed in [65].

This type of analysis is also similar to the analysis presented by [62] for the quasi-additive family of online learning algorithms. Our analysis is different and is based on the weak duality theorem, the generalized Fenchel duality, and strongly convex functions with respect to arbitrary norms.

Casting Online Learning as Online Convex Programming

In this chapter we formally define the setting of online learning. We then describe several assumptions under which the online learning setting can be cast as an online convex programming procedure.

Online learning is performed in a sequence of $T$ consecutive rounds. On round $t$, the learner is first given a question, cast as a vector $x_t$, and is required to provide an answer to this question. For example, $x_t$ can be an encoding of an email message and the question is whether the email is spam or not. The learner's prediction is performed based on a hypothesis, $h_t : X \to Y$, where $X$ is the set of questions and $Y$ is the set of possible answers.

After predicting an answer, the learner receives the correct answer to the question, denoted $y_t$, and suffers a loss according to a loss function $\ell(h_t, (x_t, y_t))$. The function $\ell$ assesses the quality of the hypothesis $h_t$ on the example $(x_t, y_t)$. Formally, let $\mathcal{H}$ be the set of all possible hypotheses; then $\ell$ is a function from $\mathcal{H} \times X \times Y$ into the reals. The ultimate goal of the learner is to minimize the cumulative loss he suffers along his run. To achieve this goal, the learner may choose a new hypothesis after each round so as to be more accurate in later rounds.
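
As a minimal sketch of this protocol (instantiated, purely for illustration, with a linear hypothesis and squared loss; the stream of question-answer pairs is assumed given):

import numpy as np

def online_learning(stream, dim, eta=0.1):
    w = np.zeros(dim)
    cumulative_loss = 0.0
    for x_t, y_t in stream:                   # round t: receive the question x_t
        prediction = np.dot(w, x_t)           # answer with the current hypothesis h_w
        loss = 0.5 * (prediction - y_t) ** 2
        cumulative_loss += loss               # the true answer y_t is revealed, loss is suffered
        w -= eta * (prediction - y_t) * x_t   # choose a new hypothesis for later rounds
    return w, cumulative_loss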

Thus, rather than saying that on round $t$ the learner chooses a hypothesis, we can say that the learner chooses a parameter vector $w_t$ and his hypothesis is $h_{w_t}$. Next, we note that once the environment chooses a question-answer pair $(x_t, y_t)$, the loss function becomes a function over the hypothesis space, or equivalently over the set of parameter vectors $S$. We can therefore redefine the procedure as follows. On round $t$, the learner chooses a vector $w_t \in S$, which defines a hypothesis $h_{w_t}$ to be used for prediction.

Let us further assume that the set of admissible parameter vectors, $S$, is convex and that the loss functions $g_t$ are convex functions (for an overview of convex analysis, see the appendix). In online convex programming, the set $S$ is known in advance, but the objective function may change along the online process. The goal of the online optimizer, which we call the learner, is to minimize the cumulative objective value $\sum_{t=1}^{T} g_t(w_t)$.

In summary, we have shown that the online learning setting can be cast as the task of online convex programming, assuming that the hypotheses are parameterized by vectors in a convex set $S$ and that the loss functions are convex with respect to this parameterization. We conclude this section with a specific example in which the above assumptions clearly hold. The setting we describe is called online regression with squared loss. Moreover, in a utopian case, the cumulative loss of $h$ on the entire sequence is zero. In this case, we would like the cumulative loss of our online algorithm to be independent of $T$. In the more realistic case there is no $h$ that correctly predicts the correct answers of all observed instances.

In this case, we would like the cumulative loss of our online algorithm not to exceed by much the cumulative loss of any fixed hypothesis $h$. Formally, we assess the performance of the learner using the notion of regret. In the next chapter, we present an algorithmic framework for online convex programming that guarantees a regret of $O(\sqrt{T})$ with respect to any vector $u \in S$. Since we have shown that online learning can be cast as online convex programming, we also obtain a low regret algorithmic framework for online learning. A well-known example in which this assumption does not hold is online classification with the 0-1 loss function.
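
In the notation above, with $g_t(w) = \ell(h_w, (x_t, y_t))$, the regret of the learner after $T$ rounds with respect to the best fixed vector in $S$ is defined in the standard way as

$$ \mathrm{Regret}(T) \;=\; \sum_{t=1}^{T} g_t(w_t) \;-\; \min_{u \in S} \sum_{t=1}^{T} g_t(u). $$

A regret of $O(\sqrt{T})$ therefore means that the average per-round excess loss over the best fixed competitor vanishes at a rate of $O(1/\sqrt{T})$.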

In this section we describe the mistake bound model, which extends the utilization of online convex programming to online learning with non-convex loss functions. Extending the technique presented below to general non-convex loss functions is straightforward. We now show that no algorithm can obtain a sub-linear regret bound for the 0-1 loss function. An adversary can make the number of mistakes of any online algorithm equal to $T$, by simply waiting for the learner's prediction and then providing the opposite answer as the true answer.

To overcome the above hardness result, the common solution in the online learning literature is to find a convex loss function that upper bounds the original non-convex loss function. It is straightforward to verify that $\ell_{\text{hinge}}(w, (x, y))$ is a convex function of $w$. Denote by $M$ the set of rounds on which $h_{w_t}(x_t) \neq y_t$, and note that the left-hand side of the above is equal to $|M|$. In the next chapter, we present a low regret algorithmic framework for online learning. Running this algorithm on the examples in $M$ results in the famous Perceptron algorithm [95]. Combining the regret bound with the hinge-loss upper bound above yields a relative mistake bound.
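
For concreteness, for linear hypotheses $h_w(x) = \langle w, x \rangle$ and labels $y \in \{+1, -1\}$, the hinge loss referred to here is, in its standard unit-margin form,

$$ \ell_{\text{hinge}}(w, (x, y)) \;=\; \max\{0,\; 1 - y \langle w, x \rangle\} \;\ge\; \mathbf{1}\big[\operatorname{sign}(\langle w, x \rangle) \neq y\big], $$

since a mistake implies $y \langle w, x \rangle \le 0$ and hence a hinge loss of at least one. Summing over rounds, the number of mistakes is at most the cumulative hinge loss, and applying the $O(\sqrt{T})$ regret bound to this convex surrogate yields a relative mistake bound of the Perceptron type.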

The bound we have obtained is the best relative mistake bound known for the Perceptron algorithm [58, 03, 06]. This model is also closely related to the model of relative loss bounds presented by Kivinen and Warmuth [76, 75, 77]. Our presentation of relative mistake bounds follows the works of Littlestone [84], and Kivinen and Warmuth [76]. The impossibility of obtaining a regret bound for the 0-1 loss function dates back to Cover [26], who showed that any online predictor that makes deterministic predictions cannot have a vanishing regret universally for all sequences.

One way to circumvent this difficulty is to allow the online predictor to make randomized predictions and to analyze its expected regret. In this dissertation we adopt another way and follow the mistake bound model, namely, we slightly modify the regret-based model in which we analyze our algorithm.

Online Convex Programming by Incremental Dual Ascent

In this chapter we present a general low regret algorithmic framework for online convex programming. Informally, the function $f$ measures the complexity of vectors in $S$ and the scalar $L$ is related to some generalized Lipschitz property of the functions $g_1, \ldots, g_T$. We defer the exact requirements we impose on $f$ and $L$ to later sections.

Our algorithmic framework emerges from a representation of the regret bound: the bound can be rewritten so that the learner's cumulative loss is compared with the optimum of a minimization problem over $S$. By generalizing the notion of Fenchel duality, we are able to derive a dual optimization problem. While the primal minimization problem of finding the best vector in $S$ can only be solved in hindsight, the dual problem can be optimized incrementally, as the online learning progresses. In order to derive explicit quantitative regret bounds we make immediate use of the fact that the dual objective lower bounds the primal objective.

We therefore reduce the process of online convex optimization to the task of incrementally increasing the dual objective function. By doing so we are able to link the primal objective value, the learner's cumulative loss, and the increase in the dual. First, it should be recalled that the Fenchel conjugate of a function $f$ is the function $f^{\star}(\theta) = \sup_{w} \big(\langle w, \theta \rangle - f(w)\big)$ (see Section A. of the appendix).

An equivalent problem is obtained by introducing one vector per round, $\inf_{w_0, w_1, \ldots, w_T} \; c\,f(w_0) + \sum_{t=1}^{T} g_t(w_t)$ subject to $w_1 = \cdots = w_T = w_0$. Recall that we would like our learning algorithm to achieve a regret bound of the form given above. Thus, up to the sublinear term $c\,L$, the learner's cumulative loss lower bounds the optimum of the minimization problem on the right-hand side. Based on the previous section, the generalized Fenchel dual of this problem can be derived (a sketch is given below). Our construction is based on the weak duality theorem, stating that any value of the dual problem is smaller than the optimal value of the primal problem.
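
As a sketch of the construction (notation as above; the exact form used in later chapters may differ in constants), the primal problem and its generalized Fenchel dual are

$$ P(w) = c\,f(w) + \sum_{t=1}^{T} g_t(w), \qquad D(\lambda_1, \ldots, \lambda_T) = -\,c\, f^{\star}\!\Big(-\tfrac{1}{c} \sum_{t=1}^{T} \lambda_t\Big) \;-\; \sum_{t=1}^{T} g_t^{\star}(\lambda_t), $$

where $f^{\star}$ and $g_t^{\star}$ are Fenchel conjugates. Weak duality gives $D(\lambda_1, \ldots, \lambda_T) \le \inf_{w \in S} P(w)$ for any choice of the dual variables, so any online procedure that increases $D$, for example by updating a single $\lambda_t$ on round $t$, yields a lower bound on the primal optimum and, through the argument above, a regret bound.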

The algorithmic framework we propose is an incremental procedure for ascending this dual objective.

