Reinforcement Learning Algorithms: Analysis and Applications

Edited by Boris Belousov, Hany Abdulsamad, Pascal Klink, Simone Parisi, and Jan Peters

Book (Hardcover)

EUR 160,49

All prices include VAT.

Publisher-dependent delivery time approx. 3-6 working days.
(Print on demand. Deliverable within 3 to 6 days.)

Free shipping*

This item cannot be ordered.

Product description

This book reviews research developments in diverse areas of reinforcement learning such as model-free actor-critic methods, model-based learning and control, information geometry of policy searches, reward design, and exploration in biology and the behavioral sciences. Special emphasis is placed on advanced ideas, algorithms, methods, and applications. The contributed papers gathered here grew out of a lecture course on reinforcement learning held by Prof. Jan Peters in the winter semester 2018/2019 at Technische Universität Darmstadt. The book is intended for reinforcement learning students and researchers with a firm grasp of linear algebra, statistics, and optimization. Nevertheless, all key concepts are introduced in each chapter, making the content self-contained and accessible to a broader audience. 

Table of contents


Prediction Error and Actor-Critic Hypotheses in the Brain
Reviewing On-Policy / Off-Policy Critic Learning in the Context of Temporal Differences and Residual Learning
Reward Function Design in Reinforcement Learning
Exploration Methods in Sparse Reward Environments
A Survey on Constraining Policy Updates Using the KL Divergence
Fisher Information Approximations in Policy Gradient Methods
Benchmarking the Natural Gradient in Policy Gradient Methods and Evolution Strategies
Information-Loss-Bounded Policy Optimization
Persistent Homology for Dimensionality Reduction
Model-free Deep Reinforcement Learning - Algorithms and Applications
Actor vs Critic
Bring Color to Deep Q-Networks
Distributed Methods for Reinforcement Learning
Model-Based Reinforcement Learning
Challenges of Model Predictive Control in a Black Box Environment
Control as Inference?

Author information


Boris Belousov is a Ph.D. student at Technische Universität Darmstadt, Germany, advised by Prof. Jan Peters. He received his M.Sc. degree from the University of Erlangen-Nuremberg, Germany, in 2016, supported by a DAAD scholarship for academic excellence. Boris is now working toward combining optimal control and information theory with applications to robotics and reinforcement learning.

Hany Abdulsamad is a Ph.D. student at the TU Darmstadt, Germany. He graduated with a Master's degree in Automation and Control from the Faculty of Electrical Engineering and Information Technology at the TU Darmstadt. His research interests range from optimal control and trajectory optimization to reinforcement learning and robotics. Hany's current research focuses on learning hierarchical structures for system identification and control.

After graduating with a Master's degree in Autonomous Systems from the Technische Universität Darmstadt, Pascal Klink pursued his Ph.D. studies at the Intelligent Autonomous Systems Group of the TU Darmstadt, where he developed methods for reinforcement learning in unstructured, partially observable real-world environments. Currently, he is investigating curriculum learning methods and how to use them to facilitate learning in these environments.

Product details

Medium: Book
Format: Hardcover
Pages: 216
Language: English
Published: January 2021
Edition: 1st edition 2021
Volume no.: 2
Misc.: 978-3-030-41187-9
Dimensions: 241 x 160 mm
Weight: 494 g
ISBN-10: 3030411877
ISBN-13: 9783030411879

Order no.: 28920553

Series: Studies in Computational Intelligence 883
Edition note: 2021. viii, 206 pages. 45 illustrations, 35 in color. 235 mm
Edited by: Boris Belousov, Hany Abdulsamad, Pascal Klink, Simone Parisi, Jan Peters
Volume no.: 2
Binding: Hardcover
Edition: 1st edition 2021
Language: English
Binding note: hardcover, rounded spine, laminated cover
