
Discrete–Time Stochastic Control and Dynamic Potential Games

by González-Sánchez, David.
Authors: Hernández-Lerma, Onésimo (author) | SpringerLink (Online service). Series: SpringerBriefs in Mathematics, ISSN 2191-8198. Physical details: XIV, 69 p., online resource. ISBN: 331901059X. Subject(s): Mathematics | Systems theory | Distribution (Probability theory) | Systems Theory, Control | Probability Theory and Stochastic Processes | Control.

Introduction and summary -- Direct problem: the Euler equation approach -- The inverse optimal control problem -- Dynamic games -- Conclusion -- References -- Index.

There are several techniques for studying noncooperative dynamic games, such as dynamic programming and the maximum principle (also called the Lagrange method). It turns out, however, that one way to characterize dynamic potential games requires analyzing inverse optimal control problems, and it is here that the Euler equation approach comes in, because it is particularly well suited to solving inverse problems. Despite the importance of dynamic potential games, no systematic study of them has been available. This monograph is the first attempt to provide a systematic, self-contained presentation of stochastic dynamic potential games.
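To give a rough sense of the Euler equation approach mentioned above, here is a standard deterministic discrete-time formulation (an illustrative sketch, not taken from the book itself):

```latex
% Discrete-time optimal control problem (deterministic sketch):
%   maximize  \sum_{t=0}^{\infty} \beta^t \, r(x_t, x_{t+1}),
%   with discount factor 0 < \beta < 1 and x_0 given.
% Writing r = r(x, y), an interior optimal path must satisfy
% the (discrete-time) Euler equation
\frac{\partial r}{\partial y}(x_{t-1}, x_t)
  \;+\; \beta \, \frac{\partial r}{\partial x}(x_t, x_{t+1}) \;=\; 0,
  \qquad t = 1, 2, \dots
```

In this framing, the inverse problem asks: given a system of Euler-type conditions (for instance, the players' best-response conditions in a dynamic game), does a single reward function exist whose Euler equation reproduces them? When it does, the game admits a potential function, which is the connection the abstract alludes to.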

