Product details:

ISBN13: 9781032722122
ISBN10: 1032722126
Binding: Paperback
No. of pages: 408
Size: 254x178 mm
Weight: 750 g
Language: English
Illustrations: 21 Illustrations, color; 21 Line drawings, color
Category:

AlphaGo Simplified

Rule-Based AI and Deep Learning in Everyday Games
 
Edition number: 1
Publisher: Chapman and Hall
Date of Publication:
 
Normal price:

Publisher's list price:
GBP 44.95
Estimated price in HUF:
21 710 HUF (20 677 HUF + 5% VAT)
 
Your price:

19 540 HUF (18 609 HUF + 5% VAT)
Discount: 10% (approx. 2 171 HUF off)
The discount is only available to 'Alert of Favourite Topics' newsletter recipients.
 
Availability:

Estimated delivery time: approx. 3-5 weeks. In stock at the publisher, but not at Prospero's office.
 

 
Short description:

What exactly is ML? How is it related to AI? Why is deep learning (DL) so popular these days? This book explains how traditional rule-based AI and ML work.

Long description:

May 11, 1997, was a watershed moment in the history of artificial intelligence (AI): the IBM supercomputer chess engine Deep Blue beat the world chess champion, Garry Kasparov. It was the first time a machine had defeated a reigning world champion in a match played under standard tournament conditions. Fast forward 19 years to March 9, 2016, when DeepMind's AlphaGo beat the world Go champion Lee Sedol. AI again stole the spotlight and generated a media frenzy. This time, a new type of AI algorithm, namely machine learning (ML), was the driving force behind the game strategies.


What exactly is ML? How is it related to AI? Why is deep learning (DL) so popular these days? This book explains how traditional rule-based AI and ML work and how they can be implemented in everyday games such as Last Coin Standing, Tic Tac Toe, and Connect Four. The rules of these three games are easy to implement. As a result, readers will learn rule-based AI, deep reinforcement learning, and, more importantly, how to combine the two to create powerful game strategies (the whole is indeed greater than the sum of its parts) without getting bogged down in complicated game rules.
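
To make this concrete, here is a minimal Python sketch of a rule-based agent for a coin game in the spirit of Last Coin Standing. It is not the book's code; the rules it assumes (21 coins in a pile, players alternate removing one or two coins, whoever takes the last coin wins) and the function names are illustrative only. The rule it encodes is to leave the opponent a pile size that is a multiple of three whenever possible.

    # A minimal rule-based agent for a "last coin standing" style coin game.
    # Assumed rules (not taken from the book): 21 coins in a pile, players
    # alternate removing 1 or 2 coins, and whoever takes the last coin wins.

    def rule_based_move(coins_left: int) -> int:
        """Take enough coins to leave the opponent a multiple of 3, if possible."""
        remainder = coins_left % 3
        return remainder if remainder in (1, 2) else 1  # no winning move: take 1

    def play_game(coins: int = 21) -> None:
        player = 1
        while coins > 0:
            take = rule_based_move(coins)
            coins -= take
            print(f"Player {player} takes {take}; {coins} coin(s) left")
            if coins == 0:
                print(f"Player {player} takes the last coin and wins")
            player = 3 - player  # alternate between player 1 and player 2

    if __name__ == "__main__":
        play_game()

Under these assumed rules, any pile size that is a multiple of three is a lost position for the player to move, so the second player wins the 21-coin game when both sides follow the rule.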


Implementing rule-based AI and ML in these straightforward games is quick and not computationally intensive. Consequently, game strategies can be trained in mere minutes or hours without requiring GPU training or supercomputing facilities, showcasing AI's ability to achieve superhuman performance in these games. More importantly, readers will gain a thorough understanding of the principles behind rule-based AI, such as the MiniMax algorithm, alpha-beta pruning, and Monte Carlo Tree Search (MCTS), and learn how to integrate them with cutting-edge ML techniques such as convolutional neural networks and deep reinforcement learning so that they can apply these methods in their own business fields and tackle real-world challenges.
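
As a concrete illustration of the first of those ideas, the sketch below shows a generic, depth-limited MiniMax search with alpha-beta pruning in Python. It is not the book's implementation; the small CoinGame class and its interface (legal_moves, apply, is_terminal, evaluate) are hypothetical stand-ins included only to make the example self-contained and runnable.

    # A generic sketch of depth-limited MiniMax with alpha-beta pruning.
    # The CoinGame class is a hypothetical stand-in, not the book's code.
    import math

    class CoinGame:
        """Take 1 or 2 coins per turn; whoever takes the last coin wins.
        A state is (coins_left, player_to_move), with player +1 (max) or -1 (min)."""
        def legal_moves(self, state):
            coins, _ = state
            return [m for m in (1, 2) if m <= coins]
        def apply(self, state, move):
            coins, player = state
            return (coins - move, -player)
        def is_terminal(self, state):
            return state[0] == 0
        def evaluate(self, state):
            coins, player = state
            return -player if coins == 0 else 0  # the player who just moved took the last coin

    def minimax(state, depth, alpha, beta, maximizing, game):
        """Return (value, best_move), skipping branches the opponent would never allow."""
        if depth == 0 or game.is_terminal(state):
            return game.evaluate(state), None
        best_move = None
        if maximizing:
            best = -math.inf
            for move in game.legal_moves(state):
                value, _ = minimax(game.apply(state, move), depth - 1, alpha, beta, False, game)
                if value > best:
                    best, best_move = value, move
                alpha = max(alpha, best)
                if beta <= alpha:  # alpha-beta cutoff
                    break
        else:
            best = math.inf
            for move in game.legal_moves(state):
                value, _ = minimax(game.apply(state, move), depth - 1, alpha, beta, True, game)
                if value < best:
                    best, best_move = value, move
                beta = min(beta, best)
                if beta <= alpha:  # alpha-beta cutoff
                    break
        return best, best_move

    if __name__ == "__main__":
        value, move = minimax((5, 1), depth=10, alpha=-math.inf, beta=math.inf,
                              maximizing=True, game=CoinGame())
        print(value, move)  # prints "1 2": the first player wins by taking 2 coins

Depth pruning, position evaluation, and Monte Carlo Tree Search (Chapters 5, 7, and 8) then deal with games whose trees are too large to search to the end, something this toy example sidesteps.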


Written with clarity from the ground up, this book appeals to both general readers and industry professionals who seek to learn about rule-based AI and deep reinforcement learning, as well as students and educators in computer science and programming courses.

Table of Contents:

List of Figures


Preface


Acknowledgments


Section I Rule-Based AI


Chapter 1 Rule-Based AI in the Coin Game


Chapter 2 Look-Ahead Search in Tic Tac Toe


Chapter 3 Planning Three Steps Ahead in Connect Four


Chapter 4 Recursion and MiniMax Tree Search


Chapter 5 Depth Pruning in MiniMax


Chapter 6 Alpha-Beta Pruning


Chapter 7 Position Evaluation in MiniMax


Chapter 8 Monte Carlo Tree Search


Section II Deep Learning


Chapter 9 Deep Learning in the Coin Game


Chapter 10 Policy Networks in Tic Tac Toe


Chapter 11 A Policy Network in Connect Four


Section III Reinforcement Learning


Chapter 12 Tabular Q-Learning in the Coin Game


Chapter 13 Self-Play Deep Reinforcement Learning


Chapter 14 Vectorization to Speed Up Deep Reinforcement Learning


Chapter 15 A Value Network in Connect Four


Section IV AlphaGo Algorithms


Chapter 16 Implement AlphaGo in the Coin Game


Chapter 17 AlphaGo in Tic Tac Toe and Connect Four


Chapter 18 Hyperparameter Tuning in AlphaGo


Chapter 19 The Actor-Critic Method and AlphaZero


Chapter 20 Iterative Self-Play and AlphaZero in Tic Tac Toe


Chapter 21 AlphaZero in Unsolved Games


Bibliography