Foundations of Deep Learning - He, Fengxiang; Tao, Dacheng - Prospero Internet Bookshop

Foundations of Deep Learning
 
Product details:

ISBN13: 9789811682322
ISBN10: 9811682321
Binding: Hardback
No. of pages: 284 pages
Size: 235x155 mm
Language: English
Illustrations: 3 Illustrations, black & white; 17 Illustrations, color
Category: Foundations of Deep Learning

 
Edition number: 2024
Publisher: Springer
Date of Publication:
Number of Volumes: 1 piece, Book
 
Normal price:

Publisher's list price:
EUR 149.79
Estimated price in HUF:
63 855 HUF (60 814 HUF + 5% VAT)
 
Your price:

51 084 HUF (48 651 HUF + 5% VAT)
Discount: 20% (approx. 12 771 HUF off)
Discount is valid until: 31 December 2024
The discount is only available for 'Alert of Favourite Topics' newsletter recipients.
 
Availability:

Not yet published.
 

 
Short description:

Deep learning has significantly reshaped a variety of technologies, such as image processing, natural language processing, and audio processing. The excellent generalizability of deep learning hangs like a "cloud" over conventional complexity-based learning theory: the over-parameterization of deep learning makes almost all existing tools vacuous. This disconnect considerably undermines confidence in deploying deep learning in security-critical areas, including autonomous vehicles and medical diagnosis, where small algorithmic mistakes can lead to fatal disasters. This book seeks to explain this excellent generalizability, covering generalization analysis via size-independent complexity measures, the role of optimization in understanding generalizability, and the relationship between generalizability and ethical/security issues.
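
To see why conventional tools become vacuous (a standard illustration, not drawn from the book itself), consider a typical uniform-convergence bound, which controls the gap between the expected risk \(R(h)\) and the empirical risk \(\hat{R}(h)\) over \(n\) samples by the hypothesis complexity, roughly:

\[
R(h) \;\le\; \hat{R}(h) + O\!\left(\sqrt{\frac{\mathrm{VCdim}(\mathcal{H})}{n}}\right).
\]

For neural networks, \(\mathrm{VCdim}(\mathcal{H})\) grows with the number of parameters, so in the over-parameterized regime (parameters far exceeding \(n\)) the complexity term exceeds one and the bound says nothing about generalization.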

Efforts to understand this excellent generalizability follow two major paths: (1) developing size-independent complexity measures, which evaluate the "effective" hypothesis complexity that can actually be learned, instead of the whole hypothesis space; and (2) modelling the hypothesis learned via stochastic gradient methods, the dominant optimizers in deep learning, through stochastic differential equations and the geometry of the associated loss functions. Related works find that over-parameterization surprisingly brings many good properties to the loss functions. Rising concerns about deep learning involve ethical and security issues, including privacy preservation and adversarial robustness. Related works also reveal an interplay between these issues and generalizability: good generalizability usually implies good privacy preservation, while more robust algorithms may generalize worse.
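
As a concrete sketch of the second path (a standard modelling choice in the literature, not necessarily the book's exact formulation), the discrete SGD update with learning rate \(\eta\) on a minibatch loss \(\hat{L}_{B_k}\),

\[
\theta_{k+1} = \theta_k - \eta\, \nabla \hat{L}_{B_k}(\theta_k),
\]

is approximated by a stochastic differential equation driven by the gradient-noise covariance \(\Sigma(\theta)\):

\[
\mathrm{d}\theta_t = -\nabla L(\theta_t)\,\mathrm{d}t + \sqrt{\eta}\,\Sigma(\theta_t)^{1/2}\,\mathrm{d}W_t.
\]

Analysing this diffusion relates the noise structure and the local geometry of the loss (e.g., flat minima) to the generalizability of the learned hypothesis.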

We expect readers to gain a big picture of current knowledge in deep learning theory, understand how deep learning theory can guide the design of new algorithms, and identify future research directions. Readers need knowledge of calculus, linear algebra, probability, statistics, and statistical learning theory.


Table of Contents:
Introduction
Background
Conventional Statistical Learning Theory
Difficulty of Conventional Statistical Learning Theory
Developing Deep Learning Theory
Generalization Bounds on Hypothesis Complexity
Interplay of Optimization, Bayesian Inference, and Generalization
Geometrical Properties of Loss Surface
The Role of Over-parametrization
Rising Concerns in Ethics and Security
Privacy Preservation
Fairness Protection
Algorithmic Robustness