Foundations of Deep Learning - He, Fengxiang; Tao, Dacheng - Prospero Online Bookstore

Foundations of Deep Learning
 
Product details:

ISBN13: 9789811682322
ISBN10: 9811682321
Binding: Hardback
Length: 284 pages
Size: 235x155 mm
Language: English
Illustrations: 3 illustrations, black & white; 17 illustrations, color
Subject:

Foundations of Deep Learning

 
Edition number: 2024
Publisher: Springer
Publication date:
Number of volumes: 1 piece, Book
 
Regular price:

Publisher's list price:
EUR 149.79
Estimated price in HUF:
63 855 Ft (60 814 Ft + 5% VAT)
 
Your price:

51 084 Ft (48 651 Ft + 5% VAT)
Discount(s): 20% (approx. 12 771 Ft)
The discount is valid until: 31 December 2024.
The discount applies only to orders placed by subscribers of our 'Értesítés a kedvenc témákról' (notification about favourite topics) newsletter.
 
Availability:

Not yet published, but can be ordered. It will arrive within a few weeks of publication.
 

 
Short description:

Deep learning has significantly reshaped a variety of technologies, such as image processing, natural language processing, and audio processing. The excellent generalizability of deep learning is like a "cloud" over conventional complexity-based learning theory: the over-parameterization of deep learning makes almost all existing tools vacuous. This irreconcilability considerably undermines confidence in deploying deep learning in security-critical areas, including autonomous vehicles and medical diagnosis, where small algorithmic mistakes can lead to fatal disasters. This book seeks to explain this excellent generalizability, covering generalization analysis via size-independent complexity measures, the role of optimization in understanding generalizability, and the relationship between generalizability and ethical/security issues.
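For concreteness, here is a minimal sketch (not taken from the book) of the kind of complexity-based bound the blurb says becomes vacuous; the notation R, \widehat{R}_n, and \mathfrak{R}_n is assumed here for expected risk, empirical risk, and Rademacher complexity, with a loss bounded in [0, 1] and an i.i.d. sample of size n:

% Standard complexity-based generalization bound (sketch; notation assumed,
% not quoted from the book): with probability at least 1 - \delta,
\[
  R(h) \;\le\; \widehat{R}_n(h) \;+\; 2\,\mathfrak{R}_n(\mathcal{H}) \;+\; \sqrt{\frac{\log(1/\delta)}{2n}}
  \qquad \text{for all } h \in \mathcal{H}.
\]
% Classical estimates of \mathfrak{R}_n(\mathcal{H}) grow with network size, so for
% over-parameterized models the right-hand side exceeds 1 and says nothing; the
% "size-independent" measures mentioned in the blurb aim to bound the complexity
% of the effectively learnable hypotheses instead of the whole class \mathcal{H}.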



The efforts to understand this excellent generalizability follow two major paths: (1) developing size-independent complexity measures, which evaluate the "effective" hypothesis complexity that can actually be learned rather than the whole hypothesis space; and (2) modelling the learned hypothesis through stochastic gradient methods, the dominant optimizers in deep learning, via stochastic differential equations and the geometry of the associated loss functions. Related works discover that over-parameterization surprisingly brings many good properties to the loss functions. Ethical and security issues, including privacy preservation and adversarial robustness, are rising concerns in deep learning. Related works also reveal an interplay between these issues and generalizability: good generalizability usually implies good privacy-preserving ability, while more robust algorithms might generalize worse.
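As a hedged illustration of path (2), using a standard modelling device from the SGD-as-diffusion literature rather than the book's own derivation, the minibatch SGD iteration with learning rate \eta and gradient-noise covariance \Sigma is often approximated by a stochastic differential equation:

% Discrete SGD step: \theta_{k+1} = \theta_k - \eta\, g_k, with E[g_k] = \nabla L(\theta_k)
% and gradient-noise covariance \Sigma(\theta_k) (assumed notation).
% A common continuous-time approximation is the SDE
\[
  \mathrm{d}\theta_t \;=\; -\nabla L(\theta_t)\,\mathrm{d}t \;+\; \sqrt{\eta\,\Sigma(\theta_t)}\;\mathrm{d}W_t ,
\]
% where W_t is a standard Wiener process. The drift term couples the trajectory to
% the geometry of the loss surface, and the diffusion term couples the learning rate
% and batch size to the kind of minima the optimizer tends to settle in.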



 



We expect readers to gain a big picture of the current knowledge in deep learning theory, understand how deep learning theory can guide the design of new algorithms, and identify future research directions. Readers need a background in calculus, linear algebra, probability, statistics, and statistical learning theory.


Table of contents:
Introduction
Background
Conventional Statistical Learning Theory
Difficulty of Conventional Statistical Learning Theory
Developing Deep Learning Theory
Generalization Bounds on Hypothesis Complexity
Interplay of Optimization, Bayesian Inference, and Generalization
Geometrical Properties of Loss Surface
The Role of Over-parametrization
Rising Concerns in Ethics and Security
Privacy Preservation
Fairness Protection
Algorithmic Robustness