Introduction to Foundation Models

by Chen, Pin-Yu; Liu, Sijia

• Publisher's list price: EUR 74.89
• Price in HUF: 31 768 Ft (30 255 Ft + 5% VAT). The HUF price is an estimate, because the conversion rate applied is the one in effect when the book arrives: a weaker HUF raises the price slightly, a stronger HUF lowers it slightly.
• Discount: 8% (approx. 2 541 Ft off), available only to 'Alert of Favourite Topics' newsletter recipients
• Discounted price: 29 226 Ft (27 835 Ft + 5% VAT)


Availability: Not yet published.

Why don't we give an exact delivery time?

Delivery times are estimates based on our past experience. We give estimates only, because we order from outside Hungary, and the delivery time depends mainly on how quickly the publisher supplies the book. Deliveries may turn out faster or slower, but we do our best to supply the book as quickly as possible.

Product details:

• Edition number: 2025
• Publisher: Springer
• Date of Publication: 3 June 2025
• Number of Volumes: 1 piece, Book
• ISBN: 9783031767692
• Binding: Hardback
• No. of pages: 310
• Size: 235x155 mm
• Language: English
• Illustrations: 55 illustrations, black & white


Short description:

This book offers an extensive exploration of foundation models, guiding readers through the essential concepts and advanced topics that define this rapidly evolving research area. Designed for those seeking to deepen their understanding and contribute to the development of safer and more trustworthy AI technologies, the book is divided into three parts, covering the fundamentals, advanced topics in foundation models, and safety and trust in foundation models:

• Part I introduces the core principles of foundation models and generative AI, presents the technical background of neural networks, delves into the learning and generalization of transformers, and finishes with the intricacies of transformers and in-context learning.

• Part II introduces automated visual prompting techniques, prompting LLMs with privacy, and memory-efficient fine-tuning methods, and shows how LLMs can be reprogrammed for time-series machine learning tasks. It explores how LLMs can be reused for speech tasks and how synthetic datasets can be used to benchmark foundation models, and elucidates machine unlearning for foundation models.

• Part III provides a comprehensive evaluation of the trustworthiness of LLMs, introduces jailbreak attacks and defenses for LLMs, presents the safety risks of fine-tuning LLMs, introduces watermarking techniques for LLMs, presents robust detection of AI-generated text, elucidates backdoor risks in diffusion models, and presents red-teaming methods for diffusion models.

Mathematical notations are clearly defined and explained throughout, making this book an invaluable resource for both newcomers and seasoned researchers in the field.



Table of Contents:

Part I: Fundamentals of Foundation Models
• Chapter 1: Foundation Models and Generative AI
• Chapter 2: Neural Networks
• Chapter 3: Learning and Generalization of Vision Transformers
• Chapter 4: Formalizing In-Context Learning in Transformers

Part II: Advanced Topics in Foundation Models
• Chapter 5: Automated Visual Prompting
• Chapter 6: Prompting Large Language Models with Privacy
• Chapter 7: Memory-Efficient Fine-Tuning for Foundation Models
• Chapter 8: Large Language Models Meet Time Series
• Chapter 9: Large Language Models Meet Speech Recognition
• Chapter 10: Benchmarking Foundation Models using Synthetic Datasets
• Chapter 11: Machine Unlearning for Foundation Models

Part III: Trust and Safety in Foundation Models
• Chapter 12: Trustworthiness Evaluation of Large Language Models
• Chapter 13: Attacks and Defenses on Aligned Large Language Models
• Chapter 14: Safety Risks in Fine-tuning Large Language Models
• Chapter 15: Watermarks for Large Language Models
• Chapter 16: AI-Generated Text Detection
• Chapter 17: Backdoor Risks in Diffusion Models
• Chapter 18: Prompt Engineering for Safety Red-teaming: A Case Study on Text-to-Image Diffusion Models
