
y-lahari/Generative_AI_LLMs


Generative AI with Large Language Models - DeepLearning.AI and Amazon Web Services

This repository contains all the resources I used and created for the Generative AI with LLMs course on Coursera.

Course Syllabus

Week 1

  • Transformer Architecture
  • Prompting and Prompt Engineering
  • Inference Configuration Parameters
  • Generative AI Project Lifecycle
  • Pre-training LLMs
  • Challenges - Quantisation & Computational Memory
  • Pre-training vs Fine-tuning LLMs
  • Efficient multi-GPU Strategies - DDP, FSDP
  • Scaling laws and optimal compute models
  • Pre-training for domain-specific adaptation

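The inference configuration parameters covered in Week 1 (temperature, top-k) control how the next token is drawn from the model's output logits. A minimal sketch of that sampling step, using plain NumPy rather than any particular inference library (the function name and toy logits are illustrative only):

```python
import numpy as np

def sample_token(logits, temperature=1.0, top_k=None):
    """Sample a next-token id from raw logits.

    temperature < 1 sharpens the distribution (closer to greedy);
    temperature > 1 flattens it (more random).
    top_k, if set, keeps only the k most likely tokens.
    """
    logits = np.asarray(logits, dtype=np.float64) / temperature
    if top_k is not None:
        # Mask out every logit below the k-th largest one.
        kth = np.sort(logits)[-top_k]
        logits = np.where(logits < kth, -np.inf, logits)
    # Softmax over the (possibly truncated) logits.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

# Toy vocabulary of 5 tokens; top_k=3 restricts sampling to tokens 0-2.
logits = [2.0, 1.0, 0.5, 0.1, -1.0]
token = sample_token(logits, temperature=0.7, top_k=3)
```

With `top_k=1` this reduces to greedy decoding, which is why low-temperature, low-k settings make generations more repeatable.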
Week 2

  • Instruction Fine-Tuning
  • Catastrophic forgetting
  • Multi-task Instruction Fine-tuning
  • Model Evaluation and Benchmarks
  • Parameter Efficient Fine-Tuning (PEFT)
  • Selective, Reparameterization, Additive methods
  • LoRA and Soft Prompts

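LoRA, the reparameterization method from Week 2, freezes the pre-trained weight matrix W and learns a low-rank update B·A instead. A sketch of the idea in NumPy (the dimensions, scaling factor, and initialisation follow the usual LoRA recipe, but this is an illustration, not the PEFT library's implementation):

```python
import numpy as np

# Freeze the full d_out x d_in weight matrix; train only a rank-r update.
d_in, d_out, r = 512, 512, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, r x d_in
B = np.zeros((d_out, r))                    # trainable, starts at zero

def lora_forward(x, alpha=16):
    # Frozen path plus the scaled low-rank path: W x + (alpha/r) B A x.
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size                        # 512 * 512 = 262,144
lora_params = A.size + B.size               # 2 * 8 * 512 = 8,192 (~3%)
```

Because B starts at zero, the adapted model initially behaves exactly like the frozen one, and only the 8,192 LoRA parameters (versus 262,144 in W) receive gradient updates.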
Week 3

  • Reinforcement Learning from Human Feedback (RLHF)
  • Reward model
  • Proximal Policy Optimization
  • Reward Hacking and KL Divergence
  • Reinforcement Learning with AI Feedback (RLAIF)
  • Model optimization for deployment - Distillation, Quantisation, Pruning
  • LLM application architecture - Knowledge Cutoff & Hallucinations
  • Retrieval Augmented Generation
  • Chain-of-thought prompting
  • Program aided language models (PAL)
  • ReAct - Reasoning and Acting
  • LangChain
  • Responsible AI
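The Week 3 pairing of reward hacking and KL divergence comes down to one adjustment: the reward-model score is penalised by how far the fine-tuned policy drifts from the frozen reference model. A minimal sketch of that penalised reward, with the per-token KL estimated from log-probabilities (function name and beta value are illustrative):

```python
import numpy as np

def penalised_reward(reward, policy_logprobs, ref_logprobs, beta=0.1):
    """RLHF reward with a KL-divergence penalty.

    policy_logprobs / ref_logprobs: log-probs that the fine-tuned policy
    and the frozen reference model assign to the generated tokens.
    Subtracting beta * KL discourages the policy from drifting far from
    the reference just to exploit the reward model (reward hacking).
    """
    # Sum of (log pi_policy - log pi_ref) over tokens estimates the KL.
    kl = np.sum(np.asarray(policy_logprobs) - np.asarray(ref_logprobs))
    return reward - beta * kl

# A policy that exactly matches the reference pays no penalty.
r = penalised_reward(1.5, [-0.5, -1.2], [-0.5, -1.2])
```

Raising beta trades reward for closeness to the reference model; this is the knob PPO-based RLHF pipelines tune to keep outputs on-distribution.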
