Generative AI Architectures with LLM, Prompt, RAG, Vector DB

Design and Integrate AI-Powered S/LLMs into Enterprise Apps using Prompt Engineering, RAG, Fine-Tuning and Vector DBs

What you'll learn

  • Generative AI Model Architectures (Types of Generative AI Models)
  • Transformer Architecture: "Attention Is All You Need"
  • Large Language Models (LLMs) Architectures
  • Text Generation, Summarization, Q&A, Classification, Sentiment Analysis, Embedding Semantic Search
  • Generate Text with ChatGPT: Understand Capabilities and Limitations of LLMs (Hands-on)
  • Function Calling and Structured Outputs in Large Language Models (LLMs)
  • LLM Providers: OpenAI, Meta AI, Anthropic, Hugging Face, Microsoft, Google and Mistral AI
  • LLM Models: OpenAI ChatGPT, Meta Llama, Anthropic Claude, Google Gemini, Mistral Mixtral, xAI Grok
  • SLM Models: OpenAI ChatGPT 4o mini, Meta Llama 3.2 mini, Google Gemma, Microsoft Phi 3.5
  • How to Choose LLM Models: Quality, Speed, Price, Latency and Context Window
  • Interacting with Different LLMs via Chat UI: ChatGPT, Llama, Mixtral, Phi-3
  • Installing and Running Llama and Gemma Models Using Ollama
  • Modernizing Enterprise Apps with AI-Powered LLM Capabilities
  • Designing the 'EShop Support App' with AI-Powered LLM Capabilities
  • Advanced Prompting Techniques: Zero-shot, One-shot, Few-shot, Chain-of-Thought (CoT)
  • Design Advanced Prompts for Ticket Detail Page in EShop Support App w/ Q&A Chat and RAG
  • The RAG Architecture: Ingestion with Embeddings and Vector Search
  • E2E Workflow of a Retrieval-Augmented Generation (RAG) - The RAG Workflow
  • End-to-End RAG Example for EShop Customer Support using OpenAI Playground
  • Fine-Tuning Methods: Full, Parameter-Efficient Fine-Tuning (PEFT), LoRA, Transfer Learning
  • End-to-End Fine-Tuning of an LLM for EShop Customer Support using OpenAI Playground
  • Choosing the Right Optimization – Prompt Engineering, RAG, and Fine-Tuning

Description

In this course, you'll learn how to design Generative AI Architectures by integrating AI-Powered S/LLMs into the EShop Support enterprise application using Prompt Engineering, RAG, Fine-Tuning, and Vector DBs.

We will design Generative AI Architectures with the following components:

Small and Large Language Models (S/LLMs)

Prompt Engineering

Retrieval Augmented Generation (RAG)

Fine-Tuning

Vector Databases

We start with the basics and progressively dive deeper into each topic. We'll also follow the LLM Augmentation Flow, a framework that improves LLM results by applying Prompt Engineering, RAG, and Fine-Tuning in sequence.

Large Language Models (LLMs) module:

How Large Language Models (LLMs) Work

Capabilities of LLMs: Text Generation, Summarization, Q&A, Classification, Sentiment Analysis, Embedding Semantic Search, Code Generation

Generate Text with ChatGPT: Understand Capabilities and Limitations of LLMs (Hands-on)

Function Calling and Structured Output in Large Language Models (LLMs)
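
To illustrate function calling, here is a minimal sketch using the OpenAI Python SDK (v1.x); the get_order_status tool and its parameters are hypothetical examples, not part of the course material.

```python
# Minimal function-calling sketch with the OpenAI Python SDK (v1.x).
# The tool name and parameters below are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",  # hypothetical EShop support function
        "description": "Look up the status of a customer order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Where is my order 12345?"}],
    tools=tools,
)

# If the model decides to call the function, it returns a structured tool call
# instead of free text; the app executes it and sends the result back.
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name, tool_call.function.arguments)
```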

LLM Models: OpenAI ChatGPT, Meta Llama, Anthropic Claude, Google Gemini, Mistral Mixtral, xAI Grok

SLM Models: OpenAI ChatGPT 4o mini, Meta Llama 3.2 mini, Google Gemma, Microsoft Phi 3.5

Interacting with Different LLMs via Chat UI: ChatGPT, Llama, Mixtral, Phi-3

Interacting with the OpenAI Chat Completions Endpoint through Code
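
As a quick example, the endpoint can also be called directly over HTTP. A minimal sketch using Python's requests library, assuming an OPENAI_API_KEY environment variable is set:

```python
# Calling the OpenAI Chat Completions endpoint directly over HTTP.
# Minimal sketch; error handling and retries are omitted.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": "You are an EShop customer support assistant."},
            {"role": "user", "content": "Summarize this ticket: my order arrived damaged."},
        ],
        "temperature": 0.2,
    },
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])
```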

Installing and Running Llama and Gemma Models Locally Using Ollama
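
Once Ollama is running locally, its REST API can be queried from code. A minimal sketch, assuming Ollama is installed and the llama3.2 model has been pulled (ollama pull llama3.2):

```python
# Querying a locally running Llama model through Ollama's REST API.
# By default the Ollama server listens on localhost:11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",
        "prompt": "Classify this support ticket as billing, shipping, or other: 'I was charged twice.'",
        "stream": False,
    },
    timeout=60,
)
print(resp.json()["response"])
```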

Modernizing and Designing the EShop Support Enterprise App with AI-Powered LLM Capabilities

Prompt Engineering module:

Steps of Designing Effective Prompts: Iterate, Evaluate and Templatize

Advanced Prompting Techniques: Zero-shot, One-shot, Few-shot, Chain-of-Thought, Instruction and Role-based
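
To make few-shot prompting concrete, here is a minimal sketch of a templatized classification prompt; the example tickets and categories are hypothetical.

```python
# Templatizing a few-shot prompt for support-ticket classification.
# The example tickets and labels are illustrative only.
FEW_SHOT_TEMPLATE = """You are a support-ticket classifier for an e-commerce shop.
Classify each ticket as one of: Billing, Shipping, Returns, Other.

Ticket: "I was charged twice for the same order."
Category: Billing

Ticket: "My package has been stuck in transit for two weeks."
Category: Shipping

Ticket: "{ticket}"
Category:"""

prompt = FEW_SHOT_TEMPLATE.format(ticket="How do I send back a jacket that doesn't fit?")
print(prompt)  # send this as the user message to any chat-completion endpoint
```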

Design Advanced Prompts for EShop Support – Classification, Sentiment Analysis, Summarization, Q&A Chat, and Response Text Generation

Design Advanced Prompts for Ticket Detail Page in EShop Support App w/ Q&A Chat and RAG

Retrieval-Augmented Generation (RAG) module:

The RAG Architecture Part 1: Ingestion with Embeddings and Vector Search

The RAG Architecture Part 2: Retrieval with Reranking and Context Query Prompts

The RAG Architecture Part 3: Generation with Generator and Output

E2E Workflow of a Retrieval-Augmented Generation (RAG) - The RAG Workflow

Design EShop Customer Support using RAG

End-to-End RAG Example for EShop Customer Support using OpenAI Playground
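
To make the RAG workflow concrete, here is a minimal Python sketch of the three stages (ingestion with embeddings, retrieval by vector similarity, and generation). It uses the OpenAI Python SDK and an in-memory list in place of a real Vector DB; the FAQ snippets are illustrative.

```python
# End-to-end RAG sketch: ingest documents as embeddings, retrieve the closest
# one by cosine similarity, then generate an answer grounded in that context.
import math
from openai import OpenAI

client = OpenAI()

docs = [
    "Orders can be returned within 30 days of delivery for a full refund.",
    "Standard shipping takes 3-5 business days within the EU.",
]

def embed(text: str) -> list[float]:
    return client.embeddings.create(model="text-embedding-3-small", input=text).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Ingestion: store (document, embedding) pairs - a Vector DB would do this at scale.
index = [(d, embed(d)) for d in docs]

# Retrieval: embed the question and pick the most similar document.
question = "How long do I have to return an item?"
q_emb = embed(question)
context = max(index, key=lambda pair: cosine(q_emb, pair[1]))[0]

# Generation: answer using only the retrieved context.
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Answer using only this context: {context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```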

Fine-Tuning module:

Fine-Tuning Workflow

Fine-Tuning Methods: Full, Parameter-Efficient Fine-Tuning (PEFT), LoRA, Transfer Learning

Design EShop Customer Support Using Fine-Tuning

End-to-End Fine-Tuning of an LLM for EShop Customer Support using OpenAI Playground
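
As a rough sketch of what the Playground does behind the scenes, the snippet below prepares a JSONL training file and starts a fine-tuning job with the OpenAI Python SDK; the training example and model snapshot name are illustrative assumptions.

```python
# Sketch of fine-tuning an OpenAI model on support conversations (OpenAI SDK v1.x).
# A real job needs many more examples; this single conversation is illustrative.
import json
from openai import OpenAI

client = OpenAI()

examples = [
    {"messages": [
        {"role": "system", "content": "You are an EShop customer support assistant."},
        {"role": "user", "content": "My order arrived damaged."},
        {"role": "assistant", "content": "I'm sorry about that. I've opened a replacement request for you."},
    ]},
]

# Training data must be one JSON object per line (JSONL).
with open("support_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

training_file = client.files.create(file=open("support_finetune.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-4o-mini-2024-07-18")
print(job.id, job.status)
```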

Lastly, we will discuss:

Choosing the Right Optimization – Prompt Engineering, RAG, and Fine-Tuning

This course is more than just an introduction to Generative AI; it's a deep dive into designing advanced AI solutions by integrating LLM architectures into enterprise applications.

You'll get hands-on experience designing a complete EShop Customer Support application, including LLM capabilities such as Summarization, Q&A, Classification, Sentiment Analysis, Embedding Semantic Search, and Code Generation.

Who this course is for:

  • Beginners who want to integrate AI-Powered LLMs into Enterprise Apps
