Overview
Step into the world of advanced language processing with our course, “LangChain in Action: Develop LLM-Powered Applications.” This dynamic programme offers a hands-on approach to mastering the LangChain framework, enabling you to harness Large Language Models (LLMs) for developing cutting-edge applications. From the fundamentals to advanced concepts like Prompt Engineering and Retrieval Augmented Generation (RAG), the course provides a comprehensive journey through both theory and practical implementation. Dive deep into topics such as prompt chaining, memory integration, and microservice architecture, and gain the skills to build sophisticated LLM-powered applications with confidence.
How will I get my certificate?
You may be asked to complete an online quiz or written test during or after the course. Once you have successfully completed it, you will be eligible for the certificate.
Who is this course for?
- Developers aspiring to leverage Large Language Models (LLMs) for advanced application development.
- Data scientists and AI enthusiasts interested in exploring cutting-edge language processing techniques.
- Students studying computer science or related fields seeking practical experience in LLM-powered application development.
- Professionals aiming to enhance their skills in microservice architecture and language processing technologies.
- Anyone intrigued by the intersection of artificial intelligence and software engineering, eager to dive into the realm of LLM applications.
Requirements
Our LangChain in Action: Develop LLM-Powered Applications course has been designed to be fully compatible with tablets and smartphones. Here are the common requirements:
- Computer, smartphone, or tablet with internet access.
- English language proficiency.
- Required software/tools (if needed).
- Commitment to study and participate.
There is no time limit for completing this course; it can be studied at your own pace.
Career Path
Popular career paths for the LangChain in Action: Develop LLM-Powered Applications course:
- AI Application Developer: £40,000 – £60,000
- Machine Learning Engineer: £45,000 – £70,000
- Natural Language Processing (NLP) Specialist: £50,000 – £80,000
- Software Architect: £55,000 – £90,000
- AI Research Scientist: £60,000 – £100,000
- Technical Lead in AI Development: £65,000 – £110,000
Salary ranges can vary by location and experience.
Course Curriculum
- 13 sections
- 56 lectures
- 3 hours, 36 minutes total length
- Why this course is different – 00:01:00
- Prerequisites – 00:01:00
- Essential topics and terms (theory) – 00:04:00
- Why this course does not cover Open Source models like LLama2 – 00:01:00
- Optional: Install Visual Studio Code – 00:02:00
- Get the source files with Git from Github – 00:02:00
- Create OpenAI Account and create API Key – 00:02:00
- Setup of a virtual environment – 00:03:00
- Setup OpenAI Api-Key as environment variable – 00:03:00
- Exploring the vanilla OpenAI package – 00:03:00
- LLM Basics – 00:07:00
- Prompting Basics – 00:02:00
- Theory: Prompt Engineering Basics – 00:02:00
- Few Shot Prompting – 00:05:00
- Chain of thought prompting – 00:02:00
- Pipeline-Prompts – 00:04:00
- Prompt Serialisation – 00:03:00
- Introduction to chains – 00:01:00
- Basic chains – the LLMChain – 00:03:00
- Response Schemas and OutputParsers – 00:06:00
- LLMChain with multiple inputs – 00:02:00
- SequentialChains – 00:04:00
- RouterChains – 00:04:00
- Callbacks – 00:05:00
- Memory basics – ConversationBufferMemory – 00:04:00
- ConversationSummaryMemory – 00:03:00
- EXERCISE: Use Memory to build a streamlit Chatbot – 00:01:00
- SOLUTION: Chatbot with Streamlit – 00:03:00
- OpenAI Function Calling – Vanilla OpenAI Package – 00:08:00
- Function Calling with LangChain – 00:04:00
- Limits and issues of the langchain Implementation – 00:03:00
- RAG – Theory and building blocks – 00:03:00
- Loaders and Splitters – 00:04:00
- Embeddings – Theory and practice – 00:04:00
- VectorStores and Retrievers – 00:07:00
- RAG Service with FastAPI – 00:05:00
- Agents Basics – LLMs learn to use tools – 00:06:00
- Agents with a custom RAG-Tool – 00:07:00
- ChatAgents – 00:03:00
- Indexing API – keep your documents in sync – 00:02:00
- PREREQUISITE: Docker Installation – 00:01:00
- Setup of PgVector and RecordManager – 00:04:00
- Indexing Documents in practice – 00:06:00
- Document Retrieval with PgVector – 00:03:00
- Introduction to LangSmith (User Interface and Hub) – 00:02:00
- LangSmith Projects – 00:08:00
- LangSmith Datasets and Evaluation – 00:13:00
- Introduction to Microservice Architecture – 00:04:00
- How our Chatbot works in a Microservice Architecture – 00:02:00
- Introduction to Docker – 00:05:00
- Introduction to Kubernetes – 00:02:00
- Deployment of the LLM Microservices to Kubernetes – 00:13:00
- Intro to LangChain Expression Language – 00:01:00
- LCEL Part 1 – Pipes and OpenAI Function Calling
- LCEL – Part 2 – VectorStores, ItemGetter, Tools – 00:06:00
- LCEL – Part 3 – Arbitrary Functions, Runnable Interface, Fallbacks – 00:07:00