
Large Language Model (LLM) Talk
AI-Talk
AI Explained breaks down the world of AI in just 10 minutes. Get quick, clear insights into AI concepts and innovations without complicated math or jargon. Perfect for your commute or spare time, this podcast makes understanding AI easy, engaging, and fun, whether you're a beginner or a tech enthusiast.
Categories: Technology
Listen to the last episode:
The Model Context Protocol (MCP), introduced by Anthropic in November 2024, is an open protocol that standardizes how applications provide context to LLMs. Acting like a "USB-C port for AI applications," it gives AI models a standardized way to connect to different data sources and tools. MCP uses a client-server architecture to overcome the "MxN integration problem": by establishing a common interface, it reduces the number of required integrations from M×N to M+N. This eliminates the need for custom connectors, fosters a unified ecosystem for LLM integration, and enables more robust and scalable AI applications.
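The MxN-to-M+N reduction described above can be sketched with a small back-of-the-envelope calculation. This is an illustration of the counting argument only, not code from the MCP specification; the function names are hypothetical.

```python
def custom_connectors(num_apps: int, num_sources: int) -> int:
    """Without a shared protocol, every application needs a bespoke
    connector to every data source: M x N integrations in total."""
    return num_apps * num_sources


def mcp_connectors(num_apps: int, num_sources: int) -> int:
    """With a common protocol like MCP, each application implements one
    client and each data source one server: M + N implementations."""
    return num_apps + num_sources


# Example: five AI applications talking to eight tools/data sources.
print(custom_connectors(5, 8))  # 40 bespoke connectors
print(mcp_connectors(5, 8))     # 13 protocol implementations
```

As the number of applications and tools grows, the gap widens quickly, which is why a single common interface scales better than pairwise custom integrations.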
Previous episodes
- 54 - Model Context Protocol (MCP) Wed, 09 Apr 2025
- 53 - LLM Post-Training: Reasoning Mon, 17 Mar 2025
- 52 - Agent AI Overview Mon, 17 Mar 2025
- 51 - FlashAttention-3 Fri, 07 Mar 2025
- 50 - FlashAttention-2 Wed, 05 Mar 2025
- 49 - FlashAttention Wed, 05 Mar 2025
- 48 - PPO (Proximal Policy Optimization) Sat, 15 Feb 2025
- 47 - "Deep Dive into LLMs like ChatGPT" - Andrej Karpathy's Tech Talk Learning Sat, 15 Feb 2025
- 46 - "Intro to Large Language Models" - Andrej Karpathy's Tech Talk Learning Thu, 13 Feb 2025
- 45 - DeepSeek-V2 Mon, 10 Feb 2025
- 44 - Matrix Calculus in Deep Learning Mon, 10 Feb 2025
- 43 - S1: Simple Test-time Scaling Sun, 09 Feb 2025
- 42 - RLHF (Reinforcement Learning from Human Feedback) Fri, 07 Feb 2025
- 41 - GRPO (Group Relative Policy Optimization) Wed, 05 Feb 2025
- 40 - Model/Knowledge Distillation Wed, 05 Feb 2025
- 39 - Qwen-2.5 Sat, 01 Feb 2025
- 38 - Qwen-2 Sat, 01 Feb 2025
- 37 - Qwen-1 Sat, 01 Feb 2025
- 36 - OpenAI-o1 Sat, 25 Jan 2025
- 35 - GPT-4o Sat, 25 Jan 2025
- 34 - Kimi k1.5 Thu, 23 Jan 2025
- 33 - DeepSeek-R1 Wed, 22 Jan 2025
- 32 - Claude-3 Mon, 20 Jan 2025
- 31 - GPT-4 Mon, 20 Jan 2025
- 30 - LLM Training Mon, 20 Jan 2025
- 29 - MiniMax-01 Mon, 20 Jan 2025
- 28 - DeepSeek v3 Mon, 20 Jan 2025
- 27 - Tree-of-Thoughts Sun, 19 Jan 2025
- 26 - LLM Reasoning Sun, 19 Jan 2025
- 25 - LangChain Sat, 18 Jan 2025
- 24 - LlamaIndex Sat, 18 Jan 2025
- 23 - Chain of Thought (CoT) Sat, 18 Jan 2025
- 22 - Retrieval-Augmented Generation (RAG) Sat, 18 Jan 2025
- 21 - Fine-Tuning Sat, 18 Jan 2025
- 20 - Scaling Laws Fri, 17 Jan 2025
- 19 - LLaMA-3 Fri, 17 Jan 2025
- 18 - LLaMA-2 Fri, 17 Jan 2025
- 17 - LLaMA-1 Fri, 17 Jan 2025
- 16 - Survey of Large Language Models Fri, 17 Jan 2025
- 15 - Mixture of Experts (MoE) Fri, 17 Jan 2025
- 14 - Multi-Task Learning Thu, 16 Jan 2025
- 13 - Gradient Descent Optimization Algorithms Thu, 16 Jan 2025
- 12 - GPT-1 (Generative Pre-trained Transformer) Thu, 16 Jan 2025
- 11 - Linear Transformers Thu, 16 Jan 2025
- 10 - BERT Wed, 15 Jan 2025
- 9 - Sora Tue, 14 Jan 2025
- 8 - Word2Vec Tue, 14 Jan 2025
- 7 - Stable Diffusion Tue, 14 Jan 2025
- 6 - Retrieval Transformer Tue, 14 Jan 2025
- 5 - GPT-2 Tue, 14 Jan 2025