This course teaches engineering AI agents through a structured, hands-on curriculum across 6 weeks. Each week builds upon the last, blending theory, frameworks, and practical projects.
- Understand agentic architectures
- Use LLMs to build basic agents without frameworks
- Final project: Personal career agent for your website
- Learn the OpenAI Agents SDK
- Implement guardrails
- Build the Deep Research app
- Low-code tool to configure agent teams
- Multiple projects to explore use cases
- Full-code, powerful and complex framework
- Tackle advanced workflows
- Agent collaboration environment
- Explore modular ecosystem
- Learn about MCP from Anthropic, for multi-model collaboration
- Final capstone: Ties all learnings together
Projects grow in complexity over time:
- Career Agent (Week 1)
- Deep Research App (Week 2)
- Multi-agent collaborations (CrewAI)
- LangGraph applications
- Engineering Team Simulation: Agents simulate dev roles
- Sidekick Agent: Local browser-based assistant
- Creator Agent: Agents that build other agents
- Trading Simulation: Agents analyze news & simulate trades
- Educational: Master agent development concepts & skills
- Commercial: Apply learnings in real-world B2B/B2C settings
Frameworks simplify AI agent development by abstracting:
- Prompt orchestration
- Tool use
- System glue logic
💡 Goal: Let developers focus on solving problems, not wiring systems.
- Direct API calls to LLMs
- Full control over prompts & logic
- Manual but transparent
- Recommended by Anthropic
- Not a framework, but a protocol by Anthropic
- Enables plug-and-play across models, tools, and data
- Open-source, standard-based, minimal glue code
- Clean, new, and developer-friendly
- Ideal for small teams and fast prototyping
- Collaborative multi-agent systems
- Low-code (YAML config)
- Slightly more complex than OpenAI SDK
- Build agents as computational graphs
- High flexibility for complex workflows
- Steep learning curve
- Best for structured multi-agent conversations
- Modular, versatile, requires conceptual investment
Depends on:
- Complexity of your use case
- Need for state/collaboration
- Developer skill set
- Preference: control vs. abstraction
🧑‍🏫 Instructor prefers lightweight tools that stay out of your way, while acknowledging the power of heavyweight options.
Definition: Extra context injected into prompts to make an LLM smarter
- Basic: Manually insert data into prompts
- Smarter: Use RAG (Retrieval-Augmented Generation) to fetch only relevant info
Provide ticket pricing data to a travel assistant LLM so it can answer questions.
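A minimal sketch of this resource-injection idea, assuming the OpenAI Python client; the ticket-price data and model name are illustrative:

```python
from openai import OpenAI

client = OpenAI()

ticket_prices = {"london": "$799", "paris": "$899", "tokyo": "$1400"}  # illustrative data

system_prompt = (
    "You are a helpful airline assistant. "
    f"Ticket prices you may quote: {ticket_prices}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How much is a ticket to Paris?"},
    ],
)
print(response.choices[0].message.content)
```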
Definition: Capabilities the LLM can use to act, like querying APIs or sending messages.
🎯 Goal: Give the LLM autonomy to act, not just respond.
Prompt: "You can use 'fetch_ticket_price'. If needed, reply with JSON."
User: "I want to fly to Paris."
LLM: { "tool": "fetch_ticket_price", "destination": "Paris" }

| Feature | Resources | Tools |
|---|---|---|
| Role | Provide information | Execute actions |
| How Used | Injected into prompt | Structured responses from LLM |
| Autonomy | Low to Medium (assistive) | High (autonomous behavior) |
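A rough sketch of handling that tool-call JSON manually, with no framework; the `fetch_ticket_price` function and prices are illustrative, and the LLM reply is hard-coded for brevity:

```python
import json

ticket_prices = {"paris": "$899", "london": "$799"}  # illustrative data

def fetch_ticket_price(destination: str) -> str:
    return ticket_prices.get(destination.lower(), "unknown")

# Suppose the LLM replied with the JSON shown above
llm_reply = '{"tool": "fetch_ticket_price", "destination": "Paris"}'
call = json.loads(llm_reply)

if call.get("tool") == "fetch_ticket_price":
    price = fetch_ticket_price(call["destination"])
    # In a real loop, this result would be sent back to the LLM
    # so it can answer the user in natural language
    print(f"Tool result: {price}")
```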
- Async Python is foundational across all major agent frameworks (OpenAI SDK, CrewAI, LangGraph, etc.).
- It enables agents to handle many concurrent tasks efficiently, especially I/O-bound operations like:
- Waiting on LLM responses
- Fetching web data
- Querying databases or tools
- A lightweight concurrency model in Python (since 3.5)
- NOT multithreading or multiprocessing; it avoids OS-level threads
- Runs a single-threaded event loop that switches between tasks while they wait
- `async def` declares a coroutine (not a regular function)
- Calling it returns a coroutine object; it doesn't execute immediately
- `await` pauses execution and waits for a coroutine to finish
```python
async def fetch_data():
    return "done"

result = await fetch_data()  # inside an async function (or a notebook with top-level await)
```

✅ Coroutine
- A function that can pause and resume
- Scheduled and managed by Pythonβs event loop
The Event Loop
The engine that powers asyncio:
- Starts and pauses coroutines
- Runs other coroutines while one is waiting
- Enables concurrent behavior in I/O-heavy programs
- `asyncio.gather`: run multiple coroutines concurrently.
```python
results = await asyncio.gather(
    fetch_data1(),
    fetch_data2(),
    fetch_data3(),
)
```

Efficient when calling multiple APIs or agents at once.
- Async Python = manual multitasking at the code level.
- Great for multi-agent orchestration, background tasks, and responsiveness.
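A runnable sketch of this pattern; the `asyncio.sleep` calls stand in for slow I/O such as LLM or API calls:

```python
import asyncio

async def fetch_data(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stands in for a slow LLM/API/database call
    return f"{name} done"

async def main():
    # All three coroutines wait concurrently, so total time is roughly the longest delay
    results = await asyncio.gather(
        fetch_data("llm", 1.0),
        fetch_data("web", 1.5),
        fetch_data("db", 0.5),
    )
    print(results)

asyncio.run(main())
```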
| Concept | Meaning |
|---|---|
| `async def` | Define a coroutine |
| `await` | Run the coroutine and wait for its result |
| Coroutine | Pauseable, resumable function |
| Event loop | Orchestrates coroutine execution |
| `asyncio.gather` | Run multiple coroutines concurrently |
- Lightweight, flexible, and not opinionated; it lets you design agents your way.
- Simplifies common tasks like tool use and LLM orchestration (e.g., managing JSON, if-statements).
- Great for rapid prototyping without getting bogged down in boilerplate.
- Handles routine complexity like:
- Tool invocation structure
- Multi-step agent coordination
- Prompt formatting & response parsing
- Makes tool usage clean and maintainable without losing control over the logic.
| Term | Meaning |
|---|---|
| Agent | A defined role + behavior built around LLM calls |
| Handoff | Interaction or communication between agents |
| Guardrails | Checks and constraints to keep agents on-task & safe |
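As a quick illustration of these terms before the step-by-step walkthrough below, a minimal sketch using the SDK's `handoffs` parameter; the agent names and instructions are illustrative, and guardrails are omitted for brevity:

```python
from agents import Agent

# A specialist agent that the triage agent can hand off to
billing_agent = Agent(
    name="Billing agent",
    instructions="Handle billing questions politely and precisely.",
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Answer general questions; hand off billing questions.",
    handoffs=[billing_agent],  # handoff: lets this agent delegate to billing_agent
)
```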
1. Create an agent instance → define what role the agent plays (e.g., researcher, planner, responder).
2. Use a `with trace` block → optional but recommended for debugging and logging interactions using OpenAI's trace viewer.
3. Run the agent with `Runner.run()` → this is a coroutine, so it must be awaited.
```python
from agents import Agent, Runner, trace

async def main():
    agent = Agent(name="Researcher", instructions="...")
    with trace("research_workflow"):
        result = await Runner.run(agent, "your input here")
```

Coined by Andrej Karpathy, vibe coding is a relaxed, iterative way of coding with LLMs: generating snippets, tweaking them, and building up functionality quickly without overplanning.
- Boosts creativity and momentum.
- Lets you explore and learn new frameworks or APIs quickly.
- Perfect for prototyping and experimenting.
- Craft high-quality prompts you can reuse.
- Ask for concise, modern code (LLMs can be verbose or outdated).
- Include today's date to get up-to-date API usage.
- Donβt trust one model blindly.
- Cross-check outputs by asking the same question to multiple LLMs (e.g., ChatGPT & Claude).
- Avoid giant blobs of LLM-generated code.
- Ask for code in small, testable chunks (e.g., one function at a time).
- Not sure how to break it down? Ask the LLM to design the step-by-step breakdown first.
- Use a second LLM to review or optimize what the first LLM wrote.
- Ask for feedback: "Is this the cleanest/best way to do it?"
- Emulates the evaluator-optimizer agent pattern manually.
- Ask for multiple solutions to the same problem.
- Encourage creativity and comparison.
- Request explanations to deepen your understanding and catch flaws.
Vibe coding is fun only if you understand what's going on.
Always follow up by asking the LLM to explain the code until it's fully clear to you.
Otherwise, debugging becomes painful fast.
Crew AI refers to three distinct offerings:
- Crew AI Enterprise: a commercial platform for deploying, running, and managing agents (with UI dashboards).
- Crew AI UI Studio: a low-code/no-code tool for building agent workflows (similar to Addendum).
- Crew AI Framework: an open-source Python framework to orchestrate agent teams.
💡 This is what the course focuses on.
- Crew = A team of agents collaborating autonomously.
- Flows = Structured, rule-based workflows with precise steps.
- Use Crew for: creative, open-ended, autonomous collaboration.
- Use Flows for: auditability, control, deterministic logic.
This course focuses on Crews because it's all about agent autonomy and collaboration.
- Crew has different constructs and terminology, but many shared ideas.
- You'll need to adapt mentally to the new structure, but it's worth it.
- Like OpenAI's SDK, Crew AI is also:
  - ✅ Flexible
  - ✅ Well-suited for multi-agent orchestration
  - ✅ Increasingly popular
Each framework has strengths. As you explore more (e.g., Crew, Autogen, etc.), notice:
- Whatβs similar
- Whatβs better
- What fits your project needs best
- Smallest autonomous unit; wraps around an LLM.
- Has:
- Role: its function in the crew
- Goal: its purpose
- Backstory: context to improve behavior
- Optional: memory & tools
Compared to OpenAI SDK: more prescriptive (vs. just using "instructions").
- A specific assignment for an agent.
- Has:
- Description
- Expected Output
- Assigned Agent
New concept not present in the OpenAI SDK.
- A team of agents and tasks.
- Two execution modes:
- Sequential: tasks run one after the other.
- Hierarchical: a manager LLM assigns tasks dynamically.
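Putting agent, task, and crew together, a minimal sketch in code; the role, goal, backstory, and model string are illustrative:

```python
from crewai import Agent, Task, Crew, Process

researcher = Agent(
    role="Researcher",
    goal="Find the latest news on {topic}",
    backstory="A meticulous analyst who loves digging into sources.",
    llm="openai/gpt-4o-mini",  # LiteLLM-style provider/model string
)

research_task = Task(
    description="Research the topic: {topic}",
    expected_output="A short bullet-point summary",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    process=Process.sequential,  # or hierarchical, with a manager LLM assigning tasks
)

result = crew.kickoff(inputs={"topic": "AI agent frameworks"})
print(result)
```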
- Used to define agents and tasks separately from code.
- Benefits:
- Cleaner codebase
- Easier editing/testing of prompts
- YAML is human-readable
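For example, an `agents.yaml` entry might look roughly like this; the agent name and wording are illustrative:

```yaml
researcher:
  role: Senior Researcher
  goal: Uncover the latest developments in {topic}
  backstory: >
    A curious analyst who excels at finding and summarizing
    reliable sources quickly.
```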
Example:

```python
# inside the @CrewBase class; agents_config is loaded from agents.yaml
agent = Agent(config=self.agents_config["researcher"])
```

`crew.py` defines:
- Agents, with the `@agent` decorator
- Tasks, with the `@task` decorator
- The crew, with the `@crew` and `@CrewBase` decorators

The `@agent` decorator registers agents for use in `self.agents`, and similarly for tasks.
✅ Pros:
- Encourages best practices in prompting (role, goal, backstory)
- YAML config promotes clean separation of concerns
- Hierarchical mode supports more dynamic workflows

❌ Cons:
- More opinionated and rigid than the OpenAI SDK
- Less direct control over system prompts unless you dig deeper
- Crew uses LiteLLM under the hood to connect to any LLM provider
- Lightweight, simple, and more flexible than LangChain
Model format: `"provider_name/model_name"`

Examples:
- `"openai/gpt-4"`
- `"anthropic/claude-3"`
- `"google/gemini"`
- `"openrouter/meta-llama-3"`
- `"ollama/local-model"` (with a base URL)

✅ Advantage over OpenAI SDK: easy switching across models and providers.
To create a new agent project:
```bash
crewai create crew my_crew
```

Project scaffold:

```
my_crew/
├── src/
│   └── my_crew/
│       ├── config/
│       │   ├── agents.yaml   # Agent definitions
│       │   └── tasks.yaml    # Task definitions
│       ├── crew.py           # Defines agents, tasks & crew (with decorators)
│       └── main.py           # Entry point to run the crew
├── pyproject.toml            # Project config (uv-managed)
└── other uv-related files
```
agents.yaml / tasks.yaml:
- Located in the config/ folder
- Keeps prompts/config clean and separate

crew.py:
- Use decorators: `@agent`, `@task`, `@crew`
- Reference YAML or manually create agent/task instances

main.py:
- Pass runtime parameters (e.g., the topic of a task)
- Call the crew object and run the workflow

Run it with `crewai run`.

Summary:
- `agents.yaml` / `tasks.yaml`: YAML configs (role, goal, LLM, etc.)
- `crew.py`: decorator-based Python code to define crew logic
- `main.py`: execution script
- `crewai run`: runs the crew by executing `main.py`
- ⚡ Fast LLM switching via LiteLLM
- Clean codebase with YAML + Python separation
- 🧱 Structured projects support scalability and clarity
- 🧰 Uses `uv` for project management (integrates well with your course setup)
- Equip agents with external capabilities
- Similar to tools in OpenAI SDK
- Explicit way to pass outputs from one task to another
- Crucial for multi-step logic or sequential tasks
- A lightweight Google Search API for agent tools
- Free with 2500 credits
Add the key to `.env`:

```
SERPER_API_KEY=your_key_here
```

Note: this is not the same as SerpAPI; use serper.dev
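A minimal sketch of wiring Serper into an agent via `crewai_tools`; it reads `SERPER_API_KEY` from the environment, and the agent fields are illustrative:

```python
from crewai import Agent
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()  # uses SERPER_API_KEY from .env

researcher = Agent(
    role="News Researcher",
    goal="Find recent articles about {topic}",
    backstory="Skilled at quick, targeted web research.",
    tools=[search_tool],
)
```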
Crew supports 5 memory types; focus on these 3:

Short-term memory:
- Uses a vector database (e.g., Chroma)
- Ideal for passing context between related tasks
- Uses RAG (retrieval-augmented generation)

Long-term memory:
- Stores persistent info with SQLite
- Builds knowledge across long timelines

Entity memory:
- Like short-term, but tracks named entities (people, places, concepts)
- Vector-based storage

Additional types:
- Contextual Memory: umbrella term combining short-term, long-term, and entity memory
- User Memory: for user-specific info (manual handling required)
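Memory is typically switched on at the crew level; a hedged sketch, reusing the agent and task from the earlier example:

```python
from crewai import Crew, Process

crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    process=Process.sequential,
    memory=True,  # enables short-term, long-term, and entity memory
)
```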
🧠 Context: LangChain Ecosystem Overview
- LangChain
- Originally built to abstract away LLM-specific integration complexities.
- Focused on chaining LLM calls, tool use, memory, RAG, prompt templates.
- Powerful, but:
- Opinionated.
- Less transparent (abstracts away prompt logic).
- Less necessary now that LLM APIs are standardized (especially OpenAI-style).
- LangGraph
- Not part of LangChain per se (though from same company).
- Focus: stable, scalable, fault-tolerant workflows for agent systems.
- Think of it as a graph of nodes, where each node is:
- An agent
- A logic step
- A human-in-the-loop
- A memory checkpoint
Core idea: model complex workflows as graphs to enable control, repeatability, and fault tolerance.
Key Features:
- Human-in-the-loop support
- Memory handling (via LangChain or external)
- "Time travel": checkpointing and rewinding to prior graph state
- Robust retry & fallback
- Plug-and-play: you can use LangChain or not
- LangSmith
- Separate product for debugging, visibility, and analytics
- Can be used with:
- LangChain
- LangGraph
- Standalone LLM apps
- Tracks inputs, outputs, reasoning paths
- Useful for monitoring and debugging your agent workflows
⚙️ Why Use LangGraph?
| Need | Solution LangGraph Offers |
|---|---|
| Complex agent collaboration | Graph-based workflow control |
| Fail-safety & retries | Fault tolerance built-in |
| Workflow versioning/checkpoints | "Time travel" to restore previous states |
| Inter-agent communication | Modeled through graph edges |
| Monitoring | Via LangSmith integration |
🧠 Summary
- LangChain = abstraction framework for LLM apps (with memory, RAG, tools, etc.)
- LangGraph = graph-based framework for building resilient multi-agent systems
- LangSmith = monitoring/debugging tool for both
📦 LangGraph = 3 Distinct Parts
| Component | Purpose |
|---|---|
| LangGraph (Framework) | Open-source core used to build agent workflows (like Crew's framework). |
| LangGraph Studio | Visual builder to design graphs via UI (similar to Crew Studio). |
| LangGraph Platform | Hosted enterprise service to deploy/run LangGraph apps (like Crew Enterprise). |
🧠 Anthropic's Perspective on Frameworks
From their blog "Building Effective Agents":
- Frameworks like LangGraph help by abstracting:
- LLM calls
- Tool usage
- Chaining logic
- But:
- Abstractions can hide the real prompts and make debugging harder.
- Encourage unnecessary complexity when simple code would do.
Their recommendation:
🔹 Start with direct LLM API use (simple JSON, raw calls).
🔹 Use frameworks only if you understand what they abstract.
🔹 Incorrect assumptions about frameworks often lead to errors.
🧠 Summary Takeaway
| LangGraph Strengths | Anthropicβs Caution |
|---|---|
| Stability, resilience, checkpointing via graph | Adds complexity and hides prompt details |
| Ideal for large, multi-agent workflows | May not be needed for small/simple applications |
| Visual tools and hosted platform | Prefer lightweight, transparent implementations |
🧠 Core Concepts in LangGraph
| Term | Description |
|---|---|
| Graph | The entire agent workflow, represented as a graph (like a tree structure). |
| State | A shared object that holds the current snapshot of your application. Immutable: always return a new version of state. |
| Node | A Python function that performs a task. It takes in state, does something (like LLM call), and returns a new state. |
| Edge | A Python function that determines what node runs next based on the current state. Can be simple or conditional. |
🟠 Nodes do the work, 🔵 edges decide what happens next.
🛠️ The 5 Steps to Building a LangGraph Agent
- Define the State Class
- Holds shared info used by all nodes.
- Immutable: always return a new version after updates.
- Start a Graph Builder
- This is where you define how your workflow is laid out.
- Think of it as setting the blueprint before running.
- Create Nodes
- Each one is a Python function (e.g., LLM call, file write).
- Operates on state and returns an updated version.
- Create Edges
- Define transitions between nodes.
- Can be fixed (always runs next) or conditional (based on state).
- Compile the Graph
- Turns your design into an executable workflow.
- After compiling, you can run the graph.
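A compact sketch of those 5 steps; the counting node is illustrative, `START` and `END` are LangGraph's built-in entry and exit markers, and the node returns its updated fields as a dict for LangGraph to merge:

```python
from langgraph.graph import StateGraph, START, END
from pydantic import BaseModel

# Step 1: define the state class
class MyState(BaseModel):
    count: int

# Step 3: a node takes the current state and returns its updates
def increment(state: MyState) -> dict:
    return {"count": state.count + 1}

# Step 2: start a graph builder
builder = StateGraph(MyState)
builder.add_node("increment", increment)

# Step 4: edges define what runs next
builder.add_edge(START, "increment")
builder.add_edge("increment", END)

# Step 5: compile the graph, then run it
graph = builder.compile()
print(graph.invoke({"count": 0}))
```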
⚙️ Execution Flow of a LangGraph App
There are 2 phases when running a graph:
- Phase 1: Define your agent workflow (the 5 steps above).
- Phase 2: Run the compiled graph with initial input.
This two-phase design is different from typical Python scripts and may feel unfamiliar at first, but it gives powerful structure and clarity.
🧠 What Is Immutable State?
- Immutable means: once created, the state cannot be modified.
- Instead of changing it, you:
- Read from the current state (e.g., state.count)
- Create and return a new state with updated values.
```python
def my_counting_node(state: MyState):
    new_count = state.count + 1
    return MyState(count=new_count)
```

✅ Helps with traceability and consistency across agent execution.
| Concept | Description |
|---|---|
| Reducer | A special function tied to a state field that tells LangGraph how to combine values. |
| Purpose | Enables parallel execution of nodes safely. Avoids overwriting updates from other nodes. |
- If multiple nodes update the same field at the same time, reducer logic safely merges them.
- Without reducers, simultaneous updates would conflict or overwrite each other.
If multiple nodes increment count, a reducer might define how to sum values instead of overwrite:
```python
def combine_counts(old_value: int, new_value: int) -> int:
    return old_value + new_value

# attached to a state field via: count: Annotated[int, combine_counts]
```

| Principle | Why It's Important |
|---|---|
| Immutability | Keeps state clean, traceable, and rollback-friendly. |
| Reducers | Enable safe, parallel execution with no lost updates. |
| Two-phase model | First build the graph, then run it; this supports scalability and clarity. |
- This lab is built in a notebook format (like previous non-Crew weeks).
- Code still plays a central role; don't worry if you prefer that.
- `Annotated` is used to add metadata to a variable or parameter.
- Python ignores this metadata at runtime, but tools like LangGraph can read it.
```python
from typing import Annotated

def shout(text: Annotated[str, "Something to be shouted"]) -> str:
    return text.upper()
```

🚨 This has no runtime effect but gives extra context, which is useful for tools like LangGraph.
- LangGraph requires you to annotate state fields when using reducers.
- This tells LangGraph how to combine field values when multiple nodes return updates simultaneously.
```python
from langgraph.graph.message import add_messages
from pydantic import BaseModel
from typing import Annotated

class MyState(BaseModel):
    messages: Annotated[list, add_messages]
```

✅ `add_messages` is a built-in reducer that simply concatenates lists of messages.
- The state is often implemented using a `pydantic.BaseModel` (or `TypedDict`).
- State is immutable: you return a new state object every time.
- ✅ Reducers help merge states when nodes update in parallel, avoiding overwrites.
Step 2 of the LangGraph setup process:

```python
from langgraph.graph import StateGraph

graph = StateGraph(MyState)
```

🔹 You pass the class (not an instance) to `StateGraph`.
| Step | Description |
|---|---|
| 1 | Define State class with annotated fields & reducers |
| 2 | Start the graph builder using StateGraph(State) |
| 3–5 | Next: create nodes, edges, and compile the graph |
- A superstep = one full invocation of a graph.
- Every time the graph is run (e.g. in response to a user input), it's a new superstep.
- Nodes that execute in parallel belong to the same superstep.
- Sequential interactions (like another user input) trigger a new superstep.
✅ Think of each user interaction (message, prompt, etc.) as one superstep.
Each graph run involves:
1. Defining the graph:
   - State class
   - Graph builder
   - Nodes
   - Edges
   - Compilation
2. Invoking the graph with some input → Superstep 1
3. Getting output
4. Another input → Superstep 2
5. And so on…
```
[Graph Definition]
        ↓
[User Input 1] → [Graph Invocation 1] → Output 1
        ↓
[User Input 2] → [Graph Invocation 2] → Output 2
        ↓
       ...
```

- They're the unit of execution in LangGraph.
- Reducers (used to combine state) only apply within a single superstep.
- To persist and manage state between supersteps, you need checkpointing.
- Checkpointing = saving the final state at the end of each superstep.
- On the next invocation (next superstep), LangGraph can resume from this checkpointed state.
- Essential for memory persistence, context tracking, and conversation continuity.
Without checkpointing, your graph starts from scratch every time.
You'll explore:
- How LangSmith logs info
- Tool calling (built-in & custom)
- Implementing checkpointing for memory across interactions
- Even though LangGraph uses state and reducers, that state only exists during a single superstep (one invocation of the graph).
- Without checkpointing, state is lost between supersteps; e.g., the graph won't remember your name across turns.
1. Use `MemorySaver`
   - Acts as in-memory checkpoint storage (not a long-term database).
   - Captures the entire state after each superstep.
2. Pass `checkpointer=memory` when compiling the graph:

```python
memory = MemorySaver()
graph = graph.compile(checkpointer=memory)
```

3. Use a config dict with a thread ID to associate calls with a memory thread:
```python
config = {
    "configurable": {
        "thread_id": "1"  # identifies the conversation
    }
}
```

- Pass the config to `graph.invoke()` to tie the call to that memory thread.
- State is remembered per thread ID.
- When you change the thread ID, the memory resets (like starting a new conversation).
- When you reuse the same thread ID, LangGraph pulls from the last checkpoint.
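Putting the checkpointing pieces together, a hedged end-to-end sketch; the canned chatbot node is illustrative (a real node would call an LLM with the accumulated messages):

```python
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import MemorySaver

class ChatState(TypedDict):
    messages: Annotated[list, add_messages]

def chatbot(state: ChatState) -> dict:
    # Illustrative: a real node would call an LLM with state["messages"]
    return {"messages": [{"role": "assistant", "content": "Hello again!"}]}

builder = StateGraph(ChatState)
builder.add_node("chatbot", chatbot)
builder.add_edge(START, "chatbot")
builder.add_edge("chatbot", END)

memory = MemorySaver()
graph = builder.compile(checkpointer=memory)

config = {"configurable": {"thread_id": "1"}}

# Two supersteps on the same thread: messages accumulate across invocations
graph.invoke({"messages": [{"role": "user", "content": "Hi, my name is Alex"}]}, config)
state = graph.invoke({"messages": [{"role": "user", "content": "Hello again"}]}, config)
print(len(state["messages"]))  # 4 messages: both turns from both supersteps
```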
You can:
- Call `.get_state(config)`: gives the latest state for a thread.
- Call `.get_state_history(config)`: gives the full history of each superstep's state.
- Use a checkpoint ID to rewind and rerun from a specific past moment.
🕰️ This lets you "time travel", e.g., replay past conversation states or resume from failure.
- Unlike hacks like storing state in UI globals (e.g., in Gradio), checkpointing is:
  - Structured
  - Repeatable
  - Resilient (easy to debug and retry)
- Autogen is an open-source agent framework by Microsoft.
- Focused on an async, event-driven architecture to enable:
  - Better observability
  - Improved flexibility, control, and scalability
- The version used in the course: v0.5.1 (based on the rewrite that started at v0.4)
- The official track used in this course
- More structured, stable, and enterprise-focused
- Actively supported by Microsoft
- Forked by the original Autogen creators after leaving Microsoft
- Based on an older version (v0.2) but claims to move faster
- Owns the `autogen` name on PyPI: `pip install autogen` installs AG2, not Microsoft's version
- Has taken over the original Discord and some community spaces

Key caution: always verify whether docs/tutorials refer to AG2 or Microsoft Autogen; they're not compatible.
- The lowest layer of Autogen.
- A framework for agent interaction, not agent implementation.
- Similar to LangGraph in structure, but with a different focus:
  - LangGraph: focused on repeatable workflows
  - Autogen Core: focused on dynamic agent interactions
- Enables agents to run across multiple processes or machines.
⚠️ Still experimental:
- APIs may change
- Not production-ready; more of a preview
Host:
- Acts as a container and coordinator.
- Responsibilities:
  - Handles message delivery
  - Manages agent sessions
  - Uses gRPC for cross-machine/process communication

Worker:
- Manages actual agent instances
- Responsibilities:
  - Hosts and executes agent logic
  - Registers available agents with the host
  - Handles execution during tasks
- Multi-agent systems distributed across machines or languages
- Separation of infrastructure and logic:
  - Host deals with communication
  - Worker handles agent behavior
- Introduction to MCP (Model Context Protocol) by Anthropic
- Return to the OpenAI Agents SDK (used alongside MCP)
- Week 1: Raw tools and APIs
- Week 2: OpenAI Agents SDK
- Week 3: Crew (structured, YAML-first)
- Week 4: LangGraph (graph-style agent coordination)
- Week 5: Autogen (multi-agent, experimental, with Autogen Core)
- Week 6: MCP β not a framework, but a standard
- Not an agent framework (unlike Crew or OpenAI SDK)
- Not used to build agents directly
- Not a fundamental rework of how agents function
- Does not include tools β it defines how to integrate them
- A protocol / standard: Defines how agents access tools, resources, and prompts
- Think of it as the USB-C of AI: a universal connector for agent functionality
- Tool integration (most hyped use case)
- Resources (e.g., context providers)
- Prompt templates
- Feels like just another spec
- Frameworks like LangChain already support tool ecosystems
- You can always write your own tools manually
- Frictionless integration: Easier than custom JSON or framework-specific tool wrappers
- Framework-agnostic: Works with Autogen, LangChain, OpenAI Agents SDK, etc.
- Exploding adoption: Big network effects + active community
- Tons of MCP tools already available: Ready-to-use marketplace ecosystem
- Backed by Anthropic: A major AI player giving it legitimacy and traction
- Standards matter: Like HTML, a well-adopted protocol becomes foundational
MCP doesn't replace frameworks; it complements them by making tool access universal and standardized.
- The application or environment using the tools.
- Examples:
  - Claude Desktop App (most common)
  - Cursor
  - Your custom agent app
- Important: Claude web chat is not a host; only the desktop app is (as of now).
- Lives inside the host
- Connects the host to one or more MCP servers
- One MCP client per MCP server
- Examples:
  - One client for Google Maps tools
  - One for file system access
  - One for stock market lookups
- A process that provides tools, resources (context), or prompt templates.
- Think of it like a plugin provider.
- Examples:
  - A server that provides weather lookup via a weather API
  - A file system access server
  - A web page fetcher (e.g., "Fetcher" using Playwright)
Your app (Host) → has MCP clients → which connect to MCP servers → that expose tools to use.
- Local-Local: Client + server run locally; work done locally.
- Local + External API: Server runs locally, but connects to external APIs (like Google or Slack).
- Local-Remote: MCP server is on a remote machine. Less common, but supported.
Most common: #2, a server on your machine calling out to APIs.
- People often think MCP servers must be remote.
- Truth: Most MCP servers are run locally, even if they access remote APIs.
| Transport | Use Case | Tech | Notes |
|---|---|---|---|
| `stdio` | Local | Standard input/output | Most common for local servers |
| `sse` | Remote | HTTPS + SSE | Used for streaming remote data |
Think of MCP as a universal socket system, like USB-C, for plugging tools into agent apps.
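For example, connecting to a local stdio server from Python with the MCP SDK looks roughly like this; the `uvx mcp-server-fetch` command launches an illustrative reference server:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch a local MCP server as a subprocess and talk to it over stdio
server_params = StdioServerParameters(command="uvx", args=["mcp-server-fetch"])

async def main():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```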
- Simple lines of code = powerful actions:
  - Web scraping
  - File generation
- Thousands of ready-to-use tools available in MCP marketplaces like:
  - MCP
  - Smithery
  - GlamourAI
- Thousands of tools (e.g., Slack, GitHub, Google Drive, Weather, Time, Blender)
- Some run locally but call remote APIs
- Others can be remote MCP servers
- Sites like Hugging Face Community Blog share highlights and best MCP tools
- Yes, MCP is that simple: just a standard for tool access
- Its simplicity is the power: plug-and-play tool expansion
- Next step: build your own MCP server and client
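As a preview of that next step, a minimal MCP server sketch using the Python SDK's FastMCP helper; the ticket-price tool is illustrative:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("prices")

@mcp.tool()
def get_ticket_price(destination: str) -> str:
    """Return an illustrative ticket price for a destination."""
    prices = {"paris": "$899", "london": "$799"}
    return prices.get(destination.lower(), "unknown")

if __name__ == "__main__":
    mcp.run(transport="stdio")  # expose the tool over stdio to any MCP host/client
```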