Tool Use & Function Calling
Learn how agents interact with the world through tools, APIs, and function calling.
What Are Tools?
Tools are functions that an AI agent can call to interact with the external world. They're the bridge between the agent's reasoning and actual actions. Without tools, an LLM can only generate text; with tools, it can read files, run commands, call APIs, and otherwise act on its environment.
Analogy
Imagine you're blindfolded in a room. You can think and speak, but you can't see or touch anything. That's an LLM without tools. Now imagine someone gives you a set of labeled buttons - "turn on lights," "open door," "check temperature." By pressing these buttons, you can finally affect your environment. That's what tools provide to an AI agent.
Common Tool Categories
- Read the contents of a file
- Create or overwrite a file
- Make targeted changes to a file
- Find files matching a pattern
How Function Calling Works
Function calling is the mechanism that allows LLMs to use tools. Here's the flow, sketched in illustrative pseudocode:
// 1. Define the tool schema
const tools = [{
  name: "read_file",
  description: "Read the contents of a file",
  parameters: {
    type: "object",
    properties: {
      path: {
        type: "string",
        description: "The file path to read"
      }
    },
    required: ["path"]
  }
}];

// 2. Send to LLM with tools available
const response = await claude.chat({
  messages: [{ role: "user", content: "What's in package.json?" }],
  tools: tools
});

// 3. LLM responds with a tool call
// response.tool_calls = [{
//   name: "read_file",
//   arguments: { path: "package.json" }
// }]

// 4. Execute the tool
const fileContents = await readFile("package.json");

// 5. Send result back to LLM
const finalResponse = await claude.chat({
  messages: [
    ...previousMessages,
    { role: "tool", content: fileContents }
  ]
});

// 6. LLM provides final answer based on tool result
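The snippet above is illustrative pseudocode. As a point of reference, here is a minimal sketch of the same round trip using the official @anthropic-ai/sdk Messages API; the model name is an assumption, and error handling and looping over multiple tool calls are omitted:

import Anthropic from "@anthropic-ai/sdk";
import { readFile } from "node:fs/promises";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const tools: Anthropic.Tool[] = [{
  name: "read_file",
  description: "Read the contents of a file",
  input_schema: {
    type: "object",
    properties: {
      path: { type: "string", description: "The file path to read" }
    },
    required: ["path"]
  }
}];

const messages: Anthropic.MessageParam[] = [
  { role: "user", content: "What's in package.json?" }
];

// First round trip: the model either answers directly or requests a tool
let response = await client.messages.create({
  model: "claude-sonnet-4-5", // assumed model name; any tool-capable model works
  max_tokens: 1024,
  tools,
  messages
});

const toolUse = response.content.find((block) => block.type === "tool_use");
if (toolUse && toolUse.type === "tool_use") {
  // Execute the requested tool locally
  const { path } = toolUse.input as { path: string };
  const fileContents = await readFile(path, "utf8");

  // Send the assistant's tool request and our result back to the model
  messages.push({ role: "assistant", content: response.content });
  messages.push({
    role: "user",
    content: [{ type: "tool_result", tool_use_id: toolUse.id, content: fileContents }]
  });

  // Second round trip: the model answers using the tool result
  response = await client.messages.create({
    model: "claude-sonnet-4-5",
    max_tokens: 1024,
    tools,
    messages
  });
}

console.log(response.content);

In a full agent this exchange runs in a loop until the model stops requesting tools.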
Tool Schema Design
Well-designed tool schemas are crucial for agent effectiveness. Key principles:
Clear Names
read_file is better than rf. The LLM uses the name to understand what the tool does.
Descriptive Descriptions
The description is your chance to explain when and how to use the tool. Be specific about limitations and expected inputs.
Typed Parameters
Use proper types (string, number, array) and include descriptions for each parameter. Mark required vs optional clearly.
Bounded Outputs
Tools should return reasonably sized outputs. If a file is huge, truncate it; the LLM has a limited context window.
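Putting these principles together, a well-formed schema might look like the following. The search_code tool is hypothetical, written in the same JSON Schema style as the read_file example above:

// Clear name, specific description, typed parameters, and a documented output bound
const searchCodeTool = {
  name: "search_code", // a verb_noun name, not an abbreviation like "sc"
  description:
    "Search the repository for lines matching a regular expression. " +
    "Returns at most 50 matching lines; narrow the pattern if results are truncated.",
  parameters: {
    type: "object",
    properties: {
      pattern: {
        type: "string",
        description: "Regular expression to search for"
      },
      directory: {
        type: "string",
        description: "Directory to search in; defaults to the repository root"
      }
    },
    required: ["pattern"] // directory is optional
  }
};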
Real Example: Claude Code Tools
Claude Code provides a sophisticated set of tools. Here's how some key ones work:
// Read Tool
{
  name: "Read",
  description: "Reads a file from the filesystem",
  parameters: {
    file_path: "string (required) - absolute path",
    offset: "number (optional) - line to start from",
    limit: "number (optional) - max lines to read"
  }
}

// Edit Tool
{
  name: "Edit",
  description: "Makes targeted edits to a file",
  parameters: {
    file_path: "string (required) - file to edit",
    old_string: "string (required) - text to replace",
    new_string: "string (required) - replacement text"
  }
}

// Bash Tool
{
  name: "Bash",
  description: "Executes shell commands",
  parameters: {
    command: "string (required) - command to run",
    timeout: "number (optional) - max execution time"
  }
}

Tool Safety & Permissions
Giving an AI agent tools is powerful but requires careful consideration:
Safety Considerations
- Sandboxing - Run tools in isolated environments when possible
- Permission scoping - Limit tools to specific directories or actions
- User confirmation - Ask before destructive operations (see the sketch after this list)
- Rate limiting - Prevent runaway tool calls
- Audit logging - Track what tools were used and why
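For illustration, here is a small sketch of permission scoping and user confirmation wrapped around a hypothetical write_file tool in Node.js; the workspace path, prompt text, and helper names are all invented for the example:

import * as path from "node:path";
import { writeFile } from "node:fs/promises";
import * as readline from "node:readline/promises";

// All writes are confined to one directory (permission scoping)
const WORKSPACE = path.resolve("./workspace");

function resolveInsideWorkspace(filePath: string): string {
  const resolved = path.resolve(WORKSPACE, filePath);
  if (!resolved.startsWith(WORKSPACE + path.sep)) {
    throw new Error(`Permission denied: ${filePath} escapes the workspace`);
  }
  return resolved;
}

// Ask the human before a destructive operation (user confirmation)
async function confirm(question: string): Promise<boolean> {
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const answer = await rl.question(`${question} [y/N] `);
  rl.close();
  return answer.trim().toLowerCase() === "y";
}

// The agent only ever calls this wrapper, never fs.writeFile directly
async function guardedWriteFile(filePath: string, contents: string): Promise<string> {
  const resolved = resolveInsideWorkspace(filePath);
  if (!(await confirm(`Agent wants to overwrite ${resolved}. Allow?`))) {
    return "User denied the write."; // returned to the LLM as the tool result
  }
  await writeFile(resolved, contents, "utf8");
  return `Wrote ${contents.length} characters to ${resolved}`;
}

Rate limiting and audit logging can be layered onto the same wrapper, for example by counting calls per session and appending each invocation to a log file.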
MCP: Model Context Protocol
The Model Context Protocol (MCP) is an emerging standard for tool integration. It allows you to connect AI agents to external services through a standardized interface:
- Standardized - One protocol for all tools
- Discoverable - Tools describe themselves
- Secure - Built-in authentication and permissions
- Extensible - Easy to add new tools
MCP servers can provide tools for databases, cloud services, APIs, and more - all through a consistent interface that any MCP-compatible agent can use.
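As a concrete illustration, here is a small server sketch in the spirit of the MCP TypeScript SDK (@modelcontextprotocol/sdk) README: one self-describing tool served over stdio. The server name and get_temperature tool are invented for the example, and exact import paths and registration method names vary between SDK versions, so treat this as an outline rather than copy-paste code:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// An MCP server exposing one self-describing tool
const server = new McpServer({ name: "weather-demo", version: "1.0.0" });

// The name, description, and zod schema are what MCP clients discover
server.tool(
  "get_temperature",
  "Return the current temperature for a city (stubbed for this demo)",
  { city: z.string().describe("City name, e.g. 'Berlin'") },
  async ({ city }) => ({
    content: [{ type: "text", text: `It is 21°C in ${city}.` }]
  })
);

// Serve over stdio so any MCP-compatible agent can connect
await server.connect(new StdioServerTransport());

Any MCP-compatible client can then connect to this server and call get_temperature like a built-in tool.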
Key Takeaways
- Tools are functions that let agents interact with the external world
- Function calling is the mechanism that connects LLM reasoning to tool execution
- Good tool design includes clear names, descriptions, and typed parameters
- Safety measures are essential when giving agents tool access
- MCP provides a standardized way to integrate tools