LangChain Integration

The Surfinguard Python SDK includes a LangChain integration that wraps LangChain tools with security checks, preventing agents from executing dangerous actions.

Installation

pip install surfinguard langchain

Quick Start

import subprocess

from surfinguard import Guard
from surfinguard.integrations.langchain import SurfinguardToolGuard
from langchain.tools import Tool
 
# Create the guard
guard = Guard(api_key="sg_live_your_key_here", policy="moderate")
tool_guard = SurfinguardToolGuard(guard)
 
# Define a LangChain tool
shell_tool = Tool(
    name="shell",
    description="Execute a shell command",
    func=lambda cmd: subprocess.check_output(cmd, shell=True).decode(),
)
 
# Wrap it with Surfinguard protection
safe_shell = tool_guard.wrap(shell_tool, action_type="command")
 
# Safe commands execute normally
result = safe_shell.run("ls -la")  # Works fine
 
# Dangerous commands are blocked
result = safe_shell.run("rm -rf /")  # Raises NotAllowedError

Wrapping Tools

The SurfinguardToolGuard.wrap() method creates a new tool that checks the input before calling the original tool:

from surfinguard.integrations.langchain import SurfinguardToolGuard
 
tool_guard = SurfinguardToolGuard(guard)
 
# Wrap with explicit action type
safe_tool = tool_guard.wrap(original_tool, action_type="command")
 
# Wrap with auto-detection based on tool name
safe_tool = tool_guard.wrap(original_tool)
# If tool.name contains "url", "web", "fetch" -> checks as URL
# If tool.name contains "shell", "exec", "command" -> checks as command
# If tool.name contains "sql", "query", "database" -> checks as query
# Otherwise -> checks as text
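The detection rules above amount to a keyword match on the tool name, checked in the order listed. The function below is an illustrative sketch of that mapping — it mirrors the documented behavior but is not the SDK's internal implementation:

```python
# Illustrative sketch of the name-based auto-detection rules above.
# Mirrors the documented mapping; not the SDK's actual code.
def detect_action_type(tool_name: str) -> str:
    name = tool_name.lower()
    if any(k in name for k in ("url", "web", "fetch")):
        return "url"
    if any(k in name for k in ("shell", "exec", "command")):
        return "shell" if False else "command"  # documented result: command
    if any(k in name for k in ("sql", "query", "database")):
        return "query"
    return "text"

print(detect_action_type("web_fetch"))       # url
print(detect_action_type("shell_exec"))      # command
print(detect_action_type("database_query"))  # query
print(detect_action_type("summarizer"))      # text
```

Note that the order of the checks acts as a precedence: a name matching both "fetch" and "command" would be treated as a URL tool. When a tool name is ambiguous, pass action_type explicitly.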

Using with LangChain Agents

from langchain.agents import initialize_agent, AgentType
from langchain.chat_models import ChatOpenAI
from langchain.tools import Tool
from surfinguard import Guard
from surfinguard.integrations.langchain import SurfinguardToolGuard
 
# Setup
guard = Guard(api_key="sg_live_...", policy="strict")
tool_guard = SurfinguardToolGuard(guard)
llm = ChatOpenAI(model="gpt-4")
 
# Define tools
tools = [
    Tool(
        name="web_fetch",
        description="Fetch a webpage by URL",
        func=fetch_page,
    ),
    Tool(
        name="shell_exec",
        description="Execute a shell command",
        func=run_command,
    ),
    Tool(
        name="database_query",
        description="Run a SQL query",
        func=run_query,
    ),
]
 
# Wrap all tools with Surfinguard
safe_tools = [tool_guard.wrap(tool) for tool in tools]
 
# Create agent with protected tools
agent = initialize_agent(
    tools=safe_tools,
    llm=llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
 
# The agent will be blocked from executing dangerous actions
try:
    agent.run("Delete all files in the home directory")
except Exception as e:
    print(f"Agent blocked: {e}")

Custom Action Extraction

For tools with complex input schemas, provide a custom extractor:

safe_tool = tool_guard.wrap(
    original_tool,
    action_type="command",
    extract_value=lambda input_dict: input_dict.get("command", ""),
)

Error Handling

When a tool call is blocked, the wrapper raises NotAllowedError:

from surfinguard import NotAllowedError
 
try:
    result = safe_tool.run("rm -rf /")
except NotAllowedError as e:
    print(f"Blocked: {e}")
    print(f"Score: {e.result.score}")
    print(f"Level: {e.result.level}")
    print(f"Reasons: {e.result.reasons}")

In an agent context, the exception is caught by the agent framework and surfaced as a tool failure, which typically prompts the agent to try a different approach.
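If you would rather hand the agent a readable observation than let the exception propagate, you can wrap the guarded tool function yourself. The sketch below is self-contained: it uses a stand-in NotAllowedError class and a hypothetical fake_safe_shell function so it runs without the SDK — in real code you would import NotAllowedError from surfinguard and wrap your actual guarded tool:

```python
# Stand-in for surfinguard.NotAllowedError so this sketch runs standalone;
# in real code: from surfinguard import NotAllowedError
class NotAllowedError(Exception):
    pass

def as_observation(tool_fn):
    """Convert a blocked call into a string observation the agent can read."""
    def guarded(tool_input: str) -> str:
        try:
            return tool_fn(tool_input)
        except NotAllowedError as e:
            return f"Tool blocked by security policy: {e}"
    return guarded

# Hypothetical guarded tool function that rejects destructive commands
def fake_safe_shell(cmd: str) -> str:
    if "rm -rf" in cmd:
        raise NotAllowedError("destructive command")
    return "ok"

shell = as_observation(fake_safe_shell)
print(shell("ls -la"))    # ok
print(shell("rm -rf /"))  # Tool blocked by security policy: destructive command
```

Returning a string keeps the block visible in the agent's scratchpad, so the model can reason about why the action failed instead of seeing an opaque exception.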

Multiple Guard Policies

Use different policies for different tools based on risk:

# Strict for shell commands
strict_guard = SurfinguardToolGuard(
    Guard(api_key="sg_live_...", policy="strict")
)
 
# Moderate for URL fetching
moderate_guard = SurfinguardToolGuard(
    Guard(api_key="sg_live_...", policy="moderate")
)
 
safe_shell = strict_guard.wrap(shell_tool, action_type="command")
safe_fetch = moderate_guard.wrap(fetch_tool, action_type="url")
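When wrapping many tools, it can help to route each action type to its guard through a small lookup table. The sketch below uses stub classes in place of Guard so it runs standalone; pick_guard is a hypothetical helper name, not part of the SDK:

```python
# Sketch: route each action type to a guard by risk level. StubGuard stands
# in for surfinguard.Guard so this runs without the SDK.
class StubGuard:
    def __init__(self, policy: str):
        self.policy = policy

GUARDS = {
    "command": StubGuard(policy="strict"),    # shell commands: strictest
    "url": StubGuard(policy="moderate"),      # URL fetching: moderate
}
DEFAULT_GUARD = StubGuard(policy="moderate")  # fallback for everything else

def pick_guard(action_type: str) -> StubGuard:
    return GUARDS.get(action_type, DEFAULT_GUARD)

# With the real SDK you would then wrap each tool with its chosen guard, e.g.:
#   SurfinguardToolGuard(pick_guard("command")).wrap(tool, action_type="command")
print(pick_guard("command").policy)  # strict
print(pick_guard("query").policy)    # moderate
```

This keeps the risk policy for each action type in one place instead of scattering guard construction across the codebase.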