by danielscholl
A simple task tracking and backlog management MCP server for AI assistants (hack project)
Backlog Manager is an MCP (Model Context Protocol) server for issue and task management with a file-based approach. It provides tools for AI agents and other clients to create issues, add tasks to them, and track task status. Issues represent high-level feature requests or bugs, while tasks represent the specific work items needed to resolve an issue.
Built using Anthropic's MCP protocol, it supports both SSE and stdio transports for flexible integration with AI assistants like Claude, or other MCP-compatible clients.
```bash
# Clone the repository
git clone https://github.com/username/backlog-manager-mcp.git
cd backlog-manager-mcp

# Install dependencies
uv pip install -e .

# Verify installation
uv run backlog-manager  # This should start the server
```
```bash
# Build the Docker image
docker build -t backlog/manager --build-arg PORT=8050 .

# Run the container
docker run -p 8050:8050 backlog/manager

# Verify container is running
docker ps | grep backlog/manager
```
Configure the server behavior using environment variables in a `.env` file:
```bash
# Create environment file from example
cp .env.example .env
```
Example `.env` file content:
```bash
# Transport mode: 'sse' or 'stdio'
TRANSPORT=sse

# Server configuration (for SSE transport)
HOST=0.0.0.0
PORT=8050

# Data storage
TASKS_FILE=tasks.json
```
| Variable | Description | Default | Required |
|---|---|---|---|
| `TRANSPORT` | Transport protocol (`sse` or `stdio`) | `sse` | No |
| `HOST` | Host to bind to when using SSE transport | `0.0.0.0` | No |
| `PORT` | Port to listen on when using SSE transport | `8050` | No |
| `TASKS_FILE` | Path to the tasks storage file | `tasks.json` | No |
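The defaults in the table above can be made concrete in code. Here is a minimal sketch of reading this configuration from the environment; the variable names and defaults come from the table, but the `load_config` helper itself is illustrative, not part of the server's actual API:

```python
import os

def load_config() -> dict:
    """Read Backlog Manager settings from the environment,
    falling back to the documented defaults when a variable is unset."""
    return {
        "transport": os.getenv("TRANSPORT", "sse"),        # 'sse' or 'stdio'
        "host": os.getenv("HOST", "0.0.0.0"),              # SSE bind address
        "port": int(os.getenv("PORT", "8050")),            # SSE listen port
        "tasks_file": os.getenv("TASKS_FILE", "tasks.json"),
    }

config = load_config()
print(config["transport"], config["port"])
```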
```bash
# Using the CLI command
uv run backlog-manager

# Or directly with Python
uv run src/backlog_manager/main.py
```
You should see output similar to:
```
INFO: Started server process [12345]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8050 (Press CTRL+C to quit)
```
Note: The server does not support the `--help` flag since it's designed as an MCP server, not a traditional CLI application.
When using stdio mode, you don't need to start the server separately - the MCP client will start it automatically when configured properly (see Integration with MCP Clients).
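For stdio mode, a typical MCP client configuration might look like the following. The exact schema varies by client, and the server name (`backlog-manager`) and command shown here are illustrative:

```json
{
  "mcpServers": {
    "backlog-manager": {
      "command": "uv",
      "args": ["run", "backlog-manager"],
      "env": {
        "TRANSPORT": "stdio",
        "TASKS_FILE": "tasks.json"
      }
    }
  }
}
```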
The Backlog Manager exposes the following tools via MCP:
| Tool | Description | Parameters |
|---|---|---|
| `create_issue` | Create a new issue | `name` (string), `description` (string, optional), `status` (string, optional) |
| `list_issues` | Show all available issues | None |
| `select_issue` | Set the active issue | `name` (string) |
| `initialize_issue` | Create or reset an issue | `name` (string), `description` (string, optional), `status` (string, optional) |
| `update_issue_status` | Update issue status | `name` (string), `status` (string) |
| Tool | Description | Parameters |
|---|---|---|
| `add_task` | Add task to active issue | `title` (string), `description` (string, optional) |
| `list_tasks` | List tasks in active issue | `status` (string, optional) |
| `update_task_status` | Update task status | `task_id` (string), `status` (string) |
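To make the file-based model concrete, here is a small self-contained sketch that mimics what these tools do to the tasks file. The JSON shape, the helper names, and the detail that creating an issue also makes it active are assumptions for illustration, not the server's actual schema:

```python
import json
from pathlib import Path

TASKS_FILE = Path("tasks.json")

def _load() -> dict:
    # Read the store, or start fresh if the file doesn't exist yet
    if TASKS_FILE.exists():
        return json.loads(TASKS_FILE.read_text())
    return {"issues": {}, "active_issue": None}

def _save(data: dict) -> None:
    TASKS_FILE.write_text(json.dumps(data, indent=2))

def create_issue(name: str, description: str = "", status: str = "New") -> None:
    data = _load()
    data["issues"][name] = {"description": description, "status": status, "tasks": []}
    data["active_issue"] = name  # assumption: a new issue becomes the active one
    _save(data)

def add_task(title: str, description: str = "") -> None:
    data = _load()
    issue = data["issues"][data["active_issue"]]
    task_id = str(len(issue["tasks"]) + 1)
    issue["tasks"].append({"id": task_id, "title": title,
                           "description": description, "status": "New"})
    _save(data)

def update_task_status(task_id: str, status: str) -> None:
    data = _load()
    for task in data["issues"][data["active_issue"]]["tasks"]:
        if task["id"] == task_id:
            task["status"] = status
    _save(data)

create_issue("auth-feature", "Add login support")
add_task("Design login form")
update_task_status("1", "InWork")
print(json.loads(TASKS_FILE.read_text())["issues"]["auth-feature"]["tasks"][0]["status"])  # InWork
```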
Tasks and issues can have one of the following statuses:

- `New` (default for new tasks/issues)
- `InWork` (in progress)
- `Done` (completed)

Once you have the server running with SSE transport, connect to it using this configuration:
*Configuration content*
Windsurf Configuration:
*Configuration content*
n8n Configuration:
Use `host.docker.internal` instead of `localhost` to access the host machine from the n8n container: `http://host.docker.internal:8050/sse`
*Configuration content*
*Configuration content*
Backlog Manager is designed to work seamlessly with AI assistants to help you organize your project work. The most powerful use case is having the AI read specifications and automatically create a structured backlog.
Simply ask your AI assistant:
Read the spec and create a backlog for features not completed.
The AI assistant will: