Frequently Asked Questions

Everything you need to know about SignalManager AI — how signals work, pricing, self-hosting, and more.

What is a signal?

A signal is any event, alert, advisory, or notification from your dev tools. Sentry errors, NVD CVEs, Datadog alerts, GitHub security advisories, PostHog session anomalies, npm audit warnings — anything that demands attention.

SignalManager AI ingests these signals via MCP server or REST API, correlates them with your choice of AI model, and surfaces what actually matters — so your team stops reviewing noise and starts fixing real problems.
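As a rough sketch of what REST ingestion might look like (the endpoint URL, field names, and auth header below are illustrative assumptions, not the published API):

```python
import json

# Hypothetical signal payload -- field names are illustrative, not the real schema.
def build_signal(source: str, title: str, severity: str, metadata: dict) -> dict:
    """Shape a raw event from a dev tool into a signal for ingestion."""
    return {
        "source": source,      # e.g. "sentry", "nvd", "datadog"
        "title": title,
        "severity": severity,  # provider-reported severity, before AI analysis
        "metadata": metadata,
    }

signal = build_signal(
    source="sentry",
    title="TypeError in checkout flow",
    severity="error",
    metadata={"release": "1.4.2", "occurrences": 37},
)

# Sending it would be a POST to a (hypothetical) ingestion endpoint, e.g.:
#   requests.post("https://api.example.com/v1/signals",
#                 json=signal, headers={"Authorization": "Bearer <token>"})
print(json.dumps(signal, indent=2))
```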

How does credit-based pricing work?

Each signal processed through AI analysis consumes one credit. Ingestion and storage are free — you only pay when AI analyzes, correlates, or prioritizes a signal.
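Since ingestion is free and only AI-analyzed signals consume credits, a back-of-the-envelope estimate only needs your signal volume and the fraction you expect to route through AI analysis (that fraction is your own assumption, not a product default):

```python
# One credit per AI-analyzed signal; ingestion and storage cost nothing.
def monthly_credits(signals_per_day: int, analyzed_fraction: float, days: int = 30) -> int:
    """Estimate monthly credit usage from daily volume and the share sent to AI."""
    return round(signals_per_day * analyzed_fraction * days)

# Example: 500 signals/day, with roughly 40% escalated to AI analysis.
print(monthly_credits(500, 0.4))  # -> 6000
```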

Once self-hosting becomes available, self-hosted deployments will bring their own AI model and will not consume credits at all.

See full pricing details

Can I self-host SignalManager AI?

We plan to offer self-hosting in the future. You will be able to run the entire stack with Docker Compose and bring your own AI model — OpenAI, Anthropic, Ollama, or vLLM.

Self-hosting will give you full control over your data, your infrastructure, and your AI costs. No data will leave your network unless you want it to.
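To make the planned setup concrete, a self-hosted deployment might look something like the Docker Compose sketch below. This is purely illustrative: the image name, service layout, and environment variables are assumptions, since no self-hosting spec has been published yet.

```yaml
# Hypothetical docker-compose.yml for a future self-hosted deployment.
services:
  signalmanager:
    image: signalmanager/app:latest     # placeholder image name
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://signals:secret@db:5432/signals
      AI_PROVIDER: ollama               # or: openai, anthropic, vllm
      AI_BASE_URL: http://ollama:11434  # point at your own model endpoint
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: signals
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: signals
```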

How do MCP and the API work?

SignalManager AI includes a built-in MCP (Model Context Protocol) server and REST API on every plan. Two ways to connect — use whichever fits your workflow:

  • MCP Server — Connects directly to AI clients like Claude Desktop for conversational access to your signals, correlations, and tickets.
  • REST API — Programmatic access for custom integrations, CI/CD pipelines, dashboards, and automation scripts.

Both give you full access to everything in the platform. Use one or both — they work alongside each other.
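To illustrate the MCP path: Claude Desktop discovers MCP servers through its `claude_desktop_config.json` file, so wiring in a SignalManager MCP server might look like the fragment below. The server command, package name, and environment variable are assumptions for illustration.

```json
{
  "mcpServers": {
    "signalmanager": {
      "command": "npx",
      "args": ["-y", "signalmanager-mcp"],
      "env": {
        "SIGNALMANAGER_API_KEY": "<your-api-key>"
      }
    }
  }
}
```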

What AI models are supported?

SignalManager AI supports multiple AI providers via MCP or API — no vendor lock-in:

  • OpenAI-compatible APIs — GPT-4o, GPT-4, or any OpenAI-compatible endpoint
  • Anthropic Claude — the latest Claude models
  • Ollama — Run local models like Llama, Mistral, Qwen on your own hardware
  • vLLM — Self-hosted high-throughput inference for production workloads

Swap models anytime without changing your setup. Use different models for different tasks if you want.

Do I need a vector database?

No. SignalManager AI uses PostgreSQL with plain SQL queries for correlation. No vector DB, no embedding pipelines, no extra infrastructure.

Just PostgreSQL — a database your team already knows how to operate, back up, and scale. One less moving part in your stack.
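The idea of correlation-in-SQL can be sketched in a few lines. This is not SignalManager AI's actual query, and it uses SQLite only to keep the example self-contained; the same `GROUP BY` clustering runs unchanged on PostgreSQL.

```python
import sqlite3

# Illustrative sketch: cluster related signals with plain SQL, no vectors needed.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE signals (
        id INTEGER PRIMARY KEY,
        source TEXT,
        fingerprint TEXT,   -- e.g. error type + top stack frame, or a CVE id
        created_at TEXT
    )
""")
conn.executemany(
    "INSERT INTO signals (source, fingerprint, created_at) VALUES (?, ?, ?)",
    [
        ("sentry",  "TypeError:checkout", "2024-06-01T10:00"),
        ("sentry",  "TypeError:checkout", "2024-06-01T10:05"),
        ("datadog", "TypeError:checkout", "2024-06-01T10:07"),
        ("nvd",     "CVE-2024-0001",      "2024-06-01T11:00"),
    ],
)

# Signals sharing a fingerprint across multiple sources form a cluster.
rows = conn.execute("""
    SELECT fingerprint, COUNT(*) AS hits, COUNT(DISTINCT source) AS sources
    FROM signals
    GROUP BY fingerprint
    HAVING COUNT(*) > 1
    ORDER BY hits DESC
""").fetchall()
print(rows)  # -> [('TypeError:checkout', 3, 2)]
```

The fingerprint column does the work an embedding pipeline would otherwise do: anything that can be normalized into a shared key can be correlated with ordinary SQL.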

What tools does SignalManager AI connect to?

SignalManager AI connects to the tools your team already uses via MCP server, REST API, webhooks, or Connector SDK:

  • Error tracking — Sentry
  • Product analytics — PostHog
  • Monitoring — Datadog
  • Source control — GitHub (security advisories, Dependabot)
  • Vulnerability feeds — NVD, npm advisories
  • Ticketing — Jira, Linear, GitHub Issues

Need a connector we do not have yet? The Connector SDK lets you build custom integrations for any tool with a webhook or API.
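At its core, a custom connector turns an arbitrary webhook payload into the common signal shape. The Connector SDK's real interface is not documented here, so the function below is a hypothetical sketch, and both the PagerDuty-style payload and the field names are illustrative assumptions.

```python
# Hypothetical custom connector: normalize a webhook payload into a signal.
def normalize_pagerduty(payload: dict) -> dict:
    """Map a (hypothetical) PagerDuty-style webhook body into a signal dict."""
    event = payload["event"]
    return {
        "source": "pagerduty",
        "title": event["summary"],
        "severity": event.get("severity", "unknown"),
        "metadata": {"incident_id": event["id"]},
    }

signal = normalize_pagerduty({
    "event": {
        "id": "PD-123",
        "summary": "Latency spike on api-gateway",
        "severity": "critical",
    }
})
print(signal["title"])  # -> Latency spike on api-gateway
```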

How are tickets created?

When SignalManager AI identifies a signal (or cluster of correlated signals) that requires action, it auto-generates a ticket in your project tracker — Jira, Linear, or GitHub Issues.

Each ticket includes a severity score, full context from all correlated signals, and AI-suggested remediation steps. Tickets land in your existing workflow — no new tool to check, no context switching.

Is SignalManager AI only for developers?

No. SignalManager AI is built for your entire engineering organization.

  • Developers get actionable, deduplicated tickets with full context — stack traces, CVE links, and remediation steps — delivered right into Jira, Linear, or GitHub Issues. Less noise, more context, faster fixes.
  • Engineering Managers get visibility into signal health trends across teams, release readiness scores before approving deploys, and risk hotspot identification — without asking for status updates.
  • Project Managers / TPMs get automated ticket creation that lands issues in the backlog without manual entry, sprint planning informed by real signal data, and backlog prioritization driven by AI severity scoring.

If your role involves dealing with — or being responsible for — the health of a software system, SignalManager AI is built for you.

Can SignalManager AI help with sprint planning?

Yes. SignalManager AI continuously feeds AI-driven insights into your backlog — so when sprint planning starts, the work is already scoped and ranked by severity.

Signal trend data shows which areas of the codebase are generating the most issues, helping PMs and EMs forecast capacity needs. Backlog prioritization is driven by real signal data — not gut feeling or stale spreadsheets.

Still have questions?

Reach out to us directly or join the waitlist to be notified when we launch.