Trigger.dev

Software Development

Build and deploy fully‑managed AI agents and workflows

About us

Trigger.dev is the platform for building AI workflows in TypeScript: long-running tasks with retries, queues, observability, and elastic scaling.

Website
https://trigger.dev
Industry
Software Development
Company size
2-10 employees
Headquarters
London
Type
Privately Held
Founded
2022

Locations

Employees at Trigger.dev

Updates

  • We're heading to AI Engineer Europe in London! April 8-10 | QEII Centre, London UK | Booth G6. Come and chat with us if you're building AI agents, workflow automation, or background jobs. We'll be doing live demos, sharing what we're building, and handing out swag. See you there!

  • Most developers use Claude Code like a chatbot with a shell wrapper. That's barely scratching the surface. After digging deep into the CLI, here are 10 advanced patterns that genuinely change how you orchestrate AI-assisted development:
    1. Session forking (--fork-session) — create a "master session" loaded with architectural context, then branch it for each feature. Think git branch for your LLM context window.
    2. Code review loops (--from-pr) — resume the exact agent session that wrote the code. It comes back with full awareness of its original decisions. No more cold-start reviews.
    3. Ctrl+G to escape the REPL — opens your $EDITOR for proper multi-line prompt crafting. Small feature, massive quality improvement.
    4. Inline shell with ! — run commands directly, and stdout/stderr get injected into context automatically. Run the test, type "fix it", done.
    5. Effort levels — four tiers from Low to Max. Boilerplate doesn't deserve the same compute as debugging a race condition. Your API bill will thank you.
    6. Parallel worktrees (--worktree) — each agent gets a fully isolated working directory via native git worktree. Same repo, zero conflicts.
    7. Structured JSON output (--json-schema) — turn the LLM into a strictly typed function. Essential for automation pipelines.
    8. Context compaction (Esc+Esc) — compress failed debugging attempts into dense summaries. Reclaim your token budget without losing the narrative thread.
    9. Dynamic subagents (--agents) — define session-scoped specialists on the fly with model routing. Opus for architecture, Haiku for repetitive tasks.
    10. Budget-capped CI/CD — combine --max-turns and --max-budget-usd as circuit breakers. Non-negotiable for putting autonomous agents in production pipelines.
    The gap between "I use Claude Code" and "I orchestrate Claude Code" is wide and getting wider. Full deep dive with code examples in the article. Link in the comments below.
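The circuit-breaker idea in pattern 10 can be sketched in plain TypeScript. This is a conceptual illustration only: Claude Code enforces these caps natively via --max-turns and --max-budget-usd, and every name here (runWithCaps, TurnResult, step) is hypothetical.

```typescript
// Conceptual sketch of a turn/budget circuit breaker. Claude Code
// implements this natively; names and structure here are illustrative.
type TurnResult = { costUsd: number; done: boolean };

function runWithCaps(
  step: (turn: number) => TurnResult, // one agent turn (stubbed for the sketch)
  maxTurns: number,
  maxBudgetUsd: number
): { turns: number; spentUsd: number; stopped: "done" | "turns" | "budget" } {
  let spentUsd = 0;
  for (let turn = 1; turn <= maxTurns; turn++) {
    const result = step(turn);
    spentUsd += result.costUsd;
    if (result.done) return { turns: turn, spentUsd, stopped: "done" };
    // stop as soon as the cumulative spend reaches the budget cap
    if (spentUsd >= maxBudgetUsd) return { turns: turn, spentUsd, stopped: "budget" };
  }
  return { turns: maxTurns, spentUsd, stopped: "turns" };
}
```

Whichever cap trips first ends the run, which is exactly why the post calls the pair "non-negotiable" for autonomous agents in CI.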

  • Introducing Query and Dashboards: full SQL-powered observability over your background jobs and AI agent runs.
    Under the hood is TRQL, a SQL-style language that compiles to ClickHouse. You write familiar SELECT statements, ClickHouse executes them, and queries over millions of runs come back in milliseconds. Two tables right now, including `runs` for status, timing, costs, and tags, with more coming.
    You don't need to memorize the schema. There's an AI assistant built into the editor. Describe what you want in plain English:
    → "Why did failures spike after my last deploy?"
    → "What's the p95 duration for my chat task?"
    → "What are my most expensive runs?"
    It writes the TRQL for you. If it fails, "Try fix error" diagnoses and corrects it.
    Every project ships with a pre-configured dashboard: run volume, success rates, failures, costs, version breakdowns. You can also build your own with three widget types: big numbers for KPIs, charts for trends, tables for breakdowns. Drag to reorder, resize to fit, filter by time, task, queue, or scope.
    And TRQL isn't just for humans. `query.execute()` lets you embed run data into your own product: power a status page, feed results to an AI agent for debugging, build custom alerting. All against the same data the dashboard uses.
    Live now for all users. Every project already has a built-in dashboard. Full details in the comments 👇
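To make the "familiar SELECT statements" claim concrete, here is a small sketch that composes the kind of query TRQL describes over the `runs` table. The helper, the column names (`task_id`, `created_at`), and the exact grammar are assumptions for illustration; consult the Trigger.dev docs for the real TRQL schema and the `query.execute()` API.

```typescript
// Illustrative only: builds a SQL-style aggregation over the `runs` table,
// like the queries TRQL supports. Column names are assumed, not confirmed.
function failedRunsQuery(taskId: string, sinceHours: number): string {
  return [
    "SELECT status, COUNT(*) AS runs",
    "FROM runs",
    `WHERE task_id = '${taskId}'`,
    `  AND created_at > now() - INTERVAL ${sinceHours} HOUR`,
    "GROUP BY status",
  ].join("\n");
}
```

A string like this is what the built-in AI assistant would write for you from a plain-English prompt such as "why did failures spike after my last deploy?".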

  • Just launched: our Vercel integration for Trigger. Push code. Vercel deploys your app. Trigger deploys your tasks. Env vars sync both ways. Your app never goes live with mismatched task versions.
    1. Atomic deployments: Trigger gates your Vercel deployment until the task build completes, sets the correct TRIGGER_VERSION, then triggers a redeployment. Your app always runs against the exact task version it was built with. This used to require a custom GitHub Actions workflow. Now it's a toggle.
    2. Env var sync works in both directions. Vercel → Trigger: your Vercel env vars get pulled per-environment (production, staging, preview) before each build. Trigger → Vercel: API keys like TRIGGER_SECRET_KEY sync back automatically. No more copy-pasting between dashboards, and you can control sync behavior per-variable from your environment variables page.
    3. Deployments reference each other on both sides: Trigger creates deployment checks on your Vercel deployments so you can see task build status without leaving Vercel. Each Trigger deployment links back to the corresponding Vercel deployment. No more tab-switching to figure out which app deploy matches which task deploy.
    4. Fun fact: this was also our most requested feature, with 354 votes.
    Read more in our changelog: https://lnkd.in/epFnchAh

  • Run Cursor's headless CLI agent inside a Trigger task and stream its output live to your app's frontend. This open source demo uses Next.js + Trigger, in ~1,000 lines of code total.
    Trigger tasks run in their own isolated environments. You can install any binary via our build extensions, spawn it as a child process, and stream its stdout. This demo uses Cursor's CLI; the same pattern works for FFmpeg, Playwright, etc.
    The build extension runs `curl -fsSL https://cursor.com/install | bash` at image build time; the official installer, nothing custom. At runtime the task spawns the cursor-agent Node binary.
    Cursor CLI outputs NDJSON. We parse it line by line, push events into Realtime Streams v2, and render each one as a row in a React terminal component. One CursorEvent type definition flows from task → stream → useRealtimeRunWithStreams hook → React component. You get full-stack type safety with zero duplication.
    The repo is open source. If you want to run a CLI tool in the cloud and stream its output to a browser, this is a working reference you can fork. https://tgr.dev/XI25EOs
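The "parse it line by line" step above can be sketched framework-free: buffer partial chunks from the child process and emit one parsed event per complete NDJSON line. The event shape and helper name are assumptions; the actual demo defines its own CursorEvent type and wires events into Realtime Streams.

```typescript
// Minimal NDJSON chunk parser: child-process stdout arrives in arbitrary
// chunks, so we buffer until a newline completes a JSON record.
type CursorEvent = { type: string; [key: string]: unknown };

function createNdjsonParser(onEvent: (event: CursorEvent) => void) {
  let buffer = "";
  return (chunk: string) => {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep the trailing partial line for next chunk
    for (const line of lines) {
      if (line.trim() === "") continue;
      onEvent(JSON.parse(line) as CursorEvent);
    }
  };
}
```

In the demo's shape, each emitted event would then be pushed into the realtime stream and rendered as a row in the React terminal component.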

  • We just shipped Vercel AI SDK 6 support in Trigger. That means full compatibility across all major versions of the AI SDK (4, 5, and 6), so you can upgrade on your own terms. Here's what this unlocks when you run AI agents on Trigger:
    * Durable execution: your ToolLoopAgent runs as a long-lived task. If something fails, it retries automatically. No babysitting infrastructure.
    * Real-time streaming: stream agent activity directly to your frontend with Realtime Streams. Your users see what the agent is doing as it happens.
    * Human-in-the-loop: pause execution mid-task for approval using waitpoints. Zero compute cost while waiting for a human decision.
    * Autonomous tool use: agents decide what to do next: call tools, gather context, or return a final answer.
    On the v6-specific side, we've added async validation handling for the new Schema type and made migration seamless: existing jobs keep working without changes.
    Full changelog: https://tgr.dev/mPNJqEG

  • The "Weekend Demo" vs. "Production Reality" in AI development. We've all been there. You hack together an AI agent on a Saturday. You use Vercel's AI SDK, throw in some LangChain, and it works perfectly on localhost. It answers quickly. It handles errors. Then you push to production. Suddenly, reality hits:
    1. Timeouts: your sophisticated reasoning chain takes 75 seconds. Your serverless function kills it at 60. Hard stop.
    2. Flakiness: the OpenAI API hiccups. Your script crashes. The user has to restart the entire process.
    3. Concurrency: 50 users try it at once. Your rate limits explode. Jobs get dropped.
    This is the "Production Gap". Building reliable AI agents requires more than prompt engineering; it requires reliable infrastructure. At Trigger, we built the infrastructure specifically for this gap. We call it Durable Execution.
    - No timeouts: run tasks for hours or days. Perfect for deep research agents.
    - Checkpointing: if an API call fails, we retry just that step. We don't restart the whole run.
    - Queueing: heavy load? We queue the jobs and process them as capacity allows. Nothing gets dropped.
    Stop trying to shoehorn long-running AI processes into short-lived serverless functions. Use infrastructure designed for the job. Learn more 🧵
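The "retry just that step" idea can be illustrated with a minimal step-level retry helper. Trigger.dev's durable execution handles this for you (with checkpointing, so a retried step does not rerun the whole workflow); the function below is a hypothetical, stripped-down sketch of the concept only.

```typescript
// Retry a single step with exponential backoff, rather than restarting
// the whole run. Purely illustrative; Trigger.dev does this durably.
async function retryStep<T>(
  step: () => Promise<T>,
  maxAttempts: number,
  baseDelayMs: number
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await step();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // exponential backoff: baseDelayMs, 2x, 4x, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError;
}
```

The crucial difference from the "weekend demo" version: the retry wraps one flaky API call, so a hiccup at step 7 never forces the user back to step 1.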

  • Tierly uses Trigger to orchestrate 10+ AI models for competitive pricing analysis. Each analysis involves dozens of scraping tasks, human review gates, and real-time progress updates. Tierly analyzes SaaS pricing pages, discovers competitors, and generates recommendations. Workflows take 5-15 minutes with multiple AI calls.
    Their initial sync API routes hit timeouts and rate limits, and had zero visibility into failures. Moving to Trigger fixed all of it:
    → Two chains run in parallel via batch triggers, which cut analysis time in half.
    → Wait tokens pause execution for human review, with no webhooks needed.
    → A shared queue keeps Firecrawl requests under the limit across all concurrent analyses.
    → Progressive model escalation: gpt-4o-mini → gpt-4o → gpt-4o + markdown fallback. Trigger handles retries automatically.
    The results: reliable 10+ AI call workflows, human review gates without webhook complexity, automatic rate limiting, full visibility into every step, and workflows in TypeScript alongside their Next.js app.
    Read the full story: https://lnkd.in/gV5eyduq
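The progressive model escalation pattern above (gpt-4o-mini → gpt-4o → fallback) can be sketched as a generic helper: try the cheapest model first and move to a stronger one only on failure. The model names come from the post; the `escalate` wiring itself is hypothetical, not Tierly's actual code.

```typescript
// Try each model in order, cheapest first; escalate on failure.
// Illustrative sketch of the pattern, not a specific SDK API.
async function escalate<T>(
  models: string[],
  call: (model: string) => Promise<T>
): Promise<T> {
  let lastError: unknown;
  for (const model of models) {
    try {
      return await call(model);
    } catch (err) {
      lastError = err; // fall through to the next, more capable model
    }
  }
  throw lastError;
}
```

Usage would look like `escalate(["gpt-4o-mini", "gpt-4o"], (model) => analyzePricingPage(model, url))`, where `analyzePricingPage` is a placeholder for whatever call the workflow makes.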

  • We've just shipped 5 Trigger Skills. Install them and your AI coding assistant will automatically gain in-depth @triggerdotdev knowledge.
    But what are skills? Skills are portable instruction sets that teach AI coding assistants how to use a framework correctly: patterns, anti-patterns, and examples it follows automatically. They use the Agent Skills standard, and you can install them with Vercel's open-source CLI.
    Our available skills ↓
    ✦ `trigger-setup`: Go from zero to running: SDK installation, project init, directory structure, and first task.
    ✦ `trigger-agents`: Build AI agent workflows: prompt chaining, parallel tool calling, routing between models, evaluator-optimizer loops, and human-in-the-loop approval gates.
    ✦ `trigger-tasks`: Write background jobs with retries, queues, concurrency control, cron scheduling, and batch triggering, all with the correct patterns from the first prompt.
    ✦ `trigger-realtime`: Add live progress indicators, streaming AI responses, and real-time status updates to your frontend using React hooks.
    ✦ `trigger-config`: Set up build extensions for Prisma, FFmpeg, Playwright, Python, and custom deploy configurations in trigger.config.ts.
    Without skills, AI assistants can hallucinate APIs that don't exist, use deprecated import paths, forget to export tasks, and wrap triggerAndWait in Promise.all (which breaks retries entirely). Skills give your assistant the actual patterns so it writes correct code the first time.
    Install our skills now: npx skills add triggerdotdev/skills

Funding

Trigger.dev: 2 total rounds
Last round: Pre-seed, US$500.0K
