Logfire API References & Documentation

Logfire is the observability tool focused on developer experience.

Logfire

The main Logfire class.

Methods

trace

def trace(
    msg_template: str,
    _tags: Sequence[str] | None = None,
    _exc_info: ExcInfo = False,
    **attributes: Any,
) -> None

Log a trace message.

import logfire

logfire.configure()

logfire.trace('This is a trace log')

Returns

None

Parameters

msg_template : str

The message to log.

attributes : Any Default: {}

The attributes to bind to the log.

_tags : Sequence[str] | None Default: None

An optional sequence of tags to include in the log.

_exc_info : ExcInfo Default: False

Set to an exception or a tuple as returned by sys.exc_info() to record a traceback with the log message.

Set to True to use the currently handled exception.

debug

def debug(
    msg_template: str,
    _tags: Sequence[str] | None = None,
    _exc_info: ExcInfo = False,
    **attributes: Any,
) -> None

Log a debug message.

import logfire

logfire.configure()

logfire.debug('This is a debug log')

Returns

None

Parameters

msg_template : str

The message to log.

attributes : Any Default: {}

The attributes to bind to the log.

_tags : Sequence[str] | None Default: None

An optional sequence of tags to include in the log.

_exc_info : ExcInfo Default: False

Set to an exception or a tuple as returned by sys.exc_info() to record a traceback with the log message.

Set to True to use the currently handled exception.

info

def info(
    msg_template: str,
    _tags: Sequence[str] | None = None,
    _exc_info: ExcInfo = False,
    **attributes: Any,
) -> None

Log an info message.

import logfire

logfire.configure()

logfire.info('This is an info log')

Returns

None

Parameters

msg_template : str

The message to log.

attributes : Any Default: {}

The attributes to bind to the log.

_tags : Sequence[str] | None Default: None

An optional sequence of tags to include in the log.

_exc_info : ExcInfo Default: False

Set to an exception or a tuple as returned by sys.exc_info() to record a traceback with the log message.

Set to True to use the currently handled exception.

notice

def notice(
    msg_template: str,
    _tags: Sequence[str] | None = None,
    _exc_info: ExcInfo = False,
    **attributes: Any,
) -> None

Log a notice message.

import logfire

logfire.configure()

logfire.notice('This is a notice log')

Returns

None

Parameters

msg_template : str

The message to log.

attributes : Any Default: {}

The attributes to bind to the log.

_tags : Sequence[str] | None Default: None

An optional sequence of tags to include in the log.

_exc_info : ExcInfo Default: False

Set to an exception or a tuple as returned by sys.exc_info() to record a traceback with the log message.

Set to True to use the currently handled exception.

warning

def warning(
    msg_template: str,
    _tags: Sequence[str] | None = None,
    _exc_info: ExcInfo = False,
    **attributes: Any,
) -> None

Log a warning message.

import logfire

logfire.configure()

logfire.warning('This is a warning log')

logfire.warn is an alias of logfire.warning.

Returns

None

Parameters

msg_template : str

The message to log.

attributes : Any Default: {}

The attributes to bind to the log.

_tags : Sequence[str] | None Default: None

An optional sequence of tags to include in the log.

_exc_info : ExcInfo Default: False

Set to an exception or a tuple as returned by sys.exc_info() to record a traceback with the log message.

Set to True to use the currently handled exception.

error

def error(
    msg_template: str,
    _tags: Sequence[str] | None = None,
    _exc_info: ExcInfo = False,
    **attributes: Any,
) -> None

Log an error message.

import logfire

logfire.configure()

logfire.error('This is an error log')

Returns

None

Parameters

msg_template : str

The message to log.

attributes : Any Default: {}

The attributes to bind to the log.

_tags : Sequence[str] | None Default: None

An optional sequence of tags to include in the log.

_exc_info : ExcInfo Default: False

Set to an exception or a tuple as returned by sys.exc_info() to record a traceback with the log message.

Set to True to use the currently handled exception.

fatal

def fatal(
    msg_template: str,
    _tags: Sequence[str] | None = None,
    _exc_info: ExcInfo = False,
    **attributes: Any,
) -> None

Log a fatal message.

import logfire

logfire.configure()

logfire.fatal('This is a fatal log')

Returns

None

Parameters

msg_template : str

The message to log.

attributes : Any Default: {}

The attributes to bind to the log.

_tags : Sequence[str] | None Default: None

An optional sequence of tags to include in the log.

_exc_info : ExcInfo Default: False

Set to an exception or a tuple as returned by sys.exc_info() to record a traceback with the log message.

Set to True to use the currently handled exception.

exception

def exception(
    msg_template: str,
    _tags: Sequence[str] | None = None,
    _exc_info: ExcInfo = True,
    **attributes: Any,
) -> None

The same as error but with _exc_info=True by default.

This means that a traceback will be logged for any currently handled exception.

Returns

None

Parameters

msg_template : str

The message to log.

attributes : Any Default: {}

The attributes to bind to the log.

_tags : Sequence[str] | None Default: None

An optional sequence of tags to include in the log.

_exc_info : ExcInfo Default: True

Set to an exception or a tuple as returned by sys.exc_info() to record a traceback with the log message.

span

def span(
    msg_template: str,
    _tags: Sequence[str] | None = None,
    _span_name: str | None = None,
    _level: LevelName | None = None,
    _links: Sequence[tuple[SpanContext, otel_types.Attributes]] = (),
    _span_kind: SpanKind = SpanKind.INTERNAL,
    **attributes: Any,
) -> LogfireSpan

Context manager for creating a span.

import logfire

logfire.configure()

with logfire.span('This is a span {a=}', a='data'):
    logfire.info('new log 1')

Returns

LogfireSpan

Parameters

msg_template : str

The template for the span message.

_span_name : str | None Default: None

The span name. If not provided, the msg_template will be used.

_tags : Sequence[str] | None Default: None

An optional sequence of tags to include in the span.

_level : LevelName | None Default: None

An optional log level name.

_links : Sequence[tuple[SpanContext, otel_types.Attributes]] Default: ()

An optional sequence of links to other spans. Each link is a tuple of a span context and attributes.

_span_kind : SpanKind Default: SpanKind.INTERNAL

The OpenTelemetry span kind. If not provided, defaults to INTERNAL. Users don’t typically need to set this. Not related to the kind column of the records table in Logfire.

attributes : Any Default: {}

The arguments to include in the span and format the message template with. Attributes starting with an underscore are not allowed.

instrument

def instrument(
    msg_template: LiteralString | None = None,
    span_name: str | None = None,
    extract_args: bool | Iterable[str] = True,
    record_return: bool = False,
    allow_generator: bool = False,
    new_trace: bool = False,
) -> Callable[[Callable[P, R]], Callable[P, R]]
def instrument(func: Callable[P, R]) -> Callable[P, R]

Decorator for instrumenting a function as a span.

import logfire

logfire.configure()


@logfire.instrument('This is a span {a=}')
def my_function(a: int):
    logfire.info('new log {a=}', a=a)

Returns

Callable[[Callable[P, R]], Callable[P, R]] | Callable[P, R]

Parameters

msg_template : Callable[P, R] | LiteralString | None Default: None

The template for the span message. If not provided, the module and function name will be used.

span_name : str | None Default: None

The span name. If not provided, the msg_template will be used.

extract_args : bool | Iterable[str] Default: True

By default, all function call arguments are logged as span attributes. Set to False to disable this, or pass an iterable of argument names to include.

record_return : bool Default: False

Set to True to record the return value of the function as an attribute. Ignored for generators.

allow_generator : bool Default: False

Set to True to prevent a warning when instrumenting a generator function. Read https://logfire.pydantic.dev/docs/guides/advanced/generators/#using-logfireinstrument first.

new_trace : bool Default: False

Set to True to start a new trace with a span link to the current span instead of creating a child of the current span.

log

def log(
    level: LevelName | int,
    msg_template: str,
    attributes: dict[str, Any] | None = None,
    tags: Sequence[str] | None = None,
    exc_info: ExcInfo = False,
    console_log: bool | None = None,
) -> None

Log a message.

import logfire

logfire.configure()

logfire.log('info', 'This is a log {a}', {'a': 'Apple'})

Returns

None

Parameters

level : LevelName | int

The level of the log.

msg_template : str

The message to log.

attributes : dict[str, Any] | None Default: None

The attributes to bind to the log.

tags : Sequence[str] | None Default: None

An optional sequence of tags to include in the log.

exc_info : ExcInfo Default: False

Set to an exception or a tuple as returned by sys.exc_info() to record a traceback with the log message.

Set to True to use the currently handled exception.

console_log : bool | None Default: None

Whether to log to the console; defaults to True.

with_tags

def with_tags(*tags: str) -> Logfire

A new Logfire instance which always uses the given tags.

import logfire

logfire.configure()

local_logfire = logfire.with_tags('tag1')
local_logfire.info('a log message', _tags=['tag2'])

# This is equivalent to:
logfire.info('a log message', _tags=['tag1', 'tag2'])

Returns

Logfire — A new Logfire instance with the tags added to any existing tags.

Parameters

*tags : str

The tags to add.

with_settings

def with_settings(
    tags: Sequence[str] = (),
    console_log: bool | None = None,
    custom_scope_suffix: str | None = None,
) -> Logfire

A new Logfire instance which uses the given settings.

Returns

Logfire — A new Logfire instance with the given settings applied.

Parameters

tags : Sequence[str] Default: ()

Sequence of tags to include in the log.

console_log : bool | None Default: None

Whether to log to the console; defaults to True.

custom_scope_suffix : str | None Default: None

A custom suffix to append to the logfire scope name, e.g. logfire.loguru.

It should only be used when instrumenting another library with Logfire, such as structlog or loguru.

See the instrumenting_module_name parameter on TracerProvider.get_tracer for more info.

force_flush

def force_flush(timeout_millis: int = 3000) -> bool

Force flush all spans and metrics.

Returns

bool — Whether the flush of spans was successful.

Parameters

timeout_millis : int Default: 3000

The timeout in milliseconds.

url_from_eval

def url_from_eval(report: EvaluationReport[Any, Any, Any]) -> str | None

Generate a Logfire URL to view an evaluation report.

Returns

str | None — The URL string, or None if the project URL or trace/span IDs are not available.

Parameters

report : EvaluationReport[Any, Any, Any]

An evaluation report from pydantic_evals.

log_slow_async_callbacks

def log_slow_async_callbacks(slow_duration: float = 0.1) -> AbstractContextManager[None]

Log a warning whenever a function running in the asyncio event loop blocks for too long.

This works by patching the asyncio.events.Handle._run method.

Returns

AbstractContextManager[None] — A context manager that will revert the patch when exited. This context manager doesn’t take into account threads or other concurrency. Calling this method will immediately apply the patch without waiting for the context manager to be opened, i.e. it’s not necessary to use this as a context manager.

Parameters

slow_duration : float Default: 0.1

The threshold in seconds for when a callback is considered slow.

install_auto_tracing

def install_auto_tracing(
    modules: Sequence[str] | Callable[[AutoTraceModule], bool],
    min_duration: float,
    check_imported_modules: Literal['error', 'warn', 'ignore'] = 'error',
) -> None

Install automatic tracing.

See the Auto-Tracing guide for more info.

This will trace all non-generator function calls in the modules specified by the modules argument. It’s equivalent to wrapping the body of every function in matching modules in with logfire.span(...):.

This works by inserting a new meta path finder into sys.meta_path, so inserting another finder before it may prevent it from working.

It relies on being able to retrieve the source code via at least one other existing finder in the meta path, so it may not work if standard finders are not present or if the source code is not available. A modified version of the source code is then compiled and executed in place of the original module.

Returns

None

Parameters

modules : Sequence[str] | Callable[[AutoTraceModule], bool]

List of module names to trace, or a function which returns True for modules that should be traced. If a list is provided, any submodules within a given module will also be traced.

min_duration : float

A minimum duration in seconds for which a function must run before it’s traced. Setting to 0 causes all functions to be traced from the beginning. Otherwise, the first time(s) each function is called, it will be timed but not traced. Only after the function has run for at least min_duration will it be traced in subsequent calls.

check_imported_modules : Literal['error', 'warn', 'ignore'] Default: 'error'

If this is 'error' (the default), then an exception will be raised if any of the modules in sys.modules (i.e. modules that have already been imported) match the modules to trace. Set to 'warn' to issue a warning instead, or 'ignore' to skip the check.

instrument_surrealdb

def instrument_surrealdb(
    obj: SyncTemplate | AsyncTemplate | type[SyncTemplate] | type[AsyncTemplate] | None = None,
) -> None

Instrument SurrealDB connections, creating a span for each method.

Returns

None

Parameters

obj : SyncTemplate | AsyncTemplate | type[SyncTemplate] | type[AsyncTemplate] | None Default: None

Pass a single connection instance to instrument only that connection. Pass a connection class to instrument all instances of that class. By default, all connection classes are instrumented.

instrument_mcp

def instrument_mcp(propagate_otel_context: bool = True) -> None

Instrument the MCP Python SDK.

Instruments both the client and server side. If possible, calling this in both the client and server processes is recommended for nice distributed traces.

Returns

None

Parameters

propagate_otel_context : bool Default: True

Whether to enable propagation of the OpenTelemetry context for distributed tracing. Set to False to prevent setting extra fields like traceparent on the metadata of requests.

instrument_claude_agent_sdk

def instrument_claude_agent_sdk() -> AbstractContextManager[None]

Instrument the Claude Agent SDK.

All ClaudeSDKClient instances created after this call will be automatically traced. Existing instances created before this call will not have tool call tracing.

Returns

AbstractContextManager[None] — A context manager that will revert the instrumentation when exited. This context manager doesn’t take into account threads or other concurrency. Calling this method will immediately apply the instrumentation without waiting for the context manager to be opened, i.e. it’s not necessary to use this as a context manager.

instrument_pydantic

def instrument_pydantic(
    record: PydanticPluginRecordValues = 'all',
    include: Iterable[str] = (),
    exclude: Iterable[str] = (),
) -> None

Instrument Pydantic model validations.

This must be called before defining and importing the model classes you want to instrument. See the Pydantic integration guide for more info.

Returns

None

Parameters

record : PydanticPluginRecordValues Default: 'all'

The record mode for the Pydantic plugin. It can be one of the following values:

  • all: Send traces and metrics for all events. This is the default value.
  • failure: Send metrics for all validations and traces only for validation failures.
  • metrics: Send only metrics.
  • off: Disable instrumentation.

include : Iterable[str] Default: ()

By default, third party modules are not instrumented. This option allows you to include specific modules.

exclude : Iterable[str] Default: ()

Exclude specific modules from instrumentation.

instrument_pydantic_ai

def instrument_pydantic_ai(
    obj: pydantic_ai.Agent | None = None,
    include_binary_content: bool | None = None,
    include_content: bool | None = None,
    version: Literal[1, 2, 3] | None = None,
    event_mode: Literal['attributes', 'logs'] | None = None,
    **kwargs: Any,
) -> None
def instrument_pydantic_ai(
    obj: pydantic_ai.models.Model,
    include_binary_content: bool | None = None,
    include_content: bool | None = None,
    version: Literal[1, 2, 3] | None = None,
    event_mode: Literal['attributes', 'logs'] | None = None,
    **kwargs: Any,
) -> pydantic_ai.models.Model

Instrument Pydantic AI.

Returns

pydantic_ai.models.Model | None

Parameters

obj : pydantic_ai.Agent | pydantic_ai.models.Model | None Default: None

What to instrument. By default, all agents are instrumented. You can also pass a specific model or agent. If you pass a model, a new instrumented model will be returned.

include_binary_content : bool | None Default: None

Whether to include base64 encoded binary content (e.g. images) in the telemetry. On by default. Requires Pydantic AI 0.2.5 or newer.

include_content : bool | None Default: None

Whether to include prompts, completions, and tool call arguments and responses in the telemetry. On by default. Requires Pydantic AI 0.3.4 or newer.

version : Literal[1, 2, 3] | None Default: None

Version of the data format. This is unrelated to the Pydantic AI package version. Requires Pydantic AI 0.7.5 or newer. Version 1 is based on the legacy event-based OpenTelemetry GenAI spec and will be removed in a future release. The parameter event_mode is only relevant for version 1. Version 2 uses the newer OpenTelemetry GenAI spec and stores messages in the following attributes:

  • gen_ai.system_instructions for instructions passed to the agent.
  • gen_ai.input.messages and gen_ai.output.messages on model request spans.
  • pydantic_ai.all_messages on agent run spans.

Version 3 changes the names of some attributes and spans but not the shape of the data. The default version depends on the installed Pydantic AI version.

event_mode : Literal['attributes', 'logs'] | None Default: None

The mode for emitting events in version 1. If 'attributes', events are attached to the span as attributes. If 'logs', events are emitted as OpenTelemetry log-based events.

kwargs : Any Default: {}

Additional keyword arguments to pass to InstrumentationSettings for future compatibility.

instrument_fastapi

def instrument_fastapi(
    app: FastAPI,
    capture_headers: bool = False,
    request_attributes_mapper: Callable[[Request | WebSocket, dict[str, Any]], dict[str, Any] | None] | None = None,
    excluded_urls: str | Iterable[str] | None = None,
    record_send_receive: bool = False,
    extra_spans: bool = False,
    **opentelemetry_kwargs: Any,
) -> AbstractContextManager[None]

Instrument a FastAPI app so that spans and logs are automatically created for each request.

Uses the OpenTelemetry FastAPI Instrumentation under the hood, with some additional features.

Returns

AbstractContextManager[None] — A context manager that will revert the instrumentation when exited. This context manager doesn’t take into account threads or other concurrency. Calling this method will immediately apply the instrumentation without waiting for the context manager to be opened, i.e. it’s not necessary to use this as a context manager.

Parameters

app : FastAPI

The FastAPI app to instrument.

capture_headers : bool Default: False

Set to True to capture all request and response headers.

request_attributes_mapper : Callable[[Request | WebSocket, dict[str, Any]], dict[str, Any] | None] | None Default: None

A function that takes a Request or WebSocket and a dictionary of attributes and returns a new dictionary of attributes. The input dictionary will contain:

  • values: A dictionary mapping argument names of the endpoint function to parsed and validated values.
  • errors: A list of validation errors for any invalid inputs.

The returned dictionary will be used as the attributes for a log message. If None is returned, no log message will be created.

You can use this to e.g. only log validation errors, or nothing at all. You can also add custom attributes.

The default implementation will return the input dictionary unchanged. The function mustn’t modify the contents of values or errors.

excluded_urls : str | Iterable[str] | None Default: None

A string of comma-separated regexes which will exclude a request from tracing if the full URL matches any of the regexes. This applies to both the Logfire and OpenTelemetry instrumentation. If not provided, the environment variables OTEL_PYTHON_FASTAPI_EXCLUDED_URLS and OTEL_PYTHON_EXCLUDED_URLS will be checked.

record_send_receive : bool Default: False

Set to True to allow the OpenTelemetry ASGI middleware to create send/receive spans.

These are disabled by default to reduce overhead and the number of spans created, since many can be created for a single request, and they are not often useful. If enabled, they will be set to debug level, meaning they will usually still be hidden in the UI.

extra_spans : bool Default: False

Whether to include the extra ‘FastAPI arguments’ and ‘endpoint function’ spans.

opentelemetry_kwargs : Any Default: {}

Additional keyword arguments to pass to the OpenTelemetry FastAPI instrumentation.

instrument_openai

def instrument_openai(
    openai_client: openai.OpenAI | openai.AsyncOpenAI | type[openai.OpenAI] | type[openai.AsyncOpenAI] | None = None,
    suppress_other_instrumentation: bool = True,
    version: SemconvVersion | Sequence[SemconvVersion] = 1,
) -> AbstractContextManager[None]

Instrument an OpenAI client so that spans are automatically created for each request.

This instruments the standard OpenAI SDK package. For instrumentation of the OpenAI “agents” framework, see instrument_openai_agents().

The relevant request methods are instrumented for both the sync and the async clients.

When stream=True, a second span is created to instrument the streamed response.

Example usage:

import openai

import logfire

client = openai.OpenAI()
logfire.configure()
logfire.instrument_openai(client)

response = client.chat.completions.create(
    model='gpt-4',
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': 'What is four plus five?'},
    ],
)
print('answer:', response.choices[0].message.content)

Returns

AbstractContextManager[None] — A context manager that will revert the instrumentation when exited. Use of this context manager is optional.

Parameters

openai_client : openai.OpenAI | openai.AsyncOpenAI | type[openai.OpenAI] | type[openai.AsyncOpenAI] | None Default: None

The OpenAI client or class to instrument:

  • None (the default) to instrument both the openai.OpenAI and openai.AsyncOpenAI classes.
  • The openai.OpenAI class or a subclass
  • The openai.AsyncOpenAI class or a subclass
  • An instance of openai.OpenAI
  • An instance of openai.AsyncOpenAI

suppress_other_instrumentation : bool Default: True

If True, suppress any other OTEL instrumentation that may be otherwise enabled. In reality, this means the HTTPX instrumentation, which could otherwise be called since OpenAI uses HTTPX to make HTTP requests.

version : SemconvVersion | Sequence[SemconvVersion] Default: 1

The version(s) of the span attribute format to use:

  • 1 (the default): Uses request_data and response_data attributes.
  • 'latest': Uses OpenTelemetry Gen AI semantic convention attributes (gen_ai.input.messages, gen_ai.output.messages, etc.) and omits the full response_data attribute. A minimal request_data (e.g. {"model": ...}) is still recorded for message template compatibility. This format may change between releases.
  • [1, 'latest']: Emits both the full legacy attributes and the semantic convention attributes simultaneously, useful for migration and testing.

instrument_openai_agents

def instrument_openai_agents() -> None

Instrument the agents framework from OpenAI.

For instrumentation of the standard OpenAI SDK package, see instrument_openai().

Returns

None

instrument_anthropic

def instrument_anthropic(
    anthropic_client: anthropic.Anthropic | anthropic.AsyncAnthropic | anthropic.AnthropicBedrock | anthropic.AsyncAnthropicBedrock | type[anthropic.Anthropic] | type[anthropic.AsyncAnthropic] | type[anthropic.AnthropicBedrock] | type[anthropic.AsyncAnthropicBedrock] | None = None,
    suppress_other_instrumentation: bool = True,
    version: SemconvVersion | Sequence[SemconvVersion] = 1,
) -> AbstractContextManager[None]

Instrument an Anthropic client so that spans are automatically created for each request.

The relevant request methods are instrumented for both the sync and async clients.

When stream=True, a second span is created to instrument the streamed response.

Example usage:

import anthropic

import logfire

client = anthropic.Anthropic()

logfire.configure()
logfire.instrument_anthropic(client)

response = client.messages.create(
    model='claude-3-haiku-20240307',
    system='You are a helpful assistant.',
    messages=[
        {'role': 'user', 'content': 'What is four plus five?'},
    ],
)
print('answer:', response.content[0].text)

Returns

AbstractContextManager[None] — A context manager that will revert the instrumentation when exited. Use of this context manager is optional.

Parameters

anthropic_client : anthropic.Anthropic | anthropic.AsyncAnthropic | anthropic.AnthropicBedrock | anthropic.AsyncAnthropicBedrock | type[anthropic.Anthropic] | type[anthropic.AsyncAnthropic] | type[anthropic.AnthropicBedrock] | type[anthropic.AsyncAnthropicBedrock] | None Default: None

The Anthropic client or class to instrument:

  • None (the default) to instrument all Anthropic client types
  • The anthropic.Anthropic or anthropic.AnthropicBedrock class or subclass
  • The anthropic.AsyncAnthropic or anthropic.AsyncAnthropicBedrock class or subclass
  • An instance of any of the above classes

suppress_other_instrumentation : bool Default: True

If True, suppress any other OTEL instrumentation that may be otherwise enabled. In reality, this means the HTTPX instrumentation, which could otherwise be called since Anthropic uses HTTPX to make HTTP requests.

version : SemconvVersion | Sequence[SemconvVersion] Default: 1

The version(s) of the span attribute format to use:

  • 1 (the default): Uses request_data and response_data attributes.
  • 'latest': Uses OpenTelemetry Gen AI semantic convention attributes (gen_ai.input.messages, gen_ai.output.messages, etc.) and omits the full response_data attribute. A minimal request_data (e.g. {"model": ...}) is still recorded for message template compatibility. This format may change between releases.
  • [1, 'latest']: Emits both the full legacy attributes and the semantic convention attributes simultaneously, useful for migration and testing.

instrument_google_genai

def instrument_google_genai(**kwargs: Any)

Instrument the Google Gen AI SDK (google-genai).

Uses the GoogleGenAiSdkInstrumentor().instrument() method of the opentelemetry-instrumentation-google-genai package, to which it passes **kwargs.

instrument_litellm

def instrument_litellm(**kwargs: Any)

Instrument the LiteLLM Python SDK.

Uses the LiteLLMInstrumentor().instrument() method of the openinference-instrumentation-litellm package, to which it passes **kwargs.

instrument_dspy

def instrument_dspy(**kwargs: Any)

Instrument DSPy.

Uses the DSPyInstrumentor().instrument() method of the openinference-instrumentation-dspy package, to which it passes **kwargs.

instrument_print

def instrument_print() -> AbstractContextManager[None]

Instrument the built-in print function so that calls to it are logged.

If Logfire is configured with inspect_arguments=True, the names of the arguments passed to print will be included in the log attributes and will be used for scrubbing.

The fallback attribute name logfire.print_args will be used if:

  • inspect_arguments is False
  • Inspection fails for any reason
  • Multiple starred arguments are used (e.g. print(*args1, *args2)) in which case names can’t be unambiguously determined.
Returns

AbstractContextManager[None] — A context manager that will revert the instrumentation when exited. Use of this context manager is optional.

instrument_asyncpg

def instrument_asyncpg(**kwargs: Any) -> None

Instrument the asyncpg module so that spans are automatically created for each query.

Returns

None

instrument_httpx

def instrument_httpx(
    client: httpx.Client,
    capture_all: bool = False,
    capture_headers: bool = False,
    capture_request_body: bool = False,
    capture_response_body: bool = False,
    request_hook: HttpxRequestHook | None = None,
    response_hook: HttpxResponseHook | None = None,
    **kwargs: Any,
) -> None
def instrument_httpx(
    client: httpx.AsyncClient,
    capture_all: bool = False,
    capture_headers: bool = False,
    capture_request_body: bool = False,
    capture_response_body: bool = False,
    request_hook: HttpxRequestHook | HttpxAsyncRequestHook | None = None,
    response_hook: HttpxResponseHook | HttpxAsyncResponseHook | None = None,
    **kwargs: Any,
) -> None
def instrument_httpx(
    client: None = None,
    capture_all: bool = False,
    capture_headers: bool = False,
    capture_request_body: bool = False,
    capture_response_body: bool = False,
    request_hook: HttpxRequestHook | None = None,
    response_hook: HttpxResponseHook | None = None,
    async_request_hook: HttpxAsyncRequestHook | None = None,
    async_response_hook: HttpxAsyncResponseHook | None = None,
    **kwargs: Any,
) -> None

Instrument the httpx module so that spans are automatically created for each request.

Optionally, pass an httpx.Client instance to instrument only that client.

Uses the OpenTelemetry HTTPX Instrumentation library, specifically HTTPXClientInstrumentor().instrument(), to which it passes **kwargs.

Returns

None

Parameters

client : httpx.Client | httpx.AsyncClient | None Default: None

The httpx.Client or httpx.AsyncClient instance to instrument. If None, the default, all clients will be instrumented.

capture_all : bool Default: False

Set to True to capture all HTTP headers, request and response bodies. By default checks the environment variable LOGFIRE_HTTPX_CAPTURE_ALL.

capture_headers : bool Default: False

Set to True to capture all HTTP headers.

If you don’t want to capture all headers, you can customize the headers captured. See the Capture Headers section for more info.

capture_request_body : bool Default: False

Set to True to capture the request body.

capture_response_body : bool Default: False

Set to True to capture the response body.

request_hook : HttpxRequestHook | HttpxAsyncRequestHook | None Default: None

A function called right after a span is created for a request.

response_hook : HttpxResponseHook | HttpxAsyncResponseHook | None Default: None

A function called right before a span is finished for the response.

async_request_hook : HttpxAsyncRequestHook | None Default: None

A function called right after a span is created for an async request.

async_response_hook : HttpxAsyncResponseHook | None Default: None

A function called right before a span is finished for an async response.

**kwargs : Any Default: {}

Additional keyword arguments to pass to the OpenTelemetry instrument method, for future compatibility.

instrument_celery

def instrument_celery(**kwargs: Any) -> None

Instrument celery so that spans are automatically created for each task.

Uses the OpenTelemetry Celery Instrumentation library.

For distributed tracing to work correctly, this must be called in both the worker processes and the application that enqueues tasks (e.g., your Django or FastAPI web server). See the distributed tracing guide.

See the Celery guide for more details.

Returns

None

Parameters

**kwargs : Any Default: {}

Additional keyword arguments to pass to the OpenTelemetry instrument method, for future compatibility.

instrument_django

def instrument_django(
    capture_headers: bool = False,
    is_sql_commentor_enabled: bool | None = None,
    request_hook: Callable[[trace_api.Span, HttpRequest], None] | None = None,
    response_hook: Callable[[trace_api.Span, HttpRequest, HttpResponse], None] | None = None,
    excluded_urls: str | None = None,
    **kwargs: Any,
) -> None

Instrument django so that spans are automatically created for each web request.

Uses the OpenTelemetry Django Instrumentation library.

Returns

None

Parameters

capture_headers : bool Default: False

Set to True to capture all request and response headers.

is_sql_commentor_enabled : bool | None Default: None

Adds comments to SQL queries performed by Django, so that database logs have additional context.

This does NOT create spans/logs for the queries themselves. For that you need to instrument the database driver, e.g. with logfire.instrument_psycopg().

To configure the SQL Commentor, see the OpenTelemetry documentation for the values that need to be added to settings.py.

request_hook : Callable[[trace_api.Span, HttpRequest], None] | None Default: None

A function called right after a span is created for a request. The function should accept two arguments: the span and the Django Request object.

response_hook : Callable[[trace_api.Span, HttpRequest, HttpResponse], None] | None Default: None

A function called right before a span is finished for the response. The function should accept three arguments: the span, the Django Request object, and the Django Response object.

excluded_urls : str | None Default: None

A string containing a comma-delimited list of regexes used to exclude URLs from tracking.

**kwargs : Any Default: {}

Additional keyword arguments to pass to the OpenTelemetry instrument method, for future compatibility.

instrument_requests

def instrument_requests(
    excluded_urls: str | None = None,
    request_hook: Callable[[Span, requests.PreparedRequest], None] | None = None,
    response_hook: Callable[[Span, requests.PreparedRequest, requests.Response], None] | None = None,
    **kwargs: Any,
) -> None

Instrument the requests module so that spans are automatically created for each request.

Returns

None

Parameters

excluded_urls : str | None Default: None

A string containing a comma-delimited list of regexes used to exclude URLs from tracking

request_hook : Callable[[Span, requests.PreparedRequest], None] | None Default: None

A function called right after a span is created for a request.

response_hook : Callable[[Span, requests.PreparedRequest, requests.Response], None] | None Default: None

A function called right before a span is finished for the response.

**kwargs : Any Default: {}

Additional keyword arguments to pass to the OpenTelemetry instrument methods, for future compatibility.

instrument_flask

def instrument_flask(
    app: Flask,
    capture_headers: bool = False,
    enable_commenter: bool = True,
    commenter_options: FlaskCommenterOptions | None = None,
    excluded_urls: str | None = None,
    request_hook: FlaskRequestHook | None = None,
    response_hook: FlaskResponseHook | None = None,
    **kwargs: Any,
) -> None

Instrument app so that spans are automatically created for each request.

Uses the OpenTelemetry Flask Instrumentation library, specifically FlaskInstrumentor().instrument_app(), to which it passes **kwargs.

Returns

None

Parameters

app : Flask

The Flask app to instrument.

capture_headers : bool Default: False

Set to True to capture all request and response headers.

enable_commenter : bool Default: True

Adds comments to SQL queries performed by Flask, so that database logs have additional context.

commenter_options : FlaskCommenterOptions | None Default: None

Configure the tags to be added to the SQL comments. See the SQLCommenter Configurations documentation for more details.

excluded_urls : str | None Default: None

A string containing a comma-delimited list of regexes used to exclude URLs from tracking.

request_hook : FlaskRequestHook | None Default: None

A function called right after a span is created for a request.

response_hook : FlaskResponseHook | None Default: None

A function called right before a span is finished for the response.

**kwargs : Any Default: {}

Additional keyword arguments to pass to the OpenTelemetry Flask instrumentation.

instrument_starlette

def instrument_starlette(
    app: Starlette,
    capture_headers: bool = False,
    record_send_receive: bool = False,
    server_request_hook: ServerRequestHook | None = None,
    client_request_hook: ClientRequestHook | None = None,
    client_response_hook: ClientResponseHook | None = None,
    **kwargs: Any,
) -> None

Instrument app so that spans are automatically created for each request.

Uses the OpenTelemetry Starlette Instrumentation library, specifically StarletteInstrumentor.instrument_app(), to which it passes **kwargs.

Returns

None

Parameters

app : Starlette

The Starlette app to instrument.

capture_headers : bool Default: False

Set to True to capture all request and response headers.

record_send_receive : bool Default: False

Set to True to allow the OpenTelemetry ASGI middleware to create send/receive spans.

These are disabled by default to reduce overhead and the number of spans created, since many can be created for a single request, and they are not often useful. If enabled, they will be set to debug level, meaning they will usually still be hidden in the UI.

server_request_hook : ServerRequestHook | None Default: None

A function that receives a server span and the ASGI scope for every incoming request.

client_request_hook : ClientRequestHook | None Default: None

A function that receives a span, the ASGI scope and the receive ASGI message for every ASGI receive event.

client_response_hook : ClientResponseHook | None Default: None

A function that receives a span, the ASGI scope and the send ASGI message for every ASGI send event.

**kwargs : Any Default: {}

Additional keyword arguments to pass to the OpenTelemetry Starlette instrumentation.

instrument_asgi

def instrument_asgi(
    app: ASGIApp,
    capture_headers: bool = False,
    record_send_receive: bool = False,
    **kwargs: Unpack[ASGIInstrumentKwargs],
) -> ASGIApp

Instrument app so that spans are automatically created for each request.

Uses the ASGI OpenTelemetryMiddleware under the hood, to which it passes **kwargs.

Returns

ASGIApp — The instrumented ASGI application.

Parameters

app : ASGIApp

The ASGI application to instrument.

capture_headers : bool Default: False

Set to True to capture all request and response headers.

record_send_receive : bool Default: False

Set to True to allow the OpenTelemetry ASGI middleware to create send/receive spans.

These are disabled by default to reduce overhead and the number of spans created, since many can be created for a single request, and they are not often useful. If enabled, they will be set to debug level, meaning they will usually still be hidden in the UI.

**kwargs : Unpack[ASGIInstrumentKwargs] Default: {}

Additional keyword arguments to pass to the OpenTelemetry ASGI middleware.

instrument_wsgi

def instrument_wsgi(
    app: WSGIApplication,
    capture_headers: bool = False,
    request_hook: WSGIRequestHook | None = None,
    response_hook: WSGIResponseHook | None = None,
    **kwargs: Any,
) -> WSGIApplication

Instrument app so that spans are automatically created for each request.

Uses the WSGI OpenTelemetryMiddleware under the hood, to which it passes **kwargs.

Returns

WSGIApplication — The instrumented WSGI application.

Parameters

app : WSGIApplication

The WSGI application to instrument.

capture_headers : bool Default: False

Set to True to capture all request and response headers.

request_hook : WSGIRequestHook | None Default: None

A function called right after a span is created for a request.

response_hook : WSGIResponseHook | None Default: None

A function called right before a span is finished for the response.

**kwargs : Any Default: {}

Additional keyword arguments to pass to the OpenTelemetry WSGI middleware.

instrument_aiohttp_client

def instrument_aiohttp_client(
    capture_all: bool | None = None,
    capture_headers: bool = False,
    capture_request_body: bool = False,
    capture_response_body: bool = False,
    request_hook: AiohttpClientRequestHook | None = None,
    response_hook: AiohttpClientResponseHook | None = None,
    **kwargs: Any,
) -> None

Instrument the aiohttp module so that spans are automatically created for each client request.

Uses the OpenTelemetry aiohttp client Instrumentation library, specifically AioHttpClientInstrumentor().instrument(), to which it passes **kwargs.

Returns

None

instrument_aiohttp_server

def instrument_aiohttp_server(**kwargs: Any) -> None

Instrument the aiohttp module so that spans are automatically created for each server request.

Uses the OpenTelemetry aiohttp server Instrumentation library, specifically AioHttpServerInstrumentor().instrument(), to which it passes **kwargs.

Returns

None

instrument_sqlalchemy

def instrument_sqlalchemy(
    engine: AsyncEngine | Engine | None = None,
    engines: Iterable[AsyncEngine | Engine] | None = None,
    enable_commenter: bool = False,
    commenter_options: SQLAlchemyCommenterOptions | None = None,
    **kwargs: Any,
) -> None

Instrument the sqlalchemy module so that spans are automatically created for each query.

Uses the OpenTelemetry SQLAlchemy Instrumentation library, specifically SQLAlchemyInstrumentor().instrument(), to which it passes **kwargs.

Returns

None

Parameters

engine : AsyncEngine | Engine | None Default: None

The sqlalchemy engine to instrument.

engines : Iterable[AsyncEngine | Engine] | None Default: None

An iterable of sqlalchemy engines to instrument.

enable_commenter : bool Default: False

Adds comments to SQL queries performed by SQLAlchemy, so that database logs have additional context.

commenter_options : SQLAlchemyCommenterOptions | None Default: None

Configure the tags to be added to the SQL comments.

**kwargs : Any Default: {}

Additional keyword arguments to pass to the OpenTelemetry instrument methods.

instrument_sqlite3

def instrument_sqlite3(
    conn: SQLite3Connection = None,
    **kwargs: Any,
) -> SQLite3Connection

Instrument the sqlite3 module or a specific connection so that spans are automatically created for each operation.

Uses the OpenTelemetry SQLite3 Instrumentation library.

Returns

SQLite3Connection — If a connection is provided, returns the instrumented connection. If no connection is provided, returns None.

Parameters

conn : SQLite3Connection Default: None

The sqlite3 connection to instrument, or None to instrument all connections.

**kwargs : Any Default: {}

Additional keyword arguments to pass to the OpenTelemetry instrument methods.

instrument_aws_lambda

def instrument_aws_lambda(
    lambda_handler: LambdaHandler,
    event_context_extractor: Callable[[LambdaEvent], Context] | None = None,
    **kwargs: Any,
) -> None

Instrument AWS Lambda so that spans are automatically created for each invocation.

Uses the OpenTelemetry AWS Lambda Instrumentation library, specifically AwsLambdaInstrumentor().instrument(), to which it passes **kwargs.

Returns

None

Parameters

lambda_handler : LambdaHandler

The lambda handler function to instrument.

event_context_extractor : Callable[[LambdaEvent], Context] | None Default: None

A function that returns an OTel Trace Context given the Lambda Event sent by AWS.

**kwargs : Any Default: {}

Additional keyword arguments to pass to the OpenTelemetry instrument methods for future compatibility.

instrument_mysql

def instrument_mysql(conn: MySQLConnection = None, **kwargs: Any) -> MySQLConnection

Instrument the mysql module or a specific MySQL connection so that spans are automatically created for each operation.

Uses the OpenTelemetry MySQL Instrumentation library.

Returns

MySQLConnection — If a connection is provided, returns the instrumented connection. If no connection is provided, returns None.

Parameters

conn : MySQLConnection Default: None

The mysql connection to instrument, or None to instrument all connections.

**kwargs : Any Default: {}

Additional keyword arguments to pass to the OpenTelemetry instrument methods.

instrument_system_metrics

def instrument_system_metrics(
    config: SystemMetricsConfig | None = None,
    base: SystemMetricsBase = 'basic',
) -> None

Collect system metrics.

See the guide for more information.

Returns

None

Parameters

config : SystemMetricsConfig | None Default: None

A dictionary where the keys are metric names and the values are optional further configuration for that metric.

base : SystemMetricsBase Default: 'basic'

A string indicating the base config dictionary which config will be merged with, or None for an empty base config.

metric_counter

def metric_counter(name: str, unit: str = '', description: str = '') -> Counter

Create a counter metric.

A counter is a cumulative metric that represents a single numerical value that only ever goes up.

import logfire

logfire.configure()
counter = logfire.metric_counter('exceptions', unit='1', description='Number of exceptions caught')

try:
    raise Exception('oops')
except Exception:
    counter.add(1)

See the Opentelemetry documentation about counters.

Returns

Counter — The counter metric.

Parameters

name : str

The name of the metric.

unit : str Default: ''

The unit of the metric.

description : str Default: ''

The description of the metric.

metric_histogram

def metric_histogram(name: str, unit: str = '', description: str = '') -> Histogram

Create a histogram metric.

A histogram is a metric that samples observations (usually things like request durations or response sizes).

import logfire

logfire.configure()
histogram = logfire.metric_histogram('bank.amount_transferred', unit='$', description='Amount transferred')


def transfer(amount: int):
    histogram.record(amount)

See the Opentelemetry documentation about histograms.

Returns

Histogram — The histogram metric.

Parameters

name : str

The name of the metric.

unit : str Default: ''

The unit of the metric.

description : str Default: ''

The description of the metric.

metric_gauge

def metric_gauge(name: str, unit: str = '', description: str = '') -> Gauge

Create a gauge metric.

Gauge is a synchronous instrument which can be used to record non-additive measurements.

import logfire

logfire.configure()
gauge = logfire.metric_gauge('system.cpu_usage', unit='%', description='CPU usage')


def update_cpu_usage(cpu_percent):
    gauge.set(cpu_percent)

See the Opentelemetry documentation about gauges.

Returns

Gauge — The gauge metric.

Parameters

name : str

The name of the metric.

unit : str Default: ''

The unit of the metric.

description : str Default: ''

The description of the metric.

metric_up_down_counter

def metric_up_down_counter(
    name: str,
    unit: str = '',
    description: str = '',
) -> UpDownCounter

Create an up-down counter metric.

An up-down counter is a cumulative metric that represents a single numerical value that can be adjusted up or down.

import logfire

logfire.configure()
up_down_counter = logfire.metric_up_down_counter('users.logged_in', unit='1', description='Users logged in')


def on_login(user):
    up_down_counter.add(1)


def on_logout(user):
    up_down_counter.add(-1)

See the Opentelemetry documentation about up-down counters.

Returns

UpDownCounter — The up-down counter metric.

Parameters

name : str

The name of the metric.

unit : str Default: ''

The unit of the metric.

description : str Default: ''

The description of the metric.

metric_counter_callback

def metric_counter_callback(
    name: str,
    callbacks: Sequence[CallbackT],
    unit: str = '',
    description: str = '',
) -> None

Create a counter metric that uses a callback to collect observations.

The callback is called every 60 seconds in a background thread.

The counter metric is a cumulative metric that represents a single numerical value that only ever goes up.

import psutil
from opentelemetry.metrics import CallbackOptions, Observation

import logfire

logfire.configure()


def cpu_usage_callback(options: CallbackOptions):
    cpu_percents = psutil.cpu_percent(percpu=True)

    for i, cpu_percent in enumerate(cpu_percents):
        yield Observation(cpu_percent, {'cpu': i})


cpu_usage_counter = logfire.metric_counter_callback(
    'system.cpu.usage',
    callbacks=[cpu_usage_callback],
    unit='%',
    description='CPU usage',
)

See the Opentelemetry documentation about asynchronous counter.

Returns

None

Parameters

name : str

The name of the metric.

callbacks : Sequence[CallbackT]

A sequence of callbacks that return an iterable of Observation.

unit : str Default: ''

The unit of the metric.

description : str Default: ''

The description of the metric.

metric_gauge_callback

def metric_gauge_callback(
    name: str,
    callbacks: Sequence[CallbackT],
    unit: str = '',
    description: str = '',
) -> None

Create a gauge metric that uses a callback to collect observations.

The callback is called every 60 seconds in a background thread.

The gauge metric is a metric that represents a single numerical value that can arbitrarily go up and down.

import threading

from opentelemetry.metrics import CallbackOptions, Observation

import logfire

logfire.configure()


def thread_count_callback(options: CallbackOptions):
    yield Observation(threading.active_count())


logfire.metric_gauge_callback(
    'system.thread_count',
    callbacks=[thread_count_callback],
    unit='1',
    description='Number of threads',
)

See the Opentelemetry documentation about asynchronous gauge.

Returns

None

Parameters

name : str

The name of the metric.

callbacks : Sequence[CallbackT]

A sequence of callbacks that return an iterable of Observation.

unit : str Default: ''

The unit of the metric.

description : str Default: ''

The description of the metric.

metric_up_down_counter_callback

def metric_up_down_counter_callback(
    name: str,
    callbacks: Sequence[CallbackT],
    unit: str = '',
    description: str = '',
) -> None

Create an up-down counter metric that uses a callback to collect observations.

The callback is called every 60 seconds in a background thread.

The up-down counter is a cumulative metric that represents a single numerical value that can be adjusted up or down.

from opentelemetry.metrics import CallbackOptions, Observation

import logfire

logfire.configure()

items = []


def inventory_callback(options: CallbackOptions):
    yield Observation(len(items))


logfire.metric_up_down_counter_callback(
    name='store.inventory',
    description='Number of items in the inventory',
    callbacks=[inventory_callback],
)

See the Opentelemetry documentation about asynchronous up-down counters.

Returns

None

Parameters

name : str

The name of the metric.

callbacks : Sequence[CallbackT]

A sequence of callbacks that return an iterable of Observation.

unit : str Default: ''

The unit of the metric.

description : str Default: ''

The description of the metric.

suppress_scopes

def suppress_scopes(*scopes: str) -> None

Prevent spans and metrics from being created for the given OpenTelemetry scope names.

To get the scope name of a span/metric, check the value of the otel_scope_name column in the Logfire database.

Returns

None

shutdown

def shutdown(timeout_millis: int = 30000, flush: bool = True) -> bool

Shut down all tracers and meters.

This will clean up any resources used by the tracers and meters and flush any remaining spans and metrics.

Returns

bool — False if the timeout was reached before the shutdown was completed, True otherwise.

Parameters

timeout_millis : int Default: 30000

The timeout in milliseconds.

flush : bool Default: True

Whether to flush remaining spans and metrics before shutting down.

var

def var(name: str, default: T, description: str | None = None) -> Variable[T]
def var(
    name: str,
    type: type[T],
    default: T | ResolveFunction[T],
    description: str | None = None,
) -> Variable[T]

Define a managed variable.

Managed variables let you externalize runtime configuration from your code, controlling values from the Logfire UI without redeploying. Use .get() on the returned Variable to resolve the current value.

See the managed variables guide for more details.

import logfire

logfire.configure()

# Simple primitive variable (type inferred from default)
feature_enabled = logfire.var('feature_enabled', default=False)

# Use the variable
with feature_enabled.get(targeting_key='user-123') as resolved:
    if resolved.value:
        ...
Returns

Variable[T]

Parameters

name : str

Unique identifier for the variable. Must match the name configured in the Logfire UI when using remote variables.

type : type[T] | None Default: None

Expected type for validation and JSON schema generation. Can be a primitive type or a Pydantic model. If not provided, the type is inferred from default. Required when default is a resolve function.

default : T | ResolveFunction[T]

Default value used when no remote configuration is found. When type is not provided, the type is inferred from this value. Can also be a callable with targeting_key and attributes parameters (requires type to be set explicitly).

description : str | None Default: None

Optional human-readable description of what the variable controls.

variables_clear

def variables_clear() -> None

Clear all registered variables from this Logfire instance.

This removes all variables previously registered via var(), allowing them to be re-registered. This is primarily intended for use in tests to ensure a clean state between test cases.

Returns

None

variables_get

def variables_get() -> list[Variable[Any]]

Get all variables registered with this Logfire instance.

Returns

list[Variable[Any]]

variables_push

def variables_push(
    variables: list[Variable[Any]] | None = None,
    dry_run: bool = False,
    yes: bool = False,
    strict: bool = False,
) -> bool

Push variable definitions (metadata only) to the configured variable provider.

This method syncs local variable definitions with the provider:

  • Creates new variables that don’t exist in the provider
  • Updates JSON schemas for existing variables if they’ve changed
  • Warns about existing label values that are incompatible with new schemas

The provider is determined by the Logfire configuration. For remote providers, this requires proper authentication (via VariablesOptions or LOGFIRE_API_KEY).

Returns

bool — True if changes were applied (or would be applied in dry_run mode), False otherwise.

Parameters

variables : list[Variable[Any]] | None Default: None

Variable instances to push. If None, all variables registered with this Logfire instance will be pushed.

dry_run : bool Default: False

If True, only show what would change without applying.

yes : bool Default: False

If True, skip confirmation prompt.

strict : bool Default: False

If True, fail if any existing label values are incompatible with new schemas.

variables_push_types

def variables_push_types(
    types: Sequence[type[Any] | tuple[type[Any], str]],
    dry_run: bool = False,
    yes: bool = False,
    strict: bool = False,
) -> bool

Push variable type definitions to the configured variable provider.

Variable types are reusable schema definitions that can be referenced by variables. They help organize and standardize variable schemas across your project.

This method syncs local Python types with the provider:

  • Creates new types that don’t exist in the provider
  • Updates schemas for existing types if they’ve changed
  • Shows a diff of changes before applying
  • Checks if existing variable label values are compatible with the new schemas

The provider is determined by the Logfire configuration. For remote providers, this requires proper authentication (via VariablesOptions or LOGFIRE_API_KEY).

Returns

bool — True if changes were applied (or would be applied in dry_run mode), False otherwise.

Parameters

types : Sequence[type[Any] | tuple[type[Any], str]]

Types to push. Items can be:

  • A type (name defaults to the type's __name__, or str(type))
  • A tuple of (type, name) for explicit naming

dry_run : bool Default: False

If True, only show what would change without applying.

yes : bool Default: False

If True, skip confirmation prompt.

strict : bool Default: False

If True, abort when existing label values are incompatible with the new type schema.

variables_validate

def variables_validate(variables: list[Variable[Any]] | None = None) -> ValidationReport

Validate that provider-side variable label values match local type definitions.

This method fetches the current variable configuration from the provider and validates that all label values can be deserialized to the expected types defined in the local Variable instances.

Returns

ValidationReport — A ValidationReport containing any errors found. Use report.is_valid to check if validation passed, and report.format() to get a human-readable summary.

Parameters

variables : list[Variable[Any]] | None Default: None

Variable instances to validate. If None, all variables registered with this Logfire instance will be validated.

variables_push_config

def variables_push_config(
    config: VariablesConfig,
    mode: Literal['merge', 'replace'] = 'merge',
    dry_run: bool = False,
    yes: bool = False,
) -> bool

Push a VariablesConfig to the configured provider.

This method pushes a complete VariablesConfig (including labels and rollouts) to the provider. It’s useful for:

  • Pushing configs generated or modified locally
  • Pushing configs read from files
  • Partial updates (merge mode) or full replacement (replace mode)
Returns

bool — True if changes were applied (or would be applied in dry_run mode), False otherwise.

Parameters

config : VariablesConfig

The VariablesConfig to sync.

mode : Literal['merge', 'replace'] Default: 'merge'

'merge' updates/creates only variables in config (leaves others unchanged). 'replace' makes the server match the config exactly (deletes missing variables).

dry_run : bool Default: False

If True, only show what would change without applying.

yes : bool Default: False

If True, skip confirmation prompt.

variables_pull_config

def variables_pull_config() -> VariablesConfig

Pull the current variable configuration from the provider.

This method fetches the complete configuration from the provider, useful for generating local copies of the config that can be modified.

Returns

VariablesConfig — The current VariablesConfig from the provider.

variables_build_config

def variables_build_config(
    variables: list[Variable[Any]] | None = None,
) -> VariablesConfig

Build a VariablesConfig from registered Variable instances.

This creates a minimal config with just the name, schema, and example for each variable. No labels or versions are created; use this to build a template config that can be edited.

Returns

VariablesConfig — A VariablesConfig with minimal configs for each variable.

Parameters

variables : list[Variable[Any]] | None Default: None

Variable instances to include. If None, uses all registered variables.


AutoTraceModule

Information about a module being imported that should maybe be traced automatically.

This object will be passed to a function that should return True if the module should be traced. In particular it’ll be passed to a function that’s passed to install_auto_tracing as the modules argument.

Attributes

name

Fully qualified absolute name of the module being imported.

Type: str

filename

Filename of the module being imported.

Type: str | None

Methods

parts_start_with

def parts_start_with(prefix: str | Sequence[str]) -> bool

Return True if the module name starts with any of the given prefixes, using dots as boundaries.

For example, if the module name is foo.bar.spam, then parts_start_with('foo') will return True, but parts_start_with('bar') or parts_start_with('foo_bar') will return False. In other words, this will match the module itself or any submodules.

If a prefix contains any characters other than letters, numbers, and dots, then it will be treated as a regular expression.

Returns

bool

StructlogProcessor

Logfire processor for structlog.

Methods

__call__

def __call__(logger: WrappedLogger, name: str, event_dict: EventDict) -> EventDict

A middleware to process structlog event, and send it to Logfire.

Returns

EventDict

LogfireLoggingHandler

Bases: LoggingHandler

A logging handler that sends logs to Logfire.

Constructor Parameters

level : int | str Default: NOTSET

The threshold level for this handler. Logging messages which are less severe than level will be ignored.

fallback : LoggingHandler Default: StreamHandler()

A fallback handler to use when instrumentation is suppressed.

logfire_instance : Logfire | None Default: None

The Logfire instance to use when emitting logs. Defaults to the default global instance.

Methods

emit

def emit(record: LogRecord) -> None

Send the log to Logfire.

Returns

None

Parameters

record : LogRecord

The log record to send.

fill_attributes

def fill_attributes(record: LogRecord) -> dict[str, Any]

Fill the attributes to send to Logfire.

This method can be overridden to add more attributes.

Returns

dict[str, Any] — The attributes for the log record.

Parameters

record : LogRecord

The log record.

ScrubMatch

An object passed to a ScrubbingOptions.callback function.

Attributes

path

The path to the value in the span being considered for redaction, e.g. ('attributes', 'password').

Type: JsonPath

value

The value in the span being considered for redaction, e.g. 'my_password'.

Type: Any

pattern_match

The regex match object indicating why the value is being redacted. Use pattern_match.group(0) to get the matched string.

Type: re.Match[str]

SamplingOptions

Options for logfire.configure(sampling=...).

See the sampling guide.

Attributes

head

Head sampling options.

If it’s a float, it should be a number between 0.0 and 1.0. This is the probability that an entire trace will be randomly included.

Alternatively you can pass a custom OpenTelemetry Sampler.

Type: float | Sampler Default: 1.0

tail

An optional tail sampling callback which will be called for every span.

It should return a number between 0.0 and 1.0, the probability that the entire trace will be included. Use SamplingOptions.level_or_duration for a common use case.

Every span in a trace will be stored in memory until either the trace is included by tail sampling or it’s completed and discarded, so large traces may consume a lot of memory.

Type: Callable[[TailSamplingSpanInfo], float] | None Default: None

Methods

level_or_duration

@classmethod

def level_or_duration(
    cls,
    head: float | Sampler = 1.0,
    level_threshold: LevelName | None = 'notice',
    duration_threshold: float | None = 5.0,
    background_rate: float = 0.0,
) -> Self

Returns a SamplingOptions instance that tail samples traces based on their log level and duration.

If a trace has at least one span/log that has a log level greater than or equal to level_threshold, or if the duration of the whole trace is greater than duration_threshold seconds, then the whole trace will be included. Otherwise, the probability is background_rate.

The head parameter is the same as in the SamplingOptions constructor.

Returns

Self
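The inclusion rule described above can be sketched as plain Python. This is a sketch of the documented decision logic, not the logfire implementation; the numeric level values are illustrative, since Logfire maps level names to numbers internally:

```python
def tail_sample_probability(
    max_level: int,
    duration: float,
    level_threshold: int = 10,     # assumed numeric value for 'notice'
    duration_threshold: float = 5.0,
    background_rate: float = 0.0,
) -> float:
    # Keep the whole trace if any span reached the level threshold,
    # or if the trace as a whole ran longer than the duration threshold.
    if max_level >= level_threshold or duration > duration_threshold:
        return 1.0
    # Otherwise keep the trace only at the background rate.
    return background_rate
```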

ScrubbingOptions

Options for redacting sensitive data.

Attributes

callback

A function that is called for each match found by the scrubber. If it returns None, the value is redacted. Otherwise, the returned value replaces the matched value. The function accepts a single argument of type logfire.ScrubMatch.

Type: ScrubCallback | None Default: None

extra_patterns

A sequence of regular expressions to detect sensitive data that should be redacted. For example, the default includes 'password', 'secret', and 'api[._ -]?key'. The specified patterns are combined with the default patterns.

Type: Sequence[str] | None Default: None
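Conceptually, the extra patterns are OR-ed together with the defaults into one case-insensitive regex. A rough sketch of that combination (the default list shown here is abbreviated; the real default list in logfire is longer):

```python
import re

# Abbreviated sample of the default patterns mentioned above.
default_patterns = ['password', 'secret', r'api[._ -]?key']

# Patterns you would pass as ScrubbingOptions(extra_patterns=...).
extra_patterns = ['my_internal_token']

# Combine defaults and extras into a single case-insensitive matcher.
combined = re.compile('|'.join(default_patterns + extra_patterns), re.IGNORECASE)
```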

ConsoleOptions

Options for controlling console output.

Attributes

span_style

How spans are shown in the console.

Type: Literal['simple', 'indented', 'show-parents'] Default: 'show-parents'

include_timestamps

Whether to include timestamps in the console output.

Type: bool Default: True

include_tags

Whether to include tags in the console output.

Type: bool Default: True

verbose

Whether to show verbose output.

It includes the filename, log level, and line number.

Type: bool Default: False

min_log_level

The minimum log level to show in the console.

Type: LevelName Default: 'info'

show_project_link

Whether to print the URL of the Logfire project after initialization.

Type: bool Default: True

output

The output stream to write console output to (default: stdout).

Type: TextIO | None Default: None
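Putting these options together, a configuration that quiets the console and writes to stderr might look like the following sketch, assuming the attributes and defaults documented above:

```python
import sys

import logfire

logfire.configure(
    console=logfire.ConsoleOptions(
        span_style='simple',
        min_log_level='warning',  # hide info/debug console output
        include_timestamps=False,
        output=sys.stderr,        # default output stream is stdout
    ),
)
```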

AdvancedOptions

Options primarily used for testing by Logfire developers.

Attributes

base_url

Base URL for the Logfire API.

If not set, Logfire will infer the base URL from the token (which contains information about the region).

Type: str | None Default: None

id_generator

Generator for trace and span IDs.

The default generates random IDs and is unaffected by calls to random.seed().

Type: IdGenerator Default: dataclasses.field(default_factory=(lambda: SeededRandomIdGenerator(None)))

ns_timestamp_generator

Generator for nanosecond start and end timestamps of spans.

Type: Callable[[], int] Default: time.time_ns

log_record_processors

Configuration for OpenTelemetry logging. This is experimental and may be removed.

Type: Sequence[LogRecordProcessor] Default: ()

exception_callback

Callback function that is called when an exception is recorded on a span.

This is experimental and may be modified or removed.

Note: When using ProcessPoolExecutor, this callback must be defined at the module level (not as a local function) to be picklable. Local functions will be excluded from the serialized configuration sent to child processes. See the distributed tracing guide for more details.

Type: ExceptionCallback | None Default: None

PydanticPlugin

Options for the Pydantic plugin.

This class is deprecated for external use. Use logfire.instrument_pydantic() instead.

Attributes

record

The record mode for the Pydantic plugin.

It can be one of the following values:

  • off: Disable instrumentation. This is the default value.
  • all: Send traces and metrics for all events.
  • failure: Send metrics for all validations and traces only for validation failures.
  • metrics: Send only metrics.

Type: PydanticPluginRecordValues Default: 'off'

include

By default, third party modules are not instrumented. This option allows you to include specific modules.

Type: set[str] Default: field(default_factory=set)

exclude

Exclude specific modules from instrumentation.

Type: set[str] Default: field(default_factory=set)

MetricsOptions

Configuration of metrics.

Attributes

DEFAULT_VIEWS

The default OpenTelemetry metric views applied by Logfire.

This class variable is provided for reference so you can extend the defaults when configuring custom views: MetricsOptions(views=[*MetricsOptions.DEFAULT_VIEWS, View(...), View(...)])

The default views include:

  • Exponential bucket histogram aggregation for all Histogram instruments, which provides better resolution and smaller payload sizes compared to fixed-bucket histograms.
  • Attribute filtering for the http.server.active_requests UpDownCounter, limiting attributes to url.scheme, http.scheme, http.flavor, http.method, and http.request.method to reduce cardinality.

Type: Sequence[View] Default: (View(instrument_type=Histogram, aggregation=(ExponentialBucketHistogramAggregation())), View(instrument_type=UpDownCounter, instrument_name='http.server.active_requests', attribute_keys={'url.scheme', 'http.scheme', 'http.flavor', 'http.method', 'http.request.method'}))

additional_readers

Sequence of metric readers to be used in addition to the default which exports metrics to Logfire’s API.

Type: Sequence[MetricReader] Default: ()

collect_in_spans

Experimental setting to add up the values of counter and histogram metrics in active spans.

Type: bool Default: False

views

Sequence of OpenTelemetry metric views to apply during metric collection.

Defaults to DEFAULT_VIEWS. To add custom views while keeping the defaults, use: MetricsOptions(views=[*MetricsOptions.DEFAULT_VIEWS, View(...), View(...)])

To replace the defaults entirely, pass your own sequence of views.

Type: Sequence[View] Default: field(default_factory=(lambda: MetricsOptions.DEFAULT_VIEWS))

CodeSource

Settings for the source code of the project.

Attributes

repository

The repository URL for the code, e.g. https://github.com/pydantic/logfire

Type: str

revision

The git revision of the code, e.g. a branch name, commit hash, or tag name.

Type: str

root_path

The path from the root of the repository to the current working directory of the process.

If you run the code from the directory corresponding to the root of the repository, you can leave this blank.

Type: str Default: ''
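For example, to link spans back to the repository, the settings can be passed to logfire.configure (a configuration sketch; the revision and root_path values are illustrative):

```python
import logfire

logfire.configure(
    code_source=logfire.CodeSource(
        repository='https://github.com/pydantic/logfire',
        revision='main',
        root_path='src',  # illustrative: the process runs from src/ in the repo
    ),
)
```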

VariablesOptions

Configuration for managed variables using the Logfire remote API.

This is the recommended configuration for production use. Variables are managed through the Logfire UI and fetched via the Logfire API.

Attributes

block_before_first_resolve

Whether the remote variables should be fetched before first resolving a value.

Type: bool Default: True

polling_interval

The time interval for polling for updates to the variables config.

Polling is only a fallback — all updates are delivered instantly via SSE unless something goes wrong. Must be at least 10 seconds. Defaults to 60 seconds.

Type: timedelta | float Default: timedelta(seconds=60)

timeout

Timeout for HTTP requests to the variables API as (connect_timeout, read_timeout) in seconds.

Type: tuple[float, float] Default: (10, 10)

include_resource_attributes_in_context

Whether to include OpenTelemetry resource attributes when resolving variables.

Type: bool Default: True

include_baggage_in_context

Whether to include OpenTelemetry baggage when resolving variables.

Type: bool Default: True

instrument

Whether to create spans when resolving variables.

Type: bool Default: True

LocalVariablesOptions

Configuration for managed variables using a local in-memory configuration.

Use this for development, testing, or self-hosted setups where you don’t want to connect to the Logfire API.

Attributes

config

A local variables config containing variable definitions.

Type: VariablesConfig

include_resource_attributes_in_context

Whether to include OpenTelemetry resource attributes when resolving variables.

Type: bool Default: True

include_baggage_in_context

Whether to include OpenTelemetry baggage when resolving variables.

Type: bool Default: True

instrument

Whether to create spans when resolving variables.

Type: bool Default: True

LogfireSpan

Bases: ReadableSpan

Methods

set_attribute

def set_attribute(key: str, value: Any) -> None

Sets an attribute on the span.

Returns

None

Parameters

key : str

The key of the attribute.

value : Any

The value of the attribute.

set_attributes

def set_attributes(attributes: dict[str, Any]) -> None

Sets the given attributes on the span.

Returns

None

record_exception

def record_exception(
    exception: BaseException,
    attributes: otel_types.Attributes = None,
    timestamp: int | None = None,
    escaped: bool = False,
) -> None

Records an exception as a span event.

Delegates to the OpenTelemetry SDK Span.record_exception method.

Returns

None

set_level

def set_level(level: LevelName | int)

Set the log level of this span.

add_non_user_code_prefix

def add_non_user_code_prefix(path: str | Path) -> None

Add a path to the list of prefixes that are considered non-user code.

This prevents the stack info from including frames from the given path.

This is for advanced users and shouldn’t often be needed. By default, the following prefixes are already included:

  • The standard library
  • site-packages (specifically wherever opentelemetry is installed)
  • The logfire package

This function is useful if you’re writing a library that uses logfire and you want to exclude your library’s frames. Since site-packages is already included, this is already the case by default for users of your library. But this is useful when testing your library since it’s not installed in site-packages.

Returns

None

get_context

def get_context() -> ContextCarrier

Create a new empty carrier dict and inject context into it.

Usage:

import logfire

logfire_context = logfire.get_context()

...

# later on in another thread, process or service
with logfire.attach_context(logfire_context):
    ...

You could also inject context into an existing mapping like headers with:

import logfire

existing_headers = {'X-Foobar': 'baz'}
existing_headers.update(logfire.get_context())
...

Returns

ContextCarrier — A new dict with the context injected into it.

set_baggage

def set_baggage(**values: str) -> Iterator[None]

Context manager that attaches key/value pairs as OpenTelemetry baggage to the current context.

See the Baggage documentation for more details.

Note: this function should always be used in a with statement; if you try to open and close it manually you may run into surprises because OpenTelemetry Baggage is stored in the same contextvar as the current span.

Example usage:

from logfire import set_baggage

with set_baggage(my_id='123'):
    # All spans opened inside this block will have baggage '{"my_id": "123"}'
    with set_baggage(my_session='abc'):
        # All spans opened inside this block will have baggage '{"my_id": "123", "my_session": "abc"}'
        ...

Returns

Iterator[None]

Parameters

values : str

The key/value pairs to attach to baggage. These should not be large or sensitive. Strings longer than 1000 characters will be truncated with a warning.

attach_context

def attach_context(
    carrier: ContextCarrier,
    third_party: bool = False,
    propagator: TextMapPropagator | None = None,
) -> Iterator[None]

Attach a context as generated by get_context to the current execution context.

Since attach_context is a context manager, it restores the previous context when exiting.

Set third_party to True if using this inside a library intended to be used by others. This will respect the distributed_tracing argument of logfire.configure(), so users will be warned about unintentional distributed tracing by default and they can suppress it. See Unintentional Distributed Tracing for more information.

Returns

Iterator[None]

loguru_handler

def loguru_handler() -> Any

Create a Logfire handler for Loguru.

Returns

Any — A dictionary with the handler and format for Loguru.

no_auto_trace

def no_auto_trace(x: T) -> T

Decorator to prevent a function/class from being traced by logfire.install_auto_tracing.

This is useful for small functions that are called very frequently and would generate too much noise.

The decorator is detected at import time. Only @no_auto_trace or @logfire.no_auto_trace are supported. Renaming/aliasing either the function or module won’t work. Neither will calling this indirectly via another function.

Any decorated function, or any function defined anywhere inside a decorated function/class, will be completely ignored by logfire.install_auto_tracing.

This decorator simply returns the argument unchanged, so there is zero runtime overhead.

Returns

T
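Since the decorator simply returns its argument, its runtime behavior can be checked with a local stand-in. This is a sketch of the documented identity behavior only; the real effect happens at import time, when install_auto_tracing skips decorated functions:

```python
from typing import TypeVar

T = TypeVar('T')

def no_auto_trace(x: T) -> T:
    # Identity at runtime: the argument is returned unchanged,
    # so decorating a function adds zero overhead.
    return x

@no_auto_trace
def hot_helper(n: int) -> int:
    return n * 2
```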

logfire_info

def logfire_info() -> str

Show versions of logfire, OS and related packages.

Returns

str

suppress_instrumentation

def suppress_instrumentation()

Context manager to suppress all logs/spans generated by logfire or OpenTelemetry.

configure

def configure(
    local: bool = False,
    send_to_logfire: bool | Literal['if-token-present'] | None = None,
    token: str | list[str] | None = None,
    api_key: str | None = None,
    service_name: str | None = None,
    service_version: str | None = None,
    environment: str | None = None,
    console: ConsoleOptions | Literal[False] | None = None,
    config_dir: Path | str | None = None,
    data_dir: Path | str | None = None,
    additional_span_processors: Sequence[SpanProcessor] | None = None,
    metrics: MetricsOptions | Literal[False] | None = None,
    scrubbing: ScrubbingOptions | Literal[False] | None = None,
    inspect_arguments: bool | None = None,
    sampling: SamplingOptions | None = None,
    min_level: int | LevelName | None = None,
    add_baggage_to_attributes: bool = True,
    code_source: CodeSource | None = None,
    variables: VariablesOptions | LocalVariablesOptions | None = None,
    distributed_tracing: bool | None = None,
    advanced: AdvancedOptions | None = None,
    deprecated_kwargs: Unpack[DeprecatedKwargs] = {},
) -> Logfire

Configure the logfire SDK.

Returns

Logfire

Parameters

local : bool Default: False

If True, configures and returns a Logfire instance that is not the default global instance. Use this to create multiple separate configurations, e.g. to send to different projects.

send_to_logfire : bool | Literal['if-token-present'] | None Default: None

Whether to send logs to logfire.dev.

Defaults to the LOGFIRE_SEND_TO_LOGFIRE environment variable if set, otherwise defaults to True. If if-token-present is provided, logs will only be sent if a token is present.

token : str | list[str] | None Default: None

The project write token(s). Can be a single token string or a list of tokens to send data to multiple projects simultaneously (useful for project migration).

Defaults to the LOGFIRE_TOKEN environment variable (supports comma-separated tokens).

api_key : str | None Default: None

API key for the Logfire API.

If not provided, will be loaded from the LOGFIRE_API_KEY environment variable.

service_name : str | None Default: None

Name of this service.

Defaults to the LOGFIRE_SERVICE_NAME environment variable.

service_version : str | None Default: None

Version of this service.

Defaults to the LOGFIRE_SERVICE_VERSION environment variable, or the current git commit hash if available.

environment : str | None Default: None

The environment this service is running in, e.g. 'staging' or 'prod'. Sets the deployment.environment.name resource attribute. Useful for filtering within projects in the Logfire UI.

Defaults to the LOGFIRE_ENVIRONMENT environment variable.

console : ConsoleOptions | Literal[False] | None Default: None

Whether to control terminal output. If None uses the LOGFIRE_CONSOLE_* environment variables, otherwise defaults to ConsoleOptions(colors='auto', indent_spans=True, include_timestamps=True, include_tags=True, verbose=False). If False disables console output. It can also be disabled by setting the LOGFIRE_CONSOLE environment variable to false.

config_dir : Path | str | None Default: None

Directory that contains the pyproject.toml file for this project. If None uses the LOGFIRE_CONFIG_DIR environment variable, otherwise defaults to the current working directory.

data_dir : Path | str | None Default: None

Directory to store credentials and logs. If None uses the LOGFIRE_CREDENTIALS_DIR environment variable, otherwise defaults to '.logfire'.

additional_span_processors : Sequence[SpanProcessor] | None Default: None

Span processors to use in addition to the default processor which exports spans to Logfire’s API.

metrics : MetricsOptions | Literal[False] | None Default: None

Set to False to disable sending all metrics, or provide a MetricsOptions object to configure metrics, e.g. additional metric readers.

scrubbing : ScrubbingOptions | Literal[False] | None Default: None

Options for scrubbing sensitive data. Set to False to disable.

inspect_arguments : bool | None Default: None

Whether to enable f-string magic. If None uses the LOGFIRE_INSPECT_ARGUMENTS environment variable.

Defaults to True if and only if the Python version is at least 3.11.

Also enables magic argument inspection in logfire.instrument_print().

min_level : int | LevelName | None Default: None

Minimum log level for logs and spans to be created. By default, all logs and spans are created. For example, set to 'info' to only create logs with level 'info' or higher, filtering out debug logs. For spans, this only applies when _level is explicitly specified in logfire.span; changing the level of a span after it is created is ignored by this setting. If a span is filtered out, this has no effect on the current active span or on logs/spans created inside the filtered logfire.span context manager. If set to None, uses the LOGFIRE_MIN_LEVEL environment variable; if that is not set, there is no minimum level.

sampling : SamplingOptions | None Default: None

Sampling options. See the sampling guide.

add_baggage_to_attributes : bool Default: True

Set to False to prevent OpenTelemetry Baggage from being added to spans as attributes. See the Baggage documentation for more details.

code_source : CodeSource | None Default: None

Settings for the source code of the project.

variables : VariablesOptions | LocalVariablesOptions | None Default: None

Options related to managed variables.

distributed_tracing : bool | None Default: None

By default, incoming trace context is extracted, but generates a warning. Set to True to disable the warning. Set to False to suppress extraction of incoming trace context. See Unintentional Distributed Tracing for more information. This setting always applies globally, and the last value set is used, including the default value.

advanced : AdvancedOptions | None Default: None

Advanced options primarily used for testing by Logfire developers.
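A typical production-style call combining several of the parameters above (the service name, version, and environment values are illustrative):

```python
import logfire

logfire.configure(
    service_name='checkout-service',
    service_version='1.4.2',
    environment='staging',
    send_to_logfire='if-token-present',  # only export when a token exists
    console=False,                       # disable terminal output
    min_level='info',                    # filter out debug/trace logs
)
```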

LevelName

Level names for records.

Default: Literal['trace', 'debug', 'info', 'notice', 'warn', 'warning', 'error', 'fatal']

get_baggage

Get all OpenTelemetry baggage for the current context as a mapping of key/value pairs.

Default: baggage.get_all