

optimus_dl.modules.loggers.base

Base class for metrics loggers in the Optimus-DL framework.

This module provides the abstract interface that all metrics logging backends must implement to integrate with the metrics system.

BaseMetricsLogger

Bases: ABC

Abstract base class for metrics logging backends.

All metrics loggers in the framework should inherit from this class. The logger receives computed metrics from various training phases (train, eval, etc.) and is responsible for persisting them (e.g., to a file, a database, or a cloud service).

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `cfg` | | Configuration object for the logger. |
| `enabled` | | Whether the logger is active. |

Source code in optimus_dl/modules/loggers/base.py
class BaseMetricsLogger(ABC):
    """Abstract base class for metrics logging backends.

    All metrics loggers in the framework should inherit from this class.
    The logger receives computed metrics from various training phases (train,
    eval, etc.) and is responsible for persisting them (e.g., to a file,
    a database, or a cloud service).

    Attributes:
        cfg: Configuration object for the logger.
        enabled: Whether the logger is active.
    """

    def __init__(self, cfg, state_dict=None, **kwargs):
        """Initialize the metrics logger.

        Args:
            cfg: Logger configuration (subclass of MetricsLoggerConfig).
            state_dict: Optional state for resuming.
            **kwargs: Additional keyword arguments.
        """
        self.cfg = cfg
        self.enabled = cfg.enabled if hasattr(cfg, "enabled") else True

        if not self.enabled:
            logger.info(f"{self.__class__.__name__} disabled via configuration")

    @abstractmethod
    def setup(self, experiment_name: str, config: dict[str, Any]) -> None:
        """Setup the logger with experiment metadata and config.

        This is typically called once at the start of a training run.

        Args:
            experiment_name: A unique name for the experiment.
            config: The full training configuration (as a dictionary).
        """
        pass

    @abstractmethod
    def log_metrics(
        self, metrics: dict[str, Any], step: int, group: str = "train"
    ) -> None:
        """Record a set of metrics for a specific training step.

        Args:
            metrics: Dictionary mapping metric names to values.
            step: The current training iteration or step.
            group: The metrics group (e.g., 'train', 'eval').
        """
        pass

    @abstractmethod
    def close(self) -> None:
        """Perform any necessary cleanup and flush remaining logs.

        Called at the end of the training or evaluation process.
        """
        pass

    def state_dict(self) -> dict[str, Any]:
        """Return the logger's internal state for checkpointing.

        Returns:
            A dictionary containing any state needed to resume the logger
            (e.g., a run ID for WandB).
        """
        return {}
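As a concrete illustration, here is a minimal file-backed backend sketch. The class name `JsonlMetricsLogger` and its JSONL format are hypothetical, not part of the framework; a real implementation would inherit from `BaseMetricsLogger` and call `super().__init__(cfg)`, while this standalone version only mirrors the same interface so it runs on its own.

```python
import json
from pathlib import Path
from typing import Any


class JsonlMetricsLogger:
    """Hypothetical backend: append each metrics dict as one JSON line.

    A real implementation would subclass BaseMetricsLogger; this
    standalone sketch just mirrors the same interface.
    """

    def __init__(self, cfg, state_dict=None, **kwargs):
        self.cfg = cfg
        self.enabled = getattr(cfg, "enabled", True)
        self._fh = None

    def setup(self, experiment_name: str, config: dict[str, Any]) -> None:
        # One .jsonl file per experiment; the first line records the config.
        self._fh = Path(f"{experiment_name}.jsonl").open("w")
        self._fh.write(json.dumps({"config": config}) + "\n")

    def log_metrics(
        self, metrics: dict[str, Any], step: int, group: str = "train"
    ) -> None:
        if not self.enabled or self._fh is None:
            return
        self._fh.write(json.dumps({"group": group, "step": step, **metrics}) + "\n")

    def close(self) -> None:
        if self._fh is not None:
            self._fh.close()
            self._fh = None
```

Because the base class provides a no-op `state_dict()`, a stateless backend like this one only needs to implement the three abstract methods.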

__init__(cfg, state_dict=None, **kwargs)

Initialize the metrics logger.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `cfg` | | Logger configuration (subclass of MetricsLoggerConfig). | required |
| `state_dict` | | Optional state for resuming. | `None` |
| `**kwargs` | | Additional keyword arguments. | `{}` |
Source code in optimus_dl/modules/loggers/base.py
def __init__(self, cfg, state_dict=None, **kwargs):
    """Initialize the metrics logger.

    Args:
        cfg: Logger configuration (subclass of MetricsLoggerConfig).
        state_dict: Optional state for resuming.
        **kwargs: Additional keyword arguments.
    """
    self.cfg = cfg
    self.enabled = cfg.enabled if hasattr(cfg, "enabled") else True

    if not self.enabled:
        logger.info(f"{self.__class__.__name__} disabled via configuration")
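The `enabled` fallback above can be isolated in a small sketch (the helper name `resolve_enabled` is illustrative, not part of the API): `cfg.enabled` is honored when the attribute exists, and the logger defaults to enabled otherwise.

```python
from types import SimpleNamespace


def resolve_enabled(cfg) -> bool:
    # Same logic as the constructor: opt out only via an explicit
    # cfg.enabled = False; any config without the attribute is enabled.
    return cfg.enabled if hasattr(cfg, "enabled") else True


assert resolve_enabled(SimpleNamespace(enabled=False)) is False
assert resolve_enabled(SimpleNamespace()) is True
```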

close() abstractmethod

Perform any necessary cleanup and flush remaining logs.

Called at the end of the training or evaluation process.

Source code in optimus_dl/modules/loggers/base.py
@abstractmethod
def close(self) -> None:
    """Perform any necessary cleanup and flush remaining logs.

    Called at the end of the training or evaluation process.
    """
    pass

log_metrics(metrics, step, group='train') abstractmethod

Record a set of metrics for a specific training step.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `metrics` | `dict[str, Any]` | Dictionary mapping metric names to values. | required |
| `step` | `int` | The current training iteration or step. | required |
| `group` | `str` | The metrics group (e.g., 'train', 'eval'). | `'train'` |
Source code in optimus_dl/modules/loggers/base.py
@abstractmethod
def log_metrics(
    self, metrics: dict[str, Any], step: int, group: str = "train"
) -> None:
    """Record a set of metrics for a specific training step.

    Args:
        metrics: Dictionary mapping metric names to values.
        step: The current training iteration or step.
        group: The metrics group (e.g., 'train', 'eval').
    """
    pass
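The call pattern from a training loop can be sketched with a hypothetical in-memory backend (`ListLogger` is illustrative, not part of the framework): metrics from the loop go to the default `'train'` group, while periodic evaluation passes a different `group`.

```python
from typing import Any


class ListLogger:
    """Hypothetical in-memory backend, just to show the call shape."""

    def __init__(self) -> None:
        self.records: list[tuple[str, int, dict[str, Any]]] = []

    def log_metrics(
        self, metrics: dict[str, Any], step: int, group: str = "train"
    ) -> None:
        self.records.append((group, step, dict(metrics)))


log = ListLogger()
for step in range(3):
    # group defaults to 'train' for loop metrics
    log.log_metrics({"loss": 1.0 / (step + 1)}, step=step)
# evaluation metrics go to a separate group at the same step counter
log.log_metrics({"accuracy": 0.9}, step=2, group="eval")
```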

setup(experiment_name, config) abstractmethod

Set up the logger with experiment metadata and config.

This is typically called once at the start of a training run.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `experiment_name` | `str` | A unique name for the experiment. | required |
| `config` | `dict[str, Any]` | The full training configuration (as a dictionary). | required |
Source code in optimus_dl/modules/loggers/base.py
@abstractmethod
def setup(self, experiment_name: str, config: dict[str, Any]) -> None:
    """Setup the logger with experiment metadata and config.

    This is typically called once at the start of a training run.

    Args:
        experiment_name: A unique name for the experiment.
        config: The full training configuration (as a dictionary).
    """
    pass

state_dict()

Return the logger's internal state for checkpointing.

Returns:

| Type | Description |
| --- | --- |
| `dict[str, Any]` | A dictionary containing any state needed to resume the logger (e.g., a run ID for WandB). |

Source code in optimus_dl/modules/loggers/base.py
def state_dict(self) -> dict[str, Any]:
    """Return the logger's internal state for checkpointing.

    Returns:
        A dictionary containing any state needed to resume the logger
        (e.g., a run ID for WandB).
    """
    return {}
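A backend that does carry state would override this default. The round trip can be sketched as follows, assuming a WandB-style run ID; `ResumableLogger` and its `run_id` field are hypothetical, not part of the base API:

```python
import uuid
from typing import Any


class ResumableLogger:
    """Hypothetical stateful backend: keeps a run ID across restarts."""

    def __init__(self, cfg=None, state_dict=None, **kwargs):
        state_dict = state_dict or {}
        # Reuse the saved run ID when resuming; otherwise start a new run.
        self.run_id = state_dict.get("run_id") or uuid.uuid4().hex

    def state_dict(self) -> dict[str, Any]:
        return {"run_id": self.run_id}


first = ResumableLogger()
ckpt = first.state_dict()  # saved alongside the training checkpoint
resumed = ResumableLogger(state_dict=ckpt)
assert resumed.run_id == first.run_id
```

The framework would persist `state_dict()` with the training checkpoint and pass it back through the constructor's `state_dict` argument on resume.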