Metrics

When running your code in AskAnna, you might want to track metrics that are relevant to your run. With AskAnna you can track metrics and retrieve the tracked metrics from runs, using the Python SDK or the AskAnna API. The different ways to track and get metrics are described here.

Track metrics

When you are training models, working on experiments, or serving results, you might want to track metrics that are relevant for the run. For example, with metrics you can compare the accuracy of runs, or see how a model's loss function is converging for every iteration.

If you are using Python, the AskAnna Python SDK makes it easy for you to track metrics. If you are working with another language, you can use the AskAnna API to track metrics for your run.

Python

If you run your code in Python and want to track a metric, you only have to add two lines of code and make sure the Python SDK is installed.

The two lines:

from askanna import track_metric

track_metric(name, value)
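
For example, to track the accuracy of a model (the metric name and value here are illustrative):

from askanna import track_metric

# Track a single metric for the current run
track_metric("accuracy", 0.92)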

When you run a job in AskAnna, every metric you track will be stored in the run. On the run page you find all metrics that are tracked for that run. For the value, AskAnna supports the following data types (a short example follows the list):

  • String
  • Integer
  • Float/numeric
  • Date
  • Time
  • Datetime (ISO8601)
  • Boolean
  • Tag
  • Dictionary
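
For example, tracking a few of these value types (the names and values are illustrative):

from askanna import track_metric

track_metric("model", "regression")   # string
track_metric("epochs", 10)            # integer
track_metric("accuracy", 0.87)        # float/numeric
track_metric("converged", True)       # boolean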

Local run

If you run the code locally, no run SUUID is set. In that case, we store the metrics in a temporary JSON file locally and print the location of this JSON file.

It is also possible to add labels. For example, this can be useful when you run multiple models and want to compare the accuracy per model. You can add a single label, a list of labels, or a dictionary of labels with values. If you add a label without a value, it will be processed as a label with value type tag. Some examples:

track_metric(..., label="label a")

track_metric(..., label=["label a", "label b"])

track_metric(..., label={
                           "model": "regression",
                           "accuracy_type": "R-squared",
                         })
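
Combined with a name and value, a complete call could look like this (the name, value, and labels are illustrative):

track_metric("accuracy", 0.87, label={
    "model": "regression",
    "accuracy_type": "R-squared",
})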

It's also possible to track multiple metrics at the same time. Use track_metrics and pass a dictionary with the metrics you want to track. Optionally, you can also add labels to track_metrics.

from askanna import track_metrics

track_metrics({
                "accuracy": accuracy,
                "f1-score": f1score,
                "precision": precision,
              }, label="label a")
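
Because labels accept the same formats as with track_metric, you can also pass a dictionary of labels to track_metrics. A minimal sketch with illustrative values:

from askanna import track_metrics

# Illustrative values
accuracy = 0.87
f1score = 0.84

track_metrics({
    "accuracy": accuracy,
    "f1-score": f1score,
}, label={
    "model": "regression",
})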

Get metrics

Run page

On the run page, you can view the metrics tracked for that run. In the next example we tracked the classification report from scikit-learn.

In the table you find the name of the metric and the value tracked. In this example we ran different models, and we used the labels to make it possible to identify the metrics of the different models.

[Screenshot: Run page - Metrics]

Next to the table view, you can also choose to view the metrics as JSON. You can also download or copy the JSON with the metrics of the run.

Python

You can use the tracked metrics directly in Python. With metrics.get you retrieve the metrics of a specific run. The output is a JSON dictionary with the metrics of that run:

from askanna import metrics

run_metrics = metrics.get(run="{{ RUN SUUID }}")

If you want to filter the metrics, for example to only keep metrics with the name accuracy, you can use a filter:

metrics_accuracy = run_metrics.filter(name="accuracy")
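
Putting the two steps together, a minimal sketch (the run SUUID is a placeholder, and printing is just one way to inspect the result):

from askanna import metrics

run_metrics = metrics.get(run="{{ RUN SUUID }}")

# Keep only the metrics tracked with the name "accuracy"
metrics_accuracy = run_metrics.filter(name="accuracy")
print(metrics_accuracy)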

If you use the runs module, you can also get all metrics of a job's runs next to the other info this module provides. To include metrics, set include_metrics to True. Note that by default a maximum of 100 runs is returned; you can increase the limit value to get more runs.

from askanna import runs

job_runs = runs.get(job="{{ JOB SUUID or JOB NAME }}",
                    limit=100,
                    include_metrics=True)

If you want to get run information including metrics for a specific set of runs, you can list the runs you want to retrieve:

selected_runs = runs.get(runs=["{{ RUN SUUID 1 }}", "{{ RUN SUUID 2 }}"],
                         include_metrics=True)

API

You can use the AskAnna API to retrieve the metric information. It's possible to get all metrics from a job, or to get the metrics from a single run:

GET /v1/runinfo/{{ RUN SUUID }}/metrics/

GET /v1/job/{{ JOB SUUID }}/metrics/

The following parameters are available:

  • limit / offset
  • metric_name (string to query only metrics with name "x")
  • metric_value (string to query a specific value for metrics)
  • label_name (string to query metrics that have label "x")
  • label_value (string to query metrics that have label value "x")

If we go back to the example from the run page: if you only want to get the metrics for accuracy, with the API you could filter them by using:

GET /v1/runinfo/{{ RUN SUUID }}/metrics/?metric_name=accuracy
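
As a sketch, the same filtered request using the requests package in Python. The base URL and the token-based Authorization header are assumptions; check your AskAnna environment for the exact values:

import requests

# Assumptions: API base URL and token authentication
url = "https://beta-api.askanna.eu/v1/runinfo/{{ RUN SUUID }}/metrics/"
headers = {"Authorization": "Token {{ YOUR API TOKEN }}"}

# Only retrieve metrics with the name "accuracy"
response = requests.get(url, headers=headers, params={"metric_name": "accuracy"})
print(response.json())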