How this Helps

Add custom metadata, such as temperature readings, timestamps, or tool names, to images via the API. Custom metadata makes your datasets easier to filter and search.


Overview

This guide explains how to import custom metadata to your datasets via the API. Each metadata field is user-defined and supports one of several data types.

Custom metadata is associated at the image level and is immutable once uploaded.

To import custom metadata in Visual Layer, you must:

  • Declare a metadata field (name + type)
  • Upload a JSON file containing values for that field

All API requests in the Cloud environment require authentication using a valid API token.

Supported value types include:

  • string
  • float
  • datetime (UTC format)
  • enum (single or multi-select)
  • link (URL)

Each upload task adds exactly one metadata field. To add multiple fields, create a separate task for each one, and make sure every field has a unique field_name.


Steps

Pre-processing

  1. Ensure your dataset is in a READY state.
  2. Prepare a JSON file as described below.

Uploading the metadata

  1. Declare a new metadata attribute and receive a task_id.
  2. Upload metadata values in a JSON file.
  3. Track the ingestion progress via the status endpoint.
  4. Metadata becomes available for filtering and retrieval.

Preparing the JSON File

To upload custom metadata, you’ll need to associate values with the correct media items in your dataset.

Each entry must include:

  • A media_id – a unique identifier generated by Visual Layer for each image
  • A value – the metadata value for that image

How to get media_ids:

  1. Export your dataset from the Visual Layer platform.
  2. The exported JSON file will include all image-level metadata, including the media_id assigned to each media item.
  3. Use these media_ids to match your custom metadata with the corresponding images.

Your custom metadata JSON file should follow this structure:

[
  { "media_id": "f3e3d00c-1a8e-4a58-85dc-cdea7a8365a0", "value": "Sunny" },
  { "media_id": "b1ac12e7-53dc-4c7c-b93d-1836b4aa4f95", "value": "Cloudy" }
]
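As a sketch, this file can also be generated programmatically. The snippet below builds the list of media_id/value entries from a mapping and writes it as JSON; the field values and the output filename are illustrative, and the media_ids should come from your dataset export:

```python
import json

# Map each Visual Layer media_id (taken from your dataset export)
# to the metadata value you want to attach. Values are illustrative.
weather_by_media_id = {
    "f3e3d00c-1a8e-4a58-85dc-cdea7a8365a0": "Sunny",
    "b1ac12e7-53dc-4c7c-b93d-1836b4aa4f95": "Cloudy",
}

# Build the upload payload: a list of {"media_id", "value"} entries.
entries = [
    {"media_id": media_id, "value": value}
    for media_id, value in weather_by_media_id.items()
]

# Write the file that will be uploaded in Step 2.
with open("weather_metadata.json", "w") as f:
    json.dump(entries, f, indent=2)
```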



Step 1: Declare a New Metadata Field

Each declaration creates a new upload task for a single field. You can repeat the process to add more fields; just ensure each field_name is unique within the dataset.

POST /api/v1/datasets/{dataset_id}/custom_metadata/tasks

Request Body

{
  "field_name": "Your_field_name",
  "field_type": "string"
}

The field_type can also be float, datetime, or link.

For enum

is_multi = false

{
  "field_name": "priority",
  "field_type": "enum",
  "enum_options": ["Low", "Medium", "High"],
  "is_multi": false
}

is_multi = true

{
  "field_name": "multi_colors",
  "field_type": "enum",
  "enum_options": [
    "red", "green", "blue", "yellow", "purple"
  ],
  "is_multi": true
}

The is_multi flag determines whether each image can have one or multiple values from the enum options.

  • is_multi: false (Each image can have only one value.)
  • is_multi: true (Each image can have multiple values.)

Response Body:

{
  "task_id": "1234e567-e89b-12d3-a456-426614174000",
  "status": "INIT"
}

Make sure to save the task_id; you'll need it for the next steps in the flow.
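As a minimal sketch in Python, the declaration can be split into building the request body (including the enum-specific fields) and sending it. The base URL and the Bearer-style Authorization header below are assumptions; substitute the values your Cloud environment actually uses:

```python
import json
import urllib.request


def build_field_declaration(field_name, field_type, enum_options=None, is_multi=False):
    """Build the request body for declaring a custom metadata field.

    enum fields additionally carry their allowed options and the
    is_multi flag; the other types only need a name and a type.
    """
    body = {"field_name": field_name, "field_type": field_type}
    if field_type == "enum":
        body["enum_options"] = list(enum_options or [])
        body["is_multi"] = is_multi
    return body


def declare_field(dataset_id, body, api_token, base_url):
    """POST the declaration and return the task_id from the response.

    base_url and the Authorization header format are assumptions;
    check your environment's API reference.
    """
    req = urllib.request.Request(
        f"{base_url}/api/v1/datasets/{dataset_id}/custom_metadata/tasks",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["task_id"]
```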


Step 2: Upload Metadata File

Use the received task_id to upload your metadata file. Each entry must include a valid media_id and a value matching the declared field type.

POST /api/v1/datasets/{dataset_id}/custom_metadata/tasks/{task_id}

Upload Format

[
  { "media_id": "f3e3d00c-1a8e-4a58-85dc-cdea7a8365a0", "value": 23.5 },
  { "media_id": "b1ac12e7-53dc-4c7c-b93d-1836b4aa4f95", "value": 24.0 }
]

For enum - when is_multi = false

[
  {
    "media_id": "2283963d-6c78-442e-9606-352ab1c30e64",
    "value": "yellow"
  }
]

For enum - when is_multi = true

[
  {
    "media_id": "346ba239-5422-475c-aba3-afffc984c13a",
    "value": [
      "black",
      "yellow",
      "blue",
      "pink"
    ]
  }
]
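Since rows with type mismatches are rejected during ingestion, it can be worth checking the file locally before uploading. The helper below is a hypothetical pre-upload check, not part of the API; it covers string, float, link, and enum values (datetime validation is omitted for brevity) and reports per-row problems in the same spirit as the status endpoint's sample_errors:

```python
def validate_entries(entries, field_type, is_multi=False, enum_options=None):
    """Return a list of (row_index, reason) pairs for invalid entries.

    Checks that each entry has a media_id and that its value matches
    the declared field type. Enum values must come from enum_options;
    multi-select enums expect a list of options.
    """
    expected = {"string": str, "float": (int, float), "link": str}
    errors = []
    for i, entry in enumerate(entries):
        value = entry.get("value")
        if "media_id" not in entry:
            errors.append((i, "Missing media_id"))
        elif field_type == "enum":
            values = value if is_multi else [value]
            if not isinstance(values, list) or any(
                v not in (enum_options or []) for v in values
            ):
                errors.append((i, "Value not in enum_options"))
        elif field_type in expected and not isinstance(value, expected[field_type]):
            errors.append((i, f"Expected {field_type}"))
    return errors
```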

Step 3: Check Upload Status

Track progress and get visibility into errors (e.g., type mismatches).

GET /api/v1/datasets/{dataset_id}/custom_metadata/tasks/{task_id}/status

Response Example:

{
  "task_id": "...",
  "status": "COMPLETED",
  "progress": 100.0,
  "inserted_rows": 6535,
  "error_count": 0,
  "sample_errors": []
}

Error Response Example

{
  "task_id": "...",
  "status": "COMPLETED_WITH_ERRORS",
  "progress": 100.0,
  "inserted_rows": 6535,
  "error_count": 594,
  "sample_errors": [
    { "reason": "Expected String", "row_index": 17 }
  ]
}

At this stage, uploaded metadata is immutable: it cannot be edited or replaced once submitted.
If any errors occur during upload, only the problematic rows fail; all valid rows are ingested successfully.
You can always declare a new field_name and re-upload the data.
Support for updating existing metadata will be added in the future.
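Ingestion is asynchronous, so a client typically polls the status endpoint until a terminal status is reached. The sketch below keeps the polling logic independent of any HTTP client by accepting a fetch_status callable that returns the status-endpoint JSON; the set of terminal statuses (in particular "FAILED") is an assumption beyond the two shown above:

```python
import time

# COMPLETED and COMPLETED_WITH_ERRORS appear in the examples above;
# FAILED is an assumed additional terminal status.
TERMINAL_STATUSES = {"COMPLETED", "COMPLETED_WITH_ERRORS", "FAILED"}


def wait_for_task(fetch_status, poll_interval=2.0, timeout=300.0):
    """Poll fetch_status() until the task reaches a terminal status.

    fetch_status is any zero-argument callable returning the parsed
    status response, so this works with whatever HTTP client you use.
    Returns the final status payload, or raises TimeoutError.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status()
        if status["status"] in TERMINAL_STATUSES:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"Task still {status['status']} after {timeout}s")
        time.sleep(poll_interval)
```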


Step 4: Retrieve Metadata

List Declared Metadata Fields

GET /api/v1/datasets/{dataset_id}/custom_metadata/schema

Returns all declared fields and types.

Get Metadata for an Image

GET /api/v1/datasets/{dataset_id}/image/{image_id}/custom_metadata

Response:

[
  {
    "field_name": "lighting",
    "field_type": "enum",
    "value": "Sunny"
  },
  {
    "field_name": "temperature",
    "field_type": "float",
    "value": 23.5
  }
]
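The per-image response is a list of field objects. When you only care about the values, a small helper (hypothetical, not part of the API) can flatten it into a plain field_name-to-value mapping:

```python
def metadata_to_dict(fields):
    """Flatten the per-image metadata response into {field_name: value}."""
    return {field["field_name"]: field["value"] for field in fields}
```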

Step 5: Search and Filter Using Custom Metadata

Once your custom metadata is uploaded successfully, the declared field_name will automatically appear in the Filters section of the UI.

You can then filter dataset images using these fields — just like any other built-in filter.

Supported filter types:

  • String: substring match
  • Float: numeric comparisons (>, <, =)
  • Datetime: range queries (BETWEEN start_date AND end_date)
  • Enum: exact match (single or multi-select, depending on configuration)
  • Link: substring or exact match

Filtering behavior may vary depending on the field type and the selected operator.

Notes

  • Only supported at the image level (object-level support is not yet available).
  • Metadata is immutable and cannot be updated or replaced.
  • Each field must have a unique name per dataset.