Search your image dataset using natural language captions—powered by semantic understanding.
The `GET /api/v1/explore/{dataset_id}` endpoint enables semantic caption search using the `image_caption` parameter.
This type of search returns results based on intent and meaning, not just exact text.
"beach sunset"
may return:
red car
— matches loosely related red items and cars"red car"
— matches images of an actual red carName | Type | Description |
| --- | --- | --- |
| `image_caption` | string | The search query (e.g. `"beach sunset"`) |
| `threshold` | integer | Clustering threshold (0–4) |
| `entity_type` | string | Must be `IMAGES` or `OBJECTS` |
| `textual_similarity_threshold` | float | Score cutoff (0.0–1.0) for relevance |
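As a rough sketch, a request combining these parameters might look like the following. The base URL, dataset ID, API key, and Bearer authentication header are placeholders, not values defined by this documentation.

```python
import requests

# Placeholder values: substitute your own deployment URL, dataset ID, and credentials.
BASE_URL = "https://api.example.com"
DATASET_ID = "your-dataset-id"
API_KEY = "your-api-key"

params = {
    "image_caption": "beach sunset",        # natural-language search query
    "threshold": 2,                         # clustering threshold (0-4)
    "entity_type": "IMAGES",                # IMAGES or OBJECTS
    "textual_similarity_threshold": 0.75,   # relevance score cutoff (0.0-1.0)
}

response = requests.get(
    f"{BASE_URL}/api/v1/explore/{DATASET_ID}",
    params=params,
    headers={"Authorization": f"Bearer {API_KEY}"},  # assumed auth scheme
    timeout=30,
)
response.raise_for_status()
results = response.json()
```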
The response includes the following fields:

- `clusters`: Visual similarity groups
- `media`: Individual results
- `caption`: Caption match
- `relevance_score`: Quality of the match
- `preview`: Thumbnail preview
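To illustrate how these fields might be consumed, here is a sketch that iterates clusters and their media items. The nesting (clusters containing media, with each media item carrying `caption`, `relevance_score`, and `preview`) is an assumption based on the field list above, not a guaranteed response schema.

```python
# Continuing from the `results` object returned by the request above.
# The nesting used here is an assumption: clusters -> media -> caption/relevance_score/preview.
for cluster in results.get("clusters", []):
    for item in cluster.get("media", []):
        caption = item.get("caption")
        score = item.get("relevance_score")
        preview = item.get("preview")  # thumbnail preview
        if score is not None and score >= 0.75:
            print(f"{score:.2f}  {caption}  ({preview})")
```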