Reference for every panel, control, and action inside the Explore tab of a Visual Layer dataset.
The Explore tab is the primary workspace inside a dataset. All viewing, search, filtering, cluster navigation, quality analysis, and selection happen here. For a task-oriented walk-through of the workflow this tab supports, see Exploring Datasets.
The Explore tab is organized into four numbered regions that stay in place as you navigate, plus the Insights Panel on the right.
| # | Region | Purpose |
|---|--------|---------|
| 1 | Top Header | Dataset title, dataset-level stats, status indicator, and the tab row for Data, Views, Explore, and Enrich |
| 2 | Filter & Search Area | Search Type Toggle, Layout Toggle, Search box, and Filter Menu |
| 3 | Dataset Actions | Dataset-wide controls on the right side of the top header: status, VL Chat, notifications, share, and the three-dot menu |
| 4 | Content Grid | The main display area showing clusters, images, or objects |
The Insights Panel on the right of the workspace hosts insights, quality issues, labels, user tags, and metadata for the current view or selection. It stays visible alongside the numbered regions.
The filter and search area is the horizontal row directly beneath the header. It holds the Search Type Toggle on the left, the Layout Toggle on the right, and the Search box and Filter Menu in the center.
The Search Type Toggle on the left side of the filter and search area sets what you are searching against: Images or Objects. This is more than a cosmetic change — it determines which embeddings are used for search, which metadata appears in the workspace, and what every filter, search, and hover action operates on.
The Objects option is available once an object-detection model has run against the dataset. If the option is disabled, run a model such as VL-Object-Detector from the Enrich tab first. For what changes in the workspace when you flip between Images and Objects, see Switching the Search Type.
Each cluster card displays one metadata type at a time. Use the switch at the top of the cluster to change which metadata is shown.
The three options answer different questions about the cluster:
| Option | Source | Question Answered |
|--------|--------|-------------------|
| Labels | Class names from annotations or detection models. Image labels come from image-level classification (for example, VL-Image-Tagger) or from annotation files imported at creation time. Object labels come from an object-detection model (for example, VL-Object-Detector) and describe each detected crop individually. The two label types are independent: an image can carry its own labels while the objects inside it carry different ones from a different model. | What class does the platform think this is? |
| Captions | Natural-language descriptions generated by a captioning model such as VL-Image-Captioner, VL Advanced Captioner, or VL-Object-Captioner. Captions power Caption Search. | What is happening in this image or crop? |
| User Tags | Custom tags applied manually by users or teammates. User tags are independent of any model output. See Tag Items for how tags are created and managed. | How does our team want to organize this? |
Which labels and captions appear on a cluster depends on which enrichment models have run against the dataset. For the full mapping of models to the metadata they produce, see the Model Catalog. To run or add models, go to the Enrich tab.
Click any label, caption term, or user tag in a cluster card to apply it as a filter across the dataset.
The Layout Toggle on the right side of the filter and search area switches between Visual Similarity (clustered) and Flat View. It controls how the content grid is arranged, not what appears in it, so it combines with the Search Type Toggle to produce clustered images, clustered objects, a flat image grid, or a flat object grid.
The center of the filter and search area contains the Search box and two adjacent buttons.
| Element | Description |
|---------|-------------|
| Search box | Text field that drives both Semantic Search and Caption Search. Type a natural-language description to find conceptually matching content, or use boolean operators (AND, OR, -, "phrase") when filtering by captions. See How to Search & Filter for the full syntax. |
| Visual Search button | Opens the external-image upload flow for finding visually similar content. Drag and drop an image into the dialog or click Browse Images to pick one from disk. See Visual Search for all four ways to trigger visual similarity. |
| Filter Menu | Opens the filter menu for building and managing active queries. See Filter Controls below. |
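The boolean caption operators behave like standard term matching over caption text. As a minimal sketch of those semantics (plain Python with invented caption strings, not Visual Layer's actual implementation):

```python
def matches(caption: str, required=(), any_of=(), excluded=(), phrases=()):
    """Return True if a caption satisfies a boolean caption query.

    required: terms joined with AND -- all must appear
    any_of:   terms joined with OR  -- at least one must appear
    excluded: terms prefixed with - -- none may appear
    phrases:  quoted phrases        -- matched as exact substrings
    """
    text = caption.lower()
    return (all(t.lower() in text for t in required)
            and (not any_of or any(t.lower() in text for t in any_of))
            and not any(t.lower() in text for t in excluded)
            and all(p.lower() in text for p in phrases))

# The query 'dog AND leash -crowd' over two invented captions:
print(matches("a dog on a leash in the park",
              required=["dog", "leash"], excluded=["crowd"]))   # True
print(matches("a crowd watching a dog on a leash",
              required=["dog", "leash"], excluded=["crowd"]))   # False
```

The sketch treats AND as "all terms present", OR as "at least one present", and `-` as exclusion, mirroring the operator list above.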
Selecting a filter opens the Query Modal for operator and threshold configuration. Active filters appear as chips in the Query Panel below the bar and are combined with AND logic. Click a chip to edit it, click its X to remove it, or use Clear All to remove every chip at once. Save View stores the current filter combination as a named view, which then appears in both the Save View dropdown and the Views tab. For every operator, threshold, and filter-specific workflow, see the Filter Options Reference.
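Because chips combine with AND logic, each added filter narrows the result set. For example, a hypothetical chip combination such as:

```text
Labels: dog   AND   Issues: Blurry   AND   User Tags: review-queue
```

matches only items that carry the label dog, are flagged Blurry, and have been tagged review-queue by a teammate.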
Dataset Actions occupy the right side of the top header and apply to the entire dataset, regardless of which tab is open.
| Control | Description |
|---------|-------------|
| Status indicator | The current dataset state (Indexing, Ready, and so on). A dataset must be Ready before its content appears in the Explore tab. Dataset status is also visible from the Dataset Inventory. |
| VL Chat | Conversational launcher for building searches and filters from a natural-language prompt. See VL Chat. |
The content grid is the main display area and the source of all in-place actions. Its contents depend on the active layout selected from the Layout Toggle and the search type selected from the Search Type Toggle.
In Visual Similarity mode, the grid displays cluster cards. Each card shows a representative thumbnail, the cluster size, and label coverage. When the Search Type Toggle is set to Objects, each card groups detected object crops instead of full images.
Hovering any cluster card reveals in-place actions:
| Action | Description |
|--------|-------------|
| Find Similar | Launches a Visual Search using the cluster's representative image as the anchor. |
| Open cluster | Click the card to drill into its individual images or objects. |
For the concepts behind clustering and embeddings, see How Search & Filter Work. For cluster navigation tips and examples, see Understanding Clusters. For multi-step workflows that combine clusters with filters, see Recipes.
In Flat View, the grid displays every item in a single ungrouped layout. When the Search Type Toggle is set to Images, each cell is a full image; when set to Objects, each cell is a detected object crop. Object-level browsing becomes available once an object-detection model has run against the dataset — see the Enrich tab for the workflow.
Hovering any item reveals in-place actions:

| Action | Description |
|--------|-------------|
| Find Similar | Runs a Visual Search using the hovered item's embedding as the anchor. |
| Select | Toggles the item into the current selection. |
Clicking an image opens its details page, where a Region of Interest crop can launch a crop-based Find Similar search. See Search Against a Region of Interest for the full flow.
The Insights Panel runs along the right edge of the workspace and shows insights about the current view or selection. Its panels update as filters change, views change, or items are selected.
The panel exposes the following sections:
| # | Panel | Description |
|---|-------|-------------|
| 1 | Enrichment Models | The models that have run against the dataset and any available preview shortcuts. See Enrich Your Datasets for the run flow and the Model Catalog for every available model. |
| 2 | Issues | Quality-issue counts with drill-in links for each category: Duplicates, Outliers, Dark, and Blurry. Clicking any row applies the corresponding filter. For the signal-quality concepts behind these categories, see How Signal Quality Is Measured. |
| 3 | Labels | The label distribution for the current view. Clicking a label filters by it. To bring labels in from an external annotation file, see Importing Annotations. |
| 4 | User Tags | The user tags applied to items in the current view. Clicking a tag filters by it. See Tag Items for how tags are created and managed. |
| 5 | Metadata | The custom metadata fields available for filtering. See Custom Metadata for how to attach metadata at creation time or via the API. |