VL Chat enables you to explore visual datasets through natural language conversations. Ask questions in plain English, and the system translates your queries into structured dataset operations, delivering results instantly.

Why Use VL Chat

VL Chat eliminates the need to understand complex filter syntax or navigate through multiple UI panels. Instead of clicking through filters and dropdown menus, you can:
  • Ask questions naturally: “Show me blurry images from last week.”
  • Combine multiple criteria: “Find images with temperature above 30 that have blur issues.”
  • Navigate your data conversationally: “Show me the clearest examples from that cluster.”
  • Refine results iteratively: “Now filter those to only show images tagged as urgent.”
The system interprets your intent, validates it against your dataset structure, and returns relevant results with clear explanations of what it found.

Accessing VL Chat

To access VL Chat, open any dataset in Visual Layer and click the Chat icon in the top navigation bar. The chat interface appears as a panel on the right side of the screen, so you can chat and browse your dataset side by side.

How to Use VL Chat

Asking Questions

Type your question in natural language into the chat input field and press Enter. The system processes your query and returns results along with an explanation of what it understood and what it found. Example queries:
  • “Show me images with blur issues”
  • “Find all images tagged as defective”
  • “Display images from cluster 5”
  • “Show me images with high uniqueness scores”
  • “Find images labeled as cats”

Understanding Responses

When you ask a question, VL Chat provides:
  1. Interpretation summary: A clear statement of what the system understood from your query.
  2. Validation feedback: Which parts of your query were applied successfully and which couldn’t be, such as a field that isn’t configured in your dataset.
  3. Visual results: The actual images or objects matching your criteria.
  4. Alternative interpretations: Suggestions if your query was ambiguous or if certain fields aren’t available.
Example response:
Understood query with confidence 0.85. Applied blur filter successfully and
filtered to cluster 5. Showing 47 images that match your criteria.
If part of your query can’t be processed, the system explains why:
I found 23 blurry images in your dataset. Note: This dataset doesn't have a
'Temperature' custom metadata field configured, so I could only search for blur.

Multi-Turn Conversations

VL Chat maintains context across multiple messages, allowing you to refine your queries progressively:

You: “Show me images with blur”
VL Chat: Returns 156 blurry images

You: “Now show only the ones from last week”
VL Chat: Filters the previous results to show 23 images from last week

You: “Which cluster has the most of these?”
VL Chat: Analyzes the filtered results and highlights cluster 12

Each follow-up question builds on the previous context, making exploration feel natural and conversational.
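One way to picture this is that each follow-up extends the filter list from the previous turn rather than replacing it. The sketch below is an illustration only: the blur entry follows the documented vql_filters shape, while the date-range entry and the merging behavior are assumptions made for the example.

# Illustration of multi-turn context accumulation (Python). The blur
# filter matches the documented vql_filters shape; the date-range filter
# shape is hypothetical and shown only to make the idea concrete.

turn_1 = {
    "entity_type": "IMAGES",
    "vql_filters": [
        {"issues": {"op": "issue", "value": "blur"}},
    ],
}

# Follow-up "Now show only the ones from last week" extends the
# previous turn's filters instead of starting over.
turn_2 = {
    "entity_type": "IMAGES",
    "vql_filters": turn_1["vql_filters"] + [
        # Hypothetical shape for a date-range criterion:
        {"capture_date": {"op": "range", "min": "2025-01-06", "max": "2025-01-12"}},
    ],
}

print(turn_2["vql_filters"])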

Types of Queries You Can Ask

VL Chat understands queries about different aspects of your dataset:

Filter by Issues

Find images with specific quality issues detected by Visual Layer (a structured sketch of the last example follows the list):
  • “Show me blurry images”
  • “Find all images with duplicates”
  • “Display images that have outlier issues”
  • “Show me images with mislabel issues above 80% confidence”
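For reference, the last query above maps naturally onto the structured filter shape documented in the JSON view section later in this guide. The sketch below follows that shape; treat it as an illustration rather than the exact query VL Chat generates.

# Sketch of "Show me images with mislabel issues above 80% confidence",
# following the vql_filters issue shape shown in the JSON view section.
mislabel_filter = {
    "issues": {
        "op": "issue",
        "value": "mislabel",
        "confidence_min": 0.8,  # "above 80% confidence"
        "confidence_max": 1.0,
    }
}

print(mislabel_filter)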

Filter by Labels

Search for specific labels or annotations in your dataset:
  • “Show me images labeled as cats”
  • “Find all images with car labels”
  • “Display images labeled as defective”

Filter by Tags

Query images based on user-assigned tags:
  • “Show me images tagged as urgent”
  • “Find all images with the reviewed tag”
  • “Display images tagged for training”

Filter by Custom Metadata

If your dataset includes custom metadata fields, query them directly (a hypothetical filter sketch follows the list):
  • “Show me images with temperature above 30”
  • “Find images from Station A”
  • “Display images where batch number is 12345”
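As a rough illustration, a numeric comparison such as “temperature above 30” might translate into a filter like the one below. The field and operator names here are hypothetical assumptions; only the overall filter-entry structure is modeled on the documented JSON output.

# Hypothetical sketch of a custom-metadata filter for "temperature above 30".
# The field name and "op" value are illustrative assumptions, not
# documented VQL values.
temperature_filter = {
    "temperature": {
        "op": "greater_than",  # hypothetical operator name
        "value": 30,
    }
}

print(temperature_filter)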

Explore Clusters

Explore similarity clusters in your dataset:
  • “Show me cluster 5”
  • “Display the largest cluster”
  • “Find clusters with more than 100 images”

Combine Multiple Criteria

Build complex queries by combining different filter types (see the sketch after this list):
  • “Show me blurry images from cluster 3”
  • “Find images labeled as cats with high uniqueness scores”
  • “Display images tagged as urgent that also have blur issues”
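A combined query simply produces multiple entries in vql_filters, one per criterion. In the sketch below, the blur entry follows the documented shape, while the cluster entry is a hypothetical illustration of how a second criterion might appear:

# Sketch of "Show me blurry images from cluster 3" as a filter list.
combined_query = {
    "entity_type": "IMAGES",
    "vql_filters": [
        {"issues": {"op": "issue", "value": "blur"}},  # documented shape
        {"clusters": {"op": "cluster", "value": 3}},   # hypothetical shape
    ],
}

print(combined_query)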

Reviewing Query Results in JSON

VL Chat provides a detailed JSON view of the structured query it generated from your natural language input. This allows you to see exactly how your question was interpreted and what filters were applied.

Accessing the JSON View

When VL Chat returns results, look for the View JSON or Expand Details option in the response. Click it to see the structured representation of your query.

Understanding the JSON Structure

The JSON output shows the complete query structure:
{
  "interpretation": "Understood query with confidence 0.85",
  "confidence": 0.85,
  "extracted_context": {
    "entity_type": "IMAGES",
    "vql_filters": [
      {
        "issues": {
          "op": "issue",
          "value": "blur",
          "confidence_min": 0.8,
          "confidence_max": 1.0
        }
      }
    ]
  },
  "alternative_interpretations": [
    "show all images (without blur filter)",
    "adjust confidence threshold for blur detection"
  ]
}
Key fields:
  • interpretation: Plain language summary of what was understood.
  • confidence: How confident the system is in its interpretation (0.0 to 1.0).
  • extracted_context: The structured filters and parameters that will be applied.
  • entity_type: Whether the query targets images, objects, or clusters.
  • vql_filters: The specific filter conditions applied to your dataset.
  • alternative_interpretations: Other ways the system considered interpreting your query.
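Because the JSON view is plain JSON, you can also paste it into a short script when auditing interpretations in bulk. Below is a minimal sketch, assuming the response matches the structure above; replace the sample with JSON copied from the View JSON panel.

import json

# Sample response reusing the documented structure; substitute the JSON
# copied from the View JSON panel.
raw = """{
  "interpretation": "Understood query with confidence 0.85",
  "confidence": 0.85,
  "extracted_context": {
    "entity_type": "IMAGES",
    "vql_filters": [
      {"issues": {"op": "issue", "value": "blur",
                  "confidence_min": 0.8, "confidence_max": 1.0}}
    ]
  },
  "alternative_interpretations": ["show all images (without blur filter)"]
}"""

response = json.loads(raw)
print(response["interpretation"])
if response["confidence"] < 0.5:  # the guide flags scores below 0.5 as ambiguous
    for alt in response["alternative_interpretations"]:
        print("Consider rephrasing as:", alt)
for f in response["extracted_context"]["vql_filters"]:
    print("Applied filter:", f)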

Using JSON to Refine Queries

Reviewing the JSON helps you understand:
  • Which filters were successfully applied.
  • Which parts of your query couldn’t be processed.
  • What parameters the system inferred from your natural language input.
  • Whether the interpretation matches your intent.
If the JSON reveals a misinterpretation, you can rephrase your question to be more explicit about what you want.

Tips for Effective Queries

Be Specific About Filter Types

When referencing labels, tags, or custom fields, use clear terminology:
  • Good: “Show me images labeled as defective”
  • Less clear: “Show me defective images” (could refer to labels, tags, or quality issues)

Use Exact Field Names

For custom metadata, use the exact field name as it appears in your dataset:
  • Good: “Show me images where Station equals A”
  • Less clear: “Show me images from station A” (if the field is named “StationID”)

Specify Thresholds Explicitly

When filtering by numeric values or confidence scores, include specific thresholds:
  • Good: “Show me blur issues above 85% confidence”
  • Less clear: “Show me images with high blur” (what threshold defines “high”?)

Build Complex Queries Iteratively

Start with a simple query and refine it through follow-up questions:
  1. “Show me images with blur”
  2. “Now filter to cluster 5”
  3. “Show only the ones tagged as urgent”
This approach is often clearer than trying to express everything in a single complex query.

Check Alternative Interpretations

If results don’t match your expectations, review the alternative interpretations provided in the response. These suggestions often reveal where the system’s understanding differed from your intent.

Common Use Cases

Quality Assurance Workflows

Quickly identify and review quality issues:
  • “Show me all blur issues from today’s production run”
  • “Find outlier images in batch 12345”
  • “Display mislabeled images above 90% confidence”

Dataset Curation

Organize and filter datasets for training:
  • “Show me the most unique images from each cluster”
  • “Find images labeled as cats that don’t have the reviewed tag”
  • “Display images with low uniqueness scores”

Manufacturing Inspection

Query inspection results using custom metadata:
  • “Show me images from Station B with defects”
  • “Find images where temperature exceeded 35 degrees”
  • “Display images from shift 2 with quality issues”

Research and Analysis

Explore dataset composition and patterns:
  • “Show me the largest clusters”
  • “Find images with the most labels”
  • “Display images that appear in multiple clusters”

Limitations and Considerations

Field Availability

VL Chat can only query fields that exist in your dataset. If you reference a custom metadata field that hasn’t been configured, the system explains this and shows results using the filters it could apply.

Confidence Scores

The system provides a confidence score with each interpretation. Lower confidence (below 0.5) may indicate ambiguity in your query. Review the JSON output to see how it was interpreted.

Dataset-Specific Behavior

Query capabilities depend on your dataset configuration:
  • Datasets without custom metadata cannot filter by custom fields.
  • Datasets without annotations cannot filter by labels.
  • Similarity clusters must be generated before cluster-based queries work.

Semantic Understanding

VL Chat interprets natural language but has specific parsing rules. Extremely complex nested queries may need to be broken into multiple simpler questions.