Why Use VL Chat
VL Chat eliminates the need to understand complex filter syntax or navigate through multiple UI panels. Instead of clicking through filters and dropdown menus, you can:

- Ask questions naturally: “Show me blurry images from last week.”
- Combine multiple criteria: “Find images with temperature above 30 that have blur issues.”
- Navigate your data conversationally: “Show me the clearest examples from that cluster.”
- Refine results iteratively: “Now filter those to only show images tagged as urgent.”
Accessing VL Chat
To access VL Chat, open any dataset in Visual Layer and click the Chat icon in the top navigation bar. The chat interface appears as a panel on the right side of your screen, allowing you to explore while viewing your dataset.

How to Use VL Chat
Asking Questions
Type your question in natural language into the chat input field and press Enter. The system processes your query and returns results along with an explanation of what it understood and what it found.

Example queries:

- “Show me images with blur issues”
- “Find all images tagged as defective”
- “Display images from cluster 5”
- “Show me images with high uniqueness scores”
- “Find images labeled as cats”
Understanding Responses
When you ask a question, VL Chat provides:

- Interpretation summary: A clear statement of what the system understood from your query.
- Validation feedback: Information about which parts of your query were applied successfully and which referenced fields that weren’t available.
- Visual results: The actual images or objects matching your criteria.
- Alternative interpretations: Suggestions if your query was ambiguous or if certain fields aren’t available.
Multi-Turn Conversations
VL Chat maintains context across multiple messages, allowing you to refine your queries progressively:

You: “Show me images with blur”
VL Chat: Returns 156 blurry images
You: “Now show only the ones from last week”
VL Chat: Filters the previous results to show 23 images from last week
You: “Which cluster has the most of these?”
VL Chat: Analyzes the filtered results and highlights cluster 12

Each follow-up question builds on the previous context, making exploration feel natural and conversational.

Types of Queries You Can Ask
VL Chat understands queries about different aspects of your dataset:

Filter by Issues
Find images with specific quality issues detected by Visual Layer:

- “Show me blurry images”
- “Find all images with duplicates”
- “Display images that have outlier issues”
- “Show me images with mislabel issues above 80% confidence”
Filter by Labels
Search for specific labels or annotations in your dataset:

- “Show me images labeled as cats”
- “Find all images with car labels”
- “Display images labeled as defective”
Filter by Tags
Query images based on user-assigned tags:

- “Show me images tagged as urgent”
- “Find all images with the reviewed tag”
- “Display images tagged for training”
Filter by Custom Metadata
If your dataset includes custom metadata fields, query them directly:

- “Show me images with temperature above 30”
- “Find images from Station A”
- “Display images where batch number is 12345”
Navigate Clusters
Explore similarity clusters in your dataset:

- “Show me cluster 5”
- “Display the largest cluster”
- “Find clusters with more than 100 images”
Combine Multiple Criteria
Build complex queries by combining different filter types:

- “Show me blurry images from cluster 3”
- “Find images labeled as cats with high uniqueness scores”
- “Display images tagged as urgent that also have blur issues”
Reviewing Query Results in JSON
VL Chat provides a detailed JSON view of the structured query it generated from your natural language input. This allows you to see exactly how your question was interpreted and what filters were applied.

Accessing the JSON View
When VL Chat returns results, look for the View JSON or Expand Details option in the response. Click it to see the structured representation of your query.

Understanding the JSON Structure
The JSON output shows the complete query structure:

- interpretation: Plain language summary of what was understood.
- confidence: How confident the system is in its interpretation (0.0 to 1.0).
- extracted_context: The structured filters and parameters that will be applied.
- entity_type: Whether the query targets images, objects, or clusters.
- vql_filters: The specific filter conditions applied to your dataset.
- alternative_interpretations: Other ways the system considered interpreting your query.
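As a sketch of what this view might contain, here is a hypothetical JSON payload for the query “Show me blurry images from cluster 3.” The field names follow the list above, but the values, nesting, and filter syntax are illustrative assumptions, not the definitive Visual Layer schema:

```json
{
  "interpretation": "Images with blur issues that belong to cluster 3",
  "confidence": 0.92,
  "extracted_context": {
    "entity_type": "images",
    "vql_filters": [
      { "field": "issue_type", "operator": "equals", "value": "blur" },
      { "field": "cluster_id", "operator": "equals", "value": 3 }
    ]
  },
  "alternative_interpretations": [
    "Images tagged as 'blurry' in cluster 3"
  ]
}
```

Reading a payload like this tells you at a glance which filters were actually applied and whether the entity type matches what you intended.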
Using JSON to Refine Queries
Reviewing the JSON helps you understand:

- Which filters were successfully applied.
- Which parts of your query couldn’t be processed.
- What parameters the system inferred from your natural language input.
- Whether the interpretation matches your intent.
Tips for Effective Queries
Be Specific About Filter Types
When referencing labels, tags, or custom fields, use clear terminology:

- Good: “Show me images labeled as defective”
- Less clear: “Show me defective images” (could refer to labels, tags, or quality issues)
Use Exact Field Names
For custom metadata, use the exact field name as it appears in your dataset:

- Good: “Show me images where Station equals A”
- Less clear: “Show me images from station A” (if the field is named “StationID”)
Specify Thresholds Explicitly
When filtering by numeric values or confidence scores, include specific thresholds:

- Good: “Show me blur issues above 85% confidence”
- Less clear: “Show me images with high blur” (what threshold defines “high”?)
Build Complex Queries Iteratively
Start with a simple query and refine it through follow-up questions:

- “Show me images with blur”
- “Now filter to cluster 5”
- “Show only the ones tagged as urgent”
Check Alternative Interpretations
If results don’t match your expectations, review the alternative interpretations provided in the response. These suggestions often reveal where the system’s understanding differed from your intent.

Common Use Cases
Quality Assurance Workflows
Quickly identify and review quality issues:

- “Show me all blur issues from today’s production run”
- “Find outlier images in batch 12345”
- “Display mislabeled images above 90% confidence”
Dataset Curation
Organize and filter datasets for training:

- “Show me the most unique images from each cluster”
- “Find images labeled as cats that don’t have the reviewed tag”
- “Display images with low uniqueness scores”
Manufacturing Inspection
Query inspection results using custom metadata:

- “Show me images from Station B with defects”
- “Find images where temperature exceeded 35 degrees”
- “Display images from shift 2 with quality issues”
Research and Analysis
Explore dataset composition and patterns:

- “Show me the largest clusters”
- “Find images with the most labels”
- “Display images that appear in multiple clusters”
Limitations and Considerations
Field Availability
VL Chat can only query fields that exist in your dataset. If you reference a custom metadata field that hasn’t been configured, the system explains this and shows results using the filters it could apply.

Confidence Scores
The system provides a confidence score with each interpretation. Lower confidence (below 0.5) may indicate ambiguity in your query. Review the JSON output to see how it was interpreted.

Dataset-Specific Behavior
Query capabilities depend on your dataset configuration:

- Datasets without custom metadata cannot filter by custom fields.
- Datasets without annotations cannot filter by labels.
- Similarity clusters must be generated before cluster-based queries work.