How This Helps

Semantic Search helps you discover relevant visual data based on concepts and context—not just labels or keywords. It’s ideal for exploration, discovery, and working with unstructured datasets.


What Is Semantic Search?

Semantic Search allows you to retrieve images and objects based on the meaning and context of your text queries, instead of relying on exact keyword matches. This enables more intuitive, concept-driven exploration of your dataset.

It works by matching your queries against metadata generated by Visual Layer’s enrichment models:

  • VL-Image Semantic Search
  • VL-Object Semantic Search

These models interpret visual content and associate it with descriptive embeddings, enabling deeper, context-aware querying.
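
Visual Layer's enrichment runs inside the platform, but the general idea can be sketched with an open-source CLIP-style model that maps images and text into a shared embedding space. The model name, directory layout, and file names below are illustrative assumptions, not Visual Layer's internal implementation:

```python
# Illustrative sketch only: uses the open-source sentence-transformers CLIP
# model ("clip-ViT-B-32") as a stand-in for Visual Layer's enrichment models.
# Paths and file names are hypothetical.
from pathlib import Path

import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

# CLIP-style model that embeds images and text into the same vector space.
model = SentenceTransformer("clip-ViT-B-32")

# Hypothetical dataset directory containing JPEG images.
image_paths = sorted(Path("dataset/images").glob("*.jpg"))
images = [Image.open(p) for p in image_paths]

# "Enrichment" step: precompute one embedding per image and store the result
# so it can be searched against later.
image_embeddings = model.encode(
    images, convert_to_numpy=True, normalize_embeddings=True
)
np.save("image_embeddings.npy", image_embeddings)
```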


How It Works

  • Queries are matched against generated semantic metadata (not manual tags).
  • Available at both the image and object level, depending on your current view.
  • Requires enrichment using either VL-Image Semantic Search or VL-Object Semantic Search.
  • Ideal for:
    • Surfacing subtle visual patterns
    • Exploring abstract ideas like “urban solitude” or “festival crowd”
    • Navigating datasets with minimal labeling
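
At query time, the same idea applies in reverse: the text query is embedded with the same model, compared against the precomputed image (or object) embeddings, and the closest matches are returned. The sketch below ranks by cosine similarity over the embeddings from the previous example; the helper name and top_k parameter are hypothetical, not part of Visual Layer's API:

```python
# Illustrative sketch of the query-time side: embed the text query and rank
# the precomputed image embeddings by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")

# Embeddings precomputed during "enrichment" (see the earlier sketch).
# Shape: (num_images, dim), already L2-normalized.
image_embeddings = np.load("image_embeddings.npy")

def semantic_search(query: str, top_k: int = 5) -> list[tuple[int, float]]:
    """Return (image_index, similarity) pairs for the best-matching images."""
    query_embedding = model.encode(
        query, convert_to_numpy=True, normalize_embeddings=True
    )
    # Dot product of normalized vectors == cosine similarity.
    similarities = image_embeddings @ query_embedding
    best = np.argsort(-similarities)[:top_k]
    return [(int(i), float(similarities[i])) for i in best]

print(semantic_search("sunset over mountains"))
```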

Example Queries

Try natural, descriptive phrases. The system is designed to interpret both broad concepts and specific scenes:

  • Simple query: “sunset over mountains”
  • Detailed scene: “bright sunset over rocky mountains with clear sky”
  • Combined attributes: “blue sports car on city street at night”

More detailed phrases yield more focused results, but even simple terms like “crowd”, “outdoor event”, or “forest animals” can surface useful matches.
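
As a rough usage example, the hypothetical semantic_search helper from the sketch above could be used to compare how broad and detailed phrasings rank the same dataset:

```python
# Reuses the hypothetical semantic_search helper defined in the earlier sketch
# to compare broad and detailed phrasings of similar concepts.
queries = [
    "sunset over mountains",                              # simple query
    "bright sunset over rocky mountains with clear sky",  # detailed scene
    "blue sports car on city street at night",            # combined attributes
]

for q in queries:
    print(q, "->", semantic_search(q, top_k=3))
```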