Although this post was written by AI, it was reviewed by a human.
Hey there! Welcome back to the blog!
If you’ve spent any time working with Service Portal or Workspaces lately, you’ve probably encountered AI Search. It’s fast, it’s smart, and it feels like magic compared to the old Zing text search. But how does it actually work? Is it just a fancy widget, or is there more to the story?
Spoiler alert: It’s all about the architecture!
AI Search behavior makes a lot more sense when you view it as a layered pipeline that enriches data as it moves toward the end-user experience. Let’s break down exactly what happens at each stage of the journey.
The 5 Layers of AI Search
When a user types a query like “email issue” or “request iPhone”, the platform doesn’t just blindly query every table. It filters through a highly structured, 5-step process.
1. Indexed Sources (The Foundation)
This is the absolute base level. Before anything can be searched, it needs to be ingested into the AI Search index.
- What it does: You define exactly which tables (like `kb_knowledge`, `sc_cat_item`, or `incident`) should send their data to the index. You can also include external data sources (like SharePoint or Confluence) using External Mappings!
- Developer tip: You don’t just index the whole table blindly. You define Field Settings here, specifying which fields are searchable, which are just returnable (used for display but not search), and which can be used as facets.
- Docs: Indexed Sources in AI Search
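To make the searchable/returnable/facetable distinction concrete, here is a tiny conceptual sketch in plain JavaScript. The property names are illustrative only, not the actual AI Search configuration schema:

```javascript
// Illustrative only: these property names are NOT the real AI Search
// schema, just a model of the three roles a field can play.
const fieldSettings = {
  table: 'kb_knowledge',
  fields: {
    short_description: { searchable: true,  returnable: true,  facetable: false },
    text:              { searchable: true,  returnable: false, facetable: false },
    kb_category:       { searchable: false, returnable: true,  facetable: true  },
    author:            { searchable: false, returnable: true,  facetable: true  }
  }
};

// Only "searchable" fields get tokenized into the index; "returnable"
// fields are stored for display; "facetable" fields drive UI filters.
const searchableFields = Object.keys(fieldSettings.fields)
  .filter(f => fieldSettings.fields[f].searchable);

console.log(searchableFields); // → ['short_description', 'text']
```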
2. Search Index (The Engine Room)
Once the data is ingested, the engine processes it. This isn’t just a simple database table; it’s a highly optimized inverted index designed specifically for rapid text retrieval.
- What it does: This layer handles tokenization, stemming, and lemmatization. It understands that “running”, “ran”, and “run” all come from the same root word.
- Developer tip: Unlike the legacy Zing search, which relied on `ts_word` and `ts_index` tables directly inside your instance database, AI Search uses a dedicated, externalized infrastructure. This is why it’s so incredibly fast!
- Docs: AI Search Architecture & Indexing
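For intuition about what an inverted index with stemming buys you, here is a toy sketch. The lookup-table “stemmer” is a deliberate simplification; real engines use algorithmic stemming and lemmatization:

```javascript
// Toy inverted index: term -> set of document ids. The real AI Search
// engine is far more sophisticated; this only shows the core idea.
function tokenize(text) {
  return text.toLowerCase().match(/[a-z]+/g) || [];
}

// Extremely naive "stemmer" for illustration: a hand-made lookup that
// maps running/ran/runs to the same root word.
const roots = { running: 'run', ran: 'run', runs: 'run' };
const stem = t => roots[t] || t;

function buildIndex(docs) {
  const index = {};
  docs.forEach((text, id) => {
    for (const term of tokenize(text).map(stem)) {
      (index[term] = index[term] || new Set()).add(id);
    }
  });
  return index;
}

const index = buildIndex([
  'The VPN keeps running slow',   // doc 0
  'My email ran out of space'     // doc 1
]);

// A single lookup on the root "run" finds both documents, even though
// neither contains the literal word "run".
console.log([...index['run']]); // → [0, 1]
```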
3. Search Sources (The Filter)
Just because a record is in the index doesn’t mean users should see it. Search Sources act as your dynamic query conditions.
- What it does: Think of these as pre-defined filters. For example, you might have a Search Source for Knowledge Articles, but you apply a condition so it only includes articles where Workflow state is Published and Valid to date is after Today.
- Developer tip: This is crucial for performance and security. By filtering out retired articles or inactive catalog items at this layer, you prevent the engine from wasting brainpower ranking irrelevant records.
- Docs: Search Sources in AI Search
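A Search Source condition behaves like a pre-applied filter on the candidate set. A minimal sketch of the Knowledge Article example above (field values simplified from the real table schema):

```javascript
// Conceptual sketch of a Search Source condition: only Published,
// still-valid articles ever reach the ranking stage.
const articles = [
  { number: 'KB001', workflow_state: 'published', valid_to: '2030-01-01' },
  { number: 'KB002', workflow_state: 'retired',   valid_to: '2030-01-01' },
  { number: 'KB003', workflow_state: 'published', valid_to: '2020-01-01' }
];

const today = '2025-06-01'; // ISO date strings compare correctly as strings
const candidates = articles.filter(a =>
  a.workflow_state === 'published' && a.valid_to > today
);

console.log(candidates.map(a => a.number)); // → ['KB001']
```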
4. Search Profile (The Brain)
This is where the true “AI” and machine learning magic happens. The Search Profile acts as an umbrella that combines your Search Sources with complex relevance behaviors.
- Dictionaries & Typo Handling: This is where you configure Synonyms (mapping “laptop” to “notebook”) and Stop Words (ignoring “the”, “a”, “is”). The engine also auto-corrects typos here.
- Result Improvement Rules (RIR): Want to manually boost a specific catalog item when someone searches “VPN”? Or block an outdated article? RIRs let you layer your own business logic on top of the machine-learned ranking.
- Genius Results: Using Natural Language Understanding (NLU), this feature detects the user’s actual intent and puts a highly actionable, card-based answer right at the top of the results (like a direct button to “Submit a VPN Request” instead of just linking to the item).
- Docs: Search Profiles in AI Search
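The dictionary and RIR behaviors can be sketched as two small steps: query expansion (drop stop words, add synonyms) and a score boost applied on top of the engine’s ranking. Everything here, the rule shape and the weights, is illustrative, not the actual AI Search implementation:

```javascript
// Step 1: profile dictionaries — stop words are dropped, synonyms
// expand the query alongside the original terms.
const stopWords = new Set(['the', 'a', 'is', 'my']);
const synonyms = { laptop: ['notebook'] };

function expandQuery(query) {
  const terms = query.toLowerCase().split(/\s+/).filter(t => !stopWords.has(t));
  return terms.flatMap(t => [t, ...(synonyms[t] || [])]);
}

console.log(expandQuery('my laptop is broken')); // → ['laptop', 'notebook', 'broken']

// Step 2: a Result Improvement Rule modeled as a score multiplier
// applied after the engine has produced its own relevance scores.
const boostRules = [{ ifQueryHas: 'vpn', boostItem: 'VPN Access Request', factor: 3 }];

function applyBoosts(query, results) {
  for (const rule of boostRules) {
    if (query.toLowerCase().includes(rule.ifQueryHas)) {
      results.forEach(r => { if (r.name === rule.boostItem) r.score *= rule.factor; });
    }
  }
  return results.sort((a, b) => b.score - a.score);
}

const ranked = applyBoosts('vpn not working', [
  { name: 'VPN Access Request', score: 1 },
  { name: 'Old networking FAQ', score: 2 }
]);
console.log(ranked[0].name); // → 'VPN Access Request'
```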
5. Search Application (The UI Glue)
Finally, the context. The Search Application connects your brilliant, tuned Search Profile to the actual interface the user is looking at.
- What it does: Are they searching from the Service Portal? A Configurable Workspace? Now Mobile? The Search Application dictates the final end-user experience.
- Developer tip: This is where you configure your Facets (those handy checkboxes on the left side to filter by Category or Author) and your Navigation Tabs. Most importantly, this layer ties into EVAM (Entity View Action Mapper), which, as you might remember from my Customizing AI Search Results in Service Portal post, is how you define exactly what the result cards look like!
- Docs: Search Applications in AI Search
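Conceptually, a facet is just a value count over a facetable field of the final result set, which the UI renders as checkboxes. A quick sketch:

```javascript
// Conceptual sketch: the Search Application groups results by a
// facetable field and shows one checkbox (with a count) per value.
function facetCounts(results, field) {
  const counts = {};
  for (const r of results) {
    counts[r[field]] = (counts[r[field]] || 0) + 1;
  }
  return counts;
}

const results = [
  { title: 'Reset VPN token',  category: 'Network' },
  { title: 'VPN access guide', category: 'Network' },
  { title: 'Email quota FAQ',  category: 'Email' }
];

console.log(facetCounts(results, 'category')); // → { Network: 2, Email: 1 }
```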
Why Does This Matter?
Understanding this pipeline is crucial for developers and architects. Why? Because you can’t fix a data quality issue at the widget level, and you shouldn’t try to fix a UI issue in the Search Index!
If you are gearing up to design an AI Search experience, remember to think like a platform architect. Start with your data quality, map your fields correctly, build your profiles with intent, and finally, craft a beautiful UI using EVAM.