NLWeb (Evaluation Release) – In-App Conversational Experience

Modified on Fri, 13 Feb at 1:59 PM

Overview

NLWeb is now available in the Schema App platform as an Evaluation Release for selected customers. This release allows you to interact conversationally with your own Content Knowledge Graph (CKG) directly inside the app. It is designed to help teams:

  • Explore their structured content in natural language

  • Evaluate the quality and coverage of their schema markup

  • Assess readiness for a public-facing conversational deployment

NLWeb synthesizes answers from your structured data and provides supporting source citations with every response.


To access your Conversational UI, log in to the app and go to https://app.schemaapp.com/nlweb.


What This Release Is Designed For

This version of NLWeb is intended to help you:

  • Evaluate how your Knowledge Graph performs in a conversational interface

  • Identify strengths and coverage gaps in your schema markup

  • Determine whether you want to deploy NLWeb on your public website

It is not yet a fully hardened, public-facing chatbot deployment. That configuration can be discussed separately based on your requirements.


How NLWeb Works

Grounded Summaries with Citations

NLWeb operates in Summarize Mode by default.

Each response:

  • Synthesizes information from your Knowledge Graph

  • Includes multiple supporting citations

  • Draws only from your structured content

This approach reduces hallucination risk and ensures responses are tied to your own authoritative content.

You can click citations to review the source documents used to generate the summary.
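As a mental model, each response can be thought of as a synthesized summary paired with the source documents it was built from. The TypeScript shape below is purely illustrative; the field names are assumptions for explanation, not NLWeb's actual response format.

```typescript
// Purely illustrative shape; field names are assumptions, not the
// documented NLWeb response format.
interface Citation {
  title: string;   // title of the source page in your Knowledge Graph
  url: string;     // link you can click to review the source document
}

interface GroundedResponse {
  summary: string;        // answer synthesized only from your structured content
  citations: Citation[];  // multiple supporting sources accompany each response
}
```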


Selecting Data Scope

You can choose which content NLWeb uses:

  • Single Account – Query one website at a time

  • All Accounts – Query across all selected websites combined

This is especially useful for organizations managing multiple domains or brands.

If querying across all accounts, responses may synthesize information from multiple sites.


Conversation Memory

NLWeb includes session memory within your browser.

  • Conversation history persists between visits

  • Memory is stored locally in your browser (localStorage)

  • Memory is not stored server-side

  • You can clear your history at any time using the Clear History button

  • Clearing your browser cache will also remove conversation memory

Memory is scoped to your user session and does not carry across users. 
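Because memory lives entirely in your browser's localStorage, you can also inspect or clear it yourself from the browser console. The sketch below is illustrative only: the storage key name is an assumption, not the key NLWeb actually uses.

```typescript
// Illustrative only: "nlweb_conversation" is a hypothetical key name,
// not the documented key NLWeb uses for conversation memory.
const CONVERSATION_KEY = "nlweb_conversation";

// Inspect whatever conversation history is stored client-side.
const raw = window.localStorage.getItem(CONVERSATION_KEY);
if (raw !== null) {
  console.log("Stored conversation:", JSON.parse(raw));
}

// Removing the key has the same effect as the Clear History button:
// the history disappears for this browser only, since nothing is
// stored server-side.
window.localStorage.removeItem(CONVERSATION_KEY);
```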


What to Test During Evaluation

To properly evaluate NLWeb, we recommend testing the following:

1. Core Product or Service Topics

Ask NLWeb to summarize your main offerings and review:

  • Accuracy

  • Completeness

  • Citation quality

Example:

“Summarize your enterprise analytics platform.”


2. Comparison Questions

Test cross-entity synthesis.

Example:

“Compare Product A and Product B.”

Look for:

  • Structured comparison

  • Multiple supporting citations

  • Clear differentiation


3. Cross-Site Queries (If Using ‘All Accounts’)

If enabled, test aggregation across brands or domains.

Example:

“Summarize your healthcare initiatives across all brands.”


4. Coverage Gaps

Ask about topics you suspect may not be fully marked up.

If responses are thin or incomplete, this typically indicates:

  • Schema coverage gaps

  • Outdated markup

  • Content not yet modeled in your Knowledge Graph

These are opportunities to improve structured data coverage before public deployment.


What This Release Does Not Include

This evaluation version does not currently include:

  • Admin dashboards

  • Usage analytics reporting

  • Booking or action workflows

  • CRM integrations

  • Advanced guardrails for regulated PII workflows

  • OAuth sign-in for end users

  • Public-site hardening

If you require any of these capabilities, please speak with your Customer Success Manager to discuss roadmap and deployment planning.


Security & Data Handling

  • NLWeb operates within Schema App’s secure, tenant-isolated infrastructure

  • Logs are retained per account

  • Conversation memory is stored client-side only

  • NLWeb synthesizes responses strictly from your structured Knowledge Graph

If you are evaluating NLWeb for use in regulated environments, please coordinate with your Customer Success Manager to review security and compliance requirements.


Understanding Response Quality

If a response appears incomplete or inaccurate, common causes include:

  1. Incomplete schema coverage

  2. Outdated structured data

  3. Content not modeled in the Knowledge Graph

NLWeb reflects your structured data layer. Improving markup quality directly improves conversational output quality.

If you believe a response is incorrect:

  • Verify the source citations

  • Confirm markup coverage for that content

  • Contact support if the issue persists


When You’re Ready for Public Deployment

If you decide to deploy NLWeb on your website, additional steps may include:

  • UI branding and customization

  • Security review and hardening

  • SLA alignment

  • Guardrail configuration

  • Deployment configuration

Your Customer Success Manager can guide you through these next steps. 


Need Help?

If you have questions about:

  • Coverage gaps

  • Response quality

  • Multi-account configuration

  • Public deployment options

Please contact your Customer Success Manager or submit a support request through the Schema App platform.


NLWeb turns your Knowledge Graph into a conversational interface.
This evaluation release helps you assess readiness before going live.
