BrXM AI Content Assistant developer guide

Overview

This feature is available since Bloomreach Experience Manager version 16.4.0, and requires a standard or premium license. Please contact Bloomreach for more information.

The BrXM AI Content Assistant, powered by Loomi, brings generative AI capabilities directly into the BrXM CMS, helping you automate and streamline content creation, editing, and management.

This guide shows you how to configure, initialize, and use the AI chat assistant, focusing on technical setup and supported operations. The AI chat assistant is the central feature, enabling developers and content teams to interact with large language models (LLMs) from supported providers. It can help you perform tasks like summarization, translation, and SEO optimization—all within the CMS interface.

Prerequisites

Before you begin, make sure you meet the following requirements:

  • Access to a BrXM CMS instance (version 16.4 or later).

  • Access to the console or the project's properties files.

  • Understanding of your organization’s data privacy and residency requirements.

  • An account and an API key with a supported AI provider, such as OpenAI or VertexAI Gemini, or an AI model running locally, such as Ollama.

  • OpenAI: Create an account on the OpenAI signup page and generate an API key on the API Keys page to access ChatGPT models.

  • VertexAI Gemini: Authenticate with your VertexAI credentials by setting up Application Default Credentials (ADC) using the ADC setup guide.

  • Ollama: Download and run Ollama locally.
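Whichever provider you choose, the API key is typically supplied to the CMS process as an environment variable or a project property. As a minimal sketch, a startup check could verify that the key is visible to the JVM before the assistant is enabled (the variable name OPENAI_API_KEY is OpenAI's convention; adjust it for your provider and deployment):

```java
public class ApiKeyCheck {

    /** Returns true when the given environment variable is set and non-empty. */
    public static boolean hasApiKey(String variableName) {
        String value = System.getenv(variableName);
        return value != null && !value.isBlank();
    }

    public static void main(String[] args) {
        // OPENAI_API_KEY is OpenAI's conventional variable; VertexAI uses ADC
        // and a local Ollama instance needs no key at all.
        if (!hasApiKey("OPENAI_API_KEY")) {
            System.err.println("OPENAI_API_KEY is not set; the assistant cannot reach OpenAI.");
        }
    }
}
```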

Supported LLM providers

The AI Content Assistant can use AI models from the following providers:

  • OpenAI
  • VertexAI Gemini
  • Ollama
  • LiteLLM

Installation

See the Initialize and configure the AI Content Assistant documentation.

Technical architecture overview

The Content GenAI integration is built on a modular architecture that separates the UI, backend AI service, and model providers. Here’s how the main components interact:

  • Document editor: The main UI where users trigger AI operations.

  • AI backend service: Handles all AI-related requests from the UI. Initially exposed as an internal service, with a REST API layer planned for future releases.

  • Spring AI bridge: Acts as a middleware between the backend service and various model providers.

  • Model providers: External LLM services such as OpenAI, Gemini, or Ollama.

How the AI Content Assistant works

  1. The user initiates an AI operation in the document editor.

  2. The UI sends a request to the AI backend service.

  3. The backend service prepares and forwards the request to the selected model provider via the Spring AI bridge.

  4. The model provider processes the request and returns a response.

  5. The backend service sends the result back to the UI for display or further action.
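The steps above can be sketched with plain Java interfaces. All type and method names below are invented for illustration and do not correspond to actual BrXM classes:

```java
import java.util.Map;

// Illustrative sketch of the request flow; names are hypothetical.
interface ModelProvider {
    // An external LLM service such as OpenAI, Gemini, or Ollama.
    String complete(String prompt);
}

class SpringAiBridge {
    private final Map<String, ModelProvider> providers;

    SpringAiBridge(Map<String, ModelProvider> providers) {
        this.providers = providers;
    }

    // Step 3: forwards the prepared request to the selected provider.
    String forward(String providerName, String prompt) {
        return providers.get(providerName).complete(prompt);
    }
}

class AiBackendService {
    private final SpringAiBridge bridge;

    AiBackendService(SpringAiBridge bridge) {
        this.bridge = bridge;
    }

    // Step 2: receives the UI request; steps 3-5: prepares it, forwards it
    // through the bridge, and returns the provider's response for display.
    String handle(String providerName, String userPrompt) {
        String prepared = "Operation request: " + userPrompt;  // prompt preparation
        return bridge.forward(providerName, prepared);
    }
}
```

Because the bridge owns the provider lookup, the backend service stays identical no matter which model provider is configured.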

This architecture abstracts the complexity of model integration and ensures that only approved operations are exposed to users, improving security and maintainability.

Supported operations and API usage

The AI chat assistant supports a range of content operations. You can trigger these actions directly from the document editor or via the assistant panel.

Refer to the BrXM AI Content Assistant User Manual for full details on usage and capabilities.

Field-specific operations:

Field-specific operations generate a response from the AI for a single field. To update the content of that field, you need to manually copy the AI’s response and paste it into the desired field. Example operations include:

  • Summarize a field: Generate a concise summary of the selected field.

  • Expand content: Elaborate on existing text or expand bullet points.

  • Spelling and grammar checks: Identify and fix errors in a specific field.

Document-level operations:

These operate on the whole document. Example operations include:

  • Summarize a document: Create a summary of the entire document.

  • Tag extraction: Identify key themes or keywords for categorization.

  • Translate a document: Convert content into different languages.

  • Sentiment analysis: Analyze the emotional tone of the content.

  • SEO optimization suggestions: Get recommendations for improving search engine visibility.

Image-based operations (Available from v16.4.1):

The AI assistant can now use images as a primary context. When you view or edit an image in the CMS, the assistant can be prompted to perform tasks related to that image, such as analysis or generating descriptive text.

If you prompt the AI assistant on an image type unsupported by the configured AI model, it may return a general response like "We are having trouble processing your request right now. Please try again later."

Repeatedly getting such a response for an image-based operation may indicate that the image's file type is not supported by the configured model.
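Which image formats a given model accepts varies per provider, so there is no single authoritative list. As a defensive sketch, a custom integration could screen file extensions against a set known to work with its configured model before prompting the assistant. The allow-list below is an assumption for illustration, not a BrXM or provider guarantee:

```java
import java.util.Locale;
import java.util.Set;

public class ImageTypeScreen {

    // Example allow-list only; consult your provider's documentation for the
    // formats its vision-capable models actually accept.
    private static final Set<String> LIKELY_SUPPORTED = Set.of("png", "jpg", "jpeg", "webp");

    /** Returns true when the file extension is in the example allow-list. */
    public static boolean likelySupported(String fileName) {
        int dot = fileName.lastIndexOf('.');
        if (dot < 0 || dot == fileName.length() - 1) {
            return false;  // no extension to check
        }
        String ext = fileName.substring(dot + 1).toLowerCase(Locale.ROOT);
        return LIKELY_SUPPORTED.contains(ext);
    }
}
```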

Extended Context with References (Available from v16.6.0):

The AI can reference and process content from multiple specified documents within the BrXM repository. This allows the AI to draw on broader internal CMS knowledge for creating and editing content accurately and consistently. 

Please note that attaching large documents or files, or too many of them, to a conversation can significantly increase token usage for that conversation.
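To get a feel for how attached documents inflate token usage, a common rough heuristic for English text is about four characters per token. This is only an approximation; the provider's own tokenizer is authoritative:

```java
public class TokenEstimate {

    /** Rough estimate: roughly 4 characters per token for English text. */
    public static long estimateTokens(String text) {
        return Math.round(text.length() / 4.0);
    }

    public static void main(String[] args) {
        // A 40,000-character attached document adds roughly 10,000 tokens
        // to every request made in that conversation.
        System.out.println(estimateTokens("x".repeat(40_000)));
    }
}
```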

Conversation history (Available from v16.6.0):

The functionality to save different chat sessions (conversations) in the history is available for all providers starting from v16.6.

The AI chat keeps your conversation history for the duration of your CMS session. You can create, manage, and rename conversation threads, and you can continue previous discussions without losing context when switching topics or documents.

Session behavior and infrastructure considerations:

  • The CMS session is tied to the pod(s) serving your requests. If the serving pod changes, the CMS creates a new session and the AI chat history displayed in the UI is cleared.

  • Conversation logs are retained in system logs and remain available even if the visible chat history is cleared.

Conversation auto-naming

The system automatically generates and assigns a name to each conversation for identification purposes. This name can be edited by the user at any time.

The auto-generation of the conversation name is initiated shortly after the first message. The name may take a few messages to generate; until then, the default “New conversation” name is displayed.

The auto-naming is automatically disabled once a name is generated or if the user has entered a custom name themselves.

The auto-naming request consumes tokens and is charged to the account of the user in the conversation. The request is included in the conversation logs, where it can be monitored.

Conversation logs

All data that is transferred to and from the AI provider is available for inspection in your logs. To examine the requests and responses from the AI model provider, we provide two loggers, both of which need to be enabled in your log4j configuration in order to start logging:

  • A prompt logger that logs the conversation as typed by the user, as well as the responses from the AI model. To enable, set <Logger name="com.bloomreach.brx.ai.service.impl.client.advisors.PromptLoggerAdvisor" level="info"/>

  • Spring's default SimpleLoggerAdvisor. To enable, add the following logger: <Logger name="org.springframework.ai.chat.client.advisor.SimpleLoggerAdvisor" level="debug"/>
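Assembled into a log4j2 configuration file, the two loggers from the list above sit alongside your existing loggers. This is a sketch to merge into your project's actual log4j2 setup, not a complete configuration:

```xml
<Configuration>
  <Loggers>
    <!-- Logs user prompts and the AI model's responses -->
    <Logger name="com.bloomreach.brx.ai.service.impl.client.advisors.PromptLoggerAdvisor" level="info"/>
    <!-- Spring AI's built-in request/response logging -->
    <Logger name="org.springframework.ai.chat.client.advisor.SimpleLoggerAdvisor" level="debug"/>
    <!-- your existing Root and other loggers remain unchanged -->
  </Loggers>
</Configuration>
```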

The logs are printed in the terminal or Humio, depending on your configuration. Each log entry of the PromptLoggerAdvisor is formatted with identification and discoverability in mind. An example can be seen below:

INFO  http-nio-8080-exec-4 [PromptLoggerAdvisor.before:88] >>>>> Outgoing message for admin, conversation 559df082-955e-41ad-a1a7-39535ad4b20d (type:USER)  >>>>>

[INFO] What do you see in this document

[INFO] >>>>>

The logs contain the following information:

  • The user and conversation IDs.

  • All outgoing and incoming requests (including document references and auto-naming requests).

  • Total number of tokens consumed by the user after each request.
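Because the PromptLoggerAdvisor line carries the user and conversation ID in a fixed position, a log-processing tool can pull them out with a regular expression. This is a sketch against the example line shown above; the exact log format may change between releases:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PromptLogParser {

    // Matches "... Outgoing message for <user>, conversation <uuid> ..."
    private static final Pattern LINE = Pattern.compile(
            "Outgoing message for (\\S+), conversation ([0-9a-f-]{36})");

    /** Returns {user, conversationId}, or null when the line does not match. */
    public static String[] parse(String logLine) {
        Matcher m = LINE.matcher(logLine);
        return m.find() ? new String[] { m.group(1), m.group(2) } : null;
    }
}
```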

Limitations

  • In version 16.4, the AI Content Assistant can access only the published and unpublished variants of a document; draft versions are not supported, so users must save their changes for the AI to see the most up-to-date content. Version 16.4.1 addresses this limitation: the assistant now supports draft versions, so users can access the most current document information without saving their changes beforehand.

  • Asset fields and asset document types are not supported.

  • Value list fields and document types are not supported.

  • The assistant is only available in the content perspective; other perspectives are not supported.

  • Document-level operations may require manual import of generated content by the user.

Important: Incubating Features

Bloomreach is introducing a formal process to release some new functionalities as "Incubating Features" to accelerate innovation, particularly in rapidly evolving technologies. While these features are production-ready and tested, they may undergo significant changes (including backward-incompatible modifications or removal) outside of standard major releases. 

Such changes will not affect the out-of-the-box CMS experience, but may require configuration updates in custom integrations or extensions using these features. If you customize or extend an incubating feature, you may need to update your custom solution in subsequent minor or patch releases. All incubating features will be clearly documented and marked as such.

Please refer to the Incubating Features Policy for more information.

As of v16.6.0, the AI Content Assistant includes Incubating Features and modules.

Changes in v16.6

GroupId

The groupId changed from com.bloomreach.brx.ai to com.bloomreach.xm.ai.

Artifacts

The suffix '-incubating' was appended to two artifactIds:

<dependency>
  <groupId>com.bloomreach.xm.ai</groupId>
  <artifactId>content-ai-service-impl-incubating</artifactId>
</dependency>
<dependency>
  <groupId>com.bloomreach.xm.ai</groupId>
  <artifactId>content-ai-service-rest-incubating</artifactId>
</dependency>

The following modules only need the <groupId> to change:

<dependency>
  <groupId>com.bloomreach.xm.ai</groupId>
  <artifactId>content-ai-service-client-bootstrap</artifactId>
</dependency>
<dependency>
  <groupId>com.bloomreach.xm.ai</groupId>
  <artifactId>content-ai-service-client-assistant-angular</artifactId>
</dependency>