BrXM AI Content Assistant (BETA) developer guide

Overview

This feature has been available since Bloomreach Experience Manager version 16.4.0 and requires a standard or premium license. Please contact Bloomreach for more information.

The BrXM AI Content Assistant, powered by Loomi, brings generative AI capabilities directly into the BrXM CMS, helping you automate and streamline content creation, editing, and management.

This guide shows you how to configure, initialize, and use the AI chat assistant, focusing on technical setup and supported operations. The AI chat assistant is the central feature, enabling developers and content teams to interact with large language models (LLMs) from supported providers. It can help you perform tasks like summarization, translation, and SEO optimization—all within the CMS interface.

Prerequisites

Before you begin, make sure you meet the following requirements:

  • Access to a BrXM CMS instance (version 16.4 or later).

  • Access to the console.

  • Understanding of your organization’s data privacy and residency requirements.

  • An API key from a supported AI provider, such as OpenAI or VertexAI Gemini.

Supported LLM providers

The AI Content Assistant can use AI models from the following providers:

  • OpenAI
  • VertexAI Gemini
  • Ollama

Technical architecture overview

The Content GenAI integration is built on a modular architecture that separates the UI, backend AI service, and model providers. Here’s how the main components interact:

  • Document editor: The main UI where users trigger AI operations.

  • AI backend service: Handles all AI-related requests from the UI. Initially exposed as an internal service, with a REST API layer planned for future releases.

  • Spring AI bridge: Acts as a middleware between the backend service and various model providers.

  • Model providers: External LLM services such as OpenAI, Gemini, or Ollama.

How the AI Content Assistant works

  1. The user initiates an AI operation in the document editor.

  2. The UI sends a request to the AI backend service.

  3. The backend service prepares and forwards the request to the selected model provider via the Spring AI bridge.

  4. The model provider processes the request and returns a response.

  5. The backend service sends the result back to the UI for display or further action.

This architecture abstracts the complexity of model integration and ensures that only approved operations are exposed to users, improving security and maintainability.
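
For orientation, the sketch below shows one way a backend service could hand a chat request to the configured model provider through Spring AI's ChatClient fluent API. It illustrates the Spring AI bridge concept only: the class name, constructor wiring, and prompts are assumptions made for this guide, not BrXM source code, and the method names follow Spring AI 1.x.

    import org.springframework.ai.chat.client.ChatClient;
    import org.springframework.ai.chat.model.ChatModel;
    import org.springframework.stereotype.Service;

    // Hypothetical backend service, for illustration only (not BrXM internals).
    @Service
    public class ContentAssistantService {

        private final ChatClient chatClient;

        public ContentAssistantService(ChatModel chatModel) {
            // The ChatModel bean is auto-configured for the selected provider
            // (OpenAI, VertexAI Gemini, or Ollama) by the matching Spring AI starter.
            this.chatClient = ChatClient.builder(chatModel).build();
        }

        public String summarize(String documentText) {
            // Steps 2-5 above: send the prepared prompt to the provider and
            // return the generated text to the caller (ultimately the UI).
            return chatClient.prompt()
                    .system("You are a content assistant inside a CMS. Be concise.")
                    .user("Summarize the following document:\n\n" + documentText)
                    .call()
                    .content();
        }
    }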

Initialize the AI Content Assistant

You can initialize the AI Content Assistant with the Essentials application. To do so:

  1. Go to Essentials.

  2. Go to Library.

  3. Look for Content AI and click Install feature.

  4. Rebuild and restart your project.

  5. Once your project has restarted, go to Installed features.

  6. Find Content AI and click Configure.

  7. Choose the desired AI model from the list of supported providers.

  8. Configure the remaining details, such as the API URL (endpoint), API key, and so on. Each provider has different configuration options (see the Configuration options section below).

  9. Once you’re done, click Save.

  10. Lastly, rebuild and restart your project again.

Configuration options

This feature only works if you configure an API key or project ID.

Each model provider has different settings to configure, such as the following (a short code sketch of these options follows the list):

  • API key/project ID: Enter your own API key or project ID. This gives you flexibility and control over data privacy but requires you to manage provider agreements and keys.

  • Model to use: Specify the exact model name and version to use in the AI Assistant. This allows you to choose the best-performing model for a particular type of task.

  • Temperature: Set the model's sampling temperature, which controls how random or creative the generated output is; lower values produce more focused, deterministic responses.

  • Max tokens: Limit the maximum number of tokens to generate in the chat completion. This helps you keep your token usage in check, so it doesn't exceed your allowed limit.
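
To make these settings concrete, the snippet below shows how the equivalent options look when built programmatically with Spring AI, using OpenAI as an example. In BrXM you enter these values in the Essentials Configure dialog rather than in code, and the API key and endpoint are supplied there as well; the model name and numbers below are placeholders, and the builder method names follow Spring AI 1.x.

    import org.springframework.ai.openai.OpenAiChatOptions;

    public class ContentAiOptionsExample {

        public static void main(String[] args) {
            // Illustrative only: in BrXM these values come from the Essentials
            // "Content AI" Configure dialog. Model name and numbers are placeholders.
            OpenAiChatOptions options = OpenAiChatOptions.builder()
                    .model("gpt-4o-mini")  // "Model to use": exact model name/version
                    .temperature(0.4)      // "Temperature": lower values give more focused output
                    .maxTokens(512)        // "Max tokens": upper bound on tokens per completion
                    .build();

            System.out.println(options);
        }
    }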

Using the AI Content Assistant

  1. Open a content document in the editor.

  2. Click the AI Content Assistant icon (only available after completing initialization).

  3. Type in your request in the chat interface.

  4. Review the AI-generated response.

  5. Apply the changes to your content manually as needed.

Supported operations and API usage

The AI chat assistant supports a range of content operations. You can trigger these actions directly from the document editor or via the assistant panel.

Field-specific operations:

Field-specific operations generate a response from the AI for a single field. To update the content of that field, you need to manually copy the AI’s response and paste it into the desired field. Example operations include:

  • Summarize a field: Generate a concise summary of the selected field.

  • Expand content: Elaborate on existing text or expand bullet points.

  • Spelling and grammar checks: Identify and fix errors in a specific field.

Document-level operations:

These operate on the whole document. Example operations include:

  • Summarize a document: Create a summary of the entire document.

  • Tag extraction: Identify key themes or keywords for categorization.

  • Translate a document: Convert content into different languages.

  • Sentiment analysis: Analyze the emotional tone of the content.

  • SEO optimization suggestions: Get recommendations for improving search engine visibility.

Image-based operations (New, available since v16.4.1):

The AI assistant can now use images as a primary context. When a user is viewing or editing an image document in the CMS, the assistant can be prompted to perform tasks related to that image, such as analyzing it or generating descriptive text.
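
As a rough sketch of what such an image-aware request can look like at the Spring AI level (not the assistant's actual implementation), a multimodal prompt attaches the image as media alongside the user text. The helper method, prompt text, and image path below are made up for illustration.

    import org.springframework.ai.chat.client.ChatClient;
    import org.springframework.core.io.ClassPathResource;
    import org.springframework.util.MimeTypeUtils;

    public class ImageDescriptionExample {

        // Hypothetical helper: sends an image plus an instruction to the
        // configured multimodal model and returns the generated description.
        public static String describeImage(ChatClient chatClient) {
            return chatClient.prompt()
                    .user(user -> user
                            .text("Write a short, descriptive alt text for this image.")
                            .media(MimeTypeUtils.IMAGE_PNG,
                                   new ClassPathResource("images/banner.png"))) // placeholder path
                    .call()
                    .content();
        }
    }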

Limitations

  • In version 16.4.0, the AI Content Assistant accesses only the unpublished document content; draft versions are not supported, so users must save their changes to give the AI the most current document information. As of version 16.4.1, the assistant supports draft versions, and users no longer need to save their changes first.

  • Assets (fields and document types) are not supported.

  • Value list fields and document types are not supported.

  • The assistant is only available in the content perspective; other perspectives are not supported.

  • Document-level operations may require manual import of generated content via the Content API.
