Initialize and configure the AI Content Assistant via Properties files

This guide walks you through the process of setting up and configuring the BrXM AI Content Assistant using your project’s properties files.

The AI Content Assistant is only accessible to users with the xm.chatbot.user role.
If you are upgrading to v16.6 or v16.8, see also AI Module upgrade instructions.

Installation

The following dependencies need to be added to your project's cms-dependencies pom file:

<dependency>
    <groupId>com.bloomreach.xm.ai</groupId>
    <artifactId>content-ai-service-impl-incubating</artifactId>
</dependency>
<dependency>
    <groupId>com.bloomreach.xm.ai</groupId>
    <artifactId>content-ai-service-rest-incubating</artifactId>
</dependency>
<dependency>
    <groupId>com.bloomreach.xm.ai</groupId>
    <artifactId>content-ai-service-client-bootstrap</artifactId>
</dependency>
<dependency>
    <groupId>com.bloomreach.xm.ai</groupId>
    <artifactId>content-ai-service-client-assistant-angular</artifactId>
</dependency>

Configure via properties files

To configure the Content Assistant in a production-ready way, use properties files in any of the locations listed below. The order of this list is important: properties files are looked up in all these locations, but if a property is found in more than one file, the property from the location higher in this list takes precedence.

  1. System properties passed on the command line

  2. A properties file named xm-ai-service.properties, available on the classpath

  3. The project's platform.properties file

More information on managing properties files and System properties is available in the following documentation:
  - For Bloomreach Cloud implementations, Set Environment Configuration Properties
  - For On-premise implementations, HST-2 Container Configuration
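For example, the precedence rules above mean that a system property passed on the command line overrides the same key in any properties file. A hypothetical local run might look like this (the cargo.run profile and the property value are illustrative; brxm.ai.provider is described in the Global Configuration options section below):

```shell
# -D system properties take precedence over xm-ai-service.properties
# and platform.properties for the same key
mvn -P cargo.run -Dbrxm.ai.provider=OpenAI
```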

Multiplicity of configurations

Configuring via properties allows you to define multiple model providers and vector stores; however, only one of each can be active at a time.

Global Configuration options

The brxm.ai.provider property is used to specify the name of the active model provider. Possible values are:

  • OpenAI 
  • VertexAIGemini
  • Ollama
  • LiteLLM
Providing an empty value for this property disables all AI backend services, including the Vector Store and Ingestion process.

The brxm.ai.chat.max-messages property is used to set the maximum number of messages allowed in a single conversation. An integer number is expected. If not specified, the default value is 100.

The brxm.ai.chat.pdf.max-size-bytes property specifies the maximum size, in bytes, of a PDF that can be added as a reference to a conversation. An integer number is expected. If not specified, it defaults to 1 MB (1048576 bytes).
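As a concrete sketch, the three global options above could be set together in xm-ai-service.properties (the values below are illustrative, not recommendations):

```properties
# Name of the active model provider; an empty value disables all AI backend services
brxm.ai.provider = OpenAI

# Maximum number of messages allowed in a single conversation (default: 100)
brxm.ai.chat.max-messages = 50

# Maximum PDF size in bytes for conversation references (default: 1048576, i.e. 1 MB)
brxm.ai.chat.pdf.max-size-bytes = 2097152
```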

Model Provider options

Next, provide the required configuration parameters for your specific provider. Given below are the names of each model provider, and the required and optional properties for each.

If no embedding model is registered, the Vector Store and Ingestion process won't initialize.

OpenAI

Prerequisite: an API key with OpenAI to access ChatGPT models. Create an account at the OpenAI signup page and generate the API key on the API Keys page.
| Property | Required | Type | Description | Default | Example |
|---|---|---|---|---|---|
| spring.ai.openai.api.url | yes | url | The OpenAI endpoint | | https://api.openai.com/ |
| spring.ai.openai.api_key | yes | string | The key of your OpenAI account | | |
| spring.ai.openai.chat.options.model | yes | model name | A valid OpenAI supported model. See models | | gpt-4o |
| spring.ai.openai.chat.options.temperature | no | double | Sets the temperature of the model, which controls the creativity, depth, and randomness of the AI responses. Best kept low. | 0.0 | 0.1 |
| spring.ai.openai.chat.options.maxTokens | no | integer | The maximum number of tokens that can be used per conversation | 4096 | 15000 |
| spring.ai.openai.chat.completions-path | no | url path | Allows setting a custom path for OpenAI's Completions endpoint | latest OpenAI endpoint | /v1/completions |
| spring.ai.openai.embedding.embeddings-path | no | url path | Allows setting a custom path for OpenAI's Embeddings endpoint | latest OpenAI embeddings endpoint | /embeddings |
| spring.ai.openai.embedding.options.model | no | model name | A valid OpenAI embeddings model | latest OpenAI embeddings model | text-embedding-3-large |
| spring.ai.openai.embedding.options.dimensions | no | integer | The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models | depends on model (for text-embedding-3-large: 3072) | 1024 |
| spring.ai.openai.embedding.options.encoding-format | no | string | The format to return the embeddings in. Can be float or base64 | float | base64 |
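Putting this together, a minimal OpenAI configuration in xm-ai-service.properties could look as follows (the API key placeholder and model choices are illustrative, not recommendations):

```properties
# Activate the OpenAI provider
brxm.ai.provider = OpenAI

# Required OpenAI settings
spring.ai.openai.api.url = https://api.openai.com/
spring.ai.openai.api_key = <your-openai-api-key>
spring.ai.openai.chat.options.model = gpt-4o

# Optional: keep responses predictable, and register an embedding model
# so the Vector Store and Ingestion process can initialize
spring.ai.openai.chat.options.temperature = 0.1
spring.ai.openai.embedding.options.model = text-embedding-3-large
```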

VertexAIGemini

Prerequisite: to authenticate with your Vertex AI credentials, set up ADC using the ADC setup guide.
| Property | Required | Type | Description | Default | Example |
|---|---|---|---|---|---|
| spring.ai.vertex.ai.gemini.project-id | yes | string | Your Google Cloud Platform project ID | | myprojectid |
| spring.ai.vertex.ai.gemini.location | yes | string | Your Google Cloud Platform region | | mylocation |
| spring.ai.vertex.ai.gemini.chat.options.model | yes | model name | Vertex AI Gemini Chat model | | gemini-2.0-flash |
| spring.ai.vertex.ai.gemini.chat.options.temperature | no | double | Sets the temperature of the model, which controls the creativity, depth, and randomness of the AI responses. Best kept low. | 0.0 | 0.3 |
| spring.ai.vertex.ai.gemini.chat.options.max-tokens | no | integer | The maximum number of tokens that can be used per conversation | 4096 | 15000 |
| spring.ai.vertex.ai.embedding.project-id | no | string | Your Google Cloud Platform project ID | | myprojectid |
| spring.ai.vertex.ai.embedding.location | no | string | Your Google Cloud Platform region | | mylocation |
| spring.ai.vertex.ai.embedding.text.options.model | no | model name | The Vertex Text Embedding model to use | | text-embedding-004 |
| spring.ai.vertex.ai.embedding.text.options.dimensions | no | integer | The number of dimensions the resulting output embeddings should have. Supported for model version 004 and later. | depends on model | 1024 |
| spring.ai.vertex.ai.embedding.text.options.auto-truncate | no | boolean | When set to true, input text will be truncated | true | false |
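A comparable sketch for Vertex AI Gemini (the project ID, region, and model names are placeholders; authentication happens via ADC, as noted above):

```properties
# Activate the Vertex AI Gemini provider
brxm.ai.provider = VertexAIGemini

# Required Gemini chat settings
spring.ai.vertex.ai.gemini.project-id = myprojectid
spring.ai.vertex.ai.gemini.location = europe-west4
spring.ai.vertex.ai.gemini.chat.options.model = gemini-2.0-flash

# Optional: embedding model for the Vector Store and Ingestion process
spring.ai.vertex.ai.embedding.project-id = myprojectid
spring.ai.vertex.ai.embedding.location = europe-west4
spring.ai.vertex.ai.embedding.text.options.model = text-embedding-004
```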

Ollama

Ollama can be downloaded and run locally.
Note that Ollama currently does not support calling tools (last tested with gemma3).
| Property | Required | Type | Description | Default | Example |
|---|---|---|---|---|---|
| spring.ai.ollama.api.url | yes | url | The Ollama endpoint | | https://myollama/ |
| spring.ai.ollama.chat.options.model | yes | model name | An Ollama model, see supported models | | gemma3 |
| spring.ai.ollama.chat.options.model.pull.strategy | yes | enum | Whether to pull models at startup time and how | | WHEN_MISSING |
| spring.ai.ollama.embedding.options.model | no | model name | The name of a supported model to use | | nomic-embed-text |
| spring.ai.ollama.embedding.options.truncate | no | boolean | Truncates the end of each input to fit within context length | true | false |
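For a locally running Ollama instance, the required properties might be set like this (the URL assumes Ollama's default port 11434; model choices are illustrative):

```properties
# Activate the Ollama provider
brxm.ai.provider = Ollama

spring.ai.ollama.api.url = http://localhost:11434/
spring.ai.ollama.chat.options.model = gemma3
spring.ai.ollama.chat.options.model.pull.strategy = WHEN_MISSING

# Optional: without an embedding model, the Vector Store and Ingestion
# process won't initialize
spring.ai.ollama.embedding.options.model = nomic-embed-text
```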

LiteLLM

LiteLLM is a versatile LLM model gateway. It can either be installed locally or provided as a managed service.
| Property | Required | Type | Description | Default | Example |
|---|---|---|---|---|---|
| spring.ai.litellm.api.url | yes | url | Your LiteLLM endpoint | | https://mylitellm/ |
| spring.ai.litellm.api_key | yes | string | The key for your LiteLLM account | | |
| spring.ai.litellm.chat.options.model | yes | model name | A valid model name enabled in your LiteLLM | | openai/gpt-4o |
| spring.ai.litellm.chat.options.temperature | no | double | Sets the temperature of the model, which controls the creativity, depth, and randomness of the AI responses. Best kept low. | 0.0 | 0.1 |
| spring.ai.litellm.chat.options.max-tokens | no | integer | The maximum number of tokens that can be used per conversation | 4096 | 15000 |
| spring.ai.litellm.chat.completions-path | no | url path | Allows setting a custom path for the Chat endpoint | latest provider endpoint (managed in LiteLLM) | /v1/completions |
| spring.ai.litellm.embedding.embeddings-path | no | url path | Allows setting a custom path for the Embeddings endpoint | latest provider embeddings endpoint (managed in LiteLLM) | /embeddings |
| spring.ai.litellm.embedding.options.model | no | model name | A valid embeddings model name enabled in your LiteLLM | latest provider embeddings model enabled in your LiteLLM | openai/text-embedding-3-large |
| spring.ai.litellm.embedding.options.dimensions | no | integer | The number of dimensions the resulting output embeddings should have | depends on model | 1024 |
| spring.litellm.openai.embedding.options.encoding-format | no | string | The format to return the embeddings in. Can be float or base64 | float | base64 |
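As a sketch, a LiteLLM gateway configuration could combine the properties above as follows (the endpoint, key placeholder, and model names are illustrative):

```properties
# Activate the LiteLLM provider
brxm.ai.provider = LiteLLM

spring.ai.litellm.api.url = https://mylitellm/
spring.ai.litellm.api_key = <your-litellm-key>
spring.ai.litellm.chat.options.model = openai/gpt-4o

# Optional: an embeddings model enabled in your LiteLLM gateway,
# needed for the Vector Store and Ingestion process
spring.ai.litellm.embedding.options.model = openai/text-embedding-3-large
```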

Vector Store and Ingestion options

Please see Initialize and configure the Vector Store and Ingestion.

Maintenance Scripts

The AI module installs tooling for maintenance of your Vector Store, in the form of Groovy scripts. See more details in Maintenance Groovy Scripts.
