Initialize and configure the AI Content Assistant via Essentials

This guide walks you through the process of setting up and configuring the BrXM AI Content Assistant using the Essentials application.

The Essentials application stores API keys in the JCR configuration, which is not secure for production environments; for production use, consider providing them via properties files instead (see Initialize and configure via Properties).

Configuring the Content Assistant via Essentials results in:

  1. Addition of new dependencies in your cms-dependencies pom file.

  2. Addition of JCR configuration under /hippo:configuration/hippo:modules/ai-service/hipposys:moduleconfig
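
For orientation, the module configuration created under that path might look roughly like the following repository-data YAML sketch. The node path comes from this page; the property names and values shown are illustrative assumptions only, not the exact names Essentials writes:

```yaml
# Illustrative sketch only: the property names below (ai.provider,
# ai.model, ai.api.key) are assumptions. Inspect the actual node in
# the Console after installation to see what Essentials writes.
/hippo:configuration/hippo:modules/ai-service:
  /hipposys:moduleconfig:
    ai.provider: openai
    ai.model: gpt-4o
    ai.api.key: <your-api-key>
```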

If you are upgrading to v16.6 or v16.8, see also AI Module upgrade instructions.

Initialize and configure via Essentials

You can initialize and configure the AI Content Assistant with the Essentials application. To do so:

  1. Go to Essentials.

  2. Go to Library and make sure Enterprise features are enabled.

  3. Look for Content AI and click Install feature.

  4. Rebuild and restart your project.

  5. Once your project has restarted, go to Installed features.

  6. Find Content AI and click Configure.

  7. Choose the desired AI Model from the list of supported providers.

  8. Configure the other details such as the API URL (endpoint), API key, and so on. Each provider has different configuration options (see the Configuration options section below).

  9. Once you're done, click Save.

  10. Finally, rebuild and restart your project again.

 

Configuration options

The AI Content Assistant is only accessible to users with the xm.chatbot.user role.

Each model provider has its own settings to configure, such as:

  • API key/project ID: Enter your API key or project ID, depending on your model provider.

  • Model to use: Specify the exact model name and version to use in the AI Assistant. This allows you to choose the best-performing model for a particular type of task.

  • Temperature: Set the model's temperature, which controls the creativity, depth, and randomness of the AI responses.

  • Max tokens: Specify the maximum number of tokens that can be used by a conversation. This helps you keep your token usage in check, so it doesn't exceed your allowed limit with your AI provider.

  • Max messages: Limit the maximum number of messages allowed in a single conversation. The user will not be allowed to send more messages once they have exhausted the limit, thereby keeping token usage in check.
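
If you manage these settings via properties files instead of Essentials, the OpenAI variants of the options above map onto Spring AI properties along these lines. This is a sketch; verify the exact property names on the Initialize and configure via Properties page:

```properties
# Sketch, assuming the OpenAI provider and standard Spring AI property
# names; verify the exact names against the Properties documentation.
spring.ai.openai.api-key=<your-api-key>
spring.ai.openai.chat.options.model=gpt-4o
spring.ai.openai.chat.options.temperature=0.7
spring.ai.openai.chat.options.max-tokens=2048
```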

A number of configuration options are not yet supported through Essentials. To use them, add the properties directly to your JCR configuration as String type properties. A few examples:

  • OpenAI's completions path: add the property spring.ai.openai.chat.completions-path

  • LiteLLM's completions path: add the property spring.ai.litellm.chat.completions-path

  • Maximum allowed size for a referenced PDF: add the property brxm.ai.chat.pdf.max-size-bytes
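
For example, setting such String properties on the module configuration node mentioned earlier might look like the following repository-data YAML sketch. The node path and property names come from this page; the values and YAML layout are illustrative assumptions:

```yaml
# Sketch: adding the listed String properties to the module
# configuration node. The values shown are examples only; your
# project's repository-data layout may differ.
/hippo:configuration/hippo:modules/ai-service/hipposys:moduleconfig:
  spring.ai.openai.chat.completions-path: /v1/chat/completions
  brxm.ai.chat.pdf.max-size-bytes: '10485760'
```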

Please see Initialize and configure via Properties for more property names and provide them either via your JCR configuration or via properties files.

The Vector Store and Ingestion process also cannot currently be configured via Essentials. See Initialize and configure the Vector Store and Ingestion for the property names and provide them either via your JCR configuration or via properties files.

Maintenance Scripts

The AI module installs tooling for maintenance of your Vector Store, in the form of Groovy scripts. See more details in Maintenance Groovy Scripts.
