Bloomreach CMS Integration with Docker, Docker Compose and MySQL

Nicola Cogotti

2019-01-16


 

Here at Alpha Cogs we are constantly researching and evaluating the best technologies out there in order to deliver the finest product with the highest performance to our clients.

When we had to decide on a Content Management System (CMS), we went through a deep evaluation of the best CMSs out there. Our main criteria were flexibility and openness, and Bloomreach Experience Manager (formerly known as Hippo CMS) definitely won over all the other available options. There were multiple technical reasons behind our choice, but explaining them all is beyond the scope of this discussion. In this article, we will focus on one aspect: how to efficiently and effectively integrate Bloomreach’s CMS with Docker, Docker Compose and MySQL. Our production configuration differs slightly from the recommended and fully supported configuration found in the official Bloomreach documentation, mainly because we have decided to use NGINX as our proxy server.

This article will cover the project structure, Maven plug-in used, CMS Dockerization, Docker images and their orchestration through Docker Compose, and NGINX configuration.

While we’ll explain each step of the configuration and give links to relevant documents to learn more, a basic knowledge of the following is strongly beneficial:

  • Docker

  • Docker Compose

  • MySQL

  • Maven

  • Visual Studio Code

  • NGINX

So... let’s begin.

 

Project Structure

Any Java IDE, such as NetBeans or Eclipse to name two of the most popular, is perfectly fine to use, but the following discussion is tailored for Visual Studio Code. For the easiest configuration we suggest installing the following plugins, which you can find in the Visual Studio Code Marketplace:

  1. Docker

  2. Docker Compose

  3. Maven for Java







 

Now we are ready to proceed.

We have created a standard project following the official Bloomreach documentation you can find at this link. The version used in this explanation is 11, but these steps are easily portable to newer versions with minor adjustments.

After the standard creation, add the following to the root folder of the new project:

  • A file called Dockerfile: this will contain the Docker instructions to build our Docker image

  • A file called docker-compose.yml: this is the Docker Compose file that defines how all the Docker containers work together to form our system

  • A folder called image_config_docker: this will contain all our configuration files and scripts that the Docker image will use to initialize and correctly configure the container

  • A file called nginx.conf: this is the configuration file for NGINX within the NGINX Docker image.

At the end of these steps your project root folder should look something like this:
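Since the original screenshot may not survive in this format, here is an illustrative sketch of the resulting root folder (the archetype-generated module names vary by version, so treat those as examples):

```text
<project-root>/
├── cms/                    # generated by the Bloomreach archetype
├── site/                   # generated by the Bloomreach archetype
├── image_config_docker/    # our Docker configuration files and scripts
├── Dockerfile
├── docker-compose.yml
├── nginx.conf
└── pom.xml
```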


 

Configure the project to create a Docker image

We used the dockerfile-maven-plugin in our project to create a Docker image; here is the POM configuration for the plugin:

 

       <profile>

           <id>docker</id>

           <build>

               <plugins>

                   <plugin>

                       <groupId>com.spotify</groupId>

                       <artifactId>dockerfile-maven-plugin</artifactId>

                       <version>1.3.6</version>

                       <inherited>false</inherited>

                       <configuration>

                           <repository>alphaagency</repository>

                           <tag>${project.version}</tag>

                           <pullNewerImage>true</pullNewerImage>

                           <useMavenSettingsForAuth>true</useMavenSettingsForAuth>

                           <buildArgs>

                               <IMAGE_CONF_FOLDER>

image_config_docker/

</IMAGE_CONF_FOLDER>

                               <TAR_BALL>target/${project.artifactId}-${project.version}-distribution-with-content.tar.gz

</TAR_BALL>

                           </buildArgs>

                       </configuration>

                       <executions>

                           <execution>

                               <id>docker-build</id>

                               <phase>compile</phase>

                               <goals>

                                   <goal>build</goal>

                               </goals>

                           </execution>

                       </executions>

                   </plugin>

               </plugins>

           </build>

           <modules/>

       </profile>

 

The dockerfile-maven-plugin is a powerful plugin that immensely helped streamline the image creation process directly from Maven. The most important parts of this configuration are:

  • IMAGE_CONF_FOLDER: the path of the previously created folder called image_config_docker.

  • TAR_BALL: the path of the tar.gz archive created by the command mvn clean verify -P dist-with-content (NB: this changed in version 12.0; please refer to the official documentation for the right command).
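Putting the pieces together, building the image locally is a two-step Maven run. This is a sketch using the profile names configured above; remember that the dist profile name changed in v12:

```shell
# 1. Produce the distribution tarball that TAR_BALL points at
mvn clean verify -P dist-with-content

# 2. Build the Docker image via the dockerfile-maven-plugin profile
mvn compile -P docker
```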

 

Dockerfile

The following is the Dockerfile we use to generate the image for our project:

FROM openweb/oracle-tomcat:8.5-jre8

MAINTAINER Nicola Cogotti



ENV ENCODING=UTF-8

ENV    CATALINA_BASE=/usr/local/tomcat



ENV MIN_HEAP 1024

ENV MAX_HEAP 2048

ENV    EXTRA_OPTS="" \

\

   RMI_SERVER_HOSTNAME=127.0.0.1 \

\

   MAIL_SESSION_RESOURCE_NAME=mail/Session \

   MAIL_USERNAME="" \

   MAIL_PASSWORD="" \

   MAIL_HOST=localhost \

   MAIL_DEBUG=false \

   MAIL_PROTOCOL=smtp \

   MAIL_AUTH=true \

   MAIL_PORT=25 \

   MAIL_FROM="" \

   MAIL_TLS_ENABLE=true \

\

   DB_RESOURCE_NAME=jdbc/repositoryDS \

   DB_HOST="" \

   DB_PORT=3306 \

   DB_NAME=hippo \

   DB_USER=hippo \

   DB_PASS="" \

\

   MYSQL_CONNECTOR_VERSION=8.0.11 \

\

   REPO_BOOTSTRAP=true \

   CONSISTENCY_CHECK=none



WORKDIR $CATALINA_BASE

ARG IMAGE_CONF_FOLDER

COPY ["${IMAGE_CONF_FOLDER}/bin/setenv.sh", \

       "${IMAGE_CONF_FOLDER}/bin/wait-for-it.sh", \

       "${IMAGE_CONF_FOLDER}/bin/entrypoint.sh", \

   "$CATALINA_BASE/bin/"]



COPY ["${IMAGE_CONF_FOLDER}/conf/repository.xml", \

       "${IMAGE_CONF_FOLDER}/conf/repository-consistency.xml", \

       "${IMAGE_CONF_FOLDER}/conf/repository-force.xml", \

       "${IMAGE_CONF_FOLDER}/conf/context.xml.template", \

       "${IMAGE_CONF_FOLDER}/conf/server.xml", \

       "${IMAGE_CONF_FOLDER}/conf/catalina.properties", \

       "${IMAGE_CONF_FOLDER}/conf/catalina.policy", \

       "${IMAGE_CONF_FOLDER}/conf/log4j.xml", \

   "$CATALINA_BASE/conf/"]

RUN mkdir -p /usr/local/share/tomcat-common/lib

RUN mkdir -p /usr/local/tomcat/common/lib

RUN curl -s -o "/usr/local/share/tomcat-common/lib/mysql-connector-java-$MYSQL_CONNECTOR_VERSION.jar" -L https://repo1.maven.org/maven2/mysql/mysql-connector-java/$MYSQL_CONNECTOR_VERSION/mysql-connector-java-$MYSQL_CONNECTOR_VERSION.jar

RUN curl -s -o "/usr/local/tomcat/lib/mysql-connector-java-$MYSQL_CONNECTOR_VERSION.jar" -L https://repo1.maven.org/maven2/mysql/mysql-connector-java/$MYSQL_CONNECTOR_VERSION/mysql-connector-java-$MYSQL_CONNECTOR_VERSION.jar

RUN curl -s -o "/usr/local/tomcat/common/lib/mysql-connector-java-$MYSQL_CONNECTOR_VERSION.jar" -L https://repo1.maven.org/maven2/mysql/mysql-connector-java/$MYSQL_CONNECTOR_VERSION/mysql-connector-java-$MYSQL_CONNECTOR_VERSION.jar




RUN rm -rf $CATALINA_BASE/webapps/* &&\

   mkdir -p $CATALINA_BASE/endorsed &&\

   curl -s -o $CATALINA_BASE/endorsed/mysql-connector-java-$MYSQL_CONNECTOR_VERSION.jar -L https://repo1.maven.org/maven2/mysql/mysql-connector-java/$MYSQL_CONNECTOR_VERSION/mysql-connector-java-$MYSQL_CONNECTOR_VERSION.jar &&\

chmod +x $CATALINA_BASE/bin/setenv.sh &&\

chmod +x bin/wait-for-it.sh &&\

chmod +x bin/entrypoint.sh



EXPOSE 1099



VOLUME ["/usr/local/repository/", "/usr/local/tomcat/logs"]

ENTRYPOINT ["bin/entrypoint.sh"]

CMD ["/bin/bash", "catalina.sh", "run"]

ARG TAR_BALL

ADD ${TAR_BALL} ${CATALINA_BASE}

 

Understanding the Dockerfile

We base our image on openweb/oracle-tomcat:8.5-jre8, a public image from Docker Hub, because we found it provides the best environment to host the CMS.

The first set of instructions is used to set CATALINA_BASE and some Java Virtual Machine (JVM) configuration, while the second set could potentially be used to configure an email server, should our application need one.

The third set of instructions is related to the MySQL database configuration that is going to sit in its own Docker container instance.

In our case, the database is called hippo and listens on the standard port 3306; we also create a dedicated MySQL user named hippo.

To set up the environment:

  • ARG IMAGE_CONF_FOLDER takes the value defined inside the POM file and makes it available during the image build.

  • Copy all the files from IMAGE_CONF_FOLDER to the correct destination folders inside the container under CATALINA_BASE.

 

Each command is practically self-explanatory, so we won’t explore every one of them in detail; customize them to best suit your setup. Most importantly, take great care to give execution permission to the shell scripts (the chmod +x lines), otherwise the container will fail to start.

This Dockerfile is largely inspired by the guide at this link: recipe for dockerizing Hippo CMS.
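The exact contents of entrypoint.sh are site-specific, but its core job is to fill context.xml.template with the DB_* environment variables before starting Tomcat. A minimal sketch of that substitution step — the @PLACEHOLDER@ token names are hypothetical, not Bloomreach conventions:

```shell
#!/bin/sh
# Hypothetical helper: replace @PLACEHOLDER@ tokens in a template with the
# corresponding DB_* environment variables, producing a usable context.xml.
render_context() {
    template="$1"
    sed -e "s|@DB_HOST@|${DB_HOST}|g" \
        -e "s|@DB_PORT@|${DB_PORT}|g" \
        -e "s|@DB_NAME@|${DB_NAME}|g" \
        -e "s|@DB_USER@|${DB_USER}|g" \
        -e "s|@DB_PASS@|${DB_PASS}|g" \
        "$template"
}

# In a real entrypoint this would be followed by something like:
#   render_context conf/context.xml.template > conf/context.xml
#   bin/wait-for-it.sh "${DB_HOST}:${DB_PORT}" -- "$@"
```

The wait-for-it.sh script copied in the Dockerfile exists precisely to block start-up until MySQL accepts connections.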

The image_config_docker folder now contains the following:



Introducing Docker Compose

We now have our Docker image ready to host the CMS. However, this is not enough on its own: at a minimum we also need a database, especially for a production setup.

We are now going to see how easily docker-compose.yml lets us integrate MySQL (which, incidentally, is the database Bloomreach recommends) into our production environment.

Open the docker-compose.yml file and include the following:

version: '2'

services:

 hippo:

   image: alphaagency:0.1.0-SNAPSHOT

   container_name: alphacogs-web

   networks:

     - app_network

   volumes:

     - hippo_repository:/usr/local/repository/

     - hippo_logs:/usr/local/tomcat/logs

     - ../shared:/usr/local/shared

   environment:

     DB_HOST: "alpha-db"

     DB_PORT: "3306"

     DB_NAME: "hippo"

     DB_USER: "hippo"

     DB_PASS: "hippoPassword"

   depends_on:

     - mysql

   ports:

     - 8080:8080

   restart: always

 mysql:

   image: mysql:5.7

   ports:

     - "3306:3306"

   container_name: alpha-db

   volumes:

     - mysql_data:/var/lib/mysql

   environment:

     MYSQL_ROOT_PASSWORD: "rootPassword"

     MYSQL_DATABASE: "hippo"

     MYSQL_USER: "hippo"

     MYSQL_PASSWORD: "hippoPassword"

   networks:

     app_network:

       aliases:

         - database

   restart: always

volumes:

 mysql_data:

   driver: local

 hippo_repository:

   driver: local

 hippo_logs:

   driver: local

networks:

 app_network:

   driver: bridge

 

How does this configuration work?

The first portion creates a container named alphacogs-web from the image we built with the Dockerfile described above, while also setting the environment variables for the MySQL connection.

There are four crucial things here:

  1. The networks definition

  2. The DB_HOST: "alpha-db" configuration

  3. The depends_on instruction

  4. The ports definition

Containers communicate with one another through a common “virtual” network which, in our case, is named app_network. Containers on the same network can reach each other simply by using the container name as a hostname.

This is why we can set the DB_HOST variable directly to the container name of the MySQL container (alpha-db).
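Once the stack is up, you can verify this name resolution directly from the CMS container — a quick check, assuming ping is available in the image:

```shell
# Resolve and reach the database container over app_network
docker exec alphacogs-web ping -c 1 alpha-db
```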

With the depends_on keyword we declare dependencies between containers: since our Bloomreach container needs a running MySQL database, it depends on the mysql service, so Compose starts MySQL first.
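One caveat: depends_on only controls start-up order; it does not wait until MySQL actually accepts connections (bridging that gap is what a wait script such as the bundled wait-for-it.sh is for). An alternative sketch using a Compose healthcheck — note this assumes Compose file format 2.1 or later, while the file above declares version '2':

```yaml
# Sketch only: requires version: '2.1' (or later) at the top of the file
services:
  mysql:
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 10
  hippo:
    depends_on:
      mysql:
        condition: service_healthy
```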

Bloomreach talks to MySQL on port 3306 over the shared network; the ports mapping on the MySQL container additionally publishes that port to the host machine, which is convenient for inspecting the database with local tools:

ports:

     - "3306:3306"

Likewise, the CMS container publishes Tomcat’s port 8080 to the host:

ports:

   - 8080:8080

Now, this is enough to run our Bloomreach project on Docker, but in a production environment, we don’t really want to write something like www.alphacogs.com/site in order to reach our website.

For this reason (but not limited to it) we added NGINX to the mix.

 

Adding NGINX Docker image to our project

We decided to use NGINX instead of the officially documented Apache setup because of its configuration simplicity and its lightweight footprint.

Going back to our Docker Compose file, we add an NGINX container by doing the following: 

nginx:

   image: nginx:latest

   container_name: production_nginx

   networks:

     - app_network

   depends_on:

     - hippo

   volumes:

     - nginx-cache:/data/nginx/cache

     - ./nginx.conf:/etc/nginx/nginx.conf

   ports:

     - 80:80

     - 443:443

   restart: always

 

and introduce a volume definition:

nginx-cache:

   driver: local

 

Every consideration made so far for the other containers still applies, but we should dig a bit more into this newly introduced volume section.

We are mounting two things: a named volume (nginx-cache) for caching purposes, and a bind mount that gives the container our nginx.conf configuration file.

 

Nginx configuration file

Here is the nginx.conf:

user www-data;

worker_processes auto;

## Default: 1

error_log /var/log/nginx/error.log;

pid /run/nginx.pid;

worker_rlimit_nofile 8192;

events {

   worker_connections 4096;

}

http {

   proxy_cache_path /data/nginx/cache keys_zone=one:10m;

   upstream site.backend.dev {

       server hippo:8080;

   }

   server {

       proxy_cache one;

        server_name url.to.reach.website;

# optimise the data transfer to the client

       gzip on;

       access_log /var/log/nginx/access.log;

       listen 80;

       location / {

           proxy_set_header Host $host;

           proxy_set_header X-Forwarded-Host $host;

           proxy_set_header X-Forwarded-Server $host;

           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

           proxy_set_header X-Forwarded-Proto $scheme;

         #adding cache if the files are ico css js gif pngs in the header for the client  

            if ($request_uri ~* "\.(ico|css|js|gif|jpe?g|png)$") {

               expires 30d;

               access_log off;

               add_header Pragma public;

               add_header Cache-Control "public";

               break;

           }

           proxy_pass http://site.backend.dev/site/;

           proxy_redirect default;

           proxy_cookie_path /site/ /;

       }

   }

   server {

# proxy_cache one;

        server_name url.to.point.cms;

       gzip on;

       access_log /var/log/nginx/access.log;

       listen 80;

       location / {

           proxy_set_header X-Forwarded-Host $host;

           proxy_set_header X-Forwarded-Server $host;

           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

           proxy_set_header X-Forwarded-Proto $scheme;

           proxy_pass http://site.backend.dev/cms/;

           proxy_cookie_path ~*^/.* /;

       }

       location /site/ {

           proxy_pass http://site.backend.dev/site/;

       }

   }

}

 

The whole file shouldn’t surprise anyone familiar with a typical NGINX setup (if you are not, please refer to the official NGINX documentation), but a few points are worth highlighting here.

The first one is the upstream configuration for NGINX.

 

The NGINX container receives requests from the host machine on port 80, and NGINX has to proxy each request on to our Bloomreach container: so in the server definition inside the http block, at the root path /, we use the proxy_pass directive pointing at our upstream:

proxy_pass  http://site.backend.dev/site/;

Because the NGINX container is attached to the virtual network defined in docker-compose.yml, we can reference our Bloomreach server in the upstream simply by using its service name, in this way:

   upstream site.backend.dev {

       server hippo:8080;

   }

Another important setting that needs to be included on the CMS URL endpoint is:

proxy_cookie_path ~*^/.* /;

Without this, the site will render correctly but the CMS will be impossible to reach.
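With docker-compose.yml and nginx.conf in place, the whole stack can be brought up from the project root; a typical session looks like this (container and service names as defined above):

```shell
docker-compose up -d           # start mysql, hippo and nginx in the background
docker-compose ps              # verify all three containers are running
docker-compose logs -f hippo   # follow the repository bootstrap (first start is slow)
curl -I http://localhost/      # the request should be answered by NGINX
```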

 

Conclusions

This configuration of Bloomreach’s CMS with Docker makes deployment reproducible and highly customizable to meet different requirements.

At the recent Bloomreach Connect event in Amsterdam, we had the opportunity to meet some of the awesome people behind the product and learn about the product vision. We are eager, and most importantly happy, to support this technology as much as we can.

In the near future, we will expand our explanation of CMS Dockerization with the inclusion of HTTPS setup.
