Logs and monitoring

The logs in Collaboration Server On-Premises are written to stdout and stderr. Most of them are formatted as JSON. They can be used for debugging or for monitoring request rates, errors, durations, and warnings (invalid requests). In production environments, we recommend storing logs in files or using a distributed logging system (like ELK or CloudWatch).

# Docker

Docker has built-in logging mechanisms that capture logs from the output of containers. The default logging driver writes logs to files.

When using this driver, you can use the docker logs command to show logs from the container. You can add the -f flag to view logs in real time. Refer to the official Docker documentation for more information about the logs command.
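For example, you can follow a container's output in real time or limit it to the most recent entries. The container name used below is a placeholder; substitute the name or ID of your own container:

```shell
# Follow logs of the Collaboration Server container in real time.
# "cs-on-premises" is a placeholder - use your actual container name or ID.
docker logs -f cs-on-premises

# Show only the most recent 100 entries, with timestamps.
docker logs --tail 100 --timestamps cs-on-premises
```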

When the application is run in multiple instances, the command will show you logs only from a specific instance. To collect logs from multiple instances, please refer to the distributed logging section.

When a container runs for a long time, its logs can take up a lot of disk space. To avoid this problem, make sure that log rotation is enabled. It can be configured with the max-size option of the logging driver.
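As a sketch, rotation can be configured per container with the json-file driver options. The size and file-count values below are illustrative; tune them to your retention needs:

```shell
# Rotate logs when a file reaches 10 MB and keep at most 5 files per container.
docker run --init -p 8000:8000 \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=5 \
  docker.cke-cs.com/cs:[version]
```

The same options can also be set globally for all containers in the Docker daemon configuration (daemon.json).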

# Log level

Every log entry contains a level property. Log levels help determine the importance of a log.

The CKEditor Collaboration Server groups logs into six levels:

  • 10 - Trace - Most granular information about processes taking place in the application,
  • 20 - Debug - Logs added temporarily to monitor hard-to-debug cases,
  • 30 - Info - Information about modification of a resource or audit logs,
  • 40 - Warn - Usually these can be ignored because they are caused by the asynchronous nature of event processing in the application. They may also inform you that a process failed on the first attempt but a retry succeeded. If they appear alongside logs of level 50 or higher, they can speed up debugging by providing more context,
  • 50 - Error - Indicates that some process has been stopped unexpectedly or does not work properly,
  • 60 - Fatal - Errors that cause the entire application to stop working.

By default, only logs with level 40 and above are printed.
This can be changed by setting the LOG_LEVEL=[value] environment variable.
Setting the value below 40 is not recommended, as it may generate a huge amount of logs. Depending on your setup, it may lead to performance or memory issues.
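For example, you could lower the threshold temporarily for a debugging session. Because the logs are JSON, they can also be filtered by level after the fact; the jq filter below is optional and assumes jq is installed, and the container name is a placeholder:

```shell
# Print logs with level 30 (Info) and above for a debugging session.
docker run --init -p 8000:8000 \
  -e LOG_LEVEL=30 \
  docker.cke-cs.com/cs:[version]

# Filter an existing container's JSON output, e.g. show only errors (level >= 50).
# "cs-on-premises" is a placeholder container name.
docker logs cs-on-premises 2>&1 | jq 'select(.level >= 50)'
```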

# Distributed logging

If you are running more than one instance of Collaboration Server On-Premises, it is recommended to use a distributed logging system. It allows you to view and analyze logs from all instances in one place.

# AWS CloudWatch and other cloud solutions

If you are running Collaboration Server On-Premises in the cloud, the simplest and recommended way is to use the logging service offered by your cloud provider, such as AWS CloudWatch.

To use CloudWatch with AWS ECS, you have to create a log group first and change the log driver to awslogs. When the log driver is configured properly, the logs will be streamed directly to CloudWatch.
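Assuming you have the AWS CLI installed and configured, the log group can be created with a single command. The group name and region below match the logConfiguration example that follows:

```shell
# Create the CloudWatch log group referenced by the awslogs driver options.
aws logs create-log-group --log-group-name cksource --region us-west-2
```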

The logConfiguration section of the ECS task definition may look similar to this:

"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-region": "us-west-2",
    "awslogs-group": "cksource",
    "awslogs-stream-prefix": "ck-cs-logs"
  }
}

Refer to the Using the awslogs Log Driver article for more information.

# On-Premises solutions

If you are using your own infrastructure, or for some reason cannot use the service offered by your provider, you can always use a self-hosted distributed logging system.

There are a lot of solutions available, including:

  • ELK + Filebeat
    This is a stack built on top of Elasticsearch, Logstash and Kibana. In this configuration, Elasticsearch stores the logs, Filebeat reads the logs from Docker and sends them to Elasticsearch, and Kibana is used to view them. Logstash is not necessary because the logs are already structured.

  • Fluentd
    It uses a dedicated Docker log driver to send logs. It has a built-in frontend, but can also be integrated with Elasticsearch and Kibana for better filtering.

  • Graylog
    It uses a dedicated Docker log driver to send logs. It has a built-in frontend and needs Elasticsearch to store logs as well as a MongoDB database to store the configuration.

# Example configuration

The example configuration uses Fluentd, Elasticsearch and Kibana to capture logs from Docker.

Before running Collaboration Server On-Premises, you have to prepare the logging services. For the purposes of this example, Docker Compose is used. Create the fluentd, elasticsearch and kibana services inside the docker-compose.yml file:

version: '3.7'
services:
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/fluent.conf:/fluentd/etc/fluent.conf
    ports:
      - "24224:24224"
      - "24224:24224/udp"

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.5
    expose:
      - 9200
    ports:
      - "9200:9200"

  kibana:
    image: docker.elastic.co/kibana/kibana:6.8.5
    environment:
      ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"
    ports:
      - "5601:5601"

To integrate Fluentd with Elasticsearch, you first need to install fluent-plugin-elasticsearch in the Fluentd image. To do this, create a fluentd/Dockerfile with the following content:

FROM fluent/fluentd:v1.10-1

USER root

RUN apk add --no-cache --update build-base ruby-dev \
    && gem install fluent-plugin-elasticsearch \
    && gem sources --clear-all

Next, configure the input server and connection to Elasticsearch in the fluentd/fluent.conf file:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>

Now you are ready to run the services:

docker-compose up --build
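Before starting the Collaboration Server, you can verify that the logging stack is up. The health checks below assume the default ports from the docker-compose.yml above:

```shell
# Elasticsearch should report a usable cluster status (green or yellow).
curl -s 'http://localhost:9200/_cluster/health?pretty'

# Kibana should respond on its default port as well.
curl -s -o /dev/null -w '%{http_code}\n' 'http://localhost:5601/'
```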

When the services are ready, you can finally start Collaboration Server On-Premises.

docker run --init -p 8000:8000 \
--log-driver=fluentd \
--log-opt fluentd-address=[Fluentd address]:24224 \
[Your config here] \
docker.cke-cs.com/cs:[version]

Now open Kibana in your browser. It is available at http://localhost:5601/. On the first run, you may be asked to create an index pattern. Use the fluentd-* pattern and press the “Create” button. After this step, your logs should appear in the “Discover” tab.
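You can also confirm that log entries reached Elasticsearch directly, without Kibana. This assumes the logstash_prefix fluentd configured in fluent.conf above:

```shell
# List the daily indices created by Fluentd.
curl -s 'http://localhost:9200/_cat/indices/fluentd-*?v'

# Fetch a single stored log document to inspect its structure.
curl -s 'http://localhost:9200/fluentd-*/_search?size=1&pretty'
```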