Micro-services


Introduction

You can install TRAX LRS 3.0 as a monolithic application or as a set of micro-services. This page explains the differences between these 2 approaches and shows how you can deploy TRAX LRS as a set of micro-services.

{info} Before reading this page, we recommend reading and applying the standard installation procedure (monolithic approach) in order to get familiar with the application, its concepts and main configuration options.

TRAX LRS services

The architecture of TRAX LRS is organized in multiple services, each service being an independent piece of code implementing a set of features.

Core application services

  • The auth service manages authentication and authorization
  • The commands service provides console commands and job management
  • The dev service implements some useful features for developers
  • The gateway service handles all the incoming requests before routing them to specific services
  • The logging service manages the application logs

xAPI services

  • The activities service implements the xAPI Activities API
  • The activity profiles service implements the xAPI Activity Profiles API
  • The agents service implements the xAPI Agents API
  • The agent profiles service implements the xAPI Agent Profiles API
  • The states service implements the xAPI States API
  • The statements service implements the xAPI Statements API
  • The vocab service indexes the xAPI vocabulary

Front-end service

  • The extended service implements the user interface of TRAX LRS Extended Edition

Enabling / Disabling services

The TRAX LRS 3.0 Extended Edition GitHub repo contains the Laravel application with all the above services. When you deploy this repo and install the application, all the services are enabled by default. This is exactly what we want with a classic installation (monolithic approach): all the services run in the same PHP application, communicating through direct function calls and local PHP events.

As an alternative, you can deploy several instances of the TRAX LRS GitHub repo and enable a specific set of services on each instance. When 2 services are located in the same instance, they communicate directly. However, when 2 services are located in separate instances, they use HTTP calls and an events stream (Redis Stream) to communicate with each other.

So with proper deployment and configuration of several TRAX LRS instances, you can implement various micro-service strategies.
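
As a quick preview (the exact variables are detailed in the Configuration section below), the set of enabled services on each instance is driven by two .env variables. For example, an instance dedicated to the statements and activities services would declare:

LOCAL_SERVICES=["statements", "activities"]
REMOTE_SERVICES=[]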

Monolithic approach

When 2 services communicate in a monolithic application, the communication process is synchronous. This means that when a service calls another service (which is usually done by emitting a local event), it has to wait for the response of the called service before it can continue its job.

In order to understand the consequences of this behavior, let's take the example of an xAPI client posting some statements to the LRS:

  1. The xAPI client makes a POST request to the LRS statements API.
  2. TRAX LRS receives the request and calls the statements service (via the gateway service).
  3. The statements service processes the request and records the statements.
  4. The statements service emits a statements-recorded event.
  5. The activities service catches the statements-recorded event.
  6. The activities service updates the activity definitions extracted from the recorded statements.
  7. TRAX LRS sends back a response to the client.

As you can see, each request may involve several services in succession, which may take some time. So using a monolithic approach may lead to a lack of reactivity.

On the other hand, the monolithic approach is by far the simplest one in terms of deployment. It is also the most reliable strategy, as there is no risk of losing an event between 2 services.

Micro-services approach

When 2 services communicate in a micro-services-oriented application, the communication process is asynchronous. This means that when a service has to call another service, it sends an event on the events stream and does not have to wait for the called service to catch and process the event.

Let's revisit the above example in order to understand the consequences.

  1. The xAPI client makes a POST request to the LRS statements API.
  2. TRAX LRS receives the request and calls the statements service (via the gateway service).
  3. The statements service processes the request and records the statements.
  4. The statements service emits a statements-recorded event.
  5. TRAX LRS sends back a response to the client.
  6. The activities service catches the statements-recorded event.
  7. The activities service updates the activity definitions extracted from the recorded statements.
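
Note that steps 6 and 7 happen in a separate process: the instance hosting the activities service must run an event listener (see the Event listeners section below) that catches the statements-recorded event from the Redis stream. As a minimal sketch, such a listener is started with a console command like:

php artisan activities:update --daemon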

As you can see, the LRS responds to the client before the activities service is involved, which makes the LRS more reactive from the client's point of view. Furthermore, using micro-services may be a good solution to run and monitor services in a more independent way, with dedicated resources that may vary depending on the xAPI use case.

On the other hand, the micro-services approach is more complex to deploy and monitor. It requires the deployment of an events stream (Redis Stream in our case), and monitoring these services becomes crucial, as the loss of a single component may compromise the overall behavior of the application.

Configuration

For each TRAX LRS instance deployed, the list of local and remote services must be configured. The instance which contains the gateway service must also be configured with the remote service URLs so that it can call these services.

Monolithic example

With the monolithic approach, all the services are enabled and there are no remote services.

LOCAL_SERVICES=all
REMOTE_SERVICES=[]

Micro-services example

Here is an example of a micro-services strategy where the application has been split into:

  • The main part of the application with the gateway service and most of the other services
  • The statements and activities services which have been grouped
  • The logging service which has been isolated

Configuration of the "main" part of the application, hosted on trax3.main:

LOCAL_SERVICES=["gateway", "commands", "auth", "extended", "activities", "activity-profiles", "agents", "agent-profiles", "states", "vocab"]
REMOTE_SERVICES=["statements", "activities", "logging"]

SERVICE_STATEMENTS_HOST=trax3.statements_activities
SERVICE_ACTIVITIES_HOST=trax3.statements_activities
SERVICE_LOGGING_HOST=trax3.logging

Configuration of the "statements + activities" part, hosted on trax3.statements_activities:

LOCAL_SERVICES=["statements", "activities"]
REMOTE_SERVICES=[]

Configuration of the "logging" part, hosted on trax3.logging:

LOCAL_SERVICES=["logging"]
REMOTE_SERVICES=[]

For all the instances, a Redis server must be configured and the event stream must be enabled:

REDIS_HOST=redis-5
REDIS_PASSWORD=null
REDIS_PORT=6379

EVENT_STREAM_ENABLED=true
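
Before launching the instances, you may want to check that each host can actually reach this Redis server. Assuming the redis-cli tool is available, a quick sanity check could be:

redis-cli -h redis-5 -p 6379 ping

A PONG response confirms that the instance can reach the events stream.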

Event listeners

With the micro-services approach, some services have to catch events from the events stream, using event listeners that must run on the related instances. Here is the list of services with their related listeners and console commands:

| Service    | Listener command              | Purpose                     |
| ---------- | ----------------------------- | --------------------------- |
| Activities | php artisan activities:update | Update activity definitions |
| Agents     | php artisan agents:update     | Update agent definitions    |
| Logging    | php artisan logs:catch        | Catch application logs      |
| Vocab      | php artisan vocab:update      | Update xAPI vocabulary      |
| Dev        | php artisan dev:listen        | For testing                 |

All the above listener commands support the following options:

  • --daemon: run the worker in the background
  • --keep-alive: keep listening for events when something fails
  • --workers=2: launch a given number of workers (here 2)
  • --status: show the status of the worker(s)
  • --stop: stop the worker(s)
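
For example, on the trax3.statements_activities instance, launching, checking and stopping the activities listener with the above options could look like this:

php artisan activities:update --daemon --keep-alive --workers=2
php artisan activities:update --status
php artisan activities:update --stop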

In order to keep the listeners alive, you can use a tool such as Supervisor.
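
As a minimal sketch, a Supervisor program entry for the activities listener might look like the following. The installation path (/var/www/trax) and the user (www-data) are assumptions to adapt to your own setup:

[program:trax3-activities-listener]
; Keep the activities listener alive; the path and user below are assumptions.
command=php /var/www/trax/artisan activities:update --keep-alive
directory=/var/www/trax
autostart=true
autorestart=true
user=www-data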

Putting it all together

Now that we have seen how to configure the TRAX LRS instances and how to launch the required listeners, you can try your own deployment strategy. In order to help you, TRAX LRS comes with a configuration tool based on Docker.

To go further, read the Docker documentation page.