You can install TRAX LRS 3.0 as a monolithic application or as a set of micro-services. This page explains the differences between these 2 approaches and shows how you can deploy TRAX LRS as a set of micro-services.
{info} Before reading this page, we recommend reading and applying the standard installation procedure (monolithic approach) in order to get familiar with the application, its concepts and main configuration options.
The architecture of TRAX LRS is organized in multiple services, each service being an independent piece of code implementing a set of features.
The TRAX LRS 3.0 Extended Edition GitHub repo contains the Laravel application with all the above services. When you deploy this repo and install the application, all the services are enabled by default. This is exactly what we want with a classic installation (monolithic approach): all the services run in the same PHP application and communicate with direct function calls and local PHP events.
As an alternative, you can deploy several instances of the TRAX LRS GitHub repo and enable a specific set of services for each instance. When 2 services are located in the same instance, they communicate directly. However, when 2 services are located in separate instances, they use HTTP calls and an events stream (Redis Stream) to communicate with each other.
So with proper deployment and configuration of several TRAX LRS instances, you can implement various micro-service strategies.
When 2 services communicate in a monolithic application, the communication process is synchronous. This means that when a service calls another service (usually by emitting a local event), it has to wait for the response of the called service before it can continue its own work.
In order to understand the consequences of this behavior, let's take the example of an xAPI client posting some statements to the LRS:
In a monolithic deployment, the statements service records the incoming statements and emits a `statements-recorded` event. The activities service catches and processes this event before the LRS responds to the client.

As you can see, each request may successively involve several services, which may take some time. So using a monolithic approach may lead to a lack of responsiveness.
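To make the example concrete, posting a statement to the LRS is a plain HTTP call. Below is a minimal sketch that builds a statement file and shows the corresponding request; the endpoint URL, credentials, actor, verb and activity are all hypothetical placeholders, and the actual xAPI endpoint path depends on your TRAX LRS configuration:

```shell
# Build a minimal xAPI statement (hypothetical actor/verb/object values).
cat > statement.json <<'EOF'
{
  "actor": { "objectType": "Agent", "mbox": "mailto:learner@example.org" },
  "verb": { "id": "http://adlnet.gov/expapi/verbs/completed" },
  "object": { "objectType": "Activity", "id": "http://example.org/activities/course-1" }
}
EOF

# Post it to the LRS (hypothetical URL and credentials -- check your
# TRAX LRS configuration for the actual xAPI endpoint and auth settings):
# curl -X POST "https://my-lrs.example.com/xapi/statements" \
#   -H "Content-Type: application/json" \
#   -H "X-Experience-API-Version: 1.0.3" \
#   -u "client:secret" \
#   -d @statement.json
```

Behind this single request, the gateway, statements and activities services are all involved before the response is sent back.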
On the other hand, the monolithic approach is by far the simplest one in terms of deployment. It is also the most reliable strategy, as there is no risk of losing an event between 2 services.
When 2 services communicate in a micro-services oriented application, the communication process is asynchronous. This means that when a service has to call another service, it sends an event on the events stream, and does not have to wait for the called service to catch and process the event.
Let's take again the above example in order to understand the consequences.
In a micro-services deployment, the statements service records the incoming statements and pushes a `statements-recorded` event to the events stream. The activities service catches and processes this event asynchronously.

As you can see, the LRS responds to the client before the activities service is involved, which makes it more responsive from the client's point of view. Furthermore, using micro-services may be a good way to run and monitor services more independently, with dedicated resources that may vary depending on the xAPI use case.
On the other hand, the micro-services approach is more complex to deploy and monitor. It requires the deployment of an events stream (Redis Stream in our case), and monitoring these services becomes crucial, as the loss of a single component may compromise the overall behavior of the application.
For each TRAX LRS instance deployed, the list of local and remote services must be configured. The instance which contains the gateway service must also be configured with the remote service hosts in order to be able to call these services.
With the monolithic approach, all the services are enabled and there are no remote services.
LOCAL_SERVICES=all
REMOTE_SERVICES=[]
Here is an example of a micro-services strategy where the application has been split into 3 parts:
Configuration of the "main" part of the application, hosted on trax3.main:
LOCAL_SERVICES=["gateway", "commands", "auth", "extended", "activity-profiles", "agents", "agent-profiles", "states", "vocab"]
REMOTE_SERVICES=["statements", "activities", "logging"]
SERVICE_STATEMENTS_HOST=trax3.statements_activities
SERVICE_ACTIVITIES_HOST=trax3.statements_activities
SERVICE_LOGGING_HOST=trax3.logging
Configuration of the "statements + activities" part, hosted on trax3.statements_activities:
LOCAL_SERVICES=["statements", "activities"]
REMOTE_SERVICES=[]
Configuration of the "logging" part, hosted on trax3.logging:
LOCAL_SERVICES=["logging"]
REMOTE_SERVICES=[]
For all the instances, a Redis server must be configured and the event stream must be enabled:
REDIS_HOST=redis-5
REDIS_PASSWORD=null
REDIS_PORT=6379
EVENT_STREAM_ENABLED=true
With the micro-services approach, some services have to catch events from the events stream thanks to event listeners that must be run on the related instances. Here is the list of services with related listeners and their console commands:
Service | Listener command | Purpose |
---|---|---|
Activities | `php artisan activities:update` | Update activity definitions |
Agents | `php artisan agents:update` | Update agent definitions |
Logging | `php artisan logs:catch` | Catch application logs |
Vocab | `php artisan vocab:update` | Update xAPI vocabulary |
Dev | `php artisan dev:listen` | For testing |
All the above listener commands support the following options:

- `--daemon`: run the worker in the background
- `--keep-alive`: continue to listen to events when something fails
- `--workers=2`: launch a given number (here `2`) of workers
- `--status`: show the status of the worker(s)
- `--stop`: stop the worker(s)

In order to keep the listeners alive, you can use a tool such as Supervisor.
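As an illustration, here is a minimal sketch of a Supervisor program definition for the activities listener. The paths, user and log file are assumptions to adapt to your own setup:

```ini
[program:trax-activities-listener]
; Hypothetical installation path -- adjust to your deployment.
command=php /var/www/trax-lrs/artisan activities:update --keep-alive
directory=/var/www/trax-lrs
user=www-data
autostart=true
autorestart=true
numprocs=1
redirect_stderr=true
stdout_logfile=/var/log/trax-activities-listener.log
```

Note that the `--daemon` option is not used here: Supervisor expects the process to stay in the foreground and takes care of restarting it itself.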
Now that we have seen how to configure the TRAX LRS instances and how to launch the required listeners, you can try your own deployment strategy. In order to help you, TRAX LRS comes with a configuration tool based on Docker.
To go further, read the Docker documentation page.