Installing OVE

OVE needs to be installed before it can be used to control a display. It can be installed either by downloading and compiling the source code of the corresponding components or by running one of the installers available on the OVE Install repository.

All contributors to OVE are encouraged to download and compile the source code. All users of OVE are encouraged to use the OVE installers.

Installation by running OVE installers

OVE Install scripts are designed to install OVE into a Docker environment.

Prerequisites

Building installers for non-supported platforms also requires:

Downloading the OVE installers

The OVE Install scripts are available for Linux, Mac (OS X) and Windows operating systems either as a Python 3 or a Python 2 executable application:

Building installers for non-supported platforms

OVE Install provides tools for building the setup script for non-supported platforms. The master branch of OVE Install needs to be cloned in order to proceed:

git clone https://github.com/ove/ove-install
cd ove-install

Refer to the guidelines on developing/building a single setup file for detailed setup instructions.

Running the installers

Once downloaded, the installation script may not be executable on Linux and Mac operating systems. To make it executable, run:

chmod u+x *-setup

Running the executable starts a step-by-step installation process that configures the details of the deployment environment, such as the hostname, port numbers and environment variables.
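
For example, assuming the downloaded installer is named linux-setup (the actual filename depends on the platform and the version you downloaded), it can be run as:

./linux-setup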

The ports are pre-configured to a list of common defaults, but can be changed based on end-user requirements. Each port or port range is defined as a mapping of the form HOST_PORT:CONTAINER_PORT. Only the host ports may be changed; the container ports must not be modified.
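
For example, a ports entry in the generated docker-compose.setup.ove.yml might look like the following; the service name and the host port shown here are illustrative:

services:
  ovehub-ove:
    ports:
      # HOST_PORT:CONTAINER_PORT - only the host port (before the colon) may be edited
      - "9080:8080"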

Each installer is capable of installing the current stable, latest unstable or a previous stable version.

Resolving port conflicts

Once the docker-compose.setup.ove.yml file is generated, ensure that none of the HOST_PORT values defined in it are already in use on the host machine. If a port is in use, change the corresponding HOST_PORT value. For example, if another Tuoris instance already exists on the host machine, port 7080 is most likely in use; in that case, change the Tuoris HOST_PORT in the docker-compose.setup.ove.yml file.
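
On Linux and Mac, a command such as lsof -i :7080 can be used to check whether a port is already in use. To resolve a conflict, only the host side of the mapping is remapped; a minimal sketch (the service name is illustrative):

services:
  ...
  tuoris:
    ports:
      # host port changed from 7080 to 7081; the container port remains 7080
      - "7081:7080"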

Environment variables

Please note that the references to Hostname (or IP address) below should not be replaced with localhost or the Docker hostname, because these services need to be accessible from the client/browser. Replace them with the public hostname or IP address of the host machine. For a local installation, the host machine is your own computer; for a server installation, it is the server on which the Docker environment has been set up. The default PORT numbers for OVE core, Tuoris, OpenVidu, and other services are provided in the Running OVE section.

Before starting up OVE you must configure the environment variables, either by providing them during the installation process or by editing the generated docker-compose.setup.ove.yml file (an example snippet follows the list below). The environment variables that can be configured are:

  • OVE_HOST - Hostname (or IP address) + port of OVE core
  • TUORIS_HOST - Hostname (or IP address) + port of the Tuoris service (dependency of SVG App).
  • OPENVIDU_HOST - Hostname (or IP address) + port of the OpenVidu service (dependency of WebRTC App).
  • openvidu.publicurl - https:// + Hostname (or IP address) + port of the OpenVidu service (dependency of WebRTC App).
  • OPENVIDU_SECRET - The OpenVidu secret. Must match openvidu.secret configured below.
  • openvidu.secret - The OpenVidu secret. Must match OPENVIDU_SECRET configured above.
  • OVE_SPACES_JSON - This variable is optional and not defined in the docker-compose.setup.ove.yml by default. This accepts a URL for the Spaces.json file to be used as a replacement to the default (embedded) Spaces.json file available with OVE.
  • LOG_LEVEL - This variable is optional and not defined in the docker-compose.setup.ove.yml by default. This can have values from 0 to 6 and defaults to 5. The values correspond to:
    • 0 - FATAL
    • 1 - ERROR
    • 2 - WARN (The recommended LOG_LEVEL for production deployments)
    • 3 - INFO
    • 4 - DEBUG
    • 5 - TRACE
    • 6 - TRACE_SERVER (Generates additional server-side TRACE logs)
  • OVE_PERSISTENCE_SYNC_INTERVAL - This variable is optional and not defined in the docker-compose.setup.ove.yml by default. This accepts an interval (in milliseconds) for synchronising an instance of OVE or of an OVE application with a registered persistence service. This optional variable can be set individually for OVE core and for all OVE applications.
  • OVE_<APP_NAME_IN_UPPERCASE>_CONFIG_JSON - This variable is optional and not defined in the docker-compose.setup.ove.yml by default. This accepts a path to an application-specific config.json file. This optional variable is useful when application-specific configuration files are provided at alternative locations on a filesystem (such as when using Docker secrets). <APP_NAME_IN_UPPERCASE> must be replaced with the name of the application in upper-case. For example, the corresponding environment variable for the Networks App would be OVE_NETWORKS_CONFIG_JSON.
  • OVE_MAPS_LAYERS - This variable is optional and not defined in the docker-compose.setup.ove.yml by default. This accepts a URL of a file containing the Map layers Configuration in a JSON format and overrides the default Map layers Configuration of the Maps App.
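
As a minimal sketch, an environment section in the generated docker-compose.setup.ove.yml could look like the following; the service name, hostname and port values are illustrative and follow the defaults listed in the Running OVE section:

services:
  ovehub-ove:
    environment:
      # public hostname (or IP address) and port of OVE core - not localhost
      OVE_HOST: "192.168.0.10:8080"
      # optional: the recommended LOG_LEVEL for production deployments
      LOG_LEVEL: "2"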

The OpenVidu server also accepts several other optional environment variables that are not defined in the docker-compose.setup.ove.yml by default. These are explained in the documentation on OpenVidu server configuration parameters.

Using your own certificates for OpenVidu

OpenVidu is a prerequisite for using the WebRTC App. OpenVidu uses secure WebSockets and therefore requires a certificate. Unless you provide your own certificate, it will use a self-signed certificate, which becomes inconvenient when loading the WebRTC App in multiple web browsers.

You can run OpenVidu with your own certificate by first creating a new Java Key Store, following the OpenVidu guide on using your own certificate. This subsequently requires the following changes in the auto-generated docker-compose.setup.ove.yml file:

version: '3.1'
services:
  ...

  openvidu-openvidu-call:
    image: openvidu/openvidu-call:latest
    ports:
    - "4443:4443"
    environment:
      openvidu.secret: "MY_SECRET"
      openvidu.publicurl: "https://<Hostname (or IP address)>:4443"
      server.ssl.key-store: /run/secrets/openvidu.jks
      server.ssl.key-store-password: "openvidu"
      server.ssl.key-alias: "openvidu"
    secrets:
      - openvidu.jks

  ...

secrets:
  openvidu.jks:
    file: openvidu.jks

To add a trusted CA certificate (trusted_ca.cer) to your Java Key Store, run:

keytool -import -v -trustcacerts -alias root -file trusted_ca.cer -keystore openvidu.jks -keypass openvidu
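
To confirm that the certificate has been added, the entries in the key store can be listed with standard keytool usage (you will be prompted for the key store password):

keytool -list -v -keystore openvidu.jks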

Starting and stopping the OVE Docker applications

OVE provides separate installation scripts to help users install the necessary components. To install and start OVE on Docker run:

docker-compose -f docker-compose.setup.ove.yml up -d

If you wish to install OVE without it automatically starting, use the command:

docker-compose -f docker-compose.setup.ove.yml up --no-start

Once the installation procedure has completed and OVE has been started, you can verify that the installation was successful by accessing the OVE home page (located at http://OVE_CORE_HOST:PORT, as noted in the Running OVE section) using a web browser.
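
Alternatively, assuming a local installation with the default OVE core port of 8080, the home page can be checked from the host machine itself on the command line; the hostname and port here are illustrative:

curl -I http://localhost:8080/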

Once the services have started, you can check their status by running:

docker ps

The ps command will list containers along with their CONTAINER_ID. Then, to check logs of an individual container, run:

docker logs <CONTAINER_ID>

To stop the Docker application run:

docker-compose -f docker-compose.setup.ove.yml down

To clean up the Docker runtime, first stop any active instances and then run:

docker system prune
docker volume prune

Installation from source code

All OVE projects use a build system based on Lerna. Most OVE projects are based on Node.js, compiled with Babel, and deployed on a PM2 runtime. Some OVE projects are based on Python.

Prerequisites

  • git
  • Node.js (v8.0+)
  • NPM
  • NPX (install with the command: npm install --global npx)
  • PM2 (install with the command: npm install --global pm2)
  • Lerna (install with the command: npm install --global lerna)
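
Once these are installed, a quick way to confirm the tools are available on your PATH is to check their versions (the reported versions will vary):

git --version
node --version
npm --version
npx --version
pm2 --version
lerna --version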

Compiling source code for the Docker environment also requires:

The SVG App requires:

  • Tuoris (installation instructions available on GitHub repository)

The WebRTC App requires:

  • OpenVidu (an instance can be started using Docker, as described in the instructions for starting OVE below)

Downloading source code

All OVE projects can be downloaded from their GitHub repositories:

The master branch of each repository contains the latest code, and can also be cloned if you intend to contribute code or fix issues:

git clone https://github.com/ove/ove

Once the source code has been downloaded, OVE can be installed either in a local Node.js environment (such as PM2’s Node.js environment) or within a Docker environment. The two approaches are explained below.

Compiling source code for a local Node.js environment

Once you have cloned or downloaded the code, OVE can be compiled using the Lerna build system:

cd ove
lerna bootstrap --hoist
lerna run clean
lerna run build
lerna run test

The instructions above are provided only for the OVE Core repository; the steps to follow are similar for the other repositories.

Starting and stopping OVE using the PM2 process manager

The SVG App requires an instance of Tuoris to be available before starting it. To start Tuoris run:

pm2 start index.js -f -n "tuoris" -- -p PORT -i 1
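
For example, assuming Tuoris runs on its default port of 7080 (as used elsewhere in this guide), from within the Tuoris directory:

pm2 start index.js -f -n "tuoris" -- -p 7080 -i 1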

The WebRTC App requires an instance of OpenVidu to be available before starting it. To start OpenVidu run:

docker run -p 4443:4443 --rm -e openvidu.secret=MY_SECRET openvidu/openvidu-call:latest

OVE can then be started using the PM2 process manager. To start OVE on a Linux or MacOS environment run:

OVE_HOST="OVE_CORE_HOST:PORT" TUORIS_HOST="TUORIS_HOST:PORT" OPENVIDU_HOST="OPENVIDU_HOST:PORT" pm2 start pm2.json

To start OVE on a Windows environment run:

OVE_HOST="OVE_CORE_HOST:PORT" TUORIS_HOST="TUORIS_HOST:PORT" OPENVIDU_HOST="OPENVIDU_HOST:PORT" pm2 start pm2-windows.json

By default, OVE core and all services run on localhost, which should be used in place of OVE_CORE_HOST and TUORIS_HOST names above. The default PORT numbers for OVE core, Tuoris and OpenVidu are provided in the Running OVE section.
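
For example, assuming everything runs locally on the default ports (8080 for OVE core, 7080 for Tuoris and 4443 for OpenVidu), the Linux or MacOS command would be:

OVE_HOST="localhost:8080" TUORIS_HOST="localhost:7080" OPENVIDU_HOST="localhost:4443" pm2 start pm2.json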

Once the services have started, you can check their status by running:

pm2 status

Then, to check logs of all services, run:

pm2 logs

To stop OVE processes managed by PM2 on a Linux or MacOS environment run:

pm2 stop pm2.json

To stop OVE processes managed by PM2 on a Windows environment run:

pm2 stop pm2-windows.json

To clean-up processes managed by PM2 on a Linux or MacOS environment run:

pm2 delete pm2.json

To clean-up processes managed by PM2 on a Windows environment run:

pm2 delete pm2-windows.json

Compiling source code for a Docker environment

This approach currently works only for Linux and MacOS environments. The build.sh script corresponding to each repository can be found in the topmost directory of the cloned or downloaded repository, or within a packages/PACKAGE_NAME directory corresponding to each package.

The build.sh script can be executed as:

cd ove
./build.sh

The instructions above are provided only for the OVE Core repository; the steps to follow are similar for the other repositories.

Starting and stopping the OVE Docker containers

Similar to the build.sh script, the docker-compose.yml file corresponding to each repository can also be found in the topmost directory of the cloned or downloaded repository, or within a packages/PACKAGE_NAME directory corresponding to each package.

The deployment environment needs to be pre-configured before running these scripts.
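
As a minimal sketch, assuming the per-repository docker-compose.yml files read their configuration from the shell environment, the deployment environment could be pre-configured as follows; the hostname and ports are illustrative:

export OVE_HOST="192.168.0.10:8080"
export TUORIS_HOST="192.168.0.10:7080"
export OPENVIDU_HOST="192.168.0.10:4443"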

To start each individual docker container run:

SERVICE_VERSION="latest" docker-compose -f docker-compose.yml up -d

Once the services have started, you can check their status by running:

docker ps

The ps command will list containers along with their CONTAINER_ID. Then, to check logs of an individual container, run:

docker logs <CONTAINER_ID>

To stop each individual Docker container run:

SERVICE_VERSION="latest" docker-compose -f docker-compose.yml down

To clean up the Docker runtime, first stop any active instances and then run:

docker system prune
docker volume prune

Running OVE

It is recommended to use OVE with Google Chrome, as this is the web browser used for development and in production at the Data Science Institute. However, it should also be compatible with other modern web browsers: if you encounter any browser-specific bugs please report them as an Issue.

For details of how to use OVE, see the Usage page.

After installation, OVE will expose several resources that can be accessed through a web browser:

  • OVE home page http://OVE_CORE_HOST:PORT
  • App control page http://OVE_APP_HOST:PORT/control.html?oveSectionId=0
  • OVE client pages http://OVE_CORE_HOST:PORT/view.html?oveViewId=LocalNine-0
  • OVE JS library http://OVE_CORE_HOST:PORT/ove.js
  • OVE API docs http://OVE_CORE_HOST:PORT/api-docs

By default, OVE core, all apps, and all services run on localhost, which should be used in place of OVE_CORE_HOST and OVE_APP_HOST names above. The default PORT numbers are:

  • 8080 - OVE Core
  • 8081 - OVE App Maps
  • 8082 - OVE App Images
  • 8083 - OVE App HTML
  • 8084 - OVE App Videos
  • 8085 - OVE App Networks
  • 8086 - OVE App Charts
  • 8087 - OVE App Alignment
  • 8088 - OVE App Audio
  • 8089 - OVE App SVG
  • 8090 - OVE App Whiteboard
  • 8091 - OVE App PDF
  • 8092 - OVE App Controller
  • 8093 - OVE App Replicator
  • 8094 - OVE App WebRTC
  • 8180 - OVE Service Layout
  • 8190 - OVE Service Persistence (In-Memory)

The default PORT numbers of OVE dependencies are: