Running Unified Push
Before Unified Push can send push messages, it must be running. This page has guides for a quick, no-dependency startup; a stable, low-dependency setup; and a full Kubernetes Operator-managed setup.
Prerequisites
- You will need a container management tool installed. Podman is available on Linux and macOS, and Docker is available on most operating systems.
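You can confirm that a suitable tool is installed before continuing; either of these commands printing a version number is enough:

podman --version
docker --version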
Quick Start
Unified Push is released as a Docker-formatted container image on quay.io. By default it launches a self-contained, in-memory instance of Unified Push.
All you need is a container management tool such as Podman or the Docker CLI.
podman run -p 9999:8080 -it quay.io/aerogear/unifiedpush-configurable-container:latest
docker run -p 9999:8080 -it quay.io/aerogear/unifiedpush-configurable-container:latest
You should now be able to access the Unified Push admin UI at http://localhost:9999. This is great for quick tests, demonstrations, and the like. If you would like to keep data between launches of Unified Push, use the other guides in this document.
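If you prefer to keep the quick-start instance running in the background, podman and the Docker CLI both accept the standard -d (detached) and --name flags; the container name ups-quickstart below is just an example:

podman run -d --name ups-quickstart -p 9999:8080 quay.io/aerogear/unifiedpush-configurable-container:latest

Stop and remove it when you are done:

podman stop ups-quickstart
podman rm ups-quickstart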
Running for app developers
If you are an app developer who wants to run Unified Push locally on your development machine, you probably want your configuration to persist between environment restarts. Additionally, you may need to configure SSL certificates for native push on some devices and operating systems.
Enabling persistence with Postgres
Unified Push uses Hibernate as an ORM layer, and the shipped container image supports Postgres. To set up its table space, Unified Push must be given a Postgres user that can create tables.
The container supports the following environment variables for connecting to a Postgres database:
Name | Description |
---|---|
POSTGRES_USER | A username to connect to Postgres |
POSTGRES_PASSWORD | A password to connect to Postgres |
POSTGRES_SERVICE_HOST | Postgres server hostname or IP address |
POSTGRES_SERVICE_PORT | Postgres server port |
POSTGRES_DATABASE | Name of the Postgres database to use |
For example, if you had the following Postgres database:
POSTGRES_SERVICE_HOST | POSTGRES_SERVICE_PORT | POSTGRES_USER | POSTGRES_PASSWORD |
---|---|---|---|
172.17.0.2 | 5432 | unifiedpush | unifiedpush |
You would run Unified Push with the following podman command:
podman run -p 8080:8080 --rm \
-e POSTGRES_USER=unifiedpush \
-e POSTGRES_PASSWORD=unifiedpush \
-e POSTGRES_SERVICE_HOST=172.17.0.2 \
-e POSTGRES_SERVICE_PORT=5432 \
-e POSTGRES_DATABASE=unifiedpush \
quay.io/aerogear/unifiedpush-configurable-container:master
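If you do not already have a Postgres server to hand, one way to create a matching database is to run the official postgres image with the same credentials. This is only a sketch: the postgres:13 tag is an arbitrary choice, and POSTGRES_DB is that image's variable for the initial database name, not a Unified Push variable.

podman run -d --name ups-postgres \
  -e POSTGRES_USER=unifiedpush \
  -e POSTGRES_PASSWORD=unifiedpush \
  -e POSTGRES_DB=unifiedpush \
  postgres:13

You can then discover the container's IP address for POSTGRES_SERVICE_HOST with podman inspect:

podman inspect ups-postgres --format '{{.NetworkSettings.IPAddress}}'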
Using SSL Certificates
Running in Production
In production, security and scalability are important concerns. Unified Push supports Keycloak for user authentication, and it is horizontally scalable if you provide an external AMQP broker such as Apache Artemis.
Using Keycloak for Authentication
Keycloak is an authentication service that provides out-of-the-box OAuth support for single sign-on. By setting the correct environment variables, Unified Push will require users to log into Keycloak before they are allowed access to the Unified Push console. Additionally, you will need to configure a Keycloak realm using our sample realm.
The container supports the following environment variables to configure Keycloak integration:
Name | Description |
---|---|
KEYCLOAK_SERVICE_HOST | Keycloak server hostname or IP address |
KEYCLOAK_SERVICE_PORT | Keycloak server port |
Given a Keycloak server with the following configuration:
KEYCLOAK_SERVICE_HOST | KEYCLOAK_SERVICE_PORT |
---|---|
172.17.0.2 | 8080 |
Unified Push would be run with the following podman command:
podman run -p 8080:8080 --rm \
-e KEYCLOAK_SERVICE_HOST=172.17.0.2 \
-e KEYCLOAK_SERVICE_PORT=8080 \
quay.io/aerogear/unifiedpush-configurable-container:master
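For a fully local test you can also run Keycloak itself in a container. The sketch below assumes the jboss/keycloak image of this era, whose KEYCLOAK_USER and KEYCLOAK_PASSWORD variables create an admin account and whose KEYCLOAK_IMPORT variable imports a realm file on startup; the ups-realm-sample.json file name stands in for wherever you saved our sample realm.

podman run -d --name ups-keycloak -p 8081:8080 \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=admin \
  -e KEYCLOAK_IMPORT=/tmp/ups-realm-sample.json \
  -v "$PWD/ups-realm-sample.json:/tmp/ups-realm-sample.json:z" \
  jboss/keycloak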
Using an external AMQP broker
Unified Push uses JMS to schedule communication with native push services such as Firebase or APNS. By default, Unified Push runs its own JMS broker, but it can use an external message broker that supports the AMQP specification, such as EnMasse or Apache Artemis. Using an external broker lets you spread the workload of sending messages across several Unified Push instances. If the broker user has sufficient permissions, Unified Push will create the messaging resources it needs; otherwise, they should be created beforehand.
The Unified Push container uses the following variables to define and enable an external AMQP broker connection:
Name | Description |
---|---|
ARTEMIS_USER | A username to connect to an AMQP server |
ARTEMIS_PASSWORD | A password to connect to an AMQP server |
ARTEMIS_SERVICE_HOST | AMQP server hostname or IP address |
ARTEMIS_SERVICE_PORT | AMQP server port |
AMQ_MAX_RETRIES | (optional) Number of times to retry sending a push message before discarding the JMS message. Default: 3 |
AMQ_BACKOFF_SECONDS | (optional) Number of seconds to delay before retrying a JMS message. Default: 10 |
If you wished to connect to the following Artemis acceptor:
ARTEMIS_SERVICE_HOST | ARTEMIS_SERVICE_PORT | ARTEMIS_USER | ARTEMIS_PASSWORD |
---|---|---|---|
172.17.0.9 | 61616 | messageuser | messagepassword |
you would run the following podman command:
podman run -p 8080:8080 --rm \
-e ARTEMIS_SERVICE_HOST=172.17.0.9 \
-e ARTEMIS_SERVICE_PORT=61616 \
-e ARTEMIS_USER=messageuser \
-e ARTEMIS_PASSWORD=messagepassword \
quay.io/aerogear/unifiedpush-configurable-container:master
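Because the broker coordinates the JMS work, several Unified Push instances can share the same acceptor. For example, a second instance published on another host port uses exactly the same broker settings; the container name and the 8081 host port below are arbitrary:

podman run -d --name ups-2 -p 8081:8080 \
  -e ARTEMIS_SERVICE_HOST=172.17.0.9 \
  -e ARTEMIS_SERVICE_PORT=61616 \
  -e ARTEMIS_USER=messageuser \
  -e ARTEMIS_PASSWORD=messagepassword \
  quay.io/aerogear/unifiedpush-configurable-container:master

Push messages queued through the broker are then spread across both instances.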
Running with Operator
The UnifiedPush Server can be installed and run on OpenShift by using the UnifiedPush Operator. The Operator creates the UnifiedPush Server from Custom Resource YAML files, defined to your specifications, in the operator's deploy directory.
Prerequisite installations required for the UnifiedPush Operator
- Set your $GOPATH environment variable.
- Install the dep package manager (a sketch of both steps follows).
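As a sketch, on a typical Linux or macOS machine those two steps might look like this; the $HOME/go location is only a common convention, and go get was one of dep's documented installation methods:

export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
go get -u github.com/golang/dep/cmd/dep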
note
Currently, the UnifiedPush Operator is only supported by v0.10.1 of the Operator SDK.
Configuration
We recommend that you do not change the UnifiedPush Server image and the PostgreSQL image used by the UPS Operator. Each released version of the UPS Operator is supported with the versions of the UnifiedPush Server and PostgreSQL servers released alongside it.
note
The images used can be found in the constants.go file.
The UnifiedPush Server does have configurable fields that can be defined in the Custom Resource YAML file used in the make install command of the UnifiedPush Operator Makefile. Here are the configurable fields of a UnifiedPush Server:
Field Name | Description | Default |
---|---|---|
Backups | A list of backup entries; CronJobs will be created from these. See here for an annotated example. | No backups |
useMessageBroker | Can be set to true to use managed queues, if you are using enmasse. | false |
unifiedPushResourceRequirements | Unified Push Server container resource requirements. | limits: memory: "value of UPS_MEMORY_LIMIT passed to operator" cpu: "value of UPS_CPU_LIMIT passed to operator" requests: memory: "value of UPS_MEMORY_REQUEST passed to operator" cpu: "value of UPS_CPU_REQUEST passed to operator" |
oAuthResourceRequirements | OAuth Proxy container resource requirements. | limits: memory: "value of OAUTH_MEMORY_LIMIT passed to operator" cpu: "value of OAUTH_CPU_LIMIT passed to operator" requests: memory: "value of OAUTH_MEMORY_REQUEST passed to operator" cpu: "value of OAUTH_CPU_REQUEST passed to operator" |
postgresResourceRequirements | Postgres container resource requirements. | limits: memory: "value of POSTGRES_MEMORY_LIMIT passed to operator" cpu: "value of POSTGRES_CPU_LIMIT passed to operator" requests: memory: "value of POSTGRES_MEMORY_REQUEST passed to operator" cpu: "value of POSTGRES_CPU_REQUEST passed to operator" |
postgresPVCSize | PVC size for Postgres service | Value of POSTGRES_PVC_SIZE environment variable passed to operator |
If you do not define values for these fields in the UnifiedPush Server CR, the UnifiedPush Operator will use default values that are passed to the operator as environment variables. If no environment variable is passed either, it will use hardcoded values. The default values for resource sizes, limits, and requests are shown in the following table:
Variable | Default Value |
---|---|
UPS_MEMORY_LIMIT | 2Gi |
UPS_MEMORY_REQUEST | 512Mi |
UPS_CPU_LIMIT | 1 |
UPS_CPU_REQUEST | 500m |
OAUTH_MEMORY_LIMIT | 64Mi |
OAUTH_MEMORY_REQUEST | 32Mi |
OAUTH_CPU_LIMIT | 20m |
OAUTH_CPU_REQUEST | 10m |
POSTGRES_MEMORY_LIMIT | 512Mi |
POSTGRES_MEMORY_REQUEST | 256Mi |
POSTGRES_CPU_LIMIT | 1 |
POSTGRES_CPU_REQUEST | 250m |
POSTGRES_PVC_SIZE | 5Gi |
The crds directory in the UnifiedPush Operator contains YAML files that can be used as examples of how to configure the different UnifiedPush fields. To try these YAML files out for yourself, simply change the name of the Custom Resource YAML file at line 76 of the Makefile from
- kubectl apply -n $(NAMESPACE) -f deploy/crds/push_v1alpha1_unifiedpushserver_cr.yaml
to
- kubectl apply -n $(NAMESPACE) -f deploy/crds/push_v1alpha1_unifiedpushserver_cr_with_backup.yaml
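As an illustration, a minimal Custom Resource combining some of the fields above might look like the following. This is a sketch only: the apiVersion is inferred from the push_v1alpha1 file-name convention, the metadata name is arbitrary, and the files in the crds directory remain the authoritative examples.

cat <<EOF > deploy/crds/my_unifiedpushserver_cr.yaml
apiVersion: push.aerogear.org/v1alpha1
kind: UnifiedPushServer
metadata:
  name: example-unifiedpushserver
spec:
  useMessageBroker: false
  unifiedPushResourceRequirements:
    limits:
      memory: 2Gi
      cpu: "1"
    requests:
      memory: 512Mi
      cpu: 500m
EOF

Pointing line 76 of the Makefile at this file will apply it on the next make install.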
The names of the containers deployed by the UnifiedPush Operator can also be modified; the following table shows the environment variables and the default names for the containers.
Name | Default |
---|---|
UPS_CONTAINER_NAME | ups |
OAUTH_PROXY_CONTAINER_NAME | ups-oauth-proxy |
POSTGRES_CONTAINER_NAME | postgresql |
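Because these are read from the operator's environment, one way to override them during local development is to export them before starting the operator; this sketch assumes you run the operator with make code/run as described in the Development section below.

export UPS_CONTAINER_NAME=my-ups
export POSTGRES_CONTAINER_NAME=my-postgresql
make code/run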
Development
UnifiedPush can be easily installed and maintained on OpenShift by using the UnifiedPush Operator, which makes managing the UnifiedPush Server and its database a seamless process. There are a number of prerequisites to getting started with the UnifiedPush Operator.
Operator Installation and Usage
The UnifiedPush Operator needs to be downloaded and installed in the github.com directory inside your Go directory. You can clone it and add it to your $GOPATH with the following command:
git clone git@github.com:aerogear/unifiedpush-operator.git $GOPATH/src/github.com/aerogear/unifiedpush-operator
Once this is finished, open a terminal in that directory and install the operator with:
make install
This command is defined in the Operator's Makefile.
note
To install the operator, you must be logged in as a user with cluster privileges.
With the Operator installed, switch to that project with the oc project unifiedpush command. Then prepare the cluster and install all prerequisites for the UnifiedPush Operator:
make cluster/prepare
Once this is finished, you can start the UnifiedPush Operator locally with the command:
make code/run
In another terminal, you can use the oc get route command to get the host address of the UnifiedPush Server admin console on OpenShift. You will need to log in using your OpenShift username and password.
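For example, to list the routes in the project, or to print only the host of one route; the route name here is a placeholder, so use one of the names printed by the first command:

oc get routes -n unifiedpush
oc get route <route-name> -n unifiedpush -o jsonpath='{.spec.host}'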
If you are finished using the Operator and want to remove it from your cluster and clean up, you can use the command:
make cluster/clean
note
The Makefile commands for the UnifiedPush Operator can be explored further here.
Monitoring UnifiedPush Operator & Server
Monitoring of the UnifiedPush Operator and UnifiedPush Server can be done using the integr8ly Grafana Operator and the Prometheus Operator. These can be installed through OperatorHub onto your cluster.
The UnifiedPush Operator will install its own monitoring resources required by Grafana and Prometheus on startup, and will install the resources required for monitoring the UnifiedPush Server on creation of the UnifiedPushServer CR.
note
These will be ignored if the required CRDs are not installed on the cluster. Restart the operator to install the resources if the application-monitoring stack is deployed afterwards.
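One quick way to check whether those CRDs are present is to look for the Grafana and Prometheus Operator API groups; the group names below are the usual ones for the integr8ly Grafana Operator and the Prometheus Operator:

oc get crds | grep -E 'integreatly.org|monitoring.coreos.com'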
Further Reading
Further information documenting testing the Operator, publishing images, tagging releases, and the operator architecture can be found in the UnifiedPush Operator README on GitHub.