How do I deploy a Python application on OpenShift?

By Geoff Newson, Senior Consultant

The good folks at 3scale gave us access to the first beta version of the on-premise API Gateway application. This presented us with an exciting opportunity to test its applicability for a proof of concept IoT application we’re building.

3scale and the IoT Application

The 3scale API Gateway lets us manage access to our proof of concept IoT application in terms of who can access the API (through an application key for registered users) and the rate at which API calls can be made. You can read a more in-depth article about the 3scale deployment in a previous blog post here.

The IoT application is a web server exposing a REST API to retrieve information from IoT devices in the field. After developing and testing locally, we realised that running the web server on the OpenShift instance where 3scale was running made everything simpler. This enabled all team members to access the web server all of the time, instead of only when the development machine was connected to the company network.

The diagram below shows the stack, with both 3scale and the IoT proof of concept deployed to OpenShift.

IoT Stack

S2I Build Process in OpenShift

The on-premise installation of 3scale is an OpenShift application that we deployed from a template. For the IoT application, we created a new OpenShift application from first principles. A search of the OpenShift website returned this article, which correlated closely with what we wanted to do. We had already written the web server in Python, specifically Flask.

The article describes how to deploy a basic Flask application onto OpenShift from a GitHub repository. This is a basic use of the S2I (Source-to-Image) build process in OpenShift. S2I is a framework that allows a developer to combine a “builder” container and source code directly to produce a runnable application image. It is described in detail here.

After following the article on Python application deployment and understanding the process, we forked the repo on GitHub, cloned it locally, and changed it to reflect our existing code. We forked the repo instead of creating a new one because the Getting Started article referenced above used gunicorn rather than the built-in Python web server and already had the configuration in place.

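To make the rest of this post concrete, here is a stripped-down sketch of such a Flask web server; the endpoint and the values it returns are invented for illustration, not our actual API:

from flask import Flask, jsonify

application = Flask(__name__)

# A hypothetical REST endpoint returning device information
@application.route('/devices/<device_id>')
def get_device(device_id):
    # At this stage our API methods were stubs returning fixed values
    return jsonify({'id': device_id, 'temperature': 21.5})

if __name__ == '__main__':
    # Built-in development server, for local testing only;
    # in OpenShift, the forked repository's gunicorn configuration serves the app
    application.run(host='0.0.0.0', port=8080)
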
Running through the process with our own repository included the following steps (a command-line equivalent is sketched after the list):


  • We stored the code in a GitHub repository
  • We created a new project for the IoT application
  • We chose the Add to Project option from the OpenShift web console:
    • Selected a Python builder image and set the version to 2.7
    • Gave the application the name webserver and pointed it at the git URL of our repository
  • When the build started, we selected Continue to Overview and watched it complete

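For reference, the same application can be created from the command line. A minimal sketch, with a placeholder project name and repository URL standing in for our own:

oc new-project iot-poc
oc new-app python:2.7~https://github.com/example/iot-webserver --name=webserver

The python:2.7~<repo> form tells OpenShift to run an S2I build, using the Python 2.7 builder image against the given source repository.
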

Using the S2I process, we could easily and repeatedly deploy a web server with more functionality than a basic “Hello World” display.

All of the API methods were merely stubs that returned fixed values. What we needed was a database for live data.

We developed the database functionality locally, with a MySQL database running on the local machine. When it came to deploying this onto OpenShift, we wanted the same environment. We knew there was an OpenShift container image for MySQL, and it was straightforward to spin one up in the same project.

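A sketch of how that spin-up might look from the command line, assuming the standard mysql-persistent template is available in the cluster (the credentials are illustrative):

oc new-app mysql-persistent -p MYSQL_USER=myuser -p MYSQL_PASSWORD=mypassword -p MYSQL_DATABASE=mydatabase
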
Persistent Storage in OpenShift

The nature of containers is that, by default, they have only ephemeral storage (temporary, and tied to the life of the container). We wanted the database to persist across container failures and shutdowns. This required attaching storage, known as a persistent volume, to the container. OpenShift supports several types of persistent volumes, including:

  • NFS
  • AWS Elastic Block Store (EBS)
  • GlusterFS
  • iSCSI
  • RBD (Ceph Block Device)

To progress quickly, we chose NFS storage and created an NFS share. This NFS share was then provisioned in OpenShift, which involved creating a file defining the properties of the share and running the command:

oc create -f nfs_vol5.yml

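The exact file contents depend on the environment; a representative nfs_vol5.yml, with an illustrative volume name, server address, export path, and capacity, looks like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vol5
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: nfs.example.com
    path: /exports/vol5
  persistentVolumeReclaimPolicy: Retain
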

Behind the scenes, the database application creates a “claim” for a storage volume. A claim is a request to the OpenShift platform for a storage volume of a certain size. If the platform has available storage that meets the size criteria, it is assigned to the claim. If no volume meets the exact size requirement but a larger one exists, the larger volume is assigned instead. The NFS storage we defined in OpenShift met the criteria for this claim and was assigned to the application.

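Such a claim is a small object in its own right. A sketch of what the database's claim might look like (the name and requested size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
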
After the persistent volume was added to the application, we used the container's console tab to set up the schema. With the schema in place, we faced a new issue: connecting from the application to the database.

Connecting the Application to the Database

Connecting from the application to the database requires the database-specific variables set in the database container to also be exposed in the application container. This is achieved by adding the variables to the deployment configuration, which causes the application to redeploy and pick up the new environment variables.

Specifically, the database container is deployed with the following environment variables:  

  • MYSQL_USER
  • MYSQL_PASSWORD
  • MYSQL_DATABASE

These environment variables can be set as part of the initial configuration of the container, but if they aren't, default values are provided.

The environment variables are set in the application container using the following command:

oc env dc webserver -e MYSQL_USER=myuser -e MYSQL_PASSWORD=mypassword -e MYSQL_DATABASE=mydatabase

For more details, refer to this article.

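On the application side, the code can then read these variables at runtime. A minimal sketch, assuming the database service is named mysql (so that OpenShift injects MYSQL_SERVICE_HOST automatically) and the MySQL-python driver is installed:

import os
import MySQLdb  # assumes the MySQL-python driver is listed in requirements.txt

# The user, password, and database values come from the oc env command above;
# the service host is injected by OpenShift for the mysql service.
connection = MySQLdb.connect(
    host=os.environ.get('MYSQL_SERVICE_HOST', 'mysql'),
    user=os.environ['MYSQL_USER'],
    passwd=os.environ['MYSQL_PASSWORD'],
    db=os.environ['MYSQL_DATABASE'],
)
cursor = connection.cursor()
cursor.execute('SELECT VERSION()')
print(cursor.fetchone())
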
Templating the Application

The next challenge is to turn this application into a template so that it becomes a push-button deployment. We will address that in a future blog post!
