How many e-mails have you received this week
talking about the “digital transformation” going on everywhere around us?
Businesses in any sector grow increasingly paranoid over digitally competent
competitors, and/or excited about the possibilities. Google “digital
transformation” and you’ll find as many definitions as search hits. The most
respectable consultancy firms will be happy to help define your strategy (and
take a lot of your money).
If there has ever been a good time to talk to the
geeks about where to take the business, now may be it.
Ammeon just celebrated its 16th birthday, and our
heritage is software development. We have been early adopters of test
automation. We have been early adopters of modern Continuous Integration,
Delivery methods and tools. We build awesome Agile teams. We architect and
build Software Defined Networks, and we build software for the Cloud. We have
worked side by side with our customers on “Automation” for years, learning and
solving problems together. And we are proud of that.
As I mentioned in my previous blog, Ammeon has a
story that still needs to be told. As we develop the strategy and plan
for this next chapter of our company’s history, we asked ourselves the question:
what is Ammeon’s role in digital transformation?
In plain English, we think it is Better Software Faster.
Do you need help with adopting Agile ways of
working to deliver more with the budget and resources you have?
Do you need help with breaking that large,
monolithic legacy software into modern software based on containers and
micro-services destined for the Cloud?
Do you need help with releasing software with
customer value more frequently? Like every day as opposed to every few months?
Our experience in software Automation can help your team test, integrate and
release software features with quality. All the time. Every time.
Some of our customers need basic solutions to get
started. Some of our customers have the most complex and demanding requirements
for a modern “software factory”.
Whatever your starting point and need, we can help
you deliver Better Software Faster.
The “Snake on the wall” technique is one that has been used by many Agile and Lean teams in various
forms for many years. In its simplest form, the scrum master draws a snake’s
head on a post-it on the wall, and as team members run into distractions,
impediments and other frictions during the course of their work, they note each one
on another post-it and join it to the head; further post-its attach to the
previous one, and so on, forming the snake’s body.
The length of the snake’s body gives an indication of how many problems there are at any moment. The scrum master collapses the snake down as the impediments are resolved.
A variant of this we tried in Ammeon was the 8 Lean Wastes Snake. Here, the Snake is drawn on a large poster in the team area. The snake is divided into 8 sections, one for each of the 8 Lean Wastes (the original 7 Lean Wastes + “Skills”):
Transport
Inventory
Motion
Waiting
Overproduction
Overprocessing
Defects
Skills
As team members run into
impediments, they place post-its on the appropriate section of the snake. The scrum master keeps
an eye on any new issues appearing and attempts to resolve them as appropriate,
perhaps also presenting back to the team at the retrospective how many issues of
each type were logged this sprint, how many were resolved, and how many remain.
Another benefit of the lean waste snake is that it can provoke interesting team discussion around addressing waste. I
have used it to spur discussion of the types of waste our team encounters that would fit into the
8 categories, and to challenge the team to think of examples (hypothetical or real) for each
category. I found this very useful in helping the team identify, and put a label on, the various friction and
pain points they encounter; it also acts as a “safety valve”.
Recently I had the opportunity to try introducing this technique to a non-software team who were at the beginning of their agile journey. They found the waste snake intriguing and worth discussing but ultimately a bit hypothetical, as they could not easily identify which section of the snake they should stick their post-its to. They also found the amount of space afforded quite limiting. For these reasons, engagement with the Snake was slow and difficult.
‘The Grid’
So we decided to iterate towards something simpler and
more user-friendly: replacing the snake with a 2×4 grid, one box for each waste,
with written examples of that category of waste in each box, examples that, crucially, the team helped
contribute themselves. Now we have lots more space for
post-its, along with written reminders of grounded examples relevant to the team.
While engagement with the new Wastes Grid is still growing,
going through the exercise of capturing the team’s own examples, with a few
reminders during the course of the sprint, helps capture and, crucially, visualise the current friction
points.
The HR congress was on this week. I couldn’t attend,
but once again AgileHR was on everyone’s lips. In my last blog, I spoke about
how adapting Agile concepts to HR can bring about significant changes in
the productivity and engagement of HR organisations. Agile sounds great, so why
are more people not adopting it? Having spoken to some of
my colleagues in the HR community, it seems that the difficulty in adopting
agile has three parts:
The Language of Agile - because of its origins in software development,
the language of Agile can seem inaccessible to non-technical people. It refers to
scrums, retros, ceremonies and sprints. It’s a lot to get through to even begin
to understand what it means for teams.
Fear of change - having met a
number of HR colleagues in training, in forums and at conferences, it’s
clear that as a discipline we have a habit of hiding behind convention and
compliance in order to keep things going as they are. One of the reasons for
this is that many HR organisations hold fast to the Ulrich Model of HR, the 4
roles and so on. But guess what? That model was built in 1995! Guess what else
happened in 1995: Amazon sold its first book and Netscape went public; we were
still more than a decade away from the iPhone! Look how much the
world has changed since then, and we are still relying on a decades-old
methodology for HR. Let’s evolve!
Where to start - HR folk by necessity
are both smart and determined. So even if they wade through all the language,
it’s hard to know where to start.
With that in mind, I am going to give you a
jargon-free kick start to Agile which you can start using today. I am going to
look at two primary concepts which will help you get off the ground: the first is
about silos in and around HR, and the second is the importance of a brief daily
meeting.
Tear down the walls
The first thing to do is to break down as many silos as you can. One of the big
issues found in software development was that software was being developed in
one area and “thrown over the wall” to be tested. The testers had no idea what
the code was or how it worked, yet were expected to make sure it worked after a
quick handover at the end of the development process. Since the advent of Agile,
and more recently DevSecOps, Developers, Testers, Security and Operations
co-create software products and solutions, with everyone involved
throughout the process so that everyone knows what’s going on. Clearly, the
same involvement is important in HR functions. A common silo exists between HR
and recruitment, with recruited candidates being “thrown over the wall” to HR to
onboard, sometimes with little knowledge of who the candidates are.
My recommendation: HR and recruitment (and in my teams, I pull in admin) should work more closely
together to ensure joined-up thinking and action in day-to-day activities.
Not only does this lead to a better candidate experience and better
onboarding, but also to a broader understanding of the roles of other departments. It
leads to a greater understanding of the broader business context, an
understanding of talent and recruitment challenges, cross-training and better
teamwork. Okay, so now you’ve kicked these silos into touch, what
next?
Daily cross-functional meetings
One of the critical aspects of Agile is the daily stand-up meeting (so called
because it’s daily and you stand up for the meeting to keep things brief!). The
success of the meeting is its fiendish simplicity. One person is designated as
the lead for the meeting, to ensure the meeting happens and to keep it moving along
by taking detailed topics offline. This person also ensures items
are tracked. Each team member speaks very briefly, covering: 1. what they did
yesterday (a brief update that also encourages people to keep their
commitments); 2. plans for the day (what they are committing to do today); 3.
blockers or help needed. In this quick meeting (15-20 minutes depending on the
size of the team) everyone learns what other team members are focusing on, can
see how they can help if required, make suggestions and bring others up to speed
with their focus areas. What I have found since I was introduced to the concept
of the stand-up meeting is that it is a very quick way to exchange a lot of
information as quick headlines rather than having to go through slides for
updates. The meetings stop duplication of effort, which can sometimes happen in
HR helpdesk environments. They also provide a broader context for the team about
the workload of each team member and allow those with a reduced workload to
step up and help. They reduce the need for lots of other meetings. For HR teams,
I insist on doing stand-ups in an office, and on a commitment from participants
that absolutely nothing confidential is mentioned outside the stand-up space
and that employee personal matters are not discussed in this format with the
broader team.
I really think your teams will enjoy the format, and if
you add pastries to the occasional meeting you will get additional brownie
points. As a manager you can check in on stress levels, happiness index or
whatever your temperature-check mechanism is. I can’t predict what will happen
in your teams, but each time I have implemented this I have seen less stress, a
greater feeling of “being on top of things” and, from teams, a sense that they
are at once empowered by managers and supported by colleagues.
Why not give those ideas a go and let me know how you get on? Next time I will talk about some common digital tools that will help with organising your team workload and improve team communication both with HR and the customer organisations you support.
By James Ryan, Head of HR and Operational Development
I was delighted to have been asked to talk about my
journey into AgileHR at the Agile
Lean Ireland event in Croke Park in Dublin
in April. Despite overwhelming imposter syndrome talking about Agile to a room
full of Product Owners and Scrum Masters, I was happy to see a connection being
made with these experts by using language and terms with which they were familiar.
What was sad from my perspective was how few HR people were at the conference
despite the HR focus. For those of us from HR who were there, it was a wake-up
call.
As a long-term HR professional, I sat in awe as
giants of the Agile world such as Barry O’Reilly and Mary Poppendieck talked about applying agile principles to
massive technology projects, and explained that these incredible achievements were less
about the tools and more the product of a people-centric mindset: teamwork,
coaching, innovation, leadership, mentoring and facilitation. Surely, I thought
to myself, this list of skills belongs on HR’s turf!
Well, I’m sorry fellow HR folks but while we have
been hiding behind compliance, admin and working on yet another iteration of
annual performance reviews the technology world has moved on without us,
developing a set of people principles which is driving development in
everything from AI to space rockets.
So how do we get back in the game? Have we missed
the boat? Well, no. The good news is there is a quiet revolution going on in
the HR world. The agile mindset is being applied to HR and is being driven by
thought leaders in this area: the great Kevin Empey, Pia-Maria Thoren and Fabiola Eyholzer are evangelising the AgileHR message to
enthusiastic audiences worldwide. It’s clear that despite being slow off the
blocks, HR professionals have all the skills and competencies to be more than
just bit players in the future of work – we can utilise our natural skills to
bring strategic value to our businesses.
Over the next few articles, I am going to be talking about how you can quickly and easily begin to introduce Agile practices to supercharge both your team’s performance and the perception of the people function within your organisation. Sounds good? Stay tuned.
By James Ryan, Head of HR and Operational Development
In this 2-day Business Agility Workshop you will use simulations to engage the intuitive as well
as the rational brain while learning about the foundations of business agility.
Date: 21st & 22nd
November, 2018
Location: Ammeon
Learning Centre, O’Connell Bridge House, Dublin 2
Business agility is more than
scaling agile (development) practices. It is more than the sum of different
organisational units that each implement their own chosen agile method on their
own little island, constrained by a traditional management system. It is a
different way of thinking about agility.
THE BUSINESS AGILITY WORKSHOP WILL GIVE YOU:
– New ways of teaching and coaching agility that engage and mobilise all levels of the organisation (including decision makers) across all functions (not just IT or software development).
– A different way of thinking about change where agility spreads virally through the (informal) network.
– Agile thinking at scale to develop unique capabilities to thrive in an ever-changing and highly competitive business landscape, built on:
ENTERPRISE FLOW that balances supply with demand from team to portfolio level,
NETWORKED COLLABORATION where highly engaged teams work, decide and learn together in a network,
ORGANISATIONAL LEARNING where experience and experiment complement each other.
DAY 1 – CORE AGILE
CAPABILITIES
Explore flow, collaboration and learning as core agile capabilities as
opposed to methods and practices that do not scale
Learn how to use simulation as a way of teaching and coaching the core
agile capabilities in a way that inspires action not just talk
Experience how to teach agility at all levels and across all functions
(not just IT) in the organisation
DAY 2 – SCALED
AGILE THINKING
Learn about enterprise flow, networked collaboration and organisational
learning as the core capabilities for business agility
Use simulation to explore scaling problems including cross-team
dependencies and balancing demand with supply in end-to-end flow
Explore the use of Customer Kanban to manage capacity downstream, and
triage and order points to shape demand upstream. Apply what has been learned
in the context of agile portfolio management.
Who is the workshop targeted at?
The workshop is
intended for agile coaches and practitioners who want to engage business as
well as IT (including decision makers) in their agile initiatives. It is a
must for coaches and trainers who want to use Okaloa Flowlab in their own training and
workshops.
Facilitator Bio
Patrick Steyaert of Okaloa, a principal lean agile coach with extensive experience in practicing Kanban, will be your workshop facilitator. Patrick received a Brickell Key Award in 2015 in recognition of his contributions to the Kanban community. He is one of the first Lean Kanban University accredited trainers, and the only Belgian one, and is a regular speaker at Kanban and agile conferences.
Lately I’ve been working with OpenShift and its source-to-image capabilities and I must say I am impressed. How impressed you ask? Impressed enough to want to write my own “Hello World” app for it. Currently, there is a simple Python app available which is used for most demos/education material. Even though the app does the job, I think that it doesn’t fully demonstrate some of the more powerful capabilities of OpenShift and OpenShift S2I. And then of course there is the result. Let’s face it, no one gets excited by seeing a “Hello World” message on a white canvas.
Ioannis Georgiou, Cloud & DevOps Consultant, Ammeon
I spent some time thinking
how to enrich the experience and came up with five criteria for the app.
Here goes:
The app has to be simple, and not simple by means of importing 100 packages.
It needs dynamic content – “Hello World” is boring.
It should connect to other APIs – that’s what you’d do in real life, isn’t it?
It should be configured – how about injecting secrets? If this fails, you’ll know – no smoke and no mirrors.
It’s not production, it’s a demo – best practices for secret handling, security and high availability are not part of this project.
With these specs in mind, and
given my past experience with Twitter from building Chirpsum, a Twitter news
aggregator, I chose it as the source of dynamic content. Consuming the Twitter
API requires configuring secrets so two more ticks. To cover the remaining two
criteria, I chose Python and Flask.
I basically built a search
app for Twitter which returns you the top 10 tweets related to a word or a phrase
and also the top 4 words/mentions that are used along with your search query.
Want to try it out yourself?
What follows is a step-by-step walk-through of deploying the app. If you’re
here just to see results, skip that section – although the deployment of the
app is the real magic.
Deploying the app
This post assumes you have an
understanding of container technologies and know what
Kubernetes, OpenShift and Docker are. If you don’t, that’s fine, you don’t
really need to. It’s just that you won’t be able to fully appreciate the
effort and time saved in DevOps. In any case, try to think of your current
path to a cloud deployment and compare it to the following.
What is OpenShift’s Source-to-Image (S2I)?
Simply put, it is a tool that
automatically builds a Docker image from your source code. Now the real
beauty of it is that for specific languages (Python being one of them)
OpenShift will pick a builder image and build the Docker image for you without
you having to define a Dockerfile. You just need to provide some configuration
details in the expected format.
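To make that concrete, here is a minimal sketch (not taken from the original post) of the kind of repository the Python builder image can turn into a running service: a requirements.txt listing Flask and a single application file. The exact entry-point conventions depend on the builder image and OpenShift version, so treat the file name and port as assumptions.

# app.py - a minimal Flask application of the kind an S2I Python builder can run
# (illustrative only; requirements.txt would contain the single line "Flask")
from flask import Flask

application = Flask(__name__)

@application.route('/')
def index():
    return 'Hello from an S2I-built container'

if __name__ == '__main__':
    # The port is an assumption; builder images commonly expose 8080
    application.run(host='0.0.0.0', port=8080)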
Prerequisites
Basic knowledge of OpenShift and Kubernetes (here’s a start)
A preconfigured OpenShift environment (you can use one of the trials if you don’t have one)
A Twitter account with a registered phone number
10 minutes
Steps
Create a Twitter app and obtain the secrets for it
Login to apps.twitter.com and create a new app
Once created, switch to the Keys and Access Tokens tab and take note of the following:
◦ Consumer Key (API Key)
◦ Consumer Secret (API Secret)
◦ Access Token
◦ Access Token Secret
Switch to Permissions and change to Read only
Deploy the app in OpenShift
Login to the OpenShift webconsole
Create a New Project:
◦ Name: twitter-search
◦ Display Name: Twitter Search
Select Python under languages (this might differ depending on the configuration of your OpenShift environment)
Select Python 2.7
Click on Show advanced routing, build, deployment and source options and fill out the form:
◦ Name: twitter-search
◦ Git Repository URL: https://gitlab.com/ammeon-public/twitter-search-s2i.git
◦ Routing: tick the Create a route to the application box
◦ Deployment Configuration: add the following environment variables (you need your Twitter secrets now – in production you should use a more secure way of injecting secrets)
• OAUTH_TOKEN=Access Token
• OAUTH_TOKEN_SECRET=Access Token Secret
• CONSUMER_KEY=Consumer Key (API Key)
• CONSUMER_SECRET=Consumer Secret (API Secret)
These environment variables will be exported in the container that the Python app will run in. The Python app then reads these variables and uses them to authenticate with the Twitter API (a short sketch of how the app consumes them follows these steps). Warning! Your browser might save these values. Make sure to either delete the app at the end, use an incognito window or clear the Auto-fill form data from your browser.
Scaling: Set the number of Replicas to 2 (this is to avoid downtime
during code updates and also increase availability of the app – these concepts
are not covered in this demo)
Click
Create.
Click Continue to overview
and wait (~2mins) for the app to build and deploy.
Note: the first build will be
considerably slower than subsequent ones as OpenShift has to build
the image and get the required Python packages. On subsequent builds, the
base image is reused, with only the source code changing (unless significant
configuration/requirement changes are made).
Once the app is deployed click on the link that appears in the top right to visit the app. (Here: http://www.example.com)
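To make the configuration step above concrete, here is a rough sketch of how an app like this might read those environment variables and call the Twitter search API. It uses the requests and requests-oauthlib libraries and the v1.1 search endpoint purely for illustration; the actual twitter-search-s2i code may use a different client and endpoints.

import os
import requests
from requests_oauthlib import OAuth1

# Read the secrets injected through the Deployment Configuration
auth = OAuth1(
    os.environ['CONSUMER_KEY'],
    os.environ['CONSUMER_SECRET'],
    os.environ['OAUTH_TOKEN'],
    os.environ['OAUTH_TOKEN_SECRET'],
)

def search_tweets(query, count=10):
    # Return the most recent tweets matching a word or phrase
    resp = requests.get(
        'https://api.twitter.com/1.1/search/tweets.json',
        auth=auth,
        params={'q': query, 'count': count},
    )
    resp.raise_for_status()
    return resp.json().get('statuses', [])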
Hello World, Twitter Style
That’s it, you’re ready to
greet the world! Just enter a word and click on the Search button.
Example Screen from the app with the search query set to “OpenShift”
Clean up
After you finish demonstrating
the app, it’s a good idea to clean up. To do so, follow these steps:
From the OpenShift webconsole:
◦ Click on the project drop-down on the top-left
◦ Select view all projects
◦ Click the bin icon on the right of the project’s name (“Delete Project”)
Go to apps.twitter.com and select your app:
◦ Scroll to the bottom and select “Delete Application”
To Wrap it up
OpenShift’s source-to-image
capability makes cloud deployment and DevOps extremely easy. For production
environments of big enterprises with complex software that needs to be
optimized at a Docker or OS level, S2I might not be optimal. But for
building and deploying simple apps it saves you the hassle of defining a
Dockerfile and the necessary deployment scripts and files (think yaml).
It just streamlines the
experience and allows the developer to focus on building the best app they can.
Thanks for reading! I hope
you’ll enjoy playing around with the app and perhaps use it as your default
demo app. Please do open pull requests if you want to contribute and of course
follow me on Twitter @YGeorgiou.
The Scrum Guide, the official
definition of Scrum, created and described the role of Product Owner (PO). The
role is described as “responsible for maximizing the value of the product and
the work of the Development Team” [1]. It’s a challenging role, as it requires
someone with technical ability, business analysis ability and the authority to make
decisions and deal with the consequences. It is often considered to be the most
difficult role in Scrum [2, 3, 4].
There are a number of tools
available that can help the Product Owner be successful. This post describes
one such tool, called the PICK chart, which can be used to aid planning and
prioritisation between the development team and the stakeholders in the
business.
PICK your Stories (and Battles)
The Scrum Guide describes the
Product Owner’s responsibility as “the sole person responsible for managing the
Product Backlog” [1]. Commonly this is interpreted to mean that the product
backlog is a one-dimensional list of tickets ranked by business value. This is
a bad idea. By ordering in this simplistic manner, some low value stories
remain untouched by the team for a very long time (sometimes years). The
stakeholders who requested this work are effectively placed at the end of a
queue only to see others skipping in front of them.
At the top of the product
backlog, a one-dimensional list also causes problems for the unwitting Product
Owner. This is because some valuable tasks are straightforward to implement and
others are complicated or have very high levels of uncertainty.
The INVEST [5] and MoSCoW [6]
techniques can help improve story refinement and prioritisation. INVEST ensures
that each story satisfies certain criteria, but it doesn’t provide a way to rank
stories once those criteria are met. MoSCoW provides a method for
managing project scope and identifying out-of-scope and at-risk items, but it tends
to be subjective, and why an item lands in one of its four categories rather than
another can be contentious.
A PICK chart, similar to the
one shown below, is a useful method of addressing the weaknesses of existing
methods while also meeting the needs of the team and stakeholders:
This two-dimensional chart
shows both the potential value and the likely difficulty of each story, dividing
the backlog into the four PICK quadrants: Possible, Implement, Challenge and Kill. The
y-axis shows the value (“payoff”) from delivering a story and ranks the highest
value at the top. This is similar to the usual lists used to display product
backlogs. The x-axis shows the effort required to deliver a story. Stories on
the left carry lower risk, as the effort required to deliver them is less than for those
further to the right.
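As an illustration of how that relative ranking can be applied (this sketch is not from the original post), the following snippet buckets stories into the four PICK quadrants from payoff and effort scores; the story names and threshold values are hypothetical and would be tuned per team.

# Hypothetical example: bucket stories into PICK quadrants
# (Possible, Implement, Challenge, Kill) by payoff and effort scores.
stories = [
    {'name': 'One-click export', 'payoff': 8, 'effort': 2},
    {'name': 'Rewrite billing engine', 'payoff': 9, 'effort': 9},
    {'name': 'Tweak admin colours', 'payoff': 2, 'effort': 1},
    {'name': 'Legacy report nobody reads', 'payoff': 1, 'effort': 8},
]

def pick_quadrant(story, payoff_cut=5, effort_cut=5):
    high_payoff = story['payoff'] >= payoff_cut
    low_effort = story['effort'] < effort_cut
    if high_payoff and low_effort:
        return 'Implement'   # high value, easy: schedule now
    if high_payoff:
        return 'Challenge'   # high value, hard: needs investigation
    if low_effort:
        return 'Possible'    # low value, easy: fill spare capacity
    return 'Kill'            # low value, hard: remove from the backlog

for story in stories:
    print(pick_quadrant(story), '-', story['name'])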
The PICK chart can be used
effectively during sprint planning to help the Development Team select stories
that can be implemented in the next sprint as well as identify work that needs
further investigation. This helps ensure the sprint does not consist entirely
of high-effort, high-risk stories.
The chart can also be used as
a visual tool to remove stories from the bottom of the backlog because they are
both low-value and technically challenging. Involving some or all of the team
in backlog grooming gives them a degree of empowerment over the work they will and
won’t take on in coming sprints. The outcome of this analysis simplifies the
conversation with stakeholders who need to be told their idea will never be
worked on. Instead of it being your opinion versus theirs, there is business
and technical justification.
The PICK chart is a powerful
tool in any Product Owner’s arsenal. By eliminating long wait times for
features that will never be delivered, it ensures that internal stakeholders
don’t waste time on false hopes. By ensuring work is delivered in each sprint,
the team are seen to be continuously reducing risk and adding value to the
product. Its visual nature and relative ranking in business and technical
dimensions mean there are fewer heated arguments between teams, stakeholders
and the Product Owner. It makes “the most difficult role” that little bit
easier.
Product Owner Training:
Ammeon are holding our 2 day
Product Owner Training Course on June 20th and 21st 2018. Learn More
The 8 Wastes Snake is a continuous
process improvement tool. It is a poster-sized sheet that allows people working
on a process to record any perceived wastes and annoyances when they occur
during process execution. This record can then be reviewed periodically by the
teams and management to identify changes that improve the process and the conditions of
the people working on it.
The Purpose Of The Waste Snake
The purpose of the waste snake is to embed a culture of continuous improvement. Allowing individuals to express their frustrations at the processes they are working on provides better information for managers to identify and eliminate wasted time, effort and money. Continuously solving these frustrations should, in turn, reduce staff dissatisfaction, increase morale and improve staff retention.
The 8 Wastes Snake is a fusion of Schlabach’s “Snake on the wall” with the “8 wastes of lean”. The 8 wastes of lean is, in turn, an extension of Ohno’s original 7 wastes (“7 muda”). The “snake on the wall” concept allowed teams to record wasted time in an immediate, visual fashion so that repeated wasteful activities could be identified and reduced or eliminated. However, it considered only lost productivity to be a waste. The 8 wastes approach uses the mnemonic “TIM WOODS” to consider various types of waste but does not provide an actionable tool to record when each type of waste is encountered. This technique seeks to build and improve on the older techniques.
TIM WOODS:
Transport
Inventory
Movement
Waiting
Over-production
Over-processing
Defects
Unused Skills
How to use it
The 8 Wastes Snake can be used as a brainstorming tool for people to record perceived wastes in a process. However, its primary purpose is to record actual wastes experienced during process execution for review at a later stage. How the tool is used varies depending on whether the team using the snake is normally co-located or distributed across one or more remote sites. It is important that the snake belongs to a single process owner (for scrum teams, this could be the Scrum Master, for example).
Co-located teams
For teams that usually work in the same
area:
Hang the poster close to the work area so that it is visible and accessible to the team.
If they do not have access to them already, provide post-its to the team.
Provide an introduction to the purpose of the snake (to identify and eliminate waste) and an overview of each type of waste.
Agree on a date for the first review (for Scrum teams, this could be part of a retrospective)
Encourage the team to record any wastes (such as time waiting for a process to complete) on a post-it stuck on the snake.
Review with the team and identify actions for improvement and actions for escalation.
Remove any wastes that have been reduced/eliminated and add “dot-votes” for ones that have been witnessed repeatedly.
Distributed / Remote teams
For teams that don’t usually work in the
same area:
Use a virtual tool to create a virtual poster where others can submit items. A team wiki or a free virtual tool like Realtimeboard can provide this functionality.
Provide an introduction to the purpose of the snake (to identify and eliminate waste) and an overview of each type of waste.
Agree on a date for the first review (for Scrum teams, this could be part of a retrospective)
Encourage the team to record any wastes (such as time waiting for a process to complete) on the same page as the snake.
Review with the team and identify actions for improvement and actions for escalation.
Remove any wastes that have been reduced/eliminated and add “dot-votes” for ones that have been witnessed repeatedly.
Ammeon Enables Cathx Ocean To Deliver Faster Through Agile-Lean Consulting
Cathx Ocean design, manufacture and supply advanced
subsea imaging and measurement systems. Software and hardware
R&D projects in Cathx were challenged by lengthy development cycles
and a lack of project visibility. In addition, Cathx was developing a business-critical
product and needed help to set up new processes to plan and execute its
delivery. Cathx selected Ammeon to help overcome these challenges.
Ammeon recommended that Cathx undertake an
Agile-Lean Start programme. Over a 6-week period Ammeon reviewed development
processes and established Agile-Lean work methods. Key improvements
achieved at Cathx Ocean during their participation in Agile-Lean Start include:
Replaced multiple processes with a single
standardised workflow and trained Cathx teams in its use.
A 79% reduction in the work
backlog in the first week through the use of visual management systems and
closer in-team collaboration.
A pilot project was brought from ideation to
delivery in less than 5 days.
“The Agile-Lean Start has been a huge leap forward for us in adopting Agile practices,” said Marie Flynn, COO, Cathx Ocean. “Planning and prioritisation of Research and Development work with the new processes and workflows is much simpler and more efficient. We now need to apply these practices to other areas and embed them in the company.”
The good folks at 3scale gave us access
to the first beta version of the on-premise API Gateway application. This
presented us with an exciting opportunity to test its applicability for a proof
of concept IoT application we’re building.
3Scale
and IoT Application
The 3scale API Gateway lets us manage access to our concept IoT application in terms of who can access the API (through an application key for registered users) and the rate at which API calls can be made.
The IoT application is a web server exposing a REST API to retrieve information from IoT devices in the field. After developing and testing locally, we realised that running the webserver on the OpenShift instance where 3Scale was running made everything simpler. This enabled all team members to access the webserver all of the time, instead of just when the development machine was connected to the company network.
The diagram below shows the stack where both 3Scale and the IoT proof of concept are deployed to OpenShift.
S2I
Build Process in OpenShift
The on-premise installation of 3Scale is an OpenShift application that we deployed from a template. For the IoT application, we created a new OpenShift application from first principles. A search of the OpenShift website returned this article, which correlated closely with what we wanted to do. We had already written the webserver using Python, specifically Flask.
The article describes how to deploy a
basic Flask application onto OpenShift from a github repository. This is a
basic use of the S2I build process in OpenShift. The S2I build is a framework
that allows a developer to use a “builder” container and source code directly
to produce a runnable application image. It is described in detail here.
After following the article on Python
application deployment and understanding the process, we forked the repo on github,
cloned it locally and changed it to reflect our existing code. We cloned the
repo instead of creating a new one because the Getting Started article,
referenced above, used gunicorn rather than the built-in python webserver and
had the configuration already in place.
Running through the process with our own
repository included the following steps:
Add to Project option from the OpenShift web console
Selected a Python builder image and set the version to 2.7
Gave the application the name webserver and pointed to the previous git URL
When the builder started, we selected Continue to Overview and watched it complete.
Using the S2I process we could
easily and repeatedly deploy a web server with more functionality than a basic
“Hello World” display.
All of the API methods were merely stubs
that returned fixed values. What we needed was a database for live data.
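For illustration only (the real repository differs in detail), a stubbed endpoint of that kind is just a few lines of Flask; each method returns a canned JSON payload until a real database is in place. The route and field names here are hypothetical.

from flask import Flask, jsonify

application = Flask(__name__)

@application.route('/devices/<device_id>/readings')
def readings(device_id):
    # Stub: return a fixed value regardless of which device is queried
    return jsonify({'device': device_id, 'co_ppm': 0.4, 'no2_ppb': 21})

if __name__ == '__main__':
    application.run(host='0.0.0.0', port=8080)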
We developed the database functionality
locally with a MySQL DB running on the local machine. When it came to deploying
this onto OpenShift, we wanted the same environment. We knew that there was an
OpenShift container for MySQL and it was straightforward to spin it up in the
same project.
Persistent
Storage in OpenShift
The nature of containers is that, by
default, they have only ephemeral storage (temporary, tied to the life of the
container). We wanted the database to persist over potential container failures
or shutdowns. This required attaching storage, known as a persistent volume to
the container. OpenShift supports different types of persistent volumes
including:
NFS
AWS Elastic Block Stores
(EBS)
GlusterFS
iSCSI
RBD (Ceph Block Device)
To progress quickly, we choose NFS storage and created an NFS share. This NFS share was then provisioned in OpenShift. This involves creating a file defining the properties of the share and running the command:
oc create -f nfs_vol5.yml
The file defines the NFS server address, export path, storage capacity and access modes for the persistent volume.
Behind the scenes, the database application creates a “claim” for a storage volume. A claim is a request for a storage volume of a certain size from the OpenShift platform. If the platform has available storage that meets the size criteria, it is assigned to the claim. If no volume meets the exact size requirements, but a larger volume exists, the larger volume will be assigned. The NFS storage we defined in Openshift met the criteria for this claim and was assigned to the application.
After the persistent volume was added to the application, we used the console tab of the container to edit the schema. We set the schema as required, but then we faced a new issue: connecting from the application to the database.
Connecting
the Application to the Database
To connect from the application to the
database requires the database-specific variables set in the database container
to be exposed in the application container. This is achieved by adding the
variables into the deployment configuration. This causes the application to
redeploy picking up the new environment variables.
Specifically, the database container is
deployed with the following environment variables:
MySQL user
MySQL password
MySQL database name
These environment variables can be set
as part of the initial configuration of the container but if they aren’t,
default values are provided.
The environment variables are set
in the application container using the following command:
oc env dc phpdatabase -e MYSQL_USER=myuser -e MYSQL_PASSWORD=mypassword -e MYSQL_DATABASE=mydatabase
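With those variables exported into the application container, the webserver can pick them up at start-up. The following is a rough sketch, assuming the PyMySQL driver and that the database service is reachable through the standard <SERVICE>_SERVICE_HOST variable or the service name mysql; the real application may use a different driver, host and table.

import os
import pymysql

# The same MYSQL_* values that were set on the deployment configuration
connection = pymysql.connect(
    host=os.environ.get('MYSQL_SERVICE_HOST', 'mysql'),
    user=os.environ['MYSQL_USER'],
    password=os.environ['MYSQL_PASSWORD'],
    database=os.environ['MYSQL_DATABASE'],
)

with connection.cursor() as cursor:
    # 'readings' is a hypothetical table name used only for illustration
    cursor.execute('SELECT COUNT(*) FROM readings')
    print(cursor.fetchone())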
The next challenge is how to turn this application into a template so that it can become a push-button deployment and we will address that in a future blog post!
The
seven wastes of Lean, when translated from the original Japanese of Taiichi
Ohno, are Transport, Inventory, Motion, Waiting, Overproduction, Overprocessing
and Defects [1]. The 8th waste, which was added later [2], is “under
used skills” and is the least mechanical and most human of all the wastes.
Often it is the most overlooked and, in my experience, the most important
waste.
Under used skills deliver no value
A
few years ago, I performed an analysis for a lean project at the Localisation
division of a major international software vendor. At the time, the
standard process used was to receive the English version of the software,
translate the strings into 26 languages, test and then release. The
process to translate took over six weeks to complete and required translators,
testers and linguists. As I examined the workflow, I discovered that the
product had zero active users in one of the languages. On
further investigation, it turned out that the company had stopped all sales and
distribution in that regional market several years previously, but sales had
failed to inform Localisation. It was a difficult day when I had to
explain to the translators and linguists that not only was their work no longer
needed, they had not added any value to the product for almost
half a decade. Thankfully, these employees were reassigned to other contracts
within the company where they were able to use their skills and experience to
add real value.
Awesome automation
On
another occasion, I discovered that a team of 10 people were performing eight
hours of post-release testing on a piece of software that they had previously
tested pre-release. These tests existed because at one point a failure in the
release process had caused a corruption on a client site. The failure had been
fixed but because no-one could be sure a separate failure might not appear,
these tests remained and were dreaded by the testers because the work was
boring and almost always pointless.
In
this case, our solution was to develop new automated tests to provide the same
function as the manual testing. The automated tests could be triggered
immediately after the release process instead of the next working day. It also
had a run time of less than 80 minutes, which was much less than
the 80 hours needed to manually run the tests. The new process
made the team happier as they could focus on more interesting work and, as part
of handover, two of the testers were trained in how to maintain and further
improve the tool.
Independence and objectiveness
At
Ammeon we offer an initial assessment of your workflows for free. We
believe that it is really important to have a regular independent objective
review of processes to identify waste.
Most of the time our analysis will show that your problems can be solved with improved
tools, improved processes and by adapting your culture to
drive toward continuous innovation. Often this will lead to a
recommendation of further training or a supported change through a Bootcamp! If this article has
inspired you to address inefficient work practices in your IT organisation,
request your free assessment by clicking here.
References
[1] T. Ohno. Toyota Production System. Productivity Press, 1988.
[2] J. Liker. The Toyota Way: 14 Management Principles from the World’s Greatest Manufacturer. New York: McGraw-Hill, 2004.
The management of Application
Programming Interfaces (APIs) is a hot topic. Discussions usually include
mention of phrases like ‘exposure of your customer data’, ‘monetizing your
underlying assets’ or ‘offering value add services to your customer base’.
If you have ‘assets’ or data which you
think may be useful to other third parties or end customers, or if you are
being driven by regulatory changes or market pressure, then an API Gateway has
to form part of your solution strategy.
An API gateway allows an organisation to
expose their internal APIs as external public facing APIs so that application
developers can access the organisation’s data and systems functions. The
capabilities of an API gateway include: management of the APIs, access control,
developer portal and monetization functionality.
There are a number of offerings in the market and in this post we focus on the 3Scale offering, one of the latest entrants into the on-premise space. 3Scale, who were acquired by Red Hat, have had an API management offering as a Software as a Service for several years and have now taken this offering and packaged it for use inside the enterprise.
The good folks at 3Scale gave us access to the first beta version and we gave it a detailed examination. In our evaluation, we look at how to install it, how it works with Red Hat OpenShift and we describe some of the interesting use cases it enables. We also share some insights and top-tips.
How easy is it to Install?
The 3Scale platform comes with several deployment options, one of which is an on-premise option. For this deployment, 3Scale utilises the Red Hat OpenShift environment. The ease of integration between the 3Scale platform and OpenShift demonstrates that Red Hat have put a lot of work into getting the API Gateway working in a containerised environment. The 3Scale platform itself is deployed within OpenShift’s containers and proved relatively easy to install and run.
The architecture of the OpenShift cluster we used was a simple single master and a single minion node, as shown below.
Basic Configuration
The servers, which came configured with
Red Hat Enterprise Linux (RHEL) 7.3 installed, have their own domain and the
API endpoints and portals are contained within it.
Top Tip: When copying the ssh key, make sure it is copied to the host that generated the key.
Otherwise, it can't ssh to itself and the installer notes an error.
Configuration
Tips
With the installation complete, the next
step was to get access to the console. This required us to edit the master configuration
file (/etc/origin/master/master-config.yaml). For our purposes (and since
we are an extremely trusting bunch), we used the AllowAll policy detailed here.
Following the edit, restart the master
by running:
systemctl restart atomic-openshift-master.service
The OpenShift console is available at https://vm-ip-address(1):8443/console/.
To administer OpenShift from the command
line, simply login to the OpenShift master node as root. Again, the
AllowAll policy means that you can log in with any name and password
combination but to keep things consistent you should use the same username all
the time.
You can then create a project for the
3Scale deployment. After this, 3Scale can be deployed within the containers
allocated by OpenShift.
(1) This is the IP Address of the master
node.
3Scale
Prerequisites
3Scale has the following prerequisites:
A working installation of
OpenShift 3.3 or 3.4 with persistent storage
An up to date version of the
OpenShift CLI
A domain, preferably
wildcarded, that resolves to the OpenShift cluster
Access to the Red Hat
container catalogue
(Optionally) a working SMTP
server for email functionality.
Persistent volumes are required by the 3Scale deployment and therefore should be allocated within OpenShift prior to deploying 3Scale. For our deployment, the persistent volumes were configured using NFS shares.
Deployment
Once the persistent storage was set up
for 3Scale, the deployment was straightforward. We were supplied a
template file for the application that just required us to provide the domain
as a parameter.
After about 25 minutes the application
was up and running and we were able to login.
A final setup step was to configure the
SMTP server; this was a simple matter of defining and exporting variables into
the OpenShift configuration.
3Scale
API Gateway: sample use case
In order to exercise the platform we
needed a use case to implement and an API to expose. We decided that an
Internet of Things (IoT) use case made a lot of sense, not least because it’s
such a hot topic right now!
So with that in mind, allow yourself to
be transported on a journey through time and space, to a world where air
pollution is actively monitored everywhere and something is actually done about
it. And consider the following scenario:
There are a number of IoT devices monitoring Air Quality and Pollution levels throughout a given geographic area.
There may be a number of different makes and types of devices monitoring, for example, carbon monoxide levels, nitrogen dioxide, particulates etc.
A micro-service architecture could be deployed on the Beta IoT platform. Each micro-service could then process its own specific API.
The 3Scale API Gateway would then be responsible for offering these APIs out to public third parties via the Internet to consume.
The 3Scale API Gateway would also be responsible for managing the external access to these micro-service APIs and providing authentication and authorisation, as well as injecting policies. Auto-scaling of micro-service resources could also be provided by the 3Scale platform in conjunction with the OpenShift environment.
The third party applications, which consume the public APIs, could then use this information to provide, for example, a Web Dashboard of pollution levels or a mobile application for users’ smartphones.
SmartCity IoT Use Case
In keeping with this, we wrote a number of user stories and scenarios around the use of the API. To implement our IOT API, which the 3Scale platform was going to ‘front’ to the outside world, we developed a basic web service written as a python flask application. The application was stored in GitHub to ease deployment to OpenShift. Rather than create a completely new project, the OpenShift example python app was forked and changed. This is the GitHub repo we used.
What do you get for your Rates?
The 3Scale platform allows you to
configure rate limits for your API. The 3Scale documentation on rate limits is here.
Rate limits are set up on a per plan per
application basis. This means that each application has a set of plans and each
plan has the capability of setting multiple limits on each method. This is done
using the Admin Portal, where specific rates can be configured. The rates work
by essentially counting the number of calls made on an API method over a time
period and then measuring this against the limit configured on the portal for a
given application. It is possible to stop a method from being used in a plan.
Examples of rate limits are:
10 method calls every minute
1000 method calls every day
20 method calls every minute with a max of 100 in an hour
When a limit is exceeded, the method returns a 403 Forbidden response and, if
selected, generates alerts via email and the 3Scale dashboard.
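From a client’s point of view the limit is easy to observe. The short script below (a sketch only; the endpoint URL and the user_key parameter name are placeholders for whatever your gateway and plan expose) calls a gateway-fronted method in a loop and reports when the plan’s limit kicks in.

import time
import requests

API_URL = 'https://api.example.com/v1/pollution/levels'   # placeholder endpoint
USER_KEY = 'your-application-key'                          # issued by the gateway

for i in range(30):
    resp = requests.get(API_URL, params={'user_key': USER_KEY})
    if resp.status_code == 403:
        # The plan's rate limit has been exceeded for this period
        print('call %d rejected: rate limit reached' % i)
        break
    print('call %d ok' % i)
    time.sleep(1)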
Authentication
The API Gateway can be configured to use
either username/password or OAuth v2.0 authentication of applications. The
username/password configuration is pretty simple but OAuth authentication is a
little more tricky.
You can reuse the existing system-redis
service and set either REDIS_HOST or REDIS_URL in the APIcast deployment
configuration (see reference).
If the gateway is deployed in a project different from where the 3Scale AMP
platform is, you will need to use a route.
Analytics
The Analytics feature of the platform
allows you to configure metrics for each of your API method calls; configuration is done via the platform dashboard. The platform will graph and show the following information:
number of hits to your API (hits can be calls to the API as a whole or broken out into individual methods on the API)
quantity in MB or GB of data uploaded and downloaded via the API
compute time associated with calls to the API
count of the number of records or data objects being returned, or total disk storage in use by an account.
Developer Portal
The Developer Portal allows users to
customise the look and feel of the entire portal in order to match
any specific branding. You have access to nearly every element of the portal, so
you can modify it to suit your own environment.
It has to be said that the documentation
around how the portal is customised could be better, but if you are an
experienced web developer it will probably be straightforward.
Integrating with Swagger
Swagger is a standard, language-agnostic
interface to REST APIs that allows both humans and computers to discover and
understand the capabilities of the service without access to source code or
documentation. 3Scale allows swagger documentation to be uploaded for the
APIs that are to be exposed.
We used this online editor to create the swagger documentation for the IOT API. The specification can be viewed here. The files above are the basic documentation for the API but require updating for use in 3Scale, and you will need to refer to this documentation to do so. The host needs to change, as does the schemes setting, and the user key needs to be added. To add the file, follow the instructions here.
Top Tip: One of our findings from working with the swagger documentation was that valid
security certificates need to be installed for the 3scale platform. When they aren't,
the swagger-generated curl requests return an error.
Documentation
Some documentation improvements could be made. For example, to provide an overall context or architecture overview. This would be a benefit as a starting point in order to provide the user with a better understanding of the different components ‘under the hood’. Some of the specifics which need to be modified (such as the Developer Portal web pages) could be explained better, with examples being provided for the most common tasks. Given that we were working on the first Beta version of the product, we’re going to assume that the documentation improvements will be in place prior to general availability.
What
we didn’t do
Due to time and project pressures, we
didn’t perform any stability, HA or performance tests, so we can only go on
what has been published elsewhere. 3Scale have stated that they carry out
performance tests in order to provide benchmark data to size the infrastructure
for given API rates. The billing mechanism wasn’t available to test, so we
weren’t able to set up any customer billing plans and therefore couldn’t
test monetization options for our fictional API.
Conclusion
Our experience of using both the
OpenShift platform and the 3Scale API Gateway was positive and informative. It
is relatively straightforward to install both OpenShift and 3Scale and get a
simple API up and running. There are a lot of ‘out of the box’ features which are
useful (and perhaps essential) if you are going to deploy an API Gateway on
your own premises. There’s also a good degree of flexibility within the platform
to set rates, integrate to backends and customise your portals.
Overall, a good experience and a good
addition to the world of API management!
The great promise of DevOps
is that organisations can reap the benefits of an automated workflow that takes
a developer’s commit from test to production with deliberate speed. Inevitably,
problems arise in the process of setting up and continuously improving a DevOps
workflow. Some problems are cultural or organisational in nature, but some are
technical.
This post outlines three
patterns that can be applied when debugging difficult failure modes in a DevOps
environment.
You Don’t Need a Debugger for Debugging
While IDEs provide developers
with the convenience of an environment that integrates code and debugging tools,
there’s nothing that says you can’t inspect running code in a staging or
production environment. While deploying an IDE, or a heavy developer-centric
package, on a live environment can be difficult (or impossible) for operational
reasons, there are lightweight CLI tools you can use to aid in
the diagnosis of issues, such as a hanging process on a staging system. Tools
such as ltrace and SystemTap/DTrace, even plain old lsof can
reveal a lot about what’s actually happening. If more visibility into what’s in
memory is needed, you can use gcore to cause a running process to generate a core dump
without killing it so that it can be subsequently analysed with gdb offline.
In the Java world, tools such
as jvmtop leverage the instrumentation capability built
inside the virtual machine (VM) to offer a view of the VM’s threads;
while jmap and VisualVM can be used to generate and
analyse a heap dump, respectively.
Quantify It
While it is frequently useful
to practice rubber duck debugging, some failure modes do not lend
themselves to a dialectic approach. This is particularly true of intermittent
failures seen on a live system whose state is not fully known. If you
find yourself thinking “this shouldn’t happen, but it does”, consider a
different approach: aggressive quantification. A spreadsheet program can, in
fact, be a debugging tool!
Gather timings, file
descriptor counts, event counts, request timestamps, etc. on a variety of
environments – not just where the problem is seen. This can be achieved by
adding logging or instrumentation to your code or tooling, or by more passive
means such as running tshark or polling the information in procfs for certain processes. Once acquired, transform the data
into CSV files, import it and plot it as time series and/or as statistical
distribution. Pay attention to the outliers. What else was happening when that
bleep occurred? And how does that fit in with your working hypothesis regarding
the nature of the issue?
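As a sketch of that workflow, assuming you have gathered per-request timings into a CSV with timestamp and duration_ms columns (the file and column names are illustrative), pandas and matplotlib make both the time series and the distribution easy to eyeball:

import pandas as pd
import matplotlib.pyplot as plt

# CSV gathered from logs/instrumentation: one row per request
df = pd.read_csv('timings.csv', parse_dates=['timestamp'])

fig, (ts_ax, hist_ax) = plt.subplots(1, 2, figsize=(12, 4))

# Time series: when did the slow requests happen?
df.plot(x='timestamp', y='duration_ms', ax=ts_ax, legend=False)
ts_ax.set_title('Request duration over time')

# Distribution: how big is the tail?
df['duration_ms'].hist(bins=50, ax=hist_ax)
hist_ax.set_title('Duration distribution')

plt.tight_layout()
plt.show()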
When All You Have Is a Hammer
Got a tricky issue that only
occurs very intermittently? A suspicion that there is some sort of
race condition between multiple streams of code execution, possibly in
different processes or on different systems, that results in things going
wrong, but only sometimes? If so, it’s hammer time! Inducing
extreme load, or “hammering the system” is an effective way to reproduce these
bugs. Try increasing various factors by an order of magnitude or more above
what is typically seen in regular integration testing environments. This can artificially
increase the period of time during which certain conditions are true, to which
other threads or programs might be sensitive. For instance, by repeatedly
serialising ten or a hundred times as many objects to/from a backing database,
you’ll increase the time during which other DB clients have to wait for their
transactions to run, possibly revealing pathological behaviours in the process.
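A minimal way to turn up the hammer, assuming an HTTP service under test at a placeholder URL, is to multiply both concurrency and request volume well beyond what regular integration tests apply:

import concurrent.futures
import requests

TARGET = 'http://staging.example.com/api/objects'   # placeholder URL
CONCURRENCY = 50          # an order of magnitude above normal test load
REQUESTS_PER_WORKER = 200

def hammer(worker_id):
    errors = 0
    for _ in range(REQUESTS_PER_WORKER):
        try:
            requests.post(TARGET, json={'worker': worker_id}, timeout=5)
        except requests.RequestException:
            errors += 1
    return errors

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(hammer, range(CONCURRENCY)))

print('total failed requests:', sum(results))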
Applying this debugging pattern goes against the natural inclinations of both developers and operations folks, as both would rather see code run at a scale that is supported! That’s precisely what makes it valuable, as it can reveal unconscious assumptions made about the expected behaviour of systems and environments.
In a recent post I blogged
about how traditional companies are being disrupted by more nimble competitors. Banking is
one sector taking an interesting approach to this challenge.
Like their
counterparts in other sectors (including telecoms, networking, media and life
sciences), banks are turning to digital transformations to
speed delivery and fend off competitive threats. Digital transformations
are about DevOps adoption and infrastructure automation using
public and private cloud.
Banks are also
adopting two other winning strategies:
Spinning up digitally-focused ‘start ups’ that are free from old
processes, oppressive corporate culture and technical debt.
Working closely with the industry’s new Fintech players.
Although not as
high-profile as the UK, Netherlands, Germany and the Nordics as a
place for Financial innovation, Ireland boasts a lot of Fintech clout.
“It makes sense that
Ireland’s Fintech community would be diverse and successful”
Since the establishment of
the IFSC in Dublin’s docklands in 1987, the Irish government has
supported the growth potential of the financial services sector. The sector is
supported by a well-developed financial and communications infrastructure, a
young, well-educated workforce, access to European markets and good transport
links to the UK and the US. Coupled with the boom in Ireland’s tech scene as a whole,
it makes sense that Ireland’s Fintech community would be diverse and
successful.
Ireland’s Fintech Map
In spite of this,
much of Ireland’s Fintech talent has gone largely unnoticed.
“Get a handle on the
ecosystem”
I discovered this
when I established my own Fintech start up and was trying to find local
partners to connect with. I struggled to get a handle on the ecosystem and to
identify the key players in each of the areas. So I started something
called Ireland’s Fintech Map.
A labour of love
(and sometimes just a labour!), the Irish Fintech Map provides a snapshot of
this ever-changing, dynamic landscape.
Recent Traction
Ireland’s Fintech
Map has been gaining traction. Recent sightings have been in an Enterprise Ireland presentation
to a group of Nordic Banks, in a meeting of the Business Ireland Kenya network and
it has been spotted doing the rounds in Hong Kong. It has been shared countless
times on LinkedIn and other social media. I also used it as part of a
presentation of the Irish Fintech scene to a contingent from Skandiabanken who were eager to learn from these
bright newcomers.
There are too many
companies featured on the map to mention, but I would like to give a shout out
to three:
CR2: A tech company whose solution stack enables banks to have omni-channel
relationships with their customers. CR2 and its BankWorld platform were
recently recommended by analyst firm Ovum, and in November CR2 announced a deal with
Jordanian bank Al Etihad. Wishing every success to new CEO Fintan Byrne.
Leveris: Provides a turn-key banking platform and is raising €15 million in
Series A funding. Good luck to CEO Conor Fennelly and team!
Rockall Technologies: Enabling banks to
better control their banking book collateral, Rockall Technologies are ranked
among the world’s top 100 risk
technology companies. This is ‘one to watch’ as it is now steered by CEO
Richard Bryce who has a great track record of driving company growth through
innovation.
FREE
Poster Delivered to Your Door
Eye
candy for Fintech fans!
Given the interest in the Irish Fintech scene Ammeon is offering to provide a FREE printed, A2 size poster version to grace your office, cubicle or bedroom wall. All you need to do is complete the form below. Please allow 10 working days for delivery.
Note:
To keep the
Ireland’s Fintech Map project manageable, I have had to employ some fairly
tight parameters and exclude the following:
Companies founded outside of Ireland, even if they have an office in Ireland or the founders are born and raised in Ireland (sorry Stripe!)
Companies that are “Coming Soon”, in “beta” – or are no longer operating.
Companies that service several industries or sectors, even if Financial Services is part of their market.
Companies that provide consulting, managed services or people-based solutions.
By Dave Anderson, Head of Consulting