 
 
The Development Backstory of the PetaGys CloudNode Appliance
 
How a small team bootstrapped a big solution.
 
 
A team effort.
 
 

Over the past few years, PetaCMS founder Greg Graham and an ad-hoc engineering team have been working out a novel approach to DevOps we call Mob DevOps.

Instead of individual developers working on components separately, mob DevOps involves a group of developers working on the same platform together at the same time, as if we were all one individual working 24/7. Because we were spread across time zones, and a few of us across continents, one person acts as the pilot, creating the "flight plan," while everyone else acts as co-pilots and crew members, helping to fly from point A to point B.
We wanted to give our take on why our team began mob DevOps and showcase some of the benefits we've experienced, as well as what we've learned from the process so far.
Back in 2016, one of Greg's clients, a non-profit, was in trouble. They were constantly getting hit with ransomware and were being threatened with sanctions due to compliance issues. With a limited budget, they were in dire need of a solution that would secure their environment and bring their confidential customer data into compliance, all without adding to their already over-worked IT staff head count.
First, we knew solving this required a 3-2-1 backup strategy (three copies of the data, on two different media, with one copy offsite), as well as reducing the client's public internet exposure. The compliance issues, however, seemed complex.
Second, we were starting development of a new solution from scratch, and the mob approach seemed like a good way to collaborate on a project that required a lot of thought and discussion.
We followed the same approach that we would on any other project: we iterated. We started with a simple setup, a laptop plugged into a monitor and a group of us joined via Zoom. There wasn't much of a process defined at this point in our mob DevOps journey, so we rotated pilot and crew roles through the group and took breaks when we felt like it.
Eventually we integrated with Slack so we could all see the problem better. We also realized after some time that a bit more structure enabled a fairer rotation of roles, so we now switch pilots every day or so and take a break every 3-4 pilot changes.
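That cadence is simple enough to sketch in code. Below is an illustrative Python snippet of the rotation described above (the crew names and the exact break cadence are hypothetical, not our production tooling): pilots rotate through the crew each day, and after every fourth pilot change the whole team takes a break day.

```python
from itertools import cycle

def mob_schedule(crew, days, break_every=4):
    """Return a list of (day, pilot) pairs for a simple mob rotation.

    Pilots rotate daily through the crew; after every `break_every`
    pilot changes, a break day is scheduled (pilot is None).
    """
    pilots = cycle(crew)
    schedule = []
    changes = 0
    day = 1
    while day <= days:
        schedule.append((day, next(pilots)))
        changes += 1
        day += 1
        if changes % break_every == 0 and day <= days:
            schedule.append((day, None))  # break day for the whole mob
            day += 1
    return schedule
```

For a three-person crew over six days, this yields four piloted days, one break day, then the rotation resumes where it left off.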
We quickly realized the problem wasn't as complex as we first thought; we just had to break it into smaller pieces of functionality. This required hardware and something then known simply as "cloud storage."

We quickly learned more.

Our customer had a fixed budget. As we researched public cloud storage, it became apparent that within a short time the hidden fees would grow out of control. Everyone learns something new when they "have to." Previous engagements for Fortune 500 companies had allowed for very liberal budgeting; this client was in the business of saving children's lives, and every penny saved would help a real child.

Up for the challenge?

Explaining thought processes and the particulars of a specific need also makes for a great mentoring opportunity between more experienced developers and junior colleagues. Once we all understood that the primary constraint was the budget, we went to work!
The goal was shared across the development team, and collaboration on the challenge took hold. We first took an existing, near-retirement server the customer had in inventory and repurposed it as an on-site network storage target using what we consider the world's best file management system: ZFS. It helped that ZFS was open source and did not require stifling license fees. We created APIs and automations around the ZFS-based OS to automatically copy all data written to it to another ZFS host.
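At its core, that replication automation comes down to snapshotting a dataset and streaming it to the second host with the standard `zfs send` / `zfs receive` primitives. Here is a minimal Python sketch of the idea (the dataset, host, and function names are illustrative, not our actual API); it only builds the shell pipeline, so the heavy lifting stays with ZFS itself:

```python
import shlex

def replicate_cmd(dataset, snapshot, remote_host, remote_dataset,
                  prev_snapshot=None):
    """Build the shell pipeline that replicates one ZFS snapshot.

    When prev_snapshot is given, an incremental (-i) stream is sent,
    which keeps recurring copies cheap: only changed blocks travel.
    """
    send = ["zfs", "send"]
    if prev_snapshot:
        send += ["-i", f"{dataset}@{prev_snapshot}"]
    send.append(f"{dataset}@{snapshot}")
    recv = ["ssh", remote_host, "zfs", "receive", "-F", remote_dataset]
    return (" ".join(map(shlex.quote, send))
            + " | "
            + " ".join(map(shlex.quote, recv)))
```

Running the first call produces a full-stream pipeline such as `zfs send tank/data@nightly | ssh backup01 zfs receive -F tank/replica`; subsequent runs pass the previous snapshot name to send only the delta.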
To satisfy the compliance obligations, the secondary host had to be geographically offsite, and all data in transit had to be encrypted. We found it better to build our own SD-WAN fork rather than use an off-the-shelf VPN, as it would allow for multiple off-site backup and replication locations. In doing so, the team created a Layer-2 system that streams network packets inside a tunnel at wire speed.
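To make the Layer-2 idea concrete, here is a heavily simplified Python sketch of frame encapsulation: a raw Ethernet frame is wrapped with a small tunnel header (sequence number plus frame length) before being carried across the wire. The header layout is purely illustrative, and the encryption step is deliberately omitted here; the production tunnel is considerably more involved.

```python
import struct

# Hypothetical tunnel header: 4-byte sequence number, 2-byte frame length
TUNNEL_HDR = struct.Struct("!IH")

def encapsulate(seq, frame):
    """Wrap a raw Ethernet frame for transport through the tunnel."""
    return TUNNEL_HDR.pack(seq, len(frame)) + frame

def decapsulate(packet):
    """Recover (sequence, frame) from a tunnel packet."""
    seq, length = TUNNEL_HDR.unpack_from(packet)
    frame = packet[TUNNEL_HDR.size:TUNNEL_HDR.size + length]
    return seq, frame
```

Because the frame travels intact, anything on the LAN side (ARP, broadcast traffic, the backup software's discovery protocol) works across the tunnel as if both sites shared one switch.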
This changed everything… or so we thought. We showed the customer that they could continue to use the data backup software they had already paid for, and point that same software at a remote datastore that presented itself as if it were local to their internal network. They thought this was fantastic, but asked whether we could add a public cloud as an option.

Our working environment improved

Challenges are fun! Except when dealing with the major public clouds. It became evident that these cloud environments maintained highly proprietary systems, and their VPN access was extremely expensive once data transport costs were factored in. At least, they were expensive from our customer's perspective. We had to find a workaround.
Group DevOps requires prolonged concentration, and facing a totally unique programming interface for each cloud system left developers feeling frustrated. What was needed was a single, transparent connection methodology. We were able to create this as a standardized container for each cloud: a "Cloud Native" container.
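The shape of that standardized container is one uniform storage interface, with each provider's proprietary protocol hidden behind its own adapter. A hedged Python sketch of the pattern (class and method names are hypothetical, not our actual API; the in-memory adapter stands in for a real per-cloud implementation):

```python
from abc import ABC, abstractmethod

class CloudStore(ABC):
    """Uniform storage interface; one concrete adapter runs per cloud,
    packaged and shipped in its own container."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(CloudStore):
    """Stand-in adapter for illustration; a real adapter would speak
    one provider's native storage protocol behind the same interface."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]
```

Because callers only ever see `CloudStore`, swapping one cloud for another means launching a different adapter container, not rewriting application code.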

We’ve learned that the more autonomy our development team has, the better. Our mob DevOps team created a new Private Multi-Cloud platform that did not sacrifice functionality for cost. We couldn’t.

The team is very proud of this achievement. Almost all of our contemporaries said it couldn’t be done under the cost limitations imposed by our client. Since 2018, we have continued to evolve our platform with management, monitoring, and other major functionality. When our system is demonstrated in real time, the first question asked is typically: What big software company created this? Or, who was your venture funding partner? Sometimes it’s gratifying to know that a group of people in a virtual garage can still accomplish the formerly impossible.
 
 
 
You can always contact us via the chat, or email us if you would like to discuss all things Multi-Cloud further. Read more on our blog.