Continuous Integration (CI) is a software development practice in which the code produced by a team of programmers or engineers is merged and checked continuously. This differs from what was done before, when people in a development team would work in their local environment, commit code to a repository, and someone else would test, approve and merge the changes.
Continuous Integration has a significant impact on value creation in software: it can reduce time-to-market while dramatically improving the quality of the output. Development cycles are shorter, which means a team can spot problems before they become unmanageable, and checks are automated, which unburdens people from that task.
When I first arrived at Interlink, seven months ago (November 2018), I was tasked with leading change in Research and Development (R&D). My first mission was reconnaissance: I had to observe and learn the processes in place before I could improve upon them.
This initial stage took me two months. It was difficult because I had moved from Brazil to the Interlink offices in Rosario, Argentina, without speaking a word of Spanish, while having to lead a team of developers. But I persevered, interfacing in English wherever possible.
As I learned how everything worked, I found it useful to write down how it all should be, even when I was fuzzy about the details. I was asked to keep an open mind, and to reimagine processes instead of adapting to them. Naturally, most of my notes were about workflow and the use of technologies.
Implementing new workflows along with a new issue tracker
During those first months the company was using Atlassian for software development and collaboration tools. Atlassian's tools are very good, and they represent a complete solution for software development companies. You know the feeling when you see an amazing jacket at a store, and then when you try it on, you realize it does not fit well? That is how we felt about Atlassian: we knew it was good, but it just did not fit us.
Back in September 2018, I had a talk with the management team about a GitLab webcast that seemed very interesting. Our main takeaway after watching it was that GitLab's goal is to help developers contribute faster and have shorter sprint cycles. I was also really impressed by their growth rate (200% per year). So after we talked about it, we decided to start using GitLab to see how it worked and whether it fit our needs.
After testing, we came up with a plan of what we needed to do before implementing GitLab, in general terms:
- Learn and adopt best practices
- Import everything we had (and needed) from Bitbucket
- Use projects with deadlines and milestones to keep everything on track
As a process separate from GitLab itself, we decided to first get our hands on the development workflow, defining all the rules and methodologies that we expect our team to follow. It seemed tedious at times, but we had the sense that it was going to be an important reference to make the change stick.
At the end of this process we had a solid workflow and we were ready to introduce the changes to the team.
We then tackled the transition head-on and started using GitLab in January 2019, moving the smaller projects from Atlassian into their new home, adjusting different preferences for our workflow, all while still testing under the free plan.
It took us that entire month to migrate it all and adjust our Continuous Integration tools. February then started with us on the 30-day free trial of the Gold plan, and with that we identified what tools we needed. It was eye-opening: we already felt more organized and comfortable than we had ever been under Atlassian's suite of tools.
Even with our planning, we had to deal with new challenges during our first months. For example, to this moment GitLab still does not allow cross-repository merge requests, so we had to add a manual merge-request pattern to our process.
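In case it is useful, here is roughly what such a manual pattern looks like, demonstrated with two throwaway local repositories standing in for the two GitLab projects (all paths, remotes and branch names below are illustrative, not our exact setup):

```shell
set -e
tmp=$(mktemp -d)

# "Source" repository: where the feature branch lives.
git init -q "$tmp/source"
cd "$tmp/source"
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "feature work"
git checkout -q -b feature/new-parser

# "Target" repository: where the merge request must be opened.
git init -q "$tmp/target"
cd "$tmp/target"
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "initial commit"

# Bring the branch over manually; afterwards you push it to the target's
# origin and open a regular single-repository merge request there.
git remote add source-repo "$tmp/source"
git fetch -q source-repo feature/new-parser
git checkout -q -b feature/new-parser FETCH_HEAD
```

The last step would normally be a `git push origin feature/new-parser`, which is what turns the imported branch into an ordinary, reviewable merge request inside one repository.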
Another problem was the pipeline quota, which on our current plan is 2,000 minutes per month. One of our projects is complex enough that we exceeded it very fast, so we set up our own runner on a local server just for that project, while the others kept using GitLab's shared runners.
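For anyone setting up something similar: routing one project's jobs to a self-hosted runner is done with runner tags. A minimal sketch (the job name and the `local-runner` tag are our own choices, not GitLab defaults):

```yaml
# .gitlab-ci.yml of the heavy project. Jobs tagged "local-runner" are
# picked up only by the runner registered on our own server, e.g.:
#   gitlab-runner register --tag-list local-runner ...
build:
  tags:
    - local-runner
  script:
    - make build
```

Projects without the tag keep running on GitLab's shared runners, so the quota is only spent where it is affordable.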
Interlink's systems mainly involve a lot of data concerning Internet Protocol (IP) traffic. Since the company offers software that helps Internet Service Providers (ISPs) do their job more efficiently, there is a lot of data related to the good provision of Internet services, including network management statistics. Usually we need to pull in and display this data as fast as possible, whether it is statistics, geolocation, access records or equipment entries.
When I first arrived, the majority of the systems were written in PHP with MySQL as the database, but the company had a clear goal of transitioning away from these toward more modern technology. Therefore, there were two new projects that I was to lead with the team: Assist HR and Strings. For these two we decided not only to use different technology, but also to do things differently.
Both projects had already cleared their conceptual stage and were in the working prototype phase. Assist HR was a prototype with NodeJS on the backend, jQuery on the frontend, and MongoDB as the database. The plan was to rebuild Assist HR with React and an improved backend. And that was exactly what we did.
The usage of jQuery was temporary, meant to produce a fast prototype that would test the main value proposition and the use cases. But why did we change from jQuery to React? I could write a full article about this, but here are a few brief key aspects of React that caught our attention:
React is a platform. Components are just an npm/yarn install away, so this makes it easy to share them, which is an advantage that increases over time. Now we can easily have the components developed on one project and imported to another one.
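As a small illustration, consuming a component maintained in another project is just a dependency entry; the package name below is hypothetical (we publish our own packages to a private registry), and the React version reflects what was current at the time:

```json
{
  "name": "assist-hr-frontend",
  "dependencies": {
    "react": "^16.8.0",
    "@interlink/shared-components": "^1.2.0"
  }
}
```

After `npm install`, an import such as `import { DeviceTable } from '@interlink/shared-components'` (a made-up component name, for illustration) works exactly like any third-party library.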
Strings, a solution for network monitoring and statistics, also benefits from this new architecture and faster development cycles. One of the main constraints we encountered with PHP and MySQL was that any time we had to load network devices on a map, the hardware requirements were significant, and so was the loading time when the number of elements was large.
Under NodeJS, we were able to confirm that its asynchronous nature is a great fit for this. Instead of loading map objects one by one, which is what we were forced to do with PHP, we can now load them all at once. This greatly reduces our loading time and cuts down the strain on our processing resources.
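To make the difference concrete, here is a minimal sketch in NodeJS; `fetchDevice` is a stand-in for our real data-access call, and the timings are illustrative:

```javascript
// Stand-in for a real query: resolves one map object after ~50 ms.
const fetchDevice = (id) =>
  new Promise((resolve) => setTimeout(() => resolve({ id, status: "up" }), 50));

// The old, PHP-style pattern: one object at a time, so total time grows
// linearly with the number of devices (~50 ms each).
async function loadOneByOne(ids) {
  const devices = [];
  for (const id of ids) {
    devices.push(await fetchDevice(id));
  }
  return devices;
}

// The asynchronous pattern: fire every request at once and await them
// together; total time stays at roughly one round-trip (~50 ms).
function loadAllAtOnce(ids) {
  return Promise.all(ids.map(fetchDevice));
}
```

A nice property of `Promise.all` is that it preserves input order, so the map layer receives the devices in the order they were requested.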
The plan is now to take this same approach to FiberMaps, a network planning and documentation solution that lets you design an Internet provision network using drag & drop objects, so you can map your network devices, draft changes and have new personnel understand the structure of what was done.
New Functionalities on top of GitLab
One thing we researched and liked a lot is the potential behind Continuous Integration (CI). Basically, CI is a development practice in which developers push code daily into a shared repository, where it is verified by an automated build that allows the team to detect problems early.
You can do a lot with CI. Right now we are using it to automatically deploy to our staging environment, with checks in place to make sure everything is up to spec.
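The pipeline for this kind of setup is short; a simplified sketch, where the stage layout, image and script names are illustrative rather than our exact configuration:

```yaml
stages:
  - test
  - deploy

checks:
  stage: test
  image: node:10
  script:
    - npm ci
    - npm test          # the "up to spec" gate: lint + unit tests

deploy_staging:
  stage: deploy
  environment: staging
  only:
    - master            # only merged work reaches staging
  script:
    - ./scripts/deploy-staging.sh   # hypothetical deploy script
```

Because `deploy_staging` sits in a later stage, it only runs once every job in `test` has passed, which is exactly the check-before-deploy behavior described above.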
We also share this with people outside the development team. For instance, our legacy network provisioning software Flowdat automatically commits the latest changes to our local Gogs repository, generates Docker images and version images, and finally sets up an instance of the project in order to execute a series of scheduled tests.
Our next goal is to have the CI server build the system and run unit and integration tests, to ensure the quality of the code. We know that things change rapidly in technology, and we have to embrace the set of tools that matches our needs today.
Sharing our thinking on software architecture is never meant to be prescriptive or universal, as its usefulness depends on your team, the moment of development you are in, and the constraints you are working with. But we find that sharing the mentality behind our changes always helps provide a new perspective. Watch this space for more on our engineering processes and future developer diaries.