Articles
Node.js testing for multiple environment variables
Some applications might require the same codebase to be tested with different sets of environment variables. This article proposes a simple way to do so.
Combining two independent git repositories
This article presents how to combine two independent and unrelated git repositories.
TDD for an Express application
TDD has been shown to significantly reduce the number of bugs in software releases. Moreover, with CI/CD systems, tests can be run automatically before the application is deployed, preventing a faulty application from reaching its end users. This guide goes through the steps required to set up a TDD workflow for an Express application.
Distributing a Helm chart on Artifact Hub
Building applications in a microservice architecture has become increasingly popular. With this design pattern, an application is composed of multiple services that run independently and generally exchange data over the network.
Getting Python's requests library to use a local DNS (Core-DNS, Docker-compose, etc.) while behind a proxy
When using Python's requests library, requests are sent through the proxy configured via environment variables. Consequently, if the DNS server that should resolve the host sits before said proxy, the host might not be resolved. This typically happens when resolving a host in Kubernetes using CoreDNS: if the request first leaves the Kubernetes cluster to reach the proxy, the DNS server becomes unreachable and the request fails.
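As a quick illustration of the problem space (not necessarily the approach the full article takes), the sketch below shows two common ways to keep requests from sending cluster-internal traffic through the proxy; the service URL is a hypothetical in-cluster name:

```python
import os
import requests

# Hypothetical in-cluster service, normally resolved by CoreDNS.
SERVICE_URL = "http://my-service.my-namespace.svc.cluster.local:8080/health"

# Option 1: exclude cluster-internal domains from proxying.
# requests honours the NO_PROXY environment variable when it reads
# proxy settings from the environment.
os.environ["NO_PROXY"] = ",".join(
    part for part in (os.environ.get("NO_PROXY", ""), ".svc.cluster.local") if part
)
response = requests.get(SERVICE_URL, timeout=5)

# Option 2: ignore the proxy environment variables entirely for this session.
session = requests.Session()
session.trust_env = False  # do not pick up HTTP_PROXY / HTTPS_PROXY
response = session.get(SERVICE_URL, timeout=5)

print(response.status_code)
```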
Self-hosted GitLab instance for DevOps on Ubuntu 18.04
GitLab provides many of the tools needed for the DevOps lifecycle of an application. In this guide, we'll install a GitLab instance on our own server and configure it to fit our DevOps needs, using a fresh install of Ubuntu 18.04 as a base.
Containerization of a Flask application
Flask can be seen as the equivalent of Express for Python. However, while an Express application is essentially production-ready, a Flask app executed by itself warns that its built-in development server should not be used in a production deployment.
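For reference, here is a minimal sketch of the kind of app in question; serving it with gunicorn is shown as one common choice, not necessarily the one the article settles on:

```python
# app.py - a minimal Flask application, shown only to illustrate the warning above.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello, world!"

if __name__ == "__main__":
    # `python app.py` starts Flask's built-in development server, which prints
    # a warning that it should not be used in a production deployment.
    app.run(host="0.0.0.0", port=5000)

# Inside a container, the app is typically served by a production WSGI server
# instead, for example:
#   gunicorn --bind 0.0.0.0:5000 app:app
```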
Deployment of a TensorFlow model to Kubernetes
Let’s imagine that you’ve just finished training your new TensorFlow model and want to start using it in your application(s). One obvious way to do so is to simply import it into the source code of every application that uses it. However, it might be more versatile to keep your model in one place as a standalone service and have applications exchange data with it through API calls. This article will go through the steps of building such a system and deploying the result to Kubernetes.
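As a rough sketch of what exchanging data through API calls can look like with a TensorFlow Serving deployment (the endpoint, model name, and input shape below are placeholders, not the article's actual values):

```python
import requests

# Hypothetical TensorFlow Serving endpoint exposed inside the cluster;
# the host, model name and input shape are placeholders.
PREDICT_URL = "http://tf-serving.default.svc.cluster.local:8501/v1/models/my_model:predict"

# TensorFlow Serving's REST API expects a JSON body with an "instances" list.
payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}

response = requests.post(PREDICT_URL, json=payload, timeout=10)
response.raise_for_status()

# The predictions come back under the "predictions" key.
print(response.json()["predictions"])
```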
Node.js DevOps example
In this article, we’ll build a simple Node.js application that uses Express to respond to HTTP requests. In order to deploy this application to production, we’ll also configure a GitLab CI/CD pipeline so as to dockerize it and deploy its container to a Kubernetes cluster.
Application containerization
Let's imagine a developer building an application on their computer, an application meant to be deployed on a different machine (the production environment). In order to execute properly, this application requires multiple libraries, binaries and packages. For example, a Python program requires the Python interpreter as well as all the imported Python modules.