
MystiSOC

A short introduction to MystiSOC

MystiSOC is our Security Operations Center (SOC). It is one of the major tasks assigned to the Mysticons team by the WIMMA Lab Product Owner.

Our goal is to build a production environment and an automated Security Operations Center service that provides cyber security and threat awareness to our customers. We will research different technologies and GitOps methods for implementing and developing the system.

As a team, we are working towards three goals for our SOC:

  1. Open Source: Build as comprehensive, automated and cost-effective a high-availability SOC as possible within the given timeframe.
  2. Palo Alto oriented team: Deploy Palo Alto Prisma into its own high-availability Kubernetes cluster, meant to be an all-in-one solution for managing the SOC.
  3. GitOps: Create a pull-based CI/CD architecture in which Argo CD automatically keeps the production environment up to date with the latest changes, while Testkube runs automated tests on every update (a sketch of such an Argo CD setup follows this list).
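To make the pull architecture more concrete, the sketch below shows what an Argo CD Application manifest for this kind of setup could look like. The repository URL, path and namespaces are placeholders for illustration, not our actual configuration.

```yaml
# argocd-application.yaml - minimal sketch of a pull-based Argo CD Application;
# repoURL, path and namespaces are placeholders, not the real MystiSOC config
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-soc-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/mysticons/example-soc-app.git  # placeholder GitLab repo
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: example-soc-app
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual changes in the cluster back to the Git state
```

With `syncPolicy.automated`, Argo CD pulls the desired state from Git and keeps the cluster in sync without any push from the CI side, which is the core of the pull architecture described above.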

Our customers include other companies under WIMMA Lab: Pengwin Media, Overflow and IoTitude.

SOC structure

Current structure of the SOC (topology picture).

Second version of the SOC structure (topology picture).

First version of the SOC structure: in-depth view of the topology (picture).

Kubernetes

As the previous Mysticons teams had already used MicroK8s and written documentation for it, it was natural for us to start from MicroK8s as well. Installing it was very easy, requiring only a few commands. Furthermore, the pre-installed Calico CNI, along with addons such as the NGINX ingress controller and Argo CD, made MicroK8s simple to use.

Most problems arose from firewall misconfigurations or from unfamiliarity with where MicroK8s stores its configuration files. We were also warned that many tools, such as Testkube and the Palo Alto Defenders, would not support MicroK8s clusters. After getting more familiar with MicroK8s, we concluded that it behaves almost like standard Kubernetes.

In comparison, getting full Kubernetes up and running required significantly more effort. The installation process is well documented in the official manuals, but getting everything to work as it should required more technical knowledge than we had available at the time. We could not get Calico working, but managed to replace it with Flannel.
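One detail worth noting about the Flannel swap: Flannel's default manifest expects the pod network CIDR 10.244.0.0/16, so the cluster has to be initialized with a matching pod subnet. As a minimal sketch, assuming the cluster is bootstrapped with kubeadm (the text above does not state which installer was used), the subnet can be set in a kubeadm configuration file:

```yaml
# kubeadm-config.yaml - minimal sketch, assumes a kubeadm-based install;
# the pod subnet must match Flannel's default 10.244.0.0/16
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/16"
```

The cluster would then be initialized with `kubeadm init --config kubeadm-config.yaml`, and Flannel installed afterwards by applying its release manifest (`kube-flannel.yml`) with kubectl.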

First, we started installing applications into our clusters using plain YAML manifests, which helped us understand how Kubernetes resources work (a minimal example follows below). After that we switched to Portainer, which made application deployment and removal simple. Finally, we installed Argo CD and configured it to use our GitLab repositories for application management.
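As an illustration of that first step, the sketch below shows the kind of plain manifest we mean. The application name and image are placeholders, not part of our actual SOC setup.

```yaml
# example-app.yaml - minimal sketch of a plain manifest applied with
# `kubectl apply -f example-app.yaml`; name and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx:1.25      # placeholder image
          ports:
            - containerPort: 80
```

Writing these resources by hand is what made the later steps easier to reason about: Portainer and Argo CD ultimately manage the same kind of manifests, just through a UI or a Git repository instead of kubectl.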