
Palo Alto Prisma Cloud

Prisma Cloud is a Cloud Native Application Protection Platform (CNAPP) that secures applications from code development to cloud deployment. Palo Alto Networks gave us the chance to use this tool in our project. After a license was activated for us, we started exploring Prisma Cloud and how to use it. Since it is a cloud product, no on-premises installation was needed.

Our team decided to dedicate the staging cluster to Prisma Cloud. We were told that the Defenders, which communicate with Prisma Cloud, were not officially supported on MicroK8s and could be hard to get working. Thus we opted to create the staging cluster using full Kubernetes.

Setting up the Staging cluster

After acquiring enough knowledge about Kubernetes, we were able to start creating the cluster. Kubernetes' official documentation covers the different phases and topics well, but the amount of new information posed problems here and there. The first version of our cluster was created with one master and three workers, with the idea of adding another two masters afterwards.

Shortly after, we found out that we were not able to add more masters to the cluster, only workers. Troubleshooting this took a little time, but the problem was found in the end. With full Kubernetes, the cluster initialization needs to be done with certain command-line arguments; otherwise, the cluster only works as a single-master cluster. MicroK8s, on the other hand, automates this by adding new nodes as masters by default.
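
The arguments in question are given to kubeadm when the first control-plane node is initialized. A minimal sketch of an init command that leaves room for additional masters, with placeholder values for our environment, could look like this:

    # Initialize the first control-plane node so that more masters can join later.
    # "cluster-endpoint:6443" is a placeholder for a shared DNS name or load balancer.
    sudo kubeadm init \
      --control-plane-endpoint "cluster-endpoint:6443" \
      --upload-certs \
      --pod-network-cidr 10.244.0.0/16   # CIDR depends on the chosen CNI plugin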

With this newly found information, the cluster was rebuilt. This time around, we could add more master nodes to the cluster along with worker nodes. After setting up the nodes and running join commands on them, we had the planned topology of three masters and three workers up and running. While learning how to set up the cluster, we wrote scripts to semi-automate the setup process. This helped us run the necessary commands on the nodes faster and made the workflow more fluent.
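
For reference, a sketch of the join commands our scripts ran on the new nodes; the token, hash and certificate key are placeholders for the values printed by kubeadm init:

    # Join an additional control-plane (master) node.
    sudo kubeadm join cluster-endpoint:6443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane \
      --certificate-key <key>

    # Join a worker node: the same command without the control-plane flags.
    sudo kubeadm join cluster-endpoint:6443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>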

The next step was to install an ingress controller on the master nodes. The ingress controller is responsible for routing external traffic to the appropriate services inside the cluster. Connectivity to the cluster was tested with a few web applications.
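
As an illustration, an ingress-nginx controller can be installed with Helm roughly as follows; the chart source and namespace below are the upstream defaults rather than anything specific to our setup:

    # Install the ingress-nginx controller from its official Helm chart.
    helm upgrade --install ingress-nginx ingress-nginx \
      --repo https://kubernetes.github.io/ingress-nginx \
      --namespace ingress-nginx --create-namespace

    # Verify that the controller pod and its service are up.
    kubectl get pods,svc -n ingress-nginx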

Some issues still remained. Routing to a single application at a time worked, but because we didn't have a dedicated domain name for the testing cluster, directing traffic to more than one application proved problematic. We already had a wildcard domain name for our production cluster, so the solution we considered was deploying a reverse proxy in front of both clusters. This would have been the next step to explore if we had had more time in the project.

Deploying Defenders into the cluster

The deployment of Defenders was the next step in gaining visibility into the cluster. After reading through the documentation and exploring the Prisma Cloud interface, the first deployment was done with a daemonset YAML file. Prisma Cloud generates the file automatically from the provided settings, after which the file can be downloaded and applied to the cluster. The image below shows the basic settings view for manual Defender deployment.

Manual deployment view.
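
Applying the downloaded file followed the usual kubectl workflow. A rough sketch, assuming the generated file is saved as defender.yaml and the default twistlock namespace is used:

    # Apply the Defender daemonset generated by the Prisma Cloud console.
    kubectl apply -f defender.yaml

    # Defenders run in the "twistlock" namespace by default, one pod per node.
    kubectl get pods -n twistlock -o wide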

After the deployment, we had a problem with cluster visibility. We could see the cluster and its nodes, but no pod or container information was visible. Troubleshooting this took a while, but the answer to the problem was simple: when creating the daemonset YAML file, certain settings had accidentally been left unchecked. After finding this out, a new deployment of Defenders was needed.

The Defenders were re-deployed through Prisma Cloud's credential-based daemonset management. As seen in the image below, only the cluster's kubeconfig file was needed for Prisma Cloud to integrate with our cluster. After this, the Defenders were redistributed to the hosts.

Credential input view for automated daemonset deployment.

By default, the kubeconfig file contained the wrong cluster API server IP address: a local IP address instead of the public one. This resulted in the error that can be seen with the fourth daemonset in the picture below. By changing the default IP in the configuration file to the correct one, we partly fixed the connectivity.
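
The change itself is a single field in the kubeconfig. A sketch of making it with kubectl, with the addresses as placeholders ("kubernetes" is the default cluster name created by kubeadm):

    # Point the kubeconfig's cluster entry at the public API server address.
    kubectl config set-cluster kubernetes \
      --server=https://<public-ip>:6443 \
      --kubeconfig=./cluster-kubeconfig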

After this, an authentication error was raised. To fix it, the kubeadm configuration needed to be updated with our public IP so that the certificates could be recreated with the public IP listed as a trusted name. After the certificates were recreated, our daemonset status turned to a green “Success”, as seen in the image below.

Daemonset status “Success” in our Prisma Cloud cluster (wimmaworker). Zoo cluster with incorrect IP configuration.
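
The certificate recreation roughly followed the standard kubeadm procedure: add the public IP to the API server's certificate SANs in the kubeadm configuration and regenerate the certificate. A hedged sketch with placeholder file names and IP:

    # kubeadm-config.yaml must list the public IP under apiServer.certSANs, e.g.:
    #   apiServer:
    #     certSANs:
    #       - "<public-ip>"

    # Remove the old API server certificate and regenerate it with the new SANs,
    # then restart the kube-apiserver static pod so it picks up the new certificate.
    sudo rm /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key
    sudo kubeadm init phase certs apiserver --config kubeadm-config.yaml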

The image below shows part of the contents of “orangumaster-1” after pod visibility was obtained. The same kind of visibility into any host was achieved after the successful re-deployment of the Defenders.

Pod visibility on a host.

The last image illustrates our Prisma Cloud cluster topology. It is worth noting that our Prisma Cloud cluster was not the only location where Defenders were deployed; our project had several clusters that were used for testing purposes. This can be seen later in the “Integrating alerts” section, where many different host names appear in the alerts.

Prisma Cloud cluster topology.

Code Repository Scanning with Twistcli

Of the Prisma Cloud utilities, we tested the twistcli tool. It can scan repositories locally, which is what we experimented with. The first step was to download the tool for the correct platform. Files for different operating systems are available through Prisma Cloud’s console, as seen in the image below. For us, the correct file was the first one in the “twistcli tool” list.

Different versions of twistcli tool.

A repository containing a small Python application was cloned to be examined locally. After giving all the required arguments to the twistcli tool, a scan was performed. At first, the results showed no vulnerabilities, possibly because the application required only a couple of libraries, all recent enough to contain no known vulnerabilities.
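
The invocation looked roughly like the sketch below. The console address, credentials and repository path are placeholders, and the exact flag names should be checked against the twistcli documentation for the console version in use:

    # Scan a locally cloned repository with twistcli (illustrative flags).
    ./twistcli coderepo scan \
      --address https://<prisma-console-address> \
      --user <username> --password <password> \
      ./example-python-app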

To see what vulnerabilities would look like, we added an old version of the TensorFlow library known to have vulnerabilities. The results of the scan are shown in the image below.

Partial results for the example scan.
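
Reproducing this kind of finding is simple: pin an old release in the application’s requirements file and rerun the same scan as above. The version below is only an illustrative assumption of an old, known-vulnerable release:

    # Pin an outdated TensorFlow release in the example application, then rescan.
    echo "tensorflow==1.15.0" >> requirements.txt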

As the image shows, the vulnerabilities have been fixed in an updated version of the library, which is usually the case with other libraries as well. Another way to fix such a problem would be to use a different library. As the project time ran out, we didn’t manage to do more extensive testing and scanning with the twistcli tool.

Vulnerability Assessment of CVEs

Prisma Cloud offers a vulnerability explorer, which lists the vulnerabilities associated with all the hosts, images and containers in our Kubernetes environment. The vulnerabilities are classified according to their severity as critical, high, medium and low. The first image below shows a graph of the identified vulnerabilities on our hosts and images. The second image shows some of the most critical vulnerabilities on the hosts.

Vulnerability overview in Prisma Cloud.

Host vulnerabilities.

From Prisma Cloud we exported a CSV file to review and potentially fix vulnerabilities. We focused on vulnerabilities flagged with the “critical” or “high” severity tags. There were several dozen of these, but they were all very similar. Below are two example screenshots taken from Prisma Cloud to demonstrate some of the found vulnerabilities.

“Go” vulnerabilities.

“Curl” vulnerabilities.

After checking through the vulnerabilities, we evaluated what to do next. All of the critical and high severity vulnerabilities were in official Docker images provided by the developers, for example “nginx”, so we couldn’t do anything about them other than accept these kinds of vulnerabilities. If we had found a vulnerability in a self-made image, we could have fixed it by updating the packages we use to their newest versions. Modifying official images, however, was not in the scope of our project. As the project drew to an end, we didn’t have time to explore the lower-severity vulnerabilities.
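
For a self-made image, the fix would usually amount to rebuilding the image against a freshly patched base image and updated packages; a generic sketch with a hypothetical image name:

    # Rebuild a self-made image so it pulls the latest base image
    # and does not reuse cached layers containing old packages.
    docker build --pull --no-cache -t registry.example.com/myapp:patched .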

Integrating alerts

Prisma Cloud offers different integration options for receiving automated alerts. There were direct integrations from Prisma Cloud to applications such as Teams and Slack; another option was the use of webhooks.

Integrations overview window.

The Slack integration was done through the Prisma Cloud UI. We set the required fields and triggers to get alerted when policy violations happen. The Slack integration went through as expected, and not long after, the first alerts with incident details came through. Below are images showcasing a few alerts.

Slack integration.

Example of two host runtime alerts.

Container compliance alert 1.

Container compliance alert 2.

Image vulnerabilities categorized by severity levels and with CVE numbers.

An alert example where a pod and the container in it were purposefully accessed and modified.

We also explored and experimented with webhook implementations. Configuring these required a bit more effort compared to the integrations mentioned earlier. Integrating with Slack through a webhook required us to become more familiar with the Slack platform.
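
In practice this meant creating a Slack app with an incoming webhook and verifying that it accepts a JSON payload; a standard test with a placeholder webhook URL looks like this:

    # Send a test message to a Slack incoming webhook.
    curl -X POST -H 'Content-type: application/json' \
      --data '{"text": "Test alert from the Prisma Cloud webhook integration"}' \
      https://hooks.slack.com/services/<webhook-path>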

With this webhook, a custom JSON payload was used to pass the desired information. Due to time limitations, we couldn’t configure the alerts obtained this way comprehensively, and it was better to use the alerts automatically formatted by Prisma Cloud. The next images show the successful integration and simple test alerts. With more time, we could have configured and filtered the alerts to better suit our needs.

Slack webhook alert profile set up. Part of the custom JSON payload is displayed.

Real alerts from Prisma Cloud are marked with a red box. Under the box is a manual test alert.

One more channel implemented for alerts was our project’s Discord server, where we had a dedicated channel for alerts. The open source team had set up working notifications for their systems earlier, so it was easier for us to hook up Prisma Cloud alerts to that channel as well. To do this, we had to familiarize ourselves with the open-source SOAR tool Shuffle.

Setting up the webhook in Prisma Cloud was as straightforward as the other integrations. After setting the triggers, entering the correct webhook URL and adjusting the custom JSON payload, the connection to Shuffle was established. The next image shows the workflow interface in Shuffle and the workflow’s nodes.

Shuffle workflow interface.

The first node on the left represents the incoming webhook from Prisma Cloud to Shuffle. In the node’s options, the webhook can be turned on and off, and its URL can be found there as well.

The second node from the left receives all the information coming through the webhook, i.e. the JSON payload that Prisma Cloud sends to Shuffle. The received information is filtered in the third node from the left. There could be parallel nodes alongside this one to separate alerts into categories, for example host runtime and deployment alerts.

The fourth node takes the filtered information from the third one and formats it for Discord so it can be sent forward. Once all the nodes are connected, the chain is ready. The image below shows some example alerts on our Discord server’s alert channel. Due to lack of time, we didn’t manage to configure these alerts as far as we wanted; further configuration of them would have been the next step.

Discord alerts through the webhook.
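
For reference, the format the last Shuffle node has to produce is the ordinary Discord webhook payload. A manual test of the alert channel’s webhook, with a placeholder URL, looks roughly like this:

    # Post a test message to the Discord alert channel's webhook.
    curl -X POST -H 'Content-Type: application/json' \
      --data '{"content": "Test alert forwarded from Prisma Cloud via Shuffle"}' \
      https://discord.com/api/webhooks/<id>/<token>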

The last integration we tested was with Teams. The integration itself was straightforward to set up through Prisma Cloud. However, for a reason we never identified, no alerts came through to Teams at any point after the test message. We believe this might have something to do with our environment being in a private cloud, but there was not enough time to explore it further.

Teams integration test message.