Keeping Up With The Norm In An Era Of Software Defined Everything

 
 

Architecture | Armen Kojekians | 09 April 2019

Cloud technology is here and it is here to stay, irrespective of ongoing debates about its usability, security and ease of adoption. As more businesses change their technology landscape to include public cloud services, it is fair to say that the shift to public cloud is becoming the norm. There is no doubt that data protection, compliance and security are top priorities on public cloud vendors' agendas, which leaves us with the perception that we can finally shift our focus to the higher layers of the stack, application flow and business logic, leaving the other aspects of security and infrastructure management to the cloud service provider.

That all looks very promising, but according to the shared responsibility model published by one of the leading vendors, the customer still assumes a good part of the responsibility for managing the applications and services run “in” the cloud. No doubt we still have some work to do, and most of us still need some help understanding the boundaries of responsibility and the extra effort needed to make our applications and services secure.

So what has changed about our work in the cloud environment, and how does it differ from how we worked in the past? A typical data centre is built using traditional infrastructure: compute, networking and storage are assembled and set up using vendor-recommended best practices, and on top of that the owner of the data centre also has to provide physical security. In a shared hosting model, the hosting provider assumes all the costs associated with operating that environment, including the physical security costs. Adding a new client to a data centre entails upfront planning, design, capital expenses (bare-metal infrastructure provisioning) and operational costs. These factors drive up both setup costs and timeframes, which, at the current rate of change in technology, when time to market is key, can be decisive.

Moving forward to virtualisation and public cloud, some of that burden was taken on by the cloud service providers. Provisioning now mostly happens on virtualised platforms, and you can do it at the click of a button. We are now living in the era of Software Defined Everything (SDE). Yes, this term is already being used relatively frequently, and there I was thinking I had coined a new phrase!

How does the new SDE concept help me, as a cloud adopter, to manage my infrastructure more efficiently and securely?

For a start, you can plan and manage your infrastructure and services to react to external factors such as service utilisation; you can build a layer of abstraction between your application layer and storage; and you can define and enforce your own network topology, access control, routing paths and network segmentation.

Network segmentation is not a new concept. Traditionally it was achieved using physical network devices such as firewalls, switches and routers, which made it possible to define and route packets between devices in the same subnet or logical broadcast domain (VLAN). The cloud offers the flexibility and freedom to forget traditional VLAN constructs and build a logical definition of a network. Now every customer can sit on a single VLAN and have an intelligent way of controlling the communication between the different assets in that network. We are entering the age of Virtual eXtensible LANs (VXLANs), which bring new opportunities such as micro-segmentation.
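To make this concrete, the encapsulation behind this model is small enough to sketch. The VXLAN header defined in RFC 7348 carries a 24-bit VNI (VXLAN Network Identifier), which replaces the 12-bit VLAN ID and allows roughly 16 million isolated segments instead of 4094. A minimal sketch in Python, with an illustrative VNI value:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348: an 8-bit flags field
    (only the I-flag set, marking the VNI as valid), 24 reserved bits,
    a 24-bit VNI and 8 more reserved bits."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08 << 24          # I-flag in the top byte, reserved bits zero
    return struct.pack("!II", flags, vni << 8)

# A 24-bit VNI gives 2**24 = 16,777,216 possible segments.
print(vxlan_header(5001).hex())  # 0800000000138900
```

In a real deployment this header sits between an outer UDP datagram and the inner Ethernet frame; the point here is simply that segment identity becomes a field in software rather than a property of physical switch ports.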

For those who are new to the concept, micro-segmentation allows you to consolidate workloads with different security needs into separate groups of concern (as shown in Figure 1). As such, it enables two virtual machines on the same network or hypervisor host to operate under independent security contexts. In other words, MachineA can be a web server only allowing access on port 80, and MachineB can be a back-end database only allowing port 5432 access from the web server. Each of those machines has a defined security context wrapped around it. This can be achieved using the 5-tuple principle: access to those assets is controlled based on five attributes of each flow, namely source IP address, destination IP address, source port, destination port and protocol. This has to work at scale, and the security policies have to be distributable across physical networks. Sure, in the past we had big firewall and switching hardware guarding the cluster of machines, but making a change like the one above was considered high-risk and more complex than it needed to be. Now we are moving towards virtualised firewalls managed through APIs and UI-based managers.
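A minimal sketch of the 5-tuple principle in Python, with hypothetical addresses and ports (a web server on port 80, a PostgreSQL-style database on 5432): a flow is allowed only if some rule matches all five attributes, and everything else is denied by default.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    src: str                  # source network, e.g. "10.0.1.0/24"
    dst: str                  # destination network
    src_port: Optional[int]   # None matches any source port
    dst_port: int
    proto: str                # "tcp" or "udp"

def allowed(rules, src_ip, dst_ip, src_port, dst_port, proto):
    """Default-deny: a flow passes only if a rule matches all five fields."""
    for r in rules:
        if (ip_address(src_ip) in ip_network(r.src)
                and ip_address(dst_ip) in ip_network(r.dst)
                and (r.src_port is None or r.src_port == src_port)
                and r.dst_port == dst_port
                and r.proto == proto):
            return True
    return False

rules = [
    Rule("10.0.1.0/24", "10.0.1.10/32", None, 80, "tcp"),    # anyone -> web server
    Rule("10.0.1.10/32", "10.0.1.20/32", None, 5432, "tcp"), # web server -> database
]
print(allowed(rules, "10.0.1.99", "10.0.1.10", 40001, 80, "tcp"))    # True
print(allowed(rules, "10.0.1.99", "10.0.1.20", 40001, 5432, "tcp"))  # False
```

The second check fails because only the web server, not an arbitrary client, is permitted to reach the database port; that is exactly the per-workload security context described above.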

Figure 1. Network segmentation before and after micro-segmentation

In the new world, you can route traffic from VM1 to VM2 at the hypervisor level, without the need to send it all the way up to a traditional router. That saves cost, resource utilisation, network hops and latency.

BEYOND MICRO-SEGMENTATION

The concept of micro-segmentation has been actively used in shared hosting and public clouds for some time. Virtualisation and automation concepts continue to advance, and we are now entering a time where containerisation is the next norm. With it, however, come new challenges, but before we move on to these, let's take a step back and talk about what a container is.

In simple terms, a container packages an application, its configuration and its dependencies into a single object, making that application portable and allowing multiple instances and versions of it to be deployed and run on the same machine or distributed across a number of environments. Put differently, taking the same example of MachineA from above, we can run, say, 20 containers (applications) inside this single virtual machine, all doing a similar task or something completely different. With this freedom, however, come a few security considerations:

Figure 2. Intra-communication of containers on the same host

As we can see in Figure 2, containers can connect to each other inside the same virtual host machine, making their communication completely invisible to traditional firewalls and networking tools. That makes it hard for operations and security teams to monitor and inspect the traffic in order to identify malicious use of resources or attempts at lateral network movement.

In the diagram below you can see a topology in which application containers communicate across virtual machines, but from the outside it looks as if the virtual machines themselves are communicating, and it is challenging to identify which applications are in fact initiating that communication. This also limits the ability to understand and control traffic at a granular level.

Figure 3. Inter-communication of containers between hosts

Containers may also need to communicate with other machines, PaaS services or shared storage: for example, persistent storage, databases, or services accessed either internally or via the internet. That calls for careful access control and good practices to limit the attack surface, and for a robust and efficient communication and routing strategy in line with security best practices, including secure-by-design principles.

The diagram below depicts an example of how complex that landscape can become, and how the concept of micro-segmentation can route the communication efficiently to allow information exchange for these purposes.

Figure 4. Communication into and within the container world

ENTER NANO-SEGMENTATION

Nano-segmentation steps in and reaches the places micro-segmentation cannot get to. Any communication happening within a virtual machine can be intercepted and analysed before it is considered a valid communication stream. You can now group the applications in your containers into logical groups and apply security policies to them. The complexity is that you now have to manage a vast number of policies and (virtual) firewalls. Remember, a container is essentially just a process inside a virtual machine trying to communicate with another container (another process)!
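As a hedged illustration of that idea (the names below are made up for the sketch), grouping containers by label and expressing policy at the group level, rather than per address, is one way to keep that vast number of rules manageable:

```python
# Hypothetical inventory: each container (really just a process in a VM)
# carries a label placing it in a logical group.
containers = {
    "web-1": "web", "web-2": "web",
    "db-1": "db",
    "cache-1": "cache",
}

# Policies name groups, not addresses, so they survive containers being
# rescheduled or scaled out; any pair not listed is denied.
allowed_pairs = {("web", "db"), ("web", "cache")}

def allowed(src: str, dst: str) -> bool:
    """Permit in-host traffic only between explicitly allowed groups."""
    return (containers[src], containers[dst]) in allowed_pairs

print(allowed("web-2", "db-1"))    # True
print(allowed("cache-1", "db-1"))  # False
```

Scaling from two web containers to two hundred adds entries to the inventory but leaves the policy set unchanged, which is the point of segmenting by logical group.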

There is a need for an orchestration layer that allows you to manage the security policies and the logical segmentation of your application estate, and to do so consistently, relatively easily and in an automated manner.

One company which has an answer to this challenge is Twistlock. Not only have they managed to solve this problem at OSI layer 3, they have also implemented a layer 7 firewall: CNAF (Cloud Native Application Firewall).

I am not going to go into great detail, as that may well be a topic for another article, but at a high level the Twistlock architecture has four main components:

■ Intelligence Stream: threat and vulnerability information related to operating system and image security, gathered from around 30 upstream commercial providers. Examples of these providers are CIS and the National Vulnerability Database.
■ Console: used for management, and also the API endpoint for integration. All policies are defined within the console and then pushed to the defenders.
■ Defenders: deployed on every host within the environment. For example, if a customer is running a Kubernetes cluster, a defender will be deployed on every worker node. All policies are pushed from the console down to the defenders, which use them to protect the customer's workloads.
■ CI plugins: Twistlock has a native Jenkins plugin which provides rich visualisation. If your pipeline is built with another CI tool, Twistlock comes with a CLI, twistcli, which helps you integrate with those platforms.

Figure 5. Twistlock architecture


Other awesome features include:

■ image scanning for vulnerabilities and compliance throughout the lifecycle of the container;
■ runtime protection with automatic learning;
■ automatic learning and segmentation of the network, as well as a Web Application Firewall.

In Figure 6 we show how the Twistlock CNAF fits into our cloud environment. Requests are received over the host network and pass through the container network and into Twistlock before reaching our application. This allows Twistlock to let valid traffic reach our application (as shown in the top part of the diagram) or block undesirable traffic (as shown in the bottom part of the diagram).
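To illustrate what layer 7 inspection adds over the 5-tuple, here is a deliberately naive sketch (the patterns are simplistic placeholders, not how CNAF actually works): the firewall examines the HTTP request itself, not just addresses and ports, so two requests to the same port can be treated differently.

```python
import re

# Simplistic attack signatures, for illustration only; a real layer 7
# firewall uses far richer detection than a pair of regexes.
ATTACK_PATTERNS = [
    re.compile(r"\bunion\b.*\bselect\b", re.I),  # naive SQL-injection pattern
    re.compile(r"<script", re.I),                # naive XSS pattern
]

def inspect(path: str, body: str = "") -> str:
    """Allow a request unless its path or body matches a known pattern."""
    payload = f"{path} {body}"
    return "block" if any(p.search(payload) for p in ATTACK_PATTERNS) else "allow"

print(inspect("/products?id=1"))                        # allow
print(inspect("/products?id=1 UNION SELECT password"))  # block
```

Both requests above target port 80 on the same server, so a pure 5-tuple rule would admit both; only content inspection can tell them apart.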

Figure 6. Twistlock CNAF in action in our cloud environment

To conclude, we have seen how the rapid migration of applications to public cloud platforms can change the nature of our infrastructure work. While some aspects of traditional infrastructure work disappear, other tasks change and some aspects of cloud deployment and operation are new and unfamiliar. We’ve explored some of the key technologies and ideas that we need to master in order to provide robust application environments in a public cloud and hopefully provided a few pointers to guide your journey in this direction.

Armen Kojekians

Senior Infrastructure Consultant

Armen enjoys working on all things Distributed and Cloud, helping clients embrace innovation while moving existing workloads or building new products. The only clouds Armen doesn't like are the kind that keep him and his two kids indoors. He also enjoys experimenting with cooking, but admits it doesn't always produce Master Chef style results.

 
