GuideDevOps
Lesson 10 of 13

Network Policies & Security Groups

Part of the Security & DevSecOps tutorial series.

The Death of the Perimeter

Historically, network security operated under the "Castle and Moat" model.

The security team deployed a massive, extremely expensive hardware firewall (the moat) at the edge of the corporate data center. Anything outside the moat (the Internet) was strictly untrusted. Everything inside the moat (the internal servers, databases, and employee laptops) was implicitly trusted.

Why this failed: If a hacker managed to breach a single minor server inside the moat (like the HR intranet portal), they gained free rein to move laterally. Because the internal servers implicitly trusted one another without restrictions, the hacker could pivot seamlessly from the minor HR server straight into the highly secured financial database.

The Modern Standard: Zero Trust

In modern cloud environments, the perimeter is dead.

DevSecOps embraces the Zero Trust Architecture. The core principle is: Never trust, always verify.

Just because two microservices sit on the exact same Kubernetes cluster or within the same AWS VPC does not mean they should inherently be allowed to talk to each other. Network traffic must be explicitly allowed, rule by rule, as code.


Cloud Security Groups (AWS/Azure)

When you deploy a server (EC2 instance) on AWS, it requires a Security Group.

A Security Group is a virtual, stateful firewall implemented purely in software. It wraps around individual instances (not subnets) and controls inbound (Ingress) and outbound (Egress) traffic.

By default upon creation, a Security Group allows zero inbound traffic (on AWS, all outbound traffic is allowed until you restrict it). You must explicitly define inbound rules.

# A Terraform representation of Security as Code
resource "aws_security_group" "web_server_sg" {
  name        = "web-sg"
  description = "Allow inbound HTTP traffic"
 
  ingress {
    description = "Allow Internet to reach Web Server on port 80"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]   # The entire internet
  }
}
 
resource "aws_security_group" "database_sg" {
  name        = "db-sg"
  description = "Strict DB access"
 
  ingress {
    description     = "ONLY allow the Web Server group to talk to the DB"
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    # Notice we don't use IP addresses. We reference the logical group!
    security_groups = [aws_security_group.web_server_sg.id] 
  }
}

In the example above, even if a rogue server is spun up in the exact same subnet as the database, it cannot reach port 5432. Only instances attached to the web_server_sg security group are permitted.

This prevents lateral movement during a breach.
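The same technique applies to outbound (Egress) traffic, which AWS leaves wide open by default. As a minimal sketch (this rule is not part of the example above, and assumes the two groups defined there), the standalone aws_security_group_rule resource can restrict the web server so its only outbound destination is the database group:

# A sketch: lock down Egress too, so a compromised web server
# cannot call out to arbitrary destinations.
resource "aws_security_group_rule" "web_to_db_egress" {
  type                     = "egress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = aws_security_group.web_server_sg.id
  # For egress rules, this field names the allowed DESTINATION group
  source_security_group_id = aws_security_group.database_sg.id
}

Note that this only takes full effect once the group's default allow-all egress rule is removed.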


Kubernetes Network Policies

Kubernetes has a dangerous default networking behavior: out of the box, every Pod in a Kubernetes cluster can communicate with every other Pod, across all namespaces.

If a developer deploys a vulnerable WordPress pod in the marketing namespace, a hacker can breach it, and then instantly launch an uninhibited attack against the Payment Processing pod residing in the finance namespace.

To implement Zero Trust inside Kubernetes, you must deploy NetworkPolicies.

A NetworkPolicy is a YAML resource that acts like a micro-firewall between pods, heavily utilizing Kubernetes Labels for targeting.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-db-access
  namespace: finance
spec:
  # 1. Which pods does this firewall cover?
  podSelector:
    matchLabels:
      app: postgres-database
      
  # 2. What traffic is allowed INTO the database?
  policyTypes:
  - Ingress
  ingress:
  - from:
    # 3. ONLY allow incoming traffic from exactly this label
    - podSelector:
        matchLabels:
          app: payment-processor
    ports:
    - protocol: TCP
      port: 5432

As soon as you apply this NetworkPolicy, the database pods accept ingress only from payment-processor pods on port 5432. Packets originating from the compromised WordPress pod are simply dropped.
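A common companion pattern (a sketch, not one of the lesson's manifests) is a namespace-wide default-deny policy. An empty podSelector matches every pod in the namespace, so each new pod starts sealed off and traffic must be explicitly opened with further policies like the one above:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: finance
spec:
  # An empty podSelector matches EVERY pod in the namespace
  podSelector: {}
  policyTypes:
  - Ingress
  # No ingress rules listed = no inbound traffic is allowed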

Note: For Kubernetes NetworkPolicies to function, your cluster must be running a networking plugin (CNI) that explicitly supports them, such as Calico or Cilium. Flannel, on its own, does not enforce NetworkPolicies; applying them on such a cluster silently does nothing.
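One subtlety worth knowing: a bare podSelector inside a from block only matches pods in the policy's own namespace. To admit traffic from another namespace, combine it with a namespaceSelector in the same list item (which means both conditions must match). A sketch, assuming a hypothetical monitoring namespace scraping the database with a Prometheus exporter:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-scrapes
  namespace: finance
spec:
  podSelector:
    matchLabels:
      app: postgres-database
  policyTypes:
  - Ingress
  ingress:
  - from:
    # namespaceSelector AND podSelector in the SAME list item
    # means: only prometheus pods in the monitoring namespace
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
      podSelector:
        matchLabels:
          app: prometheus
    ports:
    - protocol: TCP
      port: 9187   # assumed exporter port, not from this lesson

Writing them as two separate list items would instead mean OR: all pods in monitoring, plus all prometheus pods in finance.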