Financial services increasingly rely on cloud-native workloads to process highly sensitive data, from credit scoring to fraud detection. Encryption at rest and in transit is well established, but data in use is typically exposed to the host itself and remains susceptible to host-level and insider attacks.


Confidential Kubernetes combines Google Kubernetes Engine (GKE) with Confidential VMs that use hardware-based Trusted Execution Environments (TEEs). TEEs add two key protections: memory encryption, which keeps data safe while it’s being used, and workload attestation, which makes sure apps only run in trusted environments. This setup lets sensitive apps run securely, meeting strict rules in areas like finance, healthcare, and government.


1. The Runtime Data Security Gap

Most cloud deployments secure data at rest (disk/database encryption), data in transit (TLS/mTLS), and access (IAM/RBAC). The blind spot is data in use: the moment plaintext lives in RAM while code executes.


What “data in use” really looks like


Why normal protections aren’t enough


Real risks when data is in memory

Even after you lock down disks, networks, and access, information in RAM can still leak. While data is in this state, attackers can:


Why this matters for banks and finance firms

For financial companies, leaking data from memory isn’t just a technical issue — it can break laws and trigger fines:


What’s needed to close the gap

To really protect data while it’s being used, systems need:

  1. Encrypted memory – so even someone with root access can’t see the data.
  2. Proof of safety (attestation) – keys and secrets should only unlock if the system is running in a trusted state.
  3. Conditional key release – services like Cloud KMS should only give out keys after the system proves it’s secure.
  4. Strong guardrails – use signed images, minimal permissions, no memory dumps in production, and careful logging.
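The "no attestation, no keys" idea in points 2 and 3 can be sketched in a few lines of shell. This is a toy stand-in, not a real API: the payload and claim names below are illustrative assumptions, and a real flow would first verify a signed token from an attestation verifier before any key service is called.

```shell
#!/bin/sh
# Toy sketch of conditional key release: decode a (stand-in) attestation
# claim and only "release" a key when the claims say the node is a
# Confidential VM. All values here are invented for illustration.
payload='{"hwmodel":"GCP_AMD_SEV","secboot":true}'
encoded=$(printf '%s' "$payload" | base64)     # stands in for a JWT payload
decoded=$(printf '%s' "$encoded" | base64 -d)  # what a verifier would inspect

case "$decoded" in
  *'"hwmodel":"GCP_AMD_SEV"'*) echo "attestation ok: release key" ;;
  *) echo "attestation failed: no key" >&2; exit 1 ;;
esac
```

The design point is that the key-release decision is driven by verified claims about the environment, never by which caller merely asks for the key.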


2. Confidential Computing

The state that is most often ignored is data in use, when apps are actively working with data in memory (RAM). In this state, attackers with deep access, such as a compromised OS or a malicious admin, can still read sensitive information. Confidential Computing protects data in use with Confidential VMs, Confidential GKE, Confidential Dataflow, Confidential Dataproc, and Confidential Space.


What is a TEE (Trusted Execution Environment)?

A TEE is a hardware-isolated secure area inside the CPU that shields code and data from the rest of the system, including the operating system and hypervisor.

With that, apps can handle very sensitive data while lowering the chance of leaks.


Main TEE technologies

Use cases: secure APIs, protecting algorithms, validating financial transactions, password/key handling.

Limitation: Enclaves offer limited memory and require code changes or special SDKs, which adds development complexity.


Use cases: run unmodified workloads safely, protect databases or containers, keep admins or attackers from reading VM memory.

Limitation: There is a small performance overhead at the VM level, and protection cannot be scoped to individual pieces of code; the whole VM is the trust boundary.


How Google Cloud uses TEEs


Google Cloud provides several options built on these technologies:

3. Why Confidential Kubernetes Matters

Kubernetes is the standard way to run modern apps. It gives portability and scale, but it wasn’t designed to fully protect data in memory.

In a normal Kubernetes setup:

For industries like banking, healthcare, or government, these are serious risks, not just theory.


How Confidential GKE Helps

Running workloads on Confidential GKE Nodes (which are built on Confidential VMs) adds hardware-level security:

  1. Encrypted memory – all data in RAM is encrypted by the CPU. Even root-level attackers can’t read it.
  2. Proof of trust (attestation) – before workloads start, the node shows cryptographic proof it’s secure and unmodified.
  3. Safe key access – Cloud KMS only releases secrets if the node’s attestation check passes. No attestation = no keys.
  4. Layered defense – works with Binary Authorization, Workload Identity, and RBAC to create stronger overall security.

Confidential Kubernetes upgrades a normal GKE cluster into a trusted environment for sensitive workloads — letting regulated industries run critical apps safely in the cloud.


4. Architecture Overview 

The Confidential Kubernetes architecture on GCP introduces multiple layers of security to protect data while in use, without requiring major changes to existing workloads.


The flow can be broken down into five key stages:

Workload Build

As shown in Figure 1, workloads in a Confidential GKE cluster follow a secure flow: the init container handles attestation and key requests, the app processes data only after secrets are in memory, and observability services capture logs.


Deployment on Confidential GKE Nodes


Remote Attestation


The step-by-step handshake is illustrated in Figure 2. The init container proves the node’s state to the verifier, KMS releases keys only if validation succeeds, and the app container runs once secrets are available.


Data Processing


Results Output


Supporting Google Cloud Services


High-Level Flow

                              Figure 1: High-Level Flow of Confidential GKE Workloads

The diagram shows how a transaction flows from the client to a Confidential GKE node. The init container verifies the environment and requests the keys. Only after secrets are safely loaded into memory does the app container start processing data. In parallel, logs are exported for monitoring and visibility.
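The init-to-app handoff in this flow can be mimicked locally with a few lines of shell. The paths and variable names below are illustrative assumptions; in the cluster, an emptyDir volume with medium "Memory" plays the role of the temp directory.

```shell
#!/bin/sh
# Local sketch of the Figure 1 handoff: an "init" step writes decrypted
# secrets to a memory-backed location, then the "app" step sources them.
secure=$(mktemp -d)                       # stands in for the tmpfs emptyDir

# Init-container step: in the real flow this runs only after attestation
# succeeds and KMS has released the key material.
printf 'DB_PASSWORD=s3cret\n' > "$secure/creds.env"

# App-container step: loads secrets from the memory-backed file, so nothing
# sensitive is baked into the image or written to persistent disk.
. "$secure/creds.env"
echo "app started; secret length: ${#DB_PASSWORD}"
```

The point of the pattern is that secrets exist only in memory for the pod's lifetime and never touch the container image or a persistent volume.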


5. Attestation & Key Release Sequence

                                 Figure 2: Attestation & Key Release Sequence

This sequence shows how the init container talks to the attestation service and Cloud KMS. The app only starts if the node passes validation and KMS releases a key. Secrets are kept in memory, and the process is logged for visibility.
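The handshake can be summarized as three stubbed steps. Every function here is a toy stand-in invented for illustration; the real sequence uses the node's hardware-backed evidence, a verifier service, and Cloud KMS.

```shell
#!/bin/sh
# Stubbed walk-through of the attestation & key release sequence.
get_quote()    { echo "quote:node-ok"; }                                   # node produces evidence
verify_quote() { [ "$1" = "quote:node-ok" ] && echo "attestation-token"; } # verifier validates it
kms_release()  { [ "$1" = "attestation-token" ] && echo "data-key"; }      # no token, no key

quote=$(get_quote)
token=$(verify_quote "$quote")
key=$(kms_release "$token")
echo "app container starts with key: $key"
```

Each stage depends strictly on the previous one succeeding, which is exactly the property the sequence diagram conveys: a tampered node never gets past the verifier, so KMS never releases a key and the app never starts.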


6. Implementation Blueprint


Step 1 – Create a Confidential Node Pool

gcloud container clusters create secure-cluster \
 --region=us-central1 \
 --workload-pool="$(gcloud config get-value project).svc.id.goog"

gcloud container node-pools create confidential-pool \
 --cluster=secure-cluster \
 --region=us-central1 \
 --machine-type=n2d-standard-4 \
 --enable-confidential-nodes \
 --num-nodes=3

Note: Confidential GKE Nodes rely on AMD SEV, so they require AMD-based machine types such as n2d. Intel SGX enclaves are a different technology and are not what Confidential GKE Nodes use.


Step 2 – Setup KMS with Conditional Access

gcloud kms keyrings create fin-kr --location=us

gcloud kms keys create fin-key --keyring=fin-kr --location=us --purpose=encryption

Note: In production, attach a conditional IAM policy so cloudkms.cryptoKeyVersions.useToDecrypt is granted only when attestation claims match (e.g., node is Confidential, image digest matches, project/cluster constraints). You can model this with a proxy service that validates the attestation token and calls KMS, or via a broker that enforces conditions server-side.
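One piece of the setup described in the note can be expressed directly with IAM: bind decrypt rights to the workload's service account only. The account name and project below are placeholders; attestation-conditioned release itself still needs the broker/proxy pattern from the note, since plain IAM bindings cannot inspect attestation claims.

```shell
# Illustrative only: restrict decryption of fin-key to the workload's
# Workload Identity service account (placeholder names).
gcloud kms keys add-iam-policy-binding fin-key \
  --keyring=fin-kr --location=us \
  --member="serviceAccount:fin-risk-sa@YOUR-PROJECT.iam.gserviceaccount.com" \
  --role="roles/cloudkms.cryptoKeyDecrypter"
```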


Step 3 – Deployment

apiVersion: apps/v1
kind: Deployment  # This defines a Kubernetes Deployment resource.
metadata:
 name: fin-risk  # It will manage the pods for your financial risk/fraud detection app under the name fin-risk
spec:
 replicas: 3     # Runs 3 replicas (pods) for high availability and load balancing.
 selector:
   matchLabels: { app: fin-risk }  # Pods are labeled app=fin-risk. Ensures the ReplicaSet/controller can match pods to the Deployment.
 template:
   metadata:
     labels: { app: fin-risk }    
   spec:
     serviceAccountName: fin-risk-sa  # Uses a Workload Identity service account (fin-risk-sa) so pods can securely call Google Cloud APIs (like KMS). Avoids node-level credentials — each pod has its own identity.
     volumes:
       - name: secrets-tmp
         emptyDir: { medium: "Memory" }  # tmpfs
     initContainers:
       - name: attester # Runs before the main app container starts. Uses a small image (attester) to handle verification and secret retrieval.
         image: gcr.io/YOUR-PROJECT/attester:sha256-ABC   # signed image
         args:
           - "--attest-endpoint=https://verifier.example.com"
           - "--kms-resource=projects/…/locations/us/keyRings/fin-kr/cryptoKeys/fin-key"
           - "--out=/run/secure/creds.env"
         volumeMounts:
           - { name: secrets-tmp, mountPath: /run/secure } # Mounts the tmpfs volume at /run/secure to store decrypted secrets.
     containers: # The fraud/risk detection app container. Will only start once the initContainer finishes successfully.
       - name: app
         image: gcr.io/YOUR-PROJECT/fin-risk:sha256-DEF   # signed image
         envFrom:   # Pulls in Kubernetes Secrets for non-sensitive configuration
           - secretRef:
               name: placeholder  # (optional) or read /run/secure/creds.env directly
         volumeMounts:
           - { name: secrets-tmp, mountPath: /run/secure, readOnly: true }
         securityContext:         # Locks down the container for least privilege. Protects against container escape attacks.
           readOnlyRootFilesystem: true
           allowPrivilegeEscalation: false
     nodeSelector:
       cloud.google.com/confidential-compute: "true" # Forces the pods to run only on Confidential VM nodes in the GKE cluster. Ensures workloads won’t accidentally land on a non-secure node.
     tolerations:
       - key: "confidential-compute"
         operator: "Exists"
         effect: "NoSchedule"


Explanation:

Ensures both init and app containers use signed images (to be verified by Binary Authorization).


Step 4 - Binary Authorization (policy snippet idea)

gcloud container binauthz policy import policy.yaml
gcloud container clusters update secure-cluster \
  --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE
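The import command assumes a policy.yaml on disk. A minimal example of what that file might contain is sketched below; the attestor name and project are placeholders, not values from this article.

```shell
#!/bin/sh
# Writes an illustrative Binary Authorization policy that blocks any image
# lacking an attestation from a (placeholder) attestor.
cat > policy.yaml <<'EOF'
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/YOUR-PROJECT/attestors/signed-builds
EOF
echo "wrote policy.yaml"
```

With enforcement enabled, unsigned images are rejected at admission time, which is what makes the signed-image guarantee for the init and app containers hold in practice.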


7. Challenges and limits to know about

Confidential GKE is powerful, but like any technology, it has some limitations:

Think of it as a higher-security option for sensitive or regulated workloads, not something you’ll use for every single app.


8. What’s next

Confidential computing on Kubernetes is still growing, and more features are on the way:


9. Conclusion

Confidential Kubernetes on Google Cloud closes the last significant security gap: data in use. Memory encryption, attestation, and conditional key access keep sensitive data secure even while apps are actively processing it. Banks, healthcare organizations, and other regulated industries can therefore migrate mission-critical workloads to Kubernetes with more confidence, lower risk, and stronger compliance. In short, it makes Kubernetes a safe environment in which to run your most sensitive applications.