My 1st Microsoft Article: Build and deploy apps on AKS using DevOps (GitHub Actions) and GitOps (ArgoCD)

Yesterday a new article titled “Build and deploy apps on AKS using DevOps and GitOps” was published. This is an article I had been working on for a while, and it is the first piece of work that I can share publicly since joining Microsoft. I am working on many other things I can’t share publicly at the moment. :-)

The article is part of the Azure Architecture Center. It covers modernizing end-to-end app build and deploy using containers, with continuous integration (CI) via GitHub Actions to build and push images to an Azure Container Registry, and GitOps via Argo CD for continuous deployment (CD) to an AKS cluster.

The article can be found here: 

https://learn.microsoft.com/en-us/azure/architecture/example-scenario/apps/devops-with-aks

The article explores deploying a Python and Flask based app via two CI/CD approaches: push-based and pull-based (GitOps). It is complete with a pros and cons comparison of both approaches and downloadable architecture diagrams for each. Here is a screenshot of the pull-based (GitOps) architecture:

The technologies used in this article and scenario include:

GitHub

GitHub Actions

Azure Container Registry

Azure Kubernetes Service (AKS)

Argo CD (GitOps Operator)

Azure Monitor

The article also has an accompanying repository, the AKS Baseline Automation, with code for both the push-based CI/CD scenario and the pull-based CI/CD (GitOps) scenario. I had the opportunity to spearhead and work on these. They walk you through using each approach and include the code for the Flask app and the GitHub Actions workflows that run the approaches. A direct link to this section of the article is here: https://learn.microsoft.com/en-us/azure/architecture/example-scenario/apps/devops-with-aks#deploy-this-scenario
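
To give a flavor of the CI side, here is a minimal sketch of a GitHub Actions workflow that builds a container image and pushes it to Azure Container Registry. This is not the exact workflow from the article or the AKS Baseline Automation repo; the secret names and image name are placeholders you would swap for your own.

name: build-and-push-to-acr
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Check out the app source (Dockerfile + Flask app)
      - uses: actions/checkout@v3
      # Authenticate to the container registry (secret names are placeholders)
      - name: Log in to ACR
        uses: docker/login-action@v2
        with:
          registry: ${{ secrets.ACR_LOGIN_SERVER }}
          username: ${{ secrets.ACR_USERNAME }}
          password: ${{ secrets.ACR_PASSWORD }}
      # Build the image and push it tagged with the commit SHA
      - name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: ${{ secrets.ACR_LOGIN_SERVER }}/flask-app:${{ github.sha }}

In the pull-based (GitOps) approach, a follow-on job would update the image tag in the Kubernetes manifests, and Argo CD would then pick up that change and deploy it to the AKS cluster.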

I hope that you find all of this useful. Now go check out the article and deploy the app using the approaches. Stay tuned for more from me at Microsoft and for more blog posts here!


Simplify your AKS IaC Deployments using the AKS Construction Helper tool

After designing and architecting AKS, the next step is to deploy your cluster(s). Ideally, you build your AKS deployments out as code.

This means taking your Azure infrastructure and AKS cluster design and scripting it as IaC (Infrastructure as Code). Scripting the AKS deployment rather than deploying manually gives you documentation as code, standardization, and a templatized deployment for repeatability. You can deploy this code as is, place it in a pipeline for ease of deployment, or put it in inner-source or a service catalog for access by multiple teams.

Microsoft has built a tool named the AKS Construction helper to accelerate building out your IaC for AKS. It is not as well-known as it should be, so I wanted to blog about it and share this great resource that will save you tons of time. The AKS Construction helper was originally launched by Keith Howling of Microsoft. The core contributors have been Gordon Byers and Keith Howling, with contributions from others as well.

The AKS Construction helper unifies guidance provided by the AKS Secure Baseline, the Well-Architected Framework, the Cloud Adoption Framework, and Enterprise-Scale. It is also part of the official AKS Landing Zone Accelerator (Enterprise-Scale). The AKS Construction helper lets you configure your AKS deployment using wizard/form-style selections. After you complete your selections, the tool gives you IaC code that you can copy to perform the AKS deployment(s). You can get code for Az CLI, a GitHub Actions workflow, Terraform, or a parameters file that can be used with an ARM template.

Let’s go ahead and take a tour of the tool.

The tool lets you select either the Operations Principles path or the Enterprise-Scale path for configuring the options.

This helps narrow down the overall design requirements of your AKS deployment.

The next section of the AKS Construction helper is where you fine-tune your AKS deployment. This gives you the chance to tweak things like the cluster name, K8s version, the resource group and region to be created in, IP addressing and CIDR ranges, initial RBAC, SLA, autoscaling, upgrade configuration, cluster networking, add-ons such as an ingress controller (App Gateway, NGINX, etc.), monitoring such as Azure Monitor, Azure Policy, service mesh, secret storage, KEDA, and GitOps with Flux. It even has a few options to deploy some sample apps. This is done across 5 tabs in the Fine tune and Deploy section.

After you have set all of the configurations for your cluster, the code is available for you to copy on the Deploy tab. Again, you have options for Az CLI, a GitHub Actions workflow, Terraform scripts, or an ARM template parameters file. Running the deployment code will deploy your AKS cluster exactly how you have it configured in the AKS Construction helper tool.

What if you are not ready to deploy your AKS cluster(s) now but do not want to lose your configuration? The tool has you covered. At the end of the Deploy Cluster code, you can click the link shown in the screenshot to get a URL for your configuration.

The URL will look similar to this:

https://azure.github.io/AKS-Construction/?deploy.deployItemKey=deployArmCli&ops=oss&preset=defaultOps&deploy.location=EastUS2&addons.ingress=nginx&addons.monitor=aci&addons.openServiceMeshAddon=true&addons.fluxGitOpsAddon=true

You can access this URL at any time to pick up where you left off with your AKS deployment configuration.

That brings us to the end of this blog post. Stop wasting time, head over to the tool, and start using this for all of your AKS Deployments. Here are the links for the tool:

The wizard-driven tool can be found here:

https://azure.github.io/AKS-Construction

The GitHub Repository for the tool can be found here:

https://github.com/Azure/AKS-Construction


Running Stateful Apps in Kubernetes

With Kubernetes, you will eventually need to run stateful applications. This is more common than you might think. If you have never run stateful apps on Kubernetes before, it can be a scary thing: you are adding more moving parts to the cluster, deploying the app, and managing your stateful application(s) on Kubernetes when they require state.

In this blog post I am going to take you on a short journey to gain an understanding of stateless vs. stateful applications, how storage works in Kubernetes (touching on volumes, storage classes, persistent volumes (PVs), and persistent volume claims (PVCs)), what StatefulSets are, how persistent state works with pods, and good practices for running stateful apps on Kubernetes.

Stateless

A stateless app is an application program that does not save client data generated in one session for use in the next session with that client.

Stateful

A stateful app is a program that saves client data from the activities of one session for use in the next session.

The data that is saved is called the application’s state. Here is a visual covering the differences between Stateless and Stateful applications:

Volumes

Here is a breakdown of what volumes are:

  • A volume is a directory, typically with data in it, that is accessible to the containers in a pod (see the example pod spec below).
  • A volume represents a way to store, retrieve, and persist data across pods through an application’s lifecycle.
  • The volume modes Kubernetes supports are Filesystem and Block.
  • Volumes are backed by different types of storage such as NFS, iSCSI, or cloud storage (e.g. awsElasticBlockStore, azureDisk, gcePersistentDisk, etc.).
  • When a pod ceases to exist, Kubernetes destroys its ephemeral volumes; however, Kubernetes does not destroy persistent volumes.
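
As a minimal sketch (the pod and volume names are illustrative), here is a pod that mounts an ephemeral emptyDir volume into a container; this volume lives and dies with the pod:

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        # Mount the volume into the container at /cache
        - name: cache-vol
          mountPath: /cache
  volumes:
    # Ephemeral volume; destroyed when the pod ceases to exist
    - name: cache-vol
      emptyDir: {}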

StorageClasses

Here is a breakdown of what StorageClasses are:

  • StorageClasses define types of storage tiers in Kubernetes, like Premium and Standard.
  • They give K8s admins a way to describe the “classes” of storage they offer.
  • StorageClasses define the provisioner, parameters, and reclaimPolicy used when a PersistentVolume is provisioned.
  • When a pod is deleted, the underlying storage resource can either be deleted or kept for use with a future pod.
  • The reclaimPolicy controls the behavior of the underlying storage resource when the pod and its persistent volume are no longer required.

Example of a configuration file for a StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-premium-retain
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Retain
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed

Reclaim Policy

Here is a breakdown of the reclaim policies:

  • Retain – Allows for manual reclamation of the resource. The PV is not available for another claim because the previous claimant’s data remains on the volume. A K8s admin must manually reclaim the volume.
  • Delete – Removes the PV resource from the K8s cluster, as well as the associated storage asset such as cloud storage, NFS, etc.
  • Recycle – Performs a basic scrub on the volume and makes it available again for a new PVC.

Persistent Volumes (PVs)

Here is a breakdown of what Persistent Volumes are:

  • A persistent volume (PV) is a storage resource created and managed by the Kubernetes API that can exist beyond the lifetime of an individual pod.
  • A PersistentVolume can be manually provisioned by a Kubernetes admin or dynamically provisioned by the Kubernetes API server using StorageClasses.
  • Dynamic provisioning uses a StorageClass to identify what type of storage (NFS, iSCSI, or cloud-based) needs to be created.

Example of a configuration file for the PersistentVolume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0010
spec:
  capacity:
   storage: 40Gi
  volumeMode: Filesystem
  accessModes:
   - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
   - hard
   - nfsvers=4.1
  nfs:
   path: /tmp
   server: 172.19.0.22

Persistent Volume Claims (PVCs)

Here is a breakdown of what Persistent Volume Claims are:

  • A PersistentVolumeClaim (PVC) is a request for storage by a user.
  • A PersistentVolumeClaim specifies the volume mode (Block or Filesystem) from a StorageClass, the access mode, and the capacity needed.
  • The PVC access modes are:
    • ReadOnlyMany (ROX) – the volume can be mounted by many nodes in read-only mode.
    • ReadWriteOnce (RWO) – the volume can be mounted by a single node in read-write mode.
    • ReadWriteMany (RWX) – the volume can be mounted by many nodes in read-write mode.

Example of a configuration file for the PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc0002
spec:
  storageClassName: manual
  accessModes:
   - ReadWriteOnce
  resources:
   requests:
    storage: 10Gi

Lifecycle of a Volume & Claim

Let’s take a look at how the lifecycle of volumes and claims flows:
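
In short: a PV is provisioned (manually or dynamically via a StorageClass), a PVC requests storage and binds to a matching PV, a pod uses the claim, and when the claim is released the reclaim policy decides what happens to the underlying storage. As a minimal sketch of the “use” step (the pod name is illustrative), here is a pod that mounts the pvc0002 claim from the example above:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-storage
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        # Mount the claimed storage into the container
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    # Reference the PersistentVolumeClaim defined earlier
    - name: data
      persistentVolumeClaim:
        claimName: pvc0002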

StatefulSets

Here is a breakdown of what Stateful Sets are:

  • StatefulSets are Kubernetes objects used when we need each pod to have its own independent state and use its own individual volume.
  • With StatefulSets, each pod is assigned a unique name, and that unique name stays with it even if the pod is deleted and recreated.
  • Headless services are primarily used when we deploy StatefulSet applications. Headless services don’t operate like load balancers and are not assigned IPs the way a regular service is.

StatefulSets are typically used when the following is needed (see the sketch after this list):

  • Unique network identifiers for pods
  • Persistent storage for retaining data
  • Ordered, graceful deployment and scaling of pods
  • Ordered, automated rolling updates of the app

Some Good Practices When Running Stateful Apps on Kubernetes

That wraps up this blog post! Thanks for reading and stay tuned to my blog for more content on Kubernetes soon.


How To Set the Application Reconciliation Timeout in Argo CD

Argo CD has something called the Application reconciliation timeout. This controls how often Argo CD polls your Git repository to check your applications for changes. When it detects changes, it applies the desired state from the repo to the Kubernetes (K8s) cluster. By default the timeout period is set to 3 minutes. This is set in the general Argo CD configuration.
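
For context, what gets reconciled are Argo CD Application resources. Here is a hedged sketch of one (the app name, repo URL, and path are placeholders, not from any real setup); each Application points Argo CD at a Git repo and a target cluster and namespace:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app            # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-config-repo.git   # placeholder repo
    targetRevision: HEAD
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: sample-app
  syncPolicy:
    automated: {}             # auto-sync when Argo CD detects new commits or drift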

The general Argo CD configuration lives in the argocd-cm ConfigMap, which is deployed in the argocd namespace.

You can view what is currently set by running the following kubectl command on your K8s cluster that is running your Argo CD instance:

kubectl describe configmaps argocd-cm -n argocd

The output will look like the following:

You can also see that the argocd-cm Data is empty by running kubectl get configmaps -n argocd, or if you are using AKS, by navigating to ConfigMaps in the Azure portal as in the following screenshot.

Most Argo CD instances run with the default settings for their configuration. The argocd-server component reads and writes to the argocd-cm ConfigMap and the other Argo CD configuration ConfigMaps based on admin user interactions with the Argo CD web UI or the Argo CD CLI. It is normal for the Data to be empty (at 0) if you have not changed any defaults or set anything directly in the ConfigMap yet.

To change the Application reconciliation timeout you need to do the following:

  1. Get a copy of the argocd-cm ConfigMap here: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/argocd-cm.yaml
  2. The Application reconciliation timeout can be found on line 283 “timeout.reconciliation: 180s”.
  3. Change “180s” to whatever value you want, e.g. change it to “60s” to reduce the sync interval to 1 minute.
  4. Remove all of the other settings in the file except for the Application reconciliation timeout. The file should look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  # Application reconciliation timeout is the max amount of time required to discover if a new manifests version got
  # published to the repository. Reconciliation by timeout is disabled if timeout is set to 0. Three minutes by default.
  # > Note: argocd-repo-server deployment must be manually restarted after changing the setting.
  timeout.reconciliation: 60s

5. Save the file.

6. Connect to the Kubernetes cluster that is running Argo CD and apply the argocd-cm ConfigMap file you just updated by running the following:

kubectl apply -f argocd-cm.yaml -n argocd

7. Run the following to verify the update was applied:

kubectl describe configmaps argocd-cm -n argocd

You should also notice that at least 1 item is now listed under Data for the ConfigMap.

8. It is a good practice to redeploy the argocd-repo-server after updating the argocd-cm ConfigMap. You can redeploy the argocd-repo-server by running the following:

kubectl -n argocd rollout restart deploy argocd-repo-server

That’s it! Now your app in Argo CD will sync on the new Application Reconciliation Timeout that you set. Check back soon for more Azure, Cloud, Kubernetes, GitOps, Argo CD content and more.

BTW: For more in-depth information on GitOps and Argo CD check out my GitOps and Argo CD courses on Pluralsight here:

“GitOps: The Big Picture”:

https://app.pluralsight.com/library/courses/gitops-the-big-picture

“Getting Started with Argo CD”:

https://app.pluralsight.com/library/courses/argo-cd-getting-started

And here is the link to my Pluralsight profile to follow me: https://app.pluralsight.com/profile/author/steve-buchanan


GitOps Fundamentals Certification

Recently Codefresh launched the 1st certification in its GitOps certification path. This one is called “GitOps Fundamentals”. You can find it here: https://codefresh.learnworlds.com.

It takes you through the basics of GitOps for theoretical knowledge, and through using Argo CD as the GitOps operator for hands-on knowledge. You will learn about both, and both come up in the quizzes and the final exam.

They also touch on Argo Rollouts to go over Progressive Delivery with topics such as blue/green deployments and canary deployments. This is the 1st ever GitOps certification and it’s free! They do have plans for GitOps at Edge and GitOps at Scale certifications.

You can find more information about the GitOps certification and Codefresh’s future plans for it on this blog by Hannah Seligson (one of the authors of the course and exam) here: https://codefresh.io/blog/get-gitops-certified-argo.

I jumped all over this opportunity to get certified on GitOps by signing up for the course, taking the training, and sitting the exam! I passed, and now I am GitOps certified.

Here is the certification:

GitOps is gaining more adoption every day in the Kubernetes space, and Argo CD is growing extremely fast as one of the top, if not the top, GitOps operators. I recommend you check out this Codefresh GitOps certification and get GitOps certified, as this pattern and the technology behind it are growing at a rapid rate.

Also note, it looks like Weaveworks is planning to launch a “Certified GitOps Practitioner (CGP)” certification soon. I would guess the Weaveworks GitOps certification will contain content on Flux, another GitOps operator. You can learn more about their upcoming GitOps certification here: https://www.weave.works/certified-gitops-practitioner

Also for more training on GitOps and Argo CD be sure to check out my GitOps and Argo CD courses on Pluralsight here:

“GitOps: The Big Picture”:

https://app.pluralsight.com/library/courses/gitops-the-big-picture

“Getting Started with Argo CD”:

https://app.pluralsight.com/library/courses/argo-cd-getting-started

And here is the link to my Pluralsight profile to follow me for future GitOps, Kubernetes, Cloud, and DevOps content: https://app.pluralsight.com/profile/author/steve-buchanan


Guest on the 1st episode of Tech/Life Podcast

A former Microsoft MVP and friend of mine, Steve Beaumont, started a podcast. I was honored to be a guest on his first episode! The episode was released today. The podcast explores balancing tech and life. On each episode, Steve talks with people in the technology field, discussing tech and diving into their personal lives, stress, learning, and interests.

Steve has a website for the podcast here: www.techvlife.com.

Steve and I had a chance to catch up at MMS 2022 for the episode. We talked about my transition to Microsoft, working as a Principal Program Manager in Azure. We also talked about my time practicing Kung Fu, how I stay motivated and set goals, how I balance tech with my hobbies (tech being one of them) so it does not become another job, Kubernetes, AKS, and more.

Steve already has episodes recorded with many other great folks. He will be releasing them in the coming weeks, so be sure to subscribe to his podcast. Here are some of the other guests he will have on:


Aria Carley
Justin Chalfant
Ben Whitmore
Mike Danoski
David Segura
Donna Ryan
Mike Marable

You can listen to the 1st episode with me right here:


Watch Learn Live Episode 7 – Introduction to Azure Arc enabled Kubernetes

Today Pierre Roman (@wiredcanuck), Senior Cloud Advocate at Microsoft, and I (@buchatech) streamed “Introduction to Azure Arc enabled Kubernetes” on Learn Live.

In this session, we showed you how Azure Arc enabled Kubernetes clusters can help customers like Contoso optimize and simplify their operations. Here are the learning objectives we covered:

  • Describe Kubernetes, Azure Arc, and Azure Arc-enabled Kubernetes.
  • Connect Kubernetes clusters to Azure Arc.
  • Manage Azure Arc enabled Kubernetes clusters by using GitOps.
  • Integrate Azure Arc enabled Kubernetes cluster with Azure services like Azure Monitor and Azure Policy.

If you missed it don’t worry. 🙂 You can watch the playback on the Microsoft Developer YouTube channel here:

You can check out more Learn Live episodes on the:

Or


Speaking at MMS 2022 in Person!

MMSMOA is back in person for 2022. I am excited to be heading back to present a session and be on some panels!

Here is the MMS website: https://mmsmoa.com

My session will be with my friend and the co-author of my latest book, John Joyner.

Here are the session details:

Azure Arc: Extending Hyperscale Cloud Management to Your Datacenter

Description:

Learn about Microsoft’s Azure Arc service, a new multi-cloud management platform that belongs in every cloud or DevOps estate. The premise of Azure Arc is compelling: why not extend familiar management tools proven in Azure to on-premises and other cloud networks? A practical scenario-based tour will get you up to speed quickly, with instruction and demos that are heavy with hands-on experience. If your organization has resources across hybrid cloud, multi-cloud, and edge environments, then this session is for you. You will learn how to configure and use Azure Arc to uniformly manage workloads across all of these environments.

What you will learn:

  • An introduction to the basics of hybrid, multi-cloud, and edge computing and how Azure Arc fits into that IT strategy
  • Insights into Azure-native management tooling for managing on-premises servers and extending to other clouds
  • Details of an end-to-end hybrid server monitoring scenario leveraging Azure Monitor and/or Microsoft Sentinel, seamlessly delivered by Azure Arc
  • A blueprint to achieve regulatory compliance with industry standards using Azure Arc, delivering Azure Policy from Microsoft Defender for Cloud

Session link to register here: mms2022atmoa.sched.com/event/yDOu/azure-arc-extending-hyperscale-cloud-management-to-your-datacenter

I will also be a part of these panels:

Cloud Adoption Roundtable

Are you thinking about starting the cloud journey, or are you an experienced cloud engineer already?  Come join this interactive session where we will talk all things cloud!  We will have a round-table discussion about what resources are available, where to find them, and which ones are better than others.  Talk with experienced cloud architects about the mistakes they’ve seen and how to avoid them.  Come listen to stories, enjoy a few drinks, and have a great time talking about the cloud movement.

What you will learn:

  • How to begin your cloud adoption journey
  • What resources are available to start your migration process, and how to find them
  • Common mistakes/pitfalls
  • Q&A with cloud adoption survivors

https://mms2022atmoa.sched.com/event/102rB/cloud-adoption-roundtable

Cloud AMA – Come ask the Cloud MVPs Anything

This session will be an open-format Q&A. Come ask your burning questions in front of a live audience and get real-time feedback from cloud MVPs and SMEs. No question too hard, no topic off-limits. Want to know why something was built the way it was? Want to know how to accomplish something you’ve been working on for months? Have a general question about Azure? Come, listen, ask.

https://mms2022atmoa.sched.com/event/zp1h/cloud-ama-come-ask-the-cloud-mvps-anything

Hope to see you at MMS 2022!


Tech Talk with Kazeem – Azure Arc Enabled Kubernetes for Beginners

I was a guest on Tech Talk with Kazeem again! The topic of discussion was Azure Arc Enabled Kubernetes for Beginners.

(Pictured: @KazeemCanTeach & @buchatech; @buchatech’s Azure Arc K8s book with O’Reilly)

In the discussion, Microsoft MVP Kazeem Adegboyega and I talked about Azure Stack, AKS, Azure Arc-enabled Kubernetes, and GitOps! We talked about each technology, when to use it and for what purpose, and more.

You can check it out here:


Dok Talks #121 – Running Stateful Apps in Kubernetes Made Simple

I am giving a talk for the Data on Kubernetes Community (DoKC) next week. It is a user-group-style community that focuses on how to build and operate data-centric applications on Kubernetes. Be sure to check them out! The DoK website is: https://dok.community.

My talk is titled: “Running Stateful Apps in Kubernetes Made Simple”

ABSTRACT OF THE TALK

Eventually, the time will come to run a stateful app in Kubernetes. This can be a scary thing: you are adding more moving parts to a Kubernetes cluster, and deploying as well as managing your app on Kubernetes when it requires state.

In this talk, Steve Buchanan will take you through a journey of understanding how storage works in Kubernetes, how to persist state with pods, what storage options are available with Azure Kubernetes Service, best practices, and a demo of deploying a stateful app to AKS.

In the demo, I will show how to deploy stateful WordPress & Jenkins workloads on Azure Kubernetes Service using the GitOps model with Argo CD.

KEY TAKE-AWAYS FROM THE TALK

An overview of storage in Kubernetes covering Storage Classes, Persistent Volumes, and Persistent Volume Claims; an overview of Azure Storage; and best practices for running stateful apps in Kubernetes.

Register here:

https://www.meetup.com/Data-on-Kubernetes-community/events/284283907/

——-Update——-

If you missed the session you can stream it here:
