Microsoft has been making some amazing enhancements to AKS and to the open-source space in general. This effort is making Kubernetes easier to use and easier for folks who are just getting started with it.
Recently Microsoft added more functionality called "Kubernetes resource view".
This allows you to see and work with some Kubernetes resources directly in the Azure portal. As you can see in the previous screenshot it includes Namespaces, Workloads, and Services. When you deploy a new AKS cluster this is enabled by default.
If you deployed an AKS cluster before this functionality was released, you will need to enable the Kubernetes resource view yourself. You can choose which namespace to enable it on. It will look like this:
The three main areas of resources are:
Namespaces
Workloads
    Deployments
    Pods
    Replica Sets
    Daemon Sets
Services and ingresses
In these resource areas, you can view resources, add and delete them, and show their labels.
You can click on a resource to see its properties under Overview. The Overview tab has valuable information; for example, for a pod you can see its status, the containers that belong to it, its conditions, and more. Here are some screenshots:
You can see any events around the resource, and you can even view or edit the resource's YAML. Here is what it looks like when editing a resource:
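If you prefer the command line, the same information and actions in the resource view map to familiar kubectl commands. Here is a quick illustration; the namespace and resource names are placeholders:

# List pods in a namespace (similar to the Workloads > Pods view)
kubectl get pods -n examplenamespace

# Show a pod's status, containers, conditions, and events (similar to the Overview tab)
kubectl describe pod examplepod -n examplenamespace

# View or edit a resource's YAML (similar to the portal's YAML editor)
kubectl edit deployment exampledeployment -n examplenamespace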
Well, this was a quick blog post to give an early look at the new Kubernetes resource view in AKS. I recommend you check it out! Remember, this is a preview and it's going to get better and better.
I can imagine in the future we will be able to access more Kubernetes resources and API objects in the Azure portal. For example, it would be cool to be able to work with Secrets and ConfigMaps right in the Azure portal! I don't know about you, but I am very excited about what Microsoft has been doing with AKS!
I recently presented at the Inside Azure management event. This event was packed full of Microsoft MVPs and community experts from around the world. The focus of the event was on Azure management topics, with some Kubernetes, AI, and DevOps topics sprinkled in.
My session was "Azure Policy Insights & Multi-Tag demo via Azure Policy". Here is what it covered: "Azure Policy is a great tool when it comes to auditing and ensuring your cloud governance is met. In this session, 9-time Microsoft Azure MVP Steve Buchanan is going to take you on a full-speed ride on the ins and outs of Azure Policy and land you with a recipe for handling a multi-tagging strategy with Azure Policy. Some of the key topics you will learn from this session include:
Overview of Azure Policy
Azure Policy Configuration best practices to meet compliance (NIST, PCI, ISO, HIPAA)
Securing Azure services: AKS / Networking / SQL / App Service
Azure Policy vs RBAC
Overview of Azure Policy Guest Configuration
Tagging and more“
The event has passed, and if you didn't make it, no worries! All of the sessions have been recorded and uploaded to the Inside Azure Management YouTube channel to be watched at your leisure. Here is the link to the YouTube channel where you can watch all the sessions:
The event coordinators have also set up some YouTube playlists to make it easier to find videos on the topics that pertain to you. They broke these out into the following categories: Azure Management, Artificial Intelligence in Azure, Cloud Governance, Cybersecurity, and DevOps.
Today my third Pluralsight course has been published. This course is titled “Microsoft Azure Pricing and Support Options“. It is a part of an AZ-900 learning path that will help you prepare for the Microsoft Azure Fundamentals AZ-900 exam. Learn more about that exam here: https://docs.microsoft.com/en-us/learn/certifications/exams/az-900 .
I was excited about this course as it is a part of my day-to-day focus on Azure. Also, this is a chance to help those who are getting started with Azure. Here is what you will gain from this course:
This course will teach you the fundamental knowledge about the pricing and support options in Azure, one of the core skills measured in the AZ-900: Azure Fundamentals exam.
In this course, Microsoft Azure Pricing and Support Options, you’ll learn to estimate the pricing and support of Azure cloud services. First, you’ll explore Azure subscriptions, subscription management, and management groups.
Next, you'll discover how to plan and manage Azure costs. Finally, you'll learn how to understand Azure SLAs and various Azure service lifecycles, and how to select the right support options. When you're finished with this course, you'll have the skills and knowledge of Azure pricing and available support needed to estimate Azure costs and select the right support options.
I was recently added to the speaker lineup for the "Inside Azure Management Summit" happening on 7/23/2020. This event is a FREE 1-day virtual event. It features Microsoft cloud experts from the authoring team of the "Inside Azure Management" book, Microsoft MVPs, and community experts from around the world.
Attendees will get a day full of deep-dive technical sessions across a variety of Microsoft cloud topics including:
DevOps and Automation
Cyber Security
Cloud Governance
Migration and Monitoring
Docker and Kubernetes
AI and Identity
The sessions will span a 13-hr period to allow audiences from around the world to join a portion of the event.
I will be giving a session on Azure Policy. Here is my session info:
Session Title:
Azure Policy Insights & Multi-Tag demo via Azure Policy
Session Abstract:
Azure Policy is a great tool when it comes to auditing and ensuring your cloud governance is met. In this session, 9-time Microsoft Azure MVP Steve Buchanan is going to take you on a full-speed ride on the ins and outs of Azure Policy and land you with a recipe for handling a multi-tagging strategy with Azure Policy. Some of the key topics you will learn from this session include:
Wow! Today I was honored to be renewed as an MVP again. This marks year #9! Here is a screenshot of the official email from Microsoft:
I never take this award for granted. It is not guaranteed. Every year when July 1 rolls around, I never truly know if I will be back in the MVP program or not. 2020 has been a tough year with the COVID-19 pandemic and racial tensions in the US coming to a head. Being renewed as an MVP this time around is a boost of much-needed good news.
This marks the second year for me as a Microsoft Azure MVP! It is not easy to be a part of these ranks with so many talented people in the cloud space.
I am looking forward to the 2020-2021 MVP award year with some things already in the works such as guest spots on podcast shows, speaking at some virtual conferences, organizing an Azure user group, speaking at user groups, more courses, and of course, more blog posts to come around Azure, Azure Kubernetes Service (AKS), Hybrid Cloud, and more.
This year I also aim to do my part to expose the MVP program to underrepresented people and to encourage them to contribute to the tech community at large, potentially even helping some new faces join the ranks as MVPs.
As always, I am honored to remain a part of the MVP ranks. I will continue to do all that I can in the Azure, Azure Stack, AKS, and CloudOps/DevOps communities this year.
I recently had the honor of being a guest on the “Lisa at the Edge” Podcast. Lisa is a Microsoft Hybrid Cloud Strategist and an influencer in the hybrid cloud community based out of Scotland. She runs a blog and this year she started a popular podcast.
On Lisa’s podcast, she covers Careers in Tech and Microsoft Hybrid Cloud and a range of other topics with experts across the tech community.
This is an episode you don't want to miss. This was one of the most entertaining podcasts I have been on. It took some interesting turns in terms of topics and was very engaging. In the podcast episode, Lisa and I talk about:
Evolving your career as technology evolves
Transformation of IT dept to Strategic Business Partner
When working with containers, a common need is to store container images somewhere. Container registries are the go-to for this, and Docker Hub is the most well-known example of a container registry.
What is a Container Registry?
A container registry is a group of repositories used to store container images. A container repository is used to manage, pull, or push container images. A container registry does more than a repository in that it adds API paths, tasks, vulnerability scanning, digital signing of images, access control rules, and more.
Container registries can be public or private. For example, Docker Hub is a public registry, and anyone can access its container repositories to pull images. A private registry is one that you would host either on-premises or on a cloud provider. All of the major cloud providers, including Azure, have a container registry offering.
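As a quick illustration with the Azure CLI, here is roughly what creating and inspecting a private registry in Azure looks like; the resource group, registry name, and SKU are placeholders:

# Create a private container registry in Azure
az acr create --resource-group ExampleRG --name exampleacrname --sku Basic

# List the repositories stored in the registry
az acr repository list --name exampleacrname -o table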
Integrate ACR with AKS
With AKS, it is a good idea to use a private container registry to host your container images. The process is: use Docker to build your image, push the image to your Azure Container Registry, then pull the image from the registry when deploying a pod to your AKS cluster.
There are 3 ways to integrate AKS with Azure Container Registry. I typically only use one way and will focus on that in this blog post.
Here are 2 of the ways you can integrate AKS with Azure Container Registry. The first is through an Azure AD service principal (SPN), assigning the AcrPull role to the SPN. More on this here. You would use this first way in scenarios where you only have one ACR and it will be the default place to pull images from.
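Here is a rough sketch of that first approach with the Azure CLI, assuming your AKS cluster was created with a service principal; the resource group, cluster, and registry names are placeholders:

# Get the client ID of the service principal used by the AKS cluster
CLIENT_ID=$(az aks show --resource-group ExampleRG --name ExampleAKSCluster --query servicePrincipalProfile.clientId -o tsv)

# Get the resource ID of the Azure Container Registry
ACR_ID=$(az acr show --resource-group ExampleRG --name exampleacrname --query id -o tsv)

# Assign the AcrPull role to the service principal on the registry
az role assignment create --assignee $CLIENT_ID --role AcrPull --scope $ACR_ID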
The second is to create a Kubernetes ServiceAccount that would be used to pull images when deploying pods. With this, you would add a "kind: ServiceAccount" to your Kubernetes cluster that uses the ACR credentials. Then in your pod YAML files you would need to specify the service account, for example "serviceAccountName: ExampleServiceAccountName".
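A minimal sketch of that second approach might look like the following; the ServiceAccount, secret, pod, and image names are placeholders, and it assumes a docker-registry secret holding the ACR credentials already exists:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: exampleserviceaccountname
imagePullSecrets:
  - name: acr-credentials
---
apiVersion: v1
kind: Pod
metadata:
  name: examplepod
spec:
  serviceAccountName: exampleserviceaccountname
  containers:
    - name: exampleapp
      image: acrnamehere.azurecr.io/dockerimagenamehere:v1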
The way I like to integrate AKS with Azure Container Registry is to use a Kubernetes Secret of type docker-registry. With this option, you basically create a secret in the Kubernetes cluster for your Azure Container Registry and then specify the secret in your pod YAML files. This allows you to have multiple container registries to pull from. This option is also quick and easy to set up.
To get started you need to build your Docker image and push it up to your Azure Container Registry. In this blog post, I will not cover deploying ACR or building the Docker image; I assume you have already done these things. Now let's set up the ACR and AKS integration using a docker-registry Kubernetes secret.
1. For the first step, you will need the credentials for your Azure Container Registry. To get these, navigate to:
2. The second step is to push your Docker image up to your ACR.
# Log into the Azure Container Registry
docker login ACRNAMEHERE.azurecr.io -u ACRUSERNAMEHERE -p PASSWORDHERE
# Tag the docker image with ACR
docker tag DOCKERIMAGENAMEHERE ACRNAMEHERE.azurecr.io/DOCKERIMAGENAMEHERE:v1
# Push the image to ACR
docker push ACRNAMEHERE.azurecr.io/DOCKERIMAGENAMEHERE:v1
3. The third step is to create the docker-registry Kubernetes secret by running the following syntax from Azure Cloud Shell:
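A minimal sketch of that command looks like this; the secret name, registry name, and credentials are placeholders:

# Create a docker-registry secret containing the ACR credentials
kubectl create secret docker-registry acr-secret --docker-server=ACRNAMEHERE.azurecr.io --docker-username=ACRUSERNAMEHERE --docker-password=PASSWORDHERE

You would then reference the secret in your pod YAML with an imagePullSecrets entry pointing at acr-secret.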
In Kubernetes, you have a container or containers running as a pod. In front of the pods, you have something known as a service. Services are simply an abstraction that defines a logical set of pods and how to access them. As pods move around, the service they are bound to keeps track of which nodes the pods are running on. For external access to services, there is typically an ingress controller that allows access from outside of the Kubernetes cluster to a service. An ingress defines the rules for inbound connections.
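As a quick illustration, a minimal Service that fronts a set of pods selected by label might look like this; the names, labels, and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: exampleservice
spec:
  selector:
    app: exampleapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080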
Microsoft has had an Application Gateway Ingress Controller for Azure Kubernetes Service (AKS) in public preview for some time and recently released it for GA. The Application Gateway Ingress Controller (AGIC) monitors the Kubernetes cluster for ingress resources and makes changes to the specified Application Gateway to allow inbound connections.
This allows you to leverage the Application Gateway service in Azure as the entry point into your AKS cluster. In addition to utilizing the Application Gateway standard set of functionality, the AGIC uses the Application Gateway Web Application Firewall (WAF). In fact, that is the only version of the Application Gateway that is supported by the AGIC. The great thing about this is that you can put the Application Gateway's WAF protection in front of your applications that are running on AKS.
This blog post is not a detailed deep dive into AGIC. To learn more about AGIC, visit this link: https://azure.github.io/application-gateway-kubernetes-ingress. In this blog post, I want to share a script I built that deploys the AGIC. There are many steps to deploying the AGIC, and I figured this is something folks will need to deploy over and over, so it makes sense to make it a little easier to do. You won't have to worry about creating a managed identity, getting various IDs, downloading and updating YAML files, or installing helm charts. Also, this script will be useful if you are not familiar with sed and helm commands. It combines PowerShell, AZ CLI, sed, and helm code. I have already used this script about 10 times myself to deploy the AGIC, and boy has it saved me time. I thought it would be useful to someone out there and wanted to share it.
I typically deploy RBAC-enabled AKS clusters, so this script is set up to work with an RBAC-enabled AKS cluster. If you are deploying AGIC for a non-RBAC AKS cluster, be sure to view the notes in the script and adjust a couple of lines of code to make it non-RBAC ready. Also note this AGIC script is focused on brownfield deployments, so before running the script there are some components you should already have deployed. These components are:
VNet and 2 Subnets (one for your AKS cluster and one for the App Gateway)
AKS Cluster
Public IP
Application Gateway
The script will deploy and do the following (a rough sketch of the underlying commands follows this list):
Deploys the AAD Pod Identity.
Creates the Managed Identity used by the AAD Pod Identity.
Gives the Managed Identity Contributor access to Application Gateway.
Gives the Managed Identity Reader access to the resource group that hosts the Application Gateway.
Downloads and renames the sample-helm-config.yaml file to helm-agic-config.yaml.
Updates the helm-agic-config.yaml with environment variables and sets RBAC enabled to true using Sed.
Adds the Application Gateway ingress helm chart repo and updates the repo on your AKS cluster.
Installs the AGIC pod using a helm chart and environment variables in the helm-agic-config.yaml file.
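To give a sense of what the script automates, here is a rough sketch of the kinds of AZ CLI and helm commands involved. This is not the script itself; the resource names, IDs, and helm release name are placeholders:

# Create the managed identity used by the AAD Pod Identity
az identity create --resource-group ExampleRG --name ExampleAGICIdentity

# Give the managed identity Contributor on the Application Gateway and Reader on its resource group
az role assignment create --role Contributor --assignee <identity-client-id> --scope <application-gateway-resource-id>
az role assignment create --role Reader --assignee <identity-client-id> --scope <resource-group-resource-id>

# Add the AGIC helm repo, update it, and install the ingress controller with the updated helm config
helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/
helm repo update
helm install ingress-azure -f helm-agic-config.yaml application-gateway-kubernetes-ingress/ingress-azure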
Now let's take a look at running the script. It is recommended to upload this script to Azure Cloud Shell (PowerShell) and run it from there. Run:
./AGICDeployment.ps1 -verbose
You will be prompted for the following, as shown in the screenshot:
Enter the name of the Azure Subscription you want to use.:
Enter the name of the Resource Group that contains the AKS Cluster.:
Enter the name of the AKS Cluster you want to use.:
Enter the name of the new Managed Identity.:
Here is a screenshot of what you will see while the script runs.
That's it. You don't have to do anything else except enter values at the beginning of the script run. To verify your new AGIC pod is running you can check a couple of things. First, run:
kubectl get pods
Note the name of my AGIC pod is appgw-ingress-azure-6cc9846c47-f7tqn. Your pod name will be different.
Now you can check the logs of the AGIC pod by running:
kubectl logs appgw-ingress-azure-6cc9846c47-f7tqn
You should not have any errors, but if you do they will show in the log. If everything ran fine, the output log should look similar to:
After it's all said and done, you will have a running Application Gateway Ingress Controller that is connected to the Application Gateway and ready for new ingresses.
This script does not deploy any ingress into your AKS cluster. That will need to be done in addition to this script, as needed. The following is example YAML code for an ingress that you can use to create an ingress for a pod running in your AKS cluster.
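Here is a minimal sketch of such an ingress, assuming a backing service named exampleservice listening on port 80; the names and path are placeholders, and depending on your Kubernetes version the apiVersion may differ:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: exampleingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: exampleservice
              servicePort: 80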
When working in my Azure labs I tend to end up with a lot of Azure App Registrations. Recently I had hundreds and hundreds of them in the directory. It's been a while since I cleaned this up. I needed to clean this out and did not want to do it manually through the portal. PowerShell to the rescue.
I pulled together a script that will place your app registrations in a variable. It will then loop through the app registrations to remove them. Note you will be prompted with a confirmation to remove each one.
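A minimal sketch of that approach is below. It assumes the Az PowerShell module and an existing Connect-AzAccount session, and it is illustrative rather than the exact script:

# Place all app registrations in a variable
$appRegistrations = Get-AzADApplication

# Loop through the app registrations and remove each one; -Confirm prompts before each removal
foreach ($app in $appRegistrations) {
    # Depending on your Az module version, the identifier property may be .Id instead of .ObjectId
    Remove-AzADApplication -ObjectId $app.ObjectId -Confirm
}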
I recently published a blog post on 4sysops.com about Web App for Containers on Azure here: https://4sysops.com/archives/web-app-for-containers-on-azure. That blog post is about the often-overlooked service in Azure that can be used to host a container (or containers) on a web app in Azure App Service.
This is a great service if you just need to run a single container or even a couple of containers that you have in Docker Compose. This service is PaaS and abstracts away an orchestration system like Kubernetes. If you need insight into the Azure App Service Web App for Containers service check out the blog post on 4sysops.
In this long blog post I am going to take things a step further and walk through the build and release of a container from Azure DevOps to Azure Web App for Containers. The overall goal of this post is to help someone else out if they want to set up a build and release pipeline for building and deploying a container to Azure App Service. We will use a very simple PHP web app I built that will run in the container.
Here are the components that are involved in this scenario:
Azure Container Registry (ACR): We will use this to store our container image. We will be pushing up the container image and pull it back down from the registry as a part of the build and release process.
Azure DevOps (ADO): This is the DevOps tooling we will use to build our container, push it up to ACR, pull it down into our release pipeline and then deploy to our Azure App Service.
App Service Web App for Containers: This is the web server service on Azure that will be used to host our container. Under the hood this will be a container that is running Linux and Apache to host the PHP web app.
Here is the data flow for our containerized web app:
Deploy the Azure App Service Web App for Containers instance
Deploy the Azure Container Registry
Deploy the Azure DevOps organization and project, create a repository to host the code, and clone the repository in VS Code (Not shown in this blog post; I assume you know how to set this up.)
Update the application code (PHP code and Docker image) in Visual Studio code
Commit application code from Visual Studio Code to the Azure DevOps repo (Not shown in this blog post; I assume you know how to set this up.)
Set up the build pipeline, then run the container build and push the container image to ACR (see the rough pipeline sketch after this list)
Set up the release pipeline, then kick off the release, pulling the container image down from ACR and deploying the container to the App Service Web App for Containers instance.
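As a rough sketch, the container build and push step could be expressed as pipeline YAML like the following. It assumes a Docker registry service connection to the ACR named EOTDACRConnection has been created in the Azure DevOps project; the connection name, repository name, and trigger branch are placeholders:

trigger:
  - master

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2
    displayName: Build and push the container image to ACR
    inputs:
      containerRegistry: EOTDACRConnection
      repository: exerciseoftheday
      command: buildAndPush
      Dockerfile: '**/Dockerfile'
      tags: |
        $(Build.BuildId)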
Here is a diagram detailing the build and release process we will be using:
Ok. Let’s get into the setup of core components of the solution and the various parts of the build and release pipeline.
For starters this solution will need a project in Azure DevOps with a repo. Create a project in Azure DevOps and a repo based on Git. Name the repo exerciseoftheday. Next up let’s create the core framework we need in Azure.
Deploy Azure App Service Web App for Containers
Let's create the Azure App Service Web App for Containers that will be needed. We will need a resource group and an app service plan, and then we can set up the app service. The PHP app we will be running is named Exercise Of The Day (EOTD) for short, so our resources will use EOTD. Use the following steps to set all of this up.
We will do everything via Azure Cloud Shell. Go to https://shell.azure.com/ or launch Cloud Shell within VS Code.
Run the following syntax:
# Create a resource group
az group create --name EOTDWebAppRG --location centralus
# Create an Azure App Service plan
az appservice plan create --name EOTDAppServicePlan --resource-group EOTDWebAppRG --sku S1 --is-linux
# Create an Azure App Service Web App for Containers
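# A minimal sketch of the remaining create commands; the web app name, registry name, and initial container image are placeholders
az webapp create --resource-group EOTDWebAppRG --plan EOTDAppServicePlan --name ExampleEOTDWebApp --deployment-container-image-name nginx

# Create the Azure Container Registry that will hold the container image
az acr create --resource-group EOTDWebAppRG --name exampleeotdacr --sku Basic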
Note the loginServer from the output. This is the FQDN of the registry. Normally we would need this, admin enabled, and the password to log into the registry. In this scenario we won't need admin enabled or the password, because we will be adding a connection to Azure DevOps and the pipelines will handle pushing the image to and pulling it from the registry.
When it’s all done you should see the following resources in the new resource group:
Next, we will need to build an application and a container image.
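To give a sense of what that container image could look like, here is a minimal Dockerfile sketch for a PHP app served by Apache; the base image tag and source folder are assumptions, not necessarily what the EOTD app uses:

# Minimal PHP-on-Apache image; the tag and source folder are illustrative
FROM php:7.4-apache
COPY src/ /var/www/html/
EXPOSE 80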