Recently I was a guest on the “Day Two Cloud” podcast hosted by fellow Microsoft MVP/Pluralsight author Ned Bellavance.
We talked about how the native Azure governance and management tools (Azure Policy, tagging, and Blueprints) can be used to bring order to your cloud environments. Listen now here:
At Experts Live Europe 2019 I presented a session titled “Master Azure with VS Code”. This was a fun session with an engaging audience that took to Twitter after the session. There was some chatter asking whether the session was recorded. It was not, but I did note that I planned to write a blog post on the topic.
Here is that blog post, and it is my first one of 2020! In this post, we are going to dive into how VS Code is helpful when working with Azure and the many extensions I find useful when doing so. This post is not meant to be the end-all guide to using VS Code with Azure; it simply reflects my experience. Use it as a starting point or a reference for expanding your use of VS Code with Azure. Also, check out the many other community experts and Microsoft MVPs for additional knowledge plus tips and tricks on this topic.
VS Code Overview
First off, if you are not using VS Code, stop reading this right now, go download and install it, then come back to finish reading. 🙂 VS Code is a must-have in your toolbox and it is free! For those that are new to VS Code, it is an open-source code editor developed by Microsoft that runs on Windows, Linux, and macOS. Here is a short list of the many benefits of VS Code:
Has support for hundreds of languages.
Has Integrated Terminal.
Is a powerful developer tool with functionality like IntelliSense code completion and debugging.
Includes syntax highlighting, bracket-matching, auto-indentation, box-selection, snippets, and more.
Integrates with build and scripting tools to perform common tasks making everyday workflows faster.
Has support for Git to work with source control.
Has a large Extension Marketplace of third-party extensions.
Note that yes, VS Code is for the “IT Pro”, not just developers.
Azure Extensions in VS Code
VS Code has a ton of extensions in general. There are a number of Azure-specific extensions, and you can work with Azure directly from VS Code.
If you go to the VS Code Marketplace here: https://marketplace.visualstudio.com/vscode and search for Azure, you will see results for many extensions published by Microsoft and many community-based extensions for Azure. As of the time of writing this blog post, there are 93. Here is a screenshot showing some of the results:
You can also go directly to the Azure Tools extension from Microsoft here:
In the rest of this post, I am going to share some key extensions I use with Azure. I will post the Marketplace link at the end of each extension I talk about, along with whether it is maintained by the community or by Microsoft.
Deploy to Azure using VS Code
It is important to note that not all of the Azure extensions available in VS Code can be used to deploy to Azure. Some can, but most can’t. Here is a list of the services that you can deploy to from extensions in VS Code.
Azure Functions – Build and manage Azure Functions serverless apps directly in VS Code with the Azure Functions extension.
App Service – Manage Azure resources directly in VS Code with the Azure App Service extension.
Docker – Deploy your website using a Docker container.
Azure CLI – Create, deploy, and update a website using a terminal and the Azure CLI.
Static website – Create, deploy, and update a static website on Azure Storage.
NOTE: This list is current at the time of writing this blog post and will change over time.
Azure Cloud Shell in VS Code
Cloud Shell is something you should be using with Azure to make your life easier. It is an interactive command-line shell. You are authenticated to your Azure account when you launch it. It typically runs in the browser and is used for managing Azure resources. When you launch it, you can choose the shell experience that is best for you, either Bash or PowerShell. And with VS Code you can launch Cloud Shell directly in VS Code!
Cloud Shell is a part of the Azure Account extension. Here are some key points on using Cloud Shell with VS Code:
Free (though the storage it consumes has a cost).
Launch Azure Cloud Shell directly in VS Code.
Launch Bash, PowerShell, or upload files.
Works in the Integrated Terminal.
Azure and open-source tooling in Cloud Shell: blobxfer, Azure CLI and Azure classic CLI, Azure Functions CLI, AzCopy, Service Fabric CLI, and Batch Shipyard.
You get the following PowerShell modules in Cloud Shell: the Azure modules (Az.Accounts, Az.Compute, Az.Network, Az.Resources, Az.Storage), Azure AD Management (Preview), Exchange Online (in development), MicrosoftPowerBIMgmt, and SqlServer.
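Once Cloud Shell is open in the VS Code integrated terminal, a couple of quick commands will confirm what you have to work with. This is a small sketch; Get-CloudDrive is a cmdlet available only inside the Cloud Shell environment:

```powershell
Get-CloudDrive                     # shows the storage file share backing your Cloud Shell
Get-Module -ListAvailable Az.*     # lists the preinstalled Az PowerShell modules
```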
In my day-to-day work I do cloud foundations engagements, helping companies with their Azure governance and management. On these projects we will develop a tagging strategy, and a tagging strategy is only good if it is actually used. One way to ensure that tags are used is to use Azure Policy to require tags on resource groups or resources.
In the past I have used the deny effect in an Azure Policy to require tags upon resource creation, basically using the template I previously blogged about here: https://www.buchatech.com/2019/03/requiring-many-tags-on-resource-groups-via-azure-policy. This policy works but can be a problem, because the error given when a deployment is denied is not clear about which tags are required. Also, folks think it is a pain and slows down the provisioning process.
I set out to require tags using a different method. The idea was to use the append effect instead of deny, so that resources without the proper tags would be flagged as non-compliant and the policy would add the required tags with generic values. Someone from the cloud team could then go put in the proper values for the tags, bringing the resources into compliance. The end result was that the append effect does work when remediating with a single tag but falls down when trying to remediate using multiple tags.
I discovered that this behavior was intended and that the append effect only supports one remediation action (i.e. one tag). On 9-20-19 Microsoft updated Azure Policy so that the modify effect can handle multiple ‘operations’, where each operation specifies what needs to be remediated.
Now let’s walk through using the modify effect in an Azure Policy to add multiple tags on a resource group.
You will need to start off by coding your Azure Policy definition template. There are three important parts you need to ensure you have in the template: the modify effect as the effect, roleDefinitionIds (the role the managed identity will use, set here to Contributor), and operations to tell Azure Policy what to do when remediating out-of-compliance resources.
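If you prefer to create the definition from the command line instead of the portal, here is a minimal sketch using the Az PowerShell module. The tag names, values, and definition name below are placeholder assumptions for illustration; the roleDefinitionIds entry is the built-in Contributor role that the managed identity will use at remediation time:

```powershell
# Hedged sketch of a modify-effect policy that adds multiple tags to resource groups.
# Tag names ('Environment', 'CostCenter') and their values are placeholders - use your own standard.
$policyRule = @'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Resources/subscriptions/resourceGroups" },
      {
        "anyOf": [
          { "field": "tags['Environment']", "exists": "false" },
          { "field": "tags['CostCenter']", "exists": "false" }
        ]
      }
    ]
  },
  "then": {
    "effect": "modify",
    "details": {
      "roleDefinitionIds": [
        "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
      ],
      "operations": [
        { "operation": "add", "field": "tags['Environment']", "value": "Unknown" },
        { "operation": "add", "field": "tags['CostCenter']", "value": "Unknown" }
      ]
    }
  }
}
'@

# Create the definition; Mode 'All' is used so resource groups are evaluated.
New-AzPolicyDefinition -Name 'require-multiple-tags-modify' `
  -DisplayName 'Add required tags to resource groups (modify effect)' `
  -Policy $policyRule -Mode All
```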
Add the ARM template as a new policy definition in the Azure portal.
See the following screenshot to complete your Azure policy definition.
You will then see your new Azure policy definition.
Next, you need to assign the Azure policy definition. To do this click on Assignments.
See the following screenshot to complete your Azure policy assignment.
Note that this policy assignment will create a managed identity so that the policy has the ability to edit tags on existing resources.
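If you would rather script the assignment, here is a hedged sketch with the Az module. The names and scope are placeholders, and note that newer Az.Resources versions use -IdentityType 'SystemAssigned' in place of -AssignIdentity:

```powershell
# Assign the definition at a resource group scope with a system-assigned managed identity.
$scope = (Get-AzResourceGroup -Name 'rg-policy-demo').ResourceId      # placeholder scope
$def   = Get-AzPolicyDefinition -Name 'require-multiple-tags-modify'

New-AzPolicyAssignment -Name 'require-tags-assignment' `
  -Scope $scope `
  -PolicyDefinition $def `
  -AssignIdentity -Location 'eastus'   # an identity and location are required for the modify effect
```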
The assignment will now be created, but the evaluation has not happened yet, so the compliance state will be set to not started, as shown in the following screenshot.
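Existing non-compliant resources get fixed through a remediation task. Here is a sketch of kicking off an on-demand compliance scan and a remediation from PowerShell, using the Az.PolicyInsights module; all names are placeholders:

```powershell
# Trigger an on-demand compliance evaluation for the resource group.
Start-AzPolicyComplianceScan -ResourceGroupName 'rg-policy-demo'

# Create a remediation task against the assignment so the modify effect adds the tags.
$assignment = Get-AzPolicyAssignment -Name 'require-tags-assignment' `
  -Scope (Get-AzResourceGroup -Name 'rg-policy-demo').ResourceId
Start-AzPolicyRemediation -Name 'remediate-required-tags' `
  -PolicyAssignmentId $assignment.PolicyAssignmentId
```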
Moving an Azure VM from one virtual network (VNet) to another VNet is not a new problem. If you find yourself having to do this, just know it is a pain. Anyone that has faced the VNet-to-VNet VM move conundrum before knows that moving a VM to another subnet is a trivial task, so you would think a VM move to another VNet would be the same. However, when it comes to moving a VM to a new VNet, there is no supported way to do this from Microsoft, and that is by design. There are some workarounds for moving a VM to a new VNet, but in the end they boil down to redeploying the VM. When moving a VM to a new VNet you will need to plan for downtime.
In this blog post, I am not going to go through the steps of the workarounds for moving a VM to a new VNet. There are plenty of other blog posts out there covering how to do this manually or by using ASR. If you want to read up on this, there is one article by Microsoft MVP Tim Warner that I keep bookmarked, which you can find here. In this blog post, I am going to share and talk about a PowerShell script I pulled together to make the VNet-to-VNet VM migration less painful by automating it. The download link is at the end of this blog post. I put the script together to work with PowerShell 5 using the AzureRM module and with PowerShell 7 (Core) using the Az module.
Again, in general, the script is not moving the VM. The script is facilitating a migration of sorts by creating a new VM in the new VNet while retaining the original VM’s configuration and data disks. Here are the steps performed in the script (a condensed sketch of the core pattern follows the list):
1. Gathers info on the existing VM, VNet, and subnet.
2. Removes the original VM while saving all data disks and VM info.
3. Creates the VM configuration for the new VM, creates the NIC for the new VM, and creates a new availability set.
4. Adds the data disks and NIC to the new VM configuration, attaching the VM to the new VNet.
5. Creates the new VM in the new VNet.
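Below is a heavily condensed sketch of that pattern using the Az module, not the full script. It assumes a Windows VM with managed disks, skips the availability set, prompts, and error handling, and uses placeholder names throughout:

```powershell
# Condensed sketch only - the full script handles availability sets, prompts, and both modules.
$vm     = Get-AzVM -ResourceGroupName 'OldRG' -Name 'MyVM'                  # 1. gather info
$vnet   = Get-AzVirtualNetwork -Name 'NewVNet' -ResourceGroupName 'NetRG'
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'NewSubnet'

Remove-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name         # 2. remove VM, disks remain

$nic = New-AzNetworkInterface -Name "$($vm.Name)-nic" -ResourceGroupName $vm.ResourceGroupName `
         -Location $vm.Location -SubnetId $subnet.Id                        # 3. new NIC on the new VNet

$newVm = New-AzVMConfig -VMName $vm.Name -VMSize $vm.HardwareProfile.VmSize
$newVm = Add-AzVMNetworkInterface -VM $newVm -Id $nic.Id
$newVm = Set-AzVMOSDisk -VM $newVm -ManagedDiskId $vm.StorageProfile.OsDisk.ManagedDisk.Id `
           -CreateOption Attach -Windows                                    # assumes a Windows VM
foreach ($disk in $vm.StorageProfile.DataDisks) {                           # 4. reattach data disks
    $newVm = Add-AzVMDataDisk -VM $newVm -Name $disk.Name -ManagedDiskId $disk.ManagedDisk.Id `
               -Lun $disk.Lun -CreateOption Attach
}

New-AzVM -ResourceGroupName $vm.ResourceGroupName -Location $vm.Location -VM $newVm   # 5. create new VM
```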
Let’s look at some other general information about the script. The script is a single script that has code for both PowerShell 5 using the AzureRM module and PowerShell 7 (Core) using the Az module. When you run the script, it prompts you to choose which Azure module you are using. The script is interactive, so it will prompt you to log in and prompt you for your Azure subscription in case you are running multiple subscriptions. The script is intended to migrate a single VM. It assumes the VM is deployed into an availability set and will migrate it to a new availability set or keep it in the existing one. You can also migrate to a new resource group or place the new VM in the existing resource group. Ok, let’s dive into running the script. Here is a walk-through.
Open PowerShell and run Vnet-to-Vnet VM migration.ps1.
You will first be prompted for the following information:
Enter the Resource Group of the original VM:
Enter the original VM name:
Enter the new VM name:
Enter the new availability set name:
Enter the new VNet resource group:
Enter the new VNet name:
Enter the new Subnet name:
Next, you will be prompted to select your PowerShell version and Azure module as shown in the following screenshot.
The main differences between PowerShell 5 using the AzureRM module and PowerShell 7 (Core) using the Az module are the interactive login methods as well as the cmdlets. Here are screenshots of the different logins.
PowerShell 5 with AzureRM Module:
A window will pop up for you to log into your Azure account.
Next, a grid will pop up for you to select your subscription from the list.
PowerShell 7 (Core) with AZ module:
A warning will pop up prompting you with the device login info. To log into Azure, go to https://microsoft.com/devicelogin and enter the code the warning gave you, as shown in the screenshot.
A list of your subscriptions will be output. Go ahead and copy the subscription ID you plan to use. NOTE: Ignore this if you only have one subscription.
You are then prompted to enter your subscription ID. This sets the PowerShell session to the specified Azure subscription. Again, ignore this if you only have one subscription. Note that if you leave it blank and press Enter it will error, but even with the error the rest of the script will finish just fine.
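For reference, the PowerShell 7 / Az module login flow the script is wrapping looks roughly like this (the subscription ID is a placeholder):

```powershell
Connect-AzAccount -UseDeviceAuthentication     # prints the https://microsoft.com/devicelogin code
Get-AzSubscription                             # lists your subscriptions; copy the ID you want
Set-AzContext -SubscriptionId '<subscription-id>'
```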
Next, in both PS 5 and PS 7 you will see the following prompt confirming that you want to remove the specified original VM. Confirm yes and press Enter.
Virtual machine removal operation This cmdlet will remove the specified virtual machine. Do you want to continue? [Y] Yes [N] No [S] Suspend [?] Help (default is “Y”): Y
NOTE: The rest of the script will now run. It will take a while, so be patient. As it runs you should see some output similar to this:
WARNING: Since the VM is created using premium storage or managed disk, existing standard storage account, diagstwfyhhi3jyz54e, is used for boot diagnostics. VERBOSE: Performing the operation “New” on target “VMMove012”.
When it is all said and done, if it was successful you will see:
RequestId : IsSuccessStatusCode : True StatusCode : OK ReasonPhrase : OK
That’s it. Now go into the Azure portal and you will see that your VM has been moved to the new VNet and still has its data disks. Check out the following screenshots for an example showing what the resource groups look like before and after the script runs.
It has been a while since I presented on Azure Stack. On June 26th I will be presenting “Azure Stack 101 in 45 minutes” at an Azure Virtual Day Camp for a D365 user group. Here is a link to the main site:
In July I will be co-presenting with Kyle Weeks at the Minnesota Azure User Group on Azure management. The session is titled “Scale Matters: Policy + Azure Management Groups”. Come check out this session if you want to go through what Azure Management Groups are, how they scale to any complexity, and the best part… how to do this with policy configurations + Azure Blueprints + RBAC. Here is a link to register for the meeting:
CloudSkills.fm is a podcast by fellow Microsoft MVP Mike Pfeiffer, a veteran in the tech space with 5 books under his belt and numerous courses on Pluralsight. The podcast can be found here: cloudskills.fm. Mike is an all-around good guy, and I was honored to be a featured guest on one of his podcast episodes. The podcast is weekly, with technical tips and career advice for people working in the cloud computing industry. It is geared toward developers, IT pros, and those making the move into the cloud.
On this episode Mike and I talked about managing both the technical and non-technical aspects of your career in the cloud computing industry. We also discussed DevOps topics around Docker, Azure Kubernetes Service, and Terraform, and cloud topics around Azure management, including my 5 points to success with cloud. You can listen to the podcast here:
I’m very excited that Opsgility recently published a new Azure course by me titled “Deploy and Configure Infrastructure”. This course is part of the AZ-300 certification learning path for Microsoft Azure Architect Technologies. More about the AZ-300 certification can be found here: https://www.microsoft.com/en-us/learning/exam-az-300.aspx. The course is over 4 hours of Azure content!
Description of the course:
In the course, learn how to analyze resource utilization and consumption, create and configure storage accounts, create and configure a VM for Windows and Linux, create connectivity between virtual networks, implement and manage virtual networking, manage Azure Active Directory, and implement and manage hybrid identities.
Lately I have been hearing a lot about a solution named Rancher in the Kubernetes space. Rancher is an open source Kubernetes Multi-Cluster Operations and Workload Management solution. You can learn more about Rancher here: https://www.rancher.com.
In short, you can use Rancher to deploy and manage Kubernetes clusters deployed to Azure, AWS, or GCP, their managed Kubernetes offerings like GKE, EKS, and AKS, or even clusters you rolled on your own. Rancher also integrates with a bunch of 3rd-party solutions for things like authentication (such as Active Directory, Azure Active Directory, GitHub, and Ping) and logging (such as Splunk, Elasticsearch, or a syslog endpoint).
Recently registration opened up for some Rancher/Kubernetes/Docker training, so I decided to go. The primary focus was on Rancher, while also covering some good info on Docker and Kubernetes. It was really good training with a lot of hands-on time; however, there was one problem with the labs. The labs had instructions and setup scripts ready to go to run Rancher locally on your laptop or on AWS via Terraform. There was nothing for Azure.
I ended up getting my Rancher environment running on Azure, but it would have been nice to have some scripts or templates ready to go to spin up Rancher on Azure. I did find some ARM templates to spin up Rancher, but they deployed an old version, and it was not clear where in the templates they could be updated to deploy the new version of Rancher. I decided to spend some time building out a couple of ARM templates that can be used to quickly deploy Rancher on Azure and add a Kubernetes host to Rancher. The ARM template I pulled together pulls the Rancher container from Docker Hub, so it will always deploy the latest version. In this blog post I will spell out the steps to get Rancher up and running in under 15 minutes.
The repository consists of ARM templates for deploying Rancher and a host VM for Kubernetes. NOTE: These templates are intended for labs to learn Rancher. They are not intended for use in production.
In the repo ARM Template #1 named RancherNode.JSON will deploy an Ubuntu VM with Docker and the latest version of Rancher (https://hub.docker.com/r/rancher/rancher) from Docker Hub. ARM Template #2 named RancherHost.JSON will deploy an Ubuntu VM with Docker to be used as a Kubernetes host in Rancher.
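If you would rather not use the portal’s “Template Deployment” blade, the same thing can be done from PowerShell. Here is a sketch; the resource group name and location are placeholders, and any required template parameters will be prompted for interactively:

```powershell
New-AzResourceGroup -Name 'Rancher-RG' -Location 'centralus'

# Deploy the Rancher Node first, then the Rancher Host into the same resource group.
New-AzResourceGroupDeployment -ResourceGroupName 'Rancher-RG' -TemplateFile .\RancherNode.JSON
New-AzResourceGroupDeployment -ResourceGroupName 'Rancher-RG' -TemplateFile .\RancherHost.JSON
```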
Node Deployment
Deploy the RancherNode.JSON ARM template to your Azure subscription through “Template Deployment” or another deployment method. You will be prompted for the following info shown in the screenshot:
Host Deployment
Deploy the RancherHost.JSON ARM template to your Azure subscription through “Template Deployment” or another deployment method. Note that you should deploy this into the same Resource Group that you deployed the RancherNode ARM template into. You will be prompted for the following info shown in the screenshot:
After the Rancher Node and Rancher Host ARM templates are deployed, you should see the following resources in the new Resource Group:
RancherVNet – Virtual network
RancherHost – Virtual machine
RancherNode – Virtual machine
RancherHostPublicIP – Public IP address
RancherNodePublicIP – Public IP address
RancherHostNic – Network interface
RancherNodeNic – Network interface
RancherHost_OSDisk – Disk
RancherNode_OSDisk – Disk
Next, navigate to the Rancher portal in a web browser. The URL is the DNS name of the Rancher Node VM. You can find the DNS name by clicking on the Rancher Node VM in the Azure portal on the overview page. Here is an example of the URL:
The Rancher portal will prompt you to set a password, as shown in the following screenshot.
After setting the password, the Rancher portal will prompt you for the correct Rancher Server URL. This will automatically be the Rancher Node VM DNS name. Click Save URL.
You will then be logged into the Rancher portal and will see the Clusters page. From here you will want to add a cluster; doing this is how you add a new Kubernetes cluster to Rancher. In this post I will show you how to add a cluster on the Rancher Host VM. When it’s all said and done, Rancher will have successfully deployed Kubernetes to the Rancher Host VM. Note that you could add a managed Kubernetes cluster such as AKS, but we won’t do that in this blog post. I will save that for a future blog post!
Click on Add Cluster
Under “From my own existing nodes”, click on Custom, give the cluster a name, and click Next.
Next, check all the boxes for the Node Options, since all the roles will run on a single node. Copy the code shown at the bottom of the page, click Done, and run the code on the Rancher Host.
In order to run the code on the Rancher Host you need to SSH in and run it from there. To do this, follow these steps:
In the Azure Portal, from within the resource group click on the Rancher Host VM.
On the Overview page click on Connect.
Copy “ssh ranchuser@rancherhost.centralus.cloudapp.azure.com” from the Connect to virtual machine pop up screen.
Open a terminal in either Azure Cloud Shell or something like the integrated terminal in VS Code and paste the “ssh ranchuser@rancherhost.centralus.cloudapp.azure.com” command in (see the example below).
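For example (the hostname is the sample value from the Connect blade; use your own):

```powershell
ssh ranchuser@rancherhost.centralus.cloudapp.azure.com
# Once connected, paste and run the cluster registration command you copied from the Rancher UI.
```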
Running the code will look like this:
When done, you can run docker ps to see that the Rancher agent containers are running.
In the Rancher portal, under Clusters, you will see the Rancher Host being provisioned.
The status will change as Kubernetes is deployed.
Once it’s done provisioning, you will see your Kubernetes cluster as Active.
From here you can see a bunch of info about your new Kubernetes cluster. Also notice that you can even launch kubectl right from here and start running commands! Take some time to click around to see all the familiar stuff you are used to working with in Kubernetes. This is pretty cool and simplifies the management experience for Kubernetes.
If you want to add more nodes or need the configuration code again, just click the ellipsis button and Edit.
In Edit Cluster you can change the cluster name, get and change settings, and copy the code to add more VMs to the cluster.
That’s the end of this post. Thanks for reading. Check back for more Azure, Kubernetes, and Rancher blog posts.
Azure Kubernetes Service (AKS) vs. Azure App Service Environment (ASE) vs. Azure Service Fabric (ASF) Comparison
Scenario:
So, your team recently has been tasked with developing and running a new application. The team made the decision to take a microservices-based approach to the application. Your team also decided to utilize Docker containers and Azure as the cloud platform. Great, now it’s time to move forward, right? Not so fast. There is no question that Docker containers will be used, but what is in question is where you will run the containers. In Azure, containers can run on Azure’s managed Kubernetes service (AKS), an App Service Plan on Azure App Service Environment (ASE), or Azure Service Fabric (ASF). Let’s look at each one of these Azure services, including an overview, pros, cons, and pricing.
Conclusion:
Choose Azure Kubernetes Service if you need more control, want to avoid vendor lock-in (it can run on Azure, AWS, GCP, or on-prem), need the features of a full orchestration system, want flexible autoscale configurations, need deeper monitoring, need flexibility with networking, public IPs, DNS, and SSL, need a rich ecosystem of add-ons, will have many multi-container deployments, and plan to run a large number of containers. It is also a low-cost option.
Choose Azure App Service Environment if you don’t need as much control, want a dedicated SLA, don’t need deep monitoring or control of the underlying server infrastructure, want to leverage features such as deployment slots and blue/green deployments, will have a small number of simple multi-container deployments via Docker Compose, and plan to run a smaller number of containers. Regarding cost, running a containerized application in an App Service Plan in ASE tends to be more expensive compared to running it in AKS or Service Fabric. The higher cost of running containers on ASE is because, with an App Service Plan on ASE, you are paying for a combination of resources and the managed service. With AKS and ASF you are only paying for the resources used.
Choose Service Fabric if you want a full microservices platform, need the flexibility now or in the future to run in the cloud and/or on-premises, will run native code in addition to containers, want automatic load balancing, and want low cost.
A huge thanks to my colleague Sunny Singh (@sunnys101) for giving his input and reviewing this post. Thanks for reading, and check back for more Azure and container content soon.
Part of running Kubernetes is being able to monitor the cluster, the nodes, and the workloads running in it. Running production workloads, regardless of PaaS, VMs, or containers, requires a solid level of reliability. Azure Kubernetes Service comes with monitoring provided from Azure, bundled with the semi-managed service. Kubernetes also has built-in monitoring that can be utilized.
It is important to note that AKS is a free service and Microsoft aims to achieve at least 99.5% availability for the Kubernetes API server on the master node side. But due to AKS being a free service, Microsoft does not carry an SLA on the Kubernetes cluster service itself. Microsoft does provide an SLA for the availability of the underlying nodes in the cluster via the Azure Virtual Machines SLA. Without an official SLA for the Kubernetes cluster service, it becomes even more critical to understand your deployment and have the right monitoring tooling and plan in place so that when an issue arises the DevOps or CloudOps team can address, investigate, and resolve it.
The monitoring service included with AKS gives you monitoring from two perspectives: directly from an AKS cluster, and across all AKS clusters in a subscription. The monitoring looks at two key areas, “Health status” and “Performance charts”, and consists of:
Insights – Monitoring for the Kubernetes cluster and containers.
Metrics – Metric-based cluster and pod charts.
Log Analytics – K8s and container log viewing and search.
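If the monitoring add-on was not enabled when the AKS cluster was created, it can be turned on afterwards. Here is a hedged sketch using the Azure CLI; the cluster and resource group names are placeholders:

```powershell
# Enable Azure Monitor for containers on an existing AKS cluster.
az aks enable-addons --addons monitoring --name myAKSCluster --resource-group myResourceGroup
```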
Azure Monitor
Azure Monitor has a containers section. Here is where you will find a health summary across all clusters in a subscription, including ACS. You also will see how many nodes and system/user pods a cluster has and whether there are any health issues with a node or pod. If you click on a cluster from here, it will bring you to the Insights section on the AKS cluster itself.
If you click on an AKS cluster, you will be brought to the Insights section of AKS monitoring on the actual AKS cluster. From here you can access the Metrics section and the Logs section as well, as shown in the following screenshot.
Insights
Insights is where you will find the bulk of useful data when it comes to monitoring AKS. Within Insights you have these 4 areas: Cluster, Nodes, Controllers, and Containers. Let’s take a deeper look into each of the 4 areas.
Cluster
The Cluster page contains charts with key performance metrics for your AKS cluster’s health. It has performance charts for your node count with status and pod count with status, along with aggregated node memory and CPU utilization across the cluster. Here you can change the date range and add filters to scope down to the specific information you want to see.
Nodes
After clicking on the Nodes tab, you will see the nodes running in your AKS cluster along with uptime, the number of pods on the node, CPU usage, memory working set, and memory RSS. You can click on the arrow next to a node to expand it, displaying the pods that are running on it.
What you will notice is that when you click on a node or pod, a property pane is shown on the right-hand side with the properties of the selected object. An example of a node is shown in the following screenshot.
Controllers
Click on the Controllers tab to see the health of the cluster’s controllers. Again, here you will see CPU usage, memory working set, and memory RSS of each controller and what is running in each controller. As an example, in the following screenshot you can see the kubernetes-dashboard pod running on the kubernetes-dashboard controller.
The properties of the kubernetes-dashboard pod, as shown in the following screenshot, give you information like the pod name, pod status, Uid, label, and more.
You can drill in to see the container the pod was deployed using.
Containers
The Containers tab is where all the containers in the AKS cluster are displayed. As with the other tabs, you can see CPU usage, memory working set, and memory RSS. You also will see the status, the pod it is part of, the node it’s running on, its uptime, and whether it has had any restarts. In the following screenshot the CPU usage metric filter is used, and I am showing a container that has restarted 71 times, indicating an issue with that container.
In the following screenshot the memory working set metric filter is shown.
You can also filter the containers that are shown by using the search-by-name filter.
You also can see a container’s logs in the Containers tab. To do this, select a container to show its properties. Within the properties you can click on View container live logs (preview), as shown in the following screenshot, or View container logs. Container log data is collected every three minutes. STDOUT and STDERR is the log output from each Docker container that is sent to Log Analytics.
Kube-system logs are not currently collected and sent to Log Analytics. If you are not familiar with Docker logs, more information on STDOUT and STDERR can be found in this Docker logging article: https://docs.docker.com/config/containers/logging.
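If you prefer the command line, the same STDOUT/STDERR output can also be pulled straight from the cluster with kubectl (the pod and container names below are placeholders):

```powershell
kubectl get pods                                                 # find the pod name
kubectl logs <pod-name> --container <container-name> --tail=50   # show the last 50 log lines
```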
In this blog post I am going to walk through the steps for deploying WordPress to Azure Kubernetes Service (AKS) using the MySQL and WordPress Docker images. Note that the way I will show you is just one way; another way to deploy WordPress to AKS would be using a Helm chart. Here is a link to the WordPress Helm chart by Bitnami: https://bitnami.com/stack/wordpress/helm. Here are the images we will use in this blog post:
The first thing we need to do is save these files as mysql-deployment.yaml and wordpress-deployment.yaml respectively.
Next, we need to set up a password for our MySQL DB. We will do this by creating a secret on our K8s cluster. To do this, launch Bash or PowerShell in Azure Cloud Shell like in the following screenshot and run the following syntax:
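A sketch of that step is below. The secret name mysql-pass matches the stock Kubernetes WordPress example manifests; use whatever secret name your mysql-deployment.yaml actually references, and substitute your own password.

```powershell
# Create the secret the MySQL deployment expects (placeholder name/password).
kubectl create secret generic mysql-pass --from-literal=password='YOUR_PASSWORD'

# Deploy the MySQL pod and service from the file saved earlier.
kubectl apply -f mysql-deployment.yaml
```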
NOTE: You could use kubectl create -f /home/steve/mysql-deployment.yaml instead of apply to create the MySQL pod and service. I use apply because I typically use the declarative object configuration approach. kubectl apply essentially equals kubectl create + kubectl replace. In order to update an object after it has been created using kubectl create, you would need to run kubectl replace.
There are pros and cons to using each and it is more of a preference for example when using the declarative approach there is no audit trail associated with changes. For more information on the multiple Kubernetes Object Management approaches go here: https://kubernetes.io/docs/concepts/overview/object-management-kubectl/overview.
Note that the mysql yaml file has syntax to create a persistent volume. This is needed so that the database stays intact even if the pod fails, is moved, etc. You can check to ensure the persistent volume was created by running the following syntax:
kubectl get pvc
Also, you can run the following syntax to verify the mysql pod is running:
kubectl get pods
Deploying the WordPress pod and service is the same process. Use the following syntax to create the WordPress pod and service:
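Assuming the file name from earlier, that looks like:

```powershell
kubectl apply -f wordpress-deployment.yaml
kubectl get pods     # verify the wordpress pod is running
```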
Again, check to ensure the persistent volume was created. Use the following syntax:
kubectl get pvc
NOTE: When checking right after you create the persistent volume, it may be in a pending status for a while, as shown in the following screenshot:
You can also check the persistent volume using the K8s dashboard as shown in the following screenshot:
With the deployment of MySQL and WordPress we created 2 services. The MySQL service has a ClusterIP that can only be accessed internally. The WordPress service has an external IP that is also attached to an Azure Load Balancer for external access. I am not going to expand on what Kubernetes services are in this blog post, but know that they are typically used as an abstraction layer in K8s that provides access to the pods on the backend and follows the pods regardless of the node they are running on. For more information about Kubernetes services, visit this link: https://kubernetes.io/docs/concepts/services-networking/service.
In order to see that the services are running properly and find out the external IP you can run the following syntax:
kubectl get services (to see all services)
or
kubectl get services wordpress (to see just the WordPress service)
You also can view the services in the K8s dashboard as shown in the following screenshot:
Well, now that we have verified the pods and the services are running, let’s check out our new WordPress instance by going to the external IP in a web browser.
Thanks for checking out this blog post. I hope this was an easy-to-follow guide to get WordPress up and running on your Azure Kubernetes Service cluster. Check back soon for more Azure and Kubernetes/container content.