Running Stateful Apps in Kubernetes

With Kubernetes you will eventually need to run stateful applications, and this is more common than you might think. If you have never run stateful apps on Kubernetes before, it can be intimidating: you are adding more moving parts to the cluster, deploying the app, and managing one or more applications that require state.

In this blog post I am going to take you on a short journey to gain an understanding of stateless vs. stateful applications; how storage works in Kubernetes, touching on volumes, storage classes, persistent volumes (PVs), and persistent volume claims (PVCs); what StatefulSets are; how persistent state works with pods; and good practices for running stateful apps on Kubernetes.

Stateless

A stateless app is an application program that does not save client data generated in one session for use in the next session with that client.

Stateful

A stateful app is a program that saves client data from the activities of one session for use in the next session.

The data that is saved is called the application’s state. Here is a visual covering the differences between Stateless and Stateful applications:

Volumes

Here is a breakdown of what volumes are:

  • A volume is a directory, typically with data in it, that is accessible to the containers in a pod.
  • A volume represents a way to store, retrieve, and persist data across pods through an application’s lifecycle.
  • The volume modes Kubernetes supports are Filesystem and Block.
  • Volumes are backed by different types of storage such as NFS, iSCSI, or cloud storage (i.e. awsElasticBlockStore, azureDisk, gcePersistentDisk, etc.).
  • When a pod ceases to exist, Kubernetes destroys its ephemeral volumes; however, Kubernetes does not destroy persistent volumes (see the sketch after this list).
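
To make the ephemeral case concrete, here is a minimal sketch of a pod that mounts an emptyDir volume (the pod name, image, and paths are placeholders I picked for illustration). Anything written to the emptyDir lives only as long as the pod does:

apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod            # placeholder name
spec:
  containers:
    - name: app
      image: busybox:1.36      # placeholder image
      command: ["sh", "-c", "echo hello > /scratch/hello.txt && sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch  # where the volume appears inside the container
  volumes:
    - name: scratch
      emptyDir: {}             # ephemeral volume: removed when the pod is deleted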

StorageClasses

Here is a breakdown of what StorageClasses are:

  • StorageClasses define types of storage tiers, like Premium and Standard, in Kubernetes.
  • They give K8s admins a way to describe the “classes” of storage they offer.
  • StorageClasses define the provisioner, parameters, and reclaimPolicy used when a PersistentVolume is provisioned.
  • When a pod is deleted, the underlying storage resource can either be deleted or kept for use with a future pod.
  • A reclaim policy controls the behavior of the underlying storage resource when the pod and its persistent volume are no longer required.

Example of a configuration file for a StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-premium-retain
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Retain
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed

Reclaim Policy

Here is a breakdown of the reclaim policies:

  • Retain – Allows for manual reclamation of the resource. The PV is not available for another claim because the previous claimant’s data remains on the volume; a K8s admin must manually reclaim the volume.
  • Delete – Removes the PV resource from the K8s cluster, as well as the associated storage asset such as cloud storage, NFS, etc.
  • Recycle – Performs a basic scrub on the volume and makes it available again for a new PVC.

Persistent Volumes (PVs)

Here is a breakdown of what Persistent Volumes are:

  • A persistent volume (PV) is a storage resource created and managed by the Kubernetes API that can exist beyond the lifetime of an individual pod.
    • A Persistent Volume can be manually provisioned by an Kubernetes admin or dynamically provisioned using Storage Classes by the Kubernetes API server.
    • Dynamic provisioning uses a StorageClass to identify what type of storage (NFS, iSCSI, or cloud-based) needs to be created.

Example of a configuration file for the PersistentVolume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0010
spec:
  capacity:
    storage: 40Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.19.0.22

Persistent Volume Claims (PVCs)

Here is a breakdown of what Persistent Volumes Claims are:

  • A PersistentVolumeClaim (PVC) is a request for storage by a user.
    • A PersistentVolumeClaim specifies the volume mode of either Block or File storage from a StorageClass, the access mode, and the capacity needed.
    • PVC Access Modes Are:
      • ReadOnlyMany (ROX) allows being mounted by multiple nodes in read-only mode.
      • ReadWriteOnce (RWO) allows being mounted by a single node in read-write mode.
      • ReadWriteMany (RWX) allows multiple nodes to be mounted in read-write mode.

Example of a configuration file for the PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc0002
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
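
To show how a pod would consume this claim, here is a minimal sketch (the pod name and image are placeholders; the claimName matches the PVC above):

apiVersion: v1
kind: Pod
metadata:
  name: app-pod                # placeholder name
spec:
  containers:
    - name: app
      image: nginx:1.21        # placeholder image
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc0002     # binds the pod to the claim defined above

Once the claim is bound to a PV, the pod mounts that storage at the mountPath, and the data outlives any individual pod that uses the claim.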

Lifecycle of a Volume & Claim

Let’s take a look at how the lifecycle of volumes and claims flows: a PV is provisioned (statically by an admin or dynamically through a StorageClass), a PVC requests storage and is bound to a matching PV, pods use the claim as a volume, and once the claim is released the reclaim policy determines whether the PV is retained, deleted, or recycled.

StatefulSets

Here is a breakdown of what StatefulSets are:

  • StatefulSets are Kubernetes objects used when we need each pod to have its own independent state and use its own individual volume.
  • With StatefulSets, each pod is assigned a unique name, and that unique name stays with it even if the pod is deleted and recreated.
  • Headless services are primarily used when we deploy StatefulSet applications. A headless service does not act as a load balancer and is not assigned a cluster IP the way a regular service is.

StatefulSets are typically used when the following is needed (a minimal example follows this list):

  • Unique network identifiers for pods
  • Persistent storage for retaining data
  • Ordered, graceful deployment and scaling of pods
  • Ordered, automated rolling updates of the app
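
Here is a minimal sketch of a StatefulSet paired with a headless service (the names, image, replica count, and sizes are placeholders; the storageClassName reuses the StorageClass example from earlier). Each replica gets a stable identity (web-0, web-1, and so on) plus its own PVC stamped out from volumeClaimTemplates:

apiVersion: v1
kind: Service
metadata:
  name: web-headless
spec:
  clusterIP: None              # headless: no cluster IP and no load balancing
  selector:
    app: web
  ports:
    - name: http
      port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web-headless    # gives each pod a stable DNS name via the headless service
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.21    # placeholder image
          ports:
            - containerPort: 80
              name: http
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:        # one PVC per pod: www-web-0, www-web-1, www-web-2
    - metadata:
        name: www
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: managed-premium-retain
        resources:
          requests:
            storage: 10Gi

If pod web-1 is deleted, it comes back with the same name and reattaches to the same www-web-1 claim, which is what gives each replica its independent, persistent state.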

Some Good Practices When Running Stateful Apps on Kubernetes

That wraps up this blog post! Thanks for reading and stay tuned to my blog for more content on Kubernetes soon.


How To Set the Application Reconciliation Timeout in Argo CD

Argo CD has something called the Application reconciliation timeout. This controls how often Argo CD checks your Git repository for changes; when it detects changes, it applies the desired state from the repo to the Kubernetes (K8s) cluster. By default the timeout period is set to 3 minutes. This is set in the general Argo CD configuration.

The general Argo CD configuration is set in the argocd-cm ConfigMap, which is deployed in the argocd namespace.

You can view what is currently set by running the following kubectl command on your K8s cluster that is running your Argo CD instance:

kubectl describe configmaps argocd-cm -n argocd

The output will look like the following:

You can also see that the argocd-cm Data is empty by running kubectl get configmaps -n argocd, or, if you are using AKS, by navigating to ConfigMaps in the Azure portal as shown in the following screenshot.

Most Argo CD instances run the default settings for their configuration. The argocd-server component reads and writes to the argocd-cm ConfigMap and the other Argo configuration ConfigMaps based on admin user interactions with the Argo CD web UI or the Argo CD CLI. It is normal for the ConfigMap to be empty, with Data at 0, if you have not changed any defaults or set anything directly in it yet.

To change the Application reconciliation timeout you need to do the following:

  1. Get a copy of the argocd-cm ConfigMap here: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/argocd-cm.yaml
  2. The Application reconciliation timeout can be found on line 283: “timeout.reconciliation: 180s”.
  3. Change “180s” to whatever value you want, e.g. change it to “60s” to reduce the sync interval to 1 minute.
  4. Remove all of the other settings in the file except for the Application reconciliation timeout. The file should look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  # Application reconciliation timeout is the max amount of time required to discover if a new manifests version got
  # published to the repository. Reconciliation by timeout is disabled if timeout is set to 0. Three minutes by default.
  # > Note: argocd-repo-server deployment must be manually restarted after changing the setting.
  timeout.reconciliation: 60s

5. Save the file.

6. Connect to the Kubernetes cluster that is running Argo CD and apply the argocd-cm ConfigMap file you just updated by running the following:

kubectl apply -f argocd-cm.yaml -n argocd

7. Run the following to verify the update was applied:

kubectl describe configmaps argocd-cm -n argocd

You should also notice that at least 1 item is now listed under Data for the ConfigMap.

8. It is a good practice to redeploy the argocd-repo-server after updating the argocd-cm ConfigMap. You can redeploy the argocd-repo-server by running the following:

kubectl -n argocd rollout restart deploy argocd-repo-server

That’s it! Now your apps in Argo CD will sync based on the new Application reconciliation timeout that you set. Check back soon for more Azure, Cloud, Kubernetes, GitOps, Argo CD content and more.

BTW: For more in-depth information on GitOps and Argo CD check out my GitOps and Argo CD courses on Pluralsight here:

“GitOps: The Big Picture”:

https://app.pluralsight.com/library/courses/gitops-the-big-picture

“Getting Started with Argo CD”:

https://app.pluralsight.com/library/courses/argo-cd-getting-started

And here is the link to my Pluralsight profile to follow me: https://app.pluralsight.com/profile/author/steve-buchanan


GitOps Fundamentals Certification

Recently Codefresh launched the first certification in its GitOps certification path. This one is called “GitOps Fundamentals”. You can find it here: https://codefresh.learnworlds.com.

It takes you through the basics of GitOps to gain theoretical knowledge, and how to utilize Argo CD as the GitOps operator to gain hands-on knowledge. You will learn about both and will have questions on both in the quizzes and final exam.

They also touch on Argo Rollouts to go over Progressive Delivery with topics such as blue/green deployments and canary deployments. This is the 1st ever GitOps certification and it’s free! They do have plans for GitOps at Edge and GitOps at Scale certifications.

You can find more information about the GitOps certification and Codefresh’s future plans for it on this blog by Hannah Seligson (one of the authors of the course and exam) here: https://codefresh.io/blog/get-gitops-certified-argo.

I jumped all over this opportunity to get certified on GitOps by signing up for the course, taking the training, and sitting the exam. I passed, and now I am GitOps certified.

Here is the certification:

GitOps is gaining more adoption every day in the Kubernetes space, and Argo CD is growing extremely fast as one of the top GitOps operators, if not the top one. I recommend you check out this Codefresh GitOps certification and get GitOps certified, as this pattern and the technology behind it are growing at a rapid rate.

Also note, it looks like Weaveworks is planning to launch a “Certified GitOps Practitioner (CGP)” certification soon. I would guess the Weaveworks GitOps certification will contain content on Flux, another GitOps operator. You can learn more about their coming GitOps certification here: https://www.weave.works/certified-gitops-practitioner

Also for more training on GitOps and Argo CD be sure to check out my GitOps and Argo CD courses on Pluralsight here:

“GitOps: The Big Picture”:

https://app.pluralsight.com/library/courses/gitops-the-big-picture

“Getting Started with Argo CD”:

https://app.pluralsight.com/library/courses/argo-cd-getting-started

And here is the link to my Pluralsight profile to follow me for future GitOps, Kubernetes, Cloud, and DevOps content: https://app.pluralsight.com/profile/author/steve-buchanan


A Guide to Navigating the AKS Enterprise Documentation & Scripts

NOTE: As with all of my blog posts the views and opinions on this post are my own and are not that of my employer.

The goal of this blog is to serve as guidance on the Microsoft AKS enterprise documentation.

Before joining Microsoft, I was in the F500/F100 consulting world. I was focused on Azure, DevOps, and Kubernetes. Many organizations had an interest in utilizing a managed Kubernetes service. This would often lead them to Azure Kubernetes Service (AKS). We spent time guiding organizations on how to get started with AKS including the design of the architecture, deployment, and operation of it.  

Like with Azure and other platforms that have a lot of moving parts, AKS has many design areas that need to be covered as a part of the design and implementation. The core areas are:

  • IAM (Identity and access management)
  • Networking (topology, IP addressing, Ingress, load balancing, service mesh, Web App Firewall, etc.)
  • Governance (Resource organization, taxonomy, etc.)
  • Security (platform security, image security, runtime security, secrets management, etc.)
  • Management and Operations (monitoring, backup, DR, etc.)
  • Automation and DevOps (Orchestration, service discovery, Configuration, Autoscaling, CI/CD/GitOps, etc.)

These are in addition to the core but come into play with the apps that will run on top of Kubernetes:

  • Applications
  • Data

In order to simplify Kubernetes projects, you can funnel them down to three phases: Design, Deploy, and Operate.

This is a lot of ground to cover on top of gaining a solid understanding of Kubernetes itself. Microsoft has created a set of resources that can simplify and accelerate the adoption of Kubernetes. This is a set of resources that help you build out landing zones for AKS and some for Azure. These resources live in the Azure Architecture Center (AAC). The AAC is where you get guidance for architecting solutions on Azure using established patterns and practices.

I highly recommend any team and organization that plans to adopt Kubernetes utilize these artifacts from Microsoft to help you along your journey. This will ensure your AKS clusters are enterprise ready. When starting with AKS it can be confusing when and in what order to use these resources.

Again, the goal of this blog post is to give you a guide on how to use these resources. I will list these resources here in order with a brief description of them, when to use them, and how to use them:

-DESIGN-

Part #1 is architecting: you will need to start by designing your AKS architecture. There are several documents that can assist as you work through your AKS architecture design. You will want to start with the Baseline architecture for an Azure Kubernetes Service (AKS) cluster document. This document is core for designing AKS; however, there are additional AKS documents that you will want to utilize alongside it. Which ones you need will depend on your organization’s specific use case.

Baseline architecture for an Azure Kubernetes Service (AKS) cluster

What it is:

The AKS baseline gives you detailed recommendations for networking, security, identity, management, and monitoring of AKS clusters. This baseline takes you through all the needed facets of AKS to come up with a plan for implementing AKS across your enterprise. The final result will be based on your organization’s business requirements.

How to use it:

This document will take you through 6 core areas divided up into sections with sub-sections.

You will start with your networking and work your way through the sections finishing off with operations.

This document has a Visio file of the AKS architecture you can download to get you started. You can download this right away and build it out with specifics to your needs as you work through this document. In fact, there are multiple Visio templates you can download to help.

A common area that folks really struggle with when getting started with AKS is planning the IP addresses. Teams need help deciding between Kubenet and Azure CNI for the networking model. You cannot change this on an AKS cluster after it is deployed, so you have to make this decision upfront; the only way to move from one networking model to the other is to deploy a new cluster. Admins often worry about IP exhaustion when utilizing Azure CNI. There is a Visio and another sub-document within the IP address section to help with all of this. It links to a repo (https://github.com/mspnp/aks-baseline/blob/main/networking/topology.md) containing a markdown file with a table to help you plan your subnets for AKS, and to a document that helps you decide between Kubenet and Azure CNI, along with critical information on each model type and IPs.

This document also covers GitOps, multi-tenancy, and cost management with AKS.

LINK TO THE DOCUMENT: https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks/secure-baseline-aks

The next four documents I am going to mention fit different scenarios so you may or may not need them. I will call out in the “How to use it” sections below each reference.

AKS Secure Baseline with Private Cluster

What it is:

This document helps you deploy a secure AKS cluster, compliant with Enterprise-Scale for AKS guidance and best practices. This document also contains links to reference scripts for deploying a private AKS cluster.

How to use it:

In practice, in the real world, you will want to deploy a private AKS cluster 99% of the time; there needs to be a very solid reason not to. Doing this alone will greatly improve the security posture of your AKS cluster. By default, when you deploy AKS the API server is accessible via a public IP. Deploying a private AKS cluster makes the AKS API server private and only accessible from the Azure VNet that the private cluster is on, or when connected to that VNet, i.e. via ExpressRoute. I recommend you plan to deploy your clusters as private and utilize this document right alongside the baseline document when designing your AKS architecture.

LINK TO THE DOCUMENT: https://github.com/Azure/AKS-Landing-Zone-Accelerator/tree/main/Scenarios/AKS-Secure-Baseline-PrivateCluster

AKS baseline for multi-region clusters

What it is:

This reference architecture details how to run multiple instances of an Azure Kubernetes Service (AKS) cluster across multiple regions in an active/active and highly available configuration.

How to use it:

If you need multi-region AKS clusters for greater high availability, this is the document you will want to guide you. If you don’t need multi-region clusters, skip this document.

LINK TO THE DOCUMENT: https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks-multi-region/aks-multi-cluster

AKS regulated cluster for PCI

What it is:

Microsoft has built a 9-part series of articles to help when organizations need to run PCI workloads on AKS. Below are the first 3 of those articles as this is where you start. You will want to reference all 9 parts of the series though.

Introduction of an AKS regulated cluster for PCI-DSS 3.2.1 – This reference architecture describes the considerations for an Azure Kubernetes Service (AKS) cluster designed to run a sensitive workload. The guidance is tied to the regulatory requirements of the Payment Card Industry Data Security Standard (PCI-DSS 3.2.1).

Architecture of an AKS regulated cluster for PCI-DSS 3.2.1 – This article describes a reference architecture for an Azure Kubernetes Service (AKS) cluster that runs a workload in compliance with the Payment Card Industry Data Security Standard (PCI-DSS 3.2.1). This architecture is focused on the infrastructure and not the PCI-DSS 3.2.1 workload.

Configure networking of an AKS regulated cluster for PCI-DSS 3.2.1 – This article describes the networking considerations for an Azure Kubernetes Service (AKS) cluster that’s configured in accordance with the Payment Card Industry Data Security Standard (PCI-DSS 3.2.1).

How to use it:

If your organization plans to run any workloads that need PCI compliance on AKS then you will want to check out this document and utilize it when designing your AKS clusters. It gets into topics such as TLS, DDoS protection, pod-to-pod security, and more.

LINK TO THE DOCUMENT/s:

Introduction of an AKS regulated cluster for PCI-DSS 3.2.1 – https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks-pci/aks-pci-intro

Architecture of an AKS regulated cluster for PCI-DSS 3.2.1 – https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks-pci/aks-pci-ra-code-assets

Configure networking of an AKS regulated cluster for PCI-DSS 3.2.1 – https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks-pci/aks-pci-network

Advanced Azure Kubernetes Service (AKS) microservices architecture

What it is:

This reference architecture details several configurations to consider when running microservices on Azure Kubernetes Services. Topics include configuring network policies, pod autoscaling, and distributed tracing across a microservice-based application.

How to use it:

The chances are high that you will be running microservice-based workloads on your AKS cluster. Utilize this document in your design process to ensure your architecture is ready to handle microservices-based workloads. It also includes a Visio file to help you get started.

LINK TO THE DOCUMENT: https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks-microservices/aks-microservices-advanced

-DEPLOY-

Part #2 is to deploy the architecture you designed. The best option for deploying Azure infrastructure and AKS clusters is to script it as IaC (Infrastructure as Code). Scripting the deployment vs manually deploying allows you to have documentation via code, standardization, and a templatized deployment for repeatability. You can take this code and place it in a pipeline for ease of deployment, in a service catalog for access to teams across your org, or as an inner source for use among DevOps teams.

Microsoft has built something called the AKS Landing Zone Accelerator as a starting point for building out your IaC for AKS. The idea is that you can utilize the Azure Kubernetes Service (AKS) Baseline documentation as a reference when designing your AKS architecture and use the AKS Landing Zone Accelerator to deploy it. Your architecture should be based on the AKS baseline with some modifications to fit your specific needs, and the AKS Landing Zone Accelerator may need to be modified to fit your specific needs as well. As long as your architecture is based on the AKS Baseline, you should not have to make a ton of modifications to the AKS Landing Zone Accelerator code. In fact, 80% or more of the work should already be done for you when utilizing the AKS Landing Zone Accelerator IaC code.

The AKS Landing Zone Accelerator contains IaC code for both Bicep and Terraform. It also has instructions on how to deploy the AKS Baseline using either of the two languages.


Guest on the 1st episode of Tech/Life Podcast

A former Microsoft MVP and friend Steve Beaumont started a podcast. I was honored to be a guest on his first episode! The episode was released today. This podcast explores balancing Tech and Life. On the podcast episodes, Steve talks with people within the technology field, discussing both tech and diving into personal lives, stress, learning, and interests.

Steve has a website for the podcast here: www.techvlife.com.

Steve and I had a chance to catch up at MMS 2022 for the episode. We talked about my transition to Microsoft, working as a Principal Program Manager in Azure. We also talked about my time practicing Kung Fu, how I stay motivated and set goals, how I balance tech with my hobbies (tech is one of my hobbies, so I have to keep it from becoming another job), Kubernetes, AKS, and more.

Steve already has episodes recorded with many other great folks. He will be releasing them in the coming weeks, so be sure to subscribe to his podcast. Here are some of the other guests he will have on:


Aria Carley
Justin Chalfant
Ben Whitmore
Mike Danoski
David Segura
Donna Ryan
Mike Marable

You can listen to the 1st episode with me right here:


16th Pluralsight course – “Laravel The Big Picture” a web framework for modern PHP based web apps

I am excited to announce that I published a Laravel course on Pluralsight! This course is titled “Laravel 9: The Big Picture”. This is my 16th course with Pluralsight. I have been working with PHP-based websites, content management systems, frameworks, and the PHP language off and on for many years. When the opportunity came to author a course on Laravel I jumped on it.

Laravel is a full-stack web framework for modern PHP-based web applications. PHP is a language that has been around for a long time and is used to power most of the websites and web apps on the internet today. And some of the best web development teams in the world build their products with Laravel.

Many don’t know this but Laravel can be used for front-end and back-end development, as well as developing a REST API. Some of the largest companies and most popular websites have been built using Laravel such as Disney, Apple, Pfizer, BBC, Twitch, Mastercard, and more.

In this course, Laravel 9: The Big Picture, you’ll learn about the Laravel full-stack framework. First, you’ll explore Laravel’s core components such as: routing, middleware, controllers, requests, responses, views, blade templates, and more. Next, you’ll discover how to install Laravel, configure it, how it handles security, works with databases, about its APIs and more. Finally, you’ll learn what it is like to develop, build, and deploy an app with Laravel.

When you’re finished with this course, you’ll have the skills and knowledge of Laravel needed to decide if it is the right PHP web framework for you and where to go next on your journey with Laravel.

Check out the new Laravel course here: https://app.pluralsight.com/library/courses/laravel-9-big-picture

I hope you find value in this new Laravel 9: The Big Picture course. Be sure to follow my profile on Pluralsight so you will be notified as I release new courses.

Here is the link to my Pluralsight profile to follow me: https://app.pluralsight.com/profile/author/steve-buchanan


BIT Talk: Destination Cloud – Pivoting Your Career in Tech

This month I will be a guest speaker at the free Blacks In Technology Twin Cities chapter happy hour event. We will be having a tech career discussion. The topic is: “Destination Cloud – Pivoting Your Career in Tech with Steve Buchanan”.

When: The happy hour event will be on Wed, May 25, 2022, 6:00 PM – 8:00 PM.

Location: Modern Well, 2909 S Wayzata Blvd, Minneapolis, MN 55405.

Join us for an evening with me and moderator Brian Waters as we explore career pathways in the Digital Space. Here are topics we will explore:

  • Choosing an area of interest
  • Being Strategic & Intentional in Career Search & Growth
  • Transitioning into Different Tech Disciplines & Industries, and
  • Identifying tech trends on the horizon!

This event is both in-person and virtual. The in-person location is Modern Well, 2909 S Wayzata Blvd, Minneapolis, MN 55405 and the zoom link is below. I hope you can make it out to this event!

Register here: https://us02web.zoom.us/j/85049210370?pwd=M0ZQR0twNTlQbTJGOUtDZjloVFA2dz09 or here: https://www.linkedin.com/events/destinationcloud-pivotingyourca6927748171498475520


Watch Learn Live Episode 7 – Introduction to Azure Arc enabled Kubernetes

Today Pierre Roman (@wiredcanuck), Senior Cloud Advocate at Microsoft, and I (@buchatech) streamed “Introduction to Azure Arc enabled Kubernetes” on Learn Live.

In this session, we showed you how Azure Arc-enabled Kubernetes clusters can help customers like Contoso optimize and simplify their operations. Here are the learning objectives we covered:

  • Describe Kubernetes, Azure Arc, and Azure Arc-enabled Kubernetes.
  • Connect Kubernetes clusters to Azure Arc.
  • Manage Azure Arc enabled Kubernetes clusters by using GitOps.
  • Integrate Azure Arc enabled Kubernetes cluster with Azure services like Azure Monitor and Azure Policy.

If you missed it don’t worry. 🙂 You can watch the playback on the Microsoft Developer YouTube channel here:

You can check out more Learn Live episodes on Learn TV or on the Microsoft Developer YouTube channel.


15th Pluralsight Course Published – Ember JS

I am excited to announce that I published an Ember.js course on Pluralsight! This course is titled “Ember 4: The Big Picture”. This is my 15th course with Pluralsight. Ember.js is a JavaScript framework used for developing web apps. Some of the best web development teams in the world build their products with Ember.

This course will give an overview of Ember’s components and architecture and a guide to the next steps you can take to get started with Ember. In this course, Ember 4: The Big Picture, you’ll learn about the Ember front-end framework. First, you’ll explore Ember’s core parts: Ember.js, Ember Data, Ember CLI, and Ember Inspector.

Next, you’ll discover Ember’s core concepts: routing, services, and components. Finally, you’ll learn what it is like to develop, build, and deploy an app with Ember.

When you’re finished with this course, you’ll have the skills and knowledge of Ember JS needed to decide if it is the right JavaScript framework for you and where to go next on your journey with Ember.

Check out the new Ember JS course here: https://app.pluralsight.com/library/courses/ember-4-big-picture

I hope you find value in this new Ember 4: The Big Picture course. Be sure to follow my profile on Pluralsight so you will be notified as I release new courses.

Here is the link to my Pluralsight profile to follow me: https://app.pluralsight.com/profile/author/steve-buchanan


Co-hosting 2 sessions in the Azure Hybrid Cloud Study Hall Series

I am very excited to be a part of a new Microsoft Azure Hybrid Cloud Study Hall series. This is a free fourteen-part weekly series that starts in April and runs through June.

In this study hall, you will learn how you can manage your on-premises, edge, and multi-cloud resources, and how you can deploy Azure services anywhere with Azure Arc and Azure Stack.

In this series, each session covers working with hybrid cloud resources using Azure services and hybrid cloud technologies. In these sessions we will:

  • Answer your questions live
  • Walk-through how to configure hybrid cloud resources
  • Walk-through how to deploy hybrid cloud resources
  • Walk-through how to manage hybrid cloud resources

In these sessions, together with you, we will work through Microsoft Learn modules focused on Azure Arc and Azure Stack HCI.

We have a solid lineup of speakers from Microsoft and the community! And I will be co-delivering two sessions myself.

Some of the speakers and moderators

Check out this video Microsoft marketing made where I talk about the sessions:

My sessions are:

Introduction to Azure Arc enabled Kubernetes

on May 5, 2022 10:00AM – 11:30AM (Pacific) co-hosting with Pierre Roman. 

Add to Calendar:
https://aka.ms/learnlive-azure-hybrid-cloud-study-hall-Ep7

The Learn Module:
https://aka.ms/learnlive-20220505A

Implement Azure App Service on Kubernetes with Arc

on June 9, 2022 10:00AM – 11:30AM (Pacific) co-hosting with Lior Kamrat.

Add to Calendar:
https://aka.ms/learnlive-azure-hybrid-cloud-study-hall-Ep11

The Learn Module:
https://aka.ms/learnlive-20220609A

Check out all of the Learn Live – Azure Hybrid Cloud Study Hall sessions here:

https://docs.microsoft.com/en-us/events/learntv/learnlive-azure-hybrid-cloud-study-hall
