The goal of this tutorial is to show how to deploy the iWay application that was built with the iWay iIT tool in the previous tutorial to the Microsoft Azure Cloud.
Microsoft Azure is an ever-expanding set of cloud services to help your organization meet your business challenges. It's the freedom to build, manage, and deploy applications on a massive, global network using your favorite tools and frameworks.
In short, this application consists of two Docker containers:
iWay Application exposing an API (defined via a RAML file) which utilizes the iIT MongoDB Connector
MongoDB Server
We brought these containers up and managed communication between the two by using Kubernetes.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
We deploy this application on the Azure Cloud by utilizing a new Microsoft product called Azure Kubernetes Service (AKS).
In effect, by combining the technologies described above, we have built a Software as a Service (SaaS) application managed by the Azure Cloud!
Software as a service (SaaS) is a software distribution model in which a third-party provider hosts applications and makes them available to customers over the Internet.
Azure Kubernetes Service (AKS) manages your hosted Kubernetes environment, making it quick and easy to deploy and manage containerized applications without container orchestration expertise. It also eliminates the burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on demand, without taking your applications offline.
Introduction to Azure Kubernetes Service (AKS) preview
Azure Kubernetes Service (AKS) makes it simple to create, configure, and manage a cluster of virtual machines that are preconfigured to run containerized applications.
This enables you to use your existing skills, or draw upon a large and growing body of community expertise, to deploy and manage container-based applications on Microsoft Azure.
By using AKS, you can take advantage of the enterprise-grade features of Azure, while still maintaining application portability through Kubernetes and the Docker image format.
Important:
Azure Kubernetes Service (AKS) is currently in preview. Previews are made available to you on the condition that you agree to the supplemental terms of use.
Some aspects of this feature may change prior to general availability (GA).
Managed Kubernetes in Azure
AKS reduces the complexity and operational overhead of managing a Kubernetes cluster by offloading much of that responsibility to Azure.
As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you.
In addition, you pay only for the agent nodes within your clusters, not for the masters. As a managed Kubernetes service, AKS provides:
Automated Kubernetes version upgrades and patching
Easy cluster scaling
Self-healing hosted control plane (masters)
Cost savings - pay only for running agent pool nodes
With Azure handling the management of the nodes in your AKS cluster, you no longer need to perform many tasks manually, like cluster upgrades.
Because Azure handles these critical maintenance tasks for you, AKS does not provide direct access (such as with SSH) to the cluster.
Using Azure Kubernetes Service (AKS)
The goal of AKS is to provide a container hosting environment by using open-source tools and technologies that are popular among customers today.
AKS exposes the standard Kubernetes API endpoints. By using these standard endpoints, you can leverage any software that is capable of talking to a Kubernetes cluster.
Creating a Kubernetes cluster using Azure Kubernetes Service (AKS)
To begin using AKS, deploy an AKS cluster with the Azure CLI or via the portal (search the Marketplace for Azure Kubernetes Service).
If you are an advanced user who needs more control over the Azure Resource Manager templates, use the open-source acs-engine project to build your own custom Kubernetes cluster and deploy it via the az CLI.
Using Kubernetes
Kubernetes automates deployment, scaling, and management of containerized applications. It has a rich set of features including:
Automatic binpacking
Self-healing
Horizontal scaling
Service discovery and load balancing
Automated rollouts and rollbacks
Secret and configuration management
Storage orchestration
Batch execution
Deploy an Azure Kubernetes Service (AKS) cluster
Install Azure CLI 2.0.
The Azure CLI 2.0 is a command-line tool for managing Azure resources.
Select the Cloud Shell button on the menu in the upper-right corner of the Azure portal:
Run
az --version
to find the version. If you need to install or upgrade, see Install Azure CLI.
Ensure that the needed Azure service providers are enabled with the az provider register command.
az provider register -n Microsoft.Network
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Compute
az provider register -n Microsoft.ContainerService
After registering, you are now ready to create a Kubernetes cluster with AKS.
Create a resource group
Create a resource group with the az group create command. An Azure resource group is a logical group in which Azure resources are deployed and managed. When creating a resource group, you are asked to specify a location; this is where your resources will live in Azure. While AKS is in preview, only some locations are available: eastus, westeurope, centralus, canadacentral, and canadaeast.
The following example creates a resource group named myResourceGroup in the eastus location.
az group create --name myResourceGroup --location eastus
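With the resource group in place, the AKS cluster itself can be created with the Azure CLI. The sketch below is a minimal example, not the tutorial's exact invocation; the cluster name and node count are illustrative:

```shell
# Create a one-node AKS cluster in the resource group created above.
# "myAKSCluster" is a hypothetical name; pick your own.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 1 \
  --generate-ssh-keys

# Merge the cluster credentials into your kubeconfig so kubectl can talk to it.
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```

After az aks get-credentials completes, kubectl commands such as kubectl get nodes run against the new cluster.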
Internal load balancing makes a Kubernetes service accessible to applications running in the same virtual network as the Kubernetes cluster.
When all services that use the internal load balancer have been deleted, the load balancer itself is also deleted.
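An internal load balancer is requested through an annotation on the Kubernetes Service. The manifest below is a hedged sketch: the annotation is the documented AKS one, but the service name, port, and selector are hypothetical stand-ins for the demo application:

```shell
# Write a Service manifest that asks AKS for an *internal* load balancer.
# The annotation is what triggers internal (VNet-only) load balancing;
# "myiway" and port 9999 are illustrative values.
cat > internal-lb.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: myiway-internal
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 9999
  selector:
    app: myiway
EOF
# Apply with: kubectl apply -f internal-lb.yaml
```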
Load balancer rules for the demo application:
Load balancer properties:
How to test an application on the Azure cloud.
Run the following Kubernetes command:
kubectl get services
This command lets us look up the external IP address assigned to each service.
Initially, the EXTERNAL-IP for the services appears as pending.
After a couple of minutes, the EXTERNAL-IP changes from pending to an actual IP address.
By using the EXTERNAL-IP address and port 9999 for the service myiway, we can visit the iWay Service Manager Console:
Use the EXTERNAL-IP address on port 8081 with the iIT iWay Explorer to post a JSON document into the MongoDB database:
Use the API endpoint to browse the posted document:
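As a command-line alternative to a browser, the same check can be done with curl. The IP address and resource path below are purely illustrative; use the EXTERNAL-IP reported by kubectl get services and the path defined in your RAML file:

```shell
# Hypothetical values; substitute your service's EXTERNAL-IP and API path.
EXTERNAL_IP=40.112.0.10
curl -s "http://$EXTERNAL_IP:8081/documents" -H "Accept: application/json"
```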
iWay Google Deployment Overview
The goal of this tutorial is to show how to deploy the iWay application that was built with the iWay iIT tool in the previous tutorial to the Google Cloud.
Google Cloud Platform (GCP) is a collection of Google’s computing resources, made available via services to the general public as a public cloud offering.
The GCP resources consist of physical hardware infrastructure — computers, hard disk drives, solid state drives, and networking — contained within Google’s globally distributed data centers, where any of the components are custom designed using patterns similar to those available in the Open Compute Project.
This hardware is made available to customers in the form of virtualized resources, such as virtual machines (VMs), as an alternative to customers building and maintaining their own physical infrastructure.
In short, this application consists of two Docker containers:
iWay Application exposing an API (defined via a RAML file) which utilizes the iIT MongoDB Connector
MongoDB Server
We brought these containers up and managed communication between the two by using Kubernetes.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
In effect, by combining the technologies described above, we have built a Software as a Service (SaaS) application managed by the Google Cloud!
Software as a service (SaaS) is a software distribution model in which a third-party provider hosts applications and makes them available to customers over the Internet.
Kubernetes Engine is a managed, production-ready environment for deploying containerized applications. Kubernetes Engine allows you to get up and running with Kubernetes in no time by completely eliminating the need to install, manage, and operate your own Kubernetes clusters.
Before you begin
Take the following steps to enable the Kubernetes Engine API:
Wait for the API and related services to be enabled. This can take several minutes.
Make sure that billing is enabled for your project.
We can use the Google Cloud Platform Dashboard and Google Cloud Shell to complete the task.
Google Cloud Shell is a shell environment for managing resources hosted on Google Cloud Platform (GCP). Cloud Shell comes preinstalled with the gcloud and kubectl command-line tools. gcloud provides the primary command-line interface for GCP, and kubectl provides the command-line interface for running commands against Kubernetes clusters.
Here are the steps to deploy an application to the Google Cloud:
I. Create a new project.
II. Activate a Cloud Shell.
Press the button depicted below on the Google Cloud Platform Dashboard:
III. Set a default project
To set a default project, run the following command from Cloud Shell:
Replace PROJECT_ID with your project ID.
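The command itself does not appear above; the standard gcloud invocation for setting the default project is:

```shell
# Set the default project for subsequent gcloud and kubectl commands.
# PROJECT_ID is a placeholder, as noted above.
gcloud config set project PROJECT_ID
```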
IV. Set a default compute zone
To set a default compute zone, run the following command from Cloud Shell:
where COMPUTE_ZONE is the desired geographical compute zone, such as us-west1-a.
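The gcloud command this step refers to is the standard one:

```shell
# COMPUTE_ZONE is a placeholder, e.g. us-west1-a.
gcloud config set compute/zone COMPUTE_ZONE
```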
V. Create a Kubernetes Engine cluster from the Google Cloud Platform Kubernetes Engine console
The cluster may also be created from the shell with the following command:
gcloud container clusters create CLUSTER_NAME
where CLUSTER_NAME is the name you choose for the cluster.
Sample iway-cluster in a Kubernetes Engine console:
VI. Get authentication credentials for the cluster
After creating your cluster, you need to get authentication credentials to interact with the cluster.
To authenticate for the cluster, run the following command:
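The command is not shown above; the standard gcloud invocation for fetching cluster credentials is:

```shell
# CLUSTER_NAME is the name chosen when the cluster was created,
# e.g. iway-cluster in this tutorial. This configures kubectl
# to run commands against the new cluster.
gcloud container clusters get-credentials CLUSTER_NAME
```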
Go to Cloud Console to see myiway deployment details:
4. The next step is to create a Kubernetes service for myiway: kubectl expose deployment myiway --name=myiway --type=LoadBalancer
5. Then create a Kubernetes service for mongodb: kubectl expose deployment mongodb --name=mongodb --type=LoadBalancer
6. Go to the Google Cloud Console, then Services, to see whether the myiway and mongodb Kubernetes services were created successfully:
Alternatively, list the running services by executing kubectl get svc -o wide.
7. Go to the Cloud Console to see the deployment logs:
8. By using the EXTERNAL-IP address and port 9999 for the service myiway, we can visit the iWay Service Manager Console:
9. Use the EXTERNAL-IP address on port 8081 with the iIT iWay Explorer to post a JSON document into the MongoDB database:
10. Use the API endpoint to browse the posted document:
iWay AWS Deployment overview
The goal of this tutorial is to show how to deploy an iWay application that was built using iIT in the previous tutorial to the AWS Cloud.
Amazon Web Services (AWS) is a secure cloud services platform, offering compute power, database storage, content delivery and other functionality to help businesses scale and grow.
In short, this application consists of two Docker containers:
iWay Application exposing an API (defined via a RAML file) which utilizes the iIT MongoDB Connector
MongoDB Server
We brought these containers up and managed communication between the two by using Kubernetes.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
In effect, by combining the technologies described above, we have built a Software as a Service (SaaS) application managed by the AWS Cloud!
Software as a service (SaaS) is a software distribution model in which a third-party provider hosts applications and makes them available to customers over the Internet.
Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to stand up or maintain your own Kubernetes control plane. Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Amazon EKS automatically detects and replaces unhealthy control plane instances, and it provides automated version upgrades and patching for them.
Amazon EKS is also integrated with many AWS services to provide scalability and security for your applications, including the following:
Elastic Load Balancing for load distribution
IAM for authentication
Amazon VPC for isolation
Amazon EKS runs up-to-date versions of the open-source Kubernetes software, so you can use all the existing plugins and tooling from the Kubernetes community.
Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether running in on-premises data centers or public clouds. This means that you can easily migrate any standard Kubernetes application to Amazon EKS without any code modification required.
How Does Amazon EKS Work?
Steps to deploy an application with Amazon EKS:
First, create an Amazon EKS cluster in the AWS Management Console or with the AWS CLI or one of the AWS SDKs.
Then, launch worker nodes that register with the Amazon EKS cluster. We provide you with an AWS CloudFormation template that automatically configures your nodes.
When your cluster is ready, you can configure your favorite Kubernetes tools (such as kubectl) to communicate with your cluster.
Deploy and manage applications on your Amazon EKS cluster the same way that you would with any other Kubernetes environment.
For more information about creating your required resources and your first Amazon EKS cluster, see Getting Started with Amazon EKS.
Amazon EKS Prerequisites
Before you can create an Amazon EKS cluster, you must create an IAM role that Kubernetes can assume to create AWS resources. For example, when a load balancer is created,
Kubernetes assumes the role to create an Elastic Load Balancing load balancer in your account. This only needs to be done one time and can be used for multiple EKS clusters.
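For reference, the service role can also be sketched with the AWS CLI. The role name eksServiceRole is illustrative, and the two managed policies are the ones the EKS documentation attaches to its service role:

```shell
# Trust policy that lets the EKS service assume the role.
cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# "eksServiceRole" is a hypothetical name; choose your own.
aws iam create-role --role-name eksServiceRole \
  --assume-role-policy-document file://eks-trust-policy.json
aws iam attach-role-policy --role-name eksServiceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name eksServiceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy
```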
You must also create a VPC and a security group for your cluster to use.
Although the VPC and security groups can be used for multiple EKS clusters, it is advisable to use a separate VPC for each EKS cluster to provide better network isolation.
On the Specify Details page, fill out the parameters accordingly, and then choose Next.
Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can call it eks-vpc.
VpcBlock: Choose a CIDR range for your VPC. You may leave the default value.
Subnet01Block: Choose a CIDR range for subnet 1. You may leave the default value.
Subnet02Block: Choose a CIDR range for subnet 2. You may leave the default value.
Subnet03Block: Choose a CIDR range for subnet 3. You may leave the default value.
(Optional) On the Options page, tag your stack resources. Choose Next.
On the Review page, choose Create.
When your stack is created, select it in the console and choose Outputs.
Record the SecurityGroups value for the security group that was created. You need this when you create your EKS cluster; this security group is applied to the cross-account elastic network interfaces that are created in your subnets that allow the Amazon EKS control plane to communicate with your worker nodes.
Record the VpcId for the subnets that were created. You need this when you launch your worker node group template.
Record the SubnetIds for the subnets that were created. You need this when you create your EKS cluster; these are the subnets that your worker nodes are launched into.
The cluster VPC after it was created successfully:
3. Install and Configure kubectl for Amazon EKS
Amazon EKS clusters require the kubectl and kubelet binaries and the Heptio Authenticator to allow IAM authentication for your Kubernetes cluster. Beginning with Kubernetes version 1.10,
you can configure the stock kubectl client to work with Amazon EKS by installing the Heptio Authenticator and modifying your kubectl configuration file to use it for authentication.
After you install kubectl, you can verify its version with the following command:
kubectl version --short --client
To install heptio-authenticator-aws for Amazon EKS:
Download and install the heptio-authenticator-aws binary. Amazon EKS vends heptio-authenticator-aws binaries that you can use, or you can use go get to fetch the binary from the Heptio Authenticator project on GitHub for other operating systems.
Download the Amazon EKS-vended heptio-authenticator-aws binary from Amazon S3:
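A sketch of the download and install steps follows. The placeholder <authenticator-url> stands for the Amazon EKS-vended S3 URL for your platform, as listed in the EKS Getting Started guide; it is not filled in here:

```shell
# Download the authenticator binary (<authenticator-url> is a placeholder
# for the EKS-vended S3 URL for your platform).
curl -o heptio-authenticator-aws <authenticator-url>

# Make it executable and put it on the PATH so kubectl can invoke it.
chmod +x ./heptio-authenticator-aws
mkdir -p "$HOME/bin"
cp ./heptio-authenticator-aws "$HOME/bin/"
export PATH="$HOME/bin:$PATH"
```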
Test that the heptio-authenticator-aws binary works:
heptio-authenticator-aws help
(Optional) Download and Install the Latest AWS CLI
Amazon EKS requires at least version 1.15.32 of the AWS CLI. To install or upgrade the AWS CLI, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.
Note:
Your system's Python version must be Python 3, or Python 2.7.9 or greater. Otherwise, you receive hostname doesn't match errors with AWS CLI calls to Amazon EKS. For more information, see What are "hostname doesn't match" errors? in the Python Requests FAQ.
Create Your Amazon EKS Cluster
Now you can create your Amazon EKS cluster.
Important:
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator. Initially, only that IAM user can make calls to the Kubernetes API server using kubectl. Also, the Heptio Authenticator uses the AWS SDK for Go to authenticate against your Amazon EKS cluster. If you use the console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you are running kubectl commands on your cluster.
If you install and configure the AWS CLI, you can configure the IAM credentials for your user. These also work for the Heptio Authenticator. If the AWS CLI is configured properly for your user, then the Heptio Authenticator can find those credentials as well. For more information, see Configuring the AWS CLI in the AWS Command Line Interface User Guide.
Important:
You must use IAM user credentials for this step, not root credentials. If you create your Amazon EKS cluster using root credentials, you cannot authenticate to the cluster. For more information, see How Users Sign In to Your Account in the IAM User Guide.
1. To add a new user with the console
Set user details:
Set new user permissions:
Review new user properties and click button "Create user":
Subnets: The SubnetIds values (comma-separated) from the AWS CloudFormation output that you generated with Create your Amazon EKS Cluster VPC. By default, the available subnets in the above VPC are preselected.
Security Groups: The SecurityGroups value from the AWS CloudFormation output that you generated with Create your Amazon EKS Cluster VPC. This security group has ControlPlaneSecurityGroup in the drop-down name.
Amazon EKS Cluster configuration screen:
Select button "Create":
Important:
The worker node AWS CloudFormation template modifies the security group that you specify here, so we recommend that you use a dedicated security group for your cluster control plane. If you share it with other resources, you may block or disrupt connections to those resources.
Note:
You may receive an error that one of the Availability Zones in your request does not have sufficient capacity to create an Amazon EKS cluster. If this happens, the error output contains the Availability Zones that can support a new cluster. Retry creating your cluster with at least two subnets that are located in the supported Availability Zones for your account.
On the Clusters page, choose the name of your newly created cluster to view the cluster information.
The Status field shows CREATING until the cluster provisioning process completes. When your cluster provisioning is complete (usually less than 10 minutes), note the API server endpoint and Certificate authority values. These are used in your kubectl configuration.
Configure kubectl for Amazon EKS
In this section, you create a kubeconfig file for your cluster. The code block in the procedure below shows the kubeconfig elements to add to your configuration.
If you have an existing configuration and you are comfortable working with kubeconfig files, you can merge these elements into your existing setup.
Be sure to replace the <endpoint-url> value with the full endpoint URL (for example, https://API_SERVER_ENDPOINT.yl4.us-west-2.eks.amazonaws.com) that was created for your cluster, replace the <base64-encoded-ca-cert> with the certificateAuthority.data value you retrieved earlier, and replace the <cluster-name> with your cluster name.
Save the file to the default kubectl folder, with your cluster name in the file name. For example, if your cluster name is iway_cluster, save the file to ~/.kube/config-iway_cluster.
Add that file path to your KUBECONFIG environment variable so that kubectl knows where to look for your cluster configuration.
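Putting these pieces together, the kubeconfig described above looks roughly like the following sketch. The three <...> placeholders must be replaced with your cluster's values as described, and the exec section is what wires kubectl to the Heptio Authenticator:

```shell
# Write the kubeconfig sketched in the text. The <endpoint-url>,
# <base64-encoded-ca-cert>, and <cluster-name> placeholders must be
# replaced with your cluster's actual values before use.
mkdir -p "$HOME/.kube"
cat > "$HOME/.kube/config-iway_cluster" <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
EOF

# Tell kubectl where to find this configuration.
export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config-iway_cluster
```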
Amazon EKS uses the Heptio Authenticator with kubectl for cluster authentication, which uses the same default AWS credential provider chain as the AWS CLI and AWS SDKs.
If you have installed the AWS CLI on your system, then by default the Heptio Authenticator uses the same credentials that are returned by the following command:
aws sts get-caller-identity
Test your configuration:
kubectl get svc
Note:
If you receive the error "heptio-authenticator-aws": executable file not found in $PATH, then your kubectl is not configured for Amazon EKS.
Now that your VPC and Kubernetes control plane are created, you can launch and configure your worker nodes.
Important:
Amazon EKS worker nodes are standard Amazon EC2 instances, and you are billed for them based on normal Amazon EC2 On-Demand Instance prices. For more information, see Amazon EC2 Pricing.
ClusterControlPlaneSecurityGroup: Choose the SecurityGroups value from the AWS CloudFormation output that you generated with Create your Amazon EKS Cluster VPC.
NodeGroupName: Enter a name for your node group that is included in your Auto Scaling node group name.
NodeAutoScalingGroupMinSize: Enter the minimum number of nodes that your worker node Auto Scaling group can scale in to.
NodeAutoScalingGroupMaxSize: Enter the maximum number of nodes that your worker node Auto Scaling group can scale out to.
NodeInstanceType: Choose an instance type for your worker nodes.
NodeImageId: Enter the current Amazon EKS worker node AMI ID for your Region.
Region                              Amazon EKS-optimized AMI ID
US West (Oregon) (us-west-2)        ami-73a6e20b
US East (N. Virginia) (us-east-1)   ami-dea4d5a1
Note:
The Amazon EKS worker node AMI is based on Amazon Linux 2. You can track security or privacy events for Amazon Linux 2 at the Amazon Linux Security Center or subscribe to the associated RSS feed. Security and privacy events include an overview of the issue, what packages are affected, and how to update your instances to correct the issue.
KeyName: Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your worker nodes with SSH after they launch.
Open the file with your favorite text editor. Replace the <ARN of instance role (not instance profile)> snippet with the NodeInstanceRole value that you recorded in the previous procedure, and save the file.
Important:
Do not modify any other lines in this file.
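For orientation, the file in question is the aws-auth ConfigMap that lets worker nodes join the cluster. If you do not have the downloaded copy at hand, an equivalent file can be recreated as follows (format per the EKS Getting Started guide; the rolearn placeholder must still be replaced as described above):

```shell
# Recreate the aws-auth ConfigMap manifest. Only the rolearn line is
# meant to be edited; the username and groups values are literal.
cat > aws-auth-cm.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF
```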
Then execute the following command:
kubectl apply -f aws-auth-cm.yaml
Note:
If you receive the error "heptio-authenticator-aws": executable file not found in $PATH, then your kubectl is not configured for Amazon EKS. For more information, see Configure kubectl for Amazon EKS.
Watch the status of your nodes and wait for them to reach the Ready status.
Run the command:
kubectl get nodes --watch
and wait until the status becomes Ready.
Launch an iWay application
In this section, you will finally deploy the iWay API application to the AWS Cloud.
We start the process by creating an AWS storage class, because Amazon EKS clusters are not created with any storage classes.
You must define storage classes for your cluster to use, and you should define a default storage class for your persistent volume claims.
For more information, see Storage Classes in the Kubernetes documentation.
To create an AWS storage class for your Amazon EKS cluster, create a storage class manifest file.
The steps to deploy the iWay API application to the AWS Cloud are:
Create Storage by executing the following command:
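A plausible sketch of this step, assuming the standard gp2 EBS storage class that the Kubernetes AWS documentation describes (the manifest file name is arbitrary):

```shell
# Create a gp2 StorageClass backed by Amazon EBS and mark it as the
# cluster default so persistent volume claims can bind to it.
cat > gp2-storage-class.yaml <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
EOF
# Apply with: kubectl apply -f gp2-storage-class.yaml
```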