AWS ECR: How to push or pull a Docker image

Hello everyone!
In this post we will see how to push a Docker image to your AWS ECR repository and how to pull an image from it.
Prerequisites:
  • Skip this step if you already have Docker on your machine. I am using “Docker for Windows” to run containers on my Windows 10 laptop. If you have Windows 7, download Docker Toolbox for Windows, which runs Docker inside VirtualBox.
  • Install the AWS CLI.
  • Create an AWS IAM user from the AWS console with permission to push, pull, and delete images. You can refer to the sample policy below.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ecr:*",
            "Resource": "*"
        }
    ]
}
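The policy above grants full ECR access on every repository, which is fine for a quick test. For production you may want to scope it down; below is a rough sketch of a tighter policy (the repository ARN is just a placeholder for your own):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEcrAuth",
            "Effect": "Allow",
            "Action": "ecr:GetAuthorizationToken",
            "Resource": "*"
        },
        {
            "Sid": "AllowPushPull",
            "Effect": "Allow",
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:PutImage",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload"
            ],
            "Resource": "arn:aws:ecr:ap-southeast-2:123456789123:repository/cloudvedas"
        }
    ]
}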
Once you are done with the prerequisites, let's move forward.
1) Open PowerShell on Windows or a terminal on Linux. Below I'll be running the commands in Windows PowerShell, but the AWS CLI commands on Linux are the same.
In PowerShell, check that you have Docker running. It should give you an output like the one below.
PS C:\CloudVedas> docker ps -a

CONTAINER ID   IMAGE         COMMAND    CREATED       STATUS                   PORTS   NAMES
55f016be65aa   hello-world   "/hello"   2 hours ago   Exited (0) 2 hours ago           gifted_hamilton

PS C:\CloudVedas>
2) Configure the AWS CLI by entering the access key and secret key of the IAM user.
PS C:\CloudVedas> aws configure
AWS Access Key ID [****************A37B]:
AWS Secret Access Key [****************W3w3]:
Default region name [ap-southeast-2]:
Default output format [None]:
PS C:\CloudVedas>
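If this machine already has AWS credentials configured for something else, you can keep the ECR user separate under a named profile and pass it to every command (the profile name here is just an example):
PS C:\CloudVedas> aws configure --profile ecr-user
PS C:\CloudVedas> aws ecr describe-repositories --profile ecr-user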
3) Check that your IAM user is able to describe ECR repositories.
PS C:\CloudVedas> aws ecr describe-repositories
{
    "repositories": []
}
PS C:\CloudVedas>
4) Let's create an ECR repository now. You can skip this step if you already have a repo.
PS C:\CloudVedas> aws ecr create-repository --repository-name cloudvedas
{
    "repository": {
        "repositoryArn": "arn:aws:ecr:ap-southeast-2:123456789123:repository/cloudvedas",
        "registryId": "123456789123",
        "repositoryName": "cloudvedas",
        "repositoryUri": "123456789123.dkr.ecr.ap-southeast-2.amazonaws.com/cloudvedas",
        "createdAt": 1564224171.0
    }
}
PS C:\CloudVedas>
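Optionally, newer versions of the AWS CLI also let you turn on image scanning at creation time, so every pushed image is checked for known vulnerabilities; a hedged example:
PS C:\CloudVedas> aws ecr create-repository --repository-name cloudvedas --image-scanning-configuration scanOnPush=true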
5) Next we will authenticate the Docker client to the Amazon ECR registry to which we intend to push our image. The command below will print a long Docker login token.
PS C:\CloudVedas> aws ecr get-login-password --region ap-southeast-2

6) Pass the long token you receive from the above command as the password in the docker login command below.
docker login -u AWS -p <token> https://123456789123.dkr.ecr.ap-southeast-2.amazonaws.com
Login Succeeded
You will see the "Login Succeeded" message once you are logged in successfully. Continue to step 7 if you want to push an image. Skip to step 10 if you want to pull an image from ECR.
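Tip: instead of copying the token by hand, you can pipe it straight into docker login so it never shows up in your shell history. This works in both PowerShell and Linux shells with a recent AWS CLI:
PS C:\CloudVedas> aws ecr get-login-password --region ap-southeast-2 | docker login --username AWS --password-stdin 123456789123.dkr.ecr.ap-southeast-2.amazonaws.com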
Push Image
7) Tag your image with the Amazon ECR registry, repository, and (optionally) an image tag name to use.
PS C:\CloudVedas> docker images
REPOSITORY    TAG      IMAGE ID       CREATED        SIZE
hello-world   latest   fce289e99eb9   6 months ago   1.84kB
PS C:\CloudVedas>

PS C:\CloudVedas> docker tag fce289e99eb9 123456789123.dkr.ecr.ap-southeast-2.amazonaws.com/cloudvedas


PS C:\CloudVedas> docker images
REPOSITORY                                                     TAG      IMAGE ID       CREATED        SIZE
123456789123.dkr.ecr.ap-southeast-2.amazonaws.com/cloudvedas   latest   fce289e99eb9   6 months ago   1.84kB
hello-world                                                    latest   fce289e99eb9   6 months ago   1.84kB
PS C:\CloudVedas>
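If you omit the tag, Docker applies latest, as you can see above. To push a versioned image instead, add an explicit tag (the tag name v1 here is just an example):
PS C:\CloudVedas> docker tag fce289e99eb9 123456789123.dkr.ecr.ap-southeast-2.amazonaws.com/cloudvedas:v1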
8) Next let's push the image.
PS C:\CloudVedas> docker push 123456789123.dkr.ecr.ap-southeast-2.amazonaws.com/cloudvedas
The push refers to repository [123456789123.dkr.ecr.ap-southeast-2.amazonaws.com/cloudvedas]
af0b15c8625b: Pushed
latest: digest: sha256:92c7f9c92844bb49837dur49vnbvm7c2a7949e40f8ea90c8b3bc396879d95e899a size: 524
PS C:\CloudVedas>
9) We have just pushed the image. Let's check our image in ECR.
PS C:\CloudVedas> aws ecr describe-images --repository-name cloudvedas
{
    "imageDetails": [
        {
            "registryId": "123456789123",
            "repositoryName": "cloudvedas",
            "imageDigest": "sha256:92c7f9c92844bb49837dur49vnbvm7c2a7949e40f8ea90c8b3bc396879d95e899a",
            "imageTags": [
                "latest"
            ],
            "imageSizeInBytes": 2487,
            "imagePushedAt": 1564224404.0
        }
    ]
}
PS C:\CloudVedas>
Great! We can see our image in ECR and it has the tag "latest".
Pull Image
10) If you want to pull the image, follow the same instructions up to step 6; after that, just execute the command below.
PS C:\CloudVedas> docker pull 123456789123.dkr.ecr.ap-southeast-2.amazonaws.com/cloudvedas:latest
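Once the pull completes you can verify the image locally and run it; since this example is the hello-world image, it should just print its greeting and exit:
PS C:\CloudVedas> docker images 123456789123.dkr.ecr.ap-southeast-2.amazonaws.com/cloudvedas
PS C:\CloudVedas> docker run 123456789123.dkr.ecr.ap-southeast-2.amazonaws.com/cloudvedas:latest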

Solved: How to lock Terraform provider version

While working with Terraform you may have noticed that every time you run terraform init it downloads the latest version of the provider plugin that is available.
While this is good if you are testing, as you get the latest features, it can create trouble in production if a buggy provider version gets deployed. So it is always recommended that you lock down the provider version. In this post we will show you how to do that.
It's really very simple to lock down the provider version. You just have to add a snippet like the one below to your main.tf file.


provider "aws" {
  version="<=2.6.0"
  region  = "us-east-1"
}

In the above example we have specified that version 2.6.0 or older can be used.
The version argument value may either be a single explicit version or a version constraint string. Constraint strings use the following syntax to specify a range of versions that are acceptable:
>= 2.4.0: version 2.4.0 or newer
<= 2.4.0: version 2.4.0 or older
~> 2.4.0: any non-beta version >= 2.4.0 and < 2.5.0, e.g. 2.4.X
~> 2.4: any non-beta version >= 2.4.0 and < 3.0.0, e.g. 2.X.Y
>= 2.0.0, <= 3.0.0: any version between 2.0.0 and 3.0.0 inclusive
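Note: the version argument inside a provider block works in the Terraform releases this post was written against, but Terraform 0.13 and later deprecate it in favour of a required_providers block. A minimal sketch of the newer style with the same constraint:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "<= 2.6.0"
    }
  }
}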
Give it a try and let us know if you have any query or suggestion.

Solved: How to configure Terraform backend on AWS S3

Terraform is a very useful infrastructure-as-code (IaC) tool. As you may already know, it creates a .tfstate file to save the state of your infra. If you are doing testing, you can save the .tfstate locally on your laptop. But if you are working in a prod environment with a team, then it's best to save the .tfstate remotely so that it's secure and can be used by other team members.
Here we will show you two ways of configuring AWS S3 as backend to save the .tfstate file.
  1. The first way of configuring the .tfstate backend is to define it in the main.tf file. You just have to add a snippet like the one below to your main.tf file.
terraform {
  backend "s3" {
    bucket = "cloudvedas-test123"
    key    = "cloudvedas-test-s3.tfstate"
    region = "us-east-1"
  }
}

Here we have defined the following things:
bucket = the S3 bucket in which the .tfstate should be saved
key = the name of the .tfstate file
region = the region in which the S3 backend bucket exists
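If several people run Terraform against the same state, you may also want state locking and encryption at rest. The S3 backend supports both; a sketch, assuming you have already created a DynamoDB table named terraform-lock with a LockID partition key:
terraform {
  backend "s3" {
    bucket         = "cloudvedas-test123"
    key            = "cloudvedas-test-s3.tfstate"
    region         = "us-east-1"
    encrypt        = true              # encrypt the state file at rest in S3
    dynamodb_table = "terraform-lock"  # take a lock in DynamoDB on every run
  }
}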
2. Another way of specifying the S3 backend is to define it when you initialize Terraform using the init command. This can be useful when you want to invoke Terraform from a Jenkinsfile.
  • Here is an example that you can execute in the Windows command prompt. This will do the same thing as we did in the first example.
terraform init -no-color -reconfigure -force-copy ^
  -backend-config="region=us-east-1" ^
  -backend-config="bucket=cloudvedas-test123" ^
  -backend-config="key=cloudvedas-test1-win-s3.tfstate"
  • If you want to execute from a Linux shell, use the syntax below.
terraform init -no-color -reconfigure -force-copy \
  -backend-config="region=us-east-1" \
  -backend-config="bucket=cloudvedas-test123" \
  -backend-config="key=cloudvedas-test-s3.tfstate"
Give it a try and let us know in comments section if you have any query or suggestion.

Solved RDS: Access denied; you need the SUPER privilege for this operation

Access denied; you need the SUPER privilege for this operation

You may get this error while trying to set values for RDS Aurora MySQL from the command line. It can be a setting for long-running queries, slow query logging, or many others.

Even if you are executing these changes as the master user, you cannot set them from the command line, because RDS does not grant the SUPER privilege.

For RDS Aurora you will have to make these changes through Parameter groups of DB and Cluster.

  • To make the change, log in to your AWS RDS console.
  • On the left side panel, click on Parameter Groups and select the group associated with your RDS cluster and node.
  • Make the changes in the parameter groups.
  • Once you have saved the changes in the parameter group, they will start applying to your RDS cluster.

Some parameter changes will require a reboot of your cluster while others can be applied without a reboot. You will see a pending-reboot status on your cluster if it needs a reboot for the parameter change to take effect. For more details about parameter groups, refer to this AWS doc.
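If you prefer the command line over the console, the same parameter change can be made with the AWS CLI; a hedged example from a Linux shell, where the parameter group name is just a placeholder and long_query_time is one of the Aurora MySQL parameters you might tune:
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-cluster-params \
    --parameters "ParameterName=long_query_time,ParameterValue=5,ApplyMethod=immediate"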

Solved: How to redirect non-www blogger custom domain to www with SSL

If you have added a custom domain to your Blogger blog, you may have noticed that when you access your domain like www.example.com it works fine, but if you try to access the naked domain like example.com (notice: no www) it fails or gives an SSL error. In this post we will show you how to fix this.