
How to become a Google Cloud Certified Professional Cloud Architect


In this post I'll share tips on how to prepare for the Google Cloud Certified Professional Cloud Architect exam. This will also be helpful for people who currently work on other cloud platforms like AWS or Azure and are looking to broaden their skills to Google Cloud Platform (GCP).

As many of you who follow this blog know, I already work on AWS and Azure. A couple of years back we got heavily into Kubernetes. Being a curious techie, when I started digging further into Kubernetes I found that it was initially designed by Google. One thing led to another and I ended up exploring more about Google Cloud. In parallel, we started getting traction on a multi-cloud strategy, and GCP is considered a good option with many features that are helpful for both startups and big enterprises.

So, I decided to build more knowledge and expertise on Google Cloud. When I compared AWS with GCP I felt that most of the technologies are similar, but obviously with different naming conventions and some technical differences in the settings. Thus, if you have worked on AWS it won't be very difficult to grasp Google Cloud as well. And even if you don't have a background in other clouds, learning Google Cloud is not very difficult; you just need to spend extra time on the basics.

If you are from an AWS background you can get a good comparison of AWS and GCP services here, and that's what I did as a starting point.

Next, I went through the Udemy course Google Cloud (GCP) 2020: Zero to Cloud Architect. This course covers the GCP services in detail, starting from the basics, so it is useful even for someone who is starting from scratch.

Since our company is a partner of Google, I supplemented my preparation by enrolling in the online training labs of Qwiklabs. These labs are really helpful in getting good hands-on practice with the various GCP services.

Folks from an AWS background will find that the GCP services are not very different, but there are still some subtle differences. For example, in AWS a subnet is restricted to a single Availability Zone, while in GCP a subnet can span multiple zones within a region. You have to keep these subtle differences in mind when designing in GCP.
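For instance, creating a subnet in GCP is a regional operation. A minimal sketch with the gcloud CLI (the network name, range, and region below are made-up examples):

gcloud compute networks subnets create my-subnet \
    --network=my-vpc \
    --range=10.0.0.0/24 \
    --region=us-central1

Unlike an AWS subnet, instances in any zone of us-central1 can be attached to this one subnet.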

If we talk specifically about the certification exam, it mainly focuses on the topics below:-

  • Managing and provisioning cloud infrastructure.
  • Designing and planning a cloud solution which is scalable, resilient and cost effective.
  • Strengthening security using IAM, firewall rules, etc.
  • Analyzing and Monitoring the application behavior using GCP Operations Suite.
  • Managing HA and DR.
The exam is in multiple-choice question format. You will also get 2-3 case studies, and you have to select the answer which is most suitable for the scenario described in the case study.

You can choose to appear for the exam at a test center or go for an online proctored exam from home. Considering the COVID-19 situation, I appeared for an online proctored exam. You just have to follow the guidelines mentioned in the link I have shared above, and with a good internet connection it is pretty easy to take the exam from home.

Overall I found the exam to be very engaging, covering a wide range of topics.

If you have any queries regarding the exam preparation or GCP in general, please post them in the comments section below.

How to transfer files to and from an EC2 instance



In our earlier post we showed how you can use FileZilla, a GUI-based solution, to transfer files to an EC2 instance. But in many companies the installation of third-party software like FileZilla is not allowed.

So, in this post we will show you how to transfer files to and from an EC2 Linux instance using our old trustworthy friend SFTP.

For those who don't know about SFTP, here is a gist of what it is.

SFTP is the SSH File Transfer Protocol (also known as the Secure File Transfer Protocol), so it works on the same port 22 as SSH. It is secure in comparison to FTP, which works on port 21 and is nowadays often blocked for security reasons. sftp is generally pre-installed on most Linux distributions, including Amazon Linux.

Also, compared with SCP, which supports only file transfers, SFTP allows you to perform a range of operations on remote files and resume file transfers.

If you want to know more about SFTP, please look at the SFTP wiki page.

Now let's see how we can use sftp to transfer files.

There are only two prerequisites for this, both of which are pretty much standard requirements for accessing your EC2 Linux instances.

1) The .pem SSH key which you configured when you built the remote server you want to connect to.

2) Port 22 should be open to access the remote server. (In case you want to know how to check if a port is open on a remote Linux/Unix server from the CLI without using telnet, check this post.)

Once you have checked that you fulfil the prerequisites, let's move to the next step.

Open your shell terminal; it can be Git Bash installed on your local Windows desktop or a Linux shell terminal.

Inside the terminal you need to execute the command below:

sftp -o IdentityFile=identity_file ec2-user@10.xxx.xxx.xxx

where identity_file is your .pem key.

Your actual command will look like:

sftp -o IdentityFile=cloudvedas.pem ec2-user@192.168.0.10

Let's check our directory on the remote server:

sftp> pwd

Remote working directory: /home/ec2-user

Let's go to /tmp

sftp> cd /tmp

Let's transfer a file from the local machine to the remote server:

sftp> put test123-file.sh

Now if you want to transfer a file from the remote server to the local machine:

sftp> get remote123-file.sh

Note: sftp commands are conventionally written in lowercase, so use put and get as shown above.

If you forget what the home directory on your local machine is, you can check it from the sftp prompt:

sftp> lpwd

Local working directory: /home/cloudvedas

If you want to change the directory on your local machine, do this:

sftp> lcd /tmp
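If you want to script these transfers instead of typing the commands interactively, sftp also supports a batch mode via the -b flag. A minimal sketch, reusing the key and host from the example above (the batch file name and the file being uploaded are made-up):

echo -e "cd /tmp\nput test123-file.sh\nbye" > batch.txt
sftp -b batch.txt -o IdentityFile=cloudvedas.pem ec2-user@192.168.0.10

This runs the commands in batch.txt one by one and then exits, which makes it handy inside shell scripts and cron jobs.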

Hope this simple tutorial is helpful for you. Do let us know in the comments section if you have any queries, or share what methods you use to transfer files to EC2 instances.

AWS Subnet Calculator


This is a simple calculator to let you know how many IPv4 addresses you will get when you create a subnet in AWS.

AWS allows subnet masks only between /16 and /28. Five IPs in each subnet are reserved for AWS internal usage.

For example, in the subnet 10.0.0.0/24 the subnet mask is 24, so enter 24 below to get the number of available IPs in this subnet.
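If you just want the arithmetic behind the calculator: usable IPs = 2^(32 - mask) - 5, because of the five reserved addresses. A quick shell check (the mask value 24 is just an example):

MASK=24
echo $(( 2 ** (32 - MASK) - 5 ))    # prints 251 for a /24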

[Interactive calculator widget: enter a subnet mask between 16 and 28 to see the available IPs.]
Disclaimer: Please note that this is not an official AWS calculator. Please visit the AWS VPC docs for more details.

AWS DynamoDB Cheat Sheet


DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent single-digit-millisecond latency at any scale. It is a fully managed database and supports both document and key-value data models. It is great for IoT, mobile/web, gaming, and many other apps.


Quick facts about DynamoDB
  • Stored on SSD storage
  • Spread across three geographically distinct data centers.
  • Eventually consistent reads:- Consistency across all copies is usually reached within a second. Repeating a read after a short time should return the updated data. (Best read performance)
  • Strongly consistent reads:- Returns a result that reflects all writes that received a successful response prior to the read.

Table
Items (like a row of data in a table)
Attributes (like a column of data in a table)


In the example below, everything between the braces {} is an item, and each name-value pair, such as "ID" : 1587 or "Name" : "Alan", is an attribute.

{
"ID" : 1587,
"Name" : "Alan"
"Phone": "555-5555"
}
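If you want to try this item out yourself, it can be inserted with the AWS CLI (a sketch; the table name Users is made-up and must already exist with ID as its partition key):

aws dynamodb put-item --table-name Users --item '{"ID":{"N":"1587"},"Name":{"S":"Alan"},"Phone":{"S":"555-5555"}}'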


Two types of primary keys are available:-
Single attribute (think unique ID):
Partition key (hash key), composed of one attribute.

Composite (think unique ID and date range):
Partition key and sort key (hash & range), composed of two attributes.


Partition key
  • DynamoDB uses the partition key's value as input to an internal hash function. The output from the hash function determines the partition (this is simply the physical location in which the data is stored).
  • No two items in a table can have the same partition key value.


Partition Key and Sort Key
  • DynamoDB uses the partition key's value as input to an internal hash function. The output from the hash function determines the partition (this is simply the physical location in which the data is stored).
  • Two items in a table can have the same partition key, but they must have a different sort key.
  • All items with the same partition key are stored together, in sorted order by sort key value.

Local secondary index
  • It has the same partition key but a different sort key.
  • Can only be created when creating a table. They cannot be removed or modified later.


Global secondary index:
  • It has a different partition key and a different sort key.
  • Can be created at table creation or added later.


DynamoDB streams
  • If a new item is added to the table, the stream captures an image of the entire item, including all of its attributes
  • If an item is updated, the stream captures the "before" and "after" images of any attributes that were modified in the item.
  • If an item is deleted from the table, the stream captures an image of the entire item before it was deleted.

Query:-
A Query operation finds items in a table using only primary key attribute values. You must provide a partition key attribute name and a distinct value to search for. You can optionally provide a sort key attribute name and value, and use a comparison operator to refine the search results.
By default, a query returns all of the data attributes for the items with the specified primary key(s); however, you can use the ProjectionExpression parameter so that the query returns only some of the attributes rather than all of them.

Query results are always sorted by the sort key. If the data type of the sort key is a number, the results are returned in numeric order; otherwise, the results are returned in order of ASCII character code values. By default the sort order is ascending. To reverse the order, set the ScanIndexForward parameter to false.

Queries are eventually consistent by default, but this can be changed to strongly consistent.
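As a sketch, here is how a query combining ProjectionExpression, a reversed sort order, and a strongly consistent read looks from the AWS CLI (the table and attribute names are made-up):

aws dynamodb query --table-name Users \
    --key-condition-expression "ID = :id" \
    --expression-attribute-values '{":id":{"N":"1587"}}' \
    --projection-expression "Phone" \
    --no-scan-index-forward --consistent-read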

SCAN:-
A Scan operation examines every item in the table. By default, a scan returns all of the data attributes of every item; however, you can use the ProjectionExpression parameter so that the scan returns only some of the attributes rather than all of them.

Hope you find this quick glance at DynamoDB useful. Do let us know in the comments if you have any query or suggestion.

Today we also want to share some good news: our blog is now included by Feedspot in its list of Top 10 AWS blogs. We would like to thank you all for your help and support in achieving this.

AWS certification exam cheat sheets

AWS certification exams grill you on a vast range of topics and a lot of services. In this post we have consolidated the major services and topics of the different exams so that you can access them from a single location.

The links below will give you better info on which topics and services are important for each exam and how to best prepare for them.

AWS ECR : How to push or pull docker image

Hello everyone!
In this post we will see how to push a docker image to your AWS ECR repository and how to pull an image from it.
Pre-requisites:-
  • Skip this step if you already have docker on your machine. I am using the "Docker for Windows" software to run Docker containers on my Windows 10 laptop.
If you have Windows 7, download Docker Toolbox for Windows with VirtualBox.
  • Get AWS CLI.
  • Create an AWS IAM user from the AWS console which has permission to put and delete images. You can refer to the sample policy below.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ecr:*",
            "Resource": "*"
        }
    ]
}
Once you are done with pre-requisites let's move forward.
1) Open PowerShell in Windows or a terminal in Linux. Below I'll be running the commands in Windows PowerShell, but the AWS CLI commands on Linux are similar.
In PowerShell, check that you have docker running. It should give you an output like below.
PS C:\CloudVedas> docker ps -a

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

55f016be65aa hello-world "/hello" 2 hours ago Exited (0) 2 hours ago gifted_hamilton

PS C:\CloudVedas>
2) Configure AWS CLI by entering the access key and secret key of the IAM user.
PS C:\CloudVedas> aws configure
AWS Access Key ID [****************A37B]:
AWS Secret Access Key [****************W3w3]:
Default region name [ap-southeast-2]:
Default output format [None]:
PS C:\CloudVedas>
3) Check if your IAM user is able to describe ECR.
PS C:\CloudVedas> aws ecr describe-repositories
{
    "repositories": []
}
PS C:\CloudVedas>
4) Let's create an ECR repository now. You can skip this step if you already have a repo.
PS C:\CloudVedas> aws ecr create-repository --repository-name cloudvedas
{
"repository": {
"repositoryArn": "arn:aws:ecr:ap-southeast-2:123456789123:repository/cloudvedas",
"registryId": "123456789123",
"repositoryName": "cloudvedas",
"repositoryUri": "123456789123.dkr.ecr.ap-southeast-2.amazonaws.com/cloudvedas",
"createdAt": 1564224171.0
}
}
PS C:\CloudVedas>
5) Next we will authenticate the Docker client to the Amazon ECR registry to which we intend to push our image. The command below will return a long docker login token.
PS C:\CloudVedas> aws ecr get-login-password --region ap-southeast-2

6) Pass the long token that you received from the above command to docker login, as below.
docker login -u AWS -p <token> https://123456789123.dkr.ecr.ap-southeast-2.amazonaws.com
Login Succeeded
You will see "Login Succeeded" message once you are logged in successfully. Continue to Step 7 if you want to push image. Skip to step 10 if you want to pull image from ECR.
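Tip: instead of copy-pasting the token, you can pipe the two commands together, which is the approach AWS now recommends:

aws ecr get-login-password --region ap-southeast-2 | docker login --username AWS --password-stdin 123456789123.dkr.ecr.ap-southeast-2.amazonaws.com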
Push Image
7) Tag your image with the Amazon ECR registry, repository, and optional image tag name combination to use.
PS C:\CloudVedas> docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest fce289e99eb9 6 months ago 1.84kB
PS C:\CloudVedas>

PS C:\CloudVedas> docker tag fce289e99eb9 123456789123.dkr.ecr.ap-southeast-2.amazonaws.com/cloudvedas


PS C:\CloudVedas> docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
123456789123.dkr.ecr.ap-southeast-2.amazonaws.com/cloudvedas latest fce289e99eb9 6 months ago 1.84kB
hello-world latest fce289e99eb9 6 months ago 1.84kB
PS C:\CloudVedas>
8) Next let's push the image.
PS C:\CloudVedas> docker push 123456789123.dkr.ecr.ap-southeast-2.amazonaws.com/cloudvedas
The push refers to repository [123456789123.dkr.ecr.ap-southeast-2.amazonaws.com/cloudvedas]
af0b15c8625b: Pushed
latest: digest: sha256:92c7f9c92844bb49837dur49vnbvm7c2a7949e40f8ea90c8b3bc396879d95e899a size: 524
PS C:\CloudVedas>
9) We have now pushed the image. Let's check our image in ECR.
PS C:\CloudVedas> aws ecr describe-images --repository-name cloudvedas
{
"imageDetails": [
{
"registryId": "123456789123",
"repositoryName": "cloudvedas",
"imageDigest": "sha256:92c7f9c92844bb49837dur49vnbvm7c2a7949e40f8ea90c8b3bc396879d95e899a",
"imageTags": [
"latest"
],
"imageSizeInBytes": 2487,
"imagePushedAt": 1564224404.0
}
]
}
PS C:\CloudVedas>
Great! We can see our image in ECR and it has the tag "latest".
Pull Image
10) If you want to pull the image, follow the same instructions up to step 6; after that, just execute the command below.
PS C:\CloudVedas> docker pull 123456789123.dkr.ecr.ap-southeast-2.amazonaws.com/cloudvedas:latest

Solved: How to configure Terraform backend on AWS S3

Terraform is a very useful IaC (Infrastructure as Code) tool. As you may already know, it creates a .tfstate file to save the state of your infrastructure. If you are just testing, you can save the .tfstate locally on your laptop. But if you are working in a prod environment with a team, it's best to save the .tfstate remotely so that it's secure and can be used by other team members.
Here we will show you two ways of configuring AWS S3 as backend to save the .tfstate file.
  1. The first way of configuring the .tfstate backend is to define it in the main.tf file. You just have to add a snippet like the one below to your main.tf file.
terraform {
  backend "s3" {
    bucket = "cloudvedas-test123"
    key    = "cloudvedas-test-s3.tfstate"
    region = "us-east-1"
  }
}

Here we have defined the following things:
bucket = the S3 bucket in which the .tfstate should be saved
key = the name of the .tfstate file
region = the region in which the S3 backend bucket exists
2. Another way of specifying the S3 backend is to define it when you initialize terraform using the init command. This can be useful when you want to invoke terraform from a Jenkinsfile.
  • Here is an example that you can execute in the Windows command prompt. This will do the same thing as the first example.
terraform init -no-color -reconfigure -force-copy -backend-config="region=us-east-1" -backend-config="bucket=cloudvedas-test123" -backend-config="key=cloudvedas-test1-win-s3.tfstate"
  • If you want to execute from a linux shell use below syntax.
 terraform init -no-color -reconfigure -force-copy \
-backend-config="region=us-east-1" \
-backend-config="bucket=cloudvedas-test123" \
-backend-config="key=cloudvedas-test-s3.tfstate"
Give it a try and let us know in comments section if you have any query or suggestion.

Solved RDS: Access denied; you need the SUPER privilege for this operation

Access denied; you need the SUPER privilege for this operation

You may get this error while trying to set values for RDS Aurora MySQL from the command line. It could be a setting for long-running queries, slow queries, or many others.

If you are sure you are executing these changes as the master user, then know that these values simply can't be set from the command line.

For RDS Aurora you have to make these changes through the parameter groups of the DB instance and the cluster.

  •  To make the change, login to your AWS RDS console.
  • On the left side panel click on Parameter Groups and select the group associated with your RDS Cluster and node.
  • Make changes in the parameter groups.
  • Once you have saved the changes in the parameter group, they will start applying to your RDS cluster.

Some parameter changes require a reboot of your cluster while others can be applied without one. You will see the status pending-reboot on your cluster if it needs a reboot for the parameter change. For more details about parameter groups refer to this AWS doc.
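If you prefer the CLI over the console, the same kind of edit can be applied to a cluster parameter group like this (a sketch; the group name is made-up, and long_query_time is just one example of a dynamic parameter):

aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-cluster-params \
    --parameters "ParameterName=long_query_time,ParameterValue=5,ApplyMethod=immediate"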

Solved: AWS Inspector issue : Service 'Amazon Web Services Agent' (AWSAgent) could not be stopped. Verify that you have sufficient privileges to stop system services.

AWS Inspector issue
“Service ‘Amazon Web Services Agent’ (AWSAgent) could not be stopped. Verify that you have sufficient privileges to stop system services.”

Solution:-

First check that you are running the AWS Inspector installation as administrator. If you are still getting the error, it can be because the most recent Amazon Windows AMIs released on February 23rd include a driver that uses the same service name as the Amazon Inspector Agent. This causes Inspector Agent installations to fail with the above error message. Impacted versions of the Windows AMIs include Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, and Windows Server 2016.

Try fixing it with the remove script provided in the forum link below, after taking all the required backups.

https://forums.aws.amazon.com/ann.jspa?annID=5505

But if you are still getting the error "'EC2 Windows Utility Device' not found" when you execute the remove script, follow the steps below.

  • Take snapshot image of the instance.
  • After taking the snapshot image, login to the instance and execute the commands below in PowerShell as an administrator to fix it. This will need a reboot of the instance.
$agentService = Get-WmiObject -class win32_systemdriver | Where-Object {$_.Name -eq 'awsagent'}
$agentService.Delete()
  • After running these commands, reboot the instance and try the installation of AWS Inspector again.

Solved: How to reset RDS master user password

In this post we will show you how to reset the master user password of an RDS DB instance. The new password takes effect immediately or during the next maintenance window, depending on the option you choose.
  • Go to your AWS RDS console.
  • Select the DB instance whose master password you want to change. Do note that the DB instance should be in the available state, not backing up or any other state.
  • Once you have selected the instance, select Modify from the Instance Actions dropdown.
  • It will take you to a new page: Modify DB Instance: cloudvedasdb
  • Scroll down to the Settings section and look for New master password.
  • Enter the new password in the text box and click Continue at the bottom.
  • Select whether you want to change the password immediately or during the next maintenance window, as this will need a reboot.
  • Finally click on Modify DB instance.
  • The new password will be effective depending on what option you chose above.
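The same reset can also be done from the AWS CLI (a sketch; the password below is a placeholder):

aws rds modify-db-instance --db-instance-identifier cloudvedasdb \
    --master-user-password 'MyNewPass123!' --apply-immediately

Drop --apply-immediately to have the change applied during the next maintenance window instead.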

How to prepare for AWS Certified SysOps Administrator – Associate

In one of our earlier posts we detailed which AWS certification is suitable for you.
If you are from a System Admin or DevOps background, the AWS Certified SysOps Administrator – Associate certification will be a good plus for you.
If you are an absolute beginner on AWS you can start with the free labs from AWS. To practice further you can create a free AWS account. These two actions will get you started on AWS.
Beware that if you go beyond the free tier limits you will be billed. A best practice is to create a billing alert. This alert can save you from an unexpected bill shock.
If you want to learn further you can opt for either a classroom course or an online course. Classroom courses are generally expensive and range between USD 600 and 2000, while online courses can cost anything between USD 10 and USD 300 depending on which course you choose.
Our personal opinion is that you should go for online courses as they are cheaper, and if you follow their labs honestly (yeah, not just watching the instructor do it but actually doing the labs yourself 😉 ) they can be as good as classroom training.
In online courses we found the offerings from two providers, acloudguru and Linux Academy, to be good. Earlier the Linux Academy course was only available through their site on a monthly plan, but the same course is now available on Udemy too. The acloudguru course has been available on Udemy for a long time. Though both courses can be purchased from their respective sites under a monthly subscription, if you buy them from Udemy you pay only once and get lifetime access. And many times Udemy provides heavy discounts on courses, which can get you a good bargain.
The acloudguru course is delivered by Ryan, who is enthusiastic and teaches really well. Sometimes he can get a bit click-happy and zip past a few topics quickly, but you always have the option to rewind and go through the topic again 🙂 .
The labs in the course are very useful and help you get a deep understanding of the topics. The course also has quizzes to check your knowledge.
Overall we found the acloudguru course to be beneficial in getting you exam ready.
However, do note that the course alone is not enough to clear the exam. You should go through the whitepapers and FAQs of at least the services below.
  • EC2
  • S3
  • VPC
  • Route 53
  • CloudWatch
  • OpsWorks
  • Billing
Exam pattern
The exam has multiple-choice and multiple-answer questions and is 80 minutes long. You can download the exam blueprint here.
Practice Exam Questions
To get a good evaluation of your preparation you can go through another course on Udemy with sample exam questions.
Exam Cost
The exam will cost you USD 150. AWS also gives you the option to book a practice exam for USD 20 before you go for the actual exam.
How to book exam
To book an exam you will have to create an account in the AWS Training and Certification Portal.
Passing score
AWS doesn't reveal the minimum passing score and it keeps changing. But we have observed that generally people who score above 70% pass the exam. You will see the score on your screen immediately once you finish the test, and you will also get a report by email within 1 hour.
Exam Tips
  • Get good sleep and keep calm during the exam.
  • You won't get more than 3 minutes per question.
  • You may find some very long questions in the exam. The best strategy to tackle them is to read the answer options first and then check for the relevant info in the question.
  • Since it's an AWS exam, look for AWS-related options in the answers. Chances are high that a non-AWS option in an answer will be wrong.
  • AWS exams generally don't focus on memorizing datasheets, so you won't get a question like "How much RAM does a C3.xlarge offer?".
That’s all folks! Best of luck for the exam!
Do let us know in comments section if you have any query.

How to prepare for AWS Certified Developer - Associate certification exam

In one of our earlier posts we detailed which AWS certification is suitable for you.
If you have decided to go for the AWS Certified Developer – Associate certification, this post is for you.
If you are an absolute beginner you can start with the free labs from AWS. To practice further you can create a free AWS account. These two actions will get you started on AWS.
Beware that if you go beyond the free tier limits you will be billed. A best practice is to create a billing alert. This alert can save you from unexpected bill shocks.
To further hone your skills you can either go for AWS classroom training or online courses. Classroom training will cost you from USD 800 to USD 2000, while online courses can cost from USD 10 to USD 300, depending on which course you choose.
Our personal opinion is that you should go for online courses as they are cheaper, and if you follow their labs honestly they can be as good as classroom training.
Among online courses we recommend two providers: acloudguru and Linux Academy. Both have monthly plans to buy the courses, but these can become a bit costly if you can't complete the course in one month.
We have observed that acloudguru also provides the same course on Udemy, where you get lifetime access with just a one-time payment. Also, Udemy provides heavy discounts on courses during sales, which can get you a good bargain.
Thus, we recommend the acloudguru AWS Certified Developer – Associate course on Udemy. Also, once you buy this course on Udemy you will get access to the same course on the acloudguru website as well.
The acloudguru course instructor Ryan is an industry expert and delivers the course really well. The course covers almost all the main topics which are asked in the exam. (Though we have observed that as of Apr-18 it was missing a session on AWS Lambda, for which questions have started appearing in the exam. Hope they update the course soon.) Currently you can learn about Lambda from Ryan's other course on AWS Lambda.
It's a good idea to follow all the labs with the instructor, and once you gain confidence, redo the labs independently. Don't forget to complete the practice quizzes to check your knowledge.
This course will give you a good base for the exam. But the course itself is not enough to clear it. You should also go through the whitepapers and FAQs of at least the services below.
  • EC2
  • S3
  • SQS
  • RDS
  • DynamoDB
  • Lambda
Exam pattern
The exam has multiple-choice and multiple-answer questions and is 80 minutes long.
Practice Exam Questions
To get a good evaluation of your preparation you can go through another acloudguru course on Udemy with sample exam questions. Many test takers have said that they got similar questions in the exam.
Exam Cost
The exam will cost you USD 150. AWS also gives you the option to book a practice exam for USD 20 before you go for the actual exam.
How to book exam
To book an exam you will have to create an account in the AWS Training and Certification Portal.
Passing score
AWS doesn't reveal the minimum passing score and it keeps changing. But we have observed that generally people who score above 80% pass the exam. You will see the score on your screen immediately once you finish the test, and you will also get a report by email within 1 hour.
Exam Tips
  • Get good sleep and keep calm during the exam.
  • You won't get more than 3 minutes per question.
  • You may find some very long questions in the exam. The best strategy to tackle them is to read the answer options first and then check for the relevant info in the question.
  • Since it's an AWS exam, look for AWS-related options in the answers. Chances are high that a non-AWS option in an answer will be wrong.
  • AWS exams generally don't focus on memorizing datasheets, so you won't get a question like "How much RAM does a C3.xlarge offer?".
That’s all folks! Best of luck for the exam!
Do let us know in comments section if you have any query.

Solved: Restore root disk of EC2 without changing IP or Hostname

If the root volume of your EC2 instance gets corrupted, the instance won't come up. And since you don't have access to the console of an EC2 instance, there isn't much you can do directly.
In this post we will discuss options to restore an EC2 instance from a snapshot backup. The prerequisite for the guide below is that you already have a snapshot of the volume which you want to restore.
Option 1 – Different IP and Hostname 
The easiest option to restore an EC2 instance is to launch a new instance from the available snapshot. Refer to this AWS doc to launch an instance from a backup.
But an instance launched this way will have both a hostname and a private IP different from the original instance. If this is a problem for you, go to option 2.
Option 2 – Same IP different Hostname
So, as per option 1 you have created an instance from an AMI, but now you need the IP to be the same as the old one. To get around this you can detach the network interface of the old instance and attach it to the new instance.
This option will give you the same private IP as the old one, but you will still have a new hostname.
If you need both hostname and IP to be same go to option 3.
Option 3 – Same IP and Hostname
In this option we will discuss how you can restore an EC2 instance and keep both the hostname and IP the same. This can be very important if your EC2 instance is in an Active Directory (AD) domain, as a change in the IP-to-hostname mapping will cause a conflict in the domain. Because of this conflict the domain server can block login to the EC2 instance.
Let’s see how we can get around this.
Prerequisites:-
  • You already have snapshot of the root and other volumes of instance.
  • Keep a screenshot of your instance description from your AWS console; this can be used to refer to the instance configuration later.
Plan
For the eager ones, the plan is to follow the steps below.
  • Stop the instance
  • Detach the current root EBS
  • Create a new volume from the old snapshot
  • Attach the new volume to instance
  • Boot the instance
Now let’s see the steps in detail.
Stop Instance
Stop the instance if it’s up.
Detach the current root EBS 
  • Select the root volume (/dev/sda1) mentioned as “Root device” in the instance description and click on the EBS ID of the volume.
  • You will now be in the “Volumes” window.
  • From the “Actions” drop down select “Detach Volume”.
Create a new volume from the old snapshot
  • Create a volume from the snapshot you have taken earlier.
  • Select the snapshot of the volume and from “Actions” drop down select “Create Volume”.
  • In the “Create Volume” window ensure that you select the Availability Zone to be same as the AZ in which your instance is located.
  • Leave the other options as default. 
  • Finally hit “Create Volume”.
Attach the new volume
  • Once your volume is created select it.
  • From the “Actions” drop down select “Attach”.
  • While attaching the volume, in the device field mention the volume name as /dev/sda1 since we are attaching it as the root volume. This is the same as your old root volume name.
  • Hit “Attach” .
Boot instance
  • Once the volume is attached, start the instance normally. You should now see that the instance has the data from your old backup. Also, its hostname and private IP will remain the same as before.
Note:- If you want the public IP to stay fixed, you need to assign an Elastic IP to the instance. The public IP assigned by default by AWS changes every time the instance is stopped and started.
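If you prefer the CLI over the console, the same restore flow might look like this (a sketch; all IDs are placeholders, the AZ must match your instance, and you should wait for each step to complete before running the next):

aws ec2 stop-instances --instance-ids i-xxxxxxxx
aws ec2 detach-volume --volume-id vol-xxxxxxxx
aws ec2 create-volume --snapshot-id snap-xxxxxxxx --availability-zone ap-southeast-2a
aws ec2 attach-volume --volume-id vol-yyyyyyyy --instance-id i-xxxxxxxx --device /dev/sda1
aws ec2 start-instances --instance-ids i-xxxxxxxx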
That’s all folks!

Solved: How to add an EBS volume to a Windows EC2 instance and configure it

This post is divided into two sections. In the first section you will see how to create an EBS volume, and in the next section we will show you how to configure the EBS volume in a Windows instance.
Create EBS Volume
  • Go to AWS Console > EC2
  • In the left panel select “Volumes”.
  • Once in the “Volumes” screen, select “Create Volume”.
  • In the “Create Volume” window, specify the size of the disk and the Availability Zone in which you want the disk to be created.
Tip:- The disk should be in the same AZ as your EC2 instance.
  • Now in the left pane again select “Volumes” to see all your volumes.
  • Select the volume you just created, then in the upper menu click on “Actions” and select “Attach volume”.
  • In “Attach volume” window select the instance to which you want to attach the volume and click on “Attach”.
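If you prefer the CLI, the console steps above roughly map to two commands (a sketch; the size, AZ, and IDs are examples):

aws ec2 create-volume --size 20 --availability-zone ap-southeast-2a --volume-type gp2
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device xvdf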

Configure EBS volume in Windows
  • Login to your Windows EC2 instance using RDP. Once inside the instance, from the Start menu go to “Computer Management” as mentioned below.
Start > Control Panel > System and Security > Administrative Tools > Computer Management
  • Click on Disk Management on the left pane.
  • Here we can see the new disk, but it's still offline. Right-click on the new disk and select “Online”.
  • Once the disk is online, right-click on the disk again and select “Initialize Disk”.

  • If the disk is below 2 TB, select MBR and click OK.
  • Finally, right-click on the pane where the size is shown (refer to the image below) and select “New Simple Volume”.
  • Leave the other options as default and click “Next” till you come to “Assign Drive Letter or Path”. Here we have assigned the drive letter E.
  • Leave everything else as default in next windows and click on finish.
  • Now if we go to “This PC / My Computer” we should see the new disk.


So here we have attached an EBS volume to the Windows EC2 instance and configured it. Do let us know in the comments section if you have any query.





AWS CLI Elastic Beanstalk cheat sheet

In our last post we saw how to use the EB CLI for managing Elastic Beanstalk through the command line. But you can also manage Elastic Beanstalk using the traditional AWS CLI. In this post you will find an AWS CLI cheat sheet for the same.
If you are new to Elastic Beanstalk, it’s recommended that you go through this free AWS Elastic Beanstalk crash course.
Below are the major commands used frequently while managing an Elastic Beanstalk environment.
To check the availability of a CNAME
aws elasticbeanstalk check-dns-availability --cname-prefix my-cname
To create a new application
aws elasticbeanstalk create-application --application-name CldVdsApp --description "my application"
Compose Environments
 aws elasticbeanstalk compose-environments --application-name media-library --group-name dev --version-labels front-v1 worker-v1
To create a new environment for an application
The following command creates a new environment for version “v1” of a Java application named “CldVdsApp”:
aws elasticbeanstalk create-environment --application-name CldVdsApp --environment-name my-env --cname-prefix CldVdsApp --version-label v1 --solution-stack-name "64bit Amazon Linux 2015.03 v2.0.0 running Tomcat 8 Java 8"
To specify a JSON file to define environment configuration options
The following create-environment command specifies that a JSON file with the name myoptions.json should be used to override values obtained from the solution stack or the configuration template:
aws elasticbeanstalk create-environment --environment-name sample-env --application-name CldVdsApp --option-settings file://myoptions.json
To create a storage location
The following command creates a storage location in Amazon S3:
aws elasticbeanstalk create-storage-location
To abort a deployment
aws elasticbeanstalk abort-environment-update --environment-name my-env
To delete an application
The following command deletes an application named CldVdsApp:
aws elasticbeanstalk delete-application --application-name CldVdsApp
You can refer the complete set of AWS CLI for elastic beanstalk on this link.
Note:- All the above commands are taken from different AWS CLI reference guides and put in one place over here. Please run the commands after due diligence, as we won't be responsible for any mistakes in executing the commands and their consequences. If you have any concern or query, feel free to contact us or comment below.

Solved: Exceeded EC2 Instance Quota

You may face an error like “Exceeded EC2 Instance Quota” while trying to spin up new instances, either standalone or in a cluster.
This error occurs because you have hit the limit on the number of instances allowed in your AWS account.
This limit is region and instance size specific. To get rid of this error you will have to request Amazon to increase the EC2 instance limit.
Requesting a limit increase is simple. Please follow below steps to know more.
  • Login to your AWS console and select EC2 from the Services drop down.
  • Once in the EC2 dashboard, look for “Limits” in the left menu and click on it. (refer image below)


  • Expand “Instance Limits” to see the limits in your account for each instance type. In our case we have a limit of 5 on “r4.2xlarge” instances, so we click on “Request limit increase”.
  • You will get the option to create a case. Fill in the details as in the image below, with a reason for requesting the limit increase.

  • Once you submit the case, if your reason is good enough for Amazon they will increase the limit within anywhere from a couple of minutes to a few hours.
You won't be charged for increasing the limit, only for the instances that you spin up.
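These days you can also raise the same request from the CLI through the Service Quotas API (a sketch; first look up the exact quota code for your instance family, as the L-xxxxxxxx below is a placeholder):

aws service-quotas list-service-quotas --service-code ec2
aws service-quotas request-service-quota-increase --service-code ec2 --quota-code L-xxxxxxxx --desired-value 10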

AWS EB CLI Cheat Sheet - Elastic Beanstalk

In this post we will discuss the Elastic Beanstalk CLI, called the EB CLI.
If you are new to Elastic Beanstalk, it’s recommended that you go through this free AWS Elastic Beanstalk crash course.
If you want to manage Elastic Beanstalk using traditional AWS CLI follow this post .
Installation
Follow these guides to install the EB CLI on Windows, Linux and macOS.
Get help
eb -h
Initialize eb cli
eb init
It will ask a few questions:-
  • Default region
  • Access key details
  • Select existing application or create new.
  • Application name
  • Platform e.g. PHP, Python etc.
  • Setup ssh
  • Select keypair or create one.
Create environment
eb create
Check status
eb status
Check health information
eb health
Check events
eb events
Pull logs
eb logs
Open environment website in browser
eb open
Deploy Update
eb deploy
Check configuration options
eb config
Terminate environment
eb terminate
List  environments
eb list
Change current environment
eb use cldvds-env
Below are some other useful commands
eb abort - Cancel a deployment
eb appversion - Manage EB application versions
eb clone - Create a clone of an environment
eb console - Open the environment in the AWS console
eb labs - Extra commands for experiments
eb local - Run commands on your local machine
eb platform - Manage the platform
eb printenv - Show environment variables
eb restore - Rebuild a terminated environment
eb scale - Scale the number of instances
eb setenv - Set environment variables
eb ssh - Connect to an instance via SSH
eb swap - Swap the CNAMEs of two environments
eb tags - Modify environment tags
eb upgrade - Update the platform to the most recent version
The list above was created by referring to the AWS doc for the Elastic Beanstalk CLI. If you have any query or concern, please feel free to contact us.

AWS EC2 CLI - Cheat sheet

Below is the cheat sheet of AWS CLI commands for EC2.
If you are new to EC2, it’s recommended that you go through this free AWS EC2 crash course.
If you want to know how to install the AWS CLI, please follow the steps in this post.
Get help
aws ec2 help
Create instance EC2 Classic
aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t1.micro --key-name MyKeyPair --security-groups my-sg
Create instance in VPC
aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t2.micro --key-name MyKeyPair --security-group-ids \
sg-xxxxxxxx --subnet-id subnet-xxxxxxxx
Start instance
aws ec2 start-instances --instance-ids <instance-id>
Stop instance
aws ec2 stop-instances --instance-ids <instance-id>
Reboot instance 
aws ec2 reboot-instances --instance-ids <instance-id>
Terminate instance
aws ec2 terminate-instances --instance-ids <instance-id>
View console output
aws ec2 get-console-output --instance-id <instance-id>
Describe Instance
aws ec2 describe-instances --instance-ids <instance-id>
Create an AMI
aws ec2 create-image --instance-id <instance-id> --name myAMI --description 'CloudVedas Test AMI'
List images(AMIs)
aws ec2 describe-images --image-ids <ami-id>
List  security groups
aws ec2 describe-security-groups
Create security group
aws ec2 create-security-group --vpc-id vpc-1234abcd --group-name db-access --description "cloudvedas db access"
Get details of security group
aws ec2 describe-security-groups --group-names <group-name>
Delete Security group
aws ec2 delete-security-group --group-id sg-1234abcd
List key pairs
aws ec2 describe-key-pairs
Create keypair
aws ec2 create-key-pair --key-name <value>
Import keypair
aws ec2 import-key-pair --key-name keyname_test --public-key-material file:///cldvds/sagu/id_rsa.pub
Delete keypair
aws ec2 delete-key-pair --key-name <value>
Check the networking attribute
aws ec2 describe-instance-attribute --instance-id <instance-id> --attribute sriovNetSupport
Add tags to instance
aws ec2 create-tags --resources i-xxxxxxxx --tags Key=Name,Value=MyInstance
Add EBS volume at launch (as part of run-instances)
aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t2.micro --key-name MyKeyPair --block-device-mappings "[{\"DeviceName\":\"/dev/sdf\",\"Ebs\":{\"VolumeSize\":20,\"DeleteOnTermination\":false}}]"
List EBS volumes
aws ec2 describe-volumes
Check snapshot associated with EBS volume
aws ec2 describe-volumes --volume-ids vol-01c6l3de3v21bd46s
Note:- All the above commands are taken from different AWS EC2 CLI reference guides and put in one place over here. Please run the commands after due diligence, as we won't be responsible for any mistakes in executing the commands and their consequences. If you have any concern or query, feel free to contact us.

AWS S3 CLI - Cheat sheet

Below is the cheat sheet of AWS CLI commands for S3.
If you are new to S3, it's recommended that you go through this free AWS S3 crash course.
If you want to know how to install the AWS CLI, follow the steps in this post.
Get help
aws s3 help
or
aws s3api help
Create bucket
aws s3 mb s3://bucket-name 
Removing bucket
aws s3 rb s3://bucket-name
To remove a non-empty bucket (be extremely careful while running this). This will remove all contents of the bucket, including subfolders and the data in them.
aws s3 rb s3://bucket-name --force
Copy object
aws s3 cp mypic.png s3://mybucket/
Copy buckets
aws s3 cp myfolder s3://mybucket/myfolder --recursive
(Note: --recursive will copy everything recursively, including the subfolders)
Sync buckets
 aws s3 sync <source> <target> [--options]

List buckets
aws s3 ls
List specific bucket
aws s3 ls s3://mybucket
Bucket location
aws s3api get-bucket-location --bucket <bucket-name>
Logging status
aws s3api get-bucket-logging --bucket <bucket-name>
ACL (Access Control List)
The following example copies an object into a bucket. It grants read permissions on the object to everyone and full permissions (read, readacl, and writeacl) to the account associated with user@example.com.
aws s3 cp file.txt s3://my-bucket/ --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=emailaddress=user@example.com

How to prepare for AWS Certified Solutions Architect - Professional

Hello!
In this post we will discuss how to prepare for AWS Certified Solutions Architect – Professional certification.
Pre-requisite
The only prerequisite to appear for the professional exam is that you clear the AWS Certified Solutions Architect – Associate certification. You can check here how to prepare for the associate certification.
Once you have cleared the associate exam you can start preparing for the AWS Certified Solutions Architect – Professional certification.
Many of the topics in the professional exam are the same as those in the associate exam, so you may actually see a few questions repeated from the associate exam itself.
But it still has a lot of new topics. Below are the topics on which you can expect the most questions in the exam.
Exam Topics
  • VPC
  • EC2
  • S3
  • Amazon Elasticache
  • Redshift
  • Cloudfront
  • Elastic Transcoder
  • AWS Data Pipeline
  • RDS
  • Cloudsearch
  • EMR
  • DynamoDB
  • SQS
  • CloudTrail
  • KMS
  • Kinesis
  • Opsworks
  • Auto Scaling
  • ELB
  • VPC peering
  • Direct Connect
  • Cross Account Access
Preparation
As you can see, the list above contains a wide range of topics, and reading about all of them can be overwhelming. You can also see the official exam blueprint here. It is good if you have at least 1 year of experience with these AWS technologies.
You can start your preparation by attending AWS classroom training or you can go for online courses. I personally liked the contents of two online courses: one from acloudguru and the other from Linux Academy.
Both the courses are good, but I chose the acloudguru course as it gives you lifetime access, while for Linux Academy you have to pay a monthly fee. I knew that with a full-time job it might take me more than a month to prepare for this exam, so I opted for the acloudguru course.
The acloudguru course is comprehensive and the trainer Ryan covers the topics in a decent way. The course alone is not enough to clear the exam, but it will give you a good understanding of the exam topics.
Apart from the course you should also refer to the AWS FAQs, which are very helpful for the scenario-based questions.
Also, Linux Academy gives you a 7-day free trial, so you can use that period to do their practice exams, which also have a lot of good questions.
Exam pattern
  • Multiple choice and multiple answer questions.
  • You will be given scenarios and the questions will be based on them. Only a few will be direct questions.
  • The exam will be 170 minutes long.
  • You can expect approximately 80 questions.
Cost
The exam costs USD 300.
You also have the option to appear for a practice exam from Amazon, which costs USD 40. Many people have told me that the actual exam is easier than the practice exam, so if you score well in the practice exam you can be confident about your preparation.
Sample Questions
You can refer to some sample exam questions here and on Udemy.
Hope the above info is helpful to you. Do let me know if you have any query.
Best of Luck for the exam!