Showing posts with label AWS. Show all posts

Solved: Conflicting conditional operation error while creating S3 bucket

A conflicting conditional operation is currently in progress against this resource. Please try again.
You can get the above error while creating an S3 bucket.
This error generally appears when you have deleted an S3 bucket in one region and are immediately trying to create a bucket with the same name in another region.
The problem is that bucket-name information does not sync instantly across S3 regions. It may take anything from 2 to 30 minutes for all regions to register that the bucket with that name has been deleted.
You may get the same error even when you try creating a bucket with the same name from a different AWS account, for the same syncing reason.
So if you want the bucket name to stay the same, try creating it again after a coffee break.
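If you'd rather not watch the console, one option is to retry the creation in a loop. Below is a minimal sketch of a generic retry helper; the bucket name, region, and retry counts in the commented example are placeholders, and the `aws s3api create-bucket` call assumes a configured AWS CLI.

```shell
# Generic retry helper: retries a command until it succeeds,
# up to max_attempts times, sleeping delay seconds between tries.
retry_until_success() {
  max_attempts=$1; shift
  delay=$1; shift
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep "$delay"
  done
}

# Hypothetical usage (commented out; needs AWS credentials):
# retry_until_success 6 300 aws s3api create-bucket \
#   --bucket my-migrated-bucket --region eu-west-1 \
#   --create-bucket-configuration LocationConstraint=eu-west-1
```

With six attempts 5 minutes apart, the helper comfortably covers the 2-30 minute propagation window mentioned above.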

AWS Crash Course - Elastic Beanstalk

Welcome back to AWS Crash Course.
In the last section we discussed EBS. In this section we will discuss AWS Elastic Beanstalk.
AWS Elastic Beanstalk makes it even easier for developers to quickly deploy and manage applications in the AWS cloud. Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.
  • You can push updates from Git, and only the modified files are transmitted to Elastic Beanstalk.
  • Elastic Beanstalk integrates with IAM, EC2, VPC and RDS.
  • You have full access to the resources running under Elastic Beanstalk.
  • Code is stored in S3.
  • Multiple environments are allowed to support version control, so you can roll back changes.
  • Amazon Linux AMI and Windows Server 2008 R2 are supported.
What are the supported languages and development stacks?
  • Apache Tomcat for Java applications
  • Apache HTTP Server for PHP applications
  • Apache HTTP Server for Python applications
  • Nginx or Apache HTTP Server for Node.js applications
  • Passenger or Puma for Ruby applications
  • Microsoft IIS 7.5, 8.0, and 8.5 for .NET applications
  • Java SE
  • Docker
  • Go
How can you update Elastic Beanstalk?
  • You can upload new application code to update your Elastic Beanstalk application.
  • It supports multiple running environments, e.g. test, pre-prod and prod.
  • Each environment is independently configured and runs on its own separate AWS resources.
  • Elastic Beanstalk also stores and tracks application versions over time, so an existing environment can easily be rolled back to a prior version.
  • A new environment can be launched using an older version to try to reproduce a customer problem.
Fault Tolerance
  • Always design, implement, and deploy for automated recovery from failure
  • Use multiple Availability Zones for your Amazon EC2 instances and for Amazon RDS
  • Use ELB for balancing the load.
  • Configure your Auto Scaling settings to maintain your fleet of Amazon EC2 instances.
  • If you are using Amazon RDS, then set the retention period for backups, so that Amazon RDS can perform automated backups.
What about security?
  • Security on AWS is a shared responsibility
  • You are responsible for the security of data coming in and out of your Elastic Beanstalk environment.
  • Configure SSL to protect information from your clients.
  • Configure security groups and NACL with least privilege.
This short section was meant to give you an understanding of Elastic Beanstalk. If you want to try some hands-on practice, follow this AWS tutorial.
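As a rough sketch, a typical deployment workflow with the EB CLI looks like the following; the application and environment names are placeholders, and the commands are shown commented out since they require the EB CLI to be installed and AWS credentials to be configured.

```shell
APP_ENV="my-app-test"   # placeholder environment name

# eb init my-app --region us-east-1   # one-time setup in the project directory
# eb create "$APP_ENV"                # provision a new environment
# eb deploy                           # deploy the current application version
# eb status                           # check environment health
echo "$APP_ENV"
```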

Solved: Share AMI with other AWS accounts

At times you may have to safely share an AMI (Amazon Machine Image) with another AWS account. You can do this without making the AMI public.
Here we will show you how to do it easily.
  1. Log in to your EC2 console via this link: EC2 Console
  2. In the left navigation panel choose AMIs in Image section.
  3. Select the AMI you want to share.
  4. Click on Actions > Modify Image Permissions
  5. In the Modify Image Permissions box, do the following:

    a) Ensure the image is set to “Private”.
    b) Enter the AWS account number with which you want to share the AMI, and click Add Permissions.
    c) Check the box “Add ‘create volume’ permissions to the following associated snapshots when creating permissions”.
  6. Finally, click Save.
Once the above steps are done in the source account, go to the AMIs section of the EC2 console in the destination account and select Private images in the filter. You should now be able to see the image you shared earlier.

If you want to do the same with the AWS CLI, use these two commands:

Here we are granting launch permission on a specific AMI (ami-a2n4b68kl) to a specific AWS account number (123456789).
aws ec2 modify-image-attribute --image-id ami-a2n4b68kl --launch-permission "{\"Add\":[{\"UserId\":\"123456789\"}]}"
The command below grants create-volume permission for the snapshot (snap-try657hvndh909), as we did in step 5(c):
aws ec2 modify-snapshot-attribute --snapshot-id snap-try657hvndh909 \
--attribute createVolumePermission --operation-type add --user-ids 123456789
After doing this the AMI should be visible in AMIs of the new account.

AWS Crash Course - EBS

In the last section we discussed VPC. In this section we will discuss EBS.
What is EBS?
  • EBS is Elastic Block Storage.
  • EBS volume is a durable, block-level storage. It’s similar to the hard disk that you have in your laptop or desktop.
  • EBS volumes can be used as primary storage for data that requires frequent updates.
  • EBS volume in an Availability Zone is automatically replicated within that zone to prevent data loss due to failure.
  • You can create encrypted EBS volumes with the Amazon EBS encryption feature or use 3rd party software for encryption.
  • To improve performance, you can use RAID groups, e.g. RAID 0, RAID 1 or RAID 10.
What are the different types of EBS volumes?
  • General Purpose SSD (gp2) – Provides up to 10,000 IOPS (input/output operations per second) and can range in size from 1 GB to 16 TB. This is used for normal workloads and should be enough for your Dev or UAT setups.
  • Provisioned IOPS SSD (io1) – Provides up to 20,000 IOPS and can range in size from 4 GB to 16 TB. These are generally used for large SQL/NoSQL databases.
  • Throughput Optimized HDD (st1) – Provides up to 500 IOPS and can range in size from 500 GB to 16 TB. These are mostly useful for big data/data warehouses.
  • Cold HDD (sc1) – These are the cheapest kind of disks. They provide up to 250 IOPS and can range in size from 500 GB to 16 TB. These are commonly used for data archiving, as they provide low IOPS but are cheap for storing infrequently accessed data.
You can take snapshots of EBS volumes.
So what is a snapshot?
  • You can back up the data on your EBS volumes to Amazon S3 by taking point-in-time snapshots
  • Snapshots are incremental backups – Saves time and storage costs
  • Snapshots support encryption
  • Snapshots exclude data that has been cached by any applications or the OS
  • You can share your unencrypted snapshots with others
  • You can use a copy of a snapshot for Migrations, DR, Data retention etc.
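As a sketch, taking a snapshot and copying it to another region (e.g. for DR) looks like the following with the AWS CLI; the volume and snapshot IDs are placeholders, and the commands are shown commented out since they need a configured AWS CLI.

```shell
DESCRIPTION="nightly backup"   # placeholder snapshot description

# Take a point-in-time snapshot of an EBS volume:
# aws ec2 create-snapshot --volume-id vol-0abc123 --description "$DESCRIPTION"

# Copy a snapshot to another region (the --region flag is the destination):
# aws ec2 copy-snapshot --region eu-west-1 --source-region us-east-1 \
#   --source-snapshot-id snap-0abc123
echo "$DESCRIPTION"
```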
You can try some hands-on practice with EBS using this exercise.

AWS Crash Course – Route 53

Route 53 is a DNS service that routes user requests.
  • Amazon Route 53 (Route 53) is a scalable and highly available Domain Name System (DNS) service.
  • The name Route 53 is a reference to port 53, which is used for DNS.
  • Route 53's DNS service allows administrators to direct traffic by simply updating DNS records in the hosted zone.
  • The TTL (Time to Live) of resource records can be shortened, which allows record changes to propagate faster to clients.
  • One of the key features of Route 53 is programmatic access to the service that allows customers to modify DNS records via web service calls.
The three main functions of Route 53 are:-
Domain registration:- It allows you to register domain names from your AWS accounts.
DNS service:- This service maps a name to your website's IP, e.g. 54.168.4.10 to example.com. It also supports many other record formats, which we will discuss below.
Health Monitoring:- It can monitor the health of your servers/VMs/instances and can route traffic as per the routing policy. It can also work as a load balancer for region level traffic management.
Route 53 supports different routing policies and you can use the one which is most suitable for your applications.
Routing Policies :-
  • Simple:- Route 53 responds to DNS queries with only the values in the record set.
  • Weighted:- This policy lets you split traffic based on the weights assigned, e.g. 10% of traffic goes to us-east-1 and 90% goes to eu-west-1.
  • Latency:- Routes your traffic based on the lowest network latency for your end user (i.e. the region that gives the end user the fastest response time).
  • Failover:- This policy is used when you create an active/passive setup. Route 53 monitors the health of your primary site using a health check.
  • Geolocation:- This routing lets you choose where your traffic will go based on the geographic location of end users. So a user requesting from France will be served from the server nearest to France.
Route 53 supports many DNS record formats:-
  • A Format :- Returns a 32-bit IPv4 address, most commonly used to map hostnames to an IP address of the host.
  • AAAA Format:-  Returns a 128-bit IPv6 address, most commonly used to map hostnames to an IP address of the host.
  • CNAME Format:- Alias of one name to another, e.g. you can point www.example.com at example.com.
  • MX Format :- Maps a domain name to a list of message transfer agents for that domain
  • NS Format:- Delegates a DNS zone to use the given authoritative name servers.
  • PTR Format :- Pointer to a canonical name. Unlike a CNAME, DNS processing stops and just the name is returned. The most common use is for implementing reverse DNS lookups, but other uses include such things as DNS-SD.
  • SOA Format:- Specifies authoritative information about a DNS zone, including the primary name server, the email of the domain administrator, the domain serial number, and several timers relating to refreshing the zone.
  • SRV Format:- Generalized service location record, used for newer protocols instead of creating protocol-specific records such as MX.
  • TXT Format :- Originally for arbitrary human-readable text in a DNS record.
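To illustrate the programmatic access mentioned earlier, here is a hedged sketch of upserting an A record via the CLI; the hosted zone ID is a placeholder, the record values reuse the example from this post, and the actual `aws route53` call is commented out since it needs credentials.

```shell
ZONE_ID="Z1EXAMPLE"   # placeholder hosted zone ID

# Change batch that creates or updates (UPSERT) an A record
# with a short TTL so changes propagate quickly:
CHANGE_BATCH=$(cat <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com.",
      "Type": "A",
      "TTL": 60,
      "ResourceRecords": [{"Value": "54.168.4.10"}]
    }
  }]
}
EOF
)

# aws route53 change-resource-record-sets \
#   --hosted-zone-id "$ZONE_ID" --change-batch "$CHANGE_BATCH"
echo "$CHANGE_BATCH"
```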
Tip:- For the exam, understanding the A and CNAME formats should be enough.
If you want to try some hands-on practice, try this exercise.
This series is created to give you a quick snapshot of AWS technologies.  You can check about other AWS services in this series over here .

Solved: How to download a complete S3 bucket or a S3 folder?

If you ever want to download an entire S3 bucket or folder, you can do it with the AWS CLI.
You can download the AWS CLI from this page. AWS CLI Download
Download the AWS CLI for your system: Windows, Linux or Mac.
In our case we use Windows 64-bit. Once you download the .exe, simply double-click on it to install the AWS CLI.
Once the AWS CLI is installed, go to the Windows command prompt (CMD) and enter the command:
aws configure
It will ask for the details of the AWS user with which you want to log in, and the region name. Check this post to learn how to create an IAM user.
You can get the AWS IAM user access details from IAM console .
Get Region name here .
Fill in the user details as below:

AWS Access Key ID: <Key ID of the user>
AWS Secret Access Key: <Secret key of the user>
Default region name: <us-east-1>
Default output format: None

Once you have downloaded and configured the AWS CLI on your machine, execute the “sync” command as shown below.
aws s3 sync s3://mybucket/dir  /local/folder
You can also do the same with the “cp” command. It needs the --recursive option to recursively copy the contents of subdirectories as well.
aws s3 cp s3://myBucket/dir /local/folder --recursive
Refer to this S3 cheat sheet to learn more tricks.

Solved: How to remove the AWS EC2 instance name from the URL?

After installing your website using an AWS image, you may have noticed that the URL still has a reference to the EC2 hostname.
It can be in post URLs like
http://ec2-65-231-192-68.compute-1.amazonaws.com/your-post-name
How to get rid of that?
The easiest way to do it is from your WordPress dashboard.
In the dashboard, go to Settings > General.
On the General Settings page you will see two parameters, WordPress Address (URL) and Site Address (URL). Change them to your website name, e.g. http://yourwebsite.com .
Finally, save it. Your post URLs should now look like
http://yourwebsite.com/your-post-name
In the Bitnami WordPress image you will find that WordPress Address (URL) and Site Address (URL) are greyed out, and it won't allow you to modify them. In that case you will have to modify the wp-config.php file.
For the Bitnami image the file location is /opt/bitnami/apps/wordpress/htdocs/wp-config.php. Keep a copy of the old file, then modify the current file:
cp -p /opt/bitnami/apps/wordpress/htdocs/wp-config.php /opt/bitnami/apps/wordpress/htdocs/wp-config.php.old
sudo vi /opt/bitnami/apps/wordpress/htdocs/wp-config.php
Modify the two lines which have entries for WP_HOME and WP_SITEURL. They should now look like:
define('WP_HOME','http://yourwebsite.com');
define('WP_SITEURL','http://yourwebsite.com');
If your website has an SSL certificate and you want all your posts and pages to use https, then the above entries should look like:

define('WP_HOME','https://yourwebsite.com');
define('WP_SITEURL','https://yourwebsite.com');
Finally save the file.
When you refresh the page it should now show your desired URL.
If the URL is still not showing correctly and you are sure that you have modified the file correctly, then restart Apache.
sudo /opt/bitnami/ctlscript.sh restart apache
This should get you done!
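The manual edit above can also be scripted. The sketch below works on a sample copy of the two lines; the domain is a placeholder, and on a real server you would run the `sed` against the actual wp-config.php after taking a backup.

```shell
# Create a sample copy of the two lines we want to change
# (on a Bitnami server the real file is
# /opt/bitnami/apps/wordpress/htdocs/wp-config.php).
cat > wp-config-sample.php <<'EOF'
define('WP_HOME','http://ec2-65-231-192-68.compute-1.amazonaws.com');
define('WP_SITEURL','http://ec2-65-231-192-68.compute-1.amazonaws.com');
EOF

NEW_URL="http://yourwebsite.com"   # placeholder domain

# Rewrite both defines in place, keeping a .old backup of the file
sed -i.old \
  -e "s|define('WP_HOME'.*|define('WP_HOME','${NEW_URL}');|" \
  -e "s|define('WP_SITEURL'.*|define('WP_SITEURL','${NEW_URL}');|" \
  wp-config-sample.php

cat wp-config-sample.php
```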

AWS Crash Course – VPC

In the last section we discussed EC2. In case you missed it, you can check it here: AWS Crash Course – EC2.
In this section we will discuss VPC.
What is VPC?
  • VPC is Virtual Private Cloud.
  • VPC is like your own private cloud inside the AWS public cloud.
  • You can decide the network range.
  • Your VPC is not shared with others.
  • You can launch instances in VPC and restrict inbound/outbound access to them.
  • You can leverage multiple layers of security, including security groups and network access control lists.
  • You can create a Virtual Private Network (VPN) connection between your corporate datacenter and your VPC.
Components of Amazon VPC:-
  • Subnet: A segment of a VPC's IP address range. This is basically the network range of IPs which you assign to your resources, e.g. EC2 instances.
  • Internet Gateway: If you want your instance in VPC to be able to access Public Internet, you create an internet gateway.
  • NAT Gateway: You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances.
  • Hardware VPN Connection: A hardware-based VPN connection between your Amazon VPC and your datacenter, home network, or co-location facility.
  • Virtual Private Gateway: A virtual private gateway is the VPN concentrator on the Amazon side of the VPN connection.
  • Customer Gateway: A customer gateway is a physical device or software application on your side of the VPN connection.
  • Router: The router acts as a mediator for the subnets in your VPC. It interconnects subnets and directs traffic between internet gateways, virtual private gateways, NAT gateways, and subnets.
  • Peering Connection: A peering connection enables you to route traffic via private IP addresses between two peered VPCs. It is used for VPC peering, by which you can establish a connection between two different VPCs.
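To tie the components together, here is a hedged sketch of creating a minimal VPC with one subnet and an internet gateway via the CLI; the CIDR blocks and resource IDs are placeholders, and the commands are commented out since they need a configured AWS CLI.

```shell
VPC_CIDR="10.0.0.0/16"   # placeholder network range you decide for the VPC

# aws ec2 create-vpc --cidr-block "$VPC_CIDR"
# aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.1.0/24
# aws ec2 create-internet-gateway
# aws ec2 attach-internet-gateway --vpc-id vpc-0abc123 \
#   --internet-gateway-id igw-0abc123
echo "$VPC_CIDR"
```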
VPC has a few more components, but to avoid confusion we will discuss them in later sections.
This series is created to give you a quick snapshot of AWS technologies.  You can check about other AWS services in this series over here .

AWS Crash Course - EC2

We are starting this series on AWS to give you a decent understanding of the different AWS services. These will be short articles which you can go through in 15-20 minutes a day.
You can check the complete series here: AWS Crash Course.
Introduction:-
  • AWS compute is part of its IaaS offerings.
  • With compute, you can deploy virtual servers to run your applications.
  • You don't have to wait days or weeks to get your desired server capacity.
  • You can manage the OS or let AWS manage it for you.
  • It can be used for building mobile apps or running massive clusters.
  • You can even deploy applications serverless.
  • It provides high fault tolerance.
  • It offers easy scalability and load balancing.
  • You are billed as per your usage.
What is EC2?
  • EC2 is Elastic Compute Cloud.
  • It's a VM (virtual machine) in the cloud.
  • You can commission one or thousands of instances simultaneously, and pay only for what you use, making web-scale cloud computing easy.
  • Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.
  • Amazon EC2 provides developers the tools to build failure resilient applications and isolate them from common failure scenarios.
What are EC2 pricing models?
  • On-Demand – Pay by the hour with no long-term commitment.
  • Reserved – Yearly reservations; up to 75% cheaper compared to On-Demand.
  • Dedicated – A dedicated physical server is provided to you. Up to 70% cheaper compared to On-Demand.
  • Spot – Bid on spare Amazon computing capacity. Up to 90% cheaper compared to On-Demand.
EC2 Instance Types:-
  • General Purpose (T2, M4 and M3) – Small and mid-size databases
  • Compute Optimized (C4 and C3) – High performance front-end fleets, web-servers, batch processing etc.
  • Memory Optimized (X1, R4 and R3) – High performance databases, data mining & analysis, in-memory databases
  • Accelerated Computing Instances(P2, G2 and F1) – Used for graphic workloads
  • Storage Optimized I3 – High I/O Instances – NoSQL databases like Cassandra, MongoDB
  • D2 – Dense-storage Instances – Massively Parallel Processing (MPP) data warehousing, MapReduce and Hadoop distributed computing
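As a sketch, launching a single On-Demand instance of one of the types above looks like the following; the AMI, key pair and subnet IDs are placeholders, and the command is commented out since it needs a configured AWS CLI.

```shell
INSTANCE_TYPE="t2.micro"   # a small General Purpose (T2) instance

# aws ec2 run-instances --image-id ami-0abc123 \
#   --instance-type "$INSTANCE_TYPE" --count 1 \
#   --key-name my-key --subnet-id subnet-0abc123
echo "$INSTANCE_TYPE"
```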
Check out more details in the next section: AWS Crash Course – VPC.
If you want to try some hands-on practice, you can follow this guide to launch an Amazon Linux instance, or this one for a Windows instance.

How to close AWS Free Tier account before expiry

You have been using your AWS account for around the last year, and now the free tier period is about to end. If you created this account only for testing purposes and don't want the resources in it anymore, it's best to close the account.
Closing the account will save you from unexpected AWS bills for resources which you may have started in some region and forgotten to stop or delete.
Here are the simple steps to close an AWS account.
  1. Go to your AWS Settings Page .
  2. Scroll Down to “Close Account”
  3. Tick on the check box.
  4. Click “Close Account”

    Once you click “Close Account” you will get a confirmation mail from AWS for the account closure. If you have any unpaid bill for that month, you will receive it as per your billing cycle.
    After clearing the unpaid bill, you should not get any bills from the next month onwards.

Solved: How to calculate number of available IPs in a Subnet

Many people are confused about how many usable IPs you get in a subnet and how to calculate the number.
So here is a simple way to calculate it.
Here is the formula.
Maximum Number of IPs = 2**(32 - netmask_length)
Let's say you have a subnet mask of /28. Then the maximum number of IPs you can have is
Maximum Number of IPs = 2**(32-28) = 2**4 = 2*2*2*2 = 16
So you can have at most 16 IPs in a /28 subnet.
The first and last IPs of a subnet are reserved for the network address and broadcast address, so you are left with only 14 usable IPs in a normal network.
However, cloud providers like AWS and Azure generally reserve 5 IPs instead of 2 in each subnet. Thus, the usable IPs available to you in AWS or Azure for a /28 subnet will be 11.
Similarly, you can calculate the usable IPs in each subnet when working in the cloud.
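The same arithmetic as shell commands, using the /28 example from above (the bit shift `1 << n` computes 2**n):

```shell
netmask=28
max_ips=$(( 1 << (32 - netmask) ))   # 2**(32-28) = 16 addresses in a /28
usable_classic=$(( max_ips - 2 ))    # minus network + broadcast addresses
usable_aws=$(( max_ips - 5 ))        # AWS/Azure reserve 5 IPs per subnet
echo "$max_ips $usable_classic $usable_aws"   # prints: 16 14 11
```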

For simplicity we have created an AWS Subnet Calculator which you can use.


Be Sociable. Share It. Happy Learning!

How to become an AWS Certified Solution Architect in 30 days ?

In this post we will discuss how to clear the “AWS Certified Solutions Architect – Associate” exam in 30 days.
AWS exams are not tied to a specific version, as you see in other exams like RHCSA on RHEL 6.
The syllabus is vast and keeps changing as AWS keeps adding new services, so hard work alone won't help. Prior experience with a few specific AWS services also won't be enough on its own, as the exam questions cover a wide range of services.
Below is a smart plan which can get you ready for the exam in 30 days.
First 7 days.
If you are looking for a very quick overview of all the services, so that you can sound familiar with AWS, you can refer to this post.
You can also refer to our free AWS Crash Course if you want to go a little deeper. It will give you good knowledge of the key topics in a short time.
Later, go through online training and videos. You can look at AWS re:Invent videos, but if you are new to AWS it's recommended that you buy an online course. The content of both acloudguru and linuxacademy is good, but I used the acloudguru course on Udemy as it provides lifetime access. You can read my complete review of the acloudguru course here.
TIP:- We found that buying the acloudguru course from Udemy is cheaper than buying it on the acloud.guru website. It's the same course, AWS Certified Solutions Architect – Associate, on Udemy at a cheaper rate, as Udemy generally provides heavy discounts on courses.
Day 8 to 14
For the next seven days repeat the exercises in the course doing hands on in your own AWS account.
TIP:- Create a billing alert in the account. It will remind you if you are going above the free tier limits and save you from unpleasant surprises.
To see how to create a billing alert refer here.
Day 15 to 21
An online course will give you a good base, as you don't have to worry about the syllabus. The next step is to go through the AWS white papers listed below.
  • Overview of Amazon Web Services
  • Storage Options in the Cloud
  • Overview of Security Processes
  • AWS Risk & Compliance Whitepaper
  • Architecting for the AWS Cloud: Best Practices
Here you can get all the AWS Whitepapers .
Day 22 to 30
Finally go through FAQs of AWS services. Here I am listing few key services from which you can expect most questions.
  • EC2
  • S3
  • EBS
  • RDS
  • VPC
  • ELB
  • Route 53
  • Glacier
  • Cloudfront
  • Direct Connect
Tips for the exam:-
  • You won't get more than 3 minutes per question.
  • You may find some very long questions in the exam. The best strategy is to read the answer options first and then check for the relevant info in the question.
  • If you find a question confusing, it's better to mark it for review and check it later.
  • Since it's an AWS exam, look for AWS-related options in the answers. Chances are high that a non-AWS option will be wrong.
  • AWS exams generally don't focus on memorizing datasheets, so you won't get a question like “How much RAM does a c3.xlarge offer?”.
  • For cost optimization, Spot instances are best. If you are torn between Dedicated and Spot, choose Spot if the question talks about cost.
You can check out the exam blueprint here, and refer to sample exam questions here.
Once you are done with Associate level and you want to move to the next level check How to prepare for AWS Certified Solutions Architect – Professional .
Hope this post helps you in your preparation. Do let me know if you have any query.

Solved: How to Create AWS billing alert

  1. Sign in to the AWS Management Console and open the Billing and Cost Management console .
  2. On the left navigation pane, choose Preferences.
  3. Select the Receive Billing Alerts check box.
  4. Choose Save preferences.
Now we will create the alarm in CloudWatch.
At the top left of the AWS console, choose “Services” and select “CloudWatch”.
  • In CloudWatch, on the left pane, select “Alarms”.
  • Now select Create Alarm.
  • In Billing Metrics, select “Total Estimated Charges”.


  • A new window will open. Put a check mark next to USD and click Select Metric.


  • In the section “Whenever charges for: Estimated charges”, specify the threshold as 0.01 and click Next. Refer to the image below.


  • For “send a notification to”, specify an email address.
  • Finally, click on Create Alarm. CloudWatch will send you a confirmation mail.
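The same alarm can be sketched with the CLI; the SNS topic ARN is a placeholder (the CLI notifies via an SNS topic rather than the plain email box the console offers), and the command is commented out since it needs a configured AWS CLI pointed at us-east-1, where billing metrics live.

```shell
THRESHOLD="0.01"                                        # alarm at one cent
TOPIC_ARN="arn:aws:sns:us-east-1:123456789012:billing"  # placeholder SNS topic

# aws cloudwatch put-metric-alarm --region us-east-1 \
#   --alarm-name "billing-alert" \
#   --namespace "AWS/Billing" --metric-name "EstimatedCharges" \
#   --dimensions Name=Currency,Value=USD \
#   --statistic Maximum --period 21600 --evaluation-periods 1 \
#   --comparison-operator GreaterThanOrEqualToThreshold \
#   --threshold "$THRESHOLD" \
#   --alarm-actions "$TOPIC_ARN"
echo "$THRESHOLD"
```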
Be Sociable. Share It. Happy Learning!

AWS VS AZURE VS OPENSTACK

Most of the services provided by the different cloud providers are the same as what you run in an on-premises setup; they just have a different name in the cloud. Below is a comparison of the major services offered by different cloud providers and what they mean in simple layman's terms. Hope this is helpful to you.

S3 CROSS ACCOUNT ACCESS WITH FOLDER RESTRICTION

 

This document shows you how to grant cross-account access to a user and restrict it to a folder in an S3 bucket. It can be a very useful cost-saving measure, as you don't have to duplicate the data in a QA bucket, while keeping the data safe since you grant only read access.

Problem:- We want to allow the QA user (qauser) to get files which are in the production bucket (prodbucket), but it should only be able to access folder1 in prodbucket. Also, both the production user (produser) and qauser should be able to access the buckets in their own accounts.
The hierarchy of the prod bucket is prodbucket/folder1.

Solution:-