Launching a Web App through Terraform


THIS IS MY FIRST TASK OF HYBRID MULTI-CLOUD COMPUTING UNDER THE MENTORSHIP OF SIR VIMAL DAGA

I have joined LinuxWorld to study Hybrid Multi-Cloud computing. It has now been 12 days of this training, and I am thrilled to be part of it. We were assigned our first task by our mentor, Vimal Daga sir, and I successfully completed it.

Amazon Web Services (AWS):
Amazon Web Services is a platform that offers flexible, reliable, scalable, easy-to-use and cost-effective cloud computing solutions.

This was the task assigned by Vimal Daga sir:

Task 1: Create/launch an application using Terraform
1. Create a key pair and a security group that allows port 80.
2. Launch an EC2 instance.
3. In this EC2 instance, use the key pair and security group created in step 1.
4. Launch one EBS volume and mount it onto /var/www/html.
5. The developer has uploaded the code to a GitHub repo, which also contains some images.
6. Copy the GitHub repo code into /var/www/html.
7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public-readable.
8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

So here is how I made the whole setup.

Pre-process:

1. AWS
Create an AWS account and log in to the console.
Create an IAM user and generate access keys for a profile.

2. Terraform
Download the Terraform software.
Set the path.
Create a working directory.

Step 1: AWS login through the command line

When you create an IAM user, AWS provides a credentials file. Use this file to log in through the command line.
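One way to set up the profile is with the AWS CLI (a quick sketch; the profile name "tabu" matches the provider block used in the next step, and the keys come from the downloaded credentials file):

aws configure --profile tabu
AWS Access Key ID [None]: <from the credentials file>
AWS Secret Access Key [None]: <from the credentials file>
Default region name [None]: ap-south-1
Default output format [None]: json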

Step 2: Configuring the provider and creating key pairs

provider "aws" {
  region  = "ap-south-1"
  profile = "tabu"
}
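This post was written against an older Terraform release. On Terraform 0.13 or later you would typically also pin the three providers used below (a minimal sketch, with versions left unpinned as an assumption):

terraform {
  required_providers {
    aws   = { source = "hashicorp/aws" }
    tls   = { source = "hashicorp/tls" }
    local = { source = "hashicorp/local" }
  }
}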

// Creating key pairs
resource "tls_private_key" "privatekey" {
  algorithm = "RSA"
}

resource "aws_key_pair" "resource_key" {
  key_name   = "saba_12"
  public_key = tls_private_key.privatekey.public_key_openssh
}

// Saving the private key to a local file
resource "local_file" "key_file" {
  content  = tls_private_key.privatekey.private_key_pem
  filename = "saba_12.pem"
}
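One pitfall worth noting: ssh refuses a private key file that is readable by others. Assuming version 1.4+ of the local provider, the same resource can set the permission directly (a variant sketch):

resource "local_file" "key_file" {
  content         = tls_private_key.privatekey.private_key_pem
  filename        = "saba_12.pem"
  file_permission = "0400" // owner read-only, so ssh accepts the key
}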

 


Step 3: Creating the security group that allows ports 80 and 22

// Creating security group
resource "aws_security_group" "securitygroups" {
  name        = "launch-wizard-4"
  description = "this security group will allow traffic at port 80"

  ingress {
    description = "http is allowed"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "ssh is allowed"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "security_group"
  }
}

 

variable "enter_your_security_group"

 {

 type = string

  default = "launch-wizard-4"

 }  
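A small design note: instead of passing the group name through a variable, the instance launched in the next step could reference the resource directly, which also gives Terraform an implicit creation-order dependency:

security_groups = [aws_security_group.securitygroups.name]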


Step 4: Launching the instance and creating a remote SSH connection

// Launching the instance
resource "aws_instance" "myin" {
  ami             = "ami-005956c5f0f757d37"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.resource_key.key_name
  security_groups = [var.enter_your_security_group]

  tags = {
    Name = "My_OS"
  }

  // Creating the remote connection; "self" refers to this instance's own
  // attributes and avoids a dependency cycle on aws_instance.myin
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.privatekey.private_key_pem
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo service httpd start",
    ]
  }
}

 

// Printing the availability zone of the OS
output "printaz" {
  value = aws_instance.myin.availability_zone
}



Step 5: Creating the EBS volume

// Creating EBS volume
resource "aws_ebs_volume" "ebsvolume" {
  availability_zone = aws_instance.myin.availability_zone
  size              = 2

  tags = {
    Name = "vol1"
  }
}


Step 6: Attaching the EBS volume to the instance

// Attaching the EBS volume to the instance
resource "aws_volume_attachment" "volumeattached" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.ebsvolume.id
  instance_id  = aws_instance.myin.id
  force_detach = true
}

// Printing the IP address of the OS
output "myos_ip" {
  value = aws_instance.myin.public_ip
}

// Copying the IP address of the OS into a file
resource "null_resource" "nulllocal2" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.myin.public_ip} > publicip.txt"
  }
}

Step 7: Creating a null resource to format and mount the volume and clone the repo

resource "null_resource" "nullremote3" {
  depends_on = [
    aws_volume_attachment.volumeattached,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.privatekey.private_key_pem
    host        = aws_instance.myin.public_ip
  }

  // The volume attached as /dev/sdh appears as /dev/xvdh inside Amazon Linux
  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/sabacs12/images.git /var/www/html/",
    ]
  }
}
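To sanity-check the mount after apply, you can SSH in manually with the generated key (a verification step, not part of the .tf file; the IP comes from publicip.txt):

chmod 400 saba_12.pem
ssh -i saba_12.pem ec2-user@<public-ip-from-publicip.txt>
lsblk                  # xvdh should list /var/www/html as its mount point
df -h /var/www/html    # confirms the 2 GiB volume is mounted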

Step 8: Creating the S3 bucket

resource "aws_s3_bucket" "s3buckets" {
  bucket = "bucket158338"
  acl    = "public-read"

  tags = {
    Name        = "bucket158338"
    Environment = "Dev"
  }
}
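The task also asks for the images from the GitHub repo to be copied into the bucket as public-readable. The .tf shown here leaves that step out, but assuming the repo has been cloned locally (the local path below is an assumption), one way is an aws_s3_bucket_object resource:

resource "aws_s3_bucket_object" "image_upload" {
  bucket = aws_s3_bucket.s3buckets.bucket
  key    = "terra.jpg"        // object name in the bucket
  source = "images/terra.jpg" // local path to the cloned repo (assumed)
  acl    = "public-read"      // publicly readable, as the task requires
}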

Step 9: Creating the CloudFront distribution

// Creating CloudFront distribution

 

locals {
  s3_origin_id = "saba12345"
}

resource "aws_cloudfront_distribution" "cloudfront_distribution" {
  origin {
    domain_name = aws_s3_bucket.s3buckets.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["US", "CA", "GB", "DE"]
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
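The last part of the task is to use the CloudFront URL in the code under /var/www/html. A sketch of one way to do it (assuming the bucket holds terra.jpg and the page is index.html, neither of which the original .tf shows):

output "cloudfront_url" {
  value = aws_cloudfront_distribution.cloudfront_distribution.domain_name
}

resource "null_resource" "update_html" {
  depends_on = [aws_cloudfront_distribution.cloudfront_distribution]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.privatekey.private_key_pem
    host        = aws_instance.myin.public_ip
  }

  // Append an image tag that points at the CloudFront domain
  provisioner "remote-exec" {
    inline = [
      "echo '<img src=\"https://${aws_cloudfront_distribution.cloudfront_distribution.domain_name}/terra.jpg\">' | sudo tee -a /var/www/html/index.html",
    ]
  }
}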


Step 10: Creating a snapshot of the EBS volume

resource "aws_ebs_snapshot" "snapshot" {
  volume_id = aws_ebs_volume.ebsvolume.id

  tags = {
    Name = "My_OS"
  }
}

 

resource "null_resource" "nulllocal1"  {

 

depends_on = [

    null_resource.nullremote3,

                    ]

 

               provisioner "local-exec" {

                   command = "start chrome  ${aws_instance.myin.public_ip}/terra.jpg"

                                                             }

Well, this was the whole setup. Now we run the Terraform commands:

terraform init (initializes the working directory and downloads the provider plugins)

terraform validate (validates the code and checks for errors)

terraform apply (launches the infrastructure)

Now the whole cloud infrastructure is ready and the webpage is live.

To destroy the whole infrastructure we use a single command:

terraform destroy

The full code is on GitHub: raw.githubusercontent.com/sabacs12/terraform/master/task1/saba.tf

Sir stated, "Writing code is a one-time pain."

While completing the task I ran into a lot of errors, and honestly it took me three days to finish. Thanks to the volunteers: whenever I felt stuck, I texted them and they showed me the approach to solve it. My special thanks to Priyansh sir and Rajeeb sir.

I must say Vimal sir has an excellent teaching strategy; I am so involved in this training that I never get bored. I would like to thank Vimal sir for guiding me.




