
Posts

Showing posts from 2020

How will you make Aurora communicate with a Lambda function?

Step 1: Create an IAM policy as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAuroraToLambdaFunction",
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:<REGION>:<ACCOUNT_NO>:function:<FUNCTION_NAME>"
    }
  ]
}
Step 2: Create an IAM role, attach the above policy to it, and set its trust policy as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "rds.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
Step 3: Link the above IAM role with the Aurora DB cluster (see the CLI sketch below).
Step 4: Allow outbound communication from Aurora C...
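For Step 3, the role can be linked to the cluster from the console or with the AWS CLI. A minimal sketch, assuming a cluster named my-aurora-cluster and a role named AuroraInvokeLambdaRole (both placeholder names):

aws rds add-role-to-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --role-arn arn:aws:iam::<ACCOUNT_NO>:role/AuroraInvokeLambdaRole

For Aurora MySQL, the cluster also needs to know which role to use when invoking Lambda; this is usually done by setting the aws_default_lambda_role parameter in the DB cluster parameter group to the same role ARN.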

Ports

Ports:
22 - SSH
80 - HTTP
443 - HTTPS
1433 - Microsoft SQL Server
1434 - Microsoft SQL Server Monitor
3306 - MySQL
3389 - Microsoft RDP
5432 - PostgreSQL

AWS Aurora

What are the two modes of Aurora you can have?
MySQL
PostgreSQL
What are the different database locations you can provision your Aurora DB with?
Regional - the Aurora DB is provisioned in a single AWS region.
Global - the Aurora DB spans multiple AWS regions, with a primary region and secondary regions. Data written to the DB in the primary region is replicated to the secondary regions in under 1 second.
How will you achieve fast failover/high availability? Through a Multi-AZ deployment.
Can you back up an Aurora DB? Yes, by creating point-in-time snapshots. You can set a retention period for these backups, and you can also copy tags to the snapshots.
Can you encrypt data in an Aurora DB? Yes, using KMS keys.
I have added some incorrect data and I want to go back to my previous good state in the Aurora DB. How would I achieve this? You can quickly rewind to a specific point in time by enabling Backtrack. However, this costs extra, because the change records required for backtracking have to be stored.
What are the various DB features available in Aurora DB? One wri...
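Coming back to the Backtrack question above, a rough CLI sketch of rewinding a cluster, assuming a cluster named my-aurora-cluster that already has backtracking enabled (the cluster name and timestamp are placeholders):

aws rds backtrack-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --backtrack-to 2020-06-01T00:00:00Z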

AWS S3 Cross Region Replication

What are the constraints to enable S3 cross-region replication?
To enable cross-region replication, you should have two buckets: Bucket 1 in region A and Bucket 2 in region B. Versioning should be enabled on both buckets for replication to work.
Can you create same-region replication? Yes.
Can you create cross-account replication? Yes.
Can you replicate only a subset of objects? Yes, you can limit the scope of the replication rules to specific prefixes, tags, etc. You can also apply the rules to all the objects in the bucket.
What is the expected delay in copying files from the source to the destination bucket in S3 replication? A few seconds.
Can you replicate only the new objects? New objects are replicated after the replication rule is enabled. However, already existing objects can also be replicated by enabling the "Replicate existing objects" option.
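As a minimal sketch, a replication rule can be attached to the source bucket with the CLI. The bucket names, prefix and role name below are placeholders, and the IAM role must allow S3 to replicate objects on your behalf:

aws s3api put-bucket-replication \
    --bucket source-bucket-a \
    --replication-configuration '{
      "Role": "arn:aws:iam::<ACCOUNT_NO>:role/s3-replication-role",
      "Rules": [
        {
          "ID": "ReplicateLogs",
          "Prefix": "logs/",
          "Status": "Enabled",
          "Destination": { "Bucket": "arn:aws:s3:::destination-bucket-b" }
        }
      ]
    }'

The call fails if versioning is not already enabled on both buckets.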

AWS VPC Flow Logs

You can find the IP addresses from which requests to your VPC network interfaces are coming by using VPC Flow Logs. These logs can be published to S3 or CloudWatch Logs. You can also view and query the data with Athena.
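A rough sketch of turning flow logs on for a VPC and sending them to S3 (the VPC ID and bucket ARN are placeholders):

aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-0123456789abcdef0 \
    --traffic-type ALL \
    --log-destination-type s3 \
    --log-destination arn:aws:s3:::my-flow-logs-bucket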

AWS EBS

DeleteOnTermination attribute: EC2 instances have root EBS volumes. When you terminate an EC2 instance, the root EBS volume is also deleted by default. To prevent this, set the DeleteOnTermination attribute of the EBS volume to false. The EBS volume will then be kept even when the EC2 instance is terminated.
A comparison of EBS volume types:
General Purpose SSD - recommended for most workloads
Provisioned IOPS SSD - use this when the required number of input/output operations per second is high (10000 IOPS or 160 MiB/s of throughput per volume)
Throughput Optimized HDD - use this for fast throughput at a lower price
Cold HDD - use this for large volumes of data which are infrequently accessed
Where are the snapshots stored? In S3.
Can you directly create a snapshot in another region? No. You have to create the snapshot in the same region and then copy that snapshot to the other region.
Is it safe to copy the snapshot to another region? What if oth...
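A small sketch of setting DeleteOnTermination to false on a running instance's root volume from the CLI (the instance ID is a placeholder, and the device name must match the instance's actual root device, e.g. /dev/xvda or /dev/sda1):

aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"DeleteOnTermination":false}}]'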

Availability Zone Disruption

Availability Zones (AZs) are essentially separate data centers within a region. For example, North Virginia (us-east-1) has the most AZs, six. An AZ is geographically separated from the other AZs. Suppose a natural calamity, like an earthquake, hits the area of one AZ, say us-east-1a; that entire AZ may become unavailable. This is called an Availability Zone Disruption. To handle such cases, we should make our applications highly available by spreading them across multiple AZs. A good minimum is two AZs.

High availability (Multi-AZ) for Amazon RDS

Amazon RDS uses failover technology for its Multi-AZ deployments. If you enable Multi-AZ for an RDS DB, say a MySQL DB, RDS automatically creates a standby replica in a different AZ. If the primary DB instance is in AZ us-east-1a, then RDS creates a standby replica in, for example, us-east-1b. Suppose I add a new row to a table in the primary DB; the same row is written to the standby replica at almost the same time. This is called synchronous replication. Standby replicas are therefore useful during a DB instance failure or an AZ disruption, because RDS can fail over to the standby, which already holds an up-to-date copy of the data. This also gives high availability during planned system maintenance.
Backup in a Single-AZ deployment - I/O activity is suspended on the primary database while the backup is taken.
Automated backup in a Multi-AZ deployment - the backup is taken from the standby replica, so I/O on the primary is not blocked.
Note that the standby replica is not the same as a read replica: the standby uses synchronous replication and exists for failover, whereas read replicas use asynchronous replication and mainly serve read traffic. S...
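Converting an existing RDS instance to Multi-AZ can be done from the console or, as a rough sketch, with the CLI (the instance identifier is a placeholder; without --apply-immediately the change waits for the next maintenance window):

aws rds modify-db-instance \
    --db-instance-identifier my-mysql-db \
    --multi-az \
    --apply-immediately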

AWS CloudTrail

AWS CloudTrail is an API monitoring service. It records the activities in your account, and we can log those activities to an S3 bucket. It gives visibility into user activities; e.g., if you want to know who created an EC2 instance, you can get the answer using CloudTrail. Using CloudTrail, you can track changes to AWS resources in your accounts.
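For example, to answer the "who created an EC2 instance" question from the CLI, you can look up recent RunInstances events (the attribute values below are just an illustration):

aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=RunInstances \
    --max-results 5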

useCallback hook

Check the code below: usecallback-demo-1-component-renders-unnecessarily - Code. You can run the app directly: usecallback-demo-1-component-renders-unnecessarily - App. Whenever the button is clicked, the 'increment rendered' message is logged to the console. This means we're rendering the Increment component unnecessarily. The Increment component is re-rendered every time because it depends on the increment method; that method in turn depends on the count state; so whenever the count state changes, a new increment function is created and the Increment component, which receives it, has to render again. Now consider the following code where this problem is solved (using a random number instead of the count for the demo): usecallback-demo-2-usecallback-prevents-unnecessary-renders - Code. Here, we're caching the increment method using the useCallback hook. The Inc...

CDK - Error: unable to determine cloud assembly asset output directory. Assets must be defined indirectly within a "Stage" or an "App" scope

Cause: Got this error after executing:
1) `npm install @aws-cdk/aws-lambda`
2) `cdk diff` --> threw the error
This is because of a version mismatch of the @aws-cdk/aws-lambda package against @aws-cdk/core. The package.json file looked like this:
"dependencies": {
    "@aws-cdk/aws-lambda": "^1.78.0",
    "@aws-cdk/aws-sns": "1.49.1",
    "@aws-cdk/aws-sns-subscriptions": "1.49.1",
    "@aws-cdk/aws-sqs": "1.49.1",
    "@aws-cdk/core": "1.49.1"
}
Fix: Updated the package.json file as follows:
"dependencies": {
    "@aws-cdk/aws-lambda": "1.49.1",   --> changed here
    "@aws-cdk/aws-sns": "1.49.1",
    "@aws-cdk/aws-sns-subscriptions": "1.49.1",
    "@aws-cdk/aws-sqs": "1.49.1",
    "@aws-cdk/core": "1.49.1"
}
And then running `npm i` fixed the issue. From next time ...

How to create a react application and push in a new Git repo?

Create a repo called my-app in GitHub
npx create-react-app my-app --template typescript
git remote add origin https://github.com/<YOUR-GIT-NAME>/my-app.git
Control Panel --> Credential Manager --> Windows Credentials --> remove all Git credentials (otherwise, you will get a 403)
git config user.name "<YOUR-NAME>"
git config user.email <EMAIL-ADDRESS>
git config -e
git checkout -b main
git branch -D master
git branch
git fetch origin main
git pull origin main --allow-unrelated-histories
Fix all merge conflicts through VS Code
git commit
git push --set-upstream origin main

What are various AWS services used for?

AWS EMR (Amazon Elastic MapReduce) - used for big data reading/processing and analysis, and for applications with data-intensive workloads
AWS Kinesis - real-time streaming data
AWS Custom Kinesis Streams Applications - to analyze data and move the analysis outcomes to other systems
VPC Peering - for inter-region private connections
VPC Endpoints - private links; should be used with VPC Peering for inter-region connections
NAT Gateway - allows private instances to download updates from the internet; AWS recommends creating a NAT Gateway in each AZ
CloudWatch Alarms - can trigger auto scaling, EC2 actions, SNS notifications, etc.
Amazon Kinesis Data Firehose - used to ship data to other AWS services (not for analysis)
AWS RDS (Amazon Relational Database Service) - a SQL database, meant for OLTP workloads
Amazon DynamoDB - a NoSQL database, used for light-weight and durable storage
AWS Direct Connect - used to establish a network connection from on premises to A...

NACL - deny rules first, allow rules next

The evaluation of rules in security groups and network access control lists is totally different. In a security group, all the rules are evaluated before deciding whether to allow traffic. In a network ACL, rules are evaluated in order of rule number, from lowest to highest. If rule #99 allows HTTP traffic from a particular IP address and rule #100 denies HTTP traffic from the same IP address, the traffic from that IP address is NOT denied, because rule #99 is evaluated before rule #100: the matching rule with the lowest number wins and evaluation stops there. Hence an allow followed by a deny effectively allows the traffic, whereas a deny followed by an allow actually denies it. Thus, in an NACL it is always good to place the deny rules first (at lower rule numbers) and the allow rules after them. Tip: Write deny rules for specific ports, for example port 80.
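A rough sketch of such a rule pair using the CLI, with the deny at the lower rule number so it is evaluated first (the NACL ID and CIDR ranges are placeholders):

aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress \
    --rule-number 99 \
    --protocol tcp \
    --port-range From=80,To=80 \
    --cidr-block 203.0.113.0/24 \
    --rule-action deny

aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress \
    --rule-number 100 \
    --protocol tcp \
    --port-range From=80,To=80 \
    --cidr-block 0.0.0.0/0 \
    --rule-action allow

With this ordering, HTTP traffic from 203.0.113.0/24 is denied, and HTTP traffic from everywhere else is allowed.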

Network ACLs are stateless while security groups are stateful - What does it mean?

In a security group, if you add an inbound rule for port 80, the corresponding outbound (response) traffic is automatically allowed. You don't need to explicitly add an outbound rule in a security group when you add an inbound rule. This is not true for NACLs, i.e., the inbound and outbound rules have to be defined explicitly.
FAQ 1: Suppose I add an inbound rule for port 80 in a security group, so the response traffic on port 80 is allowed out by default. In this case, will I see an explicit outbound rule for port 80 in that security group in the AWS console? Ans: No. It is allowed inherently, because security groups are stateful.
FAQ 2: I don't want to allow incoming traffic on port 80 but to deny all incoming traffic on port 80 - how do I do this in a security group? Ans: You don't need to do anything for this in a security group, because everything is denied by default in security groups. This means you only create rules to allow traffic. In contrast, in NACLs you can create rules both to deny and to allow traffic, which gives you more control.
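For example, allowing inbound HTTP on a security group needs only one rule; the response traffic flows back out even though no outbound rule mentions port 80 (the group ID is a placeholder):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0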

How to delete commits in Git?

Suppose you have 3 commits with SHAs as follows:
HEAD~0 --> Commit 3 ccccccc
HEAD~1 --> Commit 2 bbbbbbb
HEAD~2 --> Commit 1 aaaaaaa
If you want to remove the last two commits (i.e., commits 2 and 3) and make Commit 1 the latest commit, run the following commands:
git reset --hard aaaaaaa
git push origin HEAD --force
Now, the commit history would be as follows:
HEAD~0 --> Commit 1 aaaaaaa

How to squash multiple commits into one?

Suppose you have commits as follows:
HEAD~0 --> Commit 5 eeeeeee
HEAD~1 --> Commit 4 ddddddd
HEAD~2 --> Commit 3 ccccccc
HEAD~3 --> Commit 2 bbbbbbb
HEAD~4 --> Commit 1 aaaaaaa
If you want to squash commits 3, 4 and 5 into a single commit, run the commands below:
git rebase -i HEAD~3
HEAD~3 means you are squashing the last 3 commits. In the editor that opens, keep the first of the three commits as "pick", change the other two to "squash", then save and close.
git push -f
Now, the commits would be as follows:
HEAD~0 --> Squashed Commit fffffff
HEAD~1 --> Commit 2 bbbbbbb
HEAD~2 --> Commit 1 aaaaaaa

GoLang - How to check if key exists in map?

package main

import "fmt"

var m map[string]string

func main() {
	m = make(map[string]string)
	m["foo"] = "abc"

	if val, ok := m["foo"]; ok {
		fmt.Println("foo found -", val)
	} else {
		fmt.Println("foo not found")
	}

	if val, ok := m["bar"]; ok {
		fmt.Println("bar found -", val)
	} else {
		fmt.Println("bar not found")
	}
}

Output:
foo found - abc
bar not found

How to fix the error - Status reason CannotStartContainerError: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/bootstrap.sh\": permission denied": unknown?

This error occurs when you build an image from your custom Dockerfile, push it to ECR, and then create and run an ECS task that uses that image. The ECS task fails to run with this error message. It can be fixed by making the entrypoint script executable in your Dockerfile:
RUN chmod +x bootstrap.sh # This line should be added
CMD ["/bootstrap.sh"]
Now, build this Dockerfile again and push it to ECR. The ECS task using this image will run successfully.

How to fix the error - botocore.exceptions.ProfileNotFound: The config profile (AnyProfileName) could not be found?

This occurs when you run aws configure in cmd. It is caused by the following environment variable being set: AWS_PROFILE = AnyProfileName. Removing this environment variable is the fix. After removing it, if you run aws configure, the AWS CLI will be configured for the default profile rather than AnyProfileName.
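A small sketch of clearing the variable for the current Windows cmd session (if it was set as a user or system environment variable, also remove it there so it does not come back in new sessions):

set AWS_PROFILE=
aws configure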

How to install Apache server on Amazon Linux?

Run the following commands to install the Apache server on Amazon Linux:
SSH into the instance on port 22
sudo su (logs you in as the root user)
yum install httpd -y (installs Apache)
service httpd start (starts the service)
service httpd status (shows the current status of the httpd service)
chkconfig httpd on (makes the httpd service start automatically on instance restart)
Load the public IP of the instance in a browser. You should see the Apache landing page. If you don't see it, allow port 80 in the inbound rules of the security group associated with the EC2 instance. You should see the Apache start page now.

Can you create an AWS CodePipeline with only 1 stage?

No. There should be a minimum of 2 stages in a pipeline. When you try to create a pipeline with only 1 stage, you get the following error:
Pipeline has only 1 stage(s). There should be a minimum of 2 stages in a pipeline (Service: AWSCodePipeline; Status Code: 400; Error Code: InvalidStructureException; Request ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)

How to assume a role in a target account from a login account using an MFA code?

Step 1: Run the following command and get the temporary credentials.
aws sts assume-role \
    --role-arn arn:aws:iam::<TargetAccountId>:role/<RoleName> \
    --role-session-name <RoleName> \
    --serial-number arn:aws:iam::<LoginAccountId>:mfa/<LoginName> \
    --token-code <6DigitMFACode>
Step 2: The temporary credentials include the following:
AccessKeyId
SecretAccessKey
SessionToken
Expiration date and time (the token is valid for 1 hour by default)
Copy these values and create a profile in the %USERPROFILE%\.aws\credentials file as follows:
[PROFILE_NAME]
aws_access_key_id = <AccessKeyId>
aws_secret_access_key = <SecretAccessKey>
aws_session_token = <SessionToken>
Step 3: You can now use this profile to assume the <RoleName> role in the <TargetAccountId> account.
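As a quick check that the profile works, you can ask STS who you are using the new profile (PROFILE_NAME is the placeholder used above):

aws sts get-caller-identity --profile PROFILE_NAME

The returned ARN should show the assumed role in the <TargetAccountId> account. Remember that the temporary credentials expire (after 1 hour by default), after which Step 1 has to be repeated.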

I am trying to create a bucket 'MyBucket' but I can't. It says a bucket with the same name already exists, but I can't see any such bucket in my account!

When you try to create a bucket with the name MyBucket (or whatever name), sometimes you get an error like "deploy-doc-store-s3 already exists" and you will not be able to create the bucket. You search for that bucket in your account, but you don't find it! Then why are you not able to create MyBucket when a bucket with the same name does not exist in your account? This is because the S3 bucket namespace is global: even though each bucket lives in a region, bucket names are shared across all AWS accounts, so S3 bucket names need to be globally unique. The error occurred because someone else has already created a bucket with the name MyBucket. How to confirm whether a bucket with the name MyBucket already exists? Run the following command:
aws s3api head-bucket --bucket MyBucket
If it returns:
NOTHING, i.e., 200 OK - the bucket already exists and you have access to it (the bucket exists in your current account)
403 Forbidden - the bucket already exists and you don't have access to it...

AWS - Error - An error occurred (ExpiredToken) when calling the DescribeStacks operation: The security token included in the request is expired

Error: An error occurred (ExpiredToken) when calling the DescribeStacks operation: The security token included in the request is expired.
Reason: This occurred when I ran a make command using a profile whose security credentials (token) had expired.
Fix: Generate new security credentials (aws sts assume-role) and run the command again.