Posts

Showing posts from 2022

AWS DMS (database migration service) - Homogeneous vs. Heterogeneous database migration

If I have an MSSQL server on-premises and I want to migrate it to the AWS cloud (as an MSSQL server), this is called a homogeneous database migration. If I have an MSSQL server on-premises and I want to transform and migrate it to a PostgreSQL database in the AWS cloud, this is called a heterogeneous database migration. In that case the schemas of the source and target databases would be different (as would the data types and database code, since the DB engines are different). So you have to use a tool called SCT (Schema Conversion Tool) to convert the source schema to the target schema (and also transform the code). After this, DMS (Database Migration Service) can be used to migrate the on-premises database into the AWS cloud (to RDS/EC2). The major advantage of using DMS is that the source database remains fully operational while the migration runs. DMS can also move data between SQL, NoSQL, and text-based targets.
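As a rough sketch of the final step (the task name, endpoint ARNs, and instance ARN below are placeholders, not from any real setup), a migration task that keeps the source operational could be created with the AWS CLI after SCT has converted the schema:

```shell
# Hypothetical identifiers/ARNs for illustration; replace with your own.
# full-load-and-cdc does an initial full load and then replicates
# ongoing changes, so the source database stays fully operational.
aws dms create-replication-task \
  --replication-task-identifier mssql-to-postgres-task \
  --source-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE \
  --target-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:TARGET \
  --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:INSTANCE \
  --migration-type full-load-and-cdc \
  --table-mappings file://table-mappings.json
```

The table-mappings JSON file selects which schemas/tables to migrate; source and target endpoints (and a replication instance) must already exist.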

AWS Route 53 health check

It monitors endpoints by sending HTTP requests at regular intervals. It marks the endpoint 'healthy' if the response is successful and 'unhealthy' otherwise; an endpoint that does not respond at all is also marked 'unhealthy'. In addition to endpoints, Route 53 health checks can also monitor other health checks (calculated health checks) or the state of CloudWatch alarms. Question: What does monitoring other health checks mean? You might want to make sure that some minimum number of resources among multiple web servers are healthy. Question: What does monitoring a CloudWatch alarm mean? You could have created a CloudWatch alarm that monitors the number of healthy targets in a load balancer's target group; a Route 53 health check can monitor such CloudWatch alarms. You can specify the endpoint using an IP address or a domain name (the domain name could be configured in Route 53). Example: The endpoint can be from an API Gateway which has a path parameter using which i...
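A minimal sketch of both kinds of checks (the IP, path, and child check IDs are placeholders): first an ordinary HTTP endpoint check, then a calculated check that stays healthy only while at least 2 of its 3 child checks are healthy.

```shell
# Placeholder values; a basic HTTP health check on an endpoint.
aws route53 create-health-check \
  --caller-reference web-server-1-check \
  --health-check-config '{"Type":"HTTP","IPAddress":"203.0.113.10","Port":80,"ResourcePath":"/health","RequestInterval":30,"FailureThreshold":3}'

# A calculated health check over other checks: healthy only while
# at least HealthThreshold of the child checks are healthy.
aws route53 create-health-check \
  --caller-reference calculated-check \
  --health-check-config '{"Type":"CALCULATED","ChildHealthChecks":["<child-id-1>","<child-id-2>","<child-id-3>"],"HealthThreshold":2}'
```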

How will you mitigate read-intensive database workloads?

Suppose you have a database with a products table, and there are 100 users: 60 users are reading the data of product A and the remaining 40 are reading the data of product B. You can do either of the following: 1. Create an ElastiCache cluster. Let the first of the 60 users query product A and the remaining 59 read the data from the cache. ElastiCache is effective at caching frequent requests. 2. Create read replicas to distribute the read-only traffic across multiple database instances, thus improving read performance.
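For option 2, a read replica can be created with a single CLI call (instance names here are hypothetical); the application then points its read-only queries at the replica's endpoint.

```shell
# Placeholder instance identifiers; creates a read replica of an
# existing RDS instance so read traffic can be spread across instances.
aws rds create-db-instance-read-replica \
  --db-instance-identifier products-db-replica-1 \
  --source-db-instance-identifier products-db
```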

AWS CloudWatch Log Agent

CloudWatch monitors two things: 1. Your AWS resources 2. The applications running on your AWS resources. There is something called the CloudWatch Logs agent, which you can install on your on-premises machine. It can send application logs to CloudWatch Logs. You can filter these logs using a CloudWatch metric filter. For example, you can filter only out-of-memory exceptions and count their total occurrences. These metric filters can be monitored by CloudWatch alarms, and you can then notify others using SNS.
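The filter-then-alarm chain described above can be sketched as follows (log group, metric names, thresholds, and the SNS topic ARN are all placeholders):

```shell
# Placeholder names; count occurrences of "OutOfMemoryError" in a log
# group as a custom metric.
aws logs put-metric-filter \
  --log-group-name my-app-logs \
  --filter-name oom-errors \
  --filter-pattern '"OutOfMemoryError"' \
  --metric-transformations metricName=OOMCount,metricNamespace=MyApp,metricValue=1

# Alarm (via SNS) when 5 or more occurrences land within 5 minutes.
aws cloudwatch put-metric-alarm \
  --alarm-name oom-alarm \
  --namespace MyApp --metric-name OOMCount \
  --statistic Sum --period 300 \
  --threshold 5 --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```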

AWS Config

Suppose you have a few AWS resources and you want to compare their current configurations with the expected/desired configurations. This can be achieved through an AWS service called AWS Config. It assesses, audits, and evaluates the configurations of AWS resources.
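As one concrete example (the rule name here is arbitrary; S3_BUCKET_VERSIONING_ENABLED is one of AWS Config's managed rule identifiers), you can enable a rule that flags S3 buckets whose versioning is not enabled:

```shell
# Enables an AWS-managed Config rule that marks buckets without
# versioning as NON_COMPLIANT (assumes a Config recorder is running).
aws configservice put-config-rule \
  --config-rule '{"ConfigRuleName":"s3-versioning-enabled","Source":{"Owner":"AWS","SourceIdentifier":"S3_BUCKET_VERSIONING_ENABLED"}}'
```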

Resource based policy

Imagine you have two AWS accounts, account A and account B. There are AWS resources in account B that you want to share with account A; let's take an S3 bucket as an example. This can be achieved as follows: 1. Create an IAM role in account B that can access the S3 bucket. Let users from account A assume that role in account B and eventually access the S3 bucket in account B. The role acts as a proxy here. One disadvantage of this approach is that the user context changes from account A's user to account B's role: once the context switches to account B's role, the user can no longer access resources in account A. This is a user-based (identity-based) policy. 2. There are some AWS resources you can attach resource-based policies to. In such a policy you can list the AWS accounts that may access the resource. The user will be able to access that S3 bucket in account B. At the same time, the user will also be accessing the resources in...
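A minimal sketch of option 2 (bucket name and account A's ID are placeholders): a bucket policy in account B that grants account A read access, so no role switch is needed.

```shell
# Placeholder bucket name and account id; run in account B. Principals
# in account A (111122223333) can list the bucket and read objects
# while keeping their own identity (and access to account A resources).
aws s3api put-bucket-policy --bucket account-b-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": [
      "arn:aws:s3:::account-b-bucket",
      "arn:aws:s3:::account-b-bucket/*"
    ]
  }]
}'
```

Note that account A's users still need an identity-based policy in their own account allowing these S3 actions; cross-account access requires both sides to allow it.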

Service Control Policies + AWS Organizations

Imagine you have 5 AWS accounts and you want to restrict 2 of them from using an AWS service. How will you do that? This can be achieved through service control policies (SCPs) and AWS Organizations. Create an organizational unit (OU) containing those 2 accounts and then restrict access for that OU. Service control policies control the use of AWS services across multiple accounts. IAM policies can be applied only to users, roles, and groups, not to accounts, whereas SCPs can be applied to accounts or groups of accounts.
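A sketch of the SCP approach (the policy name, denied service, and OU ID are placeholders; run from the organization's management account):

```shell
# Create an SCP denying a service (DMS here, as an example) in the
# OU that contains the two restricted accounts.
aws organizations create-policy \
  --name deny-dms \
  --description "Block AWS DMS in restricted accounts" \
  --type SERVICE_CONTROL_POLICY \
  --content '{"Version":"2012-10-17","Statement":[{"Effect":"Deny","Action":"dms:*","Resource":"*"}]}'

# Attach the policy to the OU (use the PolicyId from the output above).
aws organizations attach-policy \
  --policy-id <policy-id> \
  --target-id <ou-id>
```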

How to upgrade node version in Linux (Ubuntu)?

Install nvm using the commands below:
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.34.0/install.sh | bash
source ~/.nvm/nvm.sh
Usage:
nvm install <version>       Download and install a <version>
nvm use <version>           Modify PATH to use <version>
nvm ls                      List installed versions
nvm ls-remote               List available versions

AWS Route53 - Private Hosted Zone

Issues faced while launching ECS Tasks (pulling image from an ECR repo) from a private subnet

I created a private subnet. I created an ECR repo with Private visibility and pushed an image into it. Then, I created an ECS cluster. I added a task definition with no task role and a task execution role (which ecs-tasks.amazonaws.com can assume) that has the AmazonECSTaskExecutionRolePolicy permission policy attached. The container in the task definition has private repository authentication enabled. Then, I ran a task as follows:
aws ecs run-task --task-definition <task-definition-name> --cluster <ecs-cluster-name> --network-configuration '{"awsvpcConfiguration": {"subnets":["<subnet-id>"], "securityGroups": ["<sg-id>"], "assignPublicIp": "DISABLED"}}' --count 1 --launch-type FARGATE
The task did not start and stopped with the following error:
ERROR: ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry au...
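This error generally means the task's ENI in the private subnet has no network path to ECR. One way to fix it, if you don't want a NAT gateway, is to add VPC endpoints; a sketch with placeholder IDs and region follows.

```shell
# Placeholder ids/region. A Fargate task in a subnet with no internet
# route needs VPC endpoints to reach ECR: two interface endpoints
# (ecr.api, ecr.dkr) plus an S3 gateway endpoint (image layers are
# stored in S3). Add a logs endpoint too if the awslogs driver is used.
aws ec2 create-vpc-endpoint --vpc-id <vpc-id> --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.ecr.api \
  --subnet-ids <subnet-id> --security-group-ids <sg-id> --private-dns-enabled
aws ec2 create-vpc-endpoint --vpc-id <vpc-id> --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.ecr.dkr \
  --subnet-ids <subnet-id> --security-group-ids <sg-id> --private-dns-enabled
aws ec2 create-vpc-endpoint --vpc-id <vpc-id> --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 --route-table-ids <rtb-id>
```

Since the task definition uses private repository authentication (credentials held in Secrets Manager), a com.amazonaws.<region>.secretsmanager interface endpoint would also be needed for the execution role to fetch those secrets; the endpoints' security group must allow HTTPS (443) from the task's security group.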

AWS - WAF

WAF:
WAF is a global service like CloudFront, Route 53, SES, and IAM.
It's a Web Application Firewall that protects web apps/APIs.
You can allow/block requests based on request properties.
WAF can be administered using AWS Firewall Manager (across multiple accounts and resources).
AWS Shield Advanced can be integrated with AWS WAF; WAF comes free with Shield Advanced.
With WAF + CloudFront, the rules run in edge locations (security + performance).
With WAF + regional resources like ALB/API Gateway/AppSync/Cognito User Pools, the rules run in the region (both internet-facing and internal resources are protected).
WAF can block XSS and SQL injection attacks but can't withstand DDoS attacks on its own (use WAF's Web ACL rate-based rules + AWS Shield Advanced for DDoS attacks).
WAF can mitigate application-layer DDoS attacks.
WAF's Web ACL has two types of rules: regular and rate-based.
You can't rate-limit with a regular rule (but can add cond...
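A sketch of a Web ACL with a single rate-based rule (names, metric names, and the 2000-requests-per-5-minutes limit are arbitrary placeholders):

```shell
# Placeholder names; a REGIONAL Web ACL (for ALB/API Gateway etc.)
# whose rate-based rule blocks any source IP that exceeds 2000
# requests in a 5-minute window; everything else is allowed.
aws wafv2 create-web-acl \
  --name rate-limited-acl \
  --scope REGIONAL \
  --default-action Allow={} \
  --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=rateLimitedAcl \
  --rules '[{"Name":"rate-limit","Priority":0,"Statement":{"RateBasedStatement":{"Limit":2000,"AggregateKeyType":"IP"}},"Action":{"Block":{}},"VisibilityConfig":{"SampledRequestsEnabled":true,"CloudWatchMetricsEnabled":true,"MetricName":"rateLimit"}}]'
```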

How to renew AWS SSO temporary credentials through CLI?

Run the following commands (from Ubuntu):
1. Add the following to the ~/.aws/config file:
[profile sso]
sso_start_url = <SSO-Start-URL>
sso_region = <AWS-Region>
sso_account_id = <12-digit-AWS-Account-Number>
sso_role_name = <SSO-Role-Name>
2. Run the following command: aws sso login --profile sso
3. You will be redirected to the browser --> Log in --> Allow --> Close the browser
4. Install aws-sso-creds:
brew tap jaxxstorm/tap
brew install aws-sso-creds
5. Run the following command: eval $(aws-sso-creds export --profile sso)
6. Check the current identity: aws sts get-caller-identity

How to connect to EC2 instance (without private key) using SSM Session Manager?

Run the following AWS CLI commands:
$ aws ec2 run-instances --image-id <Id-of-AMI-with-SSM-Agent-pre-installed> --subnet-id <subnet-id> --instance-type <instance-type> --associate-public-ip-address
$ aws iam create-role --role-name <role-name> --assume-role-policy-document '{"Version":"2012-10-17","Statement":{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}}'
Note: The command above has inline JSON. Run such commands only from bash (not from cmd/PowerShell).
$ aws iam attach-role-policy --role-name <role-name> --policy-arn <ARN-of-AmazonSSMManagedInstanceCore-Policy>
$ aws iam create-instance-profile --instance-profile-name <instance-profile-name>
$ aws iam add-role-to-instance-profile --role-name <role-name> --instance-profile-name <instance-profile-name>
$ aws ec2 associate-iam-instanc...

Think of these AWS services when you hear these terms

Terms - Services
No duplicates - SQS FIFO / Simple Workflow (SWF) / Kinesis Data Streams
At-least-once delivery - SQS Standard
Exactly-once processing - SQS FIFO
Instance-level firewall - Security Group
Subnet-level firewall - NACL
Infrastructure provisioning - CloudFormation
Real-time guidance on resource provisioning - AWS Trusted Advisor
Recommendations for cost optimization / security / fault tolerance / performance / service limits - AWS Trusted Advisor
View & analyze costs/usage - AWS Cost Explorer
Automated security assessment service - Amazon Inspector
Improve security & compliance of applications - Amazon Inspector
Automatically assess applications for exposure, vulnerabilities and deviations from best practices - Amazon Inspector
Set custom budget alerts / reservation utilization / coverage targets / receive alerts - AWS Budgets
Integrate an LDAP directory service with IAM - SAML / STS / Custom identity broker
Record of S3 actions - CloudTrail logs
IT audits and ...

AWS RDS IAM DB Authentication

Create an AWS RDS MySQL DB:
aws rds create-db-instance \
--db-instance-identifier test-mysql-instance \
--db-instance-class db.t3.micro \
--engine mysql \
--master-username admin \
--master-user-password secret99 \
--allocated-storage 20 \
--enable-iam-database-authentication
Create a user in the DB as follows:
CREATE USER jane_doe IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
Connect to the MySQL DB using IAM DB authentication:
RDSHOST="test-mysql-instance.abcdef123456.us-west-2.rds.amazonaws.com"
TOKEN="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 3306 --region us-west-2 --username jane_doe)"
mysql --host=$RDSHOST --port=3306 --ssl-ca=[file_path]/global-bundle.pem --enable-cleartext-plugin --user=jane_doe --password=$TOKEN
Link to download the certificate: https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
N...
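One step the commands above don't show: the IAM principal connecting as jane_doe needs an identity policy allowing rds-db:connect on that database user. A sketch (the account ID and DbiResourceId in the ARN are placeholders; the resource ID comes from describe-db-instances, not the instance name):

```shell
# Placeholder account id and DbiResourceId; without this policy,
# generate-db-auth-token produces a token the database will reject.
aws iam put-user-policy \
  --user-name jane_doe \
  --policy-name rds-iam-connect \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"rds-db:connect","Resource":"arn:aws:rds-db:us-west-2:123456789012:dbuser:db-ABCDEFGHIJKL/jane_doe"}]}'
```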

How to create a Python virtual environment in Ubuntu?

Run the following commands:
> sudo apt update
> sudo apt install python3
> python3 -V
> sudo apt install python3-pip
> pip3 -V
> pip3 install virtualenv
> virtualenv env
> source env/bin/activate
(env) > python -V
Python 3.10.1
(env) > pip -V
pip 21.3.1 from /home/user/env/lib/python3.10/site-packages/pip (python 3.10)
(env) > deactivate
Other way of creating a virtual environment:
> python3.10 -V
Python 3.10.1
> python3.10 -m venv venv1
> source venv1/bin/activate
(venv1) > python -V
Python 3.10.1
(venv1) > pip -V
pip 21.2.4 from /home/user/venv-1/lib/python3.10/site-packages/pip (python 3.10)
(venv1) > deactivate
>
> python -V
Python 3.8.10
> python -m venv venv2
> source venv2/bin/activate
(venv2) > python -V
Python 3.8.10
(venv2) > pip -V
pip 20.0.2 from /home/user/venv-2/lib/python3.8/site-packages/pip (python 3.8)
(ve...