AWS ‐ Concepts
Basics
6 Pillars
- Operational Excellence
- Security
- Reliability
- Performance efficiency
- Cost optimization
- Sustainability
- Cloud provides greater flexibility in managing resources and cost
- Minimum upfront investment, as the customer does not have to purchase any physical infrastructure
- Provides just-in-time infrastructure
- No long term contracts or commitments
- Rich automation - infra becomes scriptable using APIs and shell
- Automatic scaling based on the load:
  - Scale out - adding more resources of the same size
  - Scale in - removing resources
  - Scale up - increasing the size of a resource
  - Scale down - decreasing the size of a resource
- Increased agility in the software development lifecycle
- Benefits of HA (High availability) and disaster recovery
- Cloud provides a scalable architecture - infrastructure that has the ability to expand and contract depending on the load
- Cloud infrastructure can scale easily, horizontally or vertically
- Provides virtually infinite scalability
- Horizontal scaling - scale out (increasing the no. of web servers or nodes), scale in (decreasing the no. of web servers or nodes)
- Vertical scaling - scale up (increasing the processing capacity/memory/resources of a server), scale down (decreasing the processing capacity/memory/resources of a server)
- Cloud has many building blocks to construct a system
- Cloud may not have the exact services, components, or software found in non-cloud infra; the application architecture has to adopt cloud-native solutions in order to maximize the cloud benefits
- Thinking about failure while designing the product makes the product more failure-resistant later
- Avoid Single point of failure (Ex: hosting web app and db on same instance)
- To mitigate a single point of failure, use a load-balanced environment
In this scenario too, multiple web servers connecting to a single database server still leaves a single point of failure.
To mitigate this, use Amazon RDS database instances along with an Elastic Load Balancer, where scaling is included automatically along with the redundancy needed to avoid the SPOF.
Leverage redundancy in software/web servers/db nodes/network resources to avoid single points of failure.
Ability of cloud to scale resources to match the demand
2 ways of scaling
- Scheduled scaling - scaling at fixed time intervals
- On-demand scaling - scaling based on certain metrics; when a metric crosses a defined threshold, resources are supplied to fulfil the demand
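The on-demand (metric-driven) approach can be sketched in plain Python. This is an illustrative threshold check, not the actual AWS Auto Scaling implementation; the function name and thresholds are assumptions.

```python
# Hypothetical sketch of a metric-driven scaling decision; thresholds
# and names are illustrative, not an AWS API.

def desired_capacity(current_instances, avg_cpu_percent,
                     scale_out_threshold=70, scale_in_threshold=30,
                     min_instances=1, max_instances=10):
    """Return the new instance count for a simple threshold policy."""
    if avg_cpu_percent > scale_out_threshold:
        return min(current_instances + 1, max_instances)  # scale out
    if avg_cpu_percent < scale_in_threshold:
        return max(current_instances - 1, min_instances)  # scale in
    return current_instances  # demand within bounds, no change
```

For example, `desired_capacity(2, 85)` adds an instance, while `desired_capacity(2, 20)` removes one, never going below the configured minimum.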
Decoupling, or loose coupling, is a design principle concerned with minimizing the dependencies between components in order to improve the scalability of applications.
- Loose coupling enables the applications to scale independently
It's about decreasing latency and increasing throughput, and about how important it is to utilize cloud resources efficiently.
- Get to know all the services and select the appropriate ones for the use case to maximize efficiency and performance.
Security responsibility is shared between the customer and Amazon:
- Amazon is responsible for security OF the cloud - the physical infrastructure and network infrastructure
- The customer is responsible for security IN the cloud - such as account and user management
For IaaS
For PaaS
For SaaS
EC2 Classic (old)
Latest EC2
File system for data storage
- Snapshots are stored in S3 incrementally, and snapshots are used to restore the data in new regions/Availability Zones
Issues in manual scaling:
automatic scaling:
Autoscaling depends on 3 main components
1. Launch configuration (what to launch) - specifies the AMI, EC2 instance configuration, security group, storage, etc.
2. Auto Scaling group (where to launch) - defines where instances are launched and the min/max/desired instance counts
3. Scaling policy (when to launch) - defines the monitoring thresholds that trigger launching or terminating instances
It allows the user to monitor the resource utilization, performance, network traffic, load, set alarm notifications
It refers to automatically provisioning resources; it takes care of capacity planning, load balancing, autoscaling, and application health monitoring.
It's a provisioning engine to automate infrastructure needs; the difference from Beanstalk is that the user can perform more granular configuration in OpsWorks.
Scripted way of automating deployment: a template file in JSON format holds specifications for the components/resources needed.
- Usecase such as replicating dev env to qa or staging etc
It is used to help configure and launch the required resources from the existing stack.
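A minimal template in the JSON format described above might look like the following sketch. The AMI ID is a placeholder, and a real template would add parameters, outputs, and further properties.

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative minimal stack: a single EC2 instance",
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": "t2.micro",
        "ImageId": "ami-12345678"
      }
    }
  }
}
```

Launching this template as a stack provisions the instance; replicating a dev environment to QA or staging then becomes launching the same template again.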
It is a Component service, it coordinates deployments to ec2 instances
Typical setup, but not scalable
A cache with each app server instance is also not an ideal solution
ElastiCache supports two types of caching engines:
- Memcached
- Redis
- Write Through Pattern
Pros
It increases the cache hit rate as all the data is kept in the cache; data is updated in the cache irrespective of demand
Cons
Requires more storage, as all the data is kept in memory
- Lazy Load
Pros
Keeping only needed data in the memory, so less memory requirement
Cons
Higher cache miss rate, hence lower performance
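The lazy load pattern above can be sketched in plain Python. The `LazyCache` class is hypothetical (not an ElastiCache API); the database is represented by a plain callback.

```python
# Illustrative cache-aside (lazy load) sketch; class and method names
# are hypothetical, not an ElastiCache client API.

class LazyCache:
    def __init__(self, load_from_db):
        self._cache = {}            # in-memory store (stand-in for Memcached/Redis)
        self._load = load_from_db   # fallback data source on a miss
        self.misses = 0

    def get(self, key):
        if key in self._cache:      # cache hit: no database round trip
            return self._cache[key]
        self.misses += 1            # cache miss: load from the DB, then populate
        value = self._load(key)
        self._cache[key] = value
        return value
```

Only keys that are actually requested end up in memory, which is exactly why the memory requirement is lower, and the miss rate higher, than with write-through.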
CloudFront caches resources locally, as close to the users as possible; when a request comes in, it is routed to the lowest-latency network edge, which fetches the resources from the regional locations.
Objects stored in S3 are highly available and durable and follow an "Eventual Consistency" model: whenever an object changes, there is latency in propagating the change to all the replicas. This can cause the storage to return an object even after a delete request has been made.
So it is best suited for objects that do not change much, such as archives, videos, and images.
Max object size is 5 TB, with no limit on the number of objects stored. Objects can be accessed via a REST API.
Extension of S3 - for data that is retrieved infrequently
Data is transitioned from S3 to Glacier when it is ready to be archived
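The S3-to-Glacier transition is typically expressed as a bucket lifecycle configuration. A sketch of the JSON shape, where the prefix and day count are illustrative:

```json
{
  "Rules": [
    {
      "ID": "ArchiveOldVideos",
      "Status": "Enabled",
      "Filter": { "Prefix": "videos/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

With this rule attached to a bucket, objects under `videos/` move to Glacier 90 days after creation, with no application code involved.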
Example: store the videos and high-quality images in S3 and store the thumbnails in RRS
- hosting static websites (https://www.linkedin.com/learning/aws-essential-training-for-architects/use-s3-for-web-application-hosting?autoSkip=true&resume=false)
- static file storage
- Versioning
- Caching
- Throttling
- Scaling
- Security
- Authentication & authorization
- Monitoring
- Functions as the unit of scale
- Abstracts the runtime
- The function
- some custom code/script that performs business logic
- Event Sources
- a trigger to execute the function, ex: trigger the function when an object is added into s3 bucket (when any bucket event is occurred)
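A minimal sketch of such a function: the event shape below mirrors the S3 notification record structure, and the bucket/key names in the usage are illustrative.

```python
# Hypothetical Lambda function body for an S3 event source; the handler
# name is the conventional entry point, not tied to any real deployment.

def handler(event, context):
    """Log each object added to the bucket and return the keys seen."""
    keys = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object s3://{bucket}/{key}")  # business logic goes here
        keys.append(key)
    return {"processed": keys}
```

Lambda invokes `handler` once per event batch; the function is the unit of scale, so concurrent uploads simply invoke more copies.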
study & expand
DynamoDB - NoSQL, Schema-less, scalable database service with low latency, high performance, high throughput
- data stored on SSDs
- data automatically replicated across multiple Availability zones
It is a reliable, durable, highly scalable distributed system for passing messages between components.
Used to build loosely coupled systems (minimizing the dependencies).
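The loose-coupling idea can be illustrated with Python's standard `queue` module standing in for SQS; the function names are hypothetical.

```python
# Plain-Python sketch of queue-based decoupling. The in-memory queue
# plays the role of SQS: the producer never calls the consumer directly,
# so each side can scale or fail independently.
import queue

order_queue = queue.Queue()   # stand-in for an SQS queue

def place_order(order):
    """Producer: enqueue and return immediately."""
    order_queue.put(order)

def process_orders():
    """Consumer: poll the queue at its own pace."""
    processed = []
    while not order_queue.empty():
        processed.append(order_queue.get())
    return processed
```

If the consumer is down, messages simply wait in the queue instead of failing the producer, which is the core benefit over a tightly coupled direct call.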
To configure and coordinate the tasks in the given workflow.
Example: Tightly Coupled E-Commerce System
- Push notifications rather than pull
- Posting to a topic causes the message to be sent immediately
- SNS lets us push notifications, whereas SQS requires applications to poll constantly (pull approach)
- Delivery modes: email, text message
- Simple Monthly Calculator tool - analyzes the services and usage to provide cost metrics
- Get detailed billing reports by account/service/tag - monthly, daily, or hourly
- Cost Explorer - ui to get interactive reports
- Billing alarms - using CloudWatch and SNS to get billing notifications whenever a threshold is reached
- Create Budgets
- Load Balancers
- EC2 Instances for the application/api deployments
- S3 buckets for the storage needs
- Lambda for serverless computing
- DynamoDB for nosql database requirements
- RDS for database needs
- CloudWatch for monitoring resources and alert systems
- CloudFront for CDN solutions
Fault tolerance refers to the ability of a system (computer, network, cloud cluster, etc.) to continue operating without interruption when one or more of its components fail.
It's about how well the system withstands the load when one or more of its components fail.
The objective of creating a fault-tolerant system is to prevent disruptions arising from a single point of failure, ensuring the high availability and business continuity of mission-critical applications or systems.
It's about avoiding loss of service by ensuring enough resources are available to serve the load.
High availability refers to a system’s ability to avoid loss of service by minimizing downtime. It’s expressed in terms of a system’s uptime, as a percentage of total running time. Five nines, or 99.999% uptime, is considered the “holy grail” of availability.
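The uptime percentages translate into downtime budgets with simple arithmetic; a quick sketch:

```python
# Back-of-envelope downtime budget for a given uptime SLA.
# "Five nines" (99.999%) allows only about 5.3 minutes of downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(uptime_percent):
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)
```

By comparison, "three nines" (99.9%) allows roughly 8.8 hours of downtime per year, which shows how steep each extra nine is.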
Frequently asked questions topics
- AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for a variety of applications and workloads.