AWS Core Services
- Module Introduction
- Amazon Elastic Compute Cloud (EC2)
- Amazon Elastic Block Store (EBS)
- Amazon Simple Storage Service (S3)
- AWS Global Infrastructure
- Amazon Virtual Private Cloud (VPC)
- Security Groups
Module Introduction
Welcome to Module 2: Core Services. This module is composed of a series of videos that provide high-level information about several AWS Cloud services. To make this content easier to consume, the core services are discussed in independent videos. You must watch all of the associated videos. By the end of this module, you should be able to:
- Talk about key services on the AWS platform and their common use cases

With that, let's get started.
Amazon Elastic Compute Cloud (EC2)
Hi, I'm Mike Blackmer. I work for AWS Training and Certification as a curriculum developer. I'm presenting the Amazon EC2 overview. I'll start by showing you some basic facts about the product, then deliver a demonstration that shows you how to build and configure an Amazon EC2 instance. First, what is EC2? It stands for Elastic Compute Cloud. Compute refers to the compute, or server, resources that are being presented. There are a lot of different fun and exciting things you can do with servers. Cloud refers to the fact that these are cloud-hosted compute resources. Elastic refers to the fact that, if properly configured, you can increase or decrease the number of servers required by an application automatically according to the current demands on that application. But let's stop calling them servers and use the proper name of Amazon EC2 instances. Instances are pay as you go. You only pay for running instances, and only for the time that they're running. There is a broad selection of hardware and software, and a selection of where to host your instances. There's a lot more to it than just that. For more information, go to AWS.amazon.com/ec2. Now I'd like to demonstrate how to build and configure an EC2 instance, and during that process, I'd like to go into detail about the topics that we've covered so far. During the demonstration, we log in to the AWS console, choose a region where we're going to host our instance, launch the EC2 wizard, and select the AMI, which stands for Amazon Machine Image, providing us with a software platform for our instance. Then we'll select the instance type, referring to the hardware capabilities. Then we'll configure the network, configure storage, and finally configure key pairs, which will allow us to connect to the instance after we've launched it. I've already logged into the console. My first choice is to choose the region where the EC2 instance will be hosted. Right now it's set for Oregon, which happens to be close by. I click on the dropdown list and I could choose from any of these other regions, but I'll leave it at Oregon. Now let's go ahead and click on Services, EC2, and click Launch Instance. The first selection criterion is the Amazon Machine Image (AMI), and that refers to the software load that will come with the instance once it's launched. Quick Start gives you a list of a variety of Linux and Windows servers. There's also a marketplace with third-party images and My AMIs, in case you've built some of your own. We're going to select the Amazon Linux AMI. The next screen gives us the hardware selection. They're called instance types, and we can scroll down and see a large list of options, from eight cores and 32 gigs of memory up to 64 cores. There's a broad variety here. We're going to go low-end for the demonstration and choose the T2 Micro instance type. Next we'll choose Configure Instance Details. I do have the option to create tons of instances that will share the same hardware and software build. Seems like we're limited to 100,000. You know what? I want to keep my job, so I'm going to select one. We'll build one instance. Scrolling down, here is where we'll configure the network configuration, and here we're going to stick with the defaults: the default virtual private cloud (VPC), the default subnet, and the default auto-assign settings, which will give us a DHCP address. Skip down, everything else looks pretty good. Next we'll add storage. I can increase the size of the root volume, make that a 12-gig volume. I can change the type of disk.
I can also add a new volume. We'll make that 16 gigs to keep things interesting. Also, I want this volume to be deleted if and when we choose to terminate, or delete, the instance. Next we'll add tags. By default, an EC2 instance is given a rather cryptic identifier, so we want to give it a friendly name. Click Add Tag, key of Name, and for the value I'll just call it EC2 Demo. Next we'll configure the security group. A security group is a set of firewall rules. It automatically creates a default rule for SSH connectivity. I can add another rule to allow for simple web connectivity. I give it a simple name like SSH HTTP, so I know exactly what the security group provides. Now I click Review and Launch. It provides us with an overview reminding us of our selections. It all looks like what I planned, and now I click Launch. Now, in order to connect to the system using SSH, I need to create a key pair, so I click Create a New Key Pair, call it EC2 Demo, and download the private key. I save it locally. That key is absolutely required in order to connect over SSH. And now the magic button to launch the instance. It's been successfully initiated. Things look good in the launch log, and by the way, there's that cryptic identifier. I click on that. There's my friendly name, and it says the instance state is pending. I click on the Refresh button and now it's running. That's terrific. Now that we've built it, let's try to access it. I highlight the instance, and under Description, I can find the public DNS and public IP address of the instance, so I'm going to copy that, launch PuTTY, and the default user is ec2-user, so I type ec2-user@, paste in the DNS, and click Open. Cache that host key. Oh, it doesn't work, because I haven't configured the private key yet, so I create a new session with the same information, and now I'll select SSH and Auth and browse for that private key. I saved it to this folder here, and it's not there. PuTTY on Windows requires a PPK file, so I need to open another application called PuTTYgen. Click Load, get to the right folder, make sure we can see that there's the PEM file, select that, and save the private key. This will save it as a PPK file. And there it is under the PuTTY selection window. Now when I open the connection, it automatically logs us in, and we've been successful. I hope you found the demonstration informative. For AWS Training and Certification, I'm Mike Blackmer.
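For reference, here is a minimal command-line sketch of the same kind of launch using the AWS CLI, in case you prefer scripting over the console. The AMI ID, security group ID, key pair name, and public DNS name are placeholders you would replace with your own values; this is an illustrative sketch, not the exact steps from the video.

    # Create a key pair and save the private key locally (placeholder name; adjust to your setup).
    aws ec2 create-key-pair --key-name EC2Demo \
        --query 'KeyMaterial' --output text > EC2Demo.pem
    chmod 400 EC2Demo.pem

    # Launch a single t2.micro instance from an Amazon Linux AMI.
    # ami-xxxxxxxx and sg-xxxxxxxx are placeholders for a real AMI ID and security group ID.
    aws ec2 run-instances \
        --image-id ami-xxxxxxxx \
        --instance-type t2.micro \
        --count 1 \
        --key-name EC2Demo \
        --security-group-ids sg-xxxxxxxx \
        --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=EC2 Demo}]'

    # Once the instance is running, connect over SSH with the private key.
    ssh -i EC2Demo.pem ec2-user@<public-dns-of-instance>

The console wizard in the demonstration performs the same work behind these calls: choosing an AMI, an instance type, a key pair, and a security group.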
Amazon Elastic Block Store (EBS)
Welcome to an Amazon Elastic Block Store (Amazon EBS) introductory video. I'm Rafael Lopes with the AWS Training and Certification team. As part of the team, I have helped develop and deliver exclusive training content like this. In this brief video, I will be introducing the Amazon EBS service with a demonstration, so let's get started. EBS volumes can be used as a storage unit for your Amazon EC2 instances, so whenever you think you need disk space for your instances running on AWS, you can think about them. These volumes can be hard disks or SSD devices, and you pay as you use, so whenever you don't need a volume anymore, you can just delete it and stop paying for it. EBS volumes are designed to be durable and available. This means that the data in a volume is automatically replicated across multiple servers running in the Availability Zone. I made the comparison between EBS volumes and physical media devices like hard disks or SSDs, but they're actually much more durable than that because of the block-level replication. When creating an EBS volume, you can select the type of storage that best fits your needs. You can choose between magnetic or SSD, based on your performance and price requirements. It's all about choosing the right tool for the right job. For example, if you are running a database instance, you can configure the database to use a secondary volume for the data, which may perform faster than the volume assigned to the operating system. Or you can assign a magnetic volume for the logs, because magnetic is cheaper. To provide an even higher level of data durability, Amazon EBS gives you the ability to create point-in-time snapshots of your volumes, and AWS allows you to recreate a new volume from a snapshot at any time. You can share snapshots or even copy snapshots to different AWS Regions for even greater disaster recovery (DR) protection. You can, for example, encrypt and share your snapshots from Virginia to Tokyo. You can also encrypt EBS volumes at no additional cost. The encryption occurs on the EC2 side, so the data moving between the EC2 instance and the EBS volume inside AWS data centers will be encrypted in transit. As your company grows, the amount of data stored on your EBS volumes will likely also grow. EBS volumes have the ability to increase capacity and change to different types, meaning that you can change from hard disk to SSD or increase from a 50-gigabyte volume to a 16-terabyte volume. For example, you can do this resize operation on the fly without needing to stop the instances. So let me do a demonstration to show you how fast and easy it is to create a new volume and attach that volume to an EC2 instance. Inside the AWS Management Console, the EC2 instances and the EBS volumes can be found on the EC2 console, which you can open by clicking EC2 under the Compute tab. If we click here on Instances, we can see that I have many instances running. The volumes are located here on the sidebar in Volumes under Elastic Block Store, or EBS volumes. Those are the volumes that I have in my account. If I want to create a new volume and attach that new volume to an instance, and in this case I'm attaching it to the Linux instance, the EBS volume must be created in the same Availability Zone where the instance resides. So if this instance is in us-east-1b, when I create the volume, I also need to create the volume in us-east-1b. So let's do that. Here in Volumes, click Create Volume.
And here the first thing that I specify is the Availability Zone, us-east-1b, because I will want to attach this EBS volume to an instance that is running in us-east-1b. Here I have the option to specify the volume type, such as magnetic or SSD. The general purpose SSD volume will be billed only for its size in gigabytes, so if I want to create a volume that has 25 gigabytes, I specify the 25 gigabytes here. This is where I could restore a snapshot to a volume, which in this case I don't want to do. And then I click Create Volume. This is the volume ID that has been created for me. If I click Close, I have the option of sorting these volumes by created date, volume type, and size. We can see that this is the volume that I just created: 25 gigabytes, volume type gp2, which is SSD. Now that the volume is created, we want to attach that volume to an EC2 instance. So I just click here on Actions, Attach Volume, and then I specify the instance that I want to attach the volume to, which in this case is the Linux instance, and the device, so let's say /dev/sdb. Attach. Now if we look inside that instance, which we can do by clicking here on Instances, selecting the Linux instance, clicking here on Connect, and copying the SSH command (because this is a Linux instance and I'm using macOS), I can go back to my terminal and run the SSH command. So I copy the SSH command and paste it into my terminal. Now I am connected inside my EC2 instance. If I run the command lsblk, I can see the block storage devices that I have attached to this instance. And we can clearly see here that /dev/xvdb, which corresponds to the sdb device we specified, is a 25-gigabyte disk. Now with this EBS volume attached, we can create a file system on it, so we run a mkfs command on /dev/xvdb. It has to be run as root. And then my Linux operating system creates a file system on this volume. If I run lsblk again, nothing changes here, but now I am able to mount that volume onto a folder on my Linux machine. If this was a Windows machine, I would need to go into Disk Management, and then I could create the file system and mount it from there. To mount on a Linux machine, we use the mount command, giving the device, xvdb, and the folder where we want to mount that volume. Only root can do that, so let's do it with root's permissions. And now that volume is mounted on the /mnt folder. If we go to the /mnt folder, we have our file system, so we can create our files, our directories, our symbolic links, and everything that a block storage device gives us the ability to do. This is a text file. If I do an ls, now I can see my file there. I can create a directory. I can move files to that directory. If I do an ls, I have a folder. If I enter that folder, my file is inside that folder. So you can see how easy it is to create, attach, and format an EBS volume for an EC2 instance. At any time, I can go back here and, with the umount command, unmount the volume from the folder, and then I am able to go back to the AWS Management Console, click Volumes, select my volume, and detach this volume from my instance. If the volume is detached, it will stay in the available state. You can see that this volume is currently in use because it's actually attached to my instance. Once this volume is available, I can attach it to another EC2 instance in the same Availability Zone, which in this case is us-east-1b. I can also put tags on this volume.
So if this volume is being used by a database, I can put the tag value "database volume," and that's it. Now this volume is the database volume. Tags are very important because whenever you put tags on your AWS resources, you can drill down your billing per tag, so you can see how much all of the volumes with the tag key Name and the tag value "database volume" cost in a certain period of time, in the same way as EC2 instances, EBS snapshots, and everything else that supports tags. And that's it. In summary, we reviewed what EBS volumes are, and you saw a demo of how to create and attach an EBS volume to a Linux EC2 instance. I hope you learned a little something and will continue to explore our videos. I'm Rafael Lopes with the AWS Training and Certification team. Thanks for watching.
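As a rough sketch of the same workflow using the AWS CLI and a Linux shell: the volume ID, instance ID, and device names below are placeholders, and the ext4 filesystem type is an assumption rather than something specified in the demo.

    # Create a 25 GiB general purpose SSD (gp2) volume in the same AZ as the instance.
    aws ec2 create-volume --availability-zone us-east-1b \
        --size 25 --volume-type gp2 \
        --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=database volume}]'

    # Attach the volume to the instance (vol-xxxxxxxx and i-xxxxxxxx are placeholders).
    aws ec2 attach-volume --volume-id vol-xxxxxxxx \
        --instance-id i-xxxxxxxx --device /dev/sdb

    # On the instance: list block devices, create a filesystem, and mount it.
    lsblk
    sudo mkfs -t ext4 /dev/xvdb   # ext4 is an assumption; use whatever filesystem you need
    sudo mount /dev/xvdb /mnt

    # Unmount before detaching the volume in the console or with detach-volume.
    sudo umount /mnt
    aws ec2 detach-volume --volume-id vol-xxxxxxxx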
Amazon Simple Storage Service (S3)
Welcome to this video about Amazon Simple Storage Service, also known as Amazon S3. My name is Heiwad Osman and I'm a technical trainer with AWS. I'll introduce you to Amazon S3, cover common usage scenarios, and then we'll get to see a quick demo of S3 in action. Let's begin. Amazon S3 is a fully managed storage service that provides a simple API for storing and retrieving data. This means that the data you store in S3 isn't associated with any particular server, and you don't have to manage any infrastructure yourself. You can put as many objects into S3 as you want. S3 holds trillions of objects and regularly peaks at millions of requests per second. Objects can be almost any data file, such as images, videos, or server logs. Since S3 supports objects as large as several terabytes in size, you could even store database snapshots as objects. Amazon S3 also provides low-latency access to the data over the internet by HTTP or HTTPS, so you can retrieve data anytime from anywhere. You can also access S3 privately through a virtual private cloud endpoint. You get fine-grained control over who can access your data using identity and access management policies, S3 bucket policies, and even per-object access control lists. By default, none of your data is shared publicly. You can also encrypt your data in transit and choose to enable server-side encryption on your objects. Let's take a file we want to store, such as this Welcome video. First, we need a place to put it. In S3, you can create a bucket to hold your data. When we want to put this video into our bucket as an object, we need to specify a key, which is just a string that can be used to retrieve the object later. A common practice is to set these strings in a way that resembles a file path. Let's put our video into S3 as an object with the corresponding key. When you create a bucket in S3, it's associated with a particular AWS region. Whenever you store data in the bucket, it is redundantly stored across multiple AWS facilities within your selected region. The S3 service is designed to durably store your data, even in the case of concurrent data loss in two AWS facilities. S3 will automatically manage the storage behind your bucket even as your data grows. This allows you to get started immediately and to have your data storage grow with your application needs. S3 will also scale to handle a high volume of requests. You don't have to provision the storage or throughput, and you'll only be billed for what you use. You can access S3 via the management console, the AWS CLI, or the AWS SDKs. Additionally, you can also access the data in your bucket directly via REST endpoints, which support HTTP or HTTPS access. Here we see an example of a URL for an object, constructed from the bucket name, the S3 endpoint for the selected region, and the key we used when we stored the object. To support this type of URL-based access, S3 bucket names must be globally unique and DNS compliant. Also, object keys should use characters that are safe for URLs. This flexibility to store a virtually unlimited amount of data and access that data from anywhere makes the S3 service suitable for a wide range of scenarios. Let's look at some use cases for S3. As a location for any application data, S3 buckets provide that shared location for storing objects that any instances of your application can access, including applications on EC2 or even traditional servers.
This can be useful for user-generated media files, server logs, or other files your application needs to store in a common location. Also, because the content can be fetched directly over the web, you can offload serving of that content from your application and allow clients to directly fetch the data themselves from S3. For static web hosting, S3 buckets can serve up the static contents of your website, including HTML, CSS, JavaScript, and other files. The high durability of S3 makes it a good candidate to store backups of your data. For even greater availability and disaster recovery capability, S3 can even be configured to support cross-region replication, such that data put into an S3 bucket in one region can be automatically replicated to another region. The scalable storage and performance of S3 make it a great candidate for staging or long-term storage of data you plan to analyze using a variety of big data tools. For example, data staged in S3 can be loaded into Redshift, processed in EMR, or even queried in place using tools such as Amazon Athena. You can also import or export large volumes of data into S3 using AWS Import/Export devices such as Snowball. Given how simple it is to store and access data with S3, you'll find yourself using it frequently with AWS services and for other parts of your application. Now that we've covered S3 functionality and common use cases, you should be well on your way to identifying how S3 can help you as you build your application on AWS. Let's switch gears to a demo of S3 in action. Here we are in the Amazon S3 section of the AWS Management Console, and I can see that I have a listing of different buckets. What we're going to do in this section is go ahead and create a new bucket, then add some data to it, and retrieve that data. Let's go ahead and click Create Bucket. Here, I get prompted to set a bucket name and region, so the bucket name needs to be DNS-compliant. I'll go ahead and set a name of Amazing Bucket 1, and then my next decision is on the region. In my case, I know that I have an application running on EC2 instances that needs to access this data, and that set of EC2 instances is in the Oregon region. So I'll go ahead and set my region to be US West (Oregon), and at this point, I've made all the decisions that I need to make in order to create a bucket. The other steps in this wizard let me do things like turn on versioning for my bucket or change the default permissions so that I could delegate access to this bucket to public internet users or to specific AWS users. In this case, I know that I want the defaults, so I'll just go ahead and hit Create. Now we can see that I have a bucket. It's called Amazing Bucket 1. I'll go ahead and click on it. I'm told that the bucket is empty and that I can upload new objects. I can also see what the properties and permissions are for this bucket, but I'll go ahead and hit Upload. I see that in the management console, I have the ability to drag and drop files and to modify the permissions on those files, but I would actually prefer to upload my data using the AWS CLI. So here I have opened the terminal window, and we can see that in this terminal window, I'm in a folder called Assets, and I have a file called demo.txt. Let's take a quick peek at this file; we can see that it's a text file. What I'm going to do is copy this file over to my S3 bucket, so I can access it later from my EC2 instances.
I'll go ahead and use the S3 copy command to copy that demo.txt to an object that lives in Amazing Bucket 1 underneath the key hello.txt. So I've gone ahead and uploaded that data. I can also take the contents of a folder on my local machine and sync them using the sync command, and the CLI will handle each of those files, check to see if it exists in the bucket, and then, if it doesn't, go ahead and upload it. So now we've also uploaded code.zip and random.csv to my bucket. If I now SSH into an EC2 instance, we can see that this instance has been provisioned with an IAM role that gives it access to read any of the S3 buckets in my account. Let's go ahead and check, from the EC2 instance, what content is in Amazing Bucket 1. So I go ahead and do an aws s3 ls on Amazing Bucket 1, and I'll make that recursive so it'll check through all the paths. And I see that I have these three files. So just like before, I can do that copy command, but now I do it in reverse by specifying the bucket name first. And I've copied that hello.txt out of my bucket. I can go ahead and do an ls on my local EC2 instance storage and I see hello.txt. If I do a cat, I can peek at the file, and I can see that we have my text file downloaded. I could also do that sync command in reverse. So now I can sync the contents of amazing-bucket-1/files to a local folder on my EC2 instance. And now I can see that I have a folder. The contents of that folder are those two files, code.zip and random.csv. So that really covers a simple getting started with S3: putting data in and getting data back out. Let's switch back over to the management console and refresh it. Now I should see that I have some files in my S3 bucket. These are the same files that I saw in the AWS CLI. I'll go ahead and hit hello.txt, and I can see some options. So here I have the opportunity to change properties and permissions on a per-object basis, and I can see some of the attributes of this file. So that really covers getting started with the S3 service. In this video, we've covered an introduction to S3 and some common use cases. And in our demo, we got to go hands-on with creating a bucket, copying files into our bucket, and then downloading those files from an EC2 instance. Thanks for watching.
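For quick reference, here is a hedged sketch of the CLI commands this kind of demo relies on; the bucket name, region, folder, and file names are placeholders for your own values rather than the exact ones used on screen.

    # Create a bucket in a chosen region (bucket names must be globally unique and DNS compliant).
    aws s3 mb s3://amazing-bucket-1 --region us-west-2

    # Copy a local file into the bucket under a chosen key.
    aws s3 cp demo.txt s3://amazing-bucket-1/hello.txt

    # Sync a local folder into the bucket; only new or changed files are uploaded.
    aws s3 sync ./files s3://amazing-bucket-1/files

    # List everything in the bucket, then copy an object back down and sync a prefix locally.
    aws s3 ls s3://amazing-bucket-1 --recursive
    aws s3 cp s3://amazing-bucket-1/hello.txt hello.txt
    aws s3 sync s3://amazing-bucket-1/files ./files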
AWS Global Infrastructure
Welcome to the lesson on AWS Global Infrastructure. When you’re hosting your IT needs with AWS, it’s important to understand how AWS is set up. AWS’s global infrastructure can be broken down into three topics: AWS Regions, Availability Zones, and edge locations. Regions are geographic areas that host two or more Availability Zones, and are the organizing level for AWS services. When you deploy resources with AWS, you’ll pick the region where those resources are located. When doing so, it is important to consider which region will help you optimize latency while minimizing costs and adhering to regulatory requirements. You can also deploy resources in multiple regions to better suit your business’s needs. For example, if your developer resources necessitate deployment in one region, but your primary customer base is in a different region, you can deploy development assets in one region and your customer-facing solution in a different region. Or, you can deploy the same resources in multiple regions, allowing you to provide a consistent experience globally, regardless of your customers’ location. You will be minimizing latency and increasing agility for your organization within minutes and with minimal cost! Finally, regions are completely separate entities from one another. Resources in one region are not automatically replicated to other regions, and not all services are available in all regions, though some of the most common AWS services are available in all regions, like Amazon S3 or Amazon EC2. You can see which services are available in each region by checking the global infrastructure page. (Display on screen: https://aws.amazon.com/about-aws/global-infrastructure/)
Next, let’s talk about Availability Zones. Availability Zones are a collection of data centers within a specific region. Each Availability Zone is physically isolated from the others, but connected together by a fast, low-latency network. Each Availability Zone is a physically distinct, independent infrastructure. They are physically and logically separate. They also each have their own discrete, uninterruptible power supply; onsite backup generators; cooling equipment; and networking and connectivity. They are supplied with power by different grids from independent utility companies, and are networked through multiple tier-1 transit providers. Isolating the Availability Zones means they are protected from failures in other zones, which ensures high availability. Data redundancy within a region means that if one zone goes down, the other zones can handle requests. That’s one reason that AWS recommends provisioning your data across multiple Availability Zones as a best practice. Now let’s cover the third component, edge locations. AWS edge locations host a content delivery network, or CDN, called Amazon CloudFront. CloudFront is used to deliver content to your customers. Requests for content are automatically routed to the nearest edge location so that the content is delivered faster to the end users. When you use the global network of edge locations, your customers have access to quicker content delivery. Edge locations are typically located in highly populated areas. In this training, you have been introduced to the general AWS infrastructure, including Regions, Availability Zones, and edge locations. We also covered some of the unique characteristics of Availability Zones that make them highly reliable and durable. For more information on the infrastructure of AWS, visit AWS.amazon.com/about-aws/global-infrastructure (display on screen as well: AWS.amazon.com/about-aws/global-infrastructure).
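If you want to explore the regions and Availability Zones available to your own account, one illustrative way is the AWS CLI. This is just a sketch; it assumes the CLI is installed and configured, and the region name is a placeholder.

    # List all regions currently available to your account.
    aws ec2 describe-regions --query 'Regions[].RegionName' --output table

    # List the Availability Zones within a specific region (us-west-2 is a placeholder).
    aws ec2 describe-availability-zones --region us-west-2 \
        --query 'AvailabilityZones[].ZoneName' --output table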
Amazon Virtual Private Cloud (VPC)
Hello, my name is Kent Rademacher and I will be the instructor for this module. I'm currently a senior technical trainer for AWS and teach architecting on AWS and system operations on AWS. In this module, you will learn about Amazon Virtual Private Cloud, or Amazon VPC. I will first introduce this service, followed by a feature review of Amazon VPC. We will then walk through an example Amazon VPC configuration, utilizing the features previously discussed. Finally, we will summarize and cover next steps for further learning about Amazon VPC. The AWS cloud offers pay-as-you-go, on-demand compute as well as managed services, all accessible via the web. These compute resources and services must be accessible via normal IP protocols implemented with familiar network structures. Customers need to adhere to networking best practices as well as meet regulatory and organizational requirements. Amazon Virtual Private Cloud, or VPC, is the networking AWS service that will meet your networking requirements. Amazon VPC allows you to create a private network within the AWS cloud that uses many of the same concepts and constructs as an on-premises network, but as we shall see later, much of the complexity of setting up a network has been abstracted without sacrificing control, security, and usability. Amazon VPC also gives you complete control of the network configuration. Customers can define normal networking configuration items such as IP address spaces, subnets, and routing tables. This allows you to control what you expose to the Internet and what you isolate within the Amazon VPC. You can deploy your Amazon VPC in a way that layers security controls in the network. This includes isolating subnets, defining access control lists, and customizing routing rules. You have complete control to allow and deny both incoming and outgoing traffic. Finally, there are numerous AWS services that deploy into your Amazon VPC and then inherit and take advantage of the security that you have built into your cloud network. Amazon VPC is an AWS foundational service and integrates with numerous AWS services. For instance, Amazon Elastic Compute Cloud (Amazon EC2) instances are deployed into your Amazon VPC. Similarly, Amazon Relational Database Service (Amazon RDS) database instances deploy into your VPC, where the database is protected by the structure of the network just like your on-premises network. Understanding and implementing Amazon VPC will allow you to fully use other AWS services. Let's take a look at the features of Amazon VPC. Amazon VPC builds upon the AWS global infrastructure of Regions and Availability Zones, and allows you to easily take advantage of the high availability provided by the AWS cloud. Amazon VPCs live within regions and can span multiple Availability Zones. Each AWS account can create multiple VPCs that can be used to segregate environments. A VPC defines an IP address space that is then divided into subnets. These subnets are deployed within Availability Zones, causing the VPC to span Availability Zones. You can create many subnets in a VPC, though fewer are recommended to limit the complexity of the network topology, but this is totally up to you. You can configure route tables for your subnets to control the traffic between subnets and the Internet. By default, all subnets within a VPC can communicate with each other. Subnets are generally classified as public or private, with public having direct access to the Internet and private not having direct access to the Internet.
For a subnet to be public, we need to attach an Internet gateway to the VPC and update the route table of the public subnet to send non-local traffic to the Internet gateway. Amazon EC2 instances also need a public IP address to route to an Internet gateway. Let's design an example Amazon VPC that we can use to start deploying compute resources and AWS services. We'll create a network that supports high availability and uses multiple subnets. First, since VPCs are region based, we need to select a region. I've selected the Oregon Region. Next, I'll create the VPC. I'll give it a name, Test VPC, and I'll define the IP address space for the VPC. The 10.0.0.0/16 is in classless inter-domain routing (CIDR) format and means that I have over 65,000 IP addresses to use in the VPC. Next, I create a subnet named Subnet A1. I have assigned an IP address space that contains 256 IP addresses. Also, I specify that this subnet will live in Availability Zone A. Next, I create another subnet called Subnet B1 and assign an IP address space, but this one contains 512 IP addresses. I've added an Internet gateway called Test IGW. Subnet A1 will become a public subnet where non-local traffic is routed through the Internet gateway. Subnet B1 will be our private subnet that is isolated from the Internet. Let's summarize what we have accomplished and review some next steps. We have learned how to create VPCs, Internet gateways, and subnets. Next steps include learning about other VPC features such as routing tables, VPC endpoints, and peering connections. Also, you can learn about deploying AWS resources into your VPC. More information is available at AWS.amazon.com/VPC. I hope you learned a little something and will continue to explore other videos. I'm Kent Rademacher with AWS Training and Certification. Thanks for watching.
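As a hedged sketch of the example above using the AWS CLI: the subnet CIDR blocks are assumptions chosen to give roughly 256 and 512 addresses, the Availability Zone names are placeholders, and the vpc-, igw-, and rtb- IDs stand in for the IDs returned by the earlier calls.

    # Create the VPC with a /16 address space (about 65,000 addresses).
    aws ec2 create-vpc --cidr-block 10.0.0.0/16

    # Create a public subnet (/24, 256 addresses) and a private subnet (/23, 512 addresses)
    # in separate Availability Zones. vpc-xxxxxxxx is a placeholder for the returned VPC ID.
    aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24 \
        --availability-zone us-west-2a
    aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.2.0/23 \
        --availability-zone us-west-2b

    # Create and attach an Internet gateway, then route non-local traffic from the
    # public subnet's route table to it (igw-xxxxxxxx and rtb-xxxxxxxx are placeholders).
    aws ec2 create-internet-gateway
    aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx
    aws ec2 create-route --route-table-id rtb-xxxxxxxx \
        --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx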
Security Groups
In today’s video, I’ll be giving you a brief introduction to AWS security groups. Security of the AWS Cloud is one of Amazon Web Services’ highest priorities, and we provide many robust security options to help you secure your data in the AWS Cloud. One of the features I want to talk about is security groups. At AWS, security groups act like a built-in firewall for your virtual servers. With these security groups, you have full control over how accessible your instances are. At the most basic level, it is just another method to filter traffic to your instances. It gives you control over what traffic to allow or deny. To determine who has access to your instances, you would configure a security group rule. Rules can vary from keeping the instance completely private, to totally public, or somewhere in between. Here is a classic AWS multi-tier security group example. In this architecture, you will notice that multiple different security group rules have been created to accommodate this multi-tiered web architecture. If we start at the web tier, you will see that we have set up a rule to accept traffic from anywhere on the Internet on port 80/443 by selecting the source of 0.0.0.0/0. Next, if we move to the app tier, there is a security group that only accepts traffic from the web tier, and similarly, the database tier can only accept traffic from the app tier. Finally, you will notice that there has also been a rule created to allow administration remotely from the corporate network over SSH port 22. So let’s take a look at creating a security group. Here I am logged into the AWS Management Console, and I’m going to click EC2. In the navigation pane, under Network & Security, we see Security Groups. Let’s click that. Now we will see a list of security groups that belong to the account. To create a security group, we want to click Create Security Group. A pop-up window will appear. In this window, you will notice that you can create a name and a description, and finally attach it to a VPC. Next, if we go down here to the rules, by default all inbound traffic is DENIED and all outbound traffic is ALLOWED. If you want to edit this, you can do so by clicking the Inbound and Outbound tabs to adjust the rules. You can edit by traffic type, protocols, port ranges, and source. Best practice is to figure out what traffic is required for your instance and to specifically allow only that traffic. Alright! Let’s go over what we discussed today. AWS provides virtual firewalls that can control traffic for one or more instances, called security groups. You can control accessibility to your instances by creating security group rules. These security groups can be managed in the AWS Management Console. For more detailed information on security groups, you can visit http://aws.amazon.com/.
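Here is a hedged sketch of how rules like the multi-tier example above could be created with the AWS CLI; the group IDs, VPC ID, app port, and corporate network CIDR are placeholders and assumptions, not values from the video.

    # Create a security group for the web tier (vpc-xxxxxxxx is a placeholder).
    aws ec2 create-security-group --group-name web-tier \
        --description "Web tier: HTTP/HTTPS from anywhere" --vpc-id vpc-xxxxxxxx

    # Allow inbound HTTP and HTTPS from anywhere (0.0.0.0/0); sg-11111111 is the web tier group ID.
    aws ec2 authorize-security-group-ingress --group-id sg-11111111 \
        --protocol tcp --port 80 --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-id sg-11111111 \
        --protocol tcp --port 443 --cidr 0.0.0.0/0

    # App tier group (sg-22222222) accepts traffic only from the web tier group;
    # port 8080 is an assumed application port.
    aws ec2 authorize-security-group-ingress --group-id sg-22222222 \
        --protocol tcp --port 8080 --source-group sg-11111111

    # Allow SSH administration only from the corporate network (placeholder CIDR).
    aws ec2 authorize-security-group-ingress --group-id sg-11111111 \
        --protocol tcp --port 22 --cidr 203.0.113.0/24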