This post was contributed by Sekhar Sarukkai, chief scientist at SkyHigh Networks.
A few weeks ago, voter information on 198 million Americans was exposed, catapulting cloud security to the forefront of an increasingly turbulent cybersecurity discourse. More surprising still, the media analytics firm responsible for the voter information, Deep Root Analytics, had stored the sensitive data in an S3 bucket with no additional protections. Anyone who knew the bucket's six-character subdomain could have accessed the leaked data, which sat exposed for roughly two weeks.
The Deep Root leak highlighted the importance of proper cloud security configuration for AWS. As with many other IaaS platforms, AWS's security capabilities are often underused by its customers. That leaves sensitive data vulnerable to both internal and external threats, which range from DDoS attacks to malware and which may be deliberately malicious or simply the product of negligence. Whatever the cause, AWS users can limit their exposure by following security best practices for the AWS infrastructure itself, as well as for the applications deployed on it.
In recent years, Amazon has made sizeable investments in safeguarding its platform, developing security services such as AWS Shield for DDoS mitigation. More persistent and sophisticated attacks may still break through these defenses, but under the shared responsibility model Amazon largely fulfills its end: the AWS platform itself is well secured. In fact, Gartner estimates that through 2020, 95% of cloud security failures will be the customer's fault, not the result of any shortcoming in the platform's security infrastructure.
This shared responsibility model works by holding Amazon responsible for maintaining security “of” the cloud, which includes monitoring the AWS infrastructure and responding to cases of fraud and abuse. In this model, the customer is responsible for the security “in” the cloud, which refers to configuring the services themselves, installing updates, and using AWS in a secure manner.
AWS Best Practices
The following best practices ensure that both Amazon and the customers do their part to secure data in AWS.
1. Turn on CloudTrail log file validation
CloudTrail log file validation is essential because it ensures that any change to a log file can be detected after the file has been delivered to its S3 bucket. Validation does not by itself prevent unauthorized access, but it protects the integrity of your audit trail: if an attacker modifies or deletes log files to cover their tracks, the tampering becomes evident.
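As a sketch, validation can be turned on for an existing trail with the AWS CLI; the trail name, account ID, and start time below are placeholders:

```shell
# Enable log file validation on an existing trail
# ("my-trail" is a placeholder name)
aws cloudtrail update-trail \
    --name my-trail \
    --enable-log-file-validation

# Later, check that delivered log files have not been modified or deleted
aws cloudtrail validate-logs \
    --trail-arn arn:aws:cloudtrail:us-east-1:123456789012:trail/my-trail \
    --start-time 2017-07-01T00:00:00Z
```

The `validate-logs` command walks the digest files that CloudTrail delivers alongside the logs and flags any log file whose hash no longer matches.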
2. Enable access logging for CloudTrail S3 buckets
Access logging is an important safeguard because it lets customers identify unauthorized access attempts, potentially heading off security threats. With access logging enabled, every request made against the S3 buckets where CloudTrail stores its log data is itself recorded. That record is also valuable for ongoing monitoring and for forensic investigation after a breach has occurred.
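One way to wire this up, using illustrative bucket names, is with the s3api CLI; note that the target bucket must first grant write permission to S3's Log Delivery group:

```shell
# Record every access to the bucket holding CloudTrail logs
# (both bucket names are placeholders)
aws s3api put-bucket-logging \
    --bucket my-cloudtrail-bucket \
    --bucket-logging-status '{
      "LoggingEnabled": {
        "TargetBucket": "my-access-log-bucket",
        "TargetPrefix": "cloudtrail-access/"
      }
    }'
```

Keeping the access logs in a separate bucket avoids the bucket logging its own log deliveries.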
3. Enable multifactor authentication (MFA)
Multifactor authentication adds a second layer of security at login by requiring the user to enter a code from a separate device. It should be enabled for the root user and for all IAM accounts. For the root user, however, the MFA device should be a dedicated device kept in a secure location rather than someone's personal phone: a personal device can be lost, or its owner may leave the company, and either event could lock the organization out of the root account.
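For an IAM user, a virtual MFA device can be created and attached via the CLI; the user name, device name, and authentication codes here are placeholders, with the two consecutive codes read from the authenticator app after scanning the generated QR code:

```shell
# Create a virtual MFA device, writing its QR code to a local file
aws iam create-virtual-mfa-device \
    --virtual-mfa-device-name alice-mfa \
    --outfile alice-mfa-qr.png \
    --bootstrap-method QRCodePNG

# Bind the device to the user with two consecutive codes from the app
aws iam enable-mfa-device \
    --user-name alice \
    --serial-number arn:aws:iam::123456789012:mfa/alice-mfa \
    --authentication-code1 123456 \
    --authentication-code2 789012
```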
4. Maintain a strict password policy
Users often choose passwords that are easy to remember but also easy to guess. By establishing and maintaining a strict password policy, users are better protected from brute force login attempts. In general, effective passwords should contain at least one uppercase letter, one lowercase letter, one number, and one symbol, and be at least 14 characters long.
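The policy described above can be enforced account-wide with a single CLI call, roughly as follows:

```shell
# Apply the account password policy: 14+ characters with mixed
# character classes required
aws iam update-account-password-policy \
    --minimum-password-length 14 \
    --require-uppercase-characters \
    --require-lowercase-characters \
    --require-numbers \
    --require-symbols
```

The policy applies to IAM user passwords at their next change; it does not retroactively invalidate existing weak passwords.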
6. Encrypt Elastic Block Store (EBS) volumes
Encrypting Elastic Block Store (EBS) volumes adds a layer of protection for data at rest. The one caveat is that encryption can only be enabled when an EBS volume is created. To encrypt an existing unencrypted volume, you must create a new volume that is encrypted from the start and migrate the data onto it: snapshot the old volume, copy the snapshot with encryption enabled, and create the new volume from the encrypted copy.
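That migration path can be sketched with the CLI; the availability zone, volume ID, and snapshot IDs are placeholders:

```shell
# New volumes can simply be created encrypted
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 100 \
    --encrypted

# For an existing unencrypted volume: snapshot it, copy the snapshot
# with encryption, then restore the encrypted copy as a new volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0
aws ec2 copy-snapshot \
    --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --encrypted
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --snapshot-id snap-0fedcba9876543210
```

The final `create-volume` uses the ID of the encrypted snapshot copy; a volume restored from an encrypted snapshot is itself encrypted.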
6. Disable access for inactive or unused IAM users
Inactive or unused IAM user accounts are a security risk: an abandoned account can be compromised without anyone noticing, particularly if its owner has left the company, and unused accounts are rarely monitored as closely as active ones. With that in mind, the best practice is to disable IAM accounts that have not been used in the past 90 days.
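A minimal sketch of that cleanup with the CLI, using a placeholder user name and AWS's documented example access key ID:

```shell
# Surface each user's last console sign-in to spot stale accounts
aws iam list-users \
    --query 'Users[].[UserName,PasswordLastUsed]' \
    --output table

# Remove console access for a user idle past the 90-day cutoff
aws iam delete-login-profile --user-name former-contractor

# Deactivate, rather than delete, their access keys so any later
# use attempts can still be audited
aws iam update-access-key \
    --user-name former-contractor \
    --access-key-id AKIAIOSFODNN7EXAMPLE \
    --status Inactive
```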
7. Restrict access to Redshift clusters
If a Redshift cluster is publicly accessible, anyone on the internet can attempt a connection to your database, raising the risk of brute force attacks, SQL injection, and DoS attacks. Unless there is a specific need for public access, clusters should be kept off the public internet and restricted, via their VPC security groups, to known hosts and authorized users.
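Taking an existing cluster off the public internet is a one-line change; the cluster identifier below is a placeholder:

```shell
# Make the cluster reachable only from inside its VPC
aws redshift modify-cluster \
    --cluster-identifier my-warehouse \
    --no-publicly-accessible
```

After this change, clients must connect from within the VPC (or over a VPN or Direct Connect link), subject to the cluster's security group rules.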
These best practices provide a solid foundation for securing sensitive information within the AWS infrastructure. Following even a few of them can go a long way to ensuring that your data remains secure and that another Deep Root Analytics leak does not occur in the future.