Keeping secrets in AWS

The ability to keep secrets is very important on the internet; there is always someone trying to get access to anything that is exposed. A common way to keep secrets is password protection. This is a simple principle: you provide the password and get access to the secret.

But in the world of AWS we want to have everything automated. This means that we cannot rely on entering a password when a server needs to be granted some form of access.

Bootstrapping an instance with secrets

As an instance starts it often needs access to restricted data: code in a repository, database credentials or private keys. Some of this access can be granted with IAM roles, but some resources are not controlled by IAM.

I will show here how you can leverage IAM roles to retrieve arbitrary encrypted data. The goal is to keep encrypted data in the UserData section and decrypt it during instance initialization. This can be done in many ways, but I am going to show one of them.

Storing a keyfile in an S3 bucket

S3 can be used to store secret keys. We can encrypt the key at rest and restrict access using IAM roles. During bootstrapping we download the key and use it to decrypt our data. With this keyfile we can use gpg to encrypt and decrypt any data. To handle the encrypted data we also use base64 encoding, which makes it easy to include arbitrary data in User Data scripts.
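
The gpg plus base64 round trip can be tried locally before any AWS resources are involved. A minimal sketch (the file name and sample strings are just examples; note that gpg 2.1+ additionally needs --pinentry-mode loopback for --passphrase-file to work in batch mode):

```shell
#!/bin/bash
# Round trip of the scheme: encrypt with a passphrase file, base64 encode
# for UserData, then decode and decrypt again. Requires gpg and coreutils.
set -euo pipefail

printf 'a hard to guess passphrase' > passphrase.txt

# Encrypt, then base64 encode on one line (-w 0 disables line wrapping)
BLOB=$(printf 'The protected data' |
    gpg -q --batch --pinentry-mode loopback \
        --passphrase-file passphrase.txt -c |
    base64 -w 0)

# Decode and decrypt -- this is what the bootstrap script does later
printf '%s' "$BLOB" | base64 -d |
    gpg -q --batch --pinentry-mode loopback \
        --passphrase-file passphrase.txt -d
```

The ciphertext in BLOB is a single line, so it can be pasted straight into a UserData script as a shell variable.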

Here are the necessary steps.

  1. Create a keyfile. This keyfile will be used to encrypt and decrypt our data.
    The contents can be any passphrase that is hard to guess. We will call this file passphrase.txt
  2. Encrypt our data. This will be done using gpg and the encrypted data will be base64 encoded so it can be included in the UserData.
    $ printf "The protected data" | gpg -q --batch --passphrase-file passphrase.txt -c | base64 -w 0
    jA0EBwMCRS4U6TKHeQnj0kcBM0OpHfzgT/GdJEnhP0qwjCVFXzSGIRnRCf9ggsqLlNnh/WMHXl2oSh/n3DCf/llUNmiDzxdn4zJHvBXxe+H+9omh++mjAA==
    
  3. Create a keystore bucket. Make sure it is not public and that server-side encryption is enabled. Then upload the keyfile passphrase.txt.

  4. Create a role for the instance that needs access.
  5. Restrict access to the bucket so that only someone with the correct role can read the objects:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::<Account Id>:role/<Role name>"
                },
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::<Bucket name>"
            },
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::<Account Id>:role/<Role name>"
                },
                "Action": [
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:DeleteObject"
                ],
                "Resource": "arn:aws:s3:::<Bucket name>/*"
            },
            {
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::<Bucket name>",
                    "arn:aws:s3:::<Bucket name>/*"
                ],
                "Condition": {
                    "StringNotLike": {
                        "aws:userId": [
                            "<Role Id>:*"
                        ]
                    }
                }
            }
        ]
    }
    

    The Role Id can be displayed with the command aws iam get-role --role-name <Role name>

  6. Assign the IAM role to the instance
  7. Create a User Data bootstrap script that accesses the password bucket. The encrypted data from step 2 is inserted here.

    #!/bin/bash
    # Ciphertext produced in step 2 (base64 encoded gpg output)
    ENCRYPTEDDATA="jA0EBwMC2H+26uHVShrj0kcBiGt3iid56PwK5mOm2xBWA6L+wgVjDW4a39NAu+d/mqsLRSKbAmxPM/ybCkUktU84lEF8n8xI3zLZI3/9NvOAjMl7mZqE4w=="
    
    apt update -y
    apt install awscli -y
    
    # Fetch the keyfile, decrypt the data, then remove the keyfile
    aws s3 cp s3://<Bucket Name>/passphrase.txt /tmp/passphrase.txt
    DATA=$(echo "$ENCRYPTEDDATA" | base64 -di | gpg --batch --passphrase-file /tmp/passphrase.txt -dq)
    rm /tmp/passphrase.txt
    echo "$DATA" > /tmp/data
    
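A small refinement worth noting: the passphrase does not have to touch the disk at all. The AWS CLI can write an object to stdout (aws s3 cp s3://<Bucket Name>/passphrase.txt -), and gpg can read the passphrase from an arbitrary file descriptor with --passphrase-fd. A sketch of that variant, where a local printf stands in for the S3 download (all names and strings are placeholders, and --pinentry-mode loopback is assumed for gpg 2.1+):

```shell
#!/bin/bash
# Variant of the bootstrap decryption where the passphrase is piped
# straight into gpg on fd 3 and never written to /tmp. The printf in
# each process substitution stands in for:
#   aws s3 cp s3://<Bucket Name>/passphrase.txt -
set -euo pipefail

# Stand-in for step 2: produce a base64 encoded ciphertext
ENCRYPTEDDATA=$(printf 'The protected data' |
    gpg -q --batch --pinentry-mode loopback \
        --passphrase-fd 3 -c 3< <(printf 'a hard to guess passphrase') |
    base64 -w 0)

# Bootstrap side: ciphertext on stdin, passphrase on fd 3
DATA=$(printf '%s' "$ENCRYPTEDDATA" | base64 -d |
    gpg -dq --batch --pinentry-mode loopback \
        --passphrase-fd 3 3< <(printf 'a hard to guess passphrase'))
echo "$DATA"
```

This removes the window between the aws s3 cp and the rm in which the plaintext passphrase sits in /tmp.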

Encrypted data in version control

By encrypting our data we can store it in the User Data or in a CloudFormation template without risk, and it can also be kept in a version control repository. There are clear advantages to keeping the encrypted data in version control. If it is kept in some other location it can become difficult to match the version of the encrypted data with the corresponding version of the code. When the encrypted data is embedded in the code it is always clear when it has been changed.
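
One way to keep the committed ciphertext and the bootstrap script together is a small render step that injects the blob into the script before deployment. A sketch, assuming the ciphertext lives in a committed file; the file name secret.b64 and the @ENCRYPTEDDATA@ marker are inventions for this example:

```shell
#!/bin/bash
# Sketch of a build step that injects committed ciphertext into the
# bootstrap script template. secret.b64 and @ENCRYPTEDDATA@ are
# hypothetical names, not part of the original setup.
set -euo pipefail

printf 'jA0EBwMC...placeholder...' > secret.b64    # committed ciphertext

sed "s|@ENCRYPTEDDATA@|$(cat secret.b64)|" <<'EOF'
#!/bin/bash
ENCRYPTEDDATA="@ENCRYPTEDDATA@"
# ...rest of the bootstrap script from step 7...
EOF
```

Because both the template and secret.b64 live in the same repository, a single commit captures matching versions of the code and the encrypted data.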

How do you keep your secrets? Leave a comment down below!

About the Author

Jakob is a tech focused system designer. With a background in network management, system administration and development he has an interest in the entire stack. Always looking for new and improved solutions. Never accepting that things have to be complicated.

3 comments

Axu - November 21, 2017

Hi,

Interesting approach. But why use a bootstrap bucket / server over Amazon KMS or Cloud HSM?
Is it because of trust issues (Amazon KMS) or with money (Cloud HSM)?

-axu-

    Jakob Lundberg - November 21, 2017

    Hi Axu,

    Thank you for your response. Yes, KMS or Cloud HSM can also be used to store your key. As I said, there are many ways to solve this problem.

    I wanted to show one solution that is not tied to any particular service. So that you can follow the principle from start to finish. Once you understand the whole process it is easier to select the services that suit your situation the best.

Axu - November 22, 2017

Thank you for clarification.
That makes perfect sense.

Good write-up, with hands on experience. Rare treat 🙂
