
Advanced Unit Testing in AWS

Leveraging Moto and Pytest

Introduction

In the world of AWS development, ensuring the reliability, efficiency, and correctness of your cloud-based applications is paramount. As cloud solutions grow increasingly complex, so too does the challenge of effectively testing these systems. Traditional testing methods often fall short in the face of AWS’s vast and intricately interconnected services. This is where advanced unit testing techniques, employing tools such as moto and pytest.mark.parametrize, come into play, offering a more nuanced approach to testing AWS applications. Furthermore, LocalStack emerges as a powerful ally, providing a comprehensive environment for functional testing that closely mimics AWS’s cloud infrastructure, all within the confines of your local development environment.

The Role of Advanced Testing

The leap from basic to advanced unit testing in AWS signifies a transition towards more sophisticated, accurate, and efficient testing strategies. Advanced unit testing, particularly with moto and pytest, allows developers to simulate AWS services and create parameterized tests. This approach not only enhances test coverage and accuracy but also streamlines the testing process, making it more dynamic and reflective of real-world scenarios.

Introducing Moto and Pytest for AWS Unit Testing

Moto stands as a cornerstone for mocking AWS services, enabling developers to simulate the cloud environment locally without incurring any costs or network overhead. Paired with pytest.mark.parametrize, developers can execute comprehensive test suites that cover a wide array of scenarios, ensuring their AWS applications behave as expected under various conditions.

Leveraging LocalStack for Functional Testing

While moto excels at unit testing, LocalStack takes it a step further by offering a fully functional local AWS cloud stack, allowing for intricate functional testing. This tool is indispensable for developers aiming to validate the integration and performance of their AWS applications in an environment that closely mirrors the actual AWS cloud.

A Journey Towards Robust AWS Applications

This article aims to guide you through the advanced unit testing landscape of AWS development, from setting up moto and pytest for your Python projects to integrating LocalStack for comprehensive functional testing. Through detailed examples and practical advice, you’ll learn how to harness these powerful tools to elevate the quality, reliability, and resilience of your AWS applications.

As we delve into the nuances of advanced testing techniques, remember that the ultimate goal is to build AWS applications that are not just functional but also robust and dependable. By the end of this journey, you’ll be equipped with the knowledge and tools necessary to achieve just that, setting a new standard for excellence in AWS development.

Moto

Getting Started with Moto

Moto is an open-source library that allows developers to mock AWS services. By simulating AWS responses, moto enables you to test your application’s interaction with AWS services without making actual calls to AWS. This means you can assert your application’s behavior under a variety of scenarios that you control. Supported services include S3, EC2, DynamoDB, Lambda, and many more, covering a broad spectrum of AWS’s offerings.

Installation

Getting moto up and running in your project is straightforward. You’ll need Python installed on your development machine, as moto is a Python library. To install moto, simply run the following command in your terminal:

pip install moto
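
The examples that follow also use boto3 and pytest, so make sure both are available in the same environment:

pip install boto3 pytest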

Setting Up Your First Mock

To demonstrate the power of moto, let’s set up a simple mock for Amazon S3, one of the most commonly used AWS services. This example will guide you through creating a test that simulates uploading a file to an S3 bucket.

First, import the necessary modules in your Python test file:

import boto3
from moto import mock_aws

Next, set up your mock environment using the mock_aws decorator (recent versions of moto expose a single mock_aws decorator that replaces the older per-service decorators such as mock_s3):

@mock_aws
def test_s3_upload():
    # Set up the mock S3 environment
    s3 = boto3.client('s3', region_name='us-east-1')
    bucket_name = 'my-test-bucket'
    s3.create_bucket(Bucket=bucket_name)

    # Perform operations as if interacting with real AWS
    s3.put_object(Bucket=bucket_name, Key='test_file.txt', Body=b'Hello Moto!')

    # Assert that the object exists in the bucket
    response = s3.get_object(Bucket=bucket_name, Key='test_file.txt')
    data = response['Body'].read()
    assert data == b'Hello Moto!'

In this example, the @mock_aws decorator initializes a mock S3 environment before executing the test function. Inside the function, we create an S3 bucket and upload a file to it, all within the mock environment. Finally, we retrieve the file and assert that its contents match what we uploaded.
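
Moto also reproduces AWS error behavior, which makes failure paths just as easy to cover. Here is a minimal sketch, assuming moto mirrors S3’s NoSuchKey error for a missing object, that pairs the mock with pytest.raises:

import boto3
import pytest
from botocore.exceptions import ClientError
from moto import mock_aws


@mock_aws
def test_s3_missing_key():
    s3 = boto3.client('s3', region_name='us-east-1')
    s3.create_bucket(Bucket='my-test-bucket')

    # Requesting a key that was never uploaded should fail the same way
    # the real service would
    with pytest.raises(ClientError) as excinfo:
        s3.get_object(Bucket='my-test-bucket', Key='missing.txt')
    assert excinfo.value.response['Error']['Code'] == 'NoSuchKey'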

Continuing the real-life example

Following our example from the previous post in this series, here’s how we can extend our tests using moto. Say we wanted to take the Postgres query output and write it to DynamoDB.

First, we add some mocks:

import os

import boto3
import pytest
from moto import mock_aws


@pytest.fixture(scope="function")
def aws_credentials():
    """Mocked AWS Credentials for moto."""
    os.environ["AWS_ACCESS_KEY_ID"] = "testing"
    os.environ["AWS_SECRET_ACCESS_KEY"] = "testing"
    os.environ["AWS_SECURITY_TOKEN"] = "testing"
    os.environ["AWS_SESSION_TOKEN"] = "testing"
    os.environ["AWS_DEFAULT_REGION"] = "us-east-1"


@pytest.fixture
def aws_client(aws_credentials):
    """A mocked low-level DynamoDB client."""
    with mock_aws():
        yield boto3.client("dynamodb", region_name="us-east-1")


@pytest.fixture
def aws_resource(aws_credentials):
    """A mocked high-level DynamoDB resource."""
    with mock_aws():
        yield boto3.resource("dynamodb", region_name="us-east-1")

Then we create a DynamoDB Table mock:

@pytest.fixture
def create_table(aws_client):
    # Use the mocked client from the fixture above to create the table
    aws_client.create_table(
        AttributeDefinitions=[
            {
                'AttributeName': 'string',
                'AttributeType': 'S',
            },
            {
                'AttributeName': 'mykey',
                'AttributeType': 'S',
            },
        ],
        BillingMode='PAY_PER_REQUEST',
        KeySchema=[
            {
                'AttributeName': 'string',
                'KeyType': 'HASH',
            },
            {
                'AttributeName': 'mykey',
                'KeyType': 'RANGE'
            },
        ],
        TableName='string',
    )

Then we can write our test:

def test_dynamo_put(create_table):
    # dynamo_put (shown below) writes the value "data" into the table
    # named "string" that the create_table fixture built
    dynamo_put("string", "data")
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table("string")
    response = table.get_item(
        Key={
            'string': 'data',
            'mykey': 'foo'
        }
    )
    print(response)
    assert response["Item"]["string"] == "data"

And our updated lambda code:

def dynamo_put(table_name, value):
    """Write a single item to the given DynamoDB table."""
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table(table_name)
    return table.put_item(
        Item={
            'string': value,
            'mykey': 'foo'
        },
    )


def lambda_handler(event, context):
    # connect_to_database() and query() come from the previous post in this series
    connection = connect_to_database()

    result = query(connection)

    dynamo_put('string', result)

    return result
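
With the test and the updated Lambda code in place, the suite runs like any other pytest suite. Assuming the test lives in a file such as test_lambda.py (the file name here is purely illustrative), you can target it directly:

pytest test_lambda.py -k test_dynamo_put -v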

Parameterized Testing with Pytest

One of the most powerful features of pytest, a popular Python testing framework, is pytest.mark.parametrize. This decorator allows for the parameterization of arguments for test functions, enabling developers to run a test function multiple times with different sets of parameters. This approach significantly increases the efficiency and breadth of your testing suite without requiring multiple, almost identical test functions.

Use Cases

  • Testing Across Different Input Values: When you have a function that should produce different outputs for different inputs, pytest.mark.parametrize allows you to test all these cases in a concise manner. For example, testing a function that validates email addresses can be done with various valid and invalid email inputs (see the sketch after this list).
  • Edge Case Testing: It’s particularly useful for ensuring your functions handle edge cases correctly. By parameterizing tests to include edge cases, you can systematically verify that your application behaves correctly under less common conditions.
  • Cross-Service Interaction Scenarios: In AWS development, where functions may interact with multiple services under different conditions, pytest.mark.parametrize can test these interactions thoroughly. For example, testing a function that interacts differently with S3 or DynamoDB based on the input parameters.
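
To make the first use case concrete, here is a small sketch of a parametrized email check; is_valid_email and its regex are purely illustrative stand-ins for whatever validation logic your project actually uses:

import re

import pytest


def is_valid_email(address):
    # A deliberately simple check, just for demonstration purposes
    return re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", address) is not None


@pytest.mark.parametrize("address, expected", [
    ("user@example.com", True),
    ("first.last@sub.example.org", True),
    ("missing-at-sign.com", False),
    ("user@", False),
    ("", False),
])
def test_is_valid_email(address, expected):
    assert is_valid_email(address) == expected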

Example

Here’s a simple example demonstrating how to use pytest.mark.parametrize to test a hypothetical function, add, which simply adds two numbers:

import pytest

# A simple function to add two numbers
def add(a, b):
    return a + b

# Testing the add function with various sets of parameters
@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (4, 5, 9),
    (-1, 1, 0),
    (0, 0, 0),
    (100, 200, 300),
])
def test_add(a, b, expected):
    assert add(a, b) == expected

In this example, the test_add function is run five times, each time with a different set of parameters. This not only saves time but also ensures our add function is rigorously tested across a variety of scenarios.
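
As parameter sets grow, the ids argument of pytest.mark.parametrize lets you attach a readable label to each case, which makes failures much easier to spot in the test report. Continuing with the add function from above:

@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (-1, 1, 0),
    (0, 0, 0),
], ids=["two positives", "mixed signs", "all zeros"])
def test_add_with_ids(a, b, expected):
    assert add(a, b) == expected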

Updating our real-life scenario

Say we needed to test our DynamoDB code with different values, maybe to check for edge cases or just confirm there’s nothing “hard-coded” in the script. With parametrize we can easily extend our test to cover multiple values and scenarios:

@pytest.mark.parametrize(
    "value", ["foo", "bar", "baz"]
)
def test_dynamo_put(create_table, value):
    dynamo_put("string", value)
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table("string")
    response = table.get_item(
        Key={
            'string': value,
            'mykey': 'foo'
        }
    )
    print(response)
    assert response["Item"]["string"] == value

Note that parametrize iterates over the strings in the “value” list, and we use that variable both in the call to dynamo_put and in the assertion.
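
If you want to push the same test toward genuine edge cases, the parameter list can grow without any change to the test body. A small sketch, where the specific values are only illustrations:

@pytest.mark.parametrize(
    "value",
    [
        "plain ascii",
        "unicode value テスト ✓",  # non-ASCII content
        "x" * 1024,                # a long value
    ],
)
def test_dynamo_put_edge_cases(create_table, value):
    dynamo_put("string", value)
    table = boto3.resource("dynamodb").Table("string")
    response = table.get_item(Key={"string": value, "mykey": "foo"})
    assert response["Item"]["string"] == value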

Up Next: LocalStack for functional, local testing

In our next piece, we will look at LocalStack for a more comprehensive test suite. We will continue to explore our real-life example, extend our tests and code to leverage LocalStack, and fill out our local testing suite.

From there, the next piece will cover integration testing, how it differs from unit and functional testing, and some real-life examples of how we can implement this critical type of test.

Conclusion

The power of moto lies in its comprehensive support for AWS services and its seamless integration into the Python testing ecosystem. By simulating AWS environments locally, developers gain the freedom to experiment, test, and validate their applications under a variety of conditions without the overhead of deploying to the actual cloud. This ensures that applications are not just built to function but are also designed to withstand the complexities and challenges of the AWS cloud environment.

Moreover, the integration of moto with testing frameworks such as pytest enhances the robustness of your testing suite, allowing for sophisticated test scenarios that include parameterized tests and complex AWS service interactions. This approach elevates the quality of your AWS applications, ensuring they are not only performant and scalable but also resilient and reliable.

As AWS continues to evolve, becoming increasingly central to the infrastructure of businesses worldwide, the importance of tools like moto cannot be overstated. They empower developers to harness the full potential of AWS services, fostering innovation and excellence in cloud-based solutions. By incorporating moto into your development workflow, you take a significant step towards building AWS applications that stand the test of time, ready to meet the demands of your users and the challenges of the digital world.