Co-authors:
- Prashant Mishra (Technical Architect @ Tata 1mg)
- Sankar Yadalam (Technical Architect @ Tata 1mg)
- Aman Garg (Associate Technical Architect @ Tata 1mg)
- Dollar Dhingra (Associate Engineering Manager @ Tata 1mg)
Introduction
In the dynamic realm of software development, the importance of rigorous testing practices cannot be overstated. Robust testing serves as the bedrock for ensuring the reliability and scalability of systems. With the advent of intricate microservices architectures, the significance of a strong unit and API testing strategy has skyrocketed. Enter an ingenious solution: an API testing framework built on the foundation of Pytest and Sanic. This pairing empowers developers to create tests for Sanic-powered services that are both efficient and thorough.
Why is there a need for Robust Testing?
As applications scale and grow in complexity, unit and API testing become essential. Bugs and regressions can creep in with each code change, potentially breaking features that were once solid. The absence of a comprehensive testing strategy can lead to unwarranted downtime and a drop in user satisfaction. It's not just about avoiding technical debt; it's also about maintaining the integrity of your codebase as it evolves.
Purpose: A Versatile Toolkit for Testing Excellence
The overarching purpose of this API testing framework is to provide developers with a versatile toolkit for crafting effective unit and API test cases. But this isn't just another testing framework; it goes beyond the basics. It is designed to tackle the challenges of mocking relational databases and Redis caches seamlessly. Gone are the days of setting up intricate testing environments with dependent services: this framework simplifies the process, making testing easy. It also eliminates the redundancy of repeatedly mocking different components and writing boilerplate test code. It's not just about efficiency, but about consistency too.
Unveiling the Framework's Architecture
Key Features:
- Mocking Inter-Service API Calls: Communication between services is pivotal in a microservices architecture. This framework allows you to mock these interactions effectively, enabling isolated testing and quick iterations.
- Effortless Database Mocking: Mocking PostgreSQL instances is made simpler, especially for local and pre-staging testing. Say goodbye to dealing with real databases in your test environment.
- Redis Cache Mocking: Redis cache plays a crucial role in many applications. With built-in Redis cache mocking, you can thoroughly test cache-dependent functionalities without relying on an actual Redis cluster.
- Testing in a Staging Environment: Run your test cases against genuine services, databases, and Redis clusters in a staging environment. This ensures your tests mimic real-world scenarios closely.
- Test-Driven Development (TDD) Made Simple: When you encounter a failing curl/requests call, converting it into a test case is remarkably straightforward (see the sketch after this list). After rectifying the issue, you can swiftly re-run your test suite, resulting in an accelerated Test-Driven Development (TDD) process.
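For instance, a bug report that starts life as a failing curl command can be captured almost verbatim as a test case. The endpoint and payload below are purely illustrative; `test_cli` is the client fixture defined later in conftest.py, and the response attributes mirror the framework's example further down.

```
# Hypothetical failing request from a bug report:
#   curl "http://cart-service/v1/cart?user_id=42"   ->  500 Internal Server Error
# The same call, captured as a pytest case against the framework's test client.

async def test_get_cart_returns_200_for_valid_user(test_cli):
    """Reproduces the reported failure; passes once the bug is fixed."""
    resp = await test_cli.get(
        "/v1/cart",
        params={"user_id": "42"},             # query string copied from the failing curl
        headers={"x-authorization": "test"},  # auth header the service expects
    )
    # Red first (the bug reproduces), green after the fix.
    assert resp.status_code == 200
```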
Implementing the Framework
The implementation of the framework is exemplified through sample boilerplate code below. This robust framework leverages a collection of tools and libraries:
- Pytest-Sanic: A Pytest plugin tailored to the specific requirements of testing Sanic applications. It facilitates smooth testing of asynchronous code, ensuring a robust testing experience.
- Redislite: At the heart of Redis cache mocking lies Redislite, a lightweight embedded Redis instance library. It makes it possible to emulate the Redis cache, a crucial component in many applications, without an external Redis cluster.
- SQLite Databases: For simulating PostgreSQL instances during local, development, and pre-staging testing phases, SQLite databases play a crucial role. This approach avoids the need for real databases during the testing process, streamlining the overall testing infrastructure.
- 1mgModels Package: The 1mgModels package is a cornerstone of this framework, facilitating the simulation of inter-service HTTP API calls. Beyond its pivotal role in isolated testing, it serves as a centralized repository for shared models within the context of Tata 1mg.
- Python Faker: Python Faker is used to generate randomized values that enrich 1mgModels contracts. This diversification of test data contributes to the comprehensiveness of your testing suite (a sketch combining Redislite and Faker follows this list).
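As a rough illustration of how Redislite and Faker could be wired into the same conftest.py, here is a minimal sketch. The fixture names (`redis_client`, `fake_address`) and the cache key are hypothetical and not part of the actual framework.

```
import pytest
from faker import Faker
from redislite import Redis


@pytest.fixture(scope="session")
def redis_client(tmp_path_factory):
    """Embedded Redis instance backed by a temporary file.

    Redislite launches a local redis-server process, so cache-dependent code
    can be exercised without an external Redis cluster; the server is torn
    down automatically when the connection object is garbage collected.
    """
    rdb_path = tmp_path_factory.mktemp("redis") / "test_cache.rdb"
    yield Redis(str(rdb_path))


@pytest.fixture
def fake_address():
    """Randomized address payload, e.g. to enrich 1mgModels contracts."""
    fake = Faker()
    return {
        "name": fake.name(),
        "street": fake.street_address(),
        "city": fake.city(),
        "pincode": fake.postcode(),
    }


def test_address_is_cached(redis_client, fake_address):
    """Sketch: store a generated value and read it back from the embedded Redis."""
    redis_client.set("address:42:city", fake_address["city"])
    assert redis_client.get("address:42:city").decode() == fake_address["city"]
```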
A Peek into Conftest Approach
The conftest approach is pivotal for effective testing: it ensures that fixtures are shared across classes, modules, packages, or sessions. In this context, the conftest.py file at the root directory of your application holds a significant role, as it lays the groundwork for fundamental configurations and setups. Refer to the pytest documentation on sharing fixtures across scopes: https://docs.pytest.org/en/6.2.x/fixture.html
Conftest: Centralized Configuration and Setup:
A key player within this approach, the conftest.py file is positioned at the root directory of your application. This file brings together essential configurations and setups to ensure a coherent testing environment. Below is a glimpse of the code structure (names such as `config`, `Host`, `_env`, and `set_clients_host_for_tests` come from the application's own bootstrap code):

```
import asyncio

import pytest
from sanic import Sanic
from sanic.response import json
from pytest_sanic.utils import TestClient
from tortoise.contrib.sanic import register_tortoise

from app.routes import blueprint_group


@pytest.fixture(scope="session")
def loop():
    """Default event loop; only use this event loop in your tests."""
    loop = asyncio.get_event_loop()
    yield loop
    loop.close()


@pytest.fixture(scope="session")
def sanic_client(loop):
    """Create TestClient instances for easy use in tests: test_client(app, **kwargs)."""
    clients = []

    async def create_client(app, **kwargs):
        client = TestClient(app, **kwargs)
        await client.start_server()
        clients.append(client)
        return client

    yield create_client

    # Clean up every client started during the session.
    for client in clients:
        loop.run_until_complete(client.close())


@pytest.fixture(scope="session")
def app():
    """Create an app for tests."""
    app = Sanic("test_app")

    # Mocks are registered here so that these responses are returned instead of
    # the actual responses from the external service.
    # Environment-variable based mocking enablement.
    if _env.lower() in ("local", "prod"):

        @app.route("/location-service/v4/address", methods=["GET"])
        async def test_get(request):
            # Header-based control of the mocked response body and status code.
            status_code = int(request.headers.get("status_code", 200))
            if status_code == 200:
                return json("Mocked Success Response", status=status_code)
            elif status_code == 400:
                return json("Mocked Bad Request Response", status=status_code)

    yield app


@pytest.fixture(scope="session")
def test_cli(loop, app, sanic_client):
    """Set up a test Sanic app."""
    # Test DB registration.
    test_db_config = config["POSTGRES_TEST"]  # test db config
    return test_setup(
        loop,
        sanic_client,
        "app.service_clients",
        app,
        blueprint_group,
        env=_env,
        gen_schemas=True,
        db_config=test_db_config,
    )


def test_setup(loop, sanic_client, path, app, blueprint_group,
               env="dev", gen_schemas=True, db_config=None):
    Sanic.test_mode = True
    Host._blueprint_group = blueprint_group
    Host.setup_app_ctx(app)
    if db_config:
        # We use Postgres and the Tortoise ORM.
        register_tortoise(app, config=db_config, generate_schemas=gen_schemas)
    Host.register_listeners(app)
    Host.register_middlewares(app)
    Host.register_exception_handler(app)
    Host.register_app_blueprints(app)
    Host.setup_dynamic_methods()
    _cli = loop.run_until_complete(sanic_client(app))
    set_clients_host_for_tests(path, _cli.port)
    return _cli
```
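The conftest above keys its mocking behavior off `_env`, but never shows where that value comes from. One plausible (assumed) arrangement is to derive it from an environment variable near the top of conftest.py; the variable name `APP_ENV` below is hypothetical.

```
import os

# Hypothetical: derive the active environment once so mocking can be switched per run,
#   APP_ENV=local pytest tests/ -v     -> external calls are mocked
#   APP_ENV=staging pytest tests/ -v   -> real services, databases and Redis are used
_env = os.environ.get("APP_ENV", "local")
```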
Mocking:
Environment-Driven Mocking (refer to the conftest file): Imagine having the capability to dynamically control the behavior of your API calls based on the environment. By harnessing environment variables, the framework lets you selectively mock external resources or gracefully bypass them. For instance, in environments marked as local or prod, external calls are seamlessly simulated; in other contexts, the framework skips mocking, and actual API calls are invoked during test cases. This flexibility ensures that your tests accurately mirror real-world scenarios.

Headers for Precision (refer to the conftest file): Integrating headers into your testing arsenal takes API testing to a whole new level. By adjusting header values strategically, you can trigger mock responses that simulate both success and failure scenarios. This nuanced approach provides comprehensive coverage, allowing you to validate a wide array of API test cases with precision.

Port Allocation and Utilization: Every time your test suite is initiated, a dedicated port is assigned. When invoking API test cases, it is imperative to use this assigned port. This practice maintains uniformity and streamlines your testing process.

Seamless Host and Port Synchronization: A crucial aspect of the methodology involves adjusting the external service's host and port during test case execution, so that the mocked API calls are invoked within the environment where the test suite is running. To facilitate this, a utility named set_clients_host_for_tests adjusts the host and port settings at test startup to align with the current testing environment. Its implementation is deeply tailored to your internal code architecture; a sketch of what such a utility might look like appears after the database configuration below.

Databases: If a real Postgres instance is not available, SQLite can be used. A sample configuration for using SQLite with Tortoise ORM:

```
# This db_config is used in the conftest file.
db_config = {
    "connections": {
        "default": "sqlite://:memory:",
    },
    "apps": {
        "test": {
            "models": ["app.models"],
            "default_connection": "default",
        },
    },
}
```
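Since set_clients_host_for_tests is specific to Tata 1mg's internal architecture, the following is only a minimal sketch of what such a utility might look like, assuming service clients expose module-level host/port attributes. Every name here other than set_clients_host_for_tests itself is hypothetical.

```
import importlib


def set_clients_host_for_tests(path: str, port: int) -> None:
    """Point every inter-service client at the locally running test app.

    Sketch only: assumes the package at `path` (e.g. "app.service_clients")
    exposes client classes with HOST and PORT class attributes. The real
    implementation is internal and will differ.
    """
    clients_module = importlib.import_module(path)
    for name in dir(clients_module):
        client = getattr(clients_module, name)
        # Rewrite any object that looks like a service client so that the
        # mocked routes registered on the test app receive its calls.
        if hasattr(client, "HOST") and hasattr(client, "PORT"):
            client.HOST = "127.0.0.1"
            client.PORT = port
```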
Putting Theory into Action: Crafting Effective Test Cases

Here is a structured approach to designing test cases within the context of your Sanic-powered services:

- Organize Your Workspace: Begin by creating a dedicated package named "tests" at the root directory of your project. This package will house all your test-related code.
- Naming Conventions: Adhere to a consistent naming convention for your test case files and functions. Start your test case filenames with "test_" to distinguish them from other modules, and ensure that your test function names likewise begin with "test_" to maintain clarity.

A Practical Example: To illustrate, consider the following test case:

```
async def test_cart_item_add(test_cli, params, status_code, data, err_msg):
    """Test case for cart address update using 1mgModels success & error responses."""
    # The route is defined in the main app; custom headers and params can be passed from here.
    resp = await test_cli.put(
        "/v1/cart/address",
        headers={"x-authorization": "test", "status_code": "400"},
        data=data,
    )
    assert resp.status_code == 400
    resp_json = resp.json()
    if err_msg is not None:
        assert resp_json["error"] == cart_apis.AddCartItemApi.error_response(err_msg=err_msg).error.dict()
```

This test case demonstrates a comprehensive approach: it asserts the status code and response structure, and it shows how to validate error response structures. This level of detail ensures that your test cases cover a wide spectrum, encompassing status codes, response structures, and even values, in both mocked and real scenarios.

Solid Foundation for Testing: The collective power of meticulously designed test cases, combined with the testing tools discussed above, establishes a robust foundation. This foundation empowers you to craft, maintain, and execute tests seamlessly for your Sanic-powered services.
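The extra arguments in the example above (params, status_code, data, err_msg) suggest that the test is parametrized. Below is a minimal sketch of how such scenarios might be wired up with pytest.mark.parametrize; the payloads, error message, and the assumption that the mocked dependency's status propagates to the API response are all invented for illustration.

```
import pytest

# Hypothetical success and failure scenarios for the cart address update API.
CASES = [
    # (params, status_code, data, err_msg)
    ({}, 200, {"address_id": "101"}, None),    # happy path
    ({}, 400, {}, "address_id is required"),   # validation failure
]


@pytest.mark.parametrize("params, status_code, data, err_msg", CASES)
async def test_cart_address_update_scenarios(test_cli, params, status_code, data, err_msg):
    """Run the cart address update against both success and error scenarios."""
    resp = await test_cli.put(
        "/v1/cart/address",
        # The mocked external route in conftest.py reads this header to choose its response.
        headers={"x-authorization": "test", "status_code": str(status_code)},
        data=data,
    )
    assert resp.status_code == status_code
```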
Potential for Integration Testing
While the primary focus is unit and API testing, this framework's architecture sets the stage for future integration testing endeavors. It's a testament to its adaptability: a solution that grows with your project's evolving testing requirements.
Conclusion
As a Python expert, navigating the intricacies of testing in a microservices landscape just got smoother with this API testing framework. By understanding its purpose, appreciating the need for robust testing, diving into its features, and grasping its implementation, you're equipped to wield this framework effectively. It's not just about writing tests; it's about ensuring the quality, reliability, and scalability of your applications in an ever-changing software world.