Thursday, May 15, 2025

When not to mock or patch

Despite taking a break from working on the unit-test stubs for the goblinfish-testing-pact package, it hasn’t been far from my thoughts, for several reasons. I want to get this package done, first off, and I won’t consider it to be done until it has a test-suite that executes the vast majority of its code. That’s particularly true, I feel, for a package that is itself intended to make it easier for other projects to achieve exactly those kinds of test goals.

Secondly, and part of the reason I want to get to that done state, I want to be able to use it in my other personal project efforts. Technically, I could proceed with those, and add PACT testing in later, but that would ultimately put me in the same kind of position with those other projects as I’m in now with this one: scrambling to get tests written or reconciled with whatever I might already have in place, rather than working those tests out as I go. I’d much prefer the latter.

The main blocker for me on getting the PACT tests written for the package that provides them was figuring out how I would (or in some cases could) test things. Writing code with an eye towards it being testable is a skill, and something of an art at times. I tried to keep that in mind, but I made a fundamental assumption in a lot of cases: That I would be able to usefully apply either a patch or some variant of a Mock to anything that I needed to. That proved not to be the case in at least one of the tests for the ExaminesModuleMembers class, where the built-in issubclass function is used to determine whether a named test-entity in the test-module is a unittest.TestCase. That proved to be problematic in two ways. The first was in figuring out how to specify a patch target name. For example, using an approach like this:

import unittest
from unittest.mock import patch

class test_SomeClass(unittest.TestCase):

    @patch('issubclass')
    def test_patch_issubclass(self, patched_issubclass):
        patched_issubclass.return_value = True
        self.assertTrue(issubclass(1, float))

…raised an error indicating that the target name was not viable:

TypeError: Need a valid target to patch.
    You supplied: 'issubclass'

Fair enough, I thought. I know that issubclass is a built-in, and after confirming that issubclass.__module__ returned a module name (builtins), I tried this:

class test_SomeClass(unittest.TestCase):

    @patch('builtins.issubclass')
    def test_patch_issubclass(self, patched_issubclass):
        patched_issubclass.return_value = True
        self.assertTrue(issubclass(1, float))

…only to be faced with this error instead:

AttributeError: 'bool' object has no attribute '_mock_name'

Note

Just to see what would happen, I tried a similar variation with the built-in isinstance function. That raised a recursion error, because one of the first things that the patch decorator does is determine whether the target specified is a string… using isinstance.
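For reference, that isinstance experiment looked more or less like the issubclass attempt above; the test name here is just illustrative:

class test_SomeClass(unittest.TestCase):

    @patch('builtins.isinstance')
    def test_patch_isinstance(self, patched_isinstance):
        # The patch/Mock machinery relies on isinstance internally, so
        # replacing builtins.isinstance with a mock ends up calling the
        # mock from inside that machinery, recursing until a
        # RecursionError is raised.
        patched_isinstance.return_value = True
        self.assertTrue(isinstance(1, float))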

So, with those discoveries in hand, I’ve added a new item to my list of things to keep in mind when writing tests for Python code:

Tip

Don’t assume that a built-in function, one that lives in the builtins module of a Python distribution, will be patch-able.

There is at least one way around that particular challenge: wrapping the built-in function in a local function that can be patched in the test code. For example, defining a wrapper function for issubclass like this:

def _issubclass(cls, classinfo) -> bool:
    """
    Wrapper around the built-in issubclass function, to allow a
    point for patching that built-in for testing purposes.
    """
    return issubclass(cls, classinfo)

…would allow the code that checks for that subclass relationship to be re-written like this:

    ...
    # Check that it's a unittest.TestCase
    self.assertTrue(
        _issubclass(test_case, unittest.TestCase),
        f'The {test_case.__name__} class is expected to be a '
        'subclass of unittest.TestCase, but is not: '
        f'{test_case.__mro__}'
    )
    ...

and the related test code could then patch the wrapper _issubclass function, allowing the test to take control over the results wherever needed:

class ExaminesModuleMembers(HasSourceAndTestEntities):
    ...

    @patch('modules._issubclass')
    def test_source_entities_have_test_cases(self, _issubclass):
        ...

This, and similar variations that would implement the same wrapper logic as a @classmethod or as a @staticmethod, would also be viable; only the scopes of the calls would change, really.
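As a rough sketch, and assuming the wrapper were attached to the mix-in class itself rather than defined at module level, the @staticmethod variation would look something like this:

class ExaminesModuleMembers(HasSourceAndTestEntities):

    @staticmethod
    def _issubclass(cls, classinfo) -> bool:
        """
        Wrapper around the built-in issubclass function, to allow a
        point for patching that built-in for testing purposes.
        """
        return issubclass(cls, classinfo)

The only real difference from the module-level function is the patch target that the tests would use: something like 'modules.ExaminesModuleMembers._issubclass' rather than 'modules._issubclass'.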

I’m not a big fan of this approach in general, though I’ll use it if there are no better options. My main concern with it, trivial though it might be, is that it adds more code, and thus more tests. I’ve seen it asserted in various places that “every line of code written is a liability,” and there’s a fair amount of truth to that assertion. This function, one-liner though it is, still feels like it would be cluttering things up, even if only by that handful of added lines of code.

Another option would be to re-think the implementation of the target code that the test relates to. In this particular case, there is at least one option that I could think of that would remove the need for applying a patch to the main test-method: Adding another test-method to the mix-in that simply asserts that the test-case class is a subclass of unittest.TestCase. That would be a simple test-method to add, though I’d want to keep it from being duplicated across all the different classes that it applies to. Assuming another hypothetical mix-in class built for that purpose, that new class wouldn’t have to be much more than this:

class RequiresTestCaseMixIn:
    """
    Provides a common test that asserts that a derived class
    also derives from unittest.TestCase
    """
    def test_is_test_case(self):
        self.assertTrue(
            issubclass(self.__class__, unittest.TestCase),
            f'{self.__class__.__name__} is expected to be a '
            'subclass of unittest.TestCase, but is not '
            f'({self.__class__.__mro__})'
        )
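Using it would then just be a matter of adding the mix-in to the bases of each applicable test-case class; the class name here is purely illustrative:

class test_SomeSourceClass(RequiresTestCaseMixIn, unittest.TestCase):
    # test_is_test_case is inherited from the mix-in; the test-methods
    # for the related source entity are defined here as usual.
    ...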

For this particular scenario, testing the PACT package, I like this better than the patched/mocked wrapper function. It doesn’t add any new (if trivial) code that carries its own test requirements, for starters, and the test itself is quite simple. What I didn’t like about it, at least on first consideration, is that it moves the checking of the test-case class’s requirements into those individual classes, instead of keeping it in the testing provided by ExaminesModuleMembers. My initial concern with going down this path was that test-failures would surface later in the test-run than I wanted, since they would only be raised as the tests for those individual test-case classes executed. After thinking it through again, though, I decided that this was not as bad a scenario as I’d initially thought, and that the rearrangement was acceptable.

All of the options mentioned so far are predicated on an assumption that I made early on, but that may not hold true: that using patch and/or Mock objects would be the preferred way to write tests against what is, at its heart, a collection of module files. The original thought behind that was that if I didn’t need to create a fake project-structure, so much the better: there wouldn’t be an additional collection of files to manage, I wouldn’t have to contend with the additional imports, and so on. In retrospect, though, the more I saw just how extensive the collection of mocked/patched values would likely need to be, the more I grew to like the idea of actually having a fake project in place to test against.

My initial concerns that led me away from that decision really boiled down to one thing: how I would make sure that everything was tested, across the combination of required test-entities in the context of the package and a reasonably realistic project representation, at the same time. Until I had all of the PACT processes implemented, I wasn’t sure that the test-code would be able to manage both without getting far more complicated than I wanted it to be. Having seen how the stubs of the tests worked out, though, I’m much less worried about that now. Knowing what I do about how the processes involved took shape, I anticipate that a simple fake project structure will actually simplify things, by allowing the happy-path tests to operate against code structures that are a reasonable and direct simulation of an actual project. I also anticipate that generating unhappy-path data using patch decorators or contexts will keep those tests more manageable.

At a high level, how I expect things to unfold as I progress with the revised tests is:

  • The actual test-entity expectations will be determined by their correlation with the relevant members of the package itself.
  • The actual test executions will use in-test classes that derive from the PACT class under test for any given test-case class, but that point at a fake-project entity, allowing:
    • Creation of any necessary elements in the scope of the fake project in order to test the behavior of the package source-element.
    • Isolation of the code whose behavior is being tested from the package entities themselves.

I struggled to come up with a better explanation than that, so an example is probably in order. Assume a module at fake-project/module.py that starts with no code in it at all, and that the test in question is for the module_members.ExaminesSourceFunction class, which is responsible for checking that a happy-path test-method exists, along with one unhappy-path test for each parameter of a function. The expected test-methods will be defined by inspection of the ExaminesSourceFunction class, and would include five happy-path tests, one for each instance method and property. The unhappy-path tests, initially, will follow the expectations that were in place as of the v.0.0.3 release, with tests for invalid setter values, invalid setter instances, and invalid deleter calls, but I plan to revise the expectations as noted in my earlier post.

The actual test executions, though, will use in-test classes that point at the related fake-project elements. Each test-case will be concerned mainly with minimizing the number of lines of code in the targeted PACT source element that are not executed, and that minimization will rely on adding whatever source-entity stubs are needed to make sure that the PACT methods are thoroughly tested. Using the same ExaminesSourceFunction mix-in as an example, that would call for fake-project functions that include:

  • A function with no parameters.
  • Functions with one and two required positional parameters.
  • A function with one required and one optional positional parameter.
  • A function with an *args argument-list.
  • Functions with one and two required keyword-only parameters.
  • A function with one required and one optional keyword-only parameter.
  • A function with a **kwargs keyword-arguments list.

Those function variants may be combined. The tests, in this particular case, could patch the test_entity property of the class, so that actual test-methods are not required, or those test-methods could be stubbed out in the classes defined within their tests.
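As a very rough sketch, with entirely hypothetical names, the fake-project module might accumulate function stubs along these lines:

# fake-project/module.py (hypothetical stubs; the bodies don't matter,
# only the signatures that the PACT expectations are derived from)

def no_params():
    ...

def one_positional(arg):
    ...

def two_positional(arg1, arg2):
    ...

def optional_positional(arg, option=None):
    ...

def arbitrary_args(*args):
    ...

def keyword_only(*, name):
    ...

def optional_keyword_only(*, name, option=None):
    ...

def arbitrary_kwargs(**kwargs):
    ...

def combined(arg, *args, name, **kwargs):
    ...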
