Based on some LinkedIn conversations prompted by my sharing of the previous post on this topic, I feel I should clarify what I am trying to accomplish with this package, and what I am not, before going much further.
The scenario I had in mind centers around someone developing an application or API that will eventually be hosted through an AWS API Gateway resource, with its backing compute logic provided, in the main, by Lambda Functions. That much, I think, was clear. Where I apparently was not clear enough was in some additional wrinkles: I'm thinking of cases where the developer cannot usefully deploy an in-flight instance of their work as they go, leaving them with no way to make HTTP requests against a running copy of the application to test and debug. There could be any number of reasons why that's not possible. The ones I can think of, or have encountered, include:
- The process for deploying an in-flight instance involves something that they do not have access to: maybe they aren't actually allowed to create the AWS resources for security reasons, or the deployment process isn't geared towards allowing developer-owned copies of the application.
- They may not have sufficient computing power/resources to run local options like AWS' sam local or LocalStack, both of which require Docker. (Alternately, maybe they do have the basic computing power, but those options are not responsive enough to make the experience even remotely painless.)
- They may need to be able to develop and test while disconnected from the Internet.
- Other options that allow local execution of in-progress work might add unwanted dependencies and code that shouldn't be deployed to any real environment. The best example I know of is the chalice package, which behaves much like FastAPI or Flask, but also implies that chalice will be used to deploy the code.
The common key point is that deployment of the in-flight work is not practical, or even possible. In those cases, the idea of being able to run a local application/API process while working on the code is still very desirable. That is what I'm trying to accomplish.
So, with that out of the way, and with a recent review of Python's decorators and how they actually work still fresh in my mind after the previous post, here's where I picked it back up again. First, I came to the conclusion that in order to really establish that I was doing what I wanted to, I needed a bit more variety in the application/API model that I was going to work with. I landed on having three basic object-types represented (person, place and thing), and was mainly focused on the decorators that support CRUD operations for an API. From an application-centric perspective, generating and sending web pages as responses, Create and Read would be the only operations needed. Putting all of that together, the endpoints, their operations, the relevant HTTP methods, and the corresponding Lambda code paths are summarized in the table below.
| HTTP Method | CRUD Operation | Endpoint Path | Lambda Code Path (examples/src/…) |
|---|---|---|---|
| POST | create | /person | people/crud_operations.py |
| POST | create | /place | places/crud_operations.py |
| POST | create | /thing | things/crud_operations.py |
| GET | read | /person/{oid} | people/crud_operations.py |
| GET | read | /people | people/crud_operations.py |
| GET | read | /place/{oid} | places/crud_operations.py |
| GET | read | /places | places/crud_operations.py |
| GET | read | /thing/{oid} | things/crud_operations.py |
| GET | read | /things | things/crud_operations.py |
| PATCH | update | /person/{oid} | people/crud_operations.py |
| PATCH | update | /place/{oid} | places/crud_operations.py |
| PATCH | update | /thing/{oid} | things/crud_operations.py |
| DELETE | delete | /person/{oid} | people/crud_operations.py |
| DELETE | delete | /place/{oid} | places/crud_operations.py |
| DELETE | delete | /thing/{oid} | things/crud_operations.py |

Lambda Function modules are in the examples/src directory of the project's repository.
The individual target Lambda Function handlers in the example I used are very simple: all I expected to need, for now at least, was to be able to observe that the appropriate target function was being called when the local application endpoint was hit. With that in mind, all I really needed to do was set things up to follow my standard logging pattern, described in earlier posts here.
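(For reference, the handler modules below assume module-level imports and a logger along these lines; the real configuration follows the logging pattern from those earlier posts, so treat this as a placeholder sketch rather than the project's actual setup.)

import json
import logging

# Placeholder logger setup; the real pattern is described in the
# earlier logging posts referenced above.
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)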
An example of one of those functions, the read_person handler that would map locally to /v1/person/{oid}/ once the local process is working, is:
def read_person(event, context):
    # HTTP GET handler (for single person items)
    logger.info('Calling read_person')
    try:
        logger.debug(f'event ..... {event}')
        _body = event.get('body')
        body = json.loads(_body) if _body else None
        logger.debug(f'body ...... {body}')
        logger.debug(f'context ... {context}')
        result = {
            'statusCode': 200,
            'body': 'read_person completed successfully'
        }
        logger.debug(f'result .... {result}')
    except Exception as error:
        result = {
            'statusCode': 500,
            'body': f'read_person raised '
            f'{error.__class__.__name__}'
        }
        logger.exception(f'{error.__class__.__name__}: {error}')
    logger.info('read_person completed')
    logger.debug(f'<= {json.dumps(event)}')
    logger.debug(f'=> {json.dumps(result)}')
    return result
In this example, since all the handler returns in its body is a string, all I would expect back from a browser request to that endpoint on the local application is:

read_person completed successfully
To get to that point, there are a few steps involved. In a pure FastAPI implementation, any incoming request is picked up by FastAPI's standard request-detection processes, and a Request object is created and made available to read request data from, provided that the Request class is imported. The function that actually handles a request is mapped with a decorator provided by a FastAPI application object. An example of a typical handler for the same /v1/person/{oid}/ endpoint might look something like this:
from fastapi import FastAPI, Request

# ...

app = FastAPI()

# ...

@app.get('/v1/person/{oid}/')
def get_person(request: Request) -> dict:
    """GET handler for person requests"""
    logger.debug(f'Calling {__name__}.get_person:')
    logger.debug(f'variables: {pformat(vars())}')
    try:
        result = {
            'statusCode': 200,
            'body': 'get_person completed successfully'
        }
    except Exception as error:
        msg = (
            f'get_person raised an exception: '
            f'{error.__class__.__name__}'
        )
        logger.exception(msg)
        result = {
            'statusCode': 500,
            'body': msg
        }
    finally:
        return result
I started my implementation using FastAPI for one very simple reason: its decorator structure more closely mirrors the sort of resource definitions for Lambda Functions that are called by an API Gateway instance, as defined in a SAM template structure. An (incomplete) example of that sort of templated function declaration might look something like this for the same /v1/person/{oid}/ endpoint, calling the read_person function shown above:
Type: AWS::Serverless::Function
Properties:
  # The (relative) path from the template to the
  # directory where the function's module can be found
  CodeUri: ../src/people
  # The namespace of the function, in the form
  # module_name.function_name
  Handler: crud_operations.read_person
  Events:
    ApiEvent:
      Type: Api
      Properties:
        Method: get
        Path: /v1/person/{oid}/
        RestApiId:
          Ref: SomeApiIdentifier
  # Other properties that might be of use later, but not today
  # Description: String
  # Environment: Environment
  # FunctionName: String
  # MemorySize: Integer
  # Timeout: Integer
This template structure provides all of the information needed to set up the local route: the Handler is just an import-capable namespace (module_name.function_name), and the Method and Path under Events.ApiEvent.Properties indicate the HTTP method and the API path that would be used in a FastAPI decorator to define the function to be called when that method/endpoint combination receives a request.
Goal: Right now, the entire mapping process is manual, but knowing that a SAM template provides those values is a solid step towards eventually automating the process. Somewhere down the line, I plan to write a command-line tool that will read a SAM template (and maybe later a CloudFormation template), and automatically generate the relevant local API mappings.
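As a rough illustration of what that automation might involve, here is a minimal sketch, not part of the package, that pulls the Method, Path, and Handler values out of a SAM template with PyYAML; the extract_routes name and function are hypothetical. It also assumes the long-form Ref: syntax shown above, since templates using short-form intrinsics like !Ref would need a custom YAML loader.

import yaml  # PyYAML; assumed to be available for this sketch


def extract_routes(template_path: str) -> list[tuple[str, str, str]]:
    """Return (method, path, handler) tuples for every Api event found."""
    with open(template_path) as template_file:
        template = yaml.safe_load(template_file)
    routes = []
    for resource in template.get('Resources', {}).values():
        # Only Serverless Function resources carry the values we need
        if resource.get('Type') != 'AWS::Serverless::Function':
            continue
        properties = resource.get('Properties', {})
        handler = properties.get('Handler')
        for event in properties.get('Events', {}).values():
            if event.get('Type') != 'Api':
                continue
            event_props = event.get('Properties', {})
            routes.append(
                (event_props['Method'], event_props['Path'], handler)
            )
    return routes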
Back to the process flow! My immediate goal, then, was to provide an override for the various FastAPI HTTP-method decorators (for example, its get decorator) that would accept a handler-function, or a namespace representation of one, in addition to its existing path specification, and route requests to that path to the specified handler-function. Along the way, it would need to convert the FastAPI Request object's data into a Lambda Proxy Input event and a LambdaContext object that could be passed to the target handler-function. A high-level outline of the code structure, using the get decorator again, shows the basic processes involved and the parameters used:
def get(
    self,
    path: str,
    external_function: LambdaSpec | str,
    *args,
    **kwargs
) -> Callable:
    """
    Overrides the parent (FastAPI) decorator of the same name, to
    allow the specification of an external function that will be
    used to handle requests.

    Parameters:
    -----------
    path : str
        The path argument to be used in calling the parent class'
        method that this method overrides.
    external_function : LambdaSpec | str
        The "external" function to be wrapped and returned by the
        method
    *args : Any
        Any additional positional or listed arguments to be used
        in calling the parent class' method that this method
        overrides.
    **kwargs : Any
        Any keyword or keyword-only arguments to be used in
        calling the parent class' method that this method
        overrides.
    """
    # At this layer, we're just getting the arguments passed.
    # Resolve the external function
    if isinstance(external_function, str):
        target_function = get_external_function(
            external_function
        )
    elif callable(external_function):
        target_function = external_function
    else:
        raise TypeError()

    def _wrapper(target: Callable | None = None):
        """
        The initial function returned by the decoration process,
        which will be called by the Python runtime with the
        target function it is decorating, if used as a decorator.
        """
        # At this level, we're retrieving the target function
        # that is being decorated, if one was provided.
        # Handle async vs. sync functions based on FastAPI's
        # apparent preferences and the target function, if one
        # has been provided
        if iscoroutinefunction(target):
            async def _replacer(request: Request):
                """
                An async version of the function that will be
                returned, replacing a decorator target where
                applicable.
                """
        else:
            def _replacer(request: Request):
                """
                A sync version of the function that will be
                returned, replacing a decorator target where
                applicable.
                """
        # Call the original decorator to keep all the things
        # it does
        new_function = _FastAPI.get(
            self, path, *args, **kwargs
        )(_replacer)
        return new_function

    return _wrapper
At the outermost layer of the decorator structure, the external_function that is passed in the decorator arguments can either be a function imported earlier, or the namespace of an importable function, the same sort of string-value noted in the SAM template example shown above.
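Resolving such a namespace string is the job of the package's get_external_function helper; conceptually it boils down to something like the sketch below, where resolve_handler is an illustrative stand-in rather than the package's actual implementation.

from importlib import import_module


def resolve_handler(namespace: str):
    """Resolve a 'package.module.function' string to the callable it names.

    Illustrative only; the package's get_external_function may differ.
    """
    module_name, _, function_name = namespace.rpartition('.')
    module = import_module(module_name)
    return getattr(module, function_name)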
The next layer in, _wrapper, is responsible for retrieving the target function that the decorator is decorating. It also accepts a None value, for reasons that I'll dig into in more detail later. The _wrapper is also responsible for defining the function that will be returned to replace the decoration target. That function, _replacer, is created on the fly inside the decorator's closure, and will be either an async function or a normal synchronous one, depending on whether the decorator target is async or not.
Note: FastAPI looks like it prefers async functions, though it supports both sync and async. Trying to account for all the potential crossovers between sync and async functions doesn't feel necessary, since the purpose of this package is to provide a developer convenience tool that allows them to work on local Lambda Functions and execute them with local HTTP requests.
Once everything is figured out, the _replacer function is passed to the original, standard FastAPI decorator, and the result of that, new_function, is returned. The complete code is far too long to reproduce here (I don't want to overload the reader with a wall of code), but anyone who is curious can look at it in detail in the project repository.
This post has already gotten much longer than I'd anticipated, and I have other things that I want to show, so rather than dive
into the various helper functions that are called in the get
decorator above, I'll refer the reader to their entries
in the project's repository as well:
- _extract_form_data — An async function that extracts form-data from a FastAPI Request object.
- _lambda_result_to_response — Converts a Lambda Proxy Integration response-dictionary into a FastAPI Response object.
- _request_to_lambda_signature — Converts a FastAPI Request object's data into a Lambda-ready event dictionary and LambdaContext object.
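To give a feel for what that last conversion involves, here is a rough, hypothetical sketch of the kind of event dictionary a FastAPI Request can be turned into. It is not the package's actual _request_to_lambda_signature, and a real Lambda Proxy event carries additional fields (requestContext, multi-value headers, and so on):

from fastapi import Request


async def request_to_lambda_event(request: Request) -> dict:
    """Build a minimal Lambda-Proxy-style event dict from a Request.

    Illustrative only; the package's helper also builds a LambdaContext.
    """
    raw_body = await request.body()
    return {
        'httpMethod': request.method,
        'path': request.url.path,
        'headers': dict(request.headers),
        'queryStringParameters': dict(request.query_params) or None,
        'pathParameters': dict(request.path_params) or None,
        'body': raw_body.decode() if raw_body else None,
        'isBase64Encoded': False,
    }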
I also created a fairly robust, if very simple, test-harness at local/test-harness.py that exercises all of the initially-supported HTTP methods across all of the resource-types. I'll end this post with a couple of log-dumps from the current version (v.0.0.3, after making a couple of minor corrections to a few items). First, the successful routing set-up for the decorator-based FastAPI mapping example:
pipenv run uvicorn --port 5000 app:app
[INFO] Using sync replacer for delete_person.
[INFO] Returning _replacer at 0x106c88680 to decorate delete_person.
[INFO] Using sync replacer for get_person.
[INFO] Returning _replacer at 0x106c88720 to decorate get_person.
[INFO] Using sync replacer for get_people.
[INFO] Returning _replacer at 0x106c88a40 to decorate get_people.
[INFO] Using sync replacer for patch_person.
[INFO] Returning _replacer at 0x106c88d60 to decorate patch_person.
[INFO] Using sync replacer for post_person.
[INFO] Returning _replacer at 0x106c89080 to decorate post_person.
[INFO] Using sync replacer for put_person.
[INFO] Returning _replacer at 0x106c893a0 to decorate put_person.
[INFO] Using sync replacer for delete_place.
[INFO] Returning _replacer at 0x106c89d00 to decorate delete_place.
[INFO] Using sync replacer for get_place.
[INFO] Returning _replacer at 0x106c89da0 to decorate get_place.
[INFO] Using sync replacer for get_places.
[INFO] Returning _replacer at 0x106c8a0c0 to decorate get_places.
[INFO] Using sync replacer for patch_place.
[INFO] Returning _replacer at 0x106c8a3e0 to decorate patch_place.
[INFO] Using sync replacer for post_place.
[INFO] Returning _replacer at 0x106c8a700 to decorate post_place.
[INFO] Using sync replacer for put_place.
[INFO] Returning _replacer at 0x106c8aa20 to decorate put_place.
[INFO] Using sync replacer for delete_thing.
[INFO] Returning _replacer at 0x106c8b420 to decorate delete_thing.
[INFO] Using sync replacer for get_thing.
[INFO] Returning _replacer at 0x106c8b4c0 to decorate get_thing.
[INFO] Using sync replacer for get_things.
[INFO] Returning _replacer at 0x106c8b7e0 to decorate get_things.
[INFO] Using sync replacer for patch_thing.
[INFO] Returning _replacer at 0x106c8bb00 to decorate patch_thing.
[INFO] Using sync replacer for post_thing.
[INFO] Returning _replacer at 0x106c8be20 to decorate post_thing.
[INFO] Using sync replacer for put_thing.
[INFO] Returning _replacer at 0x106cb0180 to decorate put_thing.
INFO: Started server process [5626]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:5000 (Press CTRL+C to quit)
...and the results of a request made to the /v1/people/
endpoint:
[INFO] Created LambdaContext( [ aws_request_id=99abaa85-ace8-4890-9753-8113be1c1e5a, log_group_name=None, log_stream_name=None, function_name=None, memory_limit_in_mb=None, function_version=None, invoked_function_arn=None, client_context=ClientContext( [custom=None,env=None,client=None] ), identity=CognitoIdentity( [ cognito_identity_id=None cognito_identity_pool_id=None ] ) ] ) with 900000 ms remaining before timeout.
[app] [INFO] Created LambdaContext(...).
[INFO] _request_to_lambda_signature completed
[INFO] Calling read_people
[INFO] read_people completed
[INFO] Calling _request_to_lambda_signature:
[INFO] Returning <starlette.responses.Response object at 0x106ce3790>
INFO: 127.0.0.1:54456 - "GET /v1/people/ HTTP/1.1" 200 OK
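For anyone who wants to poke at a running instance themselves, the same request can be made from Python (or any HTTP client). This assumes the uvicorn process above is listening on port 5000 and that the requests package is available; it is not a dependency of the package itself.

import requests

# Hit the locally-routed endpoint served by the uvicorn process above
response = requests.get('http://127.0.0.1:5000/v1/people/')
print(response.status_code)  # Expected: 200
print(response.text)         # Expected: read_people completed successfully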
As a final consideration, I'd also point out that the examples/local-fastapi/app-funcs-only.py
module, which defines the endpoint mappings like this, with no functions being decorated, also works:
# Endpoint-Handler Functions
app.delete(
    '/v1/person/{oid}/', 'people.crud_operations.delete_person'
)()
app.get(
    '/v1/person/{oid}/', 'people.crud_operations.read_person'
)()
app.get(
    '/v1/people/', 'people.crud_operations.read_people'
)()
app.patch(
    '/v1/person/{oid}/', 'people.crud_operations.update_person'
)()
app.post(
    '/v1/person/', 'people.crud_operations.create_person'
)()
app.put(
    '/v1/person/{oid}/', 'people.crud_operations.update_person'
)()
app.delete(
    '/v1/place/{oid}/', 'places.crud_operations.delete_place'
)()
app.get(
    '/v1/place/{oid}/', 'places.crud_operations.read_place'
)()
app.get(
    '/v1/places/', 'places.crud_operations.read_places'
)()
app.patch(
    '/v1/place/{oid}/', 'places.crud_operations.update_place'
)()
app.post(
    '/v1/place/', 'places.crud_operations.create_place'
)()
app.put(
    '/v1/place/{oid}/', 'places.crud_operations.update_place'
)()
app.delete(
    '/v1/thing/{oid}/', 'things.crud_operations.delete_thing'
)()
app.get(
    '/v1/thing/{oid}/', 'things.crud_operations.read_thing'
)()
app.get(
    '/v1/things/', 'things.crud_operations.read_things'
)()
app.patch(
    '/v1/thing/{oid}/', 'things.crud_operations.update_thing'
)()
app.post(
    '/v1/thing/', 'things.crud_operations.create_thing'
)()
app.put(
    '/v1/thing/{oid}/', 'things.crud_operations.update_thing'
)()
This package isn't complete by any stretch of the imagination, but it already covers the majority of what I had in mind.