Objects
BaseInput
@dataclass
class BaseInput()
Dataclass for lambda input
Attributes:
- `prefix` (str) - Input path to the target raster data.
- `output_directory` (str, optional) - Specify an output directory.
- `basename` (str, optional) - A name prefix for file objects.
- `bucket` (str, optional) - Input bucket name to read from and write to.
- `tilesize` (int, optional) - Input tile pixel dimension. Defaults to 5000.
- `batchsize` (int | str, optional) - Input number of tiles per iteration. Use "all" to create a single batch with all items in one JSON. Defaults to 0.
Raises:
- `NotImplementedError` - when trying to tile tiles.
- `FileNotFoundError` - when tiles are not found.
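The attributes above can be sketched as a dataclass. This is an illustrative reconstruction from the documented fields, not the actual source; the defaults for the optional string fields are assumptions.

```python
from dataclasses import dataclass
from typing import Union


@dataclass
class BaseInput:
    """Sketch of the lambda input dataclass (field defaults assumed)."""
    prefix: str                           # input path to target raster data
    output_directory: str = ""            # optional output directory
    basename: str = ""                    # optional name prefix for file objects
    bucket: str = ""                      # optional bucket to read from and write to
    tilesize: int = 5000                  # tile pixel dimension, defaults to 5000
    batchsize: Union[int, str] = 0        # tiles per iteration; "all" for one batch
```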
Functions
lambda_handler
@tracer.capture_lambda_handler
@logger.inject_lambda_context()
def lambda_handler(event: dict, context: dict) -> List[str] | List[dict]
Lambda event handler for controlling task generation in the step function. This function is essentially a batcher that constructs a number of state events based on the batch size.
Arguments:
- `event` (dict) - Should fit the `BaseInput` schema.
- `context` (dict) - Lambda context object.
Returns:
List[str] | List[dict]: The output array of task descriptions for the next state: either an array of objects holding the search window and the output tile name, or an array of output S3 payloads.
Examples:
```json
{
  "prefix": "foo/bar.tiff",
  "output_directory": "foo/bar/",
  "basename": "some_name",
  "bucket": "some-bucket",
  "tilesize": 5000,
  "batchsize": 50
}
```
Notes:
.. caution:: The task controller has two modes: with and without (= 0) a batchsize. If the step function goes into a catch error state, the batchsize will be reset to its default. Setting a batchsize when first invoking the event does nothing unless the step function enters that state. Setting a batchsize of "all" instructs the lambda to create a single JSON with all tiles written to it, which is useful for EC2 multiprocessing.
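The batching behaviour described in the caution above can be sketched as follows. This is a hypothetical helper for illustration, not the actual handler: `batch_tasks` and its signature are assumptions based on the documented `batchsize` semantics.

```python
from typing import List, Union


def batch_tasks(tasks: List[dict], batchsize: Union[int, str]) -> List[List[dict]]:
    """Split tile task descriptions into batches per the batchsize setting."""
    if batchsize == "all":
        # a single batch with every task in one JSON (useful for EC2 multiprocessing)
        return [tasks]
    if not batchsize:
        # batchsize = 0: no batching, one task per state event
        return [[task] for task in tasks]
    # fixed-size batches of `batchsize` tasks each
    return [tasks[i:i + batchsize] for i in range(0, len(tasks), batchsize)]
```

For example, five tasks with `batchsize=2` yield three batches, while `batchsize="all"` yields one.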