I tend to start with a type-checked language (usually Scala) since the AWS libraries have a lot more structure than the arbitrary dicts-of-lists used in JS, Python, etc. (I've written AWS code in those too, but it's not my preference). One annoyance is that JVM-based Lambdas can take a while to 'spin up' (cold starts), compared to "slow" languages like Python. Ideally Lambda would support (with native SDKs) some well-typed languages which don't rely on runtime behemoths like the JVM (e.g. Rust, Haskell, StandardML, etc.).
I try to use the AWS 'resource API' rather than 'service API', since it's usually easier to understand. The latter can do anything, but deals with fiddly 'Request' and 'Response' values; the former isn't as expansive, but provides high-level things like 'Tables', 'Buckets', etc.
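For DynamoDB the contrast looks roughly like this (a sketch assuming the v1 Java SDK, where the document API plays the higher-level role, and a hypothetical 'users' table):

```scala
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder
import com.amazonaws.services.dynamodbv2.document.DynamoDB
import com.amazonaws.services.dynamodbv2.model.{AttributeValue, GetItemRequest}
import scala.jdk.CollectionConverters._

object ApiComparison {
  val client = AmazonDynamoDBClientBuilder.defaultClient()

  // 'Service API': build a GetItemRequest, pick apart the GetItemResult by hand
  val lowLevel = client.getItem(
    new GetItemRequest()
      .withTableName("users") // hypothetical table name
      .withKey(Map("userId" -> new AttributeValue("123")).asJava))

  // Higher-level document API: work with a Table and Items directly
  val table = new DynamoDB(client).getTable("users")
  val item  = table.getItem("userId", "123")
}
```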
I wrap all calls to AWS in a failure mechanism, and check for nulls immediately. I usually use Scala's `Try[T]` type, which is essentially `Either[Exception, T]`. Note that there are some cases where null is expected, like an empty DynamoDB.get result. Those should be turned into `Try[Option[T]]` values immediately.
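A minimal sketch of that wrapping (assuming the document API's `Table` and a hypothetical 'userId' key): `Option(...)` turns an expected null into `None`, and `Try` catches anything thrown.

```scala
import scala.util.Try
import com.amazonaws.services.dynamodbv2.document.{Item, Table}

object Users {
  // null (a missing row) becomes None; thrown exceptions become Failure
  def getUser(table: Table, id: String): Try[Option[Item]] =
    Try(Option(table.getItem("userId", id)))
}
```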
I'll aggressively simplify the interface that an application depends on. For example, a Lambda might be completely independent of DynamoDB except for a single function like:
`put: Row => Try[Unit]`
Even things which are more complicated, like range queries with conditional bells and whistles, can be hidden behind reasonably simple interfaces. In particular, the application logic should not instantiate AWS clients, parse results, etc. That should all be handled separately.
I'll usually wrap these simple type signatures in an interface, with an AWS-backed implementation and a stub implementation for testing (usually little more than a HashMap). These stubs can usually be put in a shared library and re-used across projects.
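As a sketch (the `Row`, `UserTable` and attribute names here are made up for illustration, not from any particular project):

```scala
import scala.collection.concurrent.TrieMap
import scala.util.Try
import com.amazonaws.services.dynamodbv2.document.{Item, Table}

final case class Row(id: String, name: String)

trait UserTable {
  def put(row: Row): Try[Unit]
  def get(id: String): Try[Option[Row]]
}

// AWS-backed implementation: the only place that touches DynamoDB types
final class DynamoUserTable(table: Table) extends UserTable {
  def put(row: Row): Try[Unit] = Try {
    table.putItem(new Item().withPrimaryKey("id", row.id).withString("name", row.name))
    ()
  }
  def get(id: String): Try[Option[Row]] = Try {
    Option(table.getItem("id", id)).map(i => Row(i.getString("id"), i.getString("name")))
  }
}

// Stub implementation for tests: little more than a HashMap
final class InMemoryUserTable extends UserTable {
  private val data = TrieMap.empty[String, Row]
  def put(row: Row): Try[Unit]          = Try { data(row.id) = row; () }
  def get(id: String): Try[Option[Row]] = Try(data.get(id))
}
```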
My current approach to dependency injection is to dynamically bind the overridable part (e.g. using a scala.util.DynamicVariable). This can be hidden behind a nicer API. The "real" AWS-backed version is bound by default; usually wrapped in a `Try` or `Future`, to prevent failures from affecting anything else.
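Roughly like this (a sketch reusing the hypothetical `UserTable` pieces above):

```scala
import scala.util.DynamicVariable
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder
import com.amazonaws.services.dynamodbv2.document.DynamoDB

object Deps {
  // Bound to the "real" AWS-backed implementation by default
  val userTable: DynamicVariable[UserTable] = new DynamicVariable(
    new DynamoUserTable(
      new DynamoDB(AmazonDynamoDBClientBuilder.defaultClient()).getTable("users")))
}

object App {
  // Application logic only ever sees the binding, never DynamoDB itself
  def register(row: Row): scala.util.Try[Unit] = Deps.userTable.value.put(row)
}
```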
All business/application logic is written against this nice, simple API. Tests can re-bind whatever they need, e.g. swapping out the UserTable with a HashMap stub.
I tend to use property-based tests, like ScalaCheck, since they're good at finding edge-cases, and don't require us to invent test data by hand.
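For example, a round-trip property against the stub (a sketch assuming the hypothetical `Row`/`Deps`/`InMemoryUserTable` pieces above):

```scala
import org.scalacheck.{Gen, Properties}
import org.scalacheck.Prop.forAll
import scala.util.Success

object UserTableProps extends Properties("UserTable") {
  // Generate rows instead of inventing test data by hand
  val rows: Gen[Row] = for {
    id   <- Gen.identifier
    name <- Gen.alphaStr
  } yield Row(id, name)

  property("get returns what put stored") = forAll(rows) { row =>
    // Re-bind the dependency to the in-memory stub for this check
    Deps.userTable.withValue(new InMemoryUserTable) {
      Deps.userTable.value.put(row)
      Deps.userTable.value.get(row.id) == Success(Some(row))
    }
  }
}
```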
For each project I'll usually write a "healthcheck" lambda. This returns various information about the system, e.g. counting database rows, last-accessed times, etc. as well as performing integration tests (e.g. test queries) to check that the AWS-backed implementations work, that we can connect to all the needed systems, etc.
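The overall shape might look like this (a sketch assuming the standard `RequestHandler` interface from aws-lambda-java-core; the individual checks are hypothetical):

```scala
import com.amazonaws.services.lambda.runtime.{Context, RequestHandler}
import scala.jdk.CollectionConverters._
import scala.util.Try

class HealthCheck extends RequestHandler[java.util.Map[String, String], java.util.Map[String, String]] {
  // Each check is wrapped in Try, so one failing dependency doesn't hide the others
  private def checks: Map[String, Try[String]] = Map(
    "userTable.get" -> Deps.userTable.value.get("healthcheck-probe").map(_.toString),
    "userTable.put" -> Deps.userTable.value.put(Row("healthcheck-probe", "ping")).map(_ => "ok")
  )

  override def handleRequest(input: java.util.Map[String, String], ctx: Context): java.util.Map[String, String] =
    checks.map { case (name, result) => name -> result.toString }.asJava
}
```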