[RFC] Runtime crates refactor #53
Quick feedback on this as well: I generally like the idea. (How much these types of optimizations are really desirable or necessary, I will leave up to you. Personally, when I need that kind of sustained high-throughput performance I tend to opt for something other than serverless, but I could also imagine folks wanting their lambdas to be as efficient as possible.)

For instance, the proposed trait could be transformed into:

```rust
pub trait Handler<Error>
where
    Error: Display + ErrorExt + Send + Sync,
{
    type Output: Buf; // alternatively, type Output: AsRef<[u8]>

    fn run(&mut self, event: &[u8], ctx: Context) -> Result<Self::Output, Error>;
}
```

This allows for a couple of things:
All good suggestions @srijs. I don't think the runtime needs to maintain ownership of the input buffer; however, for the sake of fewer allocations, it makes sense to go down this path. I'll update the RFC with your suggestions before implementing.
One more update from the error handling RFC: we will add one additional crate, called
It's awesome you folks are bringing the RFC process to this project. Love it!
Can we quantify what is meant by that? I might start out with a few concrete cases. I feel like the tokio crate originally went down the route of disassembling its functionality into separate micro crates without much thought to the counterbalancing drawbacks ⚖️. In practice, that made it more difficult for users to reassemble those crates for basic cases. Since then the tokio ecosystem has pivoted a bit and collected functionality into a crate that's more user-friendly to consume. I'd base the decision to break this into more crates on a bit more experience, backed up by a few concrete use cases. In other words, make sure you understand the problem before working on a solution :)
I'm actually not sure how common that case is, or if it's actually an anti-pattern to optimize for other problems (which haven't surfaced clearly yet). Multi-purpose functions, for instance, lend themselves toward requiring a combined set of IAM permissions. My impression of Lambda's security best practices is the opposite: limit a function's purpose as well as its required IAM permissions. Multiplexing many functions within a single function seems like it may be users transitioning into the Lambda world trying to map the semantics of how you traditionally design an application onto a function-centric model. The shapes eventually won't line up well. If you really wanted to, you can do that today by implementing a function based on

All that said, I'm sympathetic to making it possible, even if not recommended. When making it possible, consider the tradeoffs. I empathize with @srijs's comment above on balancing features added for performance-optimization reasons, which make the system harder to understand, against optimizing for performance. Similar to the comment above, look for the balancing weights ⚖️: what are you trading off in one approach for the other, and is that going to be worth it for actual use cases, etc.? My understanding is that currently, if your service is performance-critical enough to not be able to withstand allocating a

I want to make one last comment that's less specific to this GitHub issue and more toward the future RFC process for this project. I'd put some minimal effort into setting expectations, specifically so that these don't drag on. Rust has processes for this, but I think this project wouldn't require all of them. I'd try to outline the following to set expectations and have them met, to ensure the best open source contributor experience. Here are some things I'd look for:
Many of these were captured above but could stand to be called out in some scannable format. The last few items could be more clear.
Thanks for the feedback. We are learning how to best manage this as we go along.
We definitely do not want to fragment the library too much and force developers to discover multiple dependencies. Developers who simply want to handle one event will still be able to include only the runtime crate as a dependency; it just so happens that the runtime crate will itself depend on the core crate and simply act as a light wrapper around the main core handler.
As I mention in the issue, this is seldom recommended: the first drawback is that you cannot deploy code that handles the different events independently. However, there are cases where this is valid, for example a Lambda function that handles both API Gateway and ALB proxy events, or a function that supports both HTTP events and a special health-check event type. Further, while ideally we'd want to split our code into single-purpose Lambda functions, in the real world it's often convenient to have a single Lambda function handle multiple events: it reduces the number of concurrent requests on the system and therefore the occurrence of cold starts.
Simplicity for developers is definitely a guiding tenet for us in all changes to this library. This refactor is purely to make it easier for ourselves to extend the library in the future and for maintainers of other libraries to integrate easily. One example was the rusoto crate taking a dependency on the
Thanks for the feedback, it all makes sense. We'll be more "formulaic" in future RFCs.
Changes are merged and a new release is making its way to crates.io
At the moment the AWS Lambda Rust runtime is composed of two crates: the Runtime Client and the Runtime. The runtime client implements a client SDK for the Lambda runtime APIs, taking care of all the communication with the runtime endpoint to receive new events and to send responses and errors. The runtime implements the "main loop" of a function's lifecycle: call `/next` to receive the next event, invoke the handler, and then post responses or errors to the relevant endpoints. The runtime client SDK only deals with byte slices and does not understand types; the runtime crate, on the other hand, expects the event and response types to be serializable/deserializable with Serde.

We have received requests to handle multiple event types in the same Lambda function (#29) as well as implementations that abstract the event into a well-known type in Rust, such as the HTTP crate (#18). To make it easier to extend the runtime and support future use cases, we propose to split the Rust Lambda runtime into three separate crates:

the `/next` endpoint and posting responses. Expects a handler that only deals with byte slices. The runtime-core crate will declare this handler type:
The typed runtime will declare the same handler type as now and wrap the run method to perform the serialization/deserialization before passing data to the core crate.