ft: request parser #100
base: master
Conversation
I think it's a good start, but there are quite a few things I would change. Let me know what you think.

In essence, I think we should make this more flexible: there are too many content types to hardcode like this (I know that the current version does not support all content types, but it would be a nice feature). E.g., put it this way: with the suggested API, how do I ingest XML?

I think what we need is an internal map of `content-type` headers to parsers; by default we'd provide the 2-3 we have here. Then, internally, the `parse` method (if we have such an API, do we need `parse_json`, `parse_multipart`, etc.? Can't we just have `parse`?) fetches the appropriate parser from the map based on the request `content-type`.

I'm talking about a map as in mapper (if you want to use it, don't depend on it, just copy the code). We'd have an internal map and expose a function to allow users to add entries to the map or overwrite them.
```r
# pseudo code
m <- map()

# internal defaults
m$set("application/json", parse_json)
m$set("multipart/form-data", parse_multipart)
# ...

# can't think of a better name for the function right now :(
set_parser <- function(content_type, callback) {
  m$set(content_type, callback)
}

# this is the parse method...
parse <- \() {
  fn <- m$get(req$CONTENT_TYPE)
  fn(req)
}
```
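To make the sketch above runnable, here is a minimal self-contained version that uses a plain environment as the map, so nothing depends on {mapper}. The names `set_parser()` and `parse_body()` are illustrative only, and reading the body via `req$rook.input$read()` is an assumption about how the request exposes its body:

```r
# A self-contained sketch of a content-type -> parser registry, using a plain
# environment as the map so there is no dependency on {mapper}.
.parsers <- new.env(parent = emptyenv())

# exported: add or overwrite the parser for a given content type
set_parser <- function(content_type, callback) {
  assign(tolower(content_type), callback, envir = .parsers)
}

# internal default; the body is only an example of a JSON parser
# (assumes the raw Rook input is exposed on the request object)
set_parser("application/json", function(req) {
  jsonlite::fromJSON(rawToChar(req$rook.input$read()))
})

# single parse entry point: dispatch on the request's content type
parse_body <- function(req) {
  type <- req$CONTENT_TYPE
  if (is.null(type)) type <- ""
  # drop parameters such as "; charset=utf-8" before the lookup
  type <- tolower(sub(";.*$", "", type))
  fn <- get0(type, envir = .parsers, ifnotfound = NULL)
  if (is.null(fn)) {
    stop("No parser registered for content type: ", type)
  }
  fn(req)
}
```

With something like this in place, XML support is a one-liner on the user's side, e.g. `set_parser("application/xml", \(req) xml2::read_xml(rawToChar(req$rook.input$read())))`, and the framework never has to know about XML.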
"multipart/form-data", | ||
"application/x-www-form-urlencoded" | ||
) | ||
content_type <- if (is.null(content_type)) { |
I don't understand this line. What we should parse is always `req$CONTENT_TYPE`, and we should use the parser that matches the content type, so why the `if` and `match.arg`?
A common use case is when `req$CONTENT_TYPE` is "text/plain" but you want it parsed as "application/json". The `if` statement ensures that, if you've specified the content type, it's one of the 3 supported ones; otherwise it defaults to `req$CONTENT_TYPE`.
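For reference, the validate-or-fall-back behaviour described here, reconstructed from the diff excerpts in this thread, looks roughly like this (the first element of `content_type_choices` is assumed to be `"application/json"`, in line with the three types discussed):

```r
content_type_choices <- c(
  "application/json",
  "multipart/form-data",
  "application/x-www-form-urlencoded"
)

# if the caller did not specify a content type, trust the request header;
# otherwise make sure the override is one of the supported types
content_type <- if (is.null(content_type)) {
  req$CONTENT_TYPE
} else {
  match.arg(content_type, choices = content_type_choices)
}
```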
```r
  return(list())
}

content_type_choices <- c(
```
I wouldn't hardcode that. I know that in the current version it is technically hardcoded, but how would I then support XML? Note that there are too many content types to hardcode them all.
That's a valid concern. My thinking was for `ambiorix::parse_req()` to only support the 3 most common MIME types.

From a dev user's point of view, it would be awesome if one could overwrite the default mapper and define which parsers to use with which content types.
Doesn't this take us back to this comment from yesterday? Users will typically know the expected content type for a given endpoint, so allowing users to globally specify custom parsers is effectively the same as defining a parser themselves and calling it in the request handler. That said, I'm probably overlooking something, so I'd love to hear your thoughts.
I was thinking of something like `options(...)`.
I've implemented that in the PR, but only for the 3 MIME types. Have a look at the "Overriding default parsers" section.
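Purely to illustrate the `options(...)` idea, an override mechanism could look something like the sketch below. The option name `ambiorix.parsers` and the handler signature (a function of the raw body text) are hypothetical, not the API actually implemented in this PR:

```r
# Hypothetical: "ambiorix.parsers" and the handler signature are made up
# for illustration of an options()-based override.
options(
  ambiorix.parsers = list(
    # treat plain-text bodies as JSON for this app
    "text/plain" = function(body) jsonlite::fromJSON(body),
    # user-defined XML support
    "application/xml" = function(body) xml2::read_xml(body)
  )
)

# a framework-side lookup could then be as simple as:
get_parser <- function(content_type) {
  parsers <- getOption("ambiorix.parsers", default = list())
  parsers[[content_type]]
}
```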
I don't fully follow the proposed solution. It's really rather confusing; passing an instance of a class to a function is really strange. Note: if we use {yyjsonr} for the parser, can we switch everything to use this package? Everywhere else we use {jsonlite}.
Ahaa, I see where the misunderstanding is coming from.
If we want to keep them in, we should hardcode the content type. Also note that this may break quite a bit of existing code, because currently:

```r
parse_json <- function(req, ...){
  parse_req(req, content_type = "application/json", ...)
}
```

Honestly, I think it should be the other way around (but don't bother yet and read on). Anyway, if the end goal is to use … it just feels too weird to me, I apologise.
In essence, why add …? The point of the function is to allow the user to be able to ignore this messy …. Then why …? Sorry, I'm just really confused by …. I would remove ….
You raise good arguments there, and tbh I had not foreseen the impact of making the API murkier. The user can create …. At this point I'm not even sure/convinced of the suggested new parsers: …. I have thought of your initial suggestion of …. What would you expect …? Ultimately, from where I stand now, I don't think this whole business of adding parsers to ambiorix is a good idea, so I'll just let this PR sit for a while as I mull over it.
We may need some examples of the final code we expect for raw, JSON (supported, configurable), and XML (user-defined). I think we all have different final user code in mind.
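One possible shape for such final user code, sketched purely for discussion: `parse_req()` is the function proposed in this PR, `set_parser()` is the hypothetical registration helper discussed above, and reading the raw body via `req$rook.input$read()` is an assumption about the request object:

```r
library(ambiorix)

app <- Ambiorix$new()

# JSON: supported and configurable via parse_req() (proposed in this PR)
app$post("/json", \(req, res) {
  body <- parse_req(req, content_type = "application/json")
  res$json(body)
})

# raw: no parser involved, the handler reads the body itself
# (assumes the raw Rook input is exposed on the request object)
app$post("/raw", \(req, res) {
  res$send(rawToChar(req$rook.input$read()))
})

# XML: user-defined parser registered once (set_parser() is hypothetical)
set_parser("application/xml", \(req) {
  xml2::read_xml(rawToChar(req$rook.input$read()))
})
app$post("/xml", \(req, res) {
  doc <- parse_req(req)  # would dispatch to the registered XML parser
  res$send(as.character(doc))
})

app$start()
```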
This initial draft addresses #99 and #95.

- `webutils::parse_query()`, following this comment.
- `{webutils}` & `{yyjsonr}` as dependencies.