## Summary

Fixes #1167

### Time to review: __15 mins__

## Changes proposed

- Added filtering to the search endpoint; includes all but the query box parameter, which has its own follow-up ticket
- Added utilities to help generate the search filter schema
- Added indexes to improve the performance of search (see additional info below for details)
- Extensive additions to the tests
- Added the ability to choose examples in the OpenAPI docs (included an example with no filters, and one with many)
- Fixed a bug in the Paginator for handling counts (will follow up and fix in the template repo)

## Context for reviewers

This change has been extensively tested, both manually and through a large number of new unit tests. As the change was already getting quite large, a few things will be dealt with in follow-up tickets:

* Query filtering: #1455
* Fixing logging formatting: #1466
* Additional order_by fields: #1467

The filters are all `one_of` filters, which means that only one of the supplied values needs to match for a row to pass the where clause (the where clauses literally generate as `where table.column in (1, 2, 3)`). You can see an example query below. The agency filter is a bit odd: I made it a `startswith`-style filter instead, to handle the way agency codes get nested. We may want to adjust this further in the future, but it will at least handle hierarchies of agencies for now.

## Additional information

I extensively tested the performance of the queries we run. I locally loaded ~11k records using our factories (ran the `seed-local-db` script 300 times). With the API running, I made SQLAlchemy output the queries it ran and did an `EXPLAIN ANALYZE ...` on the big ones. I then added several indexes, which improved the performance.
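The `one_of` and `startswith` behavior described above can be sketched in plain Python. This is a simplified illustration only, not the project's actual SQLAlchemy implementation; the `build_where_clauses` helper and the `startswith` key are hypothetical stand-ins:

```python
# Simplified sketch: translate {"field": {"one_of": [...]}} filters into
# SQL-ish WHERE clause fragments. Illustrative only; the real endpoint
# builds these conditions with SQLAlchemy, not string formatting.

def build_where_clauses(filters: dict[str, dict]) -> list[str]:
    clauses = []
    for field, spec in filters.items():
        if "one_of" in spec:
            # Only one of the supplied values needs to match
            values = ", ".join(repr(v) for v in spec["one_of"])
            clauses.append(f"{field} IN ({values})")
        elif "startswith" in spec:
            # The agency filter is a prefix match instead, to handle
            # nested agency codes
            ors = " OR ".join(f"{field} ILIKE '{v}%'" for v in spec["startswith"])
            clauses.append(f"({ors})")
    return clauses


clauses = build_where_clauses(
    {
        "opportunity_status_id": {"one_of": [1, 2]},
        "agency": {"startswith": ["US-ABC", "HHS"]},
    }
)
# e.g. ["opportunity_status_id IN (1, 2)",
#       "(agency ILIKE 'US-ABC%' OR agency ILIKE 'HHS%')"]
```

The generated fragments mirror the shape of the real query shown below.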
The primary query of the API looks like this:

```sql
SELECT
    opportunity.opportunity_id,
    opportunity.opportunity_number,
    opportunity.opportunity_title,
    opportunity.agency,
    opportunity.opportunity_category_id,
    opportunity.category_explanation,
    opportunity.is_draft,
    opportunity.revision_number,
    opportunity.modified_comments,
    opportunity.publisher_user_id,
    opportunity.publisher_profile_id,
    opportunity.created_at,
    opportunity.updated_at
FROM opportunity
JOIN current_opportunity_summary
    ON opportunity.opportunity_id = current_opportunity_summary.opportunity_id
JOIN opportunity_summary
    ON current_opportunity_summary.opportunity_summary_id = opportunity_summary.opportunity_summary_id
JOIN link_opportunity_summary_funding_instrument
    ON opportunity_summary.opportunity_summary_id = link_opportunity_summary_funding_instrument.opportunity_summary_id
JOIN link_opportunity_summary_funding_category
    ON opportunity_summary.opportunity_summary_id = link_opportunity_summary_funding_category.opportunity_summary_id
JOIN link_opportunity_summary_applicant_type
    ON opportunity_summary.opportunity_summary_id = link_opportunity_summary_applicant_type.opportunity_summary_id
WHERE opportunity.is_draft IS FALSE
    AND (EXISTS (
        SELECT 1
        FROM current_opportunity_summary
        WHERE opportunity.opportunity_id = current_opportunity_summary.opportunity_id))
    AND current_opportunity_summary.opportunity_status_id IN (1, 2)
    AND link_opportunity_summary_funding_instrument.funding_instrument_id IN (1, 2)
    AND link_opportunity_summary_funding_category.funding_category_id IN (1, 3, 20)
    AND link_opportunity_summary_applicant_type.applicant_type_id IN (1, 2, 13)
    AND ((opportunity.agency ILIKE 'US-ABC%') OR (opportunity.agency ILIKE 'HHS%'))
ORDER BY opportunity.opportunity_id DESC
LIMIT 25 OFFSET 25
```

Without any of the new indexes, `EXPLAIN ANALYZE` gives this a cost of ~1100 (an arbitrary planner unit). With the new indexes it becomes ~800.
The actual runtime of these queries is in the 5-10ms range with or without the indexes, so the difference is minor either way. Querying the API locally gives response times of 50-150ms (slower initially, before caching likely takes hold). If we filter only by something like opportunity status, the costs are around 10-15. See: https://www.postgresql.org/docs/current/using-explain.html#USING-EXPLAIN-ANALYZE

---------

Co-authored-by: nava-platform-bot <[email protected]>
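For reference, the cost numbers above come from prefixing the captured query with `EXPLAIN ANALYZE` in psql, along these lines (a sketch using a trimmed-down version of the query; the full query is the one shown above):

```sql
-- EXPLAIN ANALYZE runs the query and reports both the planner's
-- cost estimate (arbitrary units) and the actual execution time.
EXPLAIN ANALYZE
SELECT opportunity.opportunity_id
FROM opportunity
WHERE opportunity.is_draft IS FALSE
ORDER BY opportunity.opportunity_id DESC
LIMIT 25;
```

The "cost=" figures in the output are what the ~1100 vs. ~800 comparison refers to; "actual time" is the 5-10ms runtime.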
1 parent 5d50bc4 · commit 15f9d59 · 15 changed files with 1,280 additions and 59 deletions
```python
from enum import StrEnum
from typing import Type

from src.api.schemas.extension import Schema, fields, validators


class StrSearchSchemaBuilder:
    """
    Builder for setting up a filter in a search endpoint schema.

    Our schemas are set up to look like:

        {
            "filters": {
                "field": {
                    "one_of": ["x", "y", "z"]
                }
            }
        }

    This helps generate the filters for a given field. At the moment,
    only a one_of filter is implemented.

    Usage::

        # In a search request schema, you would use it like so
        class OpportunitySearchFilterSchema(Schema):
            example_enum_field = fields.Nested(
                StrSearchSchemaBuilder("ExampleEnumFieldSchema")
                .with_one_of(allowed_values=ExampleEnum)
                .build()
            )

            example_str_field = fields.Nested(
                StrSearchSchemaBuilder("ExampleStrFieldSchema")
                .with_one_of(example="example_value", minimum_length=5)
                .build()
            )
    """

    def __init__(self, schema_class_name: str):
        # The schema class name is used on the endpoint
        self.schema_fields: dict[str, fields.MixinField] = {}
        self.schema_class_name = schema_class_name

    def with_one_of(
        self,
        *,
        allowed_values: Type[StrEnum] | None = None,
        example: str | None = None,
        minimum_length: int | None = None,
    ) -> "StrSearchSchemaBuilder":
        metadata = {}
        if example:
            metadata["example"] = example

        # If no enum is given, we assume it's just a list of strings
        if allowed_values is None:
            params: dict = {"metadata": metadata}
            if minimum_length is not None:
                # Enforce the caller-supplied minimum length
                params["validate"] = [validators.Length(min=minimum_length)]

            list_type: fields.MixinField = fields.String(**params)

        # Otherwise it is an enum type which handles allowed values
        else:
            list_type = fields.Enum(allowed_values, metadata=metadata)

        self.schema_fields["one_of"] = fields.List(list_type)

        return self

    def build(self) -> Schema:
        return Schema.from_dict(self.schema_fields, name=self.schema_class_name)  # type: ignore
```