# Btabaska/167 translate strings fix #2024
### Time to review: __1 min__

## Changes proposed
Needed to upgrade dependencies for the API to address a grype issue: https://github.com/navapbc/simpler-grants-gov/actions/runs/9180615894/job/25245519194?pr=47

## Additional information
As usual, just ran `poetry update`
## Summary
Fixes #9

### Time to review: __10 mins__

## Changes proposed
- Set up a search index to run locally via Docker
- Updated the makefile to automatically initialize the index + added a script to wait for the index to start up before proceeding
- Set up a very basic client for connecting to the search index (will be expanded more in subsequent PRs)
- Basic test / test utils to verify it is working (also will be expanded)

## Context for reviewers
This is the first step in getting the search index working locally. This actually gets it running, and the client works; we just aren't doing anything meaningful with it yet besides tests.

## Additional information
This doesn't yet create an index that we can use, except in the test. However, if you want to test out a search index, you can go to http://localhost:5601/app/dev_tools#/console (after running `make init`) to run some queries against the (one node) cluster. https://opensearch.org/docs/latest/getting-started/communicate/#sending-requests-in-dev-tools provides some examples of how to create + use indexes that you can follow.
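For reviewers who want to poke at the local cluster from Python instead of Dev Tools, here is a minimal sketch of connecting with the `opensearch-py` client. It assumes the default single-node docker-compose setup (localhost:9200, security disabled); the client wrapper in the actual code may be configured differently.

```python
from opensearchpy import OpenSearch

# Assumes the single-node cluster started by `make init` is listening on
# localhost:9200 with security disabled (typical for a local dev setup).
client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    use_ssl=False,
    verify_certs=False,
)

# Basic connectivity checks against the one-node cluster.
print(client.info())            # cluster name / version metadata
print(client.cluster.health())  # should report "green" or "yellow" status
```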
… the index (#44)

## Summary
Fixes #12

### Time to review: __5 mins__

## Changes proposed
Made a new set of v1 endpoints that are basically copy-pastes of the v0.1 opportunity endpoints

## Context for reviewers
Some changes I want to make to the schemas wouldn't make sense without the search index (eg. adding the filter counts to the response). As we have no idea what the actual launch of the v0.1 endpoint is going to look like, I don't want to mess with any of that code or try to make a weird hacky approach that needs to account for both the DB implementation and the search index one.

Also, I think we've heard that with the launch of the search index, we'll be "officially" launched, so might as well call it v1 at the same time.

Other than adjusting the names of a few schemas in v0.1, I left that implementation alone and just copied the boilerplate that I'll fill out in subsequent tickets.

## Additional information
The endpoint appears locally:

---------

Co-authored-by: nava-platform-bot <[email protected]>
## Summary
Fixes #10

### Time to review: __10 mins__

## Changes proposed
Set up a script to populate the search index by loading opportunities from the DB, jsonify'ing them, loading them into a new index, and then aliasing that index.

Several utilities were created to simplify working with the OpenSearch client (a wrapper for setting up configuration / patterns).

## Context for reviewers
Iterating over the opportunities and doing something with them is a common pattern in several of our scripts, so nothing is really different there. The meaningful implementation is how we handle creating and aliasing the index. In OpenSearch you can give any index an alias (including putting multiple indexes behind the same alias). The approach is pretty simple:
* Create an index
* Load opportunities into the index
* Atomically swap the index backing the `opportunity-index-alias`
* Delete the old indexes if they exist

This approach means that our search endpoint just needs to query the alias, and we can keep making new indexes and swapping them out behind the scenes. Because we could remake the index every few minutes, if we ever need to re-configure things like the number of shards, or any other index-creation configuration, we just update that in this script and wait for it to run again.

## Additional information
I ran this locally after loading `83250` records, and it took about 61s. You can run this locally yourself by doing:
```sh
make init
make db-seed-local
poetry run flask load-search-data load-opportunity-data
```
If you'd like to see the data, you can test it out on http://localhost:5601/app/dev_tools#/console - here is an example query that filters by the word `research` across a few fields and filters to just forecasted/posted.
```json
GET opportunity-index-alias/_search
{
  "size": 25,
  "from": 0,
  "query": {
    "bool": {
      "must": [
        {
          "simple_query_string": {
            "query": "research",
            "default_operator": "AND",
            "fields": [
              "agency.keyword^16",
              "opportunity_title^2",
              "opportunity_number^12",
              "summary.summary_description",
              "opportunity_assistance_listings.assistance_listing_number^10",
              "opportunity_assistance_listings.program_title^4"
            ]
          }
        }
      ],
      "filter": [
        {
          "terms": {
            "opportunity_status": ["forecasted", "posted"]
          }
        }
      ]
    }
  }
}
```
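For illustration, a minimal sketch of the create / load / atomic-alias-swap flow using the raw `opensearch-py` client is below. The index and alias names mirror the PR description, but the placeholder record and settings are assumptions; the repo's utilities wrap this with their own configuration.

```python
from datetime import datetime, timezone

from opensearchpy import OpenSearch, helpers

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}], use_ssl=False)

ALIAS = "opportunity-index-alias"
# Timestamped index name so each run builds a brand new index.
new_index = f"opportunity-index-{datetime.now(timezone.utc).strftime('%Y%m%d%H%M%S')}"

# 1. Create the new index (shard counts / analysis settings would go here).
client.indices.create(index=new_index, body={"settings": {"index": {"number_of_shards": 1}}})

# 2. Bulk-load the jsonify'd opportunity records into it (placeholder data here).
records = [{"opportunity_id": 1, "opportunity_title": "Example research grant"}]
helpers.bulk(client, ({"_index": new_index, "_source": r} for r in records))

# 3. Atomically point the alias at the new index, removing any old indexes behind it.
old_indexes = list(client.indices.get_alias(name=ALIAS).keys()) if client.indices.exists_alias(name=ALIAS) else []
actions = [{"add": {"index": new_index, "alias": ALIAS}}]
actions += [{"remove": {"index": old, "alias": ALIAS}} for old in old_indexes]
client.indices.update_aliases(body={"actions": actions})

# 4. Delete the old indexes now that nothing references them.
for old in old_indexes:
    client.indices.delete(index=old)
```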
## Summary
Fixes #6

### Time to review: __60 mins__

## Changes proposed

### Move pages from the page router to the app router
1. Move all pages to the [`[locale]`](https://next-intl-docs.vercel.app/docs/getting-started/app-router/with-i18n-routing#getting-started) folder
2. Add a [`generateMetadata()`](https://nextjs.org/docs/app/api-reference/functions/generate-metadata#generatemetadata-function) function and a [next-intl `getTranslations()`](https://next-intl-docs.vercel.app/docs/environments/metadata-route-handlers#metadata-api) implementation
   * @rylew1 commented we could remove this from each page. To do that we could use [prop arguments](https://nextjs.org/docs/app/api-reference/functions/generate-metadata#with-segment-props) and update the metadata based on the param. There is also more we can do with the metadata to properly add [app links and twitter cards](https://nextjs.org/docs/app/api-reference/functions/generate-metadata#applinks). TODO: create ticket
3. Replace i18n's `useTranslation` with next-intl's `useTranslations`
4. Remove hard-coded strings that were present because we were still between i18next and next-intl

#### Changes
* [Move process page to app](32ba4ee)
* [Move research page to app](5b5ad1a)
* [Move health page to app](a3e6255)
* [Move feature flag page to app](395baed)
* [Move search page to app router](1e261e3)
* [Move newsletter pages to app router](b509ef8)
* [Move home page to app router](de1be98)
* [Move home page to app router](74077ae)
* [Move 404 page to app router](ccbc956)

### Misc
1. [Delete hello api](5bad6ea)
   * This was left over from project creation
2. [Add USWDS icon component](0120c7b)
   * As noted in a slack discussion, when trying to access [one of the icons](https://github.com/trussworks/react-uswds/blob/main/src/components/Icon/Icons.ts) using `<Icon.Search/>`, next errors with: `You cannot dot into a client module from a server component. You can only pass the imported name through`. I'm not sure why it thinks the Icon component is a client module. [Dan A. suggests](vercel/next.js#51593 (comment)) trussworks should re-export as named exports. I tried importing the SVGs directly from the trussworks library, but [svgr requires a custom webpack config](https://react-svgr.com/docs/next/), which is a road I didn't want to go down, and [react svg](https://www.npmjs.com/package/react-svg) threw an error in the app router 😥.
   * I implemented @sawyerh 's [suggestion](0120c7b#diff-dadb35bd2f3f61f2c179f033cd0a2874fc343974236f2fb8613664703c751429), which did not work initially because next reported the USWDS icon was corrupt; that was fixed by adding a `viewBox` to the svg element 😮💨.
   * [Remove unused WtGIContent](75490f7)

### Layout and component updates
* [Move layout and update for app router](af112fd)
* [Update global components for the app router](40119e6)

### Remaining next-intl config and removal of i18next
* [Move i18n strings for app router](eb3c07c)
* [Adds next-intl config and removes i18n](c546571)
* [Update tests for app router](3b9b193)
* [Removes i18next and next-i18n packages](9d2e08a)
* [Update storybook settings for app router](39f115d)
## Summary
Fixes #56

## Changes proposed
- Update the date creation to use the parsed year/month/day so it creates the `Date` object in local time - this should prevent any timezone-related rounding of the date
- Add some tests to ensure the right dates are showing up on search
…freshes (#67)

## Summary
Fixes #58

### Time to review: __3 mins__

## Changes proposed
Set the `persistAuthorization` OpenAPI config locally to True

## Context for reviewers
For local development, we frequently need to go to http://localhost:8080/docs - enter the auth token, and then repeat this process every time we reopen this page or refresh. Having to either copy-paste or retype the auth token is tedious. This flag makes it so it gets preserved in your browser's local storage.

We are only enabling this for the local endpoint at the moment as there are possibly security implications we would need to consider non-locally (eg. what if someone is using a public computer).
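For context, `persistAuthorization` is a Swagger UI option that stores the entered token in the browser's local storage. A rough sketch of wiring it up through APIFlask's Swagger UI config passthrough is below; the environment check is a stand-in for however the project actually detects local runs, so treat this as illustrative only.

```python
import os

from apiflask import APIFlask

app = APIFlask(__name__)

# Only persist the entered auth token in the browser for local development;
# on a shared/public machine this could leak the token, so keep it off elsewhere.
if os.getenv("ENVIRONMENT", "local") == "local":
    # APIFlask passes this dict through to the Swagger UI initializer.
    app.config["SWAGGER_UI_CONFIG"] = {"persistAuthorization": True}
```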
## Summary
Fixes #64

## Changes proposed
- update to next 14.2.3 (https://nextjs.org/blog/next-14-2)
## Summary
Fixes #69

### Time to review: __5 mins__

## Changes proposed
Remove the `BASE_RESPONSE_SCHEMA` configuration.

Adjust the healthcheck endpoint to be consistent with the other endpoints in how it defines its schema / responses.

## Context for reviewers
APIFlask allows you to set a `BASE_RESPONSE_SCHEMA` - the idea is that it's the shared schema that every API endpoint will return, with endpoints only differing in the `data` object within. This sounds great on paper as it prevents you from needing to define most of the response schema for many endpoints, but in practice it's clunky to use. If we want to modify anything in the response schema outside of the `data` object, it will affect every single endpoint. This means when we add something like the pagination info to our search endpoint, a pagination object appears on the healthcheck response. As we intend to make these docs something the public will use, this is going to be confusing.

There is also a "bug" (unsure if it is one, as it was an intended change a few months ago in APIFlask/apispec) where the error response objects end up nested within themselves in the examples. For example, currently the error response for the healthcheck endpoint (which can literally only return a 5xx error) has an example response of:
```json
{
  "data": {
    "data": "string",
    "errors": [
      {
        "field": "string",
        "message": "string",
        "type": "string"
      }
    ],
    "message": "string",
    "status_code": 0
  },
  "message": "string",
  "pagination_info": {
    "order_by": "id",
    "page_offset": 1,
    "page_size": 25,
    "sort_direction": "ascending",
    "total_pages": 2,
    "total_records": 42
  },
  "status_code": 0,
  "warnings": [
    {
      "field": "string",
      "message": "string",
      "type": "string"
    }
  ]
}
```
When in reality, the error response it actually returns looks like:
```json
{
  "data": {},
  "errors": [],
  "message": "Service Unavailable",
  "status_code": 503
}
```
This set of changes works around all of these confusing issues and just requires us to define specific response schemas for each endpoint with some small set of details. I've kept some base schema classes to derive from that we can use in most cases.
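As a rough illustration of the "keep some base schema classes to derive from" approach, a marshmallow-style sketch is below. The class and field names are hypothetical, not the ones in the repo; the point is that pagination only appears on the schema that actually needs it.

```python
from marshmallow import Schema, fields


class BaseResponseSchema(Schema):
    """Fields shared by most responses; endpoints add only what they need."""

    message = fields.String(metadata={"description": "The response message"})
    status_code = fields.Integer(metadata={"description": "The HTTP status code"})


class HealthcheckResponseSchema(BaseResponseSchema):
    # The healthcheck has no meaningful payload, so `data` stays null.
    data = fields.Raw(allow_none=True)


class OpportunitySearchResponseSchema(BaseResponseSchema):
    # Only the search endpoint carries pagination info, so it is declared
    # here instead of leaking into every endpoint via a global base schema.
    data = fields.List(fields.Dict())
    pagination_info = fields.Dict()
```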
## Additional information Before & After Examples in OpenAPI <table> <tr><th>Endpoint</th><th>Before</th><th>After</th><th>Actual Response (before change)</th></tr> <tr><td>GET /health (200) </td> <td> ```json { "data": { "message": "string" }, "message": "string", "pagination_info": { "order_by": "id", "page_offset": 1, "page_size": 25, "sort_direction": "ascending", "total_pages": 2, "total_records": 42 }, "status_code": 0, "warnings": [ { "field": "string", "message": "string", "type": "string" } ] } ``` </td> <td> ```json { "data": null, "message": "Success", "status_code": 200 } ``` </td> <td> ```json { "data": {}, "message": "Service healthy", "pagination_info": null, "status_code": 200, "warnings": [] } ``` </td> </tr> <tr><td>GET /health (503)</td> <td> ```json { "data": { "data": "string", "errors": [ { "field": "string", "message": "string", "type": "string" } ], "message": "string", "status_code": 0 }, "message": "string", "pagination_info": { "order_by": "id", "page_offset": 1, "page_size": 25, "sort_direction": "ascending", "total_pages": 2, "total_records": 42 }, "status_code": 0, "warnings": [ { "field": "string", "message": "string", "type": "string" } ] } ``` </td> <td> ```json { "data": {}, "errors": [], "message": "Error", "status_code": 0 } ``` </td> <td> ```json { "data": {}, "message": "Service unavailable", "pagination_info": null, "status_code": 200, "warnings": [] } ``` </td> </tr> <tr><td>POST /v0.1/opportunities/search (200)</td> <td> ```json { "data": [ {.. Excluding for brevity } ], "message": "string", "pagination_info": { "order_by": "id", "page_offset": 1, "page_size": 25, "sort_direction": "ascending", "total_pages": 2, "total_records": 42 }, "status_code": 0, "warnings": [ { "field": "string", "message": "string", "type": "string" } ] } ``` </td> <td> ```json { "data": [ {.. Excluding for brevity } ], "message": "Success", "pagination_info": { "order_by": "id", "page_offset": 1, "page_size": 25, "sort_direction": "ascending", "total_pages": 2, "total_records": 42 }, "status_code": 200 } ``` </td> <td> ```json { "data": [ {}, {}, {} // excluded for brevity ], "message": "Success", "pagination_info": { "order_by": "opportunity_id", "page_offset": 1, "page_size": 3, "sort_direction": "ascending", "total_pages": 1010, "total_records": 3030 }, "status_code": 200, "warnings": [] } ``` </td> </tr> <tr><td>POST /v0.1/opportunities/search (401 or 422)</td> <td> ```json { "data": { "data": "string", "errors": [ { "field": "string", "message": "string", "type": "string" } ], "message": "string", "status_code": 0 }, "message": "string", "pagination_info": { "order_by": "id", "page_offset": 1, "page_size": 25, "sort_direction": "ascending", "total_pages": 2, "total_records": 42 }, "status_code": 0, "warnings": [ { "field": "string", "message": "string", "type": "string" } ] } ``` </td> <td> ```json { "data": {}, "errors": [], "message": "Error", "status_code": 0 } ``` </td> <td> ```json { "data": {}, "errors": [], "message": "The server could not verify that you are authorized to access the URL requested", "status_code": 401 } ``` </td> </tr> </table> --------- Co-authored-by: nava-platform-bot <[email protected]>
## Summary
Fixes #45

### Time to review: __3 mins__

## Changes proposed
* update `config.py` database url
* add function in `cli.py`
* update packages in `poetry.lock`

## Context for reviewers
N/A

## Additional information
Row created manually in the database alongside a row created via `test_connection`
## Summary
Fixes #88

### Time to review: __5 mins__

## Changes proposed
Remove the version field from the docker-compose files

Adjust the docker commands to use `docker compose` instead of `docker-compose`

## Context for reviewers
A recent version of docker removed the need for specifying a version in the compose files - so the field is now obsolete and just gives a warning whenever you run a command: https://docs.docker.com/compose/compose-file/04-version-and-name/

The change in the command we run is actually from 2021 and makes sure we use docker compose v2. Depending on how you've set up your local instance of docker, `docker-compose` may have been aliased to `docker compose` anyway (I actually think certain things break if it wasn't). This just makes certain we use the proper form.

## Additional information
I went through running several docker commands and noticed no difference; since this change shouldn't meaningfully change anything, that is to be expected.
## Summary
Fixes #96

## Changes proposed
- Add new id-based opportunity page
- Add new `OpportunityListingAPI` class extended from `BaseAPI`
- Make `searchInputs`/`QueryParamData` in `BaseAPI` and `errors.ts` optional params (only used for search page)
- Update sitemap to replace [id] in url with 1
- Add test coverage
## Summary
Fixes #74

### Time to review: __3 mins__

## Changes proposed
> Added API setup to Pa11y
> Updated the Pa11y runtime to enable the feature flag (FF) for search

## Context for reviewers
> The FF requirement was discovered with Ryan while exploring the feature. It will enable the search page for proper testing.
>
> • svg elements with an img role have an alternative text (https://dequeuniversity.com/rules/axe/4.2/svg-img-alt?application=axeAPI) (html) <html lang="en"><head><meta charset="utf-8"><me...</html>
> • svg elements with an img role have an alternative text (https://dequeuniversity.com/rules/axe/4.2/svg-img-alt?application=axeAPI) (html) <html lang="en"><head><meta charset="utf-8"><me...</html>

## Additional information
> Two new errors found now that API results are loading.
…rom search (#54)

## Summary
Fixes #14

### Time to review: __10 mins__

## Changes proposed
Created utilities for creating requests to opensearch, and parsing the responses into more manageable objects.

Added some logic for configuring how we create indexes to use a different tokenizer + stemmer, as the defaults aren't great.

## Context for reviewers
The search queries we need to create aren't that complex, but they're pretty large and very nested objects. To help with this, I've built a few generic utilities for creating the requests by using a builder to pass in each of the components of the search request. This way, when the API gets built out next, the search logic really is just taking our requests and passing the details to the factories, which is pretty trivial.

Responses are a bit less complex - they're just very nested - and adding a simple wrapper around them means any usage of the search client needs to do a bit less indexing into dictionaries (eg. to access the response objects I was doing `values = [record["_source"] for record in response["hits"]["hits"]]`, which is fun). That logic just safely handles parsing the responses in a very generic manner.

Note that in both cases, there are many cases we don't handle yet (a ton of other request params, for example); we can add those as needed. I just focused on the ones we need right now.

---

One unrelated change made it in here, and that was adjusting how the analysis is done on an index. In short, the default tokenization/stemming of words was clunky for our use case. For example, the default stemmer treats `king` and `kings` as separate words. I adjusted the stemmer to use the [snowball stemmer](https://snowballstem.org/), which seems to work a bit better, although we should definitely investigate this further. I also changed the tokenization to be on whitespace, as before it would separate on dashes, and a lot of terms in our system have dashes (opportunity number and agency pretty prominently).

## Additional information
Since this logic is potentially going to be shared across many components (if we ever build out more search indexes) - I tried to document it pretty thoroughly with links to a lot of documentation.
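For a sense of what whitespace tokenization plus a snowball stemmer looks like as index analysis settings, here is a rough sketch; the analyzer name, field mapping, and index name are illustrative rather than what the repo's index-creation utilities actually use.

```python
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}], use_ssl=False)

index_body = {
    "settings": {
        "analysis": {
            "analyzer": {
                # Hypothetical analyzer: split on whitespace (so dashes in
                # opportunity numbers / agency codes stay intact), lowercase,
                # then stem with the snowball algorithm so "king"/"kings" match.
                "custom_text_analyzer": {
                    "type": "custom",
                    "tokenizer": "whitespace",
                    "filter": ["lowercase", "snowball"],
                }
            }
        }
    },
    "mappings": {
        "properties": {
            "opportunity_title": {"type": "text", "analyzer": "custom_text_analyzer"},
        }
    },
}

client.indices.create(index="opportunity-index-example", body=index_body)
```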
## Summary
Fixes #24

### Time to review: __10 mins__

## Changes proposed
Adds an endpoint to fetch opportunity versions
* Only includes some of the filters that we'll need to include

Adds a lot of utilities for setting up opportunities for local development and testing with versions

## Context for reviewers
https://docs.google.com/document/d/18oWmjQJKunMKy6cfnfUnyGEX33uu5UDPnRktD_wRFlE/edit#heading=h.4xmkylqq7mnx provides a lot of context for how opportunity versioning works in the existing system - which is to say it's very, very complex. I'm sure we'll alter that behavior as we go forward; for now I kept the endpoint simple in terms of its filters, just removing obvious cases (eg. the summary record is marked as deleted).

I'm also not certain what we want to do with naming. I really don't like my current approach of "forecast" and "non-forecast", but we can address that later as well.

--

Beyond understanding what versioning logic we needed to support, the most complex component by far is setting up the data in the first place in an easy manner. I originally tried some ideas using the factory classes themselves, but due to the order of operations necessary, that wasn't possible (in short, to create prior history records, I first need the current record, but that doesn't exist until after everything else in a factory runs). So, I went with a builder process that wraps the factories and sets up some reasonable scenarios for you. It's clunky, but seems to work well enough.

---------

Co-authored-by: nava-platform-bot <[email protected]>
### Time to review: __1 min__

## Changes proposed
Update packages in poetry.lock

## Context for reviewers
Working around a vulnerability, https://github.com/navapbc/simpler-grants-gov/actions/runs/9664332757/job/26659684726, which exists because we have a fairly old version of a package in our lock file. Skimmed through the package updates; they all seem to be very minor version bumps, as most other packages were fairly up-to-date.

---------

Co-authored-by: nava-platform-bot <[email protected]>
…rint and issue data to database (#84)

## Summary
Fixes #46

### Time to review: __x mins__

## Changes proposed
* added `sprint-db-data-import` to Makefile
* added `export_json_to_database`

## Context for reviewers
> One strategy would be to keep the make sprint-data-export and issue-data-export and create make sprint-db-data-import and issue-data-db-import so that the data is exported to JSON and then imported into the database.
>
> A single make command could then be created to run the export and then import the files.

## Additional information
Sample data in database

<img width="1133" alt="Screen Shot 2024-06-26 at 3 38 47 PM" src="https://github.com/navapbc/simpler-grants-gov/assets/37313082/34c962d6-a78e-4963-be15-ef0f7de3bccf">
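A minimal sketch of the export-to-JSON-then-import pattern the quoted strategy describes is below. The file path, table name, connection string, and the use of pandas/SQLAlchemy are assumptions for illustration only, not the analytics module's actual implementation.

```python
import json

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical paths/table: the export step writes sprint data to JSON,
# and this import step loads that JSON into a database table.
EXPORT_PATH = "data/sprint-data.json"
engine = create_engine("postgresql://app:secret@localhost:5432/analytics")

with open(EXPORT_PATH) as f:
    records = json.load(f)

# Flatten the exported records and append them to the sprint_data table.
df = pd.DataFrame.from_records(records)
df.to_sql("sprint_data", engine, if_exists="append", index=False)
```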
## Summary
Fixes #16

### Time to review: __10 mins__

## Changes proposed
Make the v1 search opportunity endpoint connect to the search index and return results.

Adjust the structure of the response to be more flexible going forward.

## Context for reviewers
The actual building of the search request / parsing of the response is pretty simple. Other than having to map some field names, that logic is mostly contained in the builder I made in the prior PR. However, there is a lot of configuration and other API components that had to be modified as part of this, including:
* Adjusting the API response schema (to better support facet counts)
* Piping through the search client + index alias name configuration.
* A monumental amount of test cases to verify everything is connected / behavior works in a way we expect - note that I did not test relevancy as that'll break anytime we adjust something.

## Additional information
Note that the change in API schema means the API does not work with the frontend, but there are a few hacky changes you can make to connect them. In [BaseApi.ts](https://github.com/navapbc/simpler-grants-gov/blob/main/frontend/src/app/api/BaseApi.ts#L47) change the version to `v1`. In [SearchOpportunityAPI.ts](https://github.com/navapbc/simpler-grants-gov/blob/main/frontend/src/app/api/SearchOpportunityAPI.ts#L56) add `response.data = response.data.opportunities;` to the end of the `searchOpportunities` method. With that, the local frontend will work.

To actually get everything running locally, you can run:
```sh
make db-recreate
make init
make db-seed-local args="--iterations 10"
poetry run flask load-search-data load-opportunity-data
make run-logs
# and also to run the frontend
npm run dev
```
Then go to http://localhost:3000/search

---------

Co-authored-by: nava-platform-bot <[email protected]>
## Summary
Fixes #138

### Time to review: __5 mins__

## Changes proposed
Set up localstack S3 to have a local version of S3 to use (for future work)

Script / utils for interacting with S3

## Context for reviewers
Localstack is a tool that creates a mock version of AWS locally. While the ability to mock out certain features varies, S3, being just a file storage system, is pretty simple and fully featured even when mocked. Note that localstack has a paid version as well that adds more features, but all of S3's features are [supported in the free community tier](https://docs.localstack.cloud/references/coverage/coverage_s3/). We've used localstack for S3 and a few other AWS services on other projects.

## Additional information
The script creates the S3 bucket in localstack. You can actually interact with the localstack instance of S3 with the AWS CLI like so:
```sh
aws --endpoint-url http://localhost:4566 s3 ls
> 2024-07-12 13:10:24 local-opportunities
```
I created a tmp file in it successfully:
```sh
aws --endpoint-url http://localhost:4566 s3 cp tmp.txt s3://local-opportunities/path/to/tmp.txt
```
I can see the tmp file:
```sh
aws --endpoint-url http://localhost:4566 s3 ls s3://local-opportunities/path/to/
> 2024-07-12 13:23:22  15 tmp.txt
```
And I can download it:
```sh
aws --endpoint-url http://localhost:4566 s3 cp s3://local-opportunities/path/to/tmp.txt local_tmp.txt
```
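The same localstack bucket can also be reached from Python by pointing boto3 at the localstack endpoint; a minimal sketch is below. The dummy credentials are placeholders localstack accepts, and the bucket/key names just mirror the CLI examples above.

```python
import boto3

# Point boto3 at localstack instead of real AWS; localstack accepts any
# static credentials, so these values are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    aws_access_key_id="test",
    aws_secret_access_key="test",
    region_name="us-east-1",
)

# Mirror the CLI examples: upload a file, list the prefix, download it back.
s3.upload_file("tmp.txt", "local-opportunities", "path/to/tmp.txt")

response = s3.list_objects_v2(Bucket="local-opportunities", Prefix="path/to/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

s3.download_file("local-opportunities", "path/to/tmp.txt", "local_tmp.txt")
```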
## Summary
Fixes #86

### Time to review: __10 mins__

## Changes proposed
Modified the search endpoint to be able to return its response as a CSV file

## Context for reviewers
My understanding is that the CSV download in the current experience is a frequently used feature - so adding it is worthwhile. An important detail is that all it takes to switch from getting the response as a normal JSON response body is changing the new "format" field in the request. So if the frontend added a new download button, it would just make an identical request with this format field added (it would likely also want to adjust the page size to return more than 25 items).

The actual logic is pretty simple: instead of returning the normal JSON body, we construct a CSV file object and return that. There is some level of formatting/parsing we need to do with this, but it's pretty minor. Note that it is explicit about which fields it returns, that way the CSV won't keep changing on users if we make adjustments to the schemas elsewhere.

As for returning the file, it just relies on Flask itself. I'm not as familiar with file operations in an endpoint like this, so if there are scaling concerns (ie. very large output files), let me know. I know there are a few tools in Flask for streaming file responses and other complexities.

If we wanted to add support for more file types like a JSON file or XML, we'd just need to add converters for those and the file logic should all work the same. I originally implemented this as JSON but realized it was just the exact response body shoved in a file - if a user wants that, they might as well create the file themselves from the API response.

## Additional information
You can see what the produced file looks like either by running the API yourself, or by looking at this one I generated. Note that for the list fields, I used `;` to separate the values within a single cell.

[opportunity_search_results_20240617-152953.csv](https://github.com/user-attachments/files/15873437/opportunity_search_results_20240617-152953.csv)

---------

Co-authored-by: nava-platform-bot <[email protected]>
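A rough sketch of the "build a CSV in memory and hand it back through Flask" approach is below. The explicit column list and the `;` joining of list fields follow the description above, but the function and field names are hypothetical rather than the endpoint's actual code.

```python
import csv
import io

from flask import Response


def opportunities_to_csv_response(opportunities: list[dict]) -> Response:
    """Serialize search results to CSV and return them as a file download."""
    # Explicit column list so the CSV doesn't change shape when other
    # schemas are adjusted (hypothetical field names for illustration).
    fieldnames = [
        "opportunity_id",
        "opportunity_number",
        "opportunity_title",
        "opportunity_status",
        "funding_instruments",
    ]

    output = io.StringIO()
    writer = csv.DictWriter(output, fieldnames=fieldnames, extrasaction="ignore")
    writer.writeheader()
    for opp in opportunities:
        row = dict(opp)
        # List fields get joined with ";" so they fit in a single cell.
        row["funding_instruments"] = ";".join(opp.get("funding_instruments", []))
        writer.writerow(row)

    return Response(
        output.getvalue(),
        mimetype="text/csv",
        headers={"Content-Disposition": "attachment; filename=opportunity_search_results.csv"},
    )
```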
## Summary
Fixes #104

### Time to review: __10 mins__

## Changes proposed
Restructured the transformation code
- Split each chunk of the transformation logic into separate "Subtask" classes
- Made a constants file
- A few duplicated pieces of implementation were pulled into functions that the subtasks derive from

Some additional logging

Created a Subtask class for breaking up a task into multiple steps for organizational reasons

Added configuration to the transformation step for enabling/disabling different parts of the process (not used yet - but lets us build things out and not worry about breaking non-local environments).

## Context for reviewers
This looks far larger than it actually is; most of the actual changes are very small. I made all of the changes without adjusting the tests (outside of a few small bits of cleanup) and then refactored the tests as well.

This does not aim to change the meaningful behavior of the transformation logic, but instead make it a lot easier to parse. Now when we add new transformations, that's conceptually simpler as it's adding another one of the subtasks rather than adding to the massive mess of functions it was before.

There are a few small logging / metric related changes from the Subtask, just so we can have very granular metrics of how long each part of the task takes.

## Additional information
I ran locally with a full snapshot of the production data and didn't see anything of note different from prior runs. Still takes ~10 minutes.

---------

Co-authored-by: nava-platform-bot <[email protected]>
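A minimal sketch of the Subtask pattern described here - a parent task split into named steps, each timed so it gets its own metric - is below. The class and method names are illustrative, not the repo's actual implementation.

```python
import logging
import time

logger = logging.getLogger(__name__)


class SubTask:
    """One named step of a larger task, timed so each step gets its own metric."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.metrics: dict[str, float] = {}

    def run(self) -> None:
        start = time.monotonic()
        logger.info("Starting subtask %s", self.name)
        self.run_subtask()
        self.metrics[f"{self.name}_duration_sec"] = time.monotonic() - start
        logger.info("Finished subtask %s", self.name, extra=self.metrics)

    def run_subtask(self) -> None:
        raise NotImplementedError


class TransformOpportunitySubTask(SubTask):
    # Hypothetical example: one chunk of the transformation logic.
    def __init__(self) -> None:
        super().__init__("transform_opportunity")

    def run_subtask(self) -> None:
        ...  # transform opportunity records here


def run_transformation(subtasks: list[SubTask]) -> None:
    # The parent task just runs each configured (enabled) subtask in order.
    for subtask in subtasks:
        subtask.run()
```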
…hema (#168)

## Summary
Fixes #163

### Time to review: 15 mins

## Changes proposed
- Added `.with_start_date` to the search_schema builder to allow building a date field with a key of "start_date"
- Added `.with_end_date` to the search_schema builder to allow building a date field with a key of "end_date"
- Added post_date and close_date properties to the OpportunitySearchFilterV1Schema class, which utilize the above to build schema filters for post_date and close_date that can take start_date and/or end_date fields.
- Added two unit tests in test_opportunity_route_search that test the data validation of these new filters. One test covers 200 response cases and the other covers 422 (invalid) response cases.

## Context for reviewers
Note: As noted in the AC of Issue #163, this PR does NOT include implementation of the filters. Currently, these filters do nothing as they haven't been tied to any sort of query. This PR just lays the groundwork.

---------

Co-authored-by: nava-platform-bot <[email protected]>
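For a sense of what these request filters validate, here is a rough marshmallow-style sketch of a start_date/end_date range schema attached to post_date and close_date. The class names and wiring are hypothetical and do not reflect the repo's actual search_schema builder.

```python
from marshmallow import Schema, fields


class DateRangeFilterSchema(Schema):
    """A filter that accepts an optional start_date and/or end_date."""

    start_date = fields.Date(allow_none=True)
    end_date = fields.Date(allow_none=True)


class OpportunitySearchFilterSchema(Schema):
    # post_date / close_date each take the same start_date/end_date shape,
    # e.g. {"post_date": {"start_date": "2024-01-01", "end_date": "2024-02-01"}}
    post_date = fields.Nested(DateRangeFilterSchema)
    close_date = fields.Nested(DateRangeFilterSchema)


# Example: validation passes for ISO dates and fails (422-style) otherwise.
schema = OpportunitySearchFilterSchema()
print(schema.load({"post_date": {"start_date": "2024-01-01"}}))
print(schema.validate({"close_date": {"end_date": "not-a-date"}}))  # -> error dict
```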
## Summary
Fixes #59

### Time to review: __30 mins__

## Changes proposed
This makes the search page static and adds a suspense boundary for the data being fetched by the server. The data comes from the API and is called from 3 components:
* [`<SearchPaginationFetch />`](https://github.com/navapbc/simpler-grants-gov/pull/101/files#diff-9dbdda5096b97ad049cccea24c5a046581d26c151a6f94fcc32c05cb33ee9dee)
* [`<SearchResultsHeaderFetch />`](https://github.com/navapbc/simpler-grants-gov/pull/101/files#diff-14a084f66c050414cc2bbd0875256511630438971022073301bbfe91c4aa8cd1)
* [`<SearchResultsListFetch />`](https://github.com/navapbc/simpler-grants-gov/pull/101/files#diff-aabe6a7d19434a9b26199430bbcde5d31a0790aebc4cd844b922ac2fa1348dce)

This also simplifies the state model by pushing state changes directly to the browser query params and rerendering the changed items. This makes things a lot simpler: a lot of state management code is removed, and the results list is no longer wrapped in a form passing form refs between components. This is the approach recommended by Next: https://nextjs.org/learn/dashboard-app/adding-search-and-pagination

There are several items that needed to be shared among the client components: the query, total results count, and total pages. These are wrapped in a `<QueryProvider />` that updates the state of these items. This was added so that if someone enters a query in the text box and then clicks a filter their query is not lost, so that the "N Opportunities" text doesn't need to be rerendered when paging or sorting, and so that the pager stays the same length when paging or sorting.

The data is fetched a couple of times in a duplicative fashion; however, this follows [NextJS best practice](https://nextjs.org/docs/app/building-your-application/rendering/composition-patterns#sharing-data-between-components) since the requests are cached.

The pager has been updated to reload only when there is a change in the page length. Because of an issue with the way the pager renders, it is unavailable while data is being updated:

<img width="1229" alt="image" src="https://github.com/navapbc/simpler-grants-gov/assets/512243/a097b0e2-f646-43b5-bc5a-664db02780a2">

This is because the Truss React component [switches between a link and a button as it renders](https://github.com/trussworks/react-uswds/blob/main/src/components/Pagination/Pagination.tsx#L42) and there isn't an option to supply query arguments, so if a user were to click it they would lose the query params.

Overall this puts us on nice footing for the upcoming work, following NextJS best practice.
Labels: analytics, api, ci/cd, database, documentation, frontend, javascript, python, shell, storybook, typescript
## Summary
Fixes #{ISSUE}

### Time to review: __x mins__

## Changes proposed

## Context for reviewers

## Additional information