Feat(ingest/teradata): schema lazy loader #16247
Conversation
```python
# Quote identifiers to prevent SQL injection
escaped_schema = schema.replace('"', '""')
escaped_table = table_name.replace('"', '""')
query_str = f'SELECT * FROM "{escaped_schema}"."{escaped_table}" WHERE 1=0'
```
Potential SQL injection via string-based query concatenation - critical severity
SQL injection may be possible at these locations, especially if the concatenated strings are derived from user input.
Remediation: If possible, rebuild the query to use prepared statements or an ORM. If that is not possible, verify or sanitize the user input. As an added layer of protection, we also recommend a WAF that blocks SQL injection attacks.
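Since Teradata drivers do not support binding identifiers (schema/table names) as prepared-statement parameters, one way to follow the remediation advice is to pair the quote-escaping above with allow-list validation. The sketch below is illustrative, not part of the PR; `quoted_identifier`, `empty_select`, and the identifier pattern are assumptions.

```python
import re

# Hypothetical allow-list: letters, digits, and a few characters Teradata
# permits in unquoted identifiers. Adjust to your naming conventions.
IDENTIFIER_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_$#]*$")

def quoted_identifier(name: str) -> str:
    """Validate an identifier against the allow-list, then double-quote it."""
    if not IDENTIFIER_RE.match(name):
        raise ValueError(f"Unexpected identifier: {name!r}")
    # Escape embedded double quotes as defense in depth (the regex above
    # already rejects them, but this keeps the helper safe if it is relaxed).
    return '"' + name.replace('"', '""') + '"'

def empty_select(schema: str, table_name: str) -> str:
    # WHERE 1=0 returns no rows but still exposes column metadata.
    return (
        f"SELECT * FROM {quoted_identifier(schema)}."
        f"{quoted_identifier(table_name)} WHERE 1=0"
    )
```

Rejecting unexpected input outright fails loudly instead of silently emitting an escaped-but-suspicious query, which makes injection attempts visible in logs.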
Codecov Report: ❌ The patch check failed because the patch coverage (11.40%) is below the target coverage (75.00%). You can increase the patch coverage or adjust the target coverage.
✅ Meticulous spotted 0 visual differences across 1009 screens tested, evaluating ~8 hours of user flows against this PR. Last updated for commit d30771f; this comment will update as new commits are pushed.
Bundle Report: Bundle size has no change ✅
When either include_tables or include_views was disabled, the Teradata source bulk-loaded every Teradata dataset from DataHub at startup to build the schema resolver for lineage/usage extraction, which gets slow on instances with many assets.

This PR adds a lazy_schema_resolver config option (default: True). When enabled, the source uses a resolver that fetches a table's schema from DataHub only when that table is first referenced during SQL parsing, avoiding the upfront scroll over all assets. Set lazy_schema_resolver: false to keep the old bulk-load behavior. This mirrors the lazy resolver behavior of the Snowflake source.
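For reference, the flag would sit in the source config of an ingestion recipe roughly like this; the connection values are placeholders and the surrounding recipe shape is an assumption, only `lazy_schema_resolver`, `include_tables`, and `include_views` come from this PR's description:

```yaml
source:
  type: teradata
  config:
    host_port: "my-teradata-host:1025"   # placeholder
    username: "datahub_user"             # placeholder
    password: "${TERADATA_PASSWORD}"     # placeholder
    include_tables: true
    include_views: false
    # Default is true; set to false to restore the old startup bulk-load.
    lazy_schema_resolver: true
```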