
AI-Powered Loan Eligibility Engine Detailed Design and System Workings

Project Purpose

This project is a fully automated loan eligibility platform that helps users discover personal loan products, assess their eligibility using advanced filters and AI, and receive personalized offers by email. It demonstrates scalable cloud backend engineering, workflow automation, use of AI/LLMs, and seamless integration of AWS with n8n.

System Overview

User data is uploaded in bulk via CSV to an S3 bucket.

AWS Lambda, triggered by S3, parses the file and populates an RDS PostgreSQL database.

Multiple n8n workflows run on a self-hosted Docker instance:

Workflow A: Finds and imports loan products via web scraping/simulated scraping.

Workflow B: Matches users to eligible products using multi-stage logic and optional AI.

Workflow C: Sends branded, personalized emails via AWS SES.

Technology Summary

AWS (Lambda, S3, RDS/PostgreSQL, SES)

n8n (Docker, custom JavaScript/Node.js nodes)

Cohere API or similar LLM for AI scoring

Serverless Framework for deploying AWS components

Detailed System Workings

User Data Ingestion

The user uploads a CSV file (containing user_id, email, monthly_income, credit_score, employment_status, age) through a frontend UI.

This file is stored in an S3 bucket.

Upload triggers an S3 event notification to a Lambda function.

The Lambda downloads the file, parses it row by row, and inserts user records into the users table of the PostgreSQL database.

On successful upload and processing, the Lambda makes a webhook call to n8n to trigger downstream workflows.
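The parsing step above can be sketched as a small helper the ingestion Lambda might use. The column names follow the CSV fields listed earlier; the function name and the numeric coercions are illustrative assumptions:

```javascript
// Hypothetical helper for the ingestion Lambda: parses the uploaded CSV
// into row objects ready for insertion into the users table.
// Columns match the fields described above (user_id, email,
// monthly_income, credit_score, employment_status, age).
function parseUsersCsv(csvText) {
  const [headerLine, ...rows] = csvText.trim().split(/\r?\n/);
  const headers = headerLine.split(",").map((h) => h.trim());
  return rows
    .filter((line) => line.trim().length > 0)
    .map((line) => {
      const values = line.split(",").map((v) => v.trim());
      const record = Object.fromEntries(headers.map((h, i) => [h, values[i]]));
      // Coerce numeric fields so they match the PostgreSQL column types.
      record.monthly_income = Number(record.monthly_income);
      record.credit_score = Number(record.credit_score);
      record.age = Number(record.age);
      return record;
    });
}
```

In the real Lambda, each record would then feed a parameterized INSERT against RDS (e.g. via the `pg` client), and the webhook call to n8n would be made only after all rows are committed.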

n8n Workflow A: Loan Product Discovery

This workflow is started by either a schedule or a webhook (for testing).

It scrapes predefined financial aggregator and bank websites to extract up-to-date loan product information, such as product_name, bank_name, interest_rate, minimum income, and required credit_score.

In testing or development, simulated data can be generated in a Code node.

Each loan product’s required fields and details are mapped to the loan_products table schema.

The n8n PostgreSQL node inserts the loan products directly into the database, which avoids the expression-escaping issues of routing inserts through HTTP Request nodes and improves reliability.

This workflow can be set to run daily, so loan offerings are kept current.
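For the development path mentioned above, a Code node can emit simulated products already shaped for the loan_products table. The exact column names and sample values here are assumptions based on the fields described in this section:

```javascript
// Sketch of an n8n Code node that generates simulated loan products.
// Field names mirror the loan_products columns described above; the
// sample values are illustrative only.
function generateSimulatedProducts() {
  const products = [
    { product_name: "Flexi Personal Loan", bank_name: "Example Bank", interest_rate: 11.5, min_income: 30000, min_credit_score: 700 },
    { product_name: "QuickCash Loan", bank_name: "Sample Finance", interest_rate: 14.0, min_income: 20000, min_credit_score: 650 },
  ];
  // n8n Code nodes return items wrapped in a `json` property.
  return products.map((p) => ({ json: p }));
}
```

A PostgreSQL node downstream can then map these `json` fields straight into an INSERT, keeping the real-scraping and simulated-data paths interchangeable.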

n8n Workflow B: User-Loan Matching

Triggered either manually or automatically after new user CSV batches are inserted.

Stage 1: An SQL pre-filter fetches users and loan products, keeping only candidates whose income and credit score meet the product's minimum thresholds.

Stage 2: Custom logic in a Code node refines matches (e.g., applies additional business rules for employment type, age, etc.), ranks candidates, and limits the list to top products per user.

Stage 3: Top matches are further enriched using an LLM (Cohere/Gemini/OpenAI) via API to give a final recommendation or score, but only on the most promising user-product pairs (for cost efficiency).

Each match is stored in a user_loan_matches table with reference to user, product, match score, and AI output.
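Stage 2 above can be sketched as a pure ranking function run inside a Code node. The rule weights, field names, and top-N cutoff are illustrative assumptions, not the project's exact business rules:

```javascript
// Hypothetical Stage 2 logic: refine SQL-prefiltered user/product pairs
// with business rules, score them, and keep the top N products per user
// before the (more expensive) LLM stage.
function rankMatches(pairs, topN = 3) {
  const byUser = new Map();
  for (const { user, product } of pairs) {
    // Example business rules (assumed): require a minimum age, and give
    // salaried applicants a small score boost.
    if (user.age < 21) continue;
    let score = user.credit_score - product.min_credit_score; // credit headroom
    if (user.employment_status === "salaried") score += 25;
    score -= product.interest_rate; // prefer cheaper products
    const list = byUser.get(user.user_id) ?? [];
    list.push({ user_id: user.user_id, product_name: product.product_name, score });
    byUser.set(user.user_id, list);
  }
  // Keep only the strongest candidates per user.
  const result = [];
  for (const list of byUser.values()) {
    list.sort((a, b) => b.score - a.score);
    result.push(...list.slice(0, topN));
  }
  return result;
}
```

Capping the list per user is what keeps Stage 3's LLM calls affordable: only the pairs that survive this function are sent to the API.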

n8n Workflow C: User Notification

Triggered after matches are created.

Queries for users with new matches.

Generates personalized emails listing eligible loan products, their terms, interest rates, and other benefits.

Uses AWS SES to deliver emails to each eligible user’s inbox, tracking delivery success.
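The personalization step can be sketched as a pure template function; the wording and field names are assumptions, and actual delivery would be handled by an SES node using the AWS credentials configured in n8n:

```javascript
// Sketch of the email-personalization step in Workflow C: builds the
// subject and body for one user's matched products. Field names are
// assumed from the matching stage; SES delivery is not shown here.
function buildOfferEmail(user, matches) {
  const lines = matches.map(
    (m) => `- ${m.product_name} (${m.bank_name}) at ${m.interest_rate}% interest`
  );
  return {
    to: user.email,
    subject: `You are eligible for ${matches.length} loan offer(s)`,
    body: [
      "Hello,",
      "",
      "Based on your profile, you qualify for the following personal loans:",
      ...lines,
      "",
      "Reply to this email to learn more.",
    ].join("\n"),
  };
}
```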

Infrastructure and Data Flow

All AWS credentials and database connections for n8n are securely configured using environment variables.

The database schema includes users, loan_products, and user_loan_matches, all with appropriate indices for scaling.
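A minimal version of that schema might look like the following; the exact column types and index choices are assumptions inferred from the fields described in this document, kept as strings so they could run from a migration script or an n8n node:

```javascript
// Hypothetical DDL for the three tables described above.
const schema = `
CREATE TABLE IF NOT EXISTS users (
  user_id TEXT PRIMARY KEY,
  email TEXT NOT NULL,
  monthly_income NUMERIC,
  credit_score INTEGER,
  employment_status TEXT,
  age INTEGER
);
CREATE TABLE IF NOT EXISTS loan_products (
  product_id SERIAL PRIMARY KEY,
  product_name TEXT NOT NULL,
  bank_name TEXT,
  interest_rate NUMERIC,
  min_income NUMERIC,
  min_credit_score INTEGER
);
CREATE TABLE IF NOT EXISTS user_loan_matches (
  user_id TEXT REFERENCES users(user_id),
  product_id INTEGER REFERENCES loan_products(product_id),
  match_score NUMERIC,
  ai_output TEXT
);
-- Indices supporting the Stage 1 pre-filter and per-user match lookups.
CREATE INDEX IF NOT EXISTS idx_users_credit ON users (credit_score, monthly_income);
CREATE INDEX IF NOT EXISTS idx_matches_user ON user_loan_matches (user_id);
`;
```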

Workflows are modular and decoupled: ingestion, discovering offers, matching, and notification all work independently, monitored using logs in AWS CloudWatch and inside n8n.

How to Use the System

Deploy AWS infrastructure using the Serverless Framework.

Start the n8n Docker Compose stack and import all workflow JSONs.

Configure database and AWS credentials in n8n.

Upload a CSV using the UI to S3 to begin end-to-end processing.

For loan data, either run Workflow A with real scraping or generate test data for rapid prototyping.

Monitor status and results through database queries, logs, or email inboxes.

Advanced Details

Automated scraping can adapt to website changes by updating selectors or data extraction strategies inside n8n.

The matching pipeline is tuned for efficiency, using SQL operations and business logic as filters before invoking expensive or slow AI evaluations.

Workflows use error-handling nodes and can support retry and notification of failures.

Rollout-ready with clear repository structure, automated deploy scripts, and provided documentation for replication or extension.

Key Takeaways

This platform exemplifies event-driven design for scalable data ingestion.

It leverages cloud-native components with best practices (decoupled, scalable, minimal hardcoding).

Security, reliability, and maintainability are built in at every stage, with a modern DevOps approach.

Designed to be extensible for new datasets, additional loan criteria, or channels (SMS, WhatsApp) with minimal workflow changes.
