Hi, I'm Laya Myadam

AI & ML Engineer

I help organizations identify where ML can add value, and where it cannot.

Laya Myadam

About Me

I'm an AI Engineer passionate about building intelligent systems that solve real-world problems. At Saayam For All, I'm developing a volunteer-matching algorithm that connects people with meaningful opportunities using advanced machine learning techniques.

💡

What I Do

I specialize in building AI/ML solutions with a focus on reinforcement learning, NLP, and optimization systems. My work spans quantitative trading models, customer churn prediction, adaptive pricing engines, and intelligent decision-making systems. I thrive on transforming complex data challenges into actionable, scalable intelligence.

🚀

My Approach

I combine strong mathematical foundations with hands-on deep learning expertise. Whether fine-tuning neural networks, optimizing GPU performance with CUDA, or architecting end-to-end ML pipelines, I focus on building systems that are efficient, scalable, and grounded in real-world impact.

Always excited to build intelligent systems that create meaningful impact.

Technical Skills

A comprehensive toolkit for building intelligent, scalable AI systems

Programming Languages

Python · C · C++ · CUDA

RL & ML Frameworks

PyTorch · TensorFlow · Ray RLlib · Stable-Baselines3 · TF-Agents · Scikit-learn

Data Engineering

NumPy · Pandas · Apache Spark · Ray · Streaming Pipelines · Online Learning

Optimization & Analytics

Statistics · Linear Programming · Operations Research · Causal Inference · Game Theory

Cloud & MLOps

AWS · Docker · Kubernetes · FastAPI · Model Monitoring · A/B Testing · Git · CI/CD

Deep Learning

Neural Networks · CNNs · RNNs · Transformers · NLP · Computer Vision · GPU Optimization
6+
Skill Categories
40+
Technologies
3+
Programming Languages
∞
Possibilities
/// System_Portfolio ///

Technical Projects

Project 01
GitHub

FRAUDSHIELD AI

  • Built a Generative AI–powered fraud detection system using an LLM + RAG with a FAISS vector database on the IEEE-CIS dataset, integrating structured financial data with external news embeddings and achieving 92% precision, 89% recall, and a 0.90 F1-score.

  • Fine-tuned a Transformer-based LLM using LoRA/PEFT (PyTorch + Hugging Face) on the Credit Card Fraud dataset (284K+ records) to generate structured fraud explanations, improving analyst review efficiency by 27% with 93% factual consistency.

  • Designed a hybrid retrieval pipeline (SQL + semantic vector search) combining tabular fraud features with contextual embeddings, improving early risk identification by 20% and increasing Recall@K by 18%; ROC-AUC improved from 0.86 to 0.93.

  • Deployed end-to-end with LangChain, Streamlit, FastAPI, and AWS, enabling real-time fraud alert generation and LLM-based risk reasoning while reducing inference latency by 35% and sustaining >90% precision in production-style evaluation.
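The hybrid retrieval idea above can be sketched in miniature. Everything here is hypothetical toy data (the transaction IDs, vectors, and blend weight `alpha` are invented for illustration); a real pipeline would use FAISS with learned embeddings rather than hand-written vectors:

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def hybrid_score(tabular_risk, doc_vec, query_vec, alpha=0.6):
    # Blend a structured fraud-risk score with semantic similarity.
    return alpha * tabular_risk + (1 - alpha) * cosine(doc_vec, query_vec)

query = [1.0, 0.0, 1.0]
candidates = [
    {"id": "tx1", "risk": 0.9, "vec": [1.0, 0.1, 0.9]},
    {"id": "tx2", "risk": 0.2, "vec": [0.0, 1.0, 0.0]},
]
ranked = sorted(candidates,
                key=lambda c: hybrid_score(c["risk"], c["vec"], query),
                reverse=True)
print([c["id"] for c in ranked])
```

The blend weight trades off structured evidence against contextual evidence; in practice it would be tuned on a validation set.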

/// Evaluation Metrics
92%
Precision
89%
Recall
0.90
F1 Score
0.93
ROC-AUC
+18%
Recall@K ↑
35%
Latency ↓
/// Tech Stack
XGBoost · LLM · FAISS · RAG · LoRA/PEFT · LangChain · AWS · Streamlit · Groq · FastAPI · PyTorch · HuggingFace
Dashboard Preview (6 screenshots)
Project 02
GitHub

QUANTITATIVE TRADING SYSTEM

  • Volatile HFT markets required sub-millisecond decision-making beyond traditional models.

  • Engineered an automated RL-based alpha engine using contextual bandits and Nash Equilibrium principles.

  • Stabilized forecast accuracy, achieving consistent 0.5-1% alpha generation and reducing RMSE by 17%.
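A contextual-bandit engine of the kind described above reduces, in its simplest form, to epsilon-greedy arm selection with incremental value estimates. This sketch uses invented signal names and simulated rewards, not real market data:

```python
import random

random.seed(0)

# Hypothetical trading "arms" with simulated mean returns.
true_means = {"momentum": 0.02, "mean_rev": 0.01, "carry": -0.01}
counts = {a: 0 for a in true_means}
values = {a: 0.0 for a in true_means}

def select(eps=0.1):
    # Explore with probability eps, otherwise exploit the best estimate.
    if random.random() < eps:
        return random.choice(list(true_means))
    return max(values, key=values.get)

for _ in range(2000):
    arm = select()
    reward = random.gauss(true_means[arm], 0.05)
    counts[arm] += 1
    # Incremental running-mean update for the chosen arm.
    values[arm] += (reward - values[arm]) / counts[arm]

best = max(values, key=values.get)
print(best, counts)
```

A production system would condition arm selection on market context (hence "contextual") and manage exploration far more carefully than a fixed epsilon.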

/// Evaluation Metrics
1%
Alpha Gen
17%
RMSE ↓
Stable
Forecast Acc
<1hr
Response
/// Tech Stack
RLlib · Time-Series · LSTM · PyTorch · Python · Nash Equilibrium · Contextual Bandits
Dashboard Preview (3 screenshots)
Project 03
GitHub

HOMESCOUT: AI INTELLIGENCE

  • Identifying fake rental listings across 5,000+ entries required manual verification of multi-modal data.

  • Engineered an AI fraud detection system using BERT (NLP) and Computer Vision to analyze text/image patterns.

  • Reduced manual research time from 5 hours to 10 minutes with 91% precision via RAG architecture.

/// Evaluation Metrics
91%
Precision
10min
Research Time
<500ms
Latency
97%
Time Saved
/// Tech Stack
BERT · Computer Vision · RAG · NLP · Python · FAISS · FastAPI
Dashboard Preview (5 screenshots)
Project 04
GitHub

CHURN INTERVENTION MATRIX

  • Rising customer churn in a competitive landscape required proactive, budget-conscious retention.

  • Developed a Deep RL platform treating retention as a dynamic game, integrating online learning and Minimax strategies.

  • Improved churn prediction accuracy by 15% and secured a 1% incremental revenue uplift through targeted interventions.
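Budget-conscious intervention targeting of the kind described above can be illustrated with a toy expected-value rule: intervene only where the expected saved revenue exceeds the intervention cost, within a fixed budget. All customers, probabilities, and costs here are hypothetical, and the real system used deep RL rather than this greedy heuristic:

```python
# Hypothetical customers: churn probability, annual value, and the
# assumed treatment effect (fraction of churners the offer retains).
customers = [
    {"id": "a", "p_churn": 0.8, "value": 100, "uplift": 0.3},
    {"id": "b", "p_churn": 0.2, "value": 500, "uplift": 0.3},
    {"id": "c", "p_churn": 0.6, "value": 300, "uplift": 0.3},
]
COST = 20    # cost of one retention offer
BUDGET = 40  # total spend allowed

def expected_gain(c):
    # Expected saved revenue minus the cost of intervening.
    return c["p_churn"] * c["uplift"] * c["value"] - COST

targets = sorted((c for c in customers if expected_gain(c) > 0),
                 key=expected_gain, reverse=True)[: BUDGET // COST]
print([c["id"] for c in targets])  # -> ['c', 'b']
```

The RL framing generalizes this one-shot rule into a sequential policy that accounts for how interventions change future churn risk.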

/// Evaluation Metrics
15%
Accuracy ↑
1%
Rev Uplift
Real-time
Inference
Full CX
Coverage
/// Tech Stack
Deep RL · Game Theory · Minimax · Python · TensorFlow · Scikit-learn · Online Learning
Dashboard Preview (4 screenshots)
Project 05
GitHub

ADAPTIVE PRICING ENGINE

  • Static pricing failed to capture elasticity across 500K+ daily transactions, leaving revenue uncaptured.

  • Built a real-time engine using Causal Inference and Elasticity Modeling, leveraging Spark Streaming for dynamic adjustments.

  • Realized 15% forecast improvement and optimized margins with <1 minute pricing response time.
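The elasticity-modeling core of such an engine is often a log-log demand regression, where the slope is the price elasticity. This sketch fits ordinary least squares on synthetic data with an assumed elasticity of -1.5; the production system described above layered causal inference and streaming on top of estimates like this:

```python
import math
import random

random.seed(1)

# Synthetic demand data from log(qty) = a + b * log(price) + noise,
# where b is the price elasticity (assumed -1.5 here).
true_elasticity = -1.5
data = []
for _ in range(500):
    price = random.uniform(5, 50)
    log_q = 8.0 + true_elasticity * math.log(price) + random.gauss(0, 0.1)
    data.append((math.log(price), log_q))

# Ordinary least squares slope: cov(x, y) / var(x).
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
b = (sum((x - mx) * (y - my) for x, y in data)
     / sum((x - mx) ** 2 for x, _ in data))
print(round(b, 2))
```

A naive regression like this recovers the elasticity only because the synthetic prices are exogenous; with real transaction data, prices respond to demand, which is exactly why causal-inference methods are needed.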

/// Evaluation Metrics
15%
Forecast ↑
0.8%
Margin ↑
<1min
Response
500K+
Tx/Day
/// Tech Stack
Spark · Ray · Causal Inference · Python · Elasticity Modeling · Kafka · Streaming
Dashboard Preview (4 screenshots)
Project 06
GitHub

DOCUSENSE: RESEARCH AUTO

  • Extracting insights from complex financial contracts and reports was a slow, manual analyst process.

  • Developed a generative AI system using LangChain and HuggingFace to embed data into FAISS vector stores.

  • Reduced manual research effort by 40% while achieving 90%+ accuracy in automated clause extraction.
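The retrieve-then-answer pattern behind such a system can be shown with a deliberately tiny stand-in: bag-of-words vectors matched to a query by cosine similarity. The chunks and query are invented examples; the real pipeline used HuggingFace embeddings stored in FAISS rather than word counts:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

chunks = [
    "termination clause either party may terminate with 30 days notice",
    "payment terms net 45 invoices due within forty five days",
]
query = "what is the termination notice period"
best = max(chunks, key=lambda c: cosine(embed(c), embed(query)))
print(best)
```

Swapping the count vectors for learned embeddings is what lets the real system match clauses by meaning rather than exact word overlap.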

/// Evaluation Metrics
90%+
Accuracy
40%
Effort ↓
Multi-Doc
Docs
Automated
Extraction
/// Tech Stack
LangChain · HuggingFace · FAISS · GenAI · Python · Streamlit · Transformers
Dashboard Preview (7 screenshots)
Project 07
GitHub

TRADEGPT: SIGNAL INTERPRETER

  • Market volatility signals are often difficult for human traders to interpret in real time.

  • Built a fine-tuned Transformer system in PyTorch to translate price swings and news sentiment into reasoning.

  • Enhanced signal interpretability with 87% prediction accuracy and a 30% reduction in manual analysis.
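The "translate signals into reasoning" step can be caricatured with a lexicon scorer plus a rule table. The word lists, headline, and labels below are all invented for illustration; the actual system fine-tuned a Transformer (FinBERT) instead of using a lexicon:

```python
# Hypothetical sentiment lexicon for financial headlines.
POS = {"beats", "upgrade", "growth", "record"}
NEG = {"miss", "downgrade", "lawsuit", "recall"}

def sentiment(headline):
    # Net count of positive minus negative lexicon hits.
    words = set(headline.lower().split())
    return len(words & POS) - len(words & NEG)

def interpret(price_change_pct, headline):
    # Combine a price move with headline sentiment into a readable label.
    s = sentiment(headline)
    if price_change_pct > 1 and s > 0:
        return "momentum supported by positive news"
    if price_change_pct < -1 and s < 0:
        return "decline consistent with negative news"
    return "mixed signal"

print(interpret(2.3, "Company beats earnings record growth"))
```

The Transformer replaces both the lexicon and the rule table with learned representations, which is where the reported 87% accuracy comes from.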

/// Evaluation Metrics
87%
Accuracy
30%
Analysis ↓
NL Output
Logic
Real-time
Inference
/// Tech Stack
PyTorch · Transformers · Sentiment Analysis · NLP · Fine-tuning · Python · FinBERT
Dashboard Preview (4 screenshots)

Professional Journey

Building intelligent systems, one algorithm at a time

🚀

AI/ML Engineer

Saayam For All

California, USA

Aug 2025 - Present

Developed reinforcement learning agents using Multi-Armed Bandits and PPO for recommendation systems, increasing user engagement by 16% and reducing recommendation errors by 10%

Built Graph Attention Networks (GAT) to model social and knowledge graph relationships, improving link prediction accuracy by 20% and enabling personalized recommendations for 50K+ users

Fine-tuned pretrained LLMs (DistilBERT, T5) using PEFT techniques (LoRA/adapters) and integrated vector databases (FAISS, Pinecone) for embeddings, boosting task-specific F1-scores by 18% and reducing inference latency by 12%

Developed and deployed advanced generative AI systems leveraging LangChain, RAG, and Multi-Chain Prompting (MCP) to automate volunteer-requester matching, resource discovery, and intelligent assistance workflows, achieving high semantic relevance and 87%+ accuracy

Implemented end-to-end ML pipelines with Docker, FastAPI, and AWS, reducing deployment cycles from 3 weeks to 2 days and achieving sub-200ms end-to-end inference latency

Engineered robust evaluation frameworks for LLM outputs, including safety checks, statistical validation, and error analysis, reducing model failures by 92% and detecting 15+ critical issues across services

Built scalable generative AI pipelines on AWS, combining RAG-based retrieval and automated monitoring to track 20+ performance and quality metrics, maintaining 99.5% uptime for 1,000+ users
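The LoRA/PEFT idea mentioned in the fine-tuning bullet above reduces to a simple linear-algebra trick: freeze the pretrained weight W and train only a low-rank update B @ A. The shapes here are toy (a real adapter sits inside transformer attention layers), but the parameter arithmetic is the point:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4  # toy dimensions; r << d is the LoRA rank

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-init

def lora_forward(x):
    # With B initialized to zero, the output equals W @ x until training
    # starts, so fine-tuning begins from the pretrained behavior.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
full_params = d_out * d_in          # parameters if W itself were trained
lora_params = r * (d_in + d_out)    # parameters actually trained
print(full_params, lora_params)     # 4096 vs 512 trainable parameters
```

That 8x reduction at rank 4 grows with layer size, which is why adapters make fine-tuning large models tractable.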

⚡

Machine Learning Engineer

Tata Consultancy Services

Verizon (Hyderabad, India)

Jun 2021 - Sep 2023

Developed and fine-tuned machine learning models using scikit-learn, XGBoost, and Random Forest for regression and classification tasks, achieving 18–22% reduction in RMSE through advanced feature engineering and hyperparameter optimization

Built deep learning architectures with PyTorch, including CNNs for image recognition and LSTM/GRU networks for sequential text data, boosting model accuracy by 15% and lowering inference time by ~20%

Implemented GPU-accelerated training pipelines for predictive maintenance applications, leveraging distributed computing and parallel processing to reduce training durations by 40% and detect failures proactively

Designed end-to-end NLP pipelines using TF-IDF, Word2Vec, GloVe embeddings, and early BERT fine-tuning, deploying scalable REST APIs with FastAPI, Docker, and AWS, improving F1-scores and cutting inference latency by 25–30%

Engineered data preprocessing and ETL workflows with Pandas, NumPy, and Apache Spark, handling large-scale datasets (10M+ records) efficiently and reducing data processing times by 35%

Collaborated with cross-functional teams to integrate ML models into production systems, ensuring seamless deployment and monitoring, resulting in a 20% increase in system reliability and user satisfaction
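The TF-IDF weighting named in the NLP-pipeline bullet above is compact enough to show directly. The documents below are invented examples, not production data:

```python
import math

# Toy corpus of hypothetical support tickets.
docs = [
    "network outage reported in downtown area",
    "billing dispute resolved for customer account",
    "customer reports network slowdown",
]

def tf_idf(term, doc, corpus):
    # Term frequency in this document...
    tokens = doc.split()
    tf = tokens.count(term) / len(tokens)
    # ...weighted down by how many documents contain the term.
    df = sum(term in d.split() for d in corpus)
    idf = math.log(len(corpus) / df)
    return tf * idf

print(round(tf_idf("network", docs[0], docs), 3))
```

Terms that appear everywhere get an idf near zero, so the weighting surfaces words that distinguish a document, which is what makes TF-IDF a useful baseline before moving to Word2Vec or BERT embeddings.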

2+
Years Experience
5K+
Volunteers Matched
45%
Downtime Reduced
20+
ML Microservices

Let's Connect

Open to new opportunities, collaborations, and innovative AI projects. Let's build something amazing together.

Location

USA

Ready to Build Something Extraordinary?

Whether it's a challenging AI problem, a collaborative project, or an exciting opportunity – I'd love to hear from you.

Send a Message