End-to-end machine learning pipelines — from raw data to trained, validated, and visualized models ready for use.
Not just model training — full pipelines: data preprocessing, architecture selection, training, evaluation, and output visualization, delivered as clean, documented code you can actually use.

I've built stacked LSTMs for time-series forecasting, CNNs for image classification, and physics-informed neural networks for solving PDEs — all trained, validated, and benchmarked against concrete targets: RMSE and MAE for the forecasters, exact analytical solutions for the PINNs. I know how to build models that aren't just accurate on paper.
I work primarily in Python with PyTorch, scikit-learn, and the standard data science stack (Pandas, NumPy, Matplotlib, Seaborn). I handle everything from raw dataset cleaning and feature engineering through to final output rendering — including 2D, 3D, and animated visualizations.
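As a flavor of the modeling work, here is a minimal PyTorch sketch of a stacked LSTM forecaster. The layer count, hidden size, and window length are illustrative choices, not taken from any specific project:

```python
import torch
import torch.nn as nn

class StackedLSTM(nn.Module):
    """Two stacked LSTM layers for one-step-ahead time-series forecasting."""
    def __init__(self, n_features=1, hidden=64, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)             # out: (batch, seq_len, hidden)
        return self.head(out[:, -1, :])   # predict from the final time step

model = StackedLSTM()
x = torch.randn(8, 30, 1)                 # batch of 8 windows, 30 steps each
y_hat = model(x)
print(y_hat.shape)                        # torch.Size([8, 1])
```

The `batch_first=True` convention keeps the input shape `(batch, seq_len, features)`, which matches how sliding windows usually come out of Pandas.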
Cleaning, normalization (MinMaxScaler, StandardScaler), feature engineering, and train/test splitting done right.
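"Done right" here mainly means avoiding leakage — a minimal scikit-learn sketch with toy placeholder data, where the scaler is fit on the training split only:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 3) * 50           # toy feature matrix
y = np.random.rand(100)                   # toy target

# Split first, then fit the scaler on the training set only,
# so no test-set statistics leak into training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
scaler = MinMaxScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)       # test data scaled with train statistics
print(X_train_s.min(), X_train_s.max())   # 0.0 1.0 on the training split
```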
Architecture selection, hyperparameter tuning, and multi-epoch training with validation curves tracked throughout.
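A stripped-down version of that training loop, showing how train and validation losses get tracked per epoch — the model and data are stand-ins:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy regression task: learn y = sum of the 4 input features
X = torch.randn(200, 4)
y = X.sum(dim=1, keepdim=True)
X_tr, y_tr, X_va, y_va = X[:160], y[:160], X[160:], y[160:]

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

history = {"train": [], "val": []}
for epoch in range(50):
    model.train()
    opt.zero_grad()
    loss = loss_fn(model(X_tr), y_tr)
    loss.backward()
    opt.step()
    model.eval()
    with torch.no_grad():                 # no gradients needed for validation
        val = loss_fn(model(X_va), y_va)
    history["train"].append(loss.item())
    history["val"].append(val.item())

print(history["val"][-1] < history["val"][0])  # validation loss fell
```

The `history` dict is what later gets plotted as the validation curve; a rising validation loss while training loss keeps falling is the usual overfitting signal.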
Rigorous evaluation with appropriate metrics — RMSE, MAE, accuracy, F1 — compared against baselines.
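How those metrics are typically computed with scikit-learn and NumPy — the toy predictions below are illustrative:

```python
import numpy as np
from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                             accuracy_score, f1_score)

# Regression metrics on toy forecasts
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 3.0, 8.0])
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
mae = mean_absolute_error(y_true, y_pred)
print(round(rmse, 3), round(mae, 3))      # 0.612 0.5

# Classification metrics on toy labels
labels = np.array([1, 0, 1, 1, 0, 1])
preds = np.array([1, 0, 1, 0, 0, 1])
print(accuracy_score(labels, preds), f1_score(labels, preds))
```

A naive baseline (e.g. "predict yesterday's value" or the majority class) computed with the same functions is what the model's numbers get compared against.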
Clean, interpretable plots — predicted vs. actual, loss curves, confusion matrices, and animated visualizations.
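A minimal Matplotlib sketch of a loss-curve plot — the curves here are synthetic stand-ins for real training history:

```python
import matplotlib
matplotlib.use("Agg")                     # headless backend: render without a display
import matplotlib.pyplot as plt
import numpy as np

epochs = np.arange(1, 21)
train_loss = 1.0 / epochs                 # synthetic stand-in curves
val_loss = 1.2 / epochs + 0.05

fig, ax = plt.subplots()
ax.plot(epochs, train_loss, label="train loss")
ax.plot(epochs, val_loss, label="validation loss")
ax.set_xlabel("epoch")
ax.set_ylabel("MSE loss")
ax.legend()
fig.savefig("loss_curves.png", dpi=150)
```

Predicted-vs-actual scatter plots and confusion matrices follow the same pattern: build the figure from the evaluation arrays, label every axis, save to a file for handover.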
Week 1: We clarify what success looks like — target variable, evaluation metric, and data availability — before choosing an approach.
Week 1–2: Exploratory data analysis, cleaning, normalization, and feature engineering to get data model-ready.
Week 2–4: Architecture built and trained iteratively — with validation loss tracked and hyperparameters tuned throughout.
Week 4–5: Final benchmarking against metrics, clean visualization output, and documented code ready for handover or integration.