This page gathers a couple of compact ML experiments I’ve been building to learn, prototype fast, and ship small tools that are fun to use. The focus is pragmatic: clean data pipelines, sensible baselines, and models that are easy to deploy locally—no cloud dependency and full control over the stack.
DrawingNeuralNetwork — draw, classify, iterate
A lightweight drawing app where you can sketch digits/objects and get a prediction in real time. I use it as a playground to test preprocessing tricks, training loops, and evaluation without the friction of a big framework. The UI is intentionally minimal so the iteration loop is fast: draw → classify → tweak → repeat.
- Goal: make live inference tangible and debuggable; turn model behavior into immediate feedback.
- Pipeline: canvas image → normalization & denoise → resize → tensorize → model inference → top-k output (see the sketch after this list).
- Models: compact CNN baselines (PyTorch) trained on standard datasets (e.g., MNIST / CIFAR-10) with simple augmentations (shift/rotate/contrast).
- Why it’s useful: you can “feel” when preprocessing or augmentation helps because the effect shows up while you draw.
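A minimal sketch of that pipeline, assuming a 28×28 grayscale CNN and a `model` object loaded elsewhere; the exact blur kernel, resolution, and normalization are illustrative, not the precise values used in the app.

```python
import cv2
import numpy as np
import torch
import torch.nn.functional as F

def classify_canvas(canvas: np.ndarray, model: torch.nn.Module, k: int = 3):
    """Canvas image -> normalization & denoise -> resize -> tensorize -> top-k."""
    gray = cv2.cvtColor(canvas, cv2.COLOR_BGR2GRAY)            # drop color channels
    gray = cv2.GaussianBlur(gray, (3, 3), 0)                   # conservative denoise
    gray = cv2.resize(gray, (28, 28), interpolation=cv2.INTER_AREA)
    x = torch.from_numpy(gray).float().div(255.0)              # scale pixels to [0, 1]
    x = x.unsqueeze(0).unsqueeze(0)                            # shape (1, 1, 28, 28)
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1).squeeze(0)
    top = torch.topk(probs, k)
    return [(int(i), float(p)) for p, i in zip(top.values, top.indices)]
```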
 
Over time I’ve collected a few small insights: centering and scale normalization matter more than fancy layers for hand-drawn input; a conservative denoise helps with strokes that would otherwise get low-confidence predictions; and top-k output with probabilities makes for better UX than a single hard label.
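The centering trick is cheap to implement. A sketch of MNIST-style centering for white-on-black input, shifting the stroke’s center of mass to the middle of the frame; the sizes and crop/pad logic here are assumptions, not the app’s exact code.

```python
import cv2
import numpy as np

def center_and_scale(img: np.ndarray, out: int = 28, inner: int = 20) -> np.ndarray:
    """Crop the stroke, scale it into a fixed inner box, then place it so its
    center of mass sits in the middle of the output image."""
    ys, xs = np.nonzero(img)
    if len(xs) == 0:                              # empty canvas: nothing to center
        return np.zeros((out, out), dtype=img.dtype)
    crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = crop.shape
    scale = inner / max(h, w)                     # preserve aspect ratio
    crop = cv2.resize(crop, (max(1, int(w * scale)), max(1, int(h * scale))))
    canvas = np.zeros((out, out), dtype=img.dtype)
    cy, cx = np.array(np.nonzero(crop)).mean(axis=1)   # stroke center of mass
    top = min(max(int(round(out / 2 - cy)), 0), out - crop.shape[0])
    left = min(max(int(round(out / 2 - cx)), 0), out - crop.shape[1])
    canvas[top:top + crop.shape[0], left:left + crop.shape[1]] = crop
    return canvas
```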
FaceRecognition — local, real-time identification
A real-time face recognition demo built on top of OpenCV. It detects faces from a webcam stream and performs on-device identification using a small, interpretable pipeline—useful for kiosks, lab setups, or home experiments.
- Detector: efficient OpenCV detector for real-time performance on CPU.
- Recognition: per-user enrollments and a compact descriptor for similarity matching (no external services).
- Data flow: capture → detect → align/crop → embed/describe → nearest match with threshold & unknown handling (see the sketch after this list).
- Why local: privacy and latency; it runs fully offline and is easy to integrate with other processes (e.g., doors, dashboards).
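A sketch of the capture → detect → crop → match loop. It assumes OpenCV’s bundled Haar cascade for detection, an `ENROLLMENTS` dict filled at enrollment time, and a placeholder `embed()` descriptor with cosine similarity; the demo’s actual descriptor and threshold may differ.

```python
import cv2
import numpy as np

# Assumed enrollments: name -> list of embedding vectors captured at enrollment time.
ENROLLMENTS: dict[str, list[np.ndarray]] = {}
MATCH_THRESHOLD = 0.6   # illustrative value; tune per descriptor

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def embed(face: np.ndarray) -> np.ndarray:
    """Placeholder descriptor: a tiny, L2-normalized grayscale thumbnail.
    A real pipeline would use a proper face descriptor here."""
    thumb = cv2.resize(cv2.cvtColor(face, cv2.COLOR_BGR2GRAY), (32, 32))
    vec = thumb.astype(np.float32).ravel()
    return vec / (np.linalg.norm(vec) + 1e-8)

def identify(face: np.ndarray) -> tuple[str, float]:
    """Nearest enrolled identity by cosine similarity, with unknown handling."""
    query = embed(face)
    best_name, best_score = "unknown", 0.0
    for name, vectors in ENROLLMENTS.items():
        score = max(float(np.dot(query, v)) for v in vectors)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name if best_score >= MATCH_THRESHOLD else "unknown"), best_score

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        name, score = identify(frame[y:y + h, x:x + w])
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"{name} ({score:.2f})", (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```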
 
In practice, good lighting and enrollment variety (frontal plus slight yaw, with and without glasses) make the system robust. Threshold tuning is key to balancing false accepts against false rejects; for multi-user scenarios, I log confidence scores so I can review edge cases later (a threshold-sweep sketch follows).
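One simple way to pick the threshold is to sweep it over logged similarity scores and inspect the false-accept / false-reject tradeoff. A sketch assuming scores are labeled as genuine (same person) or impostor pairs; the labels and score range are assumptions.

```python
import numpy as np

def sweep_thresholds(genuine: np.ndarray, impostor: np.ndarray, steps: int = 50):
    """For each candidate threshold, report the false-accept rate (impostor
    scores at or above the threshold) and false-reject rate (genuine scores below it)."""
    rows = []
    for t in np.linspace(0.0, 1.0, steps):
        far = float(np.mean(impostor >= t))   # impostors wrongly accepted
        frr = float(np.mean(genuine < t))     # enrolled users wrongly rejected
        rows.append((float(t), far, frr))
    return rows

# Example with made-up similarity scores:
# genuine = np.array([0.82, 0.75, 0.90, 0.68])
# impostor = np.array([0.41, 0.55, 0.30, 0.62])
# for t, far, frr in sweep_thresholds(genuine, impostor, steps=11):
#     print(f"threshold={t:.2f}  FAR={far:.2f}  FRR={frr:.2f}")
```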
What I learned
- Feedback beats theory: interactive demos (drawing/preview) surface issues faster than offline metrics alone.
- Preprocessing > depth (often): consistent sizing, centering, and contrast adjustments usually yield bigger gains than adding layers.
- Explainability helps adoption: surfacing top-k and similarity scores builds trust and makes debugging easier.
 
Next steps
- Add a tiny quantized model for the drawing app to compare FP32 vs INT8 on low-power hardware.
- Swap the face detector for a modern lightweight DNN and expose a calibration page for enrollment/thresholds.
- Bundle both apps with a simple REST API so other services (e.g., the robot dashboard) can request predictions (a minimal sketch below).
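For the REST idea, a minimal Flask sketch; the endpoint name and base64 payload format are hypothetical, and it assumes `classify_canvas` and a loaded `model` from the drawing-app sketch above.

```python
import base64

import cv2
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict/drawing", methods=["POST"])
def predict_drawing():
    """Accept a base64-encoded image and return top-k predictions.
    classify_canvas() and model come from the drawing-app pipeline sketched earlier."""
    payload = request.get_json(force=True)
    raw = base64.b64decode(payload["image"])
    img = cv2.imdecode(np.frombuffer(raw, dtype=np.uint8), cv2.IMREAD_COLOR)
    predictions = classify_canvas(img, model, k=3)   # assumes model is loaded at startup
    return jsonify([{"label": label, "prob": prob} for label, prob in predictions])

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8000)
```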
 
Gallery