Portfolio

This collection of posts represents my completed articles; however, you can expect periodic revisions and updates as I continue to learn and grow.

A Practical Framework for Safe Deployment of Autonomous AI Agents

Autonomous coding agents can increase delivery speed, but they also introduce operational, financial, and security risks at scale. This paper presents a governance-first architecture for safer autonomy: explicit Functional Requirements, deterministic stop conditions, operating system–enforced containment, staged promotion controls, and auditable execution artifacts. It is...

Measuring movement with computer vision

At a basic level, video is a time-stamped measurement tool. Every frame is a discrete snapshot of body position at a known interval. At 60 frames per second, each frame represents about 16.7 milliseconds. That is already within the range where clinically meaningful differences in...
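The frame-interval arithmetic above can be sketched in a few lines of Python (a minimal illustration; the function name is my own, not from the post):

```python
def frame_interval_ms(fps: float) -> float:
    """Duration of one frame in milliseconds at a given capture rate."""
    return 1000.0 / fps

# At 60 fps each frame spans about 16.7 ms, as noted above;
# at 30 fps the window roughly doubles to 33.3 ms.
print(round(frame_interval_ms(60), 1))  # 16.7
print(round(frame_interval_ms(30), 1))  # 33.3
```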

Multi-agent software development

A multi-agent foundation LLM is built for parallel, role-separated work on complex problems. It assumes that meaningful software development is not linear. Architecture, implementation, testing, refactoring, validation, and documentation are distinct cognitive tasks that benefit from concurrent execution. In this model, multiple specialized agents operate...

LLM context is not optional

Modern large language models represent a compressed statistical summary of an enormous portion of what has been publicly written. This post provides clear context, key ideas, and practical takeaways.

AI Languages: One Ruler to Measure Them All

Artificial intelligence speaks many tongues, yet not all languages are treated equally. Most large language models are trained on vast English datasets, leaving other languages to fill in the gaps with whatever online text exists. For years, this asymmetry shaped an assumption that English would...

Recurrent Neural Network (RNN) cell in PyTorch

This minimal PyTorch example implements a custom recurrent neural network (RNN) cell from first principles, showing how sequence memory emerges through feedback. The cell maintains a hidden state vector h, which evolves over time using the current input x and the previous hidden state through...
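The update rule described above can be shown without any framework at all. This is a pure-Python sketch of the classic tanh recurrence (h' = tanh(W_xh·x + W_hh·h + b)) that the post implements in PyTorch; the function and weight names here are illustrative, not taken from the article:

```python
import math
import random

def rnn_cell_step(x, h, W_xh, W_hh, b):
    """One step of a vanilla RNN cell: h' = tanh(W_xh @ x + W_hh @ h + b)."""
    new_h = []
    for i in range(len(h)):
        s = b[i]
        s += sum(W_xh[i][j] * x[j] for j in range(len(x)))   # input contribution
        s += sum(W_hh[i][j] * h[j] for j in range(len(h)))   # feedback from previous state
        new_h.append(math.tanh(s))
    return new_h

# Toy dimensions: 3 inputs, 4 hidden units, randomly initialized weights.
random.seed(0)
in_size, hid_size = 3, 4
W_xh = [[random.uniform(-0.1, 0.1) for _ in range(in_size)] for _ in range(hid_size)]
W_hh = [[random.uniform(-0.1, 0.1) for _ in range(hid_size)] for _ in range(hid_size)]
b = [0.0] * hid_size

# The hidden state h carries memory across the sequence.
h = [0.0] * hid_size
for x in ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]):
    h = rnn_cell_step(x, h, W_xh, W_hh, b)
print(len(h))  # 4
```

The second step's output depends on the first input only through h, which is exactly the feedback loop the post builds in PyTorch.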

LM Studio for macOS: Privacy and Open-Source Transparency

LM Studio is a popular desktop application that allows users to run large language models (LLMs) locally on macOS. It advertises a privacy-first approach by processing data entirely on the user’s machine, without sending any information to external servers. However, questions around its transparency and...

How to get a model from HuggingFace on macOS

Why? Git needs to know how to handle large files (like model weights) separately from normal, small text files.

AIKO, the Tiny Language Model (TLM)

Introduction: A Language Model of My Own We are surrounded by large language models: systems trained on the vastness of the internet. Models like GPT, Claude, and LLaMA can write essays, answer science questions, explain math, generate stories, and even simulate personalities. But as I’ve...

AI Zen Garden: Multi-Agent LLM Collegium on Your Desktop

I often imagine early humans gathered around a bonfire, sharing stories and chipping away at obsidian shards to create tools. In my own life, I notice a curious parallel: I sit here with my favorite note-taking app, aptly named Obsidian, and chip away at my...

YOLOv8

YOLO uses boxes instead of more natural polygons because they allow the network to represent object locations with just 4 numbers, making computation more efficient and consistent across all grid cells. Polygons would require a variable number of points per object, complicating tensor shapes and...
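The "just 4 numbers" encoding can be made concrete with a small conversion helper. YOLO-style models predict boxes as a center point plus width and height; this sketch (my own illustration, not code from the post) converts that to corner coordinates:

```python
def xywh_to_xyxy(cx, cy, w, h):
    """Convert a center-format box (cx, cy, w, h) to corner format (x1, y1, x2, y2).

    Four numbers fully describe the box either way, which is what keeps
    every grid cell's output tensor the same fixed shape.
    """
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

print(xywh_to_xyxy(50, 40, 20, 10))  # (40.0, 35.0, 60.0, 45.0)
```

A polygon would need a variable-length list of vertices per object, which is exactly the ragged-tensor problem the paragraph above describes.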