Uki D. Lucas’ blog and portfolio


Portfolio

  • A Practical Framework for Safe Deployment of Autonomous AI Agents -

    Summary

  • Measuring movement with computer vision -

    Today, I measured my movement using computer vision.

  • Multi-agent software development -

    A multi-agent foundation LLM is built for parallel, role-separated work on complex problems. It assumes that meaningful software development is not linear. Architecture, implementation, testing, refactoring, validation, and documentation are distinct cognitive tasks that benefit from concurrent execution. In this model, multiple specialized agents operate at the same time within a shared project state. Each agent has a defined role, constraints, and success criteria. One agent may reason purely about system architecture and invariants; another may implement code within a restricted scope; a third may generate or run tests; and a fourth evaluates performance, safety, or maintainability.
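
The role-separated flow described above can be sketched as follows. The `ProjectState` structure, the agent roles, and the stubbed actions are all illustrative assumptions, not the post's actual implementation; a real system would have each agent's `act` call an LLM and would run the agents concurrently.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ProjectState:
    """Shared project state that every agent reads and writes (hypothetical)."""
    notes: dict = field(default_factory=dict)

@dataclass
class Agent:
    role: str                              # the agent's defined role
    act: Callable[[ProjectState], str]     # its constrained responsibility

    def step(self, state: ProjectState) -> None:
        # Each agent records its contribution under its own role key.
        state.notes[self.role] = self.act(state)

# Illustrative role-separated agents; in practice each would invoke an LLM.
agents = [
    Agent("architect",   lambda s: "define module boundaries and invariants"),
    Agent("implementer", lambda s: "write code within the architect's scope"),
    Agent("tester",      lambda s: "generate and run tests"),
    Agent("reviewer",    lambda s: "evaluate safety and maintainability"),
]

state = ProjectState()
for agent in agents:   # could execute concurrently; sequential here for clarity
    agent.step(state)
```

The key design point is that every agent works against the same shared state while staying inside its own role, which is what makes the concurrent, non-linear workflow possible.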

  • Running a 24B Mistral Model Locally with MLX on macOS -

    LLM model storage

  • Joint Embedding Predictive Architecture (JEPA) -

    Overview

  • LLM context is not optional -

    On numerous occasions, I have heard people say that GPT answers are generic, shallow, or otherwise insufficient.

  • AI languages: One Ruler to Measure Them All -

    Artificial intelligence speaks many tongues, yet not all languages are treated equally. Most large language models are trained on vast English datasets, leaving other languages to fill in the gaps with whatever online text exists. For years, this asymmetry shaped an assumption that English would always yield the best results when interacting with AI systems. Recent research has begun to challenge that notion.

  • Recurrent Neural Network (RNN) cell in PyTorch -

    This minimal PyTorch example implements a custom recurrent neural network (RNN) cell from first principles, showing how sequence memory emerges through feedback. The cell maintains a hidden state vector h, which evolves over time using the current input x and the previous hidden state through the nonlinear update h = tanh(W_hh·h + W_xh·x). The output y = W_hy·h is then computed as a simple linear projection of the hidden state.
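
Since the post's full PyTorch listing is not reproduced here, a minimal NumPy sketch can illustrate the same update rule; the dimensions, random weights, and sequence length are arbitrary assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: input size 3, hidden size 4, output size 2
n_x, n_h, n_y = 3, 4, 2

# Weight matrices for h = tanh(W_hh·h + W_xh·x) and y = W_hy·h
W_hh = rng.standard_normal((n_h, n_h)) * 0.1
W_xh = rng.standard_normal((n_h, n_x)) * 0.1
W_hy = rng.standard_normal((n_y, n_h)) * 0.1

def rnn_step(x, h):
    """One recurrence step: new hidden state and output."""
    h_new = np.tanh(W_hh @ h + W_xh @ x)   # feedback: h depends on previous h
    y = W_hy @ h_new                       # linear projection of hidden state
    return h_new, y

# Run a short sequence; the hidden state carries memory forward in time.
h = np.zeros(n_h)
for t in range(5):
    x = rng.standard_normal(n_x)
    h, y = rnn_step(x, h)
```

Because tanh squashes its argument into (-1, 1), the hidden state stays bounded while still accumulating information from every earlier input through the feedback term.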

  • Testing local LLM on multi-lingual understanding -

    I ran a quick test on a few LLMs I have installed locally on macOS with 64 GB of RAM.

  • LM Studio for macOS: Privacy and Open-Source Transparency -

    #ChatGPT #byUkiDLucas

  • How to get a model from HuggingFace on macOS -

    How to get a model from HuggingFace on macOS

  • From Behavioral Sciences to a Career in Sensor Perception and AI -

    Every couple of years, I write a post about my career in which I review the past and consider pivots for the future. It is time to post an update for 2025. Because of AI, this year will be the most transformative.

  • AIKO, the Tiny Language Model (TLM) -

    Introduction: A Language Model of My Own

    We are surrounded by large language models: systems trained on the vastness of the internet. Models like GPT, Claude, and LLaMA can write essays, answer science questions, explain math, generate stories, and even simulate personalities. But as I've written in my blog, I'm not chasing scale.

    I like to experiment with small language models (SLMs), or, in this case, tiny language models (TLMs), partly because I'm burning the midnight oil alone, not a green pile of venture-capital money. Training even a mediocre LLM can cost tens of millions of dollars. More importantly, I wanted something that runs locally, privately, and fast: a model that lives on my laptop, not in a server farm. Yes, I know how to fine-tune Mistral 7B and use RAG and CAG; I'll explain later why I trained a model from scratch instead.

    I also wanted something more playful: an AI Zen Garden where I cultivate a Collegium of AI personalities, each trained to reflect a slice of my inner world with its own philosophies, moods, and voice. This TLM is the first of these. She does not know all the facts ever written about the world. Instead, she is trained to understand how I see the world, to help me think, reflect, and write in my own voice. She speaks from my blog posts and personal notes collected over the decades. She is not artificial in the corporate sense; she is authentic, with her Japanese spunk and thoughtful presence.

    This book tells the whole story: how I came up with the idea, how I built it, why a small model can still be wise, and how you might make one, too.

  • AI Zen Garden: Multi-Agent LLM Collegium on Your Desktop -

    #BeeHiiv #byUkiDLucas #Zen #Buddhism #public

  • YOLOv8 -

    In this post, I would like to describe the architecture of YOLOv8 from first principles.


Blog Posts