How to get a model from HuggingFace on macOS
This guide documents the steps needed to download HuggingFace models (especially MLX models) correctly on macOS.
- Install Required Tools
pip install huggingface_hub hf_transfer
brew install git-lfs
- Enable Accelerated Downloads
export HF_HUB_ENABLE_HF_TRANSFER=1
This makes HuggingFace downloads much faster using parallel connections.
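An `export` only lasts for the current terminal session. A small sketch for making it permanent, assuming zsh (the default shell on modern macOS); the `grep` guard keeps the line from being appended twice:

```shell
# Persist the setting for future terminal sessions (zsh is the macOS default).
# Append the export line only if it is not already present:
touch ~/.zshrc
grep -q 'HF_HUB_ENABLE_HF_TRANSFER' ~/.zshrc \
  || echo 'export HF_HUB_ENABLE_HF_TRANSFER=1' >> ~/.zshrc
```

Open a new terminal (or run `source ~/.zshrc`) for the change to take effect.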
- Set up your Local Model Storage
Create a clean folder structure for storing MLX models:
mkdir -p ~/Models/MLX
cd ~/Models/MLX
Example structure:
~/Models/MLX/
├── mistral-7b-instruct-v0.3-4bit/
├── phi-2/
├── smolvlm-256m/
- Initialize Git LFS
After installing git-lfs, you must initialize it:
git lfs install
Why? Git needs to know to handle large files (like model weights) through LFS pointers instead of storing them directly in the repository like normal, small text files.
Verify installation:
git lfs --version
- Download and Clone a Model
If a previous partial download exists, move it aside first:
mv mistral-7b-instruct-v0.3-4bit mistral-7b-instruct-v0.3-4bit-old
Clone the model repo cleanly:
git clone https://huggingface.co/mlx-community/Mistral-7B-Instruct-v0.3-4bit mistral-7b-instruct-v0.3-4bit
Go into the model folder:
cd mistral-7b-instruct-v0.3-4bit
Pull the actual large model weights:
git lfs pull
✅ This downloads the actual model weights (the .safetensors files), tokenizer files, and configs.
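If `git lfs pull` is skipped, the weight files look present but are tiny text stubs. A sketch for spotting that (the helper name `is_lfs_pointer` is my own): LFS pointer stubs begin with `version https://git-lfs.github.com/spec/v1`, while real weights are large binaries.

```shell
# Sketch: check whether a file is still an un-pulled LFS pointer stub.
is_lfs_pointer() {
  head -c 12 "$1" 2>/dev/null | grep -q '^version http'
}

# Demo on a fabricated pointer file (illustrative only):
printf 'version https://git-lfs.github.com/spec/v1\n' > /tmp/lfs-pointer.demo
if is_lfs_pointer /tmp/lfs-pointer.demo; then
  echo "still a pointer - run: git lfs pull"
else
  echo "real weights present"
fi
```

Run it against a real `.safetensors` file; if it reports a pointer, re-run `git lfs pull` inside the model folder.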
- Example of Final Model Path
After successful download:
~/Models/MLX/mistral-7b-instruct-v0.3-4bit/
├── config.json
├── model-00001-of-00002.safetensors
├── model-00002-of-00002.safetensors
├── tokenizer.json
├── tokenizer_config.json
├── special_tokens_map.json
└── ...
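A small sketch to sanity-check that the essential files landed in the model folder (the file list follows the tree above; the function name `check_model` is my own, and other models may ship different files):

```shell
# Sketch: report which essential files are present in a model folder.
check_model() {
  for f in config.json tokenizer.json tokenizer_config.json; do
    if [ -f "$1/$f" ]; then echo "ok: $f"; else echo "missing: $f"; fi
  done
}

check_model ~/Models/MLX/mistral-7b-instruct-v0.3-4bit
```

Any `missing:` line usually means the clone or `git lfs pull` did not finish.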
- Important Tips
- Avoid folders with spaces (e.g., use mistral-7b-instruct, not Mistral 7B Instruct).
- Always run git lfs pull after cloning!
- Store models in ~/Models/MLX/ for clean organization.
- If cloning fails, move old folders aside first.
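The first tip can be automated. A sketch of a helper that derives a space-free, lowercase folder name from a repo id (the name `slug` is my own, not a standard tool):

```shell
# Hypothetical helper: turn a HuggingFace repo id into a clean folder name
# (lowercase, spaces replaced with hyphens):
slug() {
  basename "$1" | tr 'A-Z ' 'a-z-'
}

slug "mlx-community/Mistral-7B-Instruct-v0.3-4bit"   # -> mistral-7b-instruct-v0.3-4bit
```

It can be combined with the clone step, e.g. `git clone "https://huggingface.co/$repo" "$(slug "$repo")"`.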
Respectfully,
Uki D. Lucas https://x.com/ukidlucas
I am preparing to discontinue the e-mail newsletter that sends out my articles. Please follow me on x.com (Twitter) above to get updates from me.