How To Learn Programming Today with AI: A Practical Guide for 2025

Learn how to use AI effectively in your programming journey without becoming overly dependent on it. Find the balance between mastering core skills and leveraging AI tools to boost productivity.

Simple Guide to Calculating VRAM Requirements for Local LLMs

Learn how to estimate the VRAM needed to run language models locally. A simple formula helps you determine the resources required for full model execution or fine-tuning.
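For a rough sense of the kind of estimate that guide works through, here is a minimal back-of-envelope sketch. The 16-bit default, the roughly 20% overhead factor, and the function itself are illustrative assumptions, not figures quoted from the article.

```python
# Back-of-envelope VRAM estimate for loading an LLM for inference.
# Assumption: weight memory dominates, with ~20% extra for the KV cache
# and runtime buffers. Not the article's exact formula.

def estimate_vram_gb(params_billions: float, bits_per_weight: int = 16,
                     overhead: float = 1.2) -> float:
    """Rough VRAM (GB) needed to run a model of the given size."""
    bytes_per_weight = bits_per_weight / 8           # e.g. FP16 -> 2 bytes
    weights_gb = params_billions * bytes_per_weight  # 1B params at 1 byte ~= 1 GB
    return weights_gb * overhead


# Example: an 8B-parameter model quantized to 4 bits per weight
print(f"{estimate_vram_gb(8, bits_per_weight=4):.1f} GB")  # ~4.8 GB
```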

macOS Requirements Guide for Running Llama 3 (All Variants)

System requirements for running Llama 3 models, including the latest updates for Llama 3.3. This guide will help you prepare your hardware and environment for efficient performance.

Getting Started with Llama 3.3 Using Ollama (MacOS)

Run Meta's Llama 3.3 on macOS with ease. Learn how to set up Ollama, manage models, and ensure your Mac meets the system requirements for smooth performance.
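As a small taste of what that setup looks like in practice, here is a minimal sketch that checks a Mac's unified memory before pulling the model. It assumes macOS (for `sysctl`) and an installed Ollama CLI; the 64 GB threshold and the `llama3.3` model tag are assumptions for illustration, not the guide's official figures.

```python
# Minimal sketch: check unified memory on a Mac, then pull Llama 3.3 via the
# Ollama CLI. Assumes macOS (`sysctl`) and that Ollama is already installed;
# the 64 GB threshold is illustrative, not an official requirement.
import subprocess

mem_bytes = int(subprocess.check_output(["sysctl", "-n", "hw.memsize"]).strip())
mem_gb = mem_bytes / 1024**3
print(f"Unified memory: {mem_gb:.0f} GB")

if mem_gb >= 64:  # assumed comfortable minimum for the 70B variant
    subprocess.run(["ollama", "pull", "llama3.3"], check=True)
else:
    print("Consider a smaller or more aggressively quantized model.")
```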

How to Set Up the Qwen LLM on macOS with Ollama

Setting up the Qwen LLM on macOS has never been easier thanks to Ollama, a platform designed to simplify AI model deployment. This detailed guide will walk you through everything you need to know, from installation to advanced usage, making…
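To give a flavour of what that walkthrough covers, here is a minimal sketch using the official `ollama` Python client (`pip install ollama`). It assumes the Ollama daemon is running, the model has already been pulled, and that `qwen2.5` is the tag you want; all of these are assumptions rather than details taken from the guide.

```python
# Minimal sketch with the official `ollama` Python client (pip install ollama).
# Assumes the Ollama daemon is running and the model has been pulled, e.g.
# with `ollama pull qwen2.5`; the tag name is an assumption.
import ollama

response = ollama.chat(
    model="qwen2.5",
    messages=[{"role": "user", "content": "Explain what Ollama does in one sentence."}],
)
print(response["message"]["content"])
```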



Wei-Ming Thor

I create practical guides on Software Engineering, Data Science, and Machine Learning.

Creator of ApX Machine Learning Platform

Background

Full-stack engineer who builds web and mobile apps, now exploring Machine Learning and Data Engineering. Read more

Writing unmaintainable code since 2010.

Skills/languages

Best: JavaScript, Python
Others: Android, iOS, C, React Native, Ruby, PHP

Work

Engineering Manager

Location

Kuala Lumpur, Malaysia

Open Source

Support

Turn coffee into coding guides. Buy me a coffee