Run Your Own AI - You can do it! Text, Image, Video, Audio, Code
Welcome to another episode of Freedom Tech Weekend! In today’s live stream, host Marks shows you how to run open‑source Large Language Models (LLMs) locally on your laptop (and eventually on a phone) using a handful of powerful, privacy‑first tools.
🔒 Why go local?
Commercial AI services (ChatGPT, Claude, etc.) keep your data on third‑party servers, exposing you to leaks, ad targeting, and legal data‑retention requirements. By running models locally with LM Studio, ComfyUI, Whisper, and Qwen Coder, you keep full control of your information while still getting the productivity boost of AI. We also run a speed test: local AI vs. Maple AI! ⏱️
💡 What you’ll learn:
- Installing & using LM Studio to load 20‑B and 120‑B models.
- Comparing token generation speed and reasoning settings.
- Transcribing audio to text with Whisper and MacWhisper.
- Creating images (and even video) with ComfyUI and the Qwen‑Image diffusion model.
- Building custom scripts with Qwen Coder.
- Leveraging Maple AI for encrypted cloud inference when local hardware isn’t enough.
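Want to script against your local setup? Here's a minimal sketch of talking to LM Studio's built‑in local server from Python. It assumes you've enabled the server on its default port (1234) and loaded a 20B model; the model name below is just an example, so swap in whatever appears in your LM Studio model list.
```python
# Minimal sketch: chat with a model served by LM Studio's local
# OpenAI-compatible server. Assumes the server is running on the
# default port 1234 and a model is already loaded in LM Studio.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local endpoint
    api_key="lm-studio",                  # any placeholder works locally
)

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",  # example name; use the model you loaded
    messages=[
        {"role": "system", "content": "You are a helpful, private assistant."},
        {"role": "user", "content": "Why does running AI locally protect my data?"},
    ],
)
print(response.choices[0].message.content)
```
Because the server speaks the same API as the big cloud providers, most existing scripts and tools can be pointed at your laptop just by changing the base URL.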
🛠️ Tools covered: LM Studio, ComfyUI, Whisper, Mac Whisper, Qwen Coder, Hugging Face model hub, Maple AI (try it at https://trymaple.ai).
👉 Get started this weekend – download the tools, follow the on‑screen steps, and experiment with your own private AI workflow.
👍 Like, subscribe, and hit the bell so you never miss a new Freedom Tech Weekend episode. Have a tool you love? Drop it in the chat or tweet us!
🤖 Get confidential, encrypted AI for personal and work use: Maple AI https://trymaple.ai