VIKASH MISHRA

Run Powerful AI on Your PC: My Hands-On Experience with Ollama and Small Language Models (10-Minute Read)


Hey everyone! Ever felt curious about playing around with Artificial Intelligence, but thought it needed a supercomputer that costs a fortune? Well, I’m here to share my personal journey of diving into the world of AI right on my regular PC, thanks to a fantastic tool called Ollama and the magic of small language models (SLMs). For us in India, access to powerful and private AI is becoming increasingly important, and this method offers just that. Trust me, if I can do it, anyone can! Let’s get started on this exciting adventure. 🚀


My “Aha!” Moment with Local AI

Like many of you, I’ve been fascinated by the buzz around AI. Tools like ChatGPT are incredible, but the thought of running something similar, completely privately, on my own machine always seemed like a distant dream. Concerns about data privacy and the need for constant internet access were always at the back of my mind.

Then, I stumbled upon Ollama. It was like a lightbulb went off! This open-source tool promised a simple way to download, run, and manage large language models (LLMs) locally. My initial thought was, “Yeah, right. My PC can barely handle some games.” But as I dug deeper, I discovered the world of small language models. These are AI models that are intentionally designed to be efficient and lightweight, making them perfect for running on everyday computers without needing top-tier GPUs or massive amounts of RAM.

My “aha!” moment came when I successfully ran my first model. The feeling of having a conversational AI running purely on my laptop, without any internet connection, was truly empowering. It opened up a whole new world of experimentation and learning, all within the secure confines of my own device.


Step-by-Step: How I Got My First AI Model Running Locally

Let me walk you through the exact steps I took to get my first small language model up and running using Ollama. It’s simpler than you might think!

1. Installing Ollama: The Easy First Step

The first step was to get Ollama installed on my PC. I headed over to the official Ollama website (ollama.com), which has clear installation instructions and download links for Windows, macOS, and Linux. Since I’m on Windows, I downloaded the .exe file. The installation process was a breeze – just a few clicks, and Ollama was ready to go. It even runs a little server in the background without you having to do anything complicated.
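By default, that background server listens on localhost at port 11434. If you ever want to confirm it started correctly, here’s a tiny Python check I put together (my own sketch, not an official Ollama tool – it just tests whether anything answers on the default port):

```python
import urllib.request


def ollama_is_running(base_url="http://localhost:11434"):
    """Return True if the local Ollama server answers on its default port."""
    try:
        with urllib.request.urlopen(base_url, timeout=2) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, etc.
        return False


print(ollama_is_running())  # True once the installer's background server is up
```

If it prints False, Ollama probably isn’t running yet – on Windows, just launch the Ollama app once and try again.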

2. Opening the Command Line: Your Gateway to AI

Ollama works through your computer’s terminal or command prompt. Think of it as a direct line to your system.

  • For Windows users: Just type “cmd” or “Command Prompt” in your Start Menu search bar and click on it.
  • For macOS/Linux users: Open the “Terminal” application (you can usually find it in your Utilities folder).

Don’t be intimidated by the command line! We’ll only be using a few simple commands.

3. Choosing the Right Small Model: Efficiency is Key

This is where the magic of running AI on a regular PC truly shines. Instead of aiming for massive, resource-hungry models, we focus on small language models (SLMs). These models are trained on vast amounts of data but are optimized for speed and efficiency.

I browsed the Ollama library. It’s like an app store for AI models! I was looking for models with parameter counts in the billions (indicated by ‘B’), rather than tens or hundreds of billions. Based on recommendations and their descriptions, I decided to start with Phi-3 Mini (3.8B). It’s known for its impressive performance despite its small size. Other great options include Gemma (2B or 7B) by Google and Llama 3 (8B) by Meta.
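A quick back-of-the-envelope rule helped me compare these sizes: a model’s download is roughly its parameter count times the bits stored per weight (most models in the Ollama library ship 4-bit quantized). This little Python sketch is only a ballpark – real files add overhead – but it shows why a 3.8B model fits comfortably on an ordinary PC:

```python
def approx_size_gb(params_billion, bits_per_weight=4):
    """Rough download/memory estimate for a quantized model.

    parameters (in billions) x bits per weight / 8 bits per byte ~ gigabytes.
    Treat this as a ballpark only; actual files are somewhat larger.
    """
    return params_billion * bits_per_weight / 8


print(approx_size_gb(3.8))  # Phi-3 Mini at 4-bit -> about 1.9 GB
print(approx_size_gb(8))    # Llama 3 8B at 4-bit -> about 4 GB
```

Compare that with a 70B model (about 35 GB at 4-bit), and it’s clear why SLMs are the sweet spot for everyday hardware.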

4. Running Your First AI Model: The Exciting Part!

Now for the moment of truth! In my command prompt, I typed the following command:

<pre><code class="language-bash">ollama run phi3</code></pre>

(Of course, if you choose a different model, you’d replace phi3 with the model’s name, like ollama run gemma:2b or ollama run llama3:8b).

The first time you run this command for a particular model, Ollama will automatically download it from the internet. This might take a few minutes, depending on your internet speed. Grab a cup of chai while you wait! ☕

Once the download is complete, you’ll see a >>> Send a message prompt. This is it! You’re now interacting with an AI model running locally on your PC. I typed a simple greeting, and the model responded almost instantly. I was amazed! I then asked it to summarize a short article I had, and it did a pretty good job.

To end the conversation, just type /bye and press Enter.
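The chat prompt isn’t the only way in, by the way. That background server also exposes a small HTTP API on localhost:11434 – POST a model name and prompt to /api/generate and you get a completion back as JSON. Here’s a minimal Python sketch of that (it assumes the server is running and the model has already been downloaded):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model, prompt):
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    # stream=False asks for one complete JSON reply instead of a token stream
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()


def ask(model, prompt):
    """Send a prompt to a locally running model and return its reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example usage (with the server running and phi3 pulled):
#   print(ask("phi3", "Explain Diwali in one sentence."))
```

This is how you’d wire a local model into your own scripts – same privacy benefits, no terminal session needed.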


My Go-To Small Language Model Recommendations for Indian Users

Based on my experience, here are some small language models that I found particularly good for getting started, keeping in mind the kind of tasks we might commonly need:

  • Phi-3 Mini (3.8B): Excellent all-rounder. Great for creative writing, summarizing text, answering questions, and even some basic coding tasks. Its smaller size makes it very responsive.
  • Gemma (2B): If you have a slightly older PC or want something incredibly lightweight, Gemma 2B is a fantastic choice. It’s surprisingly capable for its size and can handle basic natural language tasks efficiently.
  • Llama 3 (8B): While slightly larger than Phi-3 Mini, Llama 3 8B offers a noticeable step up in performance for more complex tasks. It’s a good balance between capability and resource usage.

Remember to check the Ollama library for the exact command to run each specific version of these models (e.g., ollama run gemma:7b).
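Once you’ve downloaded a few of these, `ollama list` shows everything stored on your machine. The same information is available programmatically from the server’s /api/tags endpoint – here’s a small sketch (again assuming the default local server is running):

```python
import json
import urllib.request

TAGS_URL = "http://localhost:11434/api/tags"  # the endpoint behind `ollama list`


def model_names(tags_json):
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in tags_json.get("models", [])]


def list_local_models():
    """Return the names of all models downloaded to this machine."""
    with urllib.request.urlopen(TAGS_URL) as resp:
        return model_names(json.load(resp))


# Example usage (with the server running):
#   print(list_local_models())  # e.g. ['phi3:latest', 'gemma:2b']
```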


Level Up Your Local AI Experience: Essential Resources

To help you on your journey with Ollama and local AI, here are some fantastic resources I found incredibly helpful:

  • Official Ollama Documentation: This is your primary source for all things Ollama. You’ll find detailed instructions, command references, and guides on more advanced features: https://ollama.com/docs
  • Ollama Model Library: Keep exploring this page to discover new and interesting small language models: https://ollama.com/library



Why This Matters for Us in India

Running AI models locally has significant benefits, especially for users in India:

  • Data Privacy: Your interactions and data stay on your computer, crucial for sensitive information.
  • Offline Access: No need for a constant internet connection, which can be unreliable in some areas.
  • Cost Savings: Once the models are downloaded, there are no recurring costs associated with usage.
  • Learning and Experimentation: It provides a fantastic platform for students, developers, and anyone curious about AI to learn and experiment without relying on external services.

My experience with Ollama and small language models has been incredibly positive. It’s empowering to have access to AI capabilities right on my PC, privately and affordably. I encourage all of you to give it a try. It’s a fantastic way to explore the exciting world of artificial intelligence without needing a high-end machine. Share your experiences in the comments below – I’d love to hear about the models you try and the cool things you do with them! Happy AI exploring! 😊

