
How your old PC becomes your AI machine

Park Sehun
3 min read · Dec 16, 2024


Generative AI (think OpenAI) is no longer just a buzzword; it’s a genuine tool for boosting productivity, both personally and within companies. But while you can often use a GenAI platform for free on your device, building applications or using the API for personal projects can still be challenging and expensive.

Hugging Face is a great option, but today I’d like to introduce Ollama. Ollama makes it easy to run LLM inference on your own machine with just a few command lines.

Ollama

Ollama is an open-source framework designed to make it easy to deploy and run large language models in a local environment.

Setup (Linux)

Local deployment: Unlike cloud-based AI services, Ollama runs LLMs directly on your local machine. Once a model is downloaded, you have a powerful AI model with no dependence on internet connectivity or cloud services.

Supported platforms: Linux, Windows, macOS.

Hardware requirements: For efficient performance a GPU is recommended, but Ollama can run on the CPU alone (just expect it to be slow). Memory matters too: Ollama loads the model weights into RAM (or VRAM), so your machine needs at least as much memory as the model’s size (e.g., a 43GB model needs roughly 43GB of RAM). With the hardware covered, the setup itself takes only a couple of commands, as shown below.
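
A minimal setup sketch on Linux follows. The install script URL and subcommands come from Ollama’s documentation; the model name llama3.2 is just an example, so substitute any model from the Ollama library that fits your RAM:

```bash
# Install Ollama using the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Download a model from the Ollama library (pick one that fits your RAM)
ollama pull llama3.2

# Start an interactive chat session with the model in your terminal
ollama run llama3.2

# List the models you have downloaded locally
ollama list
```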
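
For personal projects, Ollama also exposes a local HTTP API (on port 11434 by default) while the server is running. A rough sketch with curl; the model and prompt here are placeholders:

```bash
# Send a one-off generation request to the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

Since everything runs on your own machine, there are no API keys or per-token fees involved.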
