Javier Zayas Photography/Getty Images

I’ve been using AI as one of my go-to tools for research for some time now. Although there are plenty of options that can be accessed via a web browser (such as Opera’s Aria, the first AI service I ever used), I prefer desktop tools because I can install and use local LLMs — so all of my queries remain private. With a desktop tool, I can easily switch between local LLMs like Llama, DeepSeek R1, Mistral Instruct, Orca, GPT4All Falcon, and more. 

Until now, my default Linux AI GUI had been Msty, but I’ve found a new app that’s just as good. That app is GPT4All, available for Linux, macOS, and Windows.

Also: My two favorite AI apps on Linux – and how I use them to get more done

GPT4All can be used for standard queries, as a personal writing assistant, to write code, and more. It also happens to have a fantastic UI that’s easy to use and fits right in with the look and feel of your desktop. 

The GPT4All application running on Pop!_OS Linux. Screenshot by Jack Wallen/ZDNET

One aspect of GPT4All that I appreciate is the ability to choose the compute device for text generation. For example, my System76 Thelio has an AMD Radeon RX 7600 GPU. GPT4All detects that GPU and allows me to either use Vulkan: AMD Radeon RX 7600 (RADV NAVI33) or Vulkan: AMD Radeon RX 7600 (RADV RAPHAEL_MENDOCINO). 

I can also select my default model, change the suggestion mode (for generating follow-up questions), configure the number of CPU threads, enable a system tray app, and much more. I can enable a local API server so that GPT4All can be accessed via http://localhost:4891. The downside of using it in server mode is that it consumes more system resources.
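
If you do enable the server, it’s easy to sanity-check from a terminal. The curl call below is a minimal sketch: it assumes GPT4All exposes its usual OpenAI-compatible chat endpoint on the default port, and that you’ve already downloaded a model named “Llama 3.2 3B Instruct” (swap in whatever model you actually have installed).

curl http://localhost:4891/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Llama 3.2 3B Instruct", "messages": [{"role": "user", "content": "What is Linux?"}], "max_tokens": 200}'

If everything is working, you’ll get a JSON response with the model’s answer inside it.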

Let’s get GPT4All installed on Linux, so I can show you what’s what.

Installing GPT4All on Ubuntu-based distributions

What you’ll need: Currently, the only supported Linux distributions are those based on Ubuntu, so you’ll need a running instance of an Ubuntu-based distribution. I’ll demonstrate this on Pop!_OS Linux. 

The first thing to do is download the installer file. Point your browser to the Ubuntu Installer download and, when prompted, save the gpt4all-installer-linux.run file in ~/Downloads.

Open your default terminal window app and change into the Downloads directory with the command cd ~/Downloads. Once there, give the installer executable permissions with the command:

chmod u+x gpt4all-installer-linux.run

You can now run the installer with the command:

./gpt4all-installer-linux.run

Note that you do not need to use sudo for the installation. If you try installing GPT4All with sudo, it will error out. When the installation wizard opens, click through the steps to complete the installation.
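
If the wizard finishes but you don’t see a launcher in your desktop menu right away, you can start GPT4All directly from the directory you installed it to. The path below is an assumption based on the default install location the wizard suggests; substitute whatever directory you chose.

~/gpt4all/bin/chat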

Also: How I feed my files to a local AI for better, more relevant responses

The GPT4All installation wizard.

The GPT4All installation wizard is simple enough for anyone to complete.

Screenshot by Jack Wallen/ZDNET

The installation should complete without a hitch.

Using GPT4All

When you first open GPT4All, you’ll need to opt in to or out of anonymous usage analytics and anonymous sharing of chats. I’d recommend opting out. Once you’ve done that, you’ll land on the main GPT4All window, where you’ll need to install a local model.

The GPT4All main window.

The well-designed GPT4All application window is easy to understand at a glance.

Screenshot by Jack Wallen/ZDNET

To install your first model, click Models and then click Add Model. In the resulting window, scroll down and locate the model you want, such as Llama 3.2 3B Instruct. When you find it, click Download to start the process. Remember, these models can be fairly large (Llama 3.2 3B Instruct, one of the smaller options, weighs in at 1.79 GB). 

The Llama 3.2 3B model listing.

I often use one of the Llama models because they’re small and efficient.

Screenshot by Jack Wallen/ZDNET

Once the model has been downloaded and installed, you’ll find it listed in the Models section.
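
Because these models add up quickly, it’s worth keeping an eye on how much disk space they consume. The path in the command below is an assumption about GPT4All’s default download location on Linux; if you’ve changed the Download Path in Settings, point the command at that directory instead.

du -sh ~/.local/share/nomic.ai/GPT4All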

Also: How to run DeepSeek AI locally to protect your privacy – 2 easy ways

Click on the Chats icon in the left sidebar. If you’ve only installed a single model, it’ll be listed at the top center. If you’ve installed multiple models, select the one you want to use from that drop-down. 

GPT4All responding to the query, “What is Linux?”

I asked, “What is Linux?” for my first query.

Screenshot by Jack Wallen/ZDNET

Type your query in the “Send a message” field and hit Enter on your keyboard. GPT4All will get busy working to answer your query. If you’ve enabled follow-up suggestions, they’ll be listed beneath the answer. I like the follow-up suggestions because they help me dive into some seriously insightful (and sometimes fun) rabbit holes. On my System76 Thelio, I barely noticed the extra compute overhead required by the suggested follow-ups, so it’s worth enabling.

A GPT4All query with suggested follow-up questions.

I consider follow-ups to be a must.

Screenshot by Jack Wallen/ZDNET

And that, my friends, is all there is to getting GPT4All installed on Linux — and the basics for using this helpful tool. 

