Category

Local runtimes

Products that run models on a local machine or privately controlled infrastructure.

6 tools in this category

Ollama

A local runtime for downloading and serving large language models on personal or private infrastructure.

Ollama is used to run models locally for development, privacy-sensitive experiments, offline workflows, and simple self-hosted inference without a large cloud setup.

Audience
Developer
Access
Local
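As a sketch of the workflow Ollama enables: once the local server is running, it exposes a REST API on port 11434 by default. This is a minimal request builder, assuming a default install and an already-pulled model (the model name "llama3" is an assumption; substitute any model you have pulled).

```python
import json
import urllib.request

# Ollama's local server listens on port 11434 by default.
# "llama3" is an assumed model name; use any model you have pulled.
payload = {
    "model": "llama3",
    "prompt": "Why is the sky blue?",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Uncomment when an Ollama server is running locally:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["response"])
```

Setting `"stream": False` returns one JSON object instead of a stream of partial responses, which keeps simple scripts simple.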

Aider

Aider

A terminal-native open-source coding assistant for applying edits to local repositories with model guidance.

Aider is used to chat with a codebase, apply file edits from the terminal, and pair local development loops with either hosted or local models.

Audience
Developer
Access
API
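Typical invocations look like the following sketch, built here as argument lists. The flags are hedged assumptions based on common Aider usage (check `aider --help` for your installed version); the file path and model names are placeholders.

```python
import shlex

# Hedged sketch of common Aider command lines.
# Point Aider at a local Ollama model instead of a hosted API:
local_cmd = shlex.split("aider --model ollama/llama3 src/main.py")
# Or use a hosted model, with its API key set in the environment:
hosted_cmd = shlex.split("aider --model gpt-4o src/main.py")
```

Listing files on the command line scopes the chat to those files, which keeps the model's context focused on the code you actually want edited.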

Continue

Continue

An open-source coding assistant framework for building custom AI workflows inside the editor.

Continue is used to wire local or hosted models into IDEs, create custom coding actions, and control how assistants interact with repositories.

Audience
Developer
Access
API
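Wiring a local model into the editor comes down to a config entry. This is a hedged sketch of a Continue model entry pointing at a local Ollama model; the field names follow Continue's JSON configuration style, and the model name and title are assumptions.

```python
import json

# Hedged sketch of a Continue config entry wiring in a local
# Ollama model. Field names follow Continue's JSON config style;
# the model name and title are assumptions.
config = {
    "models": [
        {
            "title": "Local Llama 3",
            "provider": "ollama",
            "model": "llama3",
        }
    ]
}
print(json.dumps(config, indent=2))
```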

LM Studio

LM Studio

A desktop application for downloading, testing, and serving open models locally.

LM Studio is used by developers and enthusiasts who want a desktop interface for local model experimentation, offline testing, and lightweight local inference APIs.

Audience
Developer
Access
Desktop
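The "lightweight local inference API" is an OpenAI-compatible server, which by default listens at `http://localhost:1234/v1` (an assumption; check the app's server tab for the exact address). A minimal request builder against that endpoint:

```python
import json
import urllib.request

# LM Studio's local server speaks an OpenAI-compatible API,
# assumed here to be at its default address, localhost:1234.
# The "model" field is a placeholder; LM Studio serves
# whichever model is currently loaded in the app.
payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,
}
req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Uncomment with LM Studio's server running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the API shape matches OpenAI's, existing client code can usually be pointed at the local server by changing only the base URL.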

Stability AI

Stable Diffusion

A family of image generation models that can be used through hosted services or self-hosted deployments.

Stable Diffusion is used to generate and edit images, build custom creative tools, and run visual generation workflows on private infrastructure or hosted platforms.

Audience
Hybrid
Access
API
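For the self-hosted path, a common route is Hugging Face's `diffusers` library. This is a hedged sketch, not a definitive recipe: the model id and pipeline usage are assumptions, and the generation lines are commented out because they require `pip install diffusers torch` and substantial compute.

```python
# Hedged sketch of self-hosted Stable Diffusion via the
# `diffusers` library. The model id below is an assumption;
# generation requires `pip install diffusers torch` and a GPU.
prompt = "a lighthouse at dawn, oil painting"

# from diffusers import StableDiffusionPipeline
# import torch
#
# pipe = StableDiffusionPipeline.from_pretrained(
#     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
# ).to("cuda")
# image = pipe(prompt).images[0]
# image.save("lighthouse.png")
```

The hosted-platform path trades this setup cost for per-image pricing, which is the main axis on which teams choose between the two.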