How to Use Local AI Models with Shakespeare
Want complete control over your AI-powered website building? Local AI models offer enhanced privacy, faster responses, and dramatically reduced costs in your development workflow. With Shakespeare’s support for local models such as GPT-OSS, DeepSeek-R1, and Gemma 3 running on your machine, you can build websites entirely offline while keeping your projects and data completely private.
Why Choose Local AI Models?
Complete Privacy
- No data ever leaves your machine
- Perfect for sensitive business projects
- Full control over your intellectual property
- No concerns about terms of service changes
Zero Ongoing Costs
- Pay once for hardware, use forever
- No per-request charges
- No monthly subscriptions
- Unlimited usage without budget worries
Full Customization
- Fine-tune models for your specific needs
- Complete control over model behavior
- No rate limits or usage restrictions
- Experiment freely without costs
Getting Started: Installing Ollama
System Requirements
| Level | Specs |
|---|---|
| Minimum | 8GB RAM, modern CPU |
| Recommended | 16GB+ RAM, GPU with 8GB+ VRAM |
| Optimal | 32GB+ RAM, high-end GPU |
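Not sure where your machine lands? A few standard commands will tell you (nvidia-smi assumes an NVIDIA GPU with drivers installed):
# Linux: check total and available RAM
free -h
# Check GPU model and VRAM (NVIDIA cards)
nvidia-smi
# macOS: total RAM in bytes
sysctl -n hw.memsize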
Installation Steps
macOS Installation
# Install via Homebrew
brew install ollama
# Or download the macOS app from ollama.ai
Linux Installation
# Install via curl
curl -fsSL https://ollama.ai/install.sh | sh
# Some distributions also package Ollama, e.g. Arch Linux:
sudo pacman -S ollama
Windows Installation
- Download the installer from ollama.ai
- Run the installer as administrator
- Restart your system after installation
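On any platform, you can confirm the installation succeeded from a terminal:
# Print the installed Ollama version
ollama --version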
Choosing the Right Model
For Website Building: Top Local Model Recommendations
GPT-OSS - OpenAI’s Open-Weight Powerhouse
ollama pull gpt-oss
Best for: Powerful reasoning, agentic tasks, versatile developer use cases
Trade-offs:
- ✓ Excellent at complex web development
- ⚠ Requires decent hardware
DeepSeek-R1 - Enterprise-Grade Reasoning
ollama pull deepseek-r1
Best for: Open reasoning with performance approaching OpenAI’s o3 and Gemini 2.5 Pro
Trade-offs:
- ✓ Leading-edge reasoning capabilities
- ⚠ Slower inference due to reasoning depth
Gemma 3 - Google’s Single-GPU Solution
ollama pull gemma3
Best for: Builders who want Google’s most capable single-GPU model on consumer hardware
Trade-offs:
- ✓ Excellent performance on consumer hardware
- ⚠ May need more guidance for complex tasks
Quick Model Comparison
| Model | Size | RAM Required | Speed | Code Quality | Content Quality |
|---|---|---|---|---|---|
| GPT-OSS | 8GB | 16GB | Fast | Excellent | Excellent |
| DeepSeek-R1 | 7GB | 16GB | Medium | Excellent | Very Good |
| Gemma 3 | 4GB | 8GB | Very Fast | Very Good | Good |
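These models are also published in multiple size variants; if a default tag is too heavy for your hardware, you can pull a smaller one (exact tags vary, so check each model’s page in the Ollama library):
# Pull a smaller variant of a model (example tag)
ollama pull deepseek-r1:7b
# List installed models and their sizes on disk
ollama list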
Configuring CORS for Browser Access
Important: CORS Configuration Required
Since Shakespeare runs in your browser, you need to configure Ollama to accept cross-origin requests (CORS). This is a crucial step that allows Shakespeare to communicate with your local Ollama instance.
Checking CORS Status
First, verify if CORS is already enabled:
curl -X OPTIONS http://localhost:11434 -H "Origin: http://example.com" -H "Access-Control-Request-Method: GET" -I
If you see HTTP/1.1 403 Forbidden, CORS is not enabled and needs configuration.
Enabling CORS by Platform
Enabling CORS on macOS
# Allow all origins (easiest for local development)
launchctl setenv OLLAMA_ORIGINS "*"
# Or specify specific origins for better security
launchctl setenv OLLAMA_ORIGINS "localhost:3000,localhost:5173,shakespeare.app"
# Optional: Make Ollama accessible on your network
launchctl setenv OLLAMA_HOST "0.0.0.0"
# Restart Ollama for changes to take effect
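How you restart depends on how you installed Ollama. If you use the Homebrew-managed service, for example:
# Restart the Homebrew-managed Ollama service
brew services restart ollama
If you run the desktop app instead, quit it from the menu bar and reopen it.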
Enabling CORS on Linux
Edit the Ollama service configuration:
sudo systemctl edit ollama.service
Add these environment variables:
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
Then restart the service:
sudo systemctl restart ollama
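Before re-testing CORS, you can confirm systemd picked up the variables:
# Show the environment variables applied to the Ollama service
systemctl show ollama --property=Environment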
Enabling CORS on Windows
- Open System Properties → Environment Variables
- Add new system variables:
  - OLLAMA_ORIGINS with value * (or specific origins)
  - OLLAMA_HOST with value 0.0.0.0 (optional, for network access)
- Restart Ollama from the system tray
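If you prefer the command line, the same variables can be set from an Administrator Command Prompt (setx only affects processes started afterwards, so restart Ollama as above):
:: Set the variables machine-wide (requires an Administrator prompt)
setx OLLAMA_ORIGINS "*" /M
setx OLLAMA_HOST "0.0.0.0" /M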
Verifying CORS Configuration
After configuration, test again:
curl -X OPTIONS http://localhost:11434 -H "Origin: http://example.com" -H "Access-Control-Request-Method: GET" -I
Success looks like:
HTTP/1.1 204 No Content
Access-Control-Allow-Origin: *
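For an end-to-end check, you can query the API itself; this endpoint returns your installed models as JSON:
# List installed models through Ollama's REST API
curl http://localhost:11434/api/tags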
Configuring Local Models in Shakespeare
Step 1: Start Ollama Service
# Start Ollama server (with CORS already configured)
ollama serve
# The service will run on http://localhost:11434
Step 2: Test Your Model
ollama run gpt-oss "Write a simple HTML page with a header"
Step 3: Configure in Shakespeare
- Open Shakespeare and go to Settings > AI Settings
- Scroll to “Add Custom Provider”
- Click to expand the custom provider section
- Enter the following configuration:
  - Provider Name: ollama
  - API Endpoint: http://localhost:11434/v1
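To verify the endpoint Shakespeare will use, you can call Ollama’s OpenAI-compatible API directly with curl (substitute whichever model you pulled):
# Send a test chat request to the OpenAI-compatible endpoint
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-oss",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'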
The Bottom Line
Local Models with Shakespeare Provide
- ✓ Enhanced privacy - Your code never leaves your machine
- ✓ Faster responses - No network latency
- ✓ Reduced costs - Zero API fees after initial setup
- ✓ Complete control - Run any compatible model
Ready to supercharge your development workflow? Start with GPT-OSS and experience the benefits of local AI models in Shakespeare.
Ready to Build with Shakespeare?
Start building amazing projects with AI-powered development on Nostr.
Turn your ideas into reality through natural conversation with AI.
This article was originally published on Soapbox.pub