Code Server Configuration

This section describes the configuration and features of the code-server environment.

Accessing Code Server

  • URL: http://localhost:8080
  • Authentication: Password-based (see below for password retrieval)

Retrieving the Code Server Password

After the first build, the code-server password is stored in:

configs/code-server/.config/code-server/config.yaml

Look for the password: field in that file. For example:

password: 0c0dca951a2d12eff1665817

Note: It is recommended not to change this password manually, as it is securely generated.
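
To print the password from a terminal, you can grep the config file directly (path as above):

grep '^password:' configs/code-server/.config/code-server/config.yaml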

Main Configuration Options

  • bind-addr: The address and port code-server listens on (default: 127.0.0.1:8080)
  • auth: Authentication method (default: password)
  • password: The login password (see above)
  • cert: Whether to use HTTPS (default: false)
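
Putting these options together, a typical config.yaml might look like this (values here are illustrative, not your actual settings):

bind-addr: 127.0.0.1:8080
auth: password
password: <generated-password>
cert: false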

Installed Tools and Features

The code-server environment includes:

  • Node.js 18+ and npm
  • Claude Code (@anthropic-ai/claude-code) globally installed
  • Python 3 and tools: python3-pip, python3-venv, python3-full, pipx
  • Image and PDF processing libraries: CairoSVG, Pillow, weasyprint, and fonts-roboto, plus development headers (libcairo2-dev, libfreetype6-dev, libjpeg-dev, libpng-dev, libwebp-dev, libtiff5-dev, libopenjp2-7-dev, liblcms2-dev)
  • Git for version control and plugin management
  • Build tools: build-essential, pkg-config, python3-dev, zlib1g-dev
  • MkDocs Material and a wide range of MkDocs plugins, installed in a dedicated Python virtual environment at /home/coder/.venv/mkdocs
  • Convenience script: run-mkdocs for running MkDocs commands easily

Using MkDocs

The MkDocs virtual environment is automatically added to your PATH, so you can run MkDocs commands directly or use the provided script. For example, to build the site from a clean terminal, run:

cd mkdocs 
mkdocs build
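
To preview the site locally with live reload, the standard MkDocs development server can be used (it binds to 127.0.0.1:8000 by default):

cd mkdocs
mkdocs serve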

Claude Code Integration

The code-server environment comes with Claude Code (@anthropic-ai/claude-code) globally installed via npm.

What is Claude Code?

Claude Code is an AI-powered coding assistant by Anthropic, designed to help you write, refactor, and understand code directly within your development environment.

Usage

  • Launch Claude Code from the integrated terminal in code-server (see "Call Claude" below).
  • Use Claude Code to generate code, explain code snippets, or assist with documentation and refactoring tasks.
  • For more information, refer to the Claude Code documentation.

Note: Claude Code requires an Anthropic API key or account for full functionality. Run claude and follow the login prompt on first use to configure it.

Call Claude

To use Claude Code, type claude into the terminal and follow the on-screen instructions.

claude
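
Recent versions of the Claude Code CLI also support a non-interactive "print" mode via the -p flag, which is handy for one-off questions (flag availability depends on your installed version):

claude -p "Summarize the structure of this repository"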

Shell Environment

The .bashrc is configured to include the MkDocs virtual environment and user-local binaries in your PATH for convenience.
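
As a rough sketch, the relevant .bashrc additions would look something like this (exact lines may differ in your image; the venv path matches the one noted above):

export PATH="$HOME/.venv/mkdocs/bin:$HOME/.local/bin:$PATH"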

Code Navigation and Editing Features

The code-server environment provides robust code navigation and editing features, including:

  • IntelliSense: Smart code completions based on variable types, function definitions, and imported modules.
  • Code Navigation: Easily navigate to definitions, references, and symbol searches within your codebase.
  • Debugging Support: Integrated debugging support for Node.js and Python, with breakpoints, call stacks, and interactive consoles.
  • Terminal Access: Built-in terminal access to run commands, scripts, and version control operations.

Collaboration Features

Code-server includes features to support collaboration:

  • Live Share: Collaborate in real-time with others, sharing your code and terminal sessions.
  • ChatGPT Integration: AI-powered code assistance and chat-based collaboration.

Security Considerations

When using code-server, consider the following security aspects:

  • Password Management: The default password is securely generated. Do not share it or expose it in public repositories.
  • Network Security: Ensure that your firewall settings allow access to the code-server port (default: 8080) only from trusted networks.
  • Data Privacy: Be cautious when uploading sensitive data or code to the server. Use environment variables or secure vaults for sensitive information.
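
For example, rather than hard-coding an Anthropic API key in a file that could be committed, you can export it as an environment variable in your shell session (variable name per Anthropic's documentation):

export ANTHROPIC_API_KEY="<your-api-key>"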

Ollama Integration

The code-server environment includes Ollama, a tool for running large language models locally on your machine.

What is Ollama?

Ollama is a lightweight, extensible framework for building and running language models locally. It provides a simple API for creating, running, and managing models, making it easy to integrate AI capabilities into your development workflow without relying on external services.

Getting Started with Ollama

Starting Ollama

For Ollama to be available, open a terminal and run:

ollama serve

This starts the Ollama server; you can then pull a model and begin chatting.
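
If you want to keep using the same terminal while the server runs, one common approach is to background it and redirect its logs (the log path here is just an example):

nohup ollama serve > ~/ollama.log 2>&1 &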

Pulling a Model

To get started, you'll need to pull a model. For development and testing, we recommend starting with a smaller model such as Gemma 2 (2B):

ollama pull gemma2:2b

For even lighter resource usage, you can use a 1B-parameter model; Gemma 2's smallest variant is 2B, so a model such as Llama 3.2 1B is the lighter option:

ollama pull llama3.2:1b

Running a Model

Once you've pulled a model, you can start an interactive session:

ollama run gemma2:2b
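
You can also pass a prompt directly instead of entering the interactive session:

ollama run gemma2:2b "Write a Python one-liner that reverses a string"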

Available Models

Popular models available through Ollama include:

  • Gemma 2 (2B, 9B, 27B): Google's efficient language models
  • Llama 3.2 (1B, 3B, 11B, 90B): Meta's latest language models
  • Qwen 2.5 (0.5B, 1.5B, 3B, 7B, 14B, 32B, 72B): Alibaba's multilingual models
  • Phi 3.5 (3.8B): Microsoft's compact language model
  • Code Llama (7B, 13B, 34B): Specialized for code generation

Using Ollama in Your Development Workflow

API Access

Ollama provides a REST API that runs on http://localhost:11434 by default. You can integrate this into your applications:

curl http://localhost:11434/api/generate -d '{
  "model": "gemma2:2b",
  "prompt": "Write a Python function to calculate fibonacci numbers",
  "stream": false
}'
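
The API also exposes a chat endpoint that accepts a message history, which is better suited to conversational use:

curl http://localhost:11434/api/chat -d '{
  "model": "gemma2:2b",
  "messages": [{"role": "user", "content": "Explain Python list comprehensions"}],
  "stream": false
}'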

Model Management

List installed models:

ollama list

Remove a model:

ollama rm gemma2:2b

Show model information:

ollama show gemma2:2b
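
List models currently loaded in memory (available in recent Ollama versions):

ollama ps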

Resource Considerations

  • 1B models: Require ~1GB RAM, suitable for basic tasks and resource-constrained environments
  • 2B models: Require ~2GB RAM, good balance of capability and resource usage
  • Larger models: Provide better performance but require significantly more resources

Integration with Development Tools

Ollama can be integrated with various development tools and editors through its API, enabling features like:

  • Code completion and generation
  • Documentation writing assistance
  • Code review and explanation
  • Automated testing suggestions
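
As a minimal sketch of such an integration, the snippet below asks the local API to explain a source file. It assumes jq is installed for safe JSON construction and uses the gemma2:2b model pulled earlier; the script name and prompt are made up for this example:

# explain.sh - send a source file to the local Ollama API for explanation
jq -n --arg code "$(cat "$1")" \
  '{model: "gemma2:2b", prompt: ("Explain this code:\n" + $code), stream: false}' \
  | curl -s http://localhost:11434/api/generate -d @-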

For more information, visit the Ollama documentation.

For more detailed information on configuring and using code-server, refer to the official code-server documentation.