
# Design Philosophy

Our goal: eliminate >99% of CLI and terminal work for Linux systems.

Most people who buy GPU servers don't want to be Linux administrators. They want their model running, their data backed up, their files accessible from the Mac on their desk, their dashboards lighting up — without ever typing `sudo apt-get`. Server AI Hub is built around that principle.

Every CLI moment we leave in the product is a moment we owe you better tooling for.

:::tip If you reach for the terminal — file an enhancement request

If you find yourself dropping into the terminal to do something, that's a feature gap, not user error. Write up what you were trying to do, what you typed, and why the UI didn't get you there, and send it to support@6ixlabs.ai with the subject "Enhancement:". The goal is to retire every common terminal task into a button. We can't do that without your reports.

:::

## What this saves you

The headline number is >80% time savings versus the DIY workflow for setting up a new model on a GPU server. The Hub eliminates the day or two most teams spend every time they want to run a new HuggingFace model:

| Step you used to do by hand | The Hub does this for you |
| --- | --- |
| Browse HF, copy the model ID, set up the token, run `huggingface-cli download`, watch the disk fill | One-click HF integration in the Models tab. Auth, download, cache layout, disk-quota check — all handled. |
| Read the NVIDIA Sparkrun docs, pick a recipe, set up CUDA paths | Sparkrun is wired in. Click a recipe, pick a model, click Launch. |
| Write a `docker-compose.yml`. Get the image name wrong. Get the runtime wrong. Get the volume mounts wrong. | Compose files are generated. Image, runtime, network, volumes — all picked. |
| Pick a memory size. Get OOM-killed. Pick a smaller size. Wait. Repeat. | The Hub knows your hardware and the model's footprint and sizes the container correctly the first time. |
| Wire the new container into Caddy, into Tailscale, into LiteLLM's router, into the dashboard | Reverse proxy, TLS, LiteLLM registration, dashboard card — all wired automatically the moment the container is healthy. |
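The sizing row in particular can be sketched in a few lines. This is our own illustrative heuristic — the function name, the 1.25 overhead factor, and the per-dtype byte counts are assumptions for the sketch, not the Hub's actual sizing algorithm:

```python
def container_memory_gib(n_params_billion, dtype_bytes=2.0, overhead=1.25):
    """Rough estimate of the memory a model container needs.

    Illustrative heuristic only -- not the Hub's actual sizing code.
    n_params_billion : parameter count in billions (7 for a 7B model)
    dtype_bytes      : bytes per weight (2 for fp16/bf16, 1 for int8, 0.5 for 4-bit)
    overhead         : headroom for KV cache, activations, and the runtime
    """
    weights_gib = n_params_billion * 1e9 * dtype_bytes / 2**30
    return weights_gib * overhead

# A 7B model in fp16: ~13 GiB of weights, ~16.3 GiB with headroom
print(round(container_memory_gib(7), 1))  # prints 16.3
```

The point of automating this is that getting any one of the inputs wrong (dtype, overhead, hardware) is what causes the OOM-kill-and-retry loop the table describes.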

The Hub is opinionated, but it's opinionated about the parts you don't want to be opinionated about — paths, ports, configs, runtimes, network plumbing. The parts you actually care about (which model, with which prompt, against which corpus) stay yours.

## The five pillars

### 1. Finder, not a shell

Browse, upload, search, rename, and share files from a real graphical file manager. Mount the box's drives from your Mac like any other network share. The terminal exists in the product, but only as the escape hatch for the rare 1% — not as the default surface for moving files.

### 2. Containers as buttons, not as YAML

Models, embedders, judges, dashboards — every service has a card with Start, Stop, Logs, and Restart. The compose file is generated for you, versioned, and recoverable. You can read it if you want. You should never have to write it.
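As an illustration, a generated compose file for a model service might look roughly like this. The service name, image, port, and paths here are hypothetical stand-ins, not the Hub's actual output:

```yaml
services:
  llama-3-8b:                       # hypothetical service name
    image: vllm/vllm-openai:latest  # serving image picked for you
    runtime: nvidia                 # GPU runtime wired in
    ports:
      - "8000:8000"
    volumes:
      - /data/models:/models        # model cache the Hub manages
    deploy:
      resources:
        limits:
          memory: 20G               # sized from your hardware and the model's footprint
```

Every field in this file is a decision someone had to get right by hand in the DIY workflow; here they are all filled in and versioned for you.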

### 3. Observability that's already on

Grafana, Prometheus, Langfuse, container metrics, GPU utilization, model latency — pre-wired. Click a gauge to see history; click a service to see its log. There is no "step 1: install Prometheus." It's already running.

### 4. Backups and antivirus that schedule themselves

Borg snapshots, iCloud sync, ClamAV scans, signature DB refresh — one-click defaults that match the cadence we'd recommend if you'd asked. The user-facing decision is "do you want this?" The technical decisions ("how often, with what retention, where") have been made for you.
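The retention decision being made for you can be sketched as follows. This is an illustrative model of keep-daily/keep-weekly pruning in the spirit of `borg prune` — the function name and defaults are ours, and it is not the Hub's or Borg's actual implementation:

```python
from datetime import datetime

def snapshots_to_keep(snapshots, keep_daily=7, keep_weekly=4):
    """Pick which snapshots survive a keep-daily/keep-weekly policy.

    Illustrative sketch only: keep the newest snapshot per calendar day
    for the most recent `keep_daily` days, then the newest remaining
    snapshot per ISO week for up to `keep_weekly` weeks.
    """
    kept, seen_days, seen_weeks = [], set(), set()
    for ts in sorted(snapshots, reverse=True):       # walk newest first
        day = ts.date()
        week = ts.isocalendar()[:2]                  # (year, ISO week number)
        if day not in seen_days and len(seen_days) < keep_daily:
            seen_days.add(day)
            kept.append(ts)                          # newest snapshot that day
        elif week not in seen_weeks and len(seen_weeks) < keep_weekly:
            seen_weeks.add(week)
            kept.append(ts)                          # newest remaining that week
    return sorted(kept)
```

This is exactly the kind of "how often, with what retention" decision the one-click default answers so you never have to reason about it.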

### 5. Networking that just works

Tailscale and Caddy do the WireGuard and TLS for you; you get a private hostname inside your team's tailnet and a public URL with a real cert without ever editing `/etc/hosts`. DNS, certs, port forwarding, fail2ban — handled.
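For a sense of what's being handled, one piece of this plumbing — the reverse-proxy site the Hub writes into Caddy — would look something like the following by hand. Hostname and port are hypothetical:

```
# Hypothetical Caddy site block; the Hub generates and manages the real one
model.your-tailnet.ts.net {
    reverse_proxy localhost:8000   # cert issuance and renewal handled by Caddy
}
```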

## What this means in practice

Things you should never have to do by hand:

- Install or update a Docker container
- Edit a `docker-compose.yml`
- Run `freshclam` or `clamscan`
- Write a cron expression
- SSH to the box to delete a file
- Edit `/etc/samba/smb.conf` to share a folder
- Set up Prometheus, Grafana, or AlertManager
- Issue or renew a TLS certificate
- Reset a service to known-good state

Things we will not do for you:

- Pretend the underlying Linux box doesn't exist. The terminal is one click away when you need it.
- Hide the file paths or the actual config. Everything we generate is on disk and readable.
- Lock you into the product. A Hub-managed box is just a Linux box with a curated stack — you can pull the plug on the Hub and keep running.

## How this changes our roadmap

Every feature ships with the question "can someone who has never used Linux do this?" If the answer is no, it isn't done yet.

This bias has practical consequences:

- Defaults are opinionated. We pick the antivirus cadence, the backup retention, the firewall rules, the observability stack. You can override, but the out-of-the-box state should be the right answer for 90% of users.
- Visibility beats config. Show the user what's happening (a scan is running, a backup just completed, a model is loading) rather than expecting them to know to check.
- One way to do a thing. When there are three equivalent ways to run a model, we ship the one we'd recommend and document it. The other two still work — they're just not on the menu.

If this resonates, you're our target user. If "an opinionated stack with curated defaults" sounds limiting, you probably want a different product.

## What we can't ship out of the box

We can't pre-include everything. The Hub ships with a curated stack — the models, vector store, embedders, judges, dashboards, and tooling we'd recommend if you asked. But the ecosystem is enormous: there are dozens of vector databases, hundreds of model serving frameworks, thousands of useful Python packages.

If you need something we didn't ship: email support@6ixlabs.ai with the subject "Package request:" and tell us what you want integrated and why. We rank package requests by demand and ship the high-value ones into the catalog. Examples of packages we'd consider on request:

- Alternative vector stores: Weaviate, Milvus, pgvector, Chroma
- Alternative model serving: TGI, Triton, llama.cpp, Ollama as a service
- Specialty embedders or rerankers not currently in the catalog
- Dev tools: JupyterHub, VS Code Server, RStudio Server
- Workflow engines: Airflow, Prefect, Dagster

Two routes for extending the Hub are on the roadmap (see Extending the Hub) — both closed, single-vendor. Tier 1 is the 6ix Labs catalog: every package built, signed, and supported in-house. Tier 2 is the Enterprise API: a documented API + SDK on a subscription, so Enterprise customers can integrate their own internal services without 6ix Labs being the bottleneck. There is no community-contribution route and no plan for one.