I’ve been playing around with ollama. Given that you download the model yourself, can you trust that it isn’t sending telemetry?
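One way to check for yourself, rather than trusting anyone's word, is to cut off network access entirely and confirm the model still answers. A sketch using Docker's `--network none` (this assumes the official `ollama/ollama` image, and that the model was already pulled into the `ollama` volume beforehand, since an isolated container can't download anything):

```shell
# Start Ollama with networking disabled entirely. Nothing in the
# container can phone home -- there is no network interface to use.
docker run -d --network none \
  -v ollama:/root/.ollama \
  --name ollama-offline \
  ollama/ollama

# If inference still works here, it works fully offline:
docker exec -it ollama-offline ollama run llama3 "hello"
```

You could also leave networking on and watch outbound connections with something like `ss -tnp` on the host while the model runs, but full isolation is the simpler argument.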

  • acockworkorange · 5 hours ago

    Is the overhead because of the containers, or because you’re running something that’s meant to run on Linux through a conversion layer like MinGW?

    • stink@lemmygrad.ml · 12 minutes ago

      Windows > Windows Subsystem for Linux (WSL) Ubuntu > docker container

      I think WSL 2 actually runs Linux in a lightweight virtual machine, so the layers do stack up. I’ve tried getting my own LLM instance running on my Windows machine, but it’s been such a pain.
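      For what it's worth, with Docker Desktop's WSL 2 backend enabled, the whole stack above usually comes down to two commands. A sketch following Ollama's documented Docker usage (the volume path and port are from their image docs; `llama3` is just an example model name):

      ```shell
      # Start the official Ollama image; models persist in the
      # named volume "ollama", and the API listens on port 11434.
      docker run -d -v ollama:/root/.ollama -p 11434:11434 \
        --name ollama ollama/ollama

      # Pull a model and chat with it inside the container:
      docker exec -it ollama ollama run llama3
      ```

      The Windows → WSL 2 → container nesting is still there underneath, but Docker Desktop hides most of the plumbing.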