wuphysics87@lemmy.ml to Privacy@lemmy.ml · 11 hours ago
Can you trust locally run LLMs?
I’ve been playing around with Ollama. Given that you download the model, can you trust it isn’t sending telemetry?
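One way to test this yourself, rather than trusting anyone's word: once the model is downloaded, run the server with networking cut off entirely and confirm inference still works. A minimal sketch using the official `ollama/ollama` Docker image (the volume path and model name here are assumptions; substitute whatever you actually pulled):

```shell
# Pull the image and a model while online (one-time setup).
docker run -d --name ollama -v ollama-data:/root/.ollama -p 11434:11434 ollama/ollama
docker exec ollama ollama pull llama3.2   # example model; use your own
docker rm -f ollama

# Now run the same data with no network at all. If inference works,
# nothing is being sent anywhere -- there is no route out.
docker run -d --name ollama-offline --network=none \
  -v ollama-data:/root/.ollama ollama/ollama
docker exec ollama-offline ollama run llama3.2 "Say hello"
```

With `--network=none` the container has only a loopback interface, so even a model (or runtime) that wanted to phone home couldn't. Alternatively, leave networking on and watch for unexpected outbound connections with `tcpdump` or `ss` while the model runs.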
acockworkorange · 2 hours ago
Is it the overhead because of containers, or is it because you’re running something that’s meant to run on Linux and is using a conversion layer like MinGW?