Any experience with teaching kids Linux?
nayminlwin@lemmy.ml to Linux@lemmy.ml · 1 year ago
webghost0101@sopuli.xyz · 1 year ago
Can you tell me something about what card you used to run what LLM? What is its performance? There is so little out there about this.
ProperlyProperTea@lemmy.ml · 1 year ago
I have an RX6800XT and I use KoboldCPP to run models I download off of Hugging Face. I'm not sure how many tokens per second it generates, probably about 10. If you want to try it yourself, here's a link to the GitHub page: https://github.com/LostRuins/koboldcpp
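For anyone wanting to reproduce a setup like this: KoboldCPP loads GGUF model files and can offload layers to an AMD GPU via its Vulkan backend. The sketch below is not the poster's exact command; the model filename and layer count are placeholder assumptions, and the flags are the ones documented in the KoboldCPP README:

```shell
# Hedged sketch of launching KoboldCPP on an AMD card, not the
# poster's actual setup. The model path and --gpulayers value are
# placeholder assumptions; pick a GGUF file downloaded from
# Hugging Face and tune layers to your VRAM.
python koboldcpp.py \
  --model ./models/some-model.Q4_K_M.gguf \
  --usevulkan \
  --gpulayers 99 \
  --contextsize 4096
```

This starts a local web UI (by default on port 5001) where you can chat with the loaded model; `--usevulkan` avoids the CUDA dependency, which is why it works on cards like the RX6800XT.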