
llama.cpp / buildcache-cuda (Public · Latest)

Install from the command line
$ docker pull ghcr.io/ggml-org/llama.cpp:buildcache-cuda
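To pin an exact build rather than the moving buildcache-cuda tag, the image can also be pulled by digest using Docker's standard name@digest syntax; the digest below is simply the first one from the version list that follows (which tag it corresponds to is not shown on this page):

$ docker pull ghcr.io/ggml-org/llama.cpp@sha256:07d2721684ed8ef902737b08ce5c64d073d851fb41f893922236a3652e7adfe6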

Recent tagged image versions

  • Published about 11 hours ago · Digest sha256:07d2721684ed8ef902737b08ce5c64d073d851fb41f893922236a3652e7adfe6 · 5 version downloads
  • Published about 11 hours ago · Digest sha256:74a8df3d88e76972266cc73f4067a71e2abb62710c89cf731067121e323e389d · 360 version downloads
  • Published about 11 hours ago · Digest sha256:217445e0c06be09ddb288e46685bfb0845e3807303d2d86e7086bbf7a14adfc5 · 10 version downloads
  • Published about 11 hours ago · Digest sha256:43d991be649efe3f40dca33b7932da20a4e42b35c188248eb10ec96b913023a9 · 43 version downloads
  • Published about 11 hours ago · Digest sha256:8951408e4b19103d53e36aeff695933181d4c9669f6d51af5d73cd4b7a347df7 · 0 version downloads


Details


  • Last published: 11 hours ago
  • Discussions: 2.67K
  • Issues: 849
  • Total downloads: 692K