Quantize TriLM models using Q2_K_S #552

Merged
4 commits merged into Mozilla-Ocho:main on Aug 24, 2024

Conversation

ikawrakow
Contributor

This PR enables accurate Q2_K_S quantization of Llama-compatible ternary models.

To use it, grab one of the TriLM ternary models from here and convert it to an f16 GGUF. If you don't have the llama.cpp conversion script convert_hf_to_gguf.py (or don't know how to convert HF safetensors to GGUF), you can download the 4B-parameter TriLM ternary model already converted to GGUF from https://huggingface.co/Green-Sky/TriLM_3.9B-GGUF/resolve/main/TriLM_3.9B_Unpacked-4.0B-F16.gguf?download=true
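
If you do use convert_hf_to_gguf.py, the conversion is a one-liner. A minimal sketch, assuming a local llama.cpp checkout; the snapshot directory and output filename are illustrative:

# run from the llama.cpp checkout that contains convert_hf_to_gguf.py
python convert_hf_to_gguf.py ./TriLM_3.9B_Unpacked --outtype f16 --outfile TriLM_3.9B_Unpacked-F16.gguf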

Once you have an f16 TriLM model, quantize it using

./o/llama.cpp/quantize/quantize --ignore-imatrix-rules  $trilm_model_f16.gguf q2ks.gguf q2_K_S
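
After quantizing, a quick sanity check can be done with the llamafile build of llama.cpp's main. The binary path and prompt below are illustrative, assuming the same o/ build layout as the quantize tool above:

# q2ks.gguf is the file produced by the quantize command above
./o/llama.cpp/main/main -m q2ks.gguf -p "The capital of France is" -n 32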

Here is a perplexity calculation for the 4B TriLM model using the Q2_K_S quantization on a Ryzen-7950X:

Q2_K_S: Final estimate: PPL = 11.1229 +/- 0.07867 main: llamafile version 0.8.13 main: seed = 1724397871 llama_model_loader: loaded meta data with 29 key-value pairs and 273 tensors from junk.bin (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = TriLM_3.9B_Unpacked llama_model_loader: - kv 3: general.size_label str = 4.0B llama_model_loader: - kv 4: general.license str = apache-2.0 llama_model_loader: - kv 5: llama.block_count u32 = 30 llama_model_loader: - kv 6: llama.context_length u32 = 2048 llama_model_loader: - kv 7: llama.embedding_length u32 = 3072 llama_model_loader: - kv 8: llama.feed_forward_length u32 = 9216 llama_model_loader: - kv 9: llama.attention.head_count u32 = 24 llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 24 llama_model_loader: - kv 11: llama.rope.freq_base f32 = 10000.000000 llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 13: llama.attention.layer_norm_epsilon f32 = 0.000010 llama_model_loader: - kv 14: general.file_type u32 = 21 llama_model_loader: - kv 15: llama.vocab_size u32 = 50688 llama_model_loader: - kv 16: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 17: tokenizer.ggml.add_space_prefix bool = false llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 19: tokenizer.ggml.pre str = olmo llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,50688] = ["<|endoftext|>", "<|padding|>", "!",... llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,50688] = [3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,50009] = ["Ġ Ġ", "Ġ t", "Ġ a", "h e", "i n... 
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 0 llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 0 llama_model_loader: - kv 25: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = false llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 28: general.quantization_version u32 = 2 llama_model_loader: - type f32: 61 tensors llama_model_loader: - type q2_K: 208 tensors llama_model_loader: - type q4_K: 3 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: special tokens cache size = 25 llm_load_vocab: token to piece cache size = 0.2984 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 50688 llm_load_print_meta: n_merges = 50009 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 2048 llm_load_print_meta: n_embd = 3072 llm_load_print_meta: n_layer = 30 llm_load_print_meta: n_head = 24 llm_load_print_meta: n_head_kv = 24 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 1 llm_load_print_meta: n_embd_k_gqa = 3072 llm_load_print_meta: n_embd_v_gqa = 3072 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 9216 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 10000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 2048 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: model type = ?B llm_load_print_meta: model ftype = Q2_K - Small llm_load_print_meta: model params = 3.99 B llm_load_print_meta: model size = 1.31 GiB (2.82 BPW) llm_load_print_meta: general.name = TriLM_3.9B_Unpacked llm_load_print_meta: BOS token = 0 '<|endoftext|>' llm_load_print_meta: EOS token = 0 '<|endoftext|>' llm_load_print_meta: UNK token = 0 '<|endoftext|>' llm_load_print_meta: LF token = 128 'Ä' llm_load_print_meta: EOT token = 0 '<|endoftext|>' llm_load_print_meta: max token length = 1024 llm_load_tensors: ggml ctx size = 0.15 MiB llm_load_tensors: CPU buffer size = 1341.96 MiB .......................................................................................... llama_new_context_with_model: n_ctx = 512 llama_new_context_with_model: n_batch = 512 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 10000.0 llama_new_context_with_model: freq_scale = 1 llama_kv_cache_init: CPU KV buffer size = 180.00 MiB llama_new_context_with_model: KV self size = 180.00 MiB, K (f16): 90.00 MiB, V (f16): 90.00 MiB llama_new_context_with_model: CPU output buffer size = 0.19 MiB llama_new_context_with_model: CPU compute buffer size = 105.00 MiB llama_new_context_with_model: graph nodes = 966 llama_new_context_with_model: graph splits = 1

system_info: n_threads = 16 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
perplexity: tokenizing the input ..
perplexity: tokenization took 562.262 ms
perplexity: calculating perplexity over 560 chunks, n_ctx=512, batch_size=512, n_seq=1
perplexity: 1.40 seconds per pass - ETA 13.07 minutes
[1]6.7442,[2]8.4756,[3]9.0202,[4]9.3941,[5]9.4523,[6]9.9244,[7]10.6157,[8]11.0102,[9]11.8860,[10]12.1197,[11]12.1545,[12]12.4319,[13]13.0245,[14]12.3298,[15]11.9941,[16]11.7933,[17]11.5968,[18]11.8347,[19]11.5766,[20]11.5143,[21]11.4763,[22]11.3108,[23]10.9281,[24]10.9472,[25]10.6608,[26]10.5598,[27]10.6654,[28]10.7373,[29]10.7532,[30]10.7427,[31]10.7589,[32]10.8002,[33]10.8736,[34]10.9678,[35]11.0388,[36]10.8633,[37]10.9149,[38]10.8300,[39]10.7560,[40]10.7993,[41]10.8425,[42]10.8662,[43]10.8423,[44]10.9535,[45]10.9956,[46]10.8788,[47]10.8744,[48]10.8504,[49]10.9461,[50]11.0045,[51]10.9613,[52]11.1070,[53]11.1045,[54]11.1023,[55]11.1596,[56]11.2275,[57]11.2530,[58]11.2925,[59]11.3699,[60]11.4226,[61]11.5025,[62]11.5748,[63]11.6842,[64]11.7378,[65]11.7486,[66]11.7578,[67]11.8552,[68]11.8481,[69]11.8897,[70]11.8190,[71]11.8226,[72]11.8453,[73]11.8114,[74]11.7874,[75]11.7339,[76]11.7124,[77]11.7445,[78]11.7897,[79]11.7550,[80]11.7990,[81]11.7904,[82]11.7987,[83]11.7995,[84]11.7636,[85]11.8196,[86]11.8109,[87]11.8063,[88]11.8096,[89]11.7873,[90]11.8105,[91]11.8160,[92]11.8585,[93]11.8406,[94]11.9084,[95]11.9615,[96]11.9558,[97]11.9789,[98]11.9383,[99]11.9024,[100]11.8790,[101]11.9234,[102]11.9636,[103]11.9491,[104]12.0003,[105]12.0513,[106]12.0775,[107]12.0516,[108]12.1147,[109]12.1452,[110]12.0888,[111]12.0815,[112]12.0760,[113]12.0283,[114]12.0484,[115]12.0164,[116]11.9938,[117]11.9452,[118]11.9036,[119]11.9001,[120]11.8765,[121]11.8303,[122]11.8024,[123]11.7724,[124]11.7125,[125]11.6527,[126]11.6500,[127]11.6333,[128]11.6140,[129]11.6075,[130]11.5956,[131]11.5723,[132]11.5262,[133]11.5390,[134]11.5340,[135]11.5706,[136]11.5382,[137]11.5057,[138]11.5511,[139]11.4914,[140]11.4415,[141]11.3896,[142]11.3444,[143]11.2602,[144]11.2068,[145]11.1545,[146]11.1260,[147]11.0867,[148]11.0589,[149]11.0040,[150]10.9592,[151]10.9186,[152]10.8690,[153]10.8409,[154]10.8003,[155]10.7621,[156]10.7460,[157]10.7444,[158]10.7345,[159]10.7641,[160]10.7538,[161]10.7992,[162]10.8377,[163]10.8792,[164]10.9230,[165]10.9375,[166]10.9944,[167]11.0206,[168]11.0846,[169]11.1310,[170]11.1402,[171]11.1283,[172]11.1994,[173]11.2296,[174]11.2491,[175]11.2516,[176]11.2511,[177]11.2682,[178]11.2867,[179]11.2993,[180]11.3239,[181]11.3553,[182]11.3691,[183]11.3723,[184]11.3940,[185]11.4293,[186]11.4692,[187]11.4748,[188]11.4914,[189]11.4748,[190]11.4411,[191]11.4330,[192]11.3965,[193]11.4250,[194]11.4480,[195]11.4724,[196]11.4327,[197]11.4480,[198]11.4180,[199]11.4114,[200]11.4097,[201]11.4048,[202]11.3998,[203]11.3994,[204]11.3721,[205]11.3682,[206]11.3358,[207]11.3260,[208]11.3019,[209]11.2969,[210]11.3005,[211]11.2749,[212]11.2618,[213]11.2614,[214]11.2312,[215]11.2142,[216]11.2100,[217]11.1992,[218]11.1721,[219]11.1596,[220]11.1455,[221]11.1311,[222]11.1305,[223]11.1238,[224]11.1057,[225]11.1185,[226]11.1054,[227]11.0931,[228]11.1081,[229]11.1130,[230]11.1283,[231]11.1252,[232]11.1494,[233]11.1524,[234]11.1697,[235]11.1853,[236]11.2010,[237]11.2140,[238]11.2155,[239]11.2340,[240]11.2252,[241]11.2545,[242]11.2829,[243]11.2822,[244]11.2831,[245]11.2909,[246]11.2658,[247]11.2298,[248]11.2068,[249]11.1784,[250]11.1679,[251]11.1598,[252]11.1570,[253]11.1502,[254]11.1465,[255]11.1296,[256]11.1223,[257]11.1088,[258]11.0803,[259]11.0549,[260]11.0462,[261]11.0155,[262]11.0086,[263]10.9839,[264]10.9697,[265]10.9528,[266]10.9423,[267]10.9372,[268]10.9205,[269]10.9296,[270]10.8917,[271]10.9109,[272]10.9312,[273]10.9292,[274]10.9338,[275]10.9325,[276]10.9368,[277]10.9420,[278]10.9529,[279]10.9608,[280]10.9686,[281]10.9806,[282]10.9973,[28
3]11.0099,[284]10.9993,[285]10.9825,[286]10.9841,[287]10.9713,[288]10.9741,[289]10.9672,[290]10.9867,[291]10.9850,[292]10.9932,[293]10.9963,[294]10.9907,[295]10.9801,[296]10.9892,[297]10.9999,[298]10.9912,[299]10.9917,[300]10.9961,[301]10.9917,[302]10.9820,[303]10.9891,[304]10.9891,[305]11.0066,[306]11.0089,[307]11.0140,[308]11.0027,[309]11.0113,[310]11.0107,[311]11.0252,[312]11.0284,[313]11.0285,[314]11.0104,[315]10.9975,[316]10.9928,[317]10.9874,[318]10.9987,[319]10.9906,[320]10.9788,[321]10.9845,[322]10.9813,[323]10.9913,[324]10.9806,[325]10.9726,[326]10.9570,[327]10.9502,[328]10.9427,[329]10.9458,[330]10.9423,[331]10.9313,[332]10.9256,[333]10.9080,[334]10.8948,[335]10.8877,[336]10.8913,[337]10.8886,[338]10.8768,[339]10.8928,[340]10.8992,[341]10.9017,[342]10.9078,[343]10.9131,[344]10.9189,[345]10.9283,[346]10.9301,[347]10.9355,[348]10.9339,[349]10.9522,[350]10.9671,[351]10.9936,[352]11.0119,[353]11.0203,[354]11.0408,[355]11.0568,[356]11.0755,[357]11.0819,[358]11.0934,[359]11.1085,[360]11.1109,[361]11.1099,[362]11.1298,[363]11.1404,[364]11.1586,[365]11.1685,[366]11.1767,[367]11.1853,[368]11.2028,[369]11.2214,[370]11.2279,[371]11.2154,[372]11.2170,[373]11.2186,[374]11.2299,[375]11.2391,[376]11.2500,[377]11.2467,[378]11.2434,[379]11.2484,[380]11.2506,[381]11.2505,[382]11.2551,[383]11.2542,[384]11.2512,[385]11.2513,[386]11.2398,[387]11.2316,[388]11.2179,[389]11.2129,[390]11.2117,[391]11.2131,[392]11.2102,[393]11.2052,[394]11.2241,[395]11.2297,[396]11.2299,[397]11.2245,[398]11.2199,[399]11.2038,[400]11.2051,[401]11.2048,[402]11.2028,[403]11.2008,[404]11.2086,[405]11.1920,[406]11.2024,[407]11.2138,[408]11.2153,[409]11.2137,[410]11.2134,[411]11.2285,[412]11.2369,[413]11.2342,[414]11.2295,[415]11.2355,[416]11.2229,[417]11.2250,[418]11.2235,[419]11.2250,[420]11.2381,[421]11.2285,[422]11.2121,[423]11.2089,[424]11.2083,[425]11.2022,[426]11.1895,[427]11.1850,[428]11.1760,[429]11.1639,[430]11.1650,[431]11.1618,[432]11.1453,[433]11.1452,[434]11.1360,[435]11.1346,[436]11.1336,[437]11.1358,[438]11.1414,[439]11.1361,[440]11.1476,[441]11.1613,[442]11.1546,[443]11.1530,[444]11.1520,[445]11.1579,[446]11.1596,[447]11.1599,[448]11.1589,[449]11.1623,[450]11.1693,[451]11.1720,[452]11.1781,[453]11.1653,[454]11.1653,[455]11.1515,[456]11.1549,[457]11.1603,[458]11.1566,[459]11.1577,[460]11.1442,[461]11.1449,[462]11.1626,[463]11.1718,[464]11.1750,[465]11.1806,[466]11.1773,[467]11.1699,[468]11.1684,[469]11.1573,[470]11.1519,[471]11.1440,[472]11.1401,[473]11.1408,[474]11.1277,[475]11.1171,[476]11.1081,[477]11.1026,[478]11.0960,[479]11.0957,[480]11.0978,[481]11.0932,[482]11.0831,[483]11.0956,[484]11.1076,[485]11.1301,[486]11.1295,[487]11.1256,[488]11.1271,[489]11.1226,[490]11.1246,[491]11.1259,[492]11.1254,[493]11.1229,[494]11.1161,[495]11.1130,[496]11.1131,[497]11.1075,[498]11.1096,[499]11.1109,[500]11.1070,[501]11.0987,[502]11.1014,[503]11.1013,[504]11.1100,[505]11.1064,[506]11.1063,[507]11.1030,[508]11.1046,[509]11.0980,[510]11.1023,[511]11.0997,[512]11.1093,[513]11.1149,[514]11.1107,[515]11.1247,[516]11.1304,[517]11.1342,[518]11.1449,[519]11.1428,[520]11.1310,[521]11.1361,[522]11.1443,[523]11.1483,[524]11.1443,[525]11.1325,[526]11.1199,[527]11.1291,[528]11.1116,[529]11.1020,[530]11.0676,[531]11.0555,[532]11.0531,[533]11.0561,[534]11.0641,[535]11.0563,[536]11.0634,[537]11.0667,[538]11.0740,[539]11.0871,[540]11.1014,[541]11.0974,[542]11.1094,[543]11.1118,[544]11.1043,[545]11.1022,[546]11.0937,[547]11.0958,[548]11.0969,[549]11.1099,[550]11.1075,[551]11.1039,[552]11.0994,[553]11.0977,[554]11.0958,[555]11.0980,[556]11.1
049,[557]11.1104,[558]11.1225,[559]11.1254,[560]11.1229,
Final estimate: PPL = 11.1229 +/- 0.07867

llama_print_timings: load time = 140.58 ms
llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: prompt eval time = 813920.01 ms / 286720 tokens ( 2.84 ms per token, 352.27 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 817625.61 ms / 286721 tokens

And here is the perplexity run using the fp16 model on an RTX-4080:

FP16: PPL = 11.1531 +/- 0.07854 main: build = 3403 (bd99ed7d) main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu main: seed = 1724399025 llama_model_loader: loaded meta data with 29 key-value pairs and 273 tensors from ../../llama.cpp/models/tri4B/TriLM_3.9B_Unpacked-4.0B-F16.gguf (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = TriLM_3.9B_Unpacked llama_model_loader: - kv 3: general.size_label str = 4.0B llama_model_loader: - kv 4: general.license str = apache-2.0 llama_model_loader: - kv 5: llama.block_count u32 = 30 llama_model_loader: - kv 6: llama.context_length u32 = 2048 llama_model_loader: - kv 7: llama.embedding_length u32 = 3072 llama_model_loader: - kv 8: llama.feed_forward_length u32 = 9216 llama_model_loader: - kv 9: llama.attention.head_count u32 = 24 llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 24 llama_model_loader: - kv 11: llama.rope.freq_base f32 = 10000.000000 llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 13: llama.attention.layer_norm_epsilon f32 = 0.000010 llama_model_loader: - kv 14: general.file_type u32 = 1 llama_model_loader: - kv 15: llama.vocab_size u32 = 50688 llama_model_loader: - kv 16: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 17: tokenizer.ggml.add_space_prefix bool = false llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 19: tokenizer.ggml.pre str = olmo llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,50688] = ["<|endoftext|>", "<|padding|>", "!",... llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,50688] = [3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,50009] = ["Ġ Ġ", "Ġ t", "Ġ a", "h e", "i n... 
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 0 llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 0 llama_model_loader: - kv 25: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = false llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 28: general.quantization_version u32 = 2 llama_model_loader: - type f32: 61 tensors llama_model_loader: - type f16: 212 tensors llm_load_vocab: special tokens cache size = 25 llm_load_vocab: token to piece cache size = 0.2984 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 50688 llm_load_print_meta: n_merges = 50009 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 2048 llm_load_print_meta: n_embd = 3072 llm_load_print_meta: n_layer = 30 llm_load_print_meta: n_head = 24 llm_load_print_meta: n_head_kv = 24 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 1 llm_load_print_meta: n_embd_k_gqa = 3072 llm_load_print_meta: n_embd_v_gqa = 3072 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 9216 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 10000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 2048 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: model type = ?B llm_load_print_meta: model ftype = F16 llm_load_print_meta: model params = 3.99 B llm_load_print_meta: model size = 7.44 GiB (16.00 BPW) llm_load_print_meta: general.name = TriLM_3.9B_Unpacked llm_load_print_meta: BOS token = 0 '<|endoftext|>' llm_load_print_meta: EOS token = 0 '<|endoftext|>' llm_load_print_meta: UNK token = 0 '<|endoftext|>' llm_load_print_meta: LF token = 128 'Ä' llm_load_print_meta: EOT token = 0 '<|endoftext|>' llm_load_print_meta: max token length = 1024 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 4080, compute capability 8.9, VMM: yes llm_load_tensors: ggml ctx size = 0.26 MiB llm_load_tensors: offloading 30 repeating layers to GPU llm_load_tensors: offloading non-repeating layers to GPU llm_load_tensors: offloaded 31/31 layers to GPU llm_load_tensors: CPU buffer size = 297.00 MiB llm_load_tensors: CUDA0 buffer size = 7317.71 MiB ............................................................................................... 
llama_new_context_with_model: n_ctx = 512 llama_new_context_with_model: n_batch = 512 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 10000.0 llama_new_context_with_model: freq_scale = 1 llama_kv_cache_init: CUDA0 KV buffer size = 180.00 MiB llama_new_context_with_model: KV self size = 180.00 MiB, K (f16): 90.00 MiB, V (f16): 90.00 MiB llama_new_context_with_model: CUDA_Host output buffer size = 0.19 MiB llama_new_context_with_model: CUDA0 compute buffer size = 105.00 MiB llama_new_context_with_model: CUDA_Host compute buffer size = 7.01 MiB llama_new_context_with_model: graph nodes = 966 llama_new_context_with_model: graph splits = 2

system_info: n_threads = 1 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
perplexity: tokenizing the input ..
perplexity: tokenization took 461.126 ms
perplexity: calculating perplexity over 560 chunks, n_ctx=512, batch_size=512, n_seq=1
perplexity: 0.07 seconds per pass - ETA 0.62 minutes
[1]6.8725,[2]8.6122,[3]9.0747,[4]9.4520,[5]9.4802,[6]9.9427,[7]10.6136,[8]11.0248,[9]11.8929,[10]12.1284,[11]12.1640,[12]12.4496,[13]13.0463,[14]12.3556,[15]12.0381,[16]11.8485,[17]11.6505,[18]11.8854,[19]11.6302,[20]11.5744,[21]11.5395,[22]11.3857,[23]10.9919,[24]11.0146,[25]10.7308,[26]10.6249,[27]10.7242,[28]10.7979,[29]10.8165,[30]10.8065,[31]10.8283,[32]10.8635,[33]10.9359,[34]11.0273,[35]11.0961,[36]10.9341,[37]10.9828,[38]10.8964,[39]10.8187,[40]10.8616,[41]10.9043,[42]10.9238,[43]10.8975,[44]11.0060,[45]11.0475,[46]10.9323,[47]10.9286,[48]10.9063,[49]11.0009,[50]11.0593,[51]11.0184,[52]11.1621,[53]11.1634,[54]11.1627,[55]11.2243,[56]11.2931,[57]11.3198,[58]11.3612,[59]11.4398,[60]11.4890,[61]11.5656,[62]11.6410,[63]11.7520,[64]11.8062,[65]11.8171,[66]11.8255,[67]11.9228,[68]11.9139,[69]11.9597,[70]11.8881,[71]11.8923,[72]11.9140,[73]11.8790,[74]11.8535,[75]11.8000,[76]11.7757,[77]11.8089,[78]11.8546,[79]11.8182,[80]11.8605,[81]11.8528,[82]11.8627,[83]11.8627,[84]11.8270,[85]11.8827,[86]11.8747,[87]11.8709,[88]11.8753,[89]11.8543,[90]11.8775,[91]11.8820,[92]11.9238,[93]11.9072,[94]11.9765,[95]12.0308,[96]12.0237,[97]12.0452,[98]12.0038,[99]11.9673,[100]11.9437,[101]11.9874,[102]12.0270,[103]12.0107,[104]12.0628,[105]12.1158,[106]12.1419,[107]12.1158,[108]12.1792,[109]12.2099,[110]12.1538,[111]12.1455,[112]12.1390,[113]12.0906,[114]12.1108,[115]12.0770,[116]12.0537,[117]12.0054,[118]11.9644,[119]11.9606,[120]11.9362,[121]11.8876,[122]11.8596,[123]11.8287,[124]11.7689,[125]11.7077,[126]11.7041,[127]11.6882,[128]11.6668,[129]11.6596,[130]11.6469,[131]11.6236,[132]11.5778,[133]11.5909,[134]11.5852,[135]11.6220,[136]11.5880,[137]11.5553,[138]11.6000,[139]11.5413,[140]11.4915,[141]11.4380,[142]11.3921,[143]11.3081,[144]11.2543,[145]11.2026,[146]11.1744,[147]11.1338,[148]11.1050,[149]11.0488,[150]11.0037,[151]10.9637,[152]10.9126,[153]10.8840,[154]10.8416,[155]10.8028,[156]10.7867,[157]10.7854,[158]10.7748,[159]10.8036,[160]10.7931,[161]10.8374,[162]10.8770,[163]10.9188,[164]10.9618,[165]10.9754,[166]11.0334,[167]11.0597,[168]11.1235,[169]11.1695,[170]11.1805,[171]11.1677,[172]11.2382,[173]11.2680,[174]11.2870,[175]11.2895,[176]11.2885,[177]11.3052,[178]11.3226,[179]11.3341,[180]11.3576,[181]11.3896,[182]11.4027,[183]11.4051,[184]11.4260,[185]11.4615,[186]11.5019,[187]11.5066,[188]11.5234,[189]11.5068,[190]11.4726,[191]11.4653,[192]11.4296,[193]11.4589,[194]11.4816,[195]11.5061,[196]11.4657,[197]11.4806,[198]11.4504,[199]11.4438,[200]11.4417,[201]11.4366,[202]11.4311,[203]11.4310,[204]11.4032,[205]11.3990,[206]11.3652,[207]11.3548,[208]11.3302,[209]11.3250,[210]11.3277,[211]11.3021,[212]11.2882,[213]11.2876,[214]11.2574,[215]11.2411,[216]11.2366,[217]11.2260,[218]11.1985,[219]11.1846,[220]11.1695,[221]11.1548,[222]11.1535,[223]11.1467,[224]11.1281,[225]11.1396,[226]11.1270,[227]11.1147,[228]11.1305,[229]11.1352,[230]11.1499,[231]11.1470,[232]11.1712,[233]11.1745,[234]11.1913,[235]11.2069,[236]11.2219,[237]11.2342,[238]11.2360,[239]11.2546,[240]11.2460,[241]11.2747,[242]11.3030,[243]11.3025,[244]11.3031,[245]11.3116,[246]11.2869,[247]11.2510,[248]11.2280,[249]11.1998,[250]11.1894,[251]11.1815,[252]11.1794,[253]11.1725,[254]11.1685,[255]11.1508,[256]11.1435,[257]11.1297,[258]11.1009,[259]11.0751,[260]11.0668,[261]11.0354,[262]11.0283,[263]11.0037,[264]10.9891,[265]10.9723,[266]10.9618,[267]10.9568,[268]10.9409,[269]10.9502,[270]10.9117,[271]10.9309,[272]10.9507,[273]10.9483,[274]10.9529,[275]10.9520,[276]10.9559,[277]10.9607,[278]10.9719,[279]10.9805,[280]10.9896,[281]11.0017,[282]11.0184,[28
3]11.0310,[284]11.0209,[285]11.0049,[286]11.0058,[287]10.9925,[288]10.9953,[289]10.9889,[290]11.0087,[291]11.0076,[292]11.0155,[293]11.0187,[294]11.0134,[295]11.0027,[296]11.0113,[297]11.0226,[298]11.0135,[299]11.0132,[300]11.0179,[301]11.0132,[302]11.0040,[303]11.0111,[304]11.0117,[305]11.0300,[306]11.0317,[307]11.0368,[308]11.0253,[309]11.0338,[310]11.0326,[311]11.0469,[312]11.0498,[313]11.0498,[314]11.0325,[315]11.0200,[316]11.0153,[317]11.0105,[318]11.0228,[319]11.0146,[320]11.0025,[321]11.0084,[322]11.0054,[323]11.0153,[324]11.0049,[325]10.9975,[326]10.9812,[327]10.9746,[328]10.9668,[329]10.9701,[330]10.9664,[331]10.9554,[332]10.9489,[333]10.9316,[334]10.9181,[335]10.9109,[336]10.9146,[337]10.9119,[338]10.8997,[339]10.9154,[340]10.9220,[341]10.9241,[342]10.9301,[343]10.9354,[344]10.9411,[345]10.9504,[346]10.9523,[347]10.9579,[348]10.9560,[349]10.9746,[350]10.9897,[351]11.0159,[352]11.0344,[353]11.0430,[354]11.0634,[355]11.0793,[356]11.0985,[357]11.1057,[358]11.1176,[359]11.1326,[360]11.1355,[361]11.1344,[362]11.1540,[363]11.1651,[364]11.1840,[365]11.1939,[366]11.2021,[367]11.2109,[368]11.2281,[369]11.2465,[370]11.2526,[371]11.2402,[372]11.2421,[373]11.2436,[374]11.2556,[375]11.2648,[376]11.2761,[377]11.2731,[378]11.2699,[379]11.2750,[380]11.2774,[381]11.2772,[382]11.2819,[383]11.2816,[384]11.2785,[385]11.2783,[386]11.2666,[387]11.2592,[388]11.2456,[389]11.2406,[390]11.2392,[391]11.2405,[392]11.2378,[393]11.2326,[394]11.2516,[395]11.2573,[396]11.2575,[397]11.2519,[398]11.2468,[399]11.2304,[400]11.2316,[401]11.2315,[402]11.2294,[403]11.2272,[404]11.2350,[405]11.2187,[406]11.2287,[407]11.2401,[408]11.2413,[409]11.2399,[410]11.2393,[411]11.2542,[412]11.2622,[413]11.2596,[414]11.2552,[415]11.2618,[416]11.2493,[417]11.2513,[418]11.2497,[419]11.2514,[420]11.2643,[421]11.2546,[422]11.2382,[423]11.2356,[424]11.2347,[425]11.2287,[426]11.2160,[427]11.2117,[428]11.2029,[429]11.1910,[430]11.1920,[431]11.1889,[432]11.1720,[433]11.1716,[434]11.1623,[435]11.1613,[436]11.1601,[437]11.1625,[438]11.1679,[439]11.1637,[440]11.1752,[441]11.1891,[442]11.1824,[443]11.1806,[444]11.1797,[445]11.1857,[446]11.1874,[447]11.1881,[448]11.1875,[449]11.1908,[450]11.1981,[451]11.2011,[452]11.2069,[453]11.1941,[454]11.1931,[455]11.1794,[456]11.1823,[457]11.1879,[458]11.1836,[459]11.1848,[460]11.1714,[461]11.1721,[462]11.1901,[463]11.1993,[464]11.2024,[465]11.2075,[466]11.2036,[467]11.1962,[468]11.1950,[469]11.1839,[470]11.1788,[471]11.1709,[472]11.1669,[473]11.1675,[474]11.1545,[475]11.1440,[476]11.1348,[477]11.1297,[478]11.1230,[479]11.1222,[480]11.1242,[481]11.1197,[482]11.1093,[483]11.1221,[484]11.1339,[485]11.1568,[486]11.1564,[487]11.1523,[488]11.1536,[489]11.1495,[490]11.1518,[491]11.1532,[492]11.1525,[493]11.1500,[494]11.1429,[495]11.1399,[496]11.1400,[497]11.1346,[498]11.1367,[499]11.1381,[500]11.1343,[501]11.1255,[502]11.1283,[503]11.1281,[504]11.1371,[505]11.1340,[506]11.1341,[507]11.1309,[508]11.1326,[509]11.1264,[510]11.1306,[511]11.1279,[512]11.1378,[513]11.1433,[514]11.1392,[515]11.1534,[516]11.1589,[517]11.1626,[518]11.1729,[519]11.1713,[520]11.1594,[521]11.1644,[522]11.1724,[523]11.1764,[524]11.1725,[525]11.1605,[526]11.1477,[527]11.1571,[528]11.1395,[529]11.1302,[530]11.0954,[531]11.0833,[532]11.0813,[533]11.0841,[534]11.0920,[535]11.0842,[536]11.0912,[537]11.0944,[538]11.1016,[539]11.1152,[540]11.1295,[541]11.1257,[542]11.1379,[543]11.1400,[544]11.1324,[545]11.1303,[546]11.1219,[547]11.1243,[548]11.1259,[549]11.1389,[550]11.1365,[551]11.1332,[552]11.1287,[553]11.1268,[554]11.1253,[555]11.1273,[556]11.1
344,[557]11.1401,[558]11.1527,[559]11.1556,[560]11.1531,
Final estimate: PPL = 11.1531 +/- 0.07854

llama_print_timings: load time = 1135.73 ms
llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: prompt eval time = 27666.26 ms / 286720 tokens ( 0.10 ms per token, 10363.53 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 29861.14 ms / 286721 tokens

We see that Q2_K_S achieves the same PPL as the fp16 model (it is even slightly better, but that is simply luck). Without this PR, the best we can do is quantize to Q2_K, which results in a model that is ~10% larger with a perplexity of 11.6984 +/- 0.08250.

This table shows prompt processing (pp512) and token generation (tg128) speed for the Q2_K_S quantized model on the Ryzen 7950X:

| cpu_info | size | threads | test | t/s |
| --- | --- | --- | --- | --- |
| AMD Ryzen 9 7950X 16-Core Processor (znver4) | 1.31 GiB | 16 | pp512 | 371.85 |
| AMD Ryzen 9 7950X 16-Core Processor (znver4) | 1.31 GiB | 2 | tg128 | 28.64 |
| AMD Ryzen 9 7950X 16-Core Processor (znver4) | 1.31 GiB | 4 | tg128 | 40.20 |
| AMD Ryzen 9 7950X 16-Core Processor (znver4) | 1.31 GiB | 8 | tg128 | 39.42 |
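
These numbers come from llama-bench-style runs; something along the following lines should reproduce them (the binary path is an assumption, following the same layout as the quantize and perplexity tools):

# pp512 = 512-token prompt processing, tg128 = 128-token generation, over several thread counts
./o/llama.cpp/llama-bench/llama-bench -m q2ks.gguf -p 512 -n 128 -t 2,4,8,16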

This is of course not as good as a dedicated ternary model implementation such as this one, in terms of both quantized model size and performance. On the other hand, with this PR, model size and performance are within 20% of what is achievable, so it is not a bad workaround.

Collaborator

jart left a comment

This is cool! Also very fast. I love this ternary LLM.

| cpu_info | model_filename | size | test | t/s |
| --- | --- | --- | --- | --- |
| Apple M2 Ultra (+fp16+dotprod) | TriLM_3.9B.Q2_K_S | 1.31 GiB | pp512 | 229.19 |
| Apple M2 Ultra (+fp16+dotprod) | TriLM_3.9B.Q2_K_S | 1.31 GiB | tg16 | 66.60 |
| AMD Ryzen Threadripper PRO 7995WX (znver4) | TriLM_3.9B.Q2_K_S | 1.31 GiB | pp512 | 1072.47 |
| AMD Ryzen Threadripper PRO 7995WX (znver4) | TriLM_3.9B.Q2_K_S | 1.31 GiB | tg16 | 89.29 |
| Intel Core i9-14900K (alderlake) | TriLM_3.9B.Q2_K_S | 1.31 GiB | pp512 | 168.64 |
| Intel Core i9-14900K (alderlake) | TriLM_3.9B.Q2_K_S | 1.31 GiB | tg16 | 50.18 |
| Raspberry Pi 5 Model B Rev 1.0 (+fp16+dotprod) | TriLM_3.9B.Q2_K_S | 1.31 GiB | pp512 | 17.11 |
| Raspberry Pi 5 Model B Rev 1.0 (+fp16+dotprod) | TriLM_3.9B.Q2_K_S | 1.31 GiB | tg16 | 6.72 |

Running o//llama.cpp/perplexity/perplexity -m /fast/TriLM_3.9B.Q2_K_S.gguf -f ~/vendor/wiki.test.raw --temp 0 --chunks 32 gives me a final ppl estimate of 10.8002. For comparison, Meta LLaMA 3.1 8B BF16 gives me 6.9076.

@@ -1950,6 +1950,44 @@ void quantize_row_q2_K_ref(const float * restrict x, block_q2_K * restrict y, in

const float q4scale = 15.f;

// [kawrakow] Detect TriNet
Collaborator

Could you add an // [kawrakow] ... comment to every line that's been changed under llama.cpp/? That makes it easy for me to synchronize sources. Thanks!

Contributor Author

Should be good now.

@@ -422,11 +425,12 @@ int main(int argc, char ** argv) {
}
}

if ((params.ftype == LLAMA_FTYPE_MOSTLY_IQ2_XS || params.ftype == LLAMA_FTYPE_MOSTLY_IQ2_XXS ||
if (!params.ignore_imatrix_rules && imatrix_data.empty() && // [kawrakpow] - be able to ignore imatrix rules
Collaborator

Interesting typo.

Collaborator

jart left a comment

Thank you!

jart merged commit 98eff09 into Mozilla-Ocho:main on Aug 24, 2024
2 checks passed
Collaborator

jart commented Aug 25, 2024

TriLM is now available on Hugging Face as a llamafile: https://huggingface.co/Mozilla/TriLM-llamafile
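
To try it, the standard llamafile workflow applies. A sketch with a hypothetical artifact name (check the repository's file listing for the real filename):

# the .llamafile name below is illustrative
wget https://huggingface.co/Mozilla/TriLM-llamafile/resolve/main/TriLM-3.9B.Q2_K_S.llamafile
chmod +x TriLM-3.9B.Q2_K_S.llamafile
./TriLM-3.9B.Q2_K_S.llamafile --cli -p "Once upon a time" -n 64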
