Update demo-leaderboard/gpt2-demo/results_2023-11-21T18-10-08.json
demo-leaderboard/gpt2-demo/results_2023-11-21T18-10-08.json
CHANGED
@@ -61,9 +61,5 @@
     "model_dtype": "torch.float16",
     "model_name": "demo-leaderboard/gpt2-demo",
     "model_sha": "ac3299b02780836378b9e1e68c6eead546e89f90"
-  },
-  "git_hash": "a3e56afe",
-  "pretty_env_info": "PyTorch version: 2.1.0+cu121\nIs debug build: False\nCUDA used to build PyTorch: 12.1\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.3 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version: 14.0.0-1ubuntu1.1\nCMake version: version 3.27.9\nLibc version: glibc-2.35\n\nPython version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)\nPython platform: Linux-6.1.58+-x86_64-with-glibc2.35\nIs CUDA available: True\nCUDA runtime version: 12.2.140\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: GPU 0: Tesla V100-SXM2-16GB\nNvidia driver version: 535.104.05\ncuDNN version: Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 2\nOn-line CPU(s) list: 0,1\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) CPU @ 2.00GHz\nCPU family: 6\nModel: 85\nThread(s) per core: 2\nCore(s) per socket: 1\nSocket(s): 1\nStepping: 3\nBogoMIPS: 4000.32\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 32 KiB (1 instance)\nL1i cache: 32 KiB (1 instance)\nL2 cache: 1 MiB (1 instance)\nL3 cache: 38.5 MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0,1\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Mitigation; PTE Inversion\nVulnerability Mds: Vulnerable; SMT Host state unknown\nVulnerability Meltdown: Vulnerable\nVulnerability Mmio stale data: Vulnerable\nVulnerability Retbleed: Vulnerable\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Vulnerable\nVulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers\nVulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Vulnerable\n\nVersions of relevant libraries:\n[pip3] numpy==1.25.2\n[pip3] torch==2.1.0+cu121\n[pip3] torchaudio==2.1.0+cu121\n[pip3] torchdata==0.7.0\n[pip3] torchsummary==1.5.1\n[pip3] torchtext==0.16.0\n[pip3] torchvision==0.16.0+cu121\n[pip3] triton==2.1.0\n[conda] Could not collect",
-  "transformers_version": "4.38.2",
-  "upper_git_hash": null
+  }
 }