An open API service providing repository metadata for many open source software ecosystems.

GitHub topics: gemm-optimization

JoeruCodes/CUDA-GEMM-kernel

My attempt at making a GEMM kernel...

Language: Cuda - Size: 67.4 KB - Last synced at: 3 days ago - Pushed at: 3 days ago - Stars: 2 - Forks: 0

iVishalr/GEMM

Fast matrix multiplication implementation in the C programming language. The algorithm is similar to what NumPy uses to compute dot products.

Language: C - Size: 12.7 KB - Last synced at: 19 days ago - Pushed at: almost 4 years ago - Stars: 31 - Forks: 4
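For orientation, the operation being optimized in repositories like this one is the dense matrix product. A minimal, unoptimized sketch of a single-precision GEMM, C = alpha*A*B + beta*C over row-major arrays (an illustration, not this repository's code):

```c
#include <stddef.h>

/* Naive single-precision GEMM: C = alpha*A*B + beta*C.
 * A is M x K, B is K x N, C is M x N, all row-major.
 * Baseline only; optimized implementations add blocking,
 * packing, and vectorization on top of this triple loop. */
void sgemm_naive(size_t M, size_t N, size_t K,
                 float alpha, const float *A, const float *B,
                 float beta, float *C)
{
    for (size_t i = 0; i < M; ++i) {
        for (size_t j = 0; j < N; ++j) {
            float acc = 0.0f;
            for (size_t k = 0; k < K; ++k)
                acc += A[i * K + k] * B[k * N + j];
            C[i * N + j] = alpha * acc + beta * C[i * N + j];
        }
    }
}
```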

tpoisonooo/how-to-optimize-gemm

row-major matmul optimization

Language: C++ - Size: 12.5 MB - Last synced at: 6 months ago - Pushed at: over 1 year ago - Stars: 589 - Forks: 79

hpca-uji/ConvLIB

ConvLIB is a library of convolution kernels for multicore processors with ARM (NEON) or RISC-V architectures.

Language: C - Size: 554 KB - Last synced at: 8 months ago - Pushed at: 8 months ago - Stars: 0 - Forks: 1

xziya/gemm-opt

Manually optimize the GEMM (GEneral Matrix Multiply) operation. There is a long way to go.

Language: C++ - Size: 39.1 KB - Last synced at: 5 months ago - Pushed at: over 3 years ago - Stars: 8 - Forks: 0

xylcbd/gemm_base

gemm baseline code.

Language: C++ - Size: 4.88 KB - Last synced at: over 1 year ago - Pushed at: over 7 years ago - Stars: 2 - Forks: 0

mz24cn/gemm_optimization

This repository targets performance optimization of the OpenCL GEMM (sgemm) function. It compares several libraries, clBLAS, CLBlast, MIOpenGemm, Intel MKL (CPU), and cuBLAS (CUDA), across different matrix sizes, hardware vendors, and operating systems. Ready-to-use x86_64 binaries are provided for MSVC, MinGW, and Linux (CentOS).

Language: C - Size: 87.1 MB - Last synced at: over 1 year ago - Pushed at: about 6 years ago - Stars: 10 - Forks: 5
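The routine these libraries are compared on is the BLAS sgemm, which computes C = alpha*A*B + beta*C in single precision. A minimal host-side call through a CBLAS-style interface, as exposed by Intel MKL or OpenBLAS (the other libraries use analogous entry points), might look like this; it is an illustration, not code from the repository:

```c
#include <stdlib.h>
#include <cblas.h>   /* CBLAS interface, e.g. from Intel MKL or OpenBLAS */

int main(void)
{
    const int M = 1024, N = 1024, K = 1024;
    float *A = calloc((size_t)M * K, sizeof *A);
    float *B = calloc((size_t)K * N, sizeof *B);
    float *C = calloc((size_t)M * N, sizeof *C);

    /* C = 1.0 * A * B + 0.0 * C, row-major, no transposes */
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                M, N, K,
                1.0f, A, K,   /* lda = K for row-major A (M x K) */
                      B, N,   /* ldb = N for row-major B (K x N) */
                0.0f, C, N);  /* ldc = N for row-major C (M x N) */

    free(A); free(B); free(C);
    return 0;
}
```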

hwchen2017/Optimize_DGEMM_on_Intel_CPU

Implementations of the DGEMM algorithm using different tricks to optimize performance.

Language: C - Size: 59.6 KB - Last synced at: almost 2 years ago - Pushed at: over 2 years ago - Stars: 3 - Forks: 0
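One of the standard CPU tricks such repositories apply is loop blocking (tiling), which keeps small tiles of A, B, and C resident in cache so each loaded element is reused many times. A simplified double-precision sketch of that idea (an illustration, not this repository's actual code):

```c
#include <stddef.h>

#define BS 64  /* block (tile) size; tuned per cache level in practice */

/* Cache-blocked DGEMM update: C += A*B with A (M x K), B (K x N),
 * C (M x N), all row-major. C must be initialized by the caller. */
void dgemm_blocked(size_t M, size_t N, size_t K,
                   const double *A, const double *B, double *C)
{
    for (size_t ii = 0; ii < M; ii += BS)
        for (size_t kk = 0; kk < K; kk += BS)
            for (size_t jj = 0; jj < N; jj += BS) {
                size_t i_end = ii + BS < M ? ii + BS : M;
                size_t k_end = kk + BS < K ? kk + BS : K;
                size_t j_end = jj + BS < N ? jj + BS : N;
                /* ikj ordering inside the tile gives unit-stride
                 * access to B and C in the innermost loop. */
                for (size_t i = ii; i < i_end; ++i)
                    for (size_t k = kk; k < k_end; ++k) {
                        double a = A[i * K + k];
                        for (size_t j = jj; j < j_end; ++j)
                            C[i * N + j] += a * B[k * N + j];
                    }
            }
}
```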

hwchen2017/Optimize_SGEMM_on_Nvidia_GPU

Implementations of the SGEMM algorithm on an Nvidia GPU, using different tricks to optimize performance.

Language: Cuda - Size: 204 KB - Last synced at: almost 2 years ago - Pushed at: almost 2 years ago - Stars: 1 - Forks: 0

fspiga/phiGEMM 📦

phiGEMM: CPU-GPU hybrid matrix-matrix multiplication library

Language: C - Size: 786 KB - Last synced at: over 1 year ago - Pushed at: over 10 years ago - Stars: 6 - Forks: 1

digital-nomad-cheng/matmul_cuda_kernel_tvm

Automatically generate optimized MatMul CUDA kernels using the TVM auto-scheduler.

Language: Jupyter Notebook - Size: 48.8 KB - Last synced at: about 2 years ago - Pushed at: about 2 years ago - Stars: 0 - Forks: 0

marina-neseem/Accera-High-Perf-DL

Case studies for using Accera, the open-source cross-platform compiler from Microsoft Research, to create high-performance deep learning computations (e.g., GEMM, convolution).

Language: Python - Size: 37.1 KB - Last synced at: about 2 years ago - Pushed at: over 2 years ago - Stars: 2 - Forks: 0

scocoyash/Convolution-To-Gemm

My experiments with convolution

Language: C++ - Size: 23.4 KB - Last synced at: about 2 years ago - Pushed at: almost 5 years ago - Stars: 2 - Forks: 1
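The usual way convolution is mapped onto GEMM is im2col: each receptive field is unrolled into a row of a matrix, so the convolution becomes a matrix product with the (reshaped) filters. A simplified single-channel, stride-1, no-padding sketch of the unrolling step (an illustration under those assumptions, not this repository's code):

```c
#include <stddef.h>

/* im2col for a single-channel H x W image with a kh x kw kernel,
 * stride 1, no padding. Each output pixel gets one row of 'cols'
 * holding its kh*kw receptive field, so the convolution reduces to
 * multiplying cols [(H-kh+1)*(W-kw+1) x kh*kw] by the filter vector. */
void im2col(const float *img, size_t H, size_t W,
            size_t kh, size_t kw, float *cols)
{
    size_t out_h = H - kh + 1, out_w = W - kw + 1;
    for (size_t oy = 0; oy < out_h; ++oy)
        for (size_t ox = 0; ox < out_w; ++ox) {
            float *row = cols + (oy * out_w + ox) * kh * kw;
            for (size_t ky = 0; ky < kh; ++ky)
                for (size_t kx = 0; kx < kw; ++kx)
                    row[ky * kw + kx] = img[(oy + ky) * W + (ox + kx)];
        }
}
```

With multiple filters (output channels), the filter side becomes a matrix with one column per channel and the product is a full GEMM, which is why convolution performance work so often reduces to GEMM optimization.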