GitHub / DanielEftekhari / numerically-efficient-softmax-algorithm
An algorithm to compute the softmax layer in neural networks using low floating-point precision arithmetic.
Stars: 0
Forks: 0
Open issues: 0
License: None
Language: Not specified
Size: 361 KB
Created at: over 1 year ago
Updated at: over 1 year ago
Pushed at: over 1 year ago
Last synced at: over 1 year ago
Topics: deep-learning, floating-point-arithmetic, softmax, softmax-layer
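The repository's own implementation is not reproduced on this page. For reference, below is a minimal sketch of the standard max-subtraction trick commonly used to keep softmax numerically stable when logits are stored in low floating-point precision. NumPy, the float16 input dtype, and the function name `stable_softmax` are illustrative assumptions, not the repo's API.

```python
import numpy as np

def stable_softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax via the max-subtraction trick (sketch).

    Subtracting the row-wise maximum keeps every exponent <= 0, so exp()
    cannot overflow even when the inputs are half precision. Accumulation
    is done in float32 to limit rounding error, then cast back.
    """
    x32 = x.astype(np.float32)                          # widen for accumulation
    shifted = x32 - x32.max(axis=-1, keepdims=True)     # exponents now <= 0
    exps = np.exp(shifted)
    return (exps / exps.sum(axis=-1, keepdims=True)).astype(x.dtype)

# Usage: logits stored in half precision; naive exp() would overflow here.
logits = np.array([[10.0, 1000.0, -5.0]], dtype=np.float16)
print(stable_softmax(logits))
```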