GitHub topics: adaptive-optimizer
muooon/EmoNavi
An emotion-driven optimizer that feels loss and navigates accordingly.
Language: Python - Size: 13.3 MB - Last synced at: about 9 hours ago - Pushed at: about 10 hours ago - Stars: 10 - Forks: 0
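The tagline doesn't spell out the mechanism, so as a rough illustration only, here is a minimal sketch of one way a loss-aware optimizer can "navigate": scale its step size against an exponential moving average of recent loss. The class name, hyperparameters, and update rule are all assumptions for illustration, not EmoNavi's actual algorithm.

    import torch

    class LossAwareSGD(torch.optim.Optimizer):
        """Hypothetical sketch of a loss-aware optimizer (not EmoNavi's code):
        plain SGD whose step size shrinks when the current loss runs above
        its own exponential moving average."""

        def __init__(self, params, lr=1e-3, ema_beta=0.9):
            super().__init__(params, dict(lr=lr, ema_beta=ema_beta))
            self.loss_ema = None  # smoothed recent loss ("mood")

        @torch.no_grad()
        def step(self, loss):
            beta = self.defaults["ema_beta"]
            val = float(loss)
            self.loss_ema = val if self.loss_ema is None else \
                beta * self.loss_ema + (1 - beta) * val
            # Cautious half-steps while loss runs above trend, full steps otherwise.
            scale = 0.5 if val > self.loss_ema else 1.0
            for group in self.param_groups:
                for p in group["params"]:
                    if p.grad is not None:
                        p.add_(p.grad, alpha=-group["lr"] * scale)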

shamsbasir/investigating_mitigating_failure_modes_in_pinns
This repository contains the code and models for our paper "Investigating and Mitigating Failure Modes in Physics-informed Neural Networks (PINNs)".
Language: Jupyter Notebook - Size: 4.2 MB - Last synced at: 4 months ago - Pushed at: over 1 year ago - Stars: 18 - Forks: 5
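For context on what can fail: a PINN minimizes a physics residual plus boundary/initial-condition mismatch, and many failure modes stem from imbalance between those terms. A generic sketch of the composite loss for a 1D Poisson problem u''(x) = f(x) (the weighting w_bc and all names are illustrative assumptions, not this repository's code):

    import torch

    def pinn_loss(model, x_interior, x_boundary, u_boundary, f, w_bc=1.0):
        """Generic PINN loss for u''(x) = f(x): PDE residual + boundary mismatch.
        The relative weight w_bc between the two terms is a common failure point."""
        x = x_interior.clone().requires_grad_(True)
        u = model(x)
        du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
        residual = ((d2u - f(x)) ** 2).mean()                 # physics term
        bc = ((model(x_boundary) - u_boundary) ** 2).mean()   # boundary term
        return residual + w_bc * bc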

JRC1995/DemonRangerOptimizer
Quasi-Hyperbolic Rectified DEMON Adam/AMSGrad with AdaMod, Gradient Centralization, Lookahead, iterative averaging, and decoupled weight decay.
Language: Python - Size: 166 KB - Last synced at: about 2 years ago - Pushed at: almost 5 years ago - Stars: 23 - Forks: 6
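Among the stacked techniques, DEMON (Decaying Momentum) is the name-giving one: it decays the momentum parameter over training on a fixed schedule. A sketch of just that schedule, using the formula from the DEMON paper (combining it with the other tricks is this repository's contribution):

    def demon_beta(t, total_steps, beta_init=0.9):
        """DEMON schedule: momentum decays from beta_init at t = 0 down to 0 at
        t = total_steps, so early steps use heavy momentum and late steps little."""
        frac = 1.0 - t / total_steps
        return beta_init * frac / ((1.0 - beta_init) + beta_init * frac)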

AnirudhMaiya/Tom
A novel optimizer that leverages the trend observed in the gradients (https://arxiv.org/pdf/2109.03820.pdf).
Language: Python - Size: 6.84 KB - Last synced at: over 2 years ago - Pushed at: almost 3 years ago - Stars: 0 - Forks: 0
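Per the linked paper, Tom ("Trend over Momentum") adds an explicit trend component to the smoothed gradient, in the spirit of Holt's double exponential smoothing. A hedged sketch of that core idea (names and the final combination are illustrative, not the paper's exact update):

    def trend_smoothed_gradient(grads, alpha=0.9, gamma=0.9):
        """Holt-style double exponential smoothing of a gradient sequence:
        `level` tracks the smoothed gradient, `trend` tracks its drift, and
        their sum anticipates where the gradient is heading next."""
        level, trend = grads[0], 0.0
        for g in grads[1:]:
            prev_level = level
            level = alpha * g + (1 - alpha) * (level + trend)
            trend = gamma * (level - prev_level) + (1 - gamma) * trend
        return level + trend  # trend-aware estimate, used in place of plain momentum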

dmivilensky/Adaptive-relatively-smooth-optimization
We introduce the new concept of (α, L, δ)-relative smoothness (see https://arxiv.org/pdf/2107.05765.pdf), which covers both relative smoothness and relative Lipschitz continuity. For the corresponding class of problems, we propose adaptive and universal methods with optimal convergence-rate estimates.
Language: Jupyter Notebook - Size: 2.52 MB - Last synced at: over 2 years ago - Pushed at: about 4 years ago - Stars: 0 - Forks: 1
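For orientation, the classical relative smoothness condition that the (α, L, δ) notion generalizes replaces the usual squared-norm bound with a Bregman divergence V of a prox function h (this is the standard definition; see the linked paper for the exact (α, L, δ) form):

    % L-relative smoothness of f with respect to a prox function h, where
    % V(y, x) = h(y) - h(x) - <grad h(x), y - x> is the Bregman divergence:
    f(y) \le f(x) + \langle \nabla f(x),\, y - x \rangle + L\, V(y, x)
    \qquad \text{for all } x, y.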

sverdoot/optimizer-SUG-torch
An adaptive stochastic gradient method based on the universal gradient method. The universal method adjusts the Lipschitz constant of the gradient at each step so that the loss function is majorized by a quadratic upper bound.
Language: Jupyter Notebook - Size: 5.17 MB - Last synced at: over 2 years ago - Pushed at: almost 6 years ago - Stars: 0 - Forks: 0
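Concretely, such a method backtracks on its running Lipschitz estimate L until the quadratic majorant holds at the trial point, then takes a gradient step of size 1/L. A minimal sketch under those assumptions (function names are illustrative, not this repository's API):

    import numpy as np

    def adaptive_gradient_step(f, grad_f, x, L=1.0):
        """One step of a universal/adaptive gradient method: double L until
        f is majorized by its quadratic model at the trial point, then step."""
        fx, g = f(x), grad_f(x)
        while True:
            x_new = x - g / L
            # With x_new = x - g/L, the quadratic model evaluates to
            # f(x) - ||g||^2 / (2L); accept the step once f stays below it.
            if f(x_new) <= fx - np.dot(g, g) / (2.0 * L):
                return x_new, L / 2.0  # relax L before the next step
            L *= 2.0  # majorant violated: the estimate was too optimistic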
