An open API service providing repository metadata for many open source software ecosystems.

GitHub / omarmnfy / Finetune-Llama3-using-Direct-Preference-Optimization

This repository contains the Jupyter Notebooks, scripts, and datasets used in our finetuning experiments. The project focuses on Direct Preference Optimization (DPO), a method that simplifies the traditional RLHF finetuning pipeline: instead of training a separate reward model, it uses the policy model itself as an implicit feedback mechanism, optimizing directly on pairs of preferred and rejected responses.
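To make the idea concrete, here is a minimal, framework-agnostic sketch of the DPO objective for a single preference pair. The function name and arguments are illustrative (not taken from this repository's code): each argument is the summed log-probability of a response under the trainable policy or the frozen reference model, and `beta` controls how far the policy may drift from the reference.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one (chosen, rejected) preference pair.

    The policy acts as its own implicit reward model: the reward of a
    response is beta times its log-probability ratio against the
    frozen reference model.
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Bradley-Terry objective: negative log-sigmoid of the reward margin.
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference exactly, the margin is zero and the loss is log 2; as the policy assigns relatively more probability to the chosen response than the reference does, the margin grows and the loss falls. In practice, libraries such as Hugging Face TRL batch this computation over tensors of sequence log-probabilities.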

JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/omarmnfy%2FFinetune-Llama3-using-Direct-Preference-Optimization

Stars: 0
Forks: 0
Open issues: 0

License: apache-2.0
Language: Jupyter Notebook
Size: 889 KB
Dependencies parsed at: Pending

Created at: 10 months ago
Updated at: 10 months ago
Pushed at: 10 months ago
Last synced at: 10 months ago

Topics: dpo, finetuning, rlhf, stf
