Vigogne

French Instruction-following Models
Product Information
Release date: 22 June, 2023
Platform: Desktop

Vigogne Features

The repository contains code for reproducing Stanford Alpaca in French 🇫🇷 using low-rank adaptation (LoRA), provided by 🤗 Hugging Face's PEFT library. In addition to LoRA, the authors use LLM.int8(), provided by bitsandbytes, to quantize pretrained language models (PLMs) to int8. Combining these two techniques makes it possible to fine-tune PLMs on a single consumer GPU such as an RTX 4090. The project builds on LLaMA, Stanford Alpaca, Alpaca-Lora, Cabrita and Hugging Face. The training script has also been adapted to fine-tune further models such as BLOOM and mT5, and the authors share the translated Alpaca dataset along with trained LoRA weights such as vigogne-lora-7b and vigogne-lora-bloom-7b1.
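To make the two techniques concrete, here is a minimal, dependency-free sketch of the ideas involved: the LoRA low-rank update W + (α/r)·B·A with a zero-initialised B, and absmax int8 quantization in the spirit of LLM.int8(). All names here are illustrative; this is not the PEFT or bitsandbytes API, and real fine-tuning would go through those libraries.

```python
# Illustrative pure-Python sketch (not the PEFT / bitsandbytes API):
# (1) LoRA: a frozen weight W is adapted via a low-rank delta B @ A,
#     with B initialised to zero so training starts from the base model.
# (2) Absmax int8 quantization, the basic idea behind quantizing
#     pretrained weights to 8-bit integers.

def matmul(X, Y):
    """Naive matrix product for small illustrative matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_weight(W, A, B, alpha, r):
    """Effective weight W + (alpha / r) * (B @ A)."""
    delta = matmul(B, A)
    s = alpha / r
    return [[W[i][j] + s * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

def quantize_int8(xs):
    """Absmax quantization: scale floats into [-127, 127] integers."""
    scale = max(abs(x) for x in xs) / 127
    return [round(x / scale) for x in xs], scale

def dequantize(qs, scale):
    """Map int8 values back to approximate floats."""
    return [q * scale for q in qs]

# LoRA with rank r = 1 on a 2x2 weight: B starts at zero, so the
# adapted weight equals the frozen base weight before any training.
W = [[1.0, 2.0], [3.0, 4.0]]
A = [[0.5, -0.5]]   # r x d adapter
B = [[0.0], [0.0]]  # d x r adapter, zero-initialised
assert lora_weight(W, A, B, alpha=16, r=1) == W

# The int8 round-trip keeps values close to the originals.
qs, scale = quantize_int8([0.1, -1.5, 0.7])
xs = dequantize(qs, scale)
assert all(abs(a - b) < 0.02 for a, b in zip(xs, [0.1, -1.5, 0.7]))
```

Because only the small A and B matrices receive gradients while W stays frozen (and can be stored in int8), the optimizer state shrinks dramatically, which is what lets a model of this size fit on one consumer GPU.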
