Fine-tune Quantized Language model using LoRA with peft & transformers on T4 GPU

Anonymous $Pi6HN8Q0B-

Thu Dec 21, 3:22pm UTC
https://dipankarmedh1.medium.com/fine-tune-quantized-language-model-using-lora-with-peft-transformers-on-t4-gpu-287da2d5d7f1
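The linked post covers fine-tuning a quantized language model with LoRA via the peft and transformers libraries on a single T4 GPU. As a rough orientation only, the sketch below shows the typical setup such a workflow uses: load the base model in 4-bit with bitsandbytes, prepare it for k-bit training, and wrap it with LoRA adapters. The model id, LoRA hyperparameters, and target module names here are illustrative assumptions, not code taken from the article.

```python
# Minimal sketch (not the article's exact code) of loading a causal LM in 4-bit
# and attaching LoRA adapters with peft + transformers.
# Requires: transformers, peft, bitsandbytes, accelerate, torch.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "facebook/opt-1.3b"  # placeholder model, small enough for a 16 GB T4

# 4-bit NF4 quantization; fp16 compute because the T4 has no bfloat16 support
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Make the frozen 4-bit model safe to backpropagate through (casts norms,
# enables input grads for gradient checkpointing)
model = prepare_model_for_kbit_training(model)

# LoRA adapters: only these small low-rank matrices receive gradient updates
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # module names depend on the architecture
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

From here the wrapped model can be trained with a standard transformers Trainer loop; the quantized base weights stay frozen and only the adapter weights update, which is what keeps memory use within a T4's limits.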