AI News Hub · by Eigenvector

Peft 0.18.1 crashing when fine-tuning

discuss.huggingface.co · by kegintheai · April 3, 2026 · 1 min read

Hi, peft version 0.18.1 is crashing when attempting to fine-tune google/gemma-4-E2B. The error message is shown below. I checked, and 0.18.1 is the latest version. Will there be an update soon, or is there a workaround? I'd appreciate any help. Thanks!

ValueError: Target module Gemma4ClippableLinear(
  (linear): Linear(in_features=768, out_features=768, bias=False)
) is not supported. Currently, only the following modules are supported:
`torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv1d`, `torch.nn.Conv2d`,
`torch.nn.Conv3d`, `transformers.pytorch_utils.Conv1D`, `torch.nn.MultiheadAttention`.
