
Max-SDU

Members
  • Posts: 1
  • Joined

  • Last visited

Max-SDU's Achievements

Noob (1/14)

Reputation: 0

  1. My GPU is a P104-100. When I try to generate a picture, the following error appears:

     NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
         query     : shape=(1, 8040, 8, 40) (torch.float16)
         key       : shape=(1, 8040, 8, 40) (torch.float16)
         value     : shape=(1, 8040, 8, 40) (torch.float16)
         attn_bias : <class 'NoneType'>
         p         : 0.0
     `decoderF` is not supported because:
         xFormers wasn't build with CUDA support
         requires device with capability > (7, 0) but your GPU has capability (6, 1) (too old)
         attn_bias type is <class 'NoneType'>
         operator wasn't built - see `python -m xformers.info` for more info
     `flshattF` is not supported because:
         xFormers wasn't build with CUDA support
         requires device with capability > (8, 0) but your GPU has capability (6, 1) (too old)
         operator wasn't built - see `python -m xformers.info` for more info
     `tritonflashattF` is not supported because:
         xFormers wasn't build with CUDA support
         requires device with capability > (8, 0) but your GPU has capability (6, 1) (too old)
         operator wasn't built - see `python -m xformers.info` for more info
         triton is not available
         requires GPU with sm80 minimum compute capacity, e.g., A100/H100/L4
         Only work on pre-MLIR triton for now
     `cutlassF` is not supported because:
         xFormers wasn't build with CUDA support
         operator wasn't built - see `python -m xformers.info` for more info
     `smallkF` is not supported because:
         max(query.shape[-1] != value.shape[-1]) > 32
         xFormers wasn't build with CUDA support
         dtype=torch.float16 (supported: {torch.float32})
         operator wasn't built - see `python -m xformers.info` for more info
         unsupported embed per head: 40

     Time taken: 3.0 sec.

     How can I solve this?
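
A minimal diagnostic sketch in Python, assuming PyTorch is installed. It only confirms the compute capability the error reports, (6, 1), against the thresholds quoted in the message, and points to the `python -m xformers.info` check that the message itself suggests:

import torch

# Confirm the device and compute capability that the xFormers error is complaining about.
if torch.cuda.is_available():
    name = torch.cuda.get_device_name(0)
    major, minor = torch.cuda.get_device_capability(0)
    print(f"GPU: {name}, compute capability: ({major}, {minor})")
    # Per the error text: `decoderF` requires capability > (7, 0) and the flash-attention
    # operators require > (8, 0), so a (6, 1) Pascal card cannot run them regardless of
    # how the xFormers wheel was built.
else:
    print("No CUDA device is visible to PyTorch")

# The error also says the operators "weren't built"; as the message suggests, run
#   python -m xformers.info
# to see whether the installed xFormers wheel was built with CUDA support at all.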