First, set up the base environment (Visual Studio 2022 Community, needed for compiling):

https://visualstudio.microsoft.com/zh-hans/thank-you-downloading-visual-studio/?sku=Community&channel=Release&version=VS2022&source=VSLandingPage&cid=2030&passive=false


1. Download Python 3.11.9 (the win amd64 zip package):

https://mirrors4.tuna.tsinghua.edu.cn/python/3.11.9/?C=N&O=A

2. Download the Triton wheel:
https://github.com/woct0rdho/triton-windows/releases/download/v3.2.0-windows.post10/triton-3.2.0-cp311-cp311-win_amd64.whl
3. Install PyTorch 2.4.
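The original gives no command for this step; a minimal sketch, assuming a CUDA 12.1 build of PyTorch (swap `cu121` for the index matching your CUDA version):

```shell
# Assumption: CUDA 12.1 runtime; change cu121 to match your driver/toolkit
python.exe -m pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu121
```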
4. Install the wheel:
python.exe -m pip install triton-3.2.0-cp311-cp311-win_amd64.whl
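Before installing, you can sanity-check that the wheel's filename tags (`cp311`/`win_amd64`) match your interpreter; a small sketch, not part of the original guide:

```python
import sys

def wheel_tags(wheel_name: str) -> tuple:
    """Return (python_tag, abi_tag, platform_tag) from a wheel filename."""
    stem = wheel_name[: -len(".whl")]
    # Wheel filenames follow name-version[-build]-pytag-abitag-plattag
    py_tag, abi_tag, plat_tag = stem.split("-")[-3:]
    return py_tag, abi_tag, plat_tag

def matches_this_interpreter(wheel_name: str) -> bool:
    """True if the wheel's Python and platform tags fit this interpreter."""
    py_tag, _, plat_tag = wheel_tags(wheel_name)
    want = f"cp{sys.version_info.major}{sys.version_info.minor}"
    return py_tag == want and (plat_tag != "win_amd64" or sys.platform == "win32")

print(wheel_tags("triton-3.2.0-cp311-cp311-win_amd64.whl"))
# → ('cp311', 'cp311', 'win_amd64')
```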
Confirm Triton works:
python.exe -c "import triton; import torch; print('Triton available:', triton.runtime.driver.active.get_current_device() is not None)"
5. Build and install SageAttention2 (this compile step needs the Visual Studio 2022 build tools linked above):
cd SageAttention2
python.exe -m pip install -e . --no-build-isolation
6. Confirm the installation succeeded:
python.exe -c "import torch; import sageattention; print('SageAttention operator check passed!')"
7. ComfyUI-UniversalSmartVAE node repo: https://github.com/uczensokratesa/ComfyUI-UniversalSmartVAE.git
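The node above installs the standard ComfyUI way; a sketch, assuming your install lives in a `ComfyUI/` folder:

```shell
# Standard ComfyUI custom-node install: clone into custom_nodes, then restart ComfyUI
cd ComfyUI/custom_nodes
git clone https://github.com/uczensokratesa/ComfyUI-UniversalSmartVAE.git
```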


If Forge UI cannot use Nunchaku (the environment fails with an error that the kernel does not support your GPU), patch the code as follows.

Edit /backend/nn/svdq.py and add the lines shown below; everything else stays unchanged. The added parts are marked with comments.

class SVDQFluxTransformer2DModel(nn.Module):
    """https://github.com/nunchaku-tech/ComfyUI-nunchaku/blob/v1.0.0/wrappers/flux.py"""

    def __init__(self, config: dict):
        super().__init__()
        model = NunchakuFluxTransformer2dModel.from_pretrained(
            config.pop("filename"),
            offload=shared.opts.svdq_cpu_offload,
            torch_dtype=torch.float16,  # added: Turing GPUs only support fp16 precision
        )
        model.set_attention_impl("nunchaku-fp16")  # added: Turing GPUs only support fp16 attention
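The patch hard-codes fp16 because it targets Turing cards (compute capability 7.5, e.g. RTX 20xx). A hypothetical helper sketching that decision; the capability pair would come from `torch.cuda.get_device_capability(0)`, and the non-Turing defaults here are assumptions, not Nunchaku's documented values:

```python
def turing_settings(major: int, minor: int) -> dict:
    """Pick Nunchaku dtype/attention settings from CUDA compute capability.

    Turing (sm_75) has no bf16 support, so it needs fp16 weights and the
    'nunchaku-fp16' attention implementation; newer GPUs keep defaults.
    """
    is_turing = (major, minor) == (7, 5)
    return {
        "torch_dtype": "float16" if is_turing else "bfloat16",  # assumed default
        "attention_impl": "nunchaku-fp16" if is_turing else None,  # None = library default
    }

print(turing_settings(7, 5))
# → {'torch_dtype': 'float16', 'attention_impl': 'nunchaku-fp16'}
```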


For ComfyUI, see this issue instead:

https://github.com/nunchaku-ai/nunchaku/issues/273

Modified script, flux.1-dev-turing.py:

import torch
from diffusers import FluxPipeline

from nunchaku import NunchakuFluxTransformer2dModel
from nunchaku.utils import get_precision

precision = get_precision()  # auto-detects whether your GPU needs 'int4' or 'fp4' precision
transformer = NunchakuFluxTransformer2dModel.from_pretrained(
    f"nunchaku-tech/nunchaku-flux.1-dev/svdq-{precision}_r32-flux.1-dev.safetensors",
    offload=True,
    torch_dtype=torch.float16,  # Turing GPUs only support fp16 precision
)  # set offload to False if you want to disable offloading
transformer.set_attention_impl("nunchaku-fp16")  # Turing GPUs only support fp16 attention
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.float16
)  # no need to set the device here
pipeline.enable_sequential_cpu_offload()  # diffusers' offloading
image = pipeline("A cat holding a sign that says hello world", num_inference_steps=50, guidance_scale=3.5).images[0]
image.save(f"flux.1-dev-{precision}.png")




Note: wheels can also be downloaded from here:

https://github.com/woct0rdho/SageAttention 

snw35/sageattention-wheel (GitHub)


