
[Bug]: UserWarning: NVIDIA GeForce RTX 5070 Ti with CUDA capability sm_120 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90. #16884

fireYtail opened this issue Mar 8, 2025 · 4 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

fireYtail commented Mar 8, 2025

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

As suggested, I started a fresh install from the pre-release ZIP file, then ran update.bat, then run.bat, which triggers the following:

Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting torch==2.1.2
  Downloading https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp310-cp310-win_amd64.whl (2473.9 MB)

But then it fails to launch. There is a way to fix this, but the user has to follow these steps manually:

  1. Navigate to the system\python\ directory and open a command prompt in that location.
  2. Run `python -m pip uninstall torch torchvision`
  3. Run `python -m pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128` and wait for a very long download of over 3 GB.
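For copy-pasting, the same workaround as two commands, run from a prompt opened in system\python\ (the `-y` flag just skips the uninstall confirmation; which torch build the nightly cu128 index serves may change over time):

```shell
python -m pip uninstall -y torch torchvision
python -m pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128
```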

Steps to reproduce the problem

Go through the normal NVIDIA GPU installation process and try to launch 1111.

What should have happened?

The run.bat should have detected the incompatibility and installed from https://download.pytorch.org/whl/nightly/cu128 rather than https://download.pytorch.org/whl/cu121. The user shouldn't have to sit through a download of over 2 GB only to then manually reinstall with another download of over 3 GB. This workaround is far from an acceptable solution, and the developers' assumption that all users have fast internet connections is NOT universally true. A lot of people have really slow connections and have to make do with them, and a lot of people don't know how to perform the manual reinstall. The average 1111 user shouldn't be expected to fix this entirely on their own.
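A minimal sketch of how such a check could work (this is not webui code; `wheel_supports_gpu` and its logic are illustrative assumptions): PyTorch reports both the GPU's compute capability (`torch.cuda.get_device_capability`) and the architectures the installed wheel was compiled for (`torch.cuda.get_arch_list`), so the launcher could compare the two before attempting any model load.

```python
def wheel_supports_gpu(capability, arch_list):
    """Return True if a torch wheel built for `arch_list` (strings such as
    'sm_90' or 'compute_90', as returned by torch.cuda.get_arch_list())
    can run on a GPU with the given (major, minor) compute capability."""
    sm = capability[0] * 10 + capability[1]
    for arch in arch_list:
        kind, _, num = arch.partition("_")
        if kind == "sm" and int(num) == sm:
            return True   # native kernels compiled for this exact architecture
        if kind == "compute" and int(num) <= sm:
            return True   # PTX embedded; the driver can JIT it for newer GPUs
    return False

if __name__ == "__main__":
    try:
        import torch
    except ImportError:
        torch = None
    if torch is not None and torch.cuda.is_available():
        cap = torch.cuda.get_device_capability(0)  # e.g. (12, 0) for sm_120
        if not wheel_supports_gpu(cap, torch.cuda.get_arch_list()):
            print("Installed torch wheel has no kernels for this GPU; "
                  "a build from a newer CUDA index (e.g. cu128) is needed.")
```

With the sm_50..sm_90 list from the warning above and a capability of (12, 0), this returns False, which is exactly the situation run.bat could catch before downloading the wrong 2 GB wheel.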

What browsers do you use to access the UI ?

No response

Sysinfo

{
    "Platform": "Windows-10-10.0.19045-SP0",
    "Python": "3.10.6",
    "Version": "v1.10.1",
    "Commit": "82a973c04367123ae98bd9abdf80d9eda9b910e2",
    "Git status": "On branch master\nYour branch is up to date with 'origin/master'.\n\nnothing to commit, working tree clean",
    "Script path": "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui",
    "Data path": "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui",
    "Extensions dir": "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui\\extensions",
    "Checksum": "1e53c7437e6197621e0c5e35c395e40c93138da725fc3be3c0e98cda1453a437",
    "Commandline": [
        "launch.py"
    ],
    "Torch env info": {
        "torch_version": "2.1.2+cu121",
        "is_debug_build": "False",
        "cuda_compiled_version": "12.1",
        "gcc_version": null,
        "clang_version": null,
        "cmake_version": null,
        "os": "Microsoft Windows 10 Pro",
        "libc_version": "N/A",
        "python_version": "3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] (64-bit runtime)",
        "python_platform": "Windows-10-10.0.19045-SP0",
        "is_cuda_available": "True",
        "cuda_runtime_version": null,
        "cuda_module_loading": "LAZY",
        "nvidia_driver_version": "572.70",
        "nvidia_gpu_models": "GPU 0: NVIDIA GeForce RTX 5070 Ti",
        "cudnn_version": null,
        "pip_version": "pip3",
        "pip_packages": [
            "numpy==1.26.2",
            "open-clip-torch==2.20.0",
            "pytorch-lightning==1.9.4",
            "torch==2.1.2+cu121",
            "torchdiffeq==0.2.3",
            "torchmetrics==1.6.2",
            "torchsde==0.2.6",
            "torchvision==0.16.2+cu121"
        ],
        "conda_packages": null,
        "hip_compiled_version": "N/A",
        "hip_runtime_version": "N/A",
        "miopen_runtime_version": "N/A",
        "caching_allocator_config": "",
        "is_xnnpack_available": "True",
        "cpu_info": [
            "Architecture=9",
            "CurrentClockSpeed=3901",
            "DeviceID=CPU0",
            "Family=107",
            "L2CacheSize=4096",
            "L2CacheSpeed=",
            "Manufacturer=AuthenticAMD",
            "MaxClockSpeed=3901",
            "Name=AMD Ryzen 7 3800X 8-Core Processor             ",
            "ProcessorType=3",
            "Revision=28928"
        ]
    },
    "Exceptions": [
        {
            "exception": "CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\n",
            "traceback": [
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui\\modules\\sd_models.py, line 693, get_sd_model",
                    "load_model()"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui\\modules\\sd_models.py, line 845, load_model",
                    "load_model_weights(sd_model, checkpoint_info, state_dict, timer)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui\\modules\\sd_models.py, line 440, load_model_weights",
                    "model.load_state_dict(state_dict, strict=False)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui\\modules\\sd_disable_initialization.py, line 223, <lambda>",
                    "module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui\\modules\\sd_disable_initialization.py, line 221, load_state_dict",
                    "original(module, state_dict, strict=strict)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\system\\python\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 2138, load_state_dict",
                    "load(self, state_dict)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\system\\python\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 2126, load",
                    "load(child, child_state_dict, child_prefix)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\system\\python\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 2126, load",
                    "load(child, child_state_dict, child_prefix)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\system\\python\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 2126, load",
                    "load(child, child_state_dict, child_prefix)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\system\\python\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 2126, load",
                    "load(child, child_state_dict, child_prefix)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\system\\python\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 2120, load",
                    "module._load_from_state_dict("
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui\\modules\\sd_disable_initialization.py, line 225, <lambda>",
                    "linear_load_from_state_dict = self.replace(torch.nn.Linear, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(linear_load_from_state_dict, *args, **kwargs))"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui\\modules\\sd_disable_initialization.py, line 191, load_from_state_dict",
                    "module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\system\\python\\lib\\site-packages\\torch\\_meta_registrations.py, line 4516, zeros_like",
                    "res.fill_(0)"
                ]
            ]
        },
        {
            "exception": "CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\n",
            "traceback": [
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui\\modules\\sd_models.py, line 693, get_sd_model",
                    "load_model()"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui\\modules\\sd_models.py, line 845, load_model",
                    "load_model_weights(sd_model, checkpoint_info, state_dict, timer)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui\\modules\\sd_models.py, line 440, load_model_weights",
                    "model.load_state_dict(state_dict, strict=False)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui\\modules\\sd_disable_initialization.py, line 223, <lambda>",
                    "module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui\\modules\\sd_disable_initialization.py, line 221, load_state_dict",
                    "original(module, state_dict, strict=strict)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\system\\python\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 2138, load_state_dict",
                    "load(self, state_dict)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\system\\python\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 2126, load",
                    "load(child, child_state_dict, child_prefix)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\system\\python\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 2126, load",
                    "load(child, child_state_dict, child_prefix)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\system\\python\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 2126, load",
                    "load(child, child_state_dict, child_prefix)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\system\\python\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 2126, load",
                    "load(child, child_state_dict, child_prefix)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\system\\python\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 2120, load",
                    "module._load_from_state_dict("
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui\\modules\\sd_disable_initialization.py, line 225, <lambda>",
                    "linear_load_from_state_dict = self.replace(torch.nn.Linear, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(linear_load_from_state_dict, *args, **kwargs))"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\webui\\modules\\sd_disable_initialization.py, line 191, load_from_state_dict",
                    "module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)"
                ],
                [
                    "A:\\Sin Sincronización\\Chrome\\sd.webui\\system\\python\\lib\\site-packages\\torch\\_meta_registrations.py, line 4516, zeros_like",
                    "res.fill_(0)"
                ]
            ]
        }
    ],
    "CPU": {
        "model": "AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD",
        "count logical": 16,
        "count physical": 8
    },
    "RAM": {
        "total": "64GB",
        "used": "15GB",
        "free": "49GB"
    },
    "Extensions": [],
    "Inactive extensions": [],
    "Environment": {
        "GRADIO_ANALYTICS_ENABLED": "False"
    },
    "Config": {
        "ldsr_steps": 100,
        "ldsr_cached": false,
        "SCUNET_tile": 256,
        "SCUNET_tile_overlap": 8,
        "SWIN_tile": 192,
        "SWIN_tile_overlap": 8,
        "SWIN_torch_compile": false,
        "hypertile_enable_unet": false,
        "hypertile_enable_unet_secondpass": false,
        "hypertile_max_depth_unet": 3,
        "hypertile_max_tile_unet": 256,
        "hypertile_swap_size_unet": 3,
        "hypertile_enable_vae": false,
        "hypertile_max_depth_vae": 3,
        "hypertile_max_tile_vae": 128,
        "hypertile_swap_size_vae": 3,
        "sd_model_checkpoint": "v1-5-pruned-emaonly.safetensors [6ce0161689]"
    },
    "Startup": {
        "total": 15.140353918075562,
        "records": {
            "initial startup": 0.030001163482666016,
            "prepare environment/checks": 0.009998083114624023,
            "prepare environment/git version info": 0.06100010871887207,
            "prepare environment/torch GPU test": 2.9200439453125,
            "prepare environment/clone repositores": 0.1719987392425537,
            "prepare environment/run extensions installers": 0.0,
            "prepare environment": 3.4820404052734375,
            "launcher": 0.0020003318786621094,
            "import torch": 5.075137138366699,
            "import gradio": 1.2326068878173828,
            "setup paths": 0.9831109046936035,
            "import ldm": 0.00800013542175293,
            "import sgm": 0.0,
            "initialize shared": 0.27900075912475586,
            "other imports": 0.5315518379211426,
            "opts onchange": 0.0,
            "setup SD model": 0.0,
            "setup codeformer": 0.0019986629486083984,
            "setup gfpgan": 0.019000530242919922,
            "set samplers": 0.0,
            "list extensions": 0.0019998550415039062,
            "restore config state file": 0.0,
            "list SD models": 1.1596651077270508,
            "list localizations": 0.0010008811950683594,
            "load scripts/custom_code.py": 0.0070002079010009766,
            "load scripts/img2imgalt.py": 0.0020003318786621094,
            "load scripts/loopback.py": 0.001999378204345703,
            "load scripts/outpainting_mk_2.py": 0.0020003318786621094,
            "load scripts/poor_mans_outpainting.py": 0.0019998550415039062,
            "load scripts/postprocessing_codeformer.py": 0.0010001659393310547,
            "load scripts/postprocessing_gfpgan.py": 0.0009996891021728516,
            "load scripts/postprocessing_upscale.py": 0.0030019283294677734,
            "load scripts/prompt_matrix.py": 0.0019969940185546875,
            "load scripts/prompts_from_file.py": 0.002000093460083008,
            "load scripts/sd_upscale.py": 0.0009996891021728516,
            "load scripts/xyz_grid.py": 0.008001089096069336,
            "load scripts/ldsr_model.py": 1.131108045578003,
            "load scripts/lora_script.py": 0.17199993133544922,
            "load scripts/scunet_model.py": 0.0279996395111084,
            "load scripts/swinir_model.py": 0.023999929428100586,
            "load scripts/hotkey_config.py": 0.0020029544830322266,
            "load scripts/extra_options_section.py": 0.0019996166229248047,
            "load scripts/hypertile_script.py": 0.04599809646606445,
            "load scripts/postprocessing_autosized_crop.py": 0.002000570297241211,
            "load scripts/postprocessing_caption.py": 0.0009999275207519531,
            "load scripts/postprocessing_create_flipped_copies.py": 0.0009999275207519531,
            "load scripts/postprocessing_focal_crop.py": 0.006000518798828125,
            "load scripts/postprocessing_split_oversized.py": 0.0009996891021728516,
            "load scripts/soft_inpainting.py": 0.004001140594482422,
            "load scripts/comments.py": 0.021998882293701172,
            "load scripts/refiner.py": 0.00099945068359375,
            "load scripts/sampler.py": 0.002001047134399414,
            "load scripts/seed.py": 0.00099945068359375,
            "load scripts": 1.4791085720062256,
            "load upscalers": 0.008001089096069336,
            "refresh VAE": 0.002000093460083008,
            "refresh textual inversion templates": 0.0,
            "scripts list_optimizers": 0.001999378204345703,
            "scripts list_unets": 0.0,
            "reload hypernetworks": 0.0009992122650146484,
            "initialize extra networks": 0.014002561569213867,
            "scripts before_ui_callback": 0.002001047134399414,
            "create ui": 0.35355591773986816,
            "gradio launch": 0.7825717926025391,
            "add APIs": 0.007999181747436523,
            "app_started_callback/lora_script.py": 0.0,
            "app_started_callback": 0.0
        }
    },
    "Packages": [
        "accelerate==0.21.0",
        "aenum==3.1.15",
        "aiofiles==23.2.1",
        "aiohappyeyeballs==2.5.0",
        "aiohttp==3.11.13",
        "aiosignal==1.3.2",
        "altair==5.5.0",
        "antlr4-python3-runtime==4.9.3",
        "anyio==3.7.1",
        "async-timeout==5.0.1",
        "attrs==25.1.0",
        "blendmodes==2022",
        "certifi==2025.1.31",
        "charset-normalizer==3.4.1",
        "clean-fid==0.1.35",
        "click==8.1.8",
        "clip @ https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip#sha256=b5842c25da441d6c581b53a5c60e0c2127ebafe0f746f8e15561a006c6c3be6a",
        "colorama==0.4.6",
        "contourpy==1.3.1",
        "cycler==0.12.1",
        "deprecation==2.1.0",
        "diskcache==5.6.3",
        "einops==0.4.1",
        "exceptiongroup==1.2.2",
        "facexlib==0.3.0",
        "fastapi==0.94.0",
        "ffmpy==0.5.0",
        "filelock==3.17.0",
        "filterpy==1.4.5",
        "fonttools==4.56.0",
        "frozenlist==1.5.0",
        "fsspec==2025.3.0",
        "ftfy==6.3.1",
        "gitdb==4.0.12",
        "GitPython==3.1.32",
        "gradio==3.41.2",
        "gradio_client==0.5.0",
        "h11==0.12.0",
        "httpcore==0.15.0",
        "httpx==0.24.1",
        "huggingface-hub==0.29.2",
        "idna==3.10",
        "imageio==2.37.0",
        "importlib_resources==6.5.2",
        "inflection==0.5.1",
        "Jinja2==3.1.6",
        "jsonmerge==1.8.0",
        "jsonschema==4.23.0",
        "jsonschema-specifications==2024.10.1",
        "kiwisolver==1.4.8",
        "kornia==0.6.7",
        "lark==1.1.2",
        "lazy_loader==0.4",
        "lightning-utilities==0.14.0",
        "llvmlite==0.44.0",
        "MarkupSafe==2.1.5",
        "matplotlib==3.10.1",
        "mpmath==1.3.0",
        "multidict==6.1.0",
        "narwhals==1.29.1",
        "networkx==3.4.2",
        "numba==0.61.0",
        "numpy==1.26.2",
        "omegaconf==2.2.3",
        "open-clip-torch==2.20.0",
        "opencv-python==4.11.0.86",
        "orjson==3.10.15",
        "packaging==24.2",
        "pandas==2.2.3",
        "piexif==1.1.3",
        "Pillow==9.5.0",
        "pillow-avif-plugin==1.4.3",
        "pip==25.0.1",
        "propcache==0.3.0",
        "protobuf==3.20.0",
        "psutil==5.9.5",
        "pydantic==1.10.21",
        "pydub==0.25.1",
        "pyparsing==3.2.1",
        "python-dateutil==2.9.0.post0",
        "python-multipart==0.0.20",
        "pytorch-lightning==1.9.4",
        "pytz==2025.1",
        "PyWavelets==1.8.0",
        "PyYAML==6.0.2",
        "referencing==0.36.2",
        "regex==2024.11.6",
        "requests==2.32.3",
        "resize-right==0.0.2",
        "rpds-py==0.23.1",
        "safetensors==0.4.2",
        "scikit-image==0.21.0",
        "scipy==1.15.2",
        "semantic-version==2.10.0",
        "sentencepiece==0.2.0",
        "setuptools==69.5.1",
        "six==1.17.0",
        "smmap==5.0.2",
        "sniffio==1.3.1",
        "spandrel==0.3.4",
        "spandrel_extra_arches==0.1.1",
        "starlette==0.26.1",
        "sympy==1.13.3",
        "tifffile==2025.2.18",
        "timm==1.0.15",
        "tokenizers==0.13.3",
        "tomesd==0.1.3",
        "torch==2.1.2+cu121",
        "torchdiffeq==0.2.3",
        "torchmetrics==1.6.2",
        "torchsde==0.2.6",
        "torchvision==0.16.2+cu121",
        "tqdm==4.67.1",
        "trampoline==0.1.2",
        "transformers==4.30.2",
        "typing_extensions==4.12.2",
        "tzdata==2025.1",
        "urllib3==2.3.0",
        "uvicorn==0.34.0",
        "wcwidth==0.2.13",
        "websockets==11.0.3",
        "wheel==0.45.1",
        "yarl==1.18.3"
    ]
}

Console logs

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments:
A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\cuda\__init__.py:215: UserWarning:
NVIDIA GeForce RTX 5070 Ti with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5070 Ti GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(
Loading weights [6ce0161689] from A:\Sin Sincronización\Chrome\sd.webui\webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: A:\Sin Sincronización\Chrome\sd.webui\webui\configs\v1-inference.yaml
A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\huggingface_hub\file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 12.6s (prepare environment: 2.7s, import torch: 4.6s, import gradio: 1.1s, setup paths: 0.9s, initialize shared: 0.2s, other imports: 0.5s, load scripts: 1.4s, create ui: 0.5s, gradio launch: 0.6s).
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "threading.py", line 973, in _bootstrap
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\sd_models.py", line 845, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\sd_models.py", line 440, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2138, in load_state_dict
    load(self, state_dict)
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
    load(child, child_state_dict, child_prefix)
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
    load(child, child_state_dict, child_prefix)
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 1 more time]
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2120, in load
    module._load_from_state_dict(
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\sd_disable_initialization.py", line 225, in <lambda>
    linear_load_from_state_dict = self.replace(torch.nn.Linear, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(linear_load_from_state_dict, *args, **kwargs))
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\_meta_registrations.py", line 4516, in zeros_like
    res.fill_(0)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.



Stable diffusion model failed to load
Applying attention optimization: Doggettx... done.
Exception in thread Thread-18 (load_model):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\initialize.py", line 154, in load_model
    devices.first_time_calculation()
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\devices.py", line 281, in first_time_calculation
    conv2d(x)
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\extensions-builtin\Lora\networks.py", line 599, in network_Conv2d_forward
    return originals.Conv2d_forward(self, input)
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Loading weights [6ce0161689] from A:\Sin Sincronización\Chrome\sd.webui\webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: A:\Sin Sincronización\Chrome\sd.webui\webui\configs\v1-inference.yaml
A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\huggingface_hub\file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "threading.py", line 973, in _bootstrap
  File "threading.py", line 1016, in _bootstrap_inner
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\ui.py", line 1165, in <lambda>
    update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\sd_models.py", line 845, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\sd_models.py", line 440, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2138, in load_state_dict
    load(self, state_dict)
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
    load(child, child_state_dict, child_prefix)
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
    load(child, child_state_dict, child_prefix)
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 1 more time]
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2120, in load
    module._load_from_state_dict(
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\sd_disable_initialization.py", line 225, in <lambda>
    linear_load_from_state_dict = self.replace(torch.nn.Linear, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(linear_load_from_state_dict, *args, **kwargs))
  File "A:\Sin Sincronización\Chrome\sd.webui\webui\modules\sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "A:\Sin Sincronización\Chrome\sd.webui\system\python\lib\site-packages\torch\_meta_registrations.py", line 4516, in zeros_like
    res.fill_(0)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.



Stable diffusion model failed to load
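For anyone hitting the same error, a quick way to confirm the mismatch is to ask the installed PyTorch which compute capabilities its wheel was built for and compare that with what the GPU reports. This is a generic diagnostic sketch, not part of the webui itself; run it with the webui's embedded interpreter (`system\python\python.exe`):

```python
# Diagnostic sketch: compare the compute capabilities baked into the
# installed PyTorch wheel with what the local GPU reports.
import torch

print(torch.__version__)            # e.g. "2.1.2+cu121"
print(torch.cuda.get_arch_list())   # architectures the wheel was compiled for

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    # An RTX 5070 Ti reports (12, 0), i.e. sm_120.
    print(f"GPU reports sm_{major}{minor}")
```

If the arch list stops at sm_90 (as the cu121 wheel's does) while the GPU reports sm_120, that is exactly the condition that produces "no kernel image is available for execution on the device".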
Loading weights [6ce0161689] from A:\Sin Sincronización\Chrome\sd.webui\webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: A:\Sin Sincronización\Chrome\sd.webui\webui\configs\v1-inference.yaml
loading stable diffusion model: RuntimeError
Stable diffusion model failed to load

Additional information

No response

@fireYtail fireYtail added the bug-report Report of a bug, yet to be confirmed label Mar 8, 2025
@Haoming02
Copy link
Contributor

As per #16824, there is a separate release for RTX 50s. Were you using that one?

@fireYtail
Copy link
Author

fireYtail commented Mar 9, 2025

No, when I go to a GitHub repository, I grab the release from the main page and read the instructions on the main page. I don't go into every single issue page of the repository just to find a working version. Expecting users to do that is absolutely unrealistic.

It only took me a few minutes to open this issue explaining both the problem and a solution. Is it so tremendously difficult to add this information to the main page, rather than expecting everyone to check every single issue for a solution?

Really, common sense is the least common of the senses, it seems. If the main release doesn't work for the new GPUs and there are alternative versions, why not provide a link or a small text note instead of expecting people to check every issue one by one?

@Haoming02
Copy link
Contributor

Well, cu128 is still only nightly afaik

So no stable release for 50s 🤷‍♂
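Until a stable cu128 wheel exists, the manual workaround from the top of this issue can be scripted. This is a sketch of those same steps; it assumes the default sd.webui layout and must be run from a command prompt in the `system\python` directory (the nightly index URL is the one given in the report, and the download is over 3 GB):

```shell
# Replace the bundled cu121 torch with the cu128 nightly, which
# includes sm_120 kernels for RTX 50-series cards.
python -m pip uninstall -y torch torchvision
python -m pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128
```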

@Haoming02
Copy link
Contributor

just expect people to check every issue one by one

It's pinned btw
