Hacker News

I think the answer is yes, but setup is a bit complicated. I would test this myself, but I don't have an NVIDIA card with at least 10GB of VRAM.

One time:

1. Have "conda" installed.

2. clone https://github.com/CompVis/stable-diffusion

3. `conda env create -f environment.yaml`

4. Activate the environment with `conda activate ldm`.

5. Download weights from https://huggingface.co/CompVis/stable-diffusion-v-1-4-origin... (requires registration).

6. `mkdir -p models/ldm/stable-diffusion-v1/`

7. `ln -s <path/to/model.ckpt> models/ldm/stable-diffusion-v1/model.ckpt`. (You can download one of the other versions of the model, such as v1-1, v1-2, or v1-3, and symlink it instead if you prefer.)
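The one-time steps above can be condensed into a shell sketch (this assumes conda is installed and the sd-v1-4 checkpoint has already been downloaded from Hugging Face; the checkpoint path placeholder is left as-is):

```shell
git clone https://github.com/CompVis/stable-diffusion
cd stable-diffusion
conda env create -f environment.yaml
conda activate ldm
mkdir -p models/ldm/stable-diffusion-v1/
# Point the filename the scripts expect at wherever you saved the checkpoint.
ln -s <path/to/model.ckpt> models/ldm/stable-diffusion-v1/model.ckpt
```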

To run:

1. Activate the environment with `conda activate ldm` (unless your shell already has it active).

2. `python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms`.

Also, there is a safety filter in the code that blacks out images classified as NSFW or otherwise likely to be offensive (presumably also things like swastikas, gore, etc.). It is trivial to disable by editing the source if you want.
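For reference, disabling it usually amounts to replacing the script's safety-check helper with a pass-through. A minimal sketch — the helper name `check_safety` and its return shape are assumptions about the repo's txt2img.py, not verified here:

```python
# Hypothetical pass-through replacing the script's safety check.
# Assumption: the original helper returns (images, nsfw_flags).
def check_safety(x_image):
    # Return the images unchanged and report nothing as NSFW.
    return x_image, [False] * len(x_image)
```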




Thanks for these instructions.

Unfortunately I'm getting this error message (Win11, 3080 10GB):

> RuntimeError: CUDA out of memory. Tried to allocate 3.00 GiB (GPU 0; 10.00 GiB total capacity; 5.62 GiB already allocated; 1.80 GiB free; 5.74 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CON
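Before switching forks, the error's own suggestion is worth a try: configure PyTorch's CUDA caching allocator to split large blocks, which can reduce fragmentation. `PYTORCH_CUDA_ALLOC_CONF` is a real PyTorch environment variable; the 128 MiB value below is illustrative, not tuned:

```shell
# Read by PyTorch's CUDA caching allocator at startup.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# Then rerun scripts/txt2img.py as before.
```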

Edit:

    >>> from GPUtil import showUtilization as gpu_usage
    >>> gpu_usage()
    | ID | GPU | MEM |
    ------------------
    |  0 |  1% |  6% |

Edit 2:

Got this optimized fork to work: https://github.com/basujindal/stable-diffusion


I also have a 10GB card and saw the same thing. To get it working I had to pass `--n_samples 1` to the command, which limits the number of generated images to 2 in any given run. This has been working fine for me.
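Concretely, using the example prompt from upthread, the reduced-batch invocation would look like this (same txt2img.py flags as before):

```shell
python scripts/txt2img.py \
    --prompt "a photograph of an astronaut riding a horse" \
    --plms \
    --n_samples 1  # smaller batch per iteration to fit in 10GB of VRAM
```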


I haven't gotten around to it, but I remember reading on /g/ that you can make it run on 5GB (sacrificing accuracy).

You should check the threads there; there's some good info.



