Author: zxsx811 (We Are X)
Board: AI_Art
Title: [Discussion] LoRA training error
Time: Sun Feb 26 10:02:37 2023
As the title says,
the following errors appear when I run training.
How do I fix this?
Load CSS...
Running on local URL:
http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
2023-02-26T09:14:47.883ZE [8084:ShellIpcClient]
shell_ipc_client.cc:132:Connect Can't connect to socket at:
\\.\Pipe\GoogleDriveFSPipe_ken_shell
2023-02-26T09:14:47.883ZE [8084:ShellIpcClient]
shell_ipc_client.cc:609:operator() Failed to connect to the server:
NOT_FOUND: Can't connect to socket at: \\.\Pipe\GoogleDriveFSPipe_ken_shell
=== Source Location Trace: ===
apps/drive/fs/ipc/shell_ipc_client.cc:133
[the same GoogleDriveFSPipe Connect / NOT_FOUND messages repeat from processes 8084, 10624 and 4552 at 09:14:47 and 09:15:11]
Folder 100_Swordmaiden: 4000 steps
max_train_steps = 4000
stop_text_encoder_training = 0
lr_warmup_steps = 400
accelerate launch --num_cpu_threads_per_process=2 "train_network.py"
--enable_bucket
--pretrained_model_name_or_path="C:/Users/ken/Desktop/sd.webui/webui/models/Stable-diffusion/AOM2-r34-SF1.1.safetensors"
--train_data_dir="C:/Users/ken/Desktop/99+/1" --resolution=512,512
--output_dir="C:/Users/ken/Desktop/99+/2"
--logging_dir="C:/Users/ken/Desktop/99+/3" --network_alpha="1"
--save_model_as=safetensors --network_module=networks.lora
--text_encoder_lr=5e-5 --unet_lr=0.0001 --network_dim=8 --output_name="last"
--lr_scheduler_num_cycles="1" --learning_rate="0.0001"
--lr_scheduler="cosine" --lr_warmup_steps="400" --train_batch_size="1"
--max_train_steps="4000" --save_every_n_epochs="1" --mixed_precision="fp16"
--save_precision="fp16" --seed="1234" --cache_latents
--optimizer_type="AdamW" --bucket_reso_steps=64 --xformers --use_8bit_adam
--bucket_no_upscale
The following values were not passed to `accelerate launch` and had defaults
used instead:
`--num_processes` was set to a value of `1`
`--num_machines` was set to a value of `1`
`--mixed_precision` was set to a value of `'no'`
`--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters
or run `accelerate config`.
prepare tokenizer
Use DreamBooth method.
prepare train images.
found directory 100_Swordmaiden contains 40 image files
4000 train images with repeating.
loading image sizes.
100%|██████████| 40/40 [00:00<00:00, 1901.64it/s]
make buckets
min_bucket_reso and max_bucket_reso are ignored if bucket_no_upscale is set,
because bucket reso is defined by image size automatically
number of images per bucket (including repeats)
bucket 0: resolution (512, 512), count: 4000
mean ar error (without repeats): 0.0
prepare accelerator
Using accelerator 0.15.0 or above.
load StableDiffusion checkpoint
loading u-net: <All keys matched successfully>
loading vae: <All keys matched successfully>
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were
not used when initializing CLIPTextModel:
['vision_model.encoder.layers.7.self_attn.k_proj.bias',
'vision_model.encoder.layers.0.mlp.fc2.bias',
'vision_model.encoder.layers.12.layer_norm1.weight',
'vision_model.encoder.layers.14.layer_norm1.bias',
'vision_model.encoder.layers.22.layer_norm1.weight',
'vision_model.encoder.layers.0.layer_norm1.bias',
'vision_model.encoder.layers.17.self_attn.k_proj.bias',
'vision_model.encoder.layers.4.layer_norm1.weight',
'vision_model.encoder.layers.23.mlp.fc2.bias',
'vision_model.encoder.layers.1.self_attn.out_proj.bias',
'vision_model.encoder.layers.7.self_attn.q_proj.bias',
'vision_model.encoder.layers.3.layer_norm1.weight',
'vision_model.encoder.layers.6.self_attn.q_proj.weight',
'vision_model.encoder.layers.21.self_attn.k_proj.weight',
'vision_model.encoder.layers.11.self_attn.out_proj.weight',
'vision_model.encoder.layers.15.self_attn.k_proj.bias',
'vision_model.encoder.layers.15.mlp.fc2.bias',
'vision_model.encoder.layers.23.layer_norm2.bias',
'vision_model.encoder.layers.4.self_attn.q_proj.weight',
'vision_model.encoder.layers.1.self_attn.k_proj.weight',
'vision_model.encoder.layers.17.self_attn.k_proj.weight',
'vision_model.encoder.layers.17.layer_norm1.weight',
'vision_model.encoder.layers.9.self_attn.k_proj.bias',
'vision_model.encoder.layers.2.self_attn.out_proj.bias',
'vision_model.encoder.layers.20.layer_norm1.bias',
'vision_model.encoder.layers.3.self_attn.q_proj.bias',
'vision_model.embeddings.position_ids',
'vision_model.encoder.layers.2.layer_norm2.bias',
'vision_model.encoder.layers.13.self_attn.v_proj.weight',
'vision_model.encoder.layers.15.mlp.fc1.bias',
'vision_model.encoder.layers.7.layer_norm1.weight',
'vision_model.encoder.layers.22.self_attn.q_proj.weight',
'vision_model.encoder.layers.21.self_attn.q_proj.weight',
'vision_model.encoder.layers.18.mlp.fc2.bias',
'vision_model.encoder.layers.5.self_attn.v_proj.bias',
'vision_model.encoder.layers.2.mlp.fc1.bias',
'vision_model.encoder.layers.16.self_attn.k_proj.weight',
'vision_model.encoder.layers.16.mlp.fc1.bias',
'vision_model.encoder.layers.12.self_attn.k_proj.bias',
'vision_model.encoder.layers.4.self_attn.k_proj.weight',
'vision_model.encoder.layers.17.layer_norm2.bias',
'vision_model.encoder.layers.12.self_attn.v_proj.weight',
'vision_model.encoder.layers.18.mlp.fc2.weight',
'vision_model.encoder.layers.12.self_attn.v_proj.bias',
'vision_model.encoder.layers.8.layer_norm1.weight',
'vision_model.encoder.layers.20.layer_norm2.bias',
'vision_model.encoder.layers.4.mlp.fc1.bias',
'vision_model.encoder.layers.18.layer_norm1.bias',
'vision_model.encoder.layers.9.self_attn.k_proj.weight',
'vision_model.encoder.layers.0.self_attn.q_proj.weight',
'vision_model.pre_layrnorm.bias',
'vision_model.encoder.layers.22.self_attn.v_proj.bias',
'vision_model.encoder.layers.16.self_attn.q_proj.weight',
'vision_model.encoder.layers.19.self_attn.q_proj.bias',
'vision_model.encoder.layers.8.layer_norm2.weight',
'vision_model.encoder.layers.9.self_attn.out_proj.bias',
'vision_model.encoder.layers.14.layer_norm2.bias',
'vision_model.encoder.layers.22.mlp.fc2.weight',
'vision_model.encoder.layers.17.mlp.fc2.weight',
'vision_model.encoder.layers.15.self_attn.q_proj.bias',
'vision_model.encoder.layers.3.self_attn.v_proj.bias',
'vision_model.encoder.layers.16.layer_norm1.weight',
'vision_model.encoder.layers.12.layer_norm2.bias',
'vision_model.encoder.layers.17.layer_norm1.bias',
'vision_model.encoder.layers.16.self_attn.v_proj.bias',
'vision_model.encoder.layers.18.self_attn.out_proj.weight',
'vision_model.encoder.layers.7.mlp.fc1.bias',
'vision_model.encoder.layers.14.self_attn.q_proj.bias',
'vision_model.encoder.layers.14.self_attn.k_proj.weight',
'vision_model.encoder.layers.10.self_attn.q_proj.weight',
'vision_model.encoder.layers.19.self_attn.out_proj.weight',
'vision_model.encoder.layers.20.self_attn.k_proj.bias',
'vision_model.encoder.layers.22.layer_norm2.bias',
'vision_model.encoder.layers.22.self_attn.k_proj.bias',
'vision_model.encoder.layers.10.self_attn.v_proj.bias',
'vision_model.encoder.layers.21.mlp.fc1.bias',
'vision_model.encoder.layers.19.self_attn.v_proj.weight',
'vision_model.encoder.layers.18.self_attn.v_proj.bias',
'vision_model.encoder.layers.21.layer_norm1.bias',
'vision_model.encoder.layers.22.self_attn.out_proj.weight',
'vision_model.encoder.layers.18.mlp.fc1.bias',
'vision_model.encoder.layers.3.layer_norm2.bias',
'vision_model.encoder.layers.11.layer_norm2.weight',
'vision_model.encoder.layers.1.mlp.fc2.bias',
'vision_model.encoder.layers.23.self_attn.q_proj.bias',
'vision_model.post_layernorm.bias',
'vision_model.encoder.layers.18.layer_norm2.bias',
'vision_model.encoder.layers.5.mlp.fc1.bias',
'vision_model.encoder.layers.8.self_attn.k_proj.bias',
'vision_model.encoder.layers.11.self_attn.q_proj.weight',
'vision_model.encoder.layers.19.mlp.fc1.weight',
'vision_model.encoder.layers.2.self_attn.v_proj.bias',
'vision_model.encoder.layers.17.self_attn.out_proj.bias',
'vision_model.encoder.layers.17.self_attn.v_proj.weight',
'vision_model.encoder.layers.6.self_attn.k_proj.weight',
'vision_model.encoder.layers.20.self_attn.out_proj.weight',
'vision_model.encoder.layers.9.self_attn.q_proj.weight',
'vision_model.encoder.layers.15.mlp.fc2.weight',
'vision_model.encoder.layers.20.self_attn.k_proj.weight',
'vision_model.encoder.layers.17.self_attn.out_proj.weight',
'vision_model.encoder.layers.8.mlp.fc2.bias',
'vision_model.encoder.layers.15.self_attn.q_proj.weight',
'vision_model.encoder.layers.8.mlp.fc1.bias',
'vision_model.encoder.layers.17.self_attn.q_proj.bias',
'vision_model.encoder.layers.5.mlp.fc2.bias',
'vision_model.encoder.layers.22.self_attn.v_proj.weight',
'vision_model.encoder.layers.5.self_attn.q_proj.weight',
'vision_model.encoder.layers.3.mlp.fc1.bias',
'vision_model.encoder.layers.1.layer_norm2.bias',
'vision_model.encoder.layers.2.self_attn.k_proj.bias',
'vision_model.encoder.layers.5.layer_norm1.weight',
'vision_model.encoder.layers.13.self_attn.k_proj.bias',
'vision_model.encoder.layers.23.layer_norm1.bias',
'vision_model.encoder.layers.0.layer_norm1.weight',
'vision_model.encoder.layers.17.layer_norm2.weight',
'vision_model.encoder.layers.18.self_attn.q_proj.weight',
'vision_model.encoder.layers.2.self_attn.q_proj.bias',
'vision_model.encoder.layers.5.self_attn.out_proj.weight',
'vision_model.encoder.layers.13.mlp.fc2.weight',
'vision_model.encoder.layers.4.self_attn.k_proj.bias',
'vision_model.encoder.layers.19.self_attn.out_proj.bias',
'vision_model.encoder.layers.8.self_attn.q_proj.bias',
'vision_model.encoder.layers.6.mlp.fc2.bias',
'vision_model.encoder.layers.5.self_attn.out_proj.bias',
'vision_model.encoder.layers.6.mlp.fc1.bias',
'vision_model.encoder.layers.23.self_attn.v_proj.weight',
'vision_model.encoder.layers.3.layer_norm1.bias',
'vision_model.encoder.layers.21.self_attn.k_proj.bias',
'vision_model.embeddings.position_embedding.weight',
'vision_model.encoder.layers.18.self_attn.q_proj.bias',
'vision_model.encoder.layers.21.mlp.fc2.weight',
'vision_model.encoder.layers.7.self_attn.q_proj.weight',
'vision_model.encoder.layers.15.layer_norm2.bias',
'vision_model.encoder.layers.10.self_attn.v_proj.weight',
'vision_model.encoder.layers.0.layer_norm2.bias',
'vision_model.encoder.layers.15.layer_norm2.weight',
'vision_model.encoder.layers.21.self_attn.q_proj.bias',
'vision_model.encoder.layers.15.layer_norm1.weight',
'vision_model.encoder.layers.19.layer_norm2.bias',
'vision_model.encoder.layers.10.mlp.fc2.bias',
'vision_model.encoder.layers.1.self_attn.q_proj.bias',
'vision_model.encoder.layers.20.mlp.fc2.bias',
'vision_model.encoder.layers.8.self_attn.out_proj.bias',
'vision_model.encoder.layers.6.layer_norm2.weight',
'vision_model.encoder.layers.11.self_attn.q_proj.bias',
'vision_model.encoder.layers.10.layer_norm1.weight',
'vision_model.encoder.layers.19.layer_norm1.bias',
'vision_model.encoder.layers.23.self_attn.out_proj.weight',
'vision_model.encoder.layers.11.mlp.fc2.weight',
'vision_model.encoder.layers.9.self_attn.v_proj.weight',
'vision_model.encoder.layers.11.layer_norm2.bias',
'vision_model.encoder.layers.16.layer_norm1.bias',
'vision_model.encoder.layers.10.mlp.fc2.weight',
'vision_model.encoder.layers.19.self_attn.v_proj.bias',
'vision_model.encoder.layers.2.mlp.fc1.weight',
'vision_model.encoder.layers.18.self_attn.v_proj.weight',
'vision_model.encoder.layers.13.layer_norm1.bias',
'vision_model.encoder.layers.0.self_attn.v_proj.weight',
'vision_model.encoder.layers.16.self_attn.out_proj.weight',
'vision_model.encoder.layers.10.self_attn.k_proj.weight',
'vision_model.encoder.layers.2.layer_norm1.bias',
'vision_model.encoder.layers.12.mlp.fc2.bias',
'vision_model.encoder.layers.8.layer_norm1.bias',
'vision_model.encoder.layers.11.self_attn.v_proj.weight',
'vision_model.encoder.layers.19.layer_norm1.weight',
'visual_projection.weight', 'vision_model.encoder.layers.12.mlp.fc1.bias',
'vision_model.encoder.layers.13.self_attn.k_proj.weight',
'vision_model.encoder.layers.20.layer_norm1.weight',
'vision_model.encoder.layers.6.self_attn.out_proj.weight',
'vision_model.post_layernorm.weight',
'vision_model.encoder.layers.14.mlp.fc2.weight',
'vision_model.encoder.layers.18.layer_norm2.weight',
'vision_model.encoder.layers.19.mlp.fc2.weight',
'vision_model.encoder.layers.12.self_attn.k_proj.weight',
'vision_model.encoder.layers.1.mlp.fc1.weight',
'vision_model.encoder.layers.14.layer_norm1.weight',
'vision_model.encoder.layers.2.self_attn.v_proj.weight',
'vision_model.encoder.layers.22.self_attn.q_proj.bias',
'vision_model.encoder.layers.10.layer_norm2.weight',
'vision_model.encoder.layers.11.self_attn.out_proj.bias',
'vision_model.encoder.layers.3.mlp.fc2.weight',
'vision_model.encoder.layers.8.mlp.fc2.weight',
'vision_model.encoder.layers.12.layer_norm2.weight',
'vision_model.encoder.layers.14.self_attn.out_proj.bias',
'vision_model.encoder.layers.17.self_attn.q_proj.weight',
'vision_model.encoder.layers.21.layer_norm2.bias',
'vision_model.encoder.layers.15.self_attn.out_proj.bias',
'vision_model.encoder.layers.22.mlp.fc2.bias',
'vision_model.encoder.layers.17.self_attn.v_proj.bias',
'vision_model.encoder.layers.18.mlp.fc1.weight',
'vision_model.encoder.layers.14.mlp.fc1.weight',
'vision_model.encoder.layers.11.self_attn.k_proj.bias',
'vision_model.encoder.layers.5.layer_norm2.bias',
'vision_model.encoder.layers.12.self_attn.out_proj.weight',
'vision_model.encoder.layers.1.layer_norm1.weight',
'vision_model.encoder.layers.8.layer_norm2.bias',
'vision_model.encoder.layers.20.self_attn.v_proj.weight',
'text_projection.weight',
'vision_model.encoder.layers.5.self_attn.q_proj.bias',
'vision_model.encoder.layers.20.self_attn.v_proj.bias',
'vision_model.encoder.layers.7.self_attn.v_proj.bias',
'vision_model.encoder.layers.23.layer_norm2.weight',
'vision_model.encoder.layers.14.self_attn.q_proj.weight',
'vision_model.encoder.layers.15.self_attn.out_proj.weight',
'vision_model.encoder.layers.11.mlp.fc1.weight',
'vision_model.encoder.layers.4.layer_norm2.weight',
'vision_model.encoder.layers.10.self_attn.out_proj.weight',
'vision_model.encoder.layers.16.self_attn.out_proj.bias',
'vision_model.encoder.layers.20.mlp.fc1.weight',
'vision_model.encoder.layers.5.layer_norm1.bias',
'vision_model.encoder.layers.2.self_attn.out_proj.weight',
'vision_model.encoder.layers.7.self_attn.k_proj.weight',
'vision_model.encoder.layers.13.layer_norm1.weight',
'vision_model.encoder.layers.4.layer_norm1.bias',
'vision_model.encoder.layers.6.self_attn.v_proj.weight',
'vision_model.encoder.layers.11.layer_norm1.weight',
'vision_model.encoder.layers.21.mlp.fc1.weight',
'vision_model.encoder.layers.6.mlp.fc2.weight',
'vision_model.encoder.layers.22.mlp.fc1.weight',
'vision_model.encoder.layers.12.self_attn.q_proj.bias',
'vision_model.encoder.layers.19.mlp.fc1.bias',
'vision_model.pre_layrnorm.weight',
'vision_model.encoder.layers.6.mlp.fc1.weight',
'vision_model.encoder.layers.13.self_attn.q_proj.weight',
'vision_model.encoder.layers.4.layer_norm2.bias',
'vision_model.encoder.layers.5.self_attn.k_proj.weight',
'vision_model.encoder.layers.4.self_attn.out_proj.bias',
'vision_model.encoder.layers.12.self_attn.q_proj.weight',
'vision_model.encoder.layers.4.mlp.fc2.weight',
'vision_model.encoder.layers.13.self_attn.out_proj.weight',
'vision_model.embeddings.class_embedding',
'vision_model.encoder.layers.13.mlp.fc2.bias',
'vision_model.encoder.layers.7.layer_norm2.bias',
'vision_model.encoder.layers.19.layer_norm2.weight',
'vision_model.encoder.layers.8.self_attn.q_proj.weight',
'vision_model.embeddings.patch_embedding.weight',
'vision_model.encoder.layers.6.self_attn.q_proj.bias',
'vision_model.encoder.layers.19.self_attn.q_proj.weight',
'vision_model.encoder.layers.11.layer_norm1.bias',
'vision_model.encoder.layers.2.layer_norm2.weight',
'vision_model.encoder.layers.0.self_attn.k_proj.weight',
'vision_model.encoder.layers.10.layer_norm1.bias',
'vision_model.encoder.layers.8.self_attn.v_proj.weight',
'vision_model.encoder.layers.21.self_attn.out_proj.weight',
'vision_model.encoder.layers.20.self_attn.q_proj.bias',
'vision_model.encoder.layers.0.mlp.fc2.weight',
'vision_model.encoder.layers.4.mlp.fc1.weight',
'vision_model.encoder.layers.20.self_attn.q_proj.weight',
'vision_model.encoder.layers.3.self_attn.out_proj.weight',
'vision_model.encoder.layers.0.mlp.fc1.bias',
'vision_model.encoder.layers.12.mlp.fc2.weight',
'vision_model.encoder.layers.1.self_attn.v_proj.bias',
'vision_model.encoder.layers.21.mlp.fc2.bias',
'vision_model.encoder.layers.14.layer_norm2.weight',
'vision_model.encoder.layers.18.layer_norm1.weight',
'vision_model.encoder.layers.22.layer_norm2.weight',
'vision_model.encoder.layers.1.mlp.fc1.bias',
'vision_model.encoder.layers.7.self_attn.out_proj.bias',
'vision_model.encoder.layers.4.mlp.fc2.bias',
'vision_model.encoder.layers.3.self_attn.v_proj.weight',
'vision_model.encoder.layers.6.self_attn.v_proj.bias',
'vision_model.encoder.layers.10.mlp.fc1.bias',
'vision_model.encoder.layers.9.self_attn.q_proj.bias',
'vision_model.encoder.layers.18.self_attn.out_proj.bias',
'vision_model.encoder.layers.7.layer_norm2.weight',
'vision_model.encoder.layers.16.mlp.fc2.bias',
'vision_model.encoder.layers.13.self_attn.v_proj.bias',
'vision_model.encoder.layers.6.layer_norm1.weight',
'vision_model.encoder.layers.2.layer_norm1.weight',
'vision_model.encoder.layers.6.self_attn.out_proj.bias',
'vision_model.encoder.layers.7.mlp.fc2.weight',
'vision_model.encoder.layers.0.mlp.fc1.weight',
'vision_model.encoder.layers.13.layer_norm2.weight',
'vision_model.encoder.layers.0.self_attn.q_proj.bias',
'vision_model.encoder.layers.7.self_attn.out_proj.weight',
'vision_model.encoder.layers.19.self_attn.k_proj.bias',
'vision_model.encoder.layers.9.layer_norm1.weight',
'vision_model.encoder.layers.11.mlp.fc2.bias',
'vision_model.encoder.layers.23.mlp.fc1.bias',
'vision_model.encoder.layers.16.mlp.fc2.weight',
'vision_model.encoder.layers.21.self_attn.v_proj.weight',
'vision_model.encoder.layers.23.mlp.fc1.weight',
'vision_model.encoder.layers.2.self_attn.k_proj.weight',
'vision_model.encoder.layers.9.layer_norm2.bias',
'vision_model.encoder.layers.8.self_attn.out_proj.weight',
'vision_model.encoder.layers.0.self_attn.k_proj.bias',
'vision_model.encoder.layers.23.self_attn.k_proj.bias',
'vision_model.encoder.layers.2.mlp.fc2.bias',
'vision_model.encoder.layers.3.mlp.fc1.weight',
'vision_model.encoder.layers.16.self_attn.q_proj.bias',
'vision_model.encoder.layers.6.layer_norm1.bias',
'vision_model.encoder.layers.3.self_attn.out_proj.bias',
'vision_model.encoder.layers.16.layer_norm2.bias',
'vision_model.encoder.layers.9.mlp.fc2.weight',
'vision_model.encoder.layers.16.self_attn.v_proj.weight',
'vision_model.encoder.layers.4.self_attn.out_proj.weight',
'vision_model.encoder.layers.5.layer_norm2.weight',
'vision_model.encoder.layers.2.mlp.fc2.weight',
'vision_model.encoder.layers.23.self_attn.v_proj.bias',
'vision_model.encoder.layers.17.mlp.fc1.weight',
'vision_model.encoder.layers.11.self_attn.k_proj.weight',
'vision_model.encoder.layers.23.self_attn.q_proj.weight',
'vision_model.encoder.layers.1.mlp.fc2.weight',
'vision_model.encoder.layers.8.mlp.fc1.weight',
'vision_model.encoder.layers.21.self_attn.v_proj.bias',
'vision_model.encoder.layers.1.self_attn.k_proj.bias',
'vision_model.encoder.layers.0.self_attn.out_proj.bias',
'vision_model.encoder.layers.17.mlp.fc1.bias',
'vision_model.encoder.layers.19.self_attn.k_proj.weight',
'vision_model.encoder.layers.9.mlp.fc1.bias',
'vision_model.encoder.layers.10.self_attn.out_proj.bias',
'vision_model.encoder.layers.12.layer_norm1.bias',
'vision_model.encoder.layers.7.mlp.fc2.bias',
'vision_model.encoder.layers.23.mlp.fc2.weight',
'vision_model.encoder.layers.13.self_attn.q_proj.bias',
'vision_model.encoder.layers.23.self_attn.out_proj.bias',
'vision_model.encoder.layers.19.mlp.fc2.bias',
'vision_model.encoder.layers.6.layer_norm2.bias',
'vision_model.encoder.layers.3.layer_norm2.weight',
'vision_model.encoder.layers.8.self_attn.v_proj.bias',
'vision_model.encoder.layers.5.self_attn.k_proj.bias',
'vision_model.encoder.layers.1.self_attn.q_proj.weight',
'vision_model.encoder.layers.22.mlp.fc1.bias',
'vision_model.encoder.layers.14.mlp.fc1.bias',
'vision_model.encoder.layers.16.layer_norm2.weight',
'vision_model.encoder.layers.21.layer_norm1.weight',
'vision_model.encoder.layers.13.mlp.fc1.bias',
'vision_model.encoder.layers.5.mlp.fc1.weight',
'vision_model.encoder.layers.18.self_attn.k_proj.weight',
'vision_model.encoder.layers.9.layer_norm1.bias',
'vision_model.encoder.layers.7.mlp.fc1.weight',
'vision_model.encoder.layers.17.mlp.fc2.bias',
'vision_model.encoder.layers.4.self_attn.q_proj.bias',
'vision_model.encoder.layers.2.self_attn.q_proj.weight',
'vision_model.encoder.layers.14.self_attn.v_proj.weight',
'vision_model.encoder.layers.22.layer_norm1.bias',
'vision_model.encoder.layers.1.self_attn.v_proj.weight',
'vision_model.encoder.layers.23.layer_norm1.weight',
'vision_model.encoder.layers.13.layer_norm2.bias',
'vision_model.encoder.layers.9.self_attn.out_proj.weight',
'vision_model.encoder.layers.1.self_attn.out_proj.weight',
'vision_model.encoder.layers.14.self_attn.v_proj.bias',
'vision_model.encoder.layers.11.mlp.fc1.bias',
'vision_model.encoder.layers.10.self_attn.k_proj.bias',
'vision_model.encoder.layers.10.layer_norm2.bias',
'vision_model.encoder.layers.15.self_attn.k_proj.weight',
'vision_model.encoder.layers.21.self_attn.out_proj.bias',
'vision_model.encoder.layers.12.self_attn.out_proj.bias',
'vision_model.encoder.layers.10.mlp.fc1.weight',
'vision_model.encoder.layers.20.layer_norm2.weight',
'vision_model.encoder.layers.20.mlp.fc2.weight',
'vision_model.encoder.layers.9.self_attn.v_proj.bias',
'vision_model.encoder.layers.7.self_attn.v_proj.weight',
'vision_model.encoder.layers.3.self_attn.q_proj.weight',
'vision_model.encoder.layers.11.self_attn.v_proj.bias',
'vision_model.encoder.layers.9.mlp.fc1.weight',
'vision_model.encoder.layers.14.mlp.fc2.bias',
'vision_model.encoder.layers.0.self_attn.v_proj.bias',
'vision_model.encoder.layers.20.mlp.fc1.bias',
'vision_model.encoder.layers.16.mlp.fc1.weight',
'vision_model.encoder.layers.23.self_attn.k_proj.weight',
'vision_model.encoder.layers.22.self_attn.out_proj.bias',
'vision_model.encoder.layers.15.self_attn.v_proj.weight',
'vision_model.encoder.layers.9.mlp.fc2.bias',
'vision_model.encoder.layers.6.self_attn.k_proj.bias',
'vision_model.encoder.layers.3.mlp.fc2.bias',
'vision_model.encoder.layers.13.self_attn.out_proj.bias',
'vision_model.encoder.layers.3.self_attn.k_proj.bias',
'vision_model.encoder.layers.15.self_attn.v_proj.bias',
'vision_model.encoder.layers.15.layer_norm1.bias',
'vision_model.encoder.layers.21.layer_norm2.weight',
'vision_model.encoder.layers.1.layer_norm1.bias',
'vision_model.encoder.layers.5.self_attn.v_proj.weight',
'vision_model.encoder.layers.1.layer_norm2.weight',
'vision_model.encoder.layers.0.layer_norm2.weight',
'vision_model.encoder.layers.0.self_attn.out_proj.weight',
'vision_model.encoder.layers.3.self_attn.k_proj.weight',
'vision_model.encoder.layers.5.mlp.fc2.weight',
'vision_model.encoder.layers.7.layer_norm1.bias',
'vision_model.encoder.layers.14.self_attn.k_proj.bias',
'vision_model.encoder.layers.4.self_attn.v_proj.bias',
'vision_model.encoder.layers.9.layer_norm2.weight',
'vision_model.encoder.layers.18.self_attn.k_proj.bias',
'vision_model.encoder.layers.22.self_attn.k_proj.weight',
'vision_model.encoder.layers.20.self_attn.out_proj.bias',
'vision_model.encoder.layers.8.self_attn.k_proj.weight',
'vision_model.encoder.layers.10.self_attn.q_proj.bias',
'vision_model.encoder.layers.13.mlp.fc1.weight', 'logit_scale',
'vision_model.encoder.layers.4.self_attn.v_proj.weight',
'vision_model.encoder.layers.14.self_attn.out_proj.weight',
'vision_model.encoder.layers.12.mlp.fc1.weight',
'vision_model.encoder.layers.16.self_attn.k_proj.bias',
'vision_model.encoder.layers.15.mlp.fc1.weight']
- This IS expected if you are initializing CLIPTextModel from the checkpoint
of a model trained on another task or with another architecture (e.g.
initializing a BertForSequenceClassification model from a BertForPreTraining
model).
- This IS NOT expected if you are initializing CLIPTextModel from the
checkpoint of a model that you expect to be exactly identical (initializing a
BertForSequenceClassification model from a BertForSequenceClassification
model).
loading text encoder: <All keys matched successfully>
Replace CrossAttention.forward to use xformers
caching latents.
100%|██████████| 40/40 [00:03<00:00, 10.95it/s]
import network module: networks.lora
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
enable LoRA for text encoder
enable LoRA for U-Net
prepare optimizer, data loader etc.
Traceback (most recent call last):
File "C:\Users\ken\Desktop\sd.webui\kohya\kohya_ss\train_network.py", line
507, in <module>
train(args)
File "C:\Users\ken\Desktop\sd.webui\kohya\kohya_ss\train_network.py", line
150, in train
optimizer_name, optimizer_args, optimizer =
train_util.get_optimizer(args, trainable_params)
File "C:\Users\ken\Desktop\sd.webui\kohya\kohya_ss\library\train_util.py",
line 1536, in get_optimizer
assert optimizer_type is None or optimizer_type == "", "both option
use_8bit_adam and optimizer_type are specified / use_8bit_adamとoptimizer_type
の両方のオプションが指定されています"
AssertionError: both option use_8bit_adam and optimizer_type are specified /
use_8bit_adamとoptimizer_typeの両方のオプションが指定されています
Traceback (most recent call last):
File "C:\Users\ken\AppData\Local\Programs\Python\Python310\lib\runpy.py",
line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\ken\AppData\Local\Programs\Python\Python310\lib\runpy.py",
line 86, in _run_code
exec(code, run_globals)
File
"C:\Users\ken\Desktop\sd.webui\kohya\kohya_ss\venv\Scripts\accelerate.exe\__main__.py",
line 7, in <module>
File
"C:\Users\ken\Desktop\sd.webui\kohya\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py",
line 45, in main
args.func(args)
File
"C:\Users\ken\Desktop\sd.webui\kohya\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py",
line 1104, in launch_command
simple_launcher(args)
File
"C:\Users\ken\Desktop\sd.webui\kohya\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py",
line 567, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode,
cmd=cmd)
subprocess.CalledProcessError: Command
'['C:\\Users\\ken\\Desktop\\sd.webui\\kohya\\kohya_ss\\venv\\Scripts\\python.exe',
'train_network.py', '--enable_bucket',
'--pretrained_model_name_or_path=C:/Users/ken/Desktop/sd.webui/webui/models/Stable-diffusion/AOM2-r34-SF1.1.safetensors',
'--train_data_dir=C:/Users/ken/Desktop/99+/1', '--resolution=512,512',
'--output_dir=C:/Users/ken/Desktop/99+/2',
'--logging_dir=C:/Users/ken/Desktop/99+/3', '--network_alpha=1',
'--save_model_as=safetensors', '--network_module=networks.lora',
'--text_encoder_lr=5e-5', '--unet_lr=0.0001', '--network_dim=8',
'--output_name=last', '--lr_scheduler_num_cycles=1',
'--learning_rate=0.0001', '--lr_scheduler=cosine', '--lr_warmup_steps=400',
'--train_batch_size=1', '--max_train_steps=4000', '--save_every_n_epochs=1',
'--mixed_precision=fp16', '--save_precision=fp16', '--seed=1234',
'--cache_latents', '--optimizer_type=AdamW', '--bucket_reso_steps=64',
'--xformers', '--use_8bit_adam', '--bucket_no_upscale']' returned non-zero
exit status 1.
--
※ Posted from: PTT (ptt.cc), origin: 123.194.170.24 (Taiwan)
※ Article URL: https://webptt.com/cn.aspx?n=bbs/AI_Art/M.1677376959.A.C4F.html
1F:推 wres666: You didn't paste the key part of the error message. Put the 02/26 15:54
2F:→ wres666: complete log on paste.gg and share the link. 02/26 15:54
※ Edited: zxsx811 (123.194.170.24 Taiwan), 02/26/2023 17:17:44
3F:推 fasu10324: Did OP end up solving this? I'm hitting the same error. 02/26 19:10
4F:→ mg0825: In the advanced settings, uncheck "Use 8bit adam" first. 02/26 23:41
Thanks, unchecking it really did fix it.
※ Edited: zxsx811 (123.194.170.24 Taiwan), 02/27/2023 01:12:05
6F:推 fasu10324: Ah, it really does seem OK now. What does that option do? Thanks for the help. 02/27 12:14
7F:推 danny0108: +1 03/06 12:22
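
Why the fix works: the command printed in the log contains both --use_8bit_adam (apparently added by the "Use 8bit adam" checkbox mentioned in the comments) and --optimizer_type="AdamW" (the optimizer setting), and the assertion in kohya's train_util.get_optimizer() treats the two as mutually exclusive. Unchecking the box simply drops --use_8bit_adam so that only --optimizer_type remains. If you still want 8-bit Adam, a minimal sketch of the equivalent command line is below; it reuses the paths and hyperparameters from the log above, and it assumes your sd-scripts version accepts "AdamW8bit" as an optimizer_type (check the script's --help if unsure):

accelerate launch --num_cpu_threads_per_process=2 "train_network.py"
  --pretrained_model_name_or_path="C:/Users/ken/Desktop/sd.webui/webui/models/Stable-diffusion/AOM2-r34-SF1.1.safetensors"
  --train_data_dir="C:/Users/ken/Desktop/99+/1" --resolution=512,512
  --output_dir="C:/Users/ken/Desktop/99+/2" --logging_dir="C:/Users/ken/Desktop/99+/3"
  --network_module=networks.lora --network_dim=8 --network_alpha=1
  --text_encoder_lr=5e-5 --unet_lr=0.0001 --learning_rate=0.0001
  --lr_scheduler=cosine --lr_scheduler_num_cycles=1 --lr_warmup_steps=400
  --train_batch_size=1 --max_train_steps=4000 --save_every_n_epochs=1
  --mixed_precision=fp16 --save_precision=fp16 --seed=1234 --cache_latents
  --enable_bucket --bucket_reso_steps=64 --bucket_no_upscale --xformers
  --save_model_as=safetensors --output_name="last"
  --optimizer_type="AdamW8bit"

The command is wrapped here for readability; pass it as a single line (or with your shell's line-continuation character). Roughly speaking, the 8-bit Adam option (AdamW8bit, implemented via the bitsandbytes library) keeps the optimizer state in 8-bit precision to save VRAM; leaving the box unchecked just falls back to regular AdamW, so training runs the same apart from somewhat higher memory use.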