I couldn't find all of the files mentioned in the error under the "Files and versions" tab.
Error: no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory D:\ComfyUI-aki-v1.4-v2\models\segformer_b2_clothes.
The "Files and versions" tab does not contain tf_model.h5, model.ckpt.index, or flax_model.msgpack.
Might it be something with your clone of this repo? There are pytorch_model.bin and model.safetensors files, as you can see in the "Files and versions" tab.
I have downloaded the pytorch_model.bin and model.safetensors files, but the error message also mentions tf_model.h5, model.ckpt.index, and flax_model.msgpack. I don't know where to get those.
You shouldn't need all of those files; they are all the same model weights, just in different formats, and the first error you posted only means that at least one of them must be present. There is no need to have the PyTorch, JAX, TF, and safetensors weights all at once to load a model. I think the path to your folder might be wrong.
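As a quick sanity check, here is a minimal sketch (assuming the transformers library is installed) that loads the model directly from the local folder named in your error message; it only requires config.json plus one weights file (pytorch_model.bin or model.safetensors) to be present in that directory:

```python
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

# Path taken from the earlier error message - adjust if your folder lives elsewhere.
model_dir = r"D:\ComfyUI-aki-v1.4-v2\models\segformer_b2_clothes"

# Requires preprocessor_config.json in model_dir; skip this line if it is missing.
processor = SegformerImageProcessor.from_pretrained(model_dir)

# Loads whichever weight format is present (safetensors or pytorch_model.bin).
model = SegformerForSemanticSegmentation.from_pretrained(model_dir)
print(model.config.num_labels)  # this clothes-segmentation checkpoint has 18 labels
```

If this script fails with the same "no file named ..." error, the folder path really is the problem; if it loads fine, the issue is in how the ComfyUI node points at the folder.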
Okay. I am now encountering a new error; please take a look. The content is relatively long.
Error(s) in loading state_dict for SegformerForSemanticSegmentation:
size mismatch for segformer.encoder.patch_embeddings.0.proj.weight: copying a param with shape torch.Size([64, 3, 7, 7]) from checkpoint, the shape in current model is torch.Size([32, 3, 7, 7]).
size mismatch for segformer.encoder.patch_embeddings.0.proj.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.patch_embeddings.0.layer_norm.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.patch_embeddings.0.layer_norm.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.patch_embeddings.1.proj.weight: copying a param with shape torch.Size([128, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 32, 3, 3]).
size mismatch for segformer.encoder.patch_embeddings.1.proj.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.patch_embeddings.1.layer_norm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.patch_embeddings.1.layer_norm.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.patch_embeddings.2.proj.weight: copying a param with shape torch.Size([320, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([160, 64, 3, 3]).
size mismatch for segformer.encoder.patch_embeddings.2.proj.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.patch_embeddings.2.layer_norm.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.patch_embeddings.2.layer_norm.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.patch_embeddings.3.proj.weight: copying a param with shape torch.Size([512, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 160, 3, 3]).
size mismatch for segformer.encoder.patch_embeddings.3.proj.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.patch_embeddings.3.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.patch_embeddings.3.layer_norm.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.0.0.layer_norm_1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.0.layer_norm_1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.0.attention.self.query.weight: copying a param with shape torch.Size([64, 64]) from checkpoint, the shape in current model is torch.Size([32, 32]).
size mismatch for segformer.encoder.block.0.0.attention.self.query.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.0.attention.self.key.weight: copying a param with shape torch.Size([64, 64]) from checkpoint, the shape in current model is torch.Size([32, 32]).
size mismatch for segformer.encoder.block.0.0.attention.self.key.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.0.attention.self.value.weight: copying a param with shape torch.Size([64, 64]) from checkpoint, the shape in current model is torch.Size([32, 32]).
size mismatch for segformer.encoder.block.0.0.attention.self.value.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.0.attention.self.sr.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([32, 32, 8, 8]).
size mismatch for segformer.encoder.block.0.0.attention.self.sr.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.0.attention.self.layer_norm.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.0.attention.self.layer_norm.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.0.attention.output.dense.weight: copying a param with shape torch.Size([64, 64]) from checkpoint, the shape in current model is torch.Size([32, 32]).
size mismatch for segformer.encoder.block.0.0.attention.output.dense.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.0.layer_norm_2.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.0.layer_norm_2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.0.mlp.dense1.weight: copying a param with shape torch.Size([256, 64]) from checkpoint, the shape in current model is torch.Size([128, 32]).
size mismatch for segformer.encoder.block.0.0.mlp.dense1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for segformer.encoder.block.0.0.mlp.dwconv.dwconv.weight: copying a param with shape torch.Size([256, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 1, 3, 3]).
size mismatch for segformer.encoder.block.0.0.mlp.dwconv.dwconv.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for segformer.encoder.block.0.0.mlp.dense2.weight: copying a param with shape torch.Size([64, 256]) from checkpoint, the shape in current model is torch.Size([32, 128]).
size mismatch for segformer.encoder.block.0.0.mlp.dense2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.1.layer_norm_1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.1.layer_norm_1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.1.attention.self.query.weight: copying a param with shape torch.Size([64, 64]) from checkpoint, the shape in current model is torch.Size([32, 32]).
size mismatch for segformer.encoder.block.0.1.attention.self.query.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.1.attention.self.key.weight: copying a param with shape torch.Size([64, 64]) from checkpoint, the shape in current model is torch.Size([32, 32]).
size mismatch for segformer.encoder.block.0.1.attention.self.key.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.1.attention.self.value.weight: copying a param with shape torch.Size([64, 64]) from checkpoint, the shape in current model is torch.Size([32, 32]).
size mismatch for segformer.encoder.block.0.1.attention.self.value.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.1.attention.self.sr.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([32, 32, 8, 8]).
size mismatch for segformer.encoder.block.0.1.attention.self.sr.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.1.attention.self.layer_norm.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.1.attention.self.layer_norm.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.1.attention.output.dense.weight: copying a param with shape torch.Size([64, 64]) from checkpoint, the shape in current model is torch.Size([32, 32]).
size mismatch for segformer.encoder.block.0.1.attention.output.dense.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.1.layer_norm_2.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.1.layer_norm_2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.0.1.mlp.dense1.weight: copying a param with shape torch.Size([256, 64]) from checkpoint, the shape in current model is torch.Size([128, 32]).
size mismatch for segformer.encoder.block.0.1.mlp.dense1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for segformer.encoder.block.0.1.mlp.dwconv.dwconv.weight: copying a param with shape torch.Size([256, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 1, 3, 3]).
size mismatch for segformer.encoder.block.0.1.mlp.dwconv.dwconv.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for segformer.encoder.block.0.1.mlp.dense2.weight: copying a param with shape torch.Size([64, 256]) from checkpoint, the shape in current model is torch.Size([32, 128]).
size mismatch for segformer.encoder.block.0.1.mlp.dense2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.block.1.0.layer_norm_1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.0.layer_norm_1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.0.attention.self.query.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for segformer.encoder.block.1.0.attention.self.query.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.0.attention.self.key.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for segformer.encoder.block.1.0.attention.self.key.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.0.attention.self.value.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for segformer.encoder.block.1.0.attention.self.value.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.0.attention.self.sr.weight: copying a param with shape torch.Size([128, 128, 4, 4]) from checkpoint, the shape in current model is torch.Size([64, 64, 4, 4]).
size mismatch for segformer.encoder.block.1.0.attention.self.sr.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.0.attention.self.layer_norm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.0.attention.self.layer_norm.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.0.attention.output.dense.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for segformer.encoder.block.1.0.attention.output.dense.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.0.layer_norm_2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.0.layer_norm_2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.0.mlp.dense1.weight: copying a param with shape torch.Size([512, 128]) from checkpoint, the shape in current model is torch.Size([256, 64]).
size mismatch for segformer.encoder.block.1.0.mlp.dense1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.1.0.mlp.dwconv.dwconv.weight: copying a param with shape torch.Size([512, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
size mismatch for segformer.encoder.block.1.0.mlp.dwconv.dwconv.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.1.0.mlp.dense2.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([64, 256]).
size mismatch for segformer.encoder.block.1.0.mlp.dense2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.1.layer_norm_1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.1.layer_norm_1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.1.attention.self.query.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for segformer.encoder.block.1.1.attention.self.query.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.1.attention.self.key.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for segformer.encoder.block.1.1.attention.self.key.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.1.attention.self.value.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for segformer.encoder.block.1.1.attention.self.value.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.1.attention.self.sr.weight: copying a param with shape torch.Size([128, 128, 4, 4]) from checkpoint, the shape in current model is torch.Size([64, 64, 4, 4]).
size mismatch for segformer.encoder.block.1.1.attention.self.sr.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.1.attention.self.layer_norm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.1.attention.self.layer_norm.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.1.attention.output.dense.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([64, 64]).
size mismatch for segformer.encoder.block.1.1.attention.output.dense.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.1.layer_norm_2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.1.layer_norm_2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.1.1.mlp.dense1.weight: copying a param with shape torch.Size([512, 128]) from checkpoint, the shape in current model is torch.Size([256, 64]).
size mismatch for segformer.encoder.block.1.1.mlp.dense1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.1.1.mlp.dwconv.dwconv.weight: copying a param with shape torch.Size([512, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
size mismatch for segformer.encoder.block.1.1.mlp.dwconv.dwconv.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.1.1.mlp.dense2.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([64, 256]).
size mismatch for segformer.encoder.block.1.1.mlp.dense2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.block.2.0.layer_norm_1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.0.layer_norm_1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.0.attention.self.query.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([160, 160]).
size mismatch for segformer.encoder.block.2.0.attention.self.query.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.0.attention.self.key.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([160, 160]).
size mismatch for segformer.encoder.block.2.0.attention.self.key.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.0.attention.self.value.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([160, 160]).
size mismatch for segformer.encoder.block.2.0.attention.self.value.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.0.attention.self.sr.weight: copying a param with shape torch.Size([320, 320, 2, 2]) from checkpoint, the shape in current model is torch.Size([160, 160, 2, 2]).
size mismatch for segformer.encoder.block.2.0.attention.self.sr.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.0.attention.self.layer_norm.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.0.attention.self.layer_norm.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.0.attention.output.dense.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([160, 160]).
size mismatch for segformer.encoder.block.2.0.attention.output.dense.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.0.layer_norm_2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.0.layer_norm_2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.0.mlp.dense1.weight: copying a param with shape torch.Size([1280, 320]) from checkpoint, the shape in current model is torch.Size([640, 160]).
size mismatch for segformer.encoder.block.2.0.mlp.dense1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for segformer.encoder.block.2.0.mlp.dwconv.dwconv.weight: copying a param with shape torch.Size([1280, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([640, 1, 3, 3]).
size mismatch for segformer.encoder.block.2.0.mlp.dwconv.dwconv.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for segformer.encoder.block.2.0.mlp.dense2.weight: copying a param with shape torch.Size([320, 1280]) from checkpoint, the shape in current model is torch.Size([160, 640]).
size mismatch for segformer.encoder.block.2.0.mlp.dense2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.1.layer_norm_1.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.1.layer_norm_1.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.1.attention.self.query.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([160, 160]).
size mismatch for segformer.encoder.block.2.1.attention.self.query.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.1.attention.self.key.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([160, 160]).
size mismatch for segformer.encoder.block.2.1.attention.self.key.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.1.attention.self.value.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([160, 160]).
size mismatch for segformer.encoder.block.2.1.attention.self.value.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.1.attention.self.sr.weight: copying a param with shape torch.Size([320, 320, 2, 2]) from checkpoint, the shape in current model is torch.Size([160, 160, 2, 2]).
size mismatch for segformer.encoder.block.2.1.attention.self.sr.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.1.attention.self.layer_norm.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.1.attention.self.layer_norm.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.1.attention.output.dense.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([160, 160]).
size mismatch for segformer.encoder.block.2.1.attention.output.dense.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.1.layer_norm_2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.1.layer_norm_2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.2.1.mlp.dense1.weight: copying a param with shape torch.Size([1280, 320]) from checkpoint, the shape in current model is torch.Size([640, 160]).
size mismatch for segformer.encoder.block.2.1.mlp.dense1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for segformer.encoder.block.2.1.mlp.dwconv.dwconv.weight: copying a param with shape torch.Size([1280, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([640, 1, 3, 3]).
size mismatch for segformer.encoder.block.2.1.mlp.dwconv.dwconv.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for segformer.encoder.block.2.1.mlp.dense2.weight: copying a param with shape torch.Size([320, 1280]) from checkpoint, the shape in current model is torch.Size([160, 640]).
size mismatch for segformer.encoder.block.2.1.mlp.dense2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.block.3.0.layer_norm_1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.0.layer_norm_1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.0.attention.self.query.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for segformer.encoder.block.3.0.attention.self.query.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.0.attention.self.key.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for segformer.encoder.block.3.0.attention.self.key.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.0.attention.self.value.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for segformer.encoder.block.3.0.attention.self.value.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.0.attention.output.dense.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for segformer.encoder.block.3.0.attention.output.dense.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.0.layer_norm_2.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.0.layer_norm_2.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.0.mlp.dense1.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for segformer.encoder.block.3.0.mlp.dense1.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for segformer.encoder.block.3.0.mlp.dwconv.dwconv.weight: copying a param with shape torch.Size([2048, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1, 3, 3]).
size mismatch for segformer.encoder.block.3.0.mlp.dwconv.dwconv.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for segformer.encoder.block.3.0.mlp.dense2.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for segformer.encoder.block.3.0.mlp.dense2.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.1.layer_norm_1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.1.layer_norm_1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.1.attention.self.query.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for segformer.encoder.block.3.1.attention.self.query.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.1.attention.self.key.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for segformer.encoder.block.3.1.attention.self.key.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.1.attention.self.value.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for segformer.encoder.block.3.1.attention.self.value.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.1.attention.output.dense.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for segformer.encoder.block.3.1.attention.output.dense.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.1.layer_norm_2.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.1.layer_norm_2.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.block.3.1.mlp.dense1.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for segformer.encoder.block.3.1.mlp.dense1.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for segformer.encoder.block.3.1.mlp.dwconv.dwconv.weight: copying a param with shape torch.Size([2048, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1, 3, 3]).
size mismatch for segformer.encoder.block.3.1.mlp.dwconv.dwconv.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for segformer.encoder.block.3.1.mlp.dense2.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for segformer.encoder.block.3.1.mlp.dense2.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.layer_norm.0.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.layer_norm.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for segformer.encoder.layer_norm.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.layer_norm.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for segformer.encoder.layer_norm.2.weight: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.layer_norm.2.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for segformer.encoder.layer_norm.3.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for segformer.encoder.layer_norm.3.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decode_head.linear_c.0.proj.weight: copying a param with shape torch.Size([768, 64]) from checkpoint, the shape in current model is torch.Size([256, 32]).
size mismatch for decode_head.linear_c.0.proj.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decode_head.linear_c.1.proj.weight: copying a param with shape torch.Size([768, 128]) from checkpoint, the shape in current model is torch.Size([256, 64]).
size mismatch for decode_head.linear_c.1.proj.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decode_head.linear_c.2.proj.weight: copying a param with shape torch.Size([768, 320]) from checkpoint, the shape in current model is torch.Size([256, 160]).
size mismatch for decode_head.linear_c.2.proj.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decode_head.linear_c.3.proj.weight: copying a param with shape torch.Size([768, 512]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for decode_head.linear_c.3.proj.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decode_head.linear_fuse.weight: copying a param with shape torch.Size([768, 3072, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 1024, 1, 1]).
size mismatch for decode_head.batch_norm.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decode_head.batch_norm.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decode_head.batch_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decode_head.batch_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decode_head.classifier.weight: copying a param with shape torch.Size([18, 768, 1, 1]) from checkpoint, the shape in current model is torch.Size([2, 256, 1, 1]).
size mismatch for decode_head.classifier.bias: copying a param with shape torch.Size([18]) from checkpoint, the shape in current model is torch.Size([2]).
You may consider adding ignore_mismatched_sizes=True in the model from_pretrained method.
It does seem like you're using the wrong model size to load these weights. Double-check that the base model you're instantiating is the SegFormer b2 variant and not a smaller one, since every mismatched shape in the log is exactly half (or less) of what the checkpoint expects. You can also try ignore_mismatched_sizes=True as the error suggests, though.
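For reference, a short sketch of both options (assuming the transformers library; the local path is the one from your earlier error message). The shapes in your log show the checkpoint uses b2-sized hidden dims (64/128/320/512, decoder 768, 18 labels), while the model being built uses the smaller default dims (32/64/160/256, decoder 256, 2 labels), which is what happens when the repo's config.json isn't picked up:

```python
from transformers import SegformerForSemanticSegmentation

model_dir = r"D:\ComfyUI-aki-v1.4-v2\models\segformer_b2_clothes"

# Option 1 (preferred): make sure the config.json shipped with this b2 checkpoint
# sits next to the weights in model_dir, so the architecture built matches the
# checkpoint shapes instead of the smaller defaults.
model = SegformerForSemanticSegmentation.from_pretrained(model_dir)

# Option 2 (last resort): ignore the mismatches. Every layer whose shape differs
# is left randomly initialized, so segmentation quality will likely be poor.
model = SegformerForSemanticSegmentation.from_pretrained(
    model_dir, ignore_mismatched_sizes=True
)
```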