# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of nvidia/mit-b0 on the segments/sidewalk-semantic dataset. It achieves the following results on the evaluation set (a usage sketch follows the results list):
- Loss: 1.9042
- Mean Iou: 0.1600
- Mean Accuracy: 0.1997
- Overall Accuracy: 0.7338
- Per Category Iou: [nan, 0.27359520957005035, 0.6563592089876799, 0.0, 0.23344374046535918, 0.0, nan, 0.0, 0.0, 0.0, 0.5539341917024321, nan, nan, nan, nan, 0.0, 0.0, nan, 0.6213519498256361, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.8012808797206368, 0.0, 0.8609473035107046, nan, 0.0, 0.0, 0.0]
- Per Category Accuracy: [nan, 0.38598740280061317, 0.9344800917343116, 0.0, 0.23402267811135147, 0.0, nan, 0.0, 0.0, 0.0, 0.6574569071869553, nan, nan, nan, nan, 0.0, 0.0, nan, 0.889953470705536, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.9339123774958169, 0.0, 0.9562267789312698, nan, 0.0, 0.0, 0.0]
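The checkpoint can be loaded with the `transformers` library for semantic segmentation. Below is a minimal inference sketch, not code taken from this card: the checkpoint id is a placeholder (the Hub namespace is not shown here), and `SegformerFeatureExtractor` is used because it matches the Transformers release listed under Framework versions (newer releases expose the same functionality as `SegformerImageProcessor`).

```python
# Minimal inference sketch. CHECKPOINT is a placeholder: replace it with the
# actual Hub repository id or a local path to the fine-tuned weights.
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

CHECKPOINT = "segformer-b0-finetuned-segments-sidewalk-2"  # placeholder id

feature_extractor = SegformerFeatureExtractor.from_pretrained(CHECKPOINT)
model = SegformerForSemanticSegmentation.from_pretrained(CHECKPOINT)
model.eval()

image = Image.open("sidewalk.jpg").convert("RGB")  # any street-scene image
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, height / 4, width / 4)

# Upsample the logits to the original resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
label_map = upsampled.argmax(dim=1)[0]  # (height, width) tensor of class ids
```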
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
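The sketch below shows how these settings map onto `TrainingArguments` and the `Trainer` API for the framework versions listed at the end of this card. It is a reconstruction, not the exact training script: the dataset column names (`pixel_values`, `label`), the 90/10 train/validation split, the label count of 35, and the 20-step evaluation interval are assumptions inferred from this card and the dataset card.

```python
# Hedged reconstruction of the training setup; dataset column names, the
# train/validation split, num_labels=35, and the 20-step eval interval are
# assumptions, not values stated explicitly on this card.
from datasets import load_dataset
from transformers import (
    SegformerFeatureExtractor,
    SegformerForSemanticSegmentation,
    Trainer,
    TrainingArguments,
)

# segments/sidewalk-semantic may require accepting the dataset terms / `huggingface-cli login`.
raw = load_dataset("segments/sidewalk-semantic", split="train")
splits = raw.train_test_split(test_size=0.1, seed=42)  # assumed 90/10 split

feature_extractor = SegformerFeatureExtractor(reduce_labels=False)  # default SegFormer preprocessing

model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0",
    num_labels=35,  # sidewalk-semantic label count, including "unlabeled" (verify on the dataset card)
)

def preprocess(batch):
    # Convert PIL images and masks into pixel_values / labels tensors.
    return feature_extractor(batch["pixel_values"], batch["label"], return_tensors="pt")

train_ds = splits["train"].with_transform(preprocess)
eval_ds = splits["test"].with_transform(preprocess)

args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-sidewalk-2",
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",   # Adam betas/epsilon are the library defaults listed above
    evaluation_strategy="steps",  # the results table logs validation every 20 steps
    eval_steps=20,
    logging_steps=20,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```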
### Training results
Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
---|---|---|---|---|---|---|---|---|
2.8419 | 0.42 | 20 | 3.2243 | 0.1239 | 0.1973 | 0.6992 | [0.0, 0.221283072298205, 0.6482498250140304, 0.0, 0.36607695456244177, 0.013827775204570018, nan, 1.0254201659129828e-05, 0.0, 0.0, 0.5416500682753081, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5339731316050166, 0.0, 0.0006440571922786744, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7498440701547007, 0.0, 0.7659222854515146, 0.0, 0.0, 0.0, 0.0] | [nan, 0.3346613609105567, 0.8582083544770268, 0.0, 0.5101472837243907, 0.015482685970504024, nan, 1.0366454154356502e-05, 0.0, 0.0, 0.6745826026281508, nan, nan, nan, nan, 0.0, 0.0, nan, 0.8093545247364923, 0.0, 0.0006458279514337381, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.9324806212895075, 0.0, 0.797418357423677, nan, 0.0, 0.0, 0.0] |
2.3662 | 0.83 | 40 | 2.5147 | 0.1402 | 0.1798 | 0.6989 | [nan, 0.19549119549985344, 0.6036027201962391, 0.0, 0.0019222772099991463, 0.000300503343099692, nan, 0.0, 0.0, 0.0, 0.47853978429259575, nan, nan, nan, nan, 0.0, 0.0, nan, 0.5820555774612892, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.7898452112422248, 0.0, 0.8521568687502872, nan, 0.0, 0.0, 0.0] | [nan, 0.25107981668136076, 0.9396577375184628, 0.0, 0.0019233683746435017, 0.0003025228242666523, nan, 0.0, 0.0, 0.0, 0.5513810659584686, nan, nan, nan, nan, 0.0, 0.0, nan, 0.8953553793561865, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.9300976130892274, 0.0, 0.9250758451014455, nan, 0.0, 0.0, 0.0] |
2.1745 | 1.25 | 60 | 2.0428 | 0.1485 | 0.1882 | 0.7162 | [nan, 0.24240648716131, 0.6262941164542789, 0.0, 0.04440846090507781, 0.0, nan, 0.0, 0.0, 0.0, 0.522913696330921, nan, nan, nan, nan, 0.0, 0.0, nan, 0.6194890050543631, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.7947837731119848, 0.0, 0.8609570537373858, nan, 0.0, 0.0, 0.0] | [nan, 0.3318909301752965, 0.9392945927202885, 0.0, 0.04443587164684973, 0.0, nan, 0.0, 0.0, 0.0, 0.6149676720993105, nan, nan, nan, nan, 0.0, 0.0, nan, 0.8836542113759377, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.9409947331534898, 0.0, 0.9509521157666382, nan, 0.0, 0.0, 0.0] |
1.986 | 1.67 | 80 | 1.9042 | 0.1600 | 0.1997 | 0.7338 | [nan, 0.27359520957005035, 0.6563592089876799, 0.0, 0.23344374046535918, 0.0, nan, 0.0, 0.0, 0.0, 0.5539341917024321, nan, nan, nan, nan, 0.0, 0.0, nan, 0.6213519498256361, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.8012808797206368, 0.0, 0.8609473035107046, nan, 0.0, 0.0, 0.0] | [nan, 0.38598740280061317, 0.9344800917343116, 0.0, 0.23402267811135147, 0.0, nan, 0.0, 0.0, 0.0, 0.6574569071869553, nan, nan, nan, nan, 0.0, 0.0, nan, 0.889953470705536, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.9339123774958169, 0.0, 0.9562267789312698, nan, 0.0, 0.0, 0.0] |
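The mean and per-category numbers above are the kind of output produced by the `evaluate` library's `mean_iou` metric; the `nan` entries correspond to label ids that never occur in the evaluation masks. The snippet below is a hedged sketch with placeholder arrays; the exact evaluation code for this run is not shown on the card, and the `ignore_index` value is an assumption.

```python
# Hedged sketch of the metric computation with placeholder data; the actual
# evaluation pipeline for this run is not part of the card.
import evaluate
import numpy as np

metric = evaluate.load("mean_iou")

# Lists of (H, W) integer label maps: model predictions and ground-truth masks.
predictions = [np.random.randint(0, 35, (128, 128))]  # placeholder
references = [np.random.randint(0, 35, (128, 128))]   # placeholder

results = metric.compute(
    predictions=predictions,
    references=references,
    num_labels=35,     # 35 label ids, matching the length of the per-category arrays above
    ignore_index=255,  # assumption: 255 marks ignored pixels
    reduce_labels=False,
)

print(results["mean_iou"], results["mean_accuracy"], results["overall_accuracy"])
print(results["per_category_iou"])  # one entry per label id; nan for labels that never appear
```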
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1