---
license: mit
pipeline_tag: image-segmentation
library_name: ben2
tags:
- BEN2
- background-remove
- mask-generation
- Dichotomous image segmentation
- background remove
- foreground
- background
- remove background
- pytorch
- model_hub_mixin
- pytorch_model_hub_mixin
---

# BEN2: Background Erase Network

[![arXiv](https://img.shields.io/badge/arXiv-2501.06230-b31b1b.svg)](https://arxiv.org/abs/2501.06230)
[![GitHub](https://img.shields.io/badge/GitHub-BEN2-black.svg)](https://github.com/PramaLLC/BEN2/)
[![Website](https://img.shields.io/badge/Website-backgrounderase.net-104233)](https://backgrounderase.net)

## Overview
BEN2 (Background Erase Network) introduces a novel approach to foreground segmentation through its Confidence Guided Matting (CGM) pipeline: a refiner network targets and reprocesses the pixels where the base model is least confident, producing more precise and reliable matting results. This model is built on BEN:
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/ben-using-confidence-guided-matting-for/dichotomous-image-segmentation-on-dis-vd)](https://paperswithcode.com/sota/dichotomous-image-segmentation-on-dis-vd?p=ben-using-confidence-guided-matting-for)
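
The CGM idea can be sketched in a few lines. This is an illustrative sketch only: `base_model`, `refiner`, and the confidence threshold are hypothetical stand-ins, not part of the ben2 API.

```python
import torch

def cgm_matting(base_model, refiner, image, conf_threshold=0.5):
    # Base pass: predict an alpha matte and a per-pixel confidence map.
    # (Hypothetical interface; illustrates the idea, not the actual model.)
    alpha, confidence = base_model(image)

    # Refine only the pixels where the base model is uncertain.
    low_conf = confidence < conf_threshold     # boolean mask of uncertain pixels
    refined = refiner(image, alpha, low_conf)  # refiner focuses on the masked regions

    # Keep confident base predictions; replace uncertain ones with refined values.
    return torch.where(low_conf, refined, alpha)
```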




## BEN2 access
BEN2 was trained on the DIS5k dataset and our 22K-image proprietary segmentation dataset. The enhanced model delivers superior performance in hair matting, 4K processing, object segmentation, and edge refinement. Our base model is open source. To try the full model through our free web demo, or to integrate BEN2 into your project with our API:
- 🌐 [backgrounderase.net](https://backgrounderase.net)


## Contact us
- For access to our commercial model email us at [email protected]
- Our website: https://prama.llc/
- Follow us on X: https://x.com/PramaResearch/


## Installation

```bash
pip install -e "git+https://github.com/PramaLLC/BEN2.git#egg=ben2"
```
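
To verify the install, a quick import check (uses the same `BEN_Base` entry point as the snippets below):

```bash
python -c "from ben2 import BEN_Base; print('BEN2 import OK')"
```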

## Quick start code

```python
from ben2 import BEN_Base
from PIL import Image
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

file = "./image.png"  # input image

model = BEN_Base.from_pretrained("PramaLLC/BEN2")
model.to(device).eval()

image = Image.open(file)
# refine_foreground is an extra postprocessing step that increases inference
# time but can improve matting edges. The default value is False.
foreground = model.inference(image, refine_foreground=False)

foreground.save("./foreground.png")
```
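
The saved foreground is a PNG with an alpha channel, so it can be composited onto any backdrop with PIL. A minimal sketch, assuming the output is RGBA and that `background.jpg` is an image you supply:

```python
from PIL import Image

foreground = Image.open("./foreground.png").convert("RGBA")
background = Image.open("./background.jpg").convert("RGBA").resize(foreground.size)

# Paste the foreground over the background, using its alpha channel as the mask.
composite = Image.alpha_composite(background, foreground)
composite.convert("RGB").save("./composite.jpg")
```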


## Batch image processing

```python
from ben2 import BEN_Base
from PIL import Image
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = BEN_Base.from_pretrained("PramaLLC/BEN2")
model.to(device).eval()

file1 = "./image1.png"  # input image 1
file2 = "./image2.png"  # input image 2
image1 = Image.open(file1)
image2 = Image.open(file2)

# We recommend that the batch size not exceed 3 for consumer GPUs, as there are
# minimal inference gains due to our custom batch processing for the MVANet
# decoding steps.
foregrounds = model.inference([image1, image2])
foregrounds[0].save("./foreground1.png")
foregrounds[1].save("./foreground2.png")
```
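
For larger image sets you can stay within the recommended batch size by processing files in chunks. A sketch under the same setup as above (the `./images` folder and the chunking loop are our assumptions, not part of ben2):

```python
from pathlib import Path
from PIL import Image

paths = sorted(Path("./images").glob("*.png"))  # assumed input folder
batch_size = 3  # recommended maximum for consumer GPUs

for i in range(0, len(paths), batch_size):
    chunk = paths[i:i + batch_size]
    images = [Image.open(p) for p in chunk]
    foregrounds = model.inference(images)  # `model` loaded as in the snippet above
    for path, fg in zip(chunk, foregrounds):
        fg.save(f"./foreground_{path.stem}.png")
```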



## BEN2 video segmentation

[![BEN2 Demo](https://img.youtube.com/vi/skEXiIHQcys/0.jpg)](https://www.youtube.com/watch?v=skEXiIHQcys)

Video processing requires ffmpeg; install it first:

```bash
sudo apt update
sudo apt install ffmpeg
```

```python
from ben2 import BEN_Base
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

video_path = "/path_to_your_video.mp4"  # input video

model = BEN_Base.from_pretrained("PramaLLC/BEN2")
model.to(device).eval()

model.segment_video(
    video_path=video_path,
    output_path="./",             # Output is saved as foreground.webm or foreground.mp4. The default is "./".
    fps=0,                        # 0 lets CV2 detect the fps of the original video. The default is 0.
    refine_foreground=False,      # An extra postprocessing step that increases inference time but can improve matting edges. The default is False.
    batch=1,                      # We recommend that the batch size not exceed 3 for consumer GPUs, as inference gains are minimal. The default is 1.
    print_frames_processed=True,  # Report which frame is being processed. The default is True.
    webm=False,                   # True outputs a video with an alpha layer; False outputs an mp4. The default is False.
    rgb_value=(0, 255, 0),        # Background RGB value for the mp4 output (used only when webm is False). The default is green (0, 255, 0).
)
```
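
With the defaults above, the output is an mp4 with a solid green background. One way to composite it onto a new backdrop is to key out the green with the ffmpeg installed earlier; the filenames and chromakey tolerances below are assumptions to tune for your footage:

```bash
ffmpeg -loop 1 -i background.jpg -i foreground.mp4 \
  -filter_complex "[1:v]chromakey=0x00FF00:0.15:0.05[fg];[0:v][fg]overlay[out]" \
  -map "[out]" -shortest output.mp4
```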



## BEN2 evaluation
![Model Comparison](BEN2_demo_pictures/model_comparison.png)

Note: RMBG 2.0 did not preserve the DIS 5k validation dataset as a held-out set.

![Example 1](BEN2_demo_pictures/grid_example1.png)
![Example 2](BEN2_demo_pictures/grid_example2.png)
![Example 3](BEN2_demo_pictures/grid_example3.png)
![Example 6](BEN2_demo_pictures/grid_example6.png)
![Example 7](BEN2_demo_pictures/grid_example7.png)