id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2305.00379 | Image Completion via Dual-path Cooperative Filtering | Given the recent advances with image-generating algorithms, deep image
completion methods have made significant progress. However, state-of-art
methods typically provide poor cross-scene generalization, and generated masked
areas often contain blurry artifacts. Predictive filtering is a method for
restoring images, which predicts the most effective kernels based on the input
scene. Motivated by this approach, we address image completion as a filtering
problem. Deep feature-level semantic filtering is introduced to fill in missing
information, while preserving local structure and generating visually realistic
content. In particular, a Dual-path Cooperative Filtering (DCF) model is
proposed, where one path predicts dynamic kernels, and the other path extracts
multi-level features by using Fast Fourier Convolution to yield semantically
coherent reconstructions. Experiments on three challenging image completion
datasets show that our proposed DCF outperforms state-of-art methods. | Pourya Shamsolmoali, Masoumeh Zareapoor, Eric Granger | 2023-04-30T03:54:53Z | http://arxiv.org/abs/2305.00379v1 | # Image Completion via Dual-Path Cooperative Filtering
###### Abstract
Given the recent advances with image-generating algorithms, deep image completion methods have made significant progress. However, state-of-art methods typically provide poor cross-scene generalization, and generated masked areas often contain blurry artifacts. Predictive filtering is a method for restoring images, which predicts the most effective kernels based on the input scene. Motivated by this approach, we address image completion as a filtering problem. Deep feature-level semantic filtering is introduced to fill in missing information, while preserving local structure and generating visually realistic content. In particular, a Dual-path Cooperative Filtering (DCF) model is proposed, where one path predicts dynamic kernels, and the other path extracts multi-level features by using Fast Fourier Convolution to yield semantically coherent reconstructions. Experiments on three challenging image completion datasets show that our proposed DCF outperforms state-of-art methods.
Pourya Shamsolmoali\({}^{1}\), Masoumeh Zareapoor\({}^{2}\), Eric Granger\({}^{3}\)
\({}^{1}\)Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, China
\({}^{2}\)School of Automation, Shanghai Jiao Tong University, China
\({}^{3}\)Lab. d'imagerie, de vision et d'intelligence artificielle, Dept. of Systems Eng., ETS, Canada
Index Terms: Image Completion, Image Inpainting, Deep Learning.
## 1 Introduction
The objective of image completion (inpainting) is to recover images by reconstructing missing regions. Images with inpainted details must be visually and semantically consistent, so inpainting methods require robust generation. Generative adversarial networks (GANs) [2, 18] or auto-encoder networks [16, 20, 21] are generally used in current state-of-the-art models [10, 11, 19] to perform image completion. In these generative inpainting models, the input image is encoded into a latent space, which is then decoded to generate a new image. The quality of inpainting is entirely dependent on the data and training approach, since the procedure ignores priors (for example, smoothness among nearby pixels or features). It should be noted that, unlike pure generation, image inpainting has its own unique challenges. First, image inpainting requires that the completed images be clean, high-quality, and natural. These constraints separate image completion from synthesis tasks, which focus only on naturalness. Second, missing regions may appear in different forms, and the backgrounds could come from various scenes. Given these constraints, it is important for an inpainting method to have a strong capacity to generalize across missing regions. Recent generative networks have made substantial progress in image completion, but they still have a long way to go before they can address the aforementioned problems.
For instance, RFRNet [7] uses feature reasoning on the auto-encoder architecture for the task of image inpainting. As shown in Fig. 1, RFRNet produces some artifacts in output images. JPGNet and MISF [5, 8] address the shortcomings of generative inpainting [7, 12, 15] by reducing artifacts through image-level predictive filtering. Image-level predictive filtering reconstructs pixels from their neighbors, with filtering kernels computed adaptively from the input. JPGNet is therefore able to retrieve the local structure while eliminating artifacts. As seen in Fig. 1, JPGNet smooths out artifacts more effectively than RFRNet. However, many details may be lost, and the actual structures are not reconstructed. LaMa [19] is a recent image inpainting approach that uses Fast Fourier Convolution (FFC) [3] inside its ResNet-based LaMa-Fourier model to address the lack of receptive field needed to produce repeated patterns in the missing areas. Previously, researchers struggled with global self-attention [22] and its computational complexity, and were still unable to recover repeated man-made structures as effectively as LaMa. Nonetheless, as the missing regions get bigger and cross object boundaries, LaMa creates faded structures.
Figure 1: Examples of an image completed with our DCF model compared to baseline methods on the Paris dataset. DCF generates high-fidelity and more realistic images.
In [12], the authors adopt LaMa as the base network and capture various types of missing information by utilizing additional types of masks. They use more damaged images in the training phase to improve robustness; however, such a training strategy is unproductive. Transformer-based approaches [20, 23] have recently attracted considerable interest, even though structures can only be estimated within a low-resolution coarse image and good textures cannot be produced beyond this point. Recent diffusion-based inpainting models [13, 17] have pushed the limits of generative models by using image information to sample the unmasked areas or by using a score-based formulation to generate unconditional inpainted images; however, these approaches are not efficient in real-world applications.
To address this problem, we introduce a new neural network architecture that is motivated by the adaptability of predictive filtering and uses a large receptive field to produce repeating patterns. In particular, this paper makes two key contributions. First, semantic filtering is introduced to fill in missing image regions by expanding image-level filtering into feature-level filtering. Second, a Dual-path Cooperative Filtering (DCF) model is introduced that integrates two semantically connected networks - a kernel prediction network and a semantic image filtering network - to enhance image details.
The semantic filtering network supplies multi-level features to the kernel prediction network, while the kernel prediction network provides dynamic kernels to the semantic filtering network. In addition, for efficient reuse of high-frequency features, FFC [3] residual blocks are utilized in the semantic filtering network to better synthesize the missing regions of an image, leading to improved performance on textures and structures. By linearly integrating neighboring pixels or features, DCF is capable of reconstructing them with a smoothness prior across neighbors. Therefore, DCF utilizes both semantic and pixel-level filling for accurate inpainting. As shown in Fig. 1, the proposed model produces high-fidelity and realistic images. Furthermore, in comparison with existing methods, our technique involves a dual-path network with a dynamic convolution operation that modifies the convolution parameters based on the input, allowing strong generalization. A comprehensive set of experiments conducted on three challenging benchmark datasets (CelebA-HQ [6], Places2 [24], and Paris StreetView [4]) shows that our proposed method yields better qualitative and quantitative results than state-of-the-art methods.
## 2 Methodology
Predictive filtering is a popular method for restoring images that is often used for image denoising tasks [14]. We define image completion as pixel-wise predictive filtering:
\[I_{c}=I_{m}\vartriangle T, \tag{1}\]
in which \(I_{c}\in\mathbb{R}^{(H\times W\times 3)}\) represents a complete image and \(I_{m}\in\mathbb{R}^{(H\times W\times 3)}\) denotes the input image with missing regions, derived from the ground truth image \(I_{gr}\in\mathbb{R}^{(H\times W\times 3)}\). The tensor \(T\in\mathbb{R}^{(H\times W\times N^{2})}\) contains \(HW\) kernels, one for filtering each pixel, and the pixel-wise filtering operation is denoted by \({}^{\prime}\vartriangle^{\prime}\). Rather than using image-level filtering alone, we perform dual-path feature-level filtering, which provides more context information. Our idea is that, even if a large portion of the image is destroyed, semantic information can be maintained. To accomplish semantic filtering, we initially use an auto-encoder network in which the encoder extracts features of the damaged image \(I_{m}\), and the decoder maps the extracted features to the complete image \(I_{c}\). Therefore, the encoder can be defined by:
\[f_{L}=\rho(I_{m})=\rho_{L}(...\rho_{l}(...\rho_{2}(\rho_{1}(I_{m})))), \tag{2}\]
in which \(\rho(.)\) denotes the encoder and \(f_{l}\) represents the feature taken from the \(l^{th}\) layer, \(f_{l}=\rho_{l}(f_{l-1})\); in particular, \(f_{L}\) is the output of the last layer of \(\rho(.)\).
In our encoder network, to create remarkable textures and semantic structures within the missing image regions, we adopt Fast Fourier Convolutional Residual Blocks (FFC-Res) [19]. The FFC-Res shown in Fig. 2 (b) has two FFC layers. The channel-wise Fast Fourier Transform (FFT) [1] is the core of the FFC layer [3] to provide a whole image-wide receptive field. As shown in Fig. 2 (c), the FFC layer divides channels into two branches: a) a local branch, which utilizes standard convolutions to capture spatial information, and b) a global branch, which employs a Spectral Transform module to analyze global structure and capture long-range context.
Figure 2: Overview of the proposed architecture. (a) Our proposed DCF inpainting network with (b) FFC residual block to have a larger receptive field. (c) and (d) show the architecture of the FFC and Spectral Transform layers, respectively.
Outputs of the local and global branches are then combined. Two Fourier Units (FU) are used by the Spectral Transform layer (Fig. 2 (d)) in order to capture both global and semi-global features. The FU on the left captures the global context, while the Local Fourier Unit on the right takes in one-fourth of the channels and focuses on semi-global image information. In an FU, the spatial structure is decomposed into image frequencies using a Real FFT2D operation, a convolution is applied in the frequency domain, and the spatial structure is then recovered via an Inverse FFT2D operation. Based on the encoder, the network of our decoder is defined as:
\[I_{c}=\rho^{-1}(f_{L}), \tag{3}\]
in which \(\rho^{-1}(.)\) denotes the decoder. Then, similar to image-level filtering, we perform semantic filtering on extracted features according to:
\[\hat{f}_{l}[r]=\sum_{s\in\mathcal{N}_{\kappa}}T_{\kappa}^{l}[s-r]f_{l}[s], \tag{4}\]
in which \(r\) and \(s\) denote pixel coordinates, and \(\mathcal{N}_{\kappa}\) consists of the \(N^{2}\) closest pixels. \(T_{\kappa}^{l}\) signifies the kernel for filtering the \(\kappa^{th}\) element of \(f_{l}\) through its neighbors \(\mathcal{N}_{\kappa}\), and the matrix \(T_{l}\) collects all of these element-wise kernels. Following this, Eq. (2) is modified by substituting \(f_{l}\) with \(\hat{f}_{l}\). In addition, we use a predictive network to generate the kernels, so that they adapt to different scenes.
\[T_{l}=\varphi_{l}(I_{m}), \tag{5}\]
in which \(\varphi_{l}(.)\) denotes the predictive network that generates \(T_{l}\). In Fig. 2(a) and Table 1, we illustrate our image completion network, which consists of \(\rho(.)\), \(\rho^{-1}(.)\), and \(\varphi_{l}(.)\). The proposed network is trained using the \(L_{1}\) loss, perceptual loss, adversarial loss, and style loss, similar to predictive filtering.
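As an illustration of how these pieces fit together, the following PyTorch sketch wires up a toy version of the pipeline: an encoder \(\rho(\cdot)\) with an FFC-style Fourier Unit, a kernel-prediction branch \(\varphi_{l}(\cdot)\) (Eq. (5)), the feature-level filtering of Eq. (4), and a decoder \(\rho^{-1}(\cdot)\). The layer sizes, the softmax normalization of the predicted kernels, and the use of a single filtering level are illustrative assumptions, not the exact architecture given in Table 1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FourierUnit(nn.Module):
    # Global (spectral) branch of an FFC layer: FFT -> 1x1 conv on (real, imag) -> iFFT.
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(2 * ch, 2 * ch, kernel_size=1)

    def forward(self, x):
        H, W = x.shape[-2:]
        spec = torch.fft.rfft2(x, norm="ortho")
        spec = torch.relu(self.conv(torch.cat([spec.real, spec.imag], dim=1)))
        real, imag = spec.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(H, W), norm="ortho")

def filter_features(f, T, N=3):
    # Eqs. (1)/(4): each location becomes a weighted sum of its NxN neighbours,
    # with the weights given by the per-location kernels T.
    B, C, H, W = f.shape
    patches = F.unfold(f, N, padding=N // 2).view(B, C, N * N, H, W)
    return (patches * T.unsqueeze(1)).sum(dim=2)

class DCFSketch(nn.Module):
    # Toy wiring of rho (encoder), rho^{-1} (decoder) and varphi_l (kernel predictor).
    def __init__(self, ch=32, N=3):
        super().__init__()
        self.N = N
        self.encode = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(True),
                                    FourierUnit(ch))                  # rho, Eq. (2)
        self.decode = nn.Conv2d(ch, 3, 3, padding=1)                  # rho^{-1}, Eq. (3)
        self.phi = nn.Sequential(nn.Conv2d(3, N * N, 3, padding=1),
                                 nn.Softmax(dim=1))                   # varphi_l, Eq. (5)

    def forward(self, i_m):
        f = self.encode(i_m)                    # features of the damaged image
        T = self.phi(i_m)                       # dynamic, input-dependent kernels
        f_hat = filter_features(f, T, self.N)   # semantic filtering, Eq. (4)
        return self.decode(f_hat)

print(DCFSketch()(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```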
## 3 Experiments
In this section, the performance of our DCF model is compared to state-of-the-art methods for the image completion task. Experiments are carried out on three datasets, CelebA-HQ [6], Places2 [24], and Paris StreetView [4], with images at \(256\times 256\) resolution. For all datasets, we use the standard training and testing splits. In both training and testing, we use the diverse irregular masks (20%-40% of the image occupied by holes) provided by PConv [9], as well as regular center masks. The code is provided at _DCF_.
**Performance Measures:** The structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and Fréchet inception distance (FID) are used as the evaluation metrics.
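For reference, PSNR and SSIM for a single image pair can be computed with scikit-image as in the sketch below (the arrays are random placeholders; FID is computed over Inception features of whole image sets, typically with a dedicated package, rather than per image pair).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# gt and pred stand in for a ground-truth and a completed image (random placeholders).
gt = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
pred = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```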
### Implementation Details
Our proposed model's framework is shown in Table 1.
**Loss functions.** We follow [15] and train the networks using four loss functions, namely the \(L_{1}\) loss (\(\ell_{1}\)), adversarial loss (\(\ell_{A}\)), style loss (\(\ell_{S}\)), and perceptual loss (\(\ell_{P}\)), to obtain images with high fidelity at both the visual-quality and semantic levels. Therefore, we can write the reconstruction loss (\(\ell_{R}\)) as:
\[\ell_{R}=\lambda_{1}\ell_{1}+\lambda_{a}\ell_{A}+\lambda_{p}\ell_{P}+\lambda_{s}\ell_{S}. \tag{6}\]
\begin{table}
\begin{tabular}{l|c|c||c|c} \hline
 & \multicolumn{2}{c||}{Feature extracting network} & \multicolumn{2}{c}{Predicting network} \\ \hline
Layer & In. & Out/size & In. & Out/size \\ \hline \hline
conv(7,3,64) & \(I_{m}\) & \(f_{1}\) / 256 & \(I_{m}\) & \(e_{1}\) / 256 \\
conv(4,64,128) & \(f_{1}\) & \(f_{2}\) / 128 & \(e_{1}\) & \(e_{2}\) / 128 \\
pooling & \(f_{2}\) & \(f_{2}\) / 64 & \(e_{2}\) & \(e_{2}\) / 64 \\
conv(4,128,256) & \(f_{2}\) & \(f_{3}\) / 64 & \([f_{2}^{\prime},e_{2}^{\prime}]\) & \(e_{3}\) / 64 \\ \hline
\end{tabular}
\end{table}
Table 1: Architecture of the feature extracting and kernel predicting networks.
in which \(\lambda_{1}=1\), \(\lambda_{a}=\lambda_{p}=0.1\), and \(\lambda_{s}=250\). More details on the loss functions can be found in [15].
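A minimal sketch of Eq. (6) with these weights is given below; the adversarial, perceptual, and style terms are passed in as precomputed scalars, since their exact networks follow [15].

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(pred, target, l_adv, l_perc, l_style,
                        lam_1=1.0, lam_a=0.1, lam_p=0.1, lam_s=250.0):
    """Eq. (6): weighted sum of L1, adversarial, perceptual and style terms."""
    return (lam_1 * F.l1_loss(pred, target)
            + lam_a * l_adv + lam_p * l_perc + lam_s * l_style)

# l_adv / l_perc / l_style would come from a discriminator and a VGG feature
# extractor as in [15]; here they are placeholder scalars.
loss = reconstruction_loss(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64),
                           torch.tensor(0.5), torch.tensor(0.2), torch.tensor(0.01))
print(loss.item())
```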
**Training setting.** We use Adam as the optimizer with a learning rate of \(10^{-4}\) and the standard values for its other hyperparameters. The network is trained for 500k iterations with a batch size of 8. The experiments are conducted on the same machine with two RTX-3090 GPUs.
### Comparisons to the Baselines
**Qualitative Results.** The proposed DCF model is compared to relevant baselines such as RFRNet [7], JPGNet [5], and LaMa [19]. Fig. 3 and Fig. 4 show the results for the Places2 and CelebA-HQ datasets, respectively. In comparison to JPGNet, our model preserves recurrent textures substantially better, as shown in Fig. 3. Since JPGNet lacks attention-related modules, high-frequency features cannot be successfully utilized due to its limited receptive field. Using FFC modules, our model expands the receptive field and successfully projects source textures onto newly generated structures. Furthermore, our model generates better object boundaries and structures than LaMa, which is limited in hallucinating adequate structural information when missing regions extend over large pixel ranges. Our model, in contrast, exploits the coarse-to-fine generator to produce more precise objects with better boundaries. Fig. 4 shows more qualitative evidence. When tested on facial images, RFRNet and LaMa produce faded forehead hair, indicating that these models are not sufficiently robust. The results of our model, nevertheless, have more realistic textures and plausible structures, such as forehead shape and fine-grained hair.
**Quantitative Results.** On the three datasets, we compare our proposed model with other inpainting models. The results shown in Table 2 lead to the following conclusions: 1) Our method outperforms the other approaches in terms of PSNR, SSIM, and FID scores for most datasets and mask types. Specifically, we achieve 9% higher PSNR than RFRNet on the Places2 dataset with irregular masks, indicating that our model has advantages over existing methods. 2) We observe similar results when analyzing the FID. On the CelebA-HQ dataset, our method achieves a 2.5% relatively lower FID than LaMa under the center mask, indicating our method's strong performance in perceptual restoration. 3) The consistent advantages over several datasets and mask types illustrate that our model is highly generalizable.
## 4 Conclusion
Dual-path cooperative filtering (DCF) was proposed in this paper for high-fidelity image inpainting. A predictive network is proposed for predictive filtering at both the image level and the deep feature level. In particular, image-level filtering is used for detail recovery, whereas deep feature-level filtering is used for completing semantic information. Moreover, FFC residual blocks are adopted in the filtering network to recover semantic information, resulting in high-fidelity outputs. The experimental results demonstrate that our model outperforms state-of-the-art inpainting approaches.
#### Acknowledgments
This research was supported in part by NSFC China. The corresponding author is Masoumeh Zareapoor.
\begin{table}
\begin{tabular}{l|l|c c|c c|c c} \hline \hline \multirow{3}{*}{} & \multirow{3}{*}{Method} & \multicolumn{3}{c|}{CelebA-HQ} & \multicolumn{3}{c|}{Places2} & \multicolumn{3}{c}{Paris StreetView} \\ \cline{3-8} & & Irregular & Center & Irregular & Center & Irregular & Center \\ \hline \multirow{8}{*}{PSNR\(\uparrow\)} & RFRNet [7] & 26.63 & 21.32 & 22.58 & 18.27 & 23.81 & 19.26 \\ & JPGNet [5] & 25.54 & 22.71 & 23.93 & 19.22 & 24.79 & 20.63 \\ & TFill [23] & 26.84 & 23.65 & 24.32 & 20.49 & 25.46 & 21.85 \\ & LaMa [19] & 27.31 & 24.18 & **25.27** & 21.67 & 25.84 & 22.59 \\ & GLaMa [12] & 28.17 & 25.13 & 25.08 & 21.83 & 26.23 & 22.87 \\ & DCF (ours) & **28.34** & **25.62** & 25.19 & **22.30** & **26.57** & **23.41** \\ \hline \multirow{8}{*}{SSIM\(\uparrow\)} & RFRNet [7] & 0.934 & 0.912 & 0.819 & 0.801 & 0.862 & 0.849 \\ & JPGNet [5] & 0.927 & 0.904 & 0.825 & 0.812 & 0.873 & 0.857 \\ & TFill [23] & 0.933 & 0.907 & 0.826 & 0.814 & 0.870 & 0.857 \\ & LaMa [19] & 0.939 & 0.911 & 0.829 & 0.816 & 0.871 & 0.856 \\ & GLaMa [12] & 0.941 & 0.925 & **0.833** & 0.817 & 0.872 & 0.858 \\ & DCF (ours) & **0.943** & **0.928** & 0.832 & **0.819** & **0.876** & **0.861** \\ \hline \multirow{8}{*}{FID\(\downarrow\)} & RFRNet [7] & 17.07 & 17.83 & 15.56 & 16.47 & 40.23 & 41.08 \\ & JPGNet [5] & 13.92 & 15.71 & 15.14 & 16.23 & 37.61 & 39.24 \\ & TFill [23] & 13.18 & 13.87 & 15.48 & 16.24 & 33.29 & 34.41 \\ & LaMa [19] & 11.28 & 12.95 & 14.73 & 15.46 & 32.30 & 33.26 \\ & GLaMa [12] & 11.21 & 12.91 & 14.70 & 15.35 & 32.12 & 33.07 \\ \cline{2-8} & DCF w.o. Sem-Fil & 14.34 & 15.24 & 17.56 & 18.11 & 42.57 & 44.38 \\ & DCF w.o. FFC & 13.52 & 14.26 & 15.83 & 16.98 & 40.54 & 41.62 \\ & DCF (ours) & **11.13** & **12.63** & **14.52** & **15.09** & **31.96** & **32.85** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study and quantitative comparison of our proposed and state-of-the-art methods on center and free-form masked images from the CelebA-HQ, Places2, and Paris StreetView datasets. |
2307.16362 | High Sensitivity Beamformed Observations of the Crab Pulsar's Radio
Emission | We analyzed four epochs of beamformed EVN data of the Crab Pulsar at 1658.49
MHz. With the high sensitivity resulting from resolving out the Crab Nebula, we
are able to detect even the faint high-frequency components in the folded
profile. We also detect a total of 65951 giant pulses, which we use to
investigate the rates, fluence, phase, and arrival time distributions. We find
that for the main pulse component, our giant pulses represent about 80% of the
total flux. This suggests we have a nearly complete giant pulse energy
distribution, although it is not obvious how the observed distribution could be
extended to cover the remaining 20% of the flux without invoking large numbers
of faint bursts for every rotation. Looking at the difference in arrival time
between subsequent bursts in single rotations, we confirm that the likelihood
of finding giant pulses close to each other is increased beyond that expected
for randomly occurring bursts - some giant pulses consist of causally related
microbursts, with typical separations of $\sim\!30{\rm\;\mu s}$ - but also find
evidence that at separations $\gtrsim\!100{\rm\;\mu s}$ the likelihood of
finding another giant pulse is suppressed. In addition, our high sensitivity
enabled us to detect weak echo features in the brightest pulses (at
$\sim\!0.4\%$ of the peak giant pulse flux), which are delayed by up to
$\sim\!300{\rm\;\mu s}$. | Rebecca Lin, Marten H. van Kerkwijk | 2023-07-31T01:36:55Z | http://arxiv.org/abs/2307.16362v2 | # High Sensitivity Beamformed Observations of the Crab Pulsar's Radio Emission
###### Abstract
We analyzed four epochs of beamformed EVN data of the Crab Pulsar at \(1658.49\rm\,MHz\). With the high sensitivity resulting from resolving out the Crab Nebula, we are able to detect even the faint high-frequency components in the folded profile. We also detect a total of \(65951\) giant pulses, which we use to investigate the rates, fluence, phase, and arrival time distributions. We find that for the main pulse component, our giant pulses represent about 80% of the total flux. This suggests we have a nearly complete giant pulse energy distribution, although it is not obvious how the observed distribution could be extended to cover the remaining 20% of the flux without invoking large numbers of faint bursts for every rotation. Looking at the difference in arrival time between subsequent bursts in single rotations, we confirm that the likelihood of finding giant pulses close to each other is increased beyond that expected for randomly occurring bursts - some giant pulses consist of causally related microbursts, with typical separations of \(\sim 30\rm\ \mu s\) - but also find evidence that at separations \(\gtrsim\!100\rm\ \mu s\) the likelihood of finding another giant pulse is suppressed. In addition, our high sensitivity enabled us to detect weak echo features in the brightest pulses (at \(\sim\!0.4\%\) of the peak giant pulse flux), which are delayed by up to \(\sim\!300\rm\ \mu s\).
Keywords: Pulsars (1306) -- Radio bursts (1339) -- Very long baseline interferometry (1769)
Investigation of the emission from the Crab Pulsar is complicated by propagation effects along the line of sight, especially at lower frequencies, \(\lesssim 2\ \mathrm{GHz}\). While dispersion can be removed using coherent de-dispersion (either during recording, or afterwards with baseband data), scattering effects are difficult to remove. This includes echoes due to propagation in the Crab Nebula itself, which sometimes are bright and obvious (Backer et al., 2000; Lyne et al., 2001), but can also be quite faint (Driessen et al., 2019), making it difficult to disentangle them from microbursts without having a good pulse sample to look for repeating structure.
Another complication in studying the emission of the Crab Pulsar is the radio-bright nebula in which the pulsar resides. This contributes noise and hence many previous studies relied on long integrations to observe both the weaker pulse components and echoes in the average profile. But the contribution to the noise can be reduced by resolving the nebula, using large dishes or arrays, such as the VLA, Arecibo, and Westerbork (Moffett & Hankins, 1996; Cordes et al., 2004; Karuppusamy et al., 2010; Lewandowska et al., 2022).
In this paper, we use the European VLBI Network (EVN) to resolve out the Crab Nebula and obtain high sensitivity data. In Section 2, we describe our observations and data reduction, and in Section 3, we present the resulting pulse profiles and the components that are detectable at our high sensitivity. We turn to an analysis of GPs in Section 4, investigating their rates, fluence, phase, and arrival time distributions, as well as weak echoes seen in the brightest GPs. We summarize our findings in Section 5.
## 2 Observations and Data Reduction
We analyze observations of the Crab Pulsar taken by the EVN, projects EK036 A-D, at four epochs between 2015 Oct and 2017 May (see Table 1). Throughout these observations, calibrator sources were also observed resulting in breaks in our data. While many dishes participated in these observations, for our analysis we only use telescope data that had relatively clean signals across the frequency range of \(1594.49-1722.49\ \mathrm{MHz}\) in both circular polarizations. At each single dish, real-sampled data were recorded in either 2 bit MARK 5B or VDIF format1, covering the frequency range in either eight contiguous \(16\ \mathrm{MHz}\) wide bands or four contiguous \(32\ \mathrm{MHz}\) wide bands.
Footnote 1: For specifications of MARK5B and VDIF, see [https://www.haystack.mit.edu/haystack-memo-series/mark-5-memos/](https://www.haystack.mit.edu/haystack-memo-series/mark-5-memos/) and [https://vlbi.org/wp-content/uploads/2019/03/VDIF_specification_Release_1.1.1.pdf](https://vlbi.org/wp-content/uploads/2019/03/VDIF_specification_Release_1.1.1.pdf), respectively.
For these datasets, single dish data were processed and then combined coherently to form a tied-array beam as described in Lin et al. (2023). The resulting RFI-removed, normalized, de-dispersed (using dispersion measures (DMs) listed in Table 1), parallactic angle corrected, and phased baseband data were squared to form intensity data. As in Lin et al. (2023), we estimate the system equivalent flux density (SEFD) for the phased EVN array as \((S_{\text{CN}}+\langle S_{\text{tel}}\rangle)/N_{\text{tel}}\approx 140-160\ \mathrm{ Jy}\), where \(S_{\text{CN}}\approx 833\ \mathrm{Jy}\) is the SEFD of the Crab Nebula at our observing frequency (Bietenholz et al., 1997), \(\langle S_{\text{tel}}\rangle\simeq 300\ \mathrm{Jy}\) is the average nominal SEFD of the telescopes2 and \(N_{\text{tel}}=7\ \mathrm{or}\ 8\) is the number of telescopes used. By combining the single dishes into a synthesized beam, we resolve out the radio-bright Crab Nebula and increase our sensitivity, thus allowing us to investigate the weaker radio emission of the Crab Pulsar.
Footnote 2: [http://old.evlbi.org/cgi-bin/EVNcalc](http://old.evlbi.org/cgi-bin/EVNcalc).
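The quoted SEFD range follows directly from these numbers; a quick check using the rounded values above:

```python
# Phased-array SEFD estimate from Section 2: (S_CN + <S_tel>) / N_tel.
S_CN = 833.0    # Jy, SEFD of the Crab Nebula (Bietenholz et al. 1997)
S_tel = 300.0   # Jy, average nominal single-dish SEFD
for N_tel in (7, 8):
    print(N_tel, "telescopes:", round((S_CN + S_tel) / N_tel, 1), "Jy")
# 7 telescopes: 161.9 Jy; 8 telescopes: 141.6 Jy -> the quoted ~140-160 Jy
```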
Table 1: Observation and Giant Pulse Log. (Columns: observation code, date, exposure time \(t_{\rm exp}\) in hours, telescopes used, DM, and numbers of giant pulses detected.)
## 3 Pulse Profiles
For each of the phased EVN datasets, we create folded pulse profiles using polyco files generated with tempo2(Hobbs and Edwards, 2012) from the monthly Jodrell Bank Crab Pulsar ephemerides3(Lyne et al., 1993) and DM from Table 1. We averaged over all frequencies and used \(512\) phase bins, rotating in phase such that the MP is at phase \(0\). We show the resulting profiles in Figure 1, with each profile scaled to its maximum to ease comparison. With our high sensitivity, we can see all five pulse components expected from the multifrequency overview of Hankins et al. (2015), corresponding to the LFC, MP, IP, HFC1 and HFC2 (with the latter two detected at \(\sim\!1.66\ \mathrm{GHz}\) for the first time).
Footnote 3: [http://www.jb.man.ac.uk/~pulsar/crab.html](http://www.jb.man.ac.uk/~pulsar/crab.html).
We fit the pulse components in the EK036 datasets with five Gaussians to look for possible changes, both between our epochs and relative to the compilation from Hankins et al. (2015). Our fitted parameters are presented in Table 2, together with the values inferred from Hankins et al. (2015). One sees that the results for our four observations are all consistent. At \(1.4\ \mathrm{GHz}\), Lyne et al. (2013) found that the separations between the MP and IP and between the MP and LFC increase at a rate of \(0\fdg 5\pm 0\fdg 2\) per century and \(11\arcdeg\pm 2\arcdeg\) per century, respectively. Using these rates, we expect pulse phase changes for the IP and LFC of \(\sim\!0\fdg 008\) and \(\sim\!0\fdg 17\), respectively, which are not detectable within our uncertainties.
Comparing with Hankins et al. (2015), we find good agreement in pulse phase for all components (though now we do need to take into account the drift in pulse phase). We noticed, however, that while the widths of our LFC, HFC1 and HFC2 are consistent with those given by Hankins et al. (2015), the widths of the MP and IP seem smaller, even if they are still within the nominal, rather large uncertainties of Hankins et al. (2015). Looking in more detail at their Figure 3 with measurements, one sees considerable scatter for the MP and IP, even though those strong, narrow peaks should be the easiest to measure. This might suggest that some profiles were slightly smeared (e.g., because the data were not dedispersed to exactly the right DM, which is known to vary for the Crab Pulsar, or because of changes in scattering timescale at lower frequencies, see McKee et al., 2018). For a comparison with recent data, we estimated widths from the \(2-4\) and \(4-6\ \mathrm{GHz}\) pulse profiles in Figure 1 of Lewandowska et al. (2022), which were taken using the VLA in D configuration to resolve out the Crab Nebula and thus have high signal-to-noise ratio; we find these are all consistent with ours.
Figure 1: Folded pulse profile of the Crab Pulsar at \(1658.49\ \mathrm{MHz}\) from EK036 observations in \(512\) phase bins centered on the MP. At this frequency, 5 components: LFC, MP, IP, HFC1 and HFC2 are visible. In the left panel, the profiles are normalized to their peak MP component. As the HFC1 and HFC2 components (indicated by arrows) are very faint, we show the grey region of the left panel zoomed in by a factor of \(15\) in the right panel, with vertical lines marking the peak of these components.
At lower frequencies, the pulse profiles often show echo features (e.g., Driessen et al., 2019). At our frequencies, those are expected to be too weak at delays where they might be seen in the folded pulse profile, and indeed we see none. However, at frequencies like ours, echoes can still be seen in individual pulses. For instance, at \(1.4\;\mathrm{GHz}\), Crossley et al. (2004) saw that individual bright pulses all had an echo delayed by \(\sim\!50\;\mathrm{\mu s}\) (which had no counterpart at \(4.9\;\mathrm{GHz}\)). From aligning GPs before stacking them in our datasets, Lin et al. (2023) also saw hints of echo features within \(\sim\!25\;\mathrm{\mu s}\) of the peaks of GPs in EK036 B and D. In Section 4.6, we confirm echoes in our data using a more careful analysis, finding that for EK036 D faint echoes are visible out to \(\sim\!300\;\mathrm{\mu s}\).
## 4 Giant Pulses
### Search
In Lin et al. (2023), we searched for GPs by flagging peaks above \(8\sigma\) in a \(16\;\mathrm{\mu s}\) wide running average of the intensity time stream. While we reliably found GPs, the long time window meant we could not distinguish between bursts arriving in quick succession within that time window. Hence, the previous technique was unsuitable for one of our goals, of measuring arrival time differences between bursts, including between the microbursts that GPs sometimes are composed of. Below, we describe a revised technique, which allows us to more reliably identify multiple bursts (see Figure 2). Unsurprisingly, with our new technique we detected more multiple bursts than we had previously, as can be seen by comparing the numbers listed in Section 6.3 of Lin et al. (2023) with those in Table 3.
For every pulsar period in the EK036 dataset, we take \(2.0\;\mathrm{ms}\) snippets of baseband data centered at the MP and
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
Pulse & Obs./ & Amplitude & Pulse Phase & FWHM \\
Comp. & Ref. & (\%) & (deg.) & (deg.) \\ \hline
LFC\(\dots\) & A & 3.6(3) & \(-38.0(3)\) & 7.5(6) \\
 & B & 3.35(17) & \(-37.67(19)\) & 7.7(4) \\
 & C & 3.7(2) & \(-37.2(3)\) & 7.7(6) \\
 & D & 3.9(2) & \(-37.8(2)\) & 8.1(5) \\
 & H15 & \(\dots\) & \(-35.78(14)\) & 7.2(12) \\
MP \(\dots\) & A & & & 2.786(11) \\
 & B & & & 2.708(7) \\
 & C & & & 2.756(11) \\
 & D & & & 2.836(9) \\
 & H15 & & & 3.9(11) \\
IP\(\dots\) & A & 15.2(4) & 145.38(4) & 3.48(10) \\
 & B & 15.2(2) & 145.28(3) & 3.59(7) \\
 & C & 15.3(4) & 145.25(4) & 3.46(10) \\
 & D & 14.4(3) & 145.28(4) & 3.59(8) \\
 & H15 & \(\dots\) & 145.25(4) & 5.4(11) \\
HFC1\(\dots\) & A & 0.58(13) & 203(3) & 28(7) \\
 & B & 0.88(9) & 198.4(13) & 25(3) \\
 & C & 0.68(12) & 194(3) & 34(7) \\
 & D & 0.94(11) & 196.2(15) & 36(5) \\
 & H15 & \(\dots\) & 198.2(8) & 25(5) \\
HFC2\(\dots\) & A & 1.5(2) & 259.7(8) & 11.8(19) \\
 & B & 1.19(14) & 259.2(7) & 11.7(16) \\
 & C & 1.23(19) & 257.7(9) & 12(2) \\
 & D & 1.51(15) & 259.8(7) & 14.8(16) \\
 & H15 & \(\dots\) & 259.1(4) & 11.6(12) \\ \hline
\end{tabular}
Note. -- Amplitudes and phases are relative to the MP. H15 refers to Hankins et al. (2015), and corresponding values are from evaluating the fits presented in their Tables 2 and 3 at our central observing frequency of \(1658.49\;\mathrm{MHz}\). The phases for the LFC and IP have been extrapolated to MJD 57607 (midway between EK036 A and D) using \(d\phi/dt\) values from Lyne et al. (2013). Numbers in parentheses are \(1\sigma\) uncertainties in the last digit.
\end{table}
Table 2: Properties of the Pulse Profile Components.
Figure 2: Sample MP pulse rotations with GPs as detected by our algorithm (see Section 4.1 for details), shown at a time resolution of \(1.25\;\mathrm{\mu s}\). _Top_: Single pulse with scattering tail. _Middle_: Two pulses, each with their own scattering tail. _Bottom_: A profile showing the difficulties inherent in classifying pulses: our algorithm found three pulses, but if another algorithm were to classify this as two or four pulses, that would also seem reasonable.
IP component phase windows (roughly \(2\) times the size of the pulse component determined from the folded pulse profile) and create pulse intensity stacks for each component4. We average these stacks across the eight frequency bands and bin over 10 time samples, or \(0.625~{}\mu\)s, a value chosen to be large enough for reliable GP detection yet well below the scattering timescale of \(\sim\)\(5~{}\mu\)s during these observations (Lin et al., 2023). To detect GPs, we first subtract the off-pulse region (determined from the \(0.5~{}\mathrm{ms}\) region on either side of each pulse stack), then filter with a uniform filter of size \(5\) (\(3.125~{}\mu\)s), and finally record all samples above a detection threshold of \(5\sigma\).
Footnote 4: We only search for GPs inside these windows since Lin et al. (2023) found none outside for the same dataset.
To turn these sets of above-the-noise locations into detections of individual GPs, we use the following three-step process5. First, we connect detections within \(8\) samples (\(5~{}\mu\)s, i.e., of order the scattering time), since those are likely related. Second, we remove detections spanning \(4\) samples (\(2.5~{}\mu\)s) or less, since these are likely spurious. Third, we increase the width of a detection by \(4\) samples (\(2.5~{}\mu\)s) on either side, mostly to ensure that if we integrate over the mask, we will capture most of the flux independent of pulse strength. With this procedure, the minimum final pulse width is \(8.125~{}\mu\)s, slightly larger than the scattering timescale, and we confidently detect pulses above a threshold of \(\sim\)\(0.15~{}\mathrm{kJy}~{}\mu\)s. The brightest GP we detect has a fluence of \(\sim 560~{}\mathrm{kJy}~{}\mu\)s. With our relatively high initial detection threshold, we do not find any GPs outside our pulse windows, suggesting that we have no false detections in our sample. Nevertheless, as can be seen from the overall pulse statistics in Table 1, we find many GPs, about \(2-3\) per second or about one for every dozen pulsar rotations.
Footnote 5: Using the binary_closing, binary_opening and binary_dilation functions, respectively, from scipyβs multidimensional image processing functions (Virtanen et al., 2020).
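A schematic version of this three-step morphological selection, using the scipy.ndimage functions named in the footnote, is sketched below; the structuring-element sizes approximate the sample counts quoted in the text, and the off-pulse noise estimate is assumed to be supplied separately.

```python
import numpy as np
from scipy.ndimage import (binary_closing, binary_opening, binary_dilation,
                           uniform_filter1d)

def detect_pulses(intensity, sigma, threshold=5.0):
    """Flag giant-pulse samples in a background-subtracted intensity stream.

    intensity : 1-D array in 0.625 us samples; sigma : off-pulse noise level.
    Returns a boolean mask of samples belonging to detected pulses.
    """
    smoothed = uniform_filter1d(intensity, size=5)        # 3.125 us running mean
    mask = smoothed > threshold * sigma                   # 5-sigma samples
    mask = binary_closing(mask, structure=np.ones(8))     # join detections within ~5 us
    mask = binary_opening(mask, structure=np.ones(5))     # drop spans of <= 4 samples
    mask = binary_dilation(mask, structure=np.ones(9))    # widen by 4 samples each side
    return mask

# Tiny demonstration on fake data: noise plus one scattered burst.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 3200)
x[1000:1020] += 30.0 * np.exp(-np.arange(20) / 8.0)
print(detect_pulses(x, sigma=1.0).sum(), "samples flagged")
```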
In some pulse rotations, we detect more than one distinct GP, where "distinct" means that the pulse is separated by at least \(5~{}\mu\)s (roughly the scattering timescale) from another pulse at our detection threshold. Here, we note that whether a GP is detected as single or multiple depends on the detection threshold: a GP classified as a single one at our threshold might be classified as separated at a higher threshold if it has two bright peaks with some flux in between (e.g., because the scattering tail of the first peak overlaps with the start of the next one, or a weaker burst fills in the space in between). This dependence on detection threshold may explain why Bhat et al. (2008) found no pulses wider than \(10~{}\mu\)s, as they used a high detection cutoff of \(3~{}\mathrm{kJy}~{}\mu\)s. This kind of arbitrariness seems unavoidable given the variety in pulse shapes that we see; it often is a rather subjective decision what to take as a single burst. To give a sense, we show in Figure 2 an example of a pulse rotation with a single burst as well as two examples of rotations with multiple bursts. In Section 4.5, we estimate the fraction of multiple bursts that is causally related from the statistics of pulse separations.
### Rates
With the high sensitivity of the phased EVN array, we detected a total of \(65951\) GPs over \(7.32~{}\mathrm{hr}\), implying an average detection rate of \(2.5~{}\mathrm{s}^{-1}\). From Table 1, one sees that the rates are not the same for each epoch. Comparable detection rates are seen for both MP and IP GPs in EK036 A and C, but those are about a factor \(2\) smaller than the rates for EK036 B and D (which are comparable to each other).
Similar changes in detection rate were found for bright pulses by Lundgren et al. (1995) at \(800~\mathrm{MHz}\), Bera & Chengalur (2019) at \(1330~\mathrm{MHz}\), and by Kazantsev et al. (2019) at \(111~\mathrm{MHz}\). Lundgren et al. (1995) suggests that almost
Figure 3: GP pulse detection rates in each EK036 observation. Times when the telescope was not observing the Crab Pulsar are shaded grey. The MP (blue) and IP (orange) detection rates appear to scale together and are relatively constant across each observation.
certainly, these are due to changes in the scattering screen, which are known to cause changes in the scattering time on similar timescales and are expected to cause changes in magnification as well. To verify that there are no variations at shorter timescales, we calculated rates at roughly \(5\,\mathrm{min}\) intervals. As can be seen in Figure 3, we find that in a given epoch, the rates are indeed steady.
### Fluences
The fluence distribution of the Crab Pulsar's GPs is typically described by power-law approximations to the reverse cumulative distribution,
\[N_{\mathrm{GP}}(E>E_{0})=CE_{0}^{\alpha}, \tag{1}\]
where \(\alpha\) is the power-law index, \(C\) a proportionality constant, and \(E_{0}\) the GP fluence such that \(N_{\mathrm{GP}}(E>E_{0})\) is the occurrence rate of GPs above \(E_{0}\). For our data, one sees in Figure 4, that for all observations the distributions indeed appear power-law like at high fluence, with \(\alpha\approx-2.0\) and \(-1.6\) for MP and IP, respectively. These values are roughly consistent with values found at similar frequencies: e.g., Popov & Stappers (2007) find \(-1.7\) to \(-3.2\) for MP GPs and \(-1.6\) for IP GPs at \(1197\,\mathrm{MHz}\), and Majid et al. (2011) finds \(\alpha=-1.9\) for the combined MP and IP distribution at \(1664\,\mathrm{MHz}\).
However, as noted by Hankins et al. (2015) already, the power-law indices show large scatter and should be taken as roughly indicative only, showing, e.g., that at higher frequencies, very bright pulses are relatively rare. Indeed, in our data, like in more sensitive previous studies (e.g., Lundgren et al., 1995; Popov & Stappers, 2007; Bhat et al., 2008; Karuppusamy et al., 2010), the fluence distribution clearly flattens at lower fluences. At the very low end, this is because our detection method misses more pulses, but the changes above \(\sim 0.2\,\mathrm{kJy}\,\mathrm{\mu s}\) are real. This turnover may at least partially explain why a variety of power-law indices was found previously, as the measured index will depend on what part of the fluence distribution is fit (which will depend also on the magnification by scattering), as well as why for very high fluences, well away from the turn-over, the power-law index seems fairly stable (Bera & Chengalur, 2019).
Comparing the distributions for the different epochs, one sees that they are very similar except for a shift left or right in the figure. This confirms that the differences in rates seen between the epochs are due to differences in magnification caused by scintillation (and not due to the Crab Pulsar varying the rate at which pulses are emitted, which would, to first order, shift the distributions up and down).
As the fluence distributions looked roughly parabolic in log-log space, we also show cumulative log-normal distributions in Figure 4, of the form,
\[N_{\mathrm{GP}}(E>E_{0})=\frac{A}{2}\left[\mathrm{erfc}\left(\frac{\ln E_{0}- \mu}{\sigma\sqrt{2}}\right)\right], \tag{2}\]
where \(A\) is a scale factor, \(\mu\) and \(\sigma\) are the mean and standard deviation of \(\ln E_{0}\), and \(\mathrm{erfc}\) is the complementary error function. One sees that these describe the observed cumulative distributions quite well.
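For reference, Eq. (2) can be evaluated directly with scipy; the parameter values below are placeholders rather than our fitted values (which are indicated in Figure 4 and quoted in the next paragraph).

```python
import numpy as np
from scipy.special import erfc

def lognormal_rate(E0, A, mu, sigma):
    """Eq. (2): occurrence rate of GPs with fluence above E0.

    E0 is in the same fluence units used to define mu = <ln E>.
    """
    return 0.5 * A * erfc((np.log(E0) - mu) / (sigma * np.sqrt(2.0)))

# Placeholder parameters; note that for a log-normal the mean fluence is
# <E> = exp(mu + sigma**2 / 2), the relation used in the next paragraph.
print(lognormal_rate(E0=1.0, A=2.7, mu=0.0, sigma=1.0))  # -> 1.35 (half of A at the median)
```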
Figure 4: Reverse cumulative GP fluence distribution showing the occurrence rates of GPs. For comparison, power-law distributions (solid black lines) and log-normal distributions (dashed black line) are shown, with indices \(\alpha\) and widths \(\sigma\) as listed in the legend.
If the intrinsic distributions were log-normal, it would imply that especially for the MP, most of the flux is already captured and that the total rate of GPs is not much larger than our detection rate. For the log-normal distribution shown in Figure 4, for the MP, \(A=2.7\ \mathrm{s}^{-1}\) and the mean GP fluence is \(\langle E\rangle=\exp(\mu+\frac{1}{2}\sigma^{2})=1.2\ \mathrm{kJy\,\mu s}\) and only 1.5% of the total flux is below \(0.15\ \mathrm{kJy\,\mu s}\), while for the IP, \(A=1.6\ \mathrm{s}^{-1}\) and \(\langle E\rangle=0.24\ \mathrm{kJy\,\mu s}\), and 13% of the flux is below.
We can verify whether our MP GPs account for most of the flux by calculating pulse profiles with and without removing pulse rotations where GPs are detected. As can be seen in Figure 5, significant flux remains in both MP and IP. For the MP, even though the remaining signal is brighter in epochs B and D, the fraction is lower: about 18% in B and D, in comparison with 23% in A and C. This again can be understood if the larger detection rate is due to an overall magnification: a larger fraction of the pulses - and hence of the total flux - is detected.
Our result is similar to (but more constraining than) that of Majid et al. (2011), who showed that at least \(54\%\) of the overall pulsed energy flux for the Crab Pulsar is emitted in the form of GPs. But it is in contrast to what is seen by Abbate et al. (2020) for PSR J1823\(-\)3021A, where the detected GPs make up only a small fraction of the integrated pulse emission (\(4\%\) and \(2\%\) for their C1 and C2 components, respectively), and by Geyer et al. (2021) for PSR J0540\(-\)6919, where the detected GPs only make up \(7\%\) of the total flux. This might indicate a difference in the emission process. As these authors noted, however, a larger population of undetected GPs may still be hidden below their detection threshold.
For our observations, for both MP and IP, the residual flux is much larger than expected based on the log-normal distribution, thus indicating that the true fluence distribution has more pulses at low fluence (many more for the IP); if additional pulses were also emitted in rotations in which we do not detect them, their typical fluence would be the residual flux integrated over one cycle, which is \(\sim 25\ \mathrm{Jy\,\mu s}\) for MP and a little less for IP. This is well below our detection limit, so consistent in that sense, but from the distributions shown in Figure 4, one would expect a much smaller rate than once per pulse period at \(25\ \mathrm{Jy\,\mu s}\). This might suggest that there are even more but typically fainter bursts (note that these cannot be fainter bursts accompanying the GPs we already detect, since we excluded the full rotations in calculating the residual
Figure 5: Mean and median MP and IP pulse profiles obtained using all pulse rotations (in blue and orange, respectively) and using only those in which no GPs were detected (green and red, respectively) in \(6.25\ \mathrm{\mu s}\) bins. Note that because the noise in an individual profile is not normally distributed, but rather follows a \(\chi_{k}^{2}\) distribution, the median is slightly below zero in the off-pulse region, by \((1-2/9k)^{3}-1\simeq-6/9k\simeq-0.0002\) of the SEFD of \(\sim\!150\ \mathrm{Jy}\) (Section 2), or \(\sim\!-0.03\ \mathrm{Jy}\) given \(k=3200\) degrees of freedom (complex dedispersed timestream squared, averaged over 2 polarizations, 8 bands, and 100 time bins).
emission), or that there is some steady underlying emission. It would be worthwhile to test this with more sensitive future observations.
### Pulse Phases
Defining the time of arrival of a GP as the time when an increase in flux is first detected, the longitude windows where MP and IP GPs occur have total widths of \(\sim 680\)\(\mu\)s and \(860\)\(\mu\)s (or \(\sim\!7\fdg 3\) and \(\sim\!9\fdg 2\)), respectively (averaged over the four epochs). As can be seen in Figure 6, the majority of GPs occur within much narrower windows: the root-mean-square deviations around the mean arrival phases are \(\sim\!100\)\(\mu\)s and \(\sim\!130\)\(\mu\)s (or \(\sim\!1\fdg 1\) and \(\sim\!1\fdg 4\)), respectively. The number distribution is roughly Gaussian, with a slightly negative skewness (i.e., a longer tail toward earlier phases and thus with a mode towards later phases). This was also observed by Majid et al. (2011) at a similar frequency of \(1664\)\(\mathrm{MHz}\). In EK036 D, a few MP pulses are detected beyond the range found in the other epochs. As we will discuss in Section 4.6, these "outlier" detections are due to echoes (hence, they are omitted in our determinations of widths above).
In Figure 6, we also show the flux distributions as a function of pulse phase, including the median flux of the GPs detected in any given phase bin. One sees no obvious variation, i.e., no hint of, e.g., brighter pulses having an intrinsically narrower phase distribution. This suggests that only the probability of seeing a pulse depends on pulse phase. In our earlier work on these data, where we studied how the pulse spectra and their correlations are affected by scattering (Lin et al., 2023), we concluded that we resolved the regions from which the nanoshots that comprise individual GPs are emitted, and that this is most easily understood if the emitting plasma is ejected highly relativistically, with \(\gamma\simeq 10^{4}\) (as was already suggested by Bij et al., 2021). If so, the emission would be beamed to angles much smaller than the width of the phase windows, and the range of phases over which we observe GPs would reflect the range of angles over which plasma is ejected.
### Arrival Times
Several studies (e.g., Karuppusamy et al., 2010; Majid et al., 2011) have found that GPs in different rotations are not correlated, and that there is no correlation between MP and IP GPs, but that instead the distribution of the time delays between successive GPs follows an exponential distribution, as expected for a Poissonian process. Within a given cycle, though, multiple correlated microbursts can occur (Sallmen et al., 1999; Hankins and Eilek, 2007).
With our high sensitivity, we can investigate this in more detail. In Table 3 we show the number of rotations in which we detect multiple MP or IP bursts (i.e., double, triple etc.), as well as the number expected (listed only where larger than 0) for the case where all events are independent,
\[N_{n}=p_{n}N_{r}=\begin{pmatrix}N_{\mathrm{p}}\\ n\end{pmatrix}\left(\frac{1}{N_{r}}\right)^{n}\left(1-\frac{1}{N_{r}}\right)^{ N_{\mathrm{p}}-n}N_{r}, \tag{3}\]
where \(p_{n}\) is the probability of a given rotation to have \(n\) bursts (assuming a binomial distribution), \(N_{r}\) is the total number of rotations observed, and \(N_{\mathrm{p}}\) is the total number of bursts found (and where for numerical values we inserted numbers from Table 1: \(N_{\mathrm{p}}=N_{\mathrm{MP}}\) or \(N_{\mathrm{IP}}\) and \(N_{r}=t_{\mathrm{exp}}/P_{\mathrm{Crab}}\), where \(P_{\mathrm{Crab}}=33.7\)\(\mathrm{ms}\) is the rotation period of the pulsar). One sees that we detect significantly more multiples than expected by chance6, i.e., some of the detected pulses are composed of multiple, causally related microbursts.
Footnote 6: In Lin et al. (2023), we wrongly concluded the multiples were consistent with arising by chance. Sadly, we used incorrect estimates of \(N_{n}\).
In principle, one could estimate the number of independent bursts, \(N_{\mathrm{p}}^{\mathrm{ind}}\), in each epoch by subtracting from \(N_{\mathrm{p}}\) the excess pulses from Table 3, but this would not be quite correct since the excess would be relative to estimates made using the total number of observed pulses \(N_{\mathrm{p}}\), not the (lower) number of independent pulses \(N_{\mathrm{p}}^{\mathrm{ind}}\). One could iterate, but an easier, unbiased estimate of \(N_{\mathrm{p}}^{\mathrm{ind}}\) can be made using the observed fraction of rotations in which we do not see any bursts, which should equal \(N_{0}/N_{r}=p_{0}=\left(1-1/N_{r}\right)^{N_{\mathrm{p}}^{\mathrm{ind}}}\). Solving for \(N_{\mathrm{p}}^{\mathrm{ind}}\), we find that \(N_{\mathrm{p}}^{\mathrm{ind}}=fN_{\mathrm{p}}\) with fractions \(f\) that are consistent between all epochs, at \(91.8\pm 0.2\) and \(95.2\pm 0.5\)% for MP and IP, respectively. Hence, about 8 and 5% of the detected MP and IP pulses, respectively, are extra components. Or, as fractions of independent MP and IP pulses, \((6,1,0.12)\) and \((4,0.3,0.0)\%\), respectively, are causally related double, triple, or quadruple microbursts.
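The expected counts of Eq. (3) and the zero-count estimate of \(N_{\mathrm{p}}^{\mathrm{ind}}\) are straightforward to compute; the sketch below uses illustrative numbers of the same order as those in Tables 1 and 3, not the actual per-epoch values.

```python
import numpy as np
from scipy.stats import binom

P_CRAB = 33.7e-3                 # s, rotation period
t_exp = 2.0 * 3600.0             # s, illustrative exposure (per-epoch values are in Table 1)
N_r = int(t_exp / P_CRAB)        # number of rotations observed
N_p = 18000                      # illustrative number of detected MP bursts

# Eq. (3): expected number of rotations containing n bursts if bursts occur randomly.
for n in (2, 3, 4):
    print(n, N_r * binom.pmf(n, N_p, 1.0 / N_r))

# Zero-count estimate of the number of independent pulses, N_p^ind, from the
# observed fraction of rotations with no detected burst, N_0 / N_r.
N_0 = N_r - 16000                # illustrative number of rotations with no detected burst
N_p_ind = np.log(N_0 / N_r) / np.log(1.0 - 1.0 / N_r)
print(N_p_ind / N_p)             # fraction f of detected pulses that are independent
```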
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Observation & \multicolumn{5}{c}{MP} & \multicolumn{3}{c}{IP} \\ Code & 2 & 3 & 4 & 5 & 6 & 2 & 3 & 4 \\ \hline \hline EK036 A & 1820(599) & 200(12) & 24 & 0 & 0 & 144(17) & 4 & 2 \\ EK036 B & 1431(611) & 170(18) & 22 & 3 & 1 & 237(43) & 16 & 2 \\ EK036 C & 611(213) & 67(4) & 6 & 0 & 0 & 54(7) & 4 & 0 \\ EK036 D & 934(395) & 117(10) & 23 & 6 & 1 & 116(19) & 9 & 0 \\ \hline \end{tabular} Note: Numbers in parentheses are those expected if bursts occur randomly; for that case, one does not expect to find any rotations with 4 or more MP bursts or 3 or more IP bursts. Note that our GP detection method does not differentiate between microbursts and echoes, which becomes important for a few very bright pulses in EK036 D, for which echoes were present. In addition, we are not able to distinguish microbursts that occur very close together in time. The numbers of detections differ from Lin et al. (2023) because a different, more robust search algorithm is implemented here (see Section 4.1).
\end{table}
Table 3: Number of Rotations with Multiple Bursts.
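To make Equation (3) and the independent-burst estimate concrete, a minimal sketch is given below; the inputs are placeholders for a single epoch, not the values actually listed in Tables 1 and 3.

```python
# Sketch of the estimates behind Equation (3) and N_p^ind (placeholder inputs).
import numpy as np
from scipy.stats import binom

N_p = 20000                  # hypothetical number of detected MP bursts in the epoch
t_exp = 2.0 * 3600.0         # hypothetical exposure time in seconds
P_crab = 0.0337              # Crab Pulsar rotation period in seconds
N_r = t_exp / P_crab         # number of rotations observed

# Expected number of rotations containing n bursts if all bursts are independent.
for n in range(2, 7):
    expected = binom.pmf(n, N_p, 1.0 / N_r) * N_r
    print(f"n = {n}: expected rotations = {expected:.1f}")

# Unbiased estimate of the number of independent bursts from the empty rotations:
# N_0 / N_r = (1 - 1/N_r)**N_p_ind, solved for N_p_ind.
N_0 = N_r - 18000            # hypothetical number of rotations without a detection
N_p_ind = np.log(N_0 / N_r) / np.log(1.0 - 1.0 / N_r)
print(f"independent bursts: {N_p_ind:.0f} ({N_p_ind / N_p:.1%} of detections)")
```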
To investigate the distributions further, we show histograms of the time delay between pulses in Figure 7. Overdrawn are expectations for randomly arriving, independent pulses. We constructed these by bootstrapping, where we repeatedly reassign new random pulse cycles to our observed sets of pulses, and then recalculate the time delay distributions. Note that in our bootstraps, we do not randomize pulse phase, so that the observed phase distribution is correctly reflected in the time delays.
Figure 6: MP GP and IP GP fluence and count distributions as a function of pulse phase for each EK036 observation. We used pulse phase bins of \(0.1\%\) and fluence bins of \(0.1\ \mathrm{dex}\). The light purple line in the fluence panels shows the median for bins with more than \(2\) detected pulses.
As a function of pulse cycle (right column panels for MP and IP GPs in Figure 7), the observed histograms follow the expected exponential distribution (although the observed counts are slightly lower than the expected ones because not all pulses are independent, as is implicitly assumed in the bootstraps).
For the time delays between pulses that occur in the same cycle (left column panels for MP and IP GPs in Figure 7), the observed distributions are very different from those expected for randomly occurring bursts. One sees a large peak at short delays, representing the excess microbursts from Table 3, following a roughly exponential distribution with a mean time between bursts of \(\sim 30\;\mu\)s or so. Intriguingly, at somewhat larger time difference, there seem to be fewer bursts than expected for independent events. This suggests that while a given detection has an enhanced probability of being in a group of causally related microbursts, the occurrence of a burst also suppresses the likelihood of another, independent, burst being produced in the same rotation. Thus, our results confirm that GPs are often composed of multiple microbursts, and they indicate that another, independent GP is less likely to occur right after.
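The bootstrap comparison described above can be sketched as follows; the pulse list is synthetic, and the procedure (reassigning random rotations while keeping phases, then rebuilding the delay histograms) follows the description in the text rather than the actual analysis code.

```python
# Bootstrap sketch for the delay histograms of Figure 7 (synthetic pulse list).
import numpy as np

rng = np.random.default_rng(0)
P_crab = 0.0337                                   # rotation period in seconds
n_rot, n_pulse = 200_000, 15_000

rotations = rng.integers(0, n_rot, size=n_pulse)  # rotation index of each pulse
phases = rng.normal(0.0, 0.001, size=n_pulse)     # pulse phase (cycles), kept fixed

def delays(rot, ph):
    """Time delays between successive pulses, split at one rotation period."""
    t = np.sort((rot + ph) * P_crab)
    dt = np.diff(t)
    return dt[dt < P_crab], dt[dt >= P_crab]      # same-cycle, cycle-to-cycle

obs_same, obs_between = delays(rotations, phases)

# Randomize only the rotation of each pulse; the phase distribution is preserved.
boot = []
for _ in range(1000):
    new_rot = rng.integers(0, n_rot, size=n_pulse)
    same, _ = delays(new_rot, phases)
    boot.append(np.histogram(same, bins=50, range=(0.0, P_crab))[0])
expected_same = np.mean(boot, axis=0)             # compare to histogram of obs_same
```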
### Scattering Features
In Figure 6, one sees that in EK036 D, several MP GPs were detected at pulse phases quite far from the median phase. To investigate this, we looked at the arrival times of all GPs detected in EK036 D (see left panel of Figure 8). We found that the outliers occurred in two pulse rotations, which turned out to contain the brightest GPs in EK036 D. Looking at the pulse profiles of these brightest GPs, one sees that they are very similar (see right panels of Figure 8). In fact, closer
Figure 7: Time delays between successive GPs for the MP (in blue) and IP (in orange) components for each EK036 observation. On the left MP and IP columns, time delays within a pulse rotation are shown with bins of \(10\;\mu\)s and \(20\;\mu\)s for the MP and IP respectively; the low counts in the first bin reflect the minimum separation of \(8.75\;\mu\)s between detected pulses. On the right MP and IP columns, time delays in pulse rotations are shown with bins of \(1\) rotation and \(4\) rotations for the MP and IP respectively. The red lines show the average time delay histograms for \(1000\) bootstrap iterations, in which we randomized the rotation in which a pulse was seen (but not the phase, to keep the observed phase distribution).
examination reveals that all of the brightest GPs detected in EK036 D show similar pulse profiles. This implies that the pulses far from the median pulse phase arrive late because they are actually weak echoes of the main burst, with amplitudes down to \(\sim 0.4\%\) of the peak flux and delays up to \(\sim 300~{}\mu\)s.
In Figure 9, we show singular value decomposition (SVD) approximations of the average MP GP profile for each epoch (for the IP, too few bright pulses were available). This was created from MP GP rotations with peak intensities greater than \(200~{}\mathrm{Jy}\) and seemingly single peaks, aligned using time offsets found by correlation with a reference pulse. To avoid giving too much weight to the brightest pulses, and thus risking that remaining substructure enters the average profile, we normalized each rotation by the intensity at the correlation maximum before doing the SVD. One sees that all profiles are fairly sharply peaked, but sit on top of a base, which has the expected asymmetric part extending to later time due to scattering, as well as a more symmetric component, likely resulting from the collective effect of faint microbursts. Comparing the epochs, one sees that for EK036 A-C, the profile dropoff is relatively smooth and becomes undetectable after \(\sim\!200~{}\mu\)s, while in EK036 D, the tail is much longer, extending to \(\sim\!400~{}\mu\)s, and is much more bumpy.
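The profile construction can be sketched as follows; this is an assumed implementation of the steps described above, not the code used to make Figure 9.

```python
# Rank-1 SVD estimate of the average MP profile from bright, aligned rotations.
import numpy as np

def svd_profile(pulse_stack, reference):
    """pulse_stack: (n_rotations, n_time) intensities; reference: (n_time,) template."""
    n_time = reference.size
    aligned = []
    for row in pulse_stack:
        # lag that maximizes the cross-correlation with the reference pulse
        lag = int(np.argmax(np.correlate(row, reference, mode="same"))) - n_time // 2
        shifted = np.roll(row, -lag)
        # normalize near the peak so the brightest pulses do not dominate
        aligned.append(shifted / shifted[np.argmax(reference)])
    aligned = np.asarray(aligned)
    U, s, Vt = np.linalg.svd(aligned, full_matrices=False)
    profile = s[0] * U[:, 0].mean() * Vt[0]        # rank-1 approximation of the mean
    return profile if profile.max() > 0 else -profile  # fix the arbitrary SVD sign
```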
Almost certainly, all bumps are echoes, including those at shorter delay in EK036 B (more clearly seen in the linear-scale plots in Lin et al. 2023). Indeed, looking carefully at the stack of profiles in Figure 9, one sees that the echoes in EK036 D drift in time, moving slightly further away from the MP during the observation, with perhaps even a hint that echoes further away from the main bursts drift faster than those closer in. (Note that this stack is not completely linear in time, although given that the GP detection rate is roughly constant throughout, it is not far off.) This change in time is expected for echoes off a structure with changing distance from the line of sight, and indeed has been seen for a very prominent echo by Backer et al. (2000); Lyne et al. (2001). Overall, our observations suggest echoes are common, as also concluded from daily monitoring at \(600~{}\mathrm{MHz}\) by Serafin-Nadeau et al. (2023, in prep.).
Figure 8: _Left_: MP GPs and IP GPs detected in the EK036 D data. The gray shaded regions indicate when the telescope was not observing the Crab Pulsar and the black vertical lines mark our MP GP and IP GP windows. In the inset, we show two pulse rotations containing the brightest GPs βAβ and βBβ, in red and orange respectively. _Right, Top_: Waterfalls of the two brightest pulses in EK036 D with \(1~{}\mu\)s time resolution and \(1~{}\mathrm{MHz}\) frequency resolution. _Right, Bottom_: Pulse profile of the two brightest pulses in EK036 D with \(1~{}\mu\)s time resolution scaled to the peak of each pulse. Pulses βAβ and βBβ show similar features and we conclude that during the EK036 D observations, weak echoes were present at large delays.
## 5 Summary of Conclusions
The fine time resolution and high sensitivity in our beam-formed EVN data allowed us to confidently detect \(65951\) GPs with fluences above \(\sim 150\ \mathrm{Jy\ \mu s}\) over a short period of \(7.32\mathrm{hr}\). Within each of our four observations, we found that the GP detection rates are fairly constant, but that between epochs they differ by a factor of \(\sim\!2\). Similar changes were seen previously, and were suggested by Lundgren et al. (1995) to reflect changes in overall magnification of the scattering screens along the line of sight.
The changes in magnification are consistent with the pulse fluence distributions, which are power-law like at high fluence, but with a flattening at lower fluences; the distributions from the different epochs can be shifted to each other with a change in fluence scale. We noted that the fluence distributions are similar to what is expected for log-normal distributions, but found that the residual signals seen in the GP phase windows after removing the GPs we detected were larger than expected if the log-normal distribution continued also below our detection limit. Nevertheless, it suggests that with only somewhat more sensitive observations, it should be possible to get a fairly complete sampling of all GPs that contribute to the average flux, at least for the MP component.
Analyzing the pulse phase distributions, we confirm previous observations showing that the majority of GPs occur within very narrow phase windows. Furthermore, we observe no significant variations in the median flux distributions as a function of pulse phase. This suggests that it is the probability of observing a pulse that depends on pulse phase, not its energy, implying that the angle within which a pulse is emitted is much narrower than the rotational phase window, as expected if the plasma causing them is travelling highly relativistically (Bij et al., 2021; Lin et al., 2023).
With our high detection rates, we were able to investigate the distribution of time delays between successive bursts within the same pulse rotation. We detect a larger number than expected if all bursts were due to a Poissonian process, and infer that \(\sim\!5\%\) of bursts come in groups of 2 or 3 causally related microbursts, with a typical separation in time of \(\sim\!30\ \mu\)s.
Additionally, our high sensitivity revealed weak echo features for individual bright pulses, which drift slightly but sig
Figure 9: _Line plots_: SVD approximation of the MP pulse profile for all observations. In EK036 B, echoes are seen close to the profileβs peak (see Lin et al., 2023 for more details). The profile for EK036 D shows multiple weak echoes up to \(\sim\!300\ \mu\)s. _Image_: The MP pulse stack for EK036 D, using a logarithmic colour scale to bring out faint features. Each pulse is aligned by correlating with the rotation with the brightest pulse in EK036 D (which appears to be a simple single microburst) and then normalized by the intensity at time \(0\) (the black dashed line). The echoes appear to move out over time, as one can see by comparing the location of the most prominent faint echo with the dashed white vertical line near it (time is increasing both upwards and to the right in this image).
nificantly even over our timescales of just a few hours. We infer that echo events are not rare.
Given our findings, we believe even more sensitive follow-up studies of the Crab Pulsar would be very useful. This would be possible using more small dishes (spaced sufficiently far apart that the Crab Nebula is well-resolved) and by recording a larger bandwidth.
## Acknowledgements
We thank the anonymous referee for their comments, which improved the clarity of this manuscript. We thank the Toronto Scintillometry group, and in particular Nikhil Mahajan, for useful discussion on GP statistics. Computations were performed on the Niagara supercomputer at the SciNet HPC Consortium (Loken et al., 2010; Ponce et al., 2019). SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto. M.Hv.K. is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) via discovery and accelerator grants, and by a Killam Fellowship.
The European VLBI Network (EVN) is a joint facility of independent European, African, Asian, and North American radio astronomy institutes. Scientific results from data presented in this publication are derived from the following EVN project codes: EK036 A-D.
astropy (Astropy Collaboration et al., 2013, 2018, 2022), Baseband (Van Kerkwijk et al., 2020), CALC10 (Ryan & Vandenberg, 1980), numpy (Harris et al., 2020), matplotlib (Hunter, 2007), pulsarbat (Mahajan & Lin, 2023), scipy (Virtanen et al., 2020), tempo2 (Hobbs & Edwards, 2012).
|
2301.07687 | Maybe, Maybe Not: A Survey on Uncertainty in Visualization | Understanding and evaluating uncertainty play a key role in decision-making.
When a viewer studies a visualization that demands inference, it is necessary
that uncertainty is portrayed in it. This paper showcases the importance of
representing uncertainty in visualizations. It provides an overview of
uncertainty visualization and the challenges authors and viewers face when
working with such charts. I divide the visualization pipeline into four parts,
namely data collection, preprocessing, visualization, and inference, to
evaluate how uncertainty impacts them. Next, I investigate the authors'
methodologies to process and design uncertainty. Finally, I contribute by
exploring future paths for uncertainty visualization. | Krisha Mehta | 2022-12-14T00:07:06Z | http://arxiv.org/abs/2301.07687v1 | # Maybe, Maybe Not: A Survey on Uncertainty in Visualization
###### Abstract
Understanding and evaluating uncertainty play a key role in decision-making. When a viewer studies a visualization that demands inference, it is necessary that uncertainty is portrayed in it. This paper showcases the importance of representing uncertainty in visualizations. It provides an overview of uncertainty visualization and the challenges authors and viewers face when working with such charts. I divide the visualization pipeline into four parts, namely data collection, preprocessing, visualization, and inference, to evaluate how uncertainty impacts them. Next, I investigate the authors' methodologies to process and design uncertainty. Finally, I contribute by exploring future paths for uncertainty visualization.
## 1 Introduction
With a rise in the complexity and dimensionality of data, analyzing and modeling data becomes more challenging. When most of our decisions are data-driven, it becomes imperative that we know the nature of the data and the patterns it contains. As a result, analyzing the inherent uncertainty in the data is gaining more significance. In various fields, uncertainty can signify different things. For instance, data bias, random or systematic error, and statistical variance are all factors that contribute to data uncertainty. Without understanding the underlying uncertainty in our data, we cannot make accurate predictions. Similarly, to observe the true structure of our data and as well as identify patterns in it, we need to visualize it. Today, we can no longer undermine the significance of uncertainty nor ignore the importance of visualizations for data analysis.
As mentioned before, uncertainty is bound to exist whenever there is data. Therefore representation of uncertainty in data visualizations is crucial. Consider the example of hurricane path maps, as shown in Figure 1. The increase in the width of the predicted path with time is not due to an increase in the size of the hurricane. Instead, it is representing the inherent uncertainty in the data. In other words, the visualization indicates that compared to Friday, Sunday's hurricane path is more difficult to predict with any degree of accuracy.
When uncertainty is not portrayed in a visualization, information is withheld from the viewer, who may not even be aware of the exclusion. This breach of trust can have significant consequences for both the author and the viewer. Given this significance, it would be reasonable to assume that visualizations frequently include uncertainty. But how often do we encounter charts that represent uncertainty? How frequently do we check for bias in graphs that represent public surveys? As it turns out, not frequently.
In a recent study [9], 121 journalism articles, social science surveys, and economic estimates were examined. Out of 449 visualizations created for inference, the study demonstrates that only 14 accurately depict uncertainty. "What's Going on in This Graph?" is a New York Times (NYT) initiative to increase graphical literacy, especially among students. Different categories of charts, such as maps, parts-to-whole, and associations, are published for students to explore and analyze. When I looked into the distribution of these charts, I found that only 6 out of the 136 charts show uncertainty.
The question I ask is, do we actually examine uncertainty representations when we come across them in order to make decisions, or do we simply ignore them? Does uncertainty offer value or just clutter these visualizations? I try to investigate these questions in this paper. Visualizations are an integral part of newspapers, government bills, and business earnings reports to name a few. The public uses them to gain insights, spot trends, and make decisions.
Hence, when we visualize data, it becomes critical to support those visualizations with information about uncertainty. People frequently use visualizations to examine data and make observations. A lack of uncertainty representation could result in incorrect and erroneous interpretations. However, it can be challenging to visualize uncertainty. There are limited standard guidelines or protocols that authors can follow when they create such charts. Given these drawbacks, uncertainty visualization is considered one of the top research problems in data visualization [13]. With the help of a few uncertainty visualization examples, this survey studies how uncertainty contributes to every phase in visualization. Most research in this area focuses on creating charts with uncertainty and how viewers may perceive them. However, uncertainty is also influential in the other parts of the data visualization process, such as during data collection and preprocessing.
**The objectives of this paper are as follows:**
* Provide an entry point for anyone who wants to learn about uncertainty visualization
* Delineate the significance of uncertainty visualizations
* Explore how uncertainty influences every phase of the data visualization process
Figure 1: An example chart for Hurricane Matthew showing its five-day forecast track [5]
* Understand the challenges authors and viewers face when interacting with it
* Discuss the open problems and future research directions in the field
This work is divided into the following sections. Section 2 defines uncertainty and describes the relationship between uncertainty and visualization. In Section 3, I classify the data visualization pipeline into four phases, analyzing the involvement of uncertainty in each phase. The classification helps look at each phase individually, focusing on the challenges and bottlenecks authors and viewers face when working with uncertainty visualization. Finally, I study some state-of-the-art methods to visualize uncertainty and discuss future directions for research. I conclude the paper in Section 4.
## 2 Uncertainty and Visualization
Visualizations are incredibly important for examining, analyzing, and interpreting data in the era of big data. Visualizations are evidence that a picture really does say a thousand words. They aid viewers in seeing trends, background noise, and outliers. Asking the correct questions can be quite challenging when there is an abundance of data. Through visualizations, viewers can determine what questions the data can help answer. With improvements in hardware, software, and graphics theory, data visualizations are adopted more frequently and widely [26]. Viewers use visualizations to make decisions. However, making decisions and drawing observations by looking at visualizations can be complex due to the statistical variance and uncertainty present in these visualizations.
As mentioned previously, uncertainty can have different definitions based on different scenarios [3]. Broadly speaking, uncertainty is classified into two types, aleatory and epistemic. Aleatory uncertainty rises from random fluctuation and unknown outcomes when an experiment is run multiple times in a consistent environment. For example, in a drug trial, a participant's blood pressure can vary due to stress and anxiety. There might also be measurement errors in the sphygmomanometer. Aleatory uncertainty can be minimized by controlling individual factors and increasing the number of readings. Epistemic uncertainty, on the other hand, rises from a lack of knowledge, like predicting the outcome of the same experiment in a completely different, unknown environment. For example, predicting the effect of a drug on a new disease. Uncertainty can be measured, like risks but can also be unquantified, like bias. While aleatory uncertainty is more widely represented in the visualizations [25], both types can be represented with distribution graphs.
Uncertainty and visualizations are interweaved, and working with one often requires working with the other. In 1644, Michael Florent van Langren was one of the first researchers to use visualization for statistical analysis [25]. He used a 1D line graph to present the 12 known estimated longitudinal distances between Toledo and Rome, as shown in Figure 2. Instead of using a table to show this data, Langren used this graph to showcase the wide range of variation. Even though all the distances were over-estimated (actual distance, in longitude, is shown using the arrow), the graph remains classic in demonstrating the power of visualization.
The popular Anscombe's quartet [1] is a perfect example of how datasets with similar statistics can have very different distributions, which becomes apparent only when they are visualized. The quartet consists of four datasets with 11 points having nearly the same mean, sample variance, correlation, linear regression, and coefficient of determination. The four datasets may appear very similar to viewers looking at the data and the descriptive statistics. However, when one visualizes them, the difference in their distribution is very evident, as shown in Figure 3. Looking at data in tabular form may hide insightful observations and can lead to erroneous conclusions. Today, researchers across all domains use extensive libraries such as [12, 19, 22, 4, 11] to analyze data uncertainty.
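A quick way to reproduce this point (not part of the original quartet paper, and using seaborn's bundled copy of the data) is:

```python
# Anscombe's quartet: nearly identical summary statistics, very different plots.
import seaborn as sns
import matplotlib.pyplot as plt

df = sns.load_dataset("anscombe")                       # columns: dataset, x, y
print(df.groupby("dataset")[["x", "y"]].agg(["mean", "var"]))
print(df.groupby("dataset").apply(lambda g: g["x"].corr(g["y"])))  # ~0.816 for all four

sns.lmplot(data=df, x="x", y="y", col="dataset", col_wrap=2, ci=None, height=2.5)
plt.show()
```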
Using visualizations to represent and study uncertainty in data is widely adopted. However, uncertainty in visualizations is often not communicated [9]. One of the earliest instances of uncertainty being presented can be traced back to the 18th century. Joseph Priestley, a British scientist, created "A Chart of Biography" to present the lifespans of famous people as shown in Figure 4. He used horizontal lines to portray the lifetime of about 2000 people and used dots before or after the lines to communicate uncertainty.
Visualizations of uncertainty, however, are not common. Numerous factors influence why authors decide against visualizing uncertainty. Since they do not know all the information about the dataset, viewers may draw inaccurate conclusions in the absence of uncertainty representation. Nevertheless, introducing more uncertainty could also make the audience feel too overwhelmed to pay attention to it. The study of why visualizing uncertainty is rare is
Figure 4: Priestleyβs Chart of Biography [21]
Figure 3: Anscombeβs quartet represents four datasets with similar statistics but very different distributions.
Figure 2: Langrenβs line graph is one of the first visualizations to present uncertainty
still in its early stages. In the section that follows, I go through each of these issues in more detail and look at how uncertainty affects every stage of data visualization.
## 3 Uncertainty in Visualization
Previous works in the field have attempted to classify the data visualization process differently. [14] considers sampling, modeling, visualization, and decision-making as the primary sources of uncertainty. This paper follows a similar classification. I divide the visualization pipeline into **data collection, preprocessing, visualization and inference** as shown in Figure 5. Pang et al. [18] classify the process into data collection, derivation, and visualization and discuss how uncertainty is introduced in each stage.
Under the data collection phase, the paper mainly discusses the uncertainty added due to measurement errors. However, there are other sources, such as bias and sampling error, that the paper fails to describe. I investigate these uncertainties in Section 3.3.1. The authors then discuss the change data undergoes when it is preprocessed. These changes include converting one unit to another, rescaling, and resampling. However, they do not mention other vital issues such as missing data, approximation, and interpolation that I examine in Section 3.3.2. Next, the authors highlight how uncertainty also influences the data visualization stage itself. They mainly focus on radiosity and volume rendering, while this paper delves more into 2D visualizations. Finally, I explore how viewers infer these visualizations and the challenges they face while making a decision from these charts.
Uncertainty is presented at every phase of this classification. However, understanding and evaluating uncertainty in each of these phases is unique. Therefore, authors are required to approach these uncertainties based on their type and complexity, understand their abstraction, and then present them in visualizations in a way that is easy to grasp.
Given the interdisciplinary nature of visualizations, the format, quantity, and type of data used to create them vary immensely. Different data implies different data collection processes and uncertainties. Uncertainty is intertwined with data acquisition and can arise from random variables and modeling errors [14]. Pang et al. [18] explain how almost all acquired data has statistical variation. Collected data can have errors, bias, and variance. [23] study how bias can be introduced during the process of collecting data. Datasets are prone to various biases that include but are not limited to selection bias, volunteer bias, admission bias, survivor bias, and misclassification bias.
It is imperative that datasets resemble the true population as closely as possible. Data can also contain different types of errors, such as coverage error, sampling error, nonresponse error, and measurement error [7]. Missing data points is another common challenge researchers face during data collection.
Correcting these errors is not always possible, but they can be mentioned in the visualization to inform the viewer. However, uncertainty is often ignored when authors create visualizations. Other times this uncertainty in data is not communicated to them [9]. For example, when I analyze a piece called "Free Speech" (as shown in Figure 6) published in the What's Going On in This Graph section of the NYT [16], we can see how information about uncertainty from the data source is not mentioned directly in the graph. The bars of the graph do not sum to 100 percent since they are missing the no-response segment. The article mentions that the margin of error for the sample is +/- 3.1%, but the graph makes no mention of it.
Efforts are being made by researchers to improve the way uncertainty in the data collection phase is captured, processed, and communicated. Athawale et al. [2] propose using statistical summary maps to represent uncertainty in scalar field data caused by data acquisition.
### Data Preprocessing
Raw data is imperfect and can consist of noise and error. Once data is collected, it undergoes processing for accuracy and standardization. However, this phase adds uncertainty to the data that may not be immediately evident. For example, fundamental transformations like rounding off values, converting data from one unit to another, rescaling, resampling, and quantizing can add uncertainty [1]. Even though this might seem minor, the impact can be significant. For example, based on whether we take the value of pi as 22/7 (β3.14286) or 3.14159, the area of the Sun can vary by a difference of \(239\times 10^{6}\) sq. miles.
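The arithmetic behind this example can be checked directly, assuming a solar radius of roughly 432,690 miles and the disk area \(\pi r^{2}\):

```python
# Effect of rounding pi on the Sun's cross-sectional area (approximate radius assumed).
r = 432_690                          # miles
diff = (22 / 7 - 3.14159) * r**2     # difference between the two area estimates
print(f"{diff:.3e} square miles")    # about 2.4e8, i.e. ~239 million sq. miles
```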
A significant setback that most datasets suffer from is missing data. Data can have missing values for many reasons, such as instrument malfunction, incomplete observations, and lost data. Missing values leave a gap in the dataset, which makes room for uncertainty. Working with such uncertainty requires the authors to take extra measures during preprocessing. Authors attempt to find close estimates of the missing values to provide the viewers with a complete picture. One way to tackle this problem is by deleting the complete entry that has the missing value. This leads to a loss of data and insights. Another option is to make an educated guess about the missing value. However, this is highly unreliable and often not recommended. Using interpolation, imputation, or other techniques can induce errors [3].
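The two treatments mentioned above can be contrasted in a few lines (hypothetical series, for illustration only):

```python
# Dropping versus interpolating missing values; either choice adds uncertainty
# that the final chart should disclose (e.g., by encoding imputed points differently).
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, np.nan, 4.0, np.nan, 6.0], name="measurement")
dropped = s.dropna()            # discards two observations entirely
interpolated = s.interpolate()  # fills the gaps with linear estimates
print(pd.DataFrame({"raw": s, "interpolated": interpolated}))
```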
Sometimes, authors choose to encode these estimated values differently in their designs to inform the viewer about the gap in the dataset. However, how authors choose to visualize this encoding becomes very influential in how viewers perceive these graphs. Whether authors highlight, downplay, annotate or remove the missing values determines how much confidence and credibility the
Figure 5: The data visualization process divided into four stages to show how uncertainty affects each stage
Figure 6: Free Speech, a graph by the New York Times based on a national poll including 1,507 U.S. residents [16]
viewer shows in the visualization [24].
### Visualization Creation
Since uncertainty is ingrained in different parts of the data collection process, it is not easy to identify and control it. However, once the data is cleaned and processed, the authors face a new problem. Creating visualizations requires authors to make various decisions on behalf of the viewer. Authors are expected to choose the type of visualization based on data type, which may lead them to choose the scaling, sorting, ordering, and aesthetics [27]. Compelling visualizations are accurate and suggest an understanding and interpretation of data. Hence, it is the author's responsibility to analyze data correctly before creating any visualizations. Midway [15] describes ten design principles authors can follow to create charts. However, none of those principles discuss how uncertainty can be presented. Creating effective visualizations is hard. However, when we add uncertainty representation, the task becomes much more complex [17]. The data visualization community of researchers, designers, journalists, etc., has been reluctant to add uncertainty to their charts. Authors are aware of how significant uncertainty visualization is. Yet, they choose to exclude uncertainty when they design their charts for various reasons discussed below.
#### 3.2.1 Uncertainty is hard to represent
Though data is replete with uncertainty, the difficulty lies in determining if it should be represented and how. If the uncertainty has no direct relationship to the goal of the visualization, then it may not be included in the visualization. But this is not a conclusion that authors can quickly draw. The rise in techniques of visualizing uncertainty can make it harder for authors to decide which one to choose from. One of the biggest challenges in visualizing uncertainty is discovering and communicating the relationship and impact that the uncertainty has on the data. Data visualization is often a preferred choice for analysis due to its ability to present high-dimensional data. However, uncertainty also has dimensions, generally classified into scalar, vector, and tensor [20]. While scalar and vector fields of uncertainty are depicted in charts, tensor fields are often avoided. Mapping these dimensions of uncertainty along with the dimensions of data is challenging and often overlooked when creating charts. Instead, authors tend to simplify uncertainty to align with the dimensionality of the data.
#### 3.2.2 Uncertainty is hard to calculate and verify
Another reason why authors choose to exclude uncertainty from their charts is that calculating uncertainty is complex [9]. It is well known that even mathematicians and statisticians sometimes find it challenging to calculate the error or variance in a dataset. Verifying if the presented uncertainty is correct is challenging. Moreover, if the authors make an error while designing their charts, they end up providing wrong information to the viewers and losing their trust.
#### 3.2.3 Viewers may be overwhelmed
[9] explains why the inclusion of uncertainty in graphs is not widely adopted. Authors believe that uncertainty can be challenging for the viewers to perceive and understand. As a result, viewers may choose to either look at an alternative graph that does not contain any uncertainty representation or overlook the uncertainty in their graph altogether.
#### 3.2.4 Uncertainty can add clutter to the visualization
Authors can be unsure of how effective communicating uncertainty is. They also worry about adding more information to an already visually complex visualization. For many authors, the goal of a chart is to express a signal [9] that can be useful to their viewers. This signal tends to present a single point or a single source of truth. Uncertainty tends to challenge that notion by obfuscating the signal. Additionally, expressing the intricacy of uncertainty through a visual abstraction is challenging. The dimensionality of the data also plays a vital role in deciding whether uncertainty should be represented or not. An increase in the dimensionality of data makes it harder for the human visual system to perceive it effectively. Sometimes even two-dimensional charts can be overwhelming for the viewer. In such a case, representing uncertainty adds visual overload [20].
### Visualization Inference
Uncertainty is hard to understand and analyze. When faced with perceiving an uncertain visualization, viewers can get confused or derive inaccurate information from it. One easy method viewers tend to use is to ignore the uncertainty in the graph altogether. Another way is to substitute tricky calculations with easy ones or use heuristics to make decisions. However, this may not always give a correct observation. The most common approach to show uncertainty is by using box plots and error bars. Though widely used, viewers may find them challenging to analyze [6]. Sometimes visualizing uncertainty as frequency instead of distribution provides a better understanding.
Currently, research is being done to create visualizations that help understand uncertainty more intuitively. For example, hypothetical outcome plots (HOPs) represent uncertainty by animating a finite set of individual draws [10]. This approach expects no prior knowledge of the domain from the viewer. However, using HOPs in physical media might be challenging. Bubble treemaps [8] are another approach for visualizing uncertainty. These circular treemaps encode additional information about uncertainty by allocating additional space for visuals.
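A minimal HOP can be sketched with matplotlib's animation API; the draws below are synthetic, whereas a real HOP would sample from the fitted uncertainty distribution of each estimate.

```python
# Hypothetical outcome plot: show one random draw per frame instead of an error bar.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(1)
draws = rng.normal(loc=[3.0, 4.2], scale=[0.5, 0.8], size=(50, 2))  # 50 outcomes, 2 groups

fig, ax = plt.subplots()
bars = ax.bar(["A", "B"], draws[0])
ax.set_ylim(0, 7)

def update(frame):
    for bar, height in zip(bars, draws[frame]):
        bar.set_height(height)
    return bars

anim = FuncAnimation(fig, update, frames=len(draws), interval=400)
plt.show()
```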
While uncertainty is still underrepresented in visualizations, more researchers are slowly adding it to their designs. One of the significant setbacks in uncertainty visualizations for authors is calculating uncertainty, while for viewers, it is graphical literacy. Efforts can be taken to increase this literacy through different programs gradually. Furthermore, work should be done to understand what visualization type best suits a given uncertainty type. This relationship can also depend on the type of data being represented and the target audience viewing the graph. For example, it is necessary for graphs published in newspapers and reports to be easily understandable by the public. Hence, studies focusing on visualizing uncertainty with no prior knowledge or information can be very insightful.
## 4 Conclusion
Uncertainty visualization is one of the most complex research areas in data visualization today. This work provided an overview of uncertainty visualization and the relationship between uncertainty and visualization. I divided the visualization pipeline into four phases and surveyed papers to study how uncertainty interacts with each phase of the process. The work also investigated why the representation of uncertainty is not widely practiced by the data visualization community and the challenges viewers face when inferring from such a graph. Lastly, I discussed a few state-of-the-art methods to design uncertainty visualization and offered a glance into the interesting future research this field has to offer.
|
2309.09088 | Enhancing GAN-Based Vocoders with Contrastive Learning Under
Data-limited Condition | Vocoder models have recently achieved substantial progress in generating
authentic audio comparable to human quality while significantly reducing memory
requirement and inference time. However, these data-hungry generative models
require large-scale audio data for learning good representations. In this
paper, we apply contrastive learning methods in training the vocoder to improve
the perceptual quality of the vocoder without modifying its architecture or
adding more data. We design an auxiliary task with mel-spectrogram contrastive
learning to enhance the utterance-level quality of the vocoder model under
data-limited conditions. We also extend the task to include waveforms to
improve the multi-modality comprehension of the model and address the
discriminator overfitting problem. We optimize the additional task
simultaneously with GAN training objectives. Our results show that the tasks
improve model performance substantially in data-limited settings. | Haoming Guo, Seth Z. Zhao, Jiachen Lian, Gopala Anumanchipalli, Gerald Friedland | 2023-09-16T20:04:16Z | http://arxiv.org/abs/2309.09088v2 | # Enhancing Gan-Based Vocoders with Contrastive Learning Under Data-Limited Condition
###### Abstract
Vocoder models have recently achieved substantial progress in generating authentic audio comparable to human quality while significantly reducing memory requirement and inference time. However, these data-hungry generative models require large-scale audio data for learning good representations. In this paper, we apply contrastive learning methods in training the vocoder to improve the perceptual quality of the vocoder without modifying its architecture or adding more data. We design an auxiliary task with mel-spectrogram contrastive learning to enhance the utterance-level quality of the vocoder model under data-limited conditions. We also extend the task to include waveforms to improve the multi-modality comprehension of the model and address the discriminator overfitting problem. We optimize the additional task simultaneously with GAN training objectives. Our result shows that the tasks improve model performance substantially in data-limited settings. Our analysis based on the result indicates that the proposed design successfully alleviates discriminator overfitting and produces audio of higher fidelity.
Haoming Guo, Seth Z. Zhao, Jiachen Lian, Gopala Anumanchipalli, Gerald Friedland University of California, Berkeley
Footnote β : This paper is based on Haomingβs thesis [1] at University of California, Berkeley.
**Index Terms**: GAN, self-supervised learning, vocoder
## 1 Introduction
Generative Adversarial Networks (GANs) [2] have been widely used in vocoders and have achieved the state-of-the-art in the domain [3, 4, 5]. However, training GAN vocoders still faces two challenges: data insufficiency and discriminator overfitting.
In the realm of single-speaker speech synthesis, the limited size of available datasets poses a significant challenge. To enhance the performance of vocoders operating under such constraints, we propose the use of unsupervised learning techniques to extract additional self-supervised signals for training. Self-supervised learning (SSL) methods have demonstrated efficacy in a diverse array of speech domains, including representation learning [6, 7, 8, 9, 10], synthesis [11, 12, 13, 14], and multi-modality [15, 16]. Drawing on the exceptional transfer learning capabilities of SSL, we seek to harness this power in the realm of Vocoder modeling, focusing specifically on the application of contrastive learning. Although contrastive learning has been explored in the context of speech recognition [6], we are unaware of any previous efforts to apply this approach to Vocoder modeling. In this work, our aim is to leverage contrastive learning as an auxiliary task to enhance the vocoding performance of GAN generators under data-limited conditions.
The second challenge, discriminator overfitting, is also shown to be crucial, especially on small dataset [17, 18, 19], and the convergence of GAN also critically depends on the quality of discriminators [20]. Contrastive learning on the discriminator has been proved to alleviate this problem in image generation [21], and the method, in general, is also shown to increase model's performance and robustness on vision and language tasks [22, 23, 24, 25]. However, in speech synthesis, a naive approach of mel-spectrogram contrastive learning will only involve the generator, which encodes mel-spectrograms, but not the discriminator, which encodes the waveform. Therefore, we propose to extend the training to the discriminator by using a multi-modal contrastive task between mel-spectrograms and waveforms.
Our contributions can be summarized as the following.
1. We propose a contrastive learning task with masked mel-spectrograms to improve the performance on limited data.
2. We design a novel contrastive learning task of matching mel-spectrogram to waveforms to regularize the discriminator and improve the perceptual quality of the generator.
3. We implement a framework for integrating contrastive learning into the GAN training pipeline.
4. We provide experimental results and in-depth analysis of the methods' effectiveness compared to the baseline.
## 2 Methods
In this section, we first introduce the auxiliary contrastive task that we have designed for the GAN vocoder model. Subsequently, we explicate the details of how we modified the task to train both the generator and the discriminator of the
vocoder model. Finally, we illustrate our proposed training framework, which synergizes the contrastive task with GAN objectives. It is worth noting that we have utilized the same model architecture as HiFi-GAN [4]. However, it is pertinent to mention that our method can be applied to other GAN frameworks for vocoders as well.
### Mel-spectrogram Contrastive Learning
In our GAN model, the generator takes a mel-spectrogram as input and outputs a raw waveform through a stack of convolutional layers. We use a learnable feed-forward layer to project the features of the convolutional layers onto a latent space \(R^{D}\), where elements of similar semantics are close to each other through contrastive learning. For each anchor in a batch of \(N\) samples, we apply masking on randomly selected intervals in time and frequency to create a positive sample, while all other \((N-1)\) input samples and \((N-1)\) masked samples are used as negative samples. Together, the method results in \(1\) positive pair and \(2(N-1)\) negative pairs in the batch. We then adapt the InfoNCE loss [26] used in CLIP [27] for our loss function as follows:
\[\mathcal{L}_{cl}=-\frac{1}{N}\sum_{i=1}^{N}\left(\log\frac{\exp(\tau\mathbf{v}_{i}\cdot\mathbf{v}_{k})}{\sum_{j=1;j\neq i}^{2N}\exp(\tau\mathbf{v}_{i}\cdot\mathbf{v}_{j})}\right) \tag{1}\]
where \(\mathbf{v}_{k}\in R^{D}\) is the masked sample from \(\mathbf{v}_{i}\in R^{D}\) and \(\tau\) is a temperature parameter. This method is shown in Fig. 1.
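A PyTorch sketch of this loss is given below; it reflects our reading of Eq. (1), not the released training code, and the projection layers that produce \(\mathbf{v}\) are assumed to exist elsewhere.

```python
# InfoNCE over a batch of mel-spectrogram projections and their masked copies.
import torch
import torch.nn.functional as F

def mel_contrastive_loss(v, v_masked, tau=1.0):
    """v, v_masked: (N, D) projections of the original and masked mel-spectrograms."""
    N = v.size(0)
    candidates = torch.cat([v, v_masked], dim=0)         # (2N, D)
    logits = tau * (v @ candidates.t())                  # (N, 2N) scaled similarities
    # remove the anchor's similarity with itself (the j = i term in Eq. (1))
    self_mask = torch.zeros(N, 2 * N, dtype=torch.bool, device=v.device)
    self_mask[torch.arange(N), torch.arange(N)] = True
    logits = logits.masked_fill(self_mask, float("-inf"))
    targets = torch.arange(N, 2 * N, device=v.device)    # positive = own masked copy
    return F.cross_entropy(logits, targets)
```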
### Mel-spectrogram Waveform Contrastive Learning
In addition to training solely the generator, we propose a novel task that involves contrastive spectrogram-waveform matching. This task serves to train both the generator and the discriminators, promoting rich semantic representation and preventing overfitting of the discriminators to the real or fake classification. The method is illustrated in Fig. 2. For a batch of pairs of mel-spectrograms and waveforms, we assign the labels of the true pairs to be positive and those of the other pairs to be negative, resulting in \(N\) positive pairs and \(N(N-1)\) negative pairs in a batch of \(N\) samples. We use the backbone of the generator to encode the mel-spectrogram and the backbone of the discriminator to encode the waveform. Similar to the method in section 2.1, we use two separate feed-forward layers to project each encoded feature to the same latent dimension \(R^{D}\). Then, we perform the modified loss function
\[\mathcal{L}_{cl}=-\frac{1}{N}\sum_{i=1}^{N}\left(\log\frac{\exp(\tau\mathbf{v}_{i}\cdot\mathbf{w}_{i})}{\sum_{j=1;j\neq i}^{N}\exp(\tau\mathbf{v}_{i}\cdot\mathbf{w}_{j})}\right) \tag{2}\]
where \(\mathbf{w}_{i}\in R^{D}\) is the latent embedding of the waveform corresponding to the \(i\)th mel-spectrogram, \(\mathbf{v}_{i}\in R^{D}\) is the latent embedding of the \(i\)th mel-spectrogram, and \(\tau\) is a temperature parameter. HiFi-GAN contains multiple discriminators, so we calculate a contrastive loss between the mel-spectrogram embedding and each of the waveform embeddings and sum them up. For simplicity, we refer to them as one discriminator in this paper unless otherwise mentioned.
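In code form, our interpretation of Eq. (2) follows the standard CLIP-style formulation (with the positive pair kept in the denominator); again this is a sketch rather than the released implementation.

```python
# Contrastive matching of mel-spectrogram and waveform embeddings (true pairs positive).
import torch
import torch.nn.functional as F

def mel_wave_contrastive_loss(v_mel, w_wave, tau=1.0):
    """v_mel, w_wave: (N, D) projections from the generator and discriminator backbones."""
    logits = tau * (v_mel @ w_wave.t())                  # (N, N) similarity matrix
    targets = torch.arange(v_mel.size(0), device=v_mel.device)
    return F.cross_entropy(logits, targets)              # diagonal entries are positives
```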
### Multi-tasking Framework
To integrate contrastive learning with GAN tasks, we adopt a multi-tasking framework that makes auxiliary tasks a joint optimization objective with original learning goals [28]. As illustrated in Fig. 3, we create additional heads for the training
Figure 1: **Illustration of Mel-spectrogram Contrastive Learning.** The Mel Encoder is the backbone of the generator. This method only trains the generator in a GAN framework.
Figure 2: **Illustration of Mel-spectrogram & Waveform Contrastive Learning.** The Mel Encoder is the backbone of the generator, and the Wave Encoder is the backbone of the discriminator. Therefore, this method trains both the generator and discriminator.
generator and discriminator with auxiliary tasks. The total loss for training the vocoder model thus becomes:
\[\mathcal{L}_{G}=\mathcal{L}_{adv}+\lambda_{fm}\mathcal{L}_{fm}+\lambda_{mel}\mathcal{L}_{mel}+\lambda_{cl}\mathcal{L}_{cl} \tag{3}\]
\[\mathcal{L}_{D}=\mathcal{L}_{adv}+\mathcal{I}_{disc}\lambda_{cl}\mathcal{L}_{cl} \tag{4}\]
where \(\mathcal{L}_{G}\) is the total loss for the generator and \(\mathcal{L}_{D}\) is the total loss for the discriminator. \(\mathcal{L}_{adv}\) is the adversarial loss, \(\mathcal{L}_{fm}\) is the feature matching loss, and \(\mathcal{L}_{mel}\) is the mel-spectrogram reconstruction loss in the original HiFi-GAN training pipeline. \(\mathcal{L}_{cl}\) can be either of the contrastive losses described in section 2.1 or 2.2, and \(\mathcal{I}_{disc}\) is an indicator of whether the latter is used. Each loss is weighted with a \(\lambda\) coefficient which can be set as a hyperparameter. We use a \(\lambda_{fm}\) of 2, a \(\lambda_{mel}\) of 45 from the HiFi-GAN setting [4], and a \(\lambda_{cl}\) of 1.
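Schematically, the combination of Eqs. (3)-(4) in a training step looks as follows; placeholder scalar losses stand in for terms computed elsewhere, and this is not the released code.

```python
# Combining the auxiliary contrastive loss with the HiFi-GAN objectives.
import torch

# placeholders for losses computed earlier in the training step
loss_adv_g, loss_adv_d = torch.tensor(1.2), torch.tensor(0.9)
loss_fm, loss_mel, loss_cl = torch.tensor(0.3), torch.tensor(0.05), torch.tensor(0.7)

lambda_fm, lambda_mel, lambda_cl = 2.0, 45.0, 1.0
use_disc_cl = True   # True for the mel-waveform task (Eq. 4), False for the mel-only task

loss_G = loss_adv_g + lambda_fm * loss_fm + lambda_mel * loss_mel + lambda_cl * loss_cl
loss_D = loss_adv_d + (lambda_cl * loss_cl if use_disc_cl else 0.0)
# loss_D and loss_G are then back-propagated in the usual alternating GAN updates.
```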
## 3 Experiments
### Experimental Setting
In this section, we describe the details of our experimental settings including the dataset, model choice, hyperparameters and evaluation metrics.
#### 3.1.1 Dataset
In order to have a fair comparison with other vocoder models, we train the model on the LJSpeech dataset [29], which is also used in other vocoder works like HiFi-GAN [4]. LJSpeech is a public single-speaker dataset with 13100 short English audio clips whose durations span from 1 second to 10 seconds. We use the default data split with 12950 training samples and 150 validation samples. We use the same preprocessing configurations as HiFi-GAN, including 80 bands of mel-spectrograms as input and FFT size of 1024, window size of 1024, and hop size of 256 for conversion from waveform to mel-spectrograms [4].
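For reference, the stated configuration corresponds to a transform along these lines; this is a torchaudio-based sketch, and the authors' exact extraction code may differ, e.g., in normalization or mel scale.

```python
# 80-band mel-spectrogram with FFT/window size 1024 and hop size 256 (LJSpeech, 22.05 kHz).
import torchaudio

mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=22050,
    n_fft=1024,
    win_length=1024,
    hop_length=256,
    n_mels=80,
)
# waveform, sr = torchaudio.load("LJ001-0001.wav")
# mel = mel_transform(waveform)   # -> (channels, 80, frames)
```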
#### 3.1.2 Implementation details
For experimental comparison on audio quality, we choose the most powerful HiFi-GAN V1 and the most lightweight HiFi-GAN V3 as the baseline methods, and we use the same model architecture as the backbone to apply the contrastive tasks described in section 2.1 and 2.2. Under the multi-tasking framework, we train HiFi-GAN along with the contrastive learning methods with a batch size of 16, an AdamW optimizer, and a learning rate of 0.0002. For the following experiments on the full dataset, all models are trained for 400k steps (about 96 hours) on one Nvidia TITAN RTX GPU. The experiments on 20% of the dataset train for 300k steps (about 72 hours) on the same device, and those on 4% of the dataset train for 200k steps. The model inference time on GPU is about 70ms for V1 models and 32ms for V3 models.
#### 3.1.3 Evaluation metrics
To objectively evaluate our models compared to the baseline, we measure the mean absolute error (MAE) and mel-cepstral distortion (MCD) [30] on mel-spectrograms. On both metrics, lower scores indicate closer alignment with the ground truth. We also include a 5-scale mean opinion score (MOS) on audio quality as a subjective evaluation performed on 50 samples excluded from the training set.
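For the objective metrics, MCD between aligned mel-cepstral sequences is conventionally computed as below; this is the standard definition, and the paper's evaluation script may differ in detail, e.g., in the number of coefficients or the alignment used.

```python
# Mel-cepstral distortion between reference and synthesized cepstral sequences.
import numpy as np

def mcd(c_ref, c_syn):
    """c_ref, c_syn: (frames, n_coeff) mel-cepstral coefficients, 0th coefficient excluded."""
    diff = c_ref - c_syn
    const = (10.0 / np.log(10.0)) * np.sqrt(2.0)
    return const * np.mean(np.sqrt(np.sum(diff ** 2, axis=1)))
```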
\begin{table}
\begin{tabular}{l|c c|c} \hline \hline Model & MAE & MCD & MOS (CI) \\ \hline Ground Truth & - & - & 4.32 (\(\pm 0.05\)) \\ \hline HiFi-GAN V1 & **0.111** & **4.203** & **4.21** (\(\pm 0.05\)) \\ + Mel CL & 0.114 & 4.289 & 4.18 (\(\pm 0.06\)) \\ + Mel-Wave CL & 0.113 & 4.228 & 4.20 (\(\pm 0.05\)) \\ \hline HiFi-GAN V3 & **0.203** & 7.786 & 4.10 (\(\pm 0.05\)) \\ + Mel CL & 0.204 & 7.766 & **4.13** (\(\pm 0.07\)) \\ + Mel-Wave CL & **0.203** & **7.723** & 4.09 (\(\pm 0.06\)) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Objective and subjective evaluation results for models with mel-spectrogram contrastive loss (Mel CL) and mel-spectrogram waveform contrastive loss (Mel-Wave CL). Models are trained on the full training set. CI is the 95% confidence interval of the MOS score.
Figure 3: **Illustration of our multi-tasking frameworks.** GAN-based Vocoder models [3, 4] follow an adversarial network (**top**) consisting of a generator that generates raw waveforms from mel-spectrograms and a discriminator that aims to distinguish real from generated waveform samples. To incorporate the auxiliary contrastive learning task, we propose a multi-tasking (**bottom**) framework, in which we set the contrastive task as an additional learning objective alongside the original GAN optimization objectives. This framework applies to both contrastive learning methods described in sections 2.1 and 2.2.
### Results
We present the results of models trained on full data with the multi-tasking framework in Table 1. Below, we refer to Mel CL as the mel-spectrogram contrastive learning in section 2.1, and Mel-Wave CL as the mel-spectrogram waveform contrastive learning in section 2.2. For V1 models, the baseline performs slightly better than the proposed methods, by margins of 0.002 on MAE, 0.025 on MCD, and 0.01 on MOS. For V3 models, on the objective tests, we observe that the model trained with mel-spectrogram contrastive loss has comparable performance to the baseline, while the one trained with mel-spectrogram waveform contrastive loss achieves the best scores on both metrics. The results show that our proposed methods have at least comparable performance to the baseline HiFi-GAN when training on the full dataset. On the subjective tests, the V3 model with Mel CL achieves the highest MOS score, 0.03 above the V3 baseline. The model with Mel-Wave CL has a MOS score similar to the baseline on the full dataset. Overall, when trained on the full dataset, the proposed methods have limited gains on top of the baseline.
To investigate how each model performs under data limitation, we train the three models on 20% of the dataset and evaluate them with the same validation set. We present the results in Table 2. With less data, the baseline HiFi-GAN V3 suffers a significant performance degradation across all metrics, including 0.371 on MCD and 0.22 on MOS. Meanwhile, the V3 model trained with Mel CL experiences an increase of 0.194 on MCD and a drop of 0.18 on MOS. The V3 model trained with Mel-Wave CL has an increase of 0.251 on MCD and a drop of only 0.05 on MOS. This suggests that Mel-Wave CL is the most resistant to data insufficiency. The two proposed methods have comparable scores on the objective evaluation, but the model with Mel-Wave CL obtains a significantly higher score on the subjective test, 0.16 higher than the V3 baseline. These findings align with our hypothesis that Mel-Wave CL alleviates discriminator overfitting, a problem that is more severe on the small training dataset. On MOS, both proposed methods perform substantially better than the baseline, by 0.07 and 0.16 respectively.
A similar trend exists in the HiFi-GAN V1 experiments, where Mel-Wave CL achieves the best scores and the least performance drop on all metrics. One slightly surprising finding is that the larger model V1 often experiences a smaller performance drop compared to the smaller model V3 when trained on 20% data. Typically, a larger model is expected to be more prone to overfitting when trained on less data, which should lead to a larger performance drop. In this specific case, however, HiFi-GAN V1 has a larger generator but the same discriminator as HiFi-GAN V3 [4], which is our suspected reason for the finding. Overall, the results show the benefits of additional supervision signals from contrastive learning in data-limited situations and the superior performance of Mel-Wave CL on a small dataset.
## 4 Conclusion
This paper describes our proposed contrastive learning framework to improve GAN vocoders. Our results show the efficacy of using contrastive learning as an auxiliary task that facilitates vocoder training without adding more data or modifying the model architecture. We demonstrate that the proposed framework is especially beneficial when training on limited data, as it extracts additional supervision signals and reduces discriminator overfitting.
For future work, we plan to repeat the experiments on different model architectures and datasets to test our method's generalizability. In particular, we want to test its extension to multi-speaker datasets, another domain where data insufficiency is critical. We will also explore other metrics to evaluate the discriminator overfitting problem more holistically.
|
2307.16404 | Nonvolatile Magneto-Thermal Switching in MgB2 | Ongoing research explores thermal switching materials to control heat flow.
Specifically, there has been interest in magneto-thermal switching (MTS)
materials based on superconductors, which only exhibited switching behavior
when a magnetic field was applied. However, a recent report highlighted
nonvolatile MTS in commercial Sn-Pb solders, attributed to magnetic flux
trapping. In this study, we focused on flux trapping in a type-II
superconductor MgB2. Magnetization and thermal conductivity measurements under
magnetic fields were conducted on polycrystalline MgB2. We confirmed that
magnetic flux was indeed trapped in MgB2 even after demagnetization.
Additionally, we observed nonvolatile MTS in MgB2 as well as Sn-Pb solders.
These results suggest that the nonvolatile MTS may be a widespread
characteristic of superconducting materials with flux trapping. | Hiroto Arima, Yoshikazu Mizuguchi | 2023-07-31T04:59:19Z | http://arxiv.org/abs/2307.16404v1 | # Nonvolatile Magneto-Thermal Switching in MgB\({}_{2}\)
###### Abstract
Ongoing research explores thermal switching materials to control heat flow. Specifically, there has been interest in magneto-thermal switching (MTS) materials based on superconductors, which only exhibited switching behavior when a magnetic field was applied. However, a recent report highlighted nonvolatile MTS in commercial Sn-Pb solders, attributed to magnetic flux trapping. In this study, we focused on flux trapping in a type-II superconductor MgB\({}_{2}\). Magnetization and thermal conductivity measurements under magnetic fields were conducted on polycrystalline MgB\({}_{2}\). We confirmed that magnetic flux was indeed trapped in MgB\({}_{2}\) even after demagnetization. Additionally, we observed nonvolatile MTS in MgB\({}_{2}\) as well as Sn-Pb solders. These results suggest that the nonvolatile MTS may be a widespread characteristic of superconducting materials with flux trapping.
The recent advancements in electronic device technology have spurred research into thermal switching materials, which enable control of heat flow through external parameters[1; 2]. Recent progress has been made in the development of thermal switching materials, where the control of thermal conductivity (\(\kappa\)) is achieved through the application of electric[3] and magnetic fields[4; 5]. Among these materials, superconductors have received particular attention in magneto-thermal switching (MTS) research [6; 7]. Here, we introduce an index to assess the effectiveness of MTS known as the MTS ratio (MTSR). The MTSR is calculated as the ratio of the change in \(\kappa\) between the presence and absence of a magnetic field. The MTSR is expressed as [\(\kappa(H)\) - \(\kappa(0\) Oe)] / \(\kappa(0\) Oe). It is widely recognized that, in the normal state, heat is carried by charge carriers, whereas in the superconducting state, heat transport by Cooper pairs is negligible. Consequently, the phase transition from the superconducting state to the normal state results in an increase in \(\kappa\). Recent studies reported MTSR of 650 % for Nb[6] and over 1000 % for high purity 5N-Pb[7]. However, previously reported MTS using superconductors had a limitation, \(\kappa(H)\) returned to its initial value \(\kappa(0\) Oe) when the magnetic field was reduced to zero, indicating that MTS was effective only in the presence of a magnetic field. In the most recent discovery reported in arXiv: 2307.05957 (preprint)[8], a nonvolatile MTS, which retains the altered \(\kappa(H)\) even when the magnetic field is completely removed, has been identified. Surprisingly, this nonvolatile MTS material was discovered in commercially available Sn-Pb solders. The nonvolatile MTSR is defined as [\(\kappa\) (0 Oe, demagnetized) - \(\kappa(0\) Oe, initial)]/\(\kappa\) (0 Oe, initial), and it has been determined that the nonvolatile MTSR of flux-core-free Sn45-Pb55 solder was 150 %. The origin of nonvolatile MTS in Sn-Pb solders is attributed to the presence of magnetic flux trapped in the solder even after the applied magnetic field is removed, resulting in a partial loss of superconducting bulkiness at \(H=0\) Oe. While magnetic flux trapping in Sn-Pb solders is relatively rare due to both Sn and Pb being type-I superconductors, the magnetic flux trap after demagnetization is commonly observed in type-II superconductor samples.
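For readers reproducing the analysis, the two ratios above reduce to simple arithmetic on measured thermal conductivities. The following minimal Python sketch computes both; the demagnetized value used here is an illustrative placeholder (chosen to be roughly consistent with the 2.5 K results reported later), not measured data.

```python
# Minimal sketch: MTS ratio (MTSR) and nonvolatile MTSR from measured
# thermal conductivities. Numerical values are illustrative placeholders.

def mtsr(kappa_H, kappa_0):
    """MTSR = [kappa(H) - kappa(0 Oe)] / kappa(0 Oe)."""
    return (kappa_H - kappa_0) / kappa_0

def nonvolatile_mtsr(kappa_0_demag, kappa_0_init):
    """Nonvolatile MTSR = [kappa(0 Oe, demagnetized) - kappa(0 Oe, initial)]
    / kappa(0 Oe, initial)."""
    return (kappa_0_demag - kappa_0_init) / kappa_0_init

kappa_init = 6.9e-3    # W/Km at 0 Oe after zero-field cooling (2.5 K)
kappa_field = 14.0e-3  # W/Km at 10000 Oe (2.5 K)
kappa_demag = 8.1e-3   # W/Km back at 0 Oe after demagnetization (placeholder)

print(f"MTSR: {mtsr(kappa_field, kappa_init):.0%}")
print(f"Nonvolatile MTSR: {nonvolatile_mtsr(kappa_demag, kappa_init):.0%}")
```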
In this study, our primary focus is on exploring the occurrence of nonvolatile MTS in type-II superconductors, with particular emphasis on MgB\({}_{2}\), which has been studied for its flux trapping properties[9; 10]. MgB\({}_{2}\) was discovered in 2001 and stands out among intermetallic superconductors for having the highest superconducting transition temperature \(T_{\rm SC}\sim 39\) K under ambient pressure [11]. This compound exhibits a unique characteristic as a multi-gap superconductor, with multiple conduction bands and independent superconducting gaps present on the Fermi surface[12; 13]. Shortly after its discovery, it was observed that grain boundaries in MgB\({}_{2}\) could
serve as effective pinning centers, contributing to high critical current density (\(J_{\rm c}\)) in superconducting materials[14; 15; 16; 17]. Consequently, extensive research has been conducted to investigate the relationship between magnetic flux trapping at grain boundaries and \(J_{\rm c}\).
Until now, the association between magnetic flux trapping and nonvolatile MTS has solely been reported in Sn-Pb solders. To gain a deeper understanding of this phenomenon, it is essential to explore other materials. MgB\({}_{2}\) presents an appealing platform for investigating nonvolatile MTS due to the existing body of research on flux trapping effects at grain boundaries[9]. While previous studies have conducted thermal conductivity measurements under magnetic field on MgB\({}_{2}\)[18; 19], there has been no specific focus on nonvolatile MTS. In this study, magnetization measurements and thermal conductivity measurements under magnetic fields were conducted for commercial MgB\({}_{2}\). Notably, nonvolatile MTS was also observed in MgB\({}_{2}\).
Polycrystalline MgB\({}_{2}\) used in this experiment was a commercially available powder sample (99%, KOJUNDO). Before the measurements, the powder sample underwent a high-pressure sintering process. In this experiment, high-pressure sintering was performed at relatively low temperatures to suppress grain growth. The specific conditions for this high-pressure sintering entailed a pressure of 3 GPa and a temperature of 400 \({}^{\circ}\)C, sustained around 30 minutes. The crystal structure was examined through powder X-ray diffraction employing the Cu-K\(\alpha\) radiation using the \(\theta\)-2\(\theta\) method (Miniflex-600 RIGAKU). The Rietveld refinement of the XRD data was performed using the RIETAN-FP package[20]. The scanning electron microscope (SEM, TM3030, Hitachi High-Tech) was used for microstructure observation. The thermal conductivity was measured using a Physical Property Measurement System (PPMS, Quantum Design) equipped with a thermal transport option (TTO). The measurement employed a four-probe steady-state method, incorporating a heater, two thermometers, and a base-temperature terminal. For the thermal conductivity measurements of MgB\({}_{2}\), a cylindrical sample with a diameter of 4.61 mm and a height of 4.10 mm was employed. The magnetization measurements were carried out using a superconducting quantum interference device (SQUID) magnetometry technique, employing the Magnetic Property Measurement System (MPMS3, Quantum Design) in a VSM (vibrating sample magnetometry) mode. In this experiment, thermal conductivity measurements were conducted on a high-pressure sintered MgB\({}_{2}\) sample within a week. Subsequently, the sample was crushed, and further analyses including XRD and magnetization measurements, and SEM imaging were performed. All the experiments were carried out using the same batch of sample.
Figure 1 illustrates the XRD patterns obtained from the high-pressure sintered MgB\({}_{2}\) sample.
In the high-pressure sintered sample, MgB\({}_{4}\) and MgO were detected as impurities, alongside the main MgB\({}_{2}\) peaks. The reliability factor, denoted as \(R_{\rm wp}\), was determined to be \(R_{\rm wp}=3.7\) %, and the goodness-of-fit indicator, represented by \(S\), was calculated as \(S=1.8\). The results of Rietveld refinement indicated that the sample composition consisted of approximately 90 % MgB\({}_{2}\), 5 % MgB\({}_{4}\), and 5 % MgO. The as-purchased MgB\({}_{2}\) powder contained a similar amount of MgB\({}_{4}\) and MgO. The discrepancy with the nominal purity of 99% MgB\({}_{2}\) is likely a result of certain compounds not being accounted for in the chemical analysis. Furthermore, the XRD profile exhibited broadening, implying lattice strain induced by the high-pressure sintering process.
Figure 2 shows the SEM image of the high-pressure sintered MgB\({}_{2}\). Numerous granular grains were observed in the structure of the high-pressure sintered MgB\({}_{2}\), with the majority of the grain sizes measuring less than approximately 5 \(\mu\)m.
Figure 3 (a) illustrates the temperature dependence of the magnetization \(4\pi M\) measured at 10 Oe under both zero-field-cooling (ZFC) and field-cooling (FC) conditions. The magnetization measurement under ZFC demonstrates a large shielding signal below \(T_{\rm SC}\sim 39\) K. The difference between ZFC and FC measurements is a characteristic behavior commonly observed in type-II superconductors. The temperature dependence of \(4\pi M\) exhibited broadening, which has also been reported in previous studies on high-pressure sintered MgB\({}_{2}\)[17]. The exact cause of this broadening is not yet clear, but the inhomogeneity of the crystals likely plays a role, as suggested by the broad profile observed in the XRD measurement. Figure 3 (b) depicts the temperature dependence of \(4\pi M\) measured at 10 Oe after FC at three different fields : 1000 Oe, 10000 Oe, and 70000 Oe. In all cases, \(4\pi M\) exhibited ferromagnetic-like behavior below \(T_{\rm SC}\), similar to the findings of previously reported hydrogen-rich superconductors[21] and Sn-Pb solders[8], implying the presence of trapped magnetic flux at grain boundaries of MgB\({}_{2}\). The value of magnetization at 1.8 K increased as the field increased from 1000 Oe to 10000 Oe, but it did not change further with the application of a higher magnetic field. This suggests that the amount of trapped magnetic flux increases with the applied magnetic field, but there is a threshold where the trapped magnetic flux saturates. To further discuss, we show the \(4\pi M\)-\(H\) curves at 2.5 K and 4.0 K in Figs. 3(c) and 3(e), respectively. These curves display the distinct shape commonly observed in type-II superconductors, which signifies the presence of flux trapping in the material. As depicted in Figures 3(d) and 3(f), the inner magnetic flux density (\(B\)) given by \(B=H+4\pi M\) near 0 Oe is displayed at 2.5 K and 4.0 K. The results at 2.5 K and 4.0 K showed similarities: immediately after the zero-field-cooling, the initial magnetic flux density of MgB\({}_{2}\) was \(B=0\). However, upon applying a magnetic field to
MgB\({}_{2}\), \(B\) did not return to its initial value when the applied field reached \(H\) = 0, due to the magnetic flux trapping. The magnetic flux density trapped at \(H\) = 0 Oe was 500 G for both temperatures.
Figure 4 (a) depicts the temperature dependence of \(\kappa\) in both a zero magnetic field and a magnetic field of 10000 Oe. In the absence of a magnetic field, \(\kappa\) decreased as the temperature decreased. The observed variation in the slope of \(\kappa\) at approximately 10 K was consistent with previous measurements on polycrystalline MgB\({}_{2}\)[22]. Furthermore, \(\kappa\) at 50 K in this experiment was approximately 3.5 W/Km, which aligns with the order of magnitude reported in previous studies, where values ranged from 5 W/Km[23] to 9 W/Km[22]. It is noted that thermal conductivity is a sensitive indicator of grain boundaries, and therefore, the discrepancy with previous studies is attributed to the sample dependence. When a magnetic field of 10000 Oe was applied, a similar trend in \(\kappa\) was observed, but the decrease in \(\kappa\) was suppressed. This can be attributed to the suppression of the superconducting state in MgB\({}_{2}\) under the magnetic field. Figures 4(b) and 4(c) illustrate the magnetic field dependence of \(\kappa\) at 2.5 K and 4 K, respectively. When the MgB\({}_{2}\) was zero-field-cooled to 2.5 K, the initial \(\kappa\) in the absence of magnetic field was 6.9 mW/Km. When a magnetic field was applied, \(\kappa\) increased and reached a value of 14.0 mW/Km at 10000 Oe. As the magnetic field gradually decreased from 10000 Oe, \(\kappa\) showed a decrease. However, the value at 0 Oe deviated from the initial value, indicating nonvolatile MTS. Upon further reduction of the magnetic field, a minimum value of \(\kappa\) was observed, followed by an increase in \(\kappa\). Similar trends were observed when the magnetic field was increased from -10000 Oe. As mentioned earlier, the presence of approximately 500 G of trapped magnetic flux in MgB\({}_{2}\) after demagnetization partially suppresses the superconducting state and prevents \(\kappa\) from returning to its initial value. The nonvolatile MTSR observed in MgB\({}_{2}\) at 2.5 K in this experiment was 18 %, which is smaller than that of flux-core-free Sn45-Pb55 solder[8]. Furthermore, nonvolatile MTS was also observed at 4.0 K, although the nonvolatile MTSR decreased from the 2.5 K value, reaching 15 %.
The primary discovery of this study is the confirmation of nonvolatile MTS occurring in the magnetic flux trapped at the grain boundaries of the type-II superconductor MgB\({}_{2}\). This finding diverges from prior research, which predominantly focused on composites such as Sn-Pb solders. Notably, the phenomenon of flux trapping at grain boundaries has been observed not only in MgB\({}_{2}\) but also in other type-II superconductors, including cuprate superconductors and iron-based superconductors [24]. This suggests that the trapping of flux at grain boundaries is a widespread occurrence in various types of type-II superconducting materials. In this study, the maximum value of the nonvolatile MTSR achieved for MgB\({}_{2}\) remained relatively small at 18 % at 2.5 K. To
further enhance the nonvolatile MTSR, potential methods include controlling the grain boundary size to increase the trapped magnetic flux and regulating the thermal conductivity in the normal conducting region. However, further systematic investigations are required in this regard. Recent advancements in machine learning have contributed to the elucidation of heat conduction mechanisms in grain boundaries and nanopolycrystals [25]. Given that nonvolatile MTS is a relatively new phenomenon, it is crucial to not only investigate the thermal conductivity under magnetic field in various materials but also consider theoretical approaches that utilize machine learning to gain a deeper understanding of nonvolatile MTS.
The motivation for this study was derived from the discovery of nonvolatile MTS induced by magnetic flux trapping in Sn-Pb solders. Drawing inspiration from this phenomenon, our research focused on investigating the magnetic field dependence of thermal conductivity in type-II superconductor MgB\({}_{2}\), a material renowned for its ability to trap magnetic flux at grain boundaries. Through our experiments, we successfully observed nonvolatile MTS in MgB\({}_{2}\) and identified magnetic flux trapping as the underlying mechanism. Moving forward, it is imperative to extend this research to encompass other type-II superconductors with effective pinning centers. Such endeavors will contribute to a deeper understanding of nonvolatile MTS at a fundamental level and facilitate improvements in both the nonvolatile MTSR and the operational temperature range, thereby paving the way for potential engineering applications.
## Acknowledgment
We thank O. Miura and K. Uchida for their support in the experiments and fruitful discussions on the results. This work was partly supported by JST-ERATO (JPMJER2201), TMU Research Project for Emergent Future Society, and Tokyo Government-Advanced Research (H31-1).
|
2307.16410 | HiREN: Towards Higher Supervision Quality for Better Scene Text Image
Super-Resolution | Scene text image super-resolution (STISR) is an important pre-processing
technique for text recognition from low-resolution scene images. Nowadays,
various methods have been proposed to extract text-specific information from
high-resolution (HR) images to supervise STISR model training. However, due to
uncontrollable factors (e.g. shooting equipment, focus, and environment) in
manually photographing HR images, the quality of HR images cannot be
guaranteed, which unavoidably impacts STISR performance. Observing the quality
issue of HR images, in this paper we propose a novel idea to boost STISR by
first enhancing the quality of HR images and then using the enhanced HR images
as supervision to do STISR. Concretely, we develop a new STISR framework,
called High-Resolution ENhancement (HiREN) that consists of two branches and a
quality estimation module. The first branch is developed to recover the
low-resolution (LR) images, and the other is an HR quality enhancement branch
aiming at generating high-quality (HQ) text images based on the HR images to
provide more accurate supervision to the LR images. As the degradation from HQ
to HR may be diverse, and there is no pixel-level supervision for HQ image
generation, we design a kernel-guided enhancement network to handle various
degradation, and exploit the feedback from a recognizer and text-level
annotations as weak supervision signal to train the HR enhancement branch.
Then, a quality estimation module is employed to evaluate the qualities of HQ
images, which are used to suppress the erroneous supervision information by
weighting the loss of each image. Extensive experiments on TextZoom show that
HiREN can work well with most existing STISR methods and significantly boost
their performances. | Minyi Zhao, Yi Xu, Bingjia Li, Jie Wang, Jihong Guan, Shuigeng Zhou | 2023-07-31T05:32:57Z | http://arxiv.org/abs/2307.16410v1 | # HiREN: Towards Higher Supervision Quality for Better Scene Text Image Super-Resolution
###### Abstract
Scene text image super-resolution (STISR) is an important pre-processing technique for text recognition from low-resolution scene images. Nowadays, various methods have been proposed to extract text-specific information from high-resolution (HR) images to supervise STISR model training. However, due to uncontrollable factors (_e.g._ shooting equipment, focus, and environment) in manually photographing HR images, the quality of HR images cannot be guaranteed, which unavoidably impacts STISR performance. Observing the quality issue of HR images, in this paper we propose a novel idea to boost STISR by first enhancing the quality of HR images and then using the enhanced HR images as supervision to do STISR. Concretely, we develop a new STISR framework, called High-Resolution ENhancement (HiREN), that consists of two branches and a quality estimation module. The first branch is developed to recover the low-resolution (LR) images, and the other is an _HR quality enhancement_ branch aiming at generating high-quality (HQ) text images based on the HR images to provide more accurate supervision to the LR images. As the degradation from HQ to HR may be diverse, and there is no pixel-level supervision for HQ image generation, we design a kernel-guided enhancement network to handle various degradation, and exploit the feedback from a recognizer and text-level annotations as a weak supervision signal to train the HR enhancement branch. Then, a _quality estimation module_ is employed to evaluate the qualities of HQ images, which are used to suppress the erroneous supervision information by weighting the loss of each image. Extensive experiments on TextZoom show that HiREN can work well with most existing STISR methods and significantly boost their performances.
Scene text image super-resolution, scene text recognition, super-resolution, resolution enhancement
## I Introduction
Scene text recognition [1, 2] (STR), which aims at recognizing texts from scene images, has wide applications in scene text based image understanding (_e.g._ auto-driving [3], TextVQA [4], Doc-VQA [5], and ViteVQA [6]). Although STR has made great progress with the rapid development of deep learning in recent years, the performance of text recognition from low-resolution (LR) text images is still unsatisfactory [7]. Therefore, scene text image super-resolution (STISR) [8, 9, 7] is gaining popularity as a pre-processing technique to recover the missing details in LR images for boosting text recognition performance as well as the visual quality of the scene texts.
As shown in Fig. 1(a), recent STISR works usually try to directly capture pixel-level (via \(L1\) or \(L2\) loss) or text-specific information from high-resolution (HR) text images to supervise the training of STISR models. For instance, Gradient profile loss [7] calculates the gradient fields of HR images as ground truth for sharpening the boundaries of the super-resolution (SR) images. PCAN [10] is proposed to learn sequence-dependent features and high-frequency information of the HR images to better reconstruct SR text images. STT [8] exploits character-level attention maps from HR images to assist the recovery. [11] and TG [9] extract stroke-level information from HR images through specific networks to provide more fine-grained supervision information. [12, 13, 14] additionally introduce external modules to extract various text-specific clues to facilitate the recovery and use the supervision from HR images to finetune their modules.
Although various techniques that extract information from the HR images have been proposed to improve the recognition accuracy, they all assume that the HR images are completely trustworthy, which is actually not true, due to the uncontrollable factors (e.g. shooting equipment, focus, and environment) in manually photographing the HR images. As shown in Fig. 1(c), the HR images may suffer from blurring (the 1st and 2nd cases) and low contrast (the 3rd case), which unavoidably impacts the performance of STISR. In the worst case, these quality issues may cause recognition failures on HR images and lead to wrong supervision information. What is worse, the HR quality problem in the real world is by no means negligible, as the recognition accuracy on HR images can be as low as 72.4% (see Tab. II).
Considering the fact that improving the photographing of LR/HR images and eliminating environmental impacts are extremely expensive (if not impossible) in the wild, and applying huge models for extracting more accurate information is also time-consuming and costly, in this paper we propose a novel solution to advance STISR by first enhancing the quality of HR images and then using the enhanced HR images as supervision to perform STISR. To this end, we develop a new, general and easy-to-use STISR framework called **H**igh-**R**esolution **EN**hancement (HiREN) to improve STISR by providing more accurate supervision. In particular, as shown in Fig. 1(b), besides the typical LR recovery branch, HiREN additionally introduces an HR enhancement branch that aims at improving the quality of HR images and a quality estimation (QE) module to conduct a quality-aware supervision. Here, the
resulting high-quality (HQ) images, instead of the HR images as in existing works, are used to supervise the LR recovery branch. Note that since the degradation from HQ to HR is unknown and there is no explicit supervision for HR enhancement, existing STISR approaches cannot solve the HR enhancement task. To tackle these problems, on the one hand, we introduce a degradation kernel predictor to generate the degradation kernel and then use this kernel as a clue to enhance various degraded HR images. On the other hand, we exploit the feedback of a scene text recognizer and text-level annotations as a weak supervision signal to train the HR enhancement branch. What is more, to suppress the erroneous supervision information, a quality estimation (QE) module is proposed to evaluate the quality of the HQ images through the normalized Levenshtein similarity [15] between the recognized text and the ground truth, which is then used to weight the loss of each HQ image.
Such a design offers our method four-fold advantages:
* _General_. Our framework can work with most existing STISR approaches in a plug-and-play manner.
* _Easy-to-use_. After training the HR enhancement branch, our method can be easily plugged online into the training of existing techniques.
* _Efficient_. HiREN does not introduce additional cost during inference. What is more, HiREN can also be deployed offline by caching all the enhanced HR images. This offline deployment does not introduce any additional training cost.
* _High-performance_. Our method can significantly boost the performances of existing methods.
Contributions of this paper are summarized as follows:
* We propose a novel approach for STISR. To the best of our knowledge, this is the first work to consider and exploit the quality of HR images in STISR. That is, different from existing approaches that extract various text-specific information, our work pioneers the exploration of the quality issue of HR images.
* We develop a general, efficient and easy-to-use **H**igh-**R**esolution **EN**hancement (HiREN) framework to boost STISR by improving the supervision information from the HR images.
* We conduct extensive experiments on TextZoom, which show that HiREN is compatible with most existing STISR methods and can significantly lift their performances.
The rest of this paper is organized as follows: Section II surveys related works and highlights the differences between our method and the existing ones; Section III presents our method in detail; Section IV introduces the experimental results of our method and performance comparisons with existing methods; Section V further discusses the quality issues of HR images, error cases and limitations of the proposed method; Section VI concludes the paper while pinpointing some issues for future study.
## II Related Work
In this section, we briefly review the super-resolution techniques and some typical scene text recognizers. According to whether they exploit text-specific information from HR images, recent STISR methods can be roughly divided into two groups: generic super-resolution approaches and scene text image super-resolution approaches.
### _Generic Image Super-Resolution_
Generic image super-resolution methods [16, 17, 18, 19] usually recover LR images through pixel information
Fig. 1: Overview of existing STISR approaches and our method, and examples illustrating the quality problem of HR images. (a) The framework of existing STISR methods; (b) The HiREN framework; (c) Some examples of low-quality HR images and their enhanced results (HQ) by our method, as well as the recognized results. For each case, the 1st row shows HR and HQ images, the 2nd row presents the normalized HR and HQ images to highlight their visual differences, and the 3rd row gives the recognized characters: red indicates incorrectly recognized, and black means correctly recognized.
from HR images captured by pixel loss functions. In particular, SRCNN [20] is a three-layer convolutional neural network. [21] and SRResNet [22] adopt generative adversarial networks to generate distinguishable images. [23] employs convolutional layers, transposed convolution and sub-pixel convolution layers to extract and upscale features. RCAN [24] and SAN [25] introduce attention mechanisms to boost the recovery. Nowadays, transformer-structured approaches [26, 27, 28] are proposed to further advance the task of generic image super-resolution. Nevertheless, these approaches ignore text-specific properties of the scene text images, which leads to low recognition performance when applied to STISR.
### _Scene Text Image Super-Resolution_
Recent approaches focus on extracting various text-specific information from the HR images, which is then utilized to supervise model training. Specifically, [29, 30] calculate text-specific losses to boost performance. [31] proposes a multi-task framework that jointly optimizes recognition and super-resolution branches. [7] introduces TSRN and gradient profile loss to capture sequential information of text images and gradient fields of HR images for sharpening the texts. PCAN [10] is proposed to learn sequence-dependent and high-frequency information of the reconstruction. STT [8] makes use of character-level information from HR images extracted by a pre-trained transformer recognizer to conduct a text-focused super-resolution. [32] proposes a content perceptual loss to extract multi-scale text recognition features to conduct a content-aware supervision. TPGSR [12], TATT [13], and C3-STISR [14] extract text-specific clues to guide the super-resolution. In particular, TPGSR is the first method that additionally introduces a scene text recognizer to provide text priors. The extracted priors are then fed back into the super-resolution network to iteratively improve the recovery. TATT [13] introduces a transformer-based module, which leverages a global attention mechanism, to exert the semantic guidance of the text prior on the text reconstruction process. C3-STISR [14] is proposed to learn triple clues, including a recognition clue from a STR, a linguistic clue from a language model, and a visual clue from a skeleton painter, to enrich the representation of the text-specific clue. Compared with generic image super-resolution approaches, these methods greatly advance the recognition accuracy through various text-specific information extraction techniques. Nevertheless, they all assume that HR images are completely trustworthy, which is actually not true in practice. As a result, their extracted supervision information may be erroneous, which impacts the STISR performance. Since HiREN applies these methods to implement the LR recovery branch, to elaborate the differences among the various super-resolution techniques in this paper, we give a summary of these methods in Tab. I on three major aspects: how their super-resolution blocks and loss functions are designed, and whether they use an iterative super-resolution technique to boost the performance.
### _Scene Text Recognition_
Scene text recognition (STR) [33, 1, 2, 34, 35] has made great progress in recent years. Specifically, CRNN [36] takes CNN and RNN as the encoder and employs a CTC-based [37] decoder to maximize the probabilities of paths that can reach the ground truth. ASTER [38] introduces a spatial transformer network (STN) [39] to rectify irregular text images. MORAN [40] proposes a multi-object rectification network. [41, 42, 43] propose novel attention mechanisms. AutoSTR [44] searches backbone via neural architecture search (NAS) [45]. More recently, semantic-aware [46, 43], transformer-based [47], linguistics-aware [48, 49], and efficient [50, 51] approaches are proposed to further boost the performance. Although these methods are able to handle irregular, occluded, and incomplete text images, they still have difficulty in recognizing low-resolution images. For example, as can be seen in Sec. IV-C, CRNN, MORAN, and ASTER only achieve the recognition accuracy of 27.3%, 41.1% and 47.2% respectively when directly using LR images as input. What is more, finetuning these recognizers is insufficient to accurately recognize texts from LR images, as reported in [7]. Therefore, a pre-processor is required for recovering the details of low-resolution images.
### _Difference between Our Method and Existing STISR Works_
The motivation of HiREN is totally different from that of existing STISR approaches. As described above, existing methods focus on extracting text-specific information from HR images to supervise STISR. In contrast, HiREN first lifts the quality of HR images, then uses the enhanced images to supervise STISR. This allows HiREN to work with most existing STISR approaches and boost their recognition performances in a general, economical and easy-to-use way.
## III Method
Here, we first give an overview of our framework HiREN, then briefly introduce the LR recovery branch. Subsequently, we present the HR enhancement branch and the quality estimation module in detail, followed by the usage of HiREN.
### _Overview_
Given a low-resolution (LR) image \(I_{LR}\in\mathbb{R}^{C\times N}\). Here, \(C\) is the number of channels of the image, \(N=H\times W\) is the collapsed spatial dimension, \(H\) and \(W\) are the height and width of image \(I_{LR}\). Our aim is to produce a super-resolution (SR)
\begin{table}
\begin{tabular}{c|c c c} \hline Method & Super-resolution block & Loss function \(\mathcal{L}_{LR}\) & Iterative \\ \hline SRCNN [20] & SRCNN [20] & MSE & \(\times\) \\ SRResNet [22] & SRResNet [22] & MSE & \(\times\) \\ TSRN [7] & SSB [7] & Gradient profile loss [7] & \(\times\) \\ PCAN [10] & PCA [10] & Edge guidance loss [10] & \(\times\) \\ STT [8] & TBSRN [8] & Text-focused loss [8] & \(\times\) \\ TPGSR [12] & SRN [7] & Gradient profile loss [7] & \(\checkmark\) \\ TG [9] & SSB [7] & Stroke-focused loss [9] & \(\times\) \\ \hline \end{tabular}
\end{table} TABLE I: Differences between typical STISR methods from three aspects: super-resolution block, loss function, and whether this method is iterative or not.
image \(I_{SR}\in\mathbb{R}^{C\times(4\times N)}\) with a magnification factor of \(\times 2\). Fig. 2 shows the architecture of our framework HiREN, which is composed of two major branches and a module: the _LR recovery branch_\(f_{LR}\) that takes \(I_{LR}\) as input to generate a super-resolution image \(I_{SR}=f_{LR}(I_{LR})\) and a corresponding loss \(\mathcal{L}_{o}\), the _HR enhancement branch_\(f_{HR}\) that takes \(I_{HR}\) as input to generate a high-quality (HQ) image \(I_{HQ}=f_{HR}(I_{HR})\) where \(I_{HQ}\in\mathbb{R}^{C\times(4\times N)}\), and a _quality estimation module_\(f_{QE}\) that takes \(I_{HQ}\) and \(\mathcal{L}_{o}\) as input to compute a quality-aware loss \(\mathcal{L}_{LR}\) to supervise the LR branch:
\[\mathcal{L}_{LR}=f_{QE}(I_{HQ},\mathcal{L}_{o}). \tag{1}\]
During inference, \(f_{HR}\) and \(f_{QE}\) are removed. Thus, HiREN does not introduce extra inference cost.
### _LR Recovery Branch_
In HiREN, the LR recovery branch can be one of the existing STISR approaches. As shown in Fig. 2, these methods usually work in the following way: 1) Start with a spatial transformer network (STN) [39], since in the TextZoom dataset [7] the HR-LR pairs are manually cropped and matched by humans, which may incur several pixel-level offsets. 2) Several super-resolution blocks are used to learn sequence-dependent information of text images. 3) A pixel shuffle module is employed to reshape the super-resolved image. 4) Various loss functions serve as \(\mathcal{L}_{o}\) to extract text-specific information from the ground truth (\(I_{HR}\) in existing works, \(I_{HQ}\) in HiREN) to provide the supervision.
As the motivation of HiREN is totally different from that of the existing methods, our method can work with most of them and significantly improve their performances.
### _HR Enhancement Branch_
#### Iii-C1 Overall introduction.
The enhancement of HR images is a challenging task, where the challenges lie in two aspects that will be detailed in the sequel. Formally, the HR image \(I_{HR}\) and the corresponding HQ image \(I_{HQ}\) we are pursuing are connected by a degradation model as follows:
\[I_{HR}=k\otimes I_{HQ}+n, \tag{2}\]
where \(\otimes\) denotes the convolution operation, \(k\) is the degradation kernel, and \(n\) is the additive noise that follows Gaussian distribution in real world applications [52, 53]. Different from the degradation from \(I_{HR}\) to \(I_{LR}\) where the kernel is determined by lens zooming, unfortunately, the degradation \(k\) of \(I_{HQ}\) is unknown. As shown in Fig. 1(c), such degradation can be but not limited to blurring (the 1st and 2nd cases) and low-contrast (the 3rd case). What is more, we also lack pixel-level supervision information of \(I_{HQ}\). These two challenges make existing STISR methods unable to enhance \(I_{HR}\). To cope with the first challenge, here we adopt blind image deblurring techniques [54, 55, 53, 52] to boost the recovery of \(I_{HR}\). Specifically, as shown in Fig. 2, our HR enhancement branch consists of two components: a _kernel predictor_\(P\) and a _kernel-guided enhancement network_\(f_{ke}\). The kernel predictor aims at estimating the degradation kernel \(k\) (_i.e.,_\(k=P(I_{HR})\) where \(k\in\mathbb{R}^{d}\), and \(d\) is the size of the kernel), while the kernel-guided enhancement network takes the predicted kernel and \(I_{HR}\) as input to conduct a kernel-guided enhancement: \(I_{HQ}=f_{ke}(I_{HR},k)\). The predicted kernel is utilized as a clue to strengthen the model's ability to handle various degradation and boost the recovery of HR images. As for the second challenge, we introduce a pre-trained scene text recognizer \(R\) to provide the supervision for generating more recognizable HQ images. And after training the HR enhancement branch \(f_{HR}\), HiREN uses the trained \(f_{HR}\) to generate HQ images, which are exploited for training the LR recovery branch.
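To make Eq. (2) concrete, the sketch below synthesizes a degraded image from a hypothetical HQ tensor using an assumed isotropic Gaussian blur kernel and additive Gaussian noise; the kernel shape, noise level, and image size are illustrative assumptions only, since the true degradation in our setting is unknown.

```python
import torch
import torch.nn.functional as F

def degrade(I_HQ: torch.Tensor, k: torch.Tensor, sigma_n: float = 0.01) -> torch.Tensor:
    """I_HR = k * I_HQ + n (Eq. 2), applied depthwise with additive Gaussian noise.
    I_HQ: (B, C, H, W); k: (kh, kw) blur kernel."""
    C = I_HQ.shape[1]
    kh, kw = k.shape
    weight = k.expand(C, 1, kh, kw).contiguous()   # one copy of the kernel per channel
    blurred = F.conv2d(I_HQ, weight, padding=(kh // 2, kw // 2), groups=C)
    return blurred + sigma_n * torch.randn_like(blurred)

# Assumed 5x5 isotropic Gaussian kernel (sigma = 1.0) for illustration.
ax = torch.arange(5, dtype=torch.float32) - 2.0
g = torch.exp(-ax ** 2 / 2.0)
kernel = torch.outer(g, g)
kernel /= kernel.sum()

I_HQ = torch.rand(1, 3, 32, 128)  # a hypothetical HQ text image
I_HR = degrade(I_HQ, kernel)
```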
#### Iii-C2 The kernel predictor.
As shown in Fig. 3, to generate a prediction of the degradation kernel, we first utilize convolution layers to obtain a spatial estimation of the kernel. Then, we employ global average pooling [56] to output the global prediction by evaluating the spatial mean value. Thus, we can
Fig. 2: The framework of HiREN. Red lines are valid only during training.
get the prediction of the kernel of size \(\mathbb{R}^{d}\), in a simple yet effective way.
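A minimal PyTorch sketch of this predictor is given below; the layer widths and depth are assumptions for illustration, while the \(d\)-dimensional output obtained by global average pooling follows the description above.

```python
import torch
import torch.nn as nn

class KernelPredictor(nn.Module):
    """Sketch of P: convolution layers produce a spatial, d-channel estimate of
    the degradation kernel; global average pooling (the spatial mean) reduces it
    to a single d-dimensional prediction per image."""

    def __init__(self, in_channels: int = 3, d: int = 32, hidden: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, d, 3, padding=1),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)

    def forward(self, I_HR: torch.Tensor) -> torch.Tensor:
        spatial_kernel = self.features(I_HR)         # (B, d, H, W)
        return self.gap(spatial_kernel).flatten(1)   # (B, d)

# k = KernelPredictor()(torch.rand(4, 3, 32, 128))   # -> k.shape == (4, 32)
```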
#### Iii-C3 The kernel-guided enhancement network.
As shown in Fig. 3, our kernel-guided enhancement network is designed in the following way: 1) Start with an input convolution to change the channel number from \(C\) to \(C^{\prime}\). 2) Repeat \(N\) modified SRB blocks [7]. Each block consists of two convolution layers and one Bi-directional GRU [57] (BGRU) to handle sequential text images. At this step, we first stretch the predicted kernel \(k\) to pixel shape, then concatenate the pixel kernel with the feature map extracted by the convolution layers at the channel dimension. 3) An output convolution is applied to get the final enhanced HQ image \(I_{HQ}\).
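The sketch below mirrors this design in PyTorch; the exact SRB internals, activation choices, kernel sizes, and residual connections are simplifying assumptions rather than the implementation of [7].

```python
import torch
import torch.nn as nn

class SRBWithKernel(nn.Module):
    """One modified SRB: two convolutions (the first takes the feature map
    concatenated with the pixel-shaped kernel) followed by a bidirectional GRU
    over the width dimension."""

    def __init__(self, channels: int, d: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels + d, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bgru = nn.GRU(channels, channels // 2, bidirectional=True, batch_first=True)

    def forward(self, x: torch.Tensor, k_pix: torch.Tensor) -> torch.Tensor:
        feat = torch.relu(self.conv1(torch.cat([x, k_pix], dim=1)))
        feat = torch.relu(self.conv2(feat))
        B, C, H, W = feat.shape
        seq = feat.permute(0, 2, 3, 1).reshape(B * H, W, C)   # treat width as a sequence
        seq, _ = self.bgru(seq)
        feat = seq.reshape(B, H, W, C).permute(0, 3, 1, 2)
        return feat + x                                       # assumed residual connection

class KernelGuidedEnhancer(nn.Module):
    """Minimal sketch of f_ke: input conv (C -> C'), N kernel-guided SRBs,
    output conv (C' -> C)."""

    def __init__(self, in_channels: int = 3, c_prime: int = 32, d: int = 32, n_blocks: int = 5):
        super().__init__()
        self.conv_in = nn.Conv2d(in_channels, c_prime, 9, padding=4)
        self.blocks = nn.ModuleList(SRBWithKernel(c_prime, d) for _ in range(n_blocks))
        self.conv_out = nn.Conv2d(c_prime, in_channels, 9, padding=4)

    def forward(self, I_HR: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
        B, _, H, W = I_HR.shape
        k_pix = k.view(B, -1, 1, 1).expand(B, k.shape[1], H, W)  # stretch kernel to pixel shape
        x = self.conv_in(I_HR)
        for block in self.blocks:
            x = block(x, k_pix)
        return self.conv_out(x)
```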
#### Iii-C4 Loss functions.
Here, we design the loss functions of the HR enhancement branch \(f_{HR}\). As shown in Fig. 2, there are two loss functions in \(f_{HR}\). The first one is the recognition loss \(\mathcal{L}_{rec}\), which is used to make the enhanced image \(I_{HQ}\) more easily recognized than \(I_{HR}\). It is provided by a pre-trained recognizer \(R\) and the text-level annotation of \(I_{HR}\). Suppose the encoded text-level annotation is \(p_{GT}\in\mathbb{R}^{L\times|\mathcal{A}|}\), where \(L\) is the max prediction length of recognizer \(R\), and \(|\mathcal{A}|\) denotes the length of the alphabet \(\mathcal{A}\). Then, the recognition loss can be evaluated by
\[\mathcal{L}_{rec}=-\sum_{j=0}^{L}p_{GT}^{j}log(R(I_{HQ})^{j}), \tag{3}\]
which is the cross entropy of \(p_{GT}\) and \(R(I_{HQ})\). Besides the recognition loss, it is essential to keep the style of the enhanced images, which has also been pointed out in a recent work [8]. Though HR images are not trustworthy, pixel information from HR images can help the model to enhance the input images, rather than totally regenerating them, which would be a much more challenging and uncontrollable task. In HiREN, we use the mean-squared-error (MSE) as the pixel loss to keep the style unchanged. Formally, we have
\[\mathcal{L}_{sty}=||I_{HQ}-I_{HR}||_{2}^{2}. \tag{4}\]
With the recognition loss Eq. (3) and the style loss Eq. (4), the whole loss function of the HR enhancement branch can be written as follows:
\[\mathcal{L}_{HR}=\alpha\mathcal{L}_{rec}+\mathcal{L}_{sty}, \tag{5}\]
where \(\alpha\) is a hyper-parameter to trade-off the two losses.
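A compact sketch of this combined objective is shown below; the recognizer interface (per-step character probabilities) and the one-hot encoding of the annotation are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def hr_enhancement_loss(recognizer, I_HQ, I_HR, p_GT, alpha: float = 0.1):
    """Sketch of Eq. (5): L_HR = alpha * L_rec + L_sty.
    `recognizer` is assumed to return per-step character probabilities of shape
    (B, L, |A|); p_GT is the one-hot encoded text annotation of the same shape."""
    probs = recognizer(I_HQ)                                        # (B, L, |A|)
    # Eq. (3): cross entropy of p_GT and R(I_HQ), summed over decoding steps
    rec_loss = -(p_GT * torch.log(probs + 1e-8)).sum(dim=-1).sum(dim=-1).mean()
    # Eq. (4): MSE style loss keeping the enhanced image close to the HR input
    sty_loss = F.mse_loss(I_HQ, I_HR)
    return alpha * rec_loss + sty_loss
```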
### _Quality Estimation Module_
Though we can improve the quality of supervision information with the help of the HR enhancement branch, we cannot guarantee the correctness of the supervision information. Therefore, to suppress wrong supervision information, we design a quality estimation module \(f_{QE}\) to evaluate the qualities of HQ images and weight the losses of HQ images according to their qualities.
Let the original loss of the LR branch be \(\mathcal{L}_{o}\in\mathbb{R}^{B}\), where \(B\) denotes the batch size. We adopt the Levenshtein similarity [15] between the \(i\)-th HQ image's recognition result \(pred_{i}\) of a recognizer \(R\) and the corresponding ground truth \(gt_{i}\) to measure its quality, and then utilize the quality values of all HQ images to compute the final loss:
\[\mathcal{L}_{LR}=\mathcal{L}_{o}[NS(pred_{1},gt_{1}),...,NS(pred_{B},gt_{B})] ^{\top}/B, \tag{6}\]
where \(NS(\cdot,\cdot)\) denotes the Levenshtein similarity, which has the following two advantages: 1) its value falls between 0 and 1; 2) it has a smooth response, thus can gracefully capture character-level errors [58]. These advantages make it suitable to weight the losses of HQ images.
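The sketch below illustrates the module; the normalization used here (one minus edit distance over the longer string length) is a common choice and is assumed, since the exact normalization of [15] is not spelled out above.

```python
import torch

def levenshtein_distance(a: str, b: str) -> int:
    """Standard edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def normalized_levenshtein_similarity(pred: str, gt: str) -> float:
    """NS(pred, gt) in [0, 1]: 1 - edit_distance / max string length."""
    longest = max(len(pred), len(gt))
    return 1.0 if longest == 0 else 1.0 - levenshtein_distance(pred, gt) / longest

def quality_aware_loss(loss_o: torch.Tensor, preds, gts) -> torch.Tensor:
    """Eq. (6): weight the per-sample losses L_o (shape (B,)) by the quality of
    the corresponding HQ images and average over the batch."""
    weights = torch.tensor([normalized_levenshtein_similarity(p, g)
                            for p, g in zip(preds, gts)],
                           dtype=loss_o.dtype, device=loss_o.device)
    return (loss_o * weights).mean()
```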
### _The Usage of HiREN_
In this section, we introduce the usage of HiREN. As mentioned above, there are two ways to deploy it. One way is called "online", which can be easily implemented by plugging the HR enhancement branch into the training procedure of the LR recovery branch. The online installation algorithm of HiREN is given in Alg. 1. As shown in Alg. 1, the first thing we should do is to develop the HR enhancement branch (_i.e.,_ L4\(\sim\)L10). Specifically, given a STISR dataset \(\mathcal{D}\), we
Fig. 3: The structure of the HR enhancement branch, which consists of two components: (a) the kernel predictor \(P\), and (b) the kernel-guided enhancement network \(f_{ke}\).
first sample HR images and their corresponding text-level annotations from \(\mathcal{D}\) (L5), then generate the enhanced images \(I_{HQ}\) (L6). Finally, the recognition loss and style loss described in Sec. III-C4 are computed to optimize \(f_{HR}\). After that, we plug the developed HR enhancement branch into the training procedure of the LR recovery branch (L11\(\sim\)L16). In particular, after sampling LR and HR images from the dataset \(\mathcal{D}\) (L12), we use the HR enhancement branch to generate the HQ image \(I_{HQ}\) (L13). Finally, the HQ image, rather than the HR image used in typical works, and the SR image are utilized to compute the text-specific loss \(\mathcal{L}_{l}\) to supervise the LR recovery branch (L11\(\sim\)L12).
The other way is called "offline", which can be implemented by caching all the enhanced HQ images. As can be checked in Alg. 2, after developing the HR enhancement branch \(f_{HR}\), we sample all the LR-HR image pairs in the old dataset \(\mathcal{D}\). Then, the corresponding HQ images are generated and added to the new dataset \(\hat{\mathcal{D}}\) (L6). When training the LR recovery branch, we only need to sample LR-HQ image pairs to compute the loss \(\mathcal{L}_{o}\) for the optimization of the model. Such an installation does not introduce any additional training cost to the LR recovery branch. It is worth mentioning that the HR enhancement branch is removed during inference. That is, HiREN does not introduce any additional inference cost.
```
1: Input: Training dataset \(\mathcal{D}\) and the developed HR enhancement branch \(f_{HR}\)
2: Initialize \(f_{LR}\)
3: \(\hat{\mathcal{D}}=\emptyset\)
4: for \(I_{LR},I_{HR}\sim\mathcal{D}\) do
5:   \(I_{HQ}=f_{HR}(I_{HR})\)
6:   Add \((I_{HQ},I_{LR})\) to \(\hat{\mathcal{D}}\)
7: while \(f_{LR}\) is not converged do
8:   \(I_{HQ},I_{LR}\sim\hat{\mathcal{D}}\)
9:   \(I_{SR}=f_{LR}(I_{LR})\)
10:   Compute \(\mathcal{L}_{o}\) according to \(I_{SR}\) and \(I_{HQ}\)
11:   Optimize \(f_{LR}\) with respect to \(\mathcal{L}_{o}\)
12: return \(f_{LR}\)
```
**Algorithm 2** The offline usage of HiREN.
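For concreteness, a minimal PyTorch-style sketch of the offline deployment in Alg. 2 is given below; the data-loader interface, device handling, and hyper-parameters are illustrative assumptions, and `loss_fn` stands for whichever text-specific loss \(\mathcal{L}_{o}\) the plugged-in LR branch uses.

```python
import torch

def build_hq_dataset(f_HR, loader, device="cuda"):
    """Run the trained HR enhancement branch once over the data and cache
    (I_HQ, I_LR) pairs. `loader` is assumed to yield batches of (I_LR, I_HR)."""
    f_HR.eval()
    cached = []
    with torch.no_grad():
        for I_LR, I_HR in loader:
            I_HQ = f_HR(I_HR.to(device)).cpu()
            cached.append((I_HQ, I_LR))
    return cached

def train_lr_branch(f_LR, cached, loss_fn, device="cuda", epochs=1, lr=1e-3):
    """Train the LR recovery branch against the cached HQ images instead of the
    original HR images; no extra cost is added compared to ordinary training."""
    optimizer = torch.optim.Adam(f_LR.parameters(), lr=lr)
    f_LR.train()
    for _ in range(epochs):
        for I_HQ, I_LR in cached:
            I_SR = f_LR(I_LR.to(device))
            loss = loss_fn(I_SR, I_HQ.to(device))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return f_LR
```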
## IV Performance Evaluation
In this section, we first introduce the dataset and metrics used in the experiments and the implementation details. Then, we evaluate HiREN and compare it with several state-of-the-art techniques to show its effectiveness and superiority. Finally, we conduct extensive ablation studies to validate the design of our method.
### _Dataset and Metrics_
Two groups of datasets are evaluated in this paper: low-resolution scene text dataset TextZoom and regular scene text recognition datasets.
#### Iv-A1 Low-resolution scene text dataset
The **TextZoom**[7] dataset consists of 21,740 LR-HR text image pairs collected by lens zooming of the camera in real-world scenarios. The training set has 17,367 pairs, while the test set is divided into three settings based on the camera focal length: easy (1,619 samples), medium (1,411 samples), and hard (1,343 samples).
#### Iv-A2 Regular STR datasets
These datasets are used to check the generalization power of our model trained on TextZoom when being adapted to other datasets. In particular, three regular STR datasets are evaluated in our paper to further check the advantage of HiREN: IC15-352 [8], SVT [59], and SVTP [60]. In what follows, we give brief introductions on these datasets.
The **IC15-352** dataset was first introduced in [8]. It consists of 352 low-resolution images collected from the IC15 [61] dataset.
Street View Text (**SVT**) [59] is collected from Google Street View. The test set contains 647 images. Many images in SVT suffer severely from noise, blur, and low resolution.
SVT-Perspective (**SVTP**) [60] is proposed for evaluating the performance of reading perspective texts. Images in SVTP are picked from the side-view images in Google Street View. Many of them are heavily distorted by the non-frontal view angle. This dataset contains 639 images for evaluation.
The major metric used in this paper is word-level recognition accuracy, which evaluates the recognition performance of STISR methods. Following the settings of previous works [9], we remove punctuation and convert uppercase letters to lowercase letters for calculating recognition accuracy. Besides, the number of _Floating-point **Op**erations_ (FLOPs) is used to evaluate the computational cost of various methods. Following [9, 32], we only report _Peak Signal-to-Noise Ratio_ (PSNR) and _Structure Similarity Index Measure_ (SSIM) [62] as auxiliary metrics to evaluate the fidelity performance because of the quality issue of the HR images.
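As a small illustration of this protocol, the snippet below computes word-level accuracy after the assumed normalization (punctuation removal and lower-casing); it is a sketch of the evaluation convention, not the exact evaluation code.

```python
import string

_PUNCT_TABLE = str.maketrans("", "", string.punctuation)

def normalize(text: str) -> str:
    """Strip punctuation and lowercase, following the protocol described above."""
    return text.lower().translate(_PUNCT_TABLE)

def word_accuracy(predictions, ground_truths) -> float:
    """Word-level recognition accuracy over paired prediction/ground-truth lists."""
    correct = sum(normalize(p) == normalize(g)
                  for p, g in zip(predictions, ground_truths))
    return correct / max(len(ground_truths), 1)

# word_accuracy(["Hello!", "world"], ["hello", "word"])  # -> 0.5
```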
### _Implementation Details_
All experiments are conducted on 2 NVIDIA Tesla V100 GPUs with 32GB memory. The PyTorch version is 1.8. The HR enhancement branch is trained using the Adam [63] optimizer with a learning rate of 0.0001. The batch size \(B\) is set to 48. The LR recovery branch is trained with the same optimizer and batch size but a higher learning rate of 0.001, as suggested in [12]. The recognizer \(R\) used in our method is the one proposed in [8]. The hyper-parameters in HiREN are set as follows: \(\alpha\) is set to 0.1, which is determined through grid search. The number of SRB blocks is set to 5 (_i.e.,_\(N=5\)) and \(C^{\prime}\) is set to 32, which is the same as in [7]. The size of kernel \(k\) is set to 32 (_i.e.,_\(d=32\)), which is similar to that suggested in [52]. Our training and evaluation are based on the following protocol: save the model with the best average accuracy during training with CRNN as the recognizer, and use this model to evaluate the other recognizers (MORAN, ASTER) and the three settings (easy, medium, hard).
### _Performance Improvement on SOTA Approaches_
#### Iv-C1 Recognition performance improvement
Here, we evaluate our method on **TextZoom**. Since HiREN is a framework that can work with most existing methods, we plug HiREN into the training of several typical super-resolution methods to check the universality and effectiveness of HiREN, including one generic method SRCNN [20], two recently proposed STISR methods TSRN [7] and TG [9], and one iterative, clue-guided STISR method TPGSR [12]. To show that HiREN can support various recognizers, we follow previous works [12, 8, 9] and evaluate the recognition accuracy on three recognizers: CRNN [36], MORAN [40] and ASTER [38]. We re-implement these methods to unify hardware, software, and evaluation protocols for fair comparison. Generally, our results are higher than those in the original papers. For example, with CRNN the averaged accuracy of TG is boosted from 48.9% to 49.6%. All the results are presented in Tab. II.
We first check the universality of HiREN. As can be seen in Tab. II, HiREN significantly boosts the recognition performance in almost all the cases, except for one case on TPGSR, which means that HiREN can work well with various existing techniques. As for the performance improvement of HiREN, take a non-iterative method as an example: the state-of-the-art TG [9] achieves 49.6%, 57.6% and 61.2% averaged accuracy with the three recognizers, respectively (see the 9th row). After equipping it with HiREN, the accuracy is lifted to 51.1%, 58.6% and 61.7% (increases of 1.5%, 1.0%, and 0.5%), respectively (see the 10th row). This demonstrates the effectiveness of our method. Results on more datasets and recognizers are given in the supplementary materials to demonstrate its universality.
It is worth mentioning that our HR enhancement branch can also be applied to weakly supervising the enhancement of LR and HR images to lift their recognition accuracies, as shown in the 3rd and 5th rows of Tab. II. This further supports the universality of our technique. The results above show the promising application potential of our method: it not only works with STISR methods, but also pioneers weakly supervised enhancement of LR and HR text images.
Furthermore, to better demonstrate the universality of HiREN, we conduct more experiments on additional STR datasets and with recently proposed recognizers. We first evaluate our method on three STR datasets: IC15-352, SVT, and SVTP. We use the STISR models (TSRN, TG, TPGSR, and our technique applied to them) developed on the TextZoom dataset to evaluate these datasets. The experimental results on IC15-352, SVT, and SVTP are given in Tab. III. As shown in Tab. III, HiREN also works well on them and achieves improved performance in almost all the cases. In particular, the performance of TPGSR on the three datasets is lifted from 66.2%, 77.4%, and 62.8% to 66.8%, 78.7%, and 63.6%, respectively, which demonstrates the advantage of HiREN.
Apart from that, we also give the experimental results on more recently proposed recognizers, including SEED [46] and ABINet [48]. The experimental results are given in Tab. IV. As can be checked in Tab. IV, these recent recognizers still have difficulty recognizing low-resolution text images. For example, SEED and ABINet can only correctly read 45.8% and 61.0% of LR images, which is far below their performance on HR images (_i.e._, 84.8% and 89.8%). Our method HiREN also achieves boosted performance with these recognizers in almost all the cases.
#### Iv-B2 Fidelity improvement
We also report the results of fidelity improvement (PSNR and SSIM) on major existing methods in Tab. V. Notice that these fidelity metrics have the following limitations. On the one hand, PSNR and SSIM globally measure the similarity between the SR image and the ground truth image, including both characters and background. With the goal of lifting the recognition ability and readability of scene text images, STISR should put more emphasis on recovering characters rather than the background [9, 32]. On the other hand, as pointed out in this paper, HR images suffer from various quality issues. Ergo, it is inappropriate to measure the pixel similarity against erroneous HR images
\begin{table}
\begin{tabular}{c||c c c c|c c c c|c c c} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{CRNN [36]} & \multicolumn{3}{c}{MORAN [40]} & \multicolumn{3}{c}{ASTER [38]} \\ \cline{2-13} & Easy & Medium & Hard & Average & Easy & Medium & Hard & Average & Easy & Medium & Hard & Average \\ \hline \hline LR & 37.5\% & 21.4\% & 21.1\% & 27.3\% & 56.2\% & 35.9\% & 28.2\% & 41.1\% & 64.0\% & 42.0\% & 31.7\% & 47.2\% \\ +HiREN & 37.7\% & **27.9\%** & **23.5\%** & **30.2\%** & **57.9\%** & **38.2\%** & **28.7\%** & **42.6\%** & **66.4\%** & **43.4\%** & **32.3\%** & **48.5\%** \\ \hline HR & 76.4\% & 75.1\% & 64.6\% & 72.4\% & **89.0\%** & 83.1\% & 71.1\% & 81.6\% & 93.4\% & 87.0\% & 75.7\% & 85.9\% \\ +HiREN & **77.5\%** & **75.4\%** & **65.0\%** & **72.9\%** & 88.8\% & **83.7\%** & **71.9\%** & **82.0\%** & **93.5\%** & **87.5\%** & **76.2\%** & **86.3\%** \\ \hline \hline SRCNN & 39.8\% & 23.4\% & 21.7\% & 29.0\% & 57.7\% & 36.1\% & 28.5\% & 41.8\% & 65.5\% & 41.9\% & 31.7\% & 47.5\% \\ +HiREN & 41.6\% & **24.0\%** & **23.7\%** & **30.4\%** & **61.1\%** & **38.6\%** & **29.3\%** & **44.0\%** & **67.5\%** & **44.7\%** & **32.8\%** & **49.5\%** \\ \hline TSRN & 52.8\% & 39.8\% & 31.6\% & 42.1\% & 64.5\% & 49.3\% & 36.7\% & 51.1\% & 69.7\% & 54.8\% & 41.3\% & 56.2\% \\ +HiREN & **56.5\%** & **44.1\%** & **32.2\%** & **45.0\%** & **68.5\%** & **52.5\%** & **38.6\%** & **54.2\%** & **73.5\%** & **56.3\%** & **39.2\%** & **57.4\%** \\ \hline TG & 60.5\% & 49.0\% & 37.1\% & 49.6\% & 72.0\% & 57.6\% & 40.0\% & 57.6\% & 76.0\% & 61.4\% & 42.9\% & 61.2\% \\ +HiREN & **62.4\%** & **51.2\%** & **37.5\%** & **51.1\%** & **73.4\%** & **58.4\%** & **41.0\%** & **58.6\%** & **77.5\%** & **61.5\%** & **43.0\%** & 61.7\% \\ \hline TPGSR & 63.1\% & 52.0\% & 38.6\% & 51.8\% & **74.9\%** & 60.5\% & 44.1\% & **60.5\%** & **78.9\%** & 62.7\% & 44.5\% & 62.8\% \\ +HiREN & **63.5\%** & **52.7\%** & **38.8\%** & **52.4\%** & 74.7\% & **60.9\%** & **44.1\%** & **60.5\%** & 78.3\% & **63.5\%** & **45.6\%** & **63.5\%** \\ \hline \end{tabular}
\end{table} TABLE II: Performance (recognition accuracy) improvement on TextZoom.
\begin{table}
\begin{tabular}{c|c c} \hline Method & SEED [46] & ABINet [48] \\ \hline LR & 45.8\% & 61.0\% \\ HR & 84.8\% & 89.8\% \\ \hline TSRN & 56.3\% & **64.0\%** \\ +HiREN & **56.5\%** & 63.8\% \\ \hline TG & 60.7\% & **66.0\%** \\ +HiREN & **60.9\%** & 65.9\% \\ \hline TPGSR & 61.7\% & 67.5\% \\ +HiREN & **62.2\%** & **68.1\%** \\ \hline \end{tabular}
\end{table} TABLE IV: Performance of recent recognizers on TextZoom.
\begin{table}
\begin{tabular}{c||c c c} \hline Method & IC15-352 & SVT & SVTP \\ \hline LR & 49.4\% & 74.8\% & 60.8\% \\ \hline TSRN & 48.9\% & 72.6\% & **61.4\%** \\ +HiREN & **52.3\%** & **74.8\%** & 60.3\% \\ \hline TG & 59.1\% & 74.2\% & 60.2\% \\ +HiREN & **61.7\%** & **76.5\%** & **68.5\%** \\ \hline TPGSR & 66.2\% & 77.4\% & 62.8\% \\ +HiREN & **66.8\%** & **78.7\%** & **63.6\%** \\ \hline \end{tabular}
\end{table} TABLE III: Performance comparison on three STR datasets with CRNN as recognizer.
whose pixels are not trustworthy. Therefore, we only present PSNR and SSIM as auxiliary metrics to roughly draw some conclusions.
Notice that existing methods utilize SR-HR image pairs to calculate PSNR and SSIM. However, as mentioned above, the HR images suffer from quality issues. Hence, we additionally provide the fidelity results of calculating PSNR and SSIM between SR and HQ images. The experimental results are given in Tab. V. As can be seen in Tab. V, 1) A higher PSNR does not mean a higher recognition accuracy. For example, the PSNR of TG in SR-HR is inferior to that of TSRN (_i.e.,_ 21.47 vs. 21.84), but TG performs better on recognition accuracy (_i.e.,_ 49.6% vs. 42.1%). The reason is that TG is a stroke-focused technique that concentrates on recovering fine-grained stroke details rather than the whole image quality, including the background, which matters little for recognition. This is consistent with the results in [9]. 2) Compared with the original models, after applying HiREN, the SR-HQ fidelity performance of the new models is boosted in almost all cases. 3) HiREN obtains lower PSNR and SSIM against the HR images but improved recognition performance, which further supports the quality issue of HR images.
#### Iv-B3 Visualization
Here, we visualize several examples in Fig. 4 to better demonstrate the performance of our technique. We can see that HiREN helps the existing methods to recover the blurry pixels better (see the 2nd \(\sim\) 6th cases). In particular, a better "ee" in the 2nd and 3rd cases, 'm' in the 4th case, 'f' in the 5th case, and 'e' in the 6th case are obtained by our technique. Besides, in some extremely tough cases where recognition is hard even with the HR images, HiREN can still achieve better recovery (see the 7th case). These results show the power of HiREN.
#### Iv-B4 Training and inference cost
We have discussed the high performance of our technique above. In this section, we provide the results of training and inference costs to show the efficiency of HiREN. Specifically, we take TG and TPGSR
\begin{table}
\begin{tabular}{c||c|c|c|c} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Metrics} \\ \cline{2-5} & \multicolumn{2}{c|}{SR-HR} & \multicolumn{2}{c|}{SR-HQ} & \multicolumn{2}{c}{Avg} \\ \cline{2-5} & PSNR & SSIM(\(\times 10^{-2}\)) & PSNR & SSIM(\(\times 10^{-2}\)) & Acc \\ \hline \hline LR & 20.35 & 69.61 & 20.73 & 68.76 & 27.3\% \\ \hline TSRN & 21.84 & 76.34 & 21.08 & 74.76 & 42.1\% \\ \hline \(\star\)HiREN & **22.01** & **76.60** & **21.46** & **76.23** & **45.0\%** \\ \hline TG & **21.47** & **73.57** & **20.89** & 72.59 & 49.6\% \\ \(\star\)HiREN & 21.12 & 73.43 & 20.84 & **73.78** & **51.1\%** \\ \hline TPGSR & **22.05** & **76.71** & 21.05 & **76.77** & 51.8\% \\ \(\star\)HiREN & 21.69 & 75.97 & **21.15** & 76.44 & **52.4\%** \\ \hline \end{tabular}
\end{table} TABLE V: Fidelity and recognition results on major existing methods. The results are obtained by averaging three settings (easy, medium and hard).
Fig. 4: Examples of generated images. Here, GT indicates ground truth. We use CRNN as the recognizer. Red/black characters indicate incorrectly/correctly recognized.
\begin{table}
\begin{tabular}{c|c c} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Metrics} \\ \cline{2-3} & Training cost & Inference cost \\ \hline TG & 19.60 & 0.91 \\ +HiREN(Online) & 20.59 & 0.91 \\ +HiREN(Offline) & 19.60 & 0.91 \\ \hline TPGSR & 7.20 & 7.20 \\ +HiREN(Online) & 8.19 & 7.20 \\ +HiREN(Offline) & 7.20 & 7.20 \\ \hline \end{tabular}
\end{table} TABLE VI: The training and inference costs of our method. The cost is measured in FLOPs (G).
as baselines, add HiREN to them, and count their FLOPs during training and inference. The experimental results are presented in Tab. VI. In terms of training cost, we can see that the offline deployment of HiREN does not incur any additional cost. As for the online version, the additional computational cost caused by HiREN is negligible (_e.g.,_ from 19.60G to 20.59G, only 0.99G). What is more, neither of the two variants introduces any additional inference cost. In conclusion, the offline deployment not only saves training and inference cost, but also significantly boosts performance. These results validate the efficiency of our method.
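For readers who want to reproduce such cost measurements, the following is a minimal sketch using fvcore's FLOP counter; the input size, the single-forward-pass protocol, and the function name are assumptions for illustration, and different profilers count FLOPs/MACs differently.

```python
import torch
from fvcore.nn import FlopCountAnalysis


def count_gflops(model: torch.nn.Module, input_shape=(1, 3, 16, 64)):
    """Rough forward-pass FLOPs (in G) for a TextZoom-sized LR input.

    The 16x64 input size is an assumption for illustration; the paper's
    exact measurement protocol may differ.
    """
    model.eval()
    dummy = torch.randn(*input_shape)
    return FlopCountAnalysis(model, dummy).total() / 1e9
```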
### _Ablation Study_
We conduct extensive ablation studies to validate the design of our method. Since our method is designed to enhance HR images during training, the metric used in this section is the average recognition accuracy of CRNN on the training set, denoted as \(Acc_{train}\).
#### IV-D1 Design of the HR enhancement branch
Here, we examine the design of the HR enhancement branch. As mentioned above, two techniques are developed to promote the enhancement of HR images: the kernel-guided enhancement network \(f_{ke}\) and the loss \(\mathcal{L}_{HR}\). We conduct experiments to check their effects, and the results are presented in Tab. VII. A visualization of the effect of the HR enhancement branch is given in the supplementary materials.
_The effect of the HR enhancement branch._ Comparing the results in the 1st and 7th rows of Tab. VII, we can see that the HR enhancement branch lifts the accuracy from 66.9% to 74.1%, which confirms the effectiveness of the branch as a whole.
_The effect of the kernel-guided enhancement network._ To check the power of the kernel-guided enhancement network, we design a variant that removes the kernel predictor. Comparing the results of the 2nd and 7th rows in Tab. VII, we can see that the variant without the kernel predictor is inferior to the one with it (72.7% v.s. 74.1%). This demonstrates the effectiveness of the proposed kernel-guided enhancement network.
_The design of the loss function._ Here, we examine the design of the loss function used in the HR enhancement branch. We first remove the recognition loss \(\mathcal{L}_{rec}\) and the style loss \(\mathcal{L}_{sty}\) separately. As can be seen in the 3rd, 4th, and 7th rows of Tab. VII, using only a single loss degrades performance compared with the combined loss. Next, we study the choice of style loss. Specifically, we consider three candidates (MSE, Charbonnier and L1) for the style loss function. As can be seen in the 5th, 6th, and 7th rows of Tab. VII, the MSE loss outperforms the Charbonnier loss [64] and the L1 loss. The reason is that MSE penalizes large errors and is more tolerant of small errors, which makes it more suitable for HiREN to enhance blurry or missing character details while keeping the style unchanged [65]. Therefore, MSE is selected as the style loss in HiREN.
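To make the role of the two losses concrete, the following is a minimal PyTorch sketch of such a combined objective; it is not HiREN's exact formulation: the CTC-based recognition loss, the use of the original HR image as the style target, the weighting \(\mathcal{L}_{rec}+\alpha\mathcal{L}_{sty}\), and all names are assumptions for illustration. The default \(\alpha\)=0.1 matches one of the best values reported in Tab. VIII.

```python
import torch
import torch.nn.functional as F


def hr_enhancement_loss(rec_logits, text_targets, target_lengths,
                        hq_image, hr_image, alpha=0.1):
    """Combined objective L_rec + alpha * L_sty (illustrative form only).

    rec_logits:  (T, B, C) per-frame logits from a CTC recognizer (e.g. CRNN)
                 run on the enhanced image -- the CTC form is an assumption.
    hq_image:    enhanced (HQ) image produced by the HR enhancement branch.
    hr_image:    original HR image, used here as the style target (assumption).
    """
    log_probs = rec_logits.log_softmax(dim=-1)
    input_lengths = torch.full((rec_logits.size(1),), rec_logits.size(0), dtype=torch.long)
    rec_loss = F.ctc_loss(log_probs, text_targets, input_lengths, target_lengths,
                          blank=0, zero_infinity=True)
    # MSE penalizes large errors and tolerates small ones, which suits
    # sharpening character details while keeping the overall style.
    sty_loss = F.mse_loss(hq_image, hr_image)
    return rec_loss + alpha * sty_loss
```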
#### IV-D2 Hyper-parameter study
Here, we provide the grid-search results for the hyper-parameter \(\alpha\) introduced in HiREN to balance the two losses. The results are presented in Tab. VIII. As can be seen in Tab. VIII, the best performance is achieved when \(\alpha\) is 0.1 or 0.05.
#### IV-D3 The effect of the quality estimation module
Here, we compare the performance of different models with and without the quality estimation module. As can be seen in Tab. IX, without \(f_{QE}\), all methods are degraded, which demonstrates the effect of the quality estimation module.
## V Discussion
In this section, we discuss some issues to better demonstrate the advantages of HiREN and point out some limitations of the proposed method.
### _Which kinds of quality issues do HR images have?_
We conduct a visualization study to demonstrate the quality issues of HR images. As shown in Fig. 5, HR images suffer from degradations including, but not limited to, low contrast (1st, 2nd and 6th cases), blur (3rd and 4th cases) and motion blur (5th case). These unknown degradations clearly hinder the recognition of HR images and subsequently provide erroneous supervision for the recovery of the LR images.
### _How does HiREN lift the quality of supervision information?_
To cope with the various quality problems of HR images, HiREN generates HQ images through different strategies. In particular, HiREN makes the text more prominent to address low contrast (e.g. the 1st and 2nd cases in Fig. 5). For blurry images, HiREN makes the incorrectly recognized text more distinguishable (e.g. "e" in the 3rd case and "ri" in the 4th case in Fig. 5). HiREN also reduces the motion blur in the 5th case of Fig. 5. In some tough cases HiREN fails to generate a correct HQ image (e.g. the 6th case in Fig. 5), but our quality estimation module weights its loss with a small value to suppress the erroneous supervision information.
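The following is a minimal sketch of how such per-sample down-weighting can be implemented; how \(f_{QE}\) actually produces the quality scores is not shown here, and the batched weighting form below is an assumption for illustration.

```python
import torch


def quality_weighted_loss(per_sample_losses, quality_scores, eps=1e-6):
    """Suppress supervision from unreliable HQ images.

    per_sample_losses: (B,) unreduced losses, one per image in the batch.
    quality_scores:    (B,) scores in [0, 1]; how the quality estimation
                       module computes them is an assumption here.
    """
    weights = quality_scores.detach()  # no gradient through the weights
    return (weights * per_sample_losses).sum() / weights.sum().clamp(min=eps)
```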
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Method & SRCNN & TSRN & TG & TPGSR \\ \hline without \(f_{QE}\) & 30.2\% & 44.2\% & 51.0\% & 51.9\% \\ with \(f_{QE}\) & **30.4**\% & **45.0**\% & **51.1**\% & **52.4**\% \\ \hline \hline \end{tabular}
\end{table} TABLE IX: Ablation study on the quality estimation module. The metric is the recognition accuracy of CRNN on the test set of TextZoom.
\begin{table}
\begin{tabular}{c|c|c c|c} \hline \hline \multirow{2}{*}{ID} & \multirow{2}{*}{Kernel-guided} & \multicolumn{2}{c|}{Loss functions} & \multirow{2}{*}{\(Acc_{train}\)} \\ \cline{3-4} & & \(\mathcal{L}_{rec}\) & \(\mathcal{L}_{sty}\) & \\ \hline \hline
1 & ✗ & ✗ & ✗ & 66.9 \\ \hline
2 & ✗ & ✓ & MSE & 72.7 \\
3 & ✓ & ✓ & ✗ & 66.1 \\
4 & ✓ & ✗ & MSE & 67.4 \\
5 & ✓ & ✓ & Charb & 67.5 \\
6 & ✓ & ✓ & L1 & 67.3 \\
7 & ✓ & ✓ & MSE & 74.1 \\ \hline \hline \end{tabular}
\end{table} TABLE VII: Ablation studies of the HR enhancement branch. Here, ✓/✗ indicate that the corresponding component is/is not applied, and Charb denotes the Charbonnier loss [64].
\begin{table}
\begin{tabular}{c|c c c c c c c} \hline \hline \multirow{2}{*}{Metric} & \multicolumn{7}{c}{\(\alpha\)} \\ \cline{2-8} & 0.5 & 0.2 & 0.1 & 0.05 & 0.025 & 0.01 & 0.005 \\ \hline \(Acc_{train}\) & 73.6 & 73.4 & **74.1** & **74.1** & 72.3 & 72.2 & 71.2 \\ \hline \hline \end{tabular}
\end{table} TABLE VIII: The determination of \(\alpha\). The metric is \(Acc_{train}\).
### _Error Analysis_
In this section, we perform an error analysis of HiREN to suggest possible directions for future work. Concretely, we show some error cases in Fig. 6 to illustrate the limitations of recent works and of HiREN. As can be seen in the 1st\(\sim\)2nd cases, recent methods usually rely on a vocabulary [66], which leads the models to guess blurry pixels from the corpus learned from the training dataset. This degrades the models' ability to recover numbers and punctuation. As a result, although HiREN recovers more characters than the original TPGSR, the word-level recovery still fails. Besides, as shown in the 3rd case, in some tough cases where the LR and HR images are extremely difficult to read, TPGSR and HiREN also fail to recover the text effectively. This highlights the challenge of STISR.
### _Limitations of HiREN_
On the one hand, HiREN may introduce some noise into the HR images and worsen their quality. However, such noise is very minor compared to the benefit brought by HiREN. Specifically, we find that 9,565 erroneously recognized images in the TextZoom dataset are successfully enhanced by HiREN, leading to correct recognition results, while only 128 images deteriorate from correct to wrong. On the other hand, training the HR enhancement branch requires the feedback of a scene text recognizer and text-level annotations. This indicates that HiREN still needs some weak supervision during training.
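As a side note, such flip counts can be tallied by comparing recognizer outputs before and after enhancement; the sketch below assumes lists of predictions and ground-truth transcriptions and is only illustrative.

```python
def count_flips(preds_hr, preds_hq, labels):
    """Count images whose recognition result flips after enhancement.

    preds_hr / preds_hq: recognizer outputs on the original HR images and on
    the enhanced HQ images; labels: ground-truth transcriptions.
    Returns (wrong -> correct, correct -> wrong) counts.
    """
    fixed = sum(1 for p_hr, p_hq, gt in zip(preds_hr, preds_hq, labels)
                if p_hr != gt and p_hq == gt)
    broken = sum(1 for p_hr, p_hq, gt in zip(preds_hr, preds_hq, labels)
                 if p_hr == gt and p_hq != gt)
    return fixed, broken
```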
## VI Conclusion
In this paper, we present a novel framework called HiREN to boost STISR performance. Different from existing works, HiREN aims at generating high-quality text images based on high-resolution images to provide more accurate supervision information for STISR. Concretely, recognizing the difficulty of capturing the degradation from HQ to HR and of obtaining supervision information from HR images, we explore degradation kernel-guided super-resolution and the feedback of a recognizer, together with text-level annotations, as weak supervision to train an HR enhancement branch. What is more, to suppress erroneous supervision information, a novel quality estimation module is designed to evaluate the quality of each image, which is used to weight its loss. Extensive experiments demonstrate the universality, high performance and efficiency of HiREN. Our work provides a new solution for the STISR task.
In the future, we will explore more advanced models to further advance the proposed technique. On the one hand, we will try to further improve the recovery ability of the HR enhancement branch or address the vocabulary reliance issue. On the other hand, we plan to apply HiREN to self-supervised or unsupervised settings where the recognizer and text-level annotations are not trustworthy, or where text-level annotations are lacking during training. Last but not least, we will extend the idea of the proposed quality enhancement branch to build a new algorithm for learning from noisy supervision for STISR.
2304.00044 | On The Theory of Ring Afterglows | "Synchrotron and inverse Compton emission successfully explain the observed\nspectra of gamma-ray bu(...TRUNCATED) | Marcus DuPont, Andrew MacFadyen, Re'em Sari | 2023-03-31T18:02:12Z | http://arxiv.org/abs/2304.00044v1 | "# On The Theory of Ring Afterglows\n\n###### Abstract\n\nSynchrotron and inverse Compton emission s(...TRUNCATED) |
2309.12494 | Evidential uncertainty sampling for active learning | "Recent studies in active learning, particularly in uncertainty sampling, have\nfocused on the decom(...TRUNCATED) | Arthur Hoarau, Vincent Lemaire, Arnaud Martin, Jean-Christophe Dubois, Yolande Le Gall | 2023-09-21T21:26:50Z | http://arxiv.org/abs/2309.12494v2 | "# Evidential uncertainties on rich labels\n\n###### Abstract\n\nRecent research in active learning,(...TRUNCATED) |
2309.07927 | "Kid-Whisper: Towards Bridging the Performance Gap in Automatic Speech\n Recognition for Children V(...TRUNCATED) | "Recent advancements in Automatic Speech Recognition (ASR) systems,\nexemplified by Whisper, have de(...TRUNCATED) | Ahmed Adel Attia, Jing Liu, Wei Ai, Dorottya Demszky, Carol Espy-Wilson | 2023-09-12T06:58:18Z | http://arxiv.org/abs/2309.07927v3 | "Kid-Whisper: Towards Bridging the Performance Gap in Automatic Speech Recognition for Children vs. (...TRUNCATED) |
2309.00090 | Benford's Law under Zeckendorf expansion | "In the literature, Benford's Law is considered for base-b expansions where\nb>1 is an integer. In t(...TRUNCATED) | Sungkon Chang, Steven J. Miller | 2023-08-31T19:16:07Z | http://arxiv.org/abs/2309.00090v1 | "# Benford's Law under Zeckendorf expansion\n\n###### Abstract\n\nIn the literature, Benford's Law i(...TRUNCATED) |
Arxiver Dataset
Arxiver consists of 63,357 arXiv papers converted to multi-markdown (.mmd) format. Our dataset includes the original arXiv article IDs, titles, abstracts, authors, publication dates, URLs and the corresponding markdown files for papers published between January 2023 and October 2023.
We hope our dataset will be useful for various applications such as semantic search, domain-specific language modeling, question answering and summarization.
Curation
The Arxiver dataset is created using a neural OCR model, Nougat. After OCR processing, we apply custom text processing steps to refine the data. These include extracting author information, removing reference sections, and performing additional cleaning and formatting. Please refer to our GitHub repo for details.
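As a rough illustration of the kind of post-processing described (not the actual pipeline), a reference section could be stripped from a converted markdown file as follows; the heading pattern is an assumption.

```python
import re


def strip_references(markdown: str) -> str:
    """Drop everything from a references-style heading onward.

    A toy illustration of the cleaning step described above; the heading
    pattern is an assumption and the real pipeline may differ.
    """
    pattern = re.compile(r"^#{1,6}\s*(references|bibliography)\b.*$",
                         re.IGNORECASE | re.MULTILINE)
    match = pattern.search(markdown)
    return markdown[:match.start()].rstrip() if match else markdown
```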
Using Arxiver
You can easily download and use the Arxiver dataset with Hugging Face's datasets library.
from datasets import load_dataset
# whole dataset takes 1.44GB
dataset = load_dataset("neuralwork/arxiver")
print(dataset)
Alternatively, you can stream the dataset to save disk space or to partially download the dataset:
from datasets import load_dataset
dataset = load_dataset("neuralwork/arxiver", streaming=True)
print(dataset)
print(next(iter(dataset['train'])))
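Each record exposes the metadata fields listed above; a quick way to inspect a single entry is shown below, where the column names (id, title, abstract, authors, published_date, link, markdown) are assumed from the dataset description.

```python
from datasets import load_dataset

dataset = load_dataset("neuralwork/arxiver", streaming=True)
sample = next(iter(dataset["train"]))

# Field names assumed from the dataset description above.
print(sample["id"], "-", sample["title"])
print(sample["authors"], sample["published_date"], sample["link"])
print(sample["abstract"][:200])   # first 200 characters of the abstract
print(sample["markdown"][:200])   # first 200 characters of the converted paper
```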
References
The original articles are maintained by arXiv and copyrighted to the original authors; please refer to the arXiv license information page for details. We release our dataset with a Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA 4.0) license. If you use this dataset in your research or project, please cite it as follows:
@misc{acar_arxiver2024,
author = {Alican Acar and Alara Dirik and Muhammet Hatipoglu},
title = {ArXiver},
year = {2024},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co./datasets/neuralwork/arxiver}}
}