
Controllable Text Style Transfer with Syntax Guidance

Abstract

Text style transfer, a challenging task in natural language processing, seeks to transform the stylistic attributes of a given text while preserving its original meaning. The stylistic attribute is defined by the user; commonly used styles include sentiment, formality, and politeness. The task can be approached through either supervised or unsupervised learning. The supervised approach, while effective, requires parallel datasets of paired source and target texts, which are costly to construct. This limitation has motivated research on unsupervised text style transfer, which eliminates the need for parallel data. One unsupervised method is controllable style transfer, which regulates the degree of style modification. However, a known weakness of controllable style transfer is that the fluency of the transferred text deteriorates as the degree of style modification increases. To address this issue, we introduce a novel methodology that integrates syntactic parsing information into the style transfer process. The syntactic information guides the model to generate natural sentences that effectively reflect the desired style while maintaining fluency. Extensive experiments show that incorporating syntactic parsing information into controllable style transfer yields significant improvements in both overall performance and fluency of the generated text compared to existing controllable style transfer methods.
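To make the idea concrete, the following is a minimal, hedged sketch of how syntactic information might be injected into a controllable transfer model's input. The toy parser, the control-token format, and the function names are all illustrative assumptions, not the thesis's actual implementation; a real system would use a trained syntactic parser (e.g. spaCy or Stanza) and feed the result to a sequence-to-sequence generator.

```python
# Sketch only: a stand-in "parser" and an input builder that prepends
# style-control codes and a syntactic tag sequence to the source text,
# so a generator could be guided to keep the sentence structure while
# shifting style by a chosen strength. All names here are hypothetical.

def toy_parse(sentence: str) -> list[str]:
    """Stand-in syntactic parser: tags each token with a coarse category.
    A real system would run a trained parser; this lookup is illustrative."""
    lexicon = {"the": "DET", "a": "DET", "movie": "NOUN", "was": "VERB",
               "great": "ADJ", "terrible": "ADJ"}
    return [lexicon.get(tok.lower(), "X") for tok in sentence.split()]

def build_model_input(sentence: str, target_style: str, strength: float) -> str:
    """Concatenate a style code, a transfer-strength code in [0, 1], and the
    syntactic tag sequence to the source sentence as one conditioning string."""
    tags = " ".join(toy_parse(sentence))
    return (f"<style={target_style}> <strength={strength:.1f}> "
            f"<syntax> {tags} </syntax> {sentence}")

print(build_model_input("the movie was great", "negative", 0.8))
# → <style=negative> <strength=0.8> <syntax> DET NOUN VERB ADJ </syntax> the movie was great
```

The design choice sketched here, serializing syntax as a control prefix, is only one option; syntactic guidance can also be injected through a separate encoder or auxiliary loss.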


Table of Contents

1. Introduction
2. Related Work
2.1. Entangle-based Text Style Transfer
2.2. Controllable Style Transfer
2.3. Syntax-guided Generation
3. Methodology
3.1. Model Architecture
3.2. Training
3.3. Inference
4. Experiment
4.1. Dataset
4.2. Evaluation Metric
4.3. Baseline Models
4.4. Implementation Details
5. Results
5.1. Quantitative Evaluation
5.2. Qualitative Evaluation
5.3. Ablation Study
5.4. Syntax Preservation
5.5. Syntax-guided Reconstruction Ability
5.6. Embedding Visualization
6. Conclusion
References
