Text simplification consists of rewriting a text to make it accessible to a wider audience of readers, for instance by reducing the length and complexity of sentences and avoiding rare words.
Unlike summarization, where progress has been rapid thanks to the emergence of large datasets in many textual domains, text simplification remains challenging: its datasets are small and of poor quality because aligning texts at different readability levels is difficult.
We propose to circumvent this data limitation by approaching text simplification as an unsupervised task, and we show that this approach is promising both on automated metrics and in a user study testing ease of reading via comprehension questions.
An Example Simplification
Can you tell which is which: (1) the original (complex) text, (2) the manual simplification from a Newsela educator (the reference corpus), and (3) the output of our unsupervised model?
| Text 1 | Text 2 | Text 3 |
|---|---|---|
| Each summer, libraries in St. Louis, Missouri, host many types of free camps — yoga, chess and even a Harry Potter "Sorting Hat Camp." In 2020, camp dreams seemed far-fetched given the global coronavirus pandemic. That didn't stop St. Louis libraries, though. Instead of canceling, they brought camp into kids' homes. | In St. Louis, Missouri, libraries hold many types of free camps in the summer. They have yoga, games and even a Harry Potter camp. In 2020, camp ideas seemed unlikely due to the spread of COVID-19. That did not stop St. Louis libraries, though. They did not cancel their programs. They brought camp into kids' homes. | In the summer months, St. Louis has many free classes for kids, including yoga and a Harry Potter "Sorting Hat Camp." In 2020, camp dreams again seemed far-fetched given the crisis. That didn't stop St. Louis libraries, though. They brought camp in. |
Spoiler: they appear in order: Text 1 is the original, Text 2 is the Newsela manual rewrite, and Text 3 is the output of our unsupervised model.
Researchers
- Philippe Laban, UC Berkeley, Graduate Student
- Marti Hearst, UC Berkeley, Professor and PI
- Tobias Schnabel, Microsoft Research
- Paul Bennett, Microsoft Research, PI
Overview
We propose to build a reward function that can assess the quality of a simplified paragraph without relying on training data. Once the reward function is established, we use a reinforcement-learning algorithm to train a text generator to optimize this reward.
We craft our reward from three components that must be balanced simultaneously to produce high-quality simplifications:
- A Coverage model: the simplified text must cover the same facts and nuances as the original text.
- A Fluency model: the simplified text must be grammatically valid and use standard English expressions.
- A Simplicity model: the text should be simpler according to a readability metric, such as the Flesch-Kincaid grade level (a minimal sketch of this formula follows the list).
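To make the simplicity component concrete, here is a minimal Python sketch of the standard Flesch-Kincaid grade-level formula. The syllable counter is a rough heuristic added for illustration; the actual project may compute readability differently.

```python
# Minimal sketch of the Flesch-Kincaid grade-level formula.
# The syllable counter is a rough heuristic for illustration only.
import re

def count_syllables(word: str) -> int:
    # Heuristic: count groups of consecutive vowels (at least 1 per word).
    return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

def flesch_kincaid_grade(text: str) -> float:
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(len(words), 1)
    syllables = sum(count_syllables(w) for w in words)
    # Standard formula: 0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

# Lower grade = easier to read.
print(flesch_kincaid_grade("The cat sat on the mat."))
print(flesch_kincaid_grade("Quantitative readability metrics approximate textual complexity."))
```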
Because all three rewards must be maximized jointly, our final reward is the product of these three metrics.
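To illustrate the multiplicative combination, here is a hypothetical Python sketch. The three component scorers are toy stand-ins (in the actual system, coverage and fluency are learned models); only the product structure reflects the method, and `flesch_kincaid_grade` refers to the sketch above.

```python
def coverage_score(original: str, simplified: str) -> float:
    """Toy proxy: fraction of the original's longer words that survive."""
    keywords = {w.lower().strip(".,!?\"'") for w in original.split() if len(w) > 5}
    kept = sum(1 for w in keywords if w in simplified.lower())
    return kept / max(len(keywords), 1)

def fluency_score(simplified: str) -> float:
    """Toy proxy: non-empty output passes; a real system would score
    grammaticality with a language model."""
    return 1.0 if simplified.strip() else 0.0

def simplicity_score(original: str, simplified: str) -> float:
    """Toy proxy: reward a drop in Flesch-Kincaid grade (sketch above)."""
    drop = flesch_kincaid_grade(original) - flesch_kincaid_grade(simplified)
    return min(max(drop / 4.0, 0.0), 1.0)  # clip to [0, 1]

def simplification_reward(original: str, simplified: str) -> float:
    # The product means a near-zero score on ANY component collapses the
    # total reward: copying the input verbatim maximizes coverage and
    # fluency but earns no simplicity credit, while deleting everything is
    # "simple" but gets zero coverage.
    return (coverage_score(original, simplified)
            * fluency_score(simplified)
            * simplicity_score(original, simplified))
```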
Can you imagine why removing any of the three metrics would lead to sub-optimal solutions?
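For readers curious about the training loop, below is a simplified sketch of a single policy-gradient (REINFORCE) update, assuming a Hugging Face GPT-2 model and the `simplification_reward` above. The hyperparameters are illustrative assumptions, and the paper's actual procedure is more elaborate (for instance, it samples several candidates per paragraph); this shows only the basic shape of optimizing a non-differentiable reward.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def reinforce_step(paragraph: str) -> float:
    enc = tokenizer(paragraph, return_tensors="pt")
    prompt_len = enc["input_ids"].shape[1]

    # 1. Sample a candidate simplification from the current policy.
    with torch.no_grad():
        out = model.generate(**enc, do_sample=True, top_p=0.95,
                             max_new_tokens=120,
                             pad_token_id=tokenizer.eos_token_id)
    simplified = tokenizer.decode(out[0, prompt_len:], skip_special_tokens=True)

    # 2. Score the sample with the (non-differentiable) reward defined above.
    reward = simplification_reward(paragraph, simplified)

    # 3. Re-run the model with gradients on, recover the log-probabilities
    #    of the sampled tokens, and scale the loss by the reward (REINFORCE).
    logits = model(out).logits[:, :-1, :]
    logprobs = torch.log_softmax(logits, dim=-1)
    token_lp = logprobs.gather(-1, out[:, 1:].unsqueeze(-1)).squeeze(-1)
    gen_lp = token_lp[:, prompt_len - 1:]  # only the generated tokens
    loss = -reward * gen_lp.mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```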
Automated Results
We train a model (GPT-2 Medium) with our unsupervised method, as well as a second model with supervision on the Newsela dataset, a popular dataset for news simplification.
We evaluate whether the models can lower the grade level required to read 100 held-out paragraphs, as measured by the Lexile score, a gold standard used in most school textbooks. We compare against manual rewrites from the Newsela corpus, in which an educator rewrote each paragraph for a target Lexile score.
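Lexile scores come from a proprietary analyzer, so purely as an illustration, here is how the headline metric (the percentage of paragraphs whose readability score drops after simplification) could be computed, with the Flesch-Kincaid sketch from the Overview standing in for Lexile and a hypothetical `simplify` function standing in for any of the models below.

```python
def pct_lower_readability(paragraphs, simplify) -> float:
    """Percentage of paragraphs whose readability grade drops after
    simplification. `simplify` is any callable from str to str."""
    lower = sum(
        1 for p in paragraphs
        if flesch_kincaid_grade(simplify(p)) < flesch_kincaid_grade(p)
    )
    return 100.0 * lower / len(paragraphs)
```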
| Model | % of paragraphs with lower Lexile score | SARI score (a BLEU analogue for simplification) |
|---|---|---|
| Manual Newsela rewrite | 79% | |
| Unsupervised model | 72% | 0.71 |
| Supervised model (GPT-2 Medium) | 64% | 0.67 |
| Seq2seq supervised baseline | 52% | 0.47 |
User Study Results
We ran a user study with the aim of going beyond automated offline evaluation and measuring the potential usefulness of text simplification.
We select several short news articles, each rewritten both by hand and algorithmically. A participant reads a single version of the document and must complete a comprehension quiz consisting of five multiple-choice questions.
We measure the average time it takes participants to get all answers right, as well as the average number of quiz re-submissions.
Our preliminary results, while not statistically significant, suggest that simplified texts lead to faster quiz completion and fewer re-submissions on average, with the unsupervised model reducing time spent by roughly 29%.
Contact
Update August 2021:
- We submitted our work to ACL 2021 and it was accepted! Read the full paper here: https://aclanthology.org/2021.acl-long.498/
- All the code and trained models are available on GitHub: https://github.com/tingofurro/keep_it_simple
- We started a follow-up project focused on detecting inconsistencies in summarization and simplification; it is currently under review. Stay tuned for updates!