Normalising Non-standardised Orthography in Algerian Code-switched User-generated Data

Wafia Adouane1, Jean-Philippe Bernardy2, Simon Dobnik2
1Department of Philosophy, Linguistics and Theory of Science, University of Gothenburg, 2University of Gothenburg


We work with Algerian, an under-resourced, non-standardised Arabic variety, for which we compile a new parallel corpus of user-generated textual data matched with normalised and corrected human annotations that follow our data-driven and linguistically motivated standard. We use an end-to-end deep neural model designed for context-dependent spelling correction and normalisation. Results indicate that a model with two CNN sub-network encoders and an LSTM decoder performs best, and that word context matters. Additionally, pre-processing the data token by token with an edit-distance-based aligner significantly improves performance. Using spelling correction and normalisation as a pre-processing step, we also obtain promising results on a downstream task, the detection of binary Semantic Textual Similarity.
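To make the token-level pre-processing concrete, the following is a minimal illustrative sketch (not the authors' implementation) of an edit-distance-based aligner: it pairs each token of a noisy source sentence with its normalised counterpart via Levenshtein-style dynamic programming, using character-level similarity as the substitution cost so that similar word forms are preferred as alignment partners. The function name and cost scheme are assumptions for illustration only.

```python
import difflib


def align_tokens(src, tgt):
    """Return (src_token, tgt_token) pairs from a minimum-edit alignment.

    Unmatched insertions/deletions are paired with None on the missing
    side. Hypothetical sketch of an edit-distance-based token aligner.
    """
    def sub_cost(a, b):
        # Substituting similar strings is cheaper (0 if identical).
        return 1.0 - difflib.SequenceMatcher(None, a, b).ratio()

    n, m = len(src), len(tgt)
    # dp[i][j] = cost of aligning src[:i] with tgt[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = float(i)
    for j in range(1, m + 1):
        dp[0][j] = float(j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(
                dp[i - 1][j] + 1,          # delete a source token
                dp[i][j - 1] + 1,          # insert a target token
                dp[i - 1][j - 1] + sub_cost(src[i - 1], tgt[j - 1]),
            )
    # Backtrace to recover the aligned token pairs.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                abs(dp[i][j] - (dp[i - 1][j - 1]
                                + sub_cost(src[i - 1], tgt[j - 1]))) < 1e-9):
            pairs.append((src[i - 1], tgt[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and abs(dp[i][j] - (dp[i - 1][j] + 1)) < 1e-9:
            pairs.append((src[i - 1], None))
            i -= 1
        else:
            pairs.append((None, tgt[j - 1]))
            j -= 1
    return pairs[::-1]
```

For example, aligning a shorthand token with its full form, `align_tokens(["slm", "khoya"], ["salam", "khoya"])` pairs `"slm"` with `"salam"` and `"khoya"` with itself; such pairs can then feed a supervised correction model.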