Text tokenization is a fundamental pre-processing step for almost all information processing applications. The task is nontrivial for scarce-resourced languages such as Urdu, as space is used inconsistently between words. In this paper, a morpheme-matching-based approach is proposed for Urdu text tokenization. https://www.pomyslnaszycie.com/
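
The abstract does not spell out the algorithm, but one common way to realize morpheme-matching segmentation when spacing is unreliable is a greedy longest-match against a morpheme lexicon. The sketch below is only an illustration of that general idea under assumptions: the MORPHEMES set is a toy lexicon and the greedy strategy is not necessarily the paper's actual method.

    # Illustrative sketch only: generic greedy longest-morpheme-match segmentation.
    # MORPHEMES is a hypothetical toy lexicon, not the paper's resource.
    MORPHEMES = {"خوب", "صورت", "ی"}

    def tokenize(text, morphemes=MORPHEMES, max_len=8):
        """Segment text by matching the longest known morpheme at each position,
        ignoring spaces because their use is inconsistent in Urdu text."""
        text = text.replace(" ", "")
        tokens, i = [], 0
        while i < len(text):
            match = None
            # Try the longest candidate substring first, shrinking until a morpheme matches.
            for length in range(min(max_len, len(text) - i), 0, -1):
                candidate = text[i:i + length]
                if candidate in morphemes:
                    match = candidate
                    break
            if match is None:
                match = text[i]  # unknown character: emit as a single-character token
            tokens.append(match)
            i += len(match)
        return tokens

    print(tokenize("خوبصورتی"))  # -> ['خوب', 'صورت', 'ی']

With a realistic morpheme lexicon, the same loop would split space-less or wrongly spaced Urdu strings into morpheme-level tokens; disambiguation beyond longest-match is left out of this sketch.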
