Lexical density

In computational linguistics, lexical density is the proportion of lexical (content) words to the total number of words in a text, counting both lexical and functional (grammatical) units. It is used in discourse analysis as a descriptive parameter that varies with register and genre; spoken texts, for example, tend to have a lower lexical density than written ones.

Lexical density may be determined thus:

    L_d = (N_lex / N) × 100

Where:

L_d = the analysed text's lexical density

N_lex = the number of lexical word tokens (nouns, adjectives, verbs, adverbs) in the analysed text

N = the number of all tokens (total number of words) in the analysed text

(The variable symbols used here are not conventional; they were chosen arbitrarily to illustrate the formula.)
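The formula above can be sketched in a few lines of Python. The token list and its part-of-speech tags below are hand-made for illustration; a real analysis would use a POS tagger to decide which tokens count as lexical (nouns, adjectives, verbs, adverbs).

```python
# Tags treated as lexical (content) words, per the formula above.
LEXICAL_TAGS = {"NOUN", "ADJ", "VERB", "ADV"}

def lexical_density(tagged_tokens):
    """Return the lexical density (%) of a list of (token, tag) pairs."""
    n_total = len(tagged_tokens)               # N: all tokens
    n_lex = sum(1 for _, tag in tagged_tokens  # N_lex: lexical tokens
                if tag in LEXICAL_TAGS)
    return 100 * n_lex / n_total

# "The cat sat on the mat": 3 lexical tokens out of 6.
tagged = [("The", "DET"), ("cat", "NOUN"), ("sat", "VERB"),
          ("on", "ADP"), ("the", "DET"), ("mat", "NOUN")]
print(lexical_density(tagged))  # → 50.0
```

With half the tokens lexical, the text scores 50%, consistent with the definition that more content words per token yields a higher density.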



This article is issued from Wikipedia - version of the 7/12/2015. The text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply for the media files.