Friday, August 31, 2018

Niamey 1978 & Cape Town 2018: 2. Extended Latin & African language Wikipedias


Image adapted from banner on the Yoruba Wikipedia, August 2018
What are the implications of extended Latin characters and combinations for production of digital materials in African languages written with them? The previous post discussed some of the process of seeking to harmonize transcriptions, in which the Niamey 1978 conference and its African Reference Alphabet (ARA) were prominent. That process had a logic and left a legacy for the representation in writing of many African languages. This post asks if there is a trade-off between the complexity of the Latin-based writing system and how much is produced in it using contemporary digital technologies.

One easy, although by no means conclusive, way to consider this question is to look at Wikipedia editions in African languages (those that are written in Latin script). The following table disaggregates 35 African language editions by the number of articles (from the list of Wikipedias, as of 9 August 2018) and the four "categories" of Latin-based orthography[1] introduced in African Languages in a Digital Age (ch. 7, p. 58):

Number of articles | Category 1: ASCII (basic Latin) | Category 2: "Category 1" + Latin-1 | Category 3: "Cat. 1" or "2" + any of Latin Extended A, B, etc., Additional, & IPA | Category 4: "Category 3" + combining diacritics
< 500 | Swati (447), (Chewa) | Sango (255) | Fula (226), Venda (265), Chewa[2] (389) | Dinka (75), Ewe (345)
500-1000 | Sotho (543), Tumbuka[2] (562), Tsonga (563), Kirundi (611), Tswana (641), Xhosa (741), Oromo (772) | - | Akan (561), Twi (609), Bambara[3] (646) | (Bambara)
1000-2000 | Zulu (1011), Kinyarwanda (1823) | Kongo (1179) | Luganda[4] (1162), Wolof (1167), Gikuyu (1357), Kabiye (1455), Hausa (1891) | Igbo[5] (1340)
2000-5000 | Shona (3761) | - | Kabyle (2860) | Lingala (3028)
5000-10,000 | Somali (5307) | - | Northern Sotho (8067) | -
10,000-25,000 | - | - | - | -
25,000-50,000 | Swahili (44,375) | - | - | Yoruba[5] (31,700)
> 50,000 | Malagasy (85,033) | Afrikaans (52,847) | - | -
# of articles / # of editions = average | 146,190 / 14 = 10,442 | 54,281 / 3 = 18,094 | 20,655 / 13 = 1,589 | 36,488 / 5 = 7,298

Grouping categories 1 & 2: 200,471 total articles / 17 editions = 11,792
Grouping categories 3 & 4: 57,143 total articles / 18 editions = 3,175


Looking at the top row, with the smallest editions (fewer than 500 articles), one is tempted to highlight the strong presence of African languages whose orthographies include extended Latin - categories 3 & 4. However, the next group up (500-1000 articles) has more editions with category 1 orthographies (the simplest) than the group above that (1000-2000) has editions with category 3. And the next ranges (covering 2000-10,000) are roughly even between category 1 on the one hand and categories 3 & 4 on the other. But then the three largest editions (and 3 of the 4 above 25,000) are categories 1 & 2.

So with just a visual analysis, there does not seem to be any clear pattern from arraying the editions in this way. Of course there will be factors other than the complexity of the writing system affecting the success of a Wikipedia edition written in it. But are there ways of looking at this raw data that can give us a clearer idea of what the effect of extended Latin - the ARA plus orthographies with other modified letters and diacritic combinations - might be on the size of Wikipedia editions?

One approach is to consider all the above editions combined, per category of orthography (totaling by column). This puts the focus on the degree of complexity of the writing system, perhaps muting the effect of other language- & location-specific factors. The second-to-last row of the table gives, for each column, the total number of articles in all editions listed above divided by the number of editions, to yield an average figure. This yields an uneven pattern (2>1>4>3), since in the cases of categories 2 & 4, one large edition in a small total number of editions skews the category average up.
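For readers who want to check the arithmetic, here is a minimal Python sketch (my own, not part of the original analysis) that reproduces the per-category averages from the article counts in the table above, as well as the grouped averages discussed just below:

```python
# Article counts per orthography category, as in the table above (9 August 2018).
counts = {
    1: [447, 543, 562, 563, 611, 641, 741, 772, 1011, 1823, 3761, 5307, 44375, 85033],
    2: [255, 1179, 52847],
    3: [226, 265, 389, 561, 609, 646, 1162, 1167, 1357, 1455, 1891, 2860, 8067],
    4: [75, 345, 1340, 3028, 31700],
}

# Per-category averages (compare the second-to-last row of the table).
for cat, articles in counts.items():
    print(f"Category {cat}: {sum(articles):,} / {len(articles)} = "
          f"{sum(articles) / len(articles):,.0f}")

# Grouped averages (compare the last rows): categories 1 & 2 vs. 3 & 4.
for label, group in [("1 & 2", counts[1] + counts[2]),
                     ("3 & 4", counts[3] + counts[4])]:
    print(f"Categories {label}: {sum(group):,} / {len(group)} = "
          f"{sum(group) / len(group):,.0f}")
```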

By totaling the two simpler categories (1 & 2) and the two extended Latin categories (3 & 4), however, one obtains possibly more useful numbers. This aggregation can be rationalized for our purposes here by the fact that the lower two categories are generally supported by commercially available keyboards and input systems,[6] while the higher two categories require a specialized way to input additional characters and perhaps diacritics (such as an alternative keyboard driver, or an online character picker).[7]
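To make this split concrete, here is a rough Python sketch - an illustration of the "one jot" principle described in note 1, not a rigorous classifier - that assigns a category to a piece of text by inspecting its code points:

```python
import unicodedata

def orthography_category(text: str) -> int:
    """Approximate the four-way categorization by code-point inspection:
    1 = ASCII only; 2 = adds Latin-1; 3 = adds extended Latin;
    4 = adds combining diacritics ("complex Latin")."""
    category = 1
    for ch in unicodedata.normalize("NFC", text):  # fold precomposed forms first
        if unicodedata.combining(ch):
            return 4                       # any combining mark: category 4
        if ord(ch) > 0xFF:
            category = max(category, 3)    # beyond Latin-1: extended Latin
        elif ord(ch) > 0x7F:
            category = max(category, 2)    # Latin-1 accented letter
    return category

print(orthography_category("Tswana"))    # 1 (ASCII only)
print(orthography_category("Wolof ŋ"))   # 3 (one "jot" of extended Latin)
print(orthography_category("Ewe ɔ̃"))     # 4 (combining tilde on open-o)
```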

The figures thus obtained show editions written in extended and complex Latin having on average between a quarter and a third the number of articles of those written in ASCII and Latin-1. Admittedly, this result is partly an artifact of the way the categories have been chosen and the figures aggregated, but I'm proposing them as a perspective on the use of extended (and complex) Latin, and on possible gaps in support. Before considering this in more detail, it is useful to compare with the numbers for non-Latin scripts.

What about non-Latin scripts & African language Wikipedias?


Number of articles | Non-Latin
< 500 | Tigrinya (168)
10,000-25,000 | Amharic (14,321), Egyptian Arabic (19,170)
# of articles / # of editions = average | 33,659 / 3 = 11,220
There are only three editions of Wikipedia in African languages written in non-Latin scripts.[8] Two of those - Amharic and Tigrinya - are written with the Ge'ez or Ethiopic script unique to the Horn of Africa.

Arabic is the third. How to count this language for the purposes of this informal analysis raises a question. Arabic, of course, has been established as a first language in North Africa for centuries, but it is also a world language, spoken natively in southwest Asia (having originated in Arabia) and learned as a second language in many regions. Drawing users from this wide community, the Arabic Wikipedia is among the top 20 overall, with twice as many articles as all of the editions discussed above combined. It is more than an African language edition. For this analysis, therefore, I have chosen instead to count just the Egyptian Arabic Wikipedia.

Taking these three editions, we then get an average number of articles (11,220), which is close to what is seen for the Latin categories 1 & 2 (11,792). The usual caveats apply for such a small sample, but taking the numbers as they are, it is interesting that Wikipedias in the complex Arabic alphabet and the large Ge'ez abugida (alphasyllabary) are on average much larger than those of the ostensibly simpler extended Latin (3,175).[9]

Again, script complexity is but one factor, and in this case probably not the most important, since the two non-Latin scripts in question have long histories of use in text in parts of Africa - much longer than any form of Latin script. Nevertheless, from the narrow perspective of what is required for users to edit Wikipedia, the technical issues are in some ways comparable, if anything even more demanding.

Arabic has had standard keyboards since the days of typewriters. The issues there are not so much with input as with whether systems can handle the directionality and composition requirements of the script.

The Ge'ez script, on the other hand, does not involve complex composition rules or bidirectionality. However, it has a total of over 300 characters (including numerals and punctuation; more again if extended ranges are added). The good news is that there are numerous systems to facilitate input. Literacy in the script and availability of input systems would not be limiting factors for content development in major languages using this script. The difference in development of the Amharic and Tigrinya editions of Wikipedia may relate to both the larger population speaking Amharic (as a first or second language) and its official use in a relatively large country (Ethiopia). Development of content in Tigrinya - a cross-border language - might also be hindered by issues particular to one of the two countries where it has many speakers (Eritrea).
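The size of the character set is easy to verify. A quick Python check (counts will vary slightly with the Unicode version shipped with your Python) tallies the assigned code points in the Ethiopic blocks:

```python
import unicodedata

# Assigned code points per Ethiopic block (block ranges per the Unicode standard).
blocks = {
    "Ethiopic (core)": (0x1200, 0x1380),      # syllables, numerals, punctuation
    "Ethiopic Supplement": (0x1380, 0x13A0),
    "Ethiopic Extended": (0x2D80, 0x2DE0),
    "Ethiopic Extended-A": (0xAB00, 0xAB30),
}
for name, (lo, hi) in blocks.items():
    assigned = sum(1 for cp in range(lo, hi)
                   if unicodedata.name(chr(cp), None) is not None)
    print(f"{name}: {assigned} characters")
# The core block alone holds well over 300 characters.
```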

From the above one might suggest that complexity of the written form (to be taken here as including the nature of the script itself, and the size of the character set) may be a limiting factor on content development, but that other factors, such as a literate tradition, official use, and technical support for digital production may overcome such limitations. In the case of African languages written in Latin script, however, any literate tradition is recent, and they are often marginalized in official and educational contexts. For those written with extended Latin, there is the additional factor of lack of an easy and standardized way of inputting special characters. Paradoxically, it seems, a modification of the most widely used alphabet on the planet may actually hobble efforts to edit in these languages.

Facilitating input in extended Latin for African language Wikipedias?


Wikipedia editing screen with "Special Characters"
drop-down modified to show all available ranges.
Assuming that the inconvenience of finding ways to input extended Latin characters may be a factor in the success of African language Wikipedias written with category 3 and 4 orthographies, a quick fix might be to add new ranges for the modified letters used in African languages to the "special characters" picker in the edit screens. As it is currently structured, the extended characters necessary for a category 3 or 4 orthography might be sprinkled across up to 3 different ranges (see at right). And within each range they are not presented in a clear order, so they are sometimes hard to find.

Since it may be too complicated to have a special range for each language edition, another possibility would be to draw inspiration from the Niamey 1978 meeting's ARA, and combine all extended Latin characters and combinations needed for all current African language Wikipedias into a common new range.
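As a very rough sketch of what such a consolidated range might contain, the following Python snippet collects the extended Latin letters from a few sample orthographies into one candidate picker set. The letter inventories here are illustrative samples, not authoritative alphabets:

```python
# Sample (incomplete) sets of special letters for a few editions' orthographies.
samples = {
    "Hausa":  "ɓɗƙƴ",
    "Ewe":    "ɖɛƒɣŋɔʋ",
    "Fula":   "ɓɗŋɲƴ",
    "Yoruba": "ẹọṣ",   # dot-under letters; tone marks would be combining characters
}

# Union of everything beyond Latin-1, plus capital forms, sorted by code point.
picker = {ch for letters in samples.values() for ch in letters if ord(ch) > 0xFF}
picker |= {ch.upper() for ch in picker}   # a picker needs capitals too
print(" ".join(sorted(picker, key=ord)))
```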

Of course, as mentioned above, there are other factors that can contribute to the success or not of Wikipedia editions in African languages written with extended Latin, but this innovation would at least make editing more convenient for contributors to these editions. And perhaps it might have a positive effect on the quantity and quality of articles in these Wikipedias.

In the third and concluding article in this series, I'll step back to look at this analysis and consider some other ways to look at the data on African language editions of Wikipedia, in particular those written in extended Latin.

1. This categorization was intended to help characterize the technical requirements for display and input of various languages. Although the technology has improved to the point that more complex scripts are generally displayed without the kinds of issues one encountered even a decade ago, input still requires extra steps or workarounds. The four categories are additive in that each higher category builds on those below, with added potential issues. It is also a "one jot" system: for example, a single extended Latin character, say š in Northern Sotho or ŋ in Wolof, makes their orthographies category 3 rather than category 1 or 2 (respectively), and the use of the combining tilde over the extended Latin character for open-o - ɔ̃ - makes Ewe category 4 rather than 3. The higher the category, the more the potential issues with display and input (although technical advances tend to level the field, especially as concerns display).
2. The only non-basic Latin character used in Chewa is the w with circumflex: ŵ. Apparently it represents a sound important in only one dialect of the language, and is used infrequently in contemporary publications. On the other hand, there is a proposed (not adopted) orthography for Tumbuka that includes the ŵ. Without this character, either language would be a category 1 orthography; with it, category 3.
3. Bambara is a tonal language. Most often, it seems, tones are not marked in text; however, they can be for clarity, and some dictionaries make a point of indicating tone in entries (not just pronunciations). If tones are unmarked, Bambara is a category 3 orthography; with tones, category 4.
4. The addition of the letter ŋ puts Luganda in category 3 rather than 1.
5. The dot-under (or small vertical line under) characters used notably in Yoruba and Igbo are particular to southern Nigeria, and are not included in the ARA. Yoruba in Benin is written with characters from the ARA. These are tonal languages, and tone is usually marked.
6. When I first proposed this categorization (itself a modification of an earlier effort), there were questions about why have a category 2 separate from category 1. That distinction had its origins in the early days of computing, when systems used 7-bit fonts, meaning that accented letters (diacritic characters) used in, say, French or Portuguese could not be displayed. Even as systems using 8-bit fonts enabled use of the diacritics common in European languages, display issues would still crop up (as a sequence of characters where an accented letter should be). Nowadays, such display issues are rare, and limited (as far as I can tell) to documents in legacy encodings. On the other hand, input of accented characters may require, depending on the keyboard one is using, switching keyboard drivers or using extra keystrokes - so one will occasionally see ASCIIfication of text in such languages (apparently as a user choice).
7. The difference between categories 3 (extended Latin) and 4 (complex Latin) was once significant enough from the point of view of display that informal appeals to Unicode to change its policy of not encoding new "precomposed" characters were common.
8. The Wikipedia Incubator includes several African language projects, which are not covered here. These include some in non-Latin scripts (Arabic versions, N'Ko, and Tamazight) and some in Latin-based orthographies. I mentioned one of the latter - Krio - in a previous post, and hope to do an overview of this space in the near future.
9. The average for all African language editions is 7,704. By comparison, the average for all Wikipedias is about 166,000.

Monday, August 13, 2018

Niamey 1978 & Cape Town 2018: 1. Some thoughts about extended Latin & content in African languages

Image features the 31 modified letters & diacritic combinations in
the African Reference Alphabet, 1978. (Not all are currently in use.)

The world of 40 years ago, when the Meeting of Experts on Transcription and Harmonization of African Languages took place in Niamey, and that of the Wikimania 2018 conference in Cape Town (which ended last month) seem very distant from each other. But from the angle of the written form of African languages at least, the concerns of the two events are not so distant.

One of these concerns is the extended Latin alphabets that were on the agenda in Niamey, and which are used in about half of the African language editions of Wikipedia. This post and the next consider these two vantage points, asking whether extended Latin is associated with less content creation, and what might be done to facilitate use of the longer Latin alphabet.

Adapting the Latin script to African realities


In 1978, representatives of countries that had gained independence no more than a couple of decades earlier - or in some cases only a few years before - met in Niamey to advance work on writing systems for the first languages of the continent. One of the linguistic legacies of the colonial period was the Latin alphabet (even in lands where other writing systems had been used). But given phonological requirements sometimes very different from what Latin letters represented in Europe, linguists added various modified letters, diacritics, and digraphs to write African languages (sometimes even a special system for a single publication[1]).

That legacy thus often took the form of multiple alphabets and orthographies for a single language, reflecting the different origins of European linguists (frequently Christian missionaries of different denominations), the locations in which they worked (perhaps places where speakers of a language had particular dialects or accents), and individual skills and choices. After independence, many African countries undertook to simplify this situation, but they still often ended up with alphabets and spelling conventions different from those in neighboring countries.

The linguists and language specialists in Niamey, as in other such conferences of that era (many of which, like the one in Bamako in 1966, were supported by UNESCO), were concerned with further reducing these discrepancies, and with accurate and consistent transcription of languages that were for the most part spoken in two or more countries (whose speaker communities were divided by borders). That included adopting certain modified letters and diacritic combinations for sounds that are phonemically significant in African languages (some of which correspond with characters in the International Phonetic Alphabet).

Language standardization, which is actually a complex set of decisions, was a real concern where there were on the one hand diverse peoples grouped in each state and on the other hand limited resources for producing materials and training teachers. At its most basic level, though, standardization of any sort required an agreed upon set of symbols and conventions for transcription.[2]

A reference alphabet for shared orthographies


The African Reference Alphabet (ARA)[3] produced by the Niamey meeting was an effort in that direction. It built on a longer post-independence process to facilitate use and development of written forms of African languages - a process that had its roots in the early introduction of the Latin script (before the formal establishment of colonial rule) and in efforts during the colonial period such as the influential (at least in the British colonies) 1928 Africa Alphabet. The ARA was intended - and to some degree at least still serves - as a sort of palette from which orthographies for specific linguistic, multilingual national, and cross-border language needs could be addressed.[4]

And that set of concerns - alphabets, orthographies, and spelling conventions - turned out to be the starting point for later efforts in the context of information and communication technology (ICT) to localize software and interfaces, including Wikipedia and other Wikimedia interfaces, and to develop African language content online, including for Wikimedia projects - even if this set of concerns does not seem as visible as other challenges.

What I haven't seen is an evaluation of the efforts at Niamey and the other expert meetings on harmonization of transcriptions, although the most used of the characters in the ARA can be seen in various publications, and all but perhaps one are in the Unicode standard.

In any event, the situations of the various African languages are diverse, with some having well established corpora while others are "less-resourced," and in the worst case, inconsistently written.

Extended Latin and composing on digital devices


One important element in discussions in the process of which Niamey was part was the role of modified letters - what are now called extended Latin characters - in transcribing many African languages. The ARA includes no less than 30 of them (22 modified letters and 8 basic Latin letters with diacritics[5]). These added characters and combinations are not all intended to be used in any one language, but represent standard options for orthographies. The incorporation of some of these into the writing of a single language makes the writing clearer, and has no drawbacks for teaching, learning, reading, or handwriting (although there are arguments against the use of diacritics). Since the establishment of Unicode for character encoding, the screen display of these characters is not a problem (so long as fonts include glyphs for them).
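Where display problems do persist, it is usually a question of font coverage, which is easy to check. As an illustration, a short script using the fontTools library ("SomeFont.ttf" is a placeholder path, not a specific recommendation) reports which of a handful of ARA letters a given font actually includes:

```python
from fontTools.ttLib import TTFont  # pip install fonttools

font = TTFont("SomeFont.ttf")       # placeholder path to a font file
cmap = font["cmap"].getBestCmap()   # best available Unicode character map

for ch in "ɓɗɛŋɔƴ":                 # a few letters from the ARA
    status = "ok" if ord(ch) in cmap else "MISSING"
    print(f"U+{ord(ch):04X} {ch}: {status}")
```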

However, the presence of even just one or two extended Latin characters leads to problems with standard keyboards and keypads - where are you going to place an additional character, and how is the user to know how to find it? This set of issues was of course recognized back in the era of typewriters. One of the spinoffs from the Niamey conference was the 1982 proposal by Michael Mann and David Dalby (who attended the meeting) for an all lower-case "international niamey keyboard," which put all the modified characters (of an expanded version of the ARA) in the spots normally occupied by upper-case letters.
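A toy sketch makes the idea concrete. The particular key assignments below are invented for illustration - they are not Mann and Dalby's actual 1982 layout:

```python
# Shift layer in the spirit of the "international niamey keyboard": Shift
# yields a modified letter instead of a capital (the layout had no capitals).
SHIFT_LAYER = {
    "B": "ɓ", "D": "ɗ", "E": "ɛ", "N": "ŋ",
    "O": "ɔ", "S": "ʃ", "Y": "ƴ", "Z": "ʒ",
}

def keystroke(key: str, shift: bool = False) -> str:
    """Return the character produced by a key press on this hypothetical layout."""
    if shift:
        return SHIFT_LAYER.get(key.upper(), key.lower())
    return key.lower()

print(keystroke("e", shift=True))  # ɛ
print(keystroke("e"))              # e
```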

While that proposal never went far (I hope to return to the subject later) - due in large part to its abandonment of capital letters - it was but one extreme approach to a conundrum that is still with us. That is, how to facilitate input of Latin characters and combinations that are not part of the limited character sets that physical keyboards and keypads are primarily designed for. It's not that there aren't ways of facilitating input - virtual keyboard layouts (keyboard drivers that can be designed and shared with tools like Keyman, and onscreen keyboards) have been with us for years, and there are other input systems (voice recognition / speech-to-text being one). The problem is the lack of standard arrangements and systems for many languages. Or perhaps, in the matter of input systems, the old saw "the nice thing about standards is that there are so many to choose from" applies.

The result, arguably, may be a drag on widespread use of extended Latin characters and, as a consequence, on popular use on digital devices of the languages whose orthographies include them. Or a choice to ASCIIfy text (using only basic Latin), as has been the case with Hausa on international radio websites. Or even confusion based on continued use of outdated 8-bit font + keyboard driver systems, as witnessed in at least one case with Bambara (see discussion and example).
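To show what ASCIIfication means in practice, here is a minimal example with Hausa's hooked letters (the mapping and word pair are illustrative):

```python
# Hooked letters fall together with their plain counterparts - lossy but common.
ASCIIFY = str.maketrans({"ɓ": "b", "ɗ": "d", "ƙ": "k", "ƴ": "y",
                         "Ɓ": "B", "Ɗ": "D", "Ƙ": "K", "Ƴ": "Y"})

print("ƙasa ɗaya".translate(ASCIIFY))  # -> "kasa daya"
```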

What can the level of contributions to African language editions of Wikipedia tell us about the effect of extended Latin? This will be explored in the next post: Extended Latin & African language Wikipedias.

1. For example, some works on forest flora included lists of common names in major languages of the region.
2. Arguably in the case of a language written in two or three different scripts, one could have a system in each script and an accepted way to transliterate between or among them.
3. The only other prominent use I found of the term "reference alphabet" was that of the ITU for their version of ISO 646 (basically the same as ASCII): "International Reference Alphabet." The concept of reference alphabet seems to be a useful one in contexts where many languages are spoken and writing systems for them aren't yet established.
4. This approach - adopting a standard or reference alphabet for numerous languages - was taken by various African countries, for example Cameroon and Nigeria. These efforts were without doubt influenced by the process of which Niamey and the ARA were part.
5. By comparison, the Africa Alphabet had 11 modified letters and did not use diacritics. All 11 of the characters added in the Africa Alphabet were incorporated in the ARA. It is worth noting that in the range of modified letters / special characters created over the years, some are incorporated into many orthographies, others fewer, and some are rarely used if at all.