Thursday, February 21, 2019

IMLD 2019 & IYIL 2019 in Africa

On this International Mother Language Day 2019 (IMLD 2019), which has as its theme "Indigenous languages matter for development, peace building and reconciliation," a question: What are "indigenous languages" in Africa?

The question arises also since we are over a month into the International Year of Indigenous Languages (IYIL 2019), an observance declared by the United Nations General Assembly in its 2016 resolution on the Rights of Indigenous Peoples. The purpose of IYIL 2019 as stated in that resolution (under #13) is:

"to draw attention to the critical loss of indigenous languages and the urgent need to preserve, revitalize and promote indigenous languages and to take further urgent steps at the national and international levels"

I won't propose a definitive answer as to what counts as an "indigenous language," since my understanding is that the themes of IMLD and IYIL are inclusive, but it seems an important issue to think about in the interest of encouraging wide participation in Africa.

With that in mind, it is worth noting that the South African Centre for Digital Language Resources (SADiLaR) calls for "Celebration of South African languages" in the context of IYIL 2019. Also of interest is that the January-February issue of the UNESCO Courier, themed "Indigenous Languages and Knowledge (IYIL 2019)," has an article on the Mbororo people of Chad, whose mother tongue is a variety of Fula (Fulfulde/Pulaar) - a language that originated in a region far to the west.

I hope to come back to some of the broader issues concerning indigenous languages of Africa in a following article, but in the meantime, wishing all a happy IMLD 2019!

Sunday, January 06, 2019

Writing Bambara wrong & a petition to VOA

Why does the Voice of America (VOA) Bambara service's web content use a frenchized transcription of Bambara while Radio France International (RFI) uses the Bambara orthography?

Screenshot from page on VOA Bambara website. In Bambara orthography:
Jamana tigi: Ibrahima Bubakar Keyita ye Sankura foli kɛ ka ɲɛci jamana
denw ma. (Presidential New Year's address to the people of the country)
This question comes to mind in light of a petition being circulated by the Cercle Linguistique Bamakois asking that VOA follow the Bambara orthography on its web presence. (An English version of the petition is included at the end of this post.)

According to Sam Samake, a language specialist in Bamako, VOA's rationale for its current approach is to reach a large number of listeners who do not read or write Bambara in the official orthography,1 but who have been schooled in the French language system. In the view of Dr. Coleman Donaldson, a researcher on Manding languages (which include Bambara), this is part of a pattern of disregard for the spelling and orthographic conventions adopted by the Malian government and now used in many primary schools.2 (This system also happens to harmonize with the orthographies of neighboring countries, thanks to the process that included the Bamako 1966 and Niamey 1978 conferences.)

Use or non-use of African language orthographies - and implications of respect or disrespect that accompany that choice - is not at all a new discussion. Coleman has a more recent examination of the broader problem as it appears in Mali, in a book chapter:3
"In a context where there is no shortage of people trained in official Bamanan orthography, the fact that the multinational telecommunications firm Orange fails to respect the official conventions is not simply a case of shoddy work; it is in fact part of the message."
Screenshot from RFI Mandenkan homepage. (Days & times of broadcasts).
In fact, as Sam pointed out, Malian government personnel, including for example everyone in the national broadcasting service, ORTM, have been trained in this orthography.1 So it does not appear at all accidental that major international entities like VOA and Orange opt to write Bambara as they please.

In this context, it is interesting to note RFI's decision on "Mandenkan" web content. Mandenkan, or Manding, is a group of largely mutually intelligible languages in the Mande family, including Bambara (or Bamanankan). RFI uses what looks like Bambara in the proper Malian orthography. That said, the amount of text in the language is limited to a static page on its main site (from which the image above was drawn), and some text in older posts on its Facebook page.

L2 literacy & L1 illiteracy?



VOA's decision to use a frenchized (or Frenchified) transcription of Bambara - which, it should be noted, has no standard form, pretty much by definition - is apparently premised on the notion that many people in its audience don't read the standard Bambara orthography. There may be something to this, to the extent that formal education in Mali is mainly or exclusively in French, and people who read French can sound out text with spellings reflecting French phonetics.

However, this reasoning has at least two problems. First, it is not clear how much of the audience cannot read Bambara written in the official orthography - have there been any surveys? Second, for a native speaker of the language, the official orthography would not seem that hard to work through.

On the latter point, a word about multilingual literacy, or its absence, in Africa. Many people in Africa have been taught to read in a Europhone language (French, in the case of Mali) - for the vast majority a second language ("L2") - but never formally taught in their first language ("L1") or local lingua franca (like Bambara in Mali). This leads to situations where many people are not comfortable reading in their familiar African languages. I've been among those calling attention to the problem of using a single measure of literacy in such multilingual contexts.4

However, that's not the same as saying an L2 (and non-L1) literate person should access their L1 only through the phonetics of the L2. The bridge from L2-only literacy to L1 literacy is not as long as that from illiteracy to basic literacy of any kind. And the Latin-based orthography of Bambara (what we are talking about here) is not that difficult to master. After all, it doesn't seem to have put a crimp in RFI Mandenkan's effectiveness.

Tech issues: A problem? And a potential


One has to ask whether a hidden issue with VOA and the Bambara orthography isn't keyboards and input. Is it possible that a simple input solution enabling the VOA Bambara service staff to type the special characters used in Bambara could change this discussion?
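To make the point concrete, "input solution" need not mean new hardware: a trivial software remapping can turn ASCII sequences a typist already produces into the extended letters of the official orthography. The sketch below is hypothetical - the digraph choices are placeholders of my own, not any existing standard or VOA tool - but it suggests how small the technical hurdle is:

```python
# Minimal sketch of substitution-based input for Bambara's extended letters.
# The ASCII sequences on the left are hypothetical placeholders chosen for
# illustration; a real deployment would follow an agreed convention and
# handle cases where the raw sequences are meant literally.
SUBSTITUTIONS = {
    "e'": "ɛ",  # open e
    "o'": "ɔ",  # open o
    "ny": "ɲ",  # palatal nasal
    "nw": "ŋ",  # velar nasal (placeholder sequence)
}

def to_bambara_orthography(text: str) -> str:
    for seq, letter in SUBSTITUTIONS.items():
        text = text.replace(seq, letter)
    return text

print(to_bambara_orthography("ko'ro'"))  # -> kɔrɔ (illustrative only)
```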

Also, could VOA use the perceived shortcomings in audience mastery of the Bambara orthography as an opportunity to engage its audience with some kind of online learning app? This would certainly generate a more favorable buzz than the current situation does.

Petition to VOA


The only version of the petition I am aware of is the one in French on the Change.org site. A Bambara version would be logical - as a "medium is the message" statement if nothing else - but I have not seen one. Appended below, for the information of people who do not read French but do read English, is a quick translation5 of the text of the petition:
Voice of America journalists must respect the Bambara orthography
Considering that Mali, since its accession to independence and through all the successive regimes, has emphasized the importance of the languages and cultures of the country;

Considering that the question of languages spoken in Mali is included in the country's constitution;

Considering that for decades there have been departments dedicated to the question of the languages of Mali;

Considering the remarkable work done by Malian and foreign linguists on the languages spoken in Mali from 1960 to the present day;

Considering the intellectual and financial effort made by Mali and its international partners (in particular the African Academy of Languages) in the codification and use of Mali's languages in schools and in the media;

Considering that learning and respecting these written standards is an obligation, in order to perpetuate the work of codification already carried out;

Considering that the state of Mali, through the dedicated departments, guarantees these standards;

Considering that the journalists of the Mandenkan team of RFI (Radio France Internationale) have been trained and correctly use the spelling rules of Bambara;

Considering that the Bambara team of the Voice of America (VOA) does not respect any Bambara spelling rules;

We hereby call on the State of Mali (through the Ministry of National Education / Malian Academy of Languages) and the African Academy of Languages to remind the Voice of America of strict respect for the spelling rules of Bambara on the VOA Bambara page.

Recommend for this purpose:

The training of Bambaraphone journalists of VOA in the spelling rules of Bambara.

What about Hausa?


This discussion would not be complete without mention of the continued use of ASCIIfied Hausa by the international radio operations, including VOA and RFI. And how is it that RFI gets Mandenkan (Bambara) right, but not Hausa?
_______
1. He mentioned this in a discussion about the topic on the Facebook African Languages group page (2 Jan. 2019). Sam is a former Peace Corps/Mali language program instructor and administrator. We have known each other since the slightly famous Peace Corps pre-service training in Moribabougou, Mali in 1983.
2. See Coleman's blog post on this topic and the VOA petition, "Voice of America's Bambara Orthography and a Petition," on his interesting site about Manding languages (which include Bambara), An ka taa.
3. Coleman Donaldson. 2017. "Orthography, Standardization and Register: The Case of Manding." In P. Lane, J. Costa, & H. De Korne (Eds.), Standardizing Minority Languages: Competing Ideologies of Authority and Authenticity in the Global Periphery (pp. 175–199). New York, NY: Routledge.
4. See, for instance, "Multilingual Literacy Day, 2014" (8 Sep 2014).
5. Based on what Google Translate produced, which was much more useful than Systranet's output.

Saturday, December 29, 2018

Niamey 1978 & Cape Town 2018: 3. Other angles on Wikipedias in extended/complex Latin

Last August, I began a set of three posts marking the 40th anniversary of the Niamey 1978 meeting on harmonizing African language orthographies, and connecting it with the Wikimania 2018 conference in Cape Town - the first in sub-Saharan Africa. This post concludes the series.

The central element of this discussion is the extended Latin alphabet, which is used in the orthographies of many African languages to accurately convey meaningfully important sounds, and which often makes input with ordinary keyboards or keypads difficult.
Composite image from welcome page of the Ewe Wikipedia.

Looking again at Wikipedias in extended/complex Latin


So far, this series has raised some questions, made some admittedly superficial comparisons, and speculated as to factors related to the success or not of Wikipedia editions in African languages. What might be ways of improving this analysis?

One way would be different approaches to sorting and categorizing the Wikipedias using the same numbers as in the tables in the previous post. One of those approaches could be to consider the relative degree of complexity of each orthography within a category. For example, in category 3, Fulfulde uses 3, 4 or 5 extended Latin characters and Hausa 3 or 4 (depending on the country), while Luganda and Northern Sotho use only one. Yoruba, when typed with the precomposed dot-under (aka "subdot") characters and combining diacritics for tones, is less complicated than the same language using the classic style with a small vertical line under - because the latter requires a combining diacritic, which may mean "stacking" two diacritics on a character where a tone is involved.
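The difference is easy to see at the level of Unicode code points. Here is a quick illustration (Python used only for demonstration) comparing the two styles of writing a Yoruba vowel with an under-mark plus a high tone:

```python
import unicodedata

# Yoruba o with under-mark and high tone, written two ways:
dot_under     = "\u1ECD\u0301"   # precomposed ọ (dot below) + combining acute
vertical_line = "o\u0329\u0301"  # o + combining vertical line below + combining acute

for label, s in (("dot-under", dot_under), ("vertical line", vertical_line)):
    marks = sum(1 for c in s if unicodedata.combining(c))
    print(f"{label}: {len(s)} code points, {marks} combining mark(s)")
# dot-under: 2 code points, 1 combining mark
# vertical line: 3 code points, 2 combining marks ("stacked")
```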

A different way would be to look at the quality of content in the various Wikipedia editions. In some, the raw numbers of articles (which are the figures used in the tables mentioned above) are inflated by shell articles, by which I mean stubs that may have only a few words of text and perhaps an image. The list of Wikipedias does include a "depth" metric, which might be used (or perhaps adapted) to look for possible correlations between the quantity and quality of the content on the one hand, and the nature of the orthography on the other.
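For reference, here is the depth calculation as I read the formula published with the list of Wikipedias - a minimal sketch, with made-up placeholder figures rather than real edition statistics:

```python
# Depth = (Edits/Articles) x (NonArticles/Articles) x (1 - Articles/Total)
# -- my paraphrase of the formula documented with the List of Wikipedias.
def depth(articles: int, total_pages: int, edits: int) -> float:
    non_articles = total_pages - articles
    return (edits / articles) * (non_articles / articles) * (1 - articles / total_pages)

# Hypothetical numbers, for illustration only:
print(round(depth(articles=1_000, total_pages=5_000, edits=20_000), 1))  # 64.0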

Yet another way would be to consider the numbers of people working on these editions. Wikipedia counts the numbers of users, active users, and administrators per edition. Could one use these figures to better understand whether the more successful Wikipedia editions in extended Latin (in terms of numbers of articles or a depth metric) are so because of the efforts of a relatively small number of users? That's not to imply any negative judgment of such cases, but it would be useful to know whether a complicated writing system (from the point of view of input) is not a hurdle for a large number of contributors (active users), or whether it's really a case of a few savvy individuals carrying the load.1
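These per-edition counts are available programmatically, so such a comparison would not be hard to automate. A sketch using the standard MediaWiki siteinfo API (the language codes shown are just examples):

```python
import json
from urllib.request import urlopen

# Pull per-edition statistics from the standard MediaWiki siteinfo API.
for lang in ("yo", "sw", "bm"):  # e.g. Yoruba, Swahili, Bambara editions
    url = (f"https://{lang}.wikipedia.org/w/api.php"
           "?action=query&meta=siteinfo&siprop=statistics&format=json")
    with urlopen(url) as resp:
        stats = json.load(resp)["query"]["statistics"]
    print(lang, stats["articles"], stats["users"],
          stats["activeusers"], stats["admins"])
```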

And another approach would be to expand the scope of analysis to consider other factors: How many people speak the language? Is it taught in schools? How much printed material is available? Are there different dialects or written conventions that a contributor to or a reader of a given African language edition of Wikipedia must navigate? Any of these and perhaps others might, individually or in combination, affect the potential production of web content in general, and the success of these Wikipedias in particular.

One could also put the numbers in the background and do a qualitative study focusing on the experience of the editors of African language editions of Wikipedia. What might emerge from such discussions concerning the range of tasks involved in building and maintaining an active Wikipedia?

And then there are some stray questions certainly worth checking out. For instance, does the base interface language on which each of the African Wikipedias is built (English vs. French) have any bearing at all on the success of Wikipedia editions? What about the degree of localization of the interfaces (from English or French to the language of content)? And does that degree of localization relate at all to the complexity of the script?

Research towards success with African language Wikipedias


Although the number of Wikipedias in African languages is relatively small (about 13% of all editions, collectively containing less than 1% of the total number of articles in all Wikipedias combined2), there are arguably enough data and diverse user experiences to give us a better idea of both how to develop small Wikipedias in Africa, and how much of a factor the scripts used to write them might be in their relative success.

Looking beyond African languages to the experience of Wikipedia editions in other languages written in extended Latin (and non-Latin scripts) would be instructive. This would likely highlight not only methods to facilitate input of diverse writing systems, but also supportive environments (or "localization ecologies") for these languages in general. 

Success for African language editions of Wikipedia may not be found in imitating work on other editions so much as in identifying ways to leverage the strengths and unique resources of African language communities. Nevertheless, facilitating input is fundamental, relying at its most basic level on common technology (for keyboards, etc.) and features of the MediaWiki software.

With rapid advances in language technology, an additional focus should be how to adapt speech-to-text to African languages to facilitate creation of content from oral narratives, interviews, and exposition. This is a topic I hope to return to later.


1. The Yoruba (category 4 orthography) and Northern Sotho (category 3) Wikipedias, for instance, each benefited at different times from large numbers of articles created by a single user in their respective communities.
2. That's 38 of 292 if the tiny Dinka edition is included; 37/291 if not. And about 293k articles out of 48.6 million total. (All as of July 2018.)

Friday, August 31, 2018

Niamey 1978 & Cape Town 2018: 2. Extended Latin & African language Wikipedias


Image adapted from banner on the Yoruba Wikipedia, August 2018
What are the implications of extended Latin characters and combinations for production of digital materials in African languages written with them? The previous post discussed some of the process of seeking to harmonize transcriptions, in which the Niamey 1978 conference and its African Reference Alphabet (ARA) were prominent. That process had a logic and left a legacy for the representation in writing of many African languages. This post asks if there is a trade-off between the complexity of the Latin-based writing system and how much is produced in it using contemporary digital technologies.

One easy, although by no means conclusive, way to consider this question is to look at Wikipedia editions in African languages (those that are written in Latin script). The following table disaggregates 35 African language editions by the number of articles (from the list of Wikipedias, as of 9 August 2018) and the four "categories" of Latin-based orthography1 introduced in African Languages in a Digital Age (ch. 7, p. 58):

| Number of articles | Category 1 | Category 2: "Category 1" + Latin 1 | Category 3: "Cat. 1" or "2" + any of Latin Extended A, B, etc., Add'l, & IPA | Category 4: "Category 3" + combining diacritics |
|---|---|---|---|---|
| < 500 | Swati (447), (Chewa) | Sango (255) | Fula (226), Venda (265), Chewa² (389) | Dinka (75), Ewe (345) |
| 500-1000 | Sotho (543), Tumbuka² (562), Tsonga (563), Kirundi (611), Tswana (641), Xhosa (741), Oromo (772) | - | Akan (561), Twi (609), Bambara³ (646) | (Bambara) |
| 1000-2000 | Zulu (1011), Kinyarwanda (1823) | Kongo (1179) | Luganda⁴ (1162), Wolof (1167), Gikuyu (1357), Kabiye (1455), Hausa (1891) | Igbo⁵ (1340) |
| 2000-5000 | Shona (3761) | - | Kabyle (2860) | Lingala (3028) |
| 5000-10,000 | Somali (5307) | - | Northern Sotho (8067) | - |
| 10,000-25,000 | - | - | - | - |
| 25,000-50,000 | Swahili (44,375) | - | - | Yoruba⁵ (31,700) |
| > 50,000 | Malagasy (85,033) | Afrikaans (52,847) | - | - |
| # of articles / # of editions = Average | 146,190 / 14 = 10,442 | 54,281 / 3 = 18,094 | 20,655 / 13 = 1,589 | 36,488 / 5 = 7,298 |
| Grouping 1&2, 3&4 | 200,471 total articles / 17 editions = 11,792 | | 57,143 total articles / 18 editions = 3,175 | |


Looking at the top row with the smallest editions (fewer than 500 articles), one is tempted to highlight the high presence of African languages whose orthographies include extended Latin - categories 3 & 4. However, in the group with the next highest number of articles (500-1000) there are more editions with category 1 orthographies (the simplest) than there are editions with category 3 orthographies in the group above that (1000-2000). And the next highest ranges (covering 2000-10,000) are roughly even between category 1 on the one hand, and 3 & 4 on the other. But then the three largest editions (and three of the four above 25,000) are in categories 1 & 2.

So with just a visual analysis, there does not seem to be any clear pattern from arraying the editions in this way. Of course there will be factors other than the complexity of the script affecting the success of a Wikipedia edition written in it. But are there ways of looking at this raw data that can give us a clearer idea of what the effect of extended Latin - the ARA plus orthographies with other modified letters and diacritic combinations - might be on the size of Wikipedia editions?

One approach is to consider all the above editions combined, per category of orthography (totaling by column). This puts the focus on the degree of complexity of the writing system, perhaps muting the effect of other language- & location-specific factors. The second to last row gives column totals of the number of articles in all editions listed above, divided by the number of editions, to give an average figure. This yields an uneven pattern (2>1>4>3), since in the cases of 2 & 4, one large edition in a small total number of editions skews the category average up.

By taking the totals of the two simpler categories (1 & 2) and of the two extended Latin categories (3 & 4), however, one obtains possibly more useful numbers. This aggregation can be rationalized for our purposes here by the fact that the lower two categories are generally supported by commercially available keyboards and input systems,6 while the higher two categories require a specialized way to input additional characters and possibly diacritics (such as an alternative keyboard driver, or an online character picker).7

The figures thus obtained show editions written in extended and complex Latin having, on average, about a third the number of articles of those written in ASCII and Latin-1. Admittedly, this result comes in part from the way categories have been chosen and figures aligned, but I'm proposing them as a perspective on the use of extended (and complex) Latin, and possible gaps in support. Before considering this in more detail, it is useful to compare with the numbers for non-Latin scripts.
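For transparency, the arithmetic behind that comparison can be reproduced in a few lines, using the totals from the table above:

```python
# Column totals from the table above: (articles, editions) per category.
totals = {1: (146_190, 14), 2: (54_281, 3), 3: (20_655, 13), 4: (36_488, 5)}

for cat, (articles, editions) in totals.items():
    print(f"category {cat}: average {articles / editions:,.0f} articles")

groups = {"cats 1&2": (totals[1], totals[2]),   # ASCII and Latin-1
          "cats 3&4": (totals[3], totals[4])}   # extended and complex Latin
for label, (a, b) in groups.items():
    arts, eds = a[0] + b[0], a[1] + b[1]
    print(f"{label}: {arts:,} articles / {eds} editions = {arts / eds:,.0f}")
```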

What about non-Latin scripts & African language Wikipedias?


| Number of articles | Non-Latin |
|---|---|
| < 500 | Tigrinya (168) |
| 10,000-25,000 | Amharic (14,321), Egyptian Arabic (19,170) |
| # of articles / # of editions = Average | 33,659 / 3 = 11,220 |
There are only three editions of Wikipedia in African languages written in non-Latin scripts.8 Two of those - Amharic and Tigrinya - are written with the Ge'ez or Ethiopic script unique to the Horn of Africa.

Arabic is the third. How to count this language for the purposes of this informal analysis raises a question. Arabic, of course, has been established as a first language in North Africa for centuries, but it is also a world language, spoken natively in southwest Asia (having originated in Arabia), and learned as a second language in many regions. Drawing users from this wide community, the Arabic Wikipedia is among the top 20 overall, with twice as many articles as all of the editions discussed above combined. It is more than an African language edition. For this analysis, therefore, I have chosen instead to count just the Egyptian Arabic Wikipedia.

Taking these three editions, we then get an average number of articles (11,220), which is close to what is seen for the Latin categories 1 & 2 (11,792). The usual caveats apply for such a small sample, but taking the numbers as they are, it is interesting that Wikipedias in the complex Arabic alphabet and the large Ge'ez abugida (alphasyllabary) are on average much larger than those in the ostensibly simpler extended Latin (3,175).9

Again, script complexity is but one factor, and in this case probably not the most important, since the two non-Latin scripts in question have long histories of use in text in parts of Africa - much longer than any form of Latin script. Nevertheless, from the narrow perspective of what is required for users to edit Wikipedia, the technical issues are in some ways comparable if even more demanding.

Arabic has had standard keyboards since the days of typewriters. The issues there are not so much the input, but whether systems can handle the directionality and composition requirements of the script.

The Ge'ez script, on the other hand, does not involve complex composition rules or bidirectionality. However, it has a total of over 300 characters (including numerals and punctuation; more again if extended ranges are added). The good news is that numerous input systems exist to facilitate typing them. Literacy in the script and availability of input systems would not be limiting factors for content development in major languages using this script. The difference in development of the Amharic and Tigrinya editions of Wikipedia may relate to both the larger population speaking Amharic (as a first or second language), and its official use in a relatively large country (Ethiopia). Development of content in Tigrinya - a cross-border language - might also be hindered by issues particular to one of the two countries where it has many speakers (Eritrea).
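The size of the character set is easy to check against the Unicode database; a quick sketch counting the assigned characters in the basic Ethiopic block (the extended blocks mentioned above would add more):

```python
import unicodedata

# Count assigned characters in the basic Ethiopic block, U+1200..U+137F.
assigned = 0
for cp in range(0x1200, 0x1380):
    try:
        unicodedata.name(chr(cp))  # raises ValueError if unassigned
        assigned += 1
    except ValueError:
        pass
print(assigned, "assigned characters in the basic Ethiopic block")
```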

From the above one might suggest that complexity of the written form (to be taken here as including the nature of the script itself, and the size of the character set) may be a limiting factor on content development, but that other factors, such as a literate tradition, official use, and technical support for digital production may overcome such limitations. In the case of African languages written in Latin script, however, any literate tradition is recent, and they are often marginalized in official and educational contexts. For those written with extended Latin, there is the additional factor of lack of an easy and standardized way of inputting special characters. Paradoxically, it seems, a modification of the most widely used alphabet on the planet may actually hobble efforts to edit in these languages.

Facilitating input in extended Latin for African language Wikipedias?


Wikipedia editing screen with "Special Characters"
drop-down modified to show all available ranges.
Assuming that the inconvenience of finding ways to input extended Latin characters may be a factor in the success of African language Wikipedias written with category 3 and 4 orthographies, a quick fix might be to add new ranges for the modified letters used in African languages to the "special characters" picker in the edit screens. As currently structured, the extended characters necessary for a category 3 or 4 orthography may be sprinkled across up to 3 different ranges (see at right). And within each range, they are not presented in a clear order, so they are sometimes hard to find.

Since it may be too complicated to have a special range for each language edition, another possibility would be to draw inspiration from the Niamey 1978 meeting's ARA, and combine all extended Latin characters and combinations needed for all current African language Wikipedias into a common new range.
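As a rough illustration of what such a pooled range might contain - a hypothetical sample of my own, not a formal proposal, and a real one would also need the capital forms and combining tone marks - consider:

```python
# Hypothetical pooled picker range of extended Latin letters for the African
# language editions discussed in this series (lowercase sample only).
AFRICAN_LATIN_PICKER = [
    "ɓ", "ɗ", "ɖ", "ɛ", "ƒ", "ɣ", "ƙ", "ŋ", "ɲ", "ɔ", "ʃ", "ʋ", "ƴ",  # e.g. Hausa, Bambara, Ewe, Fula
    "ẹ", "ị", "ọ", "ụ", "ṣ", "ṅ",  # Yoruba & Igbo dot-under style (southern Nigeria)
    "š", "ŵ",                      # Northern Sotho, Chewa
]
print(len(AFRICAN_LATIN_PICKER), "characters in one combined range")
```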

Of course, as mentioned above, there are other factors that can contribute to the success or not of Wikipedia editions in African languages written with extended Latin, but this innovation would at least make editing more convenient for contributors to these editions. And perhaps it might have a positive effect on the quantity and quality of articles in these Wikipedias.

In the third, and concluding article in this series, I'll step back to look at this analysis and consider some other ways to look at the data on African language editions of Wikipedia, and in particular, those written in extended Latin.

1. This categorization was intended to help characterize the technical requirements for display and input of various languages. Although the technology has improved to the point that more complex scripts are generally displayed without the kinds of issues one encountered even a decade ago, input still requires extra steps or workarounds. The four categories are additive in that each higher category builds on those below, with added potential issues. It is also a "one jot" system in that, for example, a single extended Latin character, say š in Northern Sotho or ŋ in Wolof, makes their orthographies category 3 rather than category 1 or 2 (respectively), and the use of the combining tilde over the extended Latin character for open-o - ɔ̃ - makes Ewe category 4 rather than 3. The higher the category, the more potential issues with display and input (although technical advances tend to level the field, especially as concerns display).
2. The only non-basic Latin character used in Chewa is the w with circumflex: ŵ. Apparently it represents a sound important in only one dialect of the language, and is used infrequently in contemporary publications. On the other hand, there is a proposed (not adopted) orthography for Tumbuka that includes the ŵ. Without this character, either language would be a category 1 orthography; with it, category 3.
3. Bambara is a tonal language. Most often, it seems, tones are not marked in text; however, they can be for clarity, and some dictionaries make a point of indicating tone in the entries (not just pronunciation). If tones are unmarked, Bambara would be considered a category 3 orthography; with tones, category 4.
4. The addition of the letter ŋ puts Luganda in category 3 rather than 1.
5. The dot-under (or small vertical line under) characters used notably in Yoruba and Igbo are particular to southern Nigeria, and not included in the ARA. Yoruba in Benin is written with characters from the ARA. These are tonal languages, and tone is usually marked.
6. When I first proposed the categorization (itself a modification of an earlier effort), there were some questions as to why have a category 2 separate from category 1. That distinction had its origins in the early days of computing, when systems used 7-bit fonts, meaning that accented letters (diacritic characters) used in, say, French or Portuguese could not be displayed. Even as systems using 8-bit fonts enabled use of diacritics common in European languages, display issues would still crop up (as a sequence of characters where an accented letter should be). Nowadays, such display issues are rare, and limited (as far as I can tell) to documents in legacy encodings. On the other hand, input of accented characters may require, depending on the keyboard one is using, switching keyboard drivers or using extra keystrokes - so one will occasionally see ASCIIfication of text in such languages (apparently as a user choice).
7. The difference between categories 3 (extended Latin) and 4 (complex Latin) was once significant enough from the point of view of display that informal appeals to Unicode to change its policy of not encoding new "precomposed" characters were common.
8. The Wikipedia Incubator includes several African language projects, which are not covered here. These include some in non-Latin scripts (Arabic versions, N'Ko, and Tamazight) and some in Latin-based orthographies. I mentioned one of the latter - Krio - in a previous post, and hope to do an overview of this space in the near future.
9. The average for all African language editions is 7,704. By comparison, the average for all Wikipedias is 166k.

Monday, August 13, 2018

Niamey 1978 & Cape Town 2018: 1. Some thoughts about extended Latin & content in African languages

Image features the 31 modified letters & diacritic combinations in
the African Reference Alphabet, 1978. (Not all are currently in use.)

The world of 40 years ago, when the Meeting of Experts on Transcription and Harmonization of African Languages took place in Niamey, and that of the Wikimania 2018 conference in Cape Town (which ended last month) seem very distant from each other. But from the angle of the written form of African languages at least, the concerns of the two events are not so distant.

One of these concerns is the extended Latin alphabets that were on the agenda in Niamey, and which are used in about half of the African language editions of Wikipedia. This post and the next consider these two vantage points, asking whether extended Latin is associated with less content creation, and what might be done to facilitate use of the longer Latin alphabet.

Adapting the Latin script to African realities


In 1978, representatives of countries that had gained independence no more than a couple of decades earlier - or, in some cases, only a few years before - met in Niamey to advance work on writing systems for the first languages of the continent. One of the linguistic legacies of the colonial period was the Latin alphabet (even in lands where other systems had been used). But given phonological requirements sometimes very different from what Latin letters represented in Europe, linguists added various modified letters, diacritics, and digraphs to write African languages (sometimes even a special system for a single publication1).

So that legacy also often took the form of multiple alphabets and orthographies for a single language, reflecting the different origins of European linguists (frequently Christian missionaries from different denominations), the locations in which they worked (perhaps places where speakers of a language had particular dialects or accents), and individual skills and choices. After independence, many African countries undertook to simplify this situation, but they still often ended up with alphabets and spelling conventions different from those in neighboring countries.

The linguists and language specialists in Niamey, as in other such conferences of that era (many of which, like the one in Bamako in 1966, were supported by UNESCO), were concerned with further reducing these discrepancies, and with accurate and consistent transcription of languages that were for the most part spoken in two or more countries (whose speaker communities were divided by borders). That included adopting certain modified letters and diacritic combinations for sounds that were meaningfully significant in African languages (some of which correspond with characters in the International Phonetic Alphabet).

Language standardization, which is actually a complex set of decisions, was a real concern where there were on the one hand diverse peoples grouped in each state and on the other hand limited resources for producing materials and training teachers. At its most basic level, though, standardization of any sort required an agreed upon set of symbols and conventions for transcription.2

A reference alphabet for shared orthographies


The African Reference Alphabet (ARA)3 produced by the Niamey meeting was an effort in that direction. It built on the longer post-independence process to facilitate use and development of written forms of African languages - a process that had its roots in the early introduction of the Latin script (before the formal establishment of colonial rule) and in efforts during the colonial period such as the influential (at least in the British colonies) 1928 Africa Alphabet. The ARA was intended - and to some degree at least still serves - as a sort of palette from which orthographies could be drawn to address specific linguistic, multilingual national, and cross-border language needs.4

And that set of concerns - alphabets, orthographies and spelling conventions - turned out to be the starting point for later efforts in the context of information and communication technology (ICT) to localize software and interfaces, including Wikipedia and other Wikimedia interfaces, and to develop African language content online, including for Wikimedia projects - even if this work does not seem as visible as other challenges.

What I haven't seen is an evaluation of the efforts at Niamey and the other expert meetings on harmonization of transcriptions, although the most used of the characters in the ARA can be seen in various publications, and all but perhaps one are in the Unicode standard.

In any event, the situations of the various African languages are diverse, with some having well established corpora while others are "less-resourced," and in the worst cases, inconsistently written.

Extended Latin and composing on digital devices


One important element in the discussions of the process of which Niamey was part was the role of modified letters - what are now called extended Latin characters - in transcribing many African languages. The ARA includes no less than 30 of them (22 modified letters and 8 basic Latin letters with diacritics5). These added characters and combinations are not all intended to be used in any one language, but represent standard options for orthographies. The incorporation of some of them into the writing of a single language makes that writing clearer, and has no drawbacks for teaching, learning, reading, or handwriting (although there are arguments against the use of diacritics). Since the establishment of Unicode for character encoding, the screen display of these characters is not a problem (so long as fonts have been created that include glyphs for them).

However, the presence of even just one or two extended Latin characters leads to problems with standard keyboards and keypads - where are you going to place an additional character, and how is the user to know how to find it? This is a set of issues that was of course recognized back in the era of typewriters. One of the spinoffs from the Niamey conference was the 1982 proposal by Michael Mann and David Dalby (who attended the meeting) for an all lower-case "international niamey keyboard," which put all the modified characters (of an expanded version of the ARA) in the spots normally occupied by upper-case letters.

While that proposal never went far (I hope to return to the subject later) - due in large part to its abandonment of capital letters - it was but one extreme approach to a conundrum that is still with us: how to facilitate input of Latin characters and combinations that are not part of the limited character sets that physical keyboards and keypads are primarily designed for. It's not that there aren't ways of facilitating input - virtual keyboard layouts (keyboard drivers that can be designed and shared, as with Keyman, plus onscreen keyboards) have been with us for years, and there are other input systems (voice recognition / speech-to-text being one). The problem is the lack of standard arrangements and systems for many languages. Or perhaps, in the matter of input systems, the old saw that "the nice thing about standards is there are so many to choose from" applies.

The result, arguably, may be a drag on widespread use of extended Latin characters, and consequently on popular use on digital devices of languages whose orthographies include them. Or a choice to ASCIIfy text (using only basic Latin), as has been the case with Hausa on international radio websites. Or even confusion based on continued use of outdated 8-bit font + keyboard driver systems, as witnessed in at least one case with Bambara (see discussion and example).

What can the level of contributions to African language editions of Wikipedia tell us about the effect of extended Latin? This will be explored in the next post: Extended Latin & African language Wikipedias.

1. For example, some works on forest flora that had lists of common names in major languages of the region.
2. Arguably in the case of a language written in two or three different scripts, one could have a system in each script and an accepted way to transliterate between or among them.
3. The only other prominent use I found of the term "reference alphabet" was that of the ITU for their version of ISO 646 (basically the same as ASCII): "International Reference Alphabet." The concept of reference alphabet seems to be a useful one in contexts where many languages are spoken and writing systems for them aren't yet established.
4. This approach - adopting a standard or reference alphabet for numerous languages - was taken by various African countries, for example Cameroon and Nigeria. These efforts were without doubt influenced by the process of which Niamey and the ARA were part.
5. By comparison, the Africa Alphabet had 11 modified letters and did not use diacritics. All 11 of the characters added in the Africa Alphabet were incorporated in the ARA. It is worth noting that in the range of modified letters / special characters created over the years, some are incorporated into many orthographies, others fewer, and some are rarely used if at all.

Wednesday, July 18, 2018

Wikimania 2018: Sessions on, or of interest to, Wikimedia projects in African languages

The 14th annual Wikimedia conference - Wikimania 2018 - starts today, 18 July, in Cape Town, South Africa, and runs through 22 July. It is the second Wikimania to be held on the African continent - the first being at Alexandria, Egypt in 2008 - and the first in Africa south of the Sahara.

Here is a quick look from afar at which Wikimania 2018 sessions in the conference program might treat questions related to African language editions of Wikipedia, Wiktionary, etc. - what we have previously referred to as "Afrophone Wikis."

Preconference


According to the program, the first two days - 18-19 July - are devoted to the Preconference, consisting of "various miniconferences and meetings." Among these, I'd make special note of the 2-day Decolonising the Internet Conference - "…the first ever conference about centering marginalized knowledge online!" Run by the NGO Whose Knowledge? (logo at right) as an invitation-only event, it has a theme I'd consider relevant to increasing African language presence on the internet.

Main conference


The main Wikimania conference follows, on 20-22 July. On the morning of the first day, Friday 20 July, there is a track devoted to Africa with three sessions, all of interest (titles link to project pages, which in some instances already have further links to slide presentations):
  • Babel's Tower: South Africa's Wikipedias: An overview and discussion of Wikipedia editions in South Africa's languages (focusing on the 11 official languages), and ways to address the poor development in most of those, including "possible interventions via both educational strategies and technological options." The presentation is by Michael Graaf, who wrote his dissertation at the University of Cape Town on South Africa's Wikipedias.
  • Africa's Wikipedias: "A panel to discuss the interesting challenges and possibilities of the Wikipedia language editions of Africa. Includes review of new tech to amplify efforts of editors." Panel includes several editors of African language Wikipedias (Afrikaans, Arabic, Swazi, Tsonga, and Xhosa).
  • The quotation of oral sources in a decolonization context: Discussion of how to incorporate oral citations in a resource that generally requires citation of written (ideally published) sources. Reference to an oral citations project in Namibia. Presentation by Bobby Shabangu and Stefanie Kastner.
That same morning, there is another session of particular interest from the perspective of working on African language projects (unfortunately conflicting with the Africa track):
 In the afternoon of the same day, another Africa-specific session that might have some content relevant to languages:
  • Coolest African Projects - Be inspired: Spotlights relatively unknown projects and activities by African Wikimedia affiliates. Presentation by Emna Mizouni, Felix Nartey, and User:Thuvack.
On the second day of the main conference, Saturday 21 July, the morning session has several sessions of special interest, including three in the Languages track:
  • Wikipedia for Indigenous Communities: Compares Western and OvaHerero (Namibia) approaches to knowledge, and discusses a project approaching Wiki editing in a way more acceptable to their community. Presented by Peter Gallert.
  • How majorities can support minority languages: Although description does not indicate Africa content, it deals with how people in positions of relative power (in this case speakers of dominant languages) can help those in positions of less power (speakers of "minority" languages) with their Wikipedia projects. Presentation by Jon Harald Søby, Astrid Carlsen, Jean-Philippe Béland, and User:Barrioflores.
  • Including minority languages in Wikimedia projects, a strategic approach: Again, no specific Africa content indicated, but a possibly relevant discussion of how to include minority languages in Wikimedia projects. Presentation by Ahmed Houamel-Bachounda.
Also in the morning, sessions dealing with Africa in the Education track (thus conflicting with the above), but without indication whether African language projects will be discussed, or just major Europhone language projects like English & French:
Source: Commons.WikiMedia.org
On the morning of the last day, Sunday 22 July, four sessions in the Communication track look interesting from the point of view of African language projects (even though none of these are specifically mentioned in the session descriptions except for the last one):
  • Working towards Growing Local Language Content on Wikipedia (GLOW): Discusses a 2017 collaboration among Wikimedia Foundation, the Centre for Internet and Society (CIS), Wikimedia India chapter (WMIN), user groups and external partners on a "pilot project in India to encourage local Wikipedia communities to create locally relevant articles in Indian languages." The results will inform development of the GLOW program, which is explained. Presentation by Jack Rabah and Rupika Sharma.
  • Record every language of the world village by village, with Lingua Libre: Discusses project to facilitate "the recording process of words in any language (even minor languages or dialects), uploading them to Wikimedia Commons and reusing them on other projects such as Wiktionary, Wikipedia or Wikidata." Presentation by User:0x010C.
  • Every Language in the World: Introducing Wikitongues: Focuses "on the activities coordinated by Wikitongues, a not-for-profit organization promoting the use and preservation of every language in the world" through collection of oral histories. Presentation by Daniel Bogre Udell.
  • Diglossia and Multilingualism: A help or a Hindrance to Arabic Wikipedians?: Explores "the ways students who are native speakers of Arabic [which has a standard & many vernacular forms] in a multilingual educational system overcome the obstacle of sharing knowledge by using a common idiom while allowing millions of readers engage with the content they create. This session will also suggest solutions for communities with similar language challenges inspired by the educational model used in Arabic-speaking schools that participate in the 'Student Write Wikipedia' program." Presented by Bekriah S. Mawasi.
The above should not be interpreted as meaning that other sessions would not be of interest. This is a subjective selection based on my reading of the descriptions. On the whole it is nice to note the optimism in several cases, with regard to African language projects, and also the efforts to accommodate and integrate oral content and sources.

By coincidence, the timing of Wikimania 2018 corresponds with the 40th anniversary of the Niamey expert meeting on transcription and harmonization of African languages, so I'll draw some connections between the two seemingly very different events in the next post.

Note: The first two images above are from the webpages for the event (Wikimania 2018) or organization (Whose Knowledge?) concerned. Attribution of the third image can be found on the linked Wikimedia Commons page.

Tuesday, July 17, 2018

Expert Meeting on the Transcription & Harmonization of African languages, Niamey, 17-21 July 1978

Niger's National Assembly, where the 1978 meeting was
formally opened. (Source: Britannica.com)
Forty years ago today, the Meeting of Experts on the Transcription and Harmonization of African Languages began in Niamey, Niger. Along with the 1966 meeting in Bamako, it was one of the more significant of a series of meetings* organized in Africa with the assistance of UNESCO to deal with questions relating to standardization of the written forms of African languages.

This expert meeting was at once less ambitious than the 1966 Bamako meeting - seeking "harmonization" rather than "unification" of systems for writing - and wider in scope, including representatives from more countries around the continent: Angola; Benin; Burundi; Cameroon; Central African Republic; Guinea Bissau; Ivory Coast; Mali; Niger; Rwanda; Senegal; Tanzania; Uganda; and Upper Volta [now Burkina Faso] (some countries sent more than one person). Plus France, the United Kingdom, and Yugoslavia. (Representatives from Congo, Ghana, Nigeria, Togo, and Zaire [now DR Congo] were not able to attend.)

This diversity also meant that the number and range of languages considered in Niamey was greater than in Bamako. On the other hand, like Bamako, the Niamey meeting focused only on the Latin-based transcriptions used in educational contexts (notably literacy) by the recently independent governments in sub-Saharan Africa.

This conference was particularly notable for its connection with the African Reference Alphabet, which was intended to provide a common character for each sound encountered in main African languages (rather than each country devising its own symbols or character combinations).

African Reference Alphabet. Source: Proceedings of the Meeting, UNESCO, 1981.
This alphabet was later amended by linguists David Dalby, who participated in the Niamey meeting, and Michael Mann, to include a number of additional characters. They also suggested a lower-case-only alphabet, with a keyboard design using both registers to accommodate all the letters. (This keyboard was never adopted as such.)

This effort was significant in influencing orthographies adopted for many languages (although not all). However, it did not seem to be explicitly connected with the contemporaneously emerging digital text standards. Although many of the characters in ISO 6438, "African coded character set for bibliographic information interchange," were the same, there were differences indicating that the latter was the result of a separate process (or perhaps a "fork," in today's software development terminology).

A few years ago I had hoped it would be possible to use the occasion of the 40th anniversary of the Niamey expert meeting to organize a conference to review the status and influence of the African Reference Alphabet and its descendants - with particular attention to technical support in ICT - and issues related to non-Latin scripts used for African languages. And perhaps to broach other topics related to use of African languages in the spirit of the efforts of a half-century ago.

Perhaps such a conference will prove useful in the future, but for the moment I'll mark this 40th anniversary with a series of short posts on the 1978 Niamey expert meeting itself and/or contemporary efforts that in one way or another reflect its aspirations.

* Several other expert meetings during this period addressed more specific sets of issues.