Politics, economics and culture determine what South Asians see on their computer screens.
Of South Asia’s 1.3 billion citizens, 96 percent are currently excluded from using the computer, the Internet, and the World Wide Web. This is due to the near-total absence of software in the languages that the majority of them speak. Restated in the jargon of the computer scientist, there has been virtually no “software localisation” to any of the major languages of the Subcontinent.
The exclusion of a full one sixth of the world’s population from what enthusiasts term “The Information Age” raises questions about politics, culture and software that are important not only to South Asia, but to the rest of the world as well. Despite internal conflict, this region has maintained a vibrant multilinguistic, multicultural society in an era of world fragmentation, and it remains committed to economic growth and to freedom and social justice. It thus has a rare, perhaps unique, opportunity to affect the directions in which the Information Age will move.
Whether or not software is localised at all, and if so, whether it is adapted to the cultures to which it is localised, are issues influenced by political and cultural factors. Future social and cultural impacts of software and of other aspects of the electronic age in South Asia are in no sense technologically determined, but depend largely on what South Asians and Americans decide to do, and specifically on the capacity to work together to set standards for localisation to non-English languages that are global without being imperialistic.
Rule by the digirati
I begin with a bad dream – a dream that is part science fiction, part nightmare, but also part sociological projection.
In the not-too-distant future, the entire world will effectively be controlled by a small group of individuals identifiable by four distinct characteristics: they are all computer literate; they all have an Internet address and/or Website; they all possess a cellular telephone (probably with direct satellite links); and they all understand – and speak and write – English as their first, second, or third language.
This new ruling class – we can call them the digirati – will be concentrated in the nations of the so-called North, but its members will also be found in Bangalore, Bombay, Dhaka, Delhi, Karachi, Nairobi, Buenos Aires, Singapore, Jakarta, Kuala Lumpur or Johannesburg. They jet from continent to continent; they communicate instantaneously in English over the Internet, World Wide Web, or whatever follows. They have instantaneous access to unbelievably comprehensive networks of information; they make financial transactions in Hong Kong, Sydney, London, Lima, Singapore and Calcutta; they exchange scientific information, weather reports, business news and personal gossip at the click of a mouse.
In addition to their economic and political powers, the masters of the new “telectronic” media will be the authors, inventors, agents, actors and controllers of a cosmopolitan, globalised, consumerist, lowest-common-denominator world culture. This new culture – if it can be called a true culture – will be inspired and perhaps dominated by Disney, Sony, Murdoch, MTV (suitably adapted to conditions in Delhi or Buenos Aires), McDonald’s, CNN, Mitsubishi, Nike, Philips, Levi’s, Nestle, Microsoft, Intel and corporations yet to be conceived. Faced with the power of this new electronic culture, the traditional non-English-speaking and non-electronic cultures will stagger and perhaps be overwhelmed.
The fully 99 percent of the world’s population that is not computer literate, not fluent in English, and without Internet addresses, websites and cellular phones, will be gently ruled by this new global telectronic ruling class, the digirati. This 99 percent will include the 95 percent of the people of the world who do not speak fluent English, all the world’s illiterate and innumerate, as well as the underclasses of Northern Europe and North America and the vast majority of peasants, farmers, and workers in the so-called South.
The ‘rule’ of the new telectronic class will be gentle, persuasive, and only rarely violent or coercive; it will be leveraged by the economic and cultural forces of ‘liberalised’ economies. There will be minimal physical force used, but relentless pandering to consumer desires, a youth culture that spreads to grandparents, satellite TV in every village, World Cup football witnessed by billions, universal blue jeans, T-shirts and sports shoes, locally-adapted rock, and (at the “high culture” level) the Three Tenors at the Baths of Caracalla.
But reactions against this dominant, cosmopolitan, global electronic culture will take ugly forms. Cultural, economic and political nationalisms of a fundamentalist kind will thrive because of the neglect of local traditions, practices, values, and linguistic identities and their submergence into a single global electronic culture.
These new fundamentalisms will build on imagined, recreated, and fantasised pasts. They will hearken back to ancient empires, lost languages, and imagined (though fictitious) eras of racial, ethnic, and/or cultural power and purity. They will be xenophobic and intolerant, anti-modern, hostile to political and cultural freedoms, and antagonistic to foreigners, immigrants, neighbouring nations, and minorities within their own borders. Ethnic, cultural and political purity will be their goal; the exclusion of the ethnically, culturally or religiously impure will be their rule.
If that is a pretty dark picture of the future, it is not because this writer believes that it is an inevitable consequence of the information revolution. On the contrary, there is a chance, through actions that could be begun now, to avoid the negative cultural and political consequences of a particular kind of information age.
What does this have to do with software localisation? Localisation, after all, is that highly technical process by which computer programmes written in one language by members of one culture are translated into another language for use by members of another culture.
Currently, the major packaged software firms, almost all of which are located in the United States, prepare for localisation by setting apart the irreducible source code of major programming languages, operating systems and applications from the linguistically and culturally specific elements which need to be changed for special local markets.
This process is called the “internationalisation” of the programme code. The list of elements that need to be set apart so as to be ‘localised’ is long: not just obvious text translations, but character sets, scrolling patterns, page geometries, dictionaries, search engines, colours, numbers, box sizes, names, dates, and icons. (As one observer has noted, there is no gesture of the human hand that is not obscene in some culture.)
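The principle can be illustrated with a minimal sketch in Python. The message catalogue below is hypothetical (real systems use resource files and tools such as GNU gettext), but it shows the essential move: the programme logic never hard-codes a user-visible string or date format, so a localiser can add a new language without touching the source code.

```python
# Hypothetical message catalogue: everything language- or culture-specific
# lives here, outside the programme logic. A localiser extends this table.
MESSAGES = {
    "en": {"greeting": "Welcome", "date_order": "MDY"},
    "fr": {"greeting": "Bienvenue", "date_order": "DMY"},
    "hi": {"greeting": "स्वागत है", "date_order": "DMY"},
}

def render_greeting(lang: str) -> str:
    # Fall back to English if no localised catalogue exists --
    # precisely the situation most South Asian users face today.
    catalogue = MESSAGES.get(lang, MESSAGES["en"])
    return catalogue["greeting"]

def format_date(lang: str, day: int, month: int, year: int) -> str:
    # Even "obvious" data such as dates is culture-specific:
    # month/day/year in the US, day/month/year almost everywhere else.
    order = MESSAGES.get(lang, MESSAGES["en"])["date_order"]
    if order == "MDY":
        return f"{month}/{day}/{year}"
    return f"{day}/{month}/{year}"

print(render_greeting("hi"))           # prints the Devanagari greeting
print(format_date("en", 15, 8, 1947))  # prints 8/15/1947
print(format_date("hi", 15, 8, 1947))  # prints 15/8/1947
```

The point of internationalisation is that adding, say, a Tamil or Bengali entry to the catalogue requires no change to `render_greeting` or `format_date` at all.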
The complex technical features of software localisation are well understood and often written about by specialists. But two other aspects of localisation, both of which have significant cultural and political implications for the Subcontinent, are sometimes mentioned but seldom studied: first, whether or not localised versions of major programmes exist at all; and second, the embedded cultural content of even technically well-localised programmes.
Occurrence of localisation
Let’s start with the first aspect, whether localised versions of English-language software already exist. At present, about 80 percent of the world market in packaged software is produced by American firms, and the percentage grows each year. With few exceptions, localisation, therefore, means whether or not software written originally for an English-speaking audience by American programmers is or can be adapted to other languages and cultures (often with the help of colleagues abroad). What factors determine whether these English-language programming languages, operating systems, and applications are made available to non-English speakers – that is, are localised?
Consider some curious facts. The Windows NT platform is currently localised, we learn, not only for major European languages with large computer-user populations – e.g., French, Spanish, German, Norwegian – but “enabled” (a lesser step than localisation) for Catalan, Rhaeto-Roman, Bahasa and Icelandic. Or, in the case of the Apple Macintosh operating system, localisation is available not only for the major European languages, but also for the language of the tiny Faeroe Islands (pop 38,000) in the North Atlantic south of Iceland, for Kazakh, for Uzbek, and so on.
But with the exception of English, none of the major languages of South Asia, including Hindi (spoken by almost as many people as English or Spanish), is included in either list. The population of the Faeroe Islands has a Macintosh localisation and the inhabitants of Norway have a localised version of Windows NT, but the populations of Bangladesh, Nepal, Pakistan, Sri Lanka, West Bengal, Uttar Pradesh, Tamil Nadu, Gujarat and Maharashtra have neither. Unless they speak English fluently, the peoples of South Asia have no access to these major computer operating systems.
How do we explain these omissions? The most common explanation is economic. A software company’s decision to localise software – a costly undertaking – most obviously is a response to its perception of the potential market demand. Where a large population uses computers, and – an important qualification – where piracy rates are low enough that software producers can sell their products rather than have them stolen, companies are more likely to invest in localisation. For this reason, we have French, Spanish, German, Finnish and Swedish versions of major programmes by international software firms like IBM, Microsoft, Digital, Oracle, SAP, etc.
In India, the absence of a significant domestic market for localised software means that dynamic Indian software firms, now primarily dedicated to overseas collaborations and the sale of software services, lack any economic incentive to produce software in languages other than English. In any event, the need is limited because, it is said, India possesses the second or third largest English-speaking population in the world. After 50 years of Independence, English remains the lingua franca for communication between Indian states; and members of Indian elites, whatever their mother tongues, generally have a superb command of English. Therefore, there ostensibly is no market and no need for localisation.
This economic explanation is quite plausible. In a region where the annual income (parity purchasing power) of the average individual is less than half the cost of a well-equipped computer, where almost half the population is illiterate, where almost a third of the population lives at or below the official level of subsistence, and where the cost of an Internet connection may exceed the cost of food for a month, computers – and therefore localisation to native languages – are today beyond the means of any but a minority.
But do economic factors alone really explain the existence of localised programmes for Iceland, for the Faeroe Islands, and for the Norwegians? Why have large commitments to localisation been made by American software firms in China, where piracy rates are said to exceed 90 percent, when there are deep differences between the political philosophies of the People’s Republic and the United States, and when doing business in the PRC is generally unprofitable and often involves, it is said, very large hidden costs?
One reason has to do with the long-term planning cycle of software firms, overseas as well as in South Asia. American firms, among them Microsoft, place long-term bets on future markets, bets which may not pay off for a decade or more. Along with the capacity for quick adaptation, then, leadership in the software industry also requires the ability to look far ahead. American software companies’ investment in R&D in China is a case in point; it is a way of establishing a foothold in a potentially vast market in the distant future, even though current or near-term profit may be low.
With regard to India, even if the corporate, business, and personal demand for local-language software is limited today, it takes little imagination to foresee a day when it might be large, indeed vast. India already is said to have the largest middle class in the world. National growth rates overall may appear modest because of the moderate growth of the huge agricultural sector, but industrial growth rates in recent years, especially in the southern states of India, have been in double digits.
And it does not, as they say, take a rocket scientist to predict that if these rates of growth continue, more and more firms, businesses, and individuals – banks, warehouses, merchants, shippers, shopkeepers, libraries, post-offices, bus lines, private and eventually public schools and parents – will little by little constitute a growing, and ultimately a large market for software in Indian languages. Moreover, piracy rates in India have been dropping due to a concerted effort to bring India’s rates closer to the European/North American rates of 20-40 percent.
Thus, from the point of view of software manufacturers in India and overseas, it would seem a reasonable economic gamble to anticipate the emergence of a substantial demand – that is a profitable market – for software in the major Indian languages. Indeed, not to anticipate this day would seem economically irrational.
Problem with virtue
Why then is localisation to Indian languages not yet happening? An exclusively economic perspective does not provide answers; politics and culture also have to be taken into account. Consider the role of culture and politics in localisation to Chinese.
The technical problems of localising from English to Chinese are formidable. Chinese is an ideographic written language with tens of thousands of ideographs (only 7000 of which currently exist in Unicode, the international standard), no phonetic alphabet, and no single agreed-upon way of using the Roman (‘qwerty’) keyboard to enter ideographs. Moreover, written Chinese is linked to complex tonal spoken languages which vary dramatically (and often unintelligibly) from region to region. These problems are staggering. Yet, localised Chinese versions of many major programmes already exist.
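At the level of encoding, at least, the problem is tractable: each ideograph is a single Unicode code point, and the real difficulty lies in input methods that map qwerty keystrokes to characters. A short Python sketch (using today’s Unicode tables, which have since grown well beyond the 7000 ideographs mentioned above):

```python
# Each Chinese ideograph is one Unicode code point. Encoding is the
# easy part; the hard part -- mapping keystrokes to ideographs -- is
# handled by input methods, not shown here.
for ch in "中文":  # "Chinese language"
    print(ch, f"U+{ord(ch):04X}", len(ch.encode("utf-8")), "bytes in UTF-8")
```

Each of these characters occupies a single code point but three bytes in UTF-8, which is why character-set handling must be set apart during internationalisation rather than assumed to be one byte per character.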
Why? The reasons are partly cultural and partly political. The Chinese written language is everywhere the same, even though spoken dialects often are mutually unintelligible. Moreover, the present Chinese government is authoritarian and highly centralised. So it is possible to negotiate with a single ministry in Beijing and make, at least on paper, binding agreements about standards of localisation for all 1.3 billion citizens of China.
A centralised political authority and uniform localisation standards make it reasonable to place a long-term bet on the eventual development of a profitable Chinese market. In India, the situation is obviously more complex. India is a democratic, federal nation with an admirable tradition of multilinguistic and multicultural practices. No ministry, no individual, no party can presume to speak for, or set standards for, all of India. There is no majority language: 18 languages are officially recognised, and many more are spoken. The prevailing policy espouses liberalism and tolerance with regard to the use of local languages.
But this virtue creates a problem. As the director of internationalisation of a large American software firm put it: “We would like to do some localising in India, but we don’t know how. Which Indian languages?”
On the face of it, this can be easily taken care of: start with Hindi because Hindi has the highest use and most-nearly national status. Depending on how its boundaries are defined, Hindi is spoken by about 400 million Indians, whether as a first or a second language. It is among the five most widely spoken languages in the world.
But there seem to be difficult issues with Hindi. Whose Hindi is to be chosen as the standard language? Jawaharlal Nehru once complained that he could not read the Indian Constitution in the variant of Hindi in which it was written. Studies of the linguistic patterns of Northern India indicate wide regional variations in spoken Hindi-Hindustani, particularly between the Persianised Urdu and Sanskritised Hindi.
In addition, the introduction of localised software raises the complex issue of the politics of language in India. At one level, the multilingualism of India as a nation, the acceptance of 18 official languages, the coexistence of many linguistic groups in all major Indian cities, and the fact that many Indians speak one or more languages in addition to their mother tongues – these facts of multilingual tolerance and pluralism first strike the eye of the foreign observer. However, localisation to only one Indian language (e.g., Hindi), or indeed only to two or three, could well arouse the quiescent passions that are now kept latent by the prevailing policy of linguistic pluralism.
But from the point of view of a software firm, localising into all the official languages of India may seem an inordinately complex, expensive, and difficult task. No matter where and by whom decisions about languages for localisation are made – whether by an all-Indian body, by a multinational corporation, or ideally by a consortium of Indians and multinationals – the possibility of stirring up ardent linguistic nationalism needs to be taken into account.
Embedded culture
Writers of manuals for software internationalisation invariably pay lip service to cultural factors in software (e.g., they note that the meaning of the colour red differs from one culture to another). But they almost never examine thoroughly enough the built-in or embedded views about the nature of reality, the nature of users, and the social world contained in software and hardware. To deal with this subject adequately is beyond this writer’s ability, so the comments here will only be suggestive and programmatic.
For those who are engaged in advanced scientific work, who live in the so-called modern world, or who are actual or potential members of the digirati, the telectronic ruling class – for us, a whole set of assumptions about time, human nature, and society may have come to seem ‘natural’. These root assumptions are in essence the rational, analytic, reductive, scientific assumptions that were incorporated in Europe about 500 years ago, into what we now define as ‘modern’ views of time, matter, nature, and human nature. Today, they constitute the ideological bedrock on which science and technology – including electrical engineering and computer science – are built. But we need to emphasise the fact that they bring to software (and indeed to the hardware on which it runs) an inescapable commitment to a world view that was unknown throughout most of world history, and one that remains alien to much of the world’s population today.
Studies have tried to distinguish between the highly valued ‘individualism’ in North America and a more ‘collective’ orientation towards life, achievement, and social relations in other parts of the world. Thus, it was not surprising, on a recent trip to Argentina, to hear a rural Argentine primary school teacher complain that the well-translated, well-localised American educational software used in her school presupposed, as she put it, “solitary individuals sitting at a keyboard solving problems as rapidly as possible”. The translations into Spanish were excellent, she said, but somehow these values were “not Argentine”. Indeed, she wondered whether this US educational software, expertly localised to Spanish and taken as a model of life, might not transform young Argentine children into “little North Americans”.
It is doubtful that children can be transformed into “little North Americans” by a single educational computer programme. Nonetheless, the teacher’s perception is crucial. Software can certainly help solve problems, but equally it can convey a set of implicit and culturally-specific assumptions about the world. Could the hegemony of American-packaged software be one small aspect of a larger pattern in which ‘American’ – or more precisely, global technetronic – culture spreads across the world at the expense of local diversity?
One other set of absolutely critical but implicit assumptions about those who use software is that they are “numerate” (i.e. they will have a reasonable command of basic arithmetic, if not of advanced mathematics), that they will be literate (that they will be able to read instructions on the screen and use a keyboard in whatever language the keyboard is designed for), that they will be accurate (that they will not misspell addresses or computer commands), and finally, that they will be capable of working in a microworld where all choices are binary (yes/no, up/down, delete/retain, go back/go forward). These assumptions presuppose the presence of a certain kind of person on one end of the computer. But what if he or she cannot use numbers? What if she or he cannot read the instructions on the screen? What if he or she cannot write, or cannot type? What percent of the world’s population today satisfies all these elementary requirements of computer use, when half of the people in the world have never made a telephone call?
Software localisation, while it is importantly a matter of technology transfer and economics, is also a practice with decisive cultural and political parameters. The content of localised software is determined not only by the language chosen for localisation, but by deep, underlying, usually implicit and unacknowledged (because it is thought to be ‘natural’) assumptions inherent in the software itself. And software carries with it a view of the world, of people, of reality, of time, and of the capabilities of users, which may or may not be compatible with any given (South Asian) cultural and social context.
Whether, how, when, for whom
Some commentators argue that the electronic-communications revolution, far from improving the condition of the Southern nations and of the poor in the Northern nations, will inevitably enlarge the gaps that exist. Some claim that this is an inevitable consequence of any new technology that is accessible only to an elite.
But it can always be argued that the consequences of the new telectronics will not be determined by the technologies as such, but rather by the ways we use them, by the contexts within which we choose to deploy them, by the wisdom and values that guide our actions in using them.
To return to the issue of localisation to South Asian languages: whether, how, or when this is accomplished, and for whom, are obviously crucial factors in determining whether the information age widens the gaps that already exist in the Subcontinent – and everywhere else – between rich and poor, powerful and powerless. If English is to remain the only easily available language for computer use, and if we make the reasonable assumption that access to computers (and to computer-based electronic communications) is empowering, then 95 percent of the people who do not speak good enough English for computer use will automatically be disempowered. Existing gaps will grow.
Whether this happens is above all a matter that knowledgeable South Asians in each of their countries and localities need to determine in collaboration with international software companies. On the one hand, there is, of course, the possibility of consolidating the existing privileges of a gifted, educated, cosmopolitan, English-speaking elite. But if this happens, it is likely that fundamentalist reactions against the growing power of the globalised English-language electronic culture and an English-speaking elite will mount, and that these reactions might overturn, as they have done elsewhere, traditions of multicultural tolerance, democracy, diversity and human rights.
There is, however, another possibility – a happy dream, if you will. It is a dream of South Asian and international cooperation to make computers accessible to the vast majority of South Asia’s people who are not fluent in English. It is a dream of localisation to South Asian languages.
In 1997, there was an unprecedented meeting of representatives of the large software firms in the United States to discuss developing common standards for software internationalisation. Although these firms compete tooth and nail for American and international markets, they are nonetheless trying to develop, over time, uniform standards for internationalising new programmes. Developing common internationalisation standards will be a complex, technical and difficult job.
These standards will require each company to change existing procedures. But they will also make it far easier for localisers in, let us say, Mysore, Colombo, Calcutta, Ahmedabad, Dhaka, Kathmandu and Lahore to develop local versions of the English-language software and applications written by these companies. As the work proceeds, we could even move towards a day when all major new software programmes have, as it were, a common “plug-and-play” localisation interface. If that day arrives, the cost of localisation from English to other languages, including Indian languages, will decrease (and the likelihood of its being undertaken will increase).
As the World Wide Web grows in importance, as bandwidth increases, as traffic multiplies, as problems of encryption become more complex, as commercial uses expand, as use of the Web for telephony and digital video burgeons, the development of new worldwide standards becomes necessary. A consortium subscribed to by dozens of companies worldwide was recently created at the Massachusetts Institute of Technology. It aims to establish common standards to ensure that the worldwide digital communication networks and technologies developed by distinct firms in different nations will, in the decades ahead, be compatible with each other and indeed, compatible with all major languages. In this process, each participant has had to relinquish sovereignty, to modify existing procedures, to disclose corporate secrets, and in some cases to abandon technologies in which they had deeply invested. The important points, however, are that working groups have been established, standards hotly debated, and progress made.
The leaders of many Indian software firms also have expressed parallel uncertainty about how or whether to proceed with non-English Indian languages. This is the confusion of software producers. On the side of desired social goals, however, I also have been struck by the almost universal hope of Indians to move towards a time when village stores, shops, banks, post offices, warehouses, schools, and eventually homes can be interconnected on the Internet and the World Wide Web; when Indians and South Asians of all cultures can have access to computers in their own languages, and when the potentialities of digital technologies and multimedia for recording, storing, deepening, and accessing the riches of Indian cultures can help strengthen, rather than vitiate, the variety of this nation.
But realising such a future will require an unusual degree of cooperation and visionary leadership from South Asians in both the private and public sectors. They will need both vision and determination in order to develop (surely in collaboration with international software firms) common standards for localisation to their individual languages. Enormous creativity is already going into plans for developing standards for the languages of India. At the National Centre for Software Technology, CDAC, Konkan Railways, NASSCOM, the Bhabha Atomic Research Centre, and elsewhere, a variety of ingenious methods for entering Indian languages and scripts into computers has been developed and continues to be developed. Moreover, major international firms have announced plans to develop versions of their current operating systems in Indian languages.
The imaginativeness and diversity of all these efforts hold a promise for the future, but also a difficult challenge. The stage now seems to be set for a final act in which the key players come together to produce a grand finale – coherent and agreed-upon standards for localisation. Such standards could provide the “plug-ins” that will enable South Asian languages to dovetail with the work of the American consortium in developing common standards for localisation. Without such coordination between foreign and local developers of standards for internationalisation, the outcome is likely to be a Tower of Babel.
The alternative to Babel is that consortia of South Asians and multinationals will evolve to develop standards for localisation to the major regional languages, perhaps beginning with Hindi (in the case of India, but surely including other languages, especially those from the south). These consortia could bring major participants in the public software sector together with major firms in the Indian private software sector, as well as with representatives of foreign software firms. Their goal would be to establish mutually agreed-upon standards for such matters as keyboard entry, scripting and fonts, standardisation of languages, and the uniform translation of critical computer terms. Accomplishing this will not be simple, either technically or (in a broad sense) politically: too many creative people have devoted too many hundreds of hours to differing solutions, not all of which can prevail.
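The technical stakes of “keyboard entry, scripting and fonts” can be seen in a single word. In Unicode’s Devanagari block (U+0900 to U+097F), the word हिन्दी (“Hindi”) is stored as six code points; the virama (U+094D) between the letters na and da requests a conjunct form that the font, not the stored data, must render. A sketch in Python (the code points are standard Unicode; the standards debate is over entry methods and rendering, which this does not show):

```python
import unicodedata

# "Hindi" written in Devanagari: six code points, including combining
# vowel signs and a virama that asks the renderer to form a conjunct.
word = "\u0939\u093F\u0928\u094D\u0926\u0940"  # हिन्दी
for ch in word:
    print(f"U+{ord(ch):04X}", unicodedata.name(ch))
```

Agreeing on which code-point sequences represent which words, how a qwerty keyboard produces them, and how fonts display them is precisely the kind of standardisation such consortia would have to settle.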
The stakes are high. For unless South Asians come together to develop common standards for localisation, there are only two alternatives. One is that such standards will never develop, and real localisation will not be implemented. The other is that if localisation to South Asian languages is accomplished, it will be defined by default in Redmond, Washington, rather than in Delhi, Karachi, and Dhaka, and the results could too easily be inappropriate for the region.