Francas, Linguas Francas

Before we get to the infamous metric system, let us dive deeper into the basics: mutual understanding. We have been dismantling what makes the “Western dominion” narrative so appealing; therefore, we need to look more closely at the foundational building blocks of our new age —where “humanity” can ask itself deeper questions. We have seen how commerce pushes connectivity and enables some basic communication, and how global standards emerge, with the examples of timekeeping and mathematical notation. These, however, feel hopelessly limited for any meaningful exchange, as analytic philosophers discovered in the 20th century.

As with our thought experiment on first encounters, to establish an exchange one needs a basic set of communication rules. We have seen that pointing and smiling are human universals. These small gestures, in our thought experiment, allowed the initial connectivity of diverse groups of humans and contain the basic building blocks needed to create connections. To build upon that and ask complex collective questions, we need more sophisticated communication strategies. Fortunately, as illustrated, humans are born with such a strategy —or the capacity for it: language.

Indeed, if one looks at the history of many well-established exchange networks between different peoples, these are often associated with the development of a mutually understandable, but initially basic, pidgin language. As outlined, human brains seem to be made for this relatively easy acquisition of a second, third, fourth, or fifth language. Therefore, we possess not only the drive to learn a language but also the capacity to acquire additional ones —probably linked to the early onset of exchange networks in anatomically modern humans.

Pidgins are even more interesting, as their common set of communication bits and pieces is put together on the fly by a diverse group of people who do not share a common language. A pidgin is a more or less complex set of communication strategies based on the languages already spoken by the peoples who come into contact, mixing concepts from different backgrounds. Pidgins first establish a basic shared vocabulary around a limited set of objects and actions, as is the case with linguas francas for trade and exchange. How easily pidgins can be constructed, and how organically they are established, further indicates that our brain seems to be built for social communication and for creating shared standards with relative ease, transmitting the increasingly complex and abstract concepts that a language can capture.

Depending on the depth of contact between the peoples with different backgrounds, the basic code —initially based on a limited shared vocabulary— can evolve to borrow the grammar of one or more of the languages involved. Grammar then becomes the scaffolding on which the vocabulary is built. These are the basic ingredients of a pidgin language. Pidgin languages then become more or less complex depending on the depth of contact, interaction, and areas of life that must be discussed. The most basic form is, as described, pointing, smiling, and saying a few shared words. At the other extreme is the creation of a brand-new language to be used by the descendants of the peoples in contact. At first, nobody speaks a pidgin as their first language. However, many of the pidgins associated with strong exchange networks have grown in complexity until they adopted all the characteristics of a fully-fledged language —spoken by many as a second language and eventually as a first language. At this point, this new language is often called a “creole”.

When a simplified language of a place is used as a trading language —while borrowing many, many, many elements from other languages— this creation is often called a lingua franca. The distinctions between lingua franca, “pidgin”, and “creole” are not clear-cut, and depend on how much influence one specific language had in the creation of the exchange code. However, in all cases, a lingua franca is not an exact copy of the parent language; it often includes vocabulary borrowed from other varieties and languages and always adopts a simplified grammatical structure.

Lingua Franca itself (“language of the Franks”) was in fact a commercial language spoken mostly in the eastern Mediterranean and North Africa. It was brought to these regions by North Italian (Genoese, Venetian, Pisan…) and Catalan sailors. It was not called Franca because it was spoken by the Franks or the French, but because during the late Byzantine Empire “Franks” was a blanket term applied to all Western Europeans, owing to their prestige after Charlemagne. In fact, over time, that language —used for commerce across the ports of the Mediterranean— was influenced far more by Italian dialects, Catalan, and Occitan than by French. That early commercial language, lasting from the 10th to the 19th century, is what gave its name to the concept of linguas francas: functional languages providing basic understanding between many trading peoples with different socio-cultural backgrounds. In this text, “franca” for short is a functional term, independent of any linguistic history or language structure. The concept can also be applied to pidgins and creoles, or to whole languages like Hiri Motu —which is neither creole nor pidgin, but simply a franca of Austronesian origins from southeast Papua, used for trading voyages.

Returning to the impressive Malay seafarers and traders: the current Malay language —standardised in Indonesia as Bahasa Indonesia— originates from the Malay spoken around Malacca, the great trading centre that the Portuguese conquered in 1511 CE. Its trade variety, known as Bazaar Malay, Market Malay, or Low Malay, was a trading language used in bazaars and markets, as the names imply. It is considered a pidgin, shaped by contact among Malay, Chinese, Portuguese, and Dutch traders. Bazaar Malay underwent the general simplification typical of pidgins, to the point that the grammar became extremely simple, with no verb forms for past or future, easy vowel-based pronunciation, and a written code that reflects the spoken language.

Back in 2016, I travelled for a few months through the Malay Peninsula and the Indonesian archipelago. During these trips I could easily pick up a few hundred words which allowed me to have simple context-based conversations despite my general ineptitude with languages. I was surprised by how much understanding could be achieved without even using verbs! The language had evolved such that verbs like ‘go’ and ‘come’ became prepositions like ‘towards’ and ‘from’, allowing me to build simple sentences describing my itineraries without proper verbs. This anecdotal example illustrates how Malay evolved to enable extremely easy preliminary communication.

But Malay is not only a franca. This is exemplified by the extreme complexity and nuance in vocabulary and verbal sophistication required to address your interlocutor based on their relation to you. You need a special way of addressing someone depending on whether they are a man, woman, young, old, or of higher, equal, or lower social status. This likely reflects the language’s other origin: High Malay or Court Malay, used by cultural elites and in courts, where making explicit hierarchical relations was (and still is) crucial.

Today, the language dominates the Indonesian archipelago, the Malay Peninsula, and Brunei, and is also widely spoken in Timor-Leste and Singapore. In Singapore, however, the official and de facto trading language is English —but we’ll talk about that trading hub and English later. Malay is spoken as a first language by millions of people, but it is far more common as a second language, with almost 300 million speakers. Most of these speakers also know at least one other local language, like Javanese —spoken by nearly 100 million people— or Bazaar Makassar, another franca used by the Bugis, who for centuries landed on the shores of Western Australia.

Interestingly enough, modern Malay descends from languages spoken in ancient times in Borneo. A closely related Bornean language also gave rise to Malagasy, spoken by most of the population of distant Madagascar. The language arrived there via Austronesian seafarers and later Malay traders. Afterwards, Bantu peoples from southeast Africa arrived and mixed with the Austronesians, giving rise to modern Malagasy —the only native language on the island (though it has three main dialect families). Apparently, nobody had the need to create new languages on this 1,500 km-long island.

The Bantu peoples themselves are also the carriers of one of the World’s largest francas: Swahili. It is spoken by up to 150 million people. Originating as a coastal trading language, it spread to the interior of East Africa, connecting the coast to the Great Lakes, and became a franca across the region and a mother tongue for many urban dwellers. It arose in present-day Tanzania during trade between the island of Zanzibar, inland Bantu groups, and Arabs (particularly from Oman). The Omani Imamate and Muscat Sultanate controlled Zanzibar and the Tanzanian coast during the 18th and 19th centuries and held considerable influence through the slave and ivory trades, among others. The name “Swahili” itself comes from the Arabic word for “coast”. Arabic has contributed about 20% of Swahili vocabulary, with words also borrowed from English, Persian, Hindustani, Portuguese, and Malay —the region’s main commerce languages. Like Malay, Swahili has the simplified grammar common in francas, making it easy to learn and pronounce. These traits have made Swahili a contender for a global communication language.

Beyond commercial francas, there are languages used exclusively as cross-border platforms for communication across generations, yet not native to any sizable population. These languages are like frozen structures, called upon whenever a group of people needs to understand one another.

Classical Latin is one early example of this function. By the 4th century, Romans were already speaking a language quite different from what Augustus spoke 300 years prior. Different parts of the empire used highly dissimilar versions of Latin, and Classical Latin served to maintain a unified system. That “old Latin” was standardised for literary production and, crucially, for imperial administration. After the division of the Roman Empire, the Western Christian Church also adopted a version of Classical Latin for internal operations: Ecclesiastical Latin. Previously, early Christians had used mainly Greek and Aramaic —as we will see.

Over time, Ecclesiastical Latin became the international language of diplomacy, scholastic exchange, and philosophy in Europe, lasting for around a millennium. It was not fully standardised until the 18th century! By then, linguists were eager to fix languages in place, since words changed meaning too quickly —a concern the analytic philosopher Russell would echo much later. Ecclesiastical Latin also flowed into the emerging sciences: the main works of Copernicus, Kepler, Galileo, and Newton were written in Latin.

But this Latin was a written language —no one really spoke it. And, unlike Swahili or Malay, it was far from easy to learn. Any Latin student knows how difficult it is to memorise its multiple, complex inflections. It became a written fossil spoken by basically no one —except a few geeks. That made the other nerd in the Peano–Russell nomenclature propose a simplified version of Latin, Latino sine flexione, as the Interlingua of his Academia. Peano, being Italian, had skin in the game. For those of us who natively speak Latin-derived languages, such as my Catalan, a simplified academic Latin would be a great advantage compared to, ahem, we know what. But more on that, and on languages created to be international standards, later.

Today, a stripped-down Latin does survive in science —particularly taxonomy, where the classification of living things (especially plants and animals) uses Latin binomials. For instance, humans are Homo sapiens, wolves are Canis lupus, and rice is Oryza sativa. Many scientific terms —especially in astronomy, physics, and cosmology— are still derived from Latin. So, through science’s dominance as the global system for classifying the world, Latin vocabulary lives on. It has become a kind of global pseudo-language, used by experts worldwide to communicate about shared topics but just as individual words, without any structural coherence.

Latin is but one example of an imperial language transformed into a franca and liturgical language. Another —and much older— example is Aramaic. Aramaic had the advantage of the simplicity of its written form. Unlike the complex cuneiform writing on clay tablets, Aramaic used a simple 22-character alphabet, which made it easier to learn and spread. This accessibility allowed it to be adopted in administration, commerce, and daily communication in a linguistically diverse region. The Achaemenid Empire adopted it as an administrative language, standardising “Imperial Aramaic” alongside Old Persian. Bureaucracy, scribal schools, and widespread official use helped it expand far beyond its original homeland —and its legacy lasted for over a millennium.

As with Latin, Aramaic became the medium of religious texts. Many Jews had adopted it during the exile that followed the fall of the First Temple and kept speaking it after their return. Scribes translated the Hebrew Bible into it, and large sections of sacred texts ended up in Aramaic. Other prophetic religions of the region, like Manichaeism, also adopted it. One of these, Mandaeism, still survives, and its followers still speak a version of Aramaic. Eastern Christians adopted Syriac —an Aramaic dialect— for theology, hymns, and lengthy religious debates. Aramaic became the franca of the ancient Near East: everyone could participate. Thanks to that, like Latin, the language outlived the empires that spread it.

So, with these examples, we can begin to draw some principles for how linguas francas are established, spread, and sustained across space and time. They tend to offer accessibility benefits, borrow heavily from multiple languages, are often secondary but can become primary languages (especially in cities), and, most importantly, serve specific purposes: commerce, administration, religion, or technical use. Perhaps we can even distinguish between written and spoken francas: spoken ones often have simplified grammar, are easier to pronounce, and accommodate mixed, macaronic forms. (Cool word, “macaronic” —look up its history!) Written francas, on the other hand, may retain complex grammar but offer easy ways to record text. Of course, ideographic systems like Mandarin or Japanese kanji present another accessibility puzzle —one we will explore later, as they are closely tied to another leg of francas: formal state education.

In a world that is becoming more technical, more bureaucratic, and more formally educated, there is now ample space for new linguas francas to be established, maintained, and —for the first time— taken to a global scale. It is in these languages that we will begin to ask ¿what does humanity want?


Mathematics, universal but not the way you think

Before moving to the quasi-universal metric system —which includes the archaic Babylonian timekeeping— let us focus on probably the first universal lingua franca: mathematics. And not mathematics as in “the language of the Universe”, but mathematics as in “the set of codes, rules, concepts, and ideas that are shared and approximately mutually understood by any human using them”.

As we will see, many linguas francas originate as an often simple code (compared to a general-use language), developed rather quickly to serve a specific function. In many cases, that function has been simple commerce and exchange, where the number of items to “exchange” is limited, and the rules of the game are simple —possibly including locally standardised accounting, such as produce and monetary units, and standardised measures like weights, surfaces, and volumes.

The linguistic and symbolic part of mathematics, therefore, is not so different from commercial linguas francas. What sets it apart is that, as of the 21st century, virtually everybody using “mathematics” as a functional system —mostly algebra and calculus— uses the same notation. In other words, it is universal.

This is not surprising: when one thinks about a universal language, one often thinks of mathematics. However, as with timekeeping, how we came to the specific and well-known set of symbols +, =, ÷, ∞, … has its own history.

Mathematics and mathematical notation, although ubiquitous in the current world, took centuries to take shape. Over generations, scientific, technical, and mathematical communities in Europe, the Middle East, and South Asia agreed to use the same kinds of symbols, numbers, and conventions to refer to the same concepts.

Interestingly, these “concepts” themselves were (and are) thought to be universal, even beyond the human realm —i.e. the number 3 is the same in all parts of the Universe. Therefore, unlike goods and commercial language, which had local characteristics, mathematical notation is expected to be written in the same way by everyone using those concepts and wanting to share them, regardless of location. The same applies to signs and symbols like +, =, ÷, ∞, which any reader would most likely recognise regardless of the language being used.

For some reason, written mathematics —often plain calculation— has always been something of a special case in many cultures. We can write numbers as they are spoken in a given language —like zero, one, two, three in English, or zero, un, dos, tres in Catalan. But often, across many writing systems, numbers came to be represented by dedicated symbols, for example: I, II, III… (Roman, no zero), 0, 1, 2, 3… (Arabic, from South Asia), 𝋠, ·, ··, ··· (Mayan, the first sign being the zero), 零, 一, 二, 三 (Chinese, 零 meaning something less than one, yet not nil).

These examples show that from early on, people decided it was better to simplify numerical notation —to the point that doing otherwise seems like suffering. Try writing down the year the Portuguese took control of Malacca in the Common Era calendar: one thousand five hundred and eleven, or one-five-one-one, if simpler. Write it. Stop reading.

How do you feel?

I bet it’s a pain, and it feels right to simply write 1511. A similar thing applies to phone numbers. If you’ve ever used certain online platforms that do not allow phone numbers to be exchanged, you cannot send them using digits. A workaround is to write them out in words —for example, “one hundred and twelve” or “eleven two” instead of 112. It’s not much more effort to spell the numbers, but it still feels like a pain knowing that a shorter, cleaner alternative exists.

People must learn two different systems to write numbers —instead of just the phonetic one— which might seem like more effort; yet, in the long run, simplification tends to dominate. This preference for simplicity is similar to what we will see in francas, linguas francas: the adoption of a shared, simplified, functional language is preferred over a fully developed one. So, the basis of our mathematical universality might have less to do with the Universe and more to do with a universal feeling: tediousness.

In the case of mathematics, despite numerals having been used symbolically for millennia, the simplification of other concepts —like “sum”— into symbolic script is a relatively recent development. This is exemplified by the fact that signs equivalent to + are not found in many older written systems, while there is a diverse set of signs equivalent to 1. Things like + and − are known as “operators” in mathematical terminology. Interestingly, many of these operation symbols —unlike some numerals that are simply dots or lines— have phonetic origins. Phonetic symbols were already present in some numerical systems, like one of the two Greek numerical systems, which used Π for five (short for pente, 5 —Π being pi, capital P in Greek) and Δ for ten (short for deka, 10 —Δ being delta, capital D). The other Greek numerical system simply assigned the order of the alphabet to the numbers, α being 1, β being 2, and so on. Many societies around the globe have developed advanced mathematical notations. However, none of them used algebraic notation like + to mean “sum”. Other mathematical systems worked with geometry to describe concepts, or used written linguistic statements.
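
As a small worked illustration of the acrophonic system just described (my own example, using the signs above plus Ι for one), values are simply added up:

\[
\Delta\Delta\Pi\mathrm{II} = 10 + 10 + 5 + 1 + 1 = 27
\]

The same quantity is XXVII in Roman numerals and simply 27 in the positional notation we use today.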

Linguistic statements were the European method too. Before symbolic expressions, European mathematicians wrote their sums out. For example, they would put on paper: “3 plus 5 equals 8”. Since that was a pain —like writing numbers in words— they simplified it to “3 p 5 e 8”. The operations had no proper symbols, just words or shortened initials understood by context. In fact, the sum symbol, +, is one of the earliest to appear in written arithmetic. Although it originated around the mid-14th century, it only came into common use by the late 15th century. While there is no universal agreement on its origin, it most likely comes from a simplified script of et, Latin for “and”, though nobody knows for certain.

Algebraic notation to define operations was strongly promoted by the Andalusi mathematician Alī al-Qalaṣādī in the 15th century, in which each operation was represented by a letter of the Arabic alphabet —for example, ﻝ for ya‘dilu (meaning “equals”). But it was actually a Welsh mathematician, Robert Recorde, who coined the modern equals sign (=) in the mid-16th century. By that time, Europeans were mapping coastlines beyond Europe and the Mediterranean, Copernicus’s Revolutionibus had just been published posthumously, and the printing press was spreading like wildfire all over Europe —and people were still tediously writing “is equal to”, or aequale est in Latin, instead of just “=”. Try to make our kids do mathematics that way and see how long they last!

To be fair, most of the notation was standardised by the 20th century in the context of mathematical fields like set theory, groups, graphs, and others that most readers would not be familiar with. In fact, the evolution of mathematical notation and the stages at which one learns it in the educational system are uncannily correlated.

By primary school, around the planet, one learns the first symbols, standardised by the 16th century: +, −, =, ×, ., √, ( ).

By mid-high school, one learns the rest of the symbols that can be easily written on a modern keyboard or a calculator with one or two keystrokes: ·, ⁄, %, <, >, ∞, ≠, xʸ, °, cos, sin, tan. These were developed by the mid-17th century.

Once one goes on to study sciences in upper high school, one comes into contact with integrals, differentials, functional analysis, and binomials: ∫, ∂, x′, f(x), ∑, and the binomial coefficient N!/(k!(N−k)!). These examples have linguistic roots too, but also “famous personalities” attached —for example, Newton’s binomial. Newton was known to have anger issues, which might explain the exclamation mark (!), though the factorial sign was actually introduced by Christian Kramp. More seriously, Newton’s arch-rival of all time, Leibniz, thought that having the right notation was the solution to all human problems —if humans could create a universal logical language, then everyone would be able to understand each other. In the case of mathematics, Leibniz actively corresponded with his peers at the time to convince them that notation should be minimal. That, in fact, has informed most of our modern mathematical symbolism. Going back to our tedious exercise, this insistence on minimalism might have cognitive reasons: human working memory is limited to about three to five items, and that storage lasts only a few seconds, so it makes sense to develop notation that lets computation and arithmetic fit within that memory space. These symbols were in common use by the early 19th century, though some, like Leibniz’s ∫, were developed earlier or around the same time as the signs · and ⁄ —these two being simplifications of the product and division signs. Many of these symbols cannot be easily typed on a keyboard and need special code to type or display.
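
To see the compression this notation buys, here is a small worked example of that binomial coefficient (my own illustration, not tied to any historical text): the number of ways to choose 2 items out of 5 collapses into a single line,

\[
\binom{5}{2} = \frac{5!}{2!\,(5-2)!} = \frac{120}{2 \cdot 6} = 10,
\]

a statement that in pre-symbolic prose would have taken a full paragraph of “multiplied by” and “divided by”.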

By the end of a technical degree like engineering or physics, one gets to know most of the mathematical notation developed by the mid-20th century, with scary things like tensors written using something called Einstein notation: Γⁱʲₖ. Einstein was known to get bored easily, which might explain why he preferred such compressed notation —to the degree that dyslexic minds like mine mix up these little indices.
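
For readers who have never met it, here is a minimal sketch of what the convention does (generic placeholder symbols, not any particular physical equation): an index that appears twice is silently summed over, so

\[
y^{i} = \Gamma^{i}{}_{jk}\, a^{j} b^{k}
\qquad\text{abbreviates}\qquad
y^{i} = \sum_{j}\sum_{k} \Gamma^{i}{}_{jk}\, a^{j} b^{k},
\]

with the summation signs dropped entirely: less boredom, at the price of those easily mixed-up indices.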

Beyond these, one enters advanced or specialised studies to learn the fancy ones: →, ∪, ∃, ∀, ⊢, ∵, ∴. Many of these are just substitutions of words that are mathematically “conceptualised”, like the numbers. For example, the Braille-looking ∵ and ∴ are just symbolic representations of the verbal statements “because” and “therefore”, respectively. Many of these symbols were developed during the late 19th to late 20th century. The most avid use of signs is in the field of mathematical logic, where Peano–Russell notation informs some of its rules —Russell was a known geek, self-declared to know nothing about aesthetics, which might explain his dislike of words, which have a tendency to change meaning. Funny that he did not write much about music, a mostly aesthetic affair that also has a standardised, quasi-universal notation, as we will see.
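
To see how literally these signs substitute for words, here is a simple textbook-style example (my own, not taken from Peano or Russell): the sentence “for every number x there exists a number y greater than x” collapses into

\[
\forall x\ \exists y\ (y > x),
\]

where ∀ reads “for all” and ∃ reads “there exists”, exactly the kind of word-for-symbol swap described above.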

In short, in standard regulated education, one progresses through about 100 years of mathematical notation history every two or three years of modern study —although that is a non-linear accumulation. As one enters logic and set theory, the number of symbols needed runs into the low hundreds.

Symbols by approximate “popular” introduction date

Nevertheless, the point at hand with mathematical operational notation is that it took hundreds of years to adopt the standardised form that is now widely used in all the teaching systems around the world. That evolution and standardisation did not happen in isolation, but were interwoven with other branches of knowledge, mainly technical ones. These technical fields needed the rapid adoption of simplified standards that could be learned efficiently by a specialised community of experts. This process can be understood, in part, in a similar way to how linguas francas are constructed —from a simplification of an already existing language— to be the means of exchange and understanding among a subset of people from many different cultural backgrounds who share similar conceptual and material items.

This notation is nothing new by itself. It is just a reflection of human needs —mutual semantic understanding around a limited subset of concepts— and of practical solutions that might carry some cognitive biases. What is new is the fact that this notation reached a planetary scale. As we have seen with the spread of communication, that process is just a matter of scale, not of quality. But, in my view, that global scale is what gives it all its significance and sets up the question of this book. Mathematical notation, and its quasi-universal use, is one paradigmatic example of how we got there. How we get there —or do not, and a standard stays regional— is significant.

Mathematical notation has been the first of such lingua francas to become a standardised language used across the whole planet. It is, however, limited to those who need to do arithmetic —which, in our case, is anyone who has entered a regulated educational system. As we will see, regulated educational systems have reached over 80% of the human population, and now take in virtually every new human being born.

Now we have an example of how a truly global language —albeit a limited and specialised one that rides on the back of the universality of what it studies— is created, adopted, and made universal. In particular, this one has been made universal without any clear agreement or premeditated guidance, but rather by the sheer pressure of technical needs and the dominance of Western knowledge systems. Same as with timekeeping. Timekeeping, by the way, will come back, and we will see that its universality is actually sustained by technical needs, mostly as a matter of sailing ships, driving trains, and flying planes around the world while knowing where you are in space as well as in time.

So, the World is not finished with mathematics as universal communication; other technical and symbolic languages are coming. The Metric System is coming, and this time, with bureaus.
