Unstructured draft outlines and chapters of a book trying to unveil what that question means, why we can ask it, who can act upon it, and possible answers
To keep postponing the Metric, let us explore another metric — the music metric. Since my music ignorance is as immense as music is —and I do deeply apologise to all the music lovers out there— I take metric to mean notation.
As we all know, sound is difficult to record compared to visual signals. Before the invention of gramophones and ceramic disks, the only way to play music created previously was either to have an unbroken sequence of people remembering how to play it, or to encode it in a system that could be reproduced later — either by other humans or by moving machines.
As always, the Greeks and Babylonians (Sumerians, to be exact) had already dealt with the encoding of music, but music is an immensely complicated language, and these scripts could only record the strings, the intervals, and the tuning of the music.
The bit that is often told about the Greeks is the mythical Pythagoras. Beyond his triangles — which were mostly well-established Egyptian knowledge — the Pythagoreans linked music and ‘harmony’ with mathematics, which they copied from the Babylonians this time. (Interestingly, the Chinese Shí-èr-lǜ scale follows the same logic.) Beyond copying and being attributed things he did not create, Pythagoras did turn this musical and mathematical ‘divinity’ into a religious sect by the 6th century BCE — if you want to perpetuate nerdy, complex, boring knowledge, you’d better make a religion out of it. On instruments, pitches whose frequencies (or string lengths) stood in simple ratios such as 1/2 or 3/2 ‘sounded’ good together. With strings, this is related to length, but also to tension or tuning. Pythagoras and his followers could not resist giving this a ‘mystical’ meaning, trying to fit existence into his ‘divine’ vision — but reality, as it turned out, had other ideas.
What Pythagoras actually discovered — before wrapping it in mysticism — was that the harmonics of strings resonate with each other, a simple fact of physics that our ears have evolved to enjoy. From what we understand now, it is mostly a coupling of harmonics in strings and “overtones” in general. In other words, when an instrument produces a ‘pitch’, it gives off not just one pure wave but many smaller ones that resonate with it — and these combine to sound harmonious because they complement each other. Then, if you add another tone that also resonates with the primary frequency, or some of the overtones, that is also harmonious.
Our auditory system also resonates with nearby frequencies, which is why we find it oddly unsettling when two notes are almost — but not quite — the same. And there is a cultural layer on top. If you hear something that is slightly dissonant often enough, it will “sound good enough” not to care. It’s only when we hear music from another culture — one that divides its scales differently, using another temperament — that we start to notice those dissonances. Once your ear adjusts to the new metric, returning to the old one feels oddly off for a while.
And again, we go to Western Europe to observe the origins of most modern music notation. In this case, it started with the singing of prayers. As you can notice, voice is a continuum; there is no predefined ‘do’ or ‘re’. So voice, and music, had to be discretised into common units that everybody could ‘aspire’ to. This standardisation, inspired by the mathematical spirit of universality, was thought to be divine — which made following Pythagoras rather straightforward. Until it wasn’t (but more on that later).
Before the well-known pitch system, what ecclesiastical choruses did by the 10th century was to annotate when the melody went ‘up’ or ‘down’ on top of the lyrics. Then a reference line was added to show the relative high and low of the melody. From there, by the 11th century, more lines were added above and below to structure the pitches in a standardised way. In parallel, it was necessary to know what the actual ‘pitch’ was, and that’s where the same guy who started adding the extra lines developed solmisation, with the familiar Do–Re–Mi–Fa–Sol–La–Ti (or Si) sequence appearing (though Ut instead of Do). This provided seven basic notes, which mirrored pre-existing scales like the Byzantine Pa–Vu–Ga–Di–Ke–Zo–Ni and the Indian svaras Sa–Re–Ga–Ma–Pa–Dha–Ni. This naming, however, was never fully standardised: the British preferred C–D–E–F–G–A–B (red in the map), as in guitar chords (E–A–D–G–B), while Germans and neighbouring regions (green, yellow, and sky blue) used C–D–E–F–G–A–H.
Doubling the frequency from any note (the octave) spans the conventional twelve notes to be placed on top of the five-line staff developed by the 13th century. If you place a symbol on each of the five lines, plus the four spaces in between, and two more just above and below, you have eleven spots; adding one more small line (a ledger line) at the bottom when needed is easy and gives the Babylonian twelve. At this point, we have something that looks a lot like the modern musical staff, or pentagram, and symbols that look a bit like the familiar ‘white’, ‘black’, and ‘round’ musical notes. These symbols, descendants of the ‘neumes’, took longer to standardise; there was still variety until around 1700, but by then most European notation had settled into the familiar form — even recognisable to a musically challenged person like me. Five thin lines, a ‘clef’ symbol at the beginning, and the pleasant ant-like procession of pitch along them. But like mathematics, musical notation is immense!
For the temperament, or tuning itself, the Pythagorean (and Babylonian and Chinese) fixing of the twelve notes to perfect ratios of 1/2, 3/2, 2/3 and their powers (C 1⁄1, D 9⁄8, E 81⁄64, F 4⁄3, G 3⁄2, A 27⁄16, B 243⁄128, C 2⁄1) failed because, contrary to belief, some of these pitches do not sound harmonious when played together — particularly something called the Wolf interval. It also made it difficult to shift the scale up or down beyond the twelve notes, since the spacing between them was uneven. To solve this, particularly for keyboard instruments with many strings, like the piano, equal temperament became the standard by the 18th century, in which the distance between notes is, as the name suggests, equal. Developed independently in Europe and China in the 16th century, in mathematical terms each interval is 1/12 of the octave (the span from a pitch to its double), and the ratio between consecutive notes is $\sqrt[12]{2}$ — all equal on a logarithmic scale.
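To make the arithmetic concrete, here is a minimal worked example, assuming the common modern reference pitch of A4 = 440 Hz (that particular reference is a much later convention, used here only for illustration). In equal temperament, the note $n$ semitones above a reference frequency $f_0$ is

$$f_n = f_0 \cdot 2^{n/12}$$

so the fifth, seven semitones up, sits at $440 \cdot 2^{7/12} \approx 659.3$ Hz, a ratio of about 1.4983 rather than the Pythagorean 3⁄2 = 1.5. Equal temperament spreads that tiny discrepancy evenly across all twelve notes, which is precisely what lets a piece be shifted up or down without ever hitting a Wolf interval.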
There is much more to it than that, and music notation and temperament have been evolving ever since, not unlike mathematics, with hundreds of signs emerging. Moreover, other familiar notations like the C–D–E–F–G–A–B guitar chords are ever-present. And music notation beyond the Western one is rich and diverse. But acknowledging my immense ignorance of music — and the fact that the basics have been quite established and spread around the world — I will limit myself to this standard that, like mathematical notation beyond numerals, has taken over the world.
From then — plus a few more inventions in tuning, non–string-pipe instruments, and electronic music among others — we have the fascinating fact that music written in one period can be played by future or distant musicians who have never heard it, as long as they possess the relevant knowledge and skill.
That transfer of sound into visual form (and back) is a truly fascinating invention — a collective effort across generations showing how sensory experience can be encoded, standardised, shared, and reinterpreted again and again. And, for better or worse, this very standardisation enables immense creativity while also stifling non-standard forms — a kind of ‘Western’ myopia, or more aptly, tone-deafness.
Thus, to recount: by the year 1800, and to this day, we have the following global or quasi-global standards — timekeeping, the calendar, mathematical notation, the Copernican principle, music notation, and temperament. These are not that many, but we will see that the other topics introduced — commerce and francas — will play a dominant role in the following two centuries.
Plus the infamous metric system! Nobody expects the metric system.
Before entering into the infamous metric system, let us dive deeper into the basics: mutual understanding. We have been dismantling what makes the “Western dominion” narrative so appealing; therefore, we need to look deeper at the foundational building blocks of our new age —where “humanity” can ask itself deeper questions. We have seen how commerce pushes connectivity and enables some basic communication, and how global standards emerge with the examples of timekeeping and mathematical notation. These, however, feel hopelessly limited for any meaningful exchange, as analytic philosophers discovered in the 20th century.
As with our thought experiment on first encounters, to establish an exchange one needs a basic set of communication rules. We have seen that pointing and smiling are human universals. These small gestures, in our thought experiment, allowed the initial connectivity of diverse groups of humans and contain the basic building blocks needed to create connections. To build upon that and ask complex collective questions, we need more sophisticated communication strategies. Fortunately, as illustrated, humans are born with such a strategy —or the capacity for it: language.
Indeed, if one looks at the history of many well-established exchange networks between different peoples, these are often associated with the development of a mutually understandable, but initially basic, pidgin language. As outlined, human brains seem to be made for this relatively easy acquisition of a second, third, fourth, or fifth language. Therefore, we possess not only the drive to learn a language but also the capacity to acquire additional ones —probably linked to the early onset of exchange networks in anatomically modern humans.
In the case of pidgins, these are even more interesting, as a common set of communication bits and pieces is put together on the fly by a diverse group of people who do not share a common language. Pidgins are a more or less complex set of communication strategies based on the languages already spoken by the peoples who come into contact, mixing concepts from different backgrounds. They first establish a basic shared vocabulary around a limited set of objects and actions, as is the case with linguas francas for trade and exchange. How easily pidgins can be constructed, and how organically they are established, further indicates that our brain seems to be built for social communication and for creating shared standards relatively easily, transmitting increasingly complex and abstract concepts that a language captures.
Depending on the depth of contact between the peoples with different backgrounds, the basic code —initially based on a limited shared vocabulary— can evolve to borrow the grammar of one or more of the languages involved. Grammar then becomes the scaffolding on which the vocabulary is built. These are the basic ingredients of a pidgin language. Pidgin languages then become more or less complex depending on the depth of contact, interaction, and areas of life that must be discussed. The most basic form is, as described, pointing, smiling, and saying a few shared words. At the other extreme is the creation of a brand-new language to be used by the descendants of the peoples in contact. At first, nobody speaks a pidgin as their first language. However, many of the pidgins associated with strong exchange networks have grown in complexity until they adopted all the characteristics of a fully-fledged language —spoken by many as a second language and eventually as a first language. At this point, this new language is often called a “creole”.
When a simplified language of a place is used as a trading language —while borrowing many, many, many elements from other languages— this creation is often called a lingua franca. The distinctions between lingua franca, “pidgin”, and “creole” are not clear-cut, and depend on how much influence one specific language had in the creation of the exchange code. However, in all cases, a lingua franca is not an exact copy of the parent language; it often includes vocabulary borrowed from other varieties and languages and always adopts a simplified grammatical structure.
In the case of Lingua Franca itself (or “language of the Franks”), it was in fact a commercial language spoken mostly in the eastern Mediterranean and North Africa. It was brought to these regions by North Italian (Genoese, Venetian, Pisan…) and Catalan sailors. It was not called Franca because it was spoken by the Franks or French, but because during the late Byzantine Empire, “Franks” was a blanket term applied to all Western Europeans due to their prestige after Charlemagne. In fact, over time, that language —used for commerce across the ports of the Mediterranean— was largely influenced by Italian dialects, Catalan, and Occitan more than by French. That early commercial language, lasting from the 10th to the 19th century, is what gave the name to the concept of linguas francas, or a functional language providing basic understanding between many trading peoples with different socio-cultural backgrounds. In this text, franca for short is a functional term, independent of any linguistic history or language structure. That concept can also be applied to pidgins and creoles, or whole languages like Hiri Motu —which is neither creole nor pidgin, but simply a franca from southeast Papua of Austronesian origins used for trading voyages.
Returning to the impressive Malay seafarers and traders, the current Malay language, or Bahasa Indonesia, originates from Old Malay, mostly spoken in Malacca —the great trading centre that the Portuguese conquered in 1511 CE. Its trading variety, known as Bazaar Malay, Market Malay, or Low Malay, was used in bazaars and markets, as the name implies. It is considered a pidgin, influenced by contact among Malay, Chinese, Portuguese, and Dutch traders. Bazaar Malay underwent the general simplification typical of pidgins, to the point that the grammar became extremely simple, with no verbal forms for past or future, easy vowel-based pronunciation, and a written code that reflects the spoken language.
Back in 2016, I travelled for a few months through the Malay Peninsula and the Indonesian archipelago. During these trips I could easily pick up a few hundred words which allowed me to have simple context-based conversations despite my general ineptitude with languages. I was surprised by how much understanding could be achieved without even using verbs! The language had evolved such that verbs like ‘go’ and ‘come’ became prepositions like ‘towards’ and ‘from’, allowing me to build simple sentences describing my itineraries without proper verbs. This anecdotal example illustrates how Malay evolved to enable extremely easy preliminary communication.
But Malay is not only a franca. This is exemplified by the extreme complexity and nuance in vocabulary and verbal sophistication required to address your interlocutor based on their relation to you. You need a special way of addressing someone depending on whether they are a man, woman, young, old, or of higher, equal, or lower social status. This likely reflects the language’s other origin: High Malay or Court Malay, used by cultural elites and in courts, where making explicit hierarchical relations was (and still is) crucial.
Today, the language dominates the Indonesian archipelago, the Malay Peninsula and Brunei, and is also widely spoken in Timor-Leste and Singapore. In Singapore, however, the official and de facto trading language is English —but we’ll talk about that trading hub and English later. Malay is spoken as a first language by millions of people, but it is far more common as a secondary language, with almost 300 million speakers. Most of these speakers also know at least one other local language, like Javanese — spoken by nearly 100 million people— or Bazaar Makassar, another franca used by the Bugis, who for centuries landed on the shores of Western Australia.
Interestingly enough, modern Malay descends from a language spoken in ancient times in east Borneo. This language also gave rise to Malagasy, spoken by most of the population of distant Madagascar. The language arrived there via Malay seafarers and later traders. Afterwards, Bantu peoples from southeast Africa arrived and mixed with the Austronesians, giving rise to modern Malagasy —the only native language on the island (though it has three main dialect families). Apparently, nobody had the need to create new languages on this 1,500 km-long island.
The Bantu peoples themselves are also the carriers of one of the World’s largest francas: Swahili. It is spoken by up to 150 million people. Originating as a coastal trading language, it spread to the interior of East Africa, connecting the coast to the Great Lakes, and became a franca across the region and a mother tongue for many urban dwellers. It arose in present-day Tanzania during trade between the island of Zanzibar, inland Bantu groups, and Arabs (particularly from Oman). The Omani Imamate and Muscat Sultanate controlled Zanzibar and the Tanzanian coast during the 18th and 19th centuries and held considerable influence through the slave and ivory trade, among others. The name “Swahili” itself comes from the Arabic word for “coast”. Arabic has contributed about 20% of Swahili vocabulary, with words also borrowed from English, Persian, Hindustani, Portuguese, and Malay —the region’s main commerce languages. Like Malay, Swahili has a simplified grammar common in francas, making it easy to learn and pronounce. These traits have made Swahili a contender for a global communication language.
Beyond commercial francas, there are languages used exclusively as cross-border platforms for intergenerational communication, but which are not native to any sizable population. These languages are like frozen structures, called upon to allow a group of people to mutually understand one another.
Classical Latin is one early example of this function. By the 4th century, Romans were already speaking a language quite different from what Augustus spoke 300 years prior. Different parts of the empire used highly dissimilar versions of Latin, and Classical Latin served to maintain a unified system. That “old Latin” was standardised for literary production and, crucially, for imperial administration. After the division of the Roman Empire, the Western Christian Church also adopted a version of Classical Latin for internal operations: Ecclesiastical Latin. Previously, early Christians used mainly Greek and Aramaic —as we will see.
Over time, Ecclesiastical Latin became the international language of diplomacy, scholastic exchange, and philosophy in Europe, lasting for around a millennium. It was not fully standardised until the 18th century! By then, linguists were eager to fix languages as words changed meaning too quickly —as the analytic philosopher Russell observed. Ecclesiastical Latin flowed into the emerging sciences. The main works of Copernicus, Kepler, Galileo, and Newton were written in Latin.
But this Latin was a written language —no one really spoke it. And, unlike Swahili or Malay, it was far from easy to learn. Any Latin student knows how difficult it is to memorise its multiple, complex inflections. It became a written fossil spoken by basically no one —except a few geeks. That led the other nerd in the Peano–Russell nomenclature to propose a simplified version of Latin, Latino sine flexione, as the Interlingua de Academia. Peano, being Italian, had skin in the game. For us native speakers of Latin-derived languages, such as my Catalan, a simplified academic Latin would be a great advantage compared to “ahem” we know what. But more on that, and on languages created to be international standards, later.
Today, a stripped-down Latin does survive in science —particularly taxonomy, where the classification of living things (especially plants and animals) uses Latin binomials. For instance, humans are Homo sapiens, wolves are Canis lupus, and rice is Oryza sativa. Many scientific terms —especially in astronomy, physics, and cosmology— are still derived from Latin. So, through science’s dominance as the global system for classifying the world, Latin vocabulary lives on. It has become a kind of global pseudo-language, used by experts worldwide to communicate about shared topics but just as individual words, without any structural coherence.
Latin is but one example of an imperial language transformed into a franca and liturgical language. Another —and much older— example is Aramaic. Aramaic had the advantage of the simplicity of its written form. Unlike the complex cuneiform writing on clay tablets, Aramaic used a simple 22-character alphabet, which made it easier to learn and spread. This accessibility allowed it to be adopted in administration, commerce, and daily communication in a linguistically diverse region. The Achaemenid Empire adopted it as an administrative language, standardising “Imperial Aramaic” alongside Old Persian. Bureaucracy, scribal schools, and widespread official use helped it expand far beyond its original homeland —and its legacy lasted for over a millennium.
As with Latin, Aramaic became the medium of religious texts. Many Jews returning from exile after the fall of the First Temple continued speaking it. Scribes translated the Hebrew Bible into it, and large sections of sacred texts ended up in Aramaic. Other Levantine prophet religions like the Manichaeans also adopted it. One of these, Mandaeism, still survives, and its followers still speak a version of Aramaic. Eastern Christians adopted Syriac —an Aramaic dialect— for theology, hymns, and lengthy religious debates. Aramaic became the franca of the ancient Near East: everyone could participate. Thanks to that, like Latin, the language outlived the empires that spread it.
So, with these examples, we can begin to draw some principles for how linguas francas are established, spread, and sustained across space and time. They tend to offer accessibility benefits, borrow heavily from multiple languages, are often secondary but can become primary languages (especially in cities), and most importantly, serve specific purposes: commerce, administration, religion, or technical use. Perhaps we can even distinguish between written and spoken francas: spoken ones often have simplified grammar, are easier to pronounce, and accommodate mixed, macaronic forms. (Cool word, “macaronic” — look up its history!) Written francas, on the other hand, may retain complex grammar but offer easy ways to record text. Of course, ideographic systems like Mandarin or Japanese kanji present another accessibility puzzle —one we will explore later, as they are closely tied to another leg of francas: formal state education.
In a world that is becoming more technical, more bureaucratic, and more formally educated, there is now ample space for new linguas francas to be established, maintained, and —for the first time— reach global scale. It is in these languages that we will begin to ask: what does humanity want?
Before moving to the quasi-universal metric system —which includes the archaic Babylonian timekeeping— let us focus on probably the first universal lingua franca: mathematics. And not mathematics as in “the language of the Universe”, but mathematics as in “the set of codes, rules, concepts, and ideas that are shared and approximately mutually understood by any human using them”.
As we will see, many linguas francas originate as an often simple code (compared to a general-use language), developed rather quickly to serve a specific function. In many cases, that function has been simple commerce and exchange, where the number of items to “exchange” is limited, and the rules of the game are simple —possibly including locally standardised accounting, such as produce and monetary units, and standardised measures like weights, surfaces, and volumes.
The linguistic and symbolic part of mathematics, therefore, is not so different from commercial linguas francas. What sets it apart is that, as of the 21st century, virtually everybody using “mathematics” as a functional system —mostly algebra and calculus— uses the same notation. In other words, it is universal.
This is not surprising: when one thinks about a universal language, one often refers to mathematics. However, like with timekeeping, how we came to the specific and well-known set of symbols +, =, ÷, ∞, … has its own history.
Mathematics and mathematical notation, although common in the current world, took centuries to take shape. Over generations, it was agreed upon by scientific, technical, and mathematical communities in Europe, the Middle East, and South Asia to use the same kinds of symbols, numbers, and conventions to refer to the same concepts.
Interestingly, these “concepts” themselves were (and are) thought to be universal, even beyond the human realm —i.e. the number 3 is the same in all parts of the Universe. Therefore, unlike goods and commercial language, which had local characteristics, mathematical notation is expected to be written in the same way by everyone using those concepts and wanting to share them, regardless of location. The same applies to signs and symbols like +, =, ÷, ∞, which any reader would most likely recognise regardless of the language being used.
For some reason, written mathematics —often just written calculation— has always been something of a special case in many cultures. We can write numbers as they are spoken in a given language —like zero, one, two, three in English, or cero, un, dos, tres in Catalan. But often, across many writing systems, numbers have been represented by dedicated symbols, for example: I, II, III… (Roman, no zero), 0, 1, 2, 3… (Arabic, from South Asia), 𝋠, ·, ··, ··· (Mayan, perhaps the 0 doesn’t display in Unicode), 零, 一, 二, 三 (Chinese, 零 meaning something less than one, yet not nil).
These examples show that from early on, people decided it was better to simplify numerical notation —to the point that doing otherwise seems like suffering. Try writing down the year the Portuguese took control of Malacca in the Common Era calendar: one thousand five hundred and eleven, or one-five-one-one, if simpler. Write it. Stop reading.
How do you feel?
I bet it’s a pain, and it feels right to simply write 1511. A similar thing applies to phone numbers. If you’ve ever used certain online platforms that do not allow phone numbers to be exchanged, you cannot send them using digits. A workaround is to write them out in words —for example, “one hundred and twelve” or “eleven two” instead of 112. It’s not much more effort to spell the numbers, but it still feels like a pain knowing that a shorter, cleaner alternative exists.
Although people must learn two different systems to write numbers —instead of just the phonetic one— which might seem like more effort, in the long run, simplification tends to dominate. This preference for simplicity is similar to what we see with linguas francas: the adoption of a shared, simplified, functional language is preferred over a fully developed one. So, the basis of our mathematical universality might have less to do with the Universe and more to do with a universal feeling: tediousness.
In the case of mathematics, despite numerals having been used symbolically for millennia, the simplification of other concepts —like “sum”— into symbolic script is a relatively recent development. This is exemplified by the fact that while there is a diverse set of signs equivalent to 1 across older written systems, signs equivalent to + are hardly found in any of them. Things like + and − are known as “operators” in mathematical terminology. Interestingly, many of these operation symbols —unlike some numerals, which are simply dots or lines— have phonetic origins. Phonetic symbols were already present in some numerical systems, like one of the two Greek numerical systems, where Π stood for five (short for pente, 5 —Π being pi, capital P in Greek), and Δ for ten (short for deka, 10 —Δ being delta, capital D). The other Greek numerical system simply assigned numbers to the order of the alphabet: α being 1, β being 2, and so on. Many societies around the globe have developed advanced mathematical notations. However, none of them used algebraic notation like + to mean “sum”. Other mathematical systems worked with geometry to describe concepts, or used written linguistic statements.
Linguistic statements were the European method too. Before symbolic expressions, European mathematicians wrote out their sums. For example, they would put on paper: “3 plus 5 equals 8”. Since that was a pain —like writing numbers in words— they simplified it to “3 p 5 e 8”. The operations had no proper symbols, just words or shortened initials understood by context. In fact, the sum symbol, +, is one of the earliest to appear in written arithmetic. Although it originated by the mid-14th century, it only became common by the 15th century. While there is no universal agreement on its origin, it most likely comes from a simplified script of et, Latin for “and”, though nobody knows for certain.
Algebraic notation to define operations was strongly promoted by the Andalusi mathematician Alī al-Qalaṣādī in the 15th century, where each operation was represented by a letter of the Arabic alphabet —for example, ﻝ for ya‘dilu (meaning “equals”). But it was actually a Welsh mathematician, Robert Recorde, who coined the modern equals sign (=) in the mid-16th century. By that time, Europeans were mapping coastlines beyond Europe and the Mediterranean, Copernicus’s Revolutionibus had just been published, and the printing press was spreading like wildfire all over Europe —and people were still tediously writing “is equal to” or aequale est in Latin instead of just “=”. Try making our kids do mathematics that way and see how long they last!
To be fair, most of the notation was standardised by the 20th century in the context of mathematical fields like set theory, groups, graphs, and others that most readers would not be familiar with. In fact, the evolution of mathematical notation and the stages at which one learns it in the educational system are uncannily correlated.
By primary school, around the planet, one learns the first symbols, standardised by the 16th century: +, −, =, ×, ., √, ( ).
By mid-high school, one learns the rest that can be easily written on a modern keyboard or a calculator with one or two keystrokes: ·, ⁄, %, <, >, ∞, ≠, x^y, °, cos, sin, tan. These were developed by the mid-17th century.
Once one goes on to study sciences in upper high school, one comes into contact with integrals, differentials, functional analysis, and binomials. These notations have linguistic roots too, but also “famous personalities” attached —for example Newton’s binomial. Newton was known to have anger issues, which might explain the exclamation mark (!), though the factorial sign was actually introduced by Christian Kramp. More seriously, Newton’s arch-rival of all time, Leibniz, thought that having the right notation was the solution to all human problems —if humans could create a universal logical language, then everyone would be able to understand each other. In the case of mathematics, Leibniz actively corresponded with his peers to convince them that notation should be minimal. That, in fact, has informed most of our modern mathematical symbolism. Going back to our tedious exercise, this decision on minimalism might have cognitive reasons: human working memory is limited to about 3 to 5 items, and that storage lasts only a few seconds, so it makes sense to develop notation that allows computation and arithmetic to fit in that memory space. These symbols were in common use by the early 19th century, though some, like Leibniz’s ∫ and d, were developed earlier or at the same time as the signs · and ⁄ —these two being simplifications of the product and the division. Many of these symbols cannot be easily typed on a keyboard and need special codes to type or display.
By the end of a technical degree like engineering or physics, one gets to know most of the mathematical notation developed by the mid-20th century, with scary things like tensors written using something called Einstein notation, where a repeated index silently implies a sum, as in $c_{ik} = a_{ij}b_{jk}$. Einstein was known to be easily bored, which might explain why he preferred the simplified notation —to the degree that dyslexic minds like mine mix up these little indices.
Beyond these, one enters into advanced or specialised studies to learn the fancy ones. Many of these are just substitutions of words that are mathematically “conceptualised”, like the numbers. For example, the Braille-looking ∵ and ∴ are just symbolic representations of the verbal statements “because” and “therefore”, respectively. Many of these symbols were developed during the late 19th to late 20th century. The most avid use of signs is in the field of mathematical logic, where Peano–Russell notation informs some of its rules —Russell was a known geek, self-declared to know nothing about aesthetics, which might explain his dislike of words, which have the tendency to change meaning. Funny how he did not write much about the mostly aesthetic art of music, which also has a standardised, quasi-universal notation, as we will see.
Symbols by approximate “popular” introduction date
Nevertheless, the point at hand with mathematical operational notation is that it took hundreds of years to adopt the standardised form that is now widely used in all the teaching systems around the world. That evolution and standardisation did not happen in isolation, but were interwoven with other branches of knowledge, mainly technical ones. These technical fields needed the rapid adoption of simplified standards that could be learned efficiently by a specialised community of experts. This process can be understood, in part, in a similar way to how linguas francas are constructed —from a simplification of an already existing language— to be the means of exchange and understanding among a subset of people from many different cultural backgrounds who share similar conceptual and material items.
This notation is nothing new by itself. It is just a reflection of human needs —mutual semantic understanding around a limited subset of concepts— and of practical solutions that may carry some cognitive biases. What is new is the fact that this notation reached a planetary scale. As we have seen with the spread of communication, that process is just a matter of scale, not of quality. But, in my view, that global scale is what carries all the significance and sets up the question of this book. Mathematical notation, and its quasi-universal use, is one paradigmatic example of how we got there. How we arrive there, or how we fail to and a standard stays regional, is significant.
Mathematical notation has been the first of such linguas francas to become a standardised language used across the whole planet. It is, however, limited to those who need to do arithmetic —which, in our case, is anyone who has entered a regulated educational system. As we will see, regulated educational systems have reached over 80% of the human population and now reach virtually every new human being born.
Now we have an example of how a truly global language —albeit a limited and specialised one that rides on the back of the universality of what it studies— is created, adopted, and made universal. In particular, this one was made universal without any clear agreement or premeditated guidance, but rather by the sheer pressure of technical needs and the dominance of Western knowledge systems. The same goes for timekeeping. Timekeeping, by the way, will come back, and we will see that its universality is actually sustained by technical needs: mostly a matter of sailing ships, driving trains and flying planes around the world while also knowing where you are in space.
So, the World has not finished with mathematics as universal communication; other technical and symbolic languages are coming. The Metric System is coming, and this time, with bureaus.
One interesting predecessor to unified measurement and standardisation is that of time. Most people might not be aware how puzzling it is that, as of now, across the whole planet we share a common timekeeping system spread throughout most societies of the world. When you look at your watch, you see it divided into 12 or 24 segments, denoting hours, and each hour is divided into 60 units, called minutes. Again, each minute is divided into 60, and we call that division a second. None of this is new to you, but what should surprise all of us is that it is not surprising for most people on the planet! And that’s just it—seconds are the basic unit of measurement of time across the entire world. Why, and how this fact came to be, is not a given. In fact, timekeeping is an extremely aberrant, arbitrary, and silly system if we compare it to the more common numerical system in divisions of 10 (we will see the metric system later on). Why aren’t there 100 seconds in a minute, 100 minutes in an hour, and 10 hours in a day? We could all stop dividing by 60! Our system seems complicated, and it’s not only our adult selves that feel so. My father, who has worked in the educational system all his life, told me that children learn the decimal system quite quickly; however, it takes them much longer to internalise timekeeping. You might have experienced this difficulty yourself as a child, or seen it in your own children if you’ve raised them. Then why does this strange and somewhat difficult system not only exist but is also the same everywhere? Didn’t other parts of the world create different timekeeping systems that made more sense? Why are these no longer around? As we will see, in large part it is because our current international timekeeping standard comes from one of the oldest measures.
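Just to put numbers on that complaint, and purely as a back-of-the-envelope comparison: the current chain gives $24 \times 60 \times 60 = 86\,400$ seconds in a day, while a fully decimal chain of 10 hours of 100 minutes of 100 seconds would give $10 \times 100 \times 100 = 100\,000$ units per day, each one lasting $86\,400 / 100\,000 = 0.864$ of our current second. The arithmetic is trivial either way; the difference lies entirely in which divisions we have all agreed to memorise.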
Timekeeping is quite common across cultures—perhaps it is a human universal. The easiest division of time is into days, as it is a cycle that dominates all our actions in life, especially our sleep cycles. The next division of time across cultures is usually the cycle of the moon. About 29.5 days have to pass for us to see the moon in the same phase and position in the sky. This moon cycle gives us a close approximation to our current month lengths of 30–31 or 28–29 days. The third rhythm that many cultures pick up on is the annual cycle of the Earth orbiting the Sun. For higher latitudes in both hemispheres, that annual revolution—together with the tilt of the Earth’s axis—gives strong variations between seasons. Days become noticeably longer and shorter along that rhythm, and the weather and natural world follow these changes, with cold during the short days, and heat during the long days. In tropical latitudes, where the variation in the length of the day is not as pronounced, the coming and going of the rainy seasons usually plays a similar role to the cold and hot cycles. In short, most humans around the planet adopted these three naturally occurring cycles as the basic units of time division. When combining the Moon and Sun cycles, this gives us the numbers 12 and 13—i.e. the number of lunar months in a solar year. But this concerns the calendar more than the clock.
We have two main methods of temporal measurement: the calendar and continuous timekeeping, which could in principle be independent of natural cycles. But the calendar is a kind of timekeeping and has given us two numbers to play with. The number 12 is prominent in many counting systems; it even has a specific name in English: a dozen. The number 13, not so much—it is even seen as a “bad luck” number in some cultures. Why is 12 popular and 13 hated, then? This difference is also due to boring arithmetic. It is easy to divide 12 by 2, 3, 4, and 6. Try doing that with 13—any luck? If you remember your prime numbers, 13 is one of them—only divisible by 1 and itself. Probably, most nerdy ancient people who had to do the tedious task of measuring time preferred the “neat” 12 instead of the unfriendly 13.
But why 60, then, for continuous timekeeping on a watch or clock? Why the 60, 60, 24 division?
We need to start talking about the Babylonians, and how they counted. How do you count using your hands? Most of us would count the number of fingers on each hand, up to 10. But one can increase the amount that can be counted by using the phalanges in each finger. If you count them on the four longer fingers of one hand, that gives you 12—once again this neat nerd number derived from solar and lunar cycles. Then, what better number to divide the daylight hours than 12? But the night needs its hours too, and in equatorial regions the time when the sun is out is roughly the same as when it is away. If daylight is 12 units and night is 12 units, that gives us the universal 24 divisions of the day: the infamous hours.
Then the 60. Going back to the Babylonian counting, if you count a dozen on, say, your left hand, and on your right hand you keep track of the number of dozens by flexing one finger each time, that gives you five dozens—or a total of 60. If we divide that hour into 60, we get the infamous minutes. The punchline, however, is that despite the importance of 12, the Babylonian sexagesimal system was based on six groups of ten, not five groups of twelve! In any case, that sexagesimal system is the basis for the 60 divisions, which the Babylonians also used to divide the circle into angles, another of the universal measures we will examine.
Why the infamous seconds exist, and are simply 60 divisions of a minute, is not such a clear story. Why wasn’t it a division of 10, or 100, or 24? The second’s own subdivision, the millisecond, goes by thousands, so 60, although used to derive the minutes from an hour, had no need to be used again. What might explain the 60 seconds is another natural unit, quite random at that: the standard resting human heartbeat. If you measure your heartbeat after a period of rest, or just after waking up, there is a high likelihood that you’ll have just over 60 beats per minute.
However the infamous seconds really came to be, the Babylonians standardised them, and thanks to their central location, forming the backbone of the Africa–Eurasia connection, seconds, minutes, and hours spread. The large-scale societies around Babylon—Egypt, the Greeks, their successors in Iran, and the polities of the Indian subcontinent—adopted the Babylonian system early on. Crucially, it spread to all the European nations, who then forced it into the administrative apparatus of their colonies, which, as we have seen, covered most of the planet. Even the Chinese adopted a version of the Babylonian sexagesimal division when the Ming dynasty commissioned Xu Guangqi, in collaboration with the Jesuits, to adapt the Gregorian calendar and timekeeping to the imperial system. Although this reform was only officially adopted during the Qing dynasty in the mid-17th century, it was partly influenced by the Jesuit Johann Adam Schall von Bell and his improved methods of predicting eclipses. Astronomy, we must remember, was deeply linked to astrology in both European and Chinese courts, and astronomers performed the functions of astrologers in advising rulers.
Calendar
Concerning the universality of the Gregorian calendar, it also seems a convoluted, silly, arbitrary system. Why are some months longer than others? Why is your birthday on a Tuesday one year and on a Friday another? There are vastly superior calendar systems out there. Though some of these alternatives tend to require the addition of a 13th month and a bizarre annual “blank day”, which doesn’t go on the calendar at all. We just chill out, have a holiday, and pretend it’s not there.
The Gregorian calendar has a complicated and protracted history. All the successors of the Roman Empire, and the Christian churches, used the Julian calendar until AD 1582. The Julian calendar, as the name suggests, comes from Julius Caesar. He borrowed it from the Egyptians and imposed it on the Roman Republic as a more stable alternative to the older Roman system. The Catholic Church adopted the Julian calendar at the First Council of Nicaea in AD 325. However, Christians had conflicting ideas on how to celebrate Easter, and it took nearly half a millennium before most Christians agreed to follow the Nicaea rules. In the Julian calendar, once every four years a day is added to February. That’s the year when February has 29 days—and some people still joke that those born on that day don’t get to have birthdays.
The problem with adding one day every four years is that, after a few centuries, it messes up the seasons—meaning that the spring and autumn equinoxes, and the summer and winter solstices, begin to drift on the calendar. By the 16th century, this was still a minor issue (only ten days had accumulated since the Council of Nicaea, some 1,250 years earlier), and it didn’t seriously affect agricultural practices. However, the motivation for change was religious: the Catholic Church was concerned that Easter might not be celebrated in accordance with the scriptures. Easter follows a lunisolar rule, which causes the date to shift every year: it must occur on the first Sunday after the first full moon of spring. This meant that, technically, Easter could end up being celebrated in winter if the calendar drifted too far—risking some sort of cosmic blunder. It wasn’t equally important for all Christians, as many Eastern Orthodox churches still follow the Julian calendar.
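For the curious, the arithmetic behind the drift is simple, using the modern value of the mean tropical year (which, of course, nobody in 1582 knew to this precision): the Julian calendar assumes a year of 365.25 days, while the seasons repeat roughly every 365.2422 days, so the calendar gains about

$$365.25 - 365.2422 \approx 0.0078 \ \text{days per year} \approx 1 \ \text{day every } 128 \ \text{years.}$$

Between the Council of Nicaea (AD 325) and the reform of 1582, that adds up to roughly ten days, which is exactly what Gregory XIII ordered to be skipped: Thursday 4 October 1582 was followed directly by Friday 15 October.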
Some sectors of the Catholic Church pushed for reform early on. In the late 15th century, the man chosen to oversee it was a German mathematician with the wonderful name Regiomontanus (Latin for “royal mountain”). Unfortunately, he died before the reform could be implemented. A century passed, during which all the Protestant wars took place. In the aftermath, a diminished Catholic Church finally agreed to set the new calendar—approved by Pope Gregory XIII, who gave the calendar its name. But the Pope could only set the liturgical calendar for the Church. The civil calendar—used by governments—had to be adopted by each administration. Moreover, the emerging Protestant denominations were deeply sceptical of anything coming from the Pope and were not eager to adopt a “papist” invention, even if it made sense. The Puritans even tried to ban Christmas for being too Catholic.
Nevertheless, Catholic powers and administrations such as the Polish–Lithuanian Commonwealth, the Kingdom of Spain (which then included Portugal and most of Italy), and France, as well as their colonies and dependencies, adopted the Gregorian calendar as their administrative standard. Parts of the Netherlands under Spanish control (now Belgium) also adopted it; the rest of the United Provinces followed over the next few decades. So did the Holy Roman Empire, including Austria, Hungary, Bohemia, and many of the German states.
Some Protestant nations, like Denmark and Sweden, also adopted the Gregorian calendar relatively early. Though Sweden did so in a somewhat chaotic way—switching partway towards the Gregorian, then back to the Julian, and finally settling on the Gregorian in 1753, a year after Britain and its colonies adopted it. To avoid referring to the Pope, the British called it “An Act for regulating the Commencement of the Year, and for correcting the Calendar now in Use”. Divided Switzerland took even longer: for years, Swiss towns just a few kilometres apart had calendars ten days out of sync—allowing people to celebrate Christmas or Carnival twice in one year!
Later on, Eastern Orthodox countries such as Greece, Serbia, and Russia adopted the Gregorian calendar for civil purposes, though many retained the Julian calendar for liturgical use—or switched to the “New Julian” calendar, making things even more confusing. This is why people in Moscow celebrate Christmas on 7 January (in the standard Gregorian calendar). Others, like Ukraine, have switched to the Gregorian calendar entirely.
The Gregorian calendar is now the de facto global calendar—though it was never formally agreed upon. Administratively, all European countries and their colonies adopted it. Even the Chinese, as we’ve seen, integrated it into imperial systems. Of the few non-colonised countries, only Nepal, Afghanistan, Iran, and Ethiopia still use different civil calendars. Others like Japan, China, Thailand, and Saudi Arabia eventually adopted the Gregorian calendar for administrative purposes—Saudi Arabia only doing so in 2016! Some of these countries still use different systems for counting years or determining the New Year, but their months, leap years, and weekdays follow the Gregorian calendar. In many places—such as the Orthodox Christian and Muslim worlds—two systems coexist: one for liturgy and another for administration. Only Iran’s Solar Hijri calendar, or Shamsi, is more astronomically accurate than the Gregorian. It sets the start of the year to within a second of the spring equinox on Iran’s standard meridian at 52.5° East.
Finally, most international institutions—the United Nations, the Olympics, global research networks—use the Gregorian calendar, reinforcing its role as the de facto global timekeeping standard. Timekeeping—both in hours–minutes–seconds and in calendar form—illustrates how ancient measurement systems, copied or adapted from long-gone administrations like Pharaonic Egypt and Babylon, are still with us and have become near-universal norms, despite never being formally imposed on the entire world.
Later we will see that something like “timekeeping” is not so unquestioned, and that this seemingly universal but rule-less agreement on time has also undergone standardisation—and even the creation of global institutions, as we will see in the case of standard time and the truly infamous leap second (imagine scare quotes). Like the second, what is doubly global, in use and in institutions, is what we will need to pay attention to when asking this text’s question.
What the eight countries that were not fully dominated by the European powers show is that Western powers were obsessed with commerce, or, in formal nomenclature, with opening markets. Not in vain is this period known as mercantilism. Even the administrations that more or less escaped direct colonial control were forced to trade with them, or to serve as territories open to the trade routes that crossed them. In all instances, sooner or later, the European powers got their way in and forced the rest of the planet to adopt progressively more European structures, laws, teaching, infrastructure, and ways of organising the territory, their internal affairs, the military, the government, and so on and so forth, culminating in the present-day organisation of virtually all nations and states on the planet. Even things as simple as having a national flag and a national anthem are something no nation can escape. Only Nepal has avoided having a square or rectangular flag. The New Zealanders were not allowed to have a Laser Kiwi flag by their own politicians.
Hand in hand with administrative structures goes administrative terminology. All the nations now speak the same “language”: that of GDP (gross domestic product), unemployment, literacy rates, life expectancy, income per capita, legal systems, courts, jails, human rights, trade treaties, sanitation, health systems, infrastructure, mapping, etc.
Many areas of life have been completely dominated by Western practices, organisations, solutions and ways of thinking. This has been accomplished both by forcing others to adopt them and by the sheer efficiency of the scientific and industrial methods that emerged out of the Copernican Revolution initiated by the age of discoveries.
Let us centre on the one thing that opened the way for all the other administrations to piggyback on this “westernisation” and get implanted all over the World: the one that started it all, commerce, or, seen another way, exchange markets: wanting what one does not have and having the means to gain it, be it forcibly, amicably, or through unequal exchange.
As the age of colonisation illustrates, there are many ways to go about commerce, but what is common to all of them is the curiosity and desire for materials that one community feels it is missing, and the capacity to communicate with the ones that have them. On top of this triad of curiosity, desire, and communication, the European nations imposed European market ways at a global scale by abusing a superior power of knowledge, technical means, political shabbiness, technology and, in some cases, sheer luck to overpower their rivals, to the point of annihilation, as happened with the Banda and many native peoples of the Americas. Commerce gone awry results in abuse and, in the worst case, in the extinction of the other party in order to keep access to their material resources. The European mentality came to be imposed at a planetary scale, not because of any superior wisdom, but because of something we share as humans and which now makes “humanity” a global concept: the capacity for basic communication. The Europeans then exploited it to new levels: to travel, trade, conquer and, ultimately, colonise. Colonisation is no more than the total control of the resources of a land and its peoples.
Let’s reinforce the idea that commerce was the driver of it all. It was not exploration; it was not even the desire to have more places on the map (although that might have played a role, as we will see later); nor was it the proselytism of many religions and ideologies. It was just the ever-increasing complexity that commerce allowed. When peoples were forcibly connected to exchange goods, that made all the other events happen.
Dismantling the Age of Exploration
Let’s first destroy the myth of exploration. What started the globalising movement was the so-called “age of discovery”, which has already been presented. However, that age of discovery really didn’t discover that much by itself. There was no sustained desire to “discover”. The places and peoples that were made known to the Europeans were just a secondary outcome of the main motivation: commerce.
That lack of exploratory desire can easily be seen in Columbus’s wish to reach India by a western route, not to see what lay beyond the known waters of the Atlantic. Similarly, the Portuguese fuelled their exploration of the route around Africa thanks to the trade in gold and slaves they encountered when they arrived at Dakar and pushed further on, to the Gulf of Guinea. These preliminary networks and exchange routes allowed them to gain the confidence, skills, and interest to attempt the passage to Asia. Without that, the Portuguese interest in going further south during the almost 100 years it took them to reach Asia would have been lost.
The fact that exploration was not high on the agenda is easily exemplified by the fact that the Portuguese never “explored” the interior of Africa even though they sailed around it for centuries. Hawaii was never “discovered” by the Spaniards, or if they saw it, they never set foot there, although it sat right in the middle of their trans-Pacific galleon voyages from Acapulco to Manila and back. Similarly, Australia’s east coast was not explored until the end of the 18th century, although it was just a bit off from the spice islands. And, as we have seen, the Bugis never went beyond the areas of Western Australia where they could harvest sea cucumber.
These examples and motivations illustrate how exploration was an afterthought of the desire to establish commercial ventures in faraway lands. If these could not be made commercially viable, i.e. the motivation was not high enough and the cost not affordable enough, then the venture was not pursued.
Dismantling Settler Colonialism
The seeking of material exchange is further highlighted by the dynamics of the control of land and of colonisation. Just as many lands were not explored, many were also not settled or colonised by the European powers. A good case in point is the Spaniards. Although they claimed possession of most of the American continent, they never really set out to control even the first islands where Columbus landed, that is, the Bahamas. Even now, many lands of the Americas bear Spanish names, like Guadalupe (Guadeloupe), a big island in the Lesser Antilles. It is not that the Spaniards lost control of it in a war; it is just that they did not even care to keep the possession or to minimally defend Guadalupe against other powers. Guadalupe simply was not of interest to them once the initial resources were exhausted, and these resources were no more than the peoples themselves. Once they were enslaved or forced to work almost to extinction, the interest in the islands was gone.
Funnily enough, the Spaniards were always looking for gold and "El Dorado" in the Americas. They never found it, yet the vast areas of present-day Brazil that the Portuguese later took held several "El Dorados", just not in the form of a civilisation that had already harvested the gold. The gold lay in the beds of rivers in which the humans living around them had no interest; it had been accumulating through millennia of natural water erosion and the sedimentation of the dense metal in the river beds. The Spaniards therefore had the tons of gold they were seeking lying right under their feet. Today, the state of Minas Gerais ("general mines" in Portuguese) and the diamond district of Diamantina ("diamond land"), in central-eastern Brazil, display the gold and diamonds that the Spanish monarchy nominally had under its initial jurisdiction. Similarly with California and Oregon, where the Spanish monarchy held nominal control for almost 300 years, followed by the Mexican Empire; yet neither mined any river bed until the US took over.
The Spaniards focused on either trading with, or exploiting, the peoples who already extracted the goods they were interested in. We saw this with the Papuan axe makers: only a few groups made the axes, while the rest traded for them. The Spaniards at first engaged in some exchange, as with Tidore, but soon enough they abused the superiority brought about by their knowledge, their technology and the devastating effects that the illnesses they carried had on most of the local population. The control of territory through the control of existing administrations can be seen on any map of the Spanish empire in the Americas. Their dominions were mostly concentrated in the areas where most of the population lived before their arrival, that is, the Andean region and Mesoamerica. They held firm control only over the biggest islands of the Caribbean: Cuba, Puerto Rico and Hispaniola. Even the western part of Hispaniola was eventually controlled by the French, and the fourth biggest island, Jamaica, fell into the hands of the British by 1650. By that point, these islands and all of the smaller ones had been almost emptied of the original Taínos, the first people Columbus's expedition met. The Spaniards had little interest in maintaining a land with no people, that is, with no easy means of extracting resources, be it by commerce or by force.
Going back to Papua, just north of Australia and also close to the Maluku spice islands: the interior of Papua and part of its coast were not thoroughly explored and mapped until 1930, and only because the invention of the aeroplane made access to these lands easier. The peoples there had, at first, little to trade or to exploit. Papuans were therefore left to their own means until the industrialised world sought new resources that had never before been exploited on an industrial scale, like coal, oil, iron, gas, tin, copper, saltpetre… In that way, many lands of the planet that lacked large administrative centres were left to their own means until the beginning of the 20th century, when they were thoroughly explored, not to fill the gaps in our knowledge, but to look for these mineral deposits.
Then, at the end of the 20th century and the beginning of the 21st, the lands that were still almost untouched were reached in order to exploit a new kind of resource: agrarian production. Nowadays, large areas of pristine forest fall every second into the hands of humans who want to expand their wealth by simply exploiting the productivity of the soil. The biggest drivers of this new phase of "exploration" of unmapped lands are palm oil, soya and cattle. These commercial products are finally putting a dent into the great tropical jungles that had been left mostly untouched by this growing network of connectivity and exploitation. The mighty Amazon jungle is a shadow of itself, with a large fraction of its surface already converted to agrarian land; the same is happening to the formerly lush islands of Borneo, Sumatra and our beloved Papua. And the Congo basin is rapidly joining the club.
The process of accessing previously disconnected lands came hand in hand with the marginalisation, or simply the destruction, of peoples whose lifestyles or lands were in conflict with the aims of the Westerners. That happened often and repeatedly in history, the clearest example being the eradication of many Native Americans in what is now the US and the confinement of the rest to small reservations on undesired lands.
But one can go further back to see this process of taking over lands to be used as a new resource. All around the planet, with the onset of agriculture in each of the different regions of the world, it has been observed that the diversity of Y chromosomes, which are carried only by men, dropped by about 90% in a short period of time. That means that of all the male lineages that existed on the planet at that time, only about one in ten remained, while the rest died out. By contrast, the diversity of mitochondrial DNA, which is carried by both men and women but inherited through mothers, remained mostly the same. This happened more or less simultaneously across Eurasia, coinciding with the onset of agriculture there, while in the Americas it happened a few thousand years later, as agriculture arrived later and in two different areas, Mesoamerica and the Andes. In Africa it happened later still, by about 5000 BCE.
Cumulative Bayesian skyline plots of Y-chromosome diversity (left) and mtDNA diversity (right) by world region. The red dashed lines mark the 10 kya and 50 kya horizons. Credit: Karmin et al. 2015, under a Creative Commons Attribution-NonCommercial 4.0 International licence.
There are different theories about why this dramatic loss of genetic diversity could have happened: the grabbing of lands, active warfare and the taking of wives from other bands as slaves, or simply the creation of a male elite that could maintain harems and out-breed the rest of the males over time. However it happened, certain lineages came to dominate the genetic landscape, and this coincides with the emergence of the big agricultural cultures and, later, the civilisations of the Neolithic. At the same time, many hunter-gatherer societies were no longer to be found in the lands controlled by the agriculturalists. That points to an ancient clash of cultures, in which those who were integrated into the agricultural and husbandry networks survived and dominated, while the others were wiped out by neighbours who had adopted a lifestyle different enough from theirs. Nowadays the world is in danger of repeating the same history at the current global scale, but we will address that, its consequences and possible actions, in another chapter.
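To make the proposed mechanisms more tangible, here is a minimal, purely illustrative simulation, my own sketch rather than anything taken from Karmin et al. 2015 or any other study, of how a skew in male reproductive success erodes paternal (Y-chromosome) lineage diversity much faster than maternal (mtDNA) lineage diversity. The population size, number of generations, elite fraction and skew factor are all invented parameters, chosen only for illustration.

```python
# Illustrative sketch: paternal (Y) vs maternal (mtDNA) lineage survival
# under skewed male reproductive success. All parameters are invented.
import random

def simulate(n_pairs=1000, generations=40, male_skew=8.0, elite_fraction=0.1, seed=42):
    random.seed(seed)
    y_lineages = list(range(n_pairs))    # one founding Y lineage per male
    mt_lineages = list(range(n_pairs))   # one founding mtDNA lineage per female

    for _ in range(generations):
        # A small "elite" of males is far more likely to father children.
        n_elite = max(1, int(n_pairs * elite_fraction))
        father_weights = [male_skew] * n_elite + [1.0] * (n_pairs - n_elite)

        fathers = random.choices(range(n_pairs), weights=father_weights, k=2 * n_pairs)
        mothers = random.choices(range(n_pairs), k=2 * n_pairs)  # mothers drawn uniformly

        # Sons inherit their father's Y lineage; daughters their mother's mtDNA lineage.
        y_lineages = [y_lineages[f] for f in fathers[:n_pairs]]
        mt_lineages = [mt_lineages[m] for m in mothers[n_pairs:]]

    return len(set(y_lineages)), len(set(mt_lineages))

if __name__ == "__main__":
    y_left, mt_left = simulate()
    print(f"Surviving Y lineages: {y_left} of 1000")
    print(f"Surviving mtDNA lineages: {mt_left} of 1000")
```

Run for a few dozen generations, a toy model of this kind typically leaves far fewer surviving paternal lineages than maternal ones, which is the qualitative pattern of the Neolithic Y-chromosome bottleneck, whatever its actual historical cause turns out to be.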
The only areas that have so far escaped large-scale exploitation are the deserts, the Arctic tundra, and the bottom and surface of the oceans. Almost nobody lives there, and the cost of extracting resources is, in most cases, still too high for their value to compensate it.
Dismantling Proselytism
The fact that ideology or religion comes by the hand of commerce should not be a big surprise: exchange is established more easily with entities that resemble the ones already known, where more channels of trust can be built. Therefore, the societies whose structures were most similar to those the European world knew (cities, states, nobility…) were easier to assimilate into the system.
In the case of proselytism, another area of contact with other peoples and of imposing a common understanding, the spread also happened mostly by the hand of commerce. A good example of this connection between commerce and the spread of religion is the Islamisation of parts of the Malay Peninsula, the Indonesian archipelago and the island of Mindanao from the 16th century. Islam did not spread to these areas at the hands of an invading army, nor was it imposed by a ruler. No, the adoption of Islam came at the hands of the Muslim traders who arrived at these shores. The spread of Islam was initially driven by the trade links with the distant lands to their west, especially the spices that we have seen. Commercial ventures were usually done or mediated by the local king, or orang kaya. Arab and Indian traders of the Muslim faith would make better deals with fellow believers, so the kings and rich people had a great incentive to convert, at least nominally, to Islam. That is how the local rulers became sultans and their realms sultanates, which exist to this day, as in Brunei. The faith would then actively or passively percolate down to their subjects and to neighbouring lands as these were integrated into the commercial network reaching the Spice Islands, their rulers becoming sultans in turn. However, Islam often went only as far as the commercial networks. In Halmahera, the big island neighbouring the small Tidore and Ternate, people still followed their ancestral faiths, based on animism and Hinduism, well into the 20th century. These inland peoples were only loosely connected to the commercial networks, so there was no strong imperative for religious conversion. Furthermore, many of the Indonesian islands to the east of the spice islands, Papua included, and much of the highlands of the bigger islands never converted to Islam, leaving Christian missionaries to proselytise them by the 20th century.
Christianity did a similar thing on its own in Northern Europe. When the fighting between the warring bands that followed the fall of the Western Roman Empire subsided, there was room to establish regular commerce with peoples outside the former Roman Empire. With the commercial networks came the Christian faith; it took centuries to take hold, but eventually much of Scandinavia converted to Christianity. However, less commercially integrated peoples, like the Sámi, who were neighbours of the Scandinavian peoples, kept their religion until the 19th century.
Similarly, the pious Spaniards in the Americas never set out to evangelise much of the area that was not under their commercial networks. Many of the forested areas of the Yucatán Peninsula and South America were never actively proselytised under the Spanish monarchy, nor were the southern lands of South America, from Tierra del Fuego to Patagonia. These areas were not integrated into dominion or commercial networks until the 20th century, by the modern American states.
Returning to the spice islands, the preponderance of commerce over proselytism is sadly illustrated again by the tragic legacy of the Banda genocide and extermination. Prior to the extermination, the VOC governor repeatedly asked his superiors back in Europe for permission to fight the locals because "God is with us, and do not draw a conclusion from the preceding failures, because there, in the Indies, something grand can be accomplished" (p. 47, original in Dutch), grand in the sense of wealth. His religious inclinations were entirely oriented towards mercantile domination (the spice monopoly), at the price of razing a uniquely distinct population.
Even non-proselytising religions like Buddhism had periods of sending missionaries and of mass conversion of their population, as when King Ashoka converted to the religion and even sent his own children to Sri Lanka, where Buddhism still traces its roots to that period.
To illustrate all the above, a depressing case of a single natural product exposing peoples who had until then been outside commercial networks to the cruelty of global markets is natural rubber, or borracha as it is called in Brazil. Until the late 19th century, the Amazon region remained largely uncolonised, but that changed dramatically with the industrial demand for rubber. Natural rubber proved essential for the production of vehicle tyres and the insulation of electric and telegraphic cables, including submarine communication wires, and it inspired Bibendum, Michelin's mascot made of a pile of white tyres. The Amazon basin was the only region where Hevea brasiliensis, the rubber tree, grew in the wild. Rubber extraction involved "tapping", cutting into the bark of scattered trees and collecting the latex in small containers, which had to be emptied and processed frequently. However, attempts to plant rubber trees in monoculture plantations failed in the Amazon, as the trees were susceptible to a deadly fungal infection when grown in large monocultures. This biological limitation made wild harvesting the only viable option in the region.
The rubber boom triggered an influx of uncontrolled commercial operators into the Amazon, eager to exploit this monopoly. The most "economical" method of production relied on systematic violence: indigenous populations were enslaved, tortured, mutilated, and forced to traverse the forest collecting latex from distant trees, with the lives of their families at stake.
Men, women, and children were confined in them for days, weeks, and often months. ... Whole families ... were imprisoned—fathers, mothers, and children, and many cases were reported of parents dying thus, either from starvation or from wounds caused by flogging, while their offspring were attached alongside them to watch in misery themselves the dying agonies of their parents. (Roger Casement, 1910)

They are inhumanly whipped until their bones are exposed and large, raw wounds cover them. They are given no medical treatment, but are left to die, eaten by worms, when they are fed to the chieftains' dogs. They are castrated and mutilated, and their ears, fingers, arms, and legs are cut off. They are tortured with fire and water, and tied up and crucified upside down. (Walter Hardenburg, 1912)
The trade generated such enormous profits that remote cities like Manaus and Belém boasted European-style opera houses and other symbols of opulence. Yet, this wealth was built on a foundation of brutality—an open secret ignored by the companies, missionaries, and urban populations. Things only began to change when Walter Hardenburg, a US engineer, and Roger Casement, an Irish diplomat and revolutionary, exposed the atrocities to the international community. These reports—fictionalised in the novel El sueño del celta—coincided with another turning point: rubber tree seeds had been successfully smuggled out of the Amazon and cultivated in British and Dutch colonies in Southeast Asia. The fungal pathogen did not survive the voyage, allowing for large-scale monocultures that drastically lowered production costs. This shift, coupled with global outrage, forced an end to the Amazon rubber monopoly and slightly improved conditions for the surviving indigenous communities.
Similarly to what they had done to end the nutmeg monopoly, the British planted rubber trees in Malaysia and enforced a policy of prosecuting the abusers of indigenous peoples. As we will also see with slavery, the humanitarian side of it was accompanied by changing commercial winds, a pattern that will appear repeatedly, and which highlights how much our current world is shaped by the networks, tentacles and shadows of commerce.
The borracha and the other examples show that commercial, dominion and ideological networks often went hand in hand, and that those who were not part of the commercial networks escaped much of the ideological and political influence. We see this in the case of the Western mentality taking over the world: a link with commercial ventures, be they peaceful or aggressive, is a common one. From these examples we can conclude that exploration, colonialism and proselytism are not substantially different from exchange networks in terms of connectivity. The real change over time is the scale of these networks, with the Western dominions pushing the pre-existing connectivity conditions to virtually global scales for the first time in human history. This is not to say that the other themes analysed here did not produce equally damaging, or even more horrendous, outcomes for the peoples affected. What is important to notice is commerce, and the maximisation of profit at the cost of the well-being of everybody else, as one of the main drivers of connectivity, and probably the basis for all the other drivers in our current Western-dominated world.
From here, we can go back to the current movement of connectivity that was driven by the European powers' globalisation. With commerce and greed as its root, we can look at the main frameworks through which this global communication is articulated. These tools, at times tainted, will be the media and mechanisms available to create the concept of humanity, to communicate it to most human beings, and to ask the question: what does humanity want?
To understand the networks that can establish this project’s question, it would be interesting to reflect on how such networks were created. And for that, one has to reflect on the dominance of the planet by the Western European powers by the end of the 19th century. This dominion, as described before, started with the parallel events of the arrival in the Americas, wiping out about 90% of their pre-contact population, and conquering Malacca after the circumnavigation of Africa. These two events were achieved by two small powers, inhabiting a medium-sized peninsula at the end of the Earth (Finisterre, the end of the land, is in Galicia, north of Portugal). The peninsula lies between the Mediterranean and the Atlantic and was shared with two more kingdoms, Navarra and Aragon, which did not participate in these events. Other Atlantic-facing small powers soon joined the party, with France, the Netherlands and England taking over the lion’s share the century after, and some other colonising efforts conducted by Denmark, Scotland, and even Poland. Once Germany and Italy were created, they eagerly jumped into the “game” as colonial administrators. But beyond the administrations, emigrant European populations, mostly from the western and central portions—but in reality from all over—had huge influences all over the world. And last but not least, Russia, which still holds its colonial land-based empire, conducted overland the same land conquests that the rest of the powers were conducting over the oceans.
To see the effect that about 10 nation-states had on the world, you can take the modern world political map and start crossing off the countries and territories that at some point fell under their control, be it nominal or real, that is, where they held actual power over much of their political and economic decisions.
In colour are the eight nominally non-colonised modern nations. Turkey is shown with dashed lines because it can be considered a "European" power. In black are all the parts of the world where an imperial European power or its ex-colonies, plus Japan and China, took over the administration of the land in the last 500 years. Antarctica is in white, under the Antarctic Treaty.
All the “New World”—i.e. America—fell under the actual or nominal control of these states or their post-independence nations controlled by European elites.
In Africa, with the exception of present-day Ethiopia (which was occupied for 4–5 years under Fascist Italy), the entire continent was nominally under the control of the Western European nations by the end of the 19th century. The African continent’s political map now bears the scars of that colonisation in the form of the terrible borders left over by the Europeans, which still today force historically antagonistic communities to share a state, while others that were historically unified are now split by an invisible line.
Oceania was swept away by the Europeans, with the Kingdom of Hawaii—one of the last remaining independent archipelagos—losing its sovereignty and most of its population to the US by the end of the 19th century.
In Asia, Portuguese, Dutch, English, French and Russian powers took over most of the land. Only six sovereign administrations were never actually controlled by the Europeans. These were the isolated Japan, large parts of mighty China, Thailand, Afghanistan, Persia (now called Iran), and the Arabian desert now controlled by the Saudis. It is debatable whether the Asian Ottoman Empire controlled parts of Europe, or the European Ottoman Empire controlled parts of Asia, but whichever it is, it had strong European influence in its administration, which can still be seen in modern Turkey. However, unlike Russia, Ottoman rulers did not intermarry with the rest of European aristocracy, in part limiting European influences in the ruling class. Other territories not controlled by the Europeans include Mongolia, which was under Qing Chinese dynastic control and then briefly independent as a puppet state under Soviet influence. Similarly, the two Koreas were under Chinese and then Japanese dominion and colonisation, and then divided in two—with the US influencing the South, and the Soviet Union and China influencing the North. The British had a mixed dominion policy. Oman (with Muscat being a Portuguese trading colony) formerly controlled great parts of the coast of present-day Tanzania due to its lucrative slave trade; that control was destroyed by the British, who then took over most of Oman’s government and internal affairs until the 1970s. Similarly, other sovereign lands—like Bhutan, Nepal, many Indian kingdoms, and Oman—at one point or another left their external political affairs and some internal ones in the hands of the British. Finally, Japan began imitating the European powers and took colonial control over Korea and large parts of China, even creating a puppet state called Manchukuo in Manchu lands in China’s northeast.
For these seven or eight places on the map that can be painted as outside direct colonial control, each suffered, to some extent, imperial influence. Saudi Arabia was mostly empty desert land with few resources until the discovery of oil, and its existence is linked to British foreign policy—to create a Saudi force as a counter-power to the Ottoman Empire at the end of the 19th century. Iran was divided into spheres of influence by the Russians and British, and its modern borders were mostly decided by them. For Afghanistan, its borders were drawn by the British and the Russians, including the strange northeastern "corridor," whose width was set by the range of the most powerful cannons of the time, so that British and Russian artillery could not shoot each other over Afghan territory. Most of Afghanistan's external political affairs were controlled by the British. Thailand suffered a similar fate; being between French and British-controlled territories, it was used as a buffer state. The British and the French drew its current borders and split the country into spheres of influence, as in Iran. China was defeated first by the British, and then by a coalition of the British, French, Russians, US, Japanese, and Germans. Though they did not take full control, the British strongly influenced China's foreign policy for decades, and China was divided into spheres of influence. Japan won several wars against Chinese administrations and took control over large parts of the land before the end of WWII. Japan itself was forced to open its borders and commerce to foreign powers when a US armada sailed into Edo (Tokyo) Bay in the mid-19th century, threatening bombardment, and later—after some mushroom-shaped explosions—was occupied by the US and the British. It was forced to adopt an army for self-defence only and to remain aligned with US interests.
Antarctica was claimed only by European nations and nations of European settlement, and the Antarctic Treaty, which theoretically reserves these lands for all humanity, was drafted and signed by a dozen states, almost all of them European powers or their offshoots. Currently, most of the scientific bases that exist there are European ones.
Out of the roughly 200 sovereign administrations now covering the land masses of the Earth, plus Antarctica, only about seven or eight experienced little direct control by European powers. This simple map illustrates the extent to which European powers exerted near-global influence over the planet 100 years ago. Even today, of these eight territories, only Japan, China, Saudi Arabia and Iran can be said to have—or have had—notable autonomy and influence beyond their borders. Turkey may also be included, depending on which continental perspective is used. Therefore, the world remains dominated by European nations and their administrative legacies. Alternative sovereign administrations with global influence emerge only from four or five distinct cultural backgrounds. These numbers highlight how five Western European nations, and one Eastern European one, took over most of the world.
To illustrate how the Europeans went about conquering the world, let's go back to the spice islands. The interaction between three European powers and three native sovereign powers provides three different examples of forms of dominion: by annihilation, by trade, or by playing European powers against each other. We can centre the native powers in the two islands of Tidore and Ternate, and the Banda archipelago. Similarly, we can focus on Portugal, Spain, and the Netherlands as the three European powers. As we have seen, the lucrative trade in the spice islands centred on cloves, mace, and nutmeg. Cloves are the dried flower buds of a tropical tree found only in Tidore and Ternate (and some other nearby islands). Nutmeg is the seed of another tree, and mace the red aril that covers it; that tree was only present in the Banda archipelago.
As described, Tidore and Ternate are two small volcanic islands neighbouring each other at a distance of less than two kilometres. To this day, both have rival sultans. At the time of the Portuguese and Spanish arrival, each controlled its respective island and the cloves trade, plus claimed rival control of most lands east of them, all the way to western Papua. Perhaps luckily for them, the Portuguese allied with the Ternate sultanate, while the Spanish soon after allied with the Tidore one. These two sultanates had been long-term neighbouring rivals, but also intermarried, not unlike the Spanish and Portuguese aristocracies. The European powers never conquered the sultanates, although they allied with them and built forts on their territories. The Ternateans were able to expel the Portuguese after a few decades. The Tidoreans used the Spanish as convenient allies against the Ternateans.
Claimed dominions of Ternate (1, upper circle) before the Dutch appeared in the area. Tidore is just slightly south of Ternate, also circled, difficult to see. The Banda archipelago is the group of small islands (lower circle). Credit: https://apaitukerajaan.blogspot.com/2018/07/sejarah-kesultanan-ternate.html
Things became more complicated after the Dutch East India Company (VOC, founded in 1602 and with authority to declare wars!) took over nearby Ambon and the Banda islands, home to mace and nutmeg.
Soon after, the Ternateans placed themselves under Dutch influence to fend off the Tidoreans allied with the Spanish, who controlled half the island and even captured the sultan. After the Spanish left the area, the sultan rebelled against the VOC but instead lost independence and came under VOC rule. Ternate became the capital of the Moluccas and the wider Indonesian possessions until the Dutch founded Batavia (now Jakarta) in 1619. Today, Ternate is the capital of the North Maluku province of Indonesia, with a population around 200,000. The sultanate continued until 1975 and has now been restored by the royal family in a ceremonial role.
Meanwhile, the Tidorean aristocracy descended into infighting, ditched the Spanish and allied with the Dutch. The VOC convinced the sultan to eradicate all clove trees in his realm to strengthen their monopoly. In compensation, the VOC gave generous donations to the sultan. With the obvious impoverishment that followed losing control of the spices, Tidorean rebels allied with the British, who soon conquered it. Later, the Dutch took back control of the territory—but not before the British took seeds from the clove trees and began planting them elsewhere, beginning the end of the monopoly. The Tidore sultanate lapsed in 1905 and became a regency, but was revived to counter Indonesian independence claims over West Papua. Today, it holds a ceremonial role in the Indonesian state.
Unlike Tidore and Ternate, the Banda Islands—a small archipelago of at most 15,000 inhabitants south of Halmahera—were run by orang kaya, or "rich people". As said, Banda was the only source of nutmeg and mace. These were sold by Arab traders to the Venetians at exorbitant prices. The Bandanese also traded cloves, bird of paradise feathers, massoi bark medicine, and salves. The Portuguese tried to build a fort on the central island but were expelled by the locals and did not return often, buying nutmeg and mace through intermediaries. Initially, the Bandanese were left to their own affairs, but they were also left unprotected by any other European power and its artillery.
By 1609, the VOC arrived. To put it mildly, the Bandanese were not exactly enthusiastic about these slightly different Europeans, who brought only wool and odd Dutch crafts in exchange for a monopoly. Like the Portuguese, the Dutch wanted to build a fort. The Bandanese responded in the best way they could—by ambushing and decapitating the VOC representatives. The VOC retaliated, levelling random villages. In the resulting peace treaty, the Bandanese finally allowed a fort.
Meanwhile, two of the islands, the westernmost ones—sadly named Ai and Run—allied with the British East India Company, who began trading with them. The VOC launched an annihilation campaign, first against Ai (Ay in the opening post map), killing all men, while women and children died fleeing or were enslaved. On Run (Rhun in the opening post map), the natives, with the help of several Englishmen, held out for over four years but ultimately lost. Again, the Dutch killed or enslaved all adult men, exiled the women and children, and chopped down every nutmeg tree to prevent English trade. Run is the famous island that was exchanged for Manhattan (New Amsterdam) in 1667. Incredibly, the British did not replant nutmeg trees elsewhere at the time. They would only do so in 1809, during the Napoleonic Wars, ending the Dutch monopoly and making the tragedy of the Bandanese even more sorrowful.
By 1821, the VOC wanted a renewed monopoly so badly that they decided to annihilate the remaining Bandanese. They assembled an invading force of thousands of Dutch and hundreds of Japanese soldiers and launched it on the islands—then home to only a few thousand people. After a failed peace treaty, the invading commander declared that “about 2,500” inhabitants died “of hunger and misery or by the sword,” and that “a good party of women and children” were taken, with not more than 300 escaping. The original natives were enslaved and forced to teach newcomers about nutmeg and mace agriculture. At the cost of genocide—and facilitated by natural plant endemism—the VOC had a monopoly for about 180 years. The British effortlessly invaded in 1796 and 1808, and this time decided to plant nutmeg trees in another former Dutch colony: modern Sri Lanka.
Sadly, Tidore, Ternate, and the Bandas illustrate the fate of many other European colonial efforts until the 20th century: bare survival by cleverly playing European powers against one another, becoming important administrative centres at the loss of complete autonomy or independence, or facing total annihilation and repopulation of blood-stained lands. Despite these different destinies, the outcome was the same: being utterly dominated by Western European administrative frameworks, as we will see in what follows.
The peoples that embarked on and supported the exploration of new trading and colonising routes soon discovered that the head start given by their technological advantage could easily be perpetuated. Thanks to that, those who invested in a better and faster understanding of the world, plus the technical innovations that this understanding and its implementation brought, gained further control over the world's connectivity. From then on, there were no major barriers to a hyperconnected world. What they could not control by exchange, they would control by overpowering, as the conquests of Malacca, the Aztecs and the Incas demonstrate. If you kept expanding your technological and resource-allocation dominance over other peoples, your system would be the one to dominate, and that is exactly what the Western nations did over a period of a few centuries.
New trading routes led to an excess of wealth that could be poured into more navigational sophistication, which in turn would make the trading networks more reliable and affordable, freeing more resources for further improvement. Part of these resources went to the birth of modern science, changing forever the way our understanding of the world was established.
To make a boat sail safely from port to port you would rely less and less on divinity and more on your instruments, navigational skills, the capacity to understand the sky, star positions, read the winds, proper sails, masts, ropes to withstand storms, carrying lemons to stop scurvy, social structures to govern a ship and stop mutinies, etc. Those powers that would put scientific knowledge to good use would have in their hands better control of the high seas and the peoples cruising them. Likewise, those who understood better the fabrication techniques could build better vessels, and equip them with better weapons. On the other hand, the faraway encounters would contribute to the scientific understanding of the world, like sea currents, Volta do mar, steady trade winds, or even catamaran technology from the Pacific and front crawl swimming techniques from North, South America, or South Africa.
In fact, Columbus's error regarding the radius of the Earth (of which he remained convinced until he died) was due to the preliminary stages of scientific knowledge attempting to describe the world we lived in. In that case he was mistaken, but the community of geographers soon recognised the error and corrected it (or lent more credibility to other estimates circulating at the time). This iterative process helped them better understand the world that was opening up before them, as they tried to chart the new routes as fast as they were being explored.
From these explorations and shocks to the perceived worldview, it is not difficult to imagine that the notion of an entire landmass the size of the Americas suddenly appearing on maps (over about 20 years) might have led to the acceptance of rethinking the entire Universe. If the Earth contained a whole part of itself that was unknown to the Old Scriptures, how much more knowledge might be out there—waiting to be found, explored, and understood—not through the lens of the Scriptures, but through the lens of something new? These cartographic shifts might easily have been the seeds for scientific enquiry—the seed of the Scientific Revolution.
In fact, it is interesting to reflect on that word, "revolution." What does it stand for? It comes from the root "to revolve", which means to spin around. Why—if an entire continent had been missed, and Jerusalem is not the centre of the Earth—could it not be that the Earth is not the centre of the Universe either? That kind of thought might have helped Copernicus push the heliocentric idea: that the model best suited to describe the Universe is not a geocentric one, with the Earth at the centre, but one with the Sun at the centre of the known cosmos. Copernicus was not the first to propose that idea; the Pythagoreans had already supposed the Earth might move, and Aristarchus of Samos proposed a heliocentric model in the 3rd century BCE. Seleucus of Seleucia said something along the same lines in the 2nd century BCE. About 600 years later, in the 5th century, Martianus Capella, from Roman Carthage, proposed that Mercury and Venus spun around the Sun. At about the same time, Aryabhata in Patna, India, proposed that the Earth spins and that the planets orbit the Sun. In the 12th century, Ibn Rushd (Averroes) and Nur ad-Din al-Bitruji, both from al-Andalus, were also critical of the geocentric model and proposed alternatives. Their views spread into European intellectual spheres. However, none of these theories gained much traction at the time they were proposed. One can say that the mindset of the people of those generations was not particularly open to such a shift in worldview, nor was it needed for any practical purpose.
Being open to other worldviews becomes more likely if a sweeping 30% of extra landmass is literally put on the map: the same world that the Scriptures plus Classical Philosophy were so certain they understood. Even though the Catholic Church did not pay much attention to the fact that the world was different from what had been said, minds would surely become more open, even if obtuse. Moreover, those same conceptualisations ended up making navigation more precise. And the required navigational observations and technical means (star and planet positions, astrolabes, compasses, telescopes, clocks…) helped to question the worldview in a more rigorous way—with the newly discovered facts holding more face value than old beliefs. In short, cosmological views came to serve a practical purpose.
Therefore, the landscape was set. After Europeans became aware of a New Continent, Copernicus was able to push his idea (initially as a short leaflet in 1514), and later publish, after his death, his De revolutionibus orbium coelestium. His heliocentric model was not the one we know today. Copernicus's model was not that innovative, nor significantly simpler than the Ptolemaic one, because he still needed epicycles (small circumferences around the circular orbit of the planets) to accurately describe the motion of the planets around the Sun. It would be Kepler—about 70 years later—who, after throwing out his own Mysterium theory of planetary movement because Tycho Brahe's observations did not match, solved the motion of objects in the Solar System with simple elliptical orbits and delivered us pretty much the view that we now have.
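As a rough aside of my own, not drawn from Copernicus's or Kepler's texts, the gain in simplicity can be written in one line. Where the older models stacked circles upon circles for each planet, Kepler's first law describes each orbit with a single ellipse,

\[ r(\theta) = \frac{a\,(1 - e^{2})}{1 + e\,\cos\theta}, \]

with just one semi-major axis \(a\) and one eccentricity \(e\) per planet, while his third law, \(T^{2} \propto a^{3}\), ties every planet's orbital period \(T\) to its distance from the Sun within a single rule.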
The difference between Copernicus's posthumous publication and all the previous scholars who had proposed that the Earth was not static was that, this time, the public was much more accepting of the thought of the Earth revolving. A thought that would prove revolutionary!
Revolution, at the time, had the meaning that Copernicus used in his title: simply the spinning around of the celestial bodies—how they revolved around the Sun. Revolved, revoluted, revolution. It was a physical description, like that of the revolutions or cycles of an engine, or, as one famous revolutions podcaster puts it, "coming full circle", just to come back to the beginning. Revolution did, on rare occasions, carry the meaning of change before Copernicus's work. But the acceptance of the heliocentric theory by the public of the time was something else. It was so disruptive to the mindset of the age, overturning millennia of knowledge and worldview—so Earth-shattering (pun intended)—that the first main word of the work itself, revolutionibus, was adapted less than a century later to mean the overthrow of a political system (the Glorious Revolution in Britain). When transferring the physical meaning to the political one, revolution meant "a circular process under which an old system of values is restored to its original position, with England's supposed 'ancient constitution' being reasserted, rather than formed anew". At that point, the use of the word was far from the meaning it has now, that of a radical new direction, a change of course from what came before. Soon after, however, the word gained the modern sense of revolution, as used for the French one, which you have probably heard about. Now revolution is more widely understood as the shattering of a previous political, social, technological—or otherwise—system, and the establishment of a new one: the Glorious Revolution, French Revolution, Industrial Revolution, Agrarian Revolution, Sexual Revolution…
It could be that the people at the time—after the Earth had been kicked aside, given rotation, put in orbit around the Sun, and the stars made still—experienced a mental shift so profound that it allowed for a reshuffling of many pre-existing mentalities. Maybe it can be compared to the shattering effect, almost a rite of passage, that many children in the Western world experience when they realise that Santa Claus is not a real being, but a construct created by society to make them believe that the bringer of presents is this exotic figure from faraway lands, and not their parents or families. For the child, it is already a big impact—and if you experienced that, you probably remember the moment, even if it was decades ago. Then imagine if instead of just one child at a time, it were an entire society realising, more or less simultaneously (within a generation), that the reality they had so strongly believed to be true, no longer was. That is what the so-called Copernican Revolution brought to European thought in the 16th century: a collective, mind-shattering effect. We, as humans, have been toying with these moments ever since. But more about that later.
In fact, the public that was more open to these ideas was also in the midst of another revolutionary movement, which at the time was called a protest, for lack of a better word: Protestantism. If the world, the solar system, the Universe, the Cosmos, was not as the Church claimed—with extra continents unaccounted for, the Earth in motion, and stars being other suns, perhaps with other Earths—then the Church became open to protest and reform. And if protest and reform were possible, then the acceptance of truly exotic ideas—like the Earth revolving—became easier in a society already undergoing profound transitions. In fact, different solar system models were readily adopted by Tycho Brahe and Johannes Kepler, Danish and German astronomers sponsored by Protestant-friendly kings. Meanwhile, Latin astronomers such as Galileo Galilei and Giordano Bruno had major conflicts with the Catholic Church in Northern Italy—Galileo was famously put on trial, Bruno was burned at the stake. Bruno's seven-year trial and his sentence to be burned alive were not specifically for his belief that the stars in the sky were other distant suns orbited by other planets, but mostly for his rejection of most Catholic doctrines.
The difference between Copernicus in the 16th century and all those who proposed alternative cosmological systems before might be that society was more open to new ideas because of empirical slaps in the face—steadily, repeatedly, forcefully. First, sailors and their investors realised that direct observations could actually shift reality—such as the discovery of a continent, accurate measurement of latitudes and longitudes, and the real size of the Earth’s circumference. Second, astronomers and their sponsors (who were often astrologers for European courts—better predictions meant better horoscopes; the zodiac pays for your smartphone, if you think about it) found that when your health or the outcome of a war depends on the conjunction of Saturn and Jupiter, and your astronomer looks through a telescope and tells you that these planets have rings and moons orbiting them, you might predict better when to wage your next war. Third, traders could more precisely calculate profits or invest in new products—like new dyes and pigments (e.g. scarlet), or learning how to plant species such as pepper, potatoes, tomatoes, tobacco, coconuts, sugar cane, among others, across the world. Actual measurements began to overturn established doctrines one after another; these facts reinforced the critiques of the old system and laid the foundation for an alternative system of establishing knowledge. The Scientific Revolution went hand in hand with the development of better instruments and measurements that define the modern world we experience today.
It was equally important that these new ideas travelled and multiplied faster than ever before. On one hand, naval interconnectivity regularly reached all continents and the major inhabited landmasses of the planet. From there, peoples—willingly or unwillingly—became part of a shared system of exchange, a process that continues today, where nearly every human being is regularly connected to the rest of the world in one form or another. Our present hyperconnected world is extending the reach and frequency of connection to ever more remote places. On the other hand, the printing press allowed for the multiplication of ideas at a rate faster than authorities could suppress them. Even if the works of figures like Copernicus or Bruno were censored, confiscated, destroyed, or burned, it was much more likely that one copy would escape, be read, and be copied again. Before the printing press, Protestant ideas—like those of the Hussites in the 15th century—did not spread far beyond their place of origin (e.g. Bohemia). Later, Prague—with its famous astronomical clock—would host Brahe and Kepler. On the other end of the chain, at the point of reception, Spanish missionaries actively protected indigenous languages (while simultaneously suppressing their cultures) in regions such as Mesoamerica, the Andes, and the Philippines, to prevent indigenous peoples from being exposed to “dangerous” Protestant, Enlightenment, or revolutionary ideas. To this day, these regions preserve some of their linguistic diversity and remain heavily Catholic, with the Philippines being the only nation (alongside the Vatican) that does not permit divorce.
Our hyperconnected and idea-copying world is the one that gave birth to the concept of humanity—a “humanity” that can now begin to ask itself what it wants to do, now that we have the means to communicate with one another, and the resources (or energy levels) to invest a fraction of that energy in specific goals. But before asking that question, we first need to understand the mechanisms by which a hyperconnected people is able to pose it: which networks are activated, in which language communication occurs, with whom that exchange is implemented, and what actions can—or cannot—be taken. What is the agency?
Curiously, one of the early adopters of Copernicus's thesis was Thomas Digges, who removed the need for the sphere of fixed stars. He proposed the existence of distant stars scattered throughout the Universe. This led him to raise the paradox of the dark sky: in an infinite Universe filled with stars, the sky should look like the surface of the Sun, because in every direction there should be at least one star. Since the sky is dark, the Universe cannot be an infinite, eternal expanse uniformly filled with visible stars. With that in mind, the Copernican Revolution—which displaced us from the centre of the Universe—is still not complete. It is geographical, but not temporal. Heliocentrism kicked the Earth and its peoples out of the centre of space, but the dark sky placed us in a special time—a time when we can still see the horizon of the visible Universe. Now we are in another special time—the time when humanity is conceptualised. The time to ask: what does humanity want?
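A back-of-the-envelope version of that paradox, my own sketch rather than Digges's wording, makes the problem concrete. Assume stars of average luminosity \(L\) are spread uniformly with number density \(n\) through an infinite, unchanging space. A thin shell of radius \(r\) and thickness \(dr\) around us contains \(4\pi r^{2} n \, dr\) stars, each contributing a flux \(L / (4\pi r^{2})\), so every shell adds the same brightness and the total diverges:

\[ F_{\text{total}} = \int_{0}^{\infty} 4\pi r^{2} n \cdot \frac{L}{4\pi r^{2}} \, dr = \int_{0}^{\infty} n L \, dr \to \infty . \]

The dark night sky tells us that at least one assumption fails: the Universe we can see is not infinitely old, or it is not eternally and uniformly filled with shining stars, or both, which is exactly what makes our vantage point in time special.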
Olía a orines de mico. 'Así huelen todos los europeos, sobre todo en verano', nos dijo mi padre. 'Es el olor de la civilización'. "She smelled of monkey piss. 'That is how all Europeans smell, especially in summer,' my father told us. 'It is the smell of civilisation.'" Gabriel García Márquez, Doce Cuentos Peregrinos
With the previous examples of failed connections, we can see the limits of the system presented earlier—just linking neighbours allows goods to flow only as far as they can be afforded.
The key point we will focus on now—and simplifying immensely complex human relations—is cost versus affordability: who is able to establish and maintain long-distance links that are worth the effort, over long periods of time?
To answer that, we will skip through 320,000 years of history, from the “first hints of exchange” to the onset of globalisation. I will place the beginning of our current globalisation trend at the breakdown of the ocean navigation barrier, accomplished by Western Europeans in the 15th and 16th centuries. In between these two time points—separated by more than 300,000 years—there is the accumulation of regional innovations, the constitution and entrenchment of increasingly diverse networks and hierarchies, which expanded and intensified the use of resources and energy. These “merely” 300,000 years have been widely studied elsewhere, and the reader may be familiar with the traditional “linear” narrative: from hunter-gatherers to agriculturalists, to city-states, to kingdoms, to empires, to nation-states. This image has been widely discredited, as the previous section shows, with many failures and non-linear trends. It has also been demonstrated repeatedly that no group is inherently more “advanced” than another. What people do exceptionally well is adapt to their environments—and the simpler the environment, the more efficient they tend to become. However, across these 300,000 years, there is an unmistakable increase in the scale of connectivity. This reflects episodes of creation and stabilisation of ever more distant networks, as seen in the previous section. These serendipitous mechanisms anchor larger-scale connections and reinforce otherwise fragile or unaffordable links.
So, if you will pardon me for glossing over 300,000 years of history, in the next section I will focus on the integration of all these networks into one worldwide system of connectivity—an unprecedented moment in human history—which leads us to the main question of these writings.
We all know the story of how European societies reached the shores of the Americas, changing the map of the world forever—quite literally, by creating modern mappae mundi and placing Europe at the centre. What you might be less familiar with is why that happened—why did Columbus sail west at that particular time?
Starting with cost versus affordability, we first needed something of high cost—in this case, we can focus on the spice trade. Specifically, the costliest spices were cloves and nutmeg (plus mace—the red aril of the nutmeg seed). Unlike silk, pepper, cinnamon, and ginger (other highly valued goods that travelled long distances), in the 15th century, nutmeg and cloves could only be found in two very small archipelagos the reader may be unfamiliar with. These are Tidore and Ternate for cloves, and the Banda Islands for nutmeg and mace. Today, they are part of Indonesia, in the Maluku Islands area. These spices were highly aromatic and highly valued in European markets, where they arrived in refined forms such as oleoresins, butter, essential oils, and powders. For Europeans, it was impossible to know the exact origin of these spices—some even thought they were of mineral origin. They did indeed come from faraway lands, often erroneously attributed to India, which was a genuine source of cinnamon and pepper. These spices had a high value relative to their volume and therefore represented a strong incentive to bridge the distance between Western Europe and India. In fact, these very spices were the reason Western Europeans—starting with Portugal and Castile—eventually bridged that 14,000-kilometre gap.
But why were these spices so costly, and why did people at the opposite end of the Eurasian continent want them so badly? Nutmeg and cloves had many uses. You may be familiar with their use in recipes—they did enhance the taste of food—and even today, at a much lower price, they are ever-present in European cuisine; unlike ingredients such as tomatoes or potatoes, they were already known in Europe before the European expansion over the world. Another major use of these spices was in perfumes. By the 15th century, European hygienic practices had changed significantly. In the 14th century, public baths were common throughout Europe, as it is quite typical for human beings across cultures to enjoy being clean and having clean people around them. However, the Black Death waves of the 14th century caused many bathhouses to close. Though some reopened, European aristocracy had, by then, begun adopting new fashions—such as undergarments and tight-fitting tailoring—which, in turn, created new ways to "trap the body's evacuations in a layer above the skin". At the same time, certain strands of the Catholic Church increasingly viewed bathing, nudity, and hygiene as contrary to chastity.
In the Iberian Peninsula, the Christian aristocracy—especially in the north—was gradually reclaiming land from Muslim rulers. Bathing and hygiene were (and still are) highly valued cultural practices in the Muslim world, and often also served as forms of social interaction. Therefore, part of the Christian Iberian aristocracy cemented its identity by rejecting cultural practices closely associated with their Muslim subjects, neighbours, and slaves. All of this led parts of European and Iberian elites to adopt habits that amplified body odour: less bathing, less washing, and tighter clothing. In warmer climates, this likely made body odours more intense. Consequently, rare, expensive, highly perfumed spices could help mask these natural human emissions—while also signalling wealth and status. This helps explain why the Iberian aristocracy, in particular, was motivated to fund direct expeditions to distant lands.
Ironically, Europeans not only brought diseases to the Americas, but also carried American diseases back to the continent. The most infamous of these was syphilis. Once syphilis began ravaging Europe, bathhouses once again closed—this time because baths were associated with libertine behaviour, and syphilis was primarily sexually transmitted. Additionally, beliefs grew that disease could be carried through water, and that open pores (caused by bathing) allowed sickness to enter the body. Within a few decades, bathing had largely disappeared from European daily life. Few households had running water, so opportunities to bathe were almost non-existent. Coupled with aristocratic customs and religious identity-making, this gave rise to figures like Louis XIV, who famously claimed to have bathed only twice in his life.
This cultural environment turned humble botanical species like nutmeg and cloves into commodities capable of financing multi-year expeditions—risky ventures where many ships and sailors might be lost. In the case of the Magellan expedition, only one of five ships completed the circumnavigation—without Magellan himself—but it returned so full of spices (including pepper) that it paid off the entire venture and paved the way for Spanish colonisation of the Philippines. From the association with perfume and the abandonment of bathing, we derive the enduring stereotype that Europeans don’t smell especially pleasant—particularly in summer. That this has anything to do with “civilisation” is now a running joke used by formerly colonised peoples to poke fun at their colonisers—whose domination was often justified on the basis of “cleanliness”.
Let’s now focus on the affordability of finding an alternative route to the spice lands—ultimately disrupting existing sea and overland trade networks by bypassing dozens of intermediaries. The process of making that circumvention possible was itself the result of 300,000 years of innovation, technological development, and skill acquisition. Crucially, as with the Austronesians and Vikings, it was the development of highly reliable navigational skills that enabled this. By the 15th century, Mediterranean and Atlantic ships were sturdy and autonomous enough, and the Iberian Peninsula had become a melting pot of maritime knowledge, making it affordable to attempt reaching India via new routes that bypassed the Levant.
Columbus believed his own (incorrect) computation of the Earth's size—around a third smaller than it actually is—and convinced the Castilian crown to fund his westward voyage to reach "India". His proposed route was reckless and misguided. But, as previously mentioned, there were hints that land existed across the Atlantic—not so far away. For instance, the Portuguese were aware of pau-brasil (brazilwood) drifting from the west. Columbus, Castile, and luck all converged when they stumbled upon the Americas.
For the Portuguese, finding an alternative route to the spice islands was far less reckless. It took them over a century to circumnavigate Africa, learning gradually and establishing trading posts and outposts along the African coast. Each expedition went a little further. By 1488, they reached the Cape of Storms (later the Cape of Good Hope), and in 1498 Vasco da Gama became the first European to reach India by sea—six years after Columbus's first voyage.
At that point, for the first time in history, humans had the means to connect all the major landmasses in a way that was both affordable and repeatable. Just 20 years later (1519–1522), the first circumnavigation of the world was completed. For the first time, a human could potentially reach the shores of almost any land within a few years.
In the late 15th and early 16th centuries, the world changed dramatically due to the technologies and navigational expertise accumulated in the Western world over millennia—coupled with the high cost of, and demand for, spices in Iberia. Within a few decades, the world transitioned from having vast disconnected landmasses to a planet where the first movers—Western Europeans—gained the upper hand. These few kingdoms, and later much of Europe, changed the existing order, wiping out tens of millions of people and cultures in the process, and subjugating nearly every major power on the planet.
That change forever transformed both ends of the lands reached by the Iberians. In the Americas, in less than half a century, a few thousand Europeans—mainly Castilians—overthrew two massive empires ruling tens of millions. In the East, in present-day Malaysia, the Portuguese seized Malacca, then the commercial hub of East Asia, in 1511 with a fleet of just 18 ships. It was the furthest territorial conquest in human history up to that time. To grasp its magnitude: imagine a fleet of advanced vehicles from Malaysia travelling 20,000 km to conquer Rotterdam—Europe’s major transport hub—and controlling it for centuries thereafter. This event had immense geopolitical implications, shifting the balance across Eurasia and reshaping global self-understanding until the present day.
Control of far-flung lands, trade routes, and global connectivity was taken over by small elites from a few North Atlantic coasts. These elites, supported by large home populations, had more resilience than earlier seafarers like the Vikings or Austronesians. Those earlier explorers made equally bold journeys and had the skills and knowledge—but little margin for failure and limited rewards. In contrast, the North Atlantic nations had the means, knowledge, and desire. They could afford the long-distance connection, and more importantly, its coercive and commercial control.
Pandemics, repression, and colonisation enabled long-term European settlement. As Eduardo Galeano famously wrote: “They came with the Bible and we had the land; we blinked, and now we have the Bible and they have the land.”
The basic structure described in the previous section allowed inland communities to have access, for example, to seashells, even though they might never have seen the ocean themselves. However, one may ask: how is it that the people of Papua were still trading with stone axes in the 20th century? Even more so if the reader knows that Papua might be the oldest place on the planet where agriculture was developed, with some estimates suggesting that root vegetables were cultivated there at least 10,000 years ago—around the same time as the domestication of grains in the Fertile Crescent.
That is actually a question Yali, an exceptional politician from Papua, asked Jared Diamond in 1972, while Diamond was conducting ornithological fieldwork there, studying birds such as the bowerbird. The actual question was: “Why do you white people have so much cargo and bring it to Papua, but we natives have so little of our own cargo?”—summarised as: “Why do your people have so many things compared to us?” For Yali, cargo was the generic term for all the items Westerners had brought to Papua since World War II. He was, in fact, asking about the technological gap between the Westerners travelling there and the local people.
Diamond spent 30 years developing an answer, culminating in his book Guns, Germs and Steel. In it, he presents two main theses:
First, geographically, some populations had more access to natural resources to begin with, such as plants and animals that were easy to domesticate. For instance, in Papua, the largest domesticable animal was the pig, while in Eurasia and Africa, cattle have symbolised wealth and power for generations. Papua also had no access to grain, while civilisations like the Mayas domesticated maize. Despite the absence of large animals, maize alone was sufficient for the Mayas to develop a thriving and complex civilisation with many types of “cargo”. Additionally, domestic animals made human populations more exposed to germs, which they gradually adapted to. However, when these naturally engineered biological weapons encountered previously isolated populations, they wiped out 90 to 99% of the locals in less than a century. The pigs kept by Papuans may have spared them a similar fate to that of the Americas and the distant Pacific Islands.
The second thesis is that, due to geography, some areas of the world were better connected than others. Again, Diamond argued that it was relatively easy for trade networks to span Eurasia and Africa, with goods, ideas, domesticated foods, technologies, and ideologies spreading far in just a few generations. This was especially true for crops; species domesticated in one location often had suitable growing conditions across the climate zones from the Iberian Peninsula to Japan. This did not occur in other regions, such as the Americas, where many similar technologies had to be independently developed by both the Mesoamerican and Andean peoples. Their centres of domestication were only a few thousand kilometres apart. Even so, both regions independently domesticated crops such as cotton and beans, while maize, first domesticated in Mesoamerica, took millennia to spread south. Moreover, useful animals like the llama and crops like the potato, domesticated in the Andes, never reached the Mayas, while the writing system developed by the Mayas never made it to the Andes. According to Diamond, these gaps are due to difficult and diverse geographies. There is no easy land route connecting these American regions—dense tropical jungles, vast swamps, and rugged mountain ranges with dramatically different climates hinder the spread of domesticated species. Even today, in the 21st century, there is no road connecting these two areas. The Darién region, on the border between Panama and Colombia, remains impassable by vehicle, making it the only break in the otherwise continuous road network running north to south across the Americas.
With these two main theses and strong reasoning, Diamond makes his case to answer Yali’s question. According to him, Papua did not have the species or connections that benefitted Europeans upon arrival. Europeans were simply lucky and thus came to dominate the known world. That left Papuans with a relatively limited set of food sources and restricted access to technologies developed elsewhere.
This view has been widely debated and does not fully account for the timing of major expansionist events. Still, the picture Diamond paints holds reasonably well until the 16th–17th centuries and the largest biological genocide in human history. Afterwards, the situation becomes more nuanced, as the connectivity of the world began to increase exponentially—but we will explore this in another chapter.
The limitation on access to technologies and information for the Papuans is related to our earlier examples of basic trading networks. These can only extend as far as humans can reliably reach each other at a more or less consistent pace. If mountains, oceans, and jungles must be traversed, the task may be too dangerous or uncertain to attempt. In such cases, communities at each end remain isolated. On the other hand, if obstacles are surmountable and there is a desire to connect, these networks can transform the well-being of participants. This is the case with the Eurasian and Indian Ocean trade networks. These spanned over 2,000 years, bringing silk, gunpowder, spices, and paper westward, and silver and wool eastward. Or take the so-called Columbian Exchange, where Europe plundered the immense wealth of the Americas, borrowed some botanical knowledge and cultural inspirations, and in turn colonised and Christianised native populations, erasing or warping their lands, traditions, institutions, and knowledge systems.
Returning to our earlier examples and thought experiments, these scenarios depicted only weakly connected communities. For instance, fragmented travel and exchange networks were the norm in the Papuan highlands. Although goods like axes or shells could travel freely, people could not. Residents of a group traditionally could not travel far beyond their territories or their closest trading partners. Unannounced or long-distance travel posed great risks—aggression, even death—making lone long-distance trade virtually non-existent. Commerce beyond immediate neighbours was carried out by intermediaries. Each of these intermediaries usually took a cut or incurred costs, inflating the final price of the item; the small sketch below illustrates how quickly such markups compound. This inflation could only go so far: only items of high value, or buyers with considerable resources, could justify the costs. This effectively limited how far an object could travel and placed natural boundaries on the kind of connection network described in the previous section. This is comparable today to drug or wildlife trafficking, where lightweight, high-value items travel thousands of kilometres despite formidable regulatory and law-enforcement hurdles.
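To make that compounding concrete, here is a minimal sketch in Python. The 20% markup and the numbers of intermediaries are invented, illustrative assumptions rather than historical figures; the only point is that a modest cut taken at every hop multiplies quickly.

```python
# Toy illustration of compounding markups along a chain of intermediaries.
# All numbers are invented for illustration; they are not historical estimates.

def price_after_chain(base_price: float, markup: float, hops: int) -> float:
    """Final price after `hops` intermediaries each add `markup` (0.2 = 20%)."""
    return base_price * (1 + markup) ** hops

base = 1.0  # value of a shell at the coast, in arbitrary units
for hops in (1, 5, 10, 20):
    final = price_after_chain(base, 0.20, hops)
    print(f"{hops:2d} intermediaries adding 20% each -> {final:5.1f}x the original value")
```

With a 20% cut per hop, ten intermediaries already multiply the price roughly sixfold, and twenty push it to nearly forty times the original value, which is why only rare, light, and highly desired goods travelled the full length of such chains.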
Conversely, if exchange links are too weak or complex, they may collapse shortly after forming—before significant transfers of goods, ideas, or technologies can occur. This has happened countless times across different regions and eras. Think of a group of friends that never fully bonds, or a business that cannot reach its customers. Let’s consider some more striking historical examples. At least twice before Columbus, people from faraway regions reached the Americas, but failed to establish lasting presence or strong cultural exchange.
You might be thinking of one such example: the Vikings from Scandinavia in the 11th century. They arrived from sparsely populated Greenland, but their colonisation efforts in Vinland (modern-day North America) failed. Without delving too deeply into why, it’s clear they had the means to reach distant shores and found good land, but not much more. The distances were vast, the local resources were not especially valuable, the natives were not always welcoming—possibly falling sick from contact, or turning hostile—and the Vikings had limited capacity for sustained support. Climate and political factors played a role, but ultimately the venture proved too costly for too little return.
The second example is even more epic and deserves wider recognition: the Polynesian crossing of the Pacific Ocean to reach the coast of South America. Sadly, we lack written records—like the Vinland Sagas—or significant archaeological evidence. But through genetic, linguistic, and species transfer evidence, we know that about 800 years ago, seafarers from the Polynesian islands reached South America. For context, that’s more than twice the distance Columbus travelled—and his crew believed land awaited. It’s also more than three times the longest Viking sea crossing to reach the Americas. The Polynesians had no clear reason to expect a continent ahead, yet they sailed into the unknown.
Imagine being in a boat no more than 30 metres long and about one metre wide, possibly connected to another boat as a catamaran. This platform allowed a few dozen people to bring animals, water, and supplies across the vast ocean. Some of these vessels—such as the drua of Fiji—could carry more than 200 people. Now imagine that your only known geography was a scattering of islands, and you did not know where, or whether, more land existed. Countless such expeditions must have failed before one finally succeeded in making the 6,000 km journey and came upon an enormous continent. Then they had to sail back, locating tiny home islands amid the ocean after weeks at sea. The adventure, mindset, skill, and ultimate success—after who knows how many failures—make this one of the most remarkable, untold stories of human exploration.
This is not comparable with the Viking or Iberian voyages across the Atlantic. Those sailors knew something awaited beyond. The Vikings had seen driftwood from the west wash ashore in Greenland. Columbus, though mistaken in his estimation of the world’s size, expected land. The Portuguese, too, found driftwood in the newly colonised Cape Verde islands—Paubrasilia, or ‘ember wood’, named for its glowing red colour—which later gave Brazil its name. All these peoples had reasons to expect land in the west.
We know Polynesians completed this journey because they brought coconuts and chickens with them—and brought back sweet potatoes, which later spread across the Pacific islands as a staple crop supporting larger populations. They also had children with local peoples, leaving genes still found today in Mesoamerican and Mapuche populations. There is even evidence of American ancestry in Polynesian populations.
This widespread gene flow shows that Polynesians not only crossed the ocean more than once but established contact across a broad swathe of the Pacific coast of the Americas. Unfortunately, the contact was not maintained over time, and no further instances of intermarriage are evident after 1300 CE. Furthermore, Polynesian navigational technology was not passed on to the local peoples. Native Americans would have greatly benefitted from such skills—especially considering the lack of transport links between North and South America even today.
We can speculate why the contact faded and the technology was not adopted—unlike sweet potatoes. In a simplified view, the connection was likely too distant and involved too few people to become meaningful. Perhaps the Marquesas Islands had only a few hundred inhabitants at the time, while the continent had millions and two sophisticated civilisations that saw little value in these distant seafarers. Whatever the case, the exchange was short-lived and limited.
More complex reasons could also explain the lack of adoption. For instance, Austronesian sailors from Makassar routinely travelled to the northern coast of present-day Australia—around 3,000 km away—to harvest sea cucumbers for the Chinese market. This trade continued for centuries, ending only in the 20th century due to Australian colonial restrictions. Although contact endured, no lasting colonies were established, and the mixed communities that emerged never thrived. There was some intermarriage, and a basic trade language developed, but the local people adopted only simple technologies such as dugout canoes and shovel-nosed spears. More advanced knowledge may not have been shared—or perhaps the locals weren’t interested.
Similar patterns appear elsewhere in the Austronesian world. Madagascar was settled from maritime Southeast Asia, yet, despite its relative proximity, Austronesian influence on the East African mainland remained limited, just as the Austronesians of Taiwan left little imprint on the nearby Asian mainland. This suggests that Austronesians successfully expanded to uninhabited or sparsely populated areas (e.g. Pacific islands, Madagascar), but failed to make inroads in already densely populated regions like mainland Asia, Africa, Papua, or the Americas.
In Australia’s case, the issue might have been resource scarcity. Northern Australia may not have offered enough to entice settlers. Local people may not have seen value in adopting agriculture or foreign technologies. Additionally, complex knowledge like seafaring is often guarded. Sailing is more than boat-building—it involves reading stars, winds, currents, and more. Mastery takes time, risk, and community effort. If local life was already sufficient, why go to the trouble?
Moreover, the fact that sea cucumber expeditions were male-dominated may have prevented the creation of Austronesian communities in Australia.
A good related example, though limited to technology and infrastructure, is the first transatlantic telegraph cable. Laid in 1858, it worked poorly for about three weeks before failing completely, and a reliably working cable followed only in 1866, nearly a decade later. In that case, a combination of affordability, technological improvement, and the desire for near-instant communication between two continents made the later attempts stick. Today, hundreds of undersea cables link the continents, carrying high-speed data. But it is easy to imagine scenarios in which, after the first failed transatlantic cable, the effort was simply abandoned.
From the cases outlined here, we can see how many different scenarios could limit cultural and technological exchange, creating a nuanced and often unpredictable picture. Establishing sustained connections across natural and cultural boundaries is a long, fragile process—often with little success on the first attempt.
Going back to your encounter with a stranger in East Africa, the stranger might point to one especially appealing shell garment that you are wearing. You might decide to gift it away to create a bond, and it might not be a high price to pay because, back home, there are plenty more. Therefore, you might give away more spares, as we saw Columbus do with cloth. To reciprocate, the stranger might give you some of their ore. And with that, you might part ways, the only thing remaining from the encounter being an exchange or gifting of goods. The other person will go back to their community and show off the newly acquired item. Between curiosity and desire, more members of that group might crave such objects. Then they might set off in the general direction the first object came from. Or, alternatively, the initial individual will go back to gather more. The same can happen with you and your home group returning to the meeting point.
However it happens, after your exchange with the stranger, a link has already been established between distant groups. And maybe, after the initial gift-giving from you or the others, a more constant exchange can develop. At first, they do not know where you obtain your shells, nor do you know where they obtain their ore—only that each of you can get it from the other. You yourself get the shells not from the source but from another group, and that group from another, and so on, all the way back to the coast. It might be more efficient to keep doing so than to embark on a journey of hundreds of kilometres into the unknown to reach the sea and gather the shells directly. The same goes for the ore, food, spices or other objects. If there is something to gain, it is more valuable to keep good relations with your neighbours than to go it alone.
With that thought experiment, we see how a basic exchange network and collaboration is established. Such a network, however, does not seem to be sustainable, as once the desire and rarity of an item are gone, the need for the network might erode.
With items that are useful as well as rare, like tools, that network and cooperation might be strengthened. This has often happened with specialised tool-making communities. For example, in the hinterlands of Papua, in the Wahgi Valley, lay the Tuman quarries, whose stone was of sufficient quality to make excellent axes. Such stone was found in few other places in the highlands of Papua, and the quarries and the axe-making process were controlled solely by the Tungei people. They were expert stone-axe makers, an activity to which they dedicated most of their time, and the high-quality product they made was exchanged with people all around the region. In return they received not only goods, but also dedicated rituals, respect and esteem, and brides for the Tungei community. They had access to the stone and the means to extract it in an efficient, communal way: every few years, the whole community quarried together in expeditions that lasted for months. Because they controlled the main source, a larger number of their people held the knowledge and skill to prepare the axes—both shaping the blade and hafting the T-shaped handle. With that knowledge, they were able to trade for goods far beyond their immediate neighbourhood.
However, the main thing the stone axes were exchanged for was marriage. In their area, most marriages were arranged through a complex exchange of goods between the bride’s original community and the receiving one. Sea shells, ropes, pigs, bird-of-paradise feathers, and salt were used in the exchange. But the Tungei used their stone axes extensively as bride price, which made them relatively wealthy, since they controlled the source of the stone. They were the petrol producers of the highlands! More than two-thirds of Tungei marriages were with women from the external communities they traded with. Interestingly, they did not trade directly with faraway groups such as those on the coast, yet they did receive, through many intermediaries, the shells coming from there.
This dominance of the Tuman quarry trade network collapsed when colonial-era patrols, and later the Papua New Guinea state, began to reach the area routinely, bringing with them steel axes and plenty of seashells, which they gifted to the locals or traded in large quantities. Needless to say, the steel axes were far more appreciated: they were more efficient—felling a tree in a day instead of several days with the stone ones—lasted longer, and were sharper, allowing more refined work. With the arrival of European trade goods, the marriage pattern of the Tungei swiftly changed, with only half of brides now coming from outside communities, reflecting the economic changes caused by the loss of the axe trade.
On the other hand, other Papuan communities, like the Goroka, still traded stone axes well after steel axes and pearl shells had saturated their communities. That is because they had a specialised workforce in charge of mining and crafting the Dom Gaima “bride axes”—ceremonial stone axes so large and elaborate that they were never meant to be functional. They were used for bride price and display, and so remained a sign of prestige.
In conclusion, the thought experiments and examples show how trade, commerce, needs, rarity, crafting, prestige, and aesthetically attractive items, combined with basic communication, allowed the establishment of rudimentary networks that extended much further than the basic interactions a community would have with its neighbours. The basic desire, need, and curiosity for what lies beyond might have created the roots for collaborative action.
But beyond the seeds of networks, there is the establishment of strong bonds. There are many ways in which these can be maintained, with one basic mechanism being intergroup marriages. Once the network and the trust between neighbouring groups are created, exogenous marriages can follow. In fact, this is quite common in nature: in many social animals, individuals leave their natal group after reaching maturity to travel, mingle, and often reproduce in a new one. Curiously, in the case of our closest relatives—the chimpanzees—females tend to migrate to a new group as teenagers. Among bonobos it is likewise usually the females who migrate, while males tend to remain in their natal group alongside their mothers, and daughters of high-ranking matriarchs may also stay. This exogenous mating and breeding is quite common in humans. The difference lies in the diversity of strategies. Exogenous reproduction is not always the case, and many human groups display a whole spectrum of migration patterns: from only females, to only males, to a mix, or exclusively endogamous systems. On top of that, there are complex kinship strategies regarding whom one can marry, from clan systems spanning hundreds of kilometres to, on the contrary, first-cousin marriages. Contrary to popular belief, first-cousin marriages are among the most genetically fertile unions. So keep in mind: if you want to have many children, have a lot of unprotected intercourse with one of your first cousins—like Darwin did.
Another way of strengthening networks is that of debt or gift exchanges. Many academics point out that one way to keep social relations alive, strong, and reciprocal is to build them upon accountability in the form of an unpaid debt or the need to return gifts. That happens continuously in our daily lives—when friends or family do us a favour, we “feel indebted” to them. The debt is not precisely measured (i.e., there is no numerical value assigned) and might never be returned in the same exact form—it might even be rejected by the giver—but it can be given to somebody else, like when someone pays for your dinner and you later do the same for someone else to keep the balance of the world. A good example of this is the Toraja, in central Sulawesi. Their current culture invests massively in the funerals of their people. When a family member dies, the body is kept at home for years and considered still a member of the family, even being served meals daily. It is kept until the extended family can secure enough funds to hold a massive funeral—one anecdote told to me was of a family that kept the body for 20 years before finally holding the funeral. Once the body is placed in a permanent tomb—which can be a sarcophagus in the rock, hanging on cliffs, in caves, or in a grain storage-like structure—the person is considered to be truly dead. Despite that, they parade many of the mummies every year over Christmas, dressing them in new clothes, jewellery, even sunglasses. At the funerals, guests are given large gifts. However, these are not “debt-free” gifts. Each gift is recorded, along with which family it was given to, with the expectation that when invited to a funeral by that family, the gift-givers will be given something of equal or similar value—or they will lose face. One young person I spoke with there told me that “Torajas are indebted from the womb of their mothers”. Debt is inherited if you belong to a specific family and culture.
Beyond marriages and debt, infrastructure is a way to keep long-distance connections over generations and across large areas. For example, roads might extend much further than the migration routes a group would typically follow, and information—in the form of basic agreed signs and trade-related language—would travel along these early infrastructures. Once infrastructure is created, it might stabilise a distant connection, though it comes at a cost. Maintenance can be expensive. Bridges must be built and maintained by the people on both sides of a river—or rebuilt repeatedly, like the Quechua rope bridge of Q’eswachaka, rewoven every year from a species of grass despite a modern bridge standing nearby. One theory posits that this is the future of humanity: the maintenance of an ever-expanding technological sphere—or technosphere (which we will explore further in the future).
Once the boundary of vicinity is broken, using this array of methods, connection might expand unhindered to take over the planet in a universal way. But, as we have seen, maintaining all of this is costly, and motivation may be lacking. Why should I learn seven different languages? Why should I tire myself travelling to another village with different food and unfamiliar people? Why should I marry into another community where none of my loved ones can protect me if things go sour? Why should I be indebted to traditions and infrastructure before I’m even born? Many of these costs may have prevented communication from expanding and stabilising any faster than it did; indeed, it did not happen until relatively recently, partly because of major geographic obstacles like oceans, swamps, and mountain ranges, and partly because of cultural taboos and long, expensive trade networks. But if those obstacles didn’t exist, there would be nothing stopping a valued good or service—existing only in one place—from eventually reaching every other part of the world, along with all the communication needed to sustain that. This was exemplified by the trade in species and the colonisation that accompanied it—but we’ll talk about that in the next section.