by Lauren Probert
Bilingualism: the goal of many a language student, spurred on by factors such as the inevitable second-hand embarrassment of watching our monolingual parents massacre the language on every holiday. But is fluid communication the only advantage? Studies into bilingualism and our brains suggest it may not be.
The topic of bilingualism has been polemical amongst researchers for years, with scientists and linguists alike still divided on its benefits and disadvantages. On one hand, there is some evidence for a 'bilingual advantage': enhanced executive functioning (brain plasticity and increased cognitive reserve), and the possibility that bilingualism aids stroke recovery and dementia prevention. On the other hand, bilingualism's former, less favourable reputation persists among some researchers, who argue that there is a 'processing cost', referring to the longer time taken to select the target language and suppress the other, which presents lexical and syntactic disadvantages to a language learner.
However, one could argue that this field of research has been oversimplified. In particular, the diversity in age amongst the bilinguals studied has often been overlooked: most importantly, the age at which each subject acquired their second language, and how bilingualism affects the brain at different stages of life.
Children, whose brains initially process language using both hemispheres, often find it easier to pick up on the subtleties of language. This early utilisation of both hemispheres means that both remain active throughout the bilingual's life, preventing the loss of some neural pathways. However, because language acquisition is easier in childhood, it provides less of a 'mental workout'.
Peak mental age in humans is assumed to fall in the mid-20s, so young adults already have comparatively enhanced executive functioning. Leading researcher Ellen Bialystok argued that Professor Kenneth Paap's criticism of the benefits of bilingualism was 'selectively focusing on studies of young adults, who are least likely to show a bilingual advantage; they're already at the peak of their cognitive powers.'
Interestingly, elderly people may benefit most. Some studies have shown that elderly bilinguals have better-maintained white matter in their brains than elderly monolinguals. The heightened difficulty of learning languages later in life is itself beneficial, because 'as people gain new experiences, nerve pathways that are used frequently develop stronger connections, whereas neurons that are rarely or never used eventually die' (Mike Cardwell and Cara Flanagan). Bilingualism thus keeps the neurons in both the left and right hemispheres active. Nor are the benefits reserved for elderly people with many years of bilingualism: Boyke et al. (2008) found evidence of brain plasticity amongst 60-year-olds who took up a new skill, which could be applied to language learning, along with increases in grey matter in the visual cortex. So it does seem possible that neglecting age, specifically the age at which bilingualism was acquired, may have led to some research inconsistencies.
Unfortunately, one cannot ignore the extent to which the issue has become politicised. Bilingualism's dramatic image makeover may be partially due to political rather than scientific influence, no thanks to those who objected to bilingualism on social grounds and spread misinformation linking it to mental degeneration, masquerading politics as science.
Nowadays, the opposite opinion is more prevalent. Amid the controversy there is "undoubtedly some concern" that it could reverse the "positive press" bilingualism has enjoyed because of the claims of cognitive benefits (Marek Kohn). It could be argued that recent data has been manipulated to support bilingualism, given the social inclusion it is seen to engender.
This has produced an interpretation and publication bias. In one instance, a group of researchers analysed 104 abstracts on bilingualism and reported that "68 percent of abstracts that found an executive-function advantage were eventually published in journals, compared to just 29 percent that found no advantage" (Ed Yong). So there may be inaccuracies not only in the data itself, but in which data is presented.
It is clear, then, that the proposition of a 'bilingual advantage' is controversial, and that an incomplete knowledge of linguistic processing, a lack of collaboration in the field, an unfortunate political agenda and conflict between some leading researchers have only exacerbated the problem.
But even glossing over all those issues, and recognising the inherent flaws in research to date and the need for more accurate and in-depth testing, I would still argue that age creates genuine diversity amongst bilinguals, with varying potential cerebral benefits. And I think we can all agree that being able to communicate with people outside your own culture, and to maintain attachments to the countries and people connected with a language, is an uncontested value of bilingualism, regardless of age.
Edited by Rosie Bell
by Rosie Bell
It would be difficult to argue that identity and self-expression don't hold importance in the eyes of young people today. Discussions of pronouns and identity have become common over the past few years, alongside the popularisation of gender-neutral pronouns and neopronouns amongst non-binary people. But as the world moves into this age of acceptance for all, languages must develop and change to keep up with the progress of society. This is often easier said than done, with some languages more resistant to change than others.
For English speakers, a gender-neutral pronoun is not particularly difficult to adjust to: we already have one. 'They/them' are the third-person plural pronouns of the English language, but English speakers will also use them in the singular when the gender of the person discussed is not known.
Think about it: if you were to ask a friend whether they (look, I'm doing it right now) were seeing a friend later, but you didn't know that friend's gender, wouldn't you ask "are you seeing them later?" A not-so-rare singular they! It's easy to see, then, why, despite some resistance, English speakers can fairly easily adopt 'they/them' as a third, gender-neutral pronoun. The singular 'they/them' of course has its critics, as does all change, but the world moves on without them: in 2017 the Associated Press Stylebook, a book of standards for journalists, featured "they" as a gender-neutral form. Jumping onto that bandwagon came the Merriam-Webster dictionary, which in 2019 added "they" as the pronoun to be used for a "single person whose identity is nonbinary".
However, in languages with more extensive grammatical gender, the switch becomes more complicated. Take French: there are not only gendered third-person singular pronouns (il, elle), but gendered third-person plural pronouns as well. French speakers therefore lack the luxury of a built-in, inconspicuous gender-neutral pronoun. 'They' in French is 'ils' or 'elles', depending on the composition of the group being described: if the group is entirely male, or mixed, one uses 'ils'; if the group is all female, 'elles'. Not only does this seem to erase the existence of non-binary people, but to an extent women too, whose presence is largely dismissed through this grammatical feature. If 'elles' are a group of all women, why should the presence of one 'il' erase that? What is it about that man that grammatically overpowers the presence of all the women in the group? Perhaps it is a discussion for another day, but activists passionate about making language more inclusive are questioning why the masculine is the universal form, and the feminine used only in exceptions.

Some French speakers have hit upon a solution: the gender-neutral pronoun 'iel' (or occasionally 'ille'), intended as a combination of the two pronouns currently in use. This seems a reasonable rectification of the pronoun problem, but unfortunately does not solve the issue of gendered nouns. For this, some francophones suggest the inclusion of asterisks to include all genders in the noun; for example, "les ami*e*s."
Alas, this does not clear the biggest obstacle facing activists: the approval of the illustrious Académie Française. Despite efforts to make French a little more neutral, the ancient institution's grip on the language will not loosen on this occasion; in a declaration on inclusive writing, the Académie anointed itself the "guardian of the norm" against the "'inclusive' aberration." Following this, in 2017, the French government banned the use of inclusive gender-neutral language in official documents, with Prime Minister Edouard Philippe saying "the masculine is a neutral form that should be used in terms applicable to women as well as men."
So it's looking like a "non" from the Académie, mostly on the perhaps understandable basis that the addition of asterisks makes French as a whole harder to learn. It must be said, though, that despite the Académie's great influence, it is difficult to control a language as it evolves to fit its speakers and their needs, so one may find that gender-neutrality becomes this "norm" before long, and will require a solution.
Edited by Rosie Bell
Sign languages are visual languages used in a variety of contexts, either as a substitute for or a complement to speech, and they are among the most misunderstood languages globally. Often they are viewed simply as a modified, slightly developed version of the system of gestures hearing people use to convey meaning. More extreme is the idea that sign languages simply replicate every word of a spoken language with a corresponding gesture. While this may seem a reasonable assumption, it denies the complexity that sign languages possess.
Neuroscientists have demonstrated that deaf people engage the same regions of the brain when signing as hearing people do when speaking. More interestingly, those not deaf from birth who have subsequently learnt to sign achieve this in the same way neurologically that bilingual speakers of any language do. In contrast, the standard gestures used in regular speech do not elicit the same neural response. This of course makes sense. Sign languages, unlike simply pointing or nodding one’s head, have complex grammar rules not present in simple gesticulation.
These rules are often very different to those of the spoken language of the country in which a sign language is primarily used. British Sign Language (BSL), for example, does not use the subject-verb-object (SVO) sentence structure typical of English. Rather, BSL typically uses the topic-comment structure common in East Asian languages. Where this is absent, BSL follows OSV word order, quite remarkable given that this is the least common sentence structure among spoken languages globally.
Similar to spoken languages such as Mandarin, BSL also lacks verb conjugation to express tense. Instead, expressions indicating time are used more frequently than in English in order to convey when an action occurred.
These differences of course arise because sign languages did not, with some exceptions, develop amongst communities of hearing people. For this reason, sign languages also differ greatly from one another. Standardised national sign languages arose in Britain and France around the same time in the eighteenth century. Naturally, sign languages existed before this, but they were far more localised.
In France, for example, a particular Parisian dialect was discovered by chance by the philanthropist and priest Charles-Michel de l'Épée, when he went into the home of two deaf sisters to escape the rain. Here, he learnt that they used a sign language to communicate both with each other and the wider deaf community within the city. He endeavoured to learn it, created a school for the deaf, and encouraged the church to engage with them. His advocacy work encouraged the spread of this dialect in France and abroad. In 1791, France became the first country to guarantee deaf people legal rights and protections, such as the right to interpreters, and defence in court.
French Sign Language went on to heavily influence most European sign languages, as well as those further afield in the United States, Brazil, and Canada. It is for this reason that British Sign Language and American Sign Language are mutually unintelligible, despite both countries speaking English.
Perhaps the most challenging concept for people who do not sign, however, is the use of non-manual motions and expressions to convey meaning. Some words use identical manual motions but are distinguished by differing facial expressions, such as 'doctor' and 'battery' in Dutch Sign Language. This presents difficulties for some second-language learners, who are sometimes inclined to vocalise signs to aid memorisation, which can distort their meaning.
The reason for this is simple: while many features of different languages can be difficult to grasp with increasing distance from one's own, to most people all languages share a common feature in that they consist of sounds produced by the mouth. Appreciating that this is not essential took society a very long time. In fact, before his chance encounter, the aforementioned Charles-Michel de l'Épée believed that deaf individuals could not achieve salvation, as they lacked all comprehension of language and therefore of the Gospel. While this thinking is now outdated, sign languages are often still viewed as little more than a substitute, rather than languages in their own right, which can lead to social ostracism and an inaccessible society.
Collective nouns are nouns which refer to a group of similar things as a singular entity; indeed, the word 'group' is itself a collective noun. They exist in most languages, and many are so common in everyday vernacular that their usage is entirely unremarkable to a native speaker. Likewise, while remembering that a collection of flowers is referred to as a bouquet may initially challenge a student of English, it soon becomes unnoticeable with enough exposure.
Other collective nouns – almost exclusively those referring to people or animals – are remarkably strange. A party of friars is a term that, although accepted, still seems rather quaint to most people. Yet such curiosities are a peephole into a distant past, and understanding their origins can reveal the inner workings of our ancestors' minds.
For example, a group of cobblers was historically referred to as a drunkship. This collective noun, itself a relic of a different era, makes apparent the perception of cobblers in medieval society. They were considered the lowliest of the tradesmen and, unlike richer merchants and skilled craftspeople, drank ale as opposed to wine. While they were still wealthier and of higher social status than the majority of the population, within towns and cities they were towards the bottom end of the social hierarchy, and their collective designation as a drunkship by the societal elites reflects this.
Beneath the cobblers, and indeed most people in medieval England, in status were prostitutes. The alliterative expression herd of harlots was intended to dehumanise and further stigmatise sex workers, likening them to farm animals. While even nobles frequented brothels, the women who worked in them were scorned by their contemporaries, regardless of their clientele.
While nuns had a rather different profession, their collective noun was not significantly kinder. A superfluity describes an excessive amount of something, and convents, crowded as they were, were thought to have an excess of nuns. Moreover, many in England believed that abbeys, monasteries and convents should be disbanded, or at the very least see their influence diminished, and as such deemed their inhabitants too great in number.
Yet these examples are thoroughly outdated to the modern reader, and one is unlikely to come across them even in a literary setting. Terms of venery, however, are still widely used, yet can appear bizarre even to those who have spoken English their entire lives.
Terms of venery are the collective nouns specific to a species or other grouping of animals. Some, such as a pride of lions or a school of fish, are unlikely to raise any eyebrows. Others, while notably less common, are still viewed as quite normal. A gaggle of geese is such a case, although intriguingly this only refers to geese wandering on the ground; in flight, the correct term is a skein.
Yet a more poetic turn of phrase, a murder of crows, is quite a startling name. This too has its roots in medieval folklore, yet unlike the strange collective nouns for people, it is still used today. Their tendency to scavenge livestock carcasses, as well as their dark plumage, gave crows a reputation as ‘creatures of the night’ and the superstitious considered them witches in disguise. With such a sinister reputation, the name murder probably seemed apt.
Most other terms of venery stem from the desire of the upper classes to prove their erudition and use specific terms for the groups of animals they would hunt. A herd of deer, cete of badgers, and husk of hares can all trace their origins to noble hunters.
But before you fear that our collective creativity dried up in the 1500s, know that the popular parliament of owls was only coined by C. S. Lewis in the 1950s, and remains in use today. While strange names for groups of almost every mammal and bird have already been assigned, there is a notable lack of unique collective nouns for insects, reptiles, and amphibians. If you wish to make a new word to stand the test of time, that might be a place to start.
A few weeks ago, I encountered a languages student’s worst nightmare.
I had been working for a startup in Rome for about a month but had barely understood a word of the casual Roman dialect which crackled around the office every day. Linguistically, I was operating on about the same level as the office dog, a British bulldog called Cecilio, both of us perking up and wagging our tails only when we heard our own names or the word ‘pranzo’ (lunch).
I had got the job about three months before by assuring them that I spoke Italian, knew how to use social media, and was willing to work for free for six months. I pictured them high-fiving on the other end of the line, thrilled with the deal they had just pulled off with this English student desperate for experience. Little did they know that the only social media account I had wouldn't look out of place in a line-up of Russian bots, and my Italian was so rusty it was practically on the scrap heap.
I was what they would call in Italy a 'Cavallo di Troia', except that instead of a highly-trained army inside there was an intern clutching a CV so exaggerated it may as well have been a Top Trumps card for the Incredible Hulk. They discovered this sooner than even I had expected when I arrived on my first day, sweating in the Roman summer heat, buzzed the front of the building, and confidently said what I thought meant 'hello, I'm the intern' but, as it turned out, translated as 'hello, I'm on the inside'. There were about five seconds of radio silence as the person at the other end must have thought: who the hell is in the building, how did they get inside, and why are they buzzing the door to tell us?
You'd be forgiven for thinking that, linguistically, it could only get better for me from here on out, but you'd be wrong. Over the next few days I asked if I could pay for lunch with farmers rather than cash (contadini/contanti), accidentally started a conversation with a colleague by asking if she was married in Rome (sposarsi/spostarsi), and asked another if he wanted to copulate with me for lunch (accoppiarsi/unirsi).
Amazingly, the worst was still to come: when asked during a work social if I liked the look of the pasta I'd ordered for dinner, I attempted to use a Roman expression I'd been taught meaning roughly 'it's no small thing' and instead replied 'yeah... well, it's not exactly pizza and pussy' (mica pizza e fichi/mica pizza e fica).
Yet after many more excruciating failures to get to grips with the dialect, my language ability eventually leap-frogged Cecilio’s to about the level of a toddler with slight hearing difficulties. Just as I was beginning to feel like the office translator rather than the office pervert, a colleague got out his phone, jabbered some Italian into the microphone, and the phone translated it aloud in perfect English. He turned to me, waving his phone in front of my face with glee and said what was probably ‘hey it’s the new iPhone update’ but all I heard was ‘hey, what the hell are you still doing here?’
During a few minutes of crisis, I considered, among other things, changing my degree, jumping out of the window, and/or emulating the previous intern who, speaking no Italian, had made himself indispensable to the office by snorting lines of pecorino cheese on work outings.
Short of inhaling cheese, I realised there are a few precious saving graces for languages students which translation software still can’t compete with, two of which can even be explained using the phrase: ‘let’s address the elephant in the room.’
The first is shown by words like ‘address’, which can mean several things in different contexts. You could, for example, ‘address’ the elephant in the room by talking about the elephant. You could also ‘address’ the elephant by talking to it, or even by putting an address and a stamp on the elephant and trying to get it the hell out of your room.
The second is idiom, the vast majority of which makes no sense when run through a translator. To my utter confusion on arriving in Italy, it's not uncommon to hear people say phrases like 'I did it dog's dick', 'what a fig', or cheerfully exclaim 'in the whale's ass!', which I eventually found out means 'good luck'.
Of course, the reasons for learning a language rather than using a translator are not only linguistic. Try flirting with someone using a translator, for example, and you’ll sound so formal that they’ll feel like they’re being hit on by the Pope and, in Rome, 5 times out of 10 this might not work in your favour.
So, to those who say that there’s no point in learning languages in the era of translation software, you’re welcome to spend your time abroad staring down at your phone, opening and closing your mouth like a trout after a botched lobotomy but I’m afraid I will no longer join you.
And I say to Google, Apple, DeepL and the many others that – from painful personal experience – they have a hell of a lot of problems yet to solve and, until they do, ‘in the whale’s ass’ to them all!
Suzette Haden Elgin’s Native Tongue is a work of feminist science fiction. Set for the most part in a 2205 dystopia long after the repeal of the Nineteenth Amendment granting women’s suffrage, Elgin’s novel is as much a linguistics thesis and social commentary as it is a work of science fiction.
In Native Tongue, Earth’s economy is intertwined with that of far-flung planets, and democracy has been replaced with a technocracy of sorts dominated by a quasi-dynastic clique of linguists. Their management of inter-planetary relations from trade agreements to peace treaties has afforded Earth’s linguists a degree of power unimaginable today – human prosperity is determined almost entirely by their successes and failures.
Yet through extensive reported speech, Native Tongue’s narrator exposes the reader to the fragility of their status. Simultaneously revered and despised by the populace for their influence and wealth, the linguists go to great lengths to placate the public, embracing a sort of asceticism to prevent challenge to the social order.
Their dilemma is most evident when a panel of linguists votes upon medical treatment for Nazareth, the novel’s principal protagonist, who since a young age has been an integral part of their operation. While she is certainly not viewed as equal to male linguists, there is evident affection for her, much like that one would have for a pet. Although saving her breasts would make but a tiny dent in their substantial funds, the panel ultimately votes against the operation, as spending their money on cosmetic procedures would trigger considerable backlash from the public.
Meanwhile, the female linguists of Barren House, an ostensible retirement home for women past their child-bearing years, are at work creating a language to subvert the patriarchal order – a movement in which Nazareth becomes increasingly involved throughout her life.
Notwithstanding the socio-political backdrop, and the varied, often humorous descriptions of alien species, it is Elgin's exploration of linguistic themes throughout the novel that is most intriguing. Drawing on her Ph.D. in linguistics, Elgin intersperses her novel with academic concepts and terminology, adding realism to a rather fantastical setting.
The second chapter, for example, begins with an excerpt from a fictional feminist linguistics manual:
‘The linguistic term lexical encoding refers to the way human beings choose a particular chunk of their world, external or internal, and assign that chunk a surface shape that will be its name; it refers to the process of word-making.’
Elgin takes this idea, firmly established and accepted by her contemporaries, and develops it further. She coins the term “Encoding,” which she uses to refer to the creation of words for those ‘chunks’ of the world which have been deemed undeserving of their own name. In Native Tongue, these are almost exclusively aspects of the female experience, overlooked in a profoundly sexist society.
Similarly, the government's justification of the breeding and often brutal training of children to be linguists (although it must be mentioned that children not 'of the Lines' are often subject to far worse fates) invokes the very real critical period hypothesis: the claim that language acquisition is biologically linked to age, and that infants are inherently more able to learn a language than adults. Although this theory is not unchallenged among modern linguists, it is another example of Elgin drawing upon her extensive education to present her dystopia as a plausible future.
Yet the linguistic concept which perhaps influenced Elgin’s novel the most is the Sapir-Whorf hypothesis: the idea that one’s language shapes one’s perception and worldview. This is the basis for Laadan, the constructed language used by female linguists to communicate amongst one another without male oversight.
The concept is so incendiary that Nazareth at first refuses to accept it, knowing what it could mean. She asks whether Laadan has 'one hundred separate vowels', making clear how ridiculous she finds the whole notion. While she has been aware of "Encodings" since childhood, the progression of Langlish, a mere pet project of the women of Barren House, to a pidgin and eventually a creole in the form of Laadan is as preposterous as it is enticing.
Despite this, the use of secret languages by oppressed groups and minorities is not unprecedented. When homosexuality was still criminalised in the UK, Polari was used, predominantly by gay men, to evade arrest and violence. Subtly inserting Polari words into conversation could alert others to the speaker’s sexual orientation in a safe way. Moreover, it allowed gay men to communicate in public spaces without being assaulted by either law enforcement or the general public.
Laadan goes further in that it is not only a language designed for secret communication, but also one which incorporates the aforementioned “Encodings” to designate ideas and objects significant to women, but not to men, as important. It is this aspect of Laadan which makes it dangerous – it broadens the thinking of its speakers. While Native Tongue itself does not delve into great detail regarding the vocabulary and grammar of Laadan, Elgin separately published a dictionary and grammar reference of the language which complements the novel, and asserts Elgin’s belief that a language created by and for women is a necessity. For its part, Native Tongue gives life and colour to this conlang, much as The Lord of the Rings did for Tolkien’s.
The novel does have numerous flaws, however. Aspects of its fundamental premise seem far-fetched, and the degree of gender essentialism can make the characters one-dimensional at times. At one point, Elgin attempts to garner sympathy for a murderess by virtue of her sex, and while a handful of male characters are granted some complexity, most are defined only by their misogyny, or worse.
While the premise is more interesting than that of other contemporary works of feminist fiction, the sometimes disjointed prose and incoherent structure prevent Native Tongue from achieving the same cult following as The Handmaid's Tale. Indeed, it is at times a challenging read, and a lack of direction can render entire chapters unenjoyable.
The conclusion is at best unfulfilling. While it strengthens the case that language is a powerful tool, and therefore dangerous to those atop the social hierarchy, it is deeply unsatisfying: without giving away too much, it renders decades of progress entirely futile.
While the concept behind Native Tongue is inspired, and some of the world-building is both entertaining and thought-provoking, Elgin's lack of experience as a fiction author is painfully apparent at times. Despite this, the linguistic themes are engaging and the questions posed compelling. For this reason alone, it is worth a read.
The periodic table can often seem senseless. Chemical symbols often bear no resemblance to an element's English name, and as one ventures through the lanthanides and actinides, even the names themselves appear to become progressively more absurd. Californium, anyone?
Yet all 118 of the presently discovered elements from hydrogen to oganesson have stories hidden in their names. Sometimes an epithet discreetly details an element’s properties; sometimes it offers a fascinating glimpse into an element’s history.
As it is the first of the elements, let us start with hydrogen. From the French hydrogène, it is a compound of the Greek hydor (ὕδωρ), meaning water, and the French suffix -gène, meaning 'producing'. The German name, Wasserstoff, is all the more explicit: literally 'water stuff'. Although its etymology is perhaps not the most complex or intriguing, it is certainly revealing. While it is common knowledge that water (H2O) comprises two hydrogen atoms and one oxygen atom, it is somewhat anthropocentric that the most abundant element in the universe is named after the property most apparently relevant to our existence.
The names of nitrogen, oxygen and carbon are similarly reductive. Nitrogen was so named when Jean-Antoine-Claude Chaptal found it to be present in nitrates, particularly nitre (potassium nitrate). His contemporary Antoine Lavoisier coined the alternative name azote, stemming from the Greek for 'no life', as nitrogen is an asphyxiant; it is now the common name in most Romance languages. Many other languages have developed their own names based on this same property, such as German (Stickstoff), Swedish (kväve), and Czech (dusík).
Oxygen, meanwhile, is a Hellenised form of the French principe acidifiant, owing to the fact that oxygen was originally considered an essential component of acid formation. Carbon, on the other hand, derives from the Latin carbonem via the French charbon, meaning coal.
More intriguing however are those names which describe an element’s discovery. Technetium’s name comes from the Greek tekhnētós (τεχνητός) meaning artificial or manmade, as it was the first element to be synthesised by humans. Promethium meanwhile evokes images of the (almost) eponymous Greek legend and pays homage to the fact that progress – here synthesising and discovering new elements as opposed to stealing fire – often requires great sacrifice.
Geographically inspired names are perhaps less inventive, but nonetheless offer some insight into their discovery. Scandinavia has the interesting accolade of being the region of the planet with the most elements named after it: erbium, hafnium, holmium, scandium, terbium, thulium, ytterbium and yttrium. All were first discovered by Swedes, except hafnium, which was identified in 1923 by Dirk Coster and Georg von Hevesy, who named it after Copenhagen, where they both lived. It was thereby the last stable element to be discovered.
Argentina, in contrast, is the only country to have been named after an element: silver. Its name is an abbreviation of the Italian Terra Argentina, or ‘land of silver’. The word itself was originally Latin, argentum, which explains silver’s symbol Ag on the periodic table. Au, the similarly incongruous symbol for gold, has Latin origins as well, being an abbreviation of aurum.
Today, new element names continue to be Latinised to maintain consistency and continuity, but naming is now overseen by the International Union of Pure and Applied Chemistry (IUPAC). While ultimate authority still nominally rests in the hands of the discoverer, the IUPAC offers guidelines, and steps in when necessary.
One such case arose in the 1960s, in a controversy now termed the Transfermium Wars. Scientists from both the Soviet Union and the United States claimed to have first discovered elements 104, 105, and 106. In attempting to broker a compromise by allowing element 108 to be named hahnium by the American team, the IUPAC only drew more international scorn, as a research team in Germany had undoubtedly been the first to discover elements 107 to 109.
Moreover, the name hahnium referred to Otto Hahn, who co-discovered nuclear fission with Lise Meitner. German scientists intended to name element 108 meitnerium as a tribute to Meitner, who was overlooked by the Nobel Prize Committee in favour of Hahn. Naming elements after both of them simultaneously would have once again overshadowed her accomplishments.
After a long process of committee hearings and tense compromises which involved rearranging the names of almost every element from 101 to 112, the present list was finalised. It was not only the pride of individual teams of scientists that was at stake in this dispute. The names chosen for elements reflect the values scientists wish to cement in history.
Whether it be righting a historical injustice in the case of meitnerium, recalling the sacrifices that progress demands in the case of promethium, or simply paying homage to one’s place of birth, a name records what its givers hold dear. That ‘a rose by any other name would smell as sweet’ may well be true of Romeo, but in classifying the elements nothing could be further from the truth. We have created from the building blocks of the universe a wondrous poetry. Today, the names of new elements are imbued with deep emotions, ranging from delight and enchantment to regret and melancholy. As Andrea Sella of University College London explains, ‘[t]here’s a tremendous romance to [naming elements]. Names are always important’.
WARNING: This article contains strong language. Words traditionally used to degrade specific groups, or considered shocking or blasphemous, are not intended to offend, and are used only in an academic and journalistic context in order to inform and educate.
A taboo is an implicit prohibition on something contrary to social norms, such as incest. In the context of spoken language, however, taboos are words or topics typically considered inappropriate for polite discourse. The etymology is rooted in the Tongan tapu and Fijian tabu. When James Cook visited Tonga, he commented that Tongans used the word to refer to anything that was ‘forbidden to be made use of’. Such was the nature of these taboos that the mere mention of them could be a grave violation of unwritten social codes.
Yet today, unlike other social taboos whose violation would quickly see an individual become anathema, swear words are ubiquitous. Only five per cent of British adults can go about their daily routine without being exposed to expletives, while nine in ten admit to using them personally. Making up between 0.5 and 0.7% of the average individual’s speech, they are about half as common as first-person pronouns.
Despite their pervasiveness, they are almost absent from writing. Notwithstanding quoted speech in fiction or the media, authors and academics rarely make use of these words. Unfortunately, this means that relatively few studies on profanity exist, despite much evidence suggesting it is as old as language itself. This is perhaps not surprising, given the difficulty of discussing such words without inducing either amusement or offence.
It is intriguing however that these words can be so offensive at all. Much as money is essentially just paper that we have collectively decided to ascribe arbitrary and inordinate value to, so too are swear words powerful by virtue of the reverence we collectively show them.
There are of course some features which unite swear words. Phonaesthetic clusters are groups of words with similar meanings as well as similar phonology. For example, English words related to light, reflectivity and luminosity make disproportionate use of the gl consonant cluster: glisten, glitter, gleam, gloss, glaze, etc. Here a combination of etymology and the tendency of phonaesthemes to inspire neologisms (new words) is at play.
Similarly, words like fuck, prick and cock make use of the ck consonant cluster, and expletives often have intensely stressed syllables. The euphemism ‘four-letter word’ exists because profanities of this length occur at roughly twice the rate of four-letter words among English words in general. Stephen Fry and Hugh Laurie made clever use of these phonological similarities in a sketch satirising the BBC’s restrictions on strong language, creating words like prunk, cloff, fusk and shote.
Yet both banknotes and passports are essentially just paper embedded with security features. One allows access into foreign countries; the other can be exchanged for groceries. Both are issued by authority of the state, but they are distinctly different. Similarly, fuck and muck, and cunt and hunt, share similar phonemes, are the same length, and are Germanic in origin, but evoke entirely different feelings and emotions.
Particularly in English, which often has pairs of words of similar meaning, one with Germanic roots and the other with Latin, such similarities are more likely to stem from etymology than from any endeavour to be onomatopoeic or vulgar. Indeed, no word is (nor can be) innately profane, and profanity is culturally relative.
In Harry Potter, calling Voldemort ‘he-who-must-not-be-named’ or ‘you-know-who’ is so ingrained into the wizarding psyche that even after his (apparent) death, Minister for Magic Cornelius Fudge cannot bring himself to speak his name to the muggle Prime Minister. Here an entirely made-up word has become taboo through association with a dark wizard, yet as Dumbledore says, ‘fear of a name only increases fear of the thing itself.’
Interestingly, profanity in human language is thought to have first arisen from the belief that words hold an almost magical power, and that their misuse would attract misfortune to the speaker. In the Second Wizarding War this was precisely the case: a spell, coincidentally called the Taboo curse, was used to track those who dared to speak Voldemort’s name, and Death Eaters would flock to them.
Some words, notably those considered blasphemous, are still believed by some groups to attract the attention of higher powers. The ancient Jews revered the name of God so greatly that it could be spoken only by priests inside the Temple. Similarly, many Italians believed the expression porco dio (God [is a] pig) to be offensive enough to warrant divine retribution.
Despite this, most swear words and profane expressions in Western cultures today are not believed to invite condemnation from an external force. Rather, they can be sorted into distinct albeit somewhat arbitrary categories: those used to express emphatic emotion, those used cathartically, idiomatic cursing, and those words intended to abuse or degrade.
Among these, only abusive or degrading profanity is considered almost universally taboo. Prince Andrew’s remark ‘that really is the nigger in the woodpile’ was widely criticised by media outlets of all persuasions, whereas Boris Johnson’s alleged outburst ‘fuck business’ was defended by some commentators. Unlike other curses, slurs derive their power from the desire of one person to belittle and dehumanise another.
Efforts to appropriate them exploit the same principle which downgraded the word bastard from the most severe of insults in Shakespeare’s time to one which can be used in jest today – by insisting the underlying insult is neutral or even positive. Queer was appropriated to such a degree that it now forms part of the acronym LGBTQ+, thereby limiting its ability to cause hurt.
Hermione similarly tried to appropriate the slur ‘mudblood’ in Harry Potter and the Deathly Hallows, declaring herself a ‘mudblood and proud’. Taking the wizarding equivalent of a racial epithet and wearing it as a badge of pride, Hermione denies her enemies the one weapon they have to degrade someone who is in every other way their superior.
Other swear words have undergone a similar process, but without the active effort of any particular group. Idioms like ‘the shit hit the fan’ and acronyms like ‘MILF’ and ‘WTF’ have somewhat normalised the usage of these words. This has made them less offensive, and studies suggest that, with the exception of a handful of words such as ‘faggot’, all traditional swear words are becoming less offensive with time. Data from Google Books’ Ngram Viewer confirms this:
<iframe name="ngram_chart" src="https://books.google.com/ngrams/interactive_chart?content=shit%2C+fuck%2C+cunt%2C+bitch%2C+faggot%2C+nigger&year_start=1800&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cshit%3B%2Cc0%3B.t1%3B%2Cfuck%3B%2Cc0%3B.t1%3B%2Ccunt%3B%2Cc0%3B.t1%3B%2Cbitch%3B%2Cc0%3B.t1%3B%2Cfaggot%3B%2Cc0%3B.t1%3B%2Cnigger%3B%2Cc0" width=900 height=500 marginwidth=0 marginheight=0 hspace=0 vspace=0 frameborder=0 scrolling=no></iframe>
Ultimately, as society (hopefully) becomes more tolerant, slurs should continue to fall out of usage. Although ableist and transphobic slurs are becoming more common, this is, with luck, a short-lived trend. At the same time, words concerning scatology and sex will likely become as normalised as hell or damn, to the point where they cause little shock. While profanity has always existed, it may have seen its last century as a powerful force in language.
- Bilingual brains - does age matter?
- The Pronoun Problem
- Sign Languages
- Crazy Collective Nouns
- Elephant in the Room: Will translation software make language studies extinct?
- Native Tongue: A review
- Element Etymology
- Why can’t I say that? The Origins, Evolution and Usage of Profanity.
- Who are you anyway? A Brief Look at Kinship Terminology
- Beauty is in the Eye of the Beholder?
- Interpreting for the Queen: Dr Kevin Lin’s Appointment to the School of Modern Languages
- The problem with Auxlangs
- Language Revitalisation
- Christmas Etymology
- Our Tower of Babel: What is a language?
- Gender Confused? Grammatical Gender Explained
- Dialectal Discrimination
- How the climate crisis is impacting language
- Feminisé.e: to what extent does gendered language affect our attitudes towards gender?
- The Three Japanese Writing Styles: Where they come from, what they’re used for and why they exist
- Italy: Division in Unity
- From schadenfreude to mudita: “Untranslatable” Words
- A Conversation in Ignorance