Impact of Artificial Intelligence on Language and Society

The current round of AI hype is driven by GPTs (generative pre-trained transformers) and other large language models. The major innovation over earlier Markov-chain generative models is that current models hold context: they readjust the probability of the next token based not just on the current token but on the entire context window, allowing more selective generation by effectively pruning unlikely paths, assigning them vanishingly small selection probabilities.
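The contrast can be sketched in miniature. Below, a toy Python example compares a bigram Markov chain, where the next-token distribution depends only on the current token, against a context-conditioned sampler that matches the longest suffix of the whole window. The suffix-matching function is a crude, hypothetical stand-in for attention over a context window, not how a transformer actually works; it only illustrates how conditioning on more context prunes candidate continuations.

```python
import random
from collections import defaultdict

# Toy corpus; real models train on vastly more data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Markov-chain model: next token depends ONLY on the current token.
bigram_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def markov_next(token):
    """Sample the next token conditioned on a single preceding token."""
    followers = bigram_counts[token]
    return random.choices(list(followers), weights=list(followers.values()))[0]

def contextual_next(context):
    """Condition on the longest matching suffix of the whole context.

    A longer match prunes more candidate continuations, a rough analogy
    (NOT the actual transformer mechanism) for how attending over the
    full window sharpens the next-token distribution.
    """
    for k in range(len(context), 0, -1):
        suffix = context[-k:]
        followers = [corpus[i + k] for i in range(len(corpus) - k)
                     if corpus[i:i + k] == suffix]
        if followers:
            return random.choice(followers)
    return random.choice(corpus)
```

Given only the token "cat", the Markov model is torn between "sat" and "ate"; given the full context "the cat ate", the suffix matcher finds exactly one continuation, "the", showing how a longer conditioning window collapses the space of plausible next tokens.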

This context strategy makes modern chatbots quite good at generating text that approximates speech and, in some cases, where training on a specific string of context provides a strong signal, at producing accurate statements. The combination of better autocompletion over longer contexts and the ability to 'get the right answer' in some cases has prompted opportunistic assessment across virtually every profession that values written communication.

LLMs have seen rapid adoption across many economic sectors, shifting the individual's role to that of interviewer: providing multi-shot prompting to elicit an answer, then reviewing that answer to determine whether it will work. This lowers the floor for entry into a field, because the amount one can learn through LLM-facilitated discovery is much greater. However, it has also flattened human communication, because so much of it now passes through the sieve of generative artificial intelligence.

This carries a clear ethical implication: the loss of dialectic and semantic diversity, and the cultural cost of flattening language. We all begin to sound the same, and no one is entirely sure whom we sound like, because we are an amalgamation of the voices that came before us, assimilated, often unwillingly.