The Intricate Tapestry of ChatGPT Texts: Why Do LLMs Overuse Some Words at the Expense of Others?

In this Post

Ever wondered why ChatGPT texts are heavy on certain words? This article talks about word frequencies in fake posts. We’ll review some examples and introduce a vocabulary-based ChatGPT detector.

Try free online ChatGPT detector

Try free online Keyword Extractor and Text Analyzer

If you’ve been wondering why ChatGPT texts are heavy on certain words, you are surely not alone. There is a recent thread on Reddit dedicated to OpenAI’s «intricate tapestry» phenomenon, as these words keep appearing across replies to prompts. An anecdotal vocabulary of ChatGPT’s favourites also includes the words «intricacy», «vibrant», «breathtaking», «innovative», and so on. ChatGPT will write about «catering» to the needs of clients, making something «seamless» and suggesting «a no hassle solution»… As we shall see below, there is also a remarkable place for the «t-word» in ChatGPT’s replies to prompts. «Tapestry», yes, we are getting there.

In the vibrant, dynamic, multifaceted, kaleidoscopic and multidimensional world of AI, one linguistic generator program stood out from the rest: ChatGPT. A testament to its algorithm, the program sought to weave intricate threads of information to create a rich tapestry of knowledge. (Reddit user)

You can download data in this post by cloning chatgpt_corpus.

In the world of AI-generated texts there is a big issue: spam that originates from ChatGPT and other LLMs. Generally speaking, it is the problem of automatically detecting potentially useless texts generated by LLMs. In this post, I would like to share some of my findings from a collection of about 2K texts created by ChatGPT. We will look at certain lexicographical features that become evident when you compare AI texts with human ones.

You can view the ChatGPT collection in the semascope viewer, which graphically shows how words relate to each other: here.

Collecting data

Discovering a website with a large amount of generated content turned out to be an easy task. It was as simple as typing the search query «collection of AI-generated texts». And that is exactly how I found a web page titled «Smarhon: A Journey into Belarus’ Untouched Cultural Heritage». It screams, ‘ChatGPT wrote me’, and you will see why from this sample:

As you wander through the charming streets of Smarhon, make sure to marvel at its architectural treasures. The Church of St. Michael the Archangel, built in the 18th century, stands as a symbol of the town’s religious heritage. Its stunning frescoes and intricate wood carvings are a sight to behold. Another architectural gem is the Smarhon Castle, which dates back to the 17th century. Once a powerful fortress, it now houses a museum that offers a glimpse into the town’s past. Explore its exhibition halls, which display artifacts highlighting Smarhon’s historical significance. Immersing in Nature’s Beauty…

There are many thousands of posts like this; they literally go on without end. I found myself looking at the site’s sitemap.xml. Reading the URLs clearly suggests that the posts are iterations of machine-written and rewritten text, over and over again: a kind of fake promo about a travel destination.

Once the sitemap.xml file is downloaded, we can create a list of URLs and pass it over to wget:

curl -s https://THEWEBSITE/wp-sitemap-posts.xml | grep -oP '<loc>\K[^<]*' > urls.txt
wget -i urls.txt -P ./local-directory --reject 'jpg,jpeg,png,gif,css,js'

The --reject option is used to exclude certain file types (images, CSS, and JS files), as we only need the texts.

Preparing text files

In a few hours, not without impatience, I got a collection of about 2000 posts from the «Smarhon» website. I went on to parse the HTML into plain txt, as the layout kept the actual posts (without the navigation elements) inside <p> tags.

Here is the Python script that was used for parsing:

import os
from bs4 import BeautifulSoup

def extract_text_from_html(html_content):
    soup = BeautifulSoup(html_content, 'html.parser')
    paragraphs = soup.find_all('p')
    return '\n'.join(paragraph.get_text(separator='\n') for paragraph in paragraphs)

def process_html_files(directory):
    file_counter = 111  # Starting number for output files
    for filename in os.listdir(directory):
        input_path = os.path.join(directory, filename)
        output_path = os.path.join(directory, f'{file_counter}.txt')

        with open(input_path, 'r', encoding='utf-8') as file:
            html_content = file.read()

        extracted_text = extract_text_from_html(html_content)

        with open(output_path, 'w', encoding='utf-8') as output_file:
            output_file.write(extracted_text)

        print(f"Processed: {input_path} -> {output_path}")

        file_counter += 1

if __name__ == "__main__":
    # Set HTML files directory here:
    html_directory = '/local-directory'
    process_html_files(html_directory)

Now we have the posts as clean txt files, although some additional cleanup is required to get rid of noise. An inspection of the parsed data showed that some pages were in German, while most were in English. What should we do? At first I was stuck with this problem, because there was no way I was going to check every file manually.

The good news is that we can use Python to check the language of each text automatically. First, install the required module, langdetect:

pip install langdetect

Well done. And here is the script to do the language detection; the comments give you a clue of what’s going on.

import os
from langdetect import detect

def detect_language(text):
    try:
        return detect(text)
    except Exception:  # langdetect raises on empty or undetectable input
        return "Unknown"

def process_txt_files(directory):
    for filename in os.listdir(directory):
        if filename.endswith('.txt'):
            input_path = os.path.join(directory, filename)

            # Read the content of the file
            with open(input_path, 'r', encoding='utf-8') as file:
                content = file.read()

            # Detect the language of the content
            language = detect_language(content)

            if language == "en":
                # If the detected language is English, prefix the file name with EN-
                new_filename = f'EN-{filename}'
                output_path = os.path.join(directory, new_filename)
                os.rename(input_path, output_path)
                print(f"Renamed: {input_path} -> {output_path}")
            else:
                print(f"Ignored: {input_path} (Language: {language})")

if __name__ == "__main__":
    txt_directory = '/local-directory'
    process_txt_files(txt_directory)

Please do not name the script langdetect.py, as such a name will interfere with the module it imports. I gave the script a different name and ran it with python3 from the Linux console. It all worked surprisingly fast; the script properly inspected the data, and after a brief checkup I moved the files starting with EN- to a separate folder:

cp ./local-directory/EN* ./local-directory/EN/

The preparation part is almost over, so read on.

Building frequency list for AI-generated corpus

Now we have a small corpus of AI-generated texts, taken from what seems to be an SEO spam website with many thousands of ChatGPT junk posts. You can download it here.

And now, finally, we can start with the data mining, namely a corpus-linguistic analysis. Our research questions are as follows:

  1. Which words are underrepresented in the ChatGPT corpus?
  2. What lexis is overrepresented in AI-generated texts?
  3. Can we distinguish actual human texts from ChatGPT output? What is the difference?

There are many such questions, and they can only be answered with sufficient data, so let’s try playing around with our LLM spam corpus of about 1.3 mln words.

A few notes on the text file we are going to analyze. Each line contains a separate web article, so there are as many posts as there are lines in the file: exactly N=1922 shitposts. The texts do not repeat each other verbatim.

So, the big text file with all the GPT posts is ready; let’s see how many tokens it contains:

$ wc -w ./chatgpt-corpus/chatgpt-replies-small-corpus.txt 
1325123 ./chatgpt-corpus/chatgpt-replies-small-corpus.txt

Python script to create word frequency lists

A wordlist is a frequency list: words listed with the most frequent first, descending to the least frequent. But raw counts alone are often useless. Instead, we need to estimate the probability of meeting a word in our little corpus of SEO spam documents. For this purpose we take the number of tokens (individual word occurrences in the text) and calculate the relative frequency of each word, i.e. the ratio of the word’s count to the total number of tokens.
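As a toy illustration (made-up tokens, not the actual corpus), relative frequency and the derived per-billion rate look like this:

```python
from collections import Counter

# Toy example: relative frequency = word count / total number of tokens.
tokens = ["the", "tapestry", "of", "the", "town"]
counts = Counter(tokens)
total = len(tokens)

rel_freq = {word: n / total for word, n in counts.items()}
per_billion = {word: f * 1e9 for word, f in rel_freq.items()}

print(rel_freq["the"])      # 2 out of 5 tokens -> 0.4
print(per_billion["the"])   # 0.4 * 1e9 -> 400000000.0
```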

There are a number of word rank lists, or word frequency vocabularies, for different languages, and you can get this data from the web; Wikipedia is a good starting point. To compare how probable we are to find a word in AI-generated texts versus natural texts, I chose the list built from Project Gutenberg books. It dates back to 2006, well before AI-generated texts flooded the web, so we can safely assume the texts are «pure human». This is what Wikipedia writes about the list: «These lists are the most frequent words, when performing a simple, straight (obvious) frequency count of all the books found on Project Gutenberg».

The PG list that we are going to use contains the top 40,000 English words as seen in all Project Gutenberg books in April 2006, good enough for our purpose. The list gives word frequency in a ‘per billion’ format; fractions are limited to two decimal places and the words are case-insensitive.

Let us now extract data from our ChatGPT corpus, using the same format as in the Project Gutenberg list:

import re
import csv
from collections import Counter

def calculate_word_frequency(file_path, output_path):
    # Read the content of the file
    with open(file_path, 'r', encoding='utf-8') as file:
        text = file.read()

    # Tokenize the text using a simple regex
    tokens = re.findall(r'\b\w+\b', text.lower())

    # Calculate word frequencies
    word_frequencies = Counter(tokens)

    # Total number of tokens
    total_tokens = len(tokens)

    # Calculate relative frequency and induced frequency per billion tokens
    result = []
    for word, frequency in word_frequencies.items():
        frequency_per_token = frequency / total_tokens
        frequency_per_billion_tokens = frequency_per_token * 1e9
        result.append((word, frequency, frequency_per_token, frequency_per_billion_tokens))

    # Sort the result by frequency in descending order
    result.sort(key=lambda x: x[1], reverse=True)

    # Write the result to a CSV file
    with open(output_path, 'w', newline='', encoding='utf-8') as csvfile:
        fieldnames = ['Word', 'Frequency', 'Frequency per Token', 'Frequency per Billion Tokens']
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)

        # Write the header
        writer.writeheader()

        # Write each row
        for word, frequency, freq_per_token, freq_per_billion in result:
            writer.writerow({
                'Word': word,
                'Frequency': frequency,
                'Frequency per Token': freq_per_token,
                'Frequency per Billion Tokens': freq_per_billion
            })

# Example usage: replace the file paths with your own
calculate_word_frequency('/local-folder/EN/chat-gpt-EN.txt', 'gpt-results.csv')

The resulting file, gpt-results.csv, contains a ChatGPT word frequency list based on about 2000 texts harvested on the web, albeit from one particular website. This is also the way to expand the ChatGPT corpus with new AI-generated texts, should anyone want to do that.

At the next stage, we need to merge the PG word frequency list and our vocabulary file, gpt-results.csv.

You can do that by executing the following UNIX one-liner, so wait no more and go back to your command line prompt:

$ awk -F',' 'NR==FNR{a[$1]=$0; next} $1 in a {print a[$1] "," $0}' <(sort PGrank.csv) <(sort gpt-results.csv) > CompareRanksGPTvsPG.csv

On a side note, awk -F',' means ‘use comma as a field separator’. The idea is to sort the two lists we are going to merge, then keep only the words that exist in both lists, along with the corresponding word frequency statistics from the two original files.
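If awk is not your thing, the same inner join can be sketched in Python with the standard csv module (the function name and file paths here are placeholders, not part of the original pipeline):

```python
import csv

def merge_on_word(file_a, file_b, out_path):
    # Load the first list into a dict keyed by the word in column 1,
    # mirroring awk's a[$1] = $0.
    with open(file_a, newline='', encoding='utf-8') as fa:
        first = {row[0]: row for row in csv.reader(fa) if row}

    # Keep only words present in both lists and concatenate the rows,
    # mirroring awk's: $1 in a {print a[$1] "," $0}
    with open(file_b, newline='', encoding='utf-8') as fb, \
         open(out_path, 'w', newline='', encoding='utf-8') as fo:
        writer = csv.writer(fo)
        for row in csv.reader(fb):
            if row and row[0] in first:
                writer.writerow(first[row[0]] + row)
```

Like the awk one-liner, this keeps the word column from both files, which is why the word appears twice in the merged table.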

The data file that I got (and you can download it later) has the following columns:

Word | Frequency per Billion 1 | Word | Frequency | Frequency per Token | Frequency per Billion 2 | Ratio

… and so on. There are 4604 words in it, each attested at least once in the ChatGPT corpus. That is slightly over 10 per cent of the top 40,000 common English words from Project Gutenberg books!

Frequency per Billion 1 is the Project Gutenberg probability of finding the word, and Frequency per Billion 2 is the same for our 2K-post ChatGPT corpus. Ratio is Frequency per Billion 1 divided by Frequency per Billion 2; I calculated it as a measure of how many times a word in the ChatGPT word list is over- or underused in comparison to the Project Gutenberg word frequencies. The table is sorted by this ratio, so you can immediately recognize some of ChatGPT’s favourites, including ‘breathtaking’, ‘effortlessly’, ‘innovative’, ‘showcase’, ‘immerse’. So far, very ‘informative’ and ‘no hassle’!

Tapestry is seriously overused by ChatGPT

Attention, now we come to the really fun part!

How about the word ‘tapestry’? Is it really used that often in ChatGPT posts? It turns out that in a sample of about 2000 texts generated by ChatGPT, the word ‘tapestry’ is used at a rate of 102,959 words per billion, whereas in the Project Gutenberg corpus the same word is 25 times less common (4099.65 words per billion).
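The ‘25 times’ figure is simple arithmetic on the two per-billion rates quoted above:

```python
# Per-billion rates for 'tapestry' from the two lists quoted in the text.
gpt_per_billion = 102959     # ChatGPT corpus
pg_per_billion = 4099.65     # Project Gutenberg list

overuse = gpt_per_billion / pg_per_billion
print(round(overuse))  # about 25
```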

Here are some more examples of English words overused by ChatGPT:

Word | Frequency per Billion 1 | Word | Frequency | Frequency per Token | Frequency per Billion 2 | Ratio

The situation gets even more dramatic with some notorious words from the ‘Reddit lexicon of ChatGPT’. Here are a few examples from the data. The word ‘intricate’ was used 115 times more often in the ChatGPT sample than in Project Gutenberg books; ‘vibrant’ shows a shocking 1260x increase; ‘extravaganza’, a 25x overuse. The award goes to ‘breathtaking’: in the sample data it is the top word overused by GPT, occurring at a rate of 659,812 words per billion in ChatGPT texts, 36,261 times more than in texts written by us, humans. ‘Testament’ is 228 times more frequent in GPT texts, ‘landscape’ 10 times, and so on; see the full data in the repo files.

What you read in this post is implemented as a vocabulary-based ChatGPT detector. You can try the ChatGPT Detector here.

Words systematically underused by ChatGPT

Let us further ‘delve into the intricacies’ of ChatGPT lingo. Our sample of AI-written texts reveals an interesting list of underused words. These are words that appear unnaturally rarely in ChatGPT posts when compared to the corpus of Project Gutenberg books.

This table lists words that occur in ChatGPT posts at least 200 times less often than the same words in the Project Gutenberg corpus:

Word | Frequency per Billion 1 | Word | Frequency | Frequency per Token | Frequency per Billion 2 | Ratio

The pronoun ‘he’ is over 11,500 times more frequent in natural texts than in ChatGPT replies; ‘her’, about 3600 times; ‘reason’, 317 times; ‘god’, 254 times; ‘purpose’, 223 times. As one might expect, these words are rarely seen in typical ChatGPT replies.

The word ‘woman’ was not seen in the ChatGPT sample at all. Surprisingly, the pronoun ‘she’, one of the most common words in the English language, was not attested either. You can check this by downloading the data used in this post; to do that, clone chatgpt_corpus.

Are word frequencies in ChatGPT texts and in natural texts correlated?

An important question about the quantitative nature of AI-generated texts is whether the rate of vocabulary use is related at all to what we find in natural language. The short preliminary answer is yes, but the correlation is not very strong. First, a few words on the quality of the Project Gutenberg word frequency list. The PG list from Wikipedia can apparently be trusted, because if you compare word frequencies (per billion) in the PG list and in a comparatively large human text, the values are strongly correlated. Using the same script as in «Python script to create word frequency lists» above, I gathered statistics from Tolstoy’s War and Peace and compared the Frequency per Billion 1 and Frequency per Billion 2 columns, the first from the PG list, the second induced from War and Peace. Here is the result: Pearson’s r is very strong, 0.96, with a highly significant p-value < 0.00001.
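For readers who want to reproduce the correlation check, here is a minimal Pearson’s r in pure Python; the per-billion rates below are made up for illustration and are not the actual corpus data (scipy.stats.pearsonr does the same job and also returns the p-value):

```python
import math

def pearson_r(xs, ys):
    # Plain Pearson correlation coefficient: covariance over the
    # product of standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy per-billion rates for five words in two frequency lists:
pg  = [56271872, 25000000, 19000000, 4099.65, 120]
gpt = [40964921, 22000000, 15000000, 102959, 300]
print(pearson_r(pg, gpt))  # r is close to 1 for these toy values
```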

Now for the ChatGPT vocabulary compared with the PG list: Pearson’s correlation was r=0.078 with p < 0.00001. The p-value looks impressive, but the correlation itself is weak. And if you look at the scatter plot below, you can see a handful of outliers, namely the most common English words: ‘the’, ‘and’, ‘to’, ‘of’. They make an impact on the statistic. These words occur at more or less the same rate in both lists (we are comparing ChatGPT words and the Project Gutenberg list). For example, ‘the’ shows at 56,271,872 words per billion in PG and 40,964,921 in the ChatGPT sample; at least the same order of magnitude. ‘And’ plus a few other function words exhibit the same behaviour in the AI-generated sample of 1.325 mln tokens that we gathered on the web.

ChatGPT samples compared with Project Gutenberg frequency list

War and Peace compared with Project Gutenberg frequency list

Finally, let’s see some descriptive statistics for the word lists. In War and Peace, when compared against the PG list, the ratio of observed to expected word frequency (the ratio we were previously talking about) has a mean of 1.82 and a median of about 0.97. That means half of the words in War and Peace that are also among the 40,000 most frequent English words show roughly the same probability of occurrence as in the PG list. The same cannot be said of the GPT corpus: in the AI-generated sample, the mean is 17.25 and the median is 1.64. These descriptive statistics suggest that AI-generated texts are fundamentally different from human writing.
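The gap between mean and median is what a heavy right tail of overused words does to a distribution. A toy sketch with Python’s statistics module (made-up ratios, chosen only to mimic the shape of the real data):

```python
from statistics import mean, median

# Made-up observed/expected ratios: mostly near 1, with one extreme outlier.
# A long right tail pulls the mean far above the median, just as in the
# ChatGPT corpus (mean 17.25 vs median 1.64).
ratios = [0.5, 0.9, 1.0, 1.1, 1.2, 2.0, 150.0]

print(median(ratios))  # 1.1 - the typical word behaves normally
print(mean(ratios))    # ~22.4 - the outlier dominates the mean
```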

You may now be wondering what exactly the difference is between ChatGPT texts and natural data. We’ll continue investigating this question in the next post. I will show that the difference lies in the distribution of word frequencies: a preliminary analysis of the textual data indicates that ChatGPT avoids word aggregation, or ‘burstiness’, so common in natural language. So please read about it in the next post.

Oh, and since you’ve finally got here: what does ChatGPT have to say about the «intricate tapestry» phenomenon? I presented the results of this research to ChatGPT to see its replies and explanations. What I learned from a few prompts is in the next post!

Text: Alexander Sotov
Comments or Questions? Contact me on LinkedIn

