Before I get into the meat of this post, I’d like to provide some background for those of you who care to listen.
Recently, I was assigned a project that seemingly required using generative AI to “review” our blog posts and “improve” them. I firmly refused, reviewed the post myself instead, and submitted that. Mr. Pool seemingly wasn’t happy with this, so he assigned me to create the article you’re reading now. It was a bit hard to scrounge up time in between classes to actually finish it, but alas.
What is it?
The acronym “AI” stands for Artificial Intelligence: the concept of creating a machine that can think and learn the way a human can. Even before the birth of computers, humans have been intrigued (or, alternatively, scared) by this idea. Modern AI systems are almost always generative, a newer form of AI and the focus of this article. Generative AI is designed to create “new” content based on its input, which sets it apart from more basic machine learning algorithms, which primarily focus on predicting what comes next and work with smaller, stricter datasets.
Digital Theft
One of the biggest issues with AI is that, in order to appear lifelike (if you could call it that), it requires massive amounts of data to train on. So most generative AI models crawl the internet and collect whatever data they can find, almost always without the permission of the humans behind that data. For example, Charlotte artist Elliana Esquivel, who relies on her artwork as her sole income, had her work put into one of the largest datasets available to AI models, and in an interview with WCNC stated, “I’ve tried to get it taken down and everything, but it doesn’t really matter, because it’ll just get scraped again, and it’ll end up back on the website. It’s kind of a dystopian thing to be dealing with.” Chatbots are not exempt from this, either. Websites used by authors to host novels and fanfiction are regularly raided for training data, without the original authors having any say in it. The big AI companies do this on purpose, knowing they can get away with it because of their size.
Environmental Issues
As AI has become one of the biggest industries in the US, huge AI datacenters have popped up across the country. These centers hold hundreds of thousands of devices and servers, and those devices require a lot of power just to piece together sentences and images. In fact, according to an article by Goldman Sachs, a single ChatGPT query uses around 10 to even 20 times the power of a regular Google search. The most common energy source for these datacenters is fossil fuels, which, when burned, release greenhouse gases that pollute the air and accelerate climate change. AI centers also use massive amounts of fresh water, which, to cut costs, is usually discharged into local water supplies. This raises serious concerns not only for the environment but also for residents of surrounding areas who rely on those water supplies for drinking.
One Intelligence to Another
One of the biggest pull-factors of AI is its convenience. Why take hours writing an essay when this bot can just make it for you in 30 seconds? I don’t know what to do about this thing, so I’ll let ChatGPT make up all of these ideas! No need to make your own decisions or choices, when it’s just so much easier to let the algorithms do it. Well, using AI repeatedly can lead to cognitive decline and, at some point, even dependence on AI tech. An MIT study from 2025 showed that people who wrote essays using only search engines or just their own brains showed significantly more brain connectivity than those who used an LLM (Large Language Model, like ChatGPT).
An Insult to Life Itself
While the quote from Hayao Miyazaki was about another form of machine learning, it applies just as well to generative AI. In all works, whether artistic, musical, or literary, there is at the very least a glimpse of the person who created them. The chaos, the grammar mistakes, the wording: they are all part of the human behind the work. In work done by AI, that humanity is stripped away, only hinted at by prompts and copied from existing work. The “Turing Test” is a well-known thought experiment that asks how easily someone can tell an AI from a human. Almost always, the human can identify the machine due to one simple factor: the lack of humanity that artificial intelligence inherently brings with it wherever it’s used. AI tends to sterilize, to break things down and paint them corporate white before putting them back together with school glue sticks and masking tape. The mere fact that a soulless robot like ChatGPT could take what you have made and spit out a copy with all of its life sucked away is disturbing in itself. The consequences it produces only further this.
Generative AI is a category of program that I personally wish to stay far away from at any stage of my work. My opinion on this isn’t going to change easily, especially without major changes to how generative AI fundamentally works and how the industry operates. Until then, I’d rather use my own brain than give it away to false humans.
(This article is part of a school project; however, it reflects my own views and was written by hand.)