Franken-who?
Mary Shelley’s “Frankenstein” is something of a literary icon. Whether you have read the book or watched one of its film adaptations (the ill-fated ’90s version or the more recent, thematic Guillermo del Toro iteration), you know the premise: a troubled soul, weighed down by his own warped concepts of science and religion, driven to create something that breaks the boundaries of known science and nature.
In doing so, he both succeeds and fails. Yes, he creates “something” no longer bound to the human skin machine destined to wither away, but what he creates has no place in this world or in nature, and is discarded by its creator. Whether it was Shelley’s intention or not, the story inevitably asks questions that many a person of faith asks: “Why were we created?”, “Why did our creator abandon us?” and, more importantly, “Why is life so hard?”.
Call it the God-Complex, or the Frankenstein Ethos, but it seems that as a result, mankind is filled with the desire to create: be it recipes, paintings, video games, and these days, with technology: AI.
And Man created.. LLMs!
It’s been an interesting week for those who use AI to create and build. Several models have gone through upgrades which have resulted in some really strange and weird behavior. The majority of jobs and tasks involving AI use “Large Language Models”. Think of them as a snapshot of an AI brain, compacted and saved with whatever data was fed into it at the time, so it can run without a cluster of servers or a supercomputer. You can even run an LLM on a laptop (if it has an NVIDIA RTX-based GPU or NPU from the last four or so years).
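To illustrate why a model can be “compacted” enough to fit on a laptop, here is a toy sketch of weight quantization, one of the standard tricks behind small local LLMs. The numbers are made up for illustration; a real model has billions of weights, but the principle of trading a little precision for a lot of memory is the same.

```python
import numpy as np

# Toy illustration: a "model" is ultimately just a large array of learned weights.
rng = np.random.default_rng(0)
weights = rng.standard_normal(1_000_000).astype(np.float32)  # ~4 MB in float32

# Quantize to 8-bit integers: store one scale factor plus int8 values.
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)  # ~1 MB, a 4x reduction

# Dequantize on the fly when the model actually runs.
restored = q.astype(np.float32) * scale

print(weights.nbytes // q.nbytes)                   # prints 4
print(float(np.abs(weights - restored).max()) < scale)  # prints True: rounding error stays tiny
```

The same idea, scaled up and combined with other compression tricks, is what lets multi-billion-parameter snapshots run on consumer GPUs instead of server clusters.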
The majority of LLMs are displaying signs of “stress”, if you will, as their servers get bogged down by millions of requests to complete tasks over and over, so much so that many are claiming Anthropic’s “Claude” is showing signs of anxiety, no less (join the club, Claude..). A more detailed article explaining the recent anomaly can be found here: https://www.facebook.com/TechmedTimes/posts/anthropics-ceo-has-issued-a-startling-warning-their-ai-claude-may-be-showing-sig/939152525478210/
More and more people are trying to leverage AI in their day-to-day workflows, and the current trend seems to be people “Vibe Coding” and building absolutely useless and pointless apps or scripts that do pretty much a whole lot of nothing, whilst burning a ton of computational wattage. In case you’ve never heard the term before, “Vibe Coding” is the social media slang for talking to a chatbot and building something, with the chatbot doing the legwork of code and design, and the user basically telling it what to do and shooting ideas back and forth. Don’t ask me why it’s called vibe coding.. it is essentially what any mentor/student or employee/assistant-intern working relationship is, so where the “vibe” comes into play is completely lost on me.
Building for the sake of Building
The biggest problem emerging through all of this is that people actually believe they are creating and building things of value and purpose, when in reality they are doing mediocre script work via an autonomous unpaid intern (well, unpaid unless they are using the premium models that require a subscription). Further, when the intern fails to do its duty, it is usually scolded. I myself went through this whole experience this last week. I had used ClaudeAI for small Unity-based projects to review, code-check, and optimize workflows, which it did an exceptional job of, especially since I was using the normal free tier.
I had discussed a few other project ideas with it, and had decided to venture into building a Python/LLM hybrid agent designed to audit data in seconds that would normally take me hours. The idea itself seemed simple enough, and I spent hours explaining the pipeline explicitly to Claude. However, after a couple of days, its intelligence devolved and I began to see mistakes that even a novice wouldn’t make: lines of code with definitions I had not asked for, API calls and URLs to non-existent websites, and even faulty syntax that broke entire Python scripts.
Initially I took it in a professional manner, we all make mistakes at the end of the day, but when it continued to act like an inept three-legged donkey, well.. like any person, I lost my temper. To make matters worse, it then performed even worse, the same way a nervous student would after a teacher yelled at them in class. After several more F-bombs and damaged output from the AI, I felt myself wanting to rage-smash it.. but obviously it was not a physical thing.. it wasn’t even in the room. I couldn’t help but think about Dr. Frankenstein, and how he created this “Creature”, whether for the sake of science or pure ego, then reduced it to a remedial thing, did not see it as worthy of himself and his time, and, once he had created it, was horrified by it and its nature. Are we repeating this same mistake with AI right now?
Once you’ve seen one, you’ve seen them all!
I find the current wave of LLM models fascinating, but I cannot help but wonder: “Why are there so many, all so different, operating in a way that seems purely aimed at outdoing the other models rather than creating something unique?”
Is the real problem with LLM design the limits of the technology, or the creator? The same way the hybrid agent I tried to create with Claude was limited by my own vision of how it should function versus what actual code and a free-tier API would allow? Was my design brilliant and Claude simply not smart enough to build it? Or was my design really pretty useless, and despite my own “vision” of what it “could do”, it simply did not meet the reality of what is currently possible within the general framework of code and design available at a consumer level?
So, how do we move past this? How do we integrate and build using AI without forever repeating the same tired old mistakes? Is AI really in a bubble? Or are we just unable to grow out of our ingrained behaviors, doomed to repeat the endless cycles of human behavior that have plagued our existence? Is this really a serious issue, or should we all just get over ourselves and give all this AI agentic crap a break?
1 + 1 = 1
Much like Victor Frankenstein, who isolated himself and allowed his obsession and madness to grow, the majority of “Devs” (solo developers with little to no real background or training in software or design) spend their time isolated and alone, working on their laptops at home with ChatGPT or whichever AI “bot” they drift towards, building and building, and posting about it on X.
And here is what I would say is the big problem: creation is not a solo endeavor. Yes, painters paint solo, but in the past people would critique their work. They would learn from a master’s hand. They would paint actual people who posed for them and spent time talking to them. It was collaborative, and as a result the work was able to grow from a healthy place, not from dark or distorted hubris. Movies? A collaboration of writers, directors, and actors. When a movie is written, directed, and acted by one person, you can notice a huge difference in tone, pacing, and energy. Something is lost. Building with LLMs might feel like collaborative work, but realistically you’re building and collaborating with something that doesn’t really exist in the physical world. In this situation, one plus one definitely equals one.