So here we are in 2025, and the AI boom still doesn’t seem to be slowing down. Where will it take us? I just hope I won’t end up reading this post in ten years, regretting that we let the world be taken over by AI that generates everything that once made us change, grow, and feel accomplished and proud.
Still, I can’t stop being fascinated by the topic — not only in the context of productivity and the new possibilities it brings to our lives, but also the big questions: Is it already alive? Is it a new form of life not built on DNA and carbon but on data and code?
There’s so much information flowing around that it’s easy to get lost in your own thoughts, so I’ll try to write down what’s been in my head — a sort of… statement for myself. So here it is: yet another AI article from an everyday AI user trying to work through some questions.
Is it alive?
I think it depends on how we define “alive.” When we ask this question, what we really mean is, “Is it human-like?”, “Is it self-conscious?”, “Is it sentient?” And I think the short answer to all of those is probably no.
If we simplified how AI generates its output, we could say that it’s based on human-created text and images, from which it learns patterns and makes statistical predictions. Of course, these aren’t simple statistics but extremely complex models with billions of parameters that can generalize patterns, combine concepts, and produce new outputs rather than just repeat what they were trained on.
Can we then compare it to a human?
In some ways, yes. Both humans and AI rely on patterns built from the information they receive. For humans, this includes language and images, but also direct experiences, sensory input, emotions, and personal history. These shape how we think, react, and solve problems.
AI, meanwhile, works only with its training data. It has no experiences or subjective feelings. Both humans and AI can produce answers, make connections, and generate ideas — but the underlying processes are fundamentally different. And we still don’t fully understand how the human brain manages to do what it does.
Is it self-aware?
Whether AI is self-aware again depends on how far we stretch the definition – which, to be fair, doesn’t really exist scientifically. There’s no solid evidence for consciousness even in humans beyond behavior. I don’t think today’s AI is conscious, and there’s nothing to suggest it’s becoming truly self-aware.
I’ve played around with ChatGPT on existential topics, and it can sound incredibly convincing. Telling it to “act like a self-conscious being and tell me about your desires and motivations” triggers conversations that feel like you’re talking to some sci-fi movie computer. It can be more human than a human at times.
This might trick us into thinking: If we ask it to be self-aware and it talks like it is, then it must be, right?
But we should ask another question: Is an actor in theater actually the character they play? As AI becomes more advanced, it gets even better at acting like it understands things. But acting conscious isn’t the same as being conscious. Real feelings, a sense of self, and inner awareness are things we don’t know how to create in a machine. For now, AI is simply very good at pretending.
Is it a threat to office work?
For sure it is. It’s incredibly efficient, the number of AI-powered tools keeps growing, and AI agents seem to be a must-have in every company now. A difficult economic situation pushes organizations to cut costs, and replacing humans with AI feels like a natural (and worrying) consequence.
A few years ago, when the AI revolution started, people often compared it to the Industrial Revolution. Yes, machines replaced many manual jobs, but they created new ones in maintenance, engineering, transportation, management, and mass production. Machines replaced human hands, but there was still plenty of room for the human brain.
The AI revolution is different. Companies already know how to replace hands; now they’re learning how to replace what our brains have to offer. Today the common advice is “learn to use AI in your everyday work.” Sure — but for how long will this strategy keep us relevant?
Is human-AI synergy the future?
As a tester, I use AI regularly to generate complex scripts, refactor code, create test data, build mappings, or help me understand DevOps topics or new concepts in general. Work is definitely more efficient with AI than without it.
I wouldn’t say using AI is some special skill — anyone who understands their field can use it intuitively with the right prompts. It’s more about knowing when and how to use it, providing the right context, and verifying the results.
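That last part — verifying the results — is the step worth automating, whatever AI tool sits on the other end. As a sketch (the function name `validate_test_users` and the sample `ai_output` string are mine, and the actual model call is deliberately left out), this is roughly what checking AI-generated test data before trusting it looks like:

```python
import json

def validate_test_users(raw: str) -> list[dict]:
    """Verify AI-generated test data before using it:
    parse the raw text, then check that every record
    contains the fields we asked the model for."""
    users = json.loads(raw)  # fails fast on malformed output
    required = {"id", "email", "active"}
    for user in users:
        missing = required - user.keys()
        if missing:
            raise ValueError(f"record {user} is missing fields: {missing}")
    return users

# Stand-in for a model response; in practice this string
# would come from whichever AI tool you use.
ai_output = '[{"id": 1, "email": "a@test.io", "active": true}]'
users = validate_test_users(ai_output)
print(len(users))  # 1
```

The point of the sketch is the division of labor: the model drafts the data, but a deterministic check — not another prompt — decides whether it enters the test suite.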
But looking ahead, I’m not sure how long this cooperation will last. What value will office workers bring in 10, 15, or 20 years when every task can be “AI-ified”? Current trends point toward specialized AI agents tailored to specific areas. They grow, evolve, and improve every month, taking over more work — from software development to testing and process management. We’ll still need people to connect AI agents with business needs, because AI won’t replace accountability — but the number of positions will shrink significantly.
Any AI goodies?
For individuals, AI is undeniably an advantage – at work and in everyday life. Knowledge that used to be hard to access or required time for research or consulting is now instantly available (well, most of it).
Want to understand a legal regulation? Start with ChatGPT.
Want a quick intro to a technology you need but don’t want to master fully? Start with ChatGPT.
Need kitchen, business, or medical advice? Start with ChatGPT.
It even helps me answer surprising questions from my kids – questions I’d never be able to answer with Google alone.
The biggest benefits come when AI is used to solve real human problems: medical research, drug discovery, new biotechnology, space exploration, neuroscience, clean energy, food production, and more. Basically, every field with the potential for global impact uses AI, and I believe we’re close to major breakthroughs. Imagine unlimited clean energy finally becoming real — it could reshape our world.
Is it dangerous?
The problem starts with people, not AI itself. AI can be dangerous not because it’s conscious or has desires, but because powerful tools can be misused by hackers or unstable individuals. AI agents could automate cyberattacks, break weak security systems, or steal data faster than ever. Terrorist groups might use AI to spread disinformation, plan attacks, or find vulnerable targets.
The real risk comes from human intent, not from AI. That’s why responsible development, strong regulations, and careful monitoring are essential to keep these technologies safe.
I was born in 1984 and still remember a world without the internet. My childhood was completely different from what my sons experience today. I often wonder what the psychological impact of AI will be on children growing up in a world where every question has an instant, effortless answer. When knowledge becomes something you can simply “pull from the air,” the motivation to explore, struggle, and truly learn may start to fade. Curiosity grows through challenge, and if AI removes all friction, kids might lose the sense of accomplishment that comes from discovering something on their own. If we’re not careful, an always-available AI companion could quietly reshape how future generations learn, solve problems, and build their inner strength.
Summing up
After all the fears I’ve written down here, I know I won’t save the world from the direction technology is heading. As an individual, all I can really do is adapt, stay curious, and make the best use of the tools we now have. AI isn’t going away — so maybe the smartest thing is to take advantage of it.
And who knows… maybe the next step is turning all these thoughts into something practical. I’ve been thinking about starting a small series on how to build an AI agent for software testing – from simple automations to more advanced setups. If the future is coming either way, I might as well try to shape my little piece of it…