Frequently Asked Questions
AI Misconceptions
Misconception:
AI works exactly like the human brain and thinks like a human.
Clarification:
John McCarthy, who coined the term "artificial intelligence", defined it as "the science and engineering of making intelligent machines". More broadly, AI is often described as the simulation of human intelligence in machines designed to think and act like humans.
This means AI systems are built to mimic some human abilities, such as recognising speech, understanding language, seeing and identifying objects, making decisions, or solving problems.
However, AI is not the same as human intelligence.
AI can make decisions based on its programming and the data it has been trained on. For example:
• A self-driving car might decide to change lanes if the current lane is blocked and another is clear.
• A recommendation system might suggest a song or video based on a student’s listening history.
But unlike humans, AI doesn’t have emotions, self-awareness, or the ability to understand meaning the way people do. It doesn't have free will or moral judgment. It can’t make decisions outside of what it’s been taught or programmed to do.
Even when AI appears to act independently, it is still following rules set by humans. Its "thinking" is actually a complex process of calculations and pattern recognition, not conscious or intentional thought.
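To make this concrete, here is a minimal Python sketch (all names and rules invented for illustration) of the lane-change example above. Real self-driving systems use learned models rather than hand-written rules like these, but the principle is the same: every action traces back to logic and objectives supplied by humans.

```python
# A hypothetical sketch of the lane-change example above. Every
# "decision" is a human-written rule applied to inputs; nothing
# here is conscious or intentional.

def choose_lane(current_lane_blocked: bool, adjacent_lane_clear: bool) -> str:
    """Return a driving action based on fixed, human-defined rules."""
    if current_lane_blocked and adjacent_lane_clear:
        return "change lane"   # rule written by a human engineer
    if current_lane_blocked:
        return "slow down"     # fallback rule, also human-defined
    return "stay in lane"

print(choose_lane(current_lane_blocked=True, adjacent_lane_clear=True))   # change lane
print(choose_lane(current_lane_blocked=True, adjacent_lane_clear=False))  # slow down
```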
In short, AI can imitate some parts of how humans think and act, but it doesn’t actually "think" like we do. It is a powerful tool, but it does not have a mind of its own.
Misconception:
AI is the same as robots.
Clarification:
No, AI is not the same as robots.
While films and media often portray AI as humanoid robots, most AI today is software-based. It powers voice assistants like Siri and Amazon Alexa, recommends videos on YouTube and Netflix, filters spam in your inbox, and helps detect fraud in banking – all without a robotic body.
Most AI is embedded in apps, websites, and devices we use every day. Robots may use AI, but AI itself is a much broader technology that exists far beyond the realm of robotics.
So, AI isn’t just about robots – it’s a tool that enhances many parts of our digital world.
Misconception:
AI is infallible and provides fair, unbiased outputs.
Clarification:
No, AI is not always accurate or unbiased.
AI systems learn from data, and if that data contains errors or biases – as real-world data often does – the AI can make mistakes or reinforce those biases. For example, if an AI model is trained on hiring data that reflects past discrimination, it may unintentionally favour certain groups over others.
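A toy sketch can make this concrete. The Python below (using entirely invented data) "trains" the simplest possible hiring model – one that scores candidates by historical hire rates – and it faithfully reproduces the discrimination baked into its training data.

```python
# A toy, hypothetical illustration of bias inherited from data.
# The "model" learns historical hire rates per group and scores
# new candidates accordingly.
from collections import defaultdict

# Invented records reflecting past discrimination: (group, was_hired)
history = [("A", True), ("A", True), ("A", True), ("A", False),
           ("B", False), ("B", False), ("B", False), ("B", True)]

hired, total = defaultdict(int), defaultdict(int)
for group, was_hired in history:
    total[group] += 1
    hired[group] += was_hired

def score(group: str) -> float:
    """Predicted 'suitability' is simply the historical hire rate."""
    return hired[group] / total[group]

print(score("A"))  # 0.75 -- the model favours group A
print(score("B"))  # 0.25 -- and disadvantages group B, mirroring the past
```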
Additionally, AI doesn't understand context the way humans do. It can misinterpret information, struggle with unusual cases, or be misled by poor-quality inputs.
So, while AI can support decision-making and improve efficiency, it is not infallible. Human oversight, transparency, and careful design are essential to ensure AI systems are used responsibly and fairly.
Misconception:
AI learns by experiencing and understanding in the same way that humans do.
Clarification:
No, AI does not “learn” the same way humans do.
Humans can learn something brand new from just a few examples, draw connections across different situations, and apply knowledge flexibly.
AI, on the other hand, "learns" by processing large amounts of data to identify patterns and make predictions.
For example, both humans and AI can learn to recognise a cat in an image, but the way they learn is fundamentally different.
A human child might see a few cats – real ones, drawings, or toys – and quickly understand what a "cat" is. They don't memorise what a cat looks like; rather, they develop an intuitive, conceptual understanding that includes how cats move, sound, and feel. In contrast, an AI needs to be trained on thousands or even millions of labelled images to recognise a cat. It doesn't truly understand what a cat is – instead, it simply detects pixel-level statistical patterns that match the cat examples in its training data.
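The difference can be shown with a deliberately oversimplified sketch. The Python below (with invented pixel values) labels an "image" by comparing raw numbers against stored, labelled examples – a nearest-neighbour classifier in miniature. Real image models are vastly more sophisticated, but they too match statistical patterns rather than grasping what a cat is.

```python
# A hypothetical, miniature version of pattern-based "recognition".
# Images are reduced to short lists of pixel values; classification
# is just a distance comparison -- no concept of "cat" exists anywhere.

training_data = [
    ([0.9, 0.8, 0.1, 0.2], "cat"),   # invented pixel summaries
    ([0.8, 0.9, 0.2, 0.1], "cat"),
    ([0.1, 0.2, 0.9, 0.8], "dog"),
    ([0.2, 0.1, 0.8, 0.9], "dog"),
]

def classify(pixels):
    """Label a new image by its nearest training example (1-NN)."""
    def distance(example):
        return sum((a - b) ** 2 for a, b in zip(pixels, example[0]))
    return min(training_data, key=distance)[1]

print(classify([0.85, 0.8, 0.15, 0.2]))  # "cat" -- matched by pixel pattern alone
```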
So while AI can produce results that resemble learning, it lacks the rich, adaptable, and meaningful learning process that defines human intelligence.
Misconception:
AI is fully autonomous and can improve itself indefinitely.
Clarification:
No, AI cannot learn and improve entirely on its own without human intervention.
While some AI systems can adapt through methods like reinforcement learning, they still require human input to define goals, provide training data, adjust models, and evaluate performance. Even the most advanced systems don't set their own objectives or understand the real-world impact of their actions. They rely on humans for direction and correction.
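A minimal sketch makes the point. In the toy reinforcement-learning loop below (all names and numbers invented), notice how much is human-supplied: the reward function that defines the goal, the update rule, and the learning rate. The system optimises only what its designers tell it to optimise.

```python
# A hypothetical, minimal reinforcement-learning loop.
import random

def reward(action: str) -> float:
    """The goal, written by a human: action 'b' pays off more often."""
    return 1.0 if random.random() < (0.8 if action == "b" else 0.3) else 0.0

estimates = {"a": 0.0, "b": 0.0}   # the agent's learned value estimates
LEARNING_RATE = 0.1                # chosen and tuned by a human

for _ in range(1000):
    action = random.choice(list(estimates))  # try an action
    # Human-designed update rule: nudge the estimate towards the reward.
    estimates[action] += LEARNING_RATE * (reward(action) - estimates[action])

print(max(estimates, key=estimates.get))  # almost always "b" -- but only
                                          # because humans defined the reward
```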
Moreover, AI does not improve indefinitely on its own. It hits limits based on the quality of data, the design of algorithms, and the context in which it's used. Without ongoing human oversight, updates, and ethical guidance, AI systems can become outdated, biased, or even harmful.
In short, AI is a powerful tool, but not a fully autonomous or self-improving entity. Human involvement remains essential at every stage.
Misconception:
AI can create original ideas and think creatively like humans.
Clarification:
No, AI cannot think creatively like humans.
AI can generate content that appears creative, such as writing, music, or art, by remixing patterns it has learned from existing data. However, it doesn't have consciousness, emotions, personal experiences, or intentions, all of which are essential for genuine human creativity.
Human creativity involves imagination, curiosity, intuition, and the ability to make novel connections across unrelated ideas. AI, on the other hand, relies on statistical patterns and past examples. It doesn’t truly generate original ideas. It predicts likely combinations based on what it has seen before.
For example, Midjourney generates images by learning from a vast dataset of existing artworks, capturing patterns in composition, colour, texture, and style. It excels at producing visually striking images that mimic or blend the styles of famous artists, genres, or specific prompts. However, it does so by recombining what it has already seen rather than truly inventing something new. In contrast, human artists can develop novel styles that redefine what art is – as Picasso did in pioneering Cubism, or as Van Gogh did with the expressive brushwork that helped define Post-Impressionism. These innovations often emerge from personal vision, cultural critique, or emotional exploration – qualities AI does not possess.
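This "remixing" can be shown in miniature. The Python sketch below (a toy Markov chain trained on an invented sentence) generates text purely by predicting which word tends to follow the current one. Modern generative models are enormously more capable, but they too produce output by predicting likely continuations of what they have already seen.

```python
# A hypothetical sketch of generation as next-word prediction: a Markov
# chain that only ever recombines word pairs it has already seen.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat saw the dog".split()

# Learn which words follow which -- the model's entire "knowledge":
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 6) -> str:
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # predict a likely continuation
    return " ".join(words)

print(generate("the"))  # e.g. "the cat saw the dog" -- novel-seeming, yet
                        # every word transition was copied from the training text
```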
In short, while AI can assist with and inspire creative work, it does not think creatively in the human sense. Creativity remains a fundamentally human trait.
Misconception:
AI will completely replace humans in the workforce.
Clarification:
AI will not replace all human jobs.
While AI is increasingly capable of performing specific tasks, especially those that are repetitive, data-driven, or rule-based, it lacks the broader human capacities for creativity, critical thinking, empathy, and complex social interactions that many jobs require. For example:
• Therapists rely on emotional intelligence and deep human connection to support people through mental and emotional challenges, which AI cannot authentically replicate [1].
• Teachers do more than deliver content; they adapt to students' needs, build trust, and foster motivation [2].
• Writers, artists, and designers generate original ideas and embody cultural expression in their work, pushing beyond imitation or repetition.
• Leaders and managers navigate interpersonal dynamics, inspire teams, and make nuanced decisions in unpredictable contexts.
These roles illustrate the enduring value of human insight, emotion, and adaptability – dimensions where AI, despite its impressive capabilities, still falls short.
Instead of complete replacement, the more accurate scenario is job transformation. AI will automate certain aspects of jobs, allowing humans to focus on higher-value or more interpersonal tasks. New roles will also emerge as AI creates demand for oversight, ethical guidance, development, and integration of these technologies.
In short, AI will change the nature of work, but it won’t eliminate the need for human workers. The future will likely involve collaboration between humans and AI, not a total handover.
References:
[1] Chan, C. K. Y. (2025). AI as the therapist: Student insights on the challenges of using generative AI for school mental health frameworks. Behavioral Sciences, 15(3), 287.
[2] Chan, C. K. Y., & Tsi, L. H. (2024). Will generative AI replace teachers in higher education? A study of teacher and student perceptions. Studies in Educational Evaluation, 83, 101395.
Misconception:
AI is either a force for good or a dangerous threat.
Clarification:
No, AI is not inherently good or bad.
AI is a tool, and like any tool, its impact depends on how humans design, use, and regulate it. It can be used for good, such as improving healthcare, education, and accessibility, or for harm, such as spreading misinformation, invading privacy, or enabling biased decision-making.
The key lies in human responsibility: how we choose to build, apply, and govern AI systems. Ethical design, transparent practices, and strong oversight are essential to ensure that AI benefits society while minimising risks.
In short, AI itself isn’t good or evil. It reflects the values and intentions of the people who create and use it.