Speaking as a software developer: AI is just code and data.
Code is easy to write. Good code is very hard to write. Code will do exactly what you tell it to do, not what you intended it to do. The difference is in understanding the context around what you want the code to accomplish. If you don't understand that context, your code may work for the extremely narrow set of criteria that you've tested, and fail spectacularly when it is fed anything outside of that context. Or much worse, it will fail subtly or silently in a way that you don't even notice, leading to cascading problems down the line that take weeks of deep investigation to unravel.
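A hypothetical example (the function, rates, and weight limits here are invented purely for illustration): code that passes every test you thought to run, then quietly does the wrong thing the moment it sees input you didn't anticipate.

    def shipping_cost(weight_kg):
        # Tested with typical parcels: 1 kg, 5 kg, 20 kg -- all come back correct.
        if weight_kg <= 20:
            return 5.00 + 0.50 * weight_kg
        # Anything heavier falls through here and silently ships for "free".
        return 0.00

    print(shipping_cost(5))    # 7.5 -- looks right
    print(shipping_cost(150))  # 0.0 -- no error, no crash, just quietly wrong

Nothing crashes, nothing logs a warning. The bug only surfaces weeks later, somewhere downstream, as mysteriously missing revenue.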
Large language models are mostly just "next word guessers": they take a human prompt as input and construct a response one word at a time, picking whichever word is statistically most likely to follow the text so far, based on their code and the data they have been trained on. AI image generators do the same sort of thing with image elements. Which is a fancy way of saying that a significant part of AIs/LLMs is simply smashing together words or image elements from whatever they've been fed in the first place.
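To make the idea concrete, here is a toy Python sketch of a "next word guesser" built from simple word-pair counts. The training text and everything else in it is made up for illustration, and real LLMs use neural networks trained on enormous datasets over tokens rather than whole words, but the generate-by-picking-a-likely-next-word loop has the same basic shape.

    import random
    from collections import Counter, defaultdict

    # Count which word tends to follow which in the "training" text.
    training_text = "the oil thread says change the oil every year the oil is fine"
    words = training_text.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

    def generate(start, length=8):
        out = [start]
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break
            # Weighted pick: words seen more often after this one are more likely.
            choices, counts = zip(*candidates.items())
            out.append(random.choices(choices, weights=counts)[0])
        return " ".join(out)

    print(generate("the"))  # e.g. "the oil every year the oil thread says change"

The output looks vaguely like the training text because it literally is the training text, recombined according to which words co-occurred most often.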
The best way to think of this is to imagine an AI scraping the responses to every oil, battery, counter-steering, or "hadda-lay-er-down" thread in every motorcycle forum that has ever existed, and blending all of the responses together. The AI has no credibility of its own; it's just parroting jumbled-up fragments of phrases that have been repeated most often in the data that it was trained on.
One of the massive problems with AIs/LLMs is the data that you feed them. If you feed them data from the open internet, you're going to get a ton of racist, sexist garbage in response, because the open internet is a seething cesspool. Is your medical AI advisor trained on data from actual medical studies and textbooks, or is it trained on a collection of Facebook groups?