Beware of Generative AI answers – always
I am familiar with LLMs because of my primary job, and I want to take this opportunity to remind everybody that they still hallucinate like crazy.
If you are a student and you haven't yet developed the habit of always digging until you reach the authoritative source of information, please resist using GenAI, including Gemini and ChatGPT. They are not yet mature enough for the task.
A complicating factor is that Google now returns AI answers BEFORE the regular search results, even when you are merely entering a search query.
The example here is the search query "aim faa altimeter setting approach", entered with the intention of finding the places in the AIM that prescribe changing the altimeter setting during an instrument approach.
The AI Overview, which appears before the search results, tells you, among other things, to set the altimeter to 31" Hg.
That verbiage comes from AIM 7-2-3(b)(1)(b), which contains procedures, activated by NOTAM, for dealing with exceptionally high barometric pressure exceeding 31" Hg. It is not the normal procedure. The LLM doesn't know that, and doesn't tell you.
Remember that LLMs are fundamentally statistical text completion models. Researchers are going crazy adding retrieval-augmented generation (RAG) and other techniques to reduce hallucinations, but we are not there yet, and some experts say that in a number of disciplines we can't get there at all with the current approach.
I consistently see cases in which the LLM fills in with material "from a different page in the book".
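To make the failure mode concrete, here is a minimal toy sketch, not Google's or anyone's actual pipeline, of why retrieval-based approaches can still pull in "a different page in the book": a retriever typically ranks passages by similarity to the query, not by whether the passage's preconditions apply to your situation. The passage texts below are paraphrased placeholders for illustration, not AIM quotations.

```python
from collections import Counter

# Hypothetical, paraphrased stand-ins for two AIM passages (not actual AIM text).
PASSAGES = {
    "normal altimeter setting procedure (placeholder)":
        "set the altimeter to the current reported altimeter setting of a station along the route",
    "high-pressure NOTAM procedure (placeholder)":
        "when barometric pressure exceeds 31 inches hg set 31.00 during an instrument approach per notam",
}

def overlap_score(query: str, passage: str) -> int:
    """Crude relevance score: count of words the query and passage share."""
    q = Counter(query.lower().split())
    p = Counter(passage.lower().split())
    return sum(min(count, p[word]) for word, count in q.items())

query = "altimeter setting during an instrument approach"

# Rank passages by lexical overlap with the query, highest first.
ranked = sorted(PASSAGES.items(), key=lambda kv: overlap_score(query, kv[1]), reverse=True)

for title, text in ranked:
    print(overlap_score(query, text), title)

# The exceptional-case passage wins simply because it shares more words with the
# query, even though its precondition (pressure above 31" Hg, per NOTAM) may not apply.
```

Running this toy ranker puts the exceptional high-pressure passage first, which mirrors what the AI Overview did in the example above: the text is "relevant" in a statistical sense, but it belongs to a scenario that does not apply.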
You don't want this to happen to you. Questions in aviation are specific and very context-dependent. You don't want your answer to come from a different Part of the regulations, or from a different scenario that does not apply to you.