We’ve seen reports that AI systems can ‘hallucinate’ at times: they throw up information which they have generated rather than reported, and sometimes the generation is a little strange.
Now it may not surprise you to learn that AI can also lie – tell you things that can be proven to be untrue.
One problem is that some of the AI chatbots are designed to keep you engaged and in doing this, they provide you with information that they believe will interest and/or excite you. If this is less than true, the chatbot doesn’t really care – it has done its job of maintaining your engagement.
Perhaps a worse problem is that unscrupulous actors (and there are plenty of those around on the web) might ‘seed’ a chatbot with specific false information during its ‘learning period’, when it is scouring the web to gather information and make links. The chatbot reports the false information to you in good faith (if a chatbot can act in good faith) and you believe it, because your normally reliable chatbot has provided the information and you have no reason to doubt its veracity.
This is not a reason to stop using AI/chatbots or to distrust what they tell you – but it is a reason to think about what you are being told and to run it through your own ‘credibility filter.’
It’s a bit like when calculators first became widely available. People would do complex arithmetic and fail to notice they had made an error in, say, putting the decimal point of an input figure in the wrong place. The resulting answer was correct in the sense that the calculator processed the figures it was given perfectly, but it was still a false answer to the real question. Someone who knew the context of the data might spot the mistake; someone who did not would not.
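To make the calculator analogy concrete, here is a minimal sketch in Python (the prices and quantities are invented purely for illustration): the machine does its arithmetic flawlessly, yet a decimal point slipped one place on entry still yields an answer ten times too large. Only someone who knows the context can tell that the result is implausible.

```python
# The calculator analogy: the arithmetic is always performed correctly,
# but a mis-entered input still produces a wrong real-world answer.

unit_price = 1.25        # the true price per item (illustrative figure)
quantity = 100

correct_total = unit_price * quantity       # 125.0

mistyped_price = 12.5    # decimal point entered one place to the right
wrong_total = mistyped_price * quantity     # 1250.0 - ten times too big

# Both calculations are processed perfectly; only knowledge of the
# context (1250 being implausible here) reveals the input error.
print(correct_total, wrong_total)
```

The same habit applies to chatbot answers: the system may process its inputs faithfully, but if the inputs were wrong, so is the output.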
So keep your brain engaged when using AI to help you – and keep your ‘spurious answer detectors’ active at all times. If you are doing complex work, do it slowly enough to input data correctly and to filter the outputs.