Computer pioneer Alan Turing's 1950 remarks on the question "Can machines think?" were misquoted, misinterpreted, and morphed into the so-called "Turing Test". The modern version says that if you can't tell the difference between communicating with a machine and communicating with a human, the machine is intelligent. What Turing actually said was that by the year 2000 people would be using words like "thinking" and "intelligent" to describe computers, because interacting with them would be so similar to interacting with people. Computer scientists do not sit down and say, alrighty, let's put this new software through the Turing Test - by Grabthar's Hammer, it passed! We've achieved Artificial Intelligence!
I think Searle's Chinese room argument, published in 1980, gives a pretty convincing reason why the Turing test doesn't demonstrate intelligence.
That just shows a fundamental misunderstanding of levels. Neither the computer nor the human in the room understands Chinese. In both cases, though, the program they are running does.
The programs don't really understand Chinese either. They are just filled with an understanding that was provided to them up-front. What I mean is that they don't derive that understanding from anything they perceive where there was no understanding before; they don't draw conclusions or work out words from context... the way an intelligent being learns a language.
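To make the distinction I'm drawing concrete, here's a rough Python sketch (purely my own illustration, nothing from Searle's paper; the phrases and the toy corpus are made up). The first program only "knows" whatever its author typed into a table; the second is given no meanings at all and groups words purely by the contexts it observes them in.

```python
from collections import defaultdict

# 1) Understanding supplied up-front: every response is hand-coded.
#    (Hypothetical example phrases, chosen just for illustration.)
CANNED_REPLIES = {
    "ni hao": "hello",
    "xie xie": "thank you",
}

def canned_translate(phrase: str) -> str:
    # The program "knows" only what the programmer wrote into the table.
    return CANNED_REPLIES.get(phrase.lower(), "???")

# 2) Understanding derived from observation (toy version): group words by
#    the contexts they occur in, with no meanings supplied at all.
def context_profiles(sentences):
    profiles = defaultdict(set)
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words):
            # Record the immediate left and right neighbours of each word.
            profiles[w].update(words[max(0, i - 1):i] + words[i + 1:i + 2])
    return profiles

corpus = ["the cat sat", "the dog sat", "the cat ran", "the dog ran"]
profiles = context_profiles(corpus)
# "cat" and "dog" end up with identical context profiles, so the program can
# treat them as the same kind of word without ever being told what they mean.
print(profiles["cat"] == profiles["dog"])  # True
```

Obviously that second toy doesn't "understand" anything in a deep sense either, but it at least gets its structure from what it observes rather than from a rulebook written in advance, which is the difference I'm pointing at.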
Nothing in the thought experiment says that the program doesn't work that way. If the program really seems, to an outside observer, to understand language, you would assume it learned the language that way.