***there exists no such physical experiment for consciousness, for the simple reason that consciousness is defined as non-physical by nature. Even an epiphenomenon of the brain is still not physical and therefore not measurable.***
Science routinely concerns itself with 'non-physical' phenomena. In fact, from a certain philosophical perspective, everything is 'non-physical'. We don't actually observe galaxies; we observe the light emitted from galaxies many thousands, and sometimes millions or billions, of years ago. We don't actually observe sub-atomic particles; we observe their statistical effects on measurable parameters. Similarly, with consciousness we are observing a phenomenon, but we are looking at it in terms of measurable observables.
True, we must define consciousness at least to a degree where observables are possible, but there are observables of, or related to, consciousness that are deemed important aspects of it. For example, self-awareness, self-examination, self-control, tool-making, etc. are often considered measurable attributes either related to or components of consciousness. We measure the development of many of these attributes in children, in chimpanzees, even in crows.
In fact, in order for a word to have meaning, it must express some referenceable phenomenon, so that we can say "aha, I know what you mean by that word". Those referenceable phenomena are observables. This is why I have good reason to believe that you and the majority of the human race are conscious like I am. If I saw behaviors that gave me the idea that you were not conscious (e.g., you had a severe case of autism), then I might conclude, by the way we use the word, that you had a low level of consciousness.
***As the Chinese Room thought experiment conclusively demonstrates...***
I disagree. What the Chinese Room thought experiment demonstrates is that in order to make an artificial lifeform into a conscious entity, we have to solve the philosophical problem of what it means to say we understand a solution to a problem or set of problems. The Chinese Room is not as directly related to the problem of consciousness as it is to the problem of understanding.

This is a problem of reductionism. That is, we cannot currently reduce our understanding of a situation to anything less reduced than simply reciting what we understand. For example, if asked to reduce my understanding of the phrase "what does it mean for Harv to say he is hungry", all I can do is say it means that I understand there is no food in my stomach, that the pain I feel in my stomach is because there is no food in my stomach, and so on. These reductions are just restatements of what I understand.

With the computer, this is not the case. If a computer is asked "what does it mean for Harv to say he is hungry", it can recite all the same phrases as I can, but it does so by looking in a table to translate my question into machine language, doing whatever the program says to do when it encounters that particular code, and then executing that code (e.g., consulting a table that holds the instructions to say all the things I just said). What doesn't happen, from everything we know about computer software, is that the computer actually understands what it means for Harv to say he is hungry, at least in a manner convincing enough that we know it understands the meaning of the question. The solution to this problem is not so evident. There are many replies to Searle's thought experiment, and I don't know whether anyone has actually solved it, or even whether it is a valid issue to solve.
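The table-lookup behavior described above can be sketched in a few lines. This is only an illustrative toy, not anyone's actual system: the table entries, the function name, and the normalization step are all hypothetical. The point is that the program produces the "right" answer purely by matching symbols against stored rules, with no representation anywhere of what the symbols mean.

```python
# Hypothetical sketch of a Chinese-Room-style responder: pure symbol
# lookup, no understanding. The table maps a normalized question to a
# canned response, exactly like the rulebook in Searle's room.
RESPONSE_TABLE = {
    "what does it mean for harv to say he is hungry":
        "It means there is no food in his stomach, and that the pain "
        "he feels is because there is no food in his stomach.",
}

def chinese_room(question: str) -> str:
    """Return a response by table lookup alone."""
    # Normalize the incoming symbols (case, whitespace, trailing '?'),
    # then consult the rule table; fall back if no rule matches.
    key = question.strip().lower().rstrip("?")
    return RESPONSE_TABLE.get(key, "I do not have a rule for that input.")

print(chinese_room("What does it mean for Harv to say he is hungry?"))
```

The function passes a narrow behavioral test for this one question while containing nothing that could plausibly count as understanding hunger, which is the asymmetry the thought experiment turns on.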
It might be the case that the Chinese Room shows that 'understanding' is an impossible phenomenon to program into a computer, or it might suggest that how we think about reducing a problem to bits and bytes must be rethought. For example, maybe our brains do not really understand anything; maybe they fool us into thinking we understand something by releasing a neurotransmitting chemical that gives us the 'feeling' of understanding and blocks any further reduction. If that's the case, we might someday find that neurotransmitter and show that if you prevent it from acting, humans lose their sense of understanding. We might then build future machines that have a sense of understanding, so long as we block from their perceptual awareness the reason why they feel they understand something. All we've done, in effect, is create a feeling of understanding in a machine that sees no reason to question why it cannot reduce its understanding to something more basic (e.g., a flow of electrons forming certain circuits).
But we don't know what we do not know. I see no reason not to move forward on the science. As I said about quantum computing, etc., we almost have to in order to keep pace with hardware development. Such work is science and is necessary, albeit as pathetic as it currently looks. But hey, when I call some companies I'm practically talking to a machine for the whole conversation, so maybe these advancements aren't so pathetic, even though they are still eons away from any grand goal of understanding consciousness.