I was very suspicious of the validity of that paper when it made consciousness the imaginary part of some variable. It seemed more like a play on words than science.
Well you certainly gave me your opinion- neural networks are not a good model for consciousness.
But by saying that this paper has an old and discarded approach in the AI community, you also confirmed that treating consciousness as the imaginary part of some variable is an established approach.
It still seems like a play on words to me- imaginary=>imagination=>consciousness. It is much like using words as evidence. It is not physics at all.
Certainly words are necessary for expressing ideas and data. But the words are not the data, not the evidence. There seems to be some confusion on the part of more than one poster on this forum that words alone can be scientific.
As you stated elsewhere, language and science must intersect. That is how humans think. But as much as possible the subjectivity of language must be kept separate from the objectivity of data, evidence, experiments and mathematical theory.
I think language is most likely to get confused with science when the axioms of a theory are expressed. But often the theory is rather independent of its axioms. For example, quantum mechanics can be formulated on a particle-only basis or on a field-only basis, vastly different axioms, yet the resulting theories agree with the available experiments. That leaves the nature of reality ambiguous. We cannot know if it is fields only, or particles only, or some combination.
And of course the situation is even more drastic concerning consciousness, where we can examine some results of conscious thinking but have no idea where it comes from. So the only recourse is to examine every possibility.