> There is an idea that the predictions made by physicists in quantum mechanics come from circular reasoning and are already contained in their premises...
Circular reasoning is when A follows from B and B follows from A. I don't see that in the derivation, and the derivation enables predictions that are not in the premises. For example, the "processing nodes" I talked about were not in the premises; they emerge when you think through what might happen in such a system.
> By running a computer programme of "absolutely dynamic systems" with a bias toward "uncommon ground", you will by default build up restrictions (or limits) on what common ground COULD be.
> The more interactions the programme runs through, the more it will generate imaginary walls around any common ground it MIGHT have navigated.
> Like if you HAVE to change your game-plan or strategy EVERY move you make in Chess; you will increasingly restrict what common ground COULD have been conserved from one game-plan to the next.
This is good reasoning. But I suppose you agree that it rests on the premise that such a system cannot find common ground. Common ground does not disappear when we derive the uncommon from it; and is the possibility excluded that we can find the common by deriving the uncommon from the uncommon? I say again that this is good reasoning, if you can show that it is a fatal problem in some way. First, the mechanism derived must be as simple as possible; I think it must be elegant, and there must be an exact mechanism. The only other possibility I see is that we create both the uncommon and the common, but creating the common is not exactly creating anything new (it is connecting together what was not connected before); it is something that is already there. Well, either show that this makes sense, or propose another unrestricted mechanism.
> I once described a system for how I thought the human brain might operate:
> Start with one item meets one item: result: "A" now has a history: 1.A; 2.A met B.
> Now A with 1,2 history meets C.
> History of A now is: 1.A; 2.A met B; 3. (A; then A met B) met C.
> Eventually a complex history is built up. The idea is when complex A meets complex R say; with both complex A and complex R being very complicated objects:
> complex A can look throughout itself and find familiar looking simpler patterns and complex patterns and combination patterns; with elements drawn from many different levels of its history; to map patterns of data incoming from complex R.
How *exactly*? I see just two things here. 1. The system is a result of its development and therefore contains its history. 2. Matching patterns in another system is necessary to extract information from that system and to process it. Both are true, but all of that may happen in my system, or in cellular automata, or in other systems as well.
> This multi-level pattern-matching by searching libraries and sub-libraries and sub-sub-libraries for suitable way to map incoming information looks similar to the windows system on a computer.
You talk about libraries; these must be organised in a certain way, and everything adds to the complexity of the basic mechanism, until the basic mechanism is indeed as complex as Windows. Now consider how many restrictions you code into the basic mechanism that way, so that it would be very far from self-developing. I proposed an exact mechanism that is already completely implemented in a computer program, but you talk about something that may work somehow, in some way; by far not enough to implement anything. I propose, if you (or others) develop any theory, to implement it with a computer program (when it cannot be implemented by equations); then you can show that it really does something. I can help you with how to use C and MinGW, if you haven't written C programs before. In some areas (not all) the time of philosophers writing long essays may soon be over; there were no means to implement and test their ideas before, but now we have computers, and we can implement and test logic with them.