Sorry for being so slow to respond; I have been very busy the last few days, and you have given me a lot to think about, so I want to do my best to be clear. I think I am beginning to understand your confusion and, the more I think about it, the more I think it is entirely my fault. In response to your comments, I think the real problem here is a trees-for-the-woods difficulty: I do not believe you have a good picture of what I am trying to show; either that, or you are letting it out of your sight when you look at the details.
The first thing I need to do is let you know what I mean when I say I want to create a model: exactly what do I expect a model of an explanation to do in order to qualify as a model? In order to describe what I mean, I should have provided an acceptable example of an explanation to be modeled.
I hope you understand the reason I begin by defining A, B and C. These are nothing more than abstract divisions of the information to be explained, so that we can specify the exact constraints on a valid explanation and capture the fact that most valuable explanations are based on less than complete information. If any part of that bothers you, please let me know.
Let us look at a specific case: i.e., some defined sets A, B and C. If no valid explanation of C exists, I have nothing to model, so I will assume a valid explanation exists. That is, somehow, by some means totally unknown to me, Joe Blow has discovered an explanation for A consistent with C which provides him with expectations for the B's. I require but one constraint on Joe Blow's explanation: there can exist no B's in C which invalidate it (if there are, Joe should spend more time developing that explanation, as it is clearly invalid in C, and he should be able to discover that without any new information). It then follows that Joe has a valid explanation for me to model.
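To make that one constraint concrete, here is a minimal sketch of what "no B's in C invalidate the explanation" could look like. All the names and the zero-possibility reading of "invalidate" are mine, chosen purely for illustration, not anything from the model itself:

```python
# Hedged sketch (my names, not the author's): an explanation is treated as
# "valid in C" when no B actually present in C contradicts the expectations
# it generates. Here "contradicts" is read as: the explanation assigns a B
# zero possibility, yet that B occurs in C.

def valid_in_C(expectation, observed_Bs):
    """Return True when no observed B invalidates the explanation.

    `expectation` maps a B to the possibility (a number in [0, 1]) the
    explanation assigns to it; `observed_Bs` are the B's found in C.
    """
    return all(expectation(b) > 0 for b in observed_Bs)

# Toy case: an explanation that expects only even numbers.
only_evens = lambda b: 1.0 if b % 2 == 0 else 0.0

print(valid_in_C(only_evens, [2, 4, 6]))  # consistent with this C
print(valid_in_C(only_evens, [2, 3]))     # the 3 invalidates it
```

The point of the sketch is only the shape of the constraint: validity is checked against C alone, using no information outside C.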
So, let's go back to that definition of an explanation, so I can know what my model must do in order to "explain C". First, it must provide a set of labels for the elements of A making up B (so that we may refer to them), and second, it must provide an algorithm to yield my expectations as a function of those B's. Any model which does that is, by definition, a model of an explanation. You should see that there are lots of ways to provide labels and lots of ways to provide an algorithm, so explanations so defined are easy to come by. (We are not talking about "good explanations" or "valid explanations" here; we are only talking about the set of "explanations" in general.) Good or valid refers to the quality you assign to that "explanation" as a measure of its value.
So the issue of creating a model of an explanation is settled, at least in my mind. If it is not in yours, I need to know exactly where you think I have gone astray. The next question then is, can Joe's explanation be modeled by this model? We have already constrained his explanation to be "valid" (at least in C, which is the absolute best that can be done since, if it isn't part of C, Joe's explanation cannot be based on it), so we are now talking about modeling a given valid explanation.
Let us go to the algorithm problem first. All we have to do is find an algorithm which yields exactly the same expectations yielded by Joe's explanation. Actually, the problem of finding an algorithm which yields exactly Joe's expectations is not the real problem here at all. All that is really required is a proof that an algorithm exists. Since Joe has a method of generating those expectations, it follows that an algorithm must exist. If we had the proper algorithm, the problem of mapping our labels into his would be rather straightforward. The point is that the set of explanations included in the model I have proposed includes Joe's explanation.
Since I have made no constraints on A, B and C (other than those internal relationships which I presume you understand) it follows that my model can represent any explanation of anything.
What you need to understand is that I have laid out a specific model of the concept "an explanation", not a model of any specific explanation. Neither do I claim my model provides a mechanism for discovering an explanation or for checking the validity of any given explanation. I make these comments only because people seem invariably to presume I am doing one of these things or some variation. My only claim is that any explanation which satisfies my definition of an explanation can be represented with my model. I do not claim this as a theory or a hypothesis; I claim it as a logical fact! That is, it follows directly from my definition of an explanation.
Once you understand the model, then we can proceed to discuss the consequences of the fact that any explanation can be so represented.
I hope I have cleared up some of your difficulties. Having said what I said above, I think I understand why you found those two comments so contradictory. I apologize for being so obtuse. The comments you quote were meant to be seen in quite different contexts where the relationships of interest change. Certainly you were right, as written they are easily seen as contradictory.
Certainly what I said could have been said better. The issue of the relationship between the mapping and the explanation is a very important one, and I hope I can communicate it properly. Of great significance is the existence of an infinite number of re-mappings which do not shuffle the B's or the elements of B. The fact that these re-mappings exist tells us something very important about the algorithm we are looking for and has some very important consequences. But we really shouldn't be talking about that until you understand the model.
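A small sketch may make "re-mappings which do not shuffle the B's" more concrete. The names and the particular relabeling are mine, for illustration only: the idea is that an invertible relabeling of the elements changes none of the structure of the B's, and there are infinitely many such relabelings:

```python
# Hedged sketch: relabel every element by some invertible map f. The B's
# themselves, and which elements co-occur in which B, are untouched; only
# the labels change. Any algorithm defined on the relabeled elements then
# yields the corresponding expectations.

def remap(Bs, f):
    """Apply an invertible relabeling f to every element of every B."""
    return [{f(x) for x in b} for b in Bs]

Bs = [{1, 2}, {2, 3}]
shift = lambda x: x + 10   # one of infinitely many invertible relabelings

print(remap(Bs, shift))    # same structure, new labels
```

Since `shift` could just as well add 11, or 12, or any other constant, the family of such re-mappings is already infinite, which is the fact claimed above.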
Again, I apologize for being unclear.
Looking forward to hearing from you again – Dick