Adjustments to the proofs of the convergence theorems
In the book, we considered two learning rules for a single multi-valued neuron (MVN): a learning rule based on the closeness of the desired output to the actual output in terms of angular distance (Section 3.2), and the error-correction learning rule (Section 3.3). There are two corresponding convergence theorems (Theorem 3.16, p. 107, and Theorem 3.17, p. 116), which state that the learning algorithms based on these learning rules converge after a finite number of learning steps.
We have also considered a learning algorithm for a multilayer neural network with multi-valued neurons (MLMVN), which is based on the error-correction learning rule (Section 4.3.2) and a corresponding convergence theorem (Section 4.3.3, Theorem 4.19, p. 158).
All three convergence theorems are correct. However, there is a common inconsistency in their proofs. This inconsistency has no negative impact on the correctness or applicability of the theorems (the only minor issue concerns the use of the error-correction learning algorithm, whose convergence is determined by Theorem 3.17, with input/output mappings described by regular threshold Boolean functions, but with complex-valued weights and the learning rule (3.92)). The inconsistency lies in the assumption that the learning rate in all the learning rules can always be taken equal to 1. This assumption compromises one step that is common to the proofs of the mentioned theorems. In fact, the learning rate cannot always be equal to 1; in general it is a complex number (possibly of unit magnitude, and thus located on the unit circle). This issue can easily be overcome, and we would like to do so here by presenting the adjusted proofs.
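To make the role of the learning rate concrete, the following is a minimal NumPy sketch of one step of the MVN error-correction rule, in the spirit of rule (3.92) from the book: the weights are updated as W + (C/(n+1))(ε^q − ε^s)X̄, where ε^q is the desired output, ε^s the actual output, X̄ the input vector with conjugated components, and C the learning rate. The function name, signature, and the discrete-activation details here are illustrative assumptions, not the book's exact code; the point is that C is a complex parameter, not necessarily 1.

```python
import numpy as np

def mvn_error_correction_step(w, x, desired, k, c=1.0 + 0.0j):
    """One illustrative step of the MVN error-correction learning rule.

    w       : complex weight vector, w[0] being the bias weight
    x       : complex input vector on the unit circle (bias input excluded)
    desired : desired output, a k-th root of unity
    k       : number of sectors (output values) of the discrete MVN
    c       : learning rate -- in general a complex number
              (possibly of unit magnitude), not necessarily 1
    """
    n = len(x)
    xb = np.concatenate(([1.0 + 0.0j], x))   # prepend the constant input x0 = 1
    z = np.dot(w, xb)                        # weighted sum
    # Discrete MVN activation: map arg(z) to the k-th root of unity
    # whose sector contains it
    sector = int(np.floor((np.angle(z) % (2 * np.pi)) / (2 * np.pi / k)))
    actual = np.exp(2j * np.pi * sector / k)
    # Error-correction update with conjugated inputs and complex rate c
    return w + (c / (n + 1)) * (desired - actual) * np.conj(xb)
```

For example, with k = 4, a single input x = 1, initial weights (0.1, 0.2), and desired output i, one step with c = 1 already moves the weighted sum into the desired sector, so the neuron's output becomes i.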
We would like to emphasize again that the theorems are correct in any case. However, we believe it is important to provide the reader with correct proofs that do not suffer from the mentioned inconsistency. The adjusted proofs will certainly be included in the second edition of the book, if it is ever published. For now, the corrected proofs are available here (see the corresponding links below).
Finally, the author appreciates the discussions of these proofs that he had with a number of colleagues, particularly Dr. Dongpo Xu.
Here are the adjusted proofs:
Theorem 3.16 pdf
Theorem 3.17 pdf
Theorem 4.19 pdf