Ensemble models are widely used to solve complex tasks by decomposing them into multiple simpler tasks, each solved locally by a single member of the ensemble. Decoding of error-correcting codes is a hard problem due to the curse of dimensionality, which makes ensembles of decoders a natural candidate solution. Nonetheless, complexity must be taken into account, especially in decoding. We suggest a low-complexity scheme in which a single member participates in the decoding of each word. First, the distribution of feasible words is partitioned into non-overlapping regions. Thereafter, specialized experts are formed by independently training each member on a single region. A classical hard-decision decoder (HDD) maps every word to a single expert in an injective manner. FER gains of up to 0.4 dB in the waterfall region and of 1.25 dB in the error-floor region are achieved for the BCH(63,36) and BCH(63,45) codes with cycle-reduced parity-check matrices, compared to the previous best result of the paper "Active Deep Decoding of Linear Codes".
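The single-expert gating described above can be sketched as follows. This is a toy Python illustration: the rule that maps the HDD outcome to an expert index (here, the number of unsatisfied checks of the HDD output) is a hypothetical stand-in for the paper's actual region partition.

```python
import numpy as np

def gated_decode(word, H, experts, hdd):
    """Route a received word to exactly one expert decoder.

    A hard-decision decoder (HDD) is run first; a toy feature of its
    output (number of unsatisfied parity checks) selects the region,
    and hence the expert. The region rule is illustrative only.
    """
    x = hdd(word, H)                      # classical HDD pass
    unsat = int(((H @ x) % 2).sum())      # unsatisfied checks of HDD output
    region = min(unsat, len(experts) - 1) # toy region-to-expert mapping
    return experts[region](word)
```

Only the selected expert runs on each word, so the per-word decoding complexity stays close to that of a single ensemble member plus one HDD pass.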
Absorbing sets (ASs) cause the error floor phenomenon in many low-density parity-check (LDPC) codes by entrapping iterative decoders. A recent simplified system model for practical min-sum (MS) LDPC decoding predicts that if all variable nodes in an AS have channel messages above a certain threshold, the AS cannot entrap the decoder. The threshold is an AS parameter that depends on its Tanner graph and is obtained as the result of a nonlinear optimization. In this paper, we analyze the messages exchanged in the directed graph (digraph) of the AS during MS decoding while evaluating the AS threshold. By doing so, we unveil the meaning of the threshold value: it is the minimum channel message for which the positive feedback loops in the digraph involve all the exchanged messages.
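The message analysis above takes place inside standard min-sum decoding. As background, the check-node update at the heart of MS decoding can be sketched as follows (a generic textbook update, not the paper's threshold optimization):

```python
def minsum_check_update(msgs):
    """Standard min-sum check-node update.

    For each incident edge, the outgoing message is the product of the
    signs of all *other* incoming messages, scaled by the minimum of
    their magnitudes (the extrinsic principle).
    """
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        out.append(sign * min(abs(m) for m in others))
    return out
```

Tracking how these min and sign operations propagate around the cycles of the AS digraph is what turns the threshold into a statement about positive feedback loops.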
This paper proposes a novel shift-sum decoding scheme for non-binary cyclic codes. Using minimum-weight dual codewords and their cyclic shifts, a reliability measure is obtained that indicates both the error positions and the error magnitudes. Based on this shift-sum decoding concept, a hard-decision iterative decoding algorithm is proposed, which can correct errors beyond half of the code's minimum Hamming distance. By utilizing reliability information from the channel, a soft-decision iterative decoding algorithm is further introduced to improve the decoding performance. These two shift-sum based iterative decoding algorithms are realized with polynomial multiplications and integer (or real-number) comparisons, which are hardware-friendly. Simulation results on Reed-Solomon codes and non-binary BCH codes show the decoding potential of the proposed algorithms.
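The shift-sum reliability idea can be sketched in a simplified binary setting: cyclic shifts of a minimum-weight dual codeword each define a parity check, and each unsatisfied check casts votes on the positions it covers. This is a toy analog (the paper works over non-binary fields and also recovers error magnitudes):

```python
import numpy as np

def shift_sum_reliability(r, h):
    """Toy binary shift-sum vote counter.

    r: received binary word (length n)
    h: a low-weight dual codeword of a cyclic code (length n)
    Every cyclic shift of h is also a dual codeword; each unsatisfied
    check <r, shift(h)> = 1 adds one vote to all positions in the
    shifted support. High vote counts flag likely error positions.
    """
    n = len(r)
    votes = np.zeros(n, dtype=int)
    for s in range(n):
        hs = np.roll(h, s)
        if (r @ hs) % 2 == 1:               # unsatisfied parity check
            votes[np.nonzero(hs)[0]] += 1   # vote on covered positions
    return votes
```

With h taken as the weight-4 dual codeword 1110100 of the (7,4) cyclic Hamming code, a single error collects the maximum possible number of votes at its own position.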
We consider near maximum-likelihood (ML) decoding of short linear block codes based on neural belief propagation (BP) decoding recently introduced by Nachmani et al. While this method significantly outperforms conventional BP decoding, the underlying parity-check matrix may still limit the overall performance. In this paper, we introduce a method to tailor an overcomplete parity-check matrix to (neural) BP decoding using machine learning. We consider the weights in the Tanner graph as an indication of the importance of the connected check nodes (CNs) to decoding and use them to prune unimportant CNs. As the pruning is not tied across iterations, the final decoder uses a different parity-check matrix in each iteration. For Reed-Muller and short low-density parity-check codes, we achieve performance within 0.27 dB and 1.5 dB of the ML performance, respectively, while reducing the complexity of the decoder.
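The pruning step can be sketched as selecting, for a given iteration, the rows of the overcomplete parity-check matrix whose check nodes carry the highest learned importance. The importance score here (e.g. mean absolute edge weight per CN from a trained neural BP decoder) is an assumption for illustration, not the paper's exact criterion:

```python
import numpy as np

def prune_check_nodes(H_over, cn_weights, keep):
    """Keep the `keep` most important rows (check nodes) of an
    overcomplete parity-check matrix.

    cn_weights: one importance score per CN, e.g. the mean absolute
    learned Tanner-graph edge weight of that CN (illustrative choice).
    """
    idx = np.argsort(cn_weights)[::-1][:keep]  # top-`keep` scores
    return H_over[np.sort(idx)]                # preserve original row order
```

Running this selection independently per iteration yields the iteration-dependent parity-check matrices described in the abstract.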
Maximum likelihood (ML) and symbolwise maximum a posteriori (MAP) estimation for discrete input sequences play a central role in a number of applications in communications, information theory, and coding theory. Many instances of these problems are provably intractable, for example through reduction to NP-complete integer optimization problems. In this work, we prove that ML estimation of a discrete input sequence (with no assumptions on the encoder/channel used) is equivalent to solving a continuous non-convex optimization problem, and that this formulation is closely related to the computation of symbolwise MAP estimates. This equivalence is particularly useful in situations where a function we term the expected likelihood is efficiently computable. In such situations, we give an ML heuristic and present numerical results for sequence estimation over the deletion channel.
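To make the two estimators concrete, here is a brute-force toy computing both the ML sequence and the symbolwise MAP bits for a small code over a BSC with a uniform prior. This uses only the standard definitions; it does not implement the paper's continuous relaxation or expected-likelihood heuristic:

```python
def ml_and_map(code, y, p):
    """Brute-force ML and symbolwise MAP over a BSC(p), uniform prior.

    code: list of binary tuples (the feasible input sequences)
    y:    received binary tuple
    Returns (ML sequence, tuple of symbolwise MAP bits).
    """
    def lik(x):  # P(y | x) over a binary symmetric channel
        d = sum(a != b for a, b in zip(x, y))
        return (p ** d) * ((1 - p) ** (len(y) - d))

    liks = {x: lik(x) for x in code}
    ml = max(liks, key=liks.get)
    # symbolwise MAP: marginalize the posterior at each position
    map_bits = []
    for i in range(len(y)):
        p1 = sum(l for x, l in liks.items() if x[i] == 1)
        p0 = sum(l for x, l in liks.items() if x[i] == 0)
        map_bits.append(1 if p1 > p0 else 0)
    return ml, tuple(map_bits)
```

Exhaustive enumeration is exponential in the sequence length, which is exactly why the continuous reformulation above is of interest.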