
Hamming Network


Intro

A Hamming Network is built on top of the concept of Hamming Distance, which quantifies the number of positions at which two equal-length strings differ. These networks compute the Hamming Distance between an input vector and every stored vector/node.
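The distance itself can be sketched in a few lines (a minimal illustration, not code from the source):

```python
def hamming_distance(a, b):
    """Count the positions where two equal-length sequences differ."""
    assert len(a) == len(b), "sequences must be equal length"
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("karolin", "kathrin"))    # -> 3
print(hamming_distance([1, 1, -1], [1, -1, -1])) # -> 1
```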

So, given $P$ stored vectors, presenting an input vector produces $P$ outputs, each representing the Hamming Distance between the input and one stored vector.

For vector length $n$ we have $n$ input nodes; the $i$'th input node carries $P$ weights, one for the $i$'th component of each stored vector. Since the values are bipolar, each component that matches contributes $0$ to the output and each component that differs contributes $-1$.
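The per-component behaviour follows from halving the weight and adding a $-\frac{1}{2}$ share of the bias, as a quick check shows (hypothetical helper name):

```python
def contribution(x, w):
    """Per-component output for bipolar values {+1, -1}:
    (1/2)*x*w - 1/2 is 0 on a match and -1 on a mismatch."""
    return 0.5 * x * w - 0.5

for x in (+1, -1):
    for w in (+1, -1):
        print(f"x={x:+d} w={w:+d} -> {contribution(x, w):+.0f}")
```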

Definition

We assign the weights as $W=\frac{1}{2}\begin{bmatrix} i_{1}^{T} \\ \vdots \\ i_{P}^{T} \end{bmatrix}=\frac{1}{2}\begin{bmatrix} i_{11}&i_{12}&\cdots&i_{1n}\\ \vdots&\vdots&\ddots&\vdots \\ i_{P1}&i_{P2}&\cdots&i_{Pn} \end{bmatrix}$ with $\theta=\begin{bmatrix}-\frac{n}{2}\\\vdots\\-\frac{n}{2}\end{bmatrix}$, where $i_{pj}\in\{+1,-1\}$. Then we perform inference by applying this expression: $o=Wi+\theta$. The $p$'th entry is $\frac{1}{2}i_{p}^{T}i-\frac{n}{2}$, which equals the negative Hamming Distance between the input and the $p$'th stored vector.
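Putting the definition together, a small NumPy sketch (the stored patterns here are made-up examples, not from the source):

```python
import numpy as np

# Two stored bipolar patterns of length n = 4 (assumed examples).
stored = np.array([
    [+1, +1, -1, -1],
    [+1, -1, +1, -1],
])
P, n = stored.shape

W = 0.5 * stored           # P x n weight matrix: rows are (1/2) i_p^T
theta = np.full(P, -n / 2) # every bias is -n/2

x = np.array([+1, +1, -1, -1])  # input equal to the first stored pattern
o = W @ x + theta               # o_p = -HammingDistance(x, i_p)

print(o)  # -> [ 0. -2.]: distance 0 to pattern 0, distance 2 to pattern 1
```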
