Mathematical Breakdown
Network Architecture:
• Input layer: 2 neurons (x₁, x₂)
• Hidden layer: 1 neuron with sigmoid activation
• Output layer: 1 neuron with sigmoid activation
• Total neurons: 2 (hidden + output)
Weights and Biases:
• Hidden neuron: w₁ = 1, w₂ = 1, bias = -0.5
• Output neuron: w₃ = 1 (on h₁), w₄ = -2 (on the product x₁ × x₂), bias = -0.5
(w₄ weights the product x₁ × x₂, which equals x₁ AND x₂ for binary inputs; this multiplicative skip connection bypasses the hidden layer)
Forward Pass Equations:
Hidden: h₁ = σ(w₁x₁ + w₂x₂ + bias) = σ(x₁ + x₂ - 0.5)
Output: y = σ(w₃h₁ + w₄(x₁ × x₂) + bias) = σ(h₁ - 2(x₁ × x₂) - 0.5)
Where σ(z) = 1/(1 + e^(-z))
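As a sanity check, here is a minimal Python sketch of this forward pass (the function names sigmoid and xor_net are illustrative, not part of the original):

    import math

    def sigmoid(z):
        # Logistic sigmoid: σ(z) = 1 / (1 + e^(-z))
        return 1.0 / (1.0 + math.exp(-z))

    def xor_net(x1, x2):
        # Hidden neuron: w₁ = w₂ = 1, bias = -0.5 (soft OR of the inputs)
        h1 = sigmoid(x1 + x2 - 0.5)
        # Output neuron: w₃ = 1 on h₁, w₄ = -2 on the product x₁·x₂, bias = -0.5
        return sigmoid(h1 - 2.0 * (x1 * x2) - 0.5)

    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        y = xor_net(x1, x2)
        print(f"x₁={x1} x₂={x2}  y={y:.4f}  thresholded={int(y > 0.5)}")

Running this prints outputs above 0.5 only for the (0,1) and (1,0) inputs, i.e. XOR after thresholding.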
How it works:
1. The hidden neuron computes a soft (x₁ OR x₂): its output exceeds 0.5 whenever at least one input is 1
2. The output neuron adds the -2(x₁ × x₂) term, a strong negative correction that applies exactly when both inputs are 1 (x₁ AND x₂)
3. Result: after thresholding at 0.5, the output is 1 exactly when one input is 1 (XOR behavior), as the worked values below show
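Plugging the four binary input pairs into the equations above (values rounded to four decimals) confirms this:

x₁  x₂ |   h₁   |   y    | y > 0.5
 0   0 | 0.3775 | 0.4694 |    0
 0   1 | 0.6225 | 0.5306 |    1
 1   0 | 0.6225 | 0.5306 |    1
 1   1 | 0.8176 | 0.1568 |    0

The margins around 0.5 are narrow for the first three rows, but thresholding y at 0.5 reproduces the XOR truth table exactly.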
This minimal network demonstrates how just two neurons across two layers, with the
help of a multiplicative skip connection, can solve the linearly non-separable XOR problem!