- Hybrid Superiority: Neural Kalman filters drastically improve convergence and echo suppression over standard FDKF.
- Not All DNNs Are Equal: From a control-theory perspective, learning the Kalman gain is the most stable, balanced way to enhance AEC without destroying near-end speech.
- Architecture Trade-offs (see the parameter/FLOPs sketch after this list):
  - Per-bin models (NKF): Flexible and parameter-light, but computationally heavy (high FLOPs, since the same small network is evaluated once per frequency bin).
  - Fully-connected models: Cheaper to compute, but less adaptable and prone to acting like blind suppressors.
- Post-filters matter: Residual echo from non-linearities and long echo tails still requires post-filtering.
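To make the architecture trade-off concrete, here is a minimal sketch in PyTorch. The layer sizes (hidden width 32 for the per-bin net, 400 for the fully-connected net, 257 STFT bins) are illustrative assumptions, not values from any cited system; it counts weights and roughly estimates per-frame FLOPs at ~2 ops per weight.

```python
# Sketch only: hypothetical sizes, chosen to illustrate the trade-off.
import torch.nn as nn

F_BINS = 257  # STFT bins for a 512-point FFT (assumption)

# Per-bin model: one small recurrent network, shared across all bins.
# Inputs per bin are (real, imag) of mic and far-end spectra -> 4 values;
# output is a complex correction -> 2 values.
per_bin_gru = nn.GRU(input_size=4, hidden_size=32, batch_first=True)
per_bin_head = nn.Linear(32, 2)

# Fully-connected model: the whole frame (all bins stacked) in one pass.
fc = nn.Sequential(
    nn.Linear(4 * F_BINS, 400), nn.ReLU(),
    nn.Linear(400, 2 * F_BINS),
)

def n_params(*mods):
    return sum(p.numel() for m in mods for p in m.parameters())

pb_p = n_params(per_bin_gru, per_bin_head)  # ~3.7k weights
fc_p = n_params(fc)                         # ~620k weights

# Rough FLOPs per frame: ~2 ops per weight; the per-bin net runs once
# for every bin, the fully-connected net once for the whole frame.
print(f"per-bin: {pb_p:,} params, ~{2 * pb_p * F_BINS:,} FLOPs/frame")
print(f"fully-connected: {fc_p:,} params, ~{2 * fc_p:,} FLOPs/frame")
```

With these illustrative sizes, the per-bin model holds roughly 160x fewer weights, yet costs more FLOPs per frame, precisely because its one small network is re-evaluated for each of the 257 bins.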
Final Principle: Keep the state-space Kalman loop intact. Let the DNN estimate only the most uncertain, non-linear, and hardest-to-model variables.
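As a minimal sketch of that principle (NumPy, simplified to a single complex tap per frequency bin, whereas a real FDKF uses a partitioned multi-block filter): the prediction/update recursion below is the untouched classical loop, and the gain computation is factored out as the one slot a trained network would fill. `neural_kalman_gain` is a hypothetical stand-in that falls back to the analytic FDKF gain, so the example runs without trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)
F, T = 257, 200                      # bins, frames (assumed sizes)
A, Q, SIGMA_V = 0.999, 1e-4, 1e-2    # state transition, process/obs. noise

def neural_kalman_gain(P, x):
    """Slot for the learned gain: a trained DNN would map filter/signal
    statistics to K. The analytic gain below is the classical fallback."""
    return P * np.conj(x) / (np.abs(x) ** 2 * P + SIGMA_V)

# Toy signals: far-end spectrum x, true echo path h, mic = echo + noise.
x = rng.standard_normal((T, F)) + 1j * rng.standard_normal((T, F))
h = (rng.standard_normal(F) + 1j * rng.standard_normal(F)) * 0.5
d = x * h + 0.01 * (rng.standard_normal((T, F))
                    + 1j * rng.standard_normal((T, F)))

w = np.zeros(F, dtype=complex)       # echo-path estimate (state mean)
P = np.ones(F)                       # state uncertainty (diagonal)

for t in range(T):
    w = A * w                        # predict state (random-walk model)
    P = A**2 * P + Q                 # predict uncertainty
    e = d[t] - w * x[t]              # innovation (echo-cancelled error)
    K = neural_kalman_gain(P, x[t])  # <- the only piece the DNN replaces
    w = w + K * e                    # correct state estimate
    P = (1.0 - np.real(K * x[t])) * P  # correct uncertainty

print("final misalignment (dB):",
      10 * np.log10(np.sum(np.abs(w - h) ** 2) / np.sum(np.abs(h) ** 2)))
```

Swapping the body of `neural_kalman_gain` for a trained per-bin network gives the hybrid filter from the first two bullets; everything else, including the uncertainty recursion, stays classical.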