This paper proposes a robust near-field adaptive beamformer for microphone array applications in small rooms. Robustness against location errors is crucial for near-field adaptive beamforming because near-field signal locations, especially the radial distances, are difficult to estimate. The proposed near-field regionally constrained adaptive beamformer designs a set of linear constraints by filtering on a low-rank subspace of the near-field signal over a spatial region and frequency band, so that the beamformer response over the designed spatial-temporal region can be accurately controlled by a small number of linear constraint vectors. The proposed constraint design method is systematic, guarantees real-arithmetic implementation, and admits direct time-domain algorithms for broadband beamforming. It improves robustness against large errors in distance and direction of arrival while simultaneously achieving good distance discrimination. We show with a nine-element uniform linear array that the proposed near-field adaptive beamformer is robust against distance errors as large as ±32% of the presumed radial distance and angle errors up to ±20°. It can suppress a far-field interfering signal arriving at the same angle of incidence as a near-field target by more than 20 dB, with no loss of array gain at the near-field target. The strong distance discrimination of the proposed near-field beamformer also helps to improve the dereverberation gain and to reduce desired-signal cancellation in reverberant environments.
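The core idea of the abstract, constraining the beamformer response over a spatial region through a low-rank subspace of near-field steering vectors, can be sketched at a single frequency. The following is a narrowband LCMV illustration under assumed geometry (9-element ULA, 5 cm spacing, 2 kHz, target at 0.5 m broadside), not the paper's broadband time-domain, real-arithmetic algorithm; the region bounds reuse the ±32% distance and ±20° angle ranges quoted above.

```python
import numpy as np

# Illustrative parameters (assumptions, not taken from the paper).
c = 343.0                                  # speed of sound (m/s)
f = 2000.0                                 # design frequency (Hz)
M = 9                                      # array elements
pos = (np.arange(M) - (M - 1) / 2) * 0.05  # element x-coordinates (m)

def steering(r, theta):
    """Spherical-wave (near-field) steering vector for range r, angle theta."""
    src = np.array([r * np.sin(theta), r * np.cos(theta)])
    dist = np.hypot(src[0] - pos, src[1])
    # amplitude decay plus propagation delay, referenced to element 0
    return (dist[0] / dist) * np.exp(-2j * np.pi * f * (dist - dist[0]) / c)

# Sample steering vectors over the robustness region:
# range 0.5 m +/- 32%, angle 0 +/- 20 degrees.
A = np.array([steering(r, th)
              for r in np.linspace(0.34, 0.66, 9)
              for th in np.deg2rad(np.linspace(-20.0, 20.0, 9))]).T

# Low-rank subspace of the region: keep only dominant singular vectors,
# capped so that degrees of freedom remain for adaptation.
U, s, _ = np.linalg.svd(A)
L = min(int(np.sum(s > 0.01 * s[0])), M - 3)
C = U[:, :L]                               # constraint matrix
g = C.conj().T @ steering(0.5, 0.0)        # desired response on the subspace

# LCMV adaptation against a far-field interferer at the SAME angle:
# only distance discrimination separates it from the near-field target.
a_ff = steering(100.0, 0.0)                # effectively far field
R = np.outer(a_ff, a_ff.conj()) + 1e-3 * np.eye(M)  # interference + noise
Ri = np.linalg.inv(R)
w = Ri @ C @ np.linalg.solve(C.conj().T @ Ri @ C, g)

target_gain = abs(w.conj() @ steering(0.5, 0.0))
interf_gain = abs(w.conj() @ a_ff)
print(f"{L} constraints; interferer-to-target ratio: "
      f"{20 * np.log10(interf_gain / target_gain):.1f} dB")
```

In this sketch the subspace constraints pin the response over the whole error region (hence robustness to distance and angle mismatch), while the remaining degrees of freedom null the far-field arrival at the same bearing, which is what the abstract calls distance discrimination.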

Additional Metadata
Keywords Dereverberation, Distance discrimination, Interference suppression, Microphone array, Near-field beamforming, Regionally constrained beamforming, Robust adaptive beamforming, Robustness against location errors
Persistent URL dx.doi.org/10.1109/TSA.2004.832982
Journal IEEE Transactions on Speech and Audio Processing
Citation
Zheng, Y. R. (Yahong Rosa), Goubran, R., & El-Tanany, M. (2004). Robust near-field adaptive beamforming with distance discrimination. IEEE Transactions on Speech and Audio Processing, 12(5), 478–488. doi:10.1109/TSA.2004.832982