Training a deep neural network often requires a large amount of annotated data, which is scarce in the medical image analysis domain. In this work, we present a simple yet effective strategy that relies on information fusion to enhance existing neural networks. The proposed approach extracts features at different spatial scales and combines them in a learnable way. Experimental results on two benchmark datasets demonstrate that the proposed fusion module improves the segmentation performance of state-of-the-art neural networks.
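One common way to combine features from different spatial scales in a learnable fashion is a softmax-weighted sum of feature maps that have been resized to a common resolution. The sketch below is only an illustration of that general idea, not the paper's actual fusion module; the function names, the scalar per-scale weights, and the fixed example inputs are all assumptions made for demonstration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of weight logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_multiscale(features, logits):
    """Fuse feature maps (already resized to a common grid) with
    learnable per-scale scalar weights. In training, `logits` would be
    parameters updated by backpropagation; here they are fixed."""
    w = softmax(logits)
    return sum(wi * f for wi, f in zip(w, features))

# Three hypothetical feature maps on a shared 4x4 grid.
feats = [np.full((4, 4), s) for s in (1.0, 2.0, 3.0)]
fused = fuse_multiscale(feats, np.zeros(3))  # zero logits -> equal weights
print(fused[0, 0])  # 2.0, i.e. the mean of the three maps
```

With zero logits the softmax assigns each scale an equal weight of 1/3, so the fused map reduces to the per-pixel mean; a trained module would instead learn which scales to emphasize.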