Summary form only given. MPICH2, the successor to one of the most popular open-source message-passing implementations, aims to fully support the MPI-2 standard. Thanks to a complete redesign, MPICH2 is also cleaner, more flexible, and faster. The InfiniBand network technology is an open industry standard that provides high bandwidth and low latency as well as reliability, availability, and serviceability (RAS) features, and it is gaining ground in the market for cost-effective cluster computing. We expect that, in the near future, the requirements of many cluster environments will only be satisfied by combining the functionality of MPICH2 with the performance of InfiniBand. Hence, effective support for the InfiniBand interconnect in MPICH2 is needed. We present the experience gained during the implementation of our MPICH2 device for InfiniBand, give a performance overview, and outline ideas for future development. The device is implemented in terms of the channel interface (CH3) and uses both the channel semantics (send/receive) and the memory semantics (RDMA) provided by Mellanox's verbs implementation, VAPI. With this combined approach, a significant performance gain can be achieved.