Testing for independence between two random vectors is a fundamental problem in statistics. Empirical studies show that many existing omnibus consistent tests may perform poorly under strongly nonmonotonic and nonlinear dependence. To gain insight into this issue, this paper shows that the multivariate independence testing problem can be cast as an equivalent test of the equality of two bivariate means, and further reveals that the power loss is mainly caused by the cancellation of positive and negative terms in the dependence metrics, which drives the sum of these terms close to zero. Motivated by this finding, we propose a class of consistent metrics indexed by a positive integer $\gamma$ to characterize independence. We prove that the metrics with even or infinite $\gamma$ effectively avoid the cancellation and achieve higher power under alternatives in which the two mean differences offset each other. In practice, it is desirable to target a wide range of dependence scenarios, so we further advocate combining the p-values of test statistics with different $\gamma$'s via Fisher's method. The advantages of the newly proposed tests are illustrated by numerical studies.
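As a minimal sketch of the p-value combination step mentioned above: Fisher's method aggregates $k$ independent p-values through the statistic $T = -2\sum_{i=1}^{k}\log p_i$, which follows a $\chi^2_{2k}$ distribution under the global null. The function name below is illustrative, not from the paper; because the degrees of freedom $2k$ are even, the chi-squared survival function has a closed form, so the example needs only the standard library.

```python
import math

def fisher_combine(pvalues):
    """Combine independent p-values via Fisher's method.

    T = -2 * sum(log p_i) is chi-squared with 2k degrees of freedom
    under the global null. For even df = 2k the survival function has
    the closed form  P(X > t) = exp(-t/2) * sum_{i=0}^{k-1} (t/2)^i / i!.
    """
    k = len(pvalues)
    t = -2.0 * sum(math.log(p) for p in pvalues)
    half = t / 2.0
    # Evaluate the closed-form chi2_{2k} survival function at t.
    term = 1.0
    total = 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total

# A single p-value is returned unchanged; several small p-values
# combine into a smaller one.
print(fisher_combine([0.5]))
print(fisher_combine([0.01, 0.02]))
```

In the paper's setting the inputs would be the p-values of the test statistics with different $\gamma$'s, so that the combined test retains power across dependence scenarios that favor different choices of $\gamma$.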