Currently, reciprocal relations are applied based on the scoring_technique. However, reciprocals are not always useful. For example, Section 2 of this paper states: "The use of reciprocal relations may also lead to better model performance (e.g., for relations in which one direction is easier to predict)."
For example, in the case of UMLS with DistMult:
With reciprocals:
{'H@1': 0.5763993948562783, 'H@3': 0.7473524962178517, 'H@10': 0.9190620272314675, 'MRR': 0.6878625225735597}
Without reciprocals:
{'H@1': 0.7095310136157338, 'H@3': 0.8245083207261724, 'H@10': 0.9500756429652042, 'MRR': 0.7877015541524235}
I suggest exposing reciprocal relations as a training argument so they can be enabled or disabled explicitly.
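A minimal sketch of what such an opt-in flag could look like. The function name `augment_with_reciprocals`, the flag name `add_reciprocal`, and the `(head, relation, tail)` triple format are illustrative assumptions, not the library's actual API:

```python
# Hypothetical sketch: reciprocal relations as an optional training argument.
# Names and triple format are assumptions for illustration only.

def augment_with_reciprocals(triples, add_reciprocal=True):
    """Return triples, optionally extended with inverse counterparts.

    When add_reciprocal is True, for every (h, r, t) an inverse triple
    (t, r + "_inverse", h) is appended; otherwise triples pass through
    unchanged, matching the "without reciprocals" setting above.
    """
    augmented = list(triples)
    if add_reciprocal:
        augmented.extend((t, r + "_inverse", h) for h, r, t in triples)
    return augmented

# Toggling the flag controls whether the training set is doubled.
triples = [("aspirin", "treats", "headache")]
print(len(augment_with_reciprocals(triples, add_reciprocal=False)))  # 1
print(len(augment_with_reciprocals(triples, add_reciprocal=True)))   # 2
```

This would let users benchmark both settings per dataset (as in the UMLS numbers above) instead of the behavior being implied by scoring_technique.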