A Two-Step Approach for Explainable Relation Extraction

  • Conference paper
  • In: Advances in Intelligent Data Analysis XX (IDA 2022)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13205)

Abstract

Knowledge Graphs (KGs) offer information in an easy-to-process form. A key step in building a KG from texts is the Relation Extraction (RE) task, which identifies and labels relationships between entity mentions. In this paper, to address the RE problem, we propose to combine a deep learning approach for relation detection with a symbolic method for relation classification. This combination provides both the performance of deep learning methods and the interpretability of symbolic methods. The approach has been evaluated and compared with state-of-the-art methods on TACRED, a relation extraction benchmark, and shows promising quantitative and qualitative results.
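
To make the two-step architecture described in the abstract concrete, here is a minimal, purely illustrative Python sketch. It is not the authors' implementation (the notes point to a LUKE-based detector and a Concepts-of-Neighbours classifier); the function names, the `Rule` structure, the stub detector, and the 0.5 threshold are all hypothetical, chosen only to show how a neural detection step can be followed by an interpretable, rule-based classification step.

```python
# Illustrative sketch only: a generic two-step relation-extraction pipeline.
# Step 1 uses an opaque neural scorer to decide whether *any* relation holds;
# step 2 uses symbolic rules, so the predicted label can be explained by the
# rule that fired. All names and thresholds here are hypothetical.

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Mention:
    text: str           # surface form of the entity mention
    entity_type: str    # e.g. "PERSON", "ORGANIZATION"


@dataclass
class Rule:
    label: str                                          # relation label, e.g. "per:employee_of"
    condition: Callable[[Mention, Mention, str], bool]  # human-readable test on the entity pair


def two_step_extract(sentence: str,
                     subj: Mention,
                     obj: Mention,
                     detect: Callable[[str, Mention, Mention], float],
                     rules: List[Rule],
                     threshold: float = 0.5) -> Optional[str]:
    """Step 1: relation detection (neural). Step 2: relation classification (symbolic)."""
    if detect(sentence, subj, obj) < threshold:   # step 1: is there a relation at all?
        return None                               # treated as "no_relation"
    for rule in rules:                            # step 2: assign an explainable label
        if rule.condition(subj, obj, sentence):
            return rule.label
    return None


# Toy usage with a stub detector and one hand-written rule.
rules = [Rule("per:employee_of",
              lambda s, o, sent: s.entity_type == "PERSON"
              and o.entity_type == "ORGANIZATION" and "works for" in sent)]
stub_detector = lambda sent, s, o: 0.9  # pretend the neural model is confident
print(two_step_extract("Alice works for Acme.", Mention("Alice", "PERSON"),
                       Mention("Acme", "ORGANIZATION"), stub_detector, rules))
```

The point of the split is that the expensive, hard-to-interpret model only answers a binary question, while the final label comes from a symbolic component whose decision can be traced back to an explicit condition.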


Notes

  1. We decided to use the Stanford CoreNLP toolkit [15].

  2. We used WordNet [18] to do so.

  3. See https://gitlab.inria.fr/hayats/luke-redect.

  4. Accessible here: https://gitlab.inria.fr/hayats/conceptualknn-relex.

  5. https://gitlab.inria.fr/hayats/jena-conceptsofneighbours.

  6. https://jena.apache.org/.

References

  1. Ayats, H., Cellier, P., Ferré, S.: Extracting relations in texts with concepts of neighbours. In: Braud, A., Buzmakov, A., Hanika, T., Le Ber, F. (eds.) ICFCA 2021. LNCS (LNAI), vol. 12733, pp. 155–171. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-77867-5_10

  2. Ben Abacha, A., Zweigenbaum, P.: Automatic extraction of semantic relations between medical entities: a rule based approach. J. Biomed. Semant. 2, 1–11 (2011)

  3. Cellier, P., et al.: Sequential pattern mining for discovering gene interactions and their contextual information from biomedical texts. J. Biomed. Semant. 6, 27 (2015)

  4. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: NAACL-HLT (2019)

  5. Ferré, S.: Concepts de plus proches voisins dans des graphes de connaissances [Concepts of nearest neighbours in knowledge graphs]. In: Ingénierie des Connaissances (IC) (2017)

  6. Ferré, S.: Application of concepts of neighbours to knowledge graph completion. Data Sci. 4(1), 1–28 (2021)

  7. Fundel, K., Küffner, R., Zimmer, R.: RelEx - relation extraction using dependency parse trees. Bioinformatics 23(3), 365–371 (2007)

  8. Giuliano, C., Lavelli, A., Romano, L.: Exploiting shallow linguistic information for relation extraction from biomedical literature. In: Conference of the European Chapter of the Association for Computational Linguistics, pp. 401–408 (2006)

  9. Grishman, R.: Twenty-five years of information extraction. Nat. Lang. Eng. 25, 677–692 (2019)

  10. Gutierrez, C., Sequeda, J.F.: Knowledge graphs. Commun. ACM 64(3), 96–104 (2021)

  11. Joshi, M., Chen, D., Liu, Y., Weld, D.S., Zettlemoyer, L., Levy, O.: SpanBERT: improving pre-training by representing and predicting spans. Trans. Assoc. Comput. Linguist. 8, 64–77 (2020)

  12. Leeuwenberg, A., Buzmakov, A., Toussaint, Y., Napoli, A.: Exploring pattern structures of syntactic trees for relation extraction. In: Baixeries, J., Sacarea, C., Ojeda-Aciego, M. (eds.) ICFCA 2015. LNCS (LNAI), vol. 9113, pp. 153–168. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-19545-2_10

  13. Lyu, S., Chen, H.: Relation classification with entity type restriction (2021). http://arxiv.org/abs/2105.08393

  14. Mallart, C., Le Nouy, M., Gravier, G., Sébillot, P.: Active learning for interactive relation extraction in a French newspaper's articles. In: Recent Advances in Natural Language Processing - Deep Learning for Natural Language Processing Methods and Applications (2021)

  15. Manning, C.D., Surdeanu, M., Bauer, J., Finkel, J., Bethard, S.J., McClosky, D.: The Stanford CoreNLP natural language processing toolkit. In: Annual Meeting of the Association for Computational Linguistics: System Demonstrations (2014)

  16. Martinez-Rodriguez, J.L., Hogan, A., Lopez-Arevalo, I.: Information extraction meets the semantic web: a survey. Semant. Web 11(2), 255–335 (2020)

  17. Meilicke, C., Chekol, M.W., Ruffinelli, D., Stuckenschmidt, H.: Anytime bottom-up rule learning for knowledge graph completion. In: International Joint Conference on Artificial Intelligence, pp. 3137–3143 (2019)

  18. Miller, G.A.: WordNet: An Electronic Lexical Database. MIT Press, Cambridge (1998)

  19. Nguyen, T.H., Grishman, R.: Relation extraction: perspective from convolutional neural networks. In: Workshop on Vector Space Modeling for Natural Language Processing, pp. 39–48 (2015)

  20. Shi, P., Lin, J.: Simple BERT models for relation extraction and semantic role labeling (2019). arXiv:1904.05255

  21. Xu, Y., Mou, L., Li, G., Chen, Y., Peng, H., Jin, Z.: Classifying relations via LSTM networks along shortest dependency paths. In: Conference on Empirical Methods in Natural Language Processing (2015)

  22. Yamada, I., Asai, A., Shindo, H., Takeda, H., Matsumoto, Y.: LUKE: deep contextualized entity representations with entity-aware self-attention. In: Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)

  23. Zhang, Y., Qi, P., Manning, C.D.: Graph convolution over pruned dependency trees improves relation extraction. In: Conference on Empirical Methods in Natural Language Processing, pp. 2205–2215 (2018)

  24. Zhang, Y., Zhong, V., Chen, D., Angeli, G., Manning, C.D.: Position-aware attention and supervised data improve slot filling. In: Conference on Empirical Methods in Natural Language Processing, pp. 35–45 (2017)

Author information

Corresponding author

Correspondence to Hugo Ayats.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Ayats, H., Cellier, P., Ferré, S. (2022). A Two-Step Approach for Explainable Relation Extraction. In: Bouadi, T., Fromont, E., Hüllermeier, E. (eds) Advances in Intelligent Data Analysis XX. IDA 2022. Lecture Notes in Computer Science, vol 13205. Springer, Cham. https://doi.org/10.1007/978-3-031-01333-1_2

  • DOI: https://doi.org/10.1007/978-3-031-01333-1_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-01332-4

  • Online ISBN: 978-3-031-01333-1

  • eBook Packages: Computer Science, Computer Science (R0)
