From bddca759e7667d6851a79174d8a15ac41242ef3f Mon Sep 17 00:00:00 2001
From: robertturner <143536791+robertdhayanturner@users.noreply.github.com>
Date: Fri, 5 Jan 2024 09:35:07 -0500
Subject: [PATCH] Update knowledge_graph_embedding.md

add dismult png
source: [dglke](https://dglke.dgl.ai/doc/kg.html)
---
 docs/use_cases/knowledge_graph_embedding.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/use_cases/knowledge_graph_embedding.md b/docs/use_cases/knowledge_graph_embedding.md
index c6b2b23d6..204257cfa 100644
--- a/docs/use_cases/knowledge_graph_embedding.md
+++ b/docs/use_cases/knowledge_graph_embedding.md
@@ -39,6 +39,7 @@ KGE algorithms vary in the similarity functions they employ, and how they define
 
 For our KGE model demo, we opted for the DistMult KGE algorithm. It works by representing the likelihood of relationships between entities (i.e., similarity) as a _bilinear_ function. Essentially, DisMult KGE assumes that the score of a given triple (comprised of a head entity $h$, a relationship $r$, and a tail entity $t$) can be computed as: $h^T \text{diag}(r) t$.
 
 ![DistMult similarity function](../assets/use_cases/knowledge_graph_embedding/distmult.png)
+source: [dglke](https://dglke.dgl.ai/doc/kg.html)
 
 The model parameters are learned (internalizing the intricate relationships within the KG) by _minimizing cross entropy between real and corrupted triplets_.
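
The patched text describes the DistMult score $h^T \text{diag}(r) t$, which reduces to an element-wise product of the three embeddings summed over dimensions. A minimal sketch of that scoring function (the embedding values below are illustrative, not from a trained model):

```python
import numpy as np

def distmult_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    """DistMult triple score: h^T diag(r) t.

    Since diag(r) is diagonal, this is just the tri-linear dot product
    sum_i h_i * r_i * t_i of the head, relation, and tail embeddings.
    """
    return float(np.sum(h * r * t))

# Toy 3-dimensional embeddings (illustrative values only)
h = np.array([1.0, 0.5, -0.5])
r = np.array([0.2, 1.0, 0.4])
t = np.array([0.5, 0.5, 1.0])

print(distmult_score(h, r, t))  # 0.2*0.5 + 0.5*0.5 - 0.5*0.4 = 0.15
```

Note that the bilinear form makes DistMult symmetric in head and tail (`distmult_score(h, r, t) == distmult_score(t, r, h)`), which is one of its known limitations for modeling asymmetric relations.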