diff --git a/SUMMARY.md b/SUMMARY.md
index 1bcd9b09..6eee21da 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -11,6 +11,7 @@
 * [Deduplication](social-registry/features/deduplication/README.md)
 * [📔 User Guides](social-registry/features/deduplication/user-guides/README.md)
 * [📔 Configure ID Deduplication, Deduplicate, and Save Duplicate Groups/Individuals](social-registry/features/deduplication/user-guides/configure-id-deduplication-deduplicate-and-save-duplicate-groups-individuals.md)
+ * [Deduplicator Service](social-registry/features/deduplication/deduplicator-service.md)
 * [Lock and Unlock](social-registry/features/lock-and-unlock.md)
 * [Enumerator](social-registry/features/enumerator/README.md)
 * [Enumerator ID](social-registry/features/enumerator/enumerator-id.md)
@@ -58,7 +59,9 @@
 * [📔 ID Authentication Process](social-registry/features/id-integration/id-authentication/user-guides/id-authentication-process.md)
 * [📔 eSignet Client Creation](social-registry/features/id-integration/id-authentication/user-guides/esignet-client-creation.md)
 * [Fayda ID Integration](social-registry/features/id-integration/fayda-id-integration.md)
- * [Verifiable Credentials Issuance](social-registry/features/verifiable-credentials-issuance.md)
+ * [Verifiable Credentials Issuance](social-registry/features/verifiable-credentials-issuance/README.md)
+ * [📔 User Guides](social-registry/features/verifiable-credentials-issuance/user-guides/README.md)
+ * [📔 Configure Inji to download Social Registry VCs](social-registry/features/verifiable-credentials-issuance/user-guides/configure-inji-to-download-social-registry-vcs.md)
 * [Computed fields](social-registry/features/score-computation.md)
 * [Record Revision History](social-registry/features/record-revision-history.md)
 * [SPAR Integration for Account Info](social-registry/features/spar-integration-for-account-info.md)
@@ -388,9 +391,10 @@
 * [Apache Superset](monitoring-and-reporting/apache-superset.md)
 * [Reporting Framework](monitoring-and-reporting/reporting-framework/README.md)
 * [📔 User Guides](monitoring-and-reporting/reporting-framework/user-guides/README.md)
- * [Connector Creation Guide](monitoring-and-reporting/reporting-framework/user-guides/connector-creation-guide.md)
- * [Dashboards Creation Guide](monitoring-and-reporting/reporting-framework/user-guides/dashboards-creation-guide.md)
- * [Installation & Troubleshooting](monitoring-and-reporting/reporting-framework/user-guides/installation-and-troubleshooting.md)
+ * [📔 Connector Creation Guide](monitoring-and-reporting/reporting-framework/user-guides/connector-creation-guide.md)
+ * [📔 Dashboards Creation Guide](monitoring-and-reporting/reporting-framework/user-guides/dashboards-creation-guide.md)
+ * [📔 Installation & Troubleshooting](monitoring-and-reporting/reporting-framework/user-guides/installation-and-troubleshooting.md)
+ * [Page 1](monitoring-and-reporting/reporting-framework/user-guides/page-1.md)
 * [Kafka Connect Transform Reference](monitoring-and-reporting/reporting-framework/kafka-connect-transform-reference.md)
 * [System Logging](monitoring-and-reporting/logging.md)
 * [System Health](monitoring-and-reporting/system-health.md)
diff --git a/monitoring-and-reporting/reporting-framework/kafka-connect-transform-reference.md b/monitoring-and-reporting/reporting-framework/kafka-connect-transform-reference.md
index 8df176bd..3b323bc0 100644
--- a/monitoring-and-reporting/reporting-framework/kafka-connect-transform-reference.md
+++ b/monitoring-and-reporting/reporting-framework/kafka-connect-transform-reference.md
@@ -1,6 +1,6 @@
 # Kafka Connect Transform Reference

-also supports the extraction of nested fieldsThis document is the configuration reference guide for Kafka SMTs developed by OpenG2P, that can be used on [OpenSearch Sink Connectors](https://github.com/OpenG2P/openg2p-reporting).
+This document is the configuration reference guide for Kafka SMTs developed by OpenG2P that can be used on [OpenSearch Sink Connectors](https://github.com/OpenG2P/openg2p-reporting).

 Following is a list of some of the other transformations available on the OpenSearch Connectors, apart from the ones developed by OpenG2P:

@@ -27,28 +27,39 @@ Following is a list of some of the other transformations available on the OpenSe
| Field name | Field title | Description | Default Value |
| --- | --- | --- | --- |
| `query.type` | Query Type | This is the type of query made to retrieve new field values. Supported values: `es` (Elasticsearch based). | `es` |
| `input.fields` | Input Fields | List of comma-separated fields that will be considered as input fields in the current record. Nested input fields are supported, like `profile.name,profile.birthdate` (where `profile` is JSON that contains `name` and `birthdate` fields). | |
| `output.fields` | Output Fields | List of comma-separated fields to be added to this record. | |
| `input.default.values` | Input Default Values | List of comma-separated values to give in place of the input fields when an input field is empty or null. Length of this has to match that of `input.fields`. | |
| `es.index` | ES Index | Elasticsearch (or OpenSearch) index to query for. | |
| `es.input.fields` | ES Input Fields | List of comma-separated fields, to be queried on the ES index, each of which maps to the fields on `input.fields`. Length of this has to match that of `input.fields`. | |
| `es.output.fields` | ES Output Fields | List of comma-separated fields, to be retrieved from the ES query response document, each of which maps to the fields on `output.fields`. Length of this has to match that of `output.fields`. | |
| `es.input.query.add.keyword` | ES Input Query Add Keyword | Whether or not to add `.keyword` to the `es.input.fields` during the term query. Supported values: `true` / `false`. | `false` |
| `es.security.enabled` | ES Security Enabled | If this value is given as `true`, then Security is enabled on ES. | |
| `es.url` | ES Url | Elasticsearch/OpenSearch base URL. | |
| `es.username` | ES Username | | |
| `es.password` | ES Password | | |
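For orientation, here is a minimal sketch of how the options above could be wired into an OpenSearch sink connector. It is illustrative only: the transform alias `join01` and the field/index values are made up (mirroring the `g2p_program` name-lookup example from the Connector Creation Guide), the class name is an assumption inferred from the `DynamicNewFieldInsertBack` naming below, and `$` is written as `${dollar}` following the convention used in the Connector Creation Guide files.

```json
"transforms": "join01",
"transforms.join01.type": "org.openg2p.reporting.kafka.connect.transforms.DynamicNewField${dollar}Value",
"transforms.join01.query.type": "es",
"transforms.join01.input.fields": "program_id",
"transforms.join01.output.fields": "program_name",
"transforms.join01.es.index": "${DB_PREFIX_INDEX}.public.g2p_program",
"transforms.join01.es.input.fields": "id",
"transforms.join01.es.output.fields": "name",
"transforms.join01.es.url": "${OPENSEARCH_URL}",
"transforms.join01.es.security.enabled": "${OPENSEARCH_SECURITY_ENABLED}",
"transforms.join01.es.username": "${OPENSEARCH_USERNAME}",
"transforms.join01.es.password": "${OPENSEARCH_PASSWORD}",
```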
-### ExtractFieldAdv
+### DynamicNewFieldInsertBack

 #### Class name:

-* `org.openg2p.reporting.kafka.connect.ExtractFieldAdv$Key` - Applies transform only to the _Key_ of Kafka Connect Record.
-* `org.openg2p.reporting.kafka.connect.ExtractFieldAdv$Value` - Applies transform only to the _Value_ of Kafka Connect Record.
+* `org.openg2p.reporting.kafka.connect.DynamicNewFieldInsertBack$Key` - Applies transform only to the _Key_ of Kafka Connect Record.
+* `org.openg2p.reporting.kafka.connect.DynamicNewFieldInsertBack$Value` - Applies transform only to the _Value_ of Kafka Connect Record.

 #### Description:

-* This transformation can be used to extract, merge, and/or rename fields in the record.
-* This also supports the extraction of nested fields.
+* This transformation can be used to add additional data to documents of a different index.
+* If the record matches the configured condition, the given data is updated into the document with the given ID.
+
+#### Configuration:
+
+| Field name | Field title | Description | Default Value |
+| --- | --- | --- | --- |
+| `query.type` | Query Type | This is the type of query made to retrieve new field values. Supported values: `es` (Elasticsearch based). | `es` |
+| `id.expr` | ID Jq Expression | Jq expression to evaluate the ID of the external document into which the data is supposed to be updated. | |
+| `condition` | Condition | Jq expression that evaluates to a boolean value, which decides whether or not to update. | |
+| `value` | Value | Jq expression of the value (evaluating to a JSON) that is to be updated into the external document. | |
+| `es.index` | ES Index | Elasticsearch (or OpenSearch) index to update into. | |
+| `es.security.enabled` | ES Security Enabled | If this value is given as `true`, then Security is enabled on ES. | |
+| `es.url` | ES Url | Elasticsearch/OpenSearch base URL. | |
+| `es.username` | ES Username | | |
+| `es.password` | ES Password | | |
+
+### ApplyJq
+
+#### Class name:
+
+* `org.openg2p.reporting.kafka.connect.ApplyJq$Key` - Applies transform only to the _Key_ of Kafka Connect Record.
+* `org.openg2p.reporting.kafka.connect.ApplyJq$Value` - Applies transform only to the _Value_ of Kafka Connect Record.
+
+#### Description:
+
+* This transformation applies the given Jq expression on the current record and replaces the current record with the result from Jq.
+* This transformation can be used for operations like extracting, merging, removing, and/or renaming fields.
 * For example:
-  * `"field": "payload",` : The `payload` field is extracted and the record is replaced with the extracted value.
-  * `"field": "payload.before",` : The `before` field inside `payload` field is extracted and the record is replaced with the extracted value.
-  * `"field": "source,payload.before,payload.after",` : The `source` field and the `before` and `after` fields inside the `payload` field are merged, and the record is replaced with the final merged value. Fields in `after` will be prioritized over fields in `before`, and `before` is prioritized over `source` and so on from the config list. Maps or arrays inside the above fields will be merged.
-  * `"field": "source.ts_ms->source_ts_ms,source.table->source_table",` : The `ts_ms` field in `source` will be extracted and renamed to `source_ts_ms`. The `table` field in `source` will be extracted and renamed to `source_table`. The final record contains only `source_ts_ms` and `source_table` fields.
-  * `"field": "source.ts_ms->source_ts_ms,payload.before,payload.after",` : The `source_ts_ms` field is added to the merged value of `before` and `after` fields (from `payload`), and the record is replaced with the final merged value. If the `source_ts_ms` field is already present in `after`, it will be prioritised over the one from `source`.
-* This transformation extends from the `ExtractField` transform by Apache. [https://kafka.apache.org/documentation/#org.apache.kafka.connect.transforms.ExtractField](https://kafka.apache.org/documentation/#org.apache.kafka.connect.transforms.ExtractField)
+  * `"expr": ".payload.after + {source_ts_ms: .payload.source.ts_ms}",` : The `expr` field should contain a valid Jq expression.

 #### Configuration:

-| Field name | Field title | Description | Default value |
-| --- | --- | --- | --- |
-| `field` | Field name to extract | Name of the field (or list of fields) to be extracted and merged. | |
-| `array.merge.strategy` | Array merge strategy | Strategy to merge nested arrays. Available values: `concat` (merge two arrays), `replace` (replace array with new array). | `concat` |
-| `map.merge.strategy` | Map merge strategy | Strategy to merge nested maps. Available values: `deep` (deep merge two maps), `replace` (replace map with new map). | `deep` |
+| Field name | Field title | Description | Default value |
+| --- | --- | --- | --- |
+| `expr` | Expression | Jq expression to be applied. | |
+| `behavior.on.error` | Behaviour on error | What to do when encountering an error while applying the Jq expression. Possible values: `halt` (throws an exception upon encountering an error), `ignore` (ignores any errors encountered). | `halt` |
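As a quick illustration of the two options together, a hypothetical connector snippet (the alias `jq01` is made up, the expression is the one from the example above, and `$` is written as `${dollar}` following the Connector Creation Guide convention):

```json
"transforms.jq01.type": "org.openg2p.reporting.kafka.connect.transforms.ApplyJq${dollar}Value",
"transforms.jq01.expr": ".payload.after + {source_ts_ms: .payload.source.ts_ms}",
"transforms.jq01.behavior.on.error": "halt",
```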
 ### StringToJson
@@ -109,9 +120,28 @@ Following is a list of some of the other transformations available on the OpenSe

 #### Configuration

-| Field name | Field title | Description | Default Value |
-| --- | --- | --- | --- |
-| `ts.order` | Timestamp order | List of comma-separated fields to select output from. The output will be selected based on whichever field in the order is not null first. Nested fields are supported. | |
-| `output.field` | Output Field | Name of the output field into which the selected timestamp is put. | `@ts_generated` |
+| Field name | Field title | Description | Default Value |
+| --- | --- | --- | --- |
+| `ts.order` | Timestamp order | List of comma-separated fields to select output from. The output will be selected based on whichever field in the order is not null first. Nested fields are supported. | |
+| `output.field` | Output Field | Name of the output field into which the selected timestamp is put. | `@ts_generated` |
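A sketch of how these options might be used in a connector, assuming a table with Odoo-style `write_date`/`create_date` columns. The alias `tsSelect` appears in the Connector Creation Guide, but the class name and the field list here are assumptions.

```json
"transforms.tsSelect.type": "org.openg2p.reporting.kafka.connect.transforms.TimestampSelector${dollar}Value",
"transforms.tsSelect.ts.order": "write_date,create_date",
"transforms.tsSelect.output.field": "@ts_generated",
```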
+### TriggerDeduplication
+
+#### Class name:
+
+* `org.openg2p.reporting.kafka.connect.TriggerDeduplication$Key` - Applies transform only to the _Key_ of Kafka Connect Record.
+* `org.openg2p.reporting.kafka.connect.TriggerDeduplication$Value` - Applies transform only to the _Value_ of Kafka Connect Record.
+
+#### Description:
+
+* This transformation can be used to trigger deduplication when there is a change in any one of the configured fields.
+* This transformation is best used before applying any other transformation.
+
+#### Configuration
+
+| Field name | Field title | Description | Default Value |
+| --- | --- | --- | --- |
+| `deduplication.base.url` | Base URL of Deduplicator Service | | |
+| `dedupe.config.name` | Dedupe Config name | Name of the config used for deduplication by the deduplicator. | `default` |
+| `id.expr` | ID Jq Expression | Jq expression that evaluates the ID of the document that is to be deduplicated. | `.payload.after.id` |
+| `before.expr` | Before Jq Expression | Jq expression that evaluates the "before" part of the change. (Used to compare fields with the "after" part of the change.) | `.payload.before` |
+| `after.expr` | After Jq Expression | Jq expression that evaluates the "after" part of the change. (Used to compare fields with the "before" part of the change.) | `.payload.after` |
+| `wait.before.exec.secs` | Wait before Exec (in secs) | Time to wait (in secs) before starting deduplication. Useful so that the transformations get applied and the record gets indexed into OpenSearch. | `10` |
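A minimal sketch of this SMT in an OpenSearch sink connector config (illustrative: the alias `triggerDedupe` and the deduplicator URL are assumptions, the remaining values simply repeat the defaults above, and `$` is written as `${dollar}` following the Connector Creation Guide convention):

```json
"transforms.triggerDedupe.type": "org.openg2p.reporting.kafka.connect.transforms.TriggerDeduplication${dollar}Value",
"transforms.triggerDedupe.deduplication.base.url": "http://deduplicator:8000",
"transforms.triggerDedupe.dedupe.config.name": "default",
"transforms.triggerDedupe.id.expr": ".payload.after.id",
"transforms.triggerDedupe.before.expr": ".payload.before",
"transforms.triggerDedupe.after.expr": ".payload.after",
"transforms.triggerDedupe.wait.before.exec.secs": "10",
```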
+
 ## Source code

 [https://github.com/OpenG2P/openg2p-reporting/tree/develop/opensearch-kafka-connector](https://github.com/OpenG2P/openg2p-reporting/tree/develop/opensearch-kafka-connector)
diff --git a/monitoring-and-reporting/reporting-framework/user-guides/connector-creation-guide.md b/monitoring-and-reporting/reporting-framework/user-guides/connector-creation-guide.md
index a1d7fcff..a5285969 100644
--- a/monitoring-and-reporting/reporting-framework/user-guides/connector-creation-guide.md
+++ b/monitoring-and-reporting/reporting-framework/user-guides/connector-creation-guide.md
@@ -1,4 +1,4 @@
-# Connector Creation Guide
+# 📔 Connector Creation Guide

 Creating Dashboards for Reporting involves the following steps:
@@ -42,7 +42,6 @@ Follow the [Installation guide](installation-and-troubleshooting.md) to install/
     "database.dbname": "${DB_NAME}",
     "topic.prefix": "${DB_PREFIX_INDEX}",
     "table.include.list": "",
-    "column.exclude.list": "",
     "heartbeat.interval.ms": "${DEFAULT_DEBEZIUM_CONNECTOR_HEARTBEAT_MS}",
     "decimal.handling.mode": "double"
   }
@@ -60,11 +59,6 @@ Each `$` in the json file will be treated as an environment variable. Environmen
   ```
 * This list needs to include relationship tables of the current table. For example: if you want to index `g2p_program_membership` but would also like to retrieve the name of the program to which the beneficiary belongs, then you have to add `g2p_program` as well (e.g. `"table.include.list": "public.g2p_program_membership,public.g2p_program"`).
-* This will index all the columns into OpenSearch by default. Every column that you don't want to index into OpenSearch has to be explicitly mentioned in the `column.exclude.list` (Accepts regex). For example PII fields like name, phone number, address, etc. As a general rule, fields that are not required for dashboards must be excluded explicitly.
-
-  ```json
-  "column.exclude.list": "public.res_partner.name,public.res_partner.phone,public.res_partner.address"
-  ```
 * Example debezium connector: [https://github.com/OpenG2P/openg2p-reporting/blob/develop/scripts/social-registry/debezium-connectors/default.json](https://github.com/OpenG2P/openg2p-reporting/blob/develop/scripts/social-registry/debezium-connectors/default.json)
 * [Debezium PostgreSQL Connector](https://debezium.io/documentation/reference/stable/connectors/postgresql.html) Reference.
@@ -82,24 +76,24 @@ Each `$` in the json file will be treated as an environment variable. Environmen
     "connection.password": "${OPENSEARCH_PASSWORD}",
     "tasks.max": "1",
     "topics": "${DB_PREFIX_INDEX}.public.res_partner",
-    "key.ignore": "true",
+    "key.ignore": "false",
     "schema.ignore": "true",
     "key.converter": "org.apache.kafka.connect.json.JsonConverter",
     "value.converter": "org.apache.kafka.connect.json.JsonConverter",
     "key.converter.schemas.enable": "true",
     "value.converter.schemas.enable": "false",
-    "behavior.on.null.values": "ignore",
+    "behavior.on.null.values": "delete",
     "behavior.on.malformed.documents": "warn",
     "behavior.on.version.conflict": "warn",
-    "transforms": "keyExtId,valExt1,valExt2,tsconvert01,...",
+    "transforms": "keyExtId,valExt,tsconvert01,...",
     "transforms.keyExtId.type": "org.apache.kafka.connect.transforms.ExtractField${dollar}Key",
     "transforms.keyExtId.field": "id",
-    "transforms.valExt1.type": "org.openg2p.reporting.kafka.connect.transforms.ExtractFieldAdv${dollar}Value",
-    "transforms.valExt1.field": "payload.source.ts_ms->source_ts_ms,payload.after",
+    "transforms.valExt.type": "org.openg2p.reporting.kafka.connect.transforms.ApplyJq${dollar}Value",
+    "transforms.valExt.expr": ".payload.after + {source_ts_ms: .payload.source.ts_ms}",
     "transforms.tsconvert01.type": "org.openg2p.reporting.kafka.connect.transforms.TimestampConverterAdv${dollar}Value",
     "transforms.tsconvert01.field": "source_ts_ms",
@@ -113,8 +107,6 @@ Each `$` in the json file will be treated as an environment variable. Environmen
   ```json
   "topics": "${DB_PREFIX_INDEX}.public.g2p_program",
   ```
-* To stop capturing changes to records and to maintain only the the latest data of a record on OpenSearch, set `key.ignore` to false. With this config, whenever there is a change to a record on the registry, the same change will be applied to the data on OpenSearch (rather than creating a new entry for the change.).
-  * Also if you want the data to get deleted from OpenSearch, when the record is deleted on the Registry, set `behavior.on.null.values` to `delete`.
 * After the base file is configured, you can now add transformations to your connector at the end of the file (denoted by `...` in the above example). Each transformation (SMT) will apply some change to the data or a particular field from the table, before pushing the entry to OpenSearch.
 * Add the following transformations to your connector based on the data available in the table.
   * For every Datetime field / Date field in the table, add the following transform.
@@ -156,10 +148,43 @@ Each `$` in the json file will be treated as an environment variable. Environmen
     "transforms.join02.es.username": "${OPENSEARCH_USERNAME}",
     "transforms.join02.es.password": "${OPENSEARCH_PASSWORD}",
     ```
+  * If you want to add data/fields from one connector to another index on OpenSearch, use the DynamicNewFieldInsertBack transform. For example, NATIONAL IDs of registrants are saved in the g2p\_reg\_id table, but if that field's data is needed on the res\_partner index (the main registrant data table), the following can be done on the g2p\_reg\_id connector. (The following adds a `reg_id_NATIONAL_ID` field into the res\_partner index, into the document whose ID is taken from the `partner_id` field):
+
+    ```json
+    "transforms.insertBack1.type": "org.openg2p.reporting.kafka.connect.transforms.DynamicNewFieldInsertBack${dollar}Value",
+    "transforms.insertBack1.id.expr": ".partner_id",
+    "transforms.insertBack1.condition": ".id_type_name == \"NATIONAL ID\"",
+    "transforms.insertBack1.value": "{reg_id_NATIONAL_ID: .value}",
+    "transforms.insertBack1.es.index": "${DB_PREFIX_INDEX}.public.res_partner",
+    "transforms.insertBack1.es.url": "${OPENSEARCH_URL}",
+    "transforms.insertBack1.es.security.enabled": "${OPENSEARCH_SECURITY_ENABLED}",
+    "transforms.insertBack1.es.username": "${OPENSEARCH_USERNAME}",
+    "transforms.insertBack1.es.password": "${OPENSEARCH_PASSWORD}",
+    ```
+  * If you wish to apply a [Jq filter](https://jqlang.github.io/jq/manual/) on the record, use the ApplyJq transform. The current record will be replaced with the result after applying Jq. Example:
+
+    ```json
+    "transforms.jqApply1.type": "org.openg2p.reporting.kafka.connect.transforms.ApplyJq${dollar}Value",
+    "transforms.jqApply1.expr": "{new_field: .payload.old_field, operation: .source.op}"
+    ```
+  * The connector by default indexes all the fields from the DB into OpenSearch. If you want to exclude fields from getting indexed, they must be explicitly deleted using a transform like the one below; for example, PII fields like name, phone number, address, etc. As a general rule, fields that are not required for dashboards must be excluded explicitly.
+
+    ```json
+    "transforms.excludeFields.type": "org.openg2p.reporting.kafka.connect.transforms.ApplyJq${dollar}Value",
+    "transforms.excludeFields.expr": "del(.name,.address)",
+    ```
+
+  * The `column.exclude.list` property can also be used to keep specific columns from being indexed (not the preferred method). The disadvantage is that this excludes the fields from the Kafka topics themselves. If there are multiple OpenSearch connectors referring to the same topic, each with different data requirements, then this cannot be controlled at the sink connector side.
+  * If you wish to change the name of the index into which data is supposed to be inserted, use the RenameTopic transform. The default index name (before rename) is the topic name given in the `topics` config field. Example:
+
+    ```json
+    "transforms.renameTopic.type": "org.openg2p.reporting.kafka.connect.transforms.RenameTopic",
+    "transforms.renameTopic.topic": "res_partner_new"
+    ```
 * After configuring all the transforms, add the names of all transforms, in the order in which they have to be applied, in the `transforms` field.

   ```json
-  "transforms": "keyExtId,valExt1,valExt2,tsconvert01,tsconvert02,tsSelect",
+  "transforms": "keyExtId,valExt,tsconvert01,tsconvert02,tsSelect,excludeFields,renameTopic",
   ```

{% hint style="info" %}
@@ -171,6 +196,26 @@ Each `$` in the json file will be treated as an environment variable. Environmen
 * For detailed transform configuration, refer to [Apache Kafka Connect Transformations](https://kafka.apache.org/documentation/#connect\_transforms) doc.
 * For a list of all available SMTs and their configs, refer to [Reporting Kafka Connect Transforms](../kafka-connect-transform-reference.md).
+#### Capturing Change History
+
+* If you also wish to record all the changes that are made to the records of a table, create a new OpenSearch connector for the same topic as given in [this section](connector-creation-guide.md#opensearch-connector-creation) and change the following properties.
+
+  ```json
+  {
+    "name": "res_partner_history_${DB_PREFIX_INDEX}",
+    "config": {
+      ...
+      "key.ignore": "true",
+      ...
+      "behavior.on.null.values": "ignore",
+      ...
+      "transforms.renameTopic.type": "org.openg2p.reporting.kafka.connect.transforms.RenameTopic",
+      "transforms.renameTopic.topic": "${DB_PREFIX_INDEX}.public.res_partner_history"
+    }
+  }
+  ```
+* With this configuration, you will have two OpenSearch connectors: one that tracks the latest data of a table, and one that tracks all the changes. Correspondingly, you will have two indexes on OpenSearch (one with `_history` and one with the regular data).
+
 ## OpenSearch dashboard creation

 Refer to [OpenSearch Dashboard Creation Guide](dashboards-creation-guide.md).
diff --git a/monitoring-and-reporting/reporting-framework/user-guides/dashboards-creation-guide.md b/monitoring-and-reporting/reporting-framework/user-guides/dashboards-creation-guide.md
index 04a9f6ed..47fc8709 100644
--- a/monitoring-and-reporting/reporting-framework/user-guides/dashboards-creation-guide.md
+++ b/monitoring-and-reporting/reporting-framework/user-guides/dashboards-creation-guide.md
@@ -1,4 +1,4 @@
-# Dashboards Creation Guide
+# 📔 Dashboards Creation Guide

 This document contains instructions for the developers (or dashboard creators) to create dashboards to visualize data on OpenSearch.
diff --git a/monitoring-and-reporting/reporting-framework/user-guides/installation-and-troubleshooting.md b/monitoring-and-reporting/reporting-framework/user-guides/installation-and-troubleshooting.md
index e99bfb88..e07693b8 100644
--- a/monitoring-and-reporting/reporting-framework/user-guides/installation-and-troubleshooting.md
+++ b/monitoring-and-reporting/reporting-framework/user-guides/installation-and-troubleshooting.md
@@ -1,4 +1,4 @@
-# Installation & Troubleshooting
+# 📔 Installation & Troubleshooting

 ## Installation
diff --git a/monitoring-and-reporting/reporting-framework/user-guides/page-1.md b/monitoring-and-reporting/reporting-framework/user-guides/page-1.md
new file mode 100644
index 00000000..6f8b4979
--- /dev/null
+++ b/monitoring-and-reporting/reporting-framework/user-guides/page-1.md
@@ -0,0 +1,2 @@
+# Page 1
+
diff --git a/social-registry/features/deduplication/deduplicator-service.md b/social-registry/features/deduplication/deduplicator-service.md
new file mode 100644
index 00000000..08c0d252
--- /dev/null
+++ b/social-registry/features/deduplication/deduplicator-service.md
@@ -0,0 +1,44 @@
+---
+description: WORK IN PROGRESS
+---
+
+# Deduplicator Service
+
+This deduplicator computes duplicates based on how closely the data of a record matches other records (fuzziness). The system also ensures that duplicates are computed as and when a new record is created or an existing record is changed, so that retrieval of duplicates becomes easier. The deduplication process can also be triggered manually.
+
+## Design
+
+### Architecture
+
+{% embed url="https://miro.com/app/board/uXjVLY7r4VY=" %}
+
+### OpenSearch Connector
+
+* The connector will be set up in the following way:
+  * Every record in the DB table corresponds to exactly one document on the OpenSearch index.
+  * The primary key of the record in the DB ("database ID") will be equal to the ID of the document on OpenSearch ("document ID").
+  * If a record is deleted from the DB, it will be deleted from OpenSearch as well.
+* On every change of a record (or insertion of records), a Kafka Connect SMT triggers the deduplicate API of the deduplicator with the given fields, fuzziness, and weightages.
+
+{% hint style="info" %}
+The term "record" means an entry in the DB table. The term "document" means an entry on OpenSearch. Because of the way the connector is set up above, the terms "entry", "record", and "document" are used interchangeably for the rest of this document, since they all mean the same thing.
+{% endhint %}
+
+### Deduplication Service
+
+* API Service - based on FastAPI.
+* Interfaces only with the OpenSearch backend (no database connection required).
+* Exposes an API that allows triggering deduplication for a record (by document ID) in OpenSearch.
+  * API inputs:
+    * Fields to be considered for deduplication
+    * Allowed fuzziness of each field
+    * Weightage of each field
+    * Threshold of score to consider a result as a duplicate
+  * Output:
+    * Deduplication Request ID (generated).
+* Exposes an API to retrieve duplicates of a record (by document ID).
+* Exposes an API to retrieve the status of a deduplication request (by deduplication request ID).
+* Deduplication of a record involves the following process:
+  * Get field values from the current record.
+  * Run an [OpenSearch match query](https://opensearch.org/docs/latest/query-dsl/full-text/match/) (multiple field match queries are wrapped inside a [boolean must query](https://opensearch.org/docs/latest/query-dsl/compound/bool/)) with the given fuzziness and weightages.
+  * Receive the list of duplicates from the above query response (picking results with scores above the given threshold). Update the list of duplicates and their match scores against each entry from the response, including the last deduplication request ID.
diff --git a/social-registry/features/verifiable-credentials-issuance.md b/social-registry/features/verifiable-credentials-issuance/README.md
similarity index 69%
rename from social-registry/features/verifiable-credentials-issuance.md
rename to social-registry/features/verifiable-credentials-issuance/README.md
index 53367708..304b3cd1 100644
--- a/social-registry/features/verifiable-credentials-issuance.md
+++ b/social-registry/features/verifiable-credentials-issuance/README.md
@@ -2,6 +2,10 @@
 Social Registry can issue credentials in the form of [Verifiable Credentials](https://www.w3.org/TR/vc-data-model/) (VC). Upon authentication, these can be downloaded into the beneficiary's **digital wallet** or printed on paper as a **QR code.** These credentials indicate that the individual is a member of the Social Registry. Credentials are sometimes also referred to as e-cards, Farmer Registry e-Card, etc.

+## High-level workflow
+
+{% embed url="https://miro.com/app/board/uXjVKfQuzM4=/" %}
+
 ## Feature and functionality

 * Social Registry exposes [OpenID for VCI](https://openid.net/specs/openid-4-verifiable-credential-issuance-1\_0.html) APIs. A wallet can use these APIs to retrieve the VCs.
@@ -9,11 +13,18 @@ Social Registry can issue credentials in the form of [Verifiable Credentials](ht
 * Any functional/foundational ID can be configured for authentication.
 For example, if the Registry contains National IDs (or National ID tokens) of individuals, and a valid eSignet authentication mechanism (or similar) exists against the given National ID, then the credential can be issued.
 * **Future possibilities**: If the Social Registry generates a unique ID, then that can be used to authenticate and retrieve the credential.

-## High-level workflow
+{% hint style="info" %}
+Bulk VC issuance is not supported.
+{% endhint %}
+
+## Configuration & Technical documentation
+
+VCI uses Odoo modules:
-
+
+* [G2P OpenID VCI: Base](../../../pbms/developer-zone/odoo-modules/g2p-openid-vci-base.md)
+* [G2P OpenID VCI: Rest API](../../../pbms/developer-zone/odoo-modules/g2p-openid-vci-rest-api.md)

-## Source code
+## Related user guides

-Link to [Configuration and source code](broken-reference).
+* [Configure Inji to download Social Registry VCs](user-guides/configure-inji-to-download-social-registry-vcs.md)
diff --git a/social-registry/features/verifiable-credentials-issuance/user-guides/README.md b/social-registry/features/verifiable-credentials-issuance/user-guides/README.md
new file mode 100644
index 00000000..5bf4dc3c
--- /dev/null
+++ b/social-registry/features/verifiable-credentials-issuance/user-guides/README.md
@@ -0,0 +1,2 @@
+# 📔 User Guides
+
diff --git a/social-registry/features/verifiable-credentials-issuance/user-guides/configure-inji-to-download-social-registry-vcs.md b/social-registry/features/verifiable-credentials-issuance/user-guides/configure-inji-to-download-social-registry-vcs.md
new file mode 100644
index 00000000..1f7b03be
--- /dev/null
+++ b/social-registry/features/verifiable-credentials-issuance/user-guides/configure-inji-to-download-social-registry-vcs.md
@@ -0,0 +1,5 @@
+# 📔 Configure Inji to download Social Registry VCs
+
+TODO:
+
+Similar to [Configure Inji to download Beneficiary VCs](../../../../pbms/functionality/verifiable-credential-issuance/user-guides/configure-inji-to-download-beneficiary-vcs.md).